Explanation of an integration trick for $\int \frac{a_1 \cos x + b_1 \sin x}{a\cos x + b\sin x}dx$ I'm not sure how to formulate my question correctly. Basically it comes from solving the integral:
$$
\int \frac{a_1 \cos x + b_1 \sin x}{a\cos x + b\sin x}dx\\
a^2 + b^2 \ne 0
$$
I haven't been able to solve the integral without the trick I've found after a while. This trick suggests to rewrite:
$$
a_1 \cos x + b_1 \sin x = \frac{a_1a + b_1b}{a^2 + b^2}(a\cos x + b\sin x) + \frac{a_1b - ab_1}{a^2 + b^2}(b\cos x - a\sin x)\tag{1}
$$
After using this trick the integral becomes almost elementary:
$$
I = \int \frac{a_1a + b_1b}{a^2 + b^2} dx + \int \frac{a_1b - ab_1}{a^2 + b^2}\frac{b\cos x - a\sin x}{a\cos x + b\sin x} dx
$$
The first part is trivial; for the second one substitute $u = a\cos x + b\sin x$, so that $du = (b\cos x - a\sin x)\,dx$.
My question is how on earth one could arrive at $(1)$. Is that some sort of well-known expression that I just missed?
Going from RHS to LHS in $(1)$ is easy, but how do I make it the other way round?
Thank you!
| Any linear combination of sine waves with the same period and different phase shifts can be written as a single sine wave with that same period and a suitable phase shift.
\begin{align}
& A\cos(x+\varphi) + B\cos(x+ \psi) \\[8pt]
= {} & A\big(\cos x\cos\varphi - \sin x \sin\varphi\big) \\
& {} + B\big(\cos x\cos\psi - \sin x \sin \psi\big) \\[8pt]
= {} & C\cos x + D\sin x
\end{align}
where
\begin{align}
C & = A\cos\varphi + B \cos\psi \\[8pt]
D & = -A\sin\varphi - B\sin\psi
\end{align}
and then
\begin{align}
& C\cos x + D\sin x \\[8pt]
= {} & \sqrt{C^2+D^2} \left( \frac C {\sqrt{C^2+D^2}} \cos x + \frac D {\sqrt{C^2+D^2}} \sin x\right) \\[8pt]
= {} & \sqrt{C^2+D^2} \big( E\cos x + F\sin x\big).
\end{align}
We now have $E^2+F^2=1$ so $E= \cos\chi$ and $F=\sin\chi$ for some angle $\chi.$ Thus we have
\begin{align}
& E\cos x + F\sin x \\[8pt]
= {} & \cos\chi\cos x + \sin\chi\sin x \tag 1 \\[8pt]
= {} & \cos(x-\chi).
\end{align}
Line $(1)$ above is what you have in the problem you're facing.
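Another way to arrive at $(1)$ directly: posit $a_1 \cos x + b_1 \sin x = A(a\cos x + b\sin x) + B(b\cos x - a\sin x)$, match the coefficients of $\cos x$ and $\sin x$, and solve the resulting $2\times 2$ linear system; the solutions $A$ and $B$ are exactly the fractions appearing in $(1)$. A small numerical sanity check (the values of $a, b, a_1, b_1, x$ below are illustrative choices, not from the question):

```python
import math

# Illustrative sample values; any (a, b) with a^2 + b^2 != 0 works.
a, b, a1, b1 = 2.0, 3.0, 5.0, -7.0
x = 0.37

# Matching coefficients of cos x and sin x in
#   a1*cos + b1*sin = A*(a*cos + b*sin) + B*(b*cos - a*sin)
# gives a1 = A*a + B*b and b1 = A*b - B*a, whose solution is:
den = a * a + b * b
A = (a1 * a + b1 * b) / den   # multiple of the denominator itself
B = (a1 * b - a * b1) / den   # multiple of the denominator's derivative

lhs = a1 * math.cos(x) + b1 * math.sin(x)
rhs = A * (a * math.cos(x) + b * math.sin(x)) + B * (b * math.cos(x) - a * math.sin(x))
print(abs(lhs - rhs) < 1e-12)  # True
```

The identity holds for every $x$, since the check above is just a coefficient comparison in the basis $\{\cos x, \sin x\}$.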
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Question about Answer to limsup of $\sigma_n=\frac{s_1+s_2+\cdots+s_n}{n}$ here's the relevant question: If $\sigma_n=\frac{s_1+s_2+\cdots+s_n}{n}$ then $\operatorname{{lim sup}}\sigma_n \leq \operatorname{lim sup} s_n$
In the accepted answer, doesn't the last inequality only work if $\sup_{l\geq k}s_l$ is nonnegative?
The "last inequality" I'm referring to is this:
$$\frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\sup_{l\geqslant k}s_l\leqslant \frac 1n\sum_{j=1}^ks_j+\sup_{l\geqslant k}s_l.$$
I ran into this issue when trying to prove the analogous statement for liminf, because in the case of liminf I could only get a similar inequality if $\inf_{l\geq k}s_l \leq 0$, as follows:
$$\sigma_n=
\frac 1n\sum_{j=1}^ks_j+\frac 1n\sum_{j=k+1}^ns_j
\geqslant \frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\inf_{l\geqslant k}s_l
$$
From here, if $\inf_{l\geq k}s_l \leq 0$ then I could continue and write
$\geq\frac 1n\sum_{j=1}^ks_j+\inf_{l\geqslant k}s_l$.
Could someone clarify please?
| You have that
$$ \tag{*}
\sigma_n\geqslant \frac 1n\sum_{j=1}^ks_j+\frac{n-k}n\inf_{l\geqslant k}s_l
$$
and you are right that this is $\ge \frac 1n\sum_{j=1}^ks_j+\inf_{l\geqslant k}s_l$ only if $\inf_{l\geqslant k}s_l \le 0$.
But that estimate is actually not needed: For fixed $k$ you can take the $\liminf_{n \to \infty}$ in $(*)$, this gives
$$
\liminf_{n \to \infty}\sigma_n\geqslant \inf_{l\geqslant k}s_l
$$
because the right-hand side has a limit for $n \to \infty$.
Then take the limit for $k \to \infty$ and conclude that
$$
\liminf_{n \to \infty}\sigma_n\geqslant\liminf_{n \to \infty}s_n\, .
$$
The same approach works for $\limsup$ in the referenced Q&A.
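Purely as a numerical illustration (not part of the proof), one can watch $\liminf_n \sigma_n \geqslant \liminf_n s_n$ for the oscillating sequence $s_n = (-1)^n$, where $\liminf s_n = -1$ while the Cesàro means tend to $0$:

```python
# s_n = (-1)^n: liminf s_n = -1, while the Cesaro means sigma_n tend to 0,
# consistent with liminf sigma_n >= liminf s_n.
N = 10_000
s = [(-1) ** n for n in range(1, N + 1)]

partial = 0.0
sigmas = []
for n, sn in enumerate(s, start=1):
    partial += sn
    sigmas.append(partial / n)

tail = sigmas[5000:]
print(min(tail) >= -1)         # True: every Cesaro mean stays above liminf s_n
print(abs(sigmas[-1]) < 1e-3)  # True: sigma_n -> 0
```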
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3416895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Complex numbers question - sum of three complex numbers Seems an easy one but i can't figure it out:
$z_1+z_2+z_3=0$
$|z_1|=|z_2|=|z_3|=1$
Need to prove the following:
$z_1^2+z_2^2+z_3^2=0$
Thanks!
| Conjugate $z_1+z_2+z_3=0$ and get $\frac{1}{z_1}+\frac{1}{z_2}+\frac{1}{z_3}=0$ (since $|z_i|=1$ means $\bar z_i = \frac{1}{z_i}$), which, after multiplying through by $z_1z_2z_3$, simplifies to $z_1z_2+z_1z_3+z_2z_3=0$. Now square the original equation: $(z_1+z_2+z_3)^2 = z_1^2+z_2^2+z_3^2+2(z_1z_2+z_1z_3+z_2z_3)=0$, and you are done!
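A quick numerical sanity check (the rotation angle below is an arbitrary illustrative choice): the cube roots of unity, rotated by any fixed angle, satisfy both hypotheses, and their squares indeed sum to $0$.

```python
import cmath
import math

# Three unit complex numbers summing to 0: rotated cube roots of unity.
theta = 0.7  # arbitrary rotation
z = [cmath.exp(1j * (theta + 2 * math.pi * k / 3)) for k in range(3)]

assert abs(sum(z)) < 1e-12                       # z1 + z2 + z3 = 0
assert all(abs(abs(w) - 1) < 1e-12 for w in z)   # all on the unit circle
print(abs(sum(w * w for w in z)) < 1e-12)        # True: squares sum to 0 too
```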
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Integral over Modified Bessel Functions I've stumbled upon this integral
\begin{equation}
\mathscr{I}_\nu^k(a,b)=\int_0^\infty dr\frac{r^k}{r^2+m^2}e^{-ar^2}I_\nu(br)
\end{equation}
(where $I_\nu(x)$ is the modified Bessel function) during some QFT research, but I cannot seem to crack it. I'm specifically interested in the integrals $\mathscr{I}_1^2$, $\mathscr{I}_1^4$, and $\mathscr{I}_2^3$, each of which have an odd integrand, so that the typical semicircular contours are seemingly inutile. I've tried Weierstrass transforms, Hankel transforms, Laplace transforms, and the like, but each made the evaluation even more complicated. Additionally, I cannot find any similar integrals in the literature. Does anyone know a solution or a plan of attack? Any input would be greatly appreciated.
| Let's do the $(1,2)$ case. First let's denote
$$J(a,b) = \int_0^\infty \frac{r^2}{r^2+m^2}e^{-ar^2}I_1(br)dr = \int_0^\infty e^{-ar^2}I_1(br)dr - \int_0^\infty \frac{m^2}{r^2+m^2}e^{-ar^2}I_1(br)dr$$
which means that
$$\partial_a J(a,b) = \int_0^\infty -r^2e^{-ar^2}I_1(br)dr + \int_0^\infty \frac{m^2r^2}{r^2+m^2}e^{-ar^2}I_1(br)dr $$
$$\implies \partial_a J(a,b) = m^2 J(a,b) - \int_0^\infty r^2e^{-ar^2}I_1(br)dr$$
which gives us a nice ODE with homogeneous solution $J_h(a,b) = B(b)e^{m^2 a}$. Using variation of parameters (i.e. assuming the particular solution is of the form $v\cdot J_h$) gives
$$\partial_a v = - e^{-m^2a}\int_0^\infty r^2e^{-ar^2}I_1(br)dr$$
We'll focus on the integral inside first. Using the series representation for the Modified Bessel function of the first kind, we have that
$$\int_0^\infty r^2e^{-ar^2}I_1(br)dr = \frac{1}{2}\sum_{k=0}^\infty \frac{4^{-k}b^{2k+1}}{k!\Gamma(k+2)} \int_0^\infty r^{2k+3}e^{-ar^2}dr$$
Then using the substitution $s=\sqrt{a}r$ we get
$$\frac{1}{2a^{\frac{3}{2}}}\sum_{k=0}^\infty \frac{4^{-k}\left(\frac{b}{\sqrt{a}}\right)^{2k+1}}{k!\Gamma(k+2)} \int_0^\infty s^{2k+3}e^{-s^2}ds$$
$$ = \frac{1}{2a^{\frac{3}{2}}}\sum_{k=0}^\infty \frac{4^{-k}\left(\frac{b}{\sqrt{a}}\right)^{2k+1}}{k!\Gamma(k+2)} \cdot \frac{\Gamma(k+2)}{2} = \frac{b}{4a^2}e^{\frac{b^2}{4a}}$$
So we have
$$ v = \int -e^{-m^2a}\cdot\left(\frac{b}{4a^2}e^{\frac{b^2}{4a}}\right)da = \frac{1}{b}e^{-m^2a}e^{\frac{b^2}{4a}} + \int \frac{m^2}{b}e^{-m^2a}e^{\frac{b^2}{4a}}da$$
Thus our (almost) final answer is
$$ \mathscr{I}_1^2(a,b) = B(b)e^{m^2a}+\frac{1}{b}e^{\frac{b^2}{4a}} + e^{m^2a}\int_1^a \frac{m^2}{b}e^{-m^2 \alpha}e^{\frac{b^2}{4\alpha}}d\alpha$$
where $B(b)$ can be determined by plugging in $a=1$ for the original integral and solving for the series.
What's nice is that once you have the $(1,2)$ case with whatever method you choose, you have the $(1,4)$ case for free because
$$\mathscr{I}_1^4(a,b) = -\partial_a \mathscr{I}_1^2(a,b)$$
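As a numerical spot-check of the closed form derived above, $\int_0^\infty r^2 e^{-ar^2} I_1(br)\,dr = \frac{b}{4a^2}e^{b^2/(4a)}$, here is a small script (the parameter values are arbitrary illustrative choices; $I_1$ is summed from its power series and the integral is approximated by the trapezoid rule on a truncated range):

```python
import math

# Coefficients of the series I_1(x) = sum_k (x/2)^(2k+1) / (k! (k+1)!)
COEF = [1 / (math.factorial(k) * math.factorial(k + 1)) for k in range(40)]

def I1(x):
    t = x / 2
    return sum(c * t ** (2 * k + 1) for k, c in enumerate(COEF))

a, b = 1.3, 0.9  # arbitrary illustrative parameters (m plays no role here)

# Trapezoid rule on [0, R]; the exp(-a r^2) factor makes the tail negligible.
R, n = 12.0, 50_000
h = R / n
f = lambda r: r * r * math.exp(-a * r * r) * I1(b * r)
numeric = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(R))

closed = b / (4 * a * a) * math.exp(b * b / (4 * a))
print(abs(numeric - closed) < 1e-6)  # True
```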
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why does exponentiating group elements by integers always make sense? Let $G$ be a group.
For any $g\in G$, we can define a mapping $\mathbb Z\to G$ via $n\mapsto g^n$.
This map is a homomorphism between the additive group $\mathbb Z$ and $G$.
On the one hand, it's obvious why we can always do this. We just define $g^n$ as "multiply $g$ with itself $n$ times" (for $n>0$, and then straightforwardly generalise to all $n\in\mathbb Z$).
Thus, $\mathbb Z$ comes into play because we are "counting" the number of times $g$ is operating on itself.
On the other hand, this means that for any group $g\in G$, there is a homomorphism $\mathbb Z\to G$ sending $n$ to $g^n$.
Is there any reason why this group in particular, $\mathbb Z\equiv(\mathbb Z,+)$, ends up being homomorphic to subgroups of every group? Are there groups other than $\mathbb Z$ sharing this property?
| First, for each $g$ your map is really a homomorphism of the integers $\mathbb{Z}$ to $G$. You map $-1$ to $g^{-1}$. You don't want to restrict the map to the positive integers.
All you are really saying is that any group has lots of cyclic subgroups - every element generates one. And any cyclic group is a homomorphic image of $\mathbb{Z}$.
You can always map any group $H$ homomorphically to $G$ with the trivial map that sends everything to the identity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
Do bijections like the ones defined in this question exist?
Do everywhere discontinuous bijections $b: \mathbb R \to \mathbb R$ such that $b(\mathbb Q) \cap \mathbb Q= \emptyset$ exist?
| Yes, example: $f(x)=x+\sqrt{2}$.
Edit: for the function to be discontinuous everywhere:
$$
f(x) =
\begin{cases}
x+\pi & \text{if } x \in \mathbb{Q} \\
x+\pi+1 & \text{if } x \in \mathbb{R} \backslash \mathbb{Q}
\end{cases}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Finding a basis of solutions to a linear homogeneous system with a matrix with non-constant entries. Is there a way to find a basis of solutions to the following linear homogeneous system using the eigenvector method?
$$\boldsymbol{y}^{\prime}(t)=A(t)\, \boldsymbol{y}(t), \quad A(t)=\left[\begin{array}{cc}{2 t /\left(t^{2}+1\right)} & {0} \\ {2 t} & {2 t}\end{array}\right]$$
I've tried to but it gets quite unusual, do the matrix entries have to be constants for this method? I'm going to guess that the matrix exponential won't work either because of this.
I've found the solution by just doing the individual vector components, but was just wondering if there was a method to do it by leaving it in "matrix form".
| No, it's not a matrix exponential, though it's sometimes called a "time-ordered exponential", and eigenvectors/eigenvalues won't help. In general there's no way to do it in closed form, though as you mentioned, in this case you can solve it by doing $y_1$ first and then $y_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f(x)=e^{-1/x^2}$ for $x\neq 0$ is continuous.
Prove that $f(x)=e^{-1/x^2}$ for $x\neq 0$ is continuous.
So I know I need to use the epsilon-delta limit definition, but when I do I end up needing to show that
$e^{-1/a^2}\left|\dfrac{2}{x^3}e^{-1/x^2+1/a^2}-\dfrac{2}{a^3}\right|<\epsilon\; \forall \epsilon>0.$
I can simplify it to $e^{-1/a^2}\left|\dfrac{2}{x^3}e^{-(x^2-a^2)/(x^2a^2)}-\dfrac{2}{a^3}\right|.$ Then if I let $\delta <1,$ I get $x>a-1$ and so the expression in the absolute value brackets is less than $\dfrac{2}{(a-1)^3}e^{-(x^2-a^2)/((a-1)^2a^2)}-\dfrac{2}{a^3},$ but I don't know how to simplify this to get a fraction. I don't want to use the Taylor expansion for $e.$
Any help would be appreciated.
| We may evaluate the continuity of $e^{\frac{-1}{x^2}}$ straightforwardly using theorems regarding the continuity of composed and elementary functions.
Let $f(x)=\frac{1}{x},$ $g(x) = e^x$, and $h(x)=\frac{1}{x^2}$.
Then $$e^{\frac{-1}{x^2}} = \frac{1}{e^{\frac{1}{x^2}}} = \frac{1}{(g \circ h)(x)}= (f \circ (g \circ h))(x)$$.
We know $h(x)=\frac{1}{x^2}$ is a rational function, so it is continuous on its domain, $\mathbb{R}-\{0\}$. We know $g(x) = e^x$ is an exponential function, so it is continuous on its domain, $\mathbb{R}$. Therefore, $(g \circ h) (x) = e^{\frac{1}{x^2}}$ is continuous on the intersection of the two domains, which is $\mathbb{R}-\{0\}$.
Similar to before, we know $f(x)=\frac{1}{x}$ is a rational function, so it is continuous on its domain, $\mathbb{R}-\{0\}$. And we have already stated that $(g \circ h) (x) = e^{\frac{1}{x^2}}$ is continuous on its domain, $\mathbb{R}-\{0\}$. So $(f \circ (g \circ h))(x)$ is continuous on the intersection of the two domains, which is $\mathbb{R}-\{0\}$.
Therefore, $e^{\frac{-1}{x^2}}=(f \circ (g \circ h))(x)$ is continuous on $\mathbb{R}-\{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
a question on Bernoulli function in the book of Tenenbaum In section 0.2 of Introduction to Analytic and Probabilistic Number Theory by Gérald Tenenbaum, I read that "One easily verifies that these assumptions imply the identity...". I started from the left hand side of the series taking first finite terms and using integration summation by parts but it was not "easily". Is there any simple way to prove this identity.
Thanks!
| Hint: Let $f(x,y)=\sum _{r\geq 0}b_r(x)\frac{y^r}{r!}$. Taking $\frac{\partial}{\partial x}f(x,y)$ and using the assumption $b_r'(x)=rb_{r-1}(x)$, we get
$$\frac{\partial}{\partial x}f(x,y)=\frac{\partial}{\partial x}\Big(1+\sum _{r>0}b_r(x)\frac{y^r}{r!}\Big)=\sum _{r>0}b_{r-1}(x)\frac{y^r}{(r-1)!}=yf(x,y),$$ hence $$\frac{\partial f/\partial x}{f}=y$$ and so
$$f(x,y)=ce^{xy},$$ where $c$ does not depend on $x.$ Now, take
$$\int _{0}^1f(x,y)dx=\int _0^11dx+\sum _{r>0}\int _0^1b_r(x)dx\frac{y^r}{r!}=1,$$
use that $f=ce^{xy}$ to get $$c\int _0^1e^{xy}dx=1.$$ Solve for $c.$
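Carrying the hint to its end gives $c = \frac{y}{e^y-1}$, so the identity in question should be $\sum_{r\geq 0} b_r(x)\frac{y^r}{r!} = \frac{y e^{xy}}{e^y - 1}$ (assuming, as the hint does, that the $b_r$ are the standard Bernoulli polynomials, i.e. $b_0=1$, $b_r' = r\,b_{r-1}$, and $\int_0^1 b_r = 0$ for $r>0$). A numerical check built only from those defining properties:

```python
import math
from fractions import Fraction

def bernoulli_polys(order):
    # Coefficient lists (ascending powers of x) built from the defining
    # properties: b_0 = 1, b_r' = r*b_{r-1}, int_0^1 b_r(x) dx = 0 for r > 0.
    polys = [[Fraction(1)]]
    for r in range(1, order + 1):
        prev = polys[-1]
        # antiderivative of r*prev: coefficient of x^(k+1) is r*prev[k]/(k+1)
        p = [Fraction(0)] + [Fraction(r) * c / (k + 1) for k, c in enumerate(prev)]
        # fix the constant term so that the integral over [0,1] vanishes
        p[0] = -sum(c / (k + 1) for k, c in enumerate(p))
        polys.append(p)
    return polys

xq, y = Fraction(3, 10), 0.5   # illustrative evaluation point
x = float(xq)

polys = bernoulli_polys(30)    # terms decay like (y / 2*pi)^r, so 30 is plenty
series = sum(float(sum(c * xq ** k for k, c in enumerate(p))) * y ** r / math.factorial(r)
             for r, p in enumerate(polys))
closed = y * math.exp(x * y) / (math.exp(y) - 1)
print(abs(series - closed) < 1e-12)  # True
```

The recurrence reproduces the familiar polynomials ($b_1 = x-\frac12$, $b_2 = x^2-x+\frac16$, ...), and the partial sums match the closed form to machine precision.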
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3417984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many numbers have the form $10n+d$ where $d$ is a non-zero digit? How many numbers that do not end in a series of zeros are such that if we erase the last digit, the resulting number will divide the original?
I was even thinking of excluding the possibility of being multiples of 5, and by the divisibility criterion of 4, excluding it too (if the number ends in 2 zeros). Maybe if it's on another number base it might help in some other way.
| Let the number be $(a_{1}a_{2}\dots a_{n})_{10}=k$,
so $k=(a_{1}a_{2}\dots a_{n})_{10}=10\,(a_{1}a_{2}\dots a_{n-1})_{10}+a_{n}$.
now $(a_{1}a_{2}.....a_{n-1})_{10}$ divides $(a_{1}a_{2}.....a_{n})_{10}$
which means $(a_{1}a_{2}\dots a_{n-1})_{10}$ divides $a_{n}$; when $a_n \neq 0$ this forces $(a_{1}a_{2}\dots a_{n-1})_{10} \leq a_n \leq 9$, which is only possible for two-digit numbers
so a little observation tells us that the numbers with non-zero last digit are $11,12,13,14,15,16,17,18,19,22,24,26,28,33,36,39,44,48,55,66,77,88,99$, and allowing the last digit to be $0$ adds $10,20,30,40,50,60,70,80,90$
and trivially any number $(a_{1}a_{2}.....a_{n})_{10}$ where $a_n$ is 0 and $a_{n-1}$ is non zero is applicable
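The case analysis above can be confirmed by brute force (a short illustrative script): searching all numbers below $10^4$ whose last digit is non-zero and whose prefix divides them produces exactly the two-digit list.

```python
# Brute-force check: which numbers (not ending in 0) are divisible by the
# number obtained by erasing their last digit?
good = [n for n in range(10, 10_000)
        if n % 10 != 0 and n % (n // 10) == 0]
print(good)
# [11, 12, 13, 14, 15, 16, 17, 18, 19, 22, 24, 26, 28, 33, 36, 39,
#  44, 48, 55, 66, 77, 88, 99]
print(all(n < 100 for n in good))  # True: only two-digit numbers occur
```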
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3418080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Show that $G = M \circledast N$ has a diagonal subgroup iff $M$ is isomorphic to $N$. A subgroup $D$ of $G = M \circledast N$ is a diagonal subgroup provided:
$$D \cap M = 1 = D \cap N$$
$$DM = G = DN$$
(Where $\circledast$ is denoting the internal direct product of $M$ and $N$.)
GOAL: Show that $G$ has a diagonal subgroup iff $M \cong N$.
First assume G has a diagonal subgroup as described. Since $G = M \circledast N$ is the internal direct product of $M$ and $N$ we know $M$ and $N$ are normal in $G$, $G=MN$ and $M \cap N = 1$.
We must show that $M \cong N$.
(Here are a few results that may or may not be helpful for getting to a solution.
*
*If $G = M \circledast N$ then $G \cong M \times N$.
*If $G = M \circledast N = M \circledast L$ then $N \cong L$.)
I need help with the other direction as well. You have my appreciation in advance.
| The other direction is easy: when $M \simeq N$, then $G \simeq M \times M$, and $\{(m,m) : m \in M\}$ is a diagonal subgroup (you are just writing elements of $M$ twice, so every needed check follows from this).
Now, suppose that $G$ has a diagonal subgroup $D$. As Arturo suggested in his comment, consider the projection maps, $\pi_M : G \to M$ and $\pi_N : G \to N$. We want to determine $\pi_M(D)$. The projection map $\pi_M$ has kernel $N$, so the image of the subgroup $D$ is isomorphic to $DN/N$. Now, using the second isomorphism theorem, $DN/N \simeq D/(D \cap N) \simeq D$. But, since $DN=G$, this is also equal to $G/N \simeq M$.
By symmetry, repeating the argument with $\pi_N$ shows that $D \simeq M \simeq N$. So not only you have proved that $M \simeq N$, but that the cardinality of a diagonal subgroup is always $\frac{1}{2} |G|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3418185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Locations of root theorem confusion Theorem: if f is continuous in [a, b] and f(a) < 0, f(b) > 0, then there exists c in [a, b] such that f(c) = 0.
The most popular proof on the website is proof by contradiction. Thus, we have two cases, f(c) < 0 or f(c) > 0.
First, I suppose $f(c) > 0$. How can I use continuity to prove that there is a contradiction? The following is my attempt.
$\forall \epsilon > 0, \exists \delta > 0$ s.t $\forall x: |x - c| < \delta$ we have $|f(x) - f(c)| < \epsilon$ Also, I let S = {x $\in$ [a, b] : f(x) < 0}. Since S is bounded, I got a least upper bound c. Then, how can I proceed?
| In the next section we prove a proposition by contradiction that can be used by the OP.
Proposition 1: Let $g:[a,b] \to \Bbb R$ be a function satisfying $g(x) \lt 0$ for $x \lt b$.
If $g$ is continuous at $x = b$ then $g(b) \lt 0$ or $g(b) = 0$.
Proof
To arrive at a contradiction, assume that $g(b) \gt 0$.
Let
$\quad \varepsilon = g(b)$
Since $g$ is continuous at $b$ we can find a $\delta \gt 0$ such that
$\tag 1 \big (\forall x \in [a,b]\big ) \; |x - b| < \delta \text{ implies } |g(x) - g(b)| < \varepsilon$
It follows that we can find an $x_0 \in [a,b)$ with $|x_0 - b| < \delta$, so that
$\tag 2 |g(x_0) - g(b)| < \varepsilon = g(b)$
But then $g(x_0) \gt 0$, a contradiction. $\quad \blacksquare$
The OP can now prove the following theorem.
Theorem 2: Let $f:[a,b] \to \Bbb R$ be a continuous function satisfying $f(a) \lt 0$ and $f(b) \gt 0$. Then there exists one and only one number $c \in (a,b)$ such that $f(x) \lt 0$ for $x \in [a,c)$ and $f(c) = 0$.
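As a constructive complement to the contradiction argument (not needed for the proof), the number $c = \sup S$ from the OP's setup can be approximated by bisection, which uses nothing beyond the sign hypotheses of the theorem. A sketch with the illustrative function $f(x) = x^2 - 2$ on $[1,2]$:

```python
def bisect_root(f, a, b, tol=1e-12):
    # Invariant mirrors the theorem's hypotheses: f(a) < 0 < f(b).
    assert f(a) < 0 < f(b)
    while b - a > tol:
        m = (a + b) / 2
        if f(m) < 0:
            a = m   # a sign change, hence a root, still lies in [m, b]
        else:
            b = m
    return (a + b) / 2

c = bisect_root(lambda x: x * x - 2, 1.0, 2.0)
print(abs(c - 2 ** 0.5) < 1e-10)  # True
```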
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3418364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Related Rates (point moving along a curve) Consider a point moving along the curve $$f(x) = \sqrt x$$.
a). Find the position of the point on the curve where both coordinates of the point are changing at the same rate.
b). If $\dfrac{dx}{dt}$ is $2 \text{ m/sec}$ at the point $(4,f(4))$, how fast is the point moving away from the origin?
My attempt:
Find the point where $\frac{dy}{dt} = \frac{dx}{dt}$.
Given $y = \sqrt x$, differentiating with respect to $t$ gives $\frac{dy}{dt} = \frac{1}{2\sqrt x}\,\frac{dx}{dt}$,
and this is where I'm still lost.
| a) No. You're not looking for the point where $x$ and $y$ have the same value, you're looking for the point where the values are changing at the same rate. Whenever you see "change" in calculus, that is a free clue that you should be thinking about the derivative. So you are looking for the point where $\frac{dy}{dx}=1$.
b) For this, we are being asked about how quickly a different quantity is changing, so we will need a new function to take the derivative of. The distance from the origin to $(x,(f(x))$ is given by $$g(x)=\sqrt{x^2+(f(x))^2}=\sqrt{x^2+(\sqrt x)^2}=\sqrt{x^2+x}$$ The problem is asking you to calculate $\frac {dg}{dt}$ at $x=4$ given that $\frac{dx}{dt}=2$. That looks like a lot to work through, but remember that the Chain Rule tells us that $\frac{dg}{dt}=\frac{dg}{dx}\cdot\frac{dx}{dt}$, so you've got just enough clues to work it out.
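For the record, the numbers below are worked out here, not stated in the answer: part (a) gives $\frac{1}{2\sqrt x} = 1$, i.e. the point $(\frac14, \frac12)$, and part (b) gives $\frac{dg}{dx}\big|_{x=4} = \frac{9}{2\sqrt{20}}$, hence $\frac{dg}{dt} = \frac{9}{\sqrt{20}} \approx 2.0125$ m/sec. A finite-difference sanity check:

```python
import math

g = lambda x: math.sqrt(x * x + x)   # distance from origin to (x, sqrt(x))

# Chain rule: dg/dt = g'(x) * dx/dt, with dx/dt = 2 at x = 4.
dgdx_exact = 9 / (2 * math.sqrt(20))   # from g'(x) = (2x+1) / (2 sqrt(x^2+x))
dgdt = dgdx_exact * 2

# Central finite-difference approximation of g'(4) as a cross-check.
h = 1e-6
dgdx_num = (g(4 + h) - g(4 - h)) / (2 * h)

print(abs(dgdx_num - dgdx_exact) < 1e-8)  # True
print(round(dgdt, 4))                     # 2.0125
```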
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3418724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Proving a Supremum of a Set The Question:
Find the supremum of the set $${\{\sqrt[4]{n^4+n^3}-n:n\in \mathbb{N}\}}$$
And then it tells us to plug large values of n to determine a suitable guess, show that is an upper bound and then prove it is the smallest upper bound.
I followed the question, finding a suitable guess for s is 1/4, and showed that this is an upper bound just fine. My issue lies with proving that there is no smaller upper bound. At this point, my working looks like this as I try to prove by contradiction:
Assume h is some other upper bound, such that h < 1/4.
$${\sqrt[4]{n^4+n^3}-n < h}$$
$$n^4+n^3 < (h+n)^4$$
But after expansion, all I can cancel is $n^4$ which leaves me with a lot of unknowns to various powers and a really complicated solution to do by hand
$$n^3 < h^4 + 4h^3n + 6h^2n^2 + 4hn^3$$
Which means I know I've gone down the wrong route but I'm not sure which way I should go about proving this. I adapted an answer from a different book example, but that only went up to power 2, so simplifying this way was much easier.
| Let
$$
f(n) = \sqrt[4]{n^4+n^3}-n = n ((1+\frac{1}{n})^{1/4}-1)
$$
Now by Bernoulli's inequality, $(1+\frac{1}{n})^{1/4} \le 1+\frac{1}{4n}$, so
$$
f(n) \le n ((1+\frac{1}{4n})-1) = \frac{1}{4}
$$
Now we need to show that this is the smallest upper bound. To do so,
let $x = \frac{1}{n}$ and reconsider Bernoulli's inequality: $(1+x)^{1/4} \le 1+\frac{x}{4}$, i.e. $(1+x) \le (1+\frac{x}{4})^4 = 1 + x + \frac{3x^2}{8} + \frac{x^3}{16} + \frac{x^4}{256}$, i.e. $0 \le \frac{3x^2}{8} + \frac{x^3}{16} + \frac{x^4}{256}$. The gap on the right tends to $0$ as $x \to 0$, which corresponds to $n \to \infty$, so $f(n)$ comes arbitrarily close to $\frac14$. Hence there is no smaller upper bound and the supremum is $1/4$.
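A quick numerical illustration of the two claims above (values stay below $\frac14$ and increase toward it):

```python
f = lambda n: (n ** 4 + n ** 3) ** 0.25 - n

# f is increasing toward 1/4 but never reaches it (f(n) = 1/4 - 3/(32n) + ...).
vals = [f(n) for n in (1, 10, 100, 1000, 10_000)]
print(all(v < 0.25 for v in vals))                 # True: 1/4 is an upper bound
print(all(a < b for a, b in zip(vals, vals[1:])))  # True: increasing
print(0.25 - f(10_000) < 1e-4)                     # True: approaches 1/4
```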
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3418854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Prove that $T$ is continuous. I am having a really hard time trying to solve this problem:
Let $X$ and $Y$ be normed linear spaces and $T : X \to Y$ a linear operator with closed graph and finite-dimensional range $R(T)$. Prove that T is continuous.
Obviously the closed graph theorem cannot be applied since our spaces are not neccessarily Banach ones. Nonetheless, at least "Y" is a Banach space because of its finite-dimensionality. I do not know how I can go further from here. Thanks in advance.
| Assume $T$ is not continuous at zero. Then there exists a sequence $(x_n)\subset X$ with $x_n\to 0$ and $\epsilon>0$ such that $\|Tx_n\|\ge\epsilon$ for $n\in\mathbb N$. Set $u_n := \frac{x_n}{\|Tx_n\|}\in X$. Then $u_n\to 0$ and $\|Tu_n\|=1$ for all $n\in\mathbb N$. Since $R(T)$ is finite-dimensional, there exists a subsequence $(u_n')$ of $(u_n)$ such that $(Tu_n')$ converges. Let $y\in Y$ be its limit. Then from $u_n'\to 0$, $Tu_n'\to y$, and the closedness of the graph we conclude that $y=0$. But $\|y\| = \lim_n\|Tu_n'\| = 1$. Contradiction!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3418956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $ u_t=u_{xx} $ and $u(x,0)=4x(1-x),~x\in [0,1],~~u(0,t)=u(1,t)=0,~t\geqslant 0$, prove $0<u(x,t)<1$ for $x\in(0,1)$, $t>0$
Let $u$ be the solution of the heat equation initial and boundary value problem:
$$\frac{\partial u}{\partial t}=\frac{\partial^2u}{\partial x^2},~~x\in (0,1),~t>0,$$
and:
$$u(x,0)=4x(1-x),~x\in [0,1]~~ \textrm{and} ~~u(0,t)=u(1,t)=0,~~t\geqslant 0.$$
Prove that $0<u(x,t)<1$ for all $x\in (0,1)$ and $t>0$.
Attempt. Instead of solving the equation, we shall work with the maximum-minimum principle. Since $u=0$ for $x=0,\,1$ and $0\leqslant u\leqslant 1$ for $t=0$, we get the estimate $0\leqslant u\leqslant 1$ on the boundary of $\varOmega=(0,1)\times [0,+\infty).$ By the maximum-minimum principle, we get $0\leqslant u\leqslant 1$ for all $x\in (0,1)$ and $t>0$.
How one can derive the strict inequalities?
Thanks in advance for the help.
| The strong maximum principle states that if the solution attains its maximum $M$ in some $0 < \bar{x} <1$ and $\bar{t} > 0$, then the solution $u(t, x) \equiv M$ for $0 \le x \le 1$ and $0 \le t \le \bar{t}$. Since the initial condition is not constant, you have a contradiction. (Similarly for the minimum)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that the map $c:{\mathbb {T}}^{3}=S^{1}\times S^{1}\times S^{1}\setminus\ \Delta\longrightarrow \{\pm 1\}$ is continuous. Prove that the map $c:{\mathbb {T}}^{3}=S^{1}\times S^{1}\times S^{1}\setminus \Delta\longrightarrow \{\pm 1\}$ is continuous where $\Delta$ is the diagonal
$\Delta=\{(g_i,g_j,g_k)\}$ for $i=j$ or $j=k$ or $i=k$ and $c$ has the following properties:
1) ($\textit{Left-Invariance}$): $c(ag_1,ag_2,ag_3)=c(g_1,g_2,g_3)$ for all $a,g_1,g_2,g_3\in S^1$ with $g_i\ne g_j$ for any $i$ and $j$
2)($\textit{Co-cycle condition}$): $c(g_1,g_2,g_3)-c(g_1,g_2,g_4)+c(g_1,g_3,g_4)-c(g_2,g_3,g_4)=0$ for all $g_1,g_2,g_3,g_4\in S^1$ with $g_i\ne g_j$ for any $i$ and $j$
These two conditions basically tells us that there is a certain order on circle which is left-invariant.
I want to prove or disprove the continuity of $c$ where $\{\pm 1\}$ is given discrete topology. We see that $\mathbb{T}^{3}=S^{1}\times S^{1}\times S^{1}$ is a three dimensional torus and not much of its topological facts are given in the wiki article. Please suggest where should I start.
EDIT: Image after Paul Plummer's comment
| You can choose such a $c$ to be continuous. Note that means that the 3-torus without the fat diagonal is at least two connected components. How can we see that?
First, instead of trying to imagine a 3-torus, imagine three marked points, possibly with multiplicity (colored so as to be distinguished from each other), on the circle $S^1$; these correspond to a point in $\mathbb T^3$. So if you "visually" see only two or one marked point, that means some marked points are overlapping (two coordinates are equal), and so it is a point in the fat diagonal $\Delta$. Let's call the marked points $r,b,g$ and place them at $-1,i,1$ in $S^1$. It should be clear that you cannot move the marked points so that $r,b,g$ sit at $-1,-i,1$ without crossing the fat diagonal.
It should also be clear, with this interpretation, that the connected components are preserved by the $S^1$ action, so $c$ can assign each connected component a single value; hence the function is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Non-zero admissible representation of $sl_\infty$ Does anyone have an example of a non-zero admissible representation of $sl_\infty$?
| I think I found one.
We consider the vector space $V_n = \mathbb{C}v_0 \oplus \cdots \oplus \mathbb{C} v_n$ and the vector space $V^{(\mathbb{Z})}_n$ of $\mathbb{Z}$-indexed sequences of elements of $V_n$ with only finitely many non-zero terms.
Then, for $N$ in $\mathbb{N}$, we introduce the basis $(v^i_p)$, where $0 \leq p \leq n$ and $-N \leq i \leq N$, of the subspace of $V^{(\mathbb{Z})}_n$ consisting of sequences whose terms vanish for indices below $-N$ or above $N$. We define the action of the generators on that basis by:
$$X_iv^k_p = (n-p+1)\delta_{ik}v^k_{p-1}$$
$$Y_iv^k_p = (p+1)\delta_{ik}v^k_{p+1}$$
$$H_iv^k_p = (n-2p)\delta_{ik}v^k_p$$
We can see this satisfies $[X_i, Y_j] = \delta_{ij}H_j$ and similarly for $[H_i, X_j]$ and $[H_i, Y_j]$. Thus, this action defines a structure of representation of $sl_\infty$ on $V^{(\mathbb{Z})}_n$.
This representation is admissible and non zero by definitions of $V^{(\mathbb{Z})}_n$ and admissible representations.
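The bracket relations can also be checked concretely: on each fixed index $i$, the formulas above are those of the standard $(n{+}1)$-dimensional irreducible $sl_2$-module. A matrix sanity check for the illustrative choice $n=4$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 4          # illustrative choice; the basis of one block is v_0, ..., v_n
dim = n + 1

# Matrices of X, Y, H on one block: X v_p = (n-p+1) v_{p-1} (with v_{-1} = 0),
# Y v_p = (p+1) v_{p+1} (with v_{n+1} = 0), and H v_p = (n-2p) v_p.
X = [[n - q + 1 if p == q - 1 else 0 for q in range(dim)] for p in range(dim)]
Y = [[q + 1 if p == q + 1 else 0 for q in range(dim)] for p in range(dim)]
H = [[n - 2 * q if p == q else 0 for q in range(dim)] for p in range(dim)]

XY, YX = matmul(X, Y), matmul(Y, X)
comm = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(XY, YX)]
print(comm == H)  # True: [X, Y] = H on each block
```

The relations $[H,X]=2X$ and $[H,Y]=-2Y$ can be checked the same way, and for distinct indices $i \ne j$ the operators act on different basis vectors and commute.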
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that a function has a derivative at $x=0$ Okay, we have this function:
$f(x)= |x|^\alpha \sin(\frac{1}{x})$, if $x\neq0$
$f(x)= 0$, if $x=0$
The question is, at point x=0:
1) At which value of $\alpha$ does $f(x)$ have a derivative?
2) At which value of $\alpha$ does $f(x)$ have a continuous derivative?
The answer is
1) $\alpha>1$
2) $\alpha>2$
For question 1) I have to prove that this limit exist:
$$\lim_{h\to0} \frac{f(0+h)-f(0)}{h} $$
Which is this:
$$\lim_{h\to0} \frac{|h|^\alpha \sin(\frac{1}{h})}{h} $$
We know that $\sin(\frac{1}{h})$ as $h\to0$ doesn't have a limit but it is bounded. So, if we prove that $\lim_{h\to0} \frac{|h|^\alpha}{h} = 0$, then we would have proven that the whole limit is equal to 0, thus, the derivative at point $x=0$ is equal to 0. Is this reasoning correct?
If $\alpha > 1$, we can say that:
$$\lim_{h\to0} \left| \frac {|h|^\alpha}{h} \right| = \lim_{h\to0} |h|^{\alpha-1} = 0 $$
From this follows that:
$$\lim_{h\to0} \frac{|h|^\alpha}{h}=0 $$
Is this reasoning correct?
For the second question, I am completely lost. This is my first time posting here, and I'd be very thankful if you guys could help me out.
| For question $1$ we need that the following limit exists
$$\lim_{h\to0} \frac{|h|^\alpha \sin(\frac{1}{h})}{h}$$
and since $\sin(\frac{1}{h})$ oscillates and is bounded we need that
$$\lim_{h\to0} \frac{|h|^\alpha }{h}=0 \implies \alpha>1$$
For question $2$ we need to check that the derivative is continuous and since we have
$$f'(x)=|x|^{\alpha-2}\left(\alpha x \sin(\frac{1}{x})-\cos(\frac{1}{x})\right)$$
in order to have continuity at the origin we need that $f'(x) \to 0$ as $x \to 0$. The term $\alpha x \sin(\frac{1}{x})$ is bounded by $\alpha|x|$ and tends to $0$, while $\cos(\frac{1}{x})$ keeps oscillating between $-1$ and $1$, so the limit is $0$ exactly when $|x|^{\alpha-2} \to 0$, that is, when $\alpha>2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Difference between topologically complete space and complete metric space Definition: Topological space $(X,\tau)$ is called topologically complete if there is metric $d$ on $X$ which induces the topology $\tau$ of $X$ and $(X,d)$ is complete metric space.
Also the following fact is true: If $f:X\to Y$ where $f$ is homeomorphism and $Y$ is topologically complete then $X$ is also topologically complete.
The proof is not difficult because if $d$ is a metric on $Y$ which induces topology of $Y$ and $(Y,d)$ complete metric space then one can define the metric $\rho$ on $X$ as follows: $\rho(x_1,x_2):=d(f(x_1),f(x_2))$. One can show that $\rho$ induces the topology of $X$ and $(X,\rho)$ is complete metric space.
However, I was wondering about the following moment: the above reasoning shows that the notion of topologically complete is topological property. However completeness is not topological property. The standard example is $(0,1)$ and $\mathbb{R}$, they are homeomorphic, $\mathbb{R}$ is complete but $(0,1)$ is not since the sequence $x_n=1-\frac{1}{n}$ is Cauchy sequence but does not converge in $(0,1)$.
Can anyone explain to me why the above reasoning cannot be applied to the case of $\mathbb{R}$ and $(0,1)$? I guess that $(0,1)$ is not complete in the standard euclidean metric inherited from $\mathbb{R}$, but it may be complete in a different metric which induces its subspace topology.
Anyway I would be very grateful for useful answer!
| A preferred definition of topologically complete is: a topological space $S$ is topologically complete when $S$ is homeomorphic to a complete metric space.
Clearly, every complete metric space is a topologically complete topological space. In particular, $\mathbb{R}$ with the usual metric is topologically complete.
$(0,1)$ with the inherited subspace metric is not a complete metric space. Since $(0,1)$ is homeomorphic to $\mathbb{R}$, however, it is topologically complete: the homeomorphism transports the complete metric of $\mathbb{R}$ back to $(0,1)$, exactly as in the proof you quoted. Completeness depends on the metric; topological completeness depends only on the topology.
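To make this concrete: pulling the usual metric of $\mathbb{R}$ back through a homeomorphism such as $h(x)=\tan(\pi(x-\tfrac12))$ gives a complete metric on $(0,1)$ inducing the same topology, and the sequence $x_n=1-\frac1n$ is no longer Cauchy in it. A quick sketch (the function names are mine):

```python
import math

def h(x):
    # homeomorphism (0,1) -> R
    return math.tan(math.pi * (x - 0.5))

def rho(x, y):
    # metric on (0,1) pulled back from the complete metric of R
    return abs(h(x) - h(y))

xs = [1 - 1 / n for n in range(2, 500)]
euclid_gap = abs(xs[-1] - xs[-2])   # -> 0: the sequence is Cauchy in the usual metric
rho_gap = rho(xs[-1], xs[-2])       # stays about 1/pi: not Cauchy in the new metric
```

Consecutive terms get arbitrarily close in the Euclidean metric but stay a bounded distance apart under $\rho$, so completeness of $((0,1),\rho)$ is not contradicted.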
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
$X_n$ is bounded in probability and $Y_n$ converges to $0$ in probability, then $X_nY_n$ converges to $0$ in probability I want to show: if $X_n$ is bounded in probability and $Y_n \rightarrow 0$ in probability, then $X_nY_n \rightarrow 0$ in probability.
I know the following definitions that is
Definition 2.17 : We say that $X_n$ is bounded in probability if $X_n = O_P (1)$,
i.e. if for every $\epsilon > 0$, there exist $M$ and $N$ such that $P(|X_n| < M) > 1 - \epsilon$ for $n > N$.
So I want to show that for every
$\epsilon > 0$ and $\epsilon' > 0$ there exist $M>0$ and $n_0$ such that
$P(|X_n| <M) > 1 -\epsilon'/2$
$P(|Y_n| <\epsilon /M) > 1 -\epsilon'/2$
for every $n>n_0$
Then I want to show that
$P(|X_n| <M$ and $|Y_n| <\epsilon /M) > 1-\epsilon' $
| $P(|X_nY_n| >\epsilon) \leq P(|X_n| \leq M, |X_nY_n| >\epsilon)+P(|X_n| > M, |X_nY_n| >\epsilon)\leq P(|Y_n| >\frac {\epsilon} M)+[1-P(|X_n|<M)]$. For $n >N$ the second term is less than $\epsilon$ and the first term tends to $0$ as $n \to \infty$.
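To see the statement in action, here is a small simulation sketch (my own choice of distributions: $X_n\sim N(0,1)$, which is bounded in probability, and $Y_n=Z/n\to0$ in probability):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200_000
eps = 0.1

def frac_exceeding(n):
    # X_n ~ N(0,1): bounded in probability; Y_n = Z/n -> 0 in probability
    x = rng.standard_normal(m)
    y = rng.standard_normal(m) / n
    return float(np.mean(np.abs(x * y) > eps))

fracs = [frac_exceeding(n) for n in (1, 10, 100)]   # estimates of P(|X_n Y_n| > eps)
```

The empirical probability $P(|X_nY_n|>\epsilon)$ drops toward $0$ as $n$ grows, as the bound above predicts.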
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Artin Schreier equation Let $K$ be the field obtained by adjoining to $\mathbb{Q}_p$ a root $\alpha$ of the polynomial $f(x)=x^p-x-\dfrac{1}{p}$. I should prove that $K \supset \mathbb{Q}_p$ is a Galois extension of degree $p$. I managed to prove that $K$ has degree $p$: taking $\dfrac{1}{\alpha}$, we see that it is a root of $x^p+px^{p-1}-p$, which is an Eisenstein polynomial.
I do not know how to show that we have all roots of $f$.
| It's just Hensel's lemma.
First note that the $p$-adic valuation of $\alpha$ is $-1/p$: from the identity $\alpha^p-\alpha=1/p$ it follows that $v_p(\alpha) <0$, hence $v_p(\alpha^p)=pv_p(\alpha)<v_p(\alpha)$, and the ultrametric inequality gives $-1=v_p(1/p)=v_p(\alpha^p-\alpha)=v_p(\alpha^p)=pv_p(\alpha)$.
Putting $K = \mathbb{Q}_p(\alpha)$ and $y = x - \alpha$, we want to show that the polynomial $g(y) = (y + \alpha)^p - (y + \alpha) - \frac{1}{p} \in K[y]$ has $p$ roots in $K$.
Now write $g(y) = y^p + c_{p - 1}y^{p - 1} + \cdots + c_2y^2 + (c_1 - 1)y$, where every $c_i$ is in the maximal ideal of $\mathcal{O}_K$. In fact, every $c_i$ is of the form $p\cdot d_i \cdot \alpha^{p-i}$, where $d_i$ is an integer. We thus have $v_p(p\cdot \alpha^{p-i})= 1-(p-i)/p=i/p>0$.
In the residue field, we have $\overline{g}(\overline{y}) = \overline{y}^p - \overline{y}$, which has $p$ different roots, namely the elements of $\mathbb{F}_p$.
Thus by Hensel's lemma, for every element of $\mathbb{F}_p$ there is a unique root of $g$ in $K$ lifting that element.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3419972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Prove that $n+2$ points in $\Bbb R^n$ cannot all be at a unit distance from each other. I saw this on the FB group "Actually good math problems" where one solution was an illegible photo and the other involved intersecting spheres. I have a solution. I'd like to see how many good answers I get before I post it.
If $S$ is a set of $n+2$ points in $\Bbb R^n$ (with $n\in \Bbb N$) with the usual (Cartesian) norm, show there exist $2$ distinct $p,q\in S$ with $\|p-q\|\ne 1.$
| Let us assume that $\{P_1,\ldots,P_{n+2}\}$ is a set of points in $\mathbb{R}^n$ such that $\|P_i-P_j\|=1$ for any $i\neq j$.
We may assume without loss of generality that $P_{n+2}=O$. From the polarization formula
$$ 2\cos\theta_{ij}=2\langle P_i, P_j \rangle = \|P_i\|^2 + \|P_j\|^2 - \|P_i-P_j\|^2 = 1+1-1 = 1$$
it follows that $\widehat{P_i O P_j}= \frac{\pi}{3}$ and $\langle P_i,P_j\rangle = \frac{1}{2}$ for any $i\neq j$ such that $i,j\in\{1,\ldots,n+1\}$. In particular $\langle P_{n+1},P_k\rangle=\frac{1}{2}$ for every $k\in\{1,\ldots,n\}$, consistently with
$$ n=\sum_{k=1}^{n}\|P_{n+1}-P_k\|^2=2n-2\sum_{k=1}^{n}\langle P_k,P_{n+1}\rangle. $$
This implies that $P_1,\ldots,P_n$ all lie
on an affine hyperplane $\pi$ (namely $\{x:\langle x,P_{n+1}\rangle=\frac{1}{2}\}$), while $P_{n+1}$ and $P_{n+2}$ lie on opposite sides of it, since $\langle P_{n+1},P_{n+1}\rangle = 1$ and $\langle P_{n+2},P_{n+1}\rangle=0$. These equalities give that $O=P_{n+2}$ and $P_{n+1}$ are actually symmetric with respect to $\pi$. Let $M\in \pi$ be the midpoint of $P_{n+1} P_{n+2}$. By the Pythagorean theorem, $P_k M=\frac{\sqrt{3}}{2}$ for any $k\in\{1,\ldots,n\}$, so we have $n$ points, $1$-apart from each other, on a copy of $S^{n-2}$ with radius $\frac{\sqrt{3}}{2}$ in $\mathbb{R}^{n-1}\simeq \pi$. Now consider $P_1$ and $P_2$: all the points $P_3,\ldots,P_n$ have to lie on the previous sphere and on the unit spheres centered at $P_1$ and $P_2$. These spheres do not have a common intersection, so we cannot have $n+2$ points in $\mathbb{R}^n$ which are $1$-apart from each other.
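For what it's worth, the conclusion can also be double-checked via Cayley–Menger determinants (a different route from the argument above): for $m$ points with all pairwise distances $1$, the bordered distance matrix is $J-I$ of size $m+1$, whose determinant is $m(-1)^m$ and never vanishes, so such points are always affinely independent and need dimension at least $m-1$. A small numerical sketch:

```python
import numpy as np

def cayley_menger_det(m):
    # Bordered distance matrix for m points with all squared distances 1:
    # zero diagonal, ones elsewhere, i.e. J - I of size m + 1.
    M = np.ones((m + 1, m + 1)) - np.eye(m + 1)
    return np.linalg.det(M)

# Nonzero determinant <=> the m points are affinely independent,
# so m unit-equidistant points need dimension >= m - 1: at most n+1 fit in R^n.
dets = {m: cayley_menger_det(m) for m in range(2, 9)}
```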
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3420134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Determining decreasing sequence For a sequence $x_{n+1}=4x_n-x_{n-1}$, $x_1=4, x_2=15$, show that the sequence $\frac{x_{n+1}}{x_n}$ is decreasing
I know from calculating that $\frac{x_{n+1}}{x_n}-\frac{x_{n}}{x_{n-1}}=\frac{-1}{x_nx_{n-1}}$, but I can't seem to prove that.
Any tips please?
| $$x_{n+1}=4x_n-x_{n-1}$$
Divide through by $x_n$:
$\dfrac{x_{n+1}}{x_n}=4-\dfrac{x_{n-1}}{x_n}$
$\implies \dfrac{x_{n+1}}{x_n} - \dfrac{x_{n}}{x_{n-1}} =4-\dfrac{x_{n-1}}{x_n} - \dfrac{x_{n}}{x_{n-1}} = 4-\dfrac{x_n^2+{x_{n-1}}^2}{x_nx_{n-1}} \le 0$
because $x_n^2+x_{n-1}^2\ge 4x_nx_{n-1}$ here. (That inequality is false for arbitrary positive numbers, but it holds for this sequence: it is equivalent to $r_n:=\frac{x_n}{x_{n-1}}\ge 2+\sqrt3$, which holds for $n=2$ since $\frac{15}{4}>2+\sqrt3$, and is preserved by the recurrence because $r_{n+1}=4-\frac{1}{r_n}\ge 4-\frac{1}{2+\sqrt3}=2+\sqrt3$.)
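A quick numerical sketch (names are mine) confirms both the OP's identity $x_{n+1}x_{n-1}-x_n^2=-1$ and the decreasing ratios:

```python
import math

x = [4, 15]
for _ in range(10):
    x.append(4 * x[-1] - x[-2])

ratios = [x[i + 1] / x[i] for i in range(len(x) - 1)]
# x_{n+1} x_{n-1} - x_n^2 = -1; dividing by x_n x_{n-1} gives exactly
# x_{n+1}/x_n - x_n/x_{n-1} = -1/(x_n x_{n-1}), the OP's identity.
invariants = [x[i + 1] * x[i - 1] - x[i] ** 2 for i in range(1, len(x) - 1)]
```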
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3420295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Matrix with $i,j$ entry $a_i/(a_i + a_j)$ Suppose $a_1 > a_2 > \cdots > a_n > 0$, and consider the $n\times n$ matrix $A$ with $i,j$ entry $\frac{a_i}{a_i + a_j}$. I am wondering if the matrix has a name, or if there is any insight about its eigenvalues/eigenvectors. All I can observe is that subtracting $\frac12$ from every entry of $A$ yields a skew-symmetric matrix, but nothing else.
Thanks a lot!
| $A$ is the product of two positive definite matrices $\operatorname{diag}(\mathbf a)$ and $\left(\frac{1}{a_i+a_j}\right)_{i,j\in\{1,2,\ldots,n\}}=\int_0^\infty e^{-x\mathbf a}e^{-x\mathbf a^\top}dx$, where $\mathbf a=(a_1,\ldots,a_n)^\top$ and $e^{\mathbf v}$ denotes the entrywise exponential of a row/column vector $\mathbf v$. Therefore $A$ has a positive spectrum.
It follows that each eigenvalue $\lambda$ of $A$ has a corresponding real unit eigenvector $\mathbf v$. However, as $A+A^\top=\mathbf e\mathbf e^\top$ (where $\mathbf e=(1,1,\ldots,1)^\top$),
$$
2\lambda
=\mathbf v^\top(A+A^\top)\mathbf v
=\mathbf v^\top\mathbf e\mathbf e^\top\mathbf v
=(\mathbf v^\top\mathbf e)^2
\le\|\mathbf v\|_2^2\|\mathbf e\|_2^2
=n.
$$
Therefore $\rho(A)\le\frac n2$. Since $A$ is entrywise positive, by Perron-Frobenius theorem, $\rho(A)$ is a simple eigenvalue of $A$ and it has a corresponding eigenvector that is entrywise positive.
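A small numerical sketch of both conclusions (positive real spectrum and $\rho(A)\le\frac n2$), with a randomly chosen decreasing $\mathbf a$ (sizes and seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
a = np.sort(rng.random(n) + 0.1)[::-1]        # a_1 > a_2 > ... > a_n > 0
A = a[:, None] / (a[:, None] + a[None, :])    # A_ij = a_i / (a_i + a_j)

eig = np.linalg.eigvals(A)
spectral_radius = float(np.max(np.abs(eig)))
```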
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3420441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Which of the following relations on $\{1,2,3\}$ is an equivalence relation? $$\begin{array}{l}{R_{1}=\{(1,1),(2,2),(3,3),(1,2),(2,1)\}} \\ {R_{2}=\{(1,1),(2,2)\}} \\ {R_{3}=\{(1,2),(2,3),(3,1)\}}\end{array}$$
$$\begin{array}{l}{R_{4}=\{(1,2),(2,1),(1,3),(3,1),(2,3),(3,2)\}} \\ {R_{5}=\{(1,1),(2,2),(3,3),(1,2),(2,3),(3,1)\}} \\ {R_{6}=\{(1,2),(2,1)\}}\end{array}$$
My answer
In order for $R_i$ to be an equivalence relation it should meet the following criteria:
*

*$R_i$ should be a subset of $\{1,2,3\} \times \{1,2,3\}$

*$R_i$ should be reflexive, symmetric and transitive.
My conclusion is that none of the relations above is an equivalence relation. But the answer key is $R_1$. The problem is that $3$ is not related to any of the others, so I cannot see that $R_1$ is transitive.
Q1: Why am I wrong?
Q2: Why isn't $R_5$ an equivalence relation?
| For transitivity you need, for all $a,b,c\in \{1,2,3\}$:
if $a$ is related to $b$ and $b$ is related to $c$, then $a$ is related to $c$. You can indeed check that this is true for $R_{1}$: the only way $3$ can appear in the hypothesis is through the pair $(3,3)$, since $3$ is related only to $3$, and in that case the conclusion holds trivially.
Clarification: the thing you have to check for transitivity is that for every $(a,b)$ and $(b,c)$ in $R_{1}$ you also have $(a,c)$ in $R_{1}$; the condition is vacuously satisfied by pairs that never occur. Maybe this way of phrasing transitivity helps you to understand it better.
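You can also verify this mechanically (a small Python sketch; the helper name is mine). It shows in passing why $R_5$ fails: $(1,2)\in R_5$ but $(2,1)\notin R_5$, so $R_5$ is not symmetric.

```python
S = {1, 2, 3}

def is_equivalence(R, S):
    reflexive = all((a, a) in R for a in S)
    symmetric = all((b, a) in R for (a, b) in R)
    transitive = all((a, d) in R
                     for (a, b) in R for (c, d) in R if b == c)
    return reflexive and symmetric and transitive

R1 = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
R5 = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 3), (3, 1)}
```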
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3420588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
If $x=9$, then can we write $\sqrt{x}=\pm3$ or only $\sqrt{x}=3$ If $x=9$, then can we write $\sqrt{x}=\pm3$ or only $\sqrt{x}=3$.
I am confused, as the square root always gives a positive number.
But the irony is that if we have $a^2=9$, then we write $a=\pm3$, where $a=\sqrt{a^2}$.
| You are right that often $\sqrt{n}$ is meant as the positive root. As a result, it is very common to see the solutions to $x^2=n$ expressed as $x=\pm\sqrt{n}$ rather than $\pm x=\sqrt{n}$. While they are equivalent mathematically, the latter form is rarely used.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3420992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
$\text{tr}(X)=\text{tr}(A^{-1}B)$ for $AX+XA=2B$ Suppose $X$ is a solution to the following equation, where $A$ and $B$ are positive definite matrices:
$$XA+AX=2B$$
The following seems to hold (numerically)
$$\text{tr}(X)=\text{tr}(A^{-1} B)$$
Can anyone see the way to prove this?
| Since $A$ is positive definite, it has an inverse. Hence
$$ XA+AX=2B \\ \Rightarrow
A^{-1}XA +X=2A^{-1}B.$$
Taking $Tr(\cdot)$ on both sides leads to
$$Tr(A^{-1}XA) + Tr(X) = 2Tr(A^{-1}B)\\
\Rightarrow Tr(AA^{-1}X) + Tr(X) = 2Tr(A^{-1}B)\\
\Rightarrow Tr(X) + Tr(X) = 2Tr(A^{-1}B) \\
\Rightarrow Tr(X) = Tr(A^{-1}B) $$
The second line above is due to the cyclic (rotation) property of the $Tr(\cdot)$ operation.
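A numerical sketch of the identity (here I solve $XA+AX=2B$ via the eigendecomposition of $A$, one standard method; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)   # positive definite
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)   # positive definite

# Solve XA + AX = 2B through the eigendecomposition A = Q diag(w) Q^T:
# in the eigenbasis, (Q^T X Q)_ij * (w_i + w_j) = (Q^T (2B) Q)_ij.
w, Q = np.linalg.eigh(A)
C = Q.T @ (2 * B) @ Q
X = Q @ (C / (w[:, None] + w[None, :])) @ Q.T
```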
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3421167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove that $(11 \cdot 31 \cdot 61) | (20^{15} - 1)$ Prove that
$$ \left( 11 \cdot 31 \cdot 61 \right) | \left( 20^{15} - 1 \right) $$
Attempt:
I have to prove that $20^{15}-1$ is divisible by each of $11$, $31$, and $61$. First, I will prove
$$ 20^{15} \equiv 1 \bmod11 $$
Notice that
$$ 20^{10} \equiv 1 \bmod 11$$
$$ 20^{5} \equiv 9^{5} \bmod 11 = 9^{4} 9 \bmod 11, \:\: 9^{2} \equiv 4 \bmod 11 $$
$$ \implies 9^{5} \equiv 144 \bmod 11 \implies 20^{5} \equiv 1 \bmod 11 $$
Then the proof is done.
Now I will prove:
$$ 20^{15} \equiv 1 \bmod 31 $$
Notice $20^{2} \equiv 28 \bmod 31$, so
$$20 \times (20^{2})^{7} \equiv 20 \times (28)^{7} \bmod 31 \equiv 20 \times (-3)^{7} \bmod 31 \equiv -60 \times 16 \bmod 31\equiv 32 \bmod 31 $$
then the proof is done.
Also, in similar way to prove the $20^{15} \equiv 1 \bmod 61$.
Is there a shorter or more efficient proof?
| $$20\equiv3^2\pmod{11}$$
$20^{15}\equiv(3^2)^{15}\equiv(3^{10})^3\equiv1^3$ by Fermat's Little Theorem
$$20=2^2\cdot5,\implies20^{15}\equiv2^{30}5^{15}$$
By Fermat's Little Theorem
$$2^{30}\equiv1\pmod{31},5^3\equiv1\pmod{31}\implies5^{15}=(5^3)^5\equiv1^5$$
Again $5^3\equiv3\pmod{61},2^6\equiv3$
$$20^{15}\equiv(2^6)^5(5^3)^5\equiv3^5\cdot3^5\pmod{61}$$
Finally
$3^5=243\equiv-1\pmod{61}$
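All three congruences (and the conclusion, since $11$, $31$, $61$ are pairwise coprime) can be checked in a couple of lines:

```python
# 20^15 mod p for each prime factor, plus the divisibility claim itself
residues = {p: pow(20, 15, p) for p in (11, 31, 61)}
remainder = (20**15 - 1) % (11 * 31 * 61)
```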
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3421282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Problem from Hartshorne, page 35, Problem 5.1 Q: Which curve is which in the figure?
a)$x^2=x^4+y^4$
b)$xy=x^6+y^6$
c)$x^3=y^2+x^4+y^4$
d)$x^2y+xy^2=x^4+y^4$
a)$x^2=x^4+y^4$
This is invariant under the transformation $x\mapsto -x$ and $y\mapsto -y$. Thus it is Tacnode.
b)$xy=x^6+y^6$
It is invariant under the map $(x,y) \mapsto (y,x)$, thus Node or Triple point.
Because a triple point meets a small circle around the origin six times, and six is the degree of this polynomial, I guess this curve is the triple point, but I cannot prove it in a precise manner.
c)$x^3=y^2+x^4+y^4$
This curve is invariant under the map $y \mapsto -y$ and it is not invariant under the map $x \mapsto -x$, thus Cusp.
d)$x^2y+xy^2=x^4+y^4$
It is invariant under the map $(x,y) \mapsto (y,x)$, thus Node or Triple point.
Because the curve meets a small circle around the origin four times, and four is the degree of this polynomial, I guess this curve is the node, but I cannot prove it in a precise manner.
I cannot answer for curves b) and d).
| Hint: The homogeneous term of lowest degree tells you the tangent directions at the origin. For instance, the lowest order term of $x^2 - x^4 - y^4 = 0$ is $x^2 = x \cdot x$, so the line $x=0$ is a double tangent line at the origin. Only one of your graphs has this property--can you see which one?
You can match the other graphs and equations similarly. Just zoom in on the portion of the graph near the origin and see what the tangent lines are.
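Following the hint, extracting the lowest-degree homogeneous part (the tangent cone at the origin) can be done mechanically (a pure-Python sketch; the dictionary encoding is mine). The cones come out as $x^2$ for a) (a doubled tangent line: tacnode), $xy$ for b) (two distinct tangents: node), $y^2$ for c) (a doubled tangent plus a cubic term: cusp), and $x^2y+xy^2=xy(x+y)$ for d) (three distinct tangents: triple point) — suggesting that b) is the node and d) the triple point.

```python
# Each curve written as p(x, y) = 0, stored as {(i, j): coeff} for coeff*x^i*y^j.
curves = {
    "a": {(2, 0): 1, (4, 0): -1, (0, 4): -1},              # x^2 - x^4 - y^4
    "b": {(1, 1): 1, (6, 0): -1, (0, 6): -1},              # xy - x^6 - y^6
    "c": {(3, 0): 1, (0, 2): -1, (4, 0): -1, (0, 4): -1},  # x^3 - y^2 - x^4 - y^4
    "d": {(2, 1): 1, (1, 2): 1, (4, 0): -1, (0, 4): -1},   # x^2 y + x y^2 - x^4 - y^4
}

def tangent_cone(poly):
    # homogeneous part of lowest total degree
    d = min(i + j for (i, j) in poly)
    return {m: c for m, c in poly.items() if sum(m) == d}

cones = {name: tangent_cone(p) for name, p in curves.items()}
```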
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3421413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
All subsets of $\{1,\{\}\}$ I'm trying to figure out what all subsets of the set $A \colon= \{1,\{\}\}$ are.
I am not sure if the answer is:
*
*$P(A) = \{ \{ \}, \{1\}, \{1;\{\}\}, \{\{\}\} \}$ or
*$P(A) = \{ \{\}, \{1\}, \{1; \{\}\} \}$
| Recall that the power set is the set of all subsets of $A$, including the empty set and $A$ itself; therefore in this case
$$P(A) = \Big\{ \{\}, \{1\}, \{\{\}\}, \{1,\{\}\} \Big\}$$
which indeed, since $|A|=n=2$, has $2^n=4$ elements.
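A quick mechanical check (using `frozenset` to model the inner $\{\}$, since Python sets cannot contain mutable sets):

```python
from itertools import combinations

empty = frozenset()
A = {1, empty}            # models the set {1, {}}

# all subsets of A, taken by size
subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]
```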
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3421575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that the convex hull is the union of all the triangles determined by triples of points from X I'm trying to prove that the convex hull of a set X of three or more points in the plane is the union of all the triangles determined by triples of points from X, however I can't think of the meaningful approach to go with. And now I'm really interested what kind of theorems or rules would explain how is the convex hull a union of all the triangles determined by triples of points from X.
| (1) Every triangle with vertices in $X$ is a subset of the convex hull, so the union of all such triangles is contained in the hull. (2) Conversely, by Carathéodory's theorem in the plane, every point of the convex hull of $X$ is a convex combination of at most three points of $X$, i.e. it lies in a (possibly degenerate) triangle determined by points of $X$.
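A numerical illustration of the nontrivial inclusion (a sketch; the helper names are mine): random convex combinations of six points always land inside at least one triangle spanned by three of them, as Carathéodory's theorem predicts.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.random((6, 2))                       # six points in the plane

def in_triangle(p, a, b, c, tol=1e-9):
    # barycentric coordinates of p with respect to the triangle abc
    T = np.column_stack([a - c, b - c])
    if abs(np.linalg.det(T)) < tol:          # degenerate triple
        return False
    u, v = np.linalg.solve(T, p - c)
    return u >= -tol and v >= -tol and 1 - u - v >= -tol

hits = []
for _ in range(20):
    lam = rng.random(6)
    lam /= lam.sum()
    p = lam @ X                              # a point of the convex hull
    hits.append(any(in_triangle(p, a, b, c) for a, b, c in combinations(X, 3)))
```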
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3421726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Confusing proof for $\sqrt{2}$ being irrational $\sqrt{2}$ is irrational using proof by contradiction.
say $\sqrt{2}$ = $\frac{a}{b}$ where $a$ and $b$ are positive integers.
$b\sqrt{2}$ is an integer. ----[Understood]
Let $b$ denote the smallest such positive integer.----[My understanding of this is
that were are going to assume b is the smallest possible integer such that $\sqrt{2}$ = $\frac{a}{b}$, ... Understood]
Then $b^{*}$ := $b(\sqrt{2} - 1)$----[I'm not sure I understand the point that's being made here,
from creating a variable $b^{*} = a - b$ ]
Next, $b^{*}$ := $b(\sqrt{2} - 1)$ is a positive integer such that $b^{*}\sqrt{2}$ is an integer.----[ I get that ($a - b$) has to
be a positive integer, why does it follow that then $b^{*}\sqrt{2}$ is an integer?]
Lastly, $b^{*}<b$, which is a contradiction.----[I can see that given $b^{*}$ := $b(\sqrt{2} - 1)$, we then have
$b^{*}<b$, I don't get how that creates a contradiction]
Any help is appreciated, thank you.
| Your initial premise is that $b$ is defined to be the smallest positive integer such that $b\sqrt 2$ is a positive integer. This is equivalent to reducing $\frac ab$ to its lowest terms (that is, make $a$ and $b$ coprime).
Obviously $b^\mathrm * = b\sqrt 2 - b = a-b$ will be a positive integer such that $b^*<b$: since $1<\sqrt 2<2$, we have $0<\sqrt 2 -1<1$ and hence $0<b(\sqrt 2-1)<b$.
And we now construct a new positive number $b^\mathrm {**}= b^\mathrm *\sqrt 2$. We can show this new number is an integer because $b^\mathrm*\sqrt 2 = (b\sqrt 2 - b) \sqrt 2 = 2b - b\sqrt 2 = 2b-a$.
But now we have a problem. If $b^\mathrm{**} = b^\mathrm*\sqrt 2$ is a positive integer as we've just shown, then $b$ cannot be the smallest positive integer that possesses the defining property of giving an integer when multiplied by $\sqrt 2$. We've just found an even smaller positive integer with the same property.
We have arrived at a contradiction. Therefore it is not possible to have an integer $b$ and hence $\sqrt 2$ is not rational.
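The algebra behind the descent step can be checked mechanically: the proof replaces the pair $(a,b)$ by $(2b-a,\ a-b)$, and the identity below shows why the new pair would again witness $a^2=2b^2$ with a strictly smaller denominator.

```python
# (2b - a)^2 - 2(a - b)^2 = -(a^2 - 2b^2): if a^2 = 2b^2, the new pair
# (2b - a, a - b) satisfies the same relation.
identity_holds = all(
    (2 * b - a) ** 2 - 2 * (a - b) ** 2 == -(a ** 2 - 2 * b ** 2)
    for a in range(1, 40)
    for b in range(1, 40)
)
```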
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3421858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
} |
Prove $\int_{a}^{b}xf(x)dx=a\int_a^cf(x)dx + b\int_c^bf(x)dx$ Let $f:[a,b]\to\mathbb{R}$ be continuous. Show that there exists $c$ in $[a,b]$ such that $$\int_{a}^{b}xf(x)dx=a\int_a^cf(x)dx + b\int_c^bf(x)dx.$$
I already tried integration by parts, and mean value theorem for integrals... I need a light
| One can proceed as in Prove for continous function $f$, $\int_0^1 xf (x) dx = \int_c^1 f (x) dx $, where the case $[a,b]=[0, 1]$ is handled.
Define $F(x) = \int_a^x f(t) \, dt$ and integrate by parts:
$$
\int_a^b xf(x) \,dx = xF(x) \bigr]_a^b - \int_a^b F(x) \, dx = bF(b) - \int_a^b F(x) \, dx\, .
$$
From the mean value theorem for integrals we have $\int_a^b F(x) \, dx = (b-a)F(c)$ for some $c \in [a, b]$. It follows that
$$
\int_a^b xf(x) \,dx = bF(b) - (b-a)F(c) = a F(c) + b(F(b)-F(c)) \\
= a \int_a^c f(x) \, dx + b\int_c^b f(x) \, dx \, .
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3422060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How can I solve this: $-2^x+3^{x/2}+1=0$ without using numerical methods? This equation is confusing me. It has only one solution, the integer $x=2$, but I can't solve it in a clean way. I used the variable change $y=x/2$ in order to transform it into an equation of degree $2$, but regretfully it is not a polynomial for which the standard solution methods work. Is there any way without numerical methods? I feel the only approach that works is the Lambert function.
| Using the equivalent equation from the hint of zeraoulia rafik, one observes that in
$$
-1+\left(\frac{\sqrt3}2\right)^x+\left(\frac12\right)^x=0
$$
every term on the left side is constant or strictly decreasing, so the whole left side, being their sum, is strictly decreasing. This proves that there can be only one root, which was already found by inspection to be $x=2$.
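A quick numerical confirmation of both facts (strict decrease on a sample grid, and $x=2$ being the root):

```python
import math

def lhs(x):
    # left side after dividing -2^x + 3^(x/2) + 1 = 0 by 2^x
    return -1 + (math.sqrt(3) / 2) ** x + 0.5 ** x

xs = [i * 0.1 for i in range(-20, 61)]
values = [lhs(x) for x in xs]
at_two = lhs(2.0)                 # (3/4) + (1/4) - 1 = 0
```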
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3422149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Unifying the product of odd powers of function with the product of even powers of the same function in one product. Assume that
\begin{equation}
\begin{split}
f_n(x)&= \begin{cases} \big(g(x)\big)\cdot \big(g(x)\big)^3\cdot \big(g(x)\big)^5\cdots \big(g(x)\big)^{n-1},& n \ \text{even},\\
\big(g(x)\big)^2\cdot \big(g(x)\big)^4\cdot \big(g(x)\big)^6\cdots \big(g(x)\big)^{n-1},& n \ \text{odd}, \end{cases}\\
&=\begin{cases}\displaystyle\prod^{n-1}_{k\ \ odd} \big(g(x)\big)^{k} ,& n \ \text{even},\\
\displaystyle\prod^{n-1}_{k\ \ even}\big(g(x)\big)^{k},& n \ \text{odd}, \end{cases}
\end{split}
\end{equation}
where $g(x)$ is any polynomial of order $m$(with real coefficient). I beleive that it's possible to unify the two cases($n$ odd and even) in one product like $\prod^{n-1}_{k=1}(\cdots)$ or $\prod^{n-2}_{k=0}(\cdots)$ $\big($or like $\prod^{n/2}_{k=1}(\cdots)$ or $\prod^{(n/2)-1}_{k=0}(\cdots)$$\big)$ which gives the desired $f_n(x)$ for each $n$. I hope someone helps me to figure it out.
| You can do much better, actually. Let me start with $n=2N+1$ odd; then
$$
f_n(x)=g(x)^{\sum_{k=1}^{N}2k}=g(x)^{N(N+1)},
$$
and for $n=2N$ even
$$
f_n(x)=g(x)^{\sum_{k=0}^{N-1}(2k+1)}=g(x)^{2\sum_{k=0}^{N-1}k+N}=g(x)^{N^2}.
$$
Both cases are captured by the single formula
$$
f_n(x)=g(x)^{\lfloor n/2\rfloor\,\lceil n/2\rceil}=g(x)^{\lfloor n^2/4\rfloor}.
$$
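A brute-force check that the total exponent of $g$ in $f_n$, computed from the defining product, equals $\lfloor n/2\rfloor\lceil n/2\rceil=\lfloor n^2/4\rfloor$:

```python
def exponent(n):
    # exponent of g in f_n: sum of the even k < n (n odd) / odd k < n (n even)
    start = 1 if n % 2 == 0 else 2
    return sum(range(start, n, 2))

formula_ok = all(
    exponent(n) == (n // 2) * ((n + 1) // 2) == n * n // 4
    for n in range(2, 60)
)
```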
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3422307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Proving $\int_{\Bbb R} \frac{\cos x}{(x^2+t^2)^2}dx=\frac{\pi(t+1)}{2t^3e^t}, t>0$
I want to prove that $$\int_{\Bbb R}\frac{\cos x}{(x^2+t^2)^2}dx=\frac{\pi(t+1)}{2t^3e^t},t > 0$$
Integral calculator with steps gives the following answer.
$\displaystyle\int\frac{\cos x}{(x^2+t^2)^2}\,dx=\frac{\bigl(t\cosh t-\sinh t\bigr)\bigl(\operatorname{Si}(x+it)+\operatorname{Si}(x-it)\bigr)-i\bigl(t\sinh t-\cosh t\bigr)\bigl(\operatorname{Ci}(x+it)-\operatorname{Ci}(x-it)\bigr)}{4t^3}+\frac{x\cos x}{2t^2(x^2+t^2)}+C$
Now my question is how to evaluate $Si{(x+it)},Ci(x+it),Si(x-it),Ci(x-it)?$
My another question is how to prove this equation using "Differentiation under Integral Sign making change of variables"
| Another way to do it is to note that$$\frac{1}{x^2+t^2}=\frac{1}{2it}\left(\frac{1}{x-it}-\frac{1}{x+it}\right)\implies\frac{1}{(x^2+t^2)^2}=\frac{-1}{4t^2}\left(\frac{1}{(x-it)^2}+\frac{1}{(x+it)^2}+\frac{i}{t}\left(\frac{1}{x-it}-\frac{1}{x+it}\right)\right).$$Write $\cos x=\Re e^{ix}$; since $e^{ix}$ stays bounded as $\Im x\to+\infty$, we may close the contour with a semicircle in the half-plane $\Im x\ge0$, which contains the (double) pole $it$, contributing$$\frac{-i\pi}{2t^2}\left.\left(\frac{d}{dx}e^{ix}+\frac{i}{t}e^{ix}\right)\right|_{x=it}=\frac{-i\pi}{2t^2}\left(ie^{-t}+\frac{i}{t}e^{-t}\right)=\frac{\pi(t+1)}{2t^3e^t}.$$
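As a numerical sanity check of the closed form (here at $t=1$, where it gives $\pi/e$), a plain trapezoidal rule on a wide symmetric interval suffices since the integrand decays like $x^{-4}$:

```python
import numpy as np

t = 1.0
x = np.linspace(-200.0, 200.0, 400_001)
y = np.cos(x) / (x**2 + t**2) ** 2
h = x[1] - x[0]
numeric = float(h * (y.sum() - 0.5 * (y[0] + y[-1])))     # trapezoidal rule
closed_form = float(np.pi * (t + 1) / (2 * t**3 * np.exp(t)))
```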
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3422707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
An interesting problem of polynomials
In the polynomial
$$
(x-1)(x^2-2)(x^3-3) \ldots (x^{11}-11)
$$
what is the coefficient of $x^{60}$?
I've been trying to solve this question for a long time, but I couldn't. I don't know whether expanding the brackets would help, because that is really a mess. I have run out of ideas. Would someone please help me solve this question?
| Hint: $1+2+3+\cdots+11= \frac{11\times12}{2} =66$, so the coefficient of $x^{60}$ corresponds to dropping total degree $6$, i.e. to writing $6$ as a sum of distinct numbers from $\{1,\ldots,11\}$: $6=6=5+1=4+2=1+2+3$; note that $3+3$ is impossible, since each factor can be used only once.
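The hint can be pushed to a full answer by brute force (a pure-Python expansion; the bookkeeping comment is the subset count from the hint, with the four admissible subsets contributing $-6+5+8-6=1$):

```python
# Expand (x - 1)(x^2 - 2)...(x^11 - 11) as a coefficient list poly[d] for x^d.
poly = [1]
for k in range(1, 12):
    new = [0] * (len(poly) + k)
    for i, c in enumerate(poly):
        new[i + k] += c          # choose x^k from this factor
        new[i] -= k * c          # choose -k from this factor
    poly = new

coeff_60 = poly[60]
# Subset bookkeeping for degree deficit 6 from distinct factors:
# {6}: -6, {1,5}: +5, {2,4}: +8, {1,2,3}: -6, total 1.
```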
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3422830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
$\sum_{n=1}^{\infty} 1/\sqrt[n]{n}$ converge Does the series:
$$
\sum_{n=1}^{\infty} \frac{1}{\sqrt[n]{n}}
$$
converge or diverge?
I'm unsure where to start with this question. I know that the $n$th root of $n$ converges to $1$, but I'm not sure about its reciprocal.
| If a series $\sum_{n=0}^\infty a_n$ converges, then $\lim_n a_n=0$. Here $a_n=\frac{1}{\sqrt[n]{n}}\to1\neq0$, therefore your series diverges.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Integration, the very concept So I was just taught about integration, and one thing I do not understand is this: say we integrate $x\,dx$ from $1$ to $2$. There are infinitely many numbers between them, so how does the sum turn out to be finite?
| Integration has got nothing to do with adding up numbers, let alone all numbers between two bounds which would be clearly impossible. It is rather about computing the area of the region bounded by the curve of the function, the $x$-axis, and the two vertical lines defined by $x=1$ and $x=2$.
This computation can be done by considering the limit of a sum of rectangle areas whose width gets smaller and smaller (this point of view is called Riemann integration), but you should not fall into the trap of thinking like a physicist (with "infinitesimal rectangles of width $dx$") if your goal is to study maths.
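The first paragraph can be made concrete with a tiny Riemann-sum experiment for $\int_1^2 x\,dx=\frac32$ (a sketch; the helper name is mine): more, thinner rectangles give a sum that settles on a finite value.

```python
def left_riemann_sum(f, a, b, n):
    # n rectangles of width (b - a) / n, left endpoints
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

approximations = [left_riemann_sum(lambda x: x, 1.0, 2.0, n)
                  for n in (10, 100, 1000, 100_000)]
errors = [abs(s - 1.5) for s in approximations]
```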
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to check if a subset is open in the Zariski topology I'm having trouble determining whether a given subset of $\operatorname{Spec}A$ is open or not. The context is not trivial.
I have to consider a morphism of finitely generated $k$-algebras $A\rightarrow B$, which are also integral domains. We assume that the map induced on the fraction fields defines a finite field extension.
The task is to show that
$$
U:=\{q\in \operatorname{Spec}A \mid B\otimes_A A_q\text{ is finite as an $A_q$-module}\}
$$
is open. I tried to consider $I=\displaystyle\bigcap_{q\in U^c}q$ and I claimed that $V(I)=U^c$, but I'm stuck.
Can anybody give me a suggestion? No solutions, please, just a hint.
Thank you!!
| Here's a suggestion.
If you can prove that for any $q\in U$, there is some $f\in A\setminus q$ such that $B\otimes_A A_f$ is finite as an $A_f$-module, then you are done. Do you see why this would mean that you are done?
This is proving that every point in $U$ has a standard neighborhood, $q\in D(f)\subseteq U$. To see why $B\otimes_A A_f$ finite over $A_f$ implies $D(f)\subseteq U$, note that $B\otimes_A A_p = B\otimes_A A_f\otimes_{A_f} A_p$, where $p\in D(f)$, so if $B\otimes_A A_f$ is finite over $A_f$, then $B\otimes_A A_p$ is finite over $A_p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Basics of manifolds and derivatives
I'm really struggling with the very basics definitions of this course on Riemannian geometry and I was wondering if someone could point me in the right direction in regard to this question (Please don't just tell me the answer)
I think I mainly know how to do part (a) because identifying the antipodes of the sphere is basically just enforcing that $\forall x \in S^2:x\text{ ~ }-x $
so I can just calculate:
$\gamma'(0)(f) = (f \circ \gamma)'(0) = (f \circ c)'(0) = 6$
For part b, I have absolutely no idea because all I know is that $\dfrac{\partial}{\partial x_1}\vert_{\gamma(t)}$ is supposedly the directional derivative (as an operator?) along the $x_1$ coordinate.
My thought is that I should compose $\phi$ with $c$, because I guess by avoiding $z_1 = 0$ you can just think about a hemisphere without its boundary,
but I really have absolutely no clue how to begin writing this in the desired form. Please, I'd greatly appreciate just any tips or an explanation for what is actually going on.
EDIT: I'm now thinking that:
$D\phi(\gamma(t))\gamma'(t) = (\phi \circ \gamma)'(t) = \gamma'(t) \in T_{\phi(\gamma(t))}\mathbb{R}P^2$ which sort of looks like what I want. So I'm thinking I should precompose by $\phi$ and differentiate and the two partial derivatives will be just $(1,0)$ and $(0,1)$ and the coefficients will be the differentiated entries
| Hints:
Step 1. Compute $c'(t)$ for general $t$ as a function of $t$.
Step 2. Compute $h=\varphi\circ \pi$.
Step 3. Compute both $x_1, x_2$ -components of the vector field (along $\varphi\circ \gamma$) $Dh(c'(t))$ as functions of $t$.
Step 4. These will be $\alpha_1(t)$ and $\alpha_2(t)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show that $I = \langle x^2+2\rangle \subseteq \mathbb{Z}[x]$ is a prime ideal WITHOUT proving that $x^2+2$ is irreducible?
How to show that $I = \langle x^2+2\rangle \subseteq \mathbb{Z}[x]$ is a prime ideal WITHOUT proving that $x^2+2$ is irreducible in $\mathbb{Z}[x]$?
I know how to show it by showing that it is irreducible, but how do I show it is prime without showing that? Do I assume it's not prime and get a contradiction? Like, take $a,b\in \mathbb{Z}[x]$ such that $ab\in \langle2+x^2\rangle. $ I want to show that $a\in\langle 2+x^2\rangle$ or $b\in \langle 2+x^2\rangle$ (i.e. either $a$ or $b$ is of the form $r(x^2+2), r\in\mathbb{Z}[x]$). I'll use contradiction. Assume $a\not\in\langle x^2+2\rangle$ and $b\not\in \langle x^2+2\rangle.$ Then $a$ and $b$ are not of the form $r(x^2+2),r\in\mathbb{Z}[x].$ I know we can't use the division algorithm since $\mathbb{Z}[x]$ is not a euclidean domain.
| Consider the ring homomorphism $f:\mathbb Z[x] \to \mathbb C$ induced by $x \mapsto \sqrt{2}\,i$. Then $\mathbb Z[x]/\ker f$ is certainly a domain and so $\ker f$ is a prime ideal. Prove that $\ker f = \langle x^2+2\rangle$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Endomorphism ring of an irreducible module is irreducible Let $M$ be an $A$-module and define $B:=End_A (M)$. Prove that if $M_A$ is irreducible (that is, the only decomposition of $M_A$ in direct sumands is the trivial decomposition) then $B_{B}$ is irreducible.
I have no idea on how to tackle this. I tried this.
Consider $N\leq B_B$ a direct summand of $B_B$, then there exists a projection $\pi:B_B\rightarrow B_B$ such that $\pi^2=\pi$ and $\pi (B_B)=N$. I want to prove that $\pi=0$ or $\pi=1_{B_B}$.
| You’re on the right track.
From where you left off, $\pi(M)\oplus (1-\pi)(M)=M$ is a direct decomposition of $M$. Since one must be zero, you have $\pi=1$ or $\pi=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does the tensor product respect semidefinite ordering in this way? I'll use $\succeq$ to denote the positive semidefinite ordering: for square matrices $X,Y$, one has $X \succeq Y$ iff $X - Y$ is positive semidefinite. It's a well known fact that if $X, Y \succeq 0$ then $X \otimes Y \succeq 0$. However, if one has two pairs of matrices with $X \succeq Y$ and $X' \succeq Y'$, it isn't necessarily the case that $X \otimes X' \succeq Y \otimes Y'$ (for instance, if $X = X' = I$ and $Y = Y' = -2I$).
My question is: if we add that $Y, Y' \succeq 0$, so our assumptions are
$$ X \succeq Y \succeq 0,\ \ \ X' \succeq Y' \succeq 0 $$
Is it necessarily true that $X \otimes X' \succeq Y \otimes Y'$?
I personally feel that this should be true (and I could swear I've seen this result before but can't find it anywhere), but I'm struggling to prove it. Does anyone have a reference for this fact, and/or know how to prove (or disprove) it?
| \begin{aligned}
&X\otimes X'-Y\otimes Y'\\
&=\left[(X-Y)+Y\right]\otimes\left[(X'-Y')+Y'\right]-Y\otimes Y'\\
&=\left[(X-Y)\otimes(X'-Y')+Y\otimes(X'-Y')+(X-Y)\otimes Y'+Y\otimes Y'\right]-Y\otimes Y'\\
&=(X-Y)\otimes(X'-Y')+Y\otimes(X'-Y')+(X-Y)\otimes Y'.
\end{aligned}
Now the result follows because $X-Y,\,X'-Y',\,Y$ and $Y'$ are all positive semidefinite (and this is where $Y,Y'\succeq0$ is essential).
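As a numeric sanity check of this argument, one can generate random pairs with $X\succeq Y\succeq 0$ (by adding PSD increments to PSD matrices) and confirm that $X\otimes X' - Y\otimes Y'$ has no negative eigenvalues. A minimal sketch in NumPy; the matrix size and random seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # a.T @ a is always symmetric positive semidefinite
    a = rng.standard_normal((n, n))
    return a.T @ a

# Build X >= Y >= 0 and X' >= Y' >= 0 by adding PSD increments to PSD matrices
Y = random_psd(3)
X = Y + random_psd(3)
Yp = random_psd(3)
Xp = Yp + random_psd(3)

# By the decomposition above, this difference is a sum of three PSD Kronecker products
diff = np.kron(X, Xp) - np.kron(Y, Yp)
smallest = np.linalg.eigvalsh(diff).min()  # should be >= 0 up to rounding
```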
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3423927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Is there a difference between "Linear System of Equations" and "System of linear Equations"? We are translating German math videos to English and stumbled over the fact that there are two translations of "Lineare Gleichungssysteme":
*
*Linear System of Equations
*System of linear Equations (e.g. Wikipedia)
From my point of view these describe two different things: first a "linear system", and second a "… system" (which does not have to be linear).
But it seems to be used for the same thing. Why? And is it correct?
PS: Here is the translated video with the (recent) title "Linear System of Equations": https://www.youtube.com/watch?v=SPu8dHq_9VA
The two terms are interchangeable and mean the same thing. The term "linear system of equations" should be parsed as "(linear) (system of equations)", meaning that the system of equations is linear, i.e. that it involves equations which are linear. This is the same as a "system of linear equations", of course. You can't have a "linear system of equations" with nonlinear equations: the linearity of the equations exactly determines that of the system. They are not separate things, so the distinction is not meaningful.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Integral of $\int_0^1 \frac{\sin(a \cdot \ln(x))\sin (b \cdot \ln(x))}{\ln(x)} dx$ I'm trying to calculate the integral $$\int_0^1 \frac{\sin\Big(a \cdot \ln(x)\Big)\cdot \sin \Big(b \cdot \ln(x)\Big)}{\ln(x)} dx, $$
but am stuck. I tried using the product-to-sum formulas and got here:
$$\int_0^1 \frac{\cos\Big((a+b) \cdot \ln(x)\Big) - \cos \Big((a-b) \cdot \ln(x)\Big)}{-2\ln(x)} dx, $$
but alas, that also got me nowhere. Does anyone have any ideas?
| For $c \in \mathbb{R}$ we have
\begin{align}
\int \limits_0^\infty \frac{1 - \cos(c t)}{t} \, \mathrm{e}^{-t} \, \mathrm{d} t &= \int \limits_0^\infty \int \limits_0^c \sin(u t) \, \mathrm{d} u \, \mathrm{e}^{-t} \, \mathrm{d} t = \int \limits_0^c \int \limits_0^\infty \sin(u t) \mathrm{e}^{-t} \, \mathrm{d} t \, \mathrm{d} u = \int \limits_0^c \frac{u}{1+u^2} \, \mathrm{d} u \\
&= \frac{1}{2} \ln(1 + c^2) \, ,
\end{align}
so
\begin{align}
\int \limits_0^1 \frac{\sin[a \ln(x)] \sin[b \ln(x)]}{\ln(x)} \, \mathrm{d} x &= \int \limits_0^1 \frac{\cos[(a+b)\ln(x)] - \cos[(a-b) \ln(x)]}{- 2 \ln(x)} \, \mathrm{d} x \\
&\!\!\!\stackrel{x = \mathrm{e}^{-t}}{=} \int \limits_0^\infty \frac{\left(1 - \cos[(a - b) t]\right) - \left(1 - \cos[(a+b) t]\right)}{2t} \, \mathrm{e}^{-t} \, \mathrm{d} t \\
&= \frac{1}{4} \left(\ln[1 + (a-b)^2] - \ln[1 + (a+b)^2]\right) \\
&= \frac{1}{4} \ln \left(\frac{1+(a-b)^2}{1+(a+b)^2}\right) \, .
\end{align}
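A quick numerical check of the closed form, using the exponentially damped integrand obtained after the substitution $x=e^{-t}$ and a plain composite Simpson rule from the standard library; the choice $a=2$, $b=1$ and the truncation at $t=40$ are arbitrary:

```python
import math

def integrand(t, a, b):
    # (cos((a+b)t) - cos((a-b)t)) e^{-t} / (2t), i.e. the integrand after x = e^{-t}
    if t == 0.0:
        return 0.0  # the limit as t -> 0
    return (math.cos((a + b) * t) - math.cos((a - b) * t)) * math.exp(-t) / (2 * t)

def simpson(f, lo, hi, n):
    # composite Simpson's rule with n (even) subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def closed_form(a, b):
    return 0.25 * math.log((1 + (a - b) ** 2) / (1 + (a + b) ** 2))

a, b = 2.0, 1.0
numeric = simpson(lambda t: integrand(t, a, b), 0.0, 40.0, 20000)
```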
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Functional notation usage for arbitrary relation? Question about questionable notation used in a homework assignment.
The assignment defines a relation f := { (0,1), (1,3), (2,1) }. It then asks to "show" this is a function. IMO strictly (i.e. Bourbaki) speaking it is not but so be it. The intention is clear.
It further asks to calculate $f[2]$, $f(2)$, $f^{-1}[1]$, $f^{-1}(1)$, ...
It is understood that the author uses Kuratowski's definition of ordered pairs, the natural numbers as $n = \{0, ..., n-1\}$, and that $f^{-1}$ is the inverse relation
$f^{-1} = \{ (1,0), (3,1), (1,2) \}$, which is obviously not a function.
So my question is: what could $f^{-1}(\cdot)$ possibly mean for the non-functional relation $f^{-1}$? It is sometimes used as a sloppy alternative notation for $f^{-1}[\cdot]$, but it is obvious from the question that the author makes a strict distinction between $[\cdot]$ and $(\cdot)$. And whatever $(\cdot)$ is supposed to mean, its definition should generalize Euler's notation $(\cdot)$ for functions.
The inverse of a function $f$ is $f^{-1}$, when the inverse of $f$ exists.
The inverse of a relation $R$ is $R^{-1} = \{ (x,y) : y\,R\,x \}$.
The set extensions of a function $f$ are $f[A] = \{ f(x) : x \in A \}$
and $f^{-1}[A] = \{ x : f(x) \in A \}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
The number of real roots of the equation $5+|2^x-1|=2^x(2^x-2)$ I'm trying to find the number of real roots of the equation $5+|2^x-1|=2^x(2^x-2)$.
Let $2^x=a$
$$|a-1|=a^2-2a-5$$
Then there are two cases
$$a-1=a^2-2a-5$$
And $$a-1=-a^2+2a+5$$
Solving both equations
$$a=1,-4,-2,3$$
Now -4 and -2 can be neglected so there are two values
1 and 3.
Then $$2^x=1$$
$$x=0$$
And $$2^x=3$$
$$x=\log_2 3$$
But the answer doesn’t seem to consider the $\log_2 3$ as a viable root, and the answer is 1. Why is that the case?
|
Then there are two cases
$$a-1=a^2-2a-5$$
$$a-1=-a^2+2a+5$$
Solving both equations
$$a=1,-4,-2,3$$
You made a small mistake in solving these.
The first equation is $a^2 - 3a - 4 = 0$, so $(a - 4)(a + 1) = 0$, so $a = 4$ or $a = -1$.
Looks like you solved the second equation correctly. You should have
$$
a = -1, 4, -2, 3
$$
But the answer doesn’t seem to consider the $\log_2 3$ as a viable root
Once we have found the values of $a$ that work, we're not done yet!
We have to plug in the values to the original equation to see if they work. The problem is that $|a - 1| = a - 1$ only if $a \ge 1$, and $|a - 1| = -(a - 1)$ only if $a \le 1$, so we might have introduced some "extraneous" answers that are not valid.
As you observed, since $a = 2^x$, $a$ has to be positive; so we are left with just $a = 3$ and $a = 4$ as possibilities.
Next, we have to plug in each to the original equation to see if they work.
$a = 4$ works, but $a = 3$ doesn't. So the only answer is
$$
a = 4 \implies 2^x = 4 \implies \boxed{x = 2}.
$$
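A two-line check confirms the conclusion: $x=2$ satisfies the original equation, while the candidates $x=0$ and $x=\log_2 3$ do not:

```python
import math

def lhs(x):
    return 5 + abs(2 ** x - 1)

def rhs(x):
    return 2 ** x * (2 ** x - 2)

# x = 2 (a = 4) works; x = 0 (a = 1) and x = log2(3) (a = 3) are extraneous
ok = lhs(2) == rhs(2)
bad0 = lhs(0) - rhs(0)                           # 5 - (-1) = 6, not a root
bad_log = lhs(math.log2(3)) - rhs(math.log2(3))  # about 7 - 3 = 4, not a root
```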
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
How these 3 vectors form a right triangle? $ A = 2\hat{i} -2\hat{j} + 3\hat{k}$
$ B = 2\hat{i} -\hat{j} + 3\hat{k}$
$ C = \hat{i} -\hat{j} - \hat{k}$
My textbook claims that these 3 vectors form a right triangle. Firstly, I think these 3 vectors don't form a triangle at all.
The actual condition for a triangle is $ A\pm{B}\pm{C} = 0$, and this is false for these 3 vectors.
Am I wrong?
The given vectors are position vectors. To get the vectors representing the sides of the triangle, form $A-B, B-C, C-A$. Since $(A-B)\cdot(B-C)=0$, you will find that $\angle B = 90^{\circ}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Evaluating $ \lim_{x\to +\infty} x\left(\frac{\pi}{4} - \arctan\left(\frac{x}{x+1}\right)\right) $
$$ \lim_{x\to +\infty} x\left(\frac{\pi}{4} - \arctan\left(\frac{x}{x+1}\right)\right) $$
I tried to do this with some kind of substitution but failed miserably. Any hints or help?
| Substitute $y:=\frac{1}{x+1}$ to rewrite your limit as$$\lim_{y\to0}(1-y)\frac{(\arctan 1-\arctan(1-y))}{y}=\lim_{y\to0}(1-y)\cdot\arctan^\prime1=\frac12.$$Or if we take @DinnoKoluh's approach,$$\arctan1-\arctan\frac{x}{x+1}=\arctan\frac{1}{2x+1}\approx\frac{1}{2x}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Finding the complete Taylor expansion of $\frac{1}{1+z^2}$ around $z=0$. For an exercise, I need to find the complete Taylor expansion for $(1+z^2)^{-1}$ around $z=0$. I have tried decomposing first $(1+z^2)^{-1}$ into partial fractions. Since $1+z^2=0$ gives $z=\pm i$, the partial fractions are:
$$\frac{1}{1+z^2} = \frac{1}{(z+i)(z-i)} = \frac{i}{2(z+i)} - \frac{i}{2(z-i)} = \frac{i}{2} \Big{(}\frac{1}{z+i} - \frac{1}{z-i}\Big{)}$$
Therefore, my idea was to find the much easier Taylor expansion of both fractions, and add them together. I have worked out each Taylor expansion, since the $n$-th derivative of each of the fractions can be easily found:
$$\frac{1}{z+i}=\sum_{n=0}^\infty -i^{n+1}z^n, \hspace{25px} \frac{1}{z-i}=\sum_{n=0}^\infty \frac{-z^n}{i^{n+1}}$$
Typing each sum in WolframAlpha returns the original fraction, so I think they are okay. Now, I replace both sums into the partial fractions:
$$\frac{1}{1+z^2} = \frac{i}{2} \Big{(}\sum_{n=0}^\infty -i^{n+1}z^n - \sum_{n=0}^\infty \frac{-z^n}{i^{n+1}}\Big{)}$$
But I always get a sum that does not return the correct Taylor expansion for $1/(1+z^2)$. Where am I getting this wrong?
| Your approach is all correct. Note that your sum can be rewritten as $$\frac{1}{2}\sum_{n=0}^\infty i^n ((-1)^n+1)z^n.$$
For an odd index $n$ the term equals zero, and for an even index $n=2k$ it equals $2(-1)^kz^{2k}$, $k \ge 0$.
Therefore
$$\frac{1}{2}\sum_{n=0}^\infty i^n ((-1)^n+1)z^n=\frac{1}{2} ( 2-2z^2+2z^4-2z^6+\ldots)=1-z^2+z^4-z^6+\ldots$$
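One can confirm numerically that the partial-fraction series reproduces $1/(1+z^2)$ inside the unit disk; here the test point $z$ and the number of terms are arbitrary choices:

```python
def partial_sum(z, terms):
    # (i/2) * sum_n ( -i^{n+1} z^n + z^n / i^{n+1} ), truncated after `terms` terms
    s = 0j
    for n in range(terms):
        s += -(1j ** (n + 1)) * z ** n + z ** n / (1j ** (n + 1))
    return 0.5j * s

z = 0.3 + 0.2j          # any point with |z| < 1
approx = partial_sum(z, 60)
exact = 1 / (1 + z * z)
```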
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3424898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Disproving $ 0^0 $ by binomial theorem For fun, is something like this true?
Let
\begin{equation} \nonumber
\begin{split}
k &= 0^0 = (a-a)^{a-a} = \frac{(a-a)^{a}}{(a-a)^{a}} \\
&= \frac{\binom{a}{0}a^a(-a)^0 + \binom{a}{1}a^{a-1}(-a)^1 + ... +
\binom{a}{a-1}a^{1}(-a)^{a-1} + \binom{a}{a}a^{0}(-a)^{a}}{\binom{a}{0}a^a(-a)^0 + \binom{a}{1}a^{a-1}(-a)^1 + ... +
\binom{a}{a-1}a^{1}(-a)^{a-1} + \binom{a}{a}a^{0}(-a)^{a}}. \\
\end{split}
\end{equation}
Furthermore, let $a$ be any odd number, then
\begin{equation} \nonumber
\begin{split}
k &= \frac{\binom{a}{0}a^a - \binom{a}{1}a^{a} + ... + \binom{a}{a-1}(a)^{a} - \binom{a}{a}(a)^{a}} {\binom{a}{0}a^a - \binom{a}{1}a^{a} + ... + \binom{a}{a-1}(a)^{a} - \binom{a}{a}(a)^{a}} \\
\end{split}
\end{equation}
since $a$ is odd there is an even number of terms in the polynomials, thus
\begin{equation} \nonumber
\begin{split}
k = \frac{0}{0}. \\
\end{split}
\end{equation}
| You are not disproving anything. $0^0$ is a mathematical expression with no agreed-upon value. The most common possibilities are $k=1$ or leaving the expression undefined, with justifications existing for each, depending on context.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3425066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$a_{n}$ geometric sequence, prove that $a_{1}+a_{2}+a_{3}+...+a_{n} \mid a_{1}^{k}+a_{2}^{k}+a_{3}^{k}+...+a_{n}^{k}$ with $(n,k)=1$ Problem :
Let $(a_{n})$ be a geometric sequence of integers, $a_{n}\in \mathbb Z$ for all $n\in \mathbb N$. Prove that:
$$a_{1}+a_{2}+a_{3}+...+a_{n} \mid a_{1}^{k}+a_{2}^{k}+a_{3}^{k}+...+a_{n}^{k}$$ with $(n,k)=1$.
My attempt :
$$a_{1}+a_{2}+a_{3}+...+a_{n}=a\left(1+r+r^{2}+...+r^{n-1}\right)=a\frac{r^{n}-1}{r-1}$$
Then :
$$a_{1}^{k}+a_{2}^{k}+a_{3}^{k}+...+a_{n}^{k}=a^{k}\left(1+r^{k}+r^{2k}+...+r^{k(n-1)}\right)=a^{k}\frac{r^{nk}-1}{r^{k}-1}$$
Now I'm going to prove :
$$a\frac{r^{n}-1}{r-1} \mid a^{k}\frac{r^{nk}-1}{r^{k}-1}$$
But I don't know how I complete from here ?
Show that the greatest common divisor of $r^k-1$ and $r^n-1$ is $r-1$. (In general, for an integer $r>1$ one has $\gcd(r^m-1,\,r^n-1)=r^{\gcd(m,n)}-1$; apply this with $\gcd(n,k)=1$.)
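The hint can be spot-checked numerically. The identity $\gcd(r^m-1, r^n-1)=r^{\gcd(m,n)}-1$ (for integers $r>1$), specialized to coprime exponents, is what makes the divisibility work. A small script with arbitrary sample values:

```python
import math

# spot-check the hint: gcd(r^k - 1, r^n - 1) = r^gcd(n,k) - 1 = r - 1 when (n,k) = 1
gcd_ok = all(
    math.gcd(r ** k - 1, r ** n - 1) == r ** math.gcd(n, k) - 1
    for r in (2, 3, 5)
    for n, k in ((4, 3), (5, 2), (7, 3))  # coprime pairs (n, k)
)

# and the divisibility claim itself, for one sample geometric sequence
a, r, n, k = 2, 3, 4, 3
s1 = sum(a * r ** i for i in range(n))           # a_1 + ... + a_n
sk = sum((a * r ** i) ** k for i in range(n))    # a_1^k + ... + a_n^k
divisible = sk % s1 == 0
```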
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3425526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Calculate $\lim_{x\to\infty}\biggr(x\sqrt{\frac{x}{x-1}}-x\biggr)$ $$\lim_{x\to\infty}\biggr(x\sqrt{\frac{x}{x-1}}-x\biggr)$$
I know this limit must be equal to $\frac{1}{2}$ but I can't figure why. This is just one of the thing I tried to solve this limit:
$$\lim_{x\to\infty}\biggr(x\sqrt{\frac{x}{x-1}}-x\biggr)$$
$$\lim_{x\to\infty}x\biggr(\sqrt{\frac{x}{x-1}}-1\biggr)$$
Now I try to evaluate the limit. I know that $\lim_{x\to\infty}\sqrt\frac{x}{x-1}$ is equal to $1$, so the above limit has the form $\infty \cdot 0$, which is an indeterminate form. I do not know what to do next and would greatly appreciate some help.
| By application of L' Hopital's rule:
$$\lim_{x\to +\infty}\frac{\sqrt{\frac{x}{x-1}}-1}{\frac{1}{x}}=\left(\frac{0}{0}\right)=
\lim_{x\to +\infty}\frac{\left(\sqrt{\frac{x}{x-1}}-1\right)'}{\left(\frac{1}{x}\right)'}=
\ldots= \frac{1}{2}\lim_{x\to +\infty}\frac{\frac{x^2}{(x-1)^2}}{\sqrt{\frac{x}{x-1}}}=\frac{1}{2}\cdot 1=\frac12.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3425638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 8,
"answer_id": 4
} |
Indexing twice for conditional summation Clearly it is permitted to put restrictions on the indices in a summation, and
One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. Here are some common examples:
$$\sum_{0\le k< 100} f(k)$$
is the sum of $f(k)$ over all (integers) $k$ in the specified range,
$$\sum_{x \mathop \in S} f(x)$$
is the sum of $f(x)$ over all elements $x$ in the set $S$
However, if I don't have a symbol for the elements of my collection $E$, but rather address them as $E_i$, is it acceptable notation to index into the collection both in the condition and in the summand?
$$1\over\displaystyle\sum_{\substack{i=1 \\ E_i\neq 0}}^n\frac1{E_i}$$
If this is acceptable, it should give the same sum as this verbose use of Iverson brackets:
$$1\over\displaystyle\sum_{i=1}^n\frac{[E_i\neq 0]}{E_i+[E_i=0]}$$
| You don't actually need the condition in the summation, nor the verbose use of Iverson brackets:
$$1\over\displaystyle\sum_{i=1}^n\frac1{E_i}[E_i\neq 0]$$
This succinctly expresses the idea of summing the reciprocals of the non-zero elements. Now, you might counter that this involves the undefined $1/0$ when $E_i=0$, but, in the context of the sum $\sum_k a_k\bigl[P(k)\bigr]$, Concrete Mathematics says that
Sometimes $a_k$ isn't defined for all integers $k$. We get around this difficulty by assuming that $\bigl[P(k)\bigr]$ is “very strongly zero” when $P(k)$ is false; it’s so much zero, it makes $a_k\bigl[P(k)\bigr]$ equal to zero even when $a_k$ is undefined.
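The same convention is what lets one write the guarded sum naturally in code: a generator filter plays the role of the Iverson bracket, making the term "very strongly zero" by never evaluating $1/E_i$ at all when $E_i=0$. A small illustrative sketch (the sample values are arbitrary):

```python
def sum_of_reciprocals(values):
    # sum of 1/E_i over exactly those i with E_i != 0; the `if` filter acts as
    # the Iverson bracket, so the undefined 1/0 is never evaluated
    return sum(1 / e for e in values if e != 0)

E = [2, 0, 4, 0, 4]
s = sum_of_reciprocals(E)  # 1/2 + 1/4 + 1/4
```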
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3425729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Continuous function and IVT. Let $f:[0,1]\to\mathbb R$ be a continuous function which satisfies $f(0)=f(1)$. Prove that there exists a number $a \in [0,1/2]$ such that $f(a)=f(a+1/2)$.
I know that I'll use the intermediate value theorem, but I'm a bit confused. Can you help me?
| Consider the function $g: [0, \frac{1}{2}] \to \mathbb{R}$ given by
$$
g(x) = f(x) - f\left(x + \frac{1}{2}\right).
$$
Then $g$ is continuous, and $g(0) = -g\left(\frac{1}{2}\right)$. If $g(0)=0$ we are done; otherwise $g(0)$ and $g\left(\frac{1}{2}\right)$ have opposite signs, so it follows from the intermediate value theorem that $g$ has a zero $a \in \left(0,\frac{1}{2}\right)$, and at that point $f(a)=f\left(a+\frac{1}{2}\right)$.
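Since $g$ changes sign on $[0,\tfrac12]$ (or vanishes at an endpoint), a bisection on $g$ finds the promised point $a$ numerically. A sketch with an arbitrarily chosen $f$ satisfying $f(0)=f(1)$:

```python
def f(x):
    # any continuous function on [0, 1] with f(0) == f(1); this pick is arbitrary
    return x * x * (1 - x)

def g(x):
    return f(x) - f(x + 0.5)

# g(0) = -g(1/2), so unless g(0) == 0 there is a sign change on [0, 1/2]
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
a = (lo + hi) / 2  # a point with f(a) == f(a + 1/2), up to rounding
```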
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3425895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solution of simple algebraic equations
Suppose $a=\alpha+2\beta$ and $b=3\alpha+5\beta$. Then by solving this, we get $\alpha=2b-5a$ and $\beta=3a-b$.
I tried the substitution method to solve this, but I couldn't get the above solutions for $\alpha$ and $\beta$.
I am afraid this may be too basic and might not be appropriate for this site.
| Since $a=\alpha+2\beta$ and $b=3\alpha+5\beta$,$$b-3a=(3\alpha+5\beta)-3(\alpha+2\beta)=-\beta.$$So, $\beta=3a-b$ indeed. And now it is easy to get $\alpha$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3426053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Irreducible polynomial in integer polynomial ring with complex root is a reciprocal polynomial I am working on this problem:
Let $p(x) \in \mathbb{Z}[x]$ be an irreducible polynomial. Show that $p(x)$ is a reciprocal polynomial (i.e., that its coefficients equidistant from either end are equal) if one of its roots is a complex number $z$ with $|z|= 1$.
My thinking so far is:
Let $z = a+bi$. Then $z = a-bi$ is also a root of $p(x)$, so both $(x-(a+bi))$ and $(x-(a-bi))$ are factors of $p(x)$ $\Rightarrow$ $(x-(a+bi))(x-(a-bi)) = x^2 - 2ax + a^2 + b^2 = x^2-2ax + 1$ is a factor of $p(x)$ (using that $|z| = 1$, and thus, $a^2 + b^2 = 1$).
I suppose that, at this stage, I'm not sure how to use the fact that $p(x)$ is irreducible over $\mathbb{Z}[x]$ to obtain the result. It seems to me that $p(x)$ cannot have any other nontrivial polynomials over $\mathbb{Z}$ as factors, since $p(x) \in \mathbb{Z}[x]$. But then, how can I know about other factors of $p(x)$ ?
Thanks!
The following proof is based on one due to a classmate of mine:
Let $K$ be the splitting field of $p(x)$ over $\Bbb Q$ and let $\sigma\in \operatorname{Aut}(K/\Bbb Q)$ be (the restriction of) complex conjugation. Since $|z|=1$, we have $\sigma(z)=\bar z=\dfrac1{z}$, and $\sigma$ sends any root of $p$ to some root of $p$, so $\dfrac1z$ is also a root of $p$. Now consider the reversed polynomial $p^*(x):=x^{\deg p}\,p(1/x)$, whose roots are exactly the reciprocals of the roots of $p$ (note that $p(0)\neq 0$, since $p$ is irreducible with a nonzero root). Then $z$ is a common root of $p$ and $p^*$, and since $p$ is irreducible it divides $p^*$; both have the same degree, so they agree up to a constant factor. Comparing coefficients then shows that $p(x)$ is reciprocal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3426276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is $\lim\limits_{n\to \infty} e^{-n}\sum_{k=0}^n \frac{n^k}{k!}$ not equal to $1$? So I saw the limit $\lim\limits_{n\to \infty} e^{-n}\sum_{k=0}^n \frac{n^k}{k!}$ here the other day:
Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$
and when I saw it, I right away thought the answer is $1$ because I thought $\lim\limits_{n\to \infty} \sum_{k=0}^n \frac{n^k}{k!} = \lim\limits_{n\to \infty} e^n$ given that $e^x = \lim\limits_{n\to \infty} \sum_{k=0}^n \frac{x^k}{k!}$ and so the result would be $\lim\limits_{n\to \infty} e^{-n}e^n = 1$ but the result is $\frac{1}{2}$, found using methods that I'm not familiar with.
Could someone please explain why my method is wrong?
Thank you so much in advance!
| What you're doing is taking the identity
$$
\lim_{n\to\infty}\sum_{k=0}^n\frac{x^k}{k!}=e^x\tag1
$$
and plugging in $x=n$ to obtain the (false) statement
$$
\lim_{n\to\infty}\sum_{k=0}^n\frac{n^k}{k!}=e^n.\tag2
$$
Why is (2) false? Setting $x=n$ in (1) is illegal because the $n$ in (1) is busy being used as the label for the $n$th term in your sequence; plugging $x=n$ confuses $x$ with $n$ and changes the nature of the expression you're studying. To see why (2) makes no sense, notice that the LHS of (2) should no longer depend on $n$ when you've passed to the limit, so the RHS should not depend on $n$ either. For more examples of what can go wrong, try setting $x=n$ in the following identities, which are valid for all $x$:
$$
\lim_{n\to\infty}\frac xn=0\tag3
$$
and
$$
\lim_{n\to\infty}\left(1+\frac xn\right)^n=e^{x}\tag4
$$
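A numeric experiment makes the point concrete: the quantity $e^{-n}\sum_{k=0}^n n^k/k!$ does not tend to $1$ but creeps down toward $\tfrac12$. Each term is computed in log space to avoid overflow and underflow:

```python
import math

def s(n):
    # e^{-n} * sum_{k=0}^{n} n^k / k!, each term as exp(k ln n - ln k! - n)
    return sum(
        math.exp(k * math.log(n) - math.lgamma(k + 1) - n)
        for k in range(n + 1)
    )

values = [s(10), s(100), s(2000)]  # decreasing toward 1/2, nowhere near 1
```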
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3426379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Simplifying ArcSine Function I was wondering if there is a nice formula (or approximation) for $\arcsin(x)$, which is defined on $[-1,1]$?
| As noted there is no way to explicitly express $\arcsin(x)$ in terms of e.g. the $\sin,\cos,\tan,\exp,\log$ functions. After all, if there were, why would we need to invent the new notation $\arcsin$? There are however nice approximations to the function. The most obvious one which comes to mind is the Taylor approximation. We have the Taylor series expansion
$$\arcsin(x)=\sum_{k=0}^\infty\frac{(2k-1)!!}{(2k)!!}\frac{x^{2k+1}}{2k+1}.$$
If we truncate the sum by discarding the higher order terms, we end up with decent polynomial approximations to $\arcsin(x)$ on the interval. Many other approximations exist, for example the Padé approximant, which gives for example
$$\arcsin(x)\approx\frac{x-(1709/2196)x^3+(69049/922320)x^5}{1-(2075/2196)x^2+(1075/6832)x^4}.$$
These are not unique to $\arcsin$, of course, and can be applied to any sufficiently nice function.
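A truncated Taylor sum is easy to implement by updating the double-factorial ratio incrementally; comparing against `math.asin` shows the quality of the approximation (the evaluation point and term count here are arbitrary):

```python
import math

def arcsin_taylor(x, terms):
    # sum_{k=0}^{terms-1} ((2k-1)!!/(2k)!!) * x^(2k+1) / (2k+1)
    total = 0.0
    coeff = 1.0  # (2k-1)!!/(2k)!! for k = 0 (empty products are 1)
    for k in range(terms):
        total += coeff * x ** (2 * k + 1) / (2 * k + 1)
        coeff *= (2 * k + 1) / (2 * k + 2)  # ratio update for the next k
    return total

err = abs(arcsin_taylor(0.5, 30) - math.asin(0.5))
```

Convergence is fast in the interior of $[-1,1]$ but slows near the endpoints, where many more terms are needed.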
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3426557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding the Inverse A function $g$ is defined by $g(t) = 0.3(1 - \cos(2\pi t))$.
For $n \in \mathbb{N}$, $I_n = [\frac{n}{2}, \frac{n+1}{2}]$. I have found that on this interval $g$ is one-to-one, meaning we can take the inverse of $g$ when restricted to this domain.
For $n \in \mathbb{N}$, $h_n$ is the inverse of $g$ when restricted to the interval $I_n$. How can I find the formula for $h_n$?
I know that the formula depends on whether $n$ is even or odd, but I don't know how to find it.
The problem here is that when you invert the function $g(t)$, the inverse will automatically land in the interval $\left [ 0, \frac{1}{2}\right ]$ for even $n$, or $\left [ -\frac{1}{2}, 0\right ]$ for odd $n$. To amend this, simply define a translation function $f_{even}: \left [ \frac{n}{2}, \frac{n+1}{2}\right ] \to \left[0, \frac{1}{2}\right ]$ or $f_{odd}: \left [ \frac{n}{2}, \frac{n+1}{2}\right ]\to \left [ -\frac{1}{2}, 0\right ] $ where we have
$$
f_{even}(x) \;\; =\;\; x - \frac{n}{2} \hspace{3pc} f_{odd}(x) \;\; =\;\; x - \frac{n+1}{2}.
$$
To get the correct bounds, re-interpret your function $g(t)$ to really be $(g\circ f_*)(t)$. Then the inverse of this will be $\left (f_*^{-1}\circ g^{-1}\right )(t)$ where $* \in\{even, \; odd\}$.
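Making the composition explicit gives candidate closed forms for $h_n$ (these formulas are my reading of the answer, not part of the original): on $[0,\tfrac12]$ the inverse of $g$ is $y \mapsto \frac{1}{2\pi}\arccos(1-y/0.3)$, and pre-composing with $f_{even}^{-1}$ or $f_{odd}^{-1}$ (using that $g$ is even and $1$-periodic) shifts it into $I_n$. A round-trip check:

```python
import math

def g(t):
    return 0.3 * (1 - math.cos(2 * math.pi * t))

def h(n, y):
    # candidate inverse of g on I_n = [n/2, (n+1)/2], for y in [0, 0.6];
    # `base` is the inverse of g on [0, 1/2], then shifted via f_even / f_odd
    base = math.acos(1 - y / 0.3) / (2 * math.pi)
    if n % 2 == 0:
        return n / 2 + base
    return (n + 1) / 2 - base

# round trips: g(h(n, y)) recovers y, and h(n, y) lands inside I_n
checks = [
    abs(g(h(n, y)) - y) < 1e-9 and n / 2 - 1e-9 <= h(n, y) <= (n + 1) / 2 + 1e-9
    for n in range(5)
    for y in (0.0, 0.1, 0.3, 0.6)
]
```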
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3426709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Minimum value of Complex Trigonometric Expression
Minimum value of $\displaystyle f(\theta) = \frac{a}{\cos \theta}+\frac{b}{\sin \theta}+\sqrt{\frac{a^2}{\cos^2 \theta}+\frac{b^2}{\sin^2 \theta}}.$
Where $\displaystyle a,b>0, \theta \in \bigg(0,\frac{\pi}{2}\bigg).$
What I tried:
$$f(\theta)=\frac{2(a\sin \theta +b\cos \theta)+2\sqrt{\big(a\sin \theta+b\cos \theta\big)^2-ab\sin 2 \theta}}{\sin 2 \theta }$$
How do I minimize this? Help me, please.
| \begin{align*}
f'(\theta) &= a \tan \theta \sec \theta + b \tan \theta \sec \theta \hfill \\
&\quad {}+ \frac{2 a^2 \tan \theta \sec^2 \theta + 2 b^2 \tan \theta \sec ^2\theta }{2 \sqrt{a^2 \sec ^2 \theta + b^2 \sec^2 \theta}} \\
&= \tan \theta \sec \theta \left(\cos \theta \sqrt{\left(a^2+b^2\right) \sec^2 \theta }+a+b\right) \\
&= \tan \theta \sec \theta \left( a + b + \sqrt{a^2 + b^2} \right) \text{.}
\end{align*}
$\tan \theta \neq 0$ for $\theta \in (0, \pi/2)$. $\sec \theta \neq 0$ for any $\theta \in \Bbb{R}$. Since $a>0, b>0$, the parenthesized expression is always positive. Therefore, this function has no critical points.
So we check the endpoints to see which one would be the location of the minimum if it were a permissible point. $f(0) = a + b + \sqrt{a^2 + b^2}$ and $\lim_{\theta \rightarrow \pi/2^-} f(\theta) = \infty$ (requiring the use of $a>0, b>0$).
So for $\theta \in (0,\pi/2)$, $f$ has no minimum value, but it takes values arbitrarily close to $a+b+\sqrt{a^2 + b^2}$, a lower bound for values of $f$, as $\theta$ approaches $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3426864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that if $f : [1, 2] →\Bbb R$ is a continuous then there exists $\gamma\in (1, 2)$ with $f(\gamma) = \frac{1}{1 − \gamma}+ \frac{1}{2 − \gamma}.$ Show that if $f : [1, 2] →\Bbb R$ is a continuous function then there exists $\gamma\in (1, 2)$ such that $$f(\gamma) = \frac{1}{1 − \gamma}+ \frac{1}{2 − \gamma}.$$
I have been trying to understand what the question wants me to do but I can't figure it out. Please help!
| Consider the function $$g:x\mapsto f(x)-\frac{1}{1 − x}- \frac{1}{2 − x},1<x<2.$$ Since $\lim_{x\to 1+}f(x)=f(1)$ and $\lim_{x\to 2-}f(x)=f(2)$ we have, $$\lim_{x\to 1+}g(x)=+\infty,$$$$\lim_{x\to 2-}g(x)=-\infty.$$ So by intermediate value theorem we have $\gamma\in (1,2)$ such that, $g(\gamma)=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3427059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Automorphism with no square root Related to Every normal operator on a separable Hilbert space has a square root that commutes with it
Does there exist an automorphism $f$ of a separable complex Hilbert space such that $f$ has no square root?
If so, a concrete example would be useful.
| Yes, such operators exist. This was proven by Halmos, Lumer, and Schäffer, Proc. AMS, 4, 1 (1953), 142-149.
Concretely, given a domain $D\subset\mathbb C$ define
$$
D^{1/2}=\{\lambda\in\mathbb C:\ \lambda^2\in D\}.
$$
They proved that the multiplication operator $M_z\in B(L^2(D))$ given by $(M_zf)(z)=zf(z)$ has a square root if and only if $D^{1/2}$ is disconnected; and $D^{1/2}$ is connected precisely when $D$ surrounds (but does not contain, obviously) the origin.
So, if for instance you take an annulus around the origin, say $D=\{\lambda:\ 1<|\lambda|<2\}$, then $M_z\in B(L^2(D))$ is invertible and has no square root.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3427367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determining Weight function in Sturm Liouville problem By choosing the proper weight function $\sigma (x) $ solve the Sturm-Liouville problem and determine its eigenvalues and eigenfunctions.
$$ \frac{d}{dx}\left[x\frac{dy(x)}{dx}\right] + \frac{2}{x}y(x) +\lambda \sigma (x)y(x)=0,\; y'(1)=y'(2)=0,\; 1 \leq x \leq 2. $$
I don't understand what it means to "choose" the proper weight function. I tried to rewrite the problem in this form.
$$\frac{1}{\sigma(x)}\left[\frac{d}{dx}\left[x\frac{dy(x)}{dx}\right] + \frac{2}{x}y(x)\right] +\lambda y(x)=0, $$
then calculate it by setting $p(x)=A(x)\sigma (x), p'(x)=B(x)\sigma(x)$ and using this formula:
$$\sigma(x)=e^{\int \frac{A-B'}{B}\,dx}, $$
but it doesn't get me anywhere; solving this gives you just $1=1.$
I tried extracting information about the weight function from the boundary condition but i am failing at that too and i tried solving the differential equation using an infinite series but that won't work either because of the unknown weight function. Any tips?
| For generic 2nd order (homogeneous) ODE:
$ a(x)y''(x) + b(x)y'(x) +c(x)y(x) = 0 $
To transform it to Sturm Liouville form you need:
$ a(x)>0$ and $a,b,c$ to be on continuous on the interval of definition.
Now the weight function is defined as follows:
$w(x) = \dfrac{1}{a(x)}e^{\int \frac{b(x)}{a(x)}dx}$
We now set:
$ p(x) = w(x)a(x)\\ q(x)=w(x)c(x)$
To do a sanity check find $p'(x)$ which is $b(x)w(x)$
so now you can rewrite the DE to:
$\dfrac{1}{w(x)} \left[ w(x)a(x)y'' + w(x)b(x)y' + w(x)c(x)y \right] =0\\ \dfrac{1}{w(x)} \left[ p(x)y'' + p'(x)y' + q(x)y \right] = 0\\ \dfrac{1}{w(x)} \left[ (p(x)y')' + q(x)y \right] = 0 $
That's how you reformulate to SL form. Hope this helps and you can work the details for your problem.
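To make the recipe concrete, here is the standard Chebyshev example (my choice of illustration, not from the question): for $(1-x^2)y'' - xy' + \lambda y = 0$ we have $a(x) = 1-x^2$, $b(x) = -x$, and the formula gives the classical weight $w(x) = 1/\sqrt{1-x^2}$, hence $p(x) = \sqrt{1-x^2}$. The sanity check $p' = bw$ can be verified numerically:

```python
import math

# Chebyshev's equation (1 - x^2) y'' - x y' + lam y = 0 on (-1, 1):
# a(x) = 1 - x^2, b(x) = -x, so the recipe gives
# w(x) = (1/a) exp(int b/a dx) = 1 / sqrt(1 - x^2) and p(x) = w(x) a(x).

def w(x):
    return 1 / math.sqrt(1 - x * x)

def b(x):
    return -x

def p(x):
    return w(x) * (1 - x * x)  # = sqrt(1 - x^2)

def p_prime(x, h=1e-6):
    # central finite difference, accurate enough for a sanity check
    return (p(x + h) - p(x - h)) / (2 * h)

# the text's check: p'(x) should equal b(x) * w(x)
residual = max(abs(p_prime(x) - b(x) * w(x)) for x in (-0.7, -0.3, 0.0, 0.3, 0.7))
```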
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3427467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Could the set composed of real quaternions be represented by a Hilbert space? Assume we have "real" quaternions: $Q = a+bi+cj+dk$ where $a,b,c,d$ are real numbers. The dot product between any two real quaternions is an inner product, and we can define the length of a quaternion $Q$ as $|Q| = \sqrt{\langle Q,Q\rangle}$.
I am still very new to the analysis part. I wonder if we can represent the set of real quaternions as a Hilbert space then?
Thank you.
| The key challenge is choosing a field for $\Bbb H$ to be a space over. (Bear in mind $\Bbb H$, unlike fields, isn't commutative.) One way to do this is to consider $\Bbb H$ a $2$-dimensional Hilbert space over $\Bbb C$ with basis $1,\,j$, so $a+bi+cj+dk=(a+bi)1+(c+di)j$ is a unique decomposition. Then $$\langle a+bi+cj+dk,\,e+fi+gj+hk\rangle:=(a-bi)(e+fi)+(c-di)(g+hi)$$is a suitable inner product. In particular$$\langle a+bi+cj+dk,\,a+bi+cj+dk\rangle=(a-bi)(a+bi)+(c-di)(c+di)=a^2+b^2+c^2+d^2,$$as expected. What we can't do, however, is use the ordinary quaternion multiplication $q_1^\ast q_2$, in which $q_1$ is conjugated, as an inner product, because IPs have to live in the field, in this case $\Bbb C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3427735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to determine the given matrix is positive semidefinite under these conditions? Suppose I have a $2^n$ by $2^n$ symmetric matrix M. I know the following facts are true about $M$.
*
*Each diagonal entry of $M$ is $n+1$, which is strictly larger than any off-diagonal entry.
*The sum of each row of the matrix $M$ is exactly $2^n$.
*No off-diagonal entry is smaller than $-n+1$.
*Each row contains the same elements (but in a different order, so that $M$ is symmetric).
I really hope to conclude that $M$ is positive semidefinite, but I have to admit this may not be true. I know the fact that if each diagonal entry is greater than the sum of the absolute values of all other entries in the corresponding row, then $M$ is positive definite. However, we cannot use that here because of fact 2. On the other hand, fact 2 implies that $M$ has eigenvector $\mathbf 1$ with eigenvalue $2^n$, and there cannot be too many negative entries in $M$. I wonder if these conditions are sufficient for me to draw such a conclusion.
The Courant–Fischer theorem seems helpful here, since we can express the smallest eigenvalue as $\min_{v \perp \mathbf 1} \frac{v^{T}Mv}{v^{T}v}$.
| The hypoteses given on $M$ don't allow to conclude that $M$ is positive semidefinite. The following is a counterexample for $n=3$. Define:
$$A:=\begin{bmatrix}
4& -2& -2& -2\\
-2& 4& -2& -2\\
-2& -2& 4& -2\\
-2& -2& -2& 4\\
\end{bmatrix}
\qquad
B:=\begin{bmatrix}
3 & 3 & 2 & 2\\
3 & 3 & 2 & 2\\
2 & 2 & 3 & 3\\
2 & 2 & 3 & 3\\
\end{bmatrix}
\qquad M:=\begin{bmatrix}A & B\\B& A\end{bmatrix}
$$
For $x=(1,1,1,1,-1,-1,-1,-1)^T$, we have $Mx=-12x$, showing that $M$ has at least one negative eigenvalue and therefore is not positive semidefinite. Similar counterexamples can be built for $n> 3$.
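The counterexample is easy to verify mechanically; the NumPy script below (matrices copied from the answer) checks the stated hypotheses and exhibits the negative eigenvalue:

```python
import numpy as np

A = np.array([[ 4, -2, -2, -2],
              [-2,  4, -2, -2],
              [-2, -2,  4, -2],
              [-2, -2, -2,  4]])
B = np.array([[3, 3, 2, 2],
              [3, 3, 2, 2],
              [2, 2, 3, 3],
              [2, 2, 3, 3]])
M = np.block([[A, B], [B, A]])

x = np.array([1, 1, 1, 1, -1, -1, -1, -1])
is_eigvec = np.array_equal(M @ x, -12 * x)      # M x = -12 x exactly
row_sums_ok = bool((M.sum(axis=1) == 8).all())  # every row sums to 2^3 = 8
min_eig = np.linalg.eigvalsh(M).min()           # at most -12
```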
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3427821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Why does the Well Ordering Theorem imply decreasing ordinals go to zero? The Well Ordering Theorem states that any set can be well ordered. But in this PBS Infinite Series video, Kelsey states the theorem as, "Any decreasing sequence of ordinals eventually goes to zero." This isn't an obvious equivalence to me. Does deeper work need to be done to prove it, or is it a one-liner?
It may be hard for me to see the connection because I don't know the rigorous definition of ordinals, just the intuitive idea from the above-linked episode.
| If the sequence is decreasing, it has to be finite, since a well-ordering does not have any infinite decreasing sequences. But that means that we have to stop somewhere, and if you take "decreasing" to mean "continue to decrease as long as you can", that means that the last element of the sequence must be the minimum, i.e. $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find a restricted domain so that the function is injective The sine function is not injective when its domain is the whole of $\mathbb{R}$.
To find a restricted domain where the function is injective, can we do that only using the graph or is there also another way?
Such a restricted domain is for example $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right ]$. How can we find it?
| Let $f: A \to B$ be any function.
The restriction of $f$ to the empty set is injective.
Let $f: A \to B$ be any function defined on a nonempty set $A$.
If $a \in A$ then $f$ restricted to the singleton set $\{a\}$ is injective.
Let $f: A \to B$ be any function defined on a nonempty set $A$. Let the range of $f$ be denoted by $D$.
Using the axiom of choice, there exists a subset $C \subset A$ such that the restriction of $f$ to $C$ defines a bijective correspondence between $C$ and $D$.
The function $f(x) = \sin(x)$ has range $[-1,+1]$ and the function $g(x) = \arcsin(x)$ is a right inverse of $f$. The range of $g(x)$ is the interval
$\quad [-\frac{\pi}{2},\frac{\pi}{2}]$
The function $f$ restricted to this set is injective.
If $h: [-1,+1] \to \Bbb R$ is any right inverse of $f$, then $f$ restricted to $h\big( [-1,+1] \big)$ will be injective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How to prove using only set identities? I am given three sets $A$, $B$ and $C$, and I need to prove that the following two statements are equivalent:
S1 = ((C − B) − A) ∪ (B − (B − (A ∪ C))) − (A ∩ (B ∩ C))
S2 = (A ∩ B) − (A ∩ (B ∩ C)) ∪ (C − A)
Prove that S1 is equivalent to S2 without using a Venn diagram.
How can I prove this?
| Using Boolean algebra you consider the Boolean variables
*
*$a,b,c$ with $a=1$ ($1$ stands for $True$) iff $x \in A$. Similarly, with $b$ and $c$.
Now, write for each set the corresponding Boolean expressions
*
*$x \in S_1 = (cb')a' + b(b(a+c)')'(abc)'$
*$x \in S_2 = ab(abc)'+ca'$
Now, the only thing you need to show is that these two expressions are logically equivalent. To do so, you can simply transform them into a normal form (for example here I use disjunctive normal form (DNF)).
If the DNF's are equal, the statements are equivalent:
\begin{eqnarray*} (cb')a' + b(b(a+c)')'(abc)'
& = & a'b'c + b(b'+(a+c))(abc)' \\
& \stackrel{uu'=0, 0+v=v}{=} & a'b'c + (ab+bc)(a'+b'+c') \\
& \stackrel{uu'=0, 0+v=v}{=} & \boxed{a'b'c + abc' + a'bc}
\end{eqnarray*}
\begin{eqnarray*} ab(abc)'+ca'
& = & ab(a'+b'+c') + c(b+b')a' \\
& \stackrel{uu'=0, 0+v=v}{=} & \boxed{abc' + a'bc + a'b'c}
\end{eqnarray*}
They are equal. So, the sets are equal.
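Since only three Boolean variables are involved, the equivalence can also be brute-forced over all $2^3$ assignments (the two Boolean expressions above transcribed into Python):

```python
from itertools import product

# S1 = (c b')a' + b(b(a+c)')'(abc)'  and  S2 = ab(abc)' + ca',
# written out with Python's boolean operators.
def s1(a, b, c):
    return (c and not b and not a) or \
           (b and not (b and not (a or c)) and not (a and b and c))

def s2(a, b, c):
    return (a and b and not (a and b and c)) or (c and not a)

print(all(bool(s1(a, b, c)) == bool(s2(a, b, c))
          for a, b, c in product([False, True], repeat=3)))  # True
```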
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Which ring is $R[X,Y,Z,T]/(X-Y^2,T-Y^4,T^3-Z)$ isomorphic to?
Which ring is $R[X,Y,Z,T]/(X-Y^2,T-Y^4,T^3-Z)$ isomorphic to?
I already did substitution for $X$ so we get the ring $R[x,x^{1/2},x^6,x^2]$ but I don't know to which ring this is isomorphic.
| Note that $T^3 - Y^{12}$ is divisible by $T- Y^4$, so we have the equality of ideals
$$\langle T- Y^4, T^3 - Z\rangle = \langle T- Y^4, T^3 - Y^{12}, T^3 - Z\rangle =\langle T- Y^4, Z- Y^{12}\rangle$$
and so
$$\langle X-Y^2, T- Y^4, T^3 - Z\rangle = \langle X-Y^2, T- Y^4, Z- Y^{12}\rangle$$
Now the evaluation map
$R[Y,X,Z,T]\to R[Y]$,
$X\mapsto Y^2$, $Z\mapsto Y^{12}$, $T\mapsto Y^4$ gives the isomorphism
$$R[Y,X,Z,T]/\langle X-Y^2, T- Y^4, Z- Y^{12}\rangle \simeq R[Y]$$
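The divisibility used in the first line is just the factorization $t^3-s^3=(t-s)(t^2+ts+s^2)$ with $t=T$ and $s=Y^4$; a quick numeric spot-check of that identity:

```python
import random

# Spot-check T^3 - Y^12 = (T - Y^4)(T^2 + T*Y^4 + Y^8) at random integer points.
for _ in range(100):
    T = random.randint(-50, 50)
    Y = random.randint(-10, 10)
    lhs = T**3 - Y**12
    rhs = (T - Y**4) * (T**2 + T * Y**4 + Y**8)
    assert lhs == rhs
print("identity holds on all samples")
```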
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is there any relation between $Rank(A^2)$ and $Rank(A^3)$ if $Rank(A)=Rank(A^2)$? It is a question from my textbook :
$A$ is a square matrix of order $n\times n$. If $Rank(A)=Rank(A^2)$ then verify whether $Rank(A^2)=Rank(A^3)$ or not.
It is definite that $Rank(A^3)\leq Rank(A^2)$ but after that I cannot proceed.
Please anyone help me solve it. Thanks in advance.
| In terms of triangularisation of $A$, which does not change the rank, you may suppose $A$ is upper triangular with the zero eigenvalues placed last. If the algebraic and geometric multiplicities of $0$ as an eigenvalue are equal, then $\textrm{rank}(A^n)$ is constant for all $n$. If not, the upper triangular $A$ has its last $m\times m$ (lower right) block of the form $\begin{pmatrix}0&a\\0&0\end{pmatrix}$, where $a$ is a nonzero $1\times (m-1)$ row; raising to $A^2$, this block turns all zero, diminishing the rank, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
The Closed-Form of $\displaystyle \int_{0}^{1}\mathrm{li}(x)\ln\Big(\ln\Big(\frac{1}{x}\Big)\Big) \mathrm{d}x$ $$\mathrm{Prove \;that } \int_{0}^{1}\mathrm{li}(x)\ln\Big(\ln\Big(\frac{1}{x}\Big)\Big) \mathrm{d}x\;\;=\;\; \frac{1}{2}\zeta(2)+\frac{1}{2}\ln^2(2)+\gamma \ln(2)$$
I found this problem on a group in facebook, I would like to share my solution with you and hope you like it.
| To solve this problem I will use a very nice result
$$\int_{0}^{1}\mathrm{li}(x)\sin\Big(u\ln\Big(\frac{1}{x}\Big)\Big) \mathrm{d}x \;\;=\;\; \frac{1}{u^2+1}\Bigg(\tan^{-1}\Big(\frac{u}{2}\Big)-\frac{u\ln(u^2+4)}{2}\Bigg)$$
Multiply both sides by $\displaystyle \frac{\ln(u)}{u} $ and integrate both sides from $\displaystyle u \;=\;0\;\; \mathrm{to} \;\;\infty$ , we get
$$\int_{0}^{1}\int_{0}^{\infty}\mathrm{li}(x)\sin\Big(u\ln\Big(\frac{1}{x}\Big)\Big)\frac{\ln(u)}{u}\mathrm{d}u\, \mathrm{d}x \;\;=\;\; \int_{0}^{\infty}\frac{\ln(u)}{u(u^2+1)}\Bigg(\tan^{-1}\Big(\frac{u}{2}\Big)-\frac{u\ln(u^2+4)}{2}\Bigg)\mathrm{d}u$$
and it's very easy to prove that $\displaystyle \int_{0}^{\infty}\frac{\sin(au)\ln(u)}{u}\mathrm{d}u \;\;=\;\;- \frac{\pi}{2}\Big(\ln(a)+\gamma\Big)$, then we get that
$$\frac{\pi}{2}\;\;\int_{0}^{1}\mathrm{li}(x)\ln\Big(\ln\Big(\frac{1}{x}\Big)\Big) +\gamma\;\mathrm{li}(x)\mathrm{d}x \;\;=\;\; \int_{0}^{\infty}\frac{\ln(u)}{u(u^2+1)}\Bigg(\frac{u\ln(u^2+4)}{2}-\tan^{-1}\Big(\frac{u}{2}\Big)\Bigg)\mathrm{d}u$$
$$\mathrm{Let \;\; I \;\;=\;\; }\int_{0}^{1}\mathrm{li}(x)\ln\Big(\ln\Big(\frac{1}{x}\Big)\Big) \mathrm{d}x \;\;=\;\;\frac{2}{\pi}\underbrace{\int_{0}^{\infty}\frac{\ln(u)}{u(u^2+1)}\Bigg(\frac{u\ln(u^2+4)}{2}-\tan^{-1}\Big(\frac{u}{2}\Big)\Bigg)\mathrm{d}u}_\text{ $\mathrm{I}_1$} -\gamma\;\underbrace{\int_{0}^{1}\;\mathrm{li}(x)\mathrm{d}x}_\text{ $\mathrm{I}_2$} $$
$$\mathrm{I}_1 \;\;=\;\;\int_{0}^{1}\int_{0}^{\infty} \frac{\partial}{\partial a }\frac{\ln(u)}{u(u^2+1)}\Bigg(\frac{u\ln(a^2 u^2+4)}{2}-\tan^{-1}\Big(\frac{au}{2}\Big)\Bigg)\mathrm{d}u\mathrm{d}a $$
$$=\;\;\int_{0}^{1}\int_{0}^{\infty} \frac{\ln(u)}{(u^2+1)}\cdot\frac{a u^2-2}{(a^2u^2+4)}\mathrm{d}u \mathrm{d}a\;\;=\;\; \int_{0}^{1}\frac{1}{a-2}\int_{0}^{\infty} \frac{\ln(u)}{(u^2+1)}-\frac{2a \ln(u)}{(a^2u^2+4)}\mathrm{d}u\mathrm{d}a $$
but $\displaystyle \int_{0}^{\infty} \frac{\ln(x)}{(x^2+a^2)}\mathrm{d}x \;\;=\;\;\frac{1}{a}\int_{0}^{\infty} \frac{\ln(a)+\ln(x)}{(x^2+1)}\mathrm{d}x \;\;=\;\; \frac{\pi}{2a} \ln(a)+\frac{1}{a}\int_{0}^{\infty} \frac{\ln(x)}{(x^2+1)}\mathrm{d}x \;\;=\;\;\frac{\pi}{2a} \ln(a) $, since $\displaystyle\int_{0}^{\infty} \frac{\ln(x)}{(x^2+1)}\mathrm{d}x=0$ by the substitution $x\mapsto \frac1x$
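The vanishing of $\int_0^\infty \frac{\ln u}{u^2+1}\,\mathrm du$ is the key cancellation here; a rough numeric confirmation (after $u=e^t$ the integrand becomes the odd function $t/(e^t+e^{-t})$):

```python
import math

# integral_0^inf ln(u)/(u^2+1) du, after u = e^t, is the integral over R
# of t/(e^t + e^{-t}), an odd function, hence 0. Midpoint rule check:
def integrand(t):
    return t / (math.exp(t) + math.exp(-t))

n, lo, hi = 100_000, -40.0, 40.0
h = (hi - lo) / n
val = h * sum(integrand(lo + (k + 0.5) * h) for k in range(n))
print(abs(val) < 1e-8)  # True
```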
$$\mathrm{Then \;\; I_1} \;\;=\;\;\frac{\pi}{2}\int_{0}^{1}\frac{\ln(a/2)}{a-2}\mathrm{d}a \;\;=\;\;\frac{\pi}{2}\int_{0}^{1}\frac{\ln(a)}{a-2}\mathrm{d}a\;-\;\frac{\pi\ln(2)}{2}\int_{0}^{1}\frac{\mathrm{d}a}{a-2}$$
The second integral is $\int_{0}^{1}\frac{\mathrm{d}a}{a-2}=-\ln(2)$, so that term contributes $\frac{\pi}{2}\ln^2(2)$. For the first,
$$\frac{\pi}{2}\int_{0}^{1}\frac{\ln(a)}{a-2}\mathrm{d}a \;\;=\;\;-\frac{\pi}{4}\;\sum_{n=0}^{\infty}\frac{1}{2^n}\int_{0}^{1}\ln(a)\, a^n\,\mathrm{d}a \;\;=\;\;\frac{\pi}{4}\;\sum_{n=0}^{\infty}\frac{1}{2^n(n+1)^2} \;\;= \;\;\frac{\pi}{2}\, \mathrm{Li}_2\Big(\frac{1}{2}\Big) \;\;=\;\;\frac{\pi}{4} \Big(\zeta(2)-\ln^2(2)\Big)$$
$$\mathrm{Hence \;\; I_1}\;\;=\;\;\frac{\pi}{4}\Big(\zeta(2)-\ln^2(2)\Big)+\frac{\pi}{2}\ln^2(2)\;\;=\;\;\frac{\pi}{4}\Big(\zeta(2)+\ln^2(2)\Big)$$
and $\mathrm{ I_2}$ is known in the mathematical literature and has the value : $-\ln(2)$
By combining these results we get that
$$\qquad\int_{0}^{1}\mathrm{li}(x)\ln\Big(\ln\Big(\frac{1}{x}\Big)\Big) \mathrm{d}x \;\;=\;\;\frac{1}{2}\zeta(2)+\frac{1}{2}\ln^2(2)+\gamma \ln(2) $$
[by Ahmad Albow November 8,2019 ]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why does $\left(x \cdot \tan\left(\frac{1}{x}\right)-1\right)^{-1}$ asymptotically approach $3x^2 - 6/5$? I noticed that $\lim_{x \to \infty}\tan\left(\frac{1}{x}\right)*x = 1$ and I was wondering how fast it approaches $1$. I looked at $\frac{1}{\tan\left(\frac{1}{x}\right)*x-1}$ and found that this grows slower than $x^3$, so to find what polynomial degree it grows as fast as, I plugged it into $\lim_{x \to \infty} \frac{\ln\left(f\left(x\right)\right)}{\ln\left(x\right)} = \lim_{x \to \infty} \frac{\ln\left(\frac{1}{\tan\left(\frac{1}{x}\right)*x-1}\right)}{\ln\left(x\right)}=2$ to find that it grows around as fast as $x^2$. Then I tried plugging $\lim_{x \to \infty}\left(\tan\left(\frac{1}{x}\right)*x-1\right)*x^2$ into Wolfram|Alpha and it produced $1/3$. It was not able to provide any steps. How is this limit calculated? Using this, the next question I come upon is how $\lim_{x\to\infty}3x^2-\frac{1}{\tan\left(\frac{1}{x}\right)*x-1} = 6/5$ is calculated. Why does $\left(\tan\left(\frac{1}{x}\right)*x-1\right)^{-1}$ asymptotically approach $3x^2 - 6/5$?
| We have that by Taylor's series
$$\tan\left(\frac{1}{x}\right)=\frac1x+\frac1{3x^3}+o\left(\frac1{x^3}\right)$$
and therefore
$$\left(\tan\left(\frac{1}{x}\right)\cdot x-1\right)\cdot x^2=\left(1+\frac1{3x^2}+o\left(\frac1{x^2}\right)-1\right)\cdot x^2=\frac 13+o(1) \to \frac 13$$
and since
$$\tan\left(\frac{1}{x}\right)=\frac1x+\frac1{3x^3}+\frac2{15x^5}+o\left(\frac1{x^5}\right)$$
$$\frac{1}{\tan\left(\frac{1}{x}\right)\cdot x-1}=\left(\frac1{3x^2}+\frac2{15x^4}+o\left(\frac1{x^4}\right)\right)^{-1}=$$
$$=3x^2\left(1+\frac2{5x^2}+o\left(\frac1{x^2}\right)\right)^{-1}=3x^2-\frac6{5}+o\left(1\right)$$
then
$$3x^2-\frac{1}{\tan\left(\frac{1}{x}\right)\cdot x-1}=\frac65+o(1) \to \frac65$$
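Both limits are easy to confirm numerically at a moderately large $x$ (too large and floating-point cancellation in $\tan(1/x)\cdot x-1$ takes over):

```python
import math

x = 100.0
t = math.tan(1.0 / x) * x - 1.0   # ~ 1/(3 x^2), tiny but well resolved at x = 100

first = t * x * x                  # should approach 1/3
second = 3 * x * x - 1.0 / t       # should approach 6/5

print(first, second)   # ~ 0.3333 and ~ 1.2000
```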
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3428932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Tetration convergence: prove $\lim_{x\rightarrow0} {}^{n}x = \begin{cases} 1, & n \text{ even} \\ 0, & n \text{ odd} \end{cases}$ I'm a computer student, learning math just for fun. Today I was graphing for fun that I found something strange! I noticed that that wired function ${x^{x^{\cdot^{\cdot^{x}}}}}$ in zero, seems to converge to 1 when there are even powers and to 0 when there are odd powers! Then I attempt to prove it but I failed.
Then I did a little research and I found the Tetration article on Wikipedia. This article says that my guess was right but without any proof. So I'm here to ask you about it.
If we define ${x^{x^{\cdot^{\cdot^{x}}}}}$ as ${^{n}x} :=\begin{cases} 1 &\text{if }n=0 \\ x^{\left(^{(n-1)}x\right)} &\text{if }n>0 \end{cases}$
then prove:
$$\lim_{x\rightarrow0} {}^{n}x = \begin{cases} 1, & n \text{ even} \\ 0, & n \text{ odd} \end{cases}$$
| Sorry, I'm leaving an answer so I can show an image. Note the red colored large kidney in the center of the image. It is the location of period one convergence and is referred to as the Shell-Thron Region (STR). Immediately to left of the center of the STR is a small yellow disk of period two convergence. Note that $^{\infty}1 = 1$ is at the center of the STR where $^{\infty}a = a$, while center of the yellow disk is $0$.
A pragmatic answer is that $1$ drives the dynamics of the surrounding exponential Mandelbrot map and that $0$ does the same, therefore $0^0=1$. In combinatorics it is common to take $0^0$ as $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3429034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Can you go from the integral of x squared to the summation of x squared? To be specific, I was wondering if you could go from the indefinite integral of $x^2$, mainly
$\int x^2\,dx = \frac{x^3}{3}$, to the summation of $x^2$, $\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$?
| The difference between the actual sum and the integral is the difference of $\frac {n(n+1)(2n+1)}{6}$ and $\frac{n^3}{3}$, that is $$\frac{n(3n+1)}{6}$$
which grows quadratically with $n$.
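The gap $\sum_{i=0}^{n} i^2-\frac{n^3}{3}=\frac{n(3n+1)}{6}$ can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Check that sum_{i=0}^{n} i^2 - n^3/3 equals n(3n+1)/6 exactly,
# using Fractions to avoid floating-point noise.
for n in range(1, 200):
    gap = sum(i * i for i in range(n + 1)) - Fraction(n**3, 3)
    assert gap == Fraction(n * (3 * n + 1), 6)
print("gap = n(3n+1)/6 for n = 1..199")
```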
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3429184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Any clues on how to do this modular arithmetic proof? Assume:
*
*$2x^3 - 8x^2 + 8y^3 - 12y^2 -10 \equiv 0 \mod 10$.
*$2y^3 - 8y^2 + 8z^3 - 12z^2 -10 \equiv 0 \mod 10$.
WTP:
*
*$2x^3 - 8x^2 + 8z^3 - 12z^2 -10 \equiv 0 \mod 10$.
I'm not sure where to start. How should I go about this?
| Reducing $\pmod {10}$ gives:
$2x^3 +2x^2 - 2y^3 - 2y^2 \equiv 0 \pmod {10}$
So:
$2x^3 +2x^2 \equiv 2y^3 + 2y^2 \pmod {10}$
Similarly from eqn. 2 we get:
$2z^3 +2z^2 \equiv 2y^3 + 2y^2 \pmod {10}$
The equivalence relation is transitive, therefore:
$2z^3 +2z^2 \equiv 2x^3 + 2x^2 \pmod {10}$
and so eqn. 3 is true.
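Since each expression mod $10$ depends only on the residues of $x,y,z$ mod $10$, the implication can also be confirmed by brute force:

```python
from itertools import product

def h(x, y):
    # left-hand side of the hypotheses/conclusion, as a function of the pair
    return (2 * x**3 - 8 * x**2 + 8 * y**3 - 12 * y**2 - 10) % 10

# Whenever h(x, y) == 0 and h(y, z) == 0, h(x, z) == 0 must follow.
ok = all(h(x, z) == 0
         for x, y, z in product(range(10), repeat=3)
         if h(x, y) == 0 and h(y, z) == 0)
print(ok)  # True
```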
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3429345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finite Complement Topology and Local Path Connectedness I'm having trouble with deciding whether or not a given space is locally path connected.
Let $(R,F)$ denote the finite complement topological space over the real numbers.
a) Determine the connected and path-connected components of (R,F).
b) Is (R,F) locally path-connected? Prove your statements.
I have proven that the real line is connected and path-connected and so is its own component. I have also shown that it is locally connected. But my intuition fails when it comes to deciding whether or not the space is locally path-connected.
Obviously, every two points $x, y \in R$ can be joined by the path
$$ \gamma: [0,1] \to [x,y], \gamma(t)=x+t(y-x)$$.
But if the claim were true one could show that for every $x \in R$ there exists a path-connected neighborhood. Since this neighborhood contains a set O which is open with respect to the Finite Complement Topology and contains x one would have to prove that O is path-connected. Since O is the real line with a finite number of points removed, I cannot imagine how such a neighborhood can exist, as my intuition fails at imagining how to continuously connect the set O.
| In the finite complement topology, an infinite subset of the same cardinality is homeomorphic to the whole space (any bijection between finite complement topologies is a homeomorphism).
Any injective mapping from $[0,1]$ (usual topology) into a space with the finite complement topology is continuous.
The second fact implies the path-connectedness, the first implies the local path-connectedness. QED.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3429481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Picking a popular vs unpopular friend There are 2 types of people on a social media platform: type A has 80 friends, type B has 20 friends.
We assume that half of the people are Type A, and half of the people are Type B.
The expected number of friends for a person is then $\frac{1}{2} \times 80 + \frac{1}{2} \times 20 = 50$.
What is the probability I pick type A friend among my friends? This question is taken from this Udacity video.
Their answer is explained here. They state the probability of picking type A friend among your friends is $\frac{4}{5}$.
I understand their reasoning, and it makes sense - it is much more likely to have a "popular" i.e. type A friend among your friends than "unpopular" i.e. type B friend, simply because their probability of having you as a friend is higher.
But I am wondering why we can't reason like this:
*
*type A and type B people are equally likely as stated above. So among my friends, I am as likely to pick type A person as type B person?
| In the overall population if a person is selected at random, they are equally likely to be Type A or Type B. However, your friends circle is not representative of the overall population.
It's like saying half of all people in the world are rich and half are poor. A guy walks up to you in the street and gives you one dollar for your service. Is he equally likely to be rich or poor? No, because he is not a random representative of the population.
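The $\frac45$ itself comes from counting friendship "slots": half the population contributes $80$ slots each and half contributes $20$, and a uniformly random slot belongs to a Type A person. A sketch of that computation:

```python
from fractions import Fraction

# Size-biased sampling: a random *friend* is chosen in proportion to
# how many friendships each type participates in.
share_A = Fraction(1, 2)          # fraction of people who are Type A
share_B = Fraction(1, 2)
friends_A, friends_B = 80, 20

p_friend_is_A = (share_A * friends_A) / (share_A * friends_A + share_B * friends_B)
print(p_friend_is_A)  # 4/5
```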
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3429630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Number in binary as a product I agree that any binary number that consists of $n$ ones (and no zeros) has as its decimal equivalent the number $2^n - 1$. However, the author of the book I'm reading next makes the following claim, which I don't quite see. He says that as a consequence of the fact above, it follows that any binary number that has the form of starting with $n$ ones and ending with $n - 1$ zeros (with nothing between them) has as its decimal equivalent the number $2^{n - 1}(2^n - 1)$. For example, the decimal number 496 in binary is 111110000, which consists of five 1s followed by four 0s, so $496 = 2^4 \times (2^5 - 1) = 16 \times 31$. But why is this true, i.e. why does it work?
| Since your binary number starts with $n$ ones followed by $n-1$ zeros, the number has $n+(n-1) = 2n-1$ binary digits. Therefore its decimal value is
$$
\begin{align}
& 0 \times 2^{0} + 0 \times 2^{1} + \cdots + 0 \times 2^{n-2} + 1 \times 2^{n-1} + 1 \times 2^{n} + 1 \times 2^{n+1} + \cdots + 1 \times 2^{2n-3} + 1 \times 2^{2n-2} \\
&= 2^{n-1} \left( 1 + 2 + 2^2 + \cdots + 2^{n-2 } + 2^{n-1} \right) \\
&= 2^{n-1} \frac{1 - 2^n }{1 - 2 } \\
&= 2^{n-1} \frac{2^n - 1 }{2-1} \\
&= 2^{n-1} \left( 2^n-1 \right).
\end{align}
$$
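The closed form is easy to test for many $n$ at once using Python's base-$2$ string parsing:

```python
# n ones followed by n-1 zeros, parsed as a binary numeral, should equal
# 2^(n-1) * (2^n - 1) -- e.g. n = 5 gives 111110000 -> 496.
for n in range(1, 64):
    value = int("1" * n + "0" * (n - 1), 2)
    assert value == 2**(n - 1) * (2**n - 1)
print(int("111110000", 2))  # 496
```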
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3429924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
General formula for $e^x+\cos(x)$, $e^x+\sin(x)$, $e^x-\sin(x)$, $e^x-\cos(x)$ I have been able to derive the formal series for these four functions:
$e^x+\sin(x) = 1+2x+\dfrac{x^2}{2!}+\dfrac{x^4}{4!}+\dfrac{2x^5}{5!}+\dfrac{x^6}{6!}+\dfrac{x^8}{8!}+\dfrac{2x^9}{9!}+...$
$e^x+\cos(x) = 2+x+\dfrac{x^3}{3!}+\dfrac{2x^4}{4!}+\dfrac{x^5}{5!}+\dfrac{x^7}{7!}+\dfrac{2x^8}{8!}+...$
$e^x-\sin(x) = 1+\dfrac{x^2}{2!}+\dfrac{2x^3}{3!}+\dfrac{x^4}{4!}+\dfrac{x^6}{6!}+\dfrac{2x^7}{7!}+\dfrac{x^8}{8!}+...$
$e^x-\cos(x) = x+x^2+\dfrac{x^3}{3!}+\dfrac{x^5}{5!}+\dfrac{2x^6}{6!}+\dfrac{x^7}{7!}+\dfrac{x^9}{9!}+...$
Due to the missing term and the irregularity, I am unable to write the general formula for these series. I wish to find the general formula with compact sigma notation. Can you help with this?
| Formal power series can be added and then their area of convergence is limited by the most restricted one.
So one easy way would be to
*
*find separately expansions for $\{e^{x},\cos(x),\sin(x)\}$
*add (or subtract) them to each other
*put them under the same $\sum$ and then
*try to simplify the expression you get using algebra.
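Carrying this recipe out for $e^x+\sin(x)$: the $\sin$ series contributes $+1$ to the coefficient of $\frac{x^n}{n!}$ when $n\equiv 1 \pmod 4$, $-1$ when $n\equiv 3 \pmod 4$, and $0$ for even $n$, which reproduces the $2$'s and the missing $x^3$ term in the question. A quick numeric check of the combined coefficients:

```python
import math

def coeff(n):
    # coefficient of x^n/n! in e^x + sin(x): 1 from e^x, plus the sin part
    sin_part = 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)
    return 1 + sin_part

x = 0.7
partial = sum(coeff(n) * x**n / math.factorial(n) for n in range(25))
print(abs(partial - (math.exp(x) + math.sin(x))) < 1e-12)  # True
```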
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Generators of $\text{GL}_2(\mathbb{Z})$ and $\text{SL}_2(\mathbb{Z})$. We denote by $(a,b,c)$ the integral binary quadratic form $q(x,y)=ax^2+bxy+cy^2$. Also, we denote by $\sim$ (resp. $\sim_+$) $\text{GL}_2(\mathbb{Z})$ equivalence (resp. $\text{SL}_2(\mathbb{Z})$ equivalence).
I know that for all $a,b,c\in\mathbb{Z}$ we have that $(a,b,c)\sim (a,-b,c)$ and
$$(a,b,c)\sim_+ (c,-b,a)\sim_+ (a,b+2a,c+b+a)\sim_+ (a,b-2a,c-b+a).$$
This is simply the fact that
$\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}$
is in $\text{GL}_2(\mathbb{Z})$ and
$$\begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix},\qquad \begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix},\qquad\begin{bmatrix}1 & -1 \\ 0 & 1\end{bmatrix}$$
are in $\text{SL}_2(\mathbb{Z})$.
I want to know if all equivalent forms can be obtained by repeated applications of those rules. In other words, whether these matrices generate the respective groups or not.
(Since $\mathbb{Z}$ is an euclidean domain, I know that $\text{GL}_2(\mathbb{Z})$ is generated by elementary matrices but it's not clear to me that all elementary matrices can be written as a product of the ones above.)
| It is true that $\mathrm{SL}_2(\mathbb Z)$ is generated as a monoid by
$$S = \begin{bmatrix}0 & 1 \\ -1 & 0\end{bmatrix}, \qquad T = \begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix},\qquad T^{-1} = \begin{bmatrix}1 & -1 \\ 0 & 1\end{bmatrix}$$
and thus $\mathrm{GL}_2(\mathbb Z)$ is generated by those matrices together with $\begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}$. See for example Modular group at Wikipedia, section "Presentation". I think this is originally due to Poincaré. Serre's book "Trees" contains more of this.
For references, Generators of $\text{GL}_{2}(\mathbb{Z})$ group, good reference book?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solve system solutions The number of real solutions of the system
$ \begin{cases}
a^2=b+2\\
b^2=c+2 \\
c^2=a+2\\
\end{cases}$
is equal to:
Solution:
$\cos 2\theta=2\cos^2\theta-1\implies 2\cos 2\theta=(2\cos\theta)^2-2$.
Using this results in all $8$ solutions to the system.
$(2,2,2)$, $(-1,-1,-1)$, and cyclic permutations of $\left(2\cos\frac{2\pi}{7},2\cos\frac{4\pi}{7},2\cos\frac{6\pi}{7}\right)$ and $\left(2\cos\frac{2\pi}{9},2\cos\frac{4\pi}{9},2\cos\frac{8\pi}{9}\right)$
How can I solve this in a simpler way (without using trigonometry)?
| $$a^2=b+2 \tag1$$
$$b^2=c+2 \tag2$$
$$c^2=a+2 \tag3$$
By successive eliminations
$$(1) \implies b=a^2-2$$
$$(2) \implies c=a^4-4a^2+2$$
$$(3) \implies a^8-8 a^6+20 a^4-16 a^2-a+2=0\tag 4$$
$(4)$ can be factorized as
$$(a-2) (a+1) \left(a^3-3 a+1\right) \left(a^3+a^2-2 a-1\right)=0$$ Each cubic equation has three real roots that you can express using ... the trigonometric method !
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Computational complexity of calculating the $n$th derivative of $f(x)=\exp\bigg(\frac{1}{\ln(x)}+\frac{1}{\ln(1-x)}\bigg)$? Computational complexity of calculating $$ f^{(n)}(x)? $$ where $f(x)=\exp\bigg(\frac{1}{\ln(x)}+\frac{1}{\ln(1-x)}\bigg)$
I don't know much about computational complexity but I do know that the derivatives just keep accumulating more and more mass.
I think the running time is exponential time. What's the fastest running time it can be calculated in?
| The answer by Steven Stadnicki isn't quite right. For each term with exponents $(a,b,c,d)$, you get six terms in the next derivative:
*
*$(a-1,b,c,d)$
*$(a-1,b-1,c,d)$
*$(a,b,c-1,d)$
*$(a,b,c-1,d-1)$
*$(a-1,b-2,c,d)$
*$(a,b,c-1,d-2)$
The first four come from differentiating $x$, $\ln x$, $(1-x)$, $\ln (1-x)$ respectively. The last one comes from differentiating the exponential, which multiplies by $(-x^{-1}(\ln x)^{-2} + (1-x)^{-1}(\ln (1-x))^{-2})$. This means that $0 \ge b \ge 2a$ and $0 \ge d \ge 2c$. So there's four times as many as he thought.
But luckily, it's actually much better than he thought. Notice that the sum $a+c$ always decreases by one, so $-(a+c)=n$. That means the number of terms in the $n$-th derivative is only $O(n^3)$, and the total time is $O(n^4)$.
But we can do much better. Define $$g(x) = \exp\bigg( {1 \over {\ln x}} \bigg)$$ so $$f(x)= g(x) \cdot g(1-x)$$
By the General Leibniz rule, $$f^{(n)}(x) = \sum_{i=0}^n {n \choose i} (-1)^{n-i} g^{(i)}(x)g^{(n-i)}(1-x)$$
When you compute the derivatives of $g$, you only have the $x$ and $\ln x$ factors, so $g^{(n)}$ only has $O(n)$ terms, and the total time is $O(n^2)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that $f$ is an increasing function if $f'(x)$ is more than zero for all real values of $x$ I'm having some difficulty with proving this theorem. What I do know is how to prove that it is a constant function when $f'(x) = 0$, by simply assuming that $f$ is not constant and contradicting the supposition. In this case, what should I do instead?
| $f'(x) > 0$ everywhere means $f$ is differentiable, and hence continuous, everywhere.
And the mean value theorem says that for any $a, b; a < b$ that there is a $c: a < c < b$ where $f'(c)=\frac {f(b)-f(a)}{b-a}$.
But we know $f'(c) > 0$ so $f(b) > f(a)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Taking a bizarre limit Consider the set of integers, $\Bbb{Z}$. Now consider the sequence of sets which we get as we divide each of the integers by $2, 3, 4, \ldots$.
Obviously, as we increase the divisor, the elements of the resulting sets will get closer and closer.
Question: In the limit as $\text{divisor}\to\infty$, what will the "limiting" set be?
(I don't think it could be $\Bbb{R}$.)
| The typical way to define limits of sets is via
$$\liminf_{n\to\infty} A_n = \bigcup_{n\geq 1} \bigcap_{k \geq n} A_k \\ \limsup_{n\to\infty} A_n = \bigcap_{n\geq 1} \bigcup_{k\geq n} A_k$$
Using these and $A_n = f_n(\mathbb{Z})$ where $f_n(x) = x/n,$ we have
$$\liminf_{n\to\infty} A_n = \mathbb{Z} \\ \limsup_{n\to\infty} A_n = \mathbb{Q} $$
In particular, the limit doesn't exist.
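These two formulas become concrete with the membership test $x\in A_n \iff nx\in\mathbb Z$: an integer such as $3$ lies in every $A_n$ (hence in the liminf), while $\frac12$ lies in $A_n$ exactly for even $n$, so it is in the limsup but not the liminf:

```python
from fractions import Fraction

def in_A(x, n):
    # x is in A_n = (1/n) * Z exactly when n*x is an integer
    return (n * x).denominator == 1

three = Fraction(3)
half = Fraction(1, 2)

print(all(in_A(three, n) for n in range(1, 100)))   # True: 3 is in every A_n
print([n for n in range(1, 11) if in_A(half, n)])   # [2, 4, 6, 8, 10]
```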
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Continuity of the stochastic process $X_t=\int_{0}^{t}(a+b\frac{u^n}{t^n} ) dW_{u} $ I am wondering about the continuity of the stochastic process
$$X_t=\int_{0}^{t}(a+b\frac{u^n}{t^n} ) dW_{u} $$
where $n=1,2,\ldots$
At $t=0$, there seems to be a discontinuity except when $b=0$. Is there a discontinuity?
| This problem comes down to showing the continuity of
$$\frac{1} {t^n}\int_0^t u^n dW_u $$
From Itô's formula, we have
$$\frac{1} {t^n}\int_0^t u^n dW_u =W_t-\frac{n}{t^n}\int_{0}^{t}W_u u^{n-1}du$$
So essentially we have to evaluate the limit
$$\lim_{t\to 0} \frac{1}{t^n}\int_{0}^{t}W_u u^{n-1}du$$
Where n is any natural number
Since $W_u u^{n-1}$ is continuous a.s., from the fundamental theorem of calculus and L'Hôpital's rule we have
$$\lim_{t\to 0} \frac{1}{t^n}\int_{0}^{t}W_u u^{n-1}du=\lim_{t\to 0}\frac{W_{t}t^{n-1}}{nt^{n-1}}=\lim_{t\to 0}\frac{W_{t}} {n} =0$$
almost surely.
Thus $X_t$ is continuous almost surely.
More generally for the stochastic process
$$Y_t=\int_{0}^{t}f(u,t)dW_{u}$$
Where f is of the form
$$f(u, t) =\frac{b(u)} {g(t)} $$
From Itô's formula, we have
$$Y_t=f(t, t)W_{t} - f(0,t)W_{0}-\frac{1}{g(t) }\int_{0}^{t}\frac{db} {du} W_{u} du$$
If $\frac{db} {du}$ is continuous then
$$\lim_{t\to 0} \frac{1}{g(t) }\int_{0}^{t}\frac{db} {du} W_{u} du=\lim_{t\to 0}\frac{b' (t)} {g'(t)} W_{t} $$
Thus sufficient(but not necessary) conditions for continuity at t=0 are
$$f(t, t)$$ is continuous
$$f(0,t)$$ is continuous
$$\frac{db} {du}$$ is continuous
$$\lim_{t\to 0}\frac{b' (t)} {g'(t)}$$ exists and is finite
Bonus: check this for $$f(u, t) =\frac{\sin(u)} {t}$$
For the stochastic process $X_t$ given in the question, certain a,b can make it a Wiener process
We have already established $X_0=0$ a.s.
Note that $X_t$ has mean $0$ and variance
$(a^2+\frac{b^2}{2n+1}+\frac{2ab}{n+1})t$
and it is normally distributed since the limit of a sequence of normal random variables is also a normal random variable.
For its variance to be t
$(a^2+\frac{b^2}{2n+1}+\frac{2ab}{n+1})=1$
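The variance quoted above is the Itô isometry value $\operatorname{Var}(X_t)=\int_0^t\big(a+b\,u^n/t^n\big)^2\,\mathrm du$; here is a rough numeric confirmation of that deterministic integral at arbitrary sample values $a=2$, $b=3$, $n=2$, $t=1.5$:

```python
import math

a, b, n, t = 2.0, 3.0, 2, 1.5

# Ito isometry: Var(X_t) = integral_0^t (a + b u^n / t^n)^2 du.
# Midpoint rule (after u = s*t) vs. the closed form
# (a^2 + b^2/(2n+1) + 2ab/(n+1)) * t.
N = 100_000
quad = sum((a + b * ((k + 0.5) / N)**n)**2 for k in range(N)) * t / N
closed = (a * a + b * b / (2 * n + 1) + 2 * a * b / (n + 1)) * t

print(quad, closed)  # both ~ 14.7
assert math.isclose(quad, closed, rel_tol=1e-6)
```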
Also note that since $X_t-X_s$ and $X_s$ are jointly normally distributed(any of their linear combination is individually normally distributed) and their covariance vanishes for
$b=0$
and
$\frac{a} {n+1}+\frac{b}{2n+1}=0$
These equations determine the a, b e.g two of the solutions are
$a^2=1, b=0$
Hence $X_t-X_s$ is independent of $ X_s$ for these a, b.
Now since
$|X_t-X_s|=|t-s|^{\frac{1}{2}}Z$
Where Z is the standard normal random variable and the equality is in distribution. Then
$E|X_t-X_s|^{2m}=C_{m} |t-s|^{m} $
Thus from the Kolmogorov continuity theorem, $X_{t}$ is a.s. Hölder continuous of every order $<\frac{1}{2}$ for those $a$, $b$.
These conditions make $X_t$ a Wiener process for those a, b.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3430941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Span of Density operators (Positive Semi-definite matrices of Trace one) While reading the basic mathematics of quantum mechanics I came across a statement -
"For every complex euclidean space $\cal X$ there exist spanning sets
of the space $L({\cal X})$ consisting of only density operators".
Here,
*
*$\cal X$ is euclidean space, say ${\mathbb C}^n$;
*$L({\cal X})$ correspond to linear operators over ${\cal X}$ which is just complex matrices of $n\times n$ size; and
*Density operators are just positive semi-definite matrices of trace one.
Now, I do not quite understand this: a linear combination of positive semi-definite matrices should be at least a symmetric matrix, so how can the span cover all the elements in $L({\cal X})$?
Any remarks will be of help. Thanks.
| Given any $T\in L(\mathbb C^n)$, you can write
$$
T=\frac{T+T^*}2+i\,\frac {T-T^*}{2i}
$$
so $T$ is a linear combination of selfadjoints. And for each selfadjoint you have the Spectral Theorem saying that they are a linear combination of rank-one projections (which are positive semidefinite of trace one).
The result is even true when $\mathcal X$ is an infinite-dimensional Hilbert space, although it is not trivial in that case.
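The first identity is straightforward to verify numerically: both $\frac{T+T^*}{2}$ and $\frac{T-T^*}{2i}$ are selfadjoint and recombine to $T$. A sketch with a random $2\times 2$ matrix in plain Python (the helper names are ad hoc):

```python
import random

def dagger(M):
    # conjugate transpose of a 2x2 matrix given as nested lists
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def lincomb(alpha, M, beta, N):
    return [[alpha * M[i][j] + beta * N[i][j] for j in range(2)] for i in range(2)]

T = [[complex(random.random(), random.random()) for _ in range(2)] for _ in range(2)]

H1 = lincomb(0.5, T, 0.5, dagger(T))             # (T + T*)/2
H2 = lincomb(0.5 / 1j, T, -0.5 / 1j, dagger(T))  # (T - T*)/(2i)

# H1, H2 are selfadjoint and T = H1 + i*H2
assert all(abs(H1[i][j] - dagger(H1)[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert all(abs(H2[i][j] - dagger(H2)[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert all(abs(T[i][j] - (H1[i][j] + 1j * H2[i][j])) < 1e-12 for i in range(2) for j in range(2))
print("T = H1 + i*H2 with H1, H2 selfadjoint")
```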
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3431129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are these infinite groups decomposable? I have been asked to:
Decide whether the following groups are decomposable:
(a) - $(\mathbb{R^*}, \cdot)$
(b) - $(\mathbb{C}, +)$
(c) - $(\mathbb{Q^*}, \cdot)$
(d) - $(\mathbb{Q}, +)$
I would like a hint for item (a). I believe I was able to do items (b), (c) and (d).
Regarding item (a), I tried to decompose $\mathbb{R^*}$ in rationals and irrationals (but this failed, since the irrationals are not a subgroup) or into algebraic and transcendental numbers (which also fails, since the transcendental numbers are not a subgroup). I also thought about showing that if $\mathbb{R^*} = A \times B$ then $A$ and $B$ do not intersect trivially (thus showing that the group is indecomposable), but I couldn't prove this idea.
Regarding item (b), I decomposed $\mathbb{C}$ into $\mathbb{R}$ and
$i\mathbb{R} = \{iy \ | \ y \in \mathbb{R} \} $.
Regarding item (c), I wrote that $\mathbb{Q^*} = \langle \ p \ | \ p \ \text{is a prime} \rangle = \langle 2 \rangle \ \oplus \ \langle \ p \ | \ p \ \text{is an odd prime} \rangle $.
EDIT: As pointed in the comments, this decomposition is for the multiplicative group of positive rational numbers. A correct decomposition would be, for instance, $\mathbb{Q^*} = \langle 2, -1 \rangle \ \oplus \ \langle \ p \ | \ p \ \text{is an odd prime} \rangle $.
Regarding item (d), I proved that the group is indecomposable by proving that two non-trivial subgroups don't intersect trivially. My reasoning was the same as in: Why is the additive group of rational numbers indecomposable?.
Can anyone give me a hint for item (a)? Thanks in advance.
| For item (a), you can decompose where one factor is the sign and the other is the absolute value. Notice that you probably should do (c) the same way. Your decomposition is for the multiplicative group of positive rational numbers.
It turns out $(\mathbb R, +) $ is decomposable as well, but it's not quite as easy to see this.
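A quick numerical sketch of the decomposition for (a) (my own, not part of the original answer): the map $x \mapsto (\operatorname{sign}(x), |x|)$ should be multiplicative in each coordinate, which is exactly what makes $\mathbb{R}^* \cong \{\pm 1\} \times \mathbb{R}_{>0}$ work.

```python
import random

random.seed(1)
sign = lambda x: 1 if x > 0 else -1

# sanity check that x -> (sign(x), |x|) is multiplicative in each coordinate,
# i.e. a homomorphism R* -> {±1} x R_{>0} (a numeric sketch, not a proof)
for _ in range(1000):
    x = random.uniform(-10, 10) or 1.0
    y = random.uniform(-10, 10) or 1.0
    assert sign(x * y) == sign(x) * sign(y)
    assert abs(abs(x * y) - abs(x) * abs(y)) < 1e-9
print("homomorphism checks passed")
```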
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3431485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Differentiability of the operator norm My question is simple. Given finite-dimensional real Banach spaces $V, W$, is the operator norm on $\mathcal{L}(V, W) \setminus \{ 0 \}$ differentiable?
I know the standard Euclidean norm would be, but I don’t know what to do with this.
| Consider $V=W=\mathbb R^n$ with $\|\cdot\|$ as the spectral norm on matrices (induced by the 2-norm on vectors). If $\|\cdot\|$ is differentiable, then for any $A,B$, the one-sided directional derivatives
$$\nabla_B\|A\|=\lim_{h\to0^+}\frac{\|A+hB\|-\|A\|}{h}$$
should satisfy $\nabla_B\|A\|=-\nabla_{-B}\|A\|$.
However, take $A=I$ and $B=\operatorname{diag}(1,0,\dots,0)$, and we have $\nabla_B\|A\|=1$ but $\nabla_{-B}\|A\|=0$.
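The disagreement of the one-sided derivatives can be checked numerically (a sketch assuming NumPy; `np.linalg.norm(M, 2)` computes the spectral norm):

```python
import numpy as np

def spectral_norm(M):
    return np.linalg.norm(M, 2)  # largest singular value

n = 3
A = np.eye(n)
B = np.diag([1.0] + [0.0] * (n - 1))

h = 1e-7
d_plus = (spectral_norm(A + h * B) - spectral_norm(A)) / h
d_minus = (spectral_norm(A - h * B) - spectral_norm(A)) / h
print(d_plus, d_minus)  # ≈ 1.0 and 0.0, so the one-sided derivatives disagree
```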
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3431615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Can't solve a difficult limit I need to solve this limit
${\lim_ {x\to {+∞}}}{\frac{{x}(\sqrt{x^2 + x} - x) +\cos(x)\ln(x)}{\ln(1+\cosh(x))}}$
I've tried to use Taylor's Theorem with Peano's form of the remainder, but at first I forgot that ${x\to{+∞}}$, so I made the substitution ${t=\frac{1}{x}}$; that got me nowhere (I ended up with ${o({\frac{1}{t}})}$, or ${o((t-1)^3)}$ and an overly complicated expression, which doesn't vanish). I thought about using L'Hospital's rule, but there's a problem identifying the indeterminate form: the term ${\cos(x)\ln(x)}$ sometimes becomes ${0\cdot∞}$. Then I wondered whether the limit even exists, and... WolframAlpha says it doesn't. But the answer in my book is 1/2.
So now I don't know how to solve it, or whether the limit exists at all. Can anyone give me at least a hint on how to approach this problem?
| Hint:
First split the expression in two, and use equivalents:
We have $\cosh x\sim_{+\infty}\frac12\mathrm e^x$, so $1+\cosh x\sim \frac12\mathrm e^x$, and finally
$$\ln(1+\cosh x)\sim_{+\infty}x-\ln 2\sim_{+\infty} x.$$
On the other hand,
$$x(\sqrt{x^2 + x} - x)=\frac{x(\not x^2 + x - \not x^2)}{\sqrt{x^2 + x} + x}\sim_{+\infty}\frac{x^2}{2x}=\frac x2,$$
so that $$\frac{x(\sqrt{x^2 + x} - x)}{\ln(1+\cosh x)}\sim_{+\infty}\frac{\frac12 x}{x}=\frac 12.$$
Can you show that $$\frac{\cos x\ln x}{\ln(1+\cosh x)}\to 0?$$
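A numerical spot-check of the claimed limit (my own sketch using only the standard library; values of $x$ are kept below the overflow threshold of `math.cosh`):

```python
import math

def f(x):
    num = x * (math.sqrt(x * x + x) - x) + math.cos(x) * math.log(x)
    den = math.log(1 + math.cosh(x))
    return num / den

for x in (100.0, 300.0, 700.0):
    print(x, f(x))  # values settle near 1/2 as x grows
```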
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3431716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Is this integral evaluation legitimate? I would like to evaluate the following integral:
$$\int_{-1}^{1}\frac{1}{\sqrt{1-x^{2}}}\cos (2\arccos (x))\cos (3\arccos (x)){\mathrm{d} x}$$
Some experience from taking a Numerical Methods course gives me the idea that the integrand can be thought of as some weight function $w(x) = \frac{1}{\sqrt{1-x^{2}}}$ multiplied by some trig terms which appear to be Chebyshev Polynomials of the first kind. Due to the orthogonality of Chebyshev Polynomials with respect to the weight function, I can conclude that the integral evaluates to $0$ because the Chebyshev Polynomials have different indices.
However, I would like to calculate this integral with elementary methods to be sure.
With some manipulation of the trigonometric terms I find that:
$$\cos (2\arccos (x)) = -\cos(2\arcsin(x)) $$
$$\cos (3\arccos (x))= -\sin(3\arcsin(x)) $$
I make the substitution:
$$u = \arcsin(x)$$
$${\mathrm{d} u} = \frac{1}{\sqrt{1-x^{2}}}{\mathrm{d} x}$$
Then the original integral transforms into:
$$\int_{\infty}^{\infty}\cos (2u)\sin (3u){\mathrm{d} u}$$
From which I conclude equals $0$ because:
$$\int_{a}^{a}f(x){\mathrm{d} x} = 0$$
I feel that I have done something ill advised because sine and cosine do not converge to a limit at infinity. This reminds me of the Cauchy Principal Value. Are my manipulations legitimate?
| Hint :
Both function $\cos(2\arccos x)$ and $\cos(3\arccos x)$ are Chebyshev polynomials and
$$
\cos(2\arccos x)=2x^2-1 \\
\cos(3\arccos x)=4x^3 -3x
$$
Let $$f(x)=\frac{\cos(2\arccos x)\cos(3\arccos x)}{\sqrt{1-x^2}} \\
=\frac{(2x^2-1)(4x^3 -3x)}{\sqrt{1-x^2}}$$
Then $f\left(\frac{\sqrt{3}}{2}\right) = 0$ and $f(x)>0$ for all $x \in \left(\frac{\sqrt{3}}{2}, 1\right)$.
Since $0<(2x^2 - 1)(4x^3 - 3x)< 1$ on $\left(\frac{\sqrt{3}}{2}, 1\right)$, multiplying both sides by $\frac{1}{\sqrt{1-x^2}}$ gives
$$
0<f(x)<\frac{1}{\sqrt{1-x^2}} \quad \quad \left(\frac{\sqrt{3}}{2}<x<1\right)
$$
So the improper integral of $f$ near $x=1$ converges. Since $f$ is odd, the improper integral of $f$ near $x=-1$ also converges, and by oddness the two halves cancel, so the integral is zero.
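As a numeric cross-check (my own sketch): the substitution $x=\cos\theta$ turns the integral into $\int_0^\pi \cos(2\theta)\cos(3\theta)\,d\theta$, which a simple midpoint rule confirms is zero:

```python
import math

# after x = cos(theta), the integral becomes ∫_0^π cos(2θ) cos(3θ) dθ;
# approximate it with the midpoint rule
n = 100_000
h = math.pi / n
total = h * sum(math.cos(2 * (i + 0.5) * h) * math.cos(3 * (i + 0.5) * h)
                for i in range(n))
print(total)  # ≈ 0
```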
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3431860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What's the probability that I draw at least 1 white card when drawing 3 cards from 3 decks of 15 cards, 2 of which are white? I haven't seen quite this scenario on a card drawing problem on here. I'm trying to figure out the probabilities for a card game I'm developing. There are 3 separate decks with 15 cards each. In each deck there are 2 'white cards' let's say and we're interested in drawing those. So if I draw 3 cards from each of the decks, then what is the probability that I draw at least 1 'white card' and not one of the other 13?
I think I calculated the probability for one of the decks so I'll put my work here for someone to check it.
Probability of drawing at least 1 white card when drawing 3 cards from one deck:
First, I computed the probability of drawing exactly 1 white card when drawing 3 cards from one deck, which is as follows.
$P(W_1) = (_3C_1) \left(\frac{2}{15}\right)^1 \left(\frac{13}{15}\right)^2$
$P(W_1) = 0.3004$
Then, I computed the probability of drawing 2 white cards when drawing 3 cards from the deck.
$P(W_2) = (_3C_2)\left(\frac{2}{15}\right)^2 \left(\frac{13}{15}\right)^1$
$P(W_2) = 0.0462 $
So then the probability of drawing at least 1 white card is,
$P(W)= P(W_1) + P(W_2) = 0.3466$
So how do I go about incorporating the other 2 decks into my equation? What's the probability of drawing a white card when drawing 3 cards from each deck? Thanks.
| So I was messing around with it more and talking to a friend about it. Can anyone tell me if this is correct...
$P(W) = 1 - P(W')$
$ = 1 - \left(\frac{^{13}C_3}{^{15}C_3}\right)^3 $
Which comes out to $P(W) = 0.752$
It just seems quite high to me.
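For what it's worth, the computation can be reproduced directly with `math.comb` (a quick sketch):

```python
from math import comb

# probability that 3 draws from one 15-card deck miss both white cards
p_none_one_deck = comb(13, 3) / comb(15, 3)
# the three decks are independent, so cube it and take the complement
p_at_least_one = 1 - p_none_one_deck ** 3
print(round(p_at_least_one, 3))  # 0.752
```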
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3431985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
$u(t,B_t)$ is a martingale if $u(t,x)$ is polynomial in each variable and satisfies the heat equation I want to show that for $u(t,x)$ which is a polynomial in $t$ and $x$ such that $$\frac{\partial u}{\partial t} + \frac{1}{2}\frac{\partial^2 u}{\partial x^2} = 0$$
we have $u(t,B_t)$ is a martingale where $B_t$ stands for the standard Brownian motion.
Durrett's "Probability theory and examples" shows that $ E_x u(t, B_t) $ is constant in $t$ and concludes right away that $u(t,B_t)$ is a martingale. How is this possible?
Any help is appreciated.
| A process with constant mean and the Markov property is a martingale.
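A Monte Carlo spot-check of the constant-mean claim (my own sketch; it checks marginal means only, not the full martingale property), using $u(t,x)=x^2-t$, which satisfies the heat equation, and the fact that $B_t \sim N(0,t)$:

```python
import random

random.seed(0)

# u(t, x) = x**2 - t satisfies u_t + (1/2) u_xx = 0; since B_t ~ N(0, t),
# the mean of u(t, B_t) should be 0 for every t
N = 100_000
means = []
for t in (0.5, 1.0, 2.0):
    m = sum(random.gauss(0, t ** 0.5) ** 2 - t for _ in range(N)) / N
    means.append(m)
print(means)  # each entry ≈ 0 up to Monte Carlo noise
```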
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3432101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Define rank of matrix by reduced row echelon form - well-defined? In order to define the rank of a matrix, I want to use reduced row echelon form (rref). I have an ugly proof that the rref is unique in the following sense: If $A,B$ are in rref and $A=ZB$ with invertible $Z$ then $A=B$.
However, the claim is too strong: I only need that the number of pivot positions is unique. Then the question reduces to a hunt for a proof of the following statement:
Let $F$ be any field, $Q\in GL_m(F)$, $R\in GL_n(F)$, $0\le q,r\le \min(m,n)$ with
$$
Q \pmatrix{ I_q & 0 \\ 0&0} = \pmatrix{ I_r & 0 \\ 0&0} R.
$$
Then $q=r$.
Is there a (nice) proof that only uses matrix-multiplication based arguments?
Note, that I cannot use rank or any other advanced concept here like determinant, dimension, etc.
| Assume $q<r$. Let me partition $R=\pmatrix{ R_{11} & R_{12}\\ R_{21} & R_{22}}$ with $R_{11}\in K^{r,q}$. Then
$$
\pmatrix{ I_r& 0 \\ 0&0} R= \pmatrix{ I_r& 0 \\ 0&0}\pmatrix{ R_{11} & R_{12}\\ R_{21} & R_{22}} =\pmatrix{ R_{11} & R_{12}\\ 0&0}.
$$
Due to the assumption, the last $n-q$ columns are zero, so $R_{12}=0$.
In addition, there exists $x\in K^{n-q,1}$, $x\ne0$, such that $R_{22}x=0$. This can be constructed from the rref of $R_{22}$, since $R_{22}$ has more columns ($n-q$) than rows ($n-r$). Then $R\pmatrix{0\\ x}=0$, which is a contradiction to the invertibility of $R$. Hence $q\ge r$.
The inequality $q\le r$ can be obtained by multiplying the assumption by $Q^{-1}$ and $R^{-1}$ and applying the first part of the proof again.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3432263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The number of real roots of the equation
$$e^{\sin x}-e^{-\sin x}-4=0$$
Let $e^{\sin x}=y$
Then $$y-\frac 1y -4=0$$
$$y^2-4y-1=0$$
$$y=2+\sqrt 5 , 2-\sqrt 5$$
How should I solve it from here?
| Your method is fine; now observe that

*$e^{\sin x}=2+\sqrt 5 \implies \sin x=\log(2+\sqrt 5)>1$, which is not possible, and
*$e^{\sin x}=2-\sqrt 5<0$, which is not possible either since $e^{\sin x}>0$.

Therefore there are no real solutions.
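A quick numeric confirmation of both observations (my own sketch):

```python
import math

r1 = 2 + math.sqrt(5)
r2 = 2 - math.sqrt(5)
print(math.log(r1))  # ≈ 1.44 > 1, outside the range of sin
print(r2)            # ≈ -0.24 < 0, while e**sin(x) is always positive
```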
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3432487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
For rational numbers $a, b$, what is the range of $b$ such that $\lceil a + b \rfloor = \lceil a \rfloor$ holds? For rational numbers $a, b$, what is the range of $b$ such that $\lceil a + b \rfloor = \lceil a \rfloor$ holds?
Clearly, b=0 gives us the result.
What are the lower and upper bounds of $b$?
$\lceil \cdot \rfloor$ : is a rounding function that rounds a rational number to the nearest integer
The range of $a$ is $[-\frac{x}{2}, \frac{x}{2})$ for some positive integer $x$.
| For positive $a$, $\lceil a \rfloor = \lfloor a + \frac 12 \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. If $a = a_q+a_r$ where $a_q$ is the whole number part and $a_r$ is the fractional part, $\lceil a \rfloor = a_q + \lfloor a_r + \frac 12 \rfloor$. Similarly, if $b$ (also positive) is $b_q + b_r$ then
\begin{align}
\lceil a + b \rfloor & = \big \lfloor a_q+a_r + b_q + b_r + \frac 12 \big \rfloor \\
& = a_q + b_q + \big \lfloor a_r + b_r + \frac 12 \big \rfloor
\end{align}
If this is to equal $\lceil a \rfloor$ then we must have $b_q = 0$ and we want to find when $\lfloor a_r + b_r + \frac 12 \rfloor = \lfloor a_r + \frac 12 \rfloor$.
This is best broken into parts: $a_r < \frac 12$ and $a_r \ge \frac 12$. For the first of these we find that $b_r \in [0, \frac 12 - a_r)$, while for the second $b_r \in [0, \frac 32 - a_r)$.
For negative $a$, it is easier to think of $a_q< 0$ and $a_r \ge 0$, thus $-5.6 = -6 + 0.4$. The above arguments are true and we still split it into two: when $a_r< \frac 12, b\in [0, \frac 12 - a_r)$ and when $a_r \ge \frac 12, b \in [0, \frac 32 - a_r)$.
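A small script (my own sketch, using exact `Fraction` arithmetic to avoid float-rounding artifacts) spot-checking the claimed range in the first case:

```python
import math
from fractions import Fraction

def nearest(x):
    # round-half-up convention: ⌈x⌋ = ⌊x + 1/2⌋
    return math.floor(x + Fraction(1, 2))

# a_r = 0.3 < 1/2, so the claimed range is b ∈ [0, 1/2 - 0.3) = [0, 0.2)
a = Fraction(3, 10)
assert nearest(a + Fraction(19, 100)) == nearest(a)  # b = 0.19 keeps the value
assert nearest(a + Fraction(1, 5)) != nearest(a)     # b = 0.2 changes it
print("range check passed")
```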
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3432605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
question about sets and subsets using the definition
Assume $A$ and $B$ are two sets such that $Card(A)=n$ and $Card(B)=m$, with $B⊆A$. Clearly $n\ge m$.
define: $$\mathscr{P}(A:B)=\left\{X∈\mathscr{P}(A):B⊆X\right\}$$
find the number of elements of $\mathscr{P}(A:B)$
I just know $Card\left(B\right)\le Card\left(X\right)$
how I can answer the question?
does there exist any general formula?
| Since $B$ is a fixed subset of $A$ with $Card(A)=n$ and $Card(B)=m$, a set $X$ with $B⊆X⊆A$ is determined by which of the $n-m$ elements of $A\setminus B$ it contains, so
$$Card(\mathscr{P}(A:B))=2^{n-m}$$
For example if $A=\left\{1,2,3\right\}$ and $B=\left\{1,2\right\}$, then
$\mathscr{P}(A)=\left\{\left\{\right\},\left\{1\right\},\left\{2\right\},\left\{3\right\},\left\{1,2\right\},\left\{1,3\right\},\left\{2,3\right\},\left\{1,2,3\right\}\right\}$
and the members containing $B$ are $\left\{1,2\right\}$ and $\left\{1,2,3\right\}$, giving $2=2^{3-2}$.
Note that if only the cardinality $m$ is fixed and we instead count all pairs $(B,X)$ with $B⊆X⊆A$ and $Card(B)=m$, the count is
$$\sum_{k=m}^{n}{{n}\choose{k}}{{k}\choose{m}}$$
which for $n=3$, $m=2$ gives ${{3}\choose{2}}{{2}\choose{2}}+{{3}\choose{3}}{{3}\choose{2}}=6$,
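Both interpretations are easy to confirm by brute-force enumeration (my own sketch): for a fixed $B$ the number of $X$ with $B⊆X⊆A$ is $2^{n-m}$, while the sum $\sum_k \binom{n}{k}\binom{k}{m}$ counts pairs $(B,X)$ over all $m$-element subsets $B$:

```python
from itertools import combinations

A = {1, 2, 3}
n, m = len(A), 2

def subsets(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

# fixed B: number of X with B ⊆ X ⊆ A
B = {1, 2}
fixed = sum(1 for X in subsets(A) if B <= X)
print(fixed)   # 2 == 2**(n - m)

# only |B| = m fixed: number of pairs (B, X) with B ⊆ X ⊆ A
pairs = sum(1 for B2 in subsets(A) if len(B2) == m
              for X in subsets(A) if B2 <= X)
print(pairs)   # 6 == sum over k of C(n,k)*C(k,m)
```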
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3432737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to find the singular solution of $y'=\frac{2x+y}{y-x}$ $y'$ and $y$ occur linearly in the ODE
$$y'=\frac{2x+y}{y-x}.$$
yet it is a first order non-linear ODE. I can find a family solutions of this homogeneous ODE by using $y=vx \implies \frac{dy}{dx}= v+x \frac{dv}{dx}$. I want to know if there is(are) singular solution(s) of this equation.
| $$ y'=\frac{x(2+\frac{y}{x})}{x(\frac{y}{x}-1)} $$
then you can make a replacement $$ t=\frac{y}{x} \Rightarrow \frac{dy}{dx}=\frac{dt}{dx}x+t \Rightarrow \frac{dt}{dx}x+t=\frac{2+t}{t-1} $$
It remains only to separate the variables and integrate :)
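Finishing the separation (my own computation, so treat it as a sketch) yields the implicit family $2x^2+2xy-y^2=C$; an RK4 integration of the original ODE confirms this quantity is conserved along solutions:

```python
# integrate y' = (2x + y)/(y - x) with RK4 and check that 2x^2 + 2xy - y^2
# stays constant (the implicit solution obtained after separating variables)
def f(x, y):
    return (2 * x + y) / (y - x)

x, y, h = 1.0, 3.0, 1e-4
inv0 = 2 * x * x + 2 * x * y - y * y
for _ in range(10_000):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(inv0, 2 * x * x + 2 * x * y - y * y)  # both ≈ -1.0
```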
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3432861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |