| Q | A | meta |
|---|---|---|
Denoting $2\le\frac{6-4x}5\le3$ by $|mx-n|\le5$. What is the value of $|n-m|$?
If we denote solution set of the inequality $2\le\frac{6-4x}5\le3$ by
$|mx-n|\le5$, what is the value of $|n-m|$?
$1)7\qquad\qquad2)5\qquad\qquad3)21\qquad\qquad4)23$
I solved this problem with the following approach:
$$2\le\frac{6-4x}5\le3\quad\Rightarrow-\frac94\le x\le-1$$
And $|mx-n|\le5$ is equivalent to $\frac{-5+n}m\le x\le\frac{5+n}{m}$. Hence we have $\frac{-5+n}m=-\frac94$ and $\frac{5+n}m=-1$ and by solving system of equations I got $m=8$ and $n=-13$. So the final answer is $21$.
I wonder, can we solve this problem with other approaches?
| ....or... from the other direction
$|mx- n| \le 5$
$-5 < mx - n < 5$
Stretch those so the end points are only one apart
$\frac {-5}{10} < \frac {mx-n}{10} < \frac {5}{10}$.
Then shift so that the endpoints line up where we want.
$\frac {-5}{10} + 2\frac 12 < \frac {mx-n}{10}+2\frac 12 < \frac 5{10} +2\frac 12$.
We need $2<\frac m{10}x + (\frac 52 -\frac n{10}) < 3 \iff 2< \frac{6-4x}5 < 3$
.... I suppose we can give a handwavy argument for why that would require $\frac m{10} = -\frac 45$ and for $\frac 52-\frac n{10} = \frac 65$
that is to say why $a < kx + j < b \iff a< wx + v < b$ must imply $k=w$ and (if $k\ne 0$) $j=v$. I think your method of solving directly does that. I could argue that if $j\ne v$ or $k\ne w$ there will always be wiggle room to slide an $x$ so that one inequality is true and the other not. Handwavy.... but true.
=====
An astute observer might notice that once I chose the asymmetric shift of adding $2\frac 12$ to every term, I committed $m$ to equaling $-8$ and $n$ to equaling $13$ (the exact opposite of your values).
That is because as long as we have $-M < something < M$ we could just as well have $-M < -something < M$. But once we commit to one and we "shift the origin" ... well, the die is cast.
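A quick numerical sanity check of this point (my own addition, not part of the argument above): both sign choices $(m,n)=(8,-13)$ and $(m,n)=(-8,13)$ describe exactly the same set as $2\le\frac{6-4x}5\le3$, namely $-\frac94\le x\le -1$.

```python
# Both absolute-value forms match the original double inequality on a grid.
def in_original(x):
    return 2 <= (6 - 4 * x) / 5 <= 3

def in_abs_form(m, n, x):
    return abs(m * x - n) <= 5

grid = [i / 1000 for i in range(-4000, 1001)]  # covers [-4, 1]
assert all(
    in_original(x) == in_abs_form(8, -13, x) == in_abs_form(-8, 13, x)
    for x in grid
)
```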
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can we say that $\forall x\in \Bbb R$, $g(x)>0$; that $\forall x\in \Bbb R$, $g(x)<0$; or that $(\exists x\in \Bbb R)\ g(x)=0$? Let $ (a,b,c)\in \Bbb R^3 $ with $ a\ne 0 $,
$$f(x)=ax^2+bx+c$$
and
$$g(x)=f(x)+f'(x)+f''(x).$$
We assume that
$$(\forall x\in \Bbb R)\;\; f(x)>0$$
What can we conclude?
$$1. \;\;(\forall x\in \Bbb R)\;\;g(x)>0$$
$$2. \;\;(\forall x\in \Bbb R)\;\; g(x)<0$$
or
$$3. \;\; (\exists x\in \Bbb R)\;:\; g(x)=0$$
I used the fact that
$$(\forall x\in \Bbb R)\;\; f(x)>0 \implies \delta=b^2-4ac<0$$
then, I wrote $ g(x) $ as
$$g(x)=ax^2+(2a+b)x+2a+b+c$$
the discriminant is
$$\Delta=(2a+b)^2-4a(2a+b+c)$$
$$=b^2-4ac-4a^2=\delta-4a^2<0$$
I concluded that the sign of $ g(x) $ is constant, but I cannot say whether it is positive or negative.
Any idea will be appreciated.
| We can conclude $g(x)>0$ everywhere. If $f(x)>0$ then $a>0$, and since the coefficient of $x^2$ in $g(x)$ is also $a$, and the discriminant of $g$ is negative, we must have $g(x)>0$ for all $x$.
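A small numerical illustration of this answer (my own addition, with sample coefficients chosen only for the check): take $f(x)=x^2+x+1$, which is positive everywhere since $\delta=b^2-4ac=-3<0$ and $a>0$.

```python
# With a = b = c = 1: Delta = (2a+b)^2 - 4a(2a+b+c) should equal
# delta - 4a^2 (as computed in the question), and g = f + f' + f''
# should be positive everywhere since a > 0 and Delta < 0.
a, b, c = 1, 1, 1
delta = b * b - 4 * a * c
Delta = (2 * a + b) ** 2 - 4 * a * (2 * a + b + c)
assert delta < 0 and Delta == delta - 4 * a * a

g = lambda x: a * x * x + (2 * a + b) * x + (2 * a + b + c)
assert all(g(x / 10) > 0 for x in range(-1000, 1000))
```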
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Doubts about this nice integral identity with the floor function I'm trying to prove, for $f:(a,b]\to\mathbb{R}$ a continuous and strictly decreasing function with $\displaystyle \lim_{x\to a^{+}} f(x)=\infty$, that:
$$ \int_a^b (-1)^{\left \lfloor f(x) \right \rfloor}\mathrm dx=(-1)^{\left \lceil f(b) \right \rceil -1}b +2\sum_{n=\left \lceil f(b) \right \rceil}^\infty (-1)^{n}f^{-1}(n)$$
where $\left \lfloor m \right \rfloor$ and $\left \lceil m \right \rceil$ are floor and ceiling function of $m$, respectively. Intuitively, I start with the substitution $f(x)=y$ and then:
$$ \int_a^b (-1)^{\left \lfloor f(x) \right \rfloor}\mathrm dx=\int_{f(a)}^{f(b)} (-1)^{\left \lfloor y \right \rfloor}\left[f^{-1}(y)\right]'\mathrm dy= -\int_{f(b)}^{\infty} (-1)^{\left \lfloor y \right \rfloor}\left[f^{-1}(y)\right]'\mathrm dy$$
Here, I'm having trouble developing the last integral and what I know is that I need to add integrals over some interval which for me is still a mystery. Thanks for some clarification.
| Since the floor function is not continuous (let alone differentiable), it seems more promising to start with letting $x_n=f^{-1}(n)$ for all naturals $n\ge n_0:=\lceil f(b)\rceil$. Then $\{x_n\}_n$ is a sequence in $(a,b]$ and strictly decreasing to $a$.
As the integrand is piecewise constant, we have
$$ \int_{x_{n_0}}^b(-1)^{\lfloor f(x)\rfloor}\,\mathrm dx = \int_{x_{n_0}}^b(-1)^{\lfloor f(b)\rfloor}\,\mathrm dx =(b-x_{n_0})(-1)^{\lfloor f(b)\rfloor}
=(-1)^{n_0}(x_{n_0}-b)$$
and for $n\ge n_0$,
$$ \int_{x_{n+1}}^{x_{n}}(-1)^{\lfloor f(x)\rfloor}\,\mathrm dx=\int_{x_{n+1}}^{x_{n}}(-1)^{n}\,\mathrm dx=(-1)^n(x_{n}-x_{n+1}).$$
By summing and telescoping,
$$\begin{align}\int_{x_{N+1}}^{b}&=(-1)^{n_0}(x_{n_0}-b)+\sum_{n=n_0}^N(-1)^n(x_{n}-x_{n+1})\\
&=(-1)^{n_0+1}b+(-1)^{n_0}x_{n_0}+\sum_{n=n_0}^N(-1)^nx_{n}+\sum_{n=n_0+1}^{N+1}(-1)^nx_{n}\\
&=(-1)^{n_0+1}b+2\sum_{n=n_0}^N(-1)^nx_{n}+(-1)^{N+1}x_{N+1}
\end{align}$$
Let's assume $a=0$ for the moment. Then if we take the limit as $N\to\infty$ of the right hand side above, we arrive at
$$ (-1)^{n_0+1}b+2\sum_{n=n_0}^\infty(-1)^nx_{n}$$
where the series converges by the Leibniz criterion, while at the same time the left hand side converges to the improper integral from $a$ to $b$, as desired.
In the general case when $a\ne 0$, the series does not converge, while clearly the improper integral does. We can adjust for that: Define $g\colon (0,b-a]\to \Bbb R$, $g(x)=f(x+a)$, do the above with $g$. We can then express the result in terms of $f$ and note that the original claim has to be adjusted to have $(-1)^n(f^{-1}(n)-a)$ instead of $(-1)^nf^{-1}(n)$ as series summand.
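As a numerical sanity check of the identity (my own addition): take $f(x)=1/x$ on $(0,1]$, so $a=0$, $b=1$, $f^{-1}(n)=1/n$ and $\lceil f(b)\rceil=1$; the claimed right-hand side is $(-1)^0\cdot 1+2\sum_{n\ge1}(-1)^n/n = 1-2\log 2$.

```python
import math

# For f(x) = 1/x the integrand equals (-1)^n on (1/(n+1), 1/n], so the
# integral is the alternating sum of those interval lengths; compare it
# with the claimed closed form 1 - 2*log(2).
N = 100_000
lhs = sum((-1) ** n * (1 / n - 1 / (n + 1)) for n in range(1, N))
rhs = 1 - 2 * math.log(2)
assert abs(lhs - rhs) < 1e-6
```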
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability that a binomial random variable with $n=10.8\cdot 10^6$ and $p=1.1\cdot 10^{-5}$ is larger than $52$ Given $n=10.8\cdot 10^{6}$ independent identically distributed (i.i.d.) random variables $$X_1,\dots, X_n\sim\text{Bernoulli}(p=11\cdot10^{-6}),$$ what is the following probability? $$\mathsf P \left( X_1 + \cdots + X_n \ge 52 \right)$$
Motivation
Warning: the following contains material that may cause discomfort to some readers.
According to the United Nations Office on Drugs and Crime 2015 crime statistics, the rate of police recorded instances of sexual intercourse without valid consent in Greece in the year 2015 was $1.1$ per $100'000$ people and the population of Greece is around $10.8\cdot 10^6$.
| The expected number of occurrences would be $10800000\cdot1.1/100000\approx119$. So having at least 52 is very, very likely. In fact, R tells me:
> binom.test(52,10800000,1.1/100000, alternative="greater")
Exact binomial test
data: 52 and 10800000
number of successes = 52, number of trials = 10800000, p-value = 1
alternative hypothesis: true probability of success is greater than 1.1e-05
…
Somehow, this is confusing, since you said (before editing your question, if I remember correctly) that there had been an increase in the number of occurrences. This scenario suggests that there has actually been a very significant drop:
> binom.test(52,10800000,1.1/100000)
Exact binomial test
data: 52 and 10800000
number of successes = 52, number of trials = 10800000, p-value = 8.528e-12
alternative hypothesis: true probability of success is not equal to 1.1e-05
…
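For readers without R, here is a rough pure-Python cross-check of the first number (my own addition; it uses a Poisson approximation to the binomial, a standard step for large $n$ and small $p$):

```python
import math

n, p = 10_800_000, 1.1e-5
lam = n * p  # expected number of occurrences, about 118.8

# Poisson approximation: P(X >= 52) = 1 - P(X <= 51), with the CDF terms
# e^{-lam} lam^k / k! computed in log space for numerical safety.
p_le_51 = sum(
    math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1)) for k in range(52)
)
p_ge_52 = 1 - p_le_51
assert abs(lam - 118.8) < 1e-6
assert p_ge_52 > 0.999999  # consistent with the "p-value = 1" output above
```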
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Expected value of $\overline{ABC} \times \overline{DEF}$ Here is a question from HMMT:
https://hmmt-archive.s3.amazonaws.com/tournaments/2013/nov/team/solutions.pdf
The digits $1$, $2$, $3$, $4$, $5$, $6$ are randomly chosen (without replacement) to form the three-digit numbers $M = \overline{ABC}$ and $N = \overline{DEF}$. For example, we could have $M = 413$ and $N = 256$. Find the expected value of $M \cdot N$.
Here's what I did. Each digit on average is going to be $(1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5$, so the expected value is $(100(3.5) + 10(3.5) + 3.5)^2 = 150932.25$. However, the answer at the link above is $143745$. What did I do wrong? Did I overcount something?
| 143745 is correct. Your assumption is wrong. Just see this with the set $\{3, 4\}$: its mean is $3.5$, but $34\cdot 43 = 1462$ is not equal to $(35 + 3.5)^2 = 38.5^2 = 1482.25$.
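The exact expectation can also be confirmed by brute force over all $6! = 720$ orderings (a check I added; not part of the original answer):

```python
from itertools import permutations

# Enumerate every assignment of the digits 1..6 to ABCDEF and average M*N.
products = []
for a, b, c, d, e, f in permutations(range(1, 7)):
    M = 100 * a + 10 * b + c
    N = 100 * d + 10 * e + f
    products.append(M * N)

expected = sum(products) / len(products)
assert len(products) == 720
assert expected == 143745.0  # the HMMT answer
```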
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
For which value of $t \in \mathbb R$ does the equation $x^2 + \frac{1}{\sqrt{\cos t}}2x + \frac{1}{\sin t} = 2\sqrt{2}$ have exactly one solution? For which value of $t \in \mathbb R$ does the equation $x^2 + \frac{1}{\sqrt{\cos t}}2x + \frac{1}{\sin t} = 2\sqrt{2}$ have exactly one solution?
Here $t \neq n\pi, t \neq (2n+1)\frac{\pi}{2}$
Therefore , for the given equation to have exactly one solution we should have :
$(\frac{2}{\sqrt{\cos t}})^2 -4\cdot(\frac{1}{\sin t} - 2\sqrt{2}) = 0 $
$\Rightarrow \frac{4}{\cos t} - 4 (\frac{1}{\sin t} - 2\sqrt{2}) = 0 $
$\Rightarrow \sin t -\cos t +2 \sqrt{2}\sin t\cos t = 0 $
$\Rightarrow \sqrt{2}( \frac{1}{\sqrt{2}}\sin t - \frac{1}{\sqrt{2}}\cos t) = -2\sqrt{2} \sin t\cos t$
$\Rightarrow \sqrt{2}(\cos(\pi/4)\sin t -\sin(\pi/4)\cos t) = -\sqrt{2}\sin2t $ [Using $\sin x\cos y -\cos x\sin y = \sin(x-y)$]
$\Rightarrow \sqrt{2}\sin(\frac{\pi}{4}-t) =-\sqrt{2}\sin2t$
$\Rightarrow \sin(\frac{\pi}{4}-t) =-\sin2t $ [ Using $-\sin x = \sin(-x)$ and comparing R.H.S. with L.H.S. ]
$\Rightarrow \frac{\pi}{4}-t = -2t $
$\Rightarrow t = - \frac{\pi}{4}$
Is this the correct answer? Please suggest. Thanks.
| Note:
$$\sin t\cos \frac{\pi}{4}-\cos t \sin \frac{\pi}{4}=\sin\left(t-\frac{\pi}4\right)$$
So, you must have
$$\sin\left(t-\frac{\pi}{4}\right) =-\sin2t$$
Then you can not simply equate the arguments, because you must remember the period.
Here is how you can continue:
$$\sin\left(t-\frac{\pi}{4}\right)+\sin 2t=0 \stackrel{\sin A + \sin B=2\sin \frac{A+B}{2}\cos \frac{A-B}{2}}{\Rightarrow} \\
2\sin{\left(\frac{3t}2-\frac{\pi}{8}\right)}\cos\left(-\frac{t}{2}-\frac{\pi}{8}\right)=0 \Rightarrow \\
\sin\left(\frac{3t}2-\frac{\pi}{8}\right)=0 \quad \text{or} \quad \cos\left(-\frac{t}{2}-\frac{\pi}{8}\right)=0 \Rightarrow \\
\frac{3t}2-\frac{\pi}{8}=\pi n \quad \text{or} \quad -\frac{t}{2}-\frac{\pi}{8}=-\frac{\pi}{2}+\pi n \Rightarrow \\
t=\frac{\pi}{12}+\frac{2\pi n}{3} \quad \text{or} \quad t=\frac{3\pi}4-2\pi n,\quad n\in \Bbb Z.$$
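A quick numerical check (my own addition) that both solution families really satisfy $\sin\left(t-\frac{\pi}{4}\right)+\sin 2t=0$:

```python
import math

def residual(t):
    # should vanish for every solution t
    return math.sin(t - math.pi / 4) + math.sin(2 * t)

for n in range(-5, 6):
    t1 = math.pi / 12 + 2 * math.pi * n / 3
    t2 = 3 * math.pi / 4 - 2 * math.pi * n
    assert abs(residual(t1)) < 1e-9
    assert abs(residual(t2)) < 1e-9
```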
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If $|\frac{c_{n+1}}{c_n}|\leq1+\frac{a}{n}$, where $a<-1$, $a$ does not depend on $n$, then the series $\sum_{n=1}^\infty c_n$ converges absolutely Question: If $|\frac{c_{n+1}}{c_n}|\leq1+\frac{a}{n}$, where $a<-1$ and $a$ does not depend on $n$, then the series $\sum_{n=1}^\infty c_n$ converges absolutely.
My attempt: Let $\epsilon>0$. To show $\sum c_n$ converges absolutely, we want to show that there exists an $N\in\mathbb{N}$ such that $|\frac{c_{n+1}}{c_n}|$ converges uniformly to some constant for $n>N$. Since $a<-1$, there is some $N_0$ such that $||\frac{c_{n+1}}{c_n}|-1|<\epsilon$ whenever $n>N_0$. Thus, $|\frac{c_{n+1}}{c_n}|$ converges uniformly (to $1^-$), hence $\sum c_n$ converges absolutely.
I actually asked this question a little over a year ago here: Showing a series converges absolutely and got a couple neat answers, but I was wondering if this would be a more "direct" (I don't know if that is the right word) of doing it. Or, is there something I messed up? Any help is, as always, greatly appreciated! Thank you.
| If
$$ \left|\frac{c_{n+1}}{c_n}\right| \leq 1-\frac{k}{n}\qquad\text{with }k>1 $$
then
$$|c_{n+1}|\leq |c_n| e^{-k/n}\leq |c_1|\exp\left(-k H_n\right)\leq |c_1|\exp(-k\log n)=\frac{|c_1|}{n^k}$$
and $\sum_{n\geq 1}\frac{1}{n^k} = \zeta(k)< +\infty$ for any $k>1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4213912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Come up with a formula for n years that has previous years result as input for next year Look at the following sequence:
$\text{year }1 = 2x-0.5x = 1.5x$
$\text{year }2 = 2(1.5x) - 0.5(1.5x) = 2.25x$
$\text{year }3 = 2(2.25x) - 0.5(2.25x)$
How do I come up with a generic formula to continue this calculation for $n$ years?
| According to the message I got from the question, $\{y_n\}$ is a geometric sequence, which behaves like:
$$y_n=2y_{n-1}-0.5y_{n-1}=1.5y_{n-1}.$$
$$\dfrac{y_n}{y_{n-1}}=1.5$$
$$y_n = 1.5y_{n-1} = (1.5)^2y_{n-2}=\dots=(1.5)^{n}y_0 = (1.5)^n x.$$
I don't know if I misunderstood what you mean. If so, I'll be super sorry about that!
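A tiny check of the closed form against the recurrence (my addition; the starting value $x=2$ is arbitrary, chosen just for the test):

```python
# Iterate y_n = 2*y_{n-1} - 0.5*y_{n-1} and compare with y_n = 1.5^n * x.
x = 2.0
y = x
for n in range(1, 11):
    y = 2 * y - 0.5 * y  # "double, then subtract half" = multiply by 1.5
    assert abs(y - 1.5 ** n * x) < 1e-9
```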
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4214047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\sqrt{7\sqrt{7\sqrt{7\sqrt{7\,\cdots}}}}$. How can we know $0$ is an extraneous solution? The task is to find the the value of this expression:
$$\sqrt{7\sqrt{7\sqrt{7\sqrt{7\,\cdots}}}}$$
We can assume
$$\sqrt{7\sqrt{7\sqrt{7\sqrt{7\,\cdots}}}} = x$$
Then we replace the nested part with $x$
$$\sqrt{7x} = x$$
Then square both sides and solve the equation
$$\begin{align}
7x &= x^2 \\[4pt]
x^2 - 7x &= 0 \\[4pt]
x(x - 7) &= 0 \\[1em]
x_1 = 7,\quad& x_2 = 0
\end{align}$$
The problem is I assume $0$ is not a valid solution, but I can't explain why. It almost makes sense it could be $0$ as the numbers get progressively smaller.
Can someone explain how can we reject the solution $0$?
Edit: I think I understand it better now. But what I'm most confused about is why we get the invalid solutions here in the first place.
Could it be that the problem is in the beginning where we could write the equation in different ways, like $\sqrt{7\sqrt{7x}} = x ?$ This way we would get 4 different solutions.
| Extraneous solutions are usually a result of an algebraic manipulation of an equation. In this case, by squaring the equation, you are forcing it to have “two” solutions. This is the reason why it pays to check if solutions make sense.
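One way to see it concretely (my own addition): the nested radical is the limit of the iteration $x_{k+1}=\sqrt{7x_k}$, whose innermost value $\sqrt 7$ is already positive. Numerically, every positive seed is driven to $7$; only the seed $0$ stays at $0$:

```python
import math

# From any positive seed the iteration x -> sqrt(7x) converges to 7;
# the fixed point 0 is only reached from the seed 0 itself.
for seed in (0.001, 1.0, math.sqrt(7), 100.0):
    x = seed
    for _ in range(200):
        x = math.sqrt(7 * x)
    assert abs(x - 7) < 1e-9

x = 0.0
for _ in range(50):
    x = math.sqrt(7 * x)
assert x == 0.0
```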
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4214220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
All Hilbert spaces are isometric to $l^2(E)$ - how? I was reading about Hilbert spaces and came across this line on Wikipedia:
By choosing a Hilbert basis (i.e., a maximal orthonormal subset of
$L^2$ or any Hilbert space), one sees that all Hilbert spaces are
isometric to $ℓ^2(E)$, where $E$ is a set with an appropriate
cardinality.
My questions are:
*What does the $E$ stand for? Is it the basis of $l^2$?
*What is meant by "appropriate cardinality"?
*Why is $l^2$ isometric to any other Hilbert space? Yes, the norm on $l^2$ is square root of sum of squares. But how do we know the isometry applies, even when some Hilbert spaces have elements with n coordinates, while $l^2$ has infinite coordinates? (Because elements are sequences and each sequence is infinite - has infinite "coordinates").
Thank you very much for your insights.
| Every Hilbert space $H$ admits an orthonormal basis $\{v_i : i \in E \}$. Then $H$ is isometric to $\ell^2(E)$.
EDIT: The $E$ is dependent upon the Hilbert space. Moreover, if $E, F$ are two abstract sets, then $\ell^2(E)$ and $\ell^2(F)$ are isometric if and only if $E, F$ are of the same cardinality. The point of this question is that every Hilbert space can be represented as $\ell^2$ over an (essentially) unique set $E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4214396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
$(I-A)^k=0 \text{ implies that } \exists A^{-1} \text{ s.t. }AA^{-1}=I $ I think this proposition is right. If this is not right, could you provide a counter example? However, this is definitely right for the $\mathbb{R}^3 $ case. Here is how I proved it. Is it right and if so rigorous? Even if yes, are there more ways to prove this, I'm really curious. I put two ways, I'm not sure if either is right, or maybe one of them is rigorous and the other one is not. Could you please point out flaws in the proofs if the idea is right but it is not rigorously explained? I'm new to proofs and it's summer so I can't annoy my professors. Thanks.
$\\(I-A)^k=0 \\
\det(I-A)^k=0 \Rightarrow \det(I-A)=0 \\
\text{Therefore there exists a non-zero vector x, }\\
(2) \quad(I-A)x=Ix-Ax=0 \\
\text{There exists non trivial x s.t. } Ax=\lambda_{2}x \\
\text{So now } (I-A)x=x-\lambda_2x=(1-\lambda_2)x \\
\text{Multiplying both sides by (A-I) k times, we get } 0=(1-\lambda)^kx \\
\text{Since x is non-trivial, }1-\lambda=0 \Rightarrow \lambda=1 \\
\therefore \text{A has eigenvalue 1 of multiplicity k, and so A is invertible}
$
| Let's try this with minimal polynomial argument:
Consider $p(x)=(1-x)^k$. Then $A$ is a matrix which satisfies the polynomial $p(x)$. Now the minimal polynomial of $A$ has to be a divisor of $p(x)$ (this fact can be easily proven using the division algorithm). So the minimal polynomial of $A$ has to be of the form $m(x)=(1-x)^r$ where $r\leq k$; that is, it has the single root $1$, with multiplicity $r$. Now the roots of the minimal polynomial of $A$ are exactly the eigenvalues of $A$. So $1$ is the only eigenvalue of $A$. In particular, $0$ is not an eigenvalue of $A$, hence $A$ is invertible.
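To make this concrete, here is a tiny $2\times 2$ illustration (my own addition, not part of the argument): $A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ satisfies $(I-A)^2=0$ and is invertible.

```python
# N = I - A is nilpotent (N^2 = 0), while det A = 1 != 0, so A is invertible.
A = [[1, 1], [0, 1]]
N = [[1 - A[0][0], 0 - A[0][1]],
     [0 - A[1][0], 1 - A[1][1]]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert matmul(N, N) == [[0, 0], [0, 0]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det_A == 1  # nonzero determinant: A is invertible
```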
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4214655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Compute the dominant eigenvalue and its corresponding eigenvector of a sparse transition matrix I have a sparse $N\times N$ transition matrix (asymmetric), in which most entries are zero. I can use scipy.linalg.eig to compute all eigenvalues and eigenvectors and then find the dominant eigenvalue and its corresponding eigenvector. However, if $N$ is very large, this is computationally expensive.
Is there a more time- and space-efficient method for computing the dominant eigenvalue and its eigenvector of a sparse transition matrix?
| Yes---use the power iteration method. The Wikipedia article has more details, including a Python implementation.
The power iteration method requires only taking matrix-vector products, and so remains efficient for large, sparse matrices. Note though that it will find the largest-magnitude eigenvalue and its corresponding eigenvector (and will have numerical issues if the spectral gap between the two largest-magnitude eigenvalues is small); if you want the most-positive eigenvalue of an indefinite matrix instead, you will need to modify the algorithm a bit.
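For illustration, here is a minimal pure-Python sketch of power iteration (my own addition; for a real sparse matrix you would keep the same loop but compute the matrix-vector product through a sparse format, or simply call something like `scipy.sparse.linalg.eigs(A, k=1)`):

```python
def power_iteration(A, iters=500):
    """Return (dominant eigenvalue magnitude estimate, eigenvector estimate).

    A is a dense list-of-lists here for simplicity; only the
    matrix-vector product below changes for a sparse representation.
    """
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)  # tends to |dominant eigenvalue|
        v = [x / lam for x in w]
    return lam, v

# Example: the dominant eigenvalue of diag(2, 1) is 2.
lam, v = power_iteration([[2.0, 0.0], [0.0, 1.0]])
assert abs(lam - 2.0) < 1e-9
```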
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4214768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Example of a function $f$ such that $f\cdot r\in L^2$ and $f'\cdot r\in L^2$ but $f'\cdot r^2\not\in L^2$ Let $\newcommand{\R}{\mathbb R}\R_+\overset{\text{Def.}}=[0,\infty[$. I am looking for an explicit example of a smooth function $f:\R_+\to\R$ such that all of the following hold:
*$f(r)\cdot r\in L^2(\R_+)$;
*$f'(r)\cdot r \in L^2(\R_+)$;
*$f'(r)\cdot r^2\color{red}{\not\in} L^2(\mathbb R_+)$.
I have tried an Ansatz of the type $f(r)=(r+1)^{a}$ for some $a\in\mathbb R\setminus\{0\}$, however, in that case, if $f(r)\cdot r\in L^2(\R_+)$, then $a<-2$ and $$f'(r)\cdot r^2 = a (r+1)^{a-1}\cdot r^2,$$
which is contained in $L^2(\R_+)$.
| Hint: Consider interpolating between $0$ and $r^a\sin(r^b)$, some appropriate $a,b$ and appropriate subdomains of $\mathbb{R}_+$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4214894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
SDE for $Y_t = \frac{1}{Z_t}$ where $Z_t =\exp \left(\int_0^t X_u \, dW_u-\frac{1}{2}\int_0^t X^2_u \, du \right)$ The problem is the following (Problem 3.10 from Karatzas/Shreve):
Let $Z_t = \exp \left(\int_0^t X_u \, dW_u-\frac{1}{2}\int_0^t X^2_u \, du \right)$. Define $Y_t=1/Z_t$. Show the following stochastic differential equation holds
\begin{align*}
dY_t=Y_tX_t^2\,dt-Y_tX_t\,dW_t, \quad Y_0=1
\end{align*}
where we have assumed $\int_0^t X_u^2 \, du<\infty$ almost surely for all $0<t<\infty$, and $W_t$ is the standard Brownian motion.
My questions are:
*How can we prove $Y_t$ is a semi-martingale?
*For the differential equation, this is my try (in which I assume $Y_t$ is a semi-martingale):
\begin{align*}
Y_tZ_t=1 \ \Longrightarrow Z_t\,dY_t + Y_t\,dZ_t + d\langle Y_t, Z_t \rangle = 0
\end{align*}
Since $Y_tZ_t=1$, we have $\langle Y_t, Z_t\rangle = 0$. Also, from the textbook, it is shown $dZ_t= Z_tX_t \,dW_t$
So from the above we have
\begin{align*}
Y_tZ_t \,dY_t = -Y_t^2 \, dZ_t \ \Longleftrightarrow\ dY_t = -Y_t^2\,dZ_t = -Y_tX_t\,dW_t
\end{align*}
since $dZ_t = Z_tX_t\,dW_t$ and $Z_tY_t=1$. However, it seems the $dt$ term is missing.... I can't really see what is wrong in the calculation though. Does anyone have any comments?
| The issue with your solution is that $Y_t Z_t = 1$ does not imply that $\langle Y_t, Z_t \rangle = 0$ for the covariation. See the answer by @ChristopherK for a computation of $\langle Y_t, Z_t \rangle$. Here's a way to solve your original problem.
Let $$R_t = \underbrace{\int_0^t X_u dW_u}_{(\star )} - \underbrace{\frac{1}{2}\int_0^t X_u^2 du}_{(\star \star)}$$ Note that with the assumption that $\int_0^t X_u^2 du < \infty$, the process $R_t$ is well-defined. Moreover, $R_t$ is a semi-martingale, because $(\star)$ is a local martingale and $(\star \star)$ is a process of bounded variation.
Let $g(x) = e^{-x}$ and observe that $Y_t = g(R_t)$. Itô's lemma implies that the class of semi-martingales is stable under the application of $C^2$ maps, which $g$ clearly is. That solves (1).
To solve (2), observe that $g_x(x) = - e^{-x}, g_{xx}(x) = e^{-x}$. Further observe that the dynamics of $R_t$ can be written as $$dR_t = X_t dW_t - \frac{1}{2} X_t^2 dt$$ By Itô's lemma:
$$\begin{align*}
dY_t &= g_x(R_t)(dR_t) + \frac{1}{2} g_{xx}(R_t)(dR_t)^2 \\
&= - \exp (-R_t) \left( X_t dW_t - \frac{1}{2}X_t^2 dt\right) + \frac{1}{2} \exp(-R_t) X_t^2 dt \\
&=\exp (-R_t) X_t^2 dt - \exp (-R_t) X_t dW_t \\
&=Y_t X_t^2 dt - Y_tX_tdW_t
\end{align*}$$
As required.
As an additional technical point, setting $Y_t = \frac{1}{Z_t}$ is a perfectly valid thing to do because $P \left(\inf_{0 \leq s \leq t} Z_s > 0\right) = 1 $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4215066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Proof of Cartan's theorem in Do Carmo's book I would like to know why it is necessary to prove that $\widetilde{J}(l) = df_q(v) = df_q(J(l))$ to prove the Cartan's theorem below.
Thanks in advance!
| I realized that the equality in the post is necessary after observing that $J(l) = v$, $\widetilde{J}(l) = df_q(v)$ and $|\widetilde{J}(l)| = |J(l)|$; then $|df_q(v)| = |v|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4215346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve $\int x^{-1/4} \sqrt{1+x^{1/4}}$ using the substitution $u = 1 + x^2$ I have this question in calculus but can't find a way to solve it the "intended way".
The question is this:
The integral ${\displaystyle \int x^{-1/4}\sqrt{1+x^{1/4}}} \ dx $ can be solved utilizing the substitution $u=1+x^2$. After the change of variables, we have an integral ${\displaystyle \int f(u) \ du}$ which results in $F(u) +C$. Determine the primitive function $F(u)$.
Answer: $\dfrac{8}{7}u^{\frac{7}{2}}-\dfrac{16}{5}u^{\frac{5}{2}}+\dfrac{8}{3}u^{\frac{3}{2}}$
The thing is that I can't find a way to solve this using the given substitution.
I did find the answer by solving the integral using a different substitution, $t=x^{1/4}$, but I really don't know how to approach this exercise using the given substitution.
| See what I have done.
Put $t^2 = x^{1/4} +1$ and rewrite the original integral in terms of $t$: the integrand becomes $$8t^2(t^2-1)^2\,dt,$$ and integrating gives $$\frac{8}{7}t^7-\frac{16}{5}t^5+\frac{8}{3}t^3+C.$$ Comparing with your stated answer in terms of $u$, we must have $t^2=u$, i.e. $u=x^{1/4}+1$; but the substitution given in the problem is $u=1+x^2$.
Therefore the substitution stated in the problem appears to be a typo: since $x=(t^2-1)^4$, the substitution $u=1+x^2$ would correspond to $u=(t^2-1)^8+1$, which does not produce the stated answer.
Thanks.
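As an extra numerical cross-check of the antiderivative (my own addition): differentiate $F(x)=\frac87t^7-\frac{16}5t^5+\frac83t^3$ with $t=\sqrt{1+x^{1/4}}$ and compare against the integrand.

```python
import math

def F(x):
    t = math.sqrt(1 + x ** 0.25)
    return 8 / 7 * t ** 7 - 16 / 5 * t ** 5 + 8 / 3 * t ** 3

def integrand(x):
    return x ** -0.25 * math.sqrt(1 + x ** 0.25)

# The central-difference derivative of F should reproduce the integrand.
for x in (0.5, 1.0, 2.0, 5.0):
    h = 1e-6
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-5
```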
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4215518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Relationship between column spaces of blocks of a Positive Semi-definite Matrix If $A$ is positive semi-definite matrix, partitioned, $A=\begin{bmatrix} B & C\\
C^* & D\end{bmatrix}$ with $B$ square, then prove that $\mathcal{C}(C^*)\subseteq\mathcal{C}(D)$.
I think we may instead show that $\mathcal{N}(D)\subseteq\mathcal{N}(C)$. So I tried as follows:
If $Dy=0$, then let $x=\begin{bmatrix} z\\ y\end{bmatrix}$. Now $x^*Ax=z^*Bz+y^*C^*z+z^*Cy+y^*Dy=z^*Bz+y^*C^*z+z^*Cy\geq 0$ (using $Dy=0$). I am stuck here. Can I somehow proceed to show that $Cy=0$ too? Or is there any other way of attacking this problem? Help needed.
Here $\mathcal{C}(.)$ is the column space and $\mathcal{N}(.)$ is the null space.
Thanks in advance.
| Your approach is correct. We can continue it as follows.
As you have shown, we have
$$
x^*Ax = z^*Bz + y^*C^*z + z^*Cy = z^*Bz + 2\operatorname{Re}(\langle z,Cy \rangle).
$$
Now, suppose for the purpose of contradiction that $Cy \neq 0$. Let $z = -t \cdot Cy$ with $t > 0$. We find that
$$
x^*Ax = ((Cy)^*B(Cy)) \cdot t^2 - 2 \|Cy\|^2 \cdot t.
$$
Show that there must be a value of $t$ for which $x^*Ax$ is negative, contradicting the positive semidefiniteness of $A$. Thus, $\mathcal N(C) \supseteq \mathcal N(D)$, which was what we wanted.
Another approach:
Because $A$ is positive semidefinite, the principal submatrix $D$ must be positive semidefinite. Sylvester's law of inertia guarantees the existence of an invertible matrix $P$ such that $PDP^{*}$ has the block-form
$$
PDP^* = \pmatrix{I_r & 0\\0 & 0},
$$
where $I_r$ is the identity matrix of size $r$ (and $r$ is the rank of $D$). Note that because $A$ is positive semidefinite, the conjugate matrix
$$
\pmatrix{I & 0\\ 0 & P} \pmatrix{B & C\\ C^* & D} \pmatrix{I & 0\\0 & P^*} = \pmatrix{B & CP^*\\PC^* & PDP^*}
$$
must be positive semidefinite. Partition the matrix $PC^*$ into blocks
$$
PC^* = \pmatrix{G_1\\G_2},
$$
where $G_1$ has $r$ rows. We now see that the block-matrix
$$
\pmatrix{B & G_1^* & G_2^*\\
G_1 & I_r & 0\\
G_2 & 0 & 0
}
$$
is positive semidefinite. It follows that the principal submatrix
$$
\pmatrix{B & G_2^*\\ G_2 & 0}
$$
is positive semidefinite. It follows that $G_2$ must be zero: otherwise, this matrix has a principal submatrix of the form
$$
\pmatrix{a & \bar b\\b & 0}
$$
with $b \neq 0$, which has negative determinant and therefore fails to be positive semidefinite. Conclude that $\mathcal C(PC^*) \subseteq \mathcal C(PDP^*)$, from which it follows that $\mathcal C(C^*)\subseteq \mathcal C(D)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4215686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expected value of $S_t$ where $dS_t=(\mu S_t+a)dt + (\sigma S_t +b)dW_t$ I have the stochastic differential equation: $$dS_t=(\mu S_t+a)dt + (\sigma S_t +b)dW_t$$
where $W_t$ is a Wiener process with $S_0 > 0$ and $\mu, \sigma, a, b \in \mathbb{R}$.
I have found the solution of this equation: $$S_t= S_0\beta^{-1}_t+(a-\sigma b)\int_0^t \frac{ \beta_s} {\beta_t}ds + b\int_0^t \frac{ \beta_s} {\beta_t}dW_s$$
using an integrating factor of $\beta_t = \exp(-(\mu-\frac{1}{2}\sigma^2)t - \sigma W_t)$ and proved that it is a unique solution of this SDE.
I want to now find the expected value of this solution, so $\mathbb{E}(S_t) $ for all $t\geq 0$, with $- \frac{1}{2} \sigma < \mu < 0$. Could someone just point me in the right direction?
| Following @KurtG's comment, you may observe that $\beta_t M_t = \int_0^t \beta_s dW_s$, being the Itô integral of an $L^2(\Omega \times [0,t])$ process, is a martingale, so that $E(\beta_t M_t) = 0$.
Now observe that $$E(\beta_s/\beta_t) = \exp \left( (\mu - 0.5 \sigma^2 ) (t-s)\right) E \exp \left( \sigma (W_t - W_s) \right) = \exp (\mu (t-s)) $$
where we have used $E \exp \left( \sigma (W_t - W_s) \right) = \exp \left( \frac{1}{2} \sigma^2 (t-s) \right)$, as $\sigma (W_t - W_s) \sim N(0, \sigma^2(t-s))$. Now, use Tonelli's theorem to get: $$E(S_t) = S_0 E(\beta_t^{-1})+(a-\sigma b) \int_0^t E(\beta_s / \beta_t) ds$$
Can you take it from here?
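For completeness, here is a sketch of the remaining steps (added for reference; it is a spoiler, so try it yourself first). Since $\beta_t^{-1}=\exp\left((\mu-\frac{1}{2}\sigma^2)t+\sigma W_t\right)$ and $\sigma W_t \sim N(0,\sigma^2 t)$,
$$E(\beta_t^{-1}) = e^{(\mu-\frac{1}{2}\sigma^2)t}\, E e^{\sigma W_t} = e^{(\mu-\frac{1}{2}\sigma^2)t} e^{\frac{1}{2}\sigma^2 t} = e^{\mu t},$$
and, for $\mu \neq 0$,
$$\int_0^t E(\beta_s/\beta_t)\, ds = \int_0^t e^{\mu(t-s)}\, ds = \frac{e^{\mu t}-1}{\mu},$$
so that
$$E(S_t) = S_0 e^{\mu t} + \frac{a-\sigma b}{\mu}\left(e^{\mu t}-1\right).$$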
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4215951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Are all finite schemes over a field flat? If $f:X\to S$ is a morphism of schemes, and $S$ is locally Noetherian, then $f$ being finite and flat is equivalent to the sheaf $f_*\mathcal{O}_X$ being a finite locally free $\mathcal{O}_S$-module. (See https://stacks.math.columbia.edu/tag/02K9)
If $S=\text{Spec} k$, then this essentially asks that $\Gamma(X,\mathcal{O}_X)$ is a finite free module over $k$. All modules over a field are free, so as long as $f$ is a finite map this should hold.
But then this means that every finite scheme over a field is flat. This gives me pause because I have often seen people talking about finite flat group schemes. I understand that group schemes over a field are not the only group schemes of interest, but I haven't seen anyone claim or imply somehow that finite group schemes are automatically flat over a field, so I just want to be sure I haven't made a mistake or overlooked anything.
| Indeed, all schemes over a field are flat. This is immediate from the definition: to check whether $f:X\to\operatorname{Spec} k$ is flat, you have to check whether the restriction to affine open subschemes $\operatorname{Spec}A\to\operatorname{Spec} k$ is flat, which just means that $A$ is flat as a $k$-module. But since $k$ is a field, every $k$-module is free, and in particular flat.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4216065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing ${\sin mx\over\sin x}=(-4)^{(m-1)/2}\prod_{j=1}^{(m-1)/2}\left(\sin^2x-\sin^2{2\pi j\over m}\right)$ for odd $m>0$ (from Serre's "Arithmetic") On pages 9 and 10 of Serre's A Course in Arithmetic, there is a lemma (towards a proof of the quadratic reciprocity law) stating that
for any positive odd integer $m$,
$${\sin mx\over\sin x}=(-4)^{(m-1)/2}\prod_{j=1}^{(m-1)/2}\left(\sin^2x-\sin^2{2\pi j\over m}\right)$$
Serre claims that this result is "elementary", saying that one should start by proving that $\sin(mx)/\sin(x)$ is a polynomial of degree $(m-1)/2$ in $\sin^2 x$ with $(m-1)/2$ roots given by $\sin^2(2\pi j/m)$ for $1\le j\le (m-1)/2$, then compare coefficients of
$e^{i(m-1)x}$ on both sides to get the factor of $(-4)^{(m-1)/2}$.
I was not able to follow this outline, as I'm not exactly sure which sine identity to start with. I suspect Serre would like me to expand
$${\sin mx\over \sin x} = {e^{imx}-e^{-imx}\over e^{ix} - e^{-ix}},$$
since he mentions coefficients of $e^{i(m-1)x}$. So I was thinking of performing induction over all odd $m$ (the case $m=1$ is easy), but have not been able to get the computations right. If anyone would be able to point me in the right direction, I'd very much appreciate it!
| A direct proof.$\renewcommand\Re{\operatorname{Re}}$
Let $z=e^{-2ix}$ and $w=e^{2\pi i/m}$.
First note that $z\bar z=w\bar w=1$ and
\begin{align}
\{w^k:1\leq k\leq m-1\}
&=\{w^k:1\leq k\leq(m-1)/2\}\cup\{w^{-k}:1\leq k\leq(m-1)/2\}\\
&=\{w^{2k}:1\leq k\leq(m-1)/2\}\cup\{w^{-2k}:1\leq k\leq(m-1)/2\}
\end{align}
Then
\begin{align}
{\sin mx\over \sin x}
&= {e^{imx}-e^{-imx}\over e^{ix} - e^{-ix}}\\
&=e^{i(m-1)x} {1-e^{-2imx}\over 1 - e^{-2ix}}\\
&=e^{i(m-1)x}\frac{z^m-1}{z-1}\\
&=e^{i(m-1)x}\prod_{k=1}^{m-1}(z-w^k)\\
&=e^{i(m-1)x}\prod_{k=1}^{(m-1)/2}(z-w^{2k})(z-\bar w^{2k})\\
&=e^{i(m-1)x}\prod_{k=1}^{(m-1)/2}(z^2-2z\Re(w^{2k})+1)\\
&=\prod_{k=1}^{(m-1)/2}\bar z(z^2-2z\Re(w^{2k})+1)\\
&=\prod_{k=1}^{(m-1)/2}(z-2\Re(w^{2k})+\bar z)\\
&=\prod_{k=1}^{(m-1)/2}(2\Re(z)-2\Re(w^{2k}))\\
&=\prod_{k=1}^{(m-1)/2}2(\cos(2x)-\cos(4\pi k/m))\\
&=\prod_{k=1}^{(m-1)/2}(-4)(\sin^2(x)-\sin^2(2\pi k/m))\\
&=(-4)^{(m-1)/2}\prod_{k=1}^{(m-1)/2}(\sin^2(x)-\sin^2(2\pi k/m))
\end{align}
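The identity is easy to sanity-check numerically before (or after) working through the derivation; here is a quick Python spot-check of both sides for several odd $m$ (the helper names are mine):

```python
import math

def lhs(m, x):
    return math.sin(m * x) / math.sin(x)

def rhs(m, x):
    half = (m - 1) // 2
    prod = 1.0
    for j in range(1, half + 1):
        prod *= math.sin(x) ** 2 - math.sin(2 * math.pi * j / m) ** 2
    return (-4) ** half * prod

# spot-check the identity for several odd m and generic x
for m in (3, 5, 7, 9):
    for x in (0.3, 1.1, 2.7):
        assert abs(lhs(m, x) - rhs(m, x)) < 1e-9
```

Of course this is only numerical evidence; the chain of equalities above is the actual proof.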
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4216199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to solve this complicated trigonometric equation for the solution of this triangle.
In $\triangle ABC$, if $AB=AC$ and internal bisector of angle $B$ meets $AC$ at $D$ such that $BD+AD=BC=4$, then what is $R$ (circumradius)?
My Approach:- I first drew the diagram and considered $\angle ABD=\angle DBC=\theta$ and as $AB=AC$, $\angle C=2\theta$. Therefore $\angle A=180-4\theta$. Also as $AE$ is the angular bisector and $AB=AC$, then $BE=EC=2.$ Now applying the sine theorem to $\triangle ADB$ and $ \triangle BDC$ gives $$\frac{BD}{\sin {(180-4\theta)}}=\frac{AD}{\sin \theta}$$
$$\frac{BC}{\sin(180-3\theta)}=\frac{BD}{\sin 2\theta}$$
Now we know that $BC=4$ and then solving both the equations by substituting in $BD+AD=4$, we get $$\sin 2\theta .\sin4\theta+\sin2\theta.\sin\theta=\sin3\theta.\sin4\theta$$
$$\sin4\theta+\sin\theta=\frac{\sin3\theta.\sin4\theta}{\sin2\theta}$$
Now I have no clue on how to proceed further from here. Though I tried solving the whole equation into one variable ($\sin\theta$), but it's getting very troublesome as power of $4$ occurs. Can anyone please help further or else if there is any alternative method to solving this problem more efficiently or quickly?
Thank You
| The values are such that equations do not simplify. Nonetheless, here is an alternate approach.
Say $AB = AC = x$ and we know $BC = 4$,
By angle bisector theorem,
$\cfrac{4}{x} = \cfrac{x-AD}{AD} \implies AD = \cfrac{x^2}{4+x}$
Now by angle bisector length formula,
$BD^2 = \cfrac{4x}{(4+x)^2} [(4+x)^2 - x^2] = \cfrac{32x(x+2)}{(4+x)^2}$
Now, $BD + AD = 4 = \cfrac{x^2}{4+x} + \cfrac{\sqrt{32x(x+2)}}{4+x}$
$16+4x-x^2 = \sqrt{32x(x+2)}$
Solving using WolframAlpha, the only valid solution is $x \approx 2.61$
Now to find circumradius, use $R = \cfrac{abc}{4 \triangle} = \cfrac{4 x^2}{8 \sqrt{x^2 - 4}}$
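Instead of WolframAlpha, the equation $16+4x-x^2 = \sqrt{32x(x+2)}$ can be solved by simple bisection; the sketch below (function and variable names are mine) also evaluates the circumradius formula from the answer:

```python
import math

def residual(x):
    # residual of the equation 16 + 4x - x^2 = sqrt(32 x (x + 2))
    return 16 + 4 * x - x * x - math.sqrt(32 * x * (x + 2))

# residual(2) > 0 > residual(3), so bisect on [2, 3]
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2                         # x ~ 2.61, as in the answer

R = x * x / (2 * math.sqrt(x * x - 4))    # = 4x^2 / (8 sqrt(x^2 - 4))
```

This gives $x \approx 2.611$ and $R \approx 2.03$.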
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4216355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Given $d = \min(\gcd(a, b+1),\gcd(a+1,b))$, prove $d \leqslant \frac{\sqrt{4a+4b+5}-1}{2}$ The question is stated above:
Given $d = \min(\gcd(a, b+1),\gcd(a+1,b))$, prove $d \leqslant \frac{\sqrt{4a+4b+5}-1}{2}$.
Here's what I've done so far:
$$d \leqslant \frac{\sqrt{4a+4b+5}-1}{2}$$
then
$$2d + 1 \leqslant \sqrt{4a+4b+5}$$
Squaring both sides, we have
$$4d^2+4d+1 \leqslant 4a+4b+5$$
$$4d^2+4d \leqslant 4a+4b+4$$
$$d^2+d \leqslant a+b+1$$
From here I'm stuck and unable to continue. I've also noticed that the sum $a+b+1$ can have some relations with the two GCDs, since $a+b+1=(a+1)+b=a+(b+1)$, but I'm unable to work out how.
| Let $d_1=\gcd(a,b+1)$ and $d_2=\gcd(a+1,b)$.
Then, as you've mentioned already, $a+b+1=a+(b+1)=(a+1)+b$, so $d_1$ and $d_2$ are divisors of $a+b+1$. Since $d_1\mid a$ and $d_2\mid a+1$ we also have $\gcd(d_1,d_2)=1$. Thus, $d_1d_2\mid a+b+1$. In particular, $d_1d_2\le a+b+1$.
Finally, if $d_1=d_2$, then $d_1=d_2=1$ and the statement is trivial; otherwise, due to $d=\min\{d_1,d_2\}$, we have
$$
a+b+1\ge d_1d_2\ge d(d+1),
$$
as desired.
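The bound can also be confirmed by brute force in its equivalent integer form $d(d+1)\le a+b+1$ (the equivalence is exactly the squaring argument from the question); a short Python check of my own:

```python
from math import gcd

def d(a, b):
    return min(gcd(a, b + 1), gcd(a + 1, b))

# d <= (sqrt(4a+4b+5) - 1)/2  is equivalent to  d(d+1) <= a + b + 1
for a in range(1, 200):
    for b in range(1, 200):
        m = d(a, b)
        assert m * (m + 1) <= a + b + 1
```

Note the bound can be attained, e.g. $a=2$, $b=3$ gives $d=2$ and $d(d+1)=6=a+b+1$.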
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4216514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A trigonometric identity: $\frac1{\sin40 ^\circ}+\tan10^\circ=\sqrt{3}.$ My sister asked me such a trigonometric identity (her high school challenging problem):
prove:
$$\frac1{\sin40 ^\circ}+\tan10^\circ=\sqrt{3}.$$
I found that this is really true (surprising... with a calculator), but as an undergraduate equipped with calculus and linear algebra, I have no idea how to attack this problem. Thanks in advance for any help!
| Replace $\sqrt{3}$ with $2 \sin{60^\circ}$. We want to show $$\frac1{\sin40 ^\circ}+\tan10^\circ=2 \sin{60^\circ}.$$
Let $q = e^{2 i \pi / 360}$ (A nome of one degree) then $$\sin(\theta^\circ) = \frac{q^\theta - q^{-\theta}}{2i}$$ and $$\tan(\theta^\circ) = - i \frac{q^{\theta} - q^{-\theta}}{q^{\theta} + q^{-\theta}}$$ so we can rewrite our equation in terms of the nome:
$$\frac{2i}{q^{40} - q^{-40}} - i \frac{q^{10} - q^{-10}}{q^{10} + q^{-10}} - 2 \frac{q^{60} - q^{-60}}{2i} = 0$$
multiplying this out and then reducing it modulo the cyclotomic polynomial $\Phi_{360}(q) = q^{96} + q^{84} - q^{60} - q^{48} - q^{36} + q^{12} + 1$ gives zero:
? r = (2*I)/(q^40 - q^(-40)) - I*(q^10-q^(-10))/(q^10+q^(-10)) - 2*(q^60-q^(-60))/(2*I)
% = (-q^200 + q^140 - q^120 - q^80 + q^60 - 1)/(I*q^140 - I*q^60)
? Mod(r, q^96 + q^84 - q^60 - q^48 - q^36 + q^12 + 1)
% = Mod(0, q^96 + q^84 - q^60 - q^48 - q^36 + q^12 + 1)
this proves the identity.
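The PARI/GP computation above is an exact proof; for readers without PARI, the same identity can at least be sanity-checked in plain Python floating point (evidence only, not a proof):

```python
import math

# 1/sin(40 deg) + tan(10 deg) should equal sqrt(3)
lhs = 1 / math.sin(math.radians(40)) + math.tan(math.radians(10))
assert abs(lhs - math.sqrt(3)) < 1e-12
```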
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4216811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Exponential of a 3D multivector in Clifford's Geometric Algebra An arbitrary 3D multivector can be written as the superposition $M=Z+F$, where $Z=a+bi$ with $a,b\in\mathbb R$ and $i$ being the pseudoscalar of $Cl_3$. The remaining term $F=v+iw$, where $v$ and $w$ are vectors of $Cl_3$, has vector and bivector parts, respectively. Since $Z\in\operatorname{Cen}(Cl_3)$, the exponential of $M$ becomes
$$\exp(M)=\exp(Z+F)=\exp(Z)\exp(F).$$
To obtain a closed form of $\exp(M)$, I would like to ask if there is a way to show that $$\exp(F)\stackrel{?}{=}\exp(v)\exp(iw)\ldots.$$
Thank you very much in advance.
| As noted by Peeter Joot, $e^F$ cannot be split into $e^{v}e^{iw}$ since $[v,iw] \neq 0$. So the hunt is on for a decomposition of $F$ into two commuting parts. Fortunately, by staring at Cayley tables we can discover that $\mathbb{R}_3$ is isomorphic to $\mathbb{R}^+_{3,1}$, the even subalgebra of the spacetime algebra, and thus that your $F$ can be viewed as a bivector in spacetime. And for bivectors, we can use the invariant decomposition to decompose them into commuting terms.
So adapting the invariant decomposition formula slightly using the isomorphism, we can find an $F_1$ and $F_2$ satisfying $F = F_1 + F_2$, $\lambda_i := F_i^2 \in \mathbb{R}$, and $[F_1, F_2] = 0$:
$$F_i = \frac{\lambda_i + \tfrac{1}{2} \langle F^2 \rangle_3}{F},$$
where the $\lambda_i$ are given by
$$\lambda_i = \tfrac{1}{2} \langle F^2 \rangle_0 \pm \tfrac{1}{2} \sqrt{ \langle F^2 \rangle_0^2 - \langle F^2 \rangle_3^2 }.$$
So we have now found two commuting elements $F_1$ and $F_2$, which both square to scalars so they follow Euler's formula. Specifically, since $\lambda_1 \geq 0$ and $\lambda_2 < 0$, we find
$$
\begin{aligned}
e^{F_1} &= \cosh(\sqrt{\lambda_1}) + \frac{F_1}{\sqrt{\lambda_1}} \sinh(\sqrt{\lambda_1}) \\
e^{F_2} &= \cos(\sqrt{-\lambda_2}) + \frac{F_2}{\sqrt{-\lambda_2}} \sin(\sqrt{-\lambda_2})
\end{aligned}
$$
To conclude, the exponential of any multivector $M = Z + F$ is
$$ e^{M} = e^{a}e^{bi}e^{F_1}e^{F_2}. $$
So the invariant decomposition allows us to bypass the Baker-Campbell-Hausdorff formula. For more details and to see how this generalises to higher dimensions I would highly recommend reading the invariant decomposition paper.
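The decomposition can be sanity-checked numerically in the Pauli-matrix representation of $Cl_3$ ($e_k \mapsto \sigma_k$, pseudoscalar $i \mapsto i\,I$) — the representation choice, the sample vectors $v,w$, and the truncated-Taylor `expm` below are my own assumptions, not part of the answer:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = (s1, s2, s3)

def expm(M, terms=40):
    # truncated Taylor series; fine for small 2x2 matrices
    out, P = np.zeros_like(M), np.eye(2, dtype=complex)
    for k in range(terms):
        out = out + P
        P = P @ M / (k + 1)
    return out

v = np.array([0.3, -0.2, 0.5])               # vector part
w = np.array([0.1, 0.4, -0.3])               # bivector part (as i w)
F = sum(v[k] * sigma[k] + 1j * w[k] * sigma[k] for k in range(3))

a = v @ v - w @ w                            # <F^2>_0
b = 2j * (v @ w)                             # <F^2>_3, so F^2 = (a + b) I
root = np.sqrt(a * a - b * b)                # real, since a^2 - b^2 >= 0
lam1, lam2 = (a + root) / 2, (a - root) / 2
F1 = (lam1 + b / 2) * F / (a + b)            # invariant decomposition
F2 = (lam2 + b / 2) * F / (a + b)

assert np.allclose(F1 + F2, F)
assert np.allclose(F1 @ F2, F2 @ F1)                 # the parts commute
assert np.allclose(F1 @ F1, lam1 * np.eye(2))        # F_i^2 = lambda_i
assert np.allclose(expm(F), expm(F1) @ expm(F2))     # exp splits
```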
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4217077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Exchange of the one-sided derivative and the one-sided limit of a real variable Let $f$ be a real-valued continuous function on $[0, 1)$. Suppose that $f$ is infinitely-differentiable on $(0 , 1)$. Is there some sufficient condition for $f$ to be right-differentiable at $0$ with
$$f_{+}'(0) = \lim_{x \to 0^{+}} f'(x),$$
where I use the notation $f_{+}'(0) := \lim_{\Delta x \to 0^+} \frac{f(\Delta x) - f(0)}{\Delta x}$ for the right derivative of $f$ at $0$?
| The existence of the limit $\lim_{x \to 0^{+}} f'(x)$ is sufficient for $f^\prime_+(0)$ to exist and in that case we have the equality
$$f_{+}'(0) = \lim_{x \to 0^{+}} f'(x).$$
You can perform the proof by applying the Mean Value Theorem to $g_a(x) = f(x) -ax$ where $a = \lim_{x \to 0^{+}} f'(x)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4217227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Number of unique integers in randomly generated arrays I generated 1,000,000 random integers in the range from 0 to 1,000,000 (using rand() in C++ and random.randrange() in Python), and both programs produced approximately 632,000 unique numbers.
I noticed $1 - e^{-1} = 0.6321$.
Does $e$ involve in this? If so, why?
Unique numbers | Total numbers in array
632413 | 1000000
632088 | 1000000
631594 | 1000000
6305 | 10000
6319 | 10000
| Let $N$ be a large number (in your case, $N=10^6$).
By linearity of expectation, we define an indicator for each number from $1$ to $N$ according to whether or not it appears.
The probability that a number appears is $$1-\left(\frac {N-1}{N}\right)^{N}$$
Hence your expectation is $$N\left(1-\left(\frac {N-1}{N}\right)^{N}\right)$$
Now, $$\lim_{N\to \infty} \left(\frac {N-1}N\right)^N=e^{-1},$$ so $1-\left(\frac {N-1}N\right)^N\to 1-e^{-1}$.
So your expectation is approximately $$N\times (1-e^{-1})$$ as desired.
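The expectation formula is easy to confirm empirically; here is a small Python sketch of mine (with $N = 10^5$ rather than the question's $10^6$, for speed):

```python
import math
import random

def unique_fraction(N, seed=1):
    # fraction of distinct values among N draws from {0, ..., N-1}
    rng = random.Random(seed)
    return len({rng.randrange(N) for _ in range(N)}) / N

N = 100_000
expected = 1 - (1 - 1 / N) ** N        # exact expectation, divided by N
assert abs(expected - (1 - math.exp(-1))) < 1e-4
assert abs(unique_fraction(N) - expected) < 0.01
```

The simulated fraction lands within a fraction of a percent of $1 - e^{-1} \approx 0.6321$, matching the tabulated counts in the question.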
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4217361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$f(n) =$ the sum of digits of $n$ and number of digits in decimal notation. How many solutions does $f(x)=M$ have? For any integer $n$ let us denote $f(n)$ as the sum of digits in decimal notation of $n$ and number of digits in decimal notation. For example, $f(12) = (1+2)+2=5$. Now, consider the equation $f(x) = M$. Can I estimate the number of solutions?
| I have a proof that the count would be $2^{M-2}$ for $M>1$ if digits were not capped at $9$.
First, $M=1$ does not follow this rule, but obviously there is $1$ solution. Now for $M>1$:
Each digit except for the leftmost ranges from $0$ to $\infty$ and contributes a value ranging from $1$ to $\infty$, while the leftmost digit contributes a value ranging from $2$ to $\infty$.
Using generating functions, we want the coefficient of $x^{M}$ in the expansion of
$$\left(x^2+x^3+x^4+\ldots\right)\left( 1+\left(x+x^2+x^3+\ldots\right)+\left(x+x^2+x^3+\ldots\right)^2+\left(x+x^2+x^3+\ldots\right)^3+\ldots\right)$$
This simplifies to
$$\left(\frac{x^2}{1-x}\right)\left(1+\left(\frac{x}{1-x}\right)+\left(\frac{x}{1-x}\right)^2+\left(\frac{x}{1-x}\right)^3+\ldots\right)$$
$$\left(\frac{x^2}{1-x}\right)\left(\frac{1}{1-\frac{x}{1-x}}\right)$$
$$\left(\frac{x^2}{1-x}\right)\left(\frac{1}{\frac{1-2x}{1-x}}\right)$$
$$\frac{x^2}{1-2x}$$
$$x^2\left(1+2x+4x^2+8x^3+\ldots\right)$$
$$\sum_{k=2}^\infty 2^{k-2}x^k$$
Hence, the coefficient of $x^M$ is $2^{M-2}$.
When our digits are capped at $9$, we have to also cap most of our infinite geometric series to finite geometric series and this makes the generating function math a lot messier. However, the actual answer should still be somewhat close to $2^{M-2}$ for relatively small values of $M>10$.
The actual generating function would be the coefficient of $x^M$ in
$$\frac{x^2-x^{11}}{1-2x+x^{11}}$$
$$\left(x^2-x^{11}\right)\left(1+(2x-x^{11})+(2x-x^{11})^2+(2x-x^{11})^3+\ldots\right)$$
We can probably use this to estimate some smaller values, but an exact solution is rather complicated.
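A brute-force check (my own code, not the answerer's) confirms the $2^{M-2}$ count for small $M$, where the digit cap of $9$ never binds:

```python
def f(x):
    s = str(x)
    return sum(int(c) for c in s) + len(s)

def count_solutions(M):
    # a solution has at most M - 1 digits, so x < 10**(M - 1)
    return sum(1 for x in range(1, 10 ** (M - 1)) if f(x) == M)

# the digit cap of 9 never binds for these small M, so 2^(M-2) is exact
for M in range(2, 8):
    assert count_solutions(M) == 2 ** (M - 2)
```

For example, $M=4$ gives the four solutions $3$, $11$, $20$, $100$.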
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4217595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question on "Proof by contradiction" I am a little confused whether the following set of arguments can constitute a proof by contradiction:
Required to prove: Statement A is true.
Assumption: Suppose statement A is false.
We then show that if A is false then a (different) statement B is true.
But, statement B is true implies A is true.
Hence, A must be true.
These set of arguments feel a bit bizzare to me and perhaps might be incorrect. I am unable find a mistake.
| Here is an example of this pattern. A: there is no smallest rational number greater than zero.
Assumption: there is a smallest rational number greater than zero.
So, call it $r$.
Now you can take $r/2$.
So B: given a rational number $r$ greater than zero, there exists another rational number, $r/2$, that is greater than zero and smaller than $r$.
But B implies A, so A holds.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4217869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
} |
Two periodic functions whose product is not periodic I'm looking for a concrete example. One example that @Koro suggested in the chat is $\{\sqrt{2}x\}\sin x$ where $\{-\}$ is the fractional part function. Here is a plot of that function. Seems periodic but if you take a closer look, it's not periodic. How can I prove it's not periodic? or is there any simpler example?
| You can consider any two functions whose periods are incommensurable, for example $\sin(x)\sin(\pi x)$, or $\{x\}\{\sqrt 2 x\}$.
Edit: On second thought, that may be false in general.
But since you are looking for a concrete example, consider this one: $\cos(x) \cos(\pi x)$. Observe that if $\cos(x)\cos(\pi x)=1$ then we have $\cos(x)=1$ and $\cos(\pi x)=1$ or $\cos(x)=-1$ and $\cos(\pi x)=-1$. In the first case, from the first equation we have $x=2\pi k$ for some integer $k$. From the second equation we have $x=2m$ for some integer $m$. So we have $\pi k = m$ for integers $k$ and $m$. If $k\ne 0$ then $\pi = m/k \in \Bbb Q$, a contradiction. So $k=0$ and then $x=0$. A similar argument shows that the second case can't happen.
This proves that $\cos(x)\cos(\pi x)$ take the value $1$ only once, hence cannot be periodic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4217961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Points outside a closed set are separated from that closed set Let $(X,d)$ be a metric space, $G \subseteq X$ be a closed set, and $x \in X \setminus G$ be a point outside $G$. Then there exists an $\varepsilon \in (0,\infty)$ such that $B_{\varepsilon}(x) \cap G =\emptyset$.
Hint: To prove this aim for a contradiction,
*
*Suppose this wasn't true, and understand what that means.
*Argue that for every $n \in \mathbb{N}$, under this assumption there must be at least one point in $B_{\frac{1}{n+1}}(x) \cap G$.
*By appealing to the Axiom of Choice, define a sequence $a_n$ by requiring that each $a_n \in B_{\frac{1}{n+1}}(x) \cap G$.
*Prove that this sequence converges.
Assume for contradiction that $\forall \varepsilon>0$ we have $B_{\varepsilon}(x) \cap G \ne\emptyset$ whenever $(X,d)$ is a metric space with closed set $G \subseteq X$ and $x \in X \setminus G$.
Suppose $(a_n)$ is a convergent sequence in $G$ and let $(a_n) \to l$, where $l \in G$. Because $G \subseteq X$, each $a_n \in X$ and $l \in X$. For any $\varepsilon \in (0,\infty)$ we can choose $N \in \mathbb{N}$ so that $d(a_n,l)<\varepsilon$ for all $n>N$. So for $\varepsilon=\frac{1}{n+1}$ with $n \in \mathbb{N}$, we have $d(a_n,l)<\frac{1}{n+1}$ for all sufficiently large $n$.
Initially I was thinking about defining a sequence $a: \mathbb{N} \to X$ by $a(n)=x$. But I don't think that'll work since that would mean $x=l$, but $x \in X \setminus G$.
Please let me know what you think.
| This follows from the definition of an open set in a metric space: closed sets are defined to be complements of open sets, and open sets are (possibly uncountable) unions of open balls, so each point not in a closed set is contained in an open ball lying in the complement. Using the triangle inequality, you can then show that a ball centred at $x$ with the wanted property exists.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4218127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Understanding the morphism of schemes involving nilpotents induced by: $\mathbb C[x] \to \mathbb C[x]/(x^2)$ where $x \mapsto ax \pmod {x^2}$ From Vakil's FOAG:
The following imprecise exercise will give you some sense of how to visualize maps of schemes when nilpotents are involved. Suppose $a \in \mathbb C$. Consider the map of rings $\mathbb C[x] \to \mathbb C[x]/(x^2)$ given by $x \mapsto ax$. Recall that $\operatorname{Spec} \mathbb C[x]/(x^2)$ may be pictured as a point with a tangent vector (§4.2). How would you picture this map if $a \ne 0$? How does your picture change if $a=0$?
When $a \ne 0$, I believe the ring map $\mathbb C[x] \to \mathbb C[x]/(x^2)$ given by $x \to ax$ induces the morphism of schemes $\operatorname{Spec} \mathbb C[x]/(x^2) \to \operatorname{Spec} \mathbb C[x]$ given by $(x)/(x^2) \mapsto (x)$. I think this is true because if $a \ne 0$, then $(x)/(x^2)=(ax)/(x^2)$ and the preimage of $(ax)/(x^2)$ is $(x)$.
When $a=0$, then we also have morphism of schemes $\operatorname{Spec} \mathbb C[x]/(x^2) \to \operatorname{Spec} \mathbb C[x]$ given by $(x)/(x^2) \mapsto (x)$ because the preimage of $(x)/(x^2)$ is the set of all polynomials with zero constant term, i.e., the prime ideal $(x)$.
Is the above correct? I am still not sure how to visualize this morphism.
What is this exercise trying to get across?
| Pointwise, you are correct.
This is my rookie view of why he says that the tangent vector should be “crushed”. Suppose you have a function $f(x)$ on $\operatorname{Spec} \mathbb C[x].$ Then under the map $\operatorname{Spec} \mathbb C[\epsilon]/(\epsilon^2) \to \operatorname{Spec} \mathbb C[x]$ it pulls back to $f(a\epsilon).$ Now, for $a \not =0$ having the function $\epsilon \mapsto f(a\epsilon)$ in $\operatorname{Spec} \mathbb C[\epsilon]/(\epsilon^2)$ means knowing the value of $f$ plus its derivative. Thus, the tangent vector accounts for the fact that you can take the derivative along it. However, when $a=0$, $f(x)$ pulls back to $f(0)$ and your information about the derivative is lost.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4218486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Calculating probability in iterative double pigeonhole type problem Assume I have $2n$ unique members of a set. Now assume that each element of that set is assigned an integer $\in (0,n]$, such that each integer is used twice. So for example if you have a set of 4 elements which we label $\{A,B,C,D\}$, then two elements will be labeled with 1 and two will be labeled 2, e.g. you might get the labeled set: $\{(A,1),(B,1),(C,2),(D,2)\}.$
Now assume the labels of the $2n$ elements are completely randomized. My question is: what is the probability $p_n$ that no pair of elements that shared a label both got that same label again (for arbitrary values of $n$)? To continue the example before (for $n=2$), the 6 distinct ways the 4 labels can be arranged are:
$$\{(A,1),(B,1),(C,2),(D,2)\}\\
\{(A,1),(B,2),(C,1),(D,2)\}\\
\{(A,1),(B,2),(C,2),(D,1)\}\\
\{(A,2),(B,1),(C,1),(D,2)\}\\
\{(A,2),(B,1),(C,2),(D,1)\}\\
\{(A,2),(B,2),(C,1),(D,1)\}.$$
Because there is only one case where either $A$ and $B$ were both reassigned 1 and/or $C$ and $D$ were both reassigned 2, the probability for $n=2$ is $p_2=5/6$.
Ideally I would like a closed form solution for $p_n$, but I'd also settle for a recursive formula (which could e.g. be used to numerically compute $p_n$ by say an iterated python script)
| You want to avoid all $n$ events, where the $i$-th event is that the $i$-th pair of elements repeats its shared label; we can get the probability of this with inclusion-exclusion.
Let $f(n)$ be the number of labelings of $2n$ elements, where $f(0) = 1, f(1) = 1, f(2) = 6$, etcetera.
The probability that a given set of $k$ pairs all repeat is $\frac{f(n-k)}{f(n)}$, because there are $f(n-k)$ ways to label the remaining $2(n-k)$ elements.
Hence via inclusion-exclusion the probability that no event happens is:
$1 + \sum\limits_{k=1}^n (-1)^k \binom{n}{k}\frac{f(n-k)}{f(n)}$.
where $f(n) = \frac{(2n)!}{2^n}$.
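The formula is exact, so it can be evaluated in rational arithmetic and checked against the $p_2 = 5/6$ worked out in the question; a short Python sketch of mine:

```python
from fractions import Fraction
from math import comb, factorial

def f(n):
    # labelings of 2n elements with each of n labels used twice
    return factorial(2 * n) // 2 ** n

def p(n):
    # inclusion-exclusion: 1 + sum_{k=1}^n (-1)^k C(n,k) f(n-k)/f(n)
    return 1 + sum(Fraction((-1) ** k * comb(n, k) * f(n - k), f(n))
                   for k in range(1, n + 1))

assert p(1) == 0                   # a single pair always repeats
assert p(2) == Fraction(5, 6)      # matches the worked example
assert p(3) == Fraction(37, 45)
```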
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4218589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f(x)$ is uniformly continuous on $[0,+\infty)$ if and only if $\lim\limits_{x\to+\infty}\frac{f(x)}{x}$ exists. Let $f(x)$ be second-order differentiable on $[0,+\infty)$, with $f''(x)\geqslant\sin x$ for all $x\in[0,+\infty)$. Prove that $f(x)$ is uniformly continuous on $[0,+\infty)$ if and only if $\lim\limits_{x\to+\infty}\frac{f(x)}{x}$ exists. This is our monthly exam question; I tried to expand with Taylor's theorem:
$$f(x+h)=f(x)+f'(x)h+\frac{f''(\xi)}{2}h^2\geqslant f(x)+f'(x)h +\frac{\sin x}{2}h^2.$$
But then I don't know how to deal with it, I also know that if $f (+\infty)$ exists, then $f (x)$ is uniformly continuous, but it doesn't seem to help the problem.
| Some comments:
(1) We cannot drop the continuity from the right at $x=0$. Here is a counterexample: the function $f(x):= \frac{1}{x}-\sin{x}$ is not uniformly continuous on $(0,\infty)$. We have $f^{\prime\prime}(x)=\frac{2}{x^3}+\sin{x}>\sin{x}$ for every $x\in (0,\infty)$. And obviously $\lim_{x\rightarrow \infty}\frac{f(x)}{x}=0$.
(2) It is also enough to prove that $f$ is uniformly continuous on $[1,\infty)$, since it is, by assumption, continuous on any closed interval
$[0,N]$ with $N>0$, and consequently uniformly continuous there.
(3) We know that, since $\lim_{x\rightarrow +\infty}\frac{f(x)}{x}$ exists,
$$|f(x)|\leq C |x|$$
for large $x$, meaning the function can grow at most linearly.
But the statement does not apply to functions of the form $f(x)=x^\alpha$ for any $\alpha\in (0,1]$ because the restriction $f^{\prime\prime}(x)\geq \sin{x}$ is not satisfied by such functions.
(4) I suggest the following simplification of the problem:
We can write any smooth $f:[0,\infty)\rightarrow \mathbb{R}$ as
$f(x)=g(x)-\sin{x}$ where $g:[0,\infty)\rightarrow \mathbb{R}$
is smooth. Now, the peculiar condition $f^{\prime\prime}(x)\geq \sin{x}$
translates into $g^{\prime\prime}(x)\geq 0$. We also have $\lim_{x\rightarrow +\infty}\frac{g(x)}{x}=\lim_{x\rightarrow +\infty}\frac{f(x)}{x}$ exists. Since $x\mapsto \sin{x}$ is uniformly continuous on $\mathbb{R}$, we only need to show that $g$ is uniformly continuous on $[N,\infty)$ with $N$ large.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4218737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
$S \subseteq T \Rightarrow |A^S| \leq |A^T|$ only when $A \neq \emptyset$ Exercise 1.7 from Chapter 4 of Hrbacek & Jech's Introduction to Set Theory (3rd Ed.) asks to prove:
If $S \subseteq T$, then $|A^S| \leq |A^T|$.
I think this is true only when $A \neq \emptyset$. Am I right?
Remark: This question was posted with the purpose of being answered by myself. You can see my own answer below, but that is not completely right, as was pointed out by M. Logic.
| Yes, you are right. Consider $A = \emptyset$, $S = \emptyset$, $T = \{\emptyset\}$. Then, $S \subseteq T$, but $A^S = \emptyset^\emptyset = \{\emptyset\}$, whilst $A^T = \emptyset^{\{\emptyset\}} = \emptyset$, so that there is no function on $A^S$ into $A^T$. In particular, there is no injection on $A^S$ into $A^T$. That is, $|A^S| \not\leq |A^T|$.
On the other hand, if $A \neq \emptyset$, then there is an $a \in A$ for which the map $$\begin{aligned}\theta: A^S &\to A^T\\f &\mapsto \theta_f\end{aligned}$$
given by $$\theta_f(x) = \begin{cases} f(x) &\mbox{ if $x \in S$,}\\ a &\mbox{ otherwise} \end{cases}$$ is an injection on $A^S$ into $A^T$.
Conclusion: there is a typo in that exercise. The author missed the hypothesis that $A \neq \emptyset$.
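For finite sets the argument can be made concrete by enumerating function sets in Python (my own illustration, with functions encoded as dicts):

```python
from itertools import product

def functions(A, S):
    # all functions S -> A, each encoded as a dict
    return [dict(zip(S, values)) for values in product(A, repeat=len(S))]

# the counterexample: A = S = empty set, T = a one-element set
assert len(functions([], [])) == 1     # only the empty function
assert len(functions([], [0])) == 0    # no function from {0} to the empty set

# for nonempty A, extending each function by a fixed a in A is an injection
A, S, T = ['a', 'b'], [1], [1, 2]
images = [{**g, 2: 'a'} for g in functions(A, S)]
assert all(h in functions(A, T) for h in images)
assert len({frozenset(h.items()) for h in images}) == len(functions(A, S))
```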
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
isomorphism $\psi$ between quotient rings
Show that there is an isomorphism $\psi$ between the rings $R_1 := \mathbb{Q}[r,s]/(r+s-1, r-r^2, s-s^2)$ and $R_2 := \mathbb{Q}[y]/(y^2-1)$ so that $\psi([r]) = [(1+y)/2]$ and $\psi(s)=[(1-y)/2].$
I could define a surjective homomorphism $\psi$ from $\mathbb{Q}[r,s]$ to $\mathbb{Q}[y]/(y^2-1)$ and show that its kernel is $(r+s-1,r-r^2, s-s^2)$. The first isomorphism theorem for rings gives an isomorphism $\psi_2 : R_1\to R_2$ so that $\psi_2 \circ q = \psi,$ where $q : \mathbb{Q}[r,s]\to R_1$ is the quotient map. Then it suffices to define $\psi$ so that $\psi(r) = [(1+y)/2]$ and $\psi(s) = [(1-y)/2]$. Define $\psi$ by $\psi(f(r,s)) = [f(\frac{1+y}2, \frac{1-y}2)],$ where each $f(r,s)\in \mathbb{Q}[r,s].$ It's easy to show that $\psi$ is a homomorphism. I can also show that $(r+s-1, r-r^2, s-s^2)\subseteq \ker \psi$ and that $\psi$ is surjective using the fact that the elements of $\mathbb{Q}[y]/(y^2-1)$ are of the form $[ay+b]$ for some $a,b \in \mathbb{Q}$. This can be shown using the division algorithm for polynomials. Then one can find $f$ so that $f(\frac{1+y}2, \frac{1-y}2) = ay+b.$ But how do I show that $(r+s-1, r-r^2, s-s^2) = \ker \psi$?
| The most elegant way to solve such problems with no effort is to use universal properties and in particular the Yoneda Lemma. In fact, this Lemma tells us that two objects with the same universal properties are isomorphic. A universal property of an object $R$ is just a description of the homomorphisms $R \to S$ for any other object $S$ of the ambient category (or, of the homomorphisms $S \to R$). Here, we are in the category of commutative $\mathbb{Q}$-algebras (of course, you could also work in the category of rings, but it would make life less easy).
So if $S$ is a commutative $\mathbb{Q}$-algebra, then the universal properties of quotient rings ("fundamental theorem on homomorphisms") and polynomial algebras ("evaluation homomorphisms") show us that there are natural bijections
$\mathrm{Hom}(\mathbb{Q}[r,s]/(r+s-1,\, r-r^2,\, s-s^2),S)\\
\cong \{(a,b) \in S^2 : a+b-1=0,\, a-a^2=0,\, b-b^2=0\},$
$\mathrm{Hom}(\mathbb{Q}[y]/(y^2-1),S) \cong \{c \in S : c^2-1=0\}.$
So all we have to do is to find a natural bijection between the sets
$\{(a,b) \in S^2 : a+b=1,\, a=a^2,\, b=b^2\} \cong \{c \in S : c^2=1\}.$
But notice that $a+b=1$ means that $b$ is superfluous: we may replace it by $1-a$. The equation $b=b^2$ then holds automatically, since in general if $a$ is idempotent ($a=a^2$) then $1-a$ is idempotent. (This also shows, by the way, that the ideal $(r+s-1,r-r^2,s-s^2)$ is equal to the ideal $(r+s-1,r-r^2)$.)
So all we have to show is that the set of idempotents in $S$ is naturally isomorphic to the set of involutions in $S$ (i.e. $c^2=1$).
Well, if $a$ is idempotent, then $c := 1-2a$ is an involution, since $c^2=1-4a+4a^2=1$. Conversely, if $c$ is an involution, then $a := (1-c)/2$ is idempotent, since $a^2=(1-2c+c^2)/4=(2-2c)/4=(1-c)/2=a$. These maps are inverse to each other. This finishes the proof.
This correspondence between idempotents and involutions can also be motivated or explained with a bit of algebraic geometry. An idempotent is nothing but a section on $\mathrm{Spec}(S)$ with values in $0,1$ (in the residue fields). An involution is nothing but a section on $\mathrm{Spec}(S)$ with values in $-1,+1$. Now we just need a linear transformation which takes $\{0,1\}$ to $\{-1,+1\}$, for example $1-2x$, to get the correspondence.
Here is the whole proof in one chain of natural isomorphisms:
$\mathrm{Hom}(\mathbb{Q}[r,s]/(r+s-1,r-r^2,s-s^2),S)\\
\cong \{(a,b) \in S^2 : a+b-1=0,\, a-a^2=0,\, b-b^2=0\}, \qquad | ~ \text{ substitute } b=1-a \\
\cong \{a \in S : a=a^2,\, (1-a)=(1-a)^2\} \\
= \{a \in S : a=a^2\} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad ~\,\, | ~ \text{ substitute } a=(1-c)/2 \\
\cong \{c \in S : (1-c)/2=\bigl((1-c)/2)\bigr)^2\} \\
= \{c \in S : c^2=1\} \\
\cong \mathrm{Hom}(\mathbb{Q}[y]/(y^2-1),S).$
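The idempotent–involution correspondence at the core of the proof can be checked symbolically, e.g. with sympy (my own sketch):

```python
from sympy import symbols, expand

a, c = symbols('a c')

# c^2 = 1  ==>  (1 - c)/2 is idempotent
assert expand(((1 - c) / 2) ** 2 - (1 - c) / 2).subs(c**2, 1) == 0
# a^2 = a  ==>  1 - 2a is an involution
assert expand((1 - 2 * a) ** 2).subs(a**2, a) == 1
# and the two maps are mutually inverse
assert expand(1 - 2 * (1 - c) / 2) == c
assert expand((1 - (1 - 2 * a)) / 2) == a
```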
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
11 sided irregular shape that tessellates My friend was fiddling around with a triangle when he created an irregular heptagon that is not able to tessellate.
He then asked me if I could create an 11 sided irregular polygon that is able to tessellate by changing a side AB. He told me there were 3, and that you can make 2 more without necessarily using the side AB. I have absolutely no idea what they are. Can I please have some help?
| For: "create an 11 sided irregular polygon that is able to tessellate by changing a side AB":
which then becomes:
where I just projected a duplicate triangle, using the axis of symmetry between side AB (to be changed) and side AC (the side whose tessellation is to be matched). And there you have your shape. There are more you can make; take this example as a hint.
For: "you can make 2 more without necessarily using the side AB":
Here's another hint:
Good Luck!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Disproving the continuity of a function I have the following question:
Suppose $f$ is a real-valued function on $\mathbb{R}$, and that, for a
given $x\in \mathbb{R}$, $\lim _{n \rightarrow
\infty}\left[f\left(x+a_{n}\right)-f\left(x-a_{n}\right)\right]=0$, for all sequences $a_{n}$ that converge to $0$. Is $f$ continuous at $x$?
My answer is no. In support of this, I have constructed the following (putative) counter example:
$$f(x) \triangleq \begin{cases} 1 \hspace{5mm} x\neq0\\
0 \hspace{5mm}x=0\end{cases}$$
Note that, for $x=0$, $\lim_{n\rightarrow\infty}f(x+a_{n})=\lim_{n\rightarrow\infty}f(x-a_{n})$, for any $\{a_{n}\}_{n\in\mathbb{N}}$ which converges to $0$ (since $f(x)$ is an even function). Now, consider the sequence given by $a_{n}\triangleq \frac{1}{n}$. Since $f(a_{n})=1$ for all $n$, $\lim f(a_{n})=1.$ However, $f(0)=0$. Thus, $f$ is not continuous at $x=0$.
Thank you.
| If we want to be absolutely strict - $\lim_{n \to \infty} f(x + a_n)$ doesn't have to exist - consider the sequence $0, \frac{1}{2}, 0, \frac{1}{3}, 0, \frac{1}{4}, \ldots$ - or any other that contains infinitely many zero and infinitely many non-zero terms.
I don't think you need to even mention limits to prove that your $f$ satisfies this property: just note that $f(0 + a_n) - f(0 - a_n) \equiv 0$ and thus limit of it is $0$ too.
Your proof that $f$ is not continuous looks good.
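A tiny numerical illustration of the counterexample (not part of the proof): $f$ is $1$ everywhere except $f(0)=0$, so the symmetric difference vanishes identically while $f$ still jumps at $0$.

```python
# Tiny numerical illustration of the counterexample above (not part of the
# proof): f is 1 everywhere except f(0) = 0.

def f(x):
    return 0 if x == 0 else 1

a = [1.0 / n for n in range(1, 200)]                 # a_n -> 0

# The hypothesis holds at x = 0: the symmetric difference is identically 0...
assert all(f(0 + an) - f(0 - an) == 0 for an in a)

# ...yet f(a_n) = 1 for every n while f(0) = 0, so f is not continuous at 0.
assert all(f(an) == 1 for an in a) and f(0) == 0
```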
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Method of characteristics - how to determine which one is the unique solution? I need to solve the following problem
$$xu_x + u_y = \sqrt{u}, \, u>0 $$
$$ u(x,0)=2+\sin(x).$$
Solving the characteristic equations implies
$x(t,s)=c_1(s)e^t, \, y(t,s)=t+c_2(s), \, u(t,s)=(\frac{t}{2} + c_3 (s) ) ^2 .$
Using $(x(0,s),y(0,s),u(0,s)) = (s,0,2+\sin(s))$ implies $c_1(s)=s, c_2(s)=0, c_3^2(s)=2+\sin(s)$, so that $u(x,y)=[\frac{y}{2}\pm \sqrt{2+\sin(xe^{-y})}]^2$.
My question is -
Which of the two options $c_3(s)=\sqrt{2+\sin(s)}$ or $c_3(s)=-\sqrt{2+\sin(s)}$ is the unique solution?
(Transversality condition in this case implies that we have a unique solution )
Is it true that if we add the constraint $y>0$, then from $\sqrt{u}=\frac{t}{2}+c_3(s)$ we can deduce that $t>-2c_3(s)$, so that we must have $c_3(s)=\sqrt{2+\sin(s)}$ for a continuous solution?
Thank you!
| When you solve for $z(t)=u(x(t),y(t))$ along the characteristic curve through $(s,0)$ for some fixed $s$, you have
$$
dz/dt = \sqrt{z}
,\qquad
z(0) = 2+\sin(s)
.
$$
This gives
$$
\frac{d}{dt} \sqrt{z} = \frac{dz/dt}{2 \sqrt{z}} = \frac12
,
$$
which after integration from $0$ to $t$ becomes
$$
\sqrt{z(t)} - \sqrt{2+\sin(s)} = \frac{t}{2}
,
$$
and the spurious solution with the wrong sign never even appears; you get
$$
z(t) = \left( \frac{t}{2} + \sqrt{2+\sin(s)} \right)^2
$$
right away.
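If one wants extra reassurance, the closed form can be checked against a direct numerical integration of the characteristic ODE (an illustrative sketch; the RK4 stepper and the sample value of $s$ are arbitrary choices):

```python
import math

# Illustrative check: integrate dz/dt = sqrt(z), z(0) = 2 + sin(s), with a
# plain RK4 stepper and compare with the closed form derived above,
# z(t) = (t/2 + sqrt(2 + sin(s)))^2. The value of s is arbitrary.

def rk4(f, z0, t_end, n=1000):
    h = t_end / n
    z = z0
    for _ in range(n):
        k1 = f(z)
        k2 = f(z + 0.5 * h * k1)
        k3 = f(z + 0.5 * h * k2)
        k4 = f(z + h * k3)
        z += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

s, t_end = 0.7, 1.0
z0 = 2 + math.sin(s)
z_num = rk4(math.sqrt, z0, t_end)
z_exact = (t_end / 2 + math.sqrt(z0)) ** 2
assert abs(z_num - z_exact) < 1e-8
```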
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is $5^\circ \mathrm{F}$ minus $5^\circ \mathrm{F}$? Is it $0^\circ \mathrm{F}$ or is it $0\,\mathrm{K}$ (Kelvin)?
From an arithmetic standpoint, it seems like it should be $0^\circ \mathrm{F}$, but that seems inconsistent because the result represents a delta ($0 \,\Delta^\circ \mathrm{F}$ perhaps) which is conceptually completely different. Doing math with units like $^\circ\mathrm{F}/s$ completely breaks down.
From a natural sciences standpoint, it's 0 K, because an amount of heat minus the same amount of heat is no heat, but that seems inconsistent because everyone knows that $5 - 5 = 0$, and this would instead yield $5^\circ \mathrm{F} - 5^\circ\mathrm{F} = -459.67^\circ\mathrm{F}$, which may be surprising.
Is there a convention for doing arithmetic on Interval Scale measurements? That same page suggests there is a difference operation for them in its comparison table.
For context, I'm working on a Python library for doing dimensioned arithmetic (with an eye towards the natural sciences) and interval scales are making things complicated. For instance, $^\circ\mathrm{F}\cdot s$ doesn't seem to represent a meaningful physical dimension; it can't be converted to $\mathrm{K}\cdot s$. For example, trying to convert $6^\circ\mathrm{C}\cdot s$ by visualizing temperature and time as axes:
| There have been great answers so far, but I'd like to focus on the practical implementation side of this. Which operations do you implement and how?
Temperature forms an affine line, no matter which unit you're using. It is much like with dates and time: you cannot add last Tuesday to 17 October of 2015, what would that even mean? And so Python does not let you add two instances of datetime. You can subtract them, though, giving you a timedelta object.
Similarly, you would need to implement a delta-temperature unit to represent differences in temperature. This delta-temperature can then implement the same operations as timedelta: you can add them together or subtract them, you can scale them by a scalar, you can divide two delta-temperatures to get a scalar value, and you can add or subtract from a temperature value.
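Following the `timedelta` analogy, a minimal sketch of such a pair of types might look like this (the names `Temperature`/`TemperatureDelta` and the kelvin-based internal representation are my own illustrative choices, not a real library API):

```python
from dataclasses import dataclass

# Hedged sketch of the affine design described above. Class names and the
# internal kelvin representation are illustrative choices only.

@dataclass(frozen=True)
class TemperatureDelta:
    kelvin: float                     # a pure difference; no offset involved

    def __add__(self, other):
        return TemperatureDelta(self.kelvin + other.kelvin)

    def __mul__(self, scalar):
        return TemperatureDelta(self.kelvin * scalar)

@dataclass(frozen=True)
class Temperature:
    kelvin: float

    @classmethod
    def from_fahrenheit(cls, f):
        return cls((f - 32) * 5 / 9 + 273.15)

    def __sub__(self, other):
        if isinstance(other, Temperature):              # point - point = vector
            return TemperatureDelta(self.kelvin - other.kelvin)
        return Temperature(self.kelvin - other.kelvin)  # point - vector = point

    def __add__(self, delta):                           # point + vector = point
        return Temperature(self.kelvin + delta.kelvin)

t = Temperature.from_fahrenheit(5)
d = t - t          # 5 °F minus 5 °F: a zero *difference*, not 0 °F and not 0 K
assert isinstance(d, TemperatureDelta) and d.kelvin == 0.0
```

Note that `Temperature + Temperature` is simply left undefined, exactly as `datetime + datetime` is.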
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 11,
"answer_id": 1
} |
Totally disconnected space is T1 I'm asked to prove that every totally disconnected topological space satisfies $T_1$ seperation axiom. my proof is this:
fix $x\in X$. now, for each $y\in X\setminus\{x\} $ we know that $\left\{ x,y\right\}$ is not connected, therefore there are non-empty disjoint open sets in $X$, say $U,V_{y}$ such that $\left(U\cap\{ x,y\} \right)\cup\left(V_y \cap \{x,y\} \right)=\{x,y\} $. WLOG we can assume that $x\in U, y\in V_{y}$ and we get
$$X\setminus\{x\} =\bigcup_{y\in X\setminus\{x\} } V_y$$
hence every singleton is closed and $X$ is $T_1$
I've seen different proofs and I would like to know if this one is correct.
$U$ and $V_y$ are not necessarily disjoint in $X$. You only have $U \cap \{x,y \} = \{x \}$ and $V_y \cap \{x,y \} = \{y \}$.
The rest of the proof is fine.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4219952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Matrix exponential weirdness in WolframAlpha - it fails on diagonal matrices? So Wikipedia says this, which makes sense:
But Wolfram computed this:
Is this a bug or am I just extremely tired?
| In Mathematica:
MatrixExp[{{2, 0, 0, 0}, {0, 3, 0, 0}, {0, 0, 4, 0}, {0, 0, 0, 5}}]
(* {{E^2, 0, 0, 0}, {0, E^3, 0, 0}, {0, 0, E^4, 0}, {0, 0, 0, E^5}} *)
N@% //MatrixForm
$$\left(
\begin{array}{cccc}
7.38906 & 0. & 0. & 0. \\
0. & 20.0855 & 0. & 0. \\
0. & 0. & 54.5982 & 0. \\
0. & 0. & 0. & 148.413 \\
\end{array}
\right)$$
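For readers without Mathematica, the same check can be reproduced with a short NumPy sketch that computes $\exp(A)$ directly from the defining Taylor series $\sum_n A^n/n!$ (illustration only; the truncation at 60 terms is an arbitrary choice that is ample for these entries):

```python
import numpy as np

# Cross-check: for a diagonal matrix, exp(A) should just exponentiate the
# diagonal entries. We sum the defining Taylor series rather than relying
# on any library matrix-exponential routine.

def matrix_exp(A, terms=60):
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # next Taylor term A^n / n!
        result = result + term
    return result

A = np.diag([2.0, 3.0, 4.0, 5.0])
E = matrix_exp(A)
assert np.allclose(np.diag(E), np.exp([2.0, 3.0, 4.0, 5.0]))
assert np.allclose(E, np.diag(np.diag(E)))   # off-diagonal entries stay zero
```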
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4220081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Proving if statement is true by proving the converse is false Can we prove an if-statement by proving the converse is false. I.e.
$\lnot (B \implies A) \implies (A \implies B)$
I've spoken to several colleagues about this and I'm still not convinced this proof is both valid and sound.
The logical statement
$\lnot (B \rightarrow A) \implies (A \rightarrow B)$
is valid. This can be checked by means of a truth table. However, the antecedent implies $\lnot A$, which feels like it should violate soundness.
| *
*Logically, the non-quantified statement $$\text{Amy being a vegan implies that she eats beef}$$ and its converse $$\text{Amy eating beef implies that she is a vegan}$$ cannot both be false: if one is false, then the other must be vacuously true.
*However, the quantified statement $$\text{Every vegan eats beef}$$ and its converse $$\text{Every beef-eater is a vegan}$$ are both false.
Explanation
The open formula $$A(x)\rightarrow B(x)$$ might not have a definite truth value, but is often implicitly universally-quantified to actually mean the sentence $$\forall x\,\Big(A(x)\rightarrow B(x)\Big).$$ The crux of your dilemma, as suggested by DanielV, stems from conflating the above with the sentence $$A \rightarrow B.$$
* The sentence $$\lnot (B \rightarrow A) \rightarrow (A \rightarrow B)$$ is a tautology. Thus, $$\text{not }\Big(B{\implies}A\Big)$$ logically entails $$A{\implies}B.$$
* On the other hand, the sentence $$\lnot\forall x\,\Big(B(x)\rightarrow A(x)\Big)\,\rightarrow\,\forall x\,\Big(A(x)\rightarrow B(x)\Big)$$ is (satisfiable but) invalid. Thus, $$\text{not }\forall x\,\Big(B(x){\implies}A(x)\Big)$$ does not logically entail $$\forall x\,\Big(A(x){\implies}B(x)\Big).$$
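Both bullet points can be checked mechanically; the sketch below verifies the propositional tautology by truth table and falsifies the quantified analogue on a two-element domain (the predicates model the vegan/beef-eater example above):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# 1. The sentence-level statement is true under all four valuations.
assert all(implies(not implies(b, a), implies(a, b))
           for a, b in product([False, True], repeat=2))

# 2. The quantified version fails: take A(x) = "x is vegan",
#    B(x) = "x eats beef" over the domain {amy, bob}.
A = {"amy": True, "bob": False}    # amy is vegan
B = {"amy": False, "bob": True}    # bob eats beef
domain = ["amy", "bob"]

not_all_B_implies_A = not all(implies(B[x], A[x]) for x in domain)
all_A_implies_B = all(implies(A[x], B[x]) for x in domain)

# Antecedent true, consequent false: the quantified implication is invalid.
assert not_all_B_implies_A and not all_A_implies_B
```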
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4220208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
In the figure two regular pentagons are shown. Calculate "x". For reference:
In the figure two regular pentagons are shown. Calculate "x".
My progress: I marked the angles I found but I couldn't find the equation to solve.
$a_i = \frac{180(5-2)}{5}=108^\circ\\ \triangle BCG \text{ (isosceles)} \therefore \measuredangle CGB = 36^\circ$
If we find $\measuredangle DCF$ the problem is solved because $\measuredangle DJF$ is half of $\measuredangle DCF$
| $\angle{FDC} = \angle{DFC} = 180 - 108 = 72$
Therefore, $\angle{DCF} = 180 - 72 \times 2 = 36$
Therefore, $\angle{DCG} = 36 + 108 = 144$
Therefore, $\angle{CDG} = \angle{CGD} = \frac{180 - 144}2 = 18$
Therefore, $\angle{JDF} = 180 - 108 -18 = 54$
Therefore, $x = \angle{DJF} = 180 - 54 \times 2 = 72$.
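Since the solution is a chain of elementary angle arithmetic, it can be checked mechanically (a trivial sketch; variable names mirror the angles above, in degrees):

```python
# Re-run the angle chase from the answer as plain integer arithmetic.
FDC = DFC = 180 - 108
assert FDC == 72
DCF = 180 - 2 * FDC
assert DCF == 36
DCG = DCF + 108
assert DCG == 144
CDG = CGD = (180 - DCG) // 2
assert CDG == 18
JDF = 180 - 108 - CDG
assert JDF == 54
x = 180 - 2 * JDF
assert x == 72
```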
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4220509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
A counting question about pairs of three-digit numbers The question asks how many pairs of $3$ digit numbers exist such that when one is added to the other, we never have to use a "carry-over". For clarification, a carry-over occurs when you add $75 + 68$, because $5 + 8=13$, which contributes the "$1$" to the next round of addition "$7+6$". Note that you then need another carry-over because $7 + 6 + 1=14$. Any thoughts would be appreciated.
| Assuming I am understanding the question correctly, we are trying to find the number of non-ordered pairs $\{\overline{abc}, \overline{def}\}$ such that $\overline{abc} + \overline{def}$ does not require a "carry-over."
I will not give away the specifics of the solution so you can try things out on your own. My general recommendation is to look at the digits separately and count the number of combination possible for $\{c,f\}, \{b,e\}, \{a,d\}$. Note that $\{c,f\}$ and $\{b,e\}$ should have the same number of possible combinations (whereas $\{a,d\}$ is slightly different given that none of them can be $0$). Also note that to not have a "carry-over" when adding two digits means the sum of the two digits should be less than 10 (i.e. less than or equal to 9).
Multiplying the number of combinations for each digit gives you the total number of combinations. In addition, we need to handle the double-counted pairs. In this case, dividing by $2$ should suffice since addition is symmetrical.
Edit: oops, the comment below is right: when dividing by $2$, also account for the pairs in which both numbers are equal (these are counted only once, and the doubled number must itself produce no carry-over); also fixed the typo on $\leq 9$.
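A brute-force cross-check of the digit-by-digit idea for *ordered* pairs (an illustrative sketch; it deliberately leaves aside the unordered adjustment discussed above):

```python
# Count ORDERED pairs (m, n) of three-digit numbers whose sum never carries,
# and compare with the digit-wise product count.

def no_carry(m, n):
    while m or n:
        if m % 10 + n % 10 > 9:
            return False
        m, n = m // 10, n // 10
    return True

brute = sum(no_carry(m, n)
            for m in range(100, 1000)
            for n in range(100, 1000))

# Digit-wise count: units and tens each allow 55 ordered digit pairs with
# sum <= 9; the leading digits (1..9 each) allow 36 such pairs.
assert brute == 55 * 55 * 36        # 108900 ordered pairs
```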
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4220722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
For twin primes $p$ and $q$, prove there is an integer $a$ such that $p|(a^2-q)$ if and only if there is an integer $b$ such that $q|(b^2-p)$. For twin primes $p$ and $q$, prove there is an integer $a$ such that $p|(a^2-q)$ if and only if there is an integer $b$ such that $q|(b^2-p)$.
Algebraic substitution using $p=q+2$ and the definition of divisibility seems to go nowhere, are there other properties of twin primes that may aid in this proof?
| In a twin-prime pair $\ p\ $ and $\ q\ $, necessarily either $\ p\ $ is of the form $\ 4k+1\ $ and $\ q\ $ of the form $\ 4k+3\ $, or vice versa. In this case, quadratic reciprocity gives $$\left(\frac{p}{q}\right)=\left(\frac{q}{p}\right)$$ since $\frac{p-1}{2}\cdot\frac{q-1}{2}$ is even, which is exactly the content of the claim.
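A computational spot check of the claim via Euler's criterion (illustration only; the bound $2000$ and the helper functions are ad hoc choices):

```python
# For each twin-prime pair (p, q), check that q is a square mod p exactly
# when p is a square mod q, using Euler's criterion a^((p-1)/2) mod p.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_qr(a, p):
    # Is a a nonzero quadratic residue mod the odd prime p?
    return pow(a % p, (p - 1) // 2, p) == 1

twins = [(p, p + 2) for p in range(3, 2000)
         if is_prime(p) and is_prime(p + 2)]
assert len(twins) >= 10                 # sanity: plenty of twin pairs below 2000
for p, q in twins:
    assert is_qr(q, p) == is_qr(p, q)
```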
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4220957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $A,B \in \left[ 0,\frac{\pi}{2}\right]$ and $\sin{A}\gt\cos{B}$, show that $A+B\gt\frac{\pi}{2}$. The question was this:
If $A,B \in \left[ 0,\frac{\pi}{2}\right]$ and $\sin{A}\gt\cos{B}$, show that $A+B\gt\frac{\pi}{2}$.
Under the topic of trigonometry - Radian & quadrants. And this is not from a textbook. This is from a question paper.
I tried, but I have no idea on how to do this. Please someone help me out :(
| The first observation is that $\sin$ is monotonically increasing on $[0,\pi/2]$.
Therefore by our hypothesis and considering the addition
formula of sines we get
$$ \sin A > \cos B = \cos B \sin\left(\frac{\pi}{2}\right) - \sin B \cos \left(\frac{\pi}{2}\right) = \sin\left(\frac{\pi}{2} - B\right)$$
Since $\sin$ is monotonic (note that $\pi/2 - B \in [0,\pi/2]$) we get $A > \pi/2 - B$ as expected.
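A random spot check of the statement (numerical reassurance only, not a proof):

```python
import math, random

# Whenever sin(A) > cos(B) with A, B in [0, pi/2], we expect A + B > pi/2.
# The tiny tolerance guards against floating-point ties at the boundary.

random.seed(0)
for _ in range(10_000):
    A = random.uniform(0, math.pi / 2)
    B = random.uniform(0, math.pi / 2)
    if math.sin(A) > math.cos(B):
        assert A + B > math.pi / 2 - 1e-9
```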
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4221120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Computing the number of ordered pairs of integers $(k_1, k_2)$ such that $k_1k_2 \leq N$ for a given value of $N$ I'm asking this question in context with a more larger computer science problem.
Let $k_1$ and $k_2$ be two natural numbers such that their product $k_1k_2 \leq N$. What is the number of possible values of the ordered pair $(k_1,k_2)$ such that the product is at most $N$?
I am thinking that it has to do with the number of divisors of numbers smaller than $N$, but don't know how to approach it. Since $k_1k_2 \leq N$ implies $k_1 \leq N/k_2$, we could look for all natural values of $k_2$, and then find an inequality for $k_1$; add up the number of possible values of $k_1$ for each mentioned case. However, this is a computational solution, and I am looking for an analytic one.
| If for every natural integer $n$, $d(n)$ is the number of (positive) divisors of $n$, then the number of ordered pairs $(k_1,k_2)$ such that $k_1k_2\leq N$ is simply
$$\sum_{n=1}^N d(n)$$
as for every divisor $k$ of $n$, $n/k$ also is an integer, and every ordered pair has been counted once and only once. This method is called filtering.
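As a cross-check (illustration only), three ways of counting the pairs must agree: brute force, the divisor-sum identity above, and the equivalent sum of $\lfloor N/k \rfloor$ (for each fixed $k_1 = k$ there are exactly $\lfloor N/k \rfloor$ valid $k_2$):

```python
def brute(N):
    return sum(1 for k1 in range(1, N + 1)
                 for k2 in range(1, N + 1) if k1 * k2 <= N)

def divisor_sum(N):
    d = [0] * (N + 1)
    for k in range(1, N + 1):          # sieve-style divisor counting
        for m in range(k, N + 1, k):
            d[m] += 1
    return sum(d[1:])

def floor_sum(N):
    return sum(N // k for k in range(1, N + 1))

for N in (1, 10, 100, 500):
    assert brute(N) == divisor_sum(N) == floor_sum(N)
```

The $\sum_k \lfloor N/k\rfloor$ form is also how one would actually compute the quantity efficiently.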
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4221213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Rewrite expression involving the hypergeometric function to prove it is real-valued Let
$$
f(\varphi) = \sin(\varphi)^{-i} \, {}_2F_1\!\left(\frac{s - i}{2}, \frac{s - i}{2}; s; \cos^2(\varphi)\right)
$$
where $s \in (1, 2)$ is a real parameter and $\varphi$ is restricted to $(0, \pi/2)$. Prove that $f$ is real-valued.
Background: I had Mathematica solve an ODE in $\varphi$, and this was the output (after some simplification). An argument involving the ODE itself shows that a real-valued solution exists.
I was pleasantly surprised when I plugged in various values of $s$ and $\varphi$ and all the outputs were real-valued. However, I would like to analytically show that this must be real-valued.
I tried various known transformations involving ${}_2F_1, \Gamma,$ and $\sin$, but couldn't get it to work.
Any help is much appreciated!
| By http://dlmf.nist.gov/15.8.E1
\begin{align*}
&\sin ^{-i} (\varphi) \times {}_2F_1 \!\left( {\frac{{s - i}}{2},\frac{{s - i}}{2};s;\cos ^2 (\varphi) } \right) \\ & = \sin ^{-i} (\varphi) (1 - \cos ^2 (\varphi) )^{ - \frac{{s - i}}{2}} \times{}_2F_1 \!\left( {\frac{{s - i}}{2},s - \frac{{s - i}}{2};s;\frac{{\cos ^2 (\varphi) }}{{\cos ^2 (\varphi) - 1}}} \right)
\\ &
= \sin ^{ - s}(\varphi) \times {}_2F_1 \!\left( {\frac{{s - i}}{2},\frac{{s + i}}{2};s; - \cot ^2 (\varphi) } \right).
\end{align*}
Now
\begin{align*}
& \overline {{}_2F_1\! \left( {\frac{{s - i}}{2},\frac{{s + i}}{2};s; - \cot ^2 (\varphi) } \right)} = {}_2F_1 \!\left( {\overline {\frac{{s - i}}{2}} ,\overline {\frac{{s + i}}{2}} ;s; - \cot ^2 (\varphi) } \right)
\\ &
= {}_2F_1 \! \left( {\frac{{s + i}}{2},\frac{{s - i}}{2};s; - \cot ^2 (\varphi) } \right) = {}_2F_1 \!\left( {\frac{{s - i}}{2},\frac{{s + i}}{2};s; - \cot ^2 (\varphi) } \right),
\end{align*}
showing that what multiplies $\sin ^{ - s}(\varphi)$ (which is real) is real.
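As a numerical sanity check of the reality claim (illustration only): the $ {}_2F_1$ series converges directly here since $z=\cos^2\varphi<1$ on $(0,\pi/2)$, and the sample values of $s$ and $\varphi$ below are arbitrary:

```python
import cmath, math

# Evaluate f(phi) = sin(phi)^(-i) * 2F1((s-i)/2, (s-i)/2; s; cos^2 phi)
# from the hypergeometric series and confirm the imaginary part vanishes.

def hyp2f1(a, b, c, z, terms=400):
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

s, phi = 1.5, 1.0                          # s in (1, 2), phi in (0, pi/2)
a = (s - 1j) / 2
z = math.cos(phi) ** 2
prefactor = cmath.exp(-1j * math.log(math.sin(phi)))   # sin(phi)^(-i)
f = prefactor * hyp2f1(a, a, s, z)
assert abs(f.imag) < 1e-9 * abs(f.real)    # f is real to machine precision
```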
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4221385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Let $f$ be a twice-differentiable function on $\mathbb{R}$ such that $f''$ is continuous. Prove that $f(x) f''(x) < 0$ cannot hold for all $x.$ Let $f$ be a twice-differentiable function on $\mathbb{R}$ such that $f''$ is continuous. Prove that
$f(x) f''(x) < 0$ cannot hold for all $x.$
I have been able to think of specific examples of $f(x)$ in which $f(x)f''(x) <0$ does not hold, but I have not been able to come up with specific values of $x$ for which $f(x)f''(x)<0$ does not hold.
Any help is greatly appreciated!
| Assume wlog that $f''(0)>0$. Then $f(0)<0$, and since $f$ and $f''$ are continuous and never vanish (their product is strictly negative), neither factor can change sign: $f''(x)>0>f(x)$ for all $x\in\Bbb R$, so $f$ is strictly convex. Pick $a<b$ with $f(a)\ne f(b)$. Then the line through $(a,f(a))$ and $(b,f(b))$ intersects the $x$-axis. By convexity, $f$ lies above this line outside $[a,b]$, hence must assume positive values, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4221545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Regular Pentagon and Complex Numbers
$ABCDE$ is a regular pentagon of center $O$.
Let $$\vec{V}=\vec{OA}+\vec{OB}+\vec{OC}+\vec{OD}+\vec{OE}$$
Express $\vec{V}$ in terms of $\vec{OA}$, and then in terms of $\vec{OB}$. Conclude.
I tried to use Chasles relation and draw the vectors and add them up (4th vertex of the parm), but I was not able to reach a precise answer, and I feel like there is a much easier way to solve this.
I also tried giving each point a pair of coordinates but I couldn't reach the answer.
So this is my main question, finding $\vec{V}$ with simple math. (No matrices, maybe only vectors).
This question came as an activity before introducing the $n^{th}$ roots of a complex number, so I guess the conclusion is related to complex numbers, but finding $\vec{V}$ should be done algebraically.
Any help is appreciated
| Assuming the vectors are in $\mathbb{R}^2$, then we can use the rotation matrix
$$M=\begin{bmatrix} \cos\frac{2\pi}{5} & -\sin\frac{2\pi}{5}\\\sin\frac{2\pi}{5}&\cos\frac{2\pi}{5}\end{bmatrix}$$
to express all the vectors in terms of $\overrightarrow{OA}$. The expression for $\overrightarrow{V}$ simplifies to
$$\overrightarrow{V}=\left(M^0+M^1+M^2+M^3+M^4\right)\overrightarrow{OA}$$
$$\overrightarrow{V}=\begin{bmatrix} \sum_{k=0}^4 \cos\frac{2\pi k}{5}&\sum_{k=0}^4 -\sin\frac{2\pi k}{5}\\\sum_{k=0}^4 \sin\frac{2\pi k}{5}&\sum_{k=0}^4 \cos\frac{2\pi k}{5}\end{bmatrix}\overrightarrow{OA}$$
You can geometrically prove that this final matrix is the $2\times 2$ zero matrix by positioning a unit pentagon with two vertices on $(0,0)$ and $(1,0)$ and then find the horizontal/vertical displacement as you go around the pentagon (instead of this geometric approach, you can also use complex numbers, but that defeats the point). Hence,
$$\overrightarrow{V}=\begin{bmatrix}0&0 \\ 0&0\end{bmatrix}\overrightarrow{OA}=\vec{0}$$
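A quick numerical confirmation that $M^0+M^1+\cdots+M^4$ is indeed the zero matrix (illustration only), so the five vertex vectors sum to $\vec{0}$ regardless of $\overrightarrow{OA}$:

```python
import numpy as np

# M is rotation by 2*pi/5; the sum of its first five powers should vanish.
t = 2 * np.pi / 5
M = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

S = sum(np.linalg.matrix_power(M, k) for k in range(5))
assert np.allclose(S, np.zeros((2, 2)))
```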
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4221648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that the area of $M$ is bigger than the area of $\mathbb{S}^2$ Given $\mathbb{S}^2$ the unit sphere in $\mathbb{R}^3$ and let $f:\mathbb{S}^2\to \mathbb{R}$ be a $C^1$ function such that $f(x)\geq1$ for every $x\in\mathbb{S}^2$ and define $M=\{xf(x)|x\in\mathbb{S}^2\}$
1. Prove that $M$ is a smooth manifold of dimension $2$.
2. Prove that the area of $M$ is bigger than the area of $\mathbb{S}^2$.
Attempt:
1. I proved it by composing a parametrization of the sphere with $f$ and showing that the result is a regular parametrization.
2. I've thought of doing the following:
$r:U\to\mathbb{S}^2$ map of the unit sphere
$\int_M 1=\int_U \sqrt{\Gamma \left (\frac{d g}{dx_i} \right )}$ where $g(x_1,x_2)=r(x_1,x_2)f(r(x_1,x_2))$ but got stuck here and somehow to use the fact that $f(x)\geq1$
any hint?
| Your idea can be made to work but it involves some calculations. Let's write vectors in $\mathbb{R}^3$ as column vectors and choose $X \colon U \rightarrow \mathbb{R}^3$ such that $U \subseteq \mathbb{R}^2$ is an open set and $X$ is a parametrization of almost all of $\mathbb{S}^2$. For example, $X$ can be defined using spherical coordinates but the specific form of $X$ does not matter. Then $Y \colon U \rightarrow \mathbb{R}^3$ given by
$$ Y(p) = \underbrace{f(X(p))}_{:=h(p)} X(p) $$
is a parametrization of almost all of $M$. First of all, note that $\| X(p) \|^2 = X^T \cdot X = 1$ for all $p \in U$. Differentiating this identity, we obtain
$$ X^T \cdot dX = 0 $$
(where $dX$ is the $3 \times 2$ matrix representing the differential).
Now, using the chain rule, we have
$$ dY = X \cdot \left( \nabla h \right)^T + h \, dX $$
and so
$$ dY^T \cdot dY = \left( \nabla h \cdot X^T + h dX^T\right) \cdot \left(X \cdot \left( \nabla h \right)^T + h dX\right) = \nabla h \cdot \left( \nabla h \right)^T + h \left( \nabla h \cdot \underbrace{X^T \cdot dX}_{0} + \underbrace{dX^T \cdot X}_{0} \cdot \left( \nabla h \right)^T \right) + h^2 (dX^T \cdot dX) =
(\nabla h) \cdot \left( \nabla h \right)^T + h^2 (dX^T \cdot dX). $$
Using the matrix determinant lemma we get
$$ \det \left( dY^T \cdot dY \right) = \left( 1 + \left( \nabla h \right)^T \left( h^2 \left( dX^T \cdot dX \right)\right)^{-1} \cdot \nabla h \right) h^4 \det \left( dX^T \cdot dX \right) = \left( h^4 + h^2 \left< \nabla h, \left( dX^T \cdot dX \right)^{-1} \nabla h \right>\right) \det \left( dX^T \cdot dX \right). $$
Note that $\left( dX^T \cdot dX \right)^{-1}$ is a positive definite symmetric matrix and so $\left< \nabla h, \left( dX^T \cdot dX \right)^{-1} \nabla h \right> \geq 0$. Together with the fact that $h \geq 1$, we can conclude that
$$ \det \left( dY^T \cdot dY \right) \geq \det \left( dX^T \cdot dX \right). $$
Finally,
$$ \textrm{Area}(M) = \iint_{U} \sqrt{\det \left( dY^T \cdot dY \right)} \geq \iint_{U} \sqrt{\det \left( dX^T \cdot dX \right)} = \textrm{Area}(\mathbb{S}^2). $$
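A crude numerical illustration of the inequality (not part of the proof; the grid sizes and the sample $f$ are arbitrary choices): compute the area of $M$ by a midpoint Riemann sum in spherical coordinates, with finite-difference tangent vectors.

```python
import numpy as np

def area(f, n_theta=400, n_phi=800):
    dt, dp = np.pi / n_theta, 2 * np.pi / n_phi
    theta = (np.arange(n_theta) + 0.5) * dt     # midpoints avoid the poles
    phi = (np.arange(n_phi) + 0.5) * dp
    T, P = np.meshgrid(theta, phi, indexing="ij")

    def Y(T, P):
        X = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)])
        return f(X) * X

    eps = 1e-6                                  # finite-difference tangents
    Yt = (Y(T + eps, P) - Y(T - eps, P)) / (2 * eps)
    Yp = (Y(T, P + eps) - Y(T, P - eps)) / (2 * eps)
    return np.sum(np.linalg.norm(np.cross(Yt, Yp, axis=0), axis=0)) * dt * dp

sphere = area(lambda X: np.ones_like(X[2]))     # f = 1 recovers S^2
assert abs(sphere - 4 * np.pi) < 1e-2

bigger = area(lambda X: 1 + X[2] ** 2 / 2)      # a sample f >= 1
assert bigger > 4 * np.pi
```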
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4221891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Shouldn't tan(π/2) = ∞? We will first prove that $\sin(x)/\cos(x) = \tan(x)$.
By definition, $\sin(x) = O/H$, $\cos(x) = A/H$ and $\tan(x) = O/A$, where $O$,$A$ and $H$ are the opposite, adjacent and hypotenuse of a right angled triangle.
$\sin(x)/\cos(x) = O/H ÷ A/H = O/H × H/A = OH/AH = O/A = \tan(x)$
So $\tan(π/2) = \sin(π/2)/\cos(π/2) = 1/0$, which is undefined. However, we know that it can either be $∞$ or $-∞$. Therefore, $\tan(π/2)$ is equal to $∞$ or $-∞$.
If $\tan(π/2) = -∞$, then the limit of $\arctan(x)$ as $x$ approaches $-∞$ should equal $π/2$. However, it equals $-π/2$ instead. Therefore, $\tan(π/2)$ cannot be $-∞$, so it must be $∞$.
Is there any mistake in my working?
| What @ultralegend5385 has said is pretty much correct. You cannot define $\infty$ as undefined (i.e., $\infty \neq$ undefined). The term undefined basically denotes that the value does not exist, whereas $\infty$ is a way of saying something is endless.
For example, say an arbitrary function $f(x)$ has the domain $(-\infty, \infty)$. That is just stating the domain is endless. Now, that is not the same as being undefined, is it?
From what I believe, you might be looking for the word asymptote or asymptotic in terms of a graph:
The graph of $y = \tan{x}$ has its first positive asymptote at $x = \frac{\pi}{2}$, and then one at every $\pm \pi$ recurring after that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4222019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Function related problem Let $g:N \to N$ be defined as
$g\left( {3n + 1} \right) = 3n + 2$
$g\left( {3n + 2} \right) = 3n + 3$
$g\left( {3n + 3} \right) = 3n + 1$, for all $n \ge 0$.
Then which of the following statements is true?

1. There exists an onto function $f:N \to N$ such that $f\circ g=f$.
2. There exists a one-one function $f:N \to N$ such that $f\circ g=f$.
3. $g\circ g\circ g=g$.
4. There exists a one-one function $f:N \to N$ such that $g\circ f=f$.
I tried $f(x)=x+1$; the first two conditions are satisfied, but the third is not, since it would instead require $f(x)=x-2$.
| I won't answer this question for you, but I'll try to get you started, because from what I can tell, you don't see what it's about.
The first couple of options ask about a function such that $f\circ g=f$. What would such a function be like? We'd have
$$
\begin{align}
f(3n+1)&=f(g(3n+1))=f(3n+2)\\
f(3n+2)&=f(g(3n+2))=f(3n+3)\\
f(3n+3)&=f(g(3n+3))=f(3n+1)
\end{align}
$$
Is there a one-to-one function or an onto function that satisfies these requirements?
For the other two parts, proceed as above. Start by unraveling the formulas to see what the question is really asking.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4222186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\lim\limits_{x\rightarrow 0}\frac{\sin(x)}{\exp(x)-\exp(\sin(x))}$?
Calculate this limit
$$\lim\limits_{x\rightarrow 0}\frac{\sin(x)}{\exp(x)-\exp(\sin(x))}$$
My attempt: using the limit development : we find
$$\exp(\sin(x)-x)=\exp(x-\frac{x^{3}}{3!}+o (x^{3})-x)=\exp(-\frac{x^3}{3!}+o(x^3))=1-\frac{x^3}{3!}+o(x^3)$$
So:
\begin{align}
\lim\limits_{x\rightarrow 0}\frac{\sin(x)}{\exp(x)-\exp(\sin(x))}
&=\lim\limits_{x\rightarrow 0}\frac{\sin(x)}{\exp(x)}\left(\frac{1}{\frac{x^3}{3!}+o(x^3)}\right)\\\
& \sim \lim\limits_{x\rightarrow 0}\left(\frac{\sin(x)}{x}\right)\left(\frac{3!}{x^{2}\exp(x)}\right)=\pm\infty.
\end{align}
Is this correct?
| Your calculations are correct except for your final result which should obviously be only $+\infty$.
Here another way which uses the standard limits $\lim_{t\to 0}\frac{e^t-1}{t}= 1$ and $\lim_{t\to 0}\frac{\sin t}{t}=1$.
First of all note that for $x>0$ you have $\sin x < x$ and for $x<0$ you have $x < \sin x$. Hence
$$\frac{\sin x}{e^x - e^{\sin x}} >0 \text{ for } x \in \left[-\frac{\pi}2 , \frac{\pi}2\right]\setminus\{0\}$$
Now, just consider the reciprocal
\begin{eqnarray*} \frac{e^x - e^{\sin x}}{\sin x}
& = & \frac{e^x -1 - \left(e^{\sin x}-1\right)}{\sin x} \\
& = & \frac{e^x -1}{x}\cdot \frac{x}{\sin x} - \frac{e^{\sin x}-1}{\sin x} \\
& \stackrel{x \to 0}{\longrightarrow} & 1\cdot 1 - 1 = 0
\end{eqnarray*}
Hence,
$$\lim_{x\to 0} \frac{\sin x}{e^x - e^{\sin x}} = +\infty$$
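A quick numerical illustration (sample points chosen arbitrarily): the quotient is already large and positive on both sides of $0$, consistent with the limit $+\infty$:

```python
import math

# sin(x) / (e^x - e^{sin x}) blows up with a POSITIVE sign on both sides
# of 0, matching the sign analysis in the answer.
for x in (1e-2, -1e-2, 1e-3, -1e-3):
    q = math.sin(x) / (math.exp(x) - math.exp(math.sin(x)))
    assert q > 1e3
```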
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4222294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Why can’t a non-constant polynomial have a constant interval? Given a polynomial that is not constant, of course, it doesn’t contain a constant interval. But how can we prove it?
Since the polynomial is differentiable over R, I came up with a solution that uses Lagrange mean value theorem n times and reduces the nth derivative to a constant. Since the leading term is not zero, there can not be a zero in the nth derivative, and that contradicts the mean value theorem. Therefore, the polynomial must not contain a constant interval.
However, I do realize that this solution is a bit complicated, so is there a simpler solution(possibly elementary) that can prove this?
| A polynomial of degree $n \gt 0$ has at most $n$ roots. A constant polynomial is of degree $0$.
A non-constant polynomial $p$… can't be constant. Therefore it is of degree $n\gt 0$. If it were constant and equal to $a$ on an interval of strictly positive length, $p-a$ would have an infinite number of roots. A contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4222411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Is $∃!x(x^2 = 1)$ true? for a question that I have it is asking for the truth value of $∃!x(x^2 = 1)$ my instincts tell me no because there are 2 values that make it true. Am I right?
| It depends on the domain over which $x$ ranges. If the domain is $x > 0$ (or $x < 0$), then it's true. But if $x$ ranges over all real numbers or all complex numbers, then it's false, since there are $2$ solutions: $x = \pm 1$. And if, say, the domain is $x < -3$, then there is no solution at all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4222515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximizing $x_1+x_2+\cdots+x_{10}$, where each $x_i$ lies between $-1$ and $1$, and $x_1^3+x_2^3+\cdots+x_{10}^3=0$
Let $x_1, x_2, \ldots, x_{10}$ be ten quantities each lying between $-1$ and $1$, and the sum of cubes of these ten quantities is zero. Find the maximum value of $x_1+x_2+\cdots+x_{10}$.
I have tried substituting $x_1=\sin(y_1)$, $x_2=\sin(y_2)$, and so on, and then used the identity for $\sin 3x$ to simplify the cubes leading to
$$x_1 + x_2 + \cdots + x_{10} = \frac13 (\sin 3y_1 + \sin 3y_2 +\cdots + \sin 3y_{10})$$
Now is the statement
$$\frac13 (\sin 3y_1 + \sin 3y_2 +\cdots + \sin 3y_{10}) \leq \frac{10}{3}$$
right? Is the answer $10/3$? How do I maximise
$$\frac13 (\sin 3y_1 + \sin 3y_2 +\cdots + \sin 3y_{10})$$ under the constraints?
| @RishiShekher What you have done is establish an upper bound; to solve the problem, you must also show the bound is attained. The error in your argument is that you never check whether equality can occur under the constraint. Your argument is analogous to this: we all know the inequality $\left(x+\frac{1}{x}\right)\ge 2$ holds for all $x>0$, but the same quantity also satisfies $x+\frac{1}{x}\ge 1$. The difference is that for the first, equality can be attained, whereas for the second it can't.
P.S.: This is not an answer, just a comment that points out the error; the comment box couldn't handle the word count.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4222733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Maximum value of a expression Problem: If $\alpha+\beta+\gamma=20$, then what is $\max(\sqrt{3\alpha+5}+\sqrt{3\beta+5}+\sqrt{3\gamma+5})$?
My attempt: Assume $\alpha \geq \beta \geq \gamma$. Then $\alpha+\beta+\gamma \leq 3\alpha$ so $25 \leq 3\alpha+5$.
Also $\sqrt{3\alpha+5}+\sqrt{3\beta+5}+\sqrt{3\gamma+5}\leq 3\sqrt{3\alpha+5}$
At this point, I don't have idea what to do next. What should I do next?
Am I doing it incorrectly?
| Use Cauchy-Bunyakovsky-Schwarz inequality:
$$\left(\sum_{i=1}^3u_iv_i\right)^2\le \left(\sum_{i=1}^3u_i^2\right)\left(\sum_{i=1}^3v_i^2\right)$$
We choose $u_1=\sqrt{3\alpha+5}$ and so on, and $v_i=1$. Then $$\sqrt{3\alpha+5}+\sqrt{3\beta+5}+\sqrt{3\gamma+5}\le\sqrt{(3\alpha+5+3\beta+5+3\gamma+5)3}=15$$
The equality is achieved when all the terms are equal, for $\alpha=\beta=\gamma=20/3$.
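A spot check of the bound (illustration only): the expression equals $15$ at $\alpha=\beta=\gamma=20/3$ and never exceeds $15$ on random feasible triples:

```python
import math, random

def expr(a, b, c):
    return math.sqrt(3*a + 5) + math.sqrt(3*b + 5) + math.sqrt(3*c + 5)

# Equality case from the answer.
assert abs(expr(20/3, 20/3, 20/3) - 15) < 1e-12

random.seed(1)
for _ in range(10_000):
    a = random.uniform(-5/3, 70/3)     # keeps 3a + 5 >= 0
    b = random.uniform(-5/3, 70/3)
    c = 20 - a - b
    if 3*c + 5 >= 0:                   # skip infeasible samples
        assert expr(a, b, c) <= 15 + 1e-9
```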
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4223021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Show that $\langle x,x\rangle > 0$ if $x \neq0$ Let $x=(x_1,x_2)$ and $ y=(y_1,y_2)$ be vectors in the vector space $C^2$ over $C$ and define $\langle \cdot,\cdot\rangle : C^2 \times C^2 \rightarrow C$ by
$ \langle x,y\rangle =3x_1 \overline{y_1}+(1+i)x_1 \overline{y_2}+(1-i)x_2 \overline{y_1}+x_2 \overline{y_2} $
Show that $\langle x,x\rangle > 0$ if $x \neq0$.
So what I have so far is:
$\begin{aligned} \langle x,x\rangle &= 3x_1 \overline{x_1}+(1+i)x_1 \overline{x_2}+(1-i)x_2 \overline{x_1}+x_2 \overline{x_2} \\
&= 3|x_1|^2 + |x_2|^2 +x_1 \overline x_2 + ix_1\overline x_2 - ix_2 \overline x_1 +x_2 \overline x_1 \\
&=3|x_1|^2 + |x_2|^2 + (1+i)(x_1 \overline x_2 - i\overline x_1 x_2)
\end{aligned}$
Now $3|x_1|^2 + |x_2|^2 >0$ if $x \neq0$, but I don't know how to show that $(1+i)(x_1 \overline x_2 - i\overline x_1 x_2) >0$ as well.
Any guidance would be appreciated.
Thank you.
| Let $x_1=a+bi$, $x_2=c+di$. Then
$$(1+i)(a+bi)(c-di)+(1-i)(a-bi)(c+di)=$$
$$(1+i)((ac+bd)+(bc-ad)i)+(1-i)((ac+bd)+(ad-bc)i)=$$
$$((ac+bd)-(bc-ad))+((ac+bd)+(bc-ad))i+((ac+bd)-(ad-bc))+((ad-bc)-(ac+bd))i=$$
$$(ac+bd-bc+ad+ac+bd-ad+bc)+(ac+bd+bc-ad+ad-bc-ac-bd)i=$$
$$2ac+2bd$$
So it isn't always greater than $0$ by itself; you need to show that
$$3(a^2+b^2)+(c^2+d^2)>-(2ac+2bd)$$
as long as at least one of $a,b,c,d\neq 0$
Can you finish?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4223193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Proving an integral equivalence involving floor and ceiling functions During the course of looking at the Euler–Mascheroni constant I have run across the following result:
$$\gamma
= \int \limits_1^\infty \Bigg( \frac{1}{\lfloor x \rfloor} - \frac{1}{x} \Bigg) \ dx
= \int \limits_1^\infty \Bigg( \frac{\lceil x \rceil}{x^2} - \frac{1}{x} \Bigg) \ dx.$$
Representation of this constant using the first integral is a well-known result. What is the simplest way to prove the equivalence of the two integrals?
| This is actually easier than it seems, since for any positive integer $k$,
\begin{align*}
\int_k^{k+1} \bigg( \frac{\lceil x \rceil}{x^2} - \frac{1}{x} \bigg) \, dx
&= \int_k^{k+1} \bigg( \frac{k+1}{x^2} - \frac{1}{x} \bigg) \, dx = \frac1k - \log\bigg( 1+\frac1k \bigg) \\
\int_k^{k+1} \bigg( \frac{1}{\lfloor x \rfloor} - \frac{1}{x} \bigg) \, dx &= \int_k^{k+1} \bigg( \frac{1}{k} - \frac{1}{x} \bigg) \, dx = \frac1k - \log\bigg( 1+\frac1k \bigg).
\end{align*}
Then one can sum both sides over all positive integers $k$ (yielding the Euler–Mascheroni constant as it turns out).
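The computation above can be checked numerically; the sketch below (illustrative only) integrates both integrands over the intervals $[k,k+1]$ with Simpson's rule, compares them with the closed form $\frac1k-\log\big(1+\frac1k\big)$, and sums that closed form to approximate $\gamma \approx 0.5772$:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def closed(k):
    return 1 / k - math.log(1 + 1 / k)

max_err = 0.0
for k in range(1, 51):
    floor_int = simpson(lambda x: 1 / k - 1 / x, k, k + 1)           # floor(x) = k on (k, k+1)
    ceil_int = simpson(lambda x: (k + 1) / x**2 - 1 / x, k, k + 1)   # ceil(x) = k+1 on (k, k+1)
    max_err = max(max_err, abs(floor_int - closed(k)), abs(ceil_int - closed(k)))

gamma_approx = sum(closed(k) for k in range(1, 200001))  # partial sum -> Euler-Mascheroni
```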
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4223474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Integrate via substitution: $x^2\sqrt{x^2+1}\;dx$ I need to integrate the following using substitution:
$$
\int x^2\sqrt{x^2+1}\;dx
$$
My textbook has a similar example:
$$
\int \sqrt{x^2+1}\;x^5\;dx
$$
They integrate by splitting $x^5$ into $x^4\cdot x$ and then substituting with $u=x^2+1$:
$$
\int \sqrt{x^2+1}\;x^4\cdot x\;dx\\
=\frac{1}{2}\int \sqrt{u}\;(u-1)^2\;du\\
=\frac{1}{2}\int u^{\frac{1}{2}}(u^2-2u+1)\;du\\
=\frac{1}{2}\int u^{\frac{5}{2}}-2u^{\frac{3}{2}}+u^{\frac{1}{2}}\;du\\
=\frac{1}{7}u^{\frac{7}{2}}-\frac{2}{5}u^{\frac{5}{2}}+\frac{1}{3}u^{\frac{3}{2}}+C
$$
So far so good. But when I try this method on the given integral, I get the following:
$$
\int x^2\sqrt{x^2+1}\;dx\\
=\frac{1}{2}\int \sqrt{x^2+1}\;x\cdot x\;dx\\
=\frac{1}{2}\int \sqrt{u}\;\sqrt{u-1}\;du\;(u=x^2+1)\\
=\frac{1}{2}\int u^{\frac{1}{2}}(u-1)^\frac{1}{2}\;du
$$
Here is where it falls down. I can't expand the $(u-1)^\frac{1}{2}$ factor like the $(u-1)^2$ factor above was, because it results in an infinite series. I couldn't prove, but I think any even exponent for the $x$ factor outside the square root will cause an infinite series to result. Odd exponents for $x$ will work, since it will cause the $(u-1)$ term to have a positive integer exponent.
How should I proceed? I don't necessarily want an answer. I just want to know if I'm missing something obvious or if it is indeed above first year calculus level and probably a typo on the question.
| I have used the trigonometric substitution $x=\tan\theta$ and proceeded to get the same answer as @Aman Kushwaha got via completing the square and @user773458 got via hyperbolic trig identities. Below are my steps for reference:
$$
\int x^2\sqrt{x^2 +1}\,dx \\
=\int \tan^2\theta \sqrt{\tan^2 \theta + 1}\,\sec^2 \theta \;d\theta \\
=\int \tan^2 \theta \sqrt{\sec^2 \theta}\,\sec^2 \theta \;d\theta \\
=\int \tan^2 \theta \sec^3 \theta \;d\theta \\
=\int \sec^3 \theta(\sec^2 \theta - 1) \; d\theta \\
=\int \sec^5 \theta - \sec^3 \theta \; d\theta \\
=\int \sec^5 \theta \; d\theta - \int \sec^3 \theta \; d\theta \\
=\sec^3 \theta \tan \theta \; - \; 3 \int \sec^3 \theta \tan^2 \theta \; d\theta - \Big(\sec \theta \tan \theta \; - \; \int \sec \theta \tan^2 \theta \; d\theta\Big) \\
\therefore 4 \int \sec^3 \theta \tan^2 \theta \; d\theta = \sec^3 \theta \tan \theta \; - \Big(\sec \theta \tan \theta \; - \; \int \sec \theta \tan^2 \theta \; d\theta\Big) \\
\therefore \int \sec^3 \theta \tan^2 \theta \; d\theta = \frac{1}{4} \; \bigg(\sec^3 \theta \tan \theta \; - \Big(\sec \theta \tan \theta \; - \; \int \sec \theta(\sec^2 \theta - 1)\; d\theta\Big)\bigg) \\
= \frac{1}{4} \; \bigg(\sec^3 \theta \tan \theta \; - \Big(\sec \theta \tan \theta \; - \; \int \sec^3 \theta \; d\theta \; + \int \sec \theta \; d\theta\Big)\bigg) \\
= \frac{1}{4}\Big(\sec^3 \theta \tan \theta \; - \; \frac{1}{2}\big(\sec \theta \tan \theta + \ln|\sec \theta + \tan \theta|\big)\Big) + C\\
=\frac{1}{4}\Big(\sqrt{\tan^2 \theta + 1}(\tan^2 \theta + 1)\tan \theta \; - \; \frac{1}{2}\big(\sqrt{\tan^2 \theta + 1}\,\tan \theta + \ln | \sqrt{\tan^2 \theta + 1} + \tan \theta|\big)\Big) + C\\
=\frac{1}{4}\Big(\sqrt{x^2 + 1}(x^2 + 1)x \; - \; \frac{1}{2}\big(\sqrt{x^2 + 1}\,x + \ln | \sqrt{x^2 + 1} + x|\big)\Big) + C\\
=\frac{(2x^3+2x)\sqrt{x^2 +1}-x\sqrt{x^2 + 1} - \ln|\sqrt{x^2 + 1} + x|}{8} + C \\
=\frac{(2x^3 + x)\sqrt{x^2 + 1} - \ln|\sqrt{x^2 + 1} + x|}{8} + C \\
$$
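As a sanity check on the final antiderivative (an illustration, not part of the derivation), a central-difference derivative of the result should reproduce the integrand $x^2\sqrt{x^2+1}$:

```python
import math

def F(x):
    # the antiderivative found above, with C = 0
    return ((2*x**3 + x) * math.sqrt(x**2 + 1)
            - math.log(abs(math.sqrt(x**2 + 1) + x))) / 8

def f(x):
    # the original integrand
    return x**2 * math.sqrt(x**2 + 1)

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (-3.0, -1.5, -0.2, 0.0, 0.7, 2.0, 5.0))
```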
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4223596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 3
} |
With $(AB^{-1})^T$, what comes first, the multiplication or the transpose? In part of the question we have $(AB^{-1})^T$.
My first thought was that I multiply $A$ and $B^{-1}$, then apply the transpose.
But according to theory $(AB)^T = (B)^T(A)^T,$ so this says I should get the transpose first, then multiply.
I'm confused about either sticking to the theory or what I did was right.
| The order is unimportant from a principles standpoint, but in some situations one formula may be more useful than the other. The identity $(AB)^T=B^TA^T$ is an algebraic tool to prove theorems and solve equations; it isn't necessarily a computational tool. Since $(B^{-1})^TA^T=(AB^{-1})^T$, calculate whichever feels more natural (I'd go for the RHS myself).
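A tiny numerical illustration (with made-up $2\times2$ matrices, not taken from the question) confirming that both routes give the same matrix:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(X):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[2.0, 1.0], [1.0, 1.0]]   # invertible, det = 1

lhs = transpose(matmul(A, inv2(B)))              # (A B^{-1})^T: multiply first, then transpose
rhs = matmul(transpose(inv2(B)), transpose(A))   # (B^{-1})^T A^T: transpose first, then multiply

same = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```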
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4224100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Homology of Torus with two discs glued on the inside I am trying to figure out the homology of the space $X$, where $X$ is a $T^2$ torus with two discs $D_1,D_2$ glued on the inside (see drawing).
What is the CW-complex structure of $X$?
I thought I would take one 0-cell $p$ and attach four 1-cells $a,b,\alpha,\beta$ where $a,b$ correspond to the 1-cells of $T^2$ in $X$. Then I would attach three 2-cells $A,D_1,D_2$, where $A$ is attached via the identification of the torus and $D_1,D_2$ are attached to $\alpha,\beta$ respectively. So we get the chain complex $0\to\mathbb{Z}^3\to\mathbb{Z}^4\to\mathbb{Z}\to0.$ Is that the right approach?
What are the boundry maps on the chain complex?
My try: The differentials $d_0,d_1$ are both zero as there is only one 0-cell. The Space is path connected thus $H_0(X)=\mathbb{Z}$. But what about $d_2: \mathbb{Z}^3\to\mathbb{Z}^4$? Is there a problem with the approach?
| The quickest way to do this is to follow the comment of @PedroTamaroff: Collapse the disks to points, then stretch one of them to an edge. Then move the end points of the edge to the other collapsed disk, to get the wedge of two spheres and a circle.
Thus we know the answer is: \begin{eqnarray*}H_2(X)&=&\mathbb{Z}^2\\
H_1(X)&=&\mathbb{Z}\\H_0(X)&=&\mathbb{Z}\\
\end{eqnarray*}
However you may not be familiar with the results that allow you to do this, so let's follow your approach and cellularize $X$.
In your picture you have two disks which do not touch. Thus you will need two vertices $a,b$ to construct $X$ as a cell complex.
Let $\alpha,\beta,\gamma,\delta$ be the $1$-cells as shown above, and let $C,D,E,F$ be the two cells, again as shown above.
We have:
\begin{eqnarray*}
\partial_1 \alpha&=& b-a\\
\partial_1 \beta&=& a-b\\
\partial_1 \gamma&=& 0\\
\partial_1 \delta&=& 0\\\\
\partial_2 C&=& \gamma\\
\partial_2 D&=& \delta\\
\partial_2 E&=& \gamma+\delta\\
\partial_2 F&=& \gamma+\delta
\end{eqnarray*}
Thus $H_0(X)$ is freely generated by $a$, $H_1(X)$ is freely generated by $\alpha+\beta$ and $H_2(X)$ is freely generated by $E-C-D, F-C-D$, confirming what we got by the first method.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4224231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Having trouble with finding a limit for this sequence $a_n=\frac{n+1}{n}\ln(\sqrt{n})-\frac{1}{n^2}\sum^n_{k=1}\ln(k+n)^k$ We got this question as a bonus for our homework assignment and
I'm having trouble figuring out how to start to solve this limit. We need to find $\lim_{n\rightarrow\infty}(a_n)$ for this sequence:
$$a_n=\frac{n+1}{n}\ln(\sqrt{n})-\frac{1}{n^2}\sum^n_{k=1}\ln(k+n)^k$$
Can anyone give suggestions on how to solve this? (any help would be appreciated)
p.s. This was in the 'Riemann sums' section of our homework assignment
| We define
\begin{align}
g(n) = \frac{1}{n^2}\sum^n_{k=1}k\ln\left( k+n \right).
\end{align}
Since $f(x) = x \ln(x + n)$ is an increasing function, we find
\begin{align}
n^{-2}\int_{0}^{n}\mathrm{d} x~f(x) < ~& g(n) < n^{-2}\int_{1}^{n+1}\mathrm{d} x~f(x) \\
\frac{1+2\ln n}{4}<~&g(n) < \frac{n^2 - 1}{2n^{2}}\ln(n+1) + \frac{n-2}{4n} + \frac{2n+1}{2n}\frac{\ln(2n+1)}{n}.
\end{align}
Therefore, we obtain
\begin{align}
\frac{\ln n}{2n} - \frac{1}{4}< a_{n} < \frac{n+1}{2n}\left(\ln n - \frac{n - 1}{n}\ln(n+1)\right) - \frac{n-2}{4n} + \frac{2n+1}{2n}\frac{\ln(2n+1)}{n}.
\end{align}
Since both LHS and RHS approach $-1/4$ in the limit of $n \to \infty$, we can conclude
$$
\lim_{n \to \infty} a_{n} = -\frac{1}{4}.
$$
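A quick numerical check (illustrative, not part of the proof) that $a_n$ indeed approaches $-1/4$. As an aside fitting the homework's "Riemann sums" hint: since $\frac{n+1}{2n}\ln n=\frac{1}{n^2}\sum_{k=1}^n k\ln n$, one can rewrite $a_n=-\frac1n\sum_{k=1}^n \frac{k}{n}\ln\!\big(1+\frac{k}{n}\big)\to-\int_0^1 x\ln(1+x)\,dx=-\frac14$.

```python
import math

def a(n):
    s = sum(k * math.log(k + n) for k in range(1, n + 1)) / n**2
    return (n + 1) / n * math.log(math.sqrt(n)) - s

vals = [a(n) for n in (10, 100, 1000, 10000)]
err = abs(vals[-1] + 0.25)   # distance from the claimed limit -1/4
```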
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4224363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Metric topology on closed set I had a question on the following portion of lecture notes :
Let $(X, d)$ be a metric space. Then for any $\delta > 0$, we define
the open ball to be $B(x, \delta) := \{y \in X : d(x, y) < \delta\}$.
One way to define open sets of a metric space, i.e. to define the
metric topology is to then just declare a set $U$ open if around any
point $x \in U$, we can find an open ball $B(x,\delta) \subseteq U$:
Lemma 2.4 (Metric topology). Let $(X, d)$ be a metric space. Define a
set of subsets $τ_d$ as follows: we declare $U \subseteq X$ to be open
(this is, we set $U$ to be in $τ_d$), if for every $x \in U$, we can
find some $\delta > 0$ such that $B(x, \delta) \subseteq U$. Then
$τ_d$ is a topology and is called the metric topology.
A topology on $X$ needs to contain $X$. What if the metric space is "closed"? For example, if $X=[0,1]$ with the usual distance on the real numbers (absolute value), then for $U=X$ and $x=0$ we will not be able to find a $\delta > 0$ such that $B(x, \delta) \subseteq X$.
| The main point is that in the definition of $B(x, \delta)$, you consider only points inside $X$. So for $X = [0,1]$, $B(0, \delta) = [0, \delta)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4224506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to prove this inequality with this $\sqrt{n\left(x_{1}^{2}+x_{2}^{2}+\dots+x_{n}^{2}\right)}$ Let $x_i \ge 0$ $(i=1,2,\dots,n)$ and $n \ge 2$. Prove or disprove:
$$
\sqrt{n\left(x_{1}^{2}+x_{2}^{2}+\dots+x_{n}^{2}\right)} \geq \sum_{c y c} \sqrt{x_{1}^{2}+\frac{\left((n-1) x_{1}-x_{2}-\dots-x_{n}\right)^{2}}{4 n(n-1)}}
$$
This is a hard problem proposed by Chen Shengli, an expert at mechanical theorem-proving in inequality. His program didn't work on this, so I'd like to share it with you and see if anyone can prove or disprove it (proof by computer is welcomed here).
| Too long for a comment:
Partial answer:
Case $n=3$:
It seems we have the following inequalities.
Let $0.5\leq a,b,c\leq 1$ be real numbers; then there exists a real number $d$ with $|d|\leq a,b,c$ or $d\geq 0$ such that:
$$3\cdot\frac{\left(a+d\right)^{2}}{\left(a+b+c+3d\right)^{2}}\left(a^{2}+b^{2}+c^{2}\right)-\left(a^{2}+\frac{\left(2a-b-c\right)^{2}}{24}\right)\geq 0$$
$$3\cdot\frac{\left(b+d\right)^{2}}{\left(a+b+c+3d\right)^{2}}\left(a^{2}+b^{2}+c^{2}\right)-\left(b^{2}+\frac{\left(2b-a-c\right)^{2}}{24}\right)\geq 0$$
And :
$$3\cdot\frac{\left(c+d\right)^{2}}{\left(a+b+c+3d\right)^{2}}\left(a^{2}+b^{2}+c^{2}\right)-\left(c^{2}+\frac{\left(2c-b-a\right)^{2}}{24}\right)\geq 0$$
Summing the inequalities gives a partial result, and the inequality is homogeneous.
It could give some ideas for the general case.
Hope it helps.
Edit 06/03/2022
For $a=b+\frac{\left(c-b\right)}{2}$ and $0.5\leq b \leq 0.75\leq c\leq 1$, it seems we can choose $d=\frac{c-a}{9}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4224749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Get all dihedral angles of a tetrahedron given three angles and three edges Suppose you have a general tetrahedron, you are given the three angles that define a vertex of that tetrahedron. You also know the length of the edges that converge on this same vertex.
Given that, calculate all dihedral angles.
From This question and its accepted answer I can get three of the six dihedral angles. I am trying to figure out the rest (the ones at the "base").
I tried adapting the answer for that question, extending the definition of $\vec v_1$, $\vec v_2$, $\vec v_3$, so that now they are not unit vectors, but their magnitude is the length of the edges.
Then we could similarly define a unit normal vector for the "base" as:
$$\vec n_{12}=\dfrac{\vec v_1 \times \vec v_2}{|\vec v_1 \times \vec v_2|} \quad\text{and} \quad \vec n_{b}=\dfrac{\vec v_1 \times \vec v_2 + \vec v_2 \times \vec v_3 + \vec v_3 \times \vec v_1}{|\vec v_1 \times \vec v_2 + \vec v_2 \times \vec v_3 + \vec v_3 \times \vec v_1|}$$
so then, similar to that answer:
$$\text{cos}(θ_{ab})=-\vec n_{12}\cdot \vec n_{b}$$
But this hasn't taken me very far. I feel like I might be very close, but my vector arithmetic is too rusty. Or it is just simply wrong.
EDIT: Already solved this, thanks to your answers. However I am still trying to solve using my original approach. After some vector operations I get to this:
$\text{cos}(θ_{ab})=- \dfrac{a^2b^2\text{sin}^2\phi_{ab}+b^2ac(\text{cos}\phi_{ab}\text{cos}\phi_{bc}-\text{cos}\phi_{ac})+a^2bc(\text{cos}\phi_{ac}\text{cos}\phi_{ab}-\text{cos}\phi_{bc})}{ab\text{sin}\phi_{ab}(|\vec v_1 \times \vec v_2 + \vec v_2 \times \vec v_3 + \vec v_3 \times \vec v_1|)}$
Where $a,b,c$ are the lengths of the edges along $\vec v_1,\vec v_2,\vec v_3$ (also their magnitudes) and $\phi_{ab}$ is the angle between edges $a$ and $b$ (and so on)
So it is looking quite well, because it is mostly in terms of the angles and edge lengths, but still need to simplify those vectors. Any ideas?
| I've used the Cartesian coordinate frame to find the coordinates of all the vertices of the tetrahedron.
Place the given vertex at the origin $(0, 0, 0)$. Then the three edges meeting at the vertex are represented by the vectors $\vec{A}$ , $\vec{B}$ and $\vec{C}$.
We can take vector $\vec{A}$ to be along the $x$ axis, so we'll take point $A$ to be
$A = (a, 0, 0) $
where $a$ is the given length of this edge.
Next, we have point $B$, such that the angle between $\vec{B}$ and $\vec{A}$ is $\theta_{12}$, from this it follows that
$B = (b \cos \theta_{12}, b \sin \theta_{12} , 0 )$
Finally, we want to find the point $C$ along the third edge, so let
$C = (x, y, z)$
From the length requirement we know that $x^2 + y^2 + z^2 = c^2$
And from the angle constraints, we know that
$\vec{A} \cdot \vec{C} = a c \cos \theta_{13} = a x $
Hence $x = c \cos \theta_{13}$
Next, we know that
$\vec{B} \cdot \vec{C} = b c \cos \theta_{23} = b x \cos \theta_{12} + b y \sin \theta_{12} $
From this, it follows that,
$ y = c \dfrac{ \cos \theta_{23} - \cos \theta_{13} \cos \theta_{12} }{ \sin \theta_{12} }$
Using the length equation we can directly solve for $z$.
Now point $C$ is known. So all the vertices of the tetrahedron have known coordinates. One can now obtain the normals to the faces of the tetrahedron, then compute the dihedral angles in a direct way.
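The construction translates directly into code. The Python sketch below (an illustration; the function and variable names are my own) builds the vertex coordinates exactly as above and computes the dihedral angle along edge $OA$, checked against the regular tetrahedron, whose dihedral angle is $\arccos(1/3)\approx 70.53^\circ$:

```python
import math

def vertex_coords(a, b, c, t12, t13, t23):
    # O is the origin; t12, t13, t23 are the angles between edge pairs OA-OB, OA-OC, OB-OC
    A = (a, 0.0, 0.0)
    B = (b * math.cos(t12), b * math.sin(t12), 0.0)
    x = c * math.cos(t13)
    y = c * (math.cos(t23) - math.cos(t13) * math.cos(t12)) / math.sin(t12)
    z = math.sqrt(c * c - x * x - y * y)   # pick the z > 0 solution
    return A, B, (x, y, z)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def dihedral_along_OA(A, B, C):
    # angle between faces OAB and OAC; both normals are perpendicular to OA
    n1, n2 = cross(A, B), cross(A, C)
    return math.acos(dot(n1, n2) / (norm(n1) * norm(n2)))

t = math.pi / 3
A, B, C = vertex_coords(1.0, 1.0, 1.0, t, t, t)   # regular tetrahedron, unit edges
d = dihedral_along_OA(A, B, C)                    # should be arccos(1/3)
```

The same `cross`/`dot` helpers give the normals of the base face, so the remaining dihedral angles follow the same way.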
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4224870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Abstract algebra book for slightly advanced beginner I want to purchase a book on Abstract Algebra but am confused about which one I should go for.
Based on what I have gathered from the web, it seems:
*
*Dummit and Foote.
*Fraleigh.
*I.N. Herstein.
*Gallian.
are some go-to books for a beginner, but a few of my friends recommended some other books, like:
*
*Dan Saracino
*Beachy
*Hungerford
*M. Artin
Because of this, I am even more confused now. I have learned Linear Algebra, Real Analysis, PDE, and Calculus of Variations at an undergraduate level (if it helps). I want a book that contains everything one should know about Abstract Algebra at an undergraduate level and is fun to read, with good examples and a great exercise section. So HELP ME PICK ONE.
*
*If possible can you arrange them in order of your personal preference(with respective reasoning)?
*What book did you read?
*What are your experiences regarding that ?
*Is there any other book you got to know about later that you found even better and more comprehensive?
(I will be purchasing an Indian print of one of these 8 books, as they are comparatively way cheaper than their respective counterparts.)
| My personal favorite is Aluffi's Algebra: Chapter 0. It covers a lot of topics starting from the very elementary ones (you can skip those) and diving as deep as Homological Algebra. Furthermore, I personally find the writing style of Aluffi just excellent. The book contains a lot of concrete examples as well as a number of quality exercises of varying difficulty. Unfortunately, no solutions are provided, just hints for some of the more difficult exercises.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Maximization of velocity involving trigonometric functions I have a physics question but my main doubt is regarding the math part of it, namely the maximization of the velocity. I'll attach a diagram of the question here for reference.
I am given the values of coefficients of friction, $\mu_s$(static friction coefficient) = $\frac{1}{\sqrt3}$, $\mu_k$(kinetic friction coefficient) = $\frac{1}{\sqrt5}$
It is given that the point A describes a circle in the vertical plane at a constant speed $v_A$, the question asks me to find the maximum speed $v_A$ such that the block B doesn't slide on the platform.
Without showing the main physics part (I can provide the working if needed), what I obtain from this is $$v_A^2=\frac{4}{\sqrt3 \cos \theta +\sin \theta}$$
Now, to maximize $v_A$, clearly I need to minimize the denominator here. Now, I know, $$-2 \leq\sqrt3 \cos \theta+\sin \theta \leq 2$$
Now, $v_A^2$ has to be positive and so $-2$ clearly cannot be the required minimum value of the denominator. However, I find that the answer to this question is that $v_A = \sqrt2$ which means that $v_A^2=2$ and that means that the value of $\sqrt3 \cos \theta + \sin \theta$ has been taken to be $2$ which is the maximum value of it. But, shouldn't that make the value of $v_A$ minimum, as the denominator becomes maximum?
EDIT: According to http://ilin.asee.org/Conference2007program/Papers/Conference%20Papers/Session%202B/Liang.pdf
"The normal analytical approach to this problem has two steps. In the first step, using the x- and the y- components (or the normal and the tangential components) of the equation of motion, an equation of maximum velocity vA without sliding is found as
$$(v_{A(max)})^2 =\frac{\mu_s g \rho} {(\cos \theta + \mu_s \sin \theta)}$$
"where $g$ is the gravitational acceleration, $\rho$ is the radius of curvature, $\theta$ is the angle of crank OA as shown in Fig 6, and $v_{A(max)}$ is the velocity permissible by the friction for the particular crank angle $\theta$. In the second step, the minimum of $(v_{A(max)})^2$ and the corresponding $\theta$ are found using calculus or Excel Solver. Here the above two sentences must be written carefully to tell what is maximized and what is minimized. The same possible confusion can arise in the students."
| In order for the block not to slide, the force of static friction must match the horizontal component of the centripetal force towards the point of rotation. When you solved for the maximum speed that achieves this, you got an expression in terms of the angle of rotation, $\theta$:
$$v_A=\sqrt{\frac{4}{\sqrt3 \cos \theta +\sin \theta}}$$
Now, the speed $v_A$ is constant, so in order for the block not to slide, $v_A$ needs to be such that the block would not slide at any $\theta$, so you need to take the minimum of $\sqrt{\frac{4}{\sqrt3 \cos \theta +\sin \theta}}$, which occurs at $\theta= \frac{\pi}{6}$ and $v_A=\sqrt{2}$.
In essence, the value for $v_A$ represents the maximum speed at a particular angle $\theta$ for which the block does not move, but you need to take a minimum of it so that the block never moves.
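A short numeric scan (illustrative only) confirms the picture: over the angles where the denominator $\sqrt3\cos\theta+\sin\theta=2\cos(\theta-\pi/6)$ is positive, the per-angle maximum speed is smallest at $\theta=\pi/6$, where it equals $\sqrt2$:

```python
import math

def v(theta):
    # per-angle maximum no-slip speed (only defined where the denominator is positive)
    return math.sqrt(4 / (math.sqrt(3) * math.cos(theta) + math.sin(theta)))

# scan theta over (-1, 2), a subinterval of (-pi/3, 2*pi/3) where the denominator > 0
thetas = [-1.0 + 3.0 * i / 100000 for i in range(100001)]
vmin, tmin = min((v(t), t) for t in thetas)
```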
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove that every component is a complete graph
Let G be a simple graph in which for every three vertices
$u,v,w$ we have:
*
*between those three vertices none of them are neighbors,
*or they form a triangle
*or only two of them are neighbors.
Prove that every connected component of this graph is a complete graph.
First, I am not sure what it means that only two of them are neighbors. Does it mean that of the three possible edges between those vertices only one exists, for example the edge $e=uv$?
Then, I tried to assume the opposite: that there exists a component $H$ that is not a complete graph (so $H$ has at least two vertices). Now if we pick two vertices $u,v$ from $H$ (non-neighbors, which exist because $H$ is not complete) and $w$ from a component other than $H$, the first condition would be fulfilled and the other two would not, but that is not a contradiction. Where am I going wrong?
| The correct interpretation is that a $3$-vertex subgraph induced by $\{u,v,w\}$ is allowed to have no edges, $1$ edge, or all $3$ edges - but not $2$ of the edges.
You can check that this makes the adjacent-or-equal relation between vertices transitive. It is always reflexive and symmetric. Since it is an equivalence relation, it has equivalence classes, which are your complete graph components.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Semidirect products via $\rho\colon B\to \mathrm{Aut}(A)$ Show that for every homomorphism $\rho$, $(a_1,b_1)\cdot (a_2, b_2)= (a_1\rho(b_1)(a_2),b_1b_2)$ defines a group structure on the product $A\times B$, and that $B$ normalizes $A$ in this product. I don't really get what the question needs from me, nor do I know if my attempt to prove this is right.
My attempt: Let $G: A\times B\to C$ where $(a,b)\to ab$ and let $A$, $B$ be subgroups of $C$. Then $AB= \{ab :a \in A, b\in B\}$ is a subgroup of $C$. Since $ρ$ is a group homomorphism we define $ρ(b)(a) = bab^{-1}$ for every $a\in A$ and $b\in B.$
Then $(a_1,b_1)\cdot(a_2,b_2)=(a_1\rho(b_1)(a_2),b_1b_2)$, and since in $C$ the product $(a_1,b_1)\cdot(a_2,b_2)$ corresponds to $a_1b_1a_2b_2$,
then $a_1b_1a_2b_2=a_1(b_1a_2b_1^{-1})b_1b_2$ and therefore this product belongs to $A\times B$, so closure is proved. There are other group axioms that I will show once I am sure this is the correct proof.
For the second part I tried to prove that $B$ normalizes $A$ using the definition of normalization, but I got stuck because the product is defined rather intricately. Any corrections, proof methods, or explanations are appreciated.
| We have a binary operation $$(a_1,b_1)\cdot (a_2, b_2)= (a_1ρ(b_1)(a_2),b_1b_2)$$ to complete a group we need
*
*identity
*associativity of the operation
*inverses
The identity is easy, just take $(1,1)$ and show that it is the identity for the binary operation here.
Associativity is also quite easy but may involve a lot of writing, just expand out the definition of the binary operation for both:
*
*$$((a_1,b_1)\cdot (a_2, b_2))\cdot (a_3, b_3)$$
*$$(a_1,b_1)\cdot ((a_2, b_2)\cdot (a_3, b_3))$$
and then use associativity of your underlying groups to prove these two terms equal.
For the inverse, given $(a_1,b_1)$ I will let you come up with some $x$ such that $$(a_1,b_1)\cdot x = 1.$$
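If it helps to see the axioms verified concretely, here is a small Python check (my own illustration, written additively) for $A=\mathbb{Z}/3$, $B=\mathbb{Z}/2$, with $\rho(1)$ the inversion automorphism $a\mapsto -a$; it confirms the identity, associativity, and that $\big(\rho(b^{-1})(a^{-1}),\,b^{-1}\big)$ inverts $(a,b)$:

```python
from itertools import product

nA, nB = 3, 2   # A = Z/3 (additive), B = Z/2 (additive)

def rho(b, a):
    # rho(0) = identity, rho(1) = inversion; a homomorphism Z/2 -> Aut(Z/3)
    return a % nA if b % nB == 0 else (-a) % nA

def op(p, q):
    # (a1, b1) . (a2, b2) = (a1 + rho(b1)(a2), b1 + b2), the semidirect product law
    (a1, b1), (a2, b2) = p, q
    return ((a1 + rho(b1, a2)) % nA, (b1 + b2) % nB)

G = list(product(range(nA), range(nB)))
e = (0, 0)

assoc = all(op(op(x, y), z) == op(x, op(y, z)) for x in G for y in G for z in G)
ident = all(op(e, x) == x and op(x, e) == x for x in G)
# candidate inverse of (a, b): (rho(-b)(-a), -b), written additively
inv = all(op(x, (rho(-x[1], -x[0]), (-x[1]) % nB)) == e for x in G)
```

This particular group is the dihedral group of order 6 (isomorphic to $S_3$); taking $\rho$ trivial instead recovers the direct product.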
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Mathematical notation for 'The tangent of $f(x)$ at the point $(x,y)$ I've been looking around and can't seem to find correct mathematical notation for the tangent of a function at some point $(x,y)$.
For example, if a question asks to find the equation of the tangent of $f(x) = x^2$ at the point $(2,4)$, I'd solve it this way:
$$f'(x) = 2x$$
$$f'(2) = 4$$
So, let the tangent be denoted by the linear function $g(x)$
$$g(x) = 4x + c$$
$$g(2) = 4$$
$$g(x) = 4x - 4$$
Is there a mathematical symbol representing the tangent in this situation? Is what I've written the 'common practice'?
| I'm not aware of any symbol for the tangent line, but using the notation $g(x)=\dots$ looks fine to me, as long as you explicitly state that the function $g$ describes a tangent line. In general, the tangent to the graph of $f$ at the point $\left(a,f(a)\right)$ is given by
$$
g(x)=f'(a)(x-a)+f(a) \, .
$$
In this context, it is simpler to use the point-slope equation of a straight line.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Corollary of Uniform boundedness principle Let $X$ be a Banach space, $Y$ a normed space, and $\mathcal A \subset \mathcal B(X, Y)$ a set of continuous linear operators $X \to Y$. I need to prove that if
$$\forall x \in X, g \in Y^* \;\; \sup_{A \in \mathcal A} |g(Ax)| < \infty,$$
then
$$\sup_{A \in \mathcal A} ||A|| < \infty.$$
I have managed to use the uniform boundedness principle to deduce
$$\forall g \in Y^* \sup_{A \in \mathcal A} ||g \circ A|| <\infty.$$
I have no idea how to proceed.
| Define $T_A: Y^{*} \to X^{*}$ by $T_A (g)=g\circ A$. For each $g$, $(T_Ag)_{A\ \in \mathcal A}$ is norm bounded (from what you have already observed). By the Uniform Boundedness Principle we get $\sup_{A \in \mathcal A, \|g\|\leq 1} \|g\circ A\| <\infty$. This means $\sup \{\|A\|:A \in \mathcal A\}<\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Curious inequality with binomial coefficients While dealing with a larger task, I've faced an inequality which I found hard to prove:
$$
\frac{i}{n-i-1} \cdot \frac{
1 - 1/{n \choose i+1}
}{
1 - 1/{n \choose i}
} \leq 1
$$
here $n \in \mathbb{N}$ and $i \in \left\{1, \dots, \lfloor n/2\rfloor - 1\right\}$ (or, equivalently, for integer $i\geq 1$ such that $2 i \leq n - 1$). I believe that this inequality holds, because I've checked it numerically for lots of values of $n$ and $i$.
I've tried dealing with both factors independently and observed that $\frac{i}{n-i-1}\leq 1$ holds for $2 i \leq n - 1$; but for these values of $i$, $\frac{1 - 1/{n \choose i+1}}{1 - 1/{n \choose i}} \geq 1$, so this doesn't lead to the desired result.
| Your inequality holds!
Note that $\binom{n}{i+1}=\frac{n-i}{i+1}\binom{n}{i}$ and therefore the inequality is equivalent to
$$i-\frac{i(i+1)}{(n-i)\binom{n}{i}}\leq n-i-1-\frac{n-i-1}{\binom{n}{i}}$$
that is
$$\frac{(n-i)(n-i-1)-i(i+1)}{\binom{n}{i}}\leq (n-2i-1)(n-i)$$
or
$$n(n-2i-1)\leq (n-2i-1)(n-i)\binom{n}{i}.$$
If $n\geq 2i+1$ then it suffices to show
$$n\leq (n-i)\binom{n}{i}=n\binom{n-1}{i}$$
which holds since $\binom{n-1}{i}\geq 1$.
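A brute-force check (illustrative) of the original inequality over the full allowed range of $(n,i)$, including the equality case $n=2i+1$:

```python
from math import comb

def lhs(n, i):
    # i/(n-i-1) * (1 - 1/C(n, i+1)) / (1 - 1/C(n, i))
    return (i / (n - i - 1)) * (1 - 1 / comb(n, i + 1)) / (1 - 1 / comb(n, i))

ok = all(lhs(n, i) <= 1 + 1e-12
         for n in range(3, 80)
         for i in range(1, (n - 1) // 2 + 1))
```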
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4225866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving $C\cos(\sqrt\lambda\theta)+D\sin(\sqrt\lambda\theta)=C\cos(\sqrt\lambda(\theta+2m\pi)) + D\sin(\sqrt\lambda (\theta + 2m\pi))$
I want to solve
$$C\cos(\sqrt\lambda \theta) + D\sin(\sqrt\lambda \theta) = C\cos(\sqrt\lambda (\theta + 2m\pi)) + D\sin(\sqrt\lambda (\theta + 2m\pi))$$
The solution must be valid for all $\theta$ in $\mathbb{R}$ and all $m$ in $\mathbb{Z}$, but $C$, $D$, and $\lambda$ are to be determined and can be in $\mathbb{C}$.
The solutions I've found by guessing are $(C\ $arbitrary$, D\ $arbitrary$, \lambda = n^2)$, where $n$ is any integer, and $(C = 0, D = 0, \lambda\ $arbitrary$)$.
Is there some algebra I can do to show that these are the only solutions, or find the rest of the solutions to this equation?
| Use the formula $\cos(x) = \frac{e^{ix} + e^{-ix}}{2}$ and $\sin(x) = \frac{e^{ix} - e^{-ix}}{2i}$. When you substitute this into your expression, on the left-hand side, you would get $ C(\frac{e^{i\sqrt{\lambda}\theta} + e^{-i\sqrt{\lambda}\theta}}{2}) + D(\frac{e^{i\sqrt{\lambda}\theta} - e^{-i\sqrt{\lambda}\theta}}{2i})$. On the right-hand side, you would end up with
$C(\frac{e^{i\sqrt{\lambda}\theta} e^{2\pi mi\sqrt{\lambda}} + e^{-i\sqrt{\lambda}\theta}e^{-2\pi mi\sqrt{\lambda}}}{2}) + D(\frac{e^{i\sqrt{\lambda}\theta}e^{2\pi mi\sqrt{\lambda}} - e^{-i\sqrt{\lambda}\theta}e^{-2\pi mi\sqrt{\lambda}}}{2i})$. Notice how the right-hand side matches the left-hand side except for the factors $e^{\pm 2\pi mi\sqrt{\lambda}}$. If we set both factors equal to $1$ (for every integer $m$) and solve for $\lambda$, we see that $\sqrt{\lambda}$ has to be an integer, therefore making $\lambda = n^2$.
This explains the one solution for any C or D value. If you want to find any arbitrary C or D value, I would start by using Euler's formula to turn $C\cos{\sqrt{\lambda}\theta} + D\sin{\sqrt{\lambda}\theta}$ into one function, and to do the same with the right-hand side.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Proof $\sum_{k=1}^{2n} (-1)^{1+k}\frac{1}{k} = \sum_{k=1}^{n}\frac{1}{n+k}$ by induction. I'm trying to show the following formula:
$$\sum_{k=1}^{2n} (-1)^{1+k}\frac{1}{k} = \sum_{k=1}^{n}\frac{1}{n+k}$$
I have already verified the formula with $n=1$, now I continue with the induction:
$$ \sum_{k=1}^{2(n+1)}(-1)^{k+1}\frac{1}{k} = \sum_{k=1}^{2n}(-1)^{k+1}\frac{1}{k}+\frac{1}{2n+1}-\frac{1}{2n+2} \\
\rightarrow\sum_{k=1}^{2(n+1)}(-1)^{k+1}\frac{1}{k} = \sum_{k=1}^{n}\frac{1}{n+k} +\frac{1}{2n+1}-\frac{1}{2n+2}$$
But now I have no idea how to get $\sum_{k=1}^{n+1}\frac{1}{(n+1)+k}$, I have been trying a lot of algebra or changing the index but I don't see the "magic" step. Any idea is welcome!!
| We have that
$$\dots= \sum_{k=1}^{n}\frac{1}{n+k} +\frac{1}{2n+1}-\frac{1}{2n+2}=\\=\left(\frac1{n+1}+\sum_{k=1}^{n+1}\frac{1}{(n+1)+k} -\frac{1}{2n+1}-\frac{1}{2n+2}\right)+\frac{1}{2n+1}-\frac{1}{2n+2}=\\=\sum_{k=1}^{n+1}\frac{1}{(n+1)+k}$$
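The identity can also be sanity-checked in exact rational arithmetic (a numerical check, not a substitute for the induction):

```python
# Verify sum_{k=1}^{2n} (-1)^{k+1}/k == sum_{k=1}^{n} 1/(n+k) exactly.
from fractions import Fraction

def lhs(n):
    # alternating sum 1 - 1/2 + 1/3 - ... - 1/(2n)
    return sum(Fraction((-1) ** (k + 1), k) for k in range(1, 2 * n + 1))

def rhs(n):
    # tail sum 1/(n+1) + ... + 1/(2n)
    return sum(Fraction(1, n + k) for k in range(1, n + 1))

assert all(lhs(n) == rhs(n) for n in range(1, 30))
```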
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Probability Problem(unfair coin) Suppose we have an unfair coin with a probability of 0.6 of obtaining a heads on any given toss. As is typical for coin toss problems, assume each coin toss is independent.
What is the expected number of flips before we get a heads?
My thought is using the negative binomial distribution and get this:
$E(x) = 1 + (1-p)/p = 1 + (0.4)/(0.6) \approx 1.67$ flips
I tried 1.67 and 2 flips, but none of them is correct, can someone explain to me what's wrong here? Thanks
| Let the positive-integer-valued random variable $X$ be the number of tosses before a heads show up for the first time. For any $k \in \mathbb Z_{>0}$, the event $X=k$ is equivalent to the event that the first $k-1$ tosses land tails, and the $k$th toss lands heads. Thus $\mathbb P(X=k)=(1-p)^{k-1} p$. WLOG and to avoid trivialities, we will be assuming $0 < p < 1$; if $p=0$, then $X=\infty$ almost surely, and so $\mathbb E[X] = \infty$; if $p=1$, then $X=1$ almost-surely, and so $\mathbb E[X] = 1$. Otherwise,
one computes
$$
\mathbb E[X] = \sum_{k=1}^\infty k\mathbb P(X=k)=\sum_{k=1}^\infty k(1-p)^{k-1} p = pf'(p),
$$
where $f(p):= -\sum_{k=1}^\infty (1-p)^k=1-1/p$, and so $pf'(p) = p/p^2 = 1/p$.
Therefore, $\mathbb E[X] = 1/p$.
If you need the expected number of flips before the head, this is $\mathbb E[Y]$, where $Y=X-1$. Thus, $\mathbb E[Y] = \mathbb E[X] - 1 = 1/p-1=(1-p)/p$, by linearity of expectations.
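A quick seeded simulation agrees with both formulas (illustration only; the seed and trial count are arbitrary choices of mine):

```python
# Simulate the geometric number of tosses X up to and including the first
# head, and compare the sample mean with E[X] = 1/p and E[X-1] = (1-p)/p.
import random

def tosses_until_head(p, rng):
    n = 1
    while rng.random() >= p:  # this toss landed tails (probability 1 - p)
        n += 1
    return n

rng = random.Random(0)
p, trials = 0.6, 100_000
avg = sum(tosses_until_head(p, rng) for _ in range(trials)) / trials
assert abs(avg - 1 / p) < 0.03              # E[X] = 1/0.6 ≈ 1.667
assert abs((avg - 1) - (1 - p) / p) < 0.03  # E[X - 1] ≈ 0.667
```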
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $A$ and $B$ are solutions to $7\cos\theta+4\sin\theta+5=0$, then $\cot\frac{A}{2}+\cot\frac{B}{2}=-\frac{2}{3}$ If $A$ and $B$ are the solutions to ${\displaystyle 7\cos\theta+4\sin\theta+5=0\mbox{ where }A>0,0<B<2\pi;}$
Without finding the solutions to the trig equation, show that;
$$\cot\left(\frac{A}{2}\right)+\cot\left(\frac{B}{2}\right)=-\frac{2}{3}$$
This is my effort so far:
Since A and B are solutions then:
\begin{align*}
7\cos A+4\sin A+5 & =0\\
7\left( 2\cos^{2}\frac{A}{2}-1\right) +4\times2\sin\frac{A}{2}\cos\frac{A}{2}+5 & =0\\
14\cos^{2}\frac{A}{2}+8\sin\frac{A}{2}\cos\frac{A}{2}-2 & =0\\
\mbox{Now divide by }\sin\frac{A}{2}\cos\frac{A}{2} & \mbox{ gives:}\\
14\cot\frac{A}{2}+8-\frac{2}{\sin\frac{A}{2}\cos\frac{A}{2}} & =0\\
\mbox{similarly for B;}\\
14\cot\frac{B}{2}+8-\frac{2}{\sin\frac{B}{2}\cos\frac{B}{2}} & =0
\end{align*}
When I add these two equations I have $\cot\left(\frac{A}{2}\right)+\cot\left(\frac{B}{2}\right)$,
but am unsure how to progress.
| We use the following relations:
$\sin\theta=\frac{2\tan\frac{\theta}2}{1+\tan^2\frac{\theta}2}$
$\cos\theta=\frac{1-\tan^2\frac{\theta}2}{1+\tan^2\frac{\theta}2}$
Plugging these into the equation, we get:
$6\cot^2\frac{\theta}2+4\cot\frac{\theta}2-1=0$
Comparing with $ax^2+bx+c=0$ we have:
$x_1+x_2=-\frac ba=-\frac 23$
which gives what is required.
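A numerical double-check (my own verification, not part of the answer): the substitution gives $t=\tan(\theta/2)$ satisfying $t^2-4t-6=0$, i.e. $t=2\pm\sqrt{10}$, and the cotangent halves indeed sum to $-\frac23$:

```python
# Verify: both θ = 2 arctan(t) with t = 2 ± √10 solve 7cosθ + 4sinθ + 5 = 0,
# and cot(A/2) + cot(B/2) = 1/t₁ + 1/t₂ = -2/3.
import math

t1, t2 = 2 + math.sqrt(10), 2 - math.sqrt(10)

for t in (t1, t2):
    theta = (2 * math.atan(t)) % (2 * math.pi)
    assert abs(7 * math.cos(theta) + 4 * math.sin(theta) + 5) < 1e-12

assert abs(1 / t1 + 1 / t2 + 2 / 3) < 1e-12
```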
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Example of function where one-sided directional derivative does not exist One can show that for convex functions $f: \mathbb{R}^d \to \overline{\mathbb{R}}$ the so-called one-sided directional derivative exists:
Let $x_0 \in \mathbb{R}^d$ be a point where $f$ is finite. Then this derivative is defined as the finite or infinite limit
$$
Df(x_0,v) := \lim_{h^{+} \to 0} \frac{f(x_{0}+ hv) - f(x_{0})}{h}
$$
exists. It's quite a different notion to a "normal" derivative since we also allow infinite limits. An example of this is $f(x) = -\sqrt{x}$ for $x \ge 0$ where we can see that:
$$
Df(0,1) = \lim_{h^{+} \to 0} -\frac{1}{\sqrt{h}} = -\infty
$$
I was wondering what a good example is of a (continuous?) function where the one-sided directional derivative does not exist (thus also not convex). I guess I could construct some highly non-continuous function like
$$
f(x) = \begin{cases}
x, & \text{ if } \left \lceil{1/x}\right \rceil \mod 2 = 0\\
-x, & \text{ else.}\\
\end{cases}
$$
| One example could be $f(x)= \begin{cases} x \sin \frac{1}{x}, & x \ne0 \\ 0, & x=0\end{cases}$.
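For this $f$ the difference quotient at $0$ in direction $1$ is $f(h)/h=\sin(1/h)$, which oscillates between $-1$ and $1$ as $h\to0^+$, so the limit fails to exist even allowing $\pm\infty$. A small numeric illustration:

```python
# Along h = 1/(π/2 + 2πn) the quotient is +1; along h = 1/(3π/2 + 2πn) it is -1.
import math

def quotient(h):
    return (h * math.sin(1 / h)) / h  # difference quotient f(h)/h = sin(1/h)

for n in range(1, 50):
    assert abs(quotient(1 / (math.pi / 2 + 2 * math.pi * n)) - 1) < 1e-9
    assert abs(quotient(1 / (3 * math.pi / 2 + 2 * math.pi * n)) + 1) < 1e-9
```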
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many ways are there to go from $A$ to $C$ and come back from $C$ to $A$ Three small towns, designated $A,B,C$ are interconnected by a system of two-way roads as shown in the following picture:
Image of interconnected cities
How many ways are there to go from $A$ to $C$ and come back from $C$ to $A$ such that the connections which are used to go from $A$ to $C$ cannot be used again. For example, if you use ($R_1$ and $R_6$), then they are cancelled for going from $C$ to $A$, you may use the roads ($R_5$ and $R_2$) or use ($R_8)$.
Second example, if you use $R_9$ go from $A$ to $C$, then it will be cancelled, so you can use ($R_8$) or ($R_7$ and $R_1$) or ($R_5$ and $R_3$) etc for going from $C$ to $A$.
My solution: There are $4\cdot3+2=14$ ways to go from $A$ to $C$. I thought that there might be $3\cdot2+1=7$ ways to come back; however, I am not sure, so I would like help with it.
NOTE: Unnecessary travelling is not allowed, i.e, you should always move to your target, for example if you go from A to B then you cannot shuttle here, you should move from B to C.
| I will divide this problem into two parts
$1-)$ Move from $A$ to $B$ and $B$ to $C$ :
If we move from $A$ to $C$ through the connector city $B$, there are $4 \times 3 = 12$ ways. Now for the return trip: assume we used the roads $R_1$ and $R_5$; then we can come back through $B$ in $3 \times 2 = 6$ ways, or use one of the $2$ direct roads, so there are $6+2=8$ ways to come back.
Result: $12 \times 8 = 96$ ways.
$2-)$ Move from $A$ to $C$ directly: We can select one of the $2$ direct roads to go, and we cannot reuse it to come back from $C$ to $A$, so there are $4 \times 3 + 1 = 13$ ways to come back. This gives $2 \times 13 = 26$ ways to go from $A$ to $C$ and come back from $C$ to $A$.
Result: $96 + 26 = 122$ ways in total.
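The count can be confirmed by brute force. Since the picture is not shown, the labelling below is my assumption, chosen to be consistent with the examples in the question: $R_1$–$R_4$ join $A$–$B$, $R_5$–$R_7$ join $B$–$C$, and $R_8, R_9$ join $A$–$C$ directly.

```python
# Enumerate all (outbound, return) route pairs that share no road.
from itertools import product

AB = ["R1", "R2", "R3", "R4"]   # roads between A and B (assumed labels)
BC = ["R5", "R6", "R7"]         # roads between B and C
AC = ["R8", "R9"]               # direct roads between A and C

# a route is the set of roads it uses: direct, or one A-B road + one B-C road
routes = [frozenset([r]) for r in AC] + [frozenset(p) for p in product(AB, BC)]

count = sum(1 for go, back in product(routes, routes) if not (go & back))
assert count == 122
```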
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Factor $x^{17}+1$ into a product of irreducibles In title, factor $x^{17}+1$ into a product of irreducibles over $\mathbb{R}$. I know it factors as $$(x+1)(x^{16}-x^{15}+\dots+1)$$
but I have no real justification for why the second factor is irreducible besides "mathematica says it's true and I don't want to try to factor it". I know it has no real zeros, since -1 is the only real zero of the original function and $x^{17}+1$ is coprime with its derivative, but that doesn't rule out that its the product of other irreducible polynomials of degrees higher than 1. I could also show the ideal generated by $x^{16}-x^{15}+\dots+1$ is maximal but that sounds horrific and I don't want to do it. Eisenstein's criterion also fails since the constant term is 1.
| Consider $\omega_k=\cos(\frac{2k\pi}{17})+i\sin(\frac{2k\pi}{17})$, with $k\in\{0,...,16\}$. Now since $f(x)=x^{17}+1$ splits in $\mathbb{C}$ and $f(-\omega_k)=0, \forall k\in\{0,...,16\}$, we can write $f$ as,
$$f(x)=\prod_{k=0}^{16}(x+\omega_k)=(x+1)\prod_{k=1}^{16}(x+\omega_k).$$
Using the fact that $f(-\omega_k)=0\implies f(-\overline{\omega_k})=0$ we have that,
$$(x+1)\prod_{k=1}^{16}(x+\omega_k)=(x+1)\prod_{k=1}^8(x+\omega_k)(x+\overline{\omega_k})=(x+1)\prod_{k=1}^8(x^2+2\operatorname{Re}(\omega_k)\,x+\lvert\omega_k\rvert^2)=$$
$$=(x+1)\prod_{k=1}^8(x^2+2\cos(\tfrac{2k\pi}{17})x+1).$$
At this point notice that the polynomials $f_k(x)=x^2+2\cos(\frac{2k\pi}{17})x+1\in\mathbb{R}[x]$ have degree 2 and negative discriminant for every $k\in\{1,...,8\}$, and therefore they are irreducible.
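The factorization can be verified numerically by multiplying out the factors and comparing coefficients with $x^{17}+1$ (a floating-point check of mine):

```python
# Multiply (x+1) · Π_{k=1}^{8} (x² + 2cos(2kπ/17)x + 1) and compare
# coefficient-by-coefficient with x^17 + 1.
import math

def polymul(p, q):
    # polynomials as coefficient lists, index = power of x
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

prod = [1.0, 1.0]  # the factor x + 1
for k in range(1, 9):
    prod = polymul(prod, [1.0, 2 * math.cos(2 * k * math.pi / 17), 1.0])

target = [1.0] + [0.0] * 16 + [1.0]  # coefficients of 1 + 0·x + ... + x^17
assert len(prod) == 18
assert all(abs(a - b) < 1e-9 for a, b in zip(prod, target))
```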
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Finding limit $ \lim_{{x\rightarrow0},y\rightarrow0}\frac{\cos xy-2^{x{y}^2}}{x^2y^2}$ Compute or show that a limit
$$L: \lim_{(x,y)\to(0,0)}\frac{\cos xy-2^{x{y}^2}}{x^2y^2}$$
doesn't exist. Here is my approach:
First I divided the limit into two limits first one is
$\lim_{{x\rightarrow0},y\rightarrow0}\frac{\cos xy}{x^2y^2}$ (Lets call this limit $L_1$)
And second is $\lim_{{x\rightarrow0},y\rightarrow0}\frac{2^{x{y}^2}}{x^2y^2}$. (And call this $L_2$)
When solving $L_1$ I substituted $xy=t, t\rightarrow 0$ and I get
$\lim_{{t\rightarrow0}}\frac{\cos t}{t^2}$. Since this has the form $\frac{\text{constant}}{0}$, it is undefined and hence the limit does not exist. Does this also show that the limit $L$ does not exist?
Is this okay, if not what other approach can be made?
| To see explicitly that the limit doesn't exist, let us consider $x=y= t \to 0$:
$$\frac{\cos xy-2^{x{y}^2}}{x^2y^2}=\frac{\cos t^2-2^{t^3}}{t^4}=\frac{\cos t^2-1}{t^4}-\frac1{t}\frac{2^{t^3}-1}{t^3}\to -\frac12\pm\infty\cdot\log 2=\pm\infty$$
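Numerically, along $x=y=t$ the expression indeed blows up with opposite signs on the two sides (illustration only):

```python
# Evaluate (cos(xy) - 2^{xy²}) / (x²y²) along x = y = t near 0.
import math

def F(x, y):
    return (math.cos(x * y) - 2 ** (x * y * y)) / (x * x * y * y)

assert F(1e-3, 1e-3) < -100    # t → 0⁺: the expression tends to -∞
assert F(-1e-3, -1e-3) > 100   # t → 0⁻: the expression tends to +∞
```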
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4226940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Evaluating the Integral $\int_{0}^{\infty} \frac{x^{49}}{(1+x)^{51}} dx$ I tried evaluating the integral $\displaystyle \int_{0}^{\infty} \dfrac{x^{49}}{(1+x)^{51}}dx$
but I wasn't able to get the result.
Following is the way by which I did it-
$$I=\displaystyle \int_{0}^{\infty} \dfrac{x^{49}}{(1+x)^{51}}dx$$
$$\implies I=\int_{0}^{\infty} x^{49}(1+x)^{-51}dx$$
Further, I tried integration by parts but it didn't work. Can anyone explain how this integral can be evaluated?
| Here is an alternative method.
First of all, for each integer $n>1$, we have $\int_1^\infty x^{-n}dx=\frac1{n-1}$.
Thus, using the binomial theorem,
\begin{align}
\int_0^\infty\frac{x^{49}}{(1+x)^{51}}dx&=\int_1^\infty\frac{(x-1)^{49}}{x^{51}}dx\\
&=\int_1^\infty\frac1{x^{51}}\sum_{n=0}^{49}(-1)^{n-1}{49\choose n}x^ndx\\
&=\sum_{n=0}^{49}(-1)^{n-1}{49\choose n}\int_1^\infty x^{n-51}dx\\
&=\sum_{n=0}^{49}(-1)^{n-1}{49\choose n}\frac1{50-n}\\
&=\frac1{50}\sum_{n=0}^{49}(-1)^{n-1}{50\choose n}\\
&=-\frac1{50}\left((1+(-1))^{50}-{50\choose50}(-1)^{50}\right)\\
&=\frac1{50}.
\end{align}
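The value $\frac{1}{50}$ can be cross-checked by a different route: the substitution $u=\frac{x}{1+x}$ (so $x=\frac{u}{1-u}$, $dx=\frac{du}{(1-u)^2}$) turns the integral into $\int_0^1 u^{49}\,du=\frac1{50}$, which a simple midpoint rule confirms:

```python
# Midpoint-rule evaluation of ∫₀¹ u⁴⁹ du, the transformed integral.
N = 100_000
h = 1.0 / N
total = sum(((i + 0.5) * h) ** 49 * h for i in range(N))
assert abs(total - 1 / 50) < 1e-6
```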
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Find $m$ if $x^3 - mx^2 - 4 = 0$ and $x^3 + mx + 2 =0$ have a common root. Good Day
Today, I was trying to solve this problem below:
Find the values the real number $m$ can take if $x^3 - mx^2 - 4 = 0$ and $x^3 + mx + 2 =0$ have a common root.
Here's what I did:
Assume the common root to be $a$. Substituting:
$$(1)\ a^3 - ma^2 - 4 = 0$$
$$(2)\ a^3 + ma + 2 = 0$$
Subtracting $(1)$ from $(2)$ gives:
$$ma^2 + ma + 6 = 0$$
Using this as a quadratic in $a$, we use the fact that $D \geq 0$ because $a$ is real.
Thus,
$$-4(m)(6) + (m)^2 \geq 0$$
So, $$m(m - 24) \geq 0$$
which is true precisely when $m \in (-\infty, 0]$ $\cup$ $[24, \infty)$
However, the answer given to me is only $m = -3$. Where am I going wrong?
Thanks!
| The intersections equation you found, $ \ mx^2 + mx + 6 \ = \ 0 \ , $ is useful, but does not by itself tell us about the zeroes of the polynomial; rather, it only locates the $ \ x-$coordinates of the two intersection points between the curves for $ \ x^3 - mx^2 - 4 \ \ $ and $ \ x^3 + mx + 2 \ \ . $ These points lie at
$$ x \ \ = \ \ \frac{-m \ \pm \ \sqrt{m^2 \ - \ 24m}}{2m} \ \ = \ \ -\frac12 \ \pm \ \frac{\sqrt{1 \ - \ \frac{24}{m}}}{2} \ \ . $$
As you found, this means that the two function curves intersect only for $ \ m \ \ge \ 24 \ \ $ or $ \ m \ < \ 0 \ \ . $ (We can discard $ \ m = 0 \ $ as this produces the two "parallel" cubic curves $ \ y \ = \ x^3 + 2 \ \ $ and $ \ y \ = \ x^3 - 4 \ \ . ) $
A differing approach we may take is to consider the properties of the functions and their curves and what this quadratic relation tells us. Something we can say immediately (although it is not clear how it is helpful) is that the common zero $ \ r \ $ must be a common divisor of $ \ 4 \ $ and $ \ -2 \ \ . $ If we "complete-the-cube" for the function $ \ x^3 + bx^2 + cx + d \ , \ \ $ we obtain $$ \ \left(x + \frac{b}{3} \right)^3 \ + \ \left(c - \frac{b^2}{3} \right)x \ + \ \left( d - \frac{b^3}{27} \right) \ \ , $$
which indicates a symmetry axis (neglecting "vertical displacement") at $ \ x \ = \ -\frac{b}{3} \ $ and that the curve will have local extrema ("turning-points") if $ \ c < \frac{b^2}{3} \ \ . $ Consequently, $ \ y \ = \ x^3 - mx^2 - 4 \ $ has a symmetry axis at $ \ x \ = \ \frac{m}{3} \ \ $ and always has turning-points since $ \ c = 0 \ \ ; $ the curve for $ \ y \ = \ x^3 + mx + 2 \ $ has its symmetry axis at $ \ x = 0 \ $ $ \ ( b = 0 ) \ $ and turning-points when $ \ m < 0 \ \ . $ [Calculus would enable us to say rather more, but we are limiting ourselves to algebra of polynomials and analytic geometry.] We will be able to narrow down possibilities with this information.
It might also be noted that the Rule of Signs indicates that:
• for $ \ m > 0 \ , \ \ x^3 - mx^2 - 4 \ $ has one positive real zero and no negative real zeroes; $ \ x^3 + mx + 2 \ $ has no positive real zeroes and one negative real zero
• for $ \ m < 0 \ , \ \ x^3 - mx^2 - 4 \ $ has one positive real zero and two or no negative real zeroes; $ \ x^3 + mx + 2 \ $ has two or no positive real zeroes and one negative real zero
So the Rule of Signs analysis permits us to reject the case for $ \ m \ \ge \ 24 \ \ . $ This is confirmed by investigating the function values. For $ \ m = 24 \ \ , $ the single intersection at $ \ x = -\frac12 \ $ gives us $ \ y \ = \ -\frac{81}{8} \ \ . $ As $ \ m \ $ increases, the intersection bifurcates into two points with $ \ x \rightarrow -1^{+} \ $ and $ \ x \rightarrow 0^{-} \ \ . $ For the former intersection, $ \ y \rightarrow -\infty \ \ , $ while the latter approaches the $ \ y-$intercept at $ \ ( 0 \ , -4 ) \ \ . $ Hence, no common zero is possible for this case.
In a similar fashion, we can show that $ \ m \ $ cannot have a large negative value if there is to be a common zero for the two polynomials. (The Rule of Signs leaves the possibility open for $ \ m < 0 \ \ . ) $ If we choose a convenient value of $ \ m = -8 \ \ , $ the intersections are found at $ \ x = -\frac32 \ $ and $ \ x = +\frac12 \ \ , $ with the corresponding function values $ \ y = \frac{85}{8} \ $ and $ \ -\frac{15}{8} \ \ ; $ we find that the S-turns in the two cubic curves are intertwining. As $ \ m \ $ decreases from this value, the $ \ x-$coordinates of the intersections behave as $ \ x \rightarrow -1^{-} \ $ and $ \ x \rightarrow 0^{+} \ \ , $ with the former intersection having $ \ y \rightarrow +\infty \ $ and the other intersection again approaching $ \ ( 0 \ , -4 ) \ \ . $ So there can be no common zero for this situation either.
Thus, if a common zero between the polynomials exists, we must have $ \ -8 < m < 0 \ \ . $ We could continue to analyze the behavior of the intersections, but we see that they must lie not far from the origin. So we could just try common factors of $ \ 4 \ $ and $ \ -2 \ $ at this point to see what happens.
It might be remarked here that there is something suspicious about these real zeroes. The Rule of Signs shows that there is one positive and one negative real zero among the two polynomials. But a cubic polynomial with real coefficients must have either one or three real zeroes: could it be that there is a double zero?
On testing possible integer zeroes, we observe for the intersections that
$ \mathbf{x = +1 \ \ } \Rightarrow \ \ 1 \ - \ \frac{24}{m} \ = \ 9 \ \ \Rightarrow \ \ m \ = \ -3 \ \ ; $
$ \mathbf{x = -1 \ \ } $ [not admissible: asymptotic value] ;
$ \mathbf{x = +2 \ \ } \Rightarrow \ \ 1 \ - \ \frac{24}{m} \ = \ 25 \ \ \Rightarrow \ \ m \ = \ -1 \ \ ; $
$ \mathbf{x = -2 \ \ } \Rightarrow \ \ 1 \ - \ \frac{24}{m} \ = \ 9 \ \ \Rightarrow \ \ m \ = \ -3 \ \ . $
For the $ \ m = -1 \ $ case, the function curves intersect at $ \ (2 \ , 8) \ \ , $ so $ \ x = 2 \ $ is not a zero of either function. On the other hand, for $ \ m = -3 \ $ , we find two intersections at $ \ x = -2 \ $ and $ \ x = +1 \ \ ; $ we find the function values to be
$ \mathbf{f(x) \ = \ x^3 \ + \ 3x^2 \ - \ 4 \ \ : } \quad f(-2) \ = \ -8 + 12 - 4 \ = \ 0 \ \ \ , \ \ \ f(1) \ = \ 1 + 3 - 4 \ = \ 0 \ \ ; $
$ \mathbf{g(x) \ = \ x^3 \ - \ 3x \ + \ 2 \ \ : } \quad g(-2) \ = \ -8 + 6 + 2 \ = \ 0 \ \ \ , \ \ \ g(1) \ = \ 1 - 3 + 2 \ = \ 0 \ \ . $
[More thorough analysis would show that the intersections move away from the $ \ x-$axis as $ \ m \ $ is "shifted away" from $ \ -3 \ \ . ] $
Polynomial or synthetic division shows that
$$ x^3 \ + \ 3x^2 \ - \ 4 \ \ = \ \ (x - 1) · (x + 2)^2 \ \ \ \text{and} \ \ \ x^3 \ - \ 3x \ + \ 2 \ \ = \ \ (x - 1)^2 · (x + 2) \ \ , $$
which means that the singleton zero of each of these polynomials makes an intersection with the double zero of the other (an initially unanticipated property!). The only solution for the problem then is $ \ \mathbf{m \ = \ -3 \ \ } . $
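A direct check of the conclusion (verification only):

```python
# For m = -3 the cubics x³ + 3x² - 4 and x³ - 3x + 2 share the roots
# x = 1 and x = -2, and match the factored forms found above.
def f(x):
    return x**3 + 3 * x**2 - 4      # x³ - m x² - 4 with m = -3

def g(x):
    return x**3 - 3 * x + 2         # x³ + m x + 2 with m = -3

for r in (1, -2):
    assert f(r) == 0 and g(r) == 0  # common zeros

for x in range(-5, 6):              # factored forms agree (exact integers)
    assert f(x) == (x - 1) * (x + 2) ** 2
    assert g(x) == (x - 1) ** 2 * (x + 2)
```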
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
} |
Equivalence and isomorphism of $\hom$s in $2$-categories/bicategories In a usual $(1)$-category $C$, if $A$ is an object and $B\cong B'$ are isomorphic objects, then the hom-sets $\hom(A,B)\cong \hom(A,B')$ are in bijective correspondence, because composition with the isomorphism $B\cong B'$ yields a bijection.
Now, consider a bicategory or $2$-categories $C$, i.e. the $\hom$s are not sets but categories. The objects of such a $\hom(A,B)$ are then called 1-morphisms and the morphisms of such a $\hom(A,B)$ $2$-morphisms. There are now two notions of sameness between objects:
*
*$A$ and $B$ are isomorphic if there are 1-morphisms $f\colon A\to B$ and $g\colon B\to A$ such that $gf=id$ and $fg=id$.
*$A$ and $B$ are equivalent if there are 1-morphisms $f\colon A\to B$ and $g\colon B\to A$ such that $gf$ and $fg$ are isomorphic to $id$ (in the respective $\hom$-categories), as witnessed by pairs of 2-morphisms.
Now, fix objects $A$, $B$, and $B'$. Questions:
*
*Let $B$ and $B'$ be isomorphic. Does it follow that the category $\hom(A,B)$ is isomorphic to $\hom(A,B')$? Does it follow that the category $\hom(A,B)$ is equivalent to $\hom(A,B')$?
*Let $B$ and $B'$ be equivalent. Does it follow that the category $\hom(A,B)$ is equivalent to $\hom(A,B')$? Does it follow that the category $\hom(A,B)$ is isomorphic to $\hom(A,B')$?
| As Zhen Lin said in the comments:
In a (strict) 2-category isomorphic objects have isomorphic hom-categories, and equivalent objects have equivalent hom-categories.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In classical predicate logics, why is it usually assumed that at least one object exists? In classical predicate logic it is commonly assumed that the domain of objects is non-empty. This validates inferences such as $$\forall x Fx \models \exists x Fx$$ as well as, if the identity predicate is available, the logical truth of $$\exists x~x=x.$$ What is the motivation for this assumption? In reality it is arguably not a logical contradiction to assume that nothing exists.
| If you allow the domain to be empty, then you need to introduce more complexity to handle non-logical symbols.
You can relax or eliminate the assumption of domain non-emptiness, which gives you various flavors of free logic.
Free logic on the face of it addresses a slightly different problem, namely how to handle constant symbols that happen not to refer to anything or partial functions.
However, these are questions that you need to address if you want to allow constant symbols in your non-logical vocabulary. In the theory of Peano Arithmetic, for example, there's a constant $0$. In an empty structure, there's nothing that $0$ can refer to ... so it's not clear how to build a structure with an empty domain that could be or not-be a model of PA.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 2
} |
Why aren't all points in a set limit points (by this definition)? I am self-studying Real Analysis using S. Abbott's Understanding Analysis and am a bit confused with the definition of a limit point of a set on page 89. The definition is as follows:
A point $x$ is a limit point of a set $A$ if every $\epsilon$-neighbourhood $V_\epsilon(x)$ of $x$ intersects the set $A$ at some point other than $x$.
This confuses me a bit - doesn't this apply to all points in $A$? For example, if we defined some arbitrary point $a$ somewhere in the middle of the set, won't we be able to keep shrinking the size of the $\epsilon$-neighbourhood and, by the Archimedean property, always find a member of this $\epsilon$-neighbourhood that is a member of $A$ but not $x$?
Can someone help me reconcile this? I am a bit confused. Also, the additional notion of isolated points make this even more confusing - perhaps someone could explain the differences as well?
Thanks!
|
This confuses me a bit - doesn't this apply to all points in A? for
example, if we defined some point arbitrary a somewhere in the middle
of the set, won't we be able to keep shrinking the size of the
ϵ-neighbourhood, and by archimedian property, always find a member of
this ϵ-neighbourhood that is a member of $A$ but not $x$?
No. Consider $\ A = [0,1]\cup \{ 2\}.\ $ For any $\ \varepsilon<1,\quad V_{\varepsilon}(2)\cap A = \{2\}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Does there exist a dense subset $\ X\ $ of $\ \mathbb{R}\ $ and a real number $\ a\neq 0\ $ such that $\{\ x+a:\ x\in X\ \} =\mathbb{R}\setminus X\ ?$ Does there exist a dense subset $\ X\ $ of $\ \mathbb{R},\ $ and a real number $\ a\neq 0\ $ such that $\ \{\ x+a:\ x\in X\ \} = \mathbb{R}\setminus X\ ?$
Clearly our set $\ X\ $ must be totally disconnected and uncountable on every interval in order for $\ X\ $ to exist.
But I'm actually not sure on the answer to this question, and find it hard to give even a heuristic argument as to why the answer should be negative or affirmative. Maybe it has something to do with the Baire Category Theorem?
| $$X=\{ x \in \mathbb R : \lfloor x\rfloor\text{ is odd}\}\mathbin{\triangle} \mathbb Q = \bigcup_{n\in\mathbb Z}\,\Bigl([2n,2n+1)\cap\mathbb Q\Bigr) \cup \Bigl([2n+1,2n+2) \setminus \mathbb Q\Bigr) $$
This works for $a=1$. Scale it by your favorite $a$ if you want a different one.
(This $X$ is countable on every other unit interval, by the way).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Compute upper limit and lower limit Consider the sequence, $0<a_{1}<a_{2}$ and
\begin{eqnarray*}
a_{n}= \frac{a_{n-1}+a_{n-2}}{2}
\end{eqnarray*}
for $n\geq 3$. Show that:
\begin{eqnarray*}
\overline{\lim a_{n}}=\underline{\lim a_{n}}
\end{eqnarray*}
For this, the bounds are $a_{1}$ and $a_{2}$ so I can describe the sequence:
\begin{align*}
\overline{a_{k}} &= \begin{cases}
a_{k} & \text{if } k \text{ is even} \\
a_{k+1} & \text{if } k \text{ is odd}
\end{cases} \\
\underline{a_{k}} &= \begin{cases}
a_{k} & \text{if } k \text{ is odd} \\
a_{k+1} & \text{if } k \text{ is even}
\end{cases}
\end{align*}
So $\{\overline{a_{k}}\}=\{a_{2},a_{4},a_{6},...\}$ and $\{\underline{a_{k}}\}=\{a_{1},a_{3},a_{5},...\}$; the sequence $\{\overline{a_{k}}\}$ is decreasing and the sequence $\{\underline{a_{k}}\}$ is increasing, so, by Weierstrass's theorem, they are convergent. But when I try to compute $\overline{\lim a_{n}}$ or $\underline{\lim a_{n}}$, I proceed as follows. Since:
\begin{eqnarray*}
a_{2k}=\frac{a_{2k-1}+a_{2k-2}}{2}
\end{eqnarray*}
for $n\geq 3$ and $\lim a_{2k}= x = \lim a_{2k-1}= \lim a_{2k-2}$, but this doesn't give much information. Can you give some hint or advice on how to start? Thank you
| At each step, you take the midpoint of the two previous terms. That means that $a_3$ is the midpoint of $[a_1,a_2]$; similarly, $a_4$ is the midpoint of $[a_2,a_3]$, etc. This suggests that $a_{n+1}-a_n$ behaves like $\frac{1}{2^n}$; indeed
$$ a_{n+2}-a_{n+1}=\frac{a_{n+1}+a_n}{2}-a_{n+1}=-\frac{1}{2}(a_{n+1}-a_n) $$
Therefore $a_{n+1}-a_n=\lambda\left(-\frac{1}{2}\right)^n$ where $\lambda$ is a constant. Summing this gives that $a_n=\alpha+\beta\left(-\frac{1}{2}\right)^n$ where $\alpha,\beta$ are constants. Therefore $(a_n)$ converges and thus $\liminf\limits_{n\rightarrow +\infty} a_n=\limsup\limits_{n\rightarrow +\infty} a_n=\lim\limits_{n\rightarrow +\infty} a_n$.
Note : You can directly show that $a_n=\alpha+\beta\left(-\frac{1}{2}\right)^n$ by searching the roots of $X^2-\frac{1}{2}X-\frac{1}{2}$, which are $1$ and $-\frac{1}{2}$.
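Numerically, with the constants fixed by $a_1,a_2$ (solving for $\alpha,\beta$ gives $\alpha=\frac{a_1+2a_2}{3}$, a value derived here rather than stated above), the sequence visibly converges to $\alpha$ and consecutive gaps shrink by the factor $-\frac12$:

```python
# Iterate a_n = (a_{n-1} + a_{n-2})/2 and compare with the predicted limit
# α = (a₁ + 2a₂)/3; also check the gap ratio is -1/2.
a1, a2 = 1.0, 2.0
alpha = (a1 + 2 * a2) / 3

seq = [a1, a2]
for _ in range(60):
    seq.append((seq[-1] + seq[-2]) / 2)

assert abs(seq[-1] - alpha) < 1e-12
assert abs((seq[11] - seq[10]) / (seq[10] - seq[9]) + 0.5) < 1e-12
```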
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4227839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$k[x,y]$ is not a Prüfer Domain? I know that if $R$ is a domain, the following conditions are equivalent:
*
*$R$ is a Prüfer Domain,
*If $I,J,K \subset R$ are nonzero ideals, $I + (J \cap K) = (I \cap J) + (I \cap K)$
*If $I,J,K \subset R$ are nonzero ideals, $I \cap (J + K) = (I + J) \cap (I + K)$
It is stated that $k[x,y]$ is not a Prüfer Domain where $k$ is a field in When do (multivariate) polynomial rings fail to be Prüfer rings?. However, I can not find any three nonzero ideals $I,J,K \subset k[x,y]$ such that $I + (J \cap K) \neq (I \cap J) + (I \cap K)$ or $I \cap (J + K) \neq (I + J) \cap (I + K)$.
Any ideas would be appreciated.
| Let $$I=\langle x^2,y^2, xy \rangle $$ $$J=\langle x^2 \rangle$$ $$K=\langle y^2 \rangle$$ Then
$$I+(J \cap K) = \langle x^2,y^2, xy \rangle + \langle x^2y^2 \rangle=\langle x^2,y^2, xy \rangle=I$$
$$I \cap J = J \qquad \qquad \qquad I \cap K = K$$
$$(I \cap J) + (I \cap K)= J+K=\langle x^2, y^2 \rangle \neq I$$
Indeed, using this argument you can prove a more general result:
In any Prüfer domain you have the equality of ideals:$$\langle a, b \rangle^2 = \langle a^2, b^2 \rangle$$
Since in $k[x,y]$ this does not hold ($\langle x, y \rangle^2 = \langle x^2, y^2, xy \rangle$), this is not a Prüfer domain.
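Both membership claims here are easy to check mechanically, because these are monomial ideals: a polynomial lies in a monomial ideal iff each of its monomials is divisible by some generator. A small sketch of that test:

```python
# Monomials are encoded as (deg_x, deg_y) exponent pairs.
def divides(g, m):
    return m[0] >= g[0] and m[1] >= g[1]  # generator g divides monomial m

def in_monomial_ideal(monomials, generators):
    return all(any(divides(g, m) for g in generators) for m in monomials)

xy = [(1, 1)]
assert in_monomial_ideal(xy, [(2, 0), (1, 1), (0, 2)])  # xy ∈ ⟨x²,xy,y²⟩ = ⟨x,y⟩²
assert not in_monomial_ideal(xy, [(2, 0), (0, 2)])      # xy ∉ ⟨x², y²⟩
```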
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Optimising the given function I am required to find minimum value of the following function without calculus.
$$f(a,b)=\sqrt{a^2+b^2}+2\sqrt{a^2+b^2-2a+1}+\sqrt{a^2+b^2-6a-8b+25}$$
My attempt:
I realised that I can write the function as $$f(a,b)=\sqrt{(a-0)^2+(b-0)^2}+2\sqrt{(a-1)^2+(b-0)^2}+\sqrt{(a-3)^2+(b-4)^2}$$ which when $a$ and $b$ replaced with coordinates in Cartesian plane represents distances from points $(1,0),(0,0)$ and $(3,4).$ But I'm not sure how I can minimise this.
| The function to be minimized is composed of three addends, one of which has a double coefficient. Eliminate it by imposing that
$\sqrt{a^{2}+b^{2}-2a+1}=0$
gives zero contribution, which forces:
$a=1$ and $b=0$.
This choice is justified by the triangle inequality: writing $P=(a,b)$, we have $|P-(0,0)|+|P-(1,0)|\ge 1$ and $|P-(1,0)|+|P-(3,4)|\ge\sqrt{2^2+4^2}=2\sqrt5$, so $f\ge 1+2\sqrt5$, with equality exactly when $P$ lies on both segments, i.e. at $P=(1,0)$.
Therefore the minimum is:
$f(1,0)=\sqrt{1^2+0^2}+2\sqrt{1^2+0^2-2+1}+\sqrt{1^2+0^2-6-0+25}=1+\sqrt{20}=1+2\sqrt{5}$.
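A grid search corroborates this (numerical confirmation only; the grid bounds and step are arbitrary choices of mine):

```python
# Minimize f over a grid with step 0.02; the minimum sits at (a, b) = (1, 0)
# with value 1 + 2√5 ≈ 5.472.
import math

def f(a, b):
    return (math.hypot(a, b)
            + 2 * math.hypot(a - 1, b)
            + math.hypot(a - 3, b - 4))

best = min((f(i / 50, j / 50), i / 50, j / 50)
           for i in range(-50, 201) for j in range(-50, 251))

assert (best[1], best[2]) == (1.0, 0.0)
assert abs(best[0] - (1 + 2 * math.sqrt(5))) < 1e-9
```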
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
} |
Understanding the martingale property My lecture notes says that "The idea of the martingale property is that, on average, the Markov chain stays where it is and for this to be true, the chain must stay where it is all the time (i.e. be in an absorbing state) or be able to move in both directions. This shows that, for a martingale on the state space $$S = \{0, \dots, d\},$$ the states $0$ and $d$ must be absorbing."
Intuitively, I understand why the states $0$ and $d$ must be absorbing.
However, my lecture notes then goes on to say that "For a more formal demonstration, note that the martingale property shows that $$\sum^d_{y = 0} yP(0, y) = 0$$ so that $$P(0, 1) = P(0, 2) = \dots = P(0, d - 1) = P(0, d) = 0$$ and we see that state $0$ is absorbing. A similar argument shows that state $d$ is absorbing."
This is the part that I do not quite understand yet.
Thus, my question is, to show that state $d$ is absorbing, I can get from $$\sum^d_{y = 0} yP(d, y) = d$$ to $$P(d, 1) + 2P(d, 2) + \dots + (d - 1)P(d, d - 1) + dP(d, d) = d,$$ but how do I conclude that $$P(d, d - 1) = P(d, d - 2) = \dots = P(d, 1) = P(d, 0) = 0?$$
I am only taking an introductory module in stochastic processes, so any explanations to the mathematical proof will be greatly appreciated :)
| If we're in state $d$, we can only stay in $d$ or transition to some value less than $d$ - that is to say, the probability of increase is 0. But this is a martingale, so our expected movement must be zero - it follows trivially that the probability of decrease must also be zero, and so the state is absorbing.
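The same point in miniature: on $\{0,\dots,d\}$, moving any probability mass from $d$ to a lower state strictly lowers the row mean below $d$ (a toy illustration with $d=5$, not a proof):

```python
# Row mean Σ y p(y) equals d only when all probability sits on state d.
d = 5

def mean(p):
    return sum(y * p[y] for y in range(d + 1))

p = [0.0] * d + [1.0]      # the absorbing row: P(d, d) = 1
assert mean(p) == d

for y in range(d):         # shift mass ε = 0.1 from d down to state y
    q = p.copy()
    q[d] -= 0.1
    q[y] += 0.1
    assert mean(q) < d     # mean drops by ε(d - y) > 0
```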
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Shortest equivalent word for FPG Given a finitely presented group $G$ and a word $w$ in $G$, is there always an efficient algorithm to find the set of shortest words equivalent to $w$?
I am guessing that the problem of finding the shortest words is undecidable if the group is infinite, but possibly efficient if the group is finite.
| The word problem for a group $G$ given by a finite presentation $\langle \mathbf{x}\mid\mathbf{r}\rangle$ is the problem of determining whether a word $W\in(\mathbf{x}^{\pm1})^*$, i.e. a word over the generators $\mathbf{x}$ and their formal inverses, defines the trivial element of $G$. This problem is undecidable in general.
Theorem.
Fix a group $G=\langle \mathbf{x}\mid\mathbf{r}\rangle$.
There exists an algorithm with input a word $W\in(\mathbf{x}^{\pm1})^*$ and output the set of shortest words equivalent to $W$ if and only if $G$ has decidable word problem.
Proof.
The word problem in $G=\langle \mathbf{x}\mid\mathbf{r}\rangle$ reduces to the problem here, as a word defines the trivial element if and only if the set of shortest words equivalent to $W$ is precisely $\{\epsilon\}$ (here $\epsilon$ denotes the empty word, the unique word of length $0$). Hence, there is no algorithm to solve your problem in groups with undecidable word problem.
Suppose on the other hand that $G=\langle \mathbf{x}\mid\mathbf{r}\rangle$ has decidable word problem. Then enumerate all words $U\in(\mathbf{x}^{\pm1})^*$ such that $|U|\leq|W|$. By applying the word problem to $U^{-1}W$, we can find all such words which are additionally equal to $W$ in $G$. Hence, we have found the set $\mathcal{S}$ of all words $U$ which are no longer than $W$ and which are equal to $W$ in $G$. The set we are seeking is the set of shortest elements in $\mathcal{S}$, which is now easily computable. QED
In particular, there are infinite groups where this problem can be solved (e.g. finitely generated abelian groups and free groups).
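Given a word-problem oracle, the enumeration in the proof is directly implementable. Below is a minimal Python sketch for the free abelian group $\mathbb{Z}^2$, where the word problem is decidable by counting net exponents; all names here are illustrative:

```python
from itertools import product

# Sketch of the algorithm in the proof, for the free abelian group Z^2
# with generators x, y (capital X, Y denote the formal inverses).  Its
# word problem is decidable: a word is trivial iff both net exponents
# vanish.
GENS = "xyXY"
STEP = {"x": (1, 0), "X": (-1, 0), "y": (0, 1), "Y": (0, -1)}
INV = {"x": "X", "X": "x", "y": "Y", "Y": "y"}

def is_trivial(word):
    """Word-problem oracle for Z^2."""
    ex = sum(STEP[g][0] for g in word)
    ey = sum(STEP[g][1] for g in word)
    return ex == 0 and ey == 0

def inverse(word):
    return "".join(INV[g] for g in reversed(word))

def shortest_equivalents(w):
    # Enumerate every word U with |U| <= |w|, keep those with
    # U^{-1} W trivial (i.e. U = W in the group), return the shortest.
    equal = ["".join(u)
             for n in range(len(w) + 1)
             for u in product(GENS, repeat=n)
             if is_trivial(inverse("".join(u)) + w)]
    shortest = min(len(u) for u in equal)
    return sorted(u for u in equal if len(u) == shortest)

print(shortest_equivalents("xyX"))  # -> ['y']
```

The enumeration is exponential in $|W|$, matching the proof's observation that decidability, not efficiency, is what transfers.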
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What is the measure of $\measuredangle B$ in the trapezoid below? For reference: In the trapezoid $ABCD$ ($CB \parallel AD$), $m\measuredangle B = 4m\measuredangle D$ and $11AB + 5BC = 5AD$. Calculate $\measuredangle D$. (answer: $26.5^\circ$)
My progress:
I could only find a relationship between the sides, but I can't find one involving the angles.
Draw $BE \parallel CD \implies \measuredangle BEA = \theta$.
Draw $BF$ with $\measuredangle FBE = \theta$, so $\measuredangle ABF = 2\theta$ and $\measuredangle AFB = 2\theta$; hence $\triangle ABF$ and $\triangle EFB$ are isosceles.
$$11m + 5a = 5(m+n+a) \Rightarrow 11m + 5a = 5m + 5n + 5a \Rightarrow 11m = 5m + 5n \therefore m = \frac{5n}{6}$$
| Let $K\in AD$ such that $ABCK$ be a parallelogram.
Thus, by law of sines for $\Delta KCD$ we obtain:
$$\frac{m}{\sin\theta}=\frac{\frac{11m}{5}}{\sin3\theta}.$$
Can you end it now?
I got $\theta=\arcsin\frac{1}{\sqrt5}.$
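Finishing the law-of-sines step numerically (a quick sanity check, not a proof), using $\sin 3\theta/\sin\theta = 3 - 4\sin^2\theta$:

```python
from math import asin, degrees, sin, sqrt

# m/sin(theta) = (11m/5)/sin(3*theta)  <=>  sin(3t)/sin(t) = 11/5,
# and sin(3t)/sin(t) = 3 - 4 sin^2 t, so sin(t) = 1/sqrt(5).
theta = asin(1 / sqrt(5))
assert abs(sin(3 * theta) / sin(theta) - 11 / 5) < 1e-12
print(f"{degrees(theta):.2f} degrees")  # -> 26.57 degrees
```

This matches the book's stated answer of $26.5^\circ$ up to rounding.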
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $R$ be a commutative ring with unity. $s\in U(S) \iff \det(s) \in U(R)$ Let $R$ be a commutative ring with $1_R$ and
let
$$
S = \biggl\{ \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \;\Biggm| \; a,b,c \in R \;\biggr\}.
$$
If $s = \begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \in S$, is it true that
$$
s \in U(S) \iff \det(s) \in U(R)?
$$
My instinct says it isn't, but I can't find an example to settle it.
| Answer: For any commutative ring $A$ and any $n\times n$ matrix $R\in Mat(n,A)$ with coefficients in $A$ you may define the adjugate matrix $adj(R)$. This matrix has the property that $adj(R)R=Radj(R)=det(R)Id$, where $Id$ is the $n\times n$ identity matrix and $det(R)$ is the determinant. The determinant $det(R)\in A$ is multiplicative: $det(RR')=det(R)det(R')$. From this it follows that $R$ is invertible (there is a matrix $R^{-1}\in Mat(n,A)$ with $RR^{-1}=R^{-1}R=Id$) iff $det(R)\in A^*$ is a unit in $A$.
Proof: If $R$ is invertible, then $RR^{-1}=Id$ gives $det(RR^{-1})=det(R)det(R^{-1})=det(Id)=1$, hence $det(R)\in A^*$. Conversely, if $det(R)\in A^*$, then $R^{-1}:=det(R)^{-1}adj(R)$ is an inverse in $Mat(n,A)$.
You may check that in your case $adj(s)$ is upper triangular, hence the inverse (if it exists) lies in $S$. There is an explicit formula for the adjugate matrix of $2\times 2$-matrices:
Why the inverse of a matrix involves division by the determinant?
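The claim that the adjugate of an upper-triangular $2\times2$ matrix is again upper triangular can be checked symbolically; a short sketch with SymPy (using its `adjugate` method):

```python
import sympy as sp

# For s = [[a, b], [0, c]] the adjugate is again upper triangular, so
# when det(s) = a*c is a unit, the inverse det(s)^{-1} * adj(s) stays
# inside S.
a, b, c = sp.symbols("a b c")
s = sp.Matrix([[a, b], [0, c]])
adj = s.adjugate()

assert adj == sp.Matrix([[c, -b], [0, a]])  # upper triangular again
assert sp.expand(adj * s - s.det() * sp.eye(2)) == sp.zeros(2)
```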
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Does a finite-dimensional Grassmannian classify the subbundles of a trivial vector bundle? It is known that the universal vector bundle over the infinite-dimensional Grassmannian,
$$
E \longrightarrow Gr_n(\mathbb{R}^{\infty}),
$$
classifies the rank $n$ vector bundles in the sense that any such vector bundle (let me assume that $B$ is a compact CW complex)
$$
E' \longrightarrow B $$
is isomorphic to the pullback
$$
f^{*}E \longrightarrow B $$
for some $f: B \rightarrow Gr_n(\mathbb{R}^{\infty})$. Furthermore, two bundles
$f^{*}E$ and $g^{*}E$ are isomorphic if and only if $f$ and $g$ are homotopic.
Is there a version of this correspondence for the subbundles of a fixed trivial bundle? To be precise,
*
*Let $F_n^{D}$ be the canonical vector bundle over $Gr_n(\mathbb{R}^D)$. For fixed $n, d$, can we find $D$ with the following property: for every rank $n$ subbundle $F'$ of the trivial bundle $B\times \mathbb{R}^d$, there is a $f:B \rightarrow Gr_n(\mathbb{R}^D)$ such that $F'\approx f^{*}F_n^D$ ?
*Can we assume $D=d$?
*For two $f,g: B \rightarrow Gr_n(\mathbb{R}^D)$, does $f^{*}F_n^D \approx g^{*}F_n^D$ hold if and only if $f,g$ homotopic? How does the answer depend on the dimension of $B$?
| Connor Malin's comment pointed out that there is not likely to be a representability theorem for subbundles of a trivial bundle as in the infinite-dimensional Grassmannian case. This resolved most of my initial concerns.
Aside from that, I think I found a direct counterexample to 1&2&3 in the case $B=S^1\times S^1\times S^1$.
For simplicity, I will take $F=\mathbb{C}$ as the base field. The following construction is used in condensed matter physics to build a model of Hopf Insulator.
$$
f: B \longrightarrow Gr_1(\mathbb{C}^2)=\mathbb{C}P^1,
\\
(k_1,k_2,k_3) \mapsto [\sin{k_1}+i\sin{k_2}:\sin{k_3}+i(\cos{k_1}+\cos{k_2}+\cos{k_3}-3/2)],$$
where $k_i$'s are the periodic coordinates on $B=T^3$.
$f$ defines the pullback line bundle $f^{*}E$ of the canonical bundle $E$ over $\mathbb{C}P^1$, which is trivial since it has a nonvanishing section
$$
\sigma:B\longrightarrow f^{*}E \subseteq B\times \mathbb{C}^2,
\\
\mathbf{k} \mapsto \bigg(\mathbf{k},\big(\sin{k_1}+i\sin{k_2},\sin{k_3}+i(\cos{k_1}+\cos{k_2}+\cos{k_3}-3/2) \big)\bigg). $$
Hence hypotheses 1&2&3 in the question cannot be true simultaneously in the complex case unless $f$ is nullhomotopic.
In fact, $f$ restricts to nullhomotopic maps on the surfaces $\{1\}\times S^1\times S^1$, $S^1\times \{1\}\times S^1$, $S^1\times S^1\times \{1\}$, and the homotopy extension property of CW pairs enables us to see $f$ as
$$
\widetilde{f}: S^3 \longrightarrow \mathbb{C}P^1\approx S^2.
$$
$\widetilde{f}$ has nonzero Hopf invariant,
$$
\chi=-\frac{1}{4\pi^2}\int_{T^3} d\mathbf{k}\;\mathbf{F}\cdot\mathbf{A}=1, \quad\text{where}
\\
\mathbf{A}(\mathbf{k})=i \overline{f(\mathbf{k})} \cdot \nabla_{\mathbf{k}} f(\mathbf{k}) \;(\text{here $f$ maps into $\mathbb{C}^2$}),\;
\mathbf{F}(\mathbf{k})= \nabla_{\mathbf{k}} \times \mathbf{A}(\mathbf{k}),
$$
which implies that $\widetilde{f}$ cannot be homotoped to a constant map.
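The claim that the section $\sigma$ is nonvanishing can be probed numerically; a grid check in Python (evidence, not a proof):

```python
import numpy as np

# Grid check that the section sigma never vanishes on T^3: the C^2
# vector
#   (sin k1 + i sin k2, sin k3 + i (cos k1 + cos k2 + cos k3 - 3/2))
# stays bounded away from zero, which is what makes f*E trivial.
k = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
z1 = np.sin(k1) + 1j * np.sin(k2)
z2 = np.sin(k3) + 1j * (np.cos(k1) + np.cos(k2) + np.cos(k3) - 1.5)
norm = np.sqrt(np.abs(z1) ** 2 + np.abs(z2) ** 2)
print(round(float(norm.min()), 3))  # -> 0.5, attained at (0, 0, pi)
```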
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4228946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Linear Programming: At least two different (integer) values Let's assume I have multiple tools $T=\{t_1,\dots,t_n\}$ (binary decision variables) and each tool has an efficiency, $E=\{e_1,\dots,e_n\}$. For a given $k$ I now want to maximize the efficiency:
$$\max \sum e_i t_i$$ $$\text{s.t. } \sum t_i \leq k$$
The solution would indicate which tools to choose (yes, in this very simple example a greedy algorithm (take the $k$ most efficient tools) would also do the trick ;-))
But now let's say, the tools also have an associated category $C=\{c_1,\dots,c_n\}, c_i\in\mathbb{N}$ and it is required that at least two (or more generally: $m\leq n$) tool categories are present in a feasible solution. Is there any way to encode this in a linear constraint?
Note that this is not a "no two pairs are equal" question as a certain category can occur multiple times as long as at least $m=2$ categories are covered.
| Add a new variable $\rho_i \in [0,1]$ per category,
together with the constraints
$$\rho_i \le \sum_{j : c_j = i} t_j$$
and
$$ \sum_i \rho_i \ge m.$$
Then $\rho_i$ can only be $1$ if at least one tool from category $i$ is chosen. Hence, the second constraint requires that at least $m$ different categories are present.
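The encoding can be sanity-checked by brute force on a small hypothetical instance: for every 0/1 tool selection, a feasible $\rho$ exists exactly when at least $m$ distinct categories are covered. A Python sketch (the data `c`, `m` are illustrative):

```python
from itertools import product

# For each 0/1 tool vector t, some rho in [0,1]^categories with
#   rho_i <= sum_{j : c_j = i} t_j   and   sum_i rho_i >= m
# exists exactly when the chosen tools cover >= m distinct categories.
c = [1, 1, 2, 3]   # categories of the four tools (illustrative data)
m = 2
cats = sorted(set(c))

checked = 0
for t in product([0, 1], repeat=len(c)):
    covered = {c[j] for j in range(len(c)) if t[j]}
    # The best rho pushes each rho_i to min(1, number of chosen tools
    # in category i); feasibility is equivalent to that sum being >= m.
    best = sum(min(1, sum(t[j] for j in range(len(c)) if c[j] == i))
               for i in cats)
    assert (best >= m) == (len(covered) >= m)
    checked += 1
print(checked)  # -> 16
```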
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4229026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find all $a,b$ for which the polynomial has real roots and are in geometric progression. Find all $a, b$ such that the roots of $x^3 + ax^2 + bx − 8 = 0$ are real and in a geometric progression.
I managed to deduce that $a=-\dfrac{b}{2}$.
Using Vieta's relations I deduced that if $\alpha,\beta$ and $\gamma$ are the roots of the equation (in that order) then $\beta^2 = \alpha \gamma$, which implies $\beta^3 = 8$, i.e. $\beta=2$; using the other two relations, I found that $a=-\dfrac{b}{2}$, but got stuck when determining which values of $a$ actually yield real roots.
| Let $\dfrac{2}{r}$, $2$ and $2r$ be the roots.
Since you have $b=-2a$,
$$8r^3+4ar^2-4ar-8=0$$
$$(r-1)[2r^2+(a+2)r+2]=0$$
Now check the discriminant in $2r^2+(a+2)r+2=0$,
$$\Delta=(a+2)^2-16 \ge 0$$
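A numerical spot check of one admissible pair ($a=-7$ is a hypothetical sample satisfying the discriminant condition):

```python
import numpy as np

# a = -7 satisfies (a+2)^2 - 16 >= 0, and b = -2a = 14 gives
#   x^3 - 7x^2 + 14x - 8 = (x - 1)(x - 2)(x - 4),
# whose roots 1, 2, 4 are real and in geometric progression (ratio 2).
a = -7.0
b = -2 * a
roots = np.sort(np.roots([1, a, b, -8]))
assert np.allclose(roots, [1.0, 2.0, 4.0])
assert np.isclose(roots[1] ** 2, roots[0] * roots[2])  # beta^2 = alpha*gamma
```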
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4229233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
evaluate $\int_0^{\pi/2} x^2\log(\sin x)\,dx$ I am a high school student, and I know how to evaluate $\int_0^{\pi/2} x\log(\sin x)\,dx$.
It would be great if someone could help me evaluate $\int_0^{\pi/2} x^2\log(\sin x)\,dx$ and tell me whether this integral is elementary or non-elementary.
I tried using the "$a-x$" property, but it resulted in $0=0$.
| Let
$$S=\int_0^{\pi/2} x^2\log(2\sin x)\,dx,\>\>\>\>\>
C=\int_0^{\pi/2} x^2\log(2\cos x)\,dx$$
Since you already know
$\int_0^{\pi/2} x\log(2\cos x)\,dx=-\frac7{16}\zeta(3)$, apply the variable change $x\to\frac\pi2-x$ to the $S$-integral to get
$$S - C =-\pi\int_0^{\pi/2} x\log(2\cos x)\,dx=\frac{7\pi}{16}\zeta(3)\tag1
$$
Also
\begin{align}
S+C & = \int_0^{\pi/2} x^2\log(2\sin 2x)\,dx
\overset{2x\to x}=\frac18\int_0^{\pi} x^2\log(2\sin x)\,dx\\
&=\frac18S + \frac18\int_{\pi/2}^{\pi} x^2\log(2\sin x)\overset{x\to\frac\pi2+x}{ dx}\\
&= \frac18(S+C) + \frac\pi8 \int_0^{\pi/2} x\log(2\cos x)\,dx
= -\frac\pi{16}\zeta(3)\tag2
\end{align}
Combine (1) and (2) to obtain $S= \frac{3\pi}{16}\zeta(3)$, or
$$\int_0^{\pi/2} x^2\log(\sin x)\,dx= \frac{3\pi}{16}\zeta(3)-\frac{\pi^3}{24}\ln2$$
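The closed form can be confirmed numerically; a short sketch assuming SciPy is available:

```python
from math import log, pi, sin

from scipy.integrate import quad
from scipy.special import zeta

# The integrand behaves like x^2 log x near 0, so quad copes fine
# despite the logarithmic endpoint behaviour.
numeric, _ = quad(lambda x: x**2 * log(sin(x)), 0.0, pi / 2)
closed = 3 * pi / 16 * zeta(3) - pi**3 / 24 * log(2.0)
print(abs(numeric - closed) < 1e-6)  # -> True
```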
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4229391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
If $(Av,Au)=(v,u)$ then matrix $A$ is orthogonal
Let $A\in M_{n \times n}(\Bbb R)$ and suppose that for every $u, v \in \Bbb R^{n}$ $$(Av,Au) = (v,u)$$ where $(\cdot,\cdot)$ is the standard inner product on $\Bbb R^{n}$. Prove $A$ is an orthogonal matrix.
I wasn't able to solve it, I got to the point
$\left(Av,Au\right)=\left(Av\right)^{T}Au=v^{T}A^{T}Au$ and $\left(v,u\right)=v^{T}u$
$\left(Av,Au\right)=\left(v,u\right)\ \Rightarrow\ v^{T}A^{T}Au = v^{T}u$
and now I'm stuck. I don't know if I can conclude that $A^{T}A=I_{n}$ just from the last equation, and if not, how to show it.
Now I have two questions, first the official solution is this
Let $u = e_{i}$ and $v = e_{j}$. Then $(a_{i}, a_{j}) = (Ae_{i}, Ae_{j}) = (e_{i}, e_{j}) = \delta_{i,j}$. Thus, the columns of $A$ are
orthonormal, so $A$ is orthogonal.
This is super unclear.. what are $a_{i},a_{j}$? What is $\delta_{i,j}$?
I understand that $(e_{i},e_{j})=0$ if $i \ne j$ and $1$ otherwise, but why can we choose specific $v,u$ if it's a "for every" claim? If someone can explain the logic behind the solution I'd be grateful.
the second question is if there is another way to solve it?
| The $e_i$ form the standard orthonormal basis, and $a_i=Ae_i$ is the $i$-th column of $A$. $\delta_{i,j}$ is called the Kronecker delta and is simply $1$ if $i=j$ and $0$ if $i\neq j$.
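A concrete illustration of the official solution with a $2\times2$ rotation matrix (a hypothetical example):

```python
import numpy as np

# The columns a_i = A e_i of an orthogonal matrix satisfy
# (a_i, a_j) = delta_{ij}, which is exactly the statement A^T A = I.
th = 0.7
A = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
I = np.eye(2)

for i in range(2):
    for j in range(2):
        a_i, a_j = A @ I[:, i], A @ I[:, j]
        assert np.isclose(a_i @ a_j, 1.0 if i == j else 0.0)

assert np.allclose(A.T @ A, I)
```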
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4229521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |