Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Finding $\int \frac{\cos(2x) dx}{\cos^4x+\sin^4x}$ $\int \frac{\cos(2x) dx}{\cos^4x+\sin^4x}$
I'm looking more into simplifying this than the solution itself (which I know involves using t=tan(x/2)).
I did:
$$\int \frac{\cos(2x) dx}{\cos^4x+\sin^4x} = \int\frac{\cos(2x)dx}{(\cos^2x+\sin^2x)^2-2\sin^2x\cos^2x} = \int\frac{\cos(2x)dx}{1-2\sin^2(2x)}$$
I tried applying the t=tan(y/2) (where y = 2x) but it became a mess so I figure this can be simplified further... help?
| We know that
$$\begin{align}
\cos(2x) &= \cos^2(x) - \sin^2(x)\\
\sin(x) &= \dfrac{\tan(x)}{\sec(x)}\\
\sec^2(x) &= 1 + \tan^2(x)
\end{align}$$
So,
$$\int\dfrac{\cos(2x)}{\sin^4(x)+\cos^4(x)}\,\mathrm dx \equiv \int\sec^2x\left(\dfrac{-(\tan(x) - 1)(\tan(x) + 1)}{\tan^4(x) + 1}\right)\,\mathrm dx$$
Let $u = \tan(x)$. So, $\dfrac{\mathrm du}{\mathrm dx} = \sec^2(x)\to\mathrm dx = \dfrac{\mathrm du}{\sec^2(x)}$.
$$\implies\int\sec^2x\left(\dfrac{-(\tan(x) - 1)(\tan(x) + 1)}{\tan^4(x) + 1}\right)\,\mathrm dx\equiv-\int\dfrac{(u - 1)(u + 1)}{u^4 + 1}\mathrm du$$
Now, you should be able to factor the denominator and use partial fractions to get the final answer.
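As a sanity check (not part of the proof), the rewriting of the integrand above can be verified numerically at a few sample points; the two forms should agree wherever $\cos x \neq 0$:

```python
import math

def original(x):
    # cos(2x) / (cos^4 x + sin^4 x)
    return math.cos(2 * x) / (math.cos(x) ** 4 + math.sin(x) ** 4)

def substituted(x):
    # sec^2 x * (-(tan x - 1)(tan x + 1)) / (tan^4 x + 1)
    t = math.tan(x)
    sec2 = 1 / math.cos(x) ** 2
    return sec2 * (-(t - 1) * (t + 1)) / (t ** 4 + 1)

samples = [0.1, 0.5, 1.0, 2.0, -0.7]
max_gap = max(abs(original(x) - substituted(x)) for x in samples)
```

The agreement follows from dividing numerator and denominator by $\cos^4 x$.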
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Taylor series of $x \cdot \ln(10+x) $ with $x=9$ point of expansion I have been trying to solve this for quite a while, but still have no idea how to approach it.
$$x \cdot \ln(10+x) $$
with $x=9$ point of expansion.
Will appreciate any advice, thanks in advance!
| $$f(x)=x \ln \bigl(19+(x-9)\bigr)= x \ln 19+ x\ln\left(1+\frac{x-9}{19}\right)$$
Let $(x-9)/19=z$, then
$$\implies f(x)= 19 z \ln 19+ 9 \ln 19 + 19z \ln(1+z)+9 \ln (1+z)$$
Now use $$\ln(1+z)=z-z^2/2+z^3/3-z^4/4+\cdots$$
to expand $f(x)$ in powers of $z=(x-9)/19$ and add the coefficients of like powers of $z$. We get
$$f(x)=9\ln 19+ (9+19\ln 19) z+\frac{29}{2}z^2-\frac{13}{2} z^3+\frac{49}{12} z^4-..., z=(x-9)/19.$$
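The rational coefficients quoted in the final expansion can be reproduced with exact arithmetic: the sketch below multiplies the series of $\ln(1+z)$ by $19z+9$ (the $\ln 19$ terms only affect the constant and linear coefficients, so they are omitted here).

```python
from fractions import Fraction

N = 6  # number of series coefficients to keep
# ln(1+z) = z - z^2/2 + z^3/3 - z^4/4 + ...
log_coeffs = [Fraction(0)] + [Fraction((-1) ** (k + 1), k) for k in range(1, N)]

# coefficients of (19z + 9) * ln(1+z): shift once for the 19z factor, scale by 9
coeffs = [Fraction(0)] * N
for k in range(N):
    if k >= 1:
        coeffs[k] += 19 * log_coeffs[k - 1]
    coeffs[k] += 9 * log_coeffs[k]
```

This recovers $29/2$, $-13/2$ and $49/12$ for the $z^2$, $z^3$ and $z^4$ coefficients, matching the answer.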
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are strictly convex Banach norms Fréchet differentiable? Suppose $(V, \|\cdot\|_V)$ and $(W, \|\cdot\|_W)$ are two Banach spaces and $f: V \to W$ is some function. We call a bounded linear operator $A \in B(V, W)$ Fréchet derivative of $f$ in $x \in V$ iff
$$\lim_{h \to 0} \frac{\|f(x + h) - f(x) - Ah\|_W}{\|h\|_V} = 0$$
We call a $f$ Fréchet differentiable in $x$ iff there exists a Fréchet derivative of $f$ in $x$.
We call a Banach space $(V, \|\cdot\|)$ strictly convex iff $\forall x \neq y \in V, \lambda \in (0,1)$: if $\|x\|=\|y\|=1$, then $\|x + \lambda(y-x)\| < 1$.
Hilbert spaces are a particular case of strictly convex spaces.
Proof:
If $\langle x, x\rangle = 1$ and $\langle y, y \rangle = 1$, then $\langle x + \lambda(y-x), x + \lambda(y-x) \rangle = (1-\lambda)^2 + \lambda^2 + 2(1-\lambda)\lambda \langle x, y \rangle < (1-\lambda)^2 + \lambda^2 + 2(1-\lambda)\lambda = 1$, where the strict inequality uses $\langle x, y \rangle < 1$, which holds by Cauchy–Schwarz since $x \neq y$ are unit vectors.
My question is:
Suppose $(V, \|\cdot\|_V)$ is a strictly convex Banach space. $f: V \to \mathbb{R}, v \mapsto \|v\|_V$. Is it true, that $f$ is Fréchet differentiable $\forall x \in V \setminus \{0\}$?
If $V$ is a Hilbert space, then it is true.
Proof:
One can manually check, that $h \mapsto \frac{h}{2\sqrt{x_0}}$ is a Fréchet derivative for $x \mapsto \sqrt{|x|}$ in $x_0 \neq 0$. One can also manually check, that $h \mapsto 2\langle v, h \rangle_V$ is a Fréchet derivative for $x \mapsto \langle x, x \rangle_V$ in all $v \in V$. And it is a well known fact, that the composition of Fréchet derivatives of two functions is a Fréchet derivative of their composition. Thus, as $\|v\|_V = \sqrt{\langle v, v \rangle_V}$, we have, that $h \mapsto \ \frac{\langle v, h \rangle_V}{\|v\|_V}$ is a Fréchet derivative of $\|v\|_V$ in all $v \in V \setminus \{0\}$.
However, the condition of “strict convexity” cannot be omitted here: $(\mathbb{R}^2, l_\infty)$ is a counterexample.
| Here is a counterexample on $\mathbb R^2$:
$$
\|(x,y)\| := \sqrt{ \max(x^2 + 2y^2, \ 2x^2 + y^2 )}.
$$
It is the maximum of two strictly convex norms.
It is strictly convex and not differentiable at points with $|x|=|y|$.
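The kink can be seen numerically: at $(1,1)$ (where $|x|=|y|$), the one-sided derivatives of $t \mapsto \|(1+t,\,1)\|$ disagree. The sketch below estimates them by finite differences; the expected values $2/\sqrt 3$ and $1/\sqrt 3$ come from differentiating the two branches of the max.

```python
import math

def norm(x, y):
    # sqrt(max(x^2 + 2y^2, 2x^2 + y^2)), the counterexample norm
    return math.sqrt(max(x * x + 2 * y * y, 2 * x * x + y * y))

h = 1e-7
right = (norm(1 + h, 1) - norm(1, 1)) / h   # expected 2/sqrt(3): the 2x^2+y^2 branch
left = (norm(1, 1) - norm(1 - h, 1)) / h    # expected 1/sqrt(3): the x^2+2y^2 branch
```

The two one-sided slopes differ, so the norm is not (Fréchet) differentiable there.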
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An elegant proof for a claim about orthogonal and positive-definite matrices Let $P$ be a real $n \times n$ symmetric positive-semidefinite matrix.
Suppose that $\langle O,P \rangle = \langle I,P \rangle$ for some orthogonal matrix $O$. Here $\langle , \rangle$ is the Euclidean (Frobenius) inner product. The assumption is equivalent to $\text{tr}(O^TP)=\text{tr}(P)$.
Then $OP=P$ holds. Since we can replace $O$ with $O^T$ the claim is equivalent to the seemingly surprising assertion $\text{tr}(OP)=\text{tr}(P) \implies OP=P$.
I have a proof, and I wonder whether there are other shorter proofs.
Is there a proof which does not require diagonalizing $P$?
Here is my proof: By orthogonally diagonalizing $P$, we can reduce to the case where $P=\Sigma$ is diagonal, with non-negative entries $\sigma_i$.
The assumption implies $\sum_i \sigma_i O_{ii}=\sum_i \sigma_i$ where the sums run over all the indices $i$ where $\sigma_i \neq 0$. Since $|O_{ii}| \le 1$ this forces $O_{ii}=1$ for all these $i$. A direct check then implies that $O\Sigma=\Sigma$:
Indeed, $O\Sigma=\Sigma$ is equivalent to $O_{ij}\sigma_j=\Sigma_{ij}$.
If $\sigma_j=0$, then $\Sigma_{ij}=0$, so equality holds. If $\sigma_j \neq 0$, then $O_{jj}=1$, so equality holds for $i=j$. For $i \neq j$, $\Sigma_{ij}=0$ (since $\Sigma$ is diagonal), and $O_{ij}=0$, since whenever $O_{jj}=1$, all the other entries in the $j$-th column of $O$ are zero (because $O$ is orthogonal).
| The following proof does diagonalize $P$, but not in matrix form.
Let $\{v_1,\ldots,v_n\}$ be an orthonormal eigenbasis of $P$ and $Pv_i=\lambda_iv_i$ for each $i$. Then
$$
\langle P,I\rangle
= \langle P,O\rangle = \sum_i\langle Pv_i,Ov_i\rangle
\le \sum_i\|Pv_i\|\|Ov_i\|
= \sum_i\lambda_i
= \operatorname{tr}(P)
= \langle P,I\rangle.
$$
Therefore $\langle Pv_i,Ov_i\rangle=\|Pv_i\|\|Ov_i\|$ for each $i$. Since $Pv_i=\lambda_i v_i$ and $Ov_i$ is a unit vector, we have $\langle v_i,Ov_i\rangle=1$ and in turn $Ov_i=v_i$ whenever $\lambda_i>0$. Hence $OP$ and $P$ agree on $\{v_1,\ldots,v_n\}$, meaning that $OP=P$.
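A quick numerical illustration of the inequality chain above: for a symmetric PSD $P$ and an orthogonal $O$, $\langle P, O\rangle \le \operatorname{tr}(P)$. The sketch uses random 2×2 matrices with a rotation for $O$ (a convenient special case, not the general orthogonal group) to stay dependency-free.

```python
import math
import random

random.seed(0)

def trace_prod(A, B):
    # <A, B> = tr(A^T B) for 2x2 matrices, i.e. the Frobenius inner product
    return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

ok = True
for _ in range(200):
    # random symmetric PSD matrix P = M^T M
    m = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    P = [[sum(m[k][i] * m[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    th = random.uniform(0, 2 * math.pi)
    O = [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]
    if trace_prod(P, O) > P[0][0] + P[1][1] + 1e-12:  # <P,O> should not exceed tr(P)
        ok = False
```

No violation is ever observed, consistent with $\langle P,O\rangle \le \operatorname{tr}(P)$ with equality forcing $OP=P$.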
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Is this a valid mathematical model (MILP)? Is it OK to calculate values in one set of constraints and then use them in another in a MILP model? Here $Z$ and $Y$ are binary variables.
| No, in a mathematical programming model the limits of summation cannot be variables. (In a constraint programming formulation, you can sum a variable number of variables.) One workaround (assuming the $Z_{j,q}$ are nonnegative integers) is the following:
*
*introduce binary variables $x_{1,q},\dots, x_{K_q,q}$, where $K_q$ is the maximum possible value of $\sum_{j=1}^M Z_{j,q}$;
*constrain the new variables so that $x_{k,q}=1 \iff \sum_{j=1}^M Z_{j,q}\ge k$;
*change your second constraint to $\sum_{k=1}^{K_q} Y_{j,k,q}x_{k,q} = 1$; and
*linearize the product in that constraint (which is fairly straightforward if the $Y$ variables are bounded).
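For the last bullet: when both variables in a product are binary, the standard linearization introduces $w = u\cdot v$ via three inequalities. The brute-force check below (a generic sketch, not tied to the indices in the question) confirms that the constraints admit exactly $w = uv$:

```python
from itertools import product

def feasible_w(u, v):
    # linearization of w = u*v for binary u, v:
    #   w <= u,  w <= v,  w >= u + v - 1,  w binary
    return [w for w in (0, 1) if w <= u and w <= v and w >= u + v - 1]

results = {(u, v): feasible_w(u, v) for u, v in product((0, 1), repeat=2)}
```

Each pair $(u,v)$ leaves exactly one feasible $w$, namely the product; the bounded-continuous case mentioned in the answer works similarly with big-M bounds.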
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $m$ and $n$ are integers and $mn$ is even, $m$ is even or $n$ is even. I'm looking for feedback on my proof of the following statement:
"If $m$ and $n$ are integers and $mn$ is even, then $m$ is even or $n$ is even."
I tried using a direct proof method:
(1) Since $mn$ is even, $mn = 2k$ for some integer $k$. The integer $k$ must then be equal to $mn/2$, and 2 divides $mn$.
(2) In order for $k$ to be an integer, $m$ or $n$ must then have a factor of two, and the statement is proved.
| The really serious problem in your argument is here:
(2) In order for $k$ to be an integer, $m$ or $n$ must then have a
factor of two, and the statement is proved.
That's a problem because it is just a restatement of what you have been asked to prove.
One good way to start this problem is to suppose that both factors are odd. Then use what you know (and have proved) about the parity of a product of odd numbers.
Your attempt at a direct proof will work once you have proved other important theorems - in particular, the theorem that says that if a prime divides a product it divides one of the factors.
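The suggested contrapositive rests on the fact that a product of two odd numbers is odd; a small brute-force check over a range of integers (illustration only, not a proof) is below.

```python
def odd(k):
    return k % 2 == 1

# search for a pair of odd integers with an even product
counterexamples = [
    (m, n)
    for m in range(-50, 51)
    for n in range(-50, 51)
    if odd(m) and odd(n) and not odd(m * n)
]
```

No counterexample exists, matching $(2a+1)(2b+1) = 2(2ab+a+b)+1$.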
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Calculating the genus of Quartic model $y^2=x^4+bx^3+cx^2+dx+e$ of elliptic curves. This link says that $y^2=x^4+bx^3+cx^2+dx+e$ is an elliptic curve.
*
*How do we compute its genus (which should be 1)?
*Under what conditions is $y^2=p(x)$ an elliptic curve, where $p(x)$ is a polynomial in $x$ of degree $\ge 5$?
| This is called a hyperelliptic curve.
Basically your questions are already answered in the above wiki link, so please read that first.
I just want to add several important details as complements:
*
*One usually should assume that the polynomial $f(x) = x^4+bx^3+cx^2+dx+e$ doesn't have multiple roots, i.e. that $f$ is coprime to its derivative $f'$. Any multiple root of $f$ will lead to a singularity of the curve, hence will decrease the genus (in this case, necessarily to $0$).
*Even when we assume this, the plane curve $y^2 = f(x)$ does not represent an elliptic curve, even if we look at its completion in $\Bbb P^2$ (that is, the projective curve defined by $Y^2Z^2 = X^4 + bX^3Z + cX^2Z^2 + dXZ^3 + eZ^4$). This is because the point at infinity $(X, Y, Z) = (0, 1, 0)$ becomes a singularity. The correct statement is that the unique complete smooth curve which is birational to the curve $y^2 = f(x)$ has genus $1$. This curve can be obtained by blowing up (possibly multiple times) the previous projective curve at the point at infinity. The point at infinity eventually separates as two "points at infinity".
*Finally, that complete smooth curve is still one step away from being an elliptic curve, since one should fix a rational point on the curve (the "point at infinity" for the Weierstrass model, or the neutral element for the group law) to get an elliptic curve. This step however is easy: a common choice is either of the two "points at infinity" obtained via blow-up.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Taylor expansion about candidate point $x^*$: $f(x^∗ + h) = f(x^∗) + hf'(x^∗) + O(h^2)$? My textbook, Algorithms for Optimization, by Kochenderfer and Wheeler, says the following:
A point can also be at a local minimum if it has a zero derivative and the second derivative is merely nonnegative:
*
*$f'(x^∗) = 0$, the first-order necessary condition (FONC)
*$f''(x^∗) \ge 0$, the second-order necessary condition (SONC)
These conditions are referred to as necessary because all local minima obey these two rules. Unfortunately, not all points with a zero derivative and a zero second derivative are local minima, as demonstrated in figure 1.7.
The first necessary condition can be derived using the Taylor expansion about our candidate point $x^*$:
$$f(x^∗ + h) = f(x^∗) + hf'(x^∗) + O(h^2)$$
$$f(x^∗ − h) = f(x^∗) − hf'(x^∗) + O(h^2)$$
$$f(x^∗ + h) \ge f(x^∗) \Rightarrow hf'(x^∗) \ge 0$$
$$f(x^∗ − h) \ge f(x^∗) \Rightarrow hf'(x^∗) \le 0$$
$$\Rightarrow f'(x^∗)=0$$
Appendix C states the Taylor expansion about $a$ as
$$f(x) \approx f(a) + f'(a)(x - a) + \dfrac{1}{2} f''(a)(x - a)^2$$
So if we want the Taylor expansion about our candidate point $x^*$, as the textbook states, then we have
$$f(x) \approx f(x^*) + f'(x^*)(x - x^*) + \dfrac{1}{2} f''(x^*)(x - x^*)^2,$$
which is not $f(x^∗ + h) = f(x^∗) + hf'(x^∗) + O(h^2)$.
Or if we set $x = x^* + h$, we get
$$f(x^* + h) \approx f(a) + f'(a)(x^* + h - a) + \dfrac{1}{2} f''(a)(x^* + h - a)^2,$$
which is not $f(x^∗ + h) = f(x^∗) + hf'(x^∗) + O(h^2)$.
Or if we set $x = x^* + h$ and take the Taylor expansion about $h$ (rather than $x^*$ as was stated), we get
$$f(x^* + h) \approx f(h) + f'(h)(x^*) + \dfrac{1}{2} f''(h)(x^*)^2,$$
which is not $f(x^∗ + h) = f(x^∗) + hf'(x^∗) + O(h^2)$.
So I'm confused as to how the authors' result makes sense? I would greatly appreciate it if someone would please take the time to clarify this.
| Have you tried
$$x=x^*+h, a=x^*$$?
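Carrying the hint out: with $x = x^* + h$ and $a = x^*$, the Appendix C formula becomes $f(x^*+h) \approx f(x^*) + f'(x^*)h + \frac12 f''(x^*)h^2$, which is the book's formula once the quadratic term is absorbed into $O(h^2)$. A finite-difference check on a sample function (chosen here purely for illustration) shows the remainder shrinking like $h^2$:

```python
import math

f = math.sin      # sample smooth function, for illustration only
fp = math.cos     # its derivative
x_star = 0.3

def remainder(h):
    # | f(x*+h) - f(x*) - h f'(x*) |, which should be O(h^2)
    return abs(f(x_star + h) - f(x_star) - h * fp(x_star))

ratio = remainder(0.1) / remainder(0.05)  # roughly 4 for an O(h^2) remainder
```

Halving $h$ cuts the remainder by about a factor of four, as the $O(h^2)$ notation predicts.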
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all integers $m,\ n$ such that $m^2+4n$ and $n^2+4m$ are both squares.
Find all integers $m,\ n$ such that both $m^2+4n$ and $n^2+4m$ are perfect squares.
I cannot solve this, except the cases when $m=n$.
Hint: There is a useful technique for dealing with some problems involving squares. Assume $m \ge n$ and bound $m^2+4n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Example where $n^{-1/2}S_n\Rightarrow N(0,1)$ but its variance does not converge to 1 Let $\{X_n\}$ be a sequence of independent random variables such that $X_n$ takes the values $\pm n$ each with probability $1/2n^2$ and $\pm 1$ each with probability $1/2(1-1/n^2).$ Define $S_n=X_1+\cdots+X_n$ for $n\geq 1.$ Show that $S_n/\sqrt{n}$ converges in distribution to $N(0,1).$
This example is interesting because although $n^{-1/2} S_n \Rightarrow N(0,1),$ it holds that $\text{Var}[n^{-1/2} S_n]\to 2$ as $n\to \infty.$
My approach: Write $X_n = Z_n + (n-1)Z_n Y_n$ where $Z_n$ are iid, taking values $\pm 1$ with prob. $1/2$ and $Y_n\sim\text{Ber}(1/2n^2),$ such that $Y_n$'s are independent themselves and also independent of the $Z_n$'s. Then, $$\frac{S_n}{\sqrt{n}} = \frac{1}{\sqrt{n}} \sum_{k=1}^n Z_k + \frac{1}{\sqrt{n}} \sum_{k=1}^n (k-1)Z_kY_k.$$ Now the first part in the RHS converges weakly to $N(0,1).$ Hence it suffices to show that the other part converges to $0$ in probability (or in distribution). How can I show this?
| $\sum P(Y_n=1)=\sum \frac 1 {2n^{2}} <\infty$. By the Borel–Cantelli lemma it follows that, with probability $1$, $Y_n=0$ for all large $n$. Hence the sum $\sum_{k=1}^n (k-1)Z_kY_k$ is eventually constant almost surely, and dividing by $\sqrt{n}$ sends the second term to $0$ almost surely.
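The variance claim in the question can be confirmed by direct computation: $\operatorname{Var}[X_k] = k^2\cdot\frac1{k^2} + 1\cdot\bigl(1-\frac1{k^2}\bigr) = 2 - \frac1{k^2}$, so $\operatorname{Var}[n^{-1/2}S_n] = \frac1n\sum_{k=1}^n \bigl(2-\frac1{k^2}\bigr) \to 2$. A short numeric check:

```python
def var_Xk(k):
    # X_k is +-k with total probability 1/k^2 and +-1 with total probability 1 - 1/k^2;
    # E[X_k] = 0, so Var[X_k] = E[X_k^2]
    return k**2 * (1 / k**2) + 1 * (1 - 1 / k**2)

def var_scaled_Sn(n):
    # Var[S_n / sqrt(n)] for independent X_k
    return sum(var_Xk(k) for k in range(1, n + 1)) / n

v = var_scaled_Sn(100000)
```

The value approaches $2$, even though the limiting distribution is $N(0,1)$.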
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\tanh^2(x) \leq x^2$ I'm looking for an alternative proof to prove that $\tanh^2(x)\leq x^2$. My current proof is by observing that $\tanh'(x)=1-\tanh^2(x)\leq 1$, hence integrating for positive $x$ gives that $\tanh(x)\leq x$ hence $\tanh^2(x)\leq x^2$, the proof follows by parity of both $\tanh^2$ and $x^2$.
Ideally I wouldn't want the use of derivatives or parity.
| Suppose that $x\geqslant 0$. Then$$\sinh x=x+\frac1{3!}x^3+\frac1{5!}x^5+\cdots+\frac1{(2n-1)!}x^{2n-1}+\cdots$$and$$x\cosh x=x+\frac1{2!}x^3+\frac1{4!}x^5+\cdots+\frac1{(2n-2)!}x^{2n-1}+\cdots.$$Therefore, comparing the two series term by term (note $(2n-2)! \leqslant (2n-1)!$) and using $x\geqslant0$, $\sinh x\leqslant x\cosh x$. In other words, $\tanh x\leqslant x$. And, since $\tanh$ is an odd function, $\tanh(-x)=-\tanh(x)\geqslant-x$. So, $(\forall x\in\mathbb R):\bigl\lvert\tanh(x)\bigr\rvert\leqslant\lvert x\rvert$.
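A numeric spot check of both ingredients: the coefficient comparison $\frac{1}{(2n-1)!} \le \frac{1}{(2n-2)!}$ behind $\sinh x\le x\cosh x$, and the resulting bound $\tanh^2 x \le x^2$ on a sample grid.

```python
import math

# each Taylor coefficient of sinh is at most the matching coefficient of x*cosh
coeff_ok = all(
    1 / math.factorial(2 * n - 1) <= 1 / math.factorial(2 * n - 2)
    for n in range(1, 20)
)

# the bound tanh(x)^2 <= x^2 on a sample grid covering both signs
grid = [i / 10 for i in range(-50, 51)]
bound_ok = all(math.tanh(x) ** 2 <= x * x + 1e-15 for x in grid)
```

Both checks pass, consistent with the series argument.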
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that $\sqrt{\langle T(x), x \rangle}$ is a norm when $T$ is positive definite
I'm having some trouble proving the second property of a norm, i.e.
$$\| x+y\| \leqslant \| x\| + \|y\|.$$
Let $T: R^n \to R^n$ be a linear operator and $\langle, \rangle: R^n\times R^n \to R$ be defined by
$$\langle x,y\rangle=\sum_{i=1}^n x_iy_i \quad \forall \space x, y \in R^n.$$
Assume that $\langle T(x), x\rangle >0 \ \forall x \neq 0$ and
$\langle T(x), y \rangle=\langle T (y), x \rangle \ \forall x, y \in R^n$.
Prove that $f : R^n \to R $, defined by $f(x)= \sqrt{ \langle T (x), x\rangle} \quad \forall x \in R^n$ is a norm.
I already know that a solution to my problem is given by an auxiliary function $$g(t)= \langle T (x + ty), x + ty\rangle$$ through which I apply the Cauchy–Schwarz inequality, but I proceed in another way, i.e.
$$
\begin{split}
f(x + y)^2
&= \langle T (x+y),x + y\rangle \\
&= \langle T (x), x + y\rangle + \langle T (y), x + y\rangle \\
&= \langle T (x), x\rangle + \langle T (y), y\rangle
+ 2\langle T (x), y\rangle \\ ( f(x)+f(y) )^2
&= \langle T (x), x\rangle + \langle T (y), y\rangle
+ 2\langle T (x), x\rangle \langle T (y), y\rangle
\end{split}
$$
Thus, it remains to check if
$$
\begin{split}
\langle T (x), y\rangle
& \leqslant \langle T (x), x\rangle \langle T (y), y\rangle \\
&= \sum_{i=1}^n T(x_i)x_i \sum_{i=1}^n T(y_i)y_i \\
&= \sum_{i=1}^n T(x_i)y_i \sum_{i=1}^n T(y_i)x_i \\
&= 2\langle T (x), y\rangle.
\end{split}
$$
It follows that the inequality is proved. Is it correct?
| I don't buy this:
$$ \sum_{i=1}^n T(x_i)x_i \sum_{i=1}^n T(y_i)y_i = \sum_{i=1}^n T(x_i)y_i \sum_{i=1}^n T(y_i)x_i $$
For example
$$ 65 =(1 \cdot 1 + 2 \cdot 2)(2 \cdot 2 + 3 \cdot 3) \ne (1 \cdot 2 + 2 \cdot 3)(2 \cdot 1 + 3 \cdot 2) = 64.$$
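The two products in this counterexample (taking $T$ to be the identity, as the displayed sums effectively do) can be checked directly:

```python
x = (1, 2)
y = (2, 3)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

left = dot(x, x) * dot(y, y)    # (1*1 + 2*2)(2*2 + 3*3)
right = dot(x, y) * dot(y, x)   # (1*2 + 2*3)(2*1 + 3*2)
```

The left-hand product is 65 and the right-hand one is 64, so the claimed identity fails.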
Instead, you have to prove Cauchy-Schwarz for the inner product $(x,y) \overset{\rm def} = \langle Tx, y \rangle$. Or, if you're allowed to, you can say that $(x,y)$ is an inner product and therefore Cauchy-Schwarz holds for it.
There are proofs of C-S that work for any inner product https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality#Proofs, so if you've seen one of those, it applies to $(x,y)$ and you don't need to prove it again.
Once you have C-S, the proof of the Triangle Inequality is a simple application:
\begin{align}
|(x,y)| &\le \|x\| \|y\| \\
\text{i.e. } |\langle Tx, y\rangle| &\le \sqrt{\langle Tx, x \rangle} \cdot \sqrt{ \langle Ty, y \rangle}
\end{align}
Actually you made a slight error when you calculated $(f(x) + f(y))^2$, you should have this term: $$2f(x)f(y) = 2\sqrt{\langle Tx, x \rangle} \cdot \sqrt{\langle Ty, y \rangle}$$
instead of $2f(x)^2f(y)^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3481996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Rhombus and circles angle problem Let $ABCD$ be a rhombus.
The circle $(C_1)$ of center $B$ passing through $C$ and the circle $(C_2)$ of center $C$ passing through $B$.
$E$ is one of the two points of $(C_1) \cap (C_2)$.
The line $(ED)$ meets $(C_1)$ again in $F$.
It is asked to find the measure of angle $\angle AFB$.
I tried a lot of angle chasing but in vain.
I even took a square instead just to find out a means to a solution but failed to get the value.
From GeoGebra, the measure would be $60^{\circ}$.
| It should be clear that $\triangle BEC$ is equilateral. Denote $\angle BFD=\alpha$ and $\angle AFD=\beta$. We need to find $\alpha+\beta$. You can show that $\angle ADF=60^{\circ}-\alpha$ through some angle chasing. Sine law for $\triangle ADF$ gives you
$$\frac{AD}{AF}=\frac{\sin\beta}{\sin(60^{\circ}-\alpha)} $$
and the sine law for $\triangle ABF$ gives you
$$\frac{AB}{AF}=\frac{\sin(\alpha+\beta)}{\sin(2(\alpha+\beta))}=\frac{1}{2\cos(\alpha+\beta)} $$
Since $AB=AD$,
$$\frac{\sin\beta}{\sin(60^{\circ}-\alpha)}=\frac{1}{2\cos(\alpha+\beta)} $$
Can you prove that the last equality implies $\beta=30^{\circ}$ or $\alpha+\beta=60^{\circ}$ and that $\beta=30^{\circ}$ is a contradiction?
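One can at least confirm numerically that $\alpha+\beta=60^{\circ}$ satisfies the final equation identically: substituting $\beta = 60^{\circ}-\alpha$ makes the two sides agree for every $\alpha$, which is consistent with $\angle AFB = 60^{\circ}$.

```python
import math

def lhs(alpha, beta):
    # sin(beta) / sin(60 deg - alpha)
    return math.sin(beta) / math.sin(math.radians(60) - alpha)

def rhs(alpha, beta):
    # 1 / (2 cos(alpha + beta))
    return 1 / (2 * math.cos(alpha + beta))

max_gap = 0.0
for deg in range(1, 59):
    a = math.radians(deg)
    b = math.radians(60) - a          # impose alpha + beta = 60 degrees
    max_gap = max(max_gap, abs(lhs(a, b) - rhs(a, b)))
```

The gap is zero to machine precision for every tested $\alpha$.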
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Finding extrema of non-linear second-order ODE I'm dealing with a non-linear ODE of the form
$$a\frac{d^{2}y}{dx^{2}}+bA(x)\frac{dy}{dx}+cy^{3}+y=[A(x)]^{2}$$
where $a$, $b$, $c$ are positive constants and the function $A(x)$ is real-valued and continuously differentiable (except, maybe, at $x=0$). The only fixed boundary condition is that $y\rightarrow 0$ as $x\rightarrow \infty$. At $x=0$, $y$ might be required to equal a given real number or it might be allowed to diverge; both cases are of interest.
Unsurprisingly, I can't solve the equation. I only want to study its qualitative behaviour so
(a) Is there a way to find if the solutions of the equations have extrema in the interval $[0,\infty )$?
If "yes", then
(b) Is there a way to approximately know where the extrema are located?
| Numerically computing the solution to the differential equation using most approaches would rewrite the problem as
$$\begin{cases}z'=\frac1a\left(A^2-bAz-cy^3-y\right)\\y'=z\end{cases}$$
In which case one would be computing $y$ and $y'$ simultaneously, and the extrema can be numerically deduced based on $y'$, which you can search for being zero with some tolerance, or very large in magnitude. Determining if it actually results in a relative extrema can't be done in general of course, but you can gain some insight on it based on whether or not $z'=y''$ is positive or negative near the point of interest, or simply by looking at how $y'$ behaves around the points of interest.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding an elliptic curve with Frobenius trace zero The following theorem by Waterhouse lists all values of the Frobenius trace such that there is a corresponding elliptic curve over $\mathbb{F}_q$, $q = p^n$, $p$ prime.
The thing is, I couldn't find a single curve with $n$ even, $t = 0$ and $p \not\equiv 1 \pmod 4$ at the same time. I iterated over thousands of curves with low $q$ checking for those conditions and not one of them appeared, while I could find examples of every other condition this way. Condition (iii3) must be exceedingly rare.
Can you give me an example of such curve?
EDIT: N(t) is the number of curves with Frobenius trace $t$. $N(t) \ne 0 \Leftrightarrow$ there is at least one curve.
Using that $Tr(\phi_q)=0\implies \Bbb{Z}[\phi_q] \cong \Bbb{Z}[i p^{n/2}] \subset \Bbb{Z}[i]\implies$ the curve is probably a reduction of $y^2=x^3+x$; also, the dual endomorphism of $\phi_q$ is $-\phi_q$, so the curve is supersingular.
From this I obtained the Magma code
K<a>:= GF(7^2); E:=EllipticCurve([K|1, 0]); // y^2=x^3+x
T :=Twists(E); // the twists of E, ie. the curves isomorphic over F_{q^r} but not over F_q
P<t> := PolynomialRing(K);
C := T[2];
C;
"trace of the Frobenius ";
#K +1 - #C;
"minimal polynomial of a";
(t-a)*(t-a^7);
Result
Elliptic Curve defined by y^2 = x^3 + a*x over GF(7^2)
trace of the Frobenius
0
minimal polynomial of a
t^2 + 6*t + 3
The obtained curve is not defined over $\Bbb{F}_p$ and is not a quadratic twist of $y^2=x^3+x$ ie. not isomorphic to $dy^2=x^3+x$ that's why it was hard to find by hand.
Any $E$ satisfying your requirement can't be defined over $\Bbb{F}_p$: if it were, there would be the Frobenius $\varphi(x,y)=(x^p,y^p)\in End(E)$. The main theorem is that there is a dual endomorphism such that $\varphi^*\varphi = p\in End(E)$ and $t=\varphi+\varphi^* \in \Bbb{Z}\subset End(E)$; the minimal polynomial of $\varphi$ is $(X-\varphi)(X-\varphi^*)=X^2-t X+p$ (meaning that $\varphi^2-t\varphi+p=0\in End(E)$), and thus the minimal polynomial of $\varphi^{n}$ is a quadratic polynomial too, $X^2-t_{n} X+p^{n}\in \Bbb{Z}[x]$. Your assumption is that for some $n$, $t_{2n}=0$; it means $\varphi^{2n}$ is a root of $X^2+p^{2n}$, so we can identify it with $\pm ip^n$. There is no quadratic integer whose square is $\pm i p^n$, contradicting that $\varphi^n$ is the root of a quadratic polynomial in $\Bbb{Z}[x]$. Thus $E$ cannot be defined over $\Bbb{F}_p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Why does $(I-A)$ have an inverse when $\|A \| < 1$ I've seen a proof using the convergence of a matrix series, but it is too long; could you please tell me if there is a shorter proof. Thank you!
| We need the norm to be submultiplicative, i.e. $\|XY\|\le\|X\|\|Y\|$ for every pair of matrices $X$ and $Y$, otherwise the statement isn't true. For instance, let $\epsilon>0$ and define a matrix norm $\|X\|=\epsilon\sum_{i,j}|x_{ij}|$ for real matrices. Then $\|I\|<1$ when $\epsilon$ is sufficiently small, but $I-I=0$ is not invertible.
Suppose $\|A\|<1$ for some submultiplicative matrix norm $\|\cdot\|$. If $I-A$ is singular, then $(I-A)x=0$ for some nonzero vector $x$. Therefore $\|xx^T\|=\|Axx^T\|\le\|A\|\|xx^T\|<\|xx^T\|$, which is a contradiction.
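The longer proof the question alludes to uses the Neumann series $\sum_{k\ge 0} A^k$, which converges to $(I-A)^{-1}$ when $\|A\|<1$ for a submultiplicative norm. A dependency-free 2×2 illustration with a hand-picked small matrix:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[0.2, 0.1], [0.05, 0.3]]           # entries chosen so that ||A|| < 1
I = [[1.0, 0.0], [0.0, 1.0]]

# partial sum of the Neumann series I + A + A^2 + ...
S = [[0.0, 0.0], [0.0, 0.0]]
P = I
for _ in range(200):
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    P = mat_mul(P, A)

# direct inverse of I - A via the 2x2 cofactor formula
M = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
inv = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

max_gap = max(abs(S[i][j] - inv[i][j]) for i in range(2) for j in range(2))
```

The partial sums agree with the directly computed inverse to machine precision.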
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Does this differential equation have an elementary solution? $$\frac{dx}{dy} =x-y^2$$
My professor told the class that this differential equation has no solutions. My question is: how come? Can't you just separate variables as shown here?
$$y^2dy =xdx$$
$$y= \sqrt[3]{ \frac{x^2}{6} } $$
| $$\frac{dx}{dy} =x-y^2$$
The solution of this ODE is :
$$x(y)=ce^y+y^2+2y+2$$
Probably your professor didn't say "this differential equation has no solution"; rather, he said something that you did not fully understand.
Possibly he was talking of the function $y(x)$ which is the inverse function of the above known function $x(y)$.
In fact the function $y(x)$ exists but cannot be written in terms of a finite number of elementary functions. So one cannot write it explicitly even though $x(y)$ is already found explicitly.
Another possibility might be a typo in the ODE. Could the ODE be:
$$\frac{dy}{dx} =x-y^2$$
This is an ODE of Riccati kind which is solvable in terms of Airy special functions. In this case the professor would say that the solutions exist but cannot be expressed with a finite number of the usual elementary functions.
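The closed form $x(y)=ce^{y}+y^{2}+2y+2$ can be spot-checked against the original ODE with a central finite difference (here with the arbitrary choice $c=1$):

```python
import math

c = 1.0                                  # arbitrary constant of integration

def x_of_y(y):
    return c * math.exp(y) + y ** 2 + 2 * y + 2

h = 1e-6
max_residual = 0.0
for y in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    dxdy = (x_of_y(y + h) - x_of_y(y - h)) / (2 * h)   # numerical dx/dy
    # the ODE requires dx/dy = x - y^2
    max_residual = max(max_residual, abs(dxdy - (x_of_y(y) - y ** 2)))
```

The residual is at the level of finite-difference error, confirming the formula.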
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Bilinear transform and higher order differential equation The bilinear transform as I understand corresponds to the trapezoid rule. However, I have not been able to find whether the correspondence holds for higher order ODEs, or what kind of estimate the bilinear transform corresponds to in that case. As an example I use the Butterworth filter. The ODE of the filter is given by
$$y(t)=-\frac{r}{\omega^2}y'(t)-\frac{1}{\omega^2}y''(t)$$
with initial condition $y(0)=0,y'(0)=\omega^2$.
First calculate the coefficients by using the bilinear transform. Butterworth filter has the following transfer function:
$$H(s)=\frac{\omega^2}{s^2+s\cdot r+\omega^2}$$
The bilinear transform is given by $s\rightarrow \frac{2}{T}\frac{1-z^{-1}}{1+z^{-1}}$. Then
$$H(z)=\frac{\omega^2}{(\frac{2}{T}\frac{1-z^{-1}}{1+z^{-1}})^2+\frac{2}{T}\frac{1-z^{-1}}{1+z^{-1}} r+\omega^2}=
\frac{\omega^2(1+z^{-1})^2}{(\frac{2}{T}(1-z^{-1}))^2+\frac{2}{T}(1-z^{-1})(1+z^{-1}) r+\omega^2(1+z^{-1})^2}
=\frac{\omega^2(1+2z^{-1}+z^{-2})}{\frac{4}{T^2}(1-2z^{-1}+z^{-2})+\frac{2}{T}(1-z^{-2}) r+\omega^2(1+2z^{-1}+z^{-2})}
=\frac{\omega^2(1+2z^{-1}+z^{-2})}{(\frac{4}{T^2}+\frac{2}{T}r+\omega^2)+2(-\frac{4}{T^2}+\omega^2)z^{-1}+(\frac{4}{T^2}-\frac{2}{T}r+\omega^2)z^{-2}}
=\frac{\omega^2(1+2z^{-1}+z^{-2})/D}{1+2(-\frac{4}{T^2}+\omega^2)z^{-1}/D+(\frac{4}{T^2}-\frac{2}{T}r+\omega^2)z^{-2}/D}$$
So the coefficients are:
$$a_0=1,a_1=2(-\frac{4}{T^2}+\omega^2)/D,a_2=(\frac{4}{T^2}-\frac{2}{T}r+\omega^2)/D$$
$$b_0=\omega^2/D,b_1=2b_0,b_2=b_0$$
$$D=(\frac{4}{T^2}+\frac{2}{T}r+\omega^2)$$
Ok, now do the same using the trapezoid method. We have the differential equation as before and the estimates (the stepsize is $h=T$):
$$y(t+h)=y(t)+\frac{h}{2}(y'(t+h)+y'(t))$$
$$y'(t+h)=y'(t)+\frac{h}{2}(y''(t+h)+y''(t))$$
So that
$$y(t+h)=y(t)+\frac{h}{2}(2y'(t)+\frac{h}{2}(y''(t+h)+y''(t)))$$
$$=y(t)+\frac{h}{2}(2y'(t)+\frac{h}{2}(-\omega^2y(t+h)-ry'(t+h)-\omega^2y(t)-ry'(t)))$$
$$=y(t)+\frac{h}{2}(2y'(t)+\frac{h}{2}(-\omega^2(y(t+h)+y(t))-r(y'(t+h)+y'(t)))$$
$$=y(t)+hy'(t)-\frac{h^2}{4}\omega^2(y(t+h)+y(t))-\frac{h}{2}r(y(t+h)-y(t))$$
And
$$y(t+h)(1+\frac{h^2}{4}\omega^2+\frac{h}{2}r)=y(t)(1-\frac{h^2}{4}\omega^2+\frac{h}{2}r)+hy'(t)$$
$$\Leftrightarrow y(t+h)(\frac{4}{h^2}+\frac{2}{h}r+\omega^2)=y(t)(\frac{4}{h^2}+\frac{2}{h}r-\omega^2)+\frac{4}{h}y'(t)$$
But the above is not the same difference equation as the one given by the bilinear transform. For one thing, $y(0)=0$ was an initial condition, but the bilinear transform gives $y(0)=\omega^2/D$ (unless there was a mistake somewhere). So what happens with higher order ODEs?
| Take one step more, either using $y(x+2h)$ or here for symmetry $y(x-h)$, and consider that equality is only up to terms of size $O(h^3)$,
\begin{align}
&y(x+h)-2y(x)+y(x-h)
\\
&=\frac{h}2(y'(x+h)-y'(x-h))=\frac{h^2}4(y''(x+h)+2y''(x)+y''(x-h))
\\
&=-\frac{h^2}4[ry'(x+h)+ω^2y(x+h)+2ry'(x)+2ω^2y(x)+ry'(x-h)+ω^2y(x-h)]
\\
&=-\frac{ω^2h^2}4[y(x+h)+2y(x)+y(x-h)]-\frac{rh}2[y(x+h)-y(x-h)]
\end{align}
using $y(x+h)-y(x-h)=[y(x+h)-y(x)]+[y(x)-y(x-h)]=\frac{h}2[y'(x+h)+2y'(x)+y'(x-h)]$ in the last step. Collecting coefficients gets us
$$
\left[1+\frac{rh}2+\frac{ω^2h^2}4\right]y(x+h)-2\left[1-\frac{ω^2h^2}4\right]y(x)+\left[1-\frac{rh}2+\frac{ω^2h^2}4\right]y(x-h)=0
$$
which has exactly the same coefficient structure as your first approach.
Let's back-test what the actual order of this method is by inserting Taylor expansions for $y(x\pm h)$ and then reducing by the ODE and its derivatives.
\begin{align}
-2&\left[1-\frac{ω^2h^2}4\right]y(x)+\left[1+\frac{ω^2h^2}4\right](y(x+h)+y(x-h))+\frac{rh}2(y(x+h)-y(x-h))
\\
&=2\left[-1+\frac{ω^2h^2}4\right]y(x)
+2\left[1+\frac{ω^2h^2}4\right]\left(y(x)+\frac{h^2}2y''(x)+\frac{h^4}{24}y^{(4)}(x)+O(h^6)\right)
+rh\left(hy'(x)+\frac{h^3}6y'''(x)+\frac{h^5}{120}y^{(5)}(x)+O(h^7)\right)
\\
&=h^2\left(ω^2y(x)+ry'(x)+y''(x)\right)+\frac{h^4}{12}\left(3ω^2y''(x)+2ry'''(x)+y^{(4)}(x)\right)+O(h^6)
\\
&=\frac{h^4}{12}\left(2ω^2y''(x)+ry'''(x)\right)+O(h^6)
\end{align}
Thus as the truncation error is $O(h^4)$, the method is $O(h^2)$ if the first step is $O(h^3)$, that is, $y(h)=y_1+O(h^3)$, $y_1=y(0)+hy'(0)-\frac{h^2}2(ω^2y(0)+ry'(0))$.
The Lagrange and $z$ transformation methods assume that everything (not explicitly mentioned otherwise) is zero for negative times. Thus at time $0$ the right side of the recursion has to contain the defects of applying the difference operator to the sequence $...,0,y_0,y_1,y_2$, $b_k=a_0y_{k+1}+a_1y_k+a_2y_{k-1}$. This gives $b_{-2}=0$, $b_{-1}=a_0y_0$, $b_0=a_0y_1+a_1y_0$, $b_1=a_0y_2+a_1y_1+a_2y_0=0$, everything else is zero. This does not give the same structure as the numerator of $H(z)$.
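The $O(h^2)$ claim can be tested numerically by running the symmetric two-step recursion against a known solution. With the illustrative choice $\omega=2$, $r=1$, the ODE $y''+ry'+\omega^2 y=0$, $y(0)=0$, $y'(0)=\omega^2$ has exact solution $y(t)=\frac{\omega^2}{\mu}e^{-rt/2}\sin(\mu t)$ with $\mu=\sqrt{\omega^2-r^2/4}$; halving $h$ should cut the error by about a factor of four.

```python
import math

w, r = 2.0, 1.0                          # illustrative parameter choices
mu = math.sqrt(w * w - r * r / 4)        # damped frequency

def exact(t):
    # solution of y'' + r y' + w^2 y = 0 with y(0)=0, y'(0)=w^2
    return (w * w / mu) * math.exp(-r * t / 2) * math.sin(mu * t)

def max_error(h, t_end=1.0):
    n = round(t_end / h)
    ap = 1 + r * h / 2 + (w * h) ** 2 / 4     # coefficient of y_{k+1}
    am = 1 - r * h / 2 + (w * h) ** 2 / 4     # coefficient of y_{k-1}
    mid = 2 * (1 - (w * h) ** 2 / 4)          # coefficient of y_k
    y_prev = 0.0                                      # y_0 = y(0)
    y_cur = h * w * w - (h * h / 2) * r * w * w       # O(h^3)-accurate first step
    err = abs(y_cur - exact(h))
    for k in range(1, n):
        y_prev, y_cur = y_cur, (mid * y_cur - am * y_prev) / ap
        err = max(err, abs(y_cur - exact((k + 1) * h)))
    return err

ratio = max_error(0.01) / max_error(0.005)    # roughly 4 for an O(h^2) scheme
```

The observed error ratio near 4 is consistent with the truncation analysis above.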
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Uniform Continuity of a function with the simplest way I'm trying to show that
$f(x) =
\begin{cases}
x \sin\left(\frac1x\right),\quad\text{if $x \in (0,1]$ }\\[2ex]
0, \quad \quad \quad \quad \ \text{if $x=0$}
\end{cases}$
is uniformly continuous on $[0,1]$
Let $\epsilon \gt 0$ and let $x, y \in (0,1)$.
Then
$$
\left|x\sin\frac{1}{x} - y \sin\frac{1}{y} \right|=\left| x\sin\frac{1}{x} - y\sin\frac{1}{x} + y\sin\frac{1}{x} - y \sin\frac{1}{y} \right|
= \left| (x-y)\sin\frac{1}{x} + y \left(\sin\frac{1}{x} - \sin\frac1y\right) \right| $$
and by triangle inequality
\begin{equation}
\label{eq: star}
\left | x\sin\frac{1}{x} - y \sin\frac{1}{y} \right |
\leq |x - y| + y \left | \sin\frac{1}{x} - \sin\frac{1}{y} \right |
\end{equation}
$$\left | \sin\frac{1}{x} -\sin\frac{1}{y} \right|= \left | 2 \cos \left ( \frac{1}{2} \left ( \frac{1}{x} + \frac{1}{y} \right ) \right ) \sin \left ( \frac{1}{2} \left ( \frac{1}{x} - \frac{1}{y} \right ) \right ) \right |$$
I could not continue from there. Is there any basic way to do this? I've seen other answers to this question but I'm looking for a really simple one for a formal proof.
Thanks for any help.
| Here is an ad-hoc proof. Let $\epsilon>0$ be fixed. We can and do adjust it to $1$ if it is bigger.
The function $|f'|$ is bounded and continuous on $[\epsilon/3, \; 1]$, let $M> 3$ be an upper bound.
We set $\delta = \epsilon/M<\epsilon/3$.
Let $x,y$ be two points in $[0,1]$ at distance $<\delta$, and we can and do assume $0\le x\le y\le x+\delta$. Two cases:
*
*If $x$ is in the interval $[0,\epsilon/3]$ then we have:
$$|f(x)-f(y)|\le |f(x)|+|f(y)|\le x+y\le 2x+(y-x)<
\frac 23\epsilon +\delta<\epsilon\ .
$$
*
*If $x>\epsilon/3$ then both $x,y$ live in the interval $[\epsilon/3,\; 1]$ where we can find an intermediate point $\xi$ with
$$|f(x)-f(y)|=|f'(\xi)|\;|x-y|<M|x-y|<M\delta<\epsilon\ .
$$
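A brute-force numerical probe of this argument (my sketch, using $\epsilon=0.5$ and a grid estimate for the bound $M$):

```python
import math, random

def f(x):
    return x*math.sin(1/x) if x > 0 else 0.0

eps = 0.5
# grid stand-in for an upper bound M of |f'| on [eps/3, 1]; f'(x) = sin(1/x) - cos(1/x)/x
grid = [eps/3 + k*(1 - eps/3)/20000 for k in range(20001)]
M = max(3.0, max(abs(math.sin(1/x) - math.cos(1/x)/x) for x in grid))
delta = eps/M

random.seed(0)
ok = all(abs(f(x) - f(min(1.0, x + u))) < eps
         for _ in range(50000)
         for x in [random.uniform(0, 1)]
         for u in [random.uniform(0, delta)])
print(ok)  # True
```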
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3482815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Compute telescopic sum of binomial coefficients Is there a nice or simple form for a sum of the following form?
$$ 1 + \sum_{i=1}^k \left[\binom{n-1+i}{i} - \binom{n-1+i}{i-1}\right]$$
Motivation: Due to a computation in the formalism of Schubert calculus the above sum with $k = \lceil n/2 \rceil -1$ is equal to the number of lines intersecting $2n-4$ general subspaces $H_j\subseteq \mathbb{P}^n$ of dimension $n-2$.
| Hint
The following method does not use telescopic series but it makes use of elementary binomial theorem along with some geometric progression to arrive at the answer.
$$\begin{aligned}S&=\sum_{i=1}^{k}{n-1+i\choose n-1}-\sum_{i=1}^{k}{n-1+i\choose n}\\&=\left(\text{coeff. of } x^{n-1} \text{ in } \sum_{i=1}^{k}(1+x)^{n-1+i}\right)-\left(\text{coeff. of } x^{n} \text{ in } \sum_{i=1}^{k}(1+x)^{n-1+i}\right)\end{aligned}$$
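Numerically the sum appears to admit the closed form $\binom{n+k}{k}-\binom{n+k}{k-1}$, obtained by applying the hockey-stick identity to each of the two sums (my derivation, so treat it as a conjecture to check):

```python
from math import comb

def direct(n, k):
    return 1 + sum(comb(n-1+i, i) - comb(n-1+i, i-1) for i in range(1, k+1))

def closed(n, k):
    # candidate closed form: two hockey-stick identities, one per sum
    return comb(n+k, k) - comb(n+k, k-1)

assert all(direct(n, k) == closed(n, k) for n in range(2, 12) for k in range(1, 10))
print(direct(5, 3), closed(5, 3))  # 28 28
```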
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Finding roots using recurrence relations Everyone knows the Fibonacci sequence:
$s[ ] = 1, 1, 2, 3, 5, 8, ...$
where
$s_{n+2} = s_n + s_{n+1}$
This represents a single solution to the polynomial, $x^2 = x + 1$.
Recurrence relations can be applied to find roots of other polynomials. For example, from the relation, $x^4 = 3 + 2x + x^2$ , a sequence $s$ can be constructed, where $s_{n+4} = 3\cdot s_n + 2\cdot s_{n+1} + s_{n+2}$. For example, $$s = 1, 1, 1, 6, 6, 11, 21, 41, 61, 116, 206, 361, 621, 1121, 1961, 3446, 6066, 10731, \ldots$$ The ratio of successive terms approaches 1.76137782854929251, which is one of the roots (the largest one) of the given relation.
How can this method be used to find more roots than 1?
| I am not sure regarding its relation to the Newton-Raphson method, but consider the function $f(x)=1+\frac 1 x$, and the recursively defined sequence
$$a_1=1,\qquad a_{n+1}=f(a_n) \quad \forall n\in \mathbb N. $$
Clearly, such sequences (i.e. defined recursively by a continuous function) can converge only to a fixed point of $f$.
That is, if the sequence converges to $L\in \mathbb R$, then $$L=\lim_{n\to\infty} a_{n+1}=\lim_{n\to\infty}f(a_n)=f(\lim_{n\to\infty}a_n)=f(L).$$
An easy induction will show that $a_n=\frac{F_{n+1}}{F_n}$ where $F_n$ is the $n$'th Fibonacci number.
Because $f([1.1,2])\subset [1.1,2]$ and for $x>y\in [1.1,2]$
$$|f(y)-f(x)|=\bigg|\frac{x-y}{xy}\bigg|\leq\frac{1}{1.21}|x-y| $$
we see that $f$ is a contraction on $[1.1,2]$, and therefore every such iterative sequence (since $a_2=2\in[1.1,2]$) is Cauchy and converges to a value $L\in [1.1,2]$. This $L$ must be the unique fixed point of $f|_{[1.1,2]}$.
Obviously, this fixed point is the golden ratio $\phi=\frac{\sqrt5+1}{2}$!
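Returning to the quartic example from the question, the same idea can be sanity-checked numerically (my addition):

```python
s = [1.0, 1.0, 1.0, 6.0]                   # seeds from the question's sequence
for _ in range(200):
    s.append(3*s[-4] + 2*s[-3] + s[-2])    # s_{n+4} = 3 s_n + 2 s_{n+1} + s_{n+2}
r = s[-1] / s[-2]
print(r)
# the limiting ratio satisfies the polynomial x^4 = 3 + 2x + x^2
assert abs(r**4 - (3 + 2*r + r**2)) < 1e-9
```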
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Do Sylow subgroups distinguish representations? Let $G$ be a finite group, $X$ the set of elementary subgroups of $G$, $Y$ the set of Sylow subgroups of $G$.
Proposition 29 in Serre's Linear Representations of Finite Groups implies that the set $X$ distinguish representations, in the sense that
For any finite dimensional complex G-representations $V$ and $W$ whose restrictions to $X_i$ are isomorphic for each element $X_i \in X$, $V$ and $W$ are isomorphic.
*
*I wonder if $Y$ distinguishes representations as well? I have tried some small groups like $A_3, A_4, S_4..$. Note that when $G$ is a compact real Lie group, any maximal torus distinguishes. I'd hope to say Sylow $p$-subgroups (with all possible $p$) should have similar properties, thus asking this question. Thank you.
EDIT: Derek has shown by an explicit example that $Y$ is not enough below! Notice also that the elementary subgroups are necessary for $R(G) \to \oplus_{H\in\text{some-set}} R(H)$ to be one-to-one (Green's theorem). For me, this justifies why the elementary subgroups are important.
*How about this: can $Y$ distinguish all irreducible representations? This is actually what I had in mind.. but I was not careful enough. If we want to disprove this statement, we have to find two nonisomorphic irreducible representations $V$ and $W$ of $G$ such that their restrictions to any Sylow subgroups are isomorphic.
EDIT: A counterexample has been found by Derek: $G=D_{2\times 12}$! The faithful irreps of degree $2$ (there are only two of them) restrict to isomorphic representations of the Sylow-$2$ and Sylow-$3$ subgroups.
| The set $Y$ of Sylow subgroups does not distinguish between complex representations.
Let $G$ be cyclic of order $6$, and consider the two representations $\rho_1$ and $\rho_2$ of $G$ of degree 2 that map a generator $g$ of $G$ to
$$ \left(\begin{array}{cc}\omega&0\\0&-\omega^2\end{array}\right)\ \ \ \ \mathrm{and}\ \ \ \
\left(\begin{array}{cc}-\omega&0\\0&\omega^2\end{array}\right),$$ where $\omega$ is a cube root of $1$.
By computing the actions on $g^2$ and $g^3$, you can check that the restrictions of $\rho_1$ and $\rho_2$ to Sylow $3$- and $2$-subgroups of $G$ have the same character and are therefore isomorphic.
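These restrictions can also be compared numerically through characters (a small check I added, with $\chi_i(g^k)=\operatorname{tr}\rho_i(g^k)$):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)       # primitive cube root of unity

chi1 = lambda k: w**k + (-w**2)**k     # trace of rho_1 at g^k
chi2 = lambda k: (-w)**k + (w**2)**k   # trace of rho_2 at g^k

# restrictions to the Sylow 3-subgroup {e, g^2, g^4} and Sylow 2-subgroup {e, g^3}
for k in (0, 2, 4, 3):
    assert abs(chi1(k) - chi2(k)) < 1e-12
# yet the characters differ at g itself, so rho_1 and rho_2 are not isomorphic
assert abs(chi1(1) - chi2(1)) > 1
print("same on Sylow subgroups, different on G")
```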
For an example with irreducible representations, we can take the two faithful irreducible representations of the dihedral group of order $24$, which are of degree 2.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Exercise 3.4.14 Introduction to Real Analysis by Jiri Lebl
Suppose for $f: [0,1] \to R$, we have $|f(x) - f(y) | \le K |x-y|$, and $f(0)= f(1) = 0$. Prove that $|f(x)| \le K/2$. Further show by example that $K/2$ is the best possible, that is, there exists such a continuous function for which $|f(x)| = K/2$ for some $x \in [0 ,1]$.
I am trying to find $x$ and $y$ that let me conclude $|f(x)| \le K/2$, but I constantly fail. Can you give some help? I would also appreciate a hint for the second part of the question (Further show ~).
| Hint: for any $x \in [0,1]$ either $x$ is closer to $0$ or $x$ is closer to $1$. If $y \in \{0,1\}$ is the closer endpoint then $|x - y| \le 1/2$.
Hint2: $|f(x) - f(0)| = |f(x)| \le K|x|$. To make this an equality, we need $f(x) = \pm Kx$ for any suitable $x$. Don't forget about the condition $f(1) = 0$—if you can do the first hint you may have an idea what I mean by "suitable" $x$.
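If the hints point to the tent function $f(x)=\frac K2-K\,|x-\tfrac12|$ (my guess at the intended extremal example), a grid check confirms it works:

```python
K = 2.0
f = lambda x: K/2 - K*abs(x - 0.5)   # candidate extremal function (my suggestion)

xs = [i/1000 for i in range(1001)]
assert f(0.0) == 0.0 and f(1.0) == 0.0
# K-Lipschitz on the grid, with a tiny float tolerance
assert all(abs(f(x) - f(y)) <= K*abs(x - y) + 1e-12 for x in xs for y in xs)
assert max(f(x) for x in xs) == K/2  # attained at x = 1/2
print("tent function attains K/2 =", K/2)
```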
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How to evaluate $\int_{0}^{\infty}\frac{1}{t}\arctan\left(\frac{t}{1+2t^2}\right)\,\mathrm dt$? I entered this integral into Wolframalpha, and got $$\int_{0}^{\infty}\frac{1}{t}\arctan\left(\frac{t}{1+2t^2}\right)\,\mathrm dt=\frac{1}{2}\pi\log{2}.$$ But it doesn't provide step by step solution for this integral.
This integral is a bonus challenge in my Calculus class, and the professor said that the key is $\arctan$. But I don't know if there is anything special about $$\arctan\left(\frac{t}{1+2t^2}\right),$$ so I tried some common integration methods, and they don't work.
| First notice that:
$$\arctan\left(\frac{x}{1+2x^2}\right)=\arctan\left(\frac{2x-x}{1+2x\cdot x}\right)=\arctan(2x)-\arctan(x)$$
So the integral can be rewritten as:
$$I=\int_0^\infty \frac{\arctan(2x)-\arctan x}{x}dx\overset{IBP}=\int_0^\infty \ln x\left(\frac{1}{1+x^2}-\frac{2}{1+4x^2}\right)dx$$
$$2\int_0^\infty \frac{\ln x}{1+4x^2}dx\overset{2x\to x}=\int_0^\infty \frac{\ln x-\ln 2}{1+x^2}dx$$
$$\Rightarrow I=\int_0^\infty \frac{\ln x -\ln x+\ln 2}{1+x^2}dx=\ln 2\int_0^\infty \frac{dx}{1+x^2}=\frac{\pi}{2}\ln 2$$
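The value can be confirmed numerically; folding the tail $[1,\infty)$ back onto $(0,1]$ via $t\mapsto 1/t$ leaves a smooth integrand for Simpson's rule (my verification, not part of the answer):

```python
import math

def g(t):
    if t == 0.0:
        return 1.5  # sum of the two integrands' limits at 0: 1 + 1/2
    # original integrand on (0,1] plus the tail integrand after t -> 1/t
    return math.atan(t/(1 + 2*t*t))/t + math.atan(t/(t*t + 2))/t

n = 2000                       # composite Simpson's rule on [0, 1]
h = 1.0/n
I = g(0.0) + g(1.0) + sum((4 if i % 2 else 2)*g(i*h) for i in range(1, n))
I *= h/3
print(I, math.pi/2 * math.log(2))
assert abs(I - math.pi/2 * math.log(2)) < 1e-9
```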
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Deducing congruence relations from given congruence relations I am trying problems from Apostol Modular functions and Dirichlet series in number theory and I could not think about this problem from chapter 2 .
Problem is – Given integers $a, b, c, d\;$ with $ad-bc \equiv 1 \pmod n$, prove that there always exists integers $\alpha,\beta, \gamma, \delta$ such that $\alpha \equiv a \pmod n$, $\beta \equiv b \pmod n$, $\gamma \equiv c \pmod n$, $\delta \equiv d \pmod n$ with $\alpha \delta-\beta \gamma = 1$ .
I am unable to see how to prove the existence of $\alpha, \beta, \gamma, \delta$ which are congruent to $a, b, c, d \bmod n$ respectively.
Can someone please give hints.
$(\overbrace{a\!+\!\ell n}^{\textstyle \alpha},\:\!b)=1\,$ for some $\,\ell\in\Bbb Z,\,$ by a standard lemma, since $\,{(a,b,n)=1}.\,$ Let $\,\beta =b.\,$ We solve for $\,\delta,\gamma\in\Bbb Z$.
$\!\!\bmod n\!:\ \color{#0a0}{\alpha\equiv a}\,\Rightarrow \color{#0a0}\alpha d\!-\!b c = \color{#0a0}ad\!-\!bc = 1,\ $ so $\,\alpha d\! -\! b c = \color{#c00}{1\!-\!kn}\,$ for $\,k\in\Bbb Z,\,$ hence
$1\! =\! \underbrace{\alpha(d\!+\!in)\!-\!b(c\!+\!jn)}_{\textstyle \alpha\,\delta \,-\, \beta\,\gamma}\! =\! \color{#c00}{1\!-\!kn}\!+\!(i\alpha\!-\!jb)n\!\iff\! i\alpha\!-\!jb = k.\,$ Such $\,i,j\,$ exist by $(\alpha,b)\!=\!1$
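The construction can be exercised in code (my sketch; it assumes $b\neq 0$ so that the gcd search terminates):

```python
from math import gcd
import random

def ext_gcd(a, b):
    # returns (g, x, y) with x*a + y*b = g, |g| = gcd(|a|, |b|)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b)*y

def lift(a, b, c, d, n):
    # the answer's construction; assumes b != 0, and gcd(a, b, n) = 1 holds
    l = 0
    while gcd(a + l*n, b) != 1:
        l += 1
    alpha = a + l*n
    k = (1 - (alpha*d - b*c)) // n      # exact: alpha*d - b*c ≡ 1 (mod n)
    g, x, y = ext_gcd(alpha, b)
    if g == -1:
        x, y = -x, -y                   # now x*alpha + y*b == 1
    i, j = k*x, -k*y                    # i*alpha - j*b == k
    return alpha, b, c + j*n, d + i*n

random.seed(1)
checked = 0
while checked < 500:
    n = random.randint(2, 50)
    a, b, c = (random.randint(-20, 20) for _ in range(3))
    ds = [d for d in range(n) if (a*d - b*c) % n == 1]
    if b == 0 or not ds:
        continue
    d = ds[0]
    al, be, ga, de = lift(a, b, c, d, n)
    assert al*de - be*ga == 1
    assert (al - a) % n == (be - b) % n == (ga - c) % n == (de - d) % n == 0
    checked += 1
print("verified", checked, "random cases")
```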
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Shortest path between two points around an obstacle? I'm trying to figure out a problem that goes like this:
A particle originally placed at the origin tries to reach the point $(12,16)$ whilst covering the shortest distance possible. But there is a circle of radius $3$, centered at the point $(6,8)$, and the point cannot go through the circle.
My original thought was to travel in a straight line until reaching the circle, and then travel along the circumference until we reach the point on the circumference that is the shortest distance to $(12,16)$. However I feel like this path should be longer than a path along a curve that is tangent to the circle and passes through both the origin and the given point. Now I'm just stuck on how to find this specific curve.
Since the curve must be tangent to the circle at some point I can equate the derivative at some point, but what point exactly?
| Here is one way of seeing the shortest path. Take a rope and pull on either end until it is tight: the rope will show you the shortest path. The rope won't have any angle (sharp corner) on it.
As has been said in the comments, it will follow a tangent to the circle, then it will wrap around the circle until the tangent that goes through $(12,16)$.
To evaluate its length, first note that the center of the circle $(6,8)$ is at the middle of the straight line joining $(0,0)$ and $(12,16)$. It means the length from beginning to the circle is the same as the one from the circle to the end.
Second, we know that a tangent is perpendicular to the radius. We have a right triangle formed by the origin $O(0,0)$, the center of the circle $C(6,8)$ and the point $P$ where the tangent meets the circle. In the triangle $OCP$, we know that the angle at $P$ is a right angle, $PC =3$ is the radius of the circle and $OC=10$. Then
$$OP=\sqrt{10^2-3^2}=\sqrt{91}$$
We now have to find the length of rope that wraps around the circle. The angle along which it follows the circle is
$$\pi-2\,\angle PCO=\pi-2\arccos\left(\frac3{10}\right)$$
And the length is
$$3\pi-6\arccos\left(\frac3{10}\right)$$
Finally, the shortest path is equal to
$$2\sqrt{91}+3\pi-6\arccos\left(\frac3{10}\right)=20.906\dots$$
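In code, the arithmetic above checks out (my addition):

```python
import math

OC = math.hypot(6, 8)                     # |OC| = 10
tangent = math.sqrt(OC**2 - 3**2)         # each tangent segment: sqrt(91)
arc = 3*(math.pi - 2*math.acos(3/OC))     # arc wrapped around the circle
L = 2*tangent + arc
print(round(L, 5))  # 20.90694
```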
I can't add a picture with my phone. I'll add one as soon as I can.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Given $n$ different points in a plane.
Given $n$ different points in a plane, $8$ of which lie on one straight line. The remaining points are in general position, so apart from that line no $3$ points lie on the same straight line. How many different triangles can you create from these $n$ points?
What I think is: a triangle can be formed by selecting any three non-collinear points, and three of the $n$ points can be selected in $^nC_3$ ways. The only collinear (degenerate) triples are those chosen entirely from the $8$ collinear points, and there are $^8C_3$ of them. Since these triples do not form triangles, the total number of triangles formed would be ${^nC_3}- {^8C_3}$.
Is that correct?
| As has been stated in the comments, your solution is correct; $\binom n3-\binom 83$ different non-degenerate triangles can be formed from these points.
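A brute-force check on one concrete configuration (my construction: $8$ points on the $x$-axis plus $4$ parabola points chosen so that no new collinear triple appears, giving $n=12$):

```python
from itertools import combinations
from math import comb

pts = [(i, 0) for i in range(8)]                # 8 collinear points
pts += [(t, t*t + 1) for t in (1, 2, 4, 5)]     # 4 points adding no collinear triple
n = len(pts)                                    # n = 12

def collinear(p, q, r):
    return (q[0]-p[0])*(r[1]-p[1]) == (q[1]-p[1])*(r[0]-p[0])

triangles = sum(1 for t in combinations(pts, 3) if not collinear(*t))
print(triangles, comb(n, 3) - comb(8, 3))  # 164 164
assert triangles == comb(n, 3) - comb(8, 3)
```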
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3483916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Find $xy+yz+zx$, given quadratic form of equations. Given that $(x,y,z) \in \mathbb R^+$ and the following equations:
$$x^2 + y^2 + xy = 1,$$
$$y^2 + z^2 + yz = 2,$$
$$z^2 + x^2 + xz = 3.$$
How to find $xy + yz + zx$? Please help.
| Consider a triangle $ABC$ and a point $O$ inside it such that each side subtends an angle of $2\pi/3$ at $O$.
Let $OA = x$, $OB = y$, $OC = z$.
By the cosine rule (e.g. $AB^2 = x^2+y^2-2xy\cos(2\pi/3) = x^2+y^2+xy$), the three given equations say the sides are $1, \sqrt2, \sqrt3$.
Since $1+2=3$, the triangle is right-angled, so its area is $\frac12\cdot1\cdot\sqrt2 = \frac{\sqrt2}{2}$.
Adding up the areas of the three smaller triangles $OAB$, $OBC$, $OCA$ (each with included angle $2\pi/3$) gives $\frac{\sqrt3}{4}(xy+yz+zx) = \frac{\sqrt2}{2}$, hence
$$xy + yz + zx = \sqrt{\tfrac{8}{3}}.$$
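A numerical cross-check of this value (my sketch): solve the first two equations for $y$ and $z$ in terms of $x$, bisect the third, and evaluate $xy+yz+zx$:

```python
import math

def yz(x):
    y = (-x + math.sqrt(4 - 3*x*x))/2    # positive root of x^2 + y^2 + xy = 1
    z = (-y + math.sqrt(8 - 3*y*y))/2    # positive root of y^2 + z^2 + yz = 2
    return y, z

def F(x):                                # residual of z^2 + x^2 + zx = 3
    y, z = yz(x)
    return z*z + x*x + z*x - 3

lo, hi = 1e-9, 1.0                       # F(lo) < 0 < F(hi), so bisection applies
for _ in range(100):
    mid = (lo + hi)/2
    lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
x = (lo + hi)/2
y, z = yz(x)
print(x*y + y*z + z*x, math.sqrt(8/3))
```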
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can I solve $\int\limits_0^1\frac{\sqrt{x}}{(x+3)\sqrt{x+3}}dx$ without trigonometric substitution? I have the following integral to solve:
$$\int_0^1 \dfrac{\sqrt{x}}{(x+3)\sqrt{x+3}}dx$$
without using trigonometric substitution. My textbook gives me the following hint:
$$t = \sqrt{\dfrac{x}{x+3}}$$
But I don't see how this would help me. If I differentiate that, I get:
$$dt = \dfrac{1}{2} \cdot \sqrt{\dfrac{x+3}{x}} \cdot \dfrac{3}{(x+3)^2} dx$$
$$dt = \dfrac{3}{2(x+3)^2} \cdot \dfrac{1}{t} dx$$
And I'm stuck. If I substitute in the original integral, I'll have terms with both $x$ and $t$. So how can I use the given hint?
| Solve for $x$:
$$t^2 = \frac{x}{x+3} = 1 - \frac{3}{x+3} \implies x = \frac{3}{1-t^2} - 3$$
then we have
$$dx = \frac{6t}{(1-t^2)^2}dt$$
and plugging in to the integral gets us
$$ \int_0^{\frac{1}{2}} \left(\frac{1-t^2}{3}\right)\cdot (t) \cdot \left(\frac{6t}{(1-t^2)^2}\right)dt = 2\int_0^{\frac{1}{2}} \frac{t^2}{1-t^2}dt = \int_0^{\frac{1}{2}} \frac{2}{1-t^2}-2dt$$
$$ = \int_0^{\frac{1}{2}} \frac{1}{1+t}+\frac{1}{1-t}-2dt = \log\left(\frac{1+t}{1-t}\right)-2t\Biggr|_0^{\frac{1}{2}} = \log(3)-1$$
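As a numerical double-check (my addition): substituting $x=u^2$ removes the $\sqrt x$ behaviour at $0$, and Simpson's rule then reproduces $\log(3)-1$:

```python
import math

def g(u):
    # after x = u^2: sqrt(x)/(x+3)^{3/2} dx becomes 2u^2/(u^2+3)^{3/2} du
    return 2*u*u / (u*u + 3)**1.5

n = 1000                      # composite Simpson's rule on [0, 1]
h = 1.0/n
I = g(0.0) + g(1.0) + sum((4 if i % 2 else 2)*g(i*h) for i in range(1, n))
I *= h/3
print(I, math.log(3) - 1)
assert abs(I - (math.log(3) - 1)) < 1e-10
```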
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the proof that if subsets $gH$ have no common elements then $H$ must be a subgroup? In proving Lagrange's Theorem we're usually first showing that cosets have no common elements. I'm looking for the proof that if subsets $gH$ have no common elements, then $H$ must be a subgroup.
| Firstly, the claim is not true without assuming that $H$ contains the identity $e$, because any coset is a counterexample.
Now let me prove:
Let $G$ be a group and $H\subseteq G$ be a subset containing the identity element $e$. Suppose that for any two elements $g, g'\in G$, either $gH = g'H$ or $gH \cap g'H = \emptyset$. Then $H$ is a subgroup of $G$.
Proof: Let $h$ be any element of $H$. We have $e = h^{-1}h = ee \in h^{-1}H \cap eH$, hence $h^{-1}H = eH = H$. In particular, the element $h^{-1}e = h^{-1}$ is in $H$.
On the other hand, we have $h = he = eh \in hH \cap eH$, hence $hH = eH = H$. Thus for any $h'\in H$, we have $hh' \in H$.
This shows that $H$ is a subgroup of $G$.
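The equivalence can be verified exhaustively in a small group such as $\mathbb Z_6$ (my check; for a finite subset, closure under the operation already makes it a subgroup):

```python
from itertools import combinations

n = 6   # work in Z_6

def cosets_pairwise_ok(H):
    # every pair of translates g + H is either equal or disjoint
    cs = [frozenset((g + h) % n for h in H) for g in range(n)]
    return all(a == b or not (a & b) for a in cs for b in cs)

def is_subgroup(H):
    return all((a + b) % n in H for a in H for b in H)

for r in range(1, n + 1):
    for H in combinations(range(n), r):
        if 0 in H:
            assert cosets_pairwise_ok(set(H)) == is_subgroup(set(H))
print("checked all subsets of Z_6 containing 0")
```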
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $\forall x \in G, \exists k \in \mathbb Z^+ \backepsilon xa=a^kx $, then $\langle a \rangle$ is a normal subgroup In Pinter's A Book of Abstract Algebra, Chapter 14 Exercise E3 asks the reader to prove the following statement:
If $a$ is any element of $G$, $\langle a \rangle$ is a normal subgroup of $G$ iff $a$ has the following property: $\forall x \in G, \exists k \in \mathbb Z^+ \backepsilon xa=a^kx $
For this biconditional, I have no issue demonstrating the implication that:
if $\langle a \rangle$ is a normal subgroup, then $\forall x \in G, \exists k \in \mathbb Z^+ \backepsilon xa=a^kx $.
However, it is the other implication that I am struggling with...specifically:
if $\forall x \in G, \exists k \in \mathbb Z^+ \backepsilon xa=a^kx $, then $\langle a \rangle$ is a normal subgroup.
Here is the issue that I am running into:
Given the antecedent, I can premultiply by $x^{-1}$ to arrive at:
$a=x^{-1}a^kx$ , which is equal to $a = (x^{-1})a^k(x^{-1})^{-1}$
At first glance this seems just fine...but then I realized that due to the existential quantifier, I am pretty sure that I have no way of knowing if all elements of $\langle a \rangle$ will make it into $a^k$. At most, I can say that at least one element of $\langle a \rangle$ has all of its conjugates in $\langle a \rangle$...for example, if $k$ is the same integer for all values of $x$.
Any suggestions?
| Hint: $ xa^nx^{-1}=a^{nk}$ can you prove this?
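A concrete instance of the hint in $S_3$ (my check; here $a$ is a $3$-cycle, so $\langle a\rangle=A_3$ is normal and the property should hold for every $x$):

```python
from itertools import permutations

def comp(p, q):                  # (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

a = (1, 2, 0)                    # the 3-cycle 0 -> 1 -> 2 -> 0
e = (0, 1, 2)
powers = [a, comp(a, a), e]      # a, a^2, a^3 = e
for x in permutations(range(3)):
    # the property: x a = a^k x for some positive k
    assert any(comp(ak, x) == comp(x, a) for ak in powers)
print("x a = a^k x holds for every x in S3")
```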
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Population of P people, where each person knows K others, how many people mutually know each other If you have a population of $P$ people, where each person knows $K$ others within the population, (does not have to be mutual, i.e. if I know you, you don't necessarily know me), and $1 < K < P$, How many people at the least must mutually know each other? Is there a general formula for this minimum in terms of $P$ and $K?$
| Here's an elementary start on investigating this interesting problem
Let $M$ be the minimum number of people in a clique. Then $M$ is non-zero if and only if $K>\frac{P-1}{2}.$
If $M=0$ then there is no pair such that each knows the other. The number of 'knowings', $KP$, is therefore no greater than $\binom{P}{2}$ and so $K\le \frac{P-1}{2}.$
Conversely, suppose $K\le \frac{P-1}{2}.$ Consider the population arranged in a circle with everyone knowing the next $K$ people in the circle. Then no pair know each other.
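The circular construction can be checked by brute force (my sketch):

```python
P = 9
for K in range(2, (P - 1)//2 + 1):
    # person i knows the next K people around the circle
    knows = {(i, (i + d) % P) for i in range(P) for d in range(1, K + 1)}
    mutual = [(i, j) for (i, j) in knows if (j, i) in knows]
    assert mutual == []
print("no mutual pair for any K <=", (P - 1)//2)
```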
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove that the number of self-conjugate partitions of $n$ equals the number of partitions of $n$ into distinct odd parts First, I would love if someone can provide some clarification of this problem. Then possibly help me map out/begin a proof.
So If I were taking the number $6$ and partitioning for example (just to make sure I understand what the question is asking):
The only partition with distinct odd parts would be $6=5+1$. However, for self-conjugate partitions I understand when I flip over the middle diagonal the picture should look exactly the same? That would also only happen once.
How would I go about formulating a proof?
| The Wikipedia article is quite good on a proof.
You can see that $x^{2n+1}=x^n\cdot x\cdot x^n$ to form both 'legs' of a self-conjugate partition in a Ferrers diagram.
Or, if you travel along the main diagonal and read only to the right, we are looking at the number of partitions into distinct parts, $\prod 1+x^k$. We need two of these - $\prod 1+x^{2k}$ - to form the reflection when travelling downwards, and we also need to supply the diagonal - $\prod 1+x^{2k}\cdot x$.
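The equality of the two counts is easy to confirm by brute force (my addition):

```python
def partitions(n, maxp=None):
    # partitions of n with parts at most maxp, in nonincreasing order
    if maxp is None:
        maxp = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxp), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def conjugate(lam):
    # column lengths of the Ferrers diagram
    return tuple(sum(1 for p in lam if p > i) for i in range(lam[0]))

for n in range(1, 21):
    self_conj = sum(1 for lam in partitions(n) if conjugate(lam) == lam)
    distinct_odd = sum(1 for lam in partitions(n)
                       if len(set(lam)) == len(lam) and all(p % 2 for p in lam))
    assert self_conj == distinct_odd
print("equal counts for n = 1..20")
```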
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
rank and eigenvalues Let A be a square matrix of order $14\times 14$. We know that rank(A)=12 and $\lambda=0$ is an eigenvalue with algebraic multiplicity 4. I have to decide which of the following statements is true:
*
*$\text{rank}(A^2)=12$.
*$\text{rank}(A^3)\leq11$.
*There is no matrix satisfying the given conditions.
I tried studying the connection between the Ker and the image of the associate endomorphism, but I don't know how to use the information about the eigenvalue.
Could someone help me?
| Since $\dim \ker A = 2$ and $0$ has algebraic multiplicity $4$ we see that there are
two Jordan blocks corresponding to the $0$ eigenvalue. The only possible sizes are
$(1,3)$ and $(2,2)$.
In both cases, $A^2$ will drop rank by at least one, so 1. cannot hold.
Since $\operatorname{rk} A^3 \le \operatorname{rk} A^2 < 12$, we see that 2. is
always true.
Let $A=\begin{bmatrix} 0 & 1 \\
0 & 0 \\
& & 0 & 1\\
& & 0 & 0 \\
& & & & I_{10}
\end{bmatrix}$, a $14\times 14$ matrix of rank $12$ in which $\lambda=0$ has algebraic multiplicity $4$;
hence 3. is false.
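This example can also be checked numerically; note that the identity block must have size $10$ for the matrix to be $14\times14$ with rank $12$ (a sketch using plain Gaussian elimination):

```python
def rank(M):
    M = [row[:] for row in M]
    r, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-9), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]     # normalize the pivot row
        for i in range(rows):
            if i != r:
                M[i] = [u - M[i][c]*w for u, w in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 14
A = [[0.0]*n for _ in range(n)]
A[0][1] = A[2][3] = 1.0         # two nilpotent 2x2 Jordan blocks for eigenvalue 0
for i in range(4, n):
    A[i][i] = 1.0               # identity block of size 10
A2 = matmul(A, A)
print(rank(A), rank(A2), rank(matmul(A2, A)))  # 12 10 10
```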
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Finding $\limsup$ of a Brownian motion function Q) Use the fact that $(1+t)^{-1/2}\exp(B_t^2/2(1+t))$ is a martingale to show that $$\limsup_{t\to \infty} \frac{B_t}{\sqrt{(1+t)\log(1+t)}}\leq 1 \text{ a.s.}$$
I can see that $(1+t)^{-1/2}\exp(B_t^2/2(1+t))$ is a non-negative martingale and hence converges to a finite limit a.s.
$$\begin{align}
x_t &:= \frac{B_t}{\sqrt{(1+t)\log(1+t)}} \tag{1}\\
\implies (1+t)^{-1/2}\exp(B_t^2/2(1+t)) &= (1+t)^{-1/2}\exp(x_t^2\log(1+t)) \tag{2}
\end{align}
$$
Thus if $x_t^2\geq 1/2$ i.o., then RHS of $(2)\geq 1$ i.o. which means LHS of $(2)\geq 1$ i.o. but can't the martingale have its limit as a number/random variable $>1$ so that LHS of $(2)\geq 1$ i.o. is okay? If that is right, may I know how to prove that $\limsup_{t\to \infty}x_t\leq 1$ a.s.?
| If $\lim \sup x_t^{2} >1$ then $(1+t)^{-1/2} e^{x_t^{2} log(1+t)} \to \infty$ along some sequence $(t_n)$ and this contradicts the fact that the martingale converges to a finite limit.
[$(1+t_n)^{a_n-\frac 1 2} \to \infty$ if $a_n >1$ for all $n$ and $t_n \to \infty$].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3484938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
First countable space Theorem: A subspace of a first countable space is first countable.
My proof:
Let $X$ be a first countable space. Then for each p $\in X$, there exists a countable neighborhood basis for $X$ at $p$. Let $A$ $\subseteq X$ be a subspace and $p\in A$. Since $p$ is a member of the space $X$, let $\mathbb{B}_p$ be a countable neighborhood basis for $X$ at $p$. Consider $\mathbb{B}_p'$ $=$ $\{$ $A\cap B$ $:$ $B\in \mathbb{B}_p$ $\}$. If $\mathbb{B}_p$ is finite, then so is $\mathbb{B}_p'$, hence it would be countable. If $\mathbb{B}_p$ is countably infinite, then define a function $f:$ $\mathbb{B}_p$ $\rightarrow \mathbb{B}_p'$ by $f(B_i)=A\cap B_i$. Clearly the function is well defined and a surjection; hence, since $\mathbb{B}_p$ is countably infinite, $\mathbb{B}_p'$ is countable. Let $U$ be a neighborhood of $p$ in the subspace topology, so $U$= $A\cap U'$, where $U'$ is a neighborhood of $p$ in $X$ containing some $B\in \mathbb{B}_p$. Hence, $A\cap B \subseteq A\cap U' = U$. Since $A$ is an arbitrary subspace of $X$, every subspace of a first countable space is first countable.
Is it correct?
| *
*You don't have to distinguish countable from finite, it's superfluous.
*Of course $\Bbb B'_p$ is by definition the image of $\Bbb B_p$ under $U \to U \cap A$ so is also countable (an image of a countable set is countable).
The proof itself is correct. Don't try to be "hypercorrect", though.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Parity of permutation If I know that the parity of a permutation is the parity of the number of transpositions, how can I prove that the parity of the permutation is the parity of
the permutation decrement?
Permutation decrement is the difference between the number of truly movable elements and the number of independent cycles
| I'm going to assume you're trying to prove that the parity of a $k$-cycle is equal to the parity of $k-1$. If this is the case, note that
$$(a_1~~a_2~~\cdots~~a_{k-1}~~a_k) = (a_1~~a_k)(a_1~~a_{k-1}) \cdots(a_3~~a_1)(a_2~~a_1)$$
The expression on the right-hand side is a product of $k-1$ transpositions.
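The identity can be verified mechanically (my check; maps are composed right to left):

```python
def compose(p, q):               # (p∘q)(x) = p(q(x))
    return {x: p[q[x]] for x in q}

k, n = 5, 7
identity = {x: x for x in range(n)}
cycle = dict(identity)
for i in range(k):               # the k-cycle (0 1 2 ... k-1)
    cycle[i] = (i + 1) % k
prod = dict(identity)
for j in range(k - 1, 0, -1):    # (0 k-1)(0 k-2)...(0 1), rightmost applied first
    t = dict(identity)
    t[0], t[j] = j, 0            # the transposition (0 j)
    prod = compose(prod, t)
assert prod == cycle
print(f"a {k}-cycle equals a product of {k-1} transpositions")
```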
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Suppose $f:[0,1] \rightarrow \mathbb{R}$ is continuous. What is the value of $\int_{0}^{1} \int_{x}^{1-x} f(y) d y d x ?$ Use Fubini's theorem
*
*Suppose $f:[0,1] \rightarrow \mathbb{R}$ is continuous. What is the value of $\int_{0}^{1} \int_{x}^{1-x} f(y) d y d x ?$ Again, do not forget to justify any use of Fubini's Theorem.
Using an online calculator, I evaluated that $\int_{0}^{1} \int_{x}^{1-x} f(y)\, d y\, d x=0.$
My attempt. $\{0\leq x\leq 1, x\leq y\leq 1-x\}$ then $\{0\leq y\leq 1, 1-y\leq x\leq y\}$. Hence the integral is:
$$\int_{0}^{1} \int_{1-y}^{y} f(y) d x d y$$
but when I evaluate the integral, I didn't get $0$; can you help?
But when you use Fubini's theorem to evaluate the integral, can you explain how we change the bounds of the integrals?
| Make the variable change $u=1-x$ (so $du=-dx$; when $x=0$, $u=1$, and when $x=1$, $u=0$). Then $\int_0^1\int_x^{1-x}f(y)dydx$
$=\int_1^0\int_{1-u}^uf(y)dy(-du)=\int_1^0\int_u^{1-u}f(y)dydu=-\int_0^1\int_u^{1-u}f(y)dydu$. Since $u$ is a dummy variable, the last expression is the negative of the original integral $I$; hence $I=-I$ and $I=0$.
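A numerical illustration with $f(y)=e^y$ (my addition; for $x>\tfrac12$ the inner limits are reversed, which is exactly what produces the cancellation):

```python
import math

def double_integral(f, n=1000, m=200):
    total = 0.0
    for i in range(n):                 # midpoint rule in x
        x = (i + 0.5)/n
        a, b = x, 1 - x                # note: b < a once x > 1/2
        hy = (b - a)/m                 # signed step, so the inner integral is signed
        inner = f(a) + f(b) + sum((4 if j % 2 else 2)*f(a + j*hy) for j in range(1, m))
        total += inner*hy/3            # Simpson's rule in y
    return total/n

val = double_integral(math.exp)
print(abs(val) < 1e-6)  # True: the two halves cancel
```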
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Ito's formula and sin(Brownian motion) I would like to compute:
$d(e^{\frac12t}\sin B_t)$
using the integration by parts of Ito I come up with the following: $$\frac12e^{\frac12t}\sin B_t\,dt +e^{\frac12t}\cos B_t\,dB_t + 0$$ however the solution should be only $e^{\frac12t}\cos B_t\,dB_t$. Where is the mistake?
Moreover, what is the result of $E[e^{\frac12t}\sin B_t]$?
| You are computing $d(\sin B_t)$ as if it were a differentiable function.
If we apply Ito's formula with the function $f(x)= \sin x,$ we get
$$d(\sin B_t) = \cos B_t dB_t - \frac{1}{2} \sin B_t dt.$$
Now applying the integration by parts formula, you will get the result
\begin{align*}
d(e^ {\frac{1}{2}t} \sin B_t) &= e^{\frac{1}{2}t} d(\sin B_t) + \frac{1}{2}e^ {\frac{1}{2}t} \sin B_t dt \\
&= ...
\end{align*}
For the other question, you can use the fact that $B_t$ is normally distributed with mean $0$ and variance $t$, in conjunction with this: Compute $E(\sin X)$ if $X$ is normally distributed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solving $\frac{\ln(x)\ln(y)}{\ln(1-x)\ln(1-y)}=1$ for $y$
I'm trying to solve for $y$ in terms of $x$ for the expression below.
$$\frac{\ln(x)\ln(y)}{\ln(1-x)\ln(1-y)}=1$$
First I multiplied both sides by $$ \frac{\ln(1-x)}{\ln(x)} $$
to get
$$ \frac{\ln(y)}{\ln(1-y)}=\frac{\ln(1-x)}{\ln(x)} $$
but I don't see how to isolate $y.$ I tried using every technique I know including logarithm properties.
| The function $f(x)=\dfrac{\ln(1-x)}{\ln(x)}$ is monotonic in its domain $(0,1)$, hence it is invertible. So the relation between $x$ and $y$ is a bijection, and…
$$y=1-x.$$
Interestingly, the function is well approximated by $\left(\dfrac1x-1\right)^{-3/2}$, and a solution with $a$ in the RHS is approximately
$$\left(\dfrac1x-1\right)^{-3/2}=a\left(\dfrac1y-1\right)^{3/2},$$ or
$$y=\frac{1-x}{1+(a^{2/3}-1)x}.$$
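The quality of that approximation can be probed numerically (my quick check on $[0.1,\,0.9]$, where the ratio to the true function stays within about $25\%$):

```python
import math

def g(x):                          # the exact function ln(1-x)/ln(x)
    return math.log(1 - x)/math.log(x)

def approx(x):                     # the proposed approximation
    return (1/x - 1)**-1.5

ratios = [approx(x)/g(x) for x in (i/1000 for i in range(100, 901))]
print(min(ratios), max(ratios))
assert all(0.75 < r < 1.30 for r in ratios)
```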
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
How to find result of system $x+y^2-4=0$ and $y+x^2-4=0$ without using the quartic formula How do I get the values of $x$ and $y$, without using the quartic formula?
\begin{align*}
\begin{cases} x+y^2-4=0 \\ y+x^2-4=0 \end{cases}
\end{align*}
| Subtract the first equation from the second to obtain:
$$x^2-y^2+y-x=0\Leftrightarrow \\
(x-y)(x+y-1)=0 $$
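To finish from this hint (my completion, not part of the original answer): set each factor to zero and substitute back into $x+y^2-4=0$:
$$x=y:\ x^2+x-4=0\ \Rightarrow\ x=\frac{-1\pm\sqrt{17}}{2},\qquad y=1-x:\ x^2-x-3=0\ \Rightarrow\ x=\frac{1\pm\sqrt{13}}{2},$$
with $y=x$ in the first case and $y=1-x$ in the second; keep whichever roots the context allows.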
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving the integrand $\frac{x^3}{\sqrt{x^2+10x+16}}$ I just wanted to make sure that what I did to integrate $\frac{x^3}{\sqrt{x^2+10x+16}}$ is correct.
I assumed that it is classified as a trigonometric substitution problem. And so, what I first did is to apply "completing the square":
$\int \frac{x^3 dx}{\sqrt{x^2+10x+16}} = \int \frac{x^3 dx}{\sqrt{x^2+10x+25+16-25}} = \int \frac{x^3 dx}{\sqrt{(x+5)^2-9}}
$
After that, I assigned values into some variables:
let $a = 3 $
$ x + 5 = 3 \sec \Theta \rightarrow \sec \Theta = \frac{x+5}{3} $
$ dx = 3 \sec \Theta \tan \Theta\, d \Theta$
Next, I have substituted the value of (x+5) to $\sqrt{(x+5)^2-9}$ which leads to $3 \tan \Theta$. And, $\tan \Theta$ is equal to $\frac{\sqrt{x^2+10x+16}}{3}$.
Afterwards, I have replaced all of the variables to the values that I assigned to them:
$\int \frac{x^3}{\sqrt{x^2+10x+16}} \rightarrow \int \frac{(3 \sec \Theta)^3 (3 \sec \Theta \tan \Theta)\, d\Theta}{3 \tan \Theta} \rightarrow \int (3 \sec\Theta - 5)^3 \sec\Theta\, d \Theta$
Expanding the trinomial, distributing $\sec \Theta$ to each term and pulling out the constants will result in:
$27\int \sec^4\Theta\, d\Theta - 135\int \sec^3\Theta\, d\Theta + 225 \int \sec^2\Theta\, d\Theta - 125 \int \sec \Theta\, d \Theta$
Now, by making $\sec^2 \Theta$ into $(1 + \tan^2 \Theta)$ and applying u-substitution: $27 \int \sec^4 \Theta \,d\Theta \rightarrow 27 \tan \Theta + 9 \tan^3 \Theta + C$
By applying integration by parts, $\int \sec^3 \Theta\, d\Theta$ would become $\frac{\sec\Theta \tan\Theta + \ln\left | \sec\Theta + \tan\Theta \right |}{2}$.
Finally, $\int \sec^2 \Theta\, d\Theta$ would simply become $\tan \Theta$ and $\int \sec \Theta\, d\Theta$ would be $\ln\left | \sec\Theta + \tan\Theta \right |$ .
Since $\sec \Theta$ is equal to $\frac{x+5}{3}$ and $\tan\Theta$ is equal to $\frac{\sqrt{x^2+10x+16}}{3}$, the whole integral would be (I had added all like terms before this):
$9\left [ \frac{\sqrt{x^2+10x+16}}{3} \right ]^3 + 252\left ( \frac{\sqrt{x^2+10x+16}}{3}\right ) -\frac{135}{2} \left ( \frac{x+5}{3}\right ) \left ( \frac{\sqrt{x^2+10x+16}}{3}\right ) - \frac{385}{2} \ln \left | \frac{x+5}{3} + \frac{\sqrt{x^2+10x+16}}{3}\right | + C$
Lastly, I simplified the integral:
$\frac{1}{3} \left ( x^2 + 10x+16 \right )^\frac{3}{2} + \frac{(93-15x)\sqrt{x^2+10x+16}}{2} - \frac{385}{2} \ln \left | \frac{x+5+\sqrt{x^2+10x+16}}{3}\right | + C$
Have I integrated the integrand appropriately? Thanks in advance.
| It is correct. Here is another method without trigonometric substitution:
$$\int\frac{x^3}{\sqrt{x^2+10x+16}}dx=\int\frac{x^3+10x^2+16x-10x^2-100x-160+84x+160}{\sqrt{x^2+10x+16}}dx$$
$$ = \int\frac{(x-10)(x^2+10x+16)+84x+160}{\sqrt{x^2+10x+16}}dx$$
$$ = \int(x-10)\sqrt{x^2+10x+16}\ dx+\int\frac{84x+420}{\sqrt{x^2+10x+16}}dx-260\int\frac{1}{\sqrt{x^2+10x+16}}dx$$
The last integral can be directly solved using formula, second last one will follow if you substitute $t=x^2+10x+16$, remains to solve the first integral.
$$\int(x-10)\sqrt{x^2+10x+16}\ dx = \frac{1}{2}\int(2x+10)\sqrt{x^2+10x+16}\ dx-15\int\sqrt{x^2+10x+16}\ dx$$
Last integral follows directly from formula and second last is solved by substituting $x^2+10x+16$.
Note that this is a general method. In case there is any polynomial in the numerator and any quadratic (with or without a square root) in the denominator, divide the numerator by the denominator to get a linear expression in the numerator, then reduce that numerator to a constant times the derivative of the quadratic plus another constant, which can be handled easily using substitution and existing formulas.
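Since the answer states the poster's result is correct, here is a numeric cross-check: differentiating the final antiderivative by central differences and comparing with the integrand:

```python
import math

def F(x):
    # the antiderivative obtained in the post
    q = x * x + 10 * x + 16
    s = math.sqrt(q)
    return (q ** 1.5 / 3
            + (93 - 15 * x) * s / 2
            - 385 / 2 * math.log(abs((x + 5 + s) / 3)))

def integrand(x):
    return x ** 3 / math.sqrt(x * x + 10 * x + 16)

# F'(x) should reproduce the integrand wherever x^2 + 10x + 16 > 0
h = 1e-6
for x in [-1.0, 0.0, 1.5, 3.0, 10.0]:
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-4
```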
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3485956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there a non-circular explanation for computing the probability of the intersection of two dependent events? The explanation with which I'm familiar goes like this. Define the conditional probability P(A|B) as the probability of their intersection P(A and B) divided by the probability P(B). To figure out the probability of the intersection P(A and B) then, simply rearrange the terms of the equation algebraically to obtain P(A and B) = P(A|B) X P(B).
The problem: What I'm struggling to understand is how to apply this equation in practice. It seems like whenever we want to figure out P(A and B), we need to figure out the values of P(A|B) and P(B). But to figure out the value of P(A|B), we already must figure out the value of P(A and B). So how is the equation P(A and B) = P(A|B) X P(B) helpful for figuring out the value of P(A and B)? In other words, to solve for it, we must figure out the value of P(A|B), which we can't do without knowing the value of P(A and B).
| In practice, a lot of the time you can calculate $P(A|B)$ by assuming $B$ happened and seeing what the probability of $A$ is. For example, let's say you draw 2 cards face down, then randomly choose 1 to flip up, and let $B$ be the event that exactly 1 red card and 1 blue card was drawn, and $A$ the event that the flipped-up card is blue. Then you know $P(A|B)$ without knowing $P(B)$ or $P(A\bigcap B)$, because assuming you have 1 red card and 1 blue card face down, randomly picking 1 of them to flip up gives you a blue card with probability $\frac{1}{2}$.
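The deck composition isn't pinned down in the example, so assume a small deck of 2 red and 2 blue cards; a Monte Carlo run then reproduces $P(A\mid B)=\frac12$ directly, without ever computing $P(B)$:

```python
import random

random.seed(0)
trials = 200_000
count_B = count_A_and_B = 0
for _ in range(trials):
    deck = ["red", "red", "blue", "blue"]   # assumed deck, for illustration
    random.shuffle(deck)
    drawn = deck[:2]                        # two face-down cards
    flipped = random.choice(drawn)          # flip one at random
    B = sorted(drawn) == ["blue", "red"]    # exactly one of each colour
    if B:
        count_B += 1
        if flipped == "blue":               # event A
            count_A_and_B += 1

p_A_given_B = count_A_and_B / count_B
assert abs(p_A_given_B - 0.5) < 0.01        # P(A|B) = 1/2
```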
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3486186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
An interesting contest math problem: find the maximum value of $f(a_1,a_2,...,a_n)$
Suppose the sequence $a_1,a_2,...,a_n$ is a permutation of the sequence $1+2^1,2+2^2,...,n+2^n$. Find the maximum value of $f(a_1,a_2,...,a_n)=\vert a_1-a_2\vert+\vert a_2-a_3\vert+\cdots+\vert a_{n-1}-a_n\vert$.
This is an interesting problem found in a math contest. I used to be a contest math lover when I was a high school student. I know I may solve it by finding the possible regular formula when $n=2,3,4,...$, but the method is not always effective so I hope someone can give me some hints.
By the way, as a student majored in math now, if there exists an advanced method behind it, can anyone share it with me or at least tell me what it may related with. Thanks for your answer.
| Inspired by John Omielan's answer, here is my attempt.
Our goal is to find the maximum value of \begin{equation}\begin{aligned}
f(a_1,a_2,...,a_n) & = \vert a_1-a_2\vert+\vert a_2-a_3\vert+...+\vert a_{n-1}-a_n\vert \\
& = \sum_{i=1}^{n-1}\vert a_{i} - a_{i+1} \vert
\end{aligned}\end{equation}
We can regard $f(a_1,a_2,...,a_n)$ as a formula with coefficients $$f(a_1,a_2,...,a_n)=x_1a_1+x_2a_2+...+x_na_n$$
An important observation is that the sum of the coefficients $x_1+x_2+...+x_n$ has to be $0$. On the other hand, $a_1$ and $a_n$ appear only once and the other terms $a_i,(2\leqslant i \leqslant n-1)$ appear twice, so we get $x_1,x_n \in \{-1,1\}$, and $x_i\in\{-2,0,2\}$, $(2\leqslant i \leqslant n-1)$.
Now the problem seems to get clear. It's time to discuss whether $n$ is even or odd. We first let $b_i=i+2^i$, $1\leqslant i \leqslant n$.
$(1)$ When $n$ is even:
The coefficient $2$ appears ${n-2\over 2}$ times, and the coefficient $-2$ appears ${n-2\over 2}$ times as well. The coefficients $1$ and $-1$ both appear once.
Suppose $n=2k,k\geqslant 2$
\begin{align}
f(a_1,a_2,...,a_n)&=2(b_{2k}+...+b_{k+2})+b_{k+1}-b_k-2(b_{k-1}+...+b_1)\\
&=2[(2k)+...+(k+1)-k-...-1]+k-(k+1)\\
&+2[2^{2k}+...+2^{2k-1}-2^k-...-2]+2^k-2^{k+1}\\
&=2k^2-1+2^{2k+2}-2^{k+3}-2^k+4\\
&=\frac 12 n^2+4\cdot 2^{n}-9\cdot 2^{\frac n2}+3
\end{align}
$(2)$ When $n$ is odd:
The situation is a bit different, but cases $[1]$ and $[2]$ below are actually symmetric.
$[1]$ The coefficient $2$ may appear ${n-1\over 2}$ times, while $-2$ appears ${n-3\over 2}$ times, $-1$ appears twice and $1$ does not appear.
$[2]$ The coefficient $2$ appears ${n-3\over 2}$ times, while $-2$ appears ${n-1\over 2}$ times, $1$ appears twice and $-1$ does not appear.
Suppose $n=2k+1,k\geqslant 1$
\begin{align}
f(a_1,a_2,...,a_n)&=2(b_{2k+1}+...+b_{k+2})-b_{k+1}-b_k-2(b_{k-1}+...+b_1)\\
&=2[(2k+1)+...+(k+2)-(k+1)-k-...-1]+(k+1)+k\\
&+2[2^{2k+1}+...+2^{k+2}-2^{k+1}-...-2]+2^k+2^{k+1}\\
&=2k^2+2k-1+2^{2k+3}-13\cdot 2^{k+1}+4\\
&=\frac 12 n^2+4\cdot2^{n}-13\cdot 2^{n-1 \over 2}+\frac52
\end{align}
So we have found the maximal value.
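The two closed forms can be brute-force checked for small $n$ by trying all permutations:

```python
from itertools import permutations

def f(seq):
    return sum(abs(seq[i] - seq[i + 1]) for i in range(len(seq) - 1))

def formula(n):
    # the closed forms derived above for even and odd n
    if n % 2 == 0:
        return n * n / 2 + 4 * 2 ** n - 9 * 2 ** (n // 2) + 3
    return n * n / 2 + 4 * 2 ** n - 13 * 2 ** ((n - 1) // 2) + 5 / 2

for n in range(2, 8):
    b = [i + 2 ** i for i in range(1, n + 1)]
    best = max(f(p) for p in permutations(b))
    assert best == formula(n), (n, best, formula(n))
```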
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3486286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
difference between algebraic set and affine algebraic set in Hartshorne my question is basically already in the title. I just started to read the Algebraic Geometry book by Hartshorne, where he defines an algebraic set but in some propositions a bit later talks about affine algebraic sets. So does he mean the same thing? Because I didn't see him define affine algebraic sets.
Thank you!
| An algebraic set (or variety) is the common zero set of a collection of polynomials whose coefficients belong to a given field $k$, considered as a subset of $k^n$.
If $k$ contains $\mathbb R$, then $k^n$ may be considered an affine space, in the sense that it is closed under $u,v \mapsto au+(1-a)v$ for all $a$ in $\mathbb R$, and then the algebraic set may be called affine.
Some authors use "variety" only to mean an irreducible algebraic set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3486407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving a result related to Farey Sequence This is a problem from Chapter 5 of Tom M. Apostol's Modular Functions and Dirichlet Series in Number Theory.
I am adding an image of the problem.
I proved that the fractions $\theta = \frac{\lambda a + \mu c}{\lambda b + \mu d}$ always lie between $a/b$ and $c/d$.
But how do I prove that every fraction lying between $a/b$ and $c/d$ is of this form, with $\lambda$ and $\mu$ relatively prime?
Can someone please help with this problem.
| Suppose $c/d<a'/b'<a/b$ with $d,b',$ and $b$ all positive, and with $ad-bc=1$ and $\gcd(a',b')=1.$ Consider the simultaneous equations $$La+Mc=a',$$ $$Lb+Md=b' .$$ The unique solution for $(L,M)$ is $$L=\frac {a'd-b'c}{ad-bc}=a'd-b'c,\quad M=\frac {ab'-a'b}{ad-bc}=ab'-a'b.$$ We have $L>0\iff a'd>b'c \iff a'/b'>c/d.$
We have $M>0\iff ab'>a'b \iff a/b>a'/b'.$
If $0<m$ with $m|L$ and $m|M,$ then $m|(La+Mc)=a'$ and $m|(Lb+Md)=b'$, so $m=1.$
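A quick check of these formulas with $c/d = 1/2 < a'/b' < 2/3 = a/b$ (so $ad-bc=1$) for a few reduced fractions in between:

```python
from math import gcd
from fractions import Fraction

a, b, c, d = 2, 3, 1, 2            # a/b = 2/3, c/d = 1/2, ad - bc = 1
for ap, bp in [(3, 5), (5, 8), (7, 12), (4, 7)]:
    assert Fraction(c, d) < Fraction(ap, bp) < Fraction(a, b)
    L = ap * d - bp * c            # the lambda from the answer
    M = a * bp - ap * b            # the mu from the answer
    assert L > 0 and M > 0
    assert L * a + M * c == ap and L * b + M * d == bp
    assert gcd(L, M) == 1
```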
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3486653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Absolute convergence to a rational number Let's recall the not so popular/familiar form of completeness of real numbers:
Theorem: Absolute convergence of a series implies its convergence.
Since $\mathbb{Q} $ is not complete there should exist a series $\sum_{n=1}^{\infty} u_n$ with rational terms such that $\sum_{n=1}^{\infty} |u_n|$ converges to a rational number and $\sum_{n=1}^{\infty}u_n$ converges to an irrational number.
I could not think of an obvious example of such a series. Please provide one such example.
| Every irrational number in Balanced ternary has a non-repeating expansion, and vice versa. Therefore, if we take a non-repeating sequence $(e_n)_{n\in\mathbb Z^+}$ with $e_n\in\{-1,1\}$, $$\sum_{k=1}^\infty\frac{e_k}{3^k}$$ will be irrational, while $$\sum_{k=1}^\infty\left|\frac{e_k}{3^k}\right|= \sum_{k=1}^\infty\frac1{3^k}=\frac12$$ will be rational.
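A concrete instance (sign pattern of my own choosing): $e_k=+1$ exactly when $k$ is a perfect square, a pattern that never becomes periodic since the gaps between squares keep growing:

```python
import math

N = 60                              # 3**-60 is far below double precision
signed = sum((1 if math.isqrt(k) ** 2 == k else -1) / 3 ** k
             for k in range(1, N + 1))
absolute = sum(1 / 3 ** k for k in range(1, N + 1))

assert abs(absolute - 0.5) < 1e-15  # sum |e_k / 3^k| = 1/2, rational
assert -0.5 < signed < 0.5          # the signed series converges inside (-1/2, 1/2)
```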
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3486854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Find the value of k that bisects the area Find the value of $k$ for which the line $ y= kx$ bisects the area enclosed by the curve $4y=4x-x^2$ and the $x$ - axis.
I have tried to solve this and the solution seems odd... the solution that came out was $k=1 - \sqrt [3]{2}$
The step I took was to find the interval of the curve $4y=4x-x^2$ (which I got 0 to 4) then integrated after subtracting the line from the curve but the solution I got was $4/3$ or something similar to that.
Any help would be appreciated.
| Hint:
The abscissae of the intersections of $y=0$ and $4y=4x-x^2$ are the roots of $$x^2-4x=0$$
Total area $$\int_0^4\dfrac{4x-x^2}4dx=?$$
The abscissae of the intersections of $y=kx$ and $4y=4x-x^2$ are the roots of $$x^2-4x=-4kx\iff x=0,x=4-4k$$
$$\int_0^{4-4k}\dfrac{4x-x^2}4dx=?$$
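Following the hint through: the total area is $8/3$, and setting the piece between the line and the curve to half of that, $(4-4k)^3/24=4/3$, gives $k=1-2^{-1/3}\approx 0.206$ (the $k=1-\sqrt[3]{2}$ in the question looks like a typo for this). A numeric check with Simpson's rule:

```python
def simpson(f, lo, hi, n=10_000):
    # composite Simpson's rule, n even; exact for quadratics up to rounding
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

curve = lambda x: (4 * x - x * x) / 4
k = 1 - 2 ** (-1 / 3)                # claimed bisecting slope
a = 4 - 4 * k                        # abscissa where y = kx meets the curve again

total = simpson(curve, 0, 4)
between = simpson(lambda x: curve(x) - k * x, 0, a)
assert abs(total - 8 / 3) < 1e-9
assert abs(between - total / 2) < 1e-9   # the line bisects the area
```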
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3486925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Factoring $x^4-2x^3+2x^2+x+4$
I need to show that the polynomial is not irreducible and I am trying to factor the polynomial
$$x^4-2x^3+2x^2+x+4$$
I checked from a calculator that it has a factor but how do I get it by myself?
I tried grouping but It didnt work I got
$x^2(x^2-2x+2)+x+4$ and I don't know how I should proceed. My gut tells me that it should be of the form:
$(x^2-ax+k)(x^2+bx+l)$; should I just try to figure out the constants by trial and error?
| Use the Rational Root Test to check whether there are any rational roots. It turns out that this polynomial has none, so it has no linear factor with rational coefficients, and any factorization over $\mathbb{Q}$ must be into two quadratics.
As you have mentioned, assume that the polynomial is the product of two quadratic polynomials. Equate coefficients and solve for the constants $a$, $b$, $k$ and $l$.
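Carrying this out (one consistent choice): with $kl=4$, trying $k=4$, $l=1$ gives $b-a=-2$, $ab=3$, $4b-a=1$, hence $a=3$, $b=1$, i.e. $(x^2-3x+4)(x^2+x+1)$. A quick check by expansion:

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

left = [4, -3, 1]    # x^2 - 3x + 4
right = [1, 1, 1]    # x^2 + x + 1
# product should be x^4 - 2x^3 + 2x^2 + x + 4
assert poly_mul(left, right) == [4, 1, 2, -2, 1]
```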
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Visualizing Conditional Gaussian I am looking at a graph that depicts a conditional Gaussian:
I understand what the tilted red ellipses mean - that the variables are somewhat correlated with each other.
I don't understand the significance of the blue line. I know it's related to conditional gaussian but I can't quite make sense of it.
Could someone explain what the blue line means?
| Basically the bivariate normal distribution looks like this one:
Then the conditional distribution $f_{Y|X=x}(x,y)$ is here marked with the red line. We take the joint pdf and plug in $x=1.6$
$f_{X,Y}(x,y) =
\frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}}
\exp\left(
-\frac{1}{2(1-\rho^2)}\left[
\frac{(x-\mu_X)^2}{\sigma_X^2} +
\frac{(y-\mu_Y)^2}{\sigma_Y^2} -
\frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y}
\right]
\right)$
$f_{Y|X=1.6}(1.6,y) =
\frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}}
\exp\left(
-\frac{1}{2(1-\rho^2)}\left[
\frac{(1.6-\mu_X)^2}{\sigma_X^2} +
\frac{(y-\mu_Y)^2}{\sigma_Y^2} -
\frac{2\rho(1.6-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y}
\right]
\right)$
This function depends on the variable $y$ only; the remaining unknowns are parameters. We can just draw the slice at $x=1.6$ and omit all the rest from the graph above. Then basically we get a univariate distribution of a normally distributed variable. In the graph below we see one example of $f_{Y|X=x}(x,y)$ and $f_{X|Y=y}(x,y)$
Remark
The red circles (ellipses) at your graph show the different values of the joint distribution.
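The conditional-distribution picture can also be checked by simulation (the parameter values below are assumed for illustration, not taken from the graphs): conditioning on $X$ near $1.6$ should give samples with mean $\mu_Y+\rho\sigma_Y(x_0-\mu_X)/\sigma_X$ and variance $(1-\rho^2)\sigma_Y^2$.

```python
import random, math

random.seed(1)
mu_x, mu_y, s_x, s_y, rho = 0.0, 0.0, 1.0, 1.0, 0.6
x0, eps = 1.6, 0.05                 # condition on X close to 1.6

ys = []
for _ in range(400_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mu_x + s_x * z1
    y = mu_y + s_y * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
    if abs(x - x0) < eps:
        ys.append(y)

mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
assert abs(mean - rho * x0) < 0.06          # expected 0.96
assert abs(var - (1 - rho * rho)) < 0.06    # expected 0.64
```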
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I calculate the limit $\lim\limits_{n \to \infty} \frac1n\ln \left( \frac{2x^n}{x^n+1} \right)$. I have the following limit to find:
$$\lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg ( \dfrac{2x^n}{x^n+1} \bigg)$$
Where $n \in \mathbb{N}^*$ and $x \in (0, \infty)$.
I almost got it. For $x > 1$, I observed that:
$$\lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg ( \dfrac{2x^n}{x^n+1} \bigg) = \lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg ( \dfrac{2x^n}{x^n(1 + \frac{1}{x^n})} \bigg) = \lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg ( \dfrac{2}{1+\frac{1}{x^n}} \bigg)$$
Because $x>1$, we have that $x^n \rightarrow \infty$ as $n \rightarrow \infty$, so that means that we have:
$$\dfrac{1}{\infty} \cdot \ln \bigg ( \dfrac{2}{1+\frac{1}{\infty}} \bigg ) = 0 \cdot \ln 2 = 0$$
The problem I have is in calculating for $x \in (0, 1]$. If we have that $x \in (0, 1]$ that means $x^n \rightarrow 0$ as $n \to \infty$, so:
$$\lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg( \dfrac{2x^n}{x^n + 1} \bigg ) = \lim\limits_{n \to \infty} \dfrac{\ln \bigg( \dfrac{2x^n}{x^n + 1}\bigg )}{n} $$
And I tried using L'Hospital and after a lot of calculation I ended up with
$$\ln x \lim\limits_{n \to \infty} \dfrac{x^n + 1}{x^n}$$
which is
$$\ln x\cdot \dfrac{1}{0}$$
And this is my problem. Maybe I applied L'Hospital incorrectly or something, I'm not sure. Long story short, I do not know how to calculate the following limit:
$$\lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg( \dfrac{2x^n}{x^n+1} \bigg )$$
when $x \in (0, 1]$.
| No L'hopital needed - you just have to use the fact that $\ln(xy) = \ln(x) + \ln(y)$ and break up the limits.
$\lim\limits_{n \to \infty} \dfrac{1}{n} \ln \bigg( \dfrac{2x^n}{x^n + 1} \bigg ) = $
$\lim\limits_{n \to \infty} \dfrac{\ln (2) + \ln(x^n) - \ln(x^n + 1)}{n} = $
$\lim\limits_{n \to \infty} \dfrac{\ln (2)}{n} + \lim\limits_{n \to \infty} \dfrac{n\cdot \ln(x)}{n} - \lim\limits_{n \to \infty} \dfrac{\ln(x^n + 1)}{n} = $
$ 0 + \ln(x) - \lim\limits_{n \to \infty}\dfrac{\ln(x^n + 1)}{n} = \ln(x) $
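A numeric sanity check of both regimes (limit $\ln x$ on $(0,1]$, and $0$ for $x>1$ as the asker already found):

```python
import math

def g(x, n):
    return math.log(2 * x ** n / (x ** n + 1)) / n

n = 400   # large enough to be close, small enough to avoid over/underflow
for x in [0.2, 0.5, 0.9, 1.0]:
    assert abs(g(x, n) - math.log(x)) < 5e-3   # limit ln(x) on (0,1]
for x in [1.5, 3.0]:
    assert abs(g(x, n)) < 5e-3                  # limit 0 for x > 1
```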
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Let $f:[0,1] \rightarrow\mathbb R$ be a continuous map such that $f(0)=f(1)$ Let $f:[0,1] \rightarrow \mathbb R$ be a continuous map such that $f(0)=f(1) .$ Let $n \geq 2$
Show that there is some $x \in[0,1]$ such that $f(x)=f\left(x+\frac{1}{n}\right) .$
My attempt. Assume $f(x)\neq f(x+1/n)$ for all $x$. Then either $f(x)<f(x+1/n)$ or $f(x)>f(x+1/n)$. WLOG, assume $f(x)<f(x+1/n)$. Then $f(0)<f(1/n)<f(2/n)<\cdots<f(1)$.
So how can I get a contradiction? May you help?
| Let $g(x)=f(x)-f(x+\frac 1 n)$. If this continuous functions is never $0$ then it is always positive or always negative. Suppose it is always positive. Write $0=f(1)-f(0)$ as $[f(\frac 1 n) -f(0)]+[f(\frac 2 n)-f(\frac 1 n)]+...+[f(\frac {n-1} n)-f(1)]$. You get a contradiction since each term is $<0$.
Similar argument works when $g(x) <0$ for all $x$.
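A numeric illustration with a sample $f$ (my own choice, $f(x)=\sin 2\pi x + x(1-x)$, which satisfies $f(0)=f(1)=0$): scan $g(x)=f(x)-f(x+\frac1n)$ for a sign change, then bisect.

```python
import math

def f(x):
    return math.sin(2 * math.pi * x) + x * (1 - x)   # f(0) = f(1) = 0

def find_match(f, n, steps=10_000):
    # locate a sign change of g(x) = f(x) - f(x + 1/n), then bisect
    g = lambda x: f(x) - f(x + 1 / n)
    xs = [i * (1 - 1 / n) / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        if g(a) == 0:
            return a
        if g(a) * g(b) < 0:
            for _ in range(60):
                m = (a + b) / 2
                if g(a) * g(m) <= 0:
                    b = m
                else:
                    a = m
            return (a + b) / 2
    return None

for n in range(2, 7):
    x = find_match(f, n)
    assert x is not None
    assert abs(f(x) - f(x + 1 / n)) < 1e-9
```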
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Solution of an algebraic equation having unknown within modulus The question is this:
find the number of real values of $x$ for which $|x-3|+(x-3)^2+\sqrt{x-3}+|x+3|=0.$
Here is my attempt to answer:
Since it's a real equation and $\sqrt{x-3}$ is present here, we must have $x\geqslant 3$. In this case this equation becomes
$(x-3)+(x-3)^2+\sqrt{x-3}+(x+3)=0$
$\Rightarrow 2x+(x-3)^2+\sqrt{x-3}=0$
$\Rightarrow 2(x-3)+(x-3)^2+\sqrt{x-3}+6=0$
$\Rightarrow (x-3+1)^2+\sqrt{x-3}+5=0$
$\Rightarrow \{(x-2)^2+5\}^2=x-3$
$\Rightarrow (x-2)^4+10(x-2)^2-(x-2)+26=0,$ which is a biquadratic equation.
I want to know whether I am going in the right direction.
| Hint (easier direction):
None of the summands is negative.
Could they all be zero?
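The hint can be confirmed numerically: on the domain $x \geqslant 3$ every summand is nonnegative and $|x+3| \geqslant 6$, so the left-hand side never vanishes, i.e. there are no real solutions:

```python
# each summand is nonnegative on the domain x >= 3, and |x+3| >= 6 there
lhs = lambda x: abs(x - 3) + (x - 3) ** 2 + (x - 3) ** 0.5 + abs(x + 3)

samples = [3 + i * 0.001 for i in range(100_000)]   # x in [3, 103)
assert min(lhs(x) for x in samples) >= 6
```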
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
When can we choose a covering local basis at a point which does not contain the whole space? I was wondering if we can ensure by relatively mild seperation axioms that a local basis ,which is a cover of the space, for at a topology at a point $x_0\in X$ does not contain $X$?
I'm pretty sure that if $X$ is $T_1$ and has more, then $3$ points for any $y\in X$ distinct from $x_0$ we have an open neighbourhood of $x_0$ containing also $y$ which is not the whole space, by the following argument:
I need to find for all $y\in X$ an open neighbourhood $U_y$ such that $x_0,y\in U_y$ and $U_y\neq X$. Let $z\in X$ be a point distinct from both $x_0$ and $y$. Then by the Hausdorff property, there exists $U_{y,z}$ which contains $y$ but not $z$. Similarly we have a $U_{x_0,z}$ containing $x_0$ but not $z$. Then $U_y:=U_{y,z}\cup U_{x_0,z}$ is such a neighbourhood.
My question is first whether the argument seems valid, and second can one reduce the assumption on the seperation axiom to obtain the same argument? I would also like to know whether it seems that $T_0$ is sufficient for having a basis, not local, which does not contain $X$?
Edit:
I added an assumption on that local basis would also be a basis, since that's what I had in mind and it seems that my original question was simpler and not what I had intended to ask about. I would appreciate if any one could adress my assumptions now.
| A point $p$ in a space $S$ can have a local base not containing $S$
iff there is an open neighbourhood of $p$ that is not $S$.
The Sierpiński space $(\{0,1\}, \{\emptyset, \{0\}, \{0,1\}\})$
is a $T_0$ space where the only local base for $1$ is $\{\{0,1\}\}$.
Exercise. Show that if the open set $U$ is a neighbourhood of $p$, then
$\{ V : p \in V,\ V \text{ open},\ V \subseteq U \}$ is a local base for $p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit of multiple third roots without L'Hospital I'm not able to solve the limit of a textbook question.
The limit:
$$\lim_{x\to \infty} (\sqrt[3]{x^2}(\sqrt[3]{x+1} - \sqrt[3]{x}))$$
I've been able to simplify the limit to:
$$\lim_{x\to \infty} (\sqrt[3]{x^3+x^2} - x)$$
How do I solve this limit?
Note: no L'Hospital allowed.
| Using
$$a-b=\frac{a^3-b^3}{a^2+ab+b^2}$$
gives
$$\sqrt[3]{x^3+x^2}-x=\frac{x^2}{(x^3+x^2)^{2/3}+x(x^3+x^2)^{1/3}+x^2}
=\frac{1}{(1+x^{-1})^{2/3}+(1+x^{-1})^{1/3}+1}\to\frac13$$
as $x\to\infty$.
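A numeric check of both the rationalised form and the limit $\tfrac13$:

```python
h = lambda x: (x ** 3 + x ** 2) ** (1 / 3) - x
# the rationalised form from the answer
r = lambda x: 1 / ((1 + 1 / x) ** (2 / 3) + (1 + 1 / x) ** (1 / 3) + 1)

for x in [1e3, 1e5, 1e7]:
    assert abs(h(x) - r(x)) < 1e-6       # the two forms agree
    assert abs(h(x) - 1 / 3) < 1e-2      # and approach 1/3
```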
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3487926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Solve $x^2(xdx+ydy)+2y(xdy-ydx)=0$ Solve $x^2(xdx+ydy)+2y(xdy-ydx)=0$
My Attempts
$$x^2(xdx+ydy)+2y(xdy-ydx)=0$$
$$\dfrac {xdx+ydy}{xdy-ydx}=-\dfrac {2y}{x}$$
Put $x=r\cos (\theta)$ and $y=r\sin (\theta)$
So, $r^2=x^2+y^2$ and $\tan (\theta)=\dfrac {y}{x}$
Now,
$$x^2+y^2=r^2$$
Differentiating both sides,
$$2xdx+2ydy=2rdr$$
$$xdx+ydy=rdr$$
| $$\dfrac {xdx+ydy}{xdy-ydx}=-\dfrac {2y}{x}$$
Divide both sides by $({x^2+y^2})$:
$$\dfrac {xdx+ydy}{x^2+y^2}=-\dfrac {2y}{x}\left ( \dfrac {xdy-ydx}{x^2+y^2} \right )$$
$$\dfrac {xdx+ydy}{x^2+y^2}=-\dfrac {2y}{x}\left ( d(\arctan (\frac {y}{x})) \right )$$
Susbtitute $\dfrac {y}{x}=z$
$$\frac 12\dfrac {d(x^2+y^2)}{x^2+y^2}=-2zd(\arctan z)$$
$$\frac 12\dfrac {d(x^2+y^2)}{x^2+y^2}=-\dfrac {2zdz}{z^2+1}$$
Integrate.
Edit: I didn't notice that the right-hand side of the DE was wrong, so:
$$\frac 12\dfrac {d(x^2+y^2)}{x^2+y^2}=-2\frac {y}{\color{red}{x^2}}d(\arctan \frac {y}{x})$$
$$\frac 12\dfrac {dr^2}{r^2}=-2\frac {r\sin \theta}{r^2 \cos^2 \theta}d\theta$$
More simply:
$${dr}=-2\frac {\sin \theta}{\cos^2 \theta}d\theta$$
Integrate.
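Carrying the integration through (my own continuation, so treat it as a sketch): $r = -2/\cos\theta + C$, i.e. $\sqrt{x^2+y^2}\,\frac{x+2}{x} = C$. A numeric check that this is a first integral: $\nabla G$ should be parallel to $(x^3-2y^2,\ x^2y+2xy)$, the coefficient vector of the ODE written as $M\,dx+N\,dy=0$.

```python
import math

def G(x, y):
    # candidate first integral sqrt(x^2 + y^2) * (x + 2) / x
    return math.hypot(x, y) * (x + 2) / x

h = 1e-6
for x, y in [(1.0, 1.0), (2.0, -1.0), (0.5, 3.0), (3.0, 2.0)]:
    Gx = (G(x + h, y) - G(x - h, y)) / (2 * h)
    Gy = (G(x, y + h) - G(x, y - h)) / (2 * h)
    M = x ** 3 - 2 * y ** 2
    N = x ** 2 * y + 2 * x * y
    assert abs(Gx * N - Gy * M) < 1e-5   # gradient parallel to (M, N)
```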
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Expected number of dice rolls before rolling "1,2,3,4,5,6" QUESTION: I roll a single six-sided die repeatedly, recording the outcomes in a string of digits. I stop as soon as the string contains "$123456$". What is the expected length of the string?
My answer so far: My initial approach is to try and find the probability mass function. If we let the random variable $X$ be the length of the string, then we can easily calculate for $x\in\{6,\ldots,11\}$,
$$\mathbb{P}(X=x) = \left(\frac{1}{6}\right)^6$$
and zero for $x<6$.
As soon as we reach $x\ge12$, we need to consider the probability that the final six rolls are "$123456$" but that sequence isn't contained in the string before that. I believe the result for $x\in\{12,\ldots,17\}$ becomes
$$\mathbb{P}(X=x) = \left(\frac{1}{6}\right)^6 - \left(\frac{1}{6}\right)^{12}(x-11).$$
Now for $x\ge18$, we will need an extra term to discount the cases when two instances of "$123456$" are contained before the final six rolls. And indeed every time we reach another multiple of six, we need to consider the number of ways of having so many instances of the string before the final six rolls.
I've messed around with this counting problem but I'm getting bogged down in the calculations. Any input is appreciated to help shed some light on this. Thanks!
| Just to point out a simple fact for independent, identical trials with finitely many outcomes: when a string $s$ of outcomes, like "123456", has no proper initial substrings which are equal to a final substring of $s$, then the expected waiting time for $s$ is just $1$/Freq($s$) where Freq($s$) is the probability that a random string of the length of $s$ is equal to $s$ -- in this case Freq(123456)= $1/6^6$. This follows from all the various methods discussed in the solutions and is also just a slight variation of the expected value computation for a geometric random variable. Modifications are needed when proper initial and final substrings coincide.
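A sanity check: $6^6=46656$, and the $1/\mathrm{Freq}(s)$ principle can be verified by Monte Carlo on the shorter overlap-free pattern "12" (expected wait $6^2=36$), which keeps the simulation cheap:

```python
import random

assert 6 ** 6 == 46656   # expected wait for "123456", per the answer

random.seed(7)
def rolls_until(pattern):
    # roll a fair die until the last len(pattern) outcomes equal pattern
    window, n = [], 0
    while window != pattern:
        window = (window + [random.randint(1, 6)])[-len(pattern):]
        n += 1
    return n

trials = 20_000
mean = sum(rolls_until([1, 2]) for _ in range(trials)) / trials
assert abs(mean - 36) < 1.5   # matches 1/Freq("12") = 36
```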
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Problem understanding Itô's proof about product of abelian groups I am trying to understand Itô's Theorem that states the following: Let the group $G=AB$ be the product of two abelian subgroups $A$ and $B$. Then $G$ is metabelian.
I am following the book 'Product of groups' by Amberg, Franciosi and De Giovanni which is as follows:
Let $a$, $a_{1}$ be elements of $A$ and $b$, $b_{1}$ elements of $B$. Write $b^{a_{1}}=a_{2}b_{2}$ and $a^{b_{1}}=a_{3}b_{3}$ with $a_{2}$, $a_{3}$ in $A$ and $b_{2}$, $b_{3}$ in $B$. Then, $$[a,b]^{a_{1}b_{1}}=[a,b^{a_{1}}]^{b_{1}}=[a,b_{2}]^{b_{1}}=[a^{b_{1}},b_{2}]=[a_{3},b_{2}]$$ and $$[a,b]^{b_{1}a_{1}}=[a^{b_{1}},b]^{a_{1}}=[a_{3},b]^{a_{1}}=[a_{3},b^{a_{1}}]=[a_{3},b_{2}].$$
This proves that the commutators $[a,b]$ and $[a_{1},b_{1}]$ commute. Since the factor group $G/[A,B]$ is abelian, it follows that $G^\prime=[A,B]$, and hence $G^\prime$ is abelian.
I don't understand the second and third equalities: Why is $[a,b^{a_{1}}]^{b_{1}}=[a,b_{2}]^{b_{1}}?$ I have tried to do it by using that $a_{2}b_{2}=b_{2}^\prime a_{2} ^\prime$ for some $b_{2}^\prime \in B$ and $a_{2}^\prime \in A$, but I obtain that $$[a,a_{2}b_{2}]^{b_{1}}={b_{1}}^{-1}a{{b_{2}^\prime}}a^{-1}{b_{2}^\prime}^{-1}b_{1},$$ but how do I know that $b_{2}=b_{2}^\prime?$
| Sorry, my comment was wrong. These equations are derived using the definition $[x,y] = xyx^{-1}y^{-1}$. You can check that we then have the commutator identities:
$$[x,zy] = [x,z]z[x,y]z^{-1}\ \ \ \mathrm{and}\ \ \ [xz,y] = x[z,y]x^{-1}[x,y].$$
So in your first equation we have:$$[a,b^{a_1}] = [a,a_2b_2] = [a,b_2]a_2[a,a_2]a_2^{-1} = [a,b_2],$$ and in the second one:$$[a^{b_1},b] =[a_3b_3,b]= a_3[b_3,b]a_3^{-1}[a_3,b]=[a_3,b],$$ and also, as in the first equation, $[a_3,b^{a_1}] = [a_3,b_2]$.
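The two identities themselves (with $[x,y]=xyx^{-1}y^{-1}$) can be machine-checked over a small group, e.g. all triples in $S_3$ represented as permutation tuples:

```python
from itertools import permutations as perms

S3 = list(perms(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))       # (p∘q)(i) = p(q(i))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
c = lambda x, y: comp(comp(x, y), comp(inv(x), inv(y)))    # [x,y] = x y x^-1 y^-1

for x in S3:
    for y in S3:
        for z in S3:
            # [x, zy] = [x,z] z [x,y] z^-1
            assert c(x, comp(z, y)) == \
                comp(comp(c(x, z), comp(z, c(x, y))), inv(z))
            # [xz, y] = x [z,y] x^-1 [x,y]
            assert c(comp(x, z), y) == \
                comp(comp(x, comp(c(z, y), inv(x))), c(x, y))
```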
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A quadrilateral inscribed in a rectangle Given a rectangle $ABCD$ in which there is an inscribed quadrilateral $XYZT$, with exactly one vertex on each side of the rectangle, how could I prove that the perimeter of the inscribed quadrilateral is larger then $2|AC|$ (two diagonals)?
I tried to use the triangle inequality, but I can't find the right way to do it.
| Let’s suppose that $X$, $Y$, $Z$, $T$ are in $AB$, $BC$, $CD$, $DA$, respectively. Construct the reflection of $X$ through $AD$ $X_1$, the reflection of $X$ through $BC$ $X_2$, and the reflection of $X_1$ through $CD$ $X_3$. Your diagram should look like this.
Now, we have $$XY+YZ+ZT+TX$$ $$=X_2Y+YZ+ZT+TX_1$$ $$\ge X_2Z+ZX_1$$ $$=X_2Z+ZX_3$$ $$\ge X_3X_2.$$ However, since $X_1X_2=2AB$, $X_1X_3=2AD$, $\angle X_3X_1X_2=\angle DAB=90^\circ$, $$\bigtriangleup X_1X_2X_3\sim\bigtriangleup ABD,$$ so that $$X_3X_2=2AC,$$ and $XY+YZ+ZT+TX\ge 2AC$, as we wanted. $\blacksquare$
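A randomized sanity check of the inequality $XY+YZ+ZT+TX \ge 2\,AC$ in a $3\times 2$ rectangle (the midpoint parallelogram attains equality, which is why the bound is tight):

```python
import random, math

random.seed(3)
w, h = 3.0, 2.0
diag2 = 2 * math.hypot(w, h)        # 2|AC|
dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])

for _ in range(50_000):
    X = (random.uniform(0, w), 0)   # on AB
    Y = (w, random.uniform(0, h))   # on BC
    Z = (random.uniform(0, w), h)   # on CD
    T = (0, random.uniform(0, h))   # on DA
    per = dist(X, Y) + dist(Y, Z) + dist(Z, T) + dist(T, X)
    assert per >= diag2 - 1e-9

# the midpoint quadrilateral achieves equality
assert abs(4 * dist((w / 2, 0), (w, h / 2)) - diag2) < 1e-9
```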
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Integral $\int\limits_0^a x^b (1 - c x)^d\ \cosh x\,\mathrm dx$ How does one calculate
$$\frac{2\pi^{\frac{m-1}{2}}}{\Gamma \left(\frac{m-1}{2} \right)} \left(\frac{\alpha}{\kappa} \right)^{\frac{1}{\alpha}-1}\int\limits_0^{\kappa/\alpha}x^{m+\frac{1}{\alpha}-2}\cosh(x)\left(1-\left(\frac{\alpha}{\kappa}x\right)\right)^{\frac{2}{\alpha}}dx ?$$
| Doing basically the same as @Eric Towers, for the integral
$$I=\frac{2\pi^{\frac{m-1}{2}}}{\Gamma \left(\frac{m-1}{2} \right)} \left(\frac{\alpha}{\kappa} \right)^{\frac{1}{\alpha}-1}\int\limits_0^{\kappa/\alpha}x^{m+\frac{1}{\alpha}-2}\cosh(x)\left(1-\frac{\alpha}{\kappa}x\right)^{\frac{2}{\alpha}}\,dx $$
we have
$$I=K\, _2F_3\left(\frac{1}{2 \alpha }+\frac{m-1}{2},\frac{1}{2 \alpha
}+\frac{m}{2};\frac{1}{2},\frac{3}{2 \alpha }+\frac{m}{2},\frac{3}{2 \alpha
}+\frac{m+1}{2};\frac{\kappa ^2}{4 \alpha ^2}\right)$$
where
$$K=\frac 2 {\sqrt \pi}\left(\frac { \kappa\sqrt \pi}{\alpha}\right)^m \,\,\frac{ \Gamma \left(1+\frac{2}{\alpha }\right)\,
\Gamma \left(m+\frac{1}{\alpha
}-1\right)}{\Gamma \left(\frac{m-1}{2}\right)\, \Gamma
\left(m+\frac{3}{\alpha }\right)}$$
provided $\qquad \alpha \kappa >0\land \Re\left(m+\frac{1}{\alpha }\right)>1\land
\Re\left(\frac{1}{\alpha }\right)>-\frac{1}{2}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding the volume of the tetrahedron with vertices $(0,0,0)$, $(2,0,0)$, $(0,2,0)$, $(0,0,2)$. I get $8$; answer is $4/3$. The following problem is from the 7th edition of the book "Calculus and Analytic Geometry Part II". It can be found in section 13.7. It is
problem number 5.
Find the volume of the tetrahedron whose vertices are the given points:
$$ ( 0, 0, 0 ), ( 2, 0, 0 ), ( 0, 2, 0 ), ( 0, 0, 2 ) $$
Answer:
In this case, the tetrahedron is a parallelepiped object. If the bounds of such an object are given by the vectors $A$, $B$ and $C$, then
the volume of the object is $A \cdot (B \times C)$. Let $V$ be the volume we are trying to find.
\begin{align*}
A &= ( 2, 0, 0) - (0,0,0) = ( 2, 0, 0) \\
B &= ( 0, 2, 0) - (0,0,0) = ( 0, 2, 0) \\
C &= ( 0, 0, 2) - (0,0,0) = ( 0, 0, 2) \\[4pt]
V &= \begin{vmatrix}
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3 \\
c_1 & c_2 & c_3 \\
\end{vmatrix} =
\begin{vmatrix}
2 & 0 &0 \\
0 & 2 & 0 \\
0 & 0 & 2\\
\end{vmatrix} \\
&= 2 \begin{vmatrix}
2 & 0 \\
0 & 2\\
\end{vmatrix} = 2(4 - 0) \\
&= 8
\end{align*}
However, the book gets $\frac{4}{3}$.
| Note that the given volume is a cone with the height 2 and a right isosceles triangle of side 2 as the base. Thus, its volume can be calculated as
$$\frac13 Area_{base} \cdot Height = \frac13 (\frac12 \cdot 2\cdot 2)2=\frac43$$
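The two answers are reconciled by noting that the determinant in the question computes the volume of the parallelepiped spanned by the three edge vectors, while the tetrahedron is one sixth of that:

```python
A = (2, 0, 0)
B = (0, 2, 0)
C = (0, 0, 2)

def det3(a, b, c):
    # scalar triple product a . (b x c), the parallelepiped volume
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

parallelepiped = abs(det3(A, B, C))
tetrahedron = parallelepiped / 6
assert parallelepiped == 8              # the value the poster obtained
assert abs(tetrahedron - 4 / 3) < 1e-12 # the book's answer
```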
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Discrete subspace in upper limit space Let $A$ be $\mathbb R$ under the upper limit topology (having as basis $\{(a,b] : a < b\}$), then the subspace $\{(x, -x)\mid x
\text{ is irrational}\}$ of $A\times A$ is closed, uncountable and discrete.
I think the subspace is closed since singletons are closed in the upper limit space, hence closed in the product space; and since the irrationals are uncountable, the subspace is uncountable.
But why is it discrete? The irrationals as a subspace of the upper limit space are not discrete, so why is this set discrete in the product space? Thanks!
| In fact in $A \times A$ it's the antidiagonal that is closed and discrete, and so are all its subsets, the antidiagonal being
$$C=\{(x,-x): x \in A\}$$ while the diagonal $$\Delta=\{(x,x):x \in A\}$$
is just homeomorphic to $A$ (this holds in any space), and as $A$ is far from discrete and the irrationals in $A$ too, the statement you're asking us to prove is false.
But for the subspace $C$ it is true, as $$C \cap \left((x-1,x] \times (-x-1,-x]\right) = \{(x,-x)\}$$ showing that each point of $C$ is an isolated point in the subspace topology on $C$. That $C$ is closed in $A \times A$ is also easy to see and hence all subspaces of $C$ are closed and relatively discrete in $A \times A$, the basis for all proofs of non-normality of $A \times A$ (that I know of).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Seeking the result: $\lim_{j,M \to \infty}\frac{1}{M}\prod_{k=1}^{M}\left[\prod_{n=j}^{2j}\left(1+\frac{1}{kn}\right)\right]^{\frac{1}{ln 2}}$ I am seeking the result of this limit
$$\lim_{j,M \to \infty}\frac{1}{M}\prod_{k=1}^{M}\left[\prod_{n=j}^{2j}\left(1+\frac{1}{kn}\right)\right]^{\frac{1}{ln 2}}=X$$
$X=1.78107...$ I have tried taking the log but it gets me nowhere.
How to determine the result of this limit?
| \begin{align}
\log\left(\frac{1}{M}\prod_{k=1}^{M}\left[\prod_{n=j}^{2j}\left(1+\frac{1}{kn}\right)\right]^{\frac{1}{\ln 2}}\right)
&=-\log(M)+\frac{1}{\log 2}\sum_{k=1}^{M}\sum_{n=j}^{2j}\log\left(1+\frac{1}{kn}\right)\\
&=-\log(M)+\frac{1}{\log 2}\sum_{n=j}^{2j}\sum_{k=1}^{M}\left(\frac{1}{kn}+O\Bigl(\frac{1}{k^2n^2}\Bigr)\right)\\
&=-\log(M)+\frac{1}{\log 2}\sum_{n=j}^{2j}\frac 1n\sum_{k=1}^{M}\frac 1k+O\left(\sum_{n=j}^{2j}\frac 1{n^2}\sum_{k=1}^{M}\frac 1{k^2}\right)\\
&=-\log(M)+\frac{1}{\log 2}\left(\log(2)+O\Bigl(\frac 1j\Bigr)\right)\left(\log(M)+\gamma+O\Bigl(\frac 1M\Bigr)\right)+o(1)\\
&=-\log(M)+\left(1+O\Bigl(\frac 1j\Bigr)\right)\left(\log(M)+\gamma+O\Bigl(\frac 1M\Bigr)\right)+o(1)\\
&=\gamma+O\left(\frac{\log(M)}j\right)\\
&\to\gamma
\end{align}
hence your limit converges to $e^\gamma$, provided that $\log(M)/j\to 0$.
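Not needed for the proof, but a brute-force numerical check in Python (the cutoffs $j=2000$, $M=200$ are arbitrary choices, taken so that $\log(M)/j$ is small):

```python
import math

# Evaluate (1/M) * prod_{k=1}^{M} [ prod_{n=j}^{2j} (1 + 1/(k n)) ]^{1/ln 2}
# for finite j, M; the claimed limit is e^gamma ~ 1.78107.
j, M = 2000, 200
log_inner = sum(math.log1p(1.0 / (k * n))
                for k in range(1, M + 1)
                for n in range(j, 2*j + 1))
X = math.exp(-math.log(M) + log_inner / math.log(2))
print(X)   # close to 1.781, matching e^gamma
```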
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do we deal with modulus signs when finding the solution of a differential equation? Consider: $\quad y'=\frac{y}{2x} \quad $ where we're required to find the general solution in the form $y=y(x)$
$\quad y'=\frac{y}{2x} \quad \rightarrow \quad \int \frac{1}{y} dy=\int \frac{1}{2x} dx$
$\hspace{2.5cm} \rightarrow \quad \ln|y|=\frac{1}{2}\ln|x|+C $
$\hspace{2.5cm} \rightarrow \quad \ln|y|=\ln\sqrt{|x|}+C \quad[1]\quad or \quad \ln{y^2}=\ln|x|+C \quad [2]$
For [1] I can continue as follows:
$\ln|y|=\ln{C\sqrt{|x|}} \quad \rightarrow \quad y=C\sqrt{|x|} \quad$
as the constant C accounts for the $\pm$ that would normally be required $y=\pm C\sqrt{|x|}$
From [2] we have:
$\ln{y^2}=\ln{C|x|} \quad \rightarrow y^2=C|x| \quad \rightarrow \quad y=\pm \sqrt{C|x|}=\pm C \sqrt {|x|}=C\sqrt{|x|} $
To the best of my knowledge, I believe that this is the simplest form I can give my solution in; I wanted to know if it's at all possible to remove the modulus sign and still have a solution in the form $y=y(x)$.
| You can always write
$$
\frac{y(x)}{y_0}=\sqrt{\frac{x}{x_0}},
$$
as in any solution there can be a sign change in neither $x$ nor $y$. Thus the fractions are always positive. Along the way this also takes care of the integration constant by directly expressing it in the initial conditions.
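For what it's worth, $y=C\sqrt{|x|}$ does satisfy the ODE on both branches $x>0$ and $x<0$ separately (each branch carries its own constant in general), which one can spot-check numerically with a central difference:

```python
import math

# Check that y(x) = C*sqrt(|x|) satisfies y' = y/(2x) away from x = 0.
# C = 3 is an arbitrary illustrative constant.
C = 3.0
def y(x):
    return C * math.sqrt(abs(x))

h = 1e-6
checks = []
for x in (-4.0, -0.5, 0.5, 4.0):
    lhs = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    rhs = y(x) / (2 * x)
    checks.append(abs(lhs - rhs))
```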
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3488934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Understanding Austin's two moving knives procedure for cutting a cake fairly From wikipedia description, the Austin procedure goes as follows
*
*Alice places one knife on the left of the cake and a second parallel to it on the right where she judges it splits the cake in two.
*Alice moves both knives to the right in a way that the part between the two knives always contains half of the cake's value in her eyes (while the physical distance between the knives may change).
*George says "stop!" when he thinks that half the cake is between the knives. How can we be sure that George can say "stop" at some point? Because if Alice reaches the end, she must have her left knife positioned where the right knife started. The Intermediate value theorem establishes that George must be satisfied the cake is halved at some point.
*A coin is tossed to select between two options: either George receives the piece between the knives and Alice receives the two pieces at the flanks, or vice versa. If partners are truthful, then they agree that the piece between the knives has a value of exactly 1/2, and so the division is exact.
In the third step, it is said that this procedure ensures that George will always say stop at some moment; this is proved using the Intermediate Value Theorem (IVT).
I have been trying to link the IVT and this procedure to understand why some knife position is guaranteed to satisfy George, but I failed to establish the relation.
How is IVT used to proof this procedure always satisfy George?
| Assume George thinks there is less than half the cake between the knives at the start. At the end the other piece is between the knives, so George thinks there is more than half between the knives. As George's estimate of the value of the cake between the knives is a continuous function of the position of the knives, it must cross $\frac 12$ at some point. The IVT is used to say there is some point where George's estimate is exactly $\frac 12$.
The case where George thinks there is more than half between the knives at the start works the same.
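To make the IVT argument concrete, here is a toy model in Python (the valuations are illustrative choices, not part of the procedure): the cake is $[0,1]$, Alice's measure is uniform, so her half-value window is always $[t,\,t+\frac12]$, and George's density is $2x$. His value of the window is $g(t)=(t+\frac12)^2-t^2$, which moves continuously from $\frac14$ to $\frac34$, and bisection (the IVT in algorithmic form) locates the crossing at $\frac12$:

```python
# Toy model: cake = [0,1]; Alice uniform, so her knives sit at [t, t+1/2].
# George's valuation has density 2x, so his value of [t, t+1/2] is
# g(t) = (t+1/2)^2 - t^2, continuous in t.
def george_value(t):
    return (t + 0.5)**2 - t**2

lo, hi = 0.0, 0.5            # g(0) = 1/4 < 1/2 < 3/4 = g(1/2)
for _ in range(60):          # bisection homes in on the crossing
    mid = (lo + hi) / 2
    if george_value(mid) < 0.5:
        lo = mid
    else:
        hi = mid
t_stop = (lo + hi) / 2       # George says "stop!" here
```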
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing that $(\mathbb{R},+,\cdot)$ and $(\mathcal{M}_2(\mathbb{R}),+,\cdot)$ are not isomorphic rings Prove that $(\mathbb{R},+,\cdot)$ and $(\mathcal{M}_2(\mathbb{R}),+,\cdot)$ are not isomorphic rings.
I came up with the following argument, but I am not sure it works : the equation $x^2=1$ has only two solutions in $\mathbb{R}$, whereas in $\mathcal{M}_2(\mathbb{R})$ it has infinitely many (for instance, any matrix $A=\left(\begin{matrix} 1 & 0 \\ a & -1 \\ \end{matrix} \right)$ where $a$ is an arbitrary real number works). Is this enough to conclude that the two rings are not isomorphic?
EDIT: Thanks everyone and sorry for forgetting to type that minus sign !
| One is commutative; the other isn't.
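Either argument works. For the asker's: a quick check that every matrix $A=\begin{pmatrix}1&0\\a&-1\end{pmatrix}$ really squares to the identity, so $x^2=1$ has infinitely many solutions in $\mathcal M_2(\mathbb R)$:

```python
# The asker's family of square roots of the identity in M_2(R):
# A = [[1, 0], [a, -1]] satisfies A^2 = I for every real a.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

identity = [[1, 0], [0, 1]]
results = [matmul(A, A) for A in ([[1, 0], [a, -1]] for a in (-3, 0, 0.5, 7))]
```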
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Hidden random walk in shallow sums? Prove that $\sum_{k=0}^n k \cdot \binom{2n-k}n 2^k = (2n+1) \cdot \binom{2n}n - 4^n $ Motivation: I was perusing through Laurent's solution to a recent online puzzle here.
Basically he managed to prove that, from the expected value of the probability distribution (in that question), we can obtain $\displaystyle \sum_{k=0}^n \binom{2n-k}n 2^k = 4^n $.
Now suppose that I want to find the variance of the same distribution, then I'm left to prove that $\displaystyle \sum_{k=0}^n k \binom{2n-k}n 2^k = (2n+1) \binom{2n}n - 4^n $.
Plugging in the values of $n=1,2,3,4,5,\ldots$ outputs this Random walk sequence.
Which got me curious: Is there a combinatorial proof for this sum? (Actually, I'm interested in any proof)
Further notes: If my claim is correct, then $$\text{Var}[T] = 4n + 2 - \frac{4^n}{\binom{2n}n} - \frac{16^n}{\binom{2n}n ^2} \to n(4-\pi) $$
which agrees with Laurent's claim that the probability distribution of $T$ asymptotically follows a Rayleigh distribution.
I also want to point out that this question is related. That is (upon division by $4^n$),
$$ \frac1{4^n} \sum_{k=0}^n k \binom {2n-k}n 2^k = \frac{2n+1}{4^n} \binom{2n}n - 1 $$
is the expected number of returns in a symmetric random walk of $2n$ moves.
| In evaluating
$$\sum_{k=0}^n {2n-k\choose n} k 2^k$$
we write
$$\sum_{k=0}^n {2n-k\choose n-k} k 2^k
= \sum_{k=0}^n k 2^k [z^{n-k}] (1+z)^{2n-k}
\\ = [z^n] (1+z)^{2n} \sum_{k=0}^n k 2^k z^k (1+z)^{-k}.$$
Now we may extend the sum in $k$ beyond $n$ because there is no
contribution to the coefficient extractor $[z^n]$ in that case:
$$[z^n] (1+z)^{2n} \sum_{k\ge 0} k 2^k z^k (1+z)^{-k}.$$
We also have
$$\sum_{k\ge 0} k w^k = \frac{w}{(1-w)^2}$$
which yields for the sum
$$[z^n] (1+z)^{2n} \frac{2z/(1+z)}{(1-2z/(1+z))^2}
= 2 [z^n] (1+z)^{2n+1} \frac{z}{(1-z)^2}$$
This is
$$2 \sum_{q=0}^n (n-q) {2n+1\choose q}
= 2n \sum_{q=0}^n {2n+1\choose q}
- 2 \sum_{q=0}^n q {2n+1\choose q}
\\ = 2n \frac{1}{2} 2^{2n+1}
- 2 \sum_{q=1}^n q {2n+1\choose q}
\\ = n 2^{2n+1}
- 2 (2n+1) \sum_{q=1}^n {2n\choose q-1}
\\ = n 2^{2n+1}
- 2 (2n+1) \sum_{q=0}^{n-1} {2n\choose q}
\\ = n 2^{2n+1}
- 2 (2n+1) \frac{1}{2} \left(2^{2n}-{2n\choose n}\right)
\\ = (2n+1) {2n\choose n} + 2 n 2^{2n} - (2n+1) 2^{2n}
= (2n+1) {2n\choose n} - 4^n.$$
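Both this identity and the companion sum from the linked solution are easy to confirm numerically with exact integer arithmetic before (or after) proving them:

```python
from math import comb

def lhs(n):
    return sum(k * comb(2*n - k, n) * 2**k for k in range(n + 1))

def rhs(n):
    return (2*n + 1) * comb(2*n, n) - 4**n

checks = [lhs(n) == rhs(n) for n in range(0, 40)]

# companion identity: sum_{k=0}^n C(2n-k, n) 2^k = 4^n
companion = [sum(comb(2*n - k, n) * 2**k for k in range(n + 1)) == 4**n
             for n in range(0, 40)]
```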
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Characterizing smooth, square-integrable functions on $(0,1]$ Is there a simple way to characterize the functions in $C^\infty((0,1])\cap L^2((0,1])$?
That is, given a function $f(t)\in C^\infty((0,1])$, is there a necessary/sufficient condition I can check to see if it's square integrable? An example of such a function is $f(t)=t^{-1/3}$, which diverges as $t\to0$ but satisfies $\int_0^1 f(t)^2\,dt=3<\infty.$
Notes: I was hoping to prove something to the effect that $f(t)$ is square integrable if and only if $$\lim_{t\to0} \frac{f(t)^2}{t^p}=L < \infty$$ for some $p>-1$. This is certainly a sufficient condition by the "limit comparison test" for improper integrals, but I'm not sure if it's necessary. (But, I also couldn't find a simple counterexample!)
| Define
$$f(t)= \frac{1}{\sqrt t \sqrt{1+(\ln t)^2}}.$$
Then $\int_0^1 f(t)^2\,dt = \pi/2,$ but
$$\lim_{t\to 0+} \frac{f(t)^2}{t^p}=\infty$$
for all $p>-1.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
If all second partial derivatives exist and are continuous then all first partial derivatives are also continuous I want to prove that for $A \subseteq \mathbb R^n$ open set and $f:A\to \mathbb R$
if for all $i,j = 1,\dots, n$ $$\frac{\partial ^2f}{\partial x_i\partial x_j}(x)$$ is continous on $A$, then $$\frac{\partial f}{\partial x_j}(x)$$ is also continuous on $A$ for $j=1,\dots,n$
However I don't want to use the fact that if all second partial derivatives exist and are continuous, then $f \in C^2(A)$.
| From the definition $\frac{\partial^2 f}{\partial x_i\partial x_j}:=\frac{\partial }{\partial x_i}\left[\frac{\partial f}{\partial x_j}\right]$, you are saying that $\frac{\partial f}{\partial x_j}$ has continuous partial derivatives. Therefore it is a differentiable function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $p$ and $q$ such that $x^2+px+q<x$ iff $x \in (1,5)$
Find $p$ and $q$ such that $$x^2+px+q<x$$ iff $$x \in (1,5)$$
I tried the following:
$$x^2+px+q = (x+\frac{p}{2})^2+q-\frac{p^2}{4}$$
where the global minimum is $$q-\frac{p^2}{4}$$ if $$x = \frac{-p}{2}$$
However this doesn't seem to help with the problem at hand...
| First rearrange as follows: $$x^2+px+q<x\iff x^2+(p-1)x+q<0$$ Recall from the properties of parabolas that $$ax^2+bx+c<0$$ between the (real) roots of the equation if $a>0$. So since in our case $a=1>0$, the parabola $$x^2+(p-1)x+q<0$$ between its zeros. So since we are given that $x\in(1,5)$ is the solution set, we surmise from above that $x^2+(p-1)x+q=(x-1)(x-5)=x^2-6x+5\implies p=-5$ and $q=5$
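A quick brute-force confirmation that with $p=-5$, $q=5$ the inequality holds precisely on $(1,5)$ (sampling on a grid; the endpoints $1$ and $5$ are included among the samples and correctly excluded):

```python
# With p = -5, q = 5:  x^2 + px + q < x  <=>  x^2 - 6x + 5 < 0  <=>  (x-1)(x-5) < 0.
p, q = -5, 5

def holds(x):
    return x*x + p*x + q < x

samples = [i / 10 for i in range(-50, 100)]          # x from -5.0 to 9.9
agree = all(holds(x) == (1 < x < 5) for x in samples)
```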
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Proof verification in Hatcher Algebraic Topology, Proposition 3.25 First, this is the link of the book, for convenience: https://pi.math.cornell.edu/~hatcher/AT/AT.pdf#page=244
Proposition 3.25 If $M$ is connected, then $M$ is orietable iff $\tilde M$ has two components.
(Here $M$ is an $n$-manifold. The definition of $\tilde M$ is constructed above the statement.)
Proof. If $M$ is connected, $\tilde M$ has either one or two components since it is a two-sheeted covering space of $M$. If it has two components, they are each mapped homeomorphically to $M$ by the covering projection, so $M$ is orientable. Conversely, if $M$ is orientable, it has exactly two orientations since it is connected, and each of these orientations defines a component of $\tilde M$.
I think this proof is full of bare claims. It took me quite a lot of time to verify each sentence. I understood almost all of it, but the only thing that I couldn't verify is, in the last sentence, that the two orientations of $M$ define the two components of $\tilde M$. That is, how do I show that $\tilde{M}$ is not connected? Thanks in advance.
| Fix the two global orientations and call them $+1$ and $-1$. This gives you a map $\tilde M \to \{+1, -1\}$ which is continuous and surjective. So $\tilde M$ is disconnected. Conversely, the inverse images of $+1$ and $-1$ are each homeomorphic to $M$ so those are connected.
To see why the map is continuous you use the definition of the topology to show that $\tilde M$ is orientable (in particular it is locally orientable). So every point has a neighbourhood with a compatible orientation. Meaning $+1$ points have only $+1$ points in a neighbourhood of them and vice versa.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solving integral of $\frac{\sin^2x}{x^2}$ with distributions I tried to evaluate $\int_{\mathbb{R}}\frac{\sin^2x}{x^2}dx$, using the theory of distribuions.
I guess I should evaluate it by thinking of a distribution generated by $T=\frac{\sin^2x}{x^2}$ and testing it on a $\phi \in \mathcal{D}(\mathbb{R})$:
\begin{equation}
(T, \phi)= \int_{\mathbb{R}}dx\frac{\sin^2x}{x^2} \phi(x)= \int_{supp(\phi)} dx\frac{\sin^2x}{x^2} [\phi(x)-\phi(0)] + \int_{supp(\phi)}\frac{\sin^2x}{x^2} \phi(0)
\end{equation}
Where the first integral on the right side of the equation goes to zero for Riemann-Lebesgue (at least, I suppose). I tried to rewrite the last integral as follows, supposing that, being $\phi \in \mathcal{D}(\mathbb{R})$, exists $L>0$ such that $supp(\phi) \subseteq [-L,L]$:
\begin{equation}
\phi(0) \int_{-L}^{L}dx\frac{\sin^2x}{x^2}
\end{equation}
I know that, as comes with complex anlysis, the result should be $\pi$ (and thus I should obtain $\pi \phi(0)$), but from now on I cannot find a way to get through it.. any help/suggestion/correction is very welcome!
| Take the function $$ f(x) = \left\{\begin{array}{ll} 1 & \text{for } |x|<1\\
\frac12 & \text{for } |x|=1 \\ 0 & \text{for } |x|>1 \end{array} \right.$$
The Fourier transform of this function is
$$ \tilde f(k) = \int_{-\infty}^\infty f(x) e^{-ikx} dx = \int_{-1}^1 e^{-ikx} dx = \frac{2\sin k}{k} $$
The Parseval's theorem for Fourier transform states that
$$ \int_{-\infty}^\infty |\tilde f(k)|^2 \frac{dk}{2\pi} = \int_{-\infty}^\infty |f(x)|^2 dx $$
That means that
$$ \int_{-\infty}^\infty \frac{4 \sin^2k}{k^2} \frac{dk}{2\pi} = \int_{-1}^1 dx = 2$$
$$ \int_{-\infty}^\infty \frac{\sin^2k}{k^2} dk = \pi$$
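A direct numerical cross-check of the final value (simple trapezoid rule on a truncated interval; the tail beyond $|x|=L$ contributes $O(1/L)$ since $\sin^2 x/x^2 \le 1/x^2$):

```python
import math

# Numerically approximate the integral of sin^2(x)/x^2 over R; expect ~ pi.
def integrand(x):
    return 1.0 if x == 0 else (math.sin(x) / x)**2

L, N = 1000.0, 1_000_000          # truncation and grid size (arbitrary choices)
dx = L / N
s = sum(integrand(i * dx) for i in range(1, N)) + 0.5 * (integrand(0) + integrand(L))
approx = 2 * s * dx               # even integrand: double the [0, L] part
```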
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3489784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Understanding how contrapositive work I want to understand contrapositive clearly. I'll start by saying: "If it is sunny, then there is light". The statement is true. But now consider the contrapositive: " If there is no light then it is not sunny". The contrapositive is false because you could create light with a flashlight.
What is wrong here?
| Read your correct contrapositive again:
If there is no light then it is not sunny.
That's true - it's equivalent to the first statement. You could create light with a flashlight, but then there would be light.
What
you could create light with a flashlight.
tells you is that the original true statement is not "if and only if". The converse
If there is light then it is sunny
is false.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Prove length relationship of median and sides in triangle using triangle inequality AD is the median in the triangle ∆ABC, from the corner A.
Prove that $\frac{AB+AC}{2}$$>AD>$$\frac{AB+AC-BC}{2}$.
I have that
$AB+AC>BC$,
$AB+BC>AC$,
$AC+BC>AB$
as well as
$AC+AD>CD$
$AD+CD>AC$
$AC+CD>AD$
and
$AB+AD>BD$
$AD+BD>AB$
$AB+BD>AD$
and
$BC=CD+BD$
I come as far as $AB+BC+AC>2AD$ and $AD>AC-BC-BD$ and now I'm really stuck and staring myself blind, going in circles.
Super grateful for any help!
| This is just a simple application of the triangle inequality.
Take $A'$ so that $ABCA'$ is a parallelogram. Then $2AD=AA'< AB+BA'=AB+AC$. For the other inequality, we have $AD+DB> AB$ and $AD+DC> AC$. Adding gives $2AD> AB+AC-BC$.
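A random spot-check of both inequalities on non-degenerate triangles, with $D$ the midpoint of $BC$:

```python
import math
import random

# Check (AB + AC)/2 > AD > (AB + AC - BC)/2 on random triangles.
random.seed(0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

ok = True
for _ in range(1000):
    A, B, C = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    AB, AC, BC = dist(A, B), dist(A, C), dist(B, C)
    if AB + AC <= BC or AB + BC <= AC or AC + BC <= AB:
        continue                              # skip (near-)degenerate samples
    D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # midpoint of BC
    AD = dist(A, D)
    ok = ok and (AB + AC) / 2 > AD > (AB + AC - BC) / 2
```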
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving that $\int_0^\pi\frac{x\ln(1-\sin x)}{\sin x}dx=3\int_0^\frac{\pi}{2}\frac{x\ln(1-\sin x)}{\sin x}dx$
Prove without evaluating the integrals that:$$2\int_0^\frac{\pi}{2}\frac{x\ln(1-\sin x)}{\sin x}dx=\int_\frac{\pi}{2}^\pi\frac{x\ln(1-\sin x)}{\sin x}dx\label{*}\tag{*}$$
Or equivalently:
$$\boxed{\int_0^\pi\frac{x\ln(1-\sin x)}{\sin x}dx=3\int_0^\frac{\pi}{2}\frac{x\ln(1-\sin x)}{\sin x}dx}$$
In contrast we have:
$$\boxed{\int_0^\pi\frac{\ln(1-\sin x)}{\sin x}dx=2\int_0^\frac{\pi}{2}\frac{\ln(1-\sin x)}{\sin x}dx}$$
This is of course easily provable by splitting the integral as $\int_0^\frac{\pi}{2}+\int_\frac{\pi}{2}^\pi$ and letting $x\to \pi-x$ in the second part, unfortunately this method doesn't work for the other one.
I am already aware how to evaluate the integrals as we have:
$$\mathcal I= \int_0^\frac{\pi}{2}\frac{x\ln(1-\sin x)}{\sin x} dx\overset{\tan \frac{x}{2}\to x}=-2\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx=-\frac{\pi^3}{8}$$
And the latter integral is evaluated in many ways here, so if you have other approaches please add them there.
Here's how I came up with $\eqref{*}$:
I knew from here that:
$$I\left(\frac{3\pi}{2}\right)=\int_0^\frac{\pi}{2}\frac{\ln(1-\sin x)}{\sin x}dx=-\frac{3\pi^2}{8}$$
And since this result is very similar to the one from above, I tried to show that $\mathcal I=\frac{\pi}{3} I\left(\frac{3\pi}{2}\right)$, equivalent to:
$$\boxed{\int_0^\frac{\pi}{2}\left(\frac{\pi}{3}-x\right)\frac{\ln(1-\sin x)}{\sin x}dx=0}$$
I also noticed that we have:
$$\mathcal J=\int_\frac{\pi}{2}^\pi\frac{x\ln(1-\sin x)}{\sin x}dx\overset{x\to \pi-x}=\int_0^\frac{\pi}{2}\frac{(\pi-x)\ln(1-\sin x)}{\sin x}dx=\pi I\left(\frac{3\pi}{2}\right)-\mathcal I$$
$$\Rightarrow \mathcal I+\mathcal J=\int_0^\pi \frac{x\ln(1-\sin x)}{\sin x}dx=\pi I\left(\frac{3\pi}{2}\right)=-\frac{3\pi^3}{8}$$
Of course now it's trivial to deduce that $2\mathcal I=\mathcal J$ as we know the result for $\mathcal I$, but I'm interested to show that relationship without making use of the result or by calculating any of the integrals. If possible showing $\eqref{*}$ using only integral manipulation (elementary tools such as substitution/integration by parts etc).
I hope there's a nice slick way to do it as it will give an easy evaluation of the main integral.
| It suffices to show the vanishing integral below
\begin{align}I=& \int^\frac{\pi}{2}_0\frac{(3x-\pi)\ln(1-\sin x)}{\sin x}dx\\
=& \int^\frac{\pi}{2}_0\int^\frac{\pi}{2}_0 \frac{(\pi-3x)\cos y}{1-\sin y \sin x}dy\>dx\\
=& \int^\frac{\pi}{2}_0\int^\frac{\pi}{2}_0 (\pi-3x)\frac{d}{dx}
\left(2\tan^{-1}\frac{\sin\frac{x-y}2}{\cos\frac{x+y}2} \right)
\overset{ibp}{dx}\> dy\\
=& \int^\frac{\pi}{2}_0\int^\frac{\pi}{2}_0 6\tan^{-1}\frac{\sin\frac{x-y}2}{\cos\frac{x+y}2}\>\overset{x \leftrightarrows y}{dxdy}=-I=0
\end{align}
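A numeric cross-check with a plain midpoint rule (stdlib only; the singularity at $\pi/2$ is only logarithmic, so this converges well enough for a few digits): the two halves should satisfy $\mathcal J \approx 2\mathcal I$, with $\mathcal I \approx -\pi^3/8 \approx -3.8758$.

```python
import math

def f(x):                       # x * ln(1 - sin x) / sin x
    return x * math.log(1 - math.sin(x)) / math.sin(x)

def midpoint(a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

n = 200_000                     # midpoints never touch the singular endpoint pi/2
I1 = midpoint(0.0, math.pi / 2, n)       # the integral I over (0, pi/2)
I2 = midpoint(math.pi / 2, math.pi, n)   # the integral J over (pi/2, pi)
```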
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65",
"answer_count": 1,
"answer_id": 0
} |
Is $(2+i)^n + (2-i)^n $ a real number ($\in \Bbb R$)? The question:
$$\forall n \in \Bbb N$$
is the number
$$(2+i)^n+(2-i)^n$$
in the real numbers ($\Bbb R$)?
My try for solution using Newton binom:
$$(2+i)^n = \sum_{k=0}^{n}\binom{n}{k}2^{n-k}i^{k}$$
$$(2-i)^n = \sum_{k=0}^{n}\binom{n}{k}2^{n-k}(-i)^{k}$$
$$(2+i)^n + (2-i)^n$$
$$\Downarrow$$
$$\text{odd } k:\quad \binom{n}{k}2^{n-k}i^{k} + \binom{n}{k}2^{n-k}(-i)^{k} = 0$$
$$\text{even } k:\quad \binom{n}{k}2^{n-k}i^{k} + \binom{n}{k}2^{n-k}(-i)^{k} = 2\binom{n}{k}2^{n-k}i^{k} \in \mathbb R$$
So the answer is YES, the sum $(2+i)^n + (2-i)^n \in \Bbb R.$
Correct? any better/smarter/more efficient way?
*I tried to solve it trigonometrically, but it didn't really work and I got stuck:
$$(2+i) \Rightarrow r = \sqrt{2^2+1^2} =\sqrt{5}, \tan \theta = \frac{1}{2} \Rightarrow \theta = 26.565$$
$$(2+i) = \sqrt{5}(\cos 26.565 + i \sin 26.565)$$
$$(2+i)^n = \sqrt{5}^n(\cos 26.565n + i \sin 26.565n)$$
$$(2-i)^n = \sqrt{5}^n(\cos 26.565n - i \sin 26.565n)$$
$$(2+i)^n + (2-i)^n = ?$$
| More efficient way:
A complex number plus its complex conjugate is real, so
$(2+i)^n+(2-i)^n=(2+i)^n+(\overline{2+i})^n=(2+i)^n+\overline{(2+i)^n}\in\mathbb R.$
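One can also watch the cancellation happen with exact Gaussian-integer arithmetic: the imaginary parts of $(2+i)^n$ and $(2-i)^n$ are negatives of each other, so the sum is a real integer for every $n$.

```python
# (a + b*i)**n by repeated multiplication, all in exact integers.
def gauss_pow(a, b, n):
    ra, rb = 1, 0
    for _ in range(n):
        ra, rb = ra * a - rb * b, ra * b + rb * a
    return ra, rb

sums = []
for n in range(0, 25):
    re1, im1 = gauss_pow(2, 1, n)    # (2+i)^n
    re2, im2 = gauss_pow(2, -1, n)   # (2-i)^n
    sums.append((re1 + re2, im1 + im2))
```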
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
A problem related to domain and range of real functions. I am trying to solve a problem related to real functions. For which I need properties of domain and range which a number must follow to be in domain and range of the real function.
I have found the property, for domain but I'm not able to find of range.
Since functions can have different domains and range, hence they can have different properties of domain and range. So let's consider a function $f$. Which is defined as:
\begin{equation}
f=\{(x, y): \quad y=\sqrt{16-x^{2}} \ and\
x, y \in R\}
\end{equation}
Now property of Domain for this function is:
For any real number $a$ to be in the domain of the function $f$ there must exist
exactly one real number $b$ such that:
\begin{equation}
b=\sqrt{16-a^{2}}
\end{equation}
I am not able to construct a similar statement for the range of the function.
So I need your help. If you're going to answer this question, then please keep in mind that I am talking about "real functions", i.e. functions whose domain and codomain are both subsets of $\mathbb R$ (possibly all of $\mathbb R$).
Thanks.
| The range is characterized as the set of all real numbers $y$ such that $f(x)=y$ for at least one $x$ in the domain of $f$.
[The domain consists of all $x$ with $16-x^{2} \geq 0$, which means $-4 \leq x \leq 4$. The range consists of all non-negative real numbers less than or equal to $4$: if $0 \leq y \leq 4$ then $x=\sqrt {16-y^{2}}$ satisfies $f(x)=y$.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inequality $\frac{x^3}{x^2+y^2}+\frac{y^3}{y^2+z^2}+\frac{z^3}{z^2+x^2} \geqslant \frac{x+y+z}{2}$ Help to prove this Inequality:
If x,y,z are postive real numbers then:
$\dfrac{x^3}{x^2+y^2}+\dfrac{y^3}{y^2+z^2}+\dfrac{z^3}{z^2+x^2} \geqslant \dfrac{x+y+z}{2}$
I tried to use an analytic method with convex functions, but with no result:
Since $f(x)=\frac{1}{x}$ is a convex function, by Jensen we obtain:
$$\frac{1}{x+y+z}\sum_{cyc}\frac{x^3}{x^2+y^2}=\sum_{cyc}\left(\frac{x}{x+y+z}\cdot\frac{1}{\frac{x^2+y^2}{x^2}}\right)\geq$$
$$\geq\frac{1}{\sum\limits_{cyc}\left(\frac{x}{x+y+z}\cdot\frac{x^2+y^2}{x^2}\right)}=\frac{x+y+z}{\sum\limits_{cyc}\left(x+\frac{y^2}{x}\right)}.$$
Thus, it's enough to prove that
$$\frac{x+y+z}{\sum\limits_{cyc}\left(x+\frac{y^2}{x}\right)}\geq\frac{1}{2}$$ or
$$x+y+z\geq\sum_{cyc}\frac{y^2}{x},$$ which is wrong.
thanks
| Hint: We have $$ \frac {x^3}{x^2 + y^2}=x-\frac{xy^2}{x^2+y^2}\ge x-\frac{y}{2}$$ because by AM-GM $x^2+y^2\geq 2xy$ so that $$\frac{xy}{x^2+y^2}\le\frac12$$
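A random spot-check of the full inequality (summing the hint over the cycle gives $\sum x - \frac12\sum y = \frac12\sum x$, so equality should only occur at $x=y=z$):

```python
import random

# Sample the inequality  sum_cyc x^3/(x^2+y^2) >= (x+y+z)/2  on positive reals.
random.seed(1)

def lhs(x, y, z):
    return x**3/(x*x + y*y) + y**3/(y*y + z*z) + z**3/(z*z + x*x)

ok = all(lhs(x, y, z) >= (x + y + z)/2 - 1e-12     # tiny slack for float error
         for x, y, z in ((random.uniform(0.01, 10),
                          random.uniform(0.01, 10),
                          random.uniform(0.01, 10)) for _ in range(5000)))
```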
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Where precisely is topology required to prove the existence of non-zero eigenstates in the proof of the spectral theorem? My professor today remarked that the proof of the spectral theorem (even for the discrete spectrum case) uses not just algebra but also topology to prove the existence of eigenstates. However, I'm not being able to pinpoint which step of the proof of the spectral theorem (for infinite-dimensional operators) invokes topological arguments. Could someone please explain?
| At the very least you need to establish that the spectrum is nonempty. This usually requires the Fundamental Theorem of Algebra in the finite-dimensional case, or some form of Liouville's Theorem in the general case. Examples:
*
*Theorem VII.3.6 in Conway's A Course in Functional Analysis.
*Theorem 1.2.5 in Murphy's C$^*$-Algebras and Operator Theory
*Theorem 3.2.3 in Kadison-Ringrose's Fundamentals of Theory of Operator Algebras
*Theorem 4.1.13 in Pedersen's Analysis Now
*Lemma VII.3.4.4 in Dumford-Schwartz Linear Operators
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3490981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a lower bound to density at boundary points of a convex set? Let $X \subset \mathbb R^d$ be convex and compact. For each $x \in X$ define
$$D(x) = \lim_{r \to 0}\frac{\mu(X \cap B(x,r))}{\mu(B(x,r))}$$
where $B(x,r)$ is the ball with centre $x$ and radius $r$ and $\mu$ is the Lebesgue measure. The density measures what proportion of the ball is contained in $X$ as $r$ becomes very small.
For example, if $X$ is a polygon then $D(x) = 1$ at interior points, and $D(x) = 1/2$ at every point on an edge but not a vertex, while for $x$ a vertex the density $D(x)$ is the interior angle at that vertex divided by $2\pi$. Thus for polytopes at least
$$\min\{D(x): x \in X\} = \min\{D(v): v \in X \text{ is a vertex}\}>0.$$
For smooth bodies I would imagine $D(x) = 1/2$ at every boundary point, since the boundary is locally approximated by a hyperplane. Hence we have $\min\{D(x): x \in X\} =1/2$
For more general maybe-not-smooth bodies, is is known that $\min\{D(x): x \in X\} >0$?
| I'll make a suggestion (maybe I'm wrong).
Let's fix $a$ - an interior point of $X$ and $\varepsilon > 0$ s.t. $B(a,\varepsilon) \subset X$. Then for arbitrary $x \in X$ you have that $x + t(a + z - x) \in X$ for $||z|| < \varepsilon$ and $t \in [0,1]$. So, for arbitrary $t < \frac{r}{b + \varepsilon}$ (where $b$ is the diameter of $X$) we have: $B(x + t(a - x), t\varepsilon) \subset X \bigcap B(x, r)$. And therefore $\frac{\mu(X \bigcap B(x, r))}{\mu(B(x, r))} \ge \frac{t^d \varepsilon^d}{r^d} \ge \frac{\varepsilon^d}{(b + \varepsilon)^d}$. Last expression is constant.
I think there can be a more accurate estimate that uses the measure of the whole cone $x + t(a + z - x)$, $t \in [0,1]$, $\|z\| < \varepsilon$.
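A Monte Carlo illustration of the vertex case (my own toy example, not part of the estimate above): at a corner of the unit square the interior angle is $\pi/2$, so the density should come out to $1/4$.

```python
import math
import random

# Estimate D(x) at the vertex (0,0) of the unit square [0,1]^2; expect ~ 1/4.
random.seed(0)
r, inside, N = 1e-3, 0, 200_000
for _ in range(N):
    # uniform point in the disc of radius r around the vertex
    rho = r * math.sqrt(random.random())
    theta = 2 * math.pi * random.random()
    px, py = rho * math.cos(theta), rho * math.sin(theta)
    if 0 <= px <= 1 and 0 <= py <= 1:
        inside += 1
density = inside / N
```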
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3491213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Floating Point Arithmetic dealing with a Taylor expansion for e^-x Suppose we want to compute $e^{-a}$ for $a>>1$. Which of the following techniques should I use?
(a) Taylor expansion for $e^{-x}$ about $x=0$ or
(b) Taylor expansion for $e^x$ about $x=0$, then take its reciprocal.
Would a Taylor expansion about $e^{-x}$ give cancellation? Is that what this question is getting at?
| As a rule of thumb, the direct, naive evaluation of a sum $a_1+...+a_n$ will have accumulated floating point errors of a size $(|a_1|+...+|a_n|)\mu$ where $\mu$ is the floating point machine constant. In your case this means that the series evaluation of $e^x$ has a floating point error, additional to the truncation error, of $e^{|x|}\mu$.
*
*This means that variant (a) computes $e^{-x}$, $x>0$, as $e^{-x}+e^{x}\delta$ with $|δ|\le μ$. For medium sized $x$ the error will surpass the value, making the result meaningless.
*In variant (b) the result is computed as $e^{-x}/(1+δ)$, which is acceptable for a floating point operation.
*Most actual algorithms implement variant (c), or a further refinement, where in $e^{-x}=2^n(1+m)$ the exponent $n$ is first computed as integer part of $x/\ln2$ and then an optimized interpolation polynomial over the interval $[0,1]$ is used for the exponential of the remainder $x-n\ln2$.
*Another possibility is (d) to use the floating point form of $x=2^n(1+m)$ to compute $e^{-x}=(e^{-(1+m)/2})^{2^{n+1}}$ for $n>0$ via repeated squaring of the inner exponential.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3491333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Triangular numbers ($\text{mod } 2^n$) as a permutation of $\{0,1,2,\dots,2^n-1\}$ My question is based on an observation made by Vepir in their question "Grasshopper jumping on circles".
Vepir's observation was essentially that the sequence of triangle numbers $T\colon \mathbb{Z} \rightarrow \mathbb{Z}$ $$
T(n) = \frac{n(n+1)}{2}
$$
forms a permutation when the inputs are restricted to
$\{0,1,\dots,2^k-1\}$ and the outputs are considered $(\text{mod } 2^k)$. Moreover, this only works modulo powers of two.
Example
For example, when $k=3$, the sequence is $$
\begin{alignat*}{8}
n: &&\ 0,\ & 1,\ & 2,\ & 3,\ & 4,\ & 5,\ & 6,\ & 7\\
T(n): &&\ 0,\ & 1,\ & 3,\ & 6,\ & 10,\ & 15,\ & 21,\ & 28\\
T(n) \pmod {8}: &&\ 0,\ & 1,\ & 3,\ & 6,\ & 2,\ & 7,\ & 5,\ & 4
\end{alignat*}
$$
Question
I showed this to a colleague, and he proved it was a bijection for all $2^m$,
however, his proof involved a good deal of case analysis.
Is there a quick and easy way to see that the triangle numbers restrict to a
permutation if and only if $k$ is a power of two?
Also, are there examples of polynomials with rational coefficients
$f \in \mathbb Q[x]$ that restrict to a permutation if and only if $k$ is a power
of three? A power of four? A prime number? A Fibonacci number?
| I'll be using $\equiv$ between non-integers to denote that the two sides differ by a multiple of the modulus $b^k$.
The map is a permutation, that is, bijective, exactly if $\frac12m(m+1)\equiv\frac12n(n+1)$ implies $m=n$ for $0\le m,n\lt b^k$. So assume $\frac12m(m+1)\equiv\frac12n(n+1)$. Adding $\frac18$ yields $\frac12\left(m+\frac12\right)^2\equiv\frac12\left(n+\frac12\right)^2$. Bringing both terms to one side and factoring the difference yields $\frac12\left(m+\frac12+n+\frac12\right)\left(m+\frac12-n-\frac12\right)\equiv0$, that is, $\frac12\left(m+n+1\right)\left(m-n\right)=rb^k$.
Now if $b=2$, since $m+n+1$ and $m-n$ have different parity, at most one of them can contribute factors of $2$. Moreover, since $m,n\lt 2^k$, either factor can contain at most $k$ factors of $2$ unless $m=n$. One factor is divided out by the factor $\frac12$, so the equation cannot be fulfilled unless $m=n$.
This argument doesn't work for $b\ne2$; indeed we can always choose $m=b^k-1$ and $n=0$ to get $k$ factors of $b$ in $m+n+1$, and none of them are divided out.
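As a quick empirical companion to the proof (the function name below is mine, not from the answer), a few lines of Python confirm both directions of the claim for small moduli:

```python
# Check which moduli m make n -> n(n+1)/2 (mod m) a permutation of {0,...,m-1}.
def is_triangular_permutation(m):
    values = {(n * (n + 1) // 2) % m for n in range(m)}
    return len(values) == m  # bijective iff all m residues are hit

good_moduli = [m for m in range(1, 65) if is_triangular_permutation(m)]
print(good_moduli)  # -> [1, 2, 4, 8, 16, 32, 64], exactly the powers of two
```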
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3491464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Variant of coin toss problem Suppose you play the following game: You toss a fair coin. If you get heads, a hundred dollars are added to your reward. If you get tails, however, the game is stopped and you do not get anything at all. After each throw you can decide, whether you want to take the money or keep playing. When should you stop to play the game in order to get the maximum expected reward and why? What happens if the coin is biased and has an 80% chance of showing heads?
| I would assume that the 100 dollars would be split proportionally to how many of the 3 completed flips each player guessed correctly.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3491602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How do I know if a number is divisible by $60$? While studying the Babylonian civilization, I became interested in the number $60$, and now I am looking for a trick or fast method to find the remainder upon division by $60$. For example:
$6689=60^{2}+51\cdot 60+29$
$2567=42\cdot 60+47$
I know that $60$ is divisible by $1,2,3,4,5,6$.
So I need a fast method to find the remainder. Can this be generalized or not?
Please, I need a fast method that works without a calculator!
For example, $17894=?$
Note also that $2177.12=2177+0.12$ with
$2177=36\cdot 60+17$
$0.12=\frac{3}{25}=\frac{3\cdot 60}{25\cdot 60}=?$
so I also need to write numbers in base $60$.
| Being divisible by $60$ is equivalent to being divisible by both $3$ and $20$ simultaneously, so we can use the divisibility tests for those, which are easy in base $10$. Divisibility by $20$ means the last two digits must be an even digit followed by $0$. Divisibility by $3$ means the sum of the digits is divisible by $3$.
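A small Python sketch of the combined test (the function name is mine, not from the answer), checked against direct division:

```python
# Divisibility by 60 via the two easy base-10 tests: digit sum for 3,
# last two digits for 20 (since 3 * 20 = 60 and gcd(3, 20) = 1).
def divisible_by_60(n):
    digit_sum = sum(int(d) for d in str(n))
    return digit_sum % 3 == 0 and n % 100 % 20 == 0

print(divisible_by_60(17894), 17894 % 60)  # False 14  (17894 = 298*60 + 14)
```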
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3491854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that ∀x ∈ X, B(x,ε) is contained in some member of the cover.
Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover.
My attempt:
(X,d) is compact. Therefore there exists a finite subcover of X.
Any element x in X must lie in some member of the cover, say x ∈ Ui. Otherwise they would not constitute a cover.
Since Ui is open, by definition every point is interior, so there exists ε > 0 such that B(x,ε) is contained in Ui.
I haven't used the fact the subcover is finite, or the fact X is a metric space rather than just topological space, so I feel my reasoning is flawed.
Any help is greatly appreciated!
| The Wiki proof linked in the comments uses the fact that a continuous function on a compact set reaches its extrema. If you want a proof from scratch and closer to what you are trying to do, here are a few hints:
$1).\ $ Let $\mathcal A$ be an open cover of $X$. For each $x\in X$ there is an open neighborhood $B_{\epsilon_x}(x)$ such that each such $B$ is contained in an element of $\mathcal A$.
$2).\ $ The $B's$ give you $\textit{another}$ open cover of $X$.
$3).\ $ Take a finite subcover of the cover from $2)$ and note that you also get a finite number of $\epsilon_x's$.
$4).\ $ Using the conclusion in $3)$, define $\delta>0$ appropriately to conclude.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3491978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Is central projection of the imaging process a projective transformation? I am reading the book Multiple View Geometry in Computer Vision (Second Edition) and have encountered some questions.
On Page 7, it says
In applying projective geometry to the imaging process, it is
customary to model the world as a 3D projective space, equal to
$\mathbb{R}^3$ along with points at infinity. Similarly the model for
the image is the 2D projective plane $\mathbb{P}^2$. Central
projection is simply a map from $\mathbb{P}^3$ to $\mathbb{P}^2$.
Does it mean that central projection is a projective transformation? If not, what is the reason? From other pages, it seems that a projective transformation can only be a map between two spaces of the same dimension.
Any help is appreciated.
| Central projection from 3D space to 2D space is not a projective transformation, as you have correctly concluded.
Central projection from 2D space to 2D space is a projective transformation as the text concludes in the last line of the same section you have highlighted. You can also see an example of the latter case on Page 34, Fig. 2.3.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3492083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find how many parts are in a triangle Recently, I got this interesting riddle in a math test, which I still can't solve. Here are the exact words:
Each side of an equilateral triangle was divided into 100 equal parts. The division points were then
connected by segments. How many parts did the triangle get divided into?
Here is an example for a triangle with just 3 points on each side:
That many lines really confuses me. I tried to find a ratio between the number of lines and the number of parts, and to track how each newly drawn line divides the others, but it doesn't seem to work. What could be the possible solution to this?
| I'll take it you want the number of regions created. This is tedious to count directly, until you realize there's symmetry. Breaking the triangle into 4 equilateral triangles, 3 of which are just rotations of each other, there are 26 regions in each of those, for 78 in those triangles. You can do this again with the last triangle, giving 14 regions in each of its 3 subtriangles. Lastly, you'll note that the remaining triangle has 10 regions, so you have $3(26+14)+10= 130$ regions. (There may be a slight error in the counts within each subtriangle, but this gives you a way to figure it out.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3492263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Colored areas in a triangle that don't share a line A friend and I came up with this puzzle and I'm looking for a proof.
Given an equilateral triangle of area 1, color parts of the triangle red,
blue, and green such that
*
*Each color makes exactly one connected region strictly inside the triangle
*There is no line parallel to one of the sides that contains points of multiple colors
Let $X$ be the minimum area of the red, blue, and green regions. Find
the maximum value of $X$ over all possible colorings.
I suspect the maximum occurs in the following arrangement of teardrop-shaped figures, which each have an area of $\frac{4}{45}$ (new bound found by Daniel Mathias). This is an awfully strange number for what seems like a nice problem, so I'm not sure if it's correct.
If you consider the triangle formed by the three points closest to the center and call $x$ the side length, $\frac{4}{45}$ can be reached when $x=\frac{4}{5\cdot 3^{3/4}}$. If $s$ is the side length of the original triangle, each region has area $x\left(\frac{s}{3}-\frac{5x\sqrt{3}}{12}\right)$. Maximizing this gives $\frac{4}{45}$.
Does anybody have an idea of a proof (or a counterexample) that this indeed gives the maximum? If it is valid, is there any intuition behind the value $\frac{4}{45}$ that makes it so special?
Also, we can look at the discrete case of this puzzle on a triangular grid with $n$ vertices on each side where we color vertices three colors. Asymptotically, this should have the same behavior as the original problem. I couldn't see a very nice pattern with small values—does anybody have a solution to this modified problem?
We tried to look for problems similar to this one; it seems like it should be well known! However, we couldn't find anything. If anybody could help us, that would be greatly appreciated.
| Just as a first step in proving an upper bound on the area, here is a proof I found that details $\dfrac{1}{5}$ as an upper bound, although it's clear from the proof the bound is unachievable.
For this proof I only considered two of the three sides of the triangle; that is, no two colors can lie on a line parallel to a chosen two out of the three sides. Ignoring the lines parallel to one of the sides, we proceed as follows:
First, we stretch the axes so that the triangle instead looks like an isosceles right triangle; this messes with the symmetry of the shape, but as we only consider two of those sides, let those two be the legs of the triangle so that by putting coordinate axes, no two red, blue, or green points can share a $x$ or $y$ coordinate.
Now, if we consider any square of side length $x$ with sides parallel to the axes, we first prove that the maximum possible area of the color that appears the least is $\dfrac{x^2}{9}$. To do this, let $x_r,x_b,x_g$ be the combined length of the projection of the red, green, and blue areas onto the $x$ axis, and define $y_r,y_b,y_g$ similarly. Then note $x_r+x_b+x_g \le x$ and $y_r+y_b+y_g \le x$. Thus, there exists a color $c$ such that $x_c+y_c \le \dfrac{2x}{3}$; by quadratic optimization, we can achieve $x_cy_c \le \dfrac{x^2}{9}$. Since the color is completely contained within the bounds of its projections, this proves our claim.
Now, we optimize the use of our lemma. Consider splitting a square into regions as shown below:
The black region is contained within a square of side length $x$ and thus the color that appears the least has at most area $\dfrac{x^2}{9}$. Add on the two purple areas outside to get a naive bound of $\dfrac{x^2}{9}+(s-x)^2$. We can optimize this by differentiating to get a minimum achieved when $x=\dfrac{9s}{10}$ and the area is $\dfrac{1}{10}s^2$. Since the area of the large triangle is $\dfrac{1}{2}s^2$, we get our final bound of $\dfrac{1}{5}$.
It's clear that in many aspects the bound is unachievable; the only case when equality can hold is when first of all we have equality inside the black region, which is already impossible because the part cut off from the square limits this. Then, even if equality was achieved, all colors take up an area of $\dfrac{x^2}{9}$ and then would need to split the outside area, which is also unaccounted for. However, it does come pretty close to the actual bound for only two sides. Using an optimization of the configuration below, I managed to achieve a possible area of $\dfrac{3-\sqrt{5}}{4} \approx 0.191,$ which is very close to $\dfrac{1}{5}$, and may not even be the best configuration.
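Separately, the $\frac{4}{45}$ value from the question's teardrop configuration can be sanity-checked numerically. This sketch assumes the per-region area formula $x\left(\frac{s}{3}-\frac{5x\sqrt{3}}{12}\right)$ quoted in the question:

```python
from math import sqrt

s = 2 / 3 ** 0.25                           # side of the unit-area equilateral triangle
region_area = lambda x: x * (s / 3 - 5 * sqrt(3) * x / 12)

x_star = 4 / (5 * 3 ** 0.75)                # claimed maximizer from the question
print(abs(region_area(x_star) - 4 / 45) < 1e-12)              # True: value is 4/45
print(region_area(x_star) > max(region_area(0.999 * x_star),
                                region_area(1.001 * x_star)))  # True: a local maximum
```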
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3492485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Evaluating $\int\frac{\mathrm{d}u}{u\log u}$. I know we use $u$-substitution for $\log u$ and then we find the derivative but after that i'm confused. This is because the question is already using $u$ as a variable. Any explanation would be appreciated.
$$\int\frac{\mathrm{d}u}{u\log u}$$
| If $x=\log u$, then $dx/du=1/u$, so we have
$$\int\frac{du}{u\log u}=\int\frac{dx}{x}=\log|x|+C=\log|\log u|+C$$
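A quick numeric sanity check of the result: differentiate $\log|\log u|$ with a central difference and compare against the integrand at a sample point.

```python
from math import log

F = lambda u: log(abs(log(u)))          # candidate antiderivative
integrand = lambda u: 1 / (u * log(u))

u, h = 3.0, 1e-6
numeric = (F(u + h) - F(u - h)) / (2 * h)   # central-difference derivative of F
print(abs(numeric - integrand(u)) < 1e-8)   # True
```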
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3492556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Dimension of vector subspace Let $V$be the vectorspace of all polynomials. And $W$ is the subspace spanned by $t^2+t+2, t^2+2t+5, 5t^2+3t+4,2t^2+2t+4$.
The question asks us to find the dimension of $W$. Here is my try:
Let us call the four polynomials $A$, $B$, $C$, and $D$, in the order in which they appear in the question. A cursory look at the four polynomials shows that $A$ and $D$ are linearly dependent ($D=2A$), so one of them can be removed and we can proceed with the remaining $3$. I removed $D$.
Next, when I formed a $3\times3$ matrix whose rows are the coefficients of $t^2$, $t$, and the constant terms of $A$, $B$, and $C$, I found the three polynomials to be linearly dependent: $C$ can be written as a linear combination of $A$ and $B$. So I concluded that $W$ is spanned by $A$ and $B$.
But what about the dimension of $W$? How do I find that?
| The dimension is at most $3$, since $W\subset P_2$. It is at least $2$, as the first two are not multiples of each other. The last is twice the first, so can be thrown out.
Let's check if the first three are independent, by computing the following determinant: $\begin{vmatrix}1&1&2\\1&2&5\\5&3&4\end{vmatrix}=1\cdot(8-15)-1\cdot (4-25)+2\cdot(3-10)=-7+21-14=0$.
Thus the first three are indeed dependent.
Thus $\operatorname {dim}W=2$.
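The determinant above can be double-checked with a few lines of Python (plain cofactor expansion, no libraries):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

coeffs = [[1, 1, 2], [1, 2, 5], [5, 3, 4]]  # rows = coefficients of A, B, C
print(det3(coeffs))  # 0, so A, B, C are linearly dependent
```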
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3492775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\binom{a_1}{2} + \binom{a_2}{2} + \cdots + \binom{a_n}{2} \ge r\binom{k+1}{2} + \left(n-r\right)\binom{k}{2}$
If $a_1,a_2,\cdots,a_n$ are positive integers, and $a_1+a_2+\cdots +a_n=nk+r$, where $k$ and $r$ are integers such that $0\le r<n$, prove that $$\dbinom{a_1}{2} + \dbinom{a_2}{2} + \cdots + \dbinom{a_n}{2} \ge r\dbinom{k+1}{2} + \left(n-r\right)\dbinom{k}{2}$$
Here is what I do :
Consider the function $f(x) = \dbinom{x}{2}$. Since it is convex, then by applying the Jensen's inequality, we have :
$$\frac{\dbinom{a_1}{2}+\dbinom{a_2}{2}+\cdots+\dbinom{a_n}{2}}{n}\ge\dbinom{\frac{a_1+a_2+\cdots+a_n}{n}}{2}$$ $$\Rightarrow \dbinom{a_1}{2}+\dbinom{a_2}{2}+\cdots+\dbinom{a_n}{2}\ge \frac{n}{2}\left(\frac{nk+r}{n}\right)\left(\frac{nk+r}{n}-1\right)$$ $$\Rightarrow \dbinom{a_1}{2}+\dbinom{a_2}{2}+\cdots+\dbinom{a_n}{2}\ge \frac{1}{2}\left(r(k+1)+(n-r)k\right)\left(\frac{nk+r}{n}-1\right)$$
But I am stuck till here, I don't know how to get the form $r\dbinom{k+1}{2} + \left(n-r\right)\dbinom{k}{2}$.
Any help is surely appreciated, Thanks!
|
We obtain with OP's problem setting and applying Jensen's inequality
\begin{align*}
\color{blue}{\sum_{j=1}^n\binom{a_j}{2}}&\geq n\binom{\frac{1}{n}\sum_{j=1}^n a_j}{2}\\
&=n\binom{\frac{1}{n}(nk+r)}{2}\\
&=\frac{n}{2}\left(\frac{nk+r}{n}\right)\left(\frac{nk+r}{n}-1\right)\tag{1}\\
&=\frac{1}{2}\left(nk+r\right)\left(k+\frac{r}{n}-1\right)\\
&\,\,\color{blue}{=\frac{1}{2}\left(nk(k-1)+2rk-\frac{r}{n}(n-r)\right)}\tag{2}
\end{align*}
The expression (1) is stated by OP and can be rearranged to (2).
On the other hand the right side of OPs inequality can be written as
\begin{align*}
&r\binom{k+1}{2}+(n-r)\binom{k}{2}\\
&\qquad=\frac{1}{2}\left(r(k+1)k+(n-r)k(k-1)\right)\\
&\qquad\,\,\color{blue}{=\frac{1}{2}\left(nk(k-1)+2rk\right)}\tag{3}
\end{align*}
Since $0\leq r<n$ we see the expression in (2) is less than (3) by $\frac{r}{2n}(n-r)$ whenever $0<r<n$. We conclude Jensen's inequality is not strong enough to prove the claim.
Note, the reference given in the comment by @MartinSleziak provides a nice solution (which also makes plausible that Jensen's inequality does not work).
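As a brute-force check of the original inequality itself for small cases (this verifies the claim directly, not the Jensen route; the function name is mine):

```python
from itertools import product
from math import comb

def holds(n, total):
    k, r = divmod(total, n)                     # total = n*k + r with 0 <= r < n
    bound = r * comb(k + 1, 2) + (n - r) * comb(k, 2)
    # check every tuple of positive integers summing to `total`
    return all(sum(comb(a, 2) for a in t) >= bound
               for t in product(range(1, total + 1), repeat=n)
               if sum(t) == total)

print(all(holds(n, total) for n in (2, 3) for total in range(n, 10)))  # True
```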
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3492894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Where is the mistake (in using the mean value theorem)? $$ f(x)=
\begin{cases}
x^2\sin \frac1x & x \ne 0 \\
0 & x=0\\
\end{cases}
$$
$f$ is differentiable everywhere and
$$ f'(x)=
\begin{cases}
2x\sin \frac1x-\cos \frac1x & x \ne 0 \\
0 & x=0\\
\end{cases}
$$
$f$ satisfies the MVT. Using it on $(0,x)$ we get:
$$\frac{x^2\sin \frac1x-0}{x-0}= 2c\sin \frac1c-\cos \frac1c$$
$c\in(0,x)$
When $x\to0$ then $c\to0$. So we have a contradiction
$$0=\lim \limits_{x \to 0}x\sin \frac1x=\lim \limits_{c \to 0}2c\sin\frac1c-\cos\frac1c$$
The last limit doesn't exist. Where is the mistake?
I see that $\lim \limits_{x \to 0}f'(x)$ doesn't exist, but the MVT still applies?
Taking a limit inside an interval is something I don't understand: don't we then get a single point? This process is important; it is used in the proof of l'Hôpital's rule.
| The mean value theorem says that there exists some $c$ in the interval $(0,x)$ such that [...]. And it is indeed the case that for any $x>0$ you can find such a $c$. That is not to say that any $c$ in $(0,x)$ satisfies the MVT, or even that the other numbers in $(0,x)$ behave nicely.
So what the MVT actually tells you in this case is that for any $x>0$, there is a $c_x\in(0,x)$, and these $c_x$ are such that $$2c_x\sin\frac1{c_x}-\cos \frac1{c_x}\to 0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Using the numbers $0,1,2,3,4,5,6,7$ (repetition allowed) how many odd numbers can be created which will be less than $10000$?
Using the numbers $0,1,2,3,4,5,6,7$ (repetition allowed) how many odd numbers can be created which will be less than $10000$?
I have tried to solve this in the following way:
Numbers which will be less than $10000$ must be one digit, two digits, three digits, or four digits (like $1,13,123,1235$ etc).
Therefore, one-digit odd numbers $=4$.
Two-digit odd numbers possible $=32-4=28$ (as there can be one zero)
Three-digit odd numbers possible $=256-28-4=224$ (again there can be two zeros)
Four-digit odd numbers possible $=2048-228-28-4=1788$ (as there can be three zeros)
Therefore, total possible numbers will be $=4+28+224+1788=2044$
But the book says the answer is $2048$.
Could you please solve this?
| If you have Mathematica, here is a way to check your answer. As has been pointed out, the mistake is in the number of four-digit odd numbers.
a[n_] := If[ContainsNone[IntegerDigits[n], {8, 9}] && OddQ[n], 1, 0];
Total[a /@ Range[1000, 9999]]
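For readers without Mathematica, the same brute-force check can be done over all lengths in a few lines of Python, confirming the book's total of $2048$:

```python
def valid(n):
    # odd, and every decimal digit is one of 0-7
    return n % 2 == 1 and all(d in "01234567" for d in str(n))

print(sum(1 for n in range(1, 10000) if valid(n)))     # 2048 in total
print(sum(1 for n in range(1000, 10000) if valid(n)))  # 1792 four-digit ones
```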
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Calculting interest on investment Given an investment of 9550000 USD I want to know what the value of this investment is after one year if I have to pay 10% of interest per year.
According to my understanding, that would be 90% of the investment which equals to 8595000 USD.
According to the solutions, however, it is 8681818 USD.
I can see that this is around 91% of the initial investment. Which formula for the calculation of interests, however, is used to arrive at this?
| Hint: You discount the investment. That means you divide it by $(1+i)=(1+0.1)=1.1$
$C_0=9,550,000\cdot \frac1{1.1}=...$
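In code, the two calculations compare as follows (discounting versus the question's 90% guess):

```python
investment = 9_550_000
i = 0.10

discounted = investment / (1 + i)   # discount by dividing by (1 + i)
print(round(discounted))            # 8681818, the solution's answer

naive = investment * (1 - i)        # simply removing 10%
print(round(naive))                 # 8595000, the question's 90% figure
```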
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can we show that $n < 2^n$ for all natural numbers $n$? I proved it using calculus or by drawing their graph but I was thinking if there is any simpler way to prove it. Please help me.
Proof by induction:
$P(n) : n < 2^n$ for all $n \in\mathbb{N}$
$P(1) : 1 < 2^1$, i.e.
$1 < 2,$ this is a true statement.
Now lets assume $P(m)$ is true i.e. $m < 2^m$ .
So $P(m + 1) : m + 1 < 2^{m + 1}.$
Now
$m < 2 ^ m \Rightarrow 2m < 2^{m + 1} \\
\Rightarrow m+m < 2^{m+1}\\
\Rightarrow m+1 \le m+m < 2^{m+1}\\
\Rightarrow m+1 < 2^{m+1}$
Hence $P(m+1)$ is true. Thus $P(m)$ is true $\Rightarrow P(m+1)$ so by principle of mathematical induction $P(n)$ is true for all $n \in \mathbb{N}.$
So as I said I know the proof using induction hence I wanted to know any other way to prove it.
| Induction:
If $k < 2^k$ (which is true for the first few natural $k$) and $k \ge 1$ (which is true for all $k$) then:
$k +1 \le k + k < 2^k + 2^k = 2^{k+1}$, and thus whenever this is true for one natural number it will be true for the next, so there is no natural number for which it fails.
.....
In essence, multiplying a number $n \ge 1$ by $b \ge 2$ (which amounts to adding $n$ to itself at least once) increases it by more than adding $1$ does, since the amount added, $n$, is at least $1$. So $n$ is the result of adding $1$, $n$ times, while $2^n$ is the result of multiplying by $2$, $n$ times, and each multiplication by $2$ must result in a larger value than just adding $1$ would.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 10,
"answer_id": 7
} |
HAPPY NEW YEAR $2020$ Remainder Problem I framed a new question just now. What is the Remainder when the number $20^{20}$ is divided by $2020$
My try:
$$\frac{20^{20}}{2020}=\frac{20^{19}}{101}$$
Now Consider:
$$20^{18}=(400)^9=(404-4)^9=101k-2^{18}$$
Now I was trying to find the remainder without a calculator and without manual long division.
| $20^{19}=100^9\cdot4^9\cdot20=100^9\cdot4^{10}\cdot5=100^9\cdot1024^2\cdot5 \equiv - 14^2\cdot5=-980 \equiv 30 \; (\mod 101)$
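Both steps can be confirmed directly with modular exponentiation; and since $2020 = 20 \cdot 101$, the original remainder is $20 \cdot 30 = 600$:

```python
print(pow(20, 19, 101))   # 30, matching the congruence above
print(pow(20, 20, 2020))  # 600 = 20 * 30, the remainder of 20^20 mod 2020
```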
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Transformation which takes Fermat curve $x^n+y^n=1$ to a hyperelliptic curve? Motivated by this where it is possible to take certain Fermat curves like $x^3+y^3=1$ into Elliptic curves.
I was wondering if it is always possible to transform any Fermat curve $x^n+y^n=1$ birationally into some hyperelliptic curve?
| There are non-hyperelliptic Fermat curves.
*
*According to "The Group of Automorphisms of the Fermat Curve" (Tzermias 1995), the automorphism group of the Fermat curve with $n \ge 4$ in characteristic $0$ is the semidirect product $\Sigma_3 \ltimes (\Bbb{Z}/n)^2$ which has order $6n^2$.
*The genus of the Fermat curve is $g=(n-1)(n-2)/2$.
*According to "Automorphism Groups of Hyperelliptic Riemann Surfaces" (Bujulance, Etayo, Martinez 1987), a hyperelliptic Riemann surface of genus $g>15$ has at most $8(g+1)$ automorphisms.
*If $n \ge 8$ then $g = (n-1)(n-2)/2 \gt 15$, $8(g+1) = 4(n^2-3n+4) \lt 6n^2$ and so the Fermat curve is not hyperelliptic.
*EDIT added: Evidently, from the Theorem on p. 175 of the paper cited in (3), the bound $8(g+1)$ already applies if $g>9$, and so the Fermat curve is not hyperelliptic for $n \ge 6$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
For a set function on a semi-ring, does additive imply finitely additive? Prove or disprove: If $\mu$ is an additive (i.e., 2-additive) set function on a semi-ring $S$, then $\mu$ is finitely additive (i.e., n-additive for every finite $n$) on $S$.
If $S$ is a ring, this is true by induction. For a semi-ring, the naive induction proof fails because if $A_1,\ldots,A_n$ are disjoint sets in $S$ and if $\bigcup_{i=1}^{n} A_i$ is in $S$, it does NOT generally follow that $\bigcup_{i=1}^{n-1} A_i$ is in $S$.
Every measure theory book I've looked at works with finitely additive sets functions on semi-rings, never just additive set functions on semi-rings. So maybe additive doesn't imply finite additive on a semi-ring.
| An additive function on a semiring may not be finitely-additive. Here is a simple example.
Consider a set $X = \{a,b,c\}$ and a semiring $S = \{X, \{a\}, \{b\}, \{c\}, \emptyset\}$ and a function $\mu:S \rightarrow \mathbb{R}$ that is defined in the following way:
$\mu(X) = 1$, $\mu(A) = 0$ for all $A \in S$ s.t. $A \ne X$. $\mu$ is additive but not finitely-additive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find last two digits of a number $A=(2016^{2015^{2014}}+2014^{2015^{2016}}+2017)^{2017}$ Find last two digits of a number $A=(2016^{2015^{2014}}+2014^{2015^{2016}}+2017)^{2017}$.
I have tried to write $A=100k+r$ and find $r$, but I got stuck. Any solution will be appreciated. Thank you.
| We work below modulo $100$. Then
$$
\begin{aligned}
A
&=\left(\ 2016^{2015^{2014}}+2014^{2015^{2016}}+2017\right)^{2017}
\\
&=\left(\ 16^{2015^{2014}}+14^{2015^{2016}}+17\right)^{2017}
\\
&=\left(\
16^{2015^{2014}\text{ taken modulo }5}
+14^{2015^{2016}\text{ taken modulo }10}
+17\right)^{2017}
\\
&\qquad\text{ since $16^1=16^{1+5}$ and $14^2=14^{2+10}$, both modulo hundred}
\\
&\qquad\text{ and we have periodic repetitions after $16^1$ and $14^2$ with periods $5,10$}
\\
&=\left(\
16^{5}
+14^{5}
+17\right)^{2017}
\\
&=\left(\
76
+24
+17\right)^{2017}
=17^{2017}
\\
&=17^{2017\text{ taken modulo }40=\phi(100)}
\\
&= 17^{17}=77\ .
\end{aligned}
$$
All equalities hold in $\Bbb Z/100$.
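The whole chain can be verified in Python; three-argument `pow` handles the tower of exponents directly:

```python
M = 100
a = pow(2016, pow(2015, 2014), M)   # = 76, i.e. 16^(2015^2014) mod 100
b = pow(2014, pow(2015, 2016), M)   # = 24, i.e. 14^(2015^2016) mod 100
base = (a + b + 2017) % M           # = 17
print(pow(base, 2017, M))           # 77, the last two digits of A
```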
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $\frac ab= \frac bc= \frac cd$ then $(a^2+b^2)(c^2+d^2)=(ab+cd)^2$ [sic?] This is a basic algebra question. I found it in a class 9 math book and it is a little tricky for me.
If $$\dfrac ab= \dfrac bc= \dfrac cd$$
prove that $$(a^2+b^2)(c^2+d^2)=(ab+cd)^2$$
Note (by @Blue). As observed in comments, the problem is incorrect as stated. It becomes valid if $(ab+cd)^2$ is replaced by $(ac+bd)^2$, but it is not clear if this is the source's intention.
| Using complex numbers and their absolute value one can transform
$$
(a^2+b^2)(c^2+d^2)=|a-ib|^2|c+ id|^2=|(ac+bd)+i(ad-bc)|^2,$$
so with $ad=bc$ (which follows from $\frac ab=\frac cd$) the right side reduces to $(ac+bd)^2$.
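A concrete numeric check (with common ratio $\frac12$, so $a,b,c,d = 1,2,4,8$) confirms the corrected identity and the failure of the printed one:

```python
a, b, c, d = 1, 2, 4, 8                 # a/b = b/c = c/d = 1/2

lhs = (a**2 + b**2) * (c**2 + d**2)     # 5 * 80 = 400
print(lhs == (a*c + b*d)**2)            # True:  (4 + 16)^2 = 400
print(lhs == (a*b + c*d)**2)            # False: (2 + 32)^2 = 1156
```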
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3493943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |