If there is a monomorphism between free $R$-modules $F \hookrightarrow G$, is the rank of $F$ necessarily less than or equal to the rank of $G$?
Let $R$ be a commutative, unital ring. Let $F = R^{(X)}$ and $G = R^{(Y)}$ be the free $R$-modules generated by two arbitrary sets $X$ and $Y$ respectively. Assume there is a monomorphism $F \hookrightarrow G$. Does this imply that $|X| \leq |Y|$?
In the case where $X$ and $Y$ are finite, the answer is yes and several proofs can be found here and here. Notice that the solutions given in these threads rely firmly on the finiteness condition, which leaves me wondering about the infinite case.
According to this answer, the cardinality of $R^{(X)}$ is $\max \left\{|R|,|X|\right\}$. It follows that if a counterexample exists then the following inequalities must be satisfied: $$\aleph_0 \leq |Y| < |X| \leq |R|$$
In particular, $|R| \geq \aleph_1$.
I feel quite out of my depth when it comes to searching for a counterexample because I don't know any $\aleph_1$-sized rings that are simple enough that it is possible to describe the relevant free modules over them... On the other hand, I really have no idea of how to approach a potential proof.
Many thanks.
| The infinite case follows from the finite case. Specifically, supposing $Y$ is infinite, for each finite subset $S\subseteq Y$, let $A_S\subseteq X$ be the set of basis elements of $F$ which map to elements of $G$ whose support is contained in $S$. Note that our monomorphism $F\to G$ then restricts to a monomorphism $R^{(A_S)}\to R^S$, and thus $|A_S|\leq |S|$. But every element of $X$ is in $A_S$ for some $S$. Since $Y$ has $|Y|$ different finite subsets, and $A_S$ is finite for each one of them, it follows that $|X|\leq |Y|\cdot\aleph_0=|Y|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A Quick, Calculus II-Level Proof Regarding the Divergence of Certain Series? I came across the following problem in the Calculus text I'm using to teach my class:
Suppose $\sum a_n$ is a positive series such that $\{a_n\}$ is a decreasing sequence and the sequence $\{na_n\}$ converges, but not to zero. Show that $\sum a_n$ diverges.
Now, this is a fairly standard analysis problem that can be shown using, for example, the Cauchy Condensation Test, among other things, but my students do not have access to this, nor any of the other standard analytic arguments one would use. The hint given in the text is to use the Limit Comparison Test to compare $\sum a_n$ with an "appropriate" series, but I have not been able to figure out what the authors intended.
Secondly, the students are supposed to use this fact to provide a "quick proof" that
$$\sum \frac{\arctan n}{\sqrt{n}}$$
diverges. It's obvious this is a positive series, but I don't see a "quick" Calculus-level argument that its sequence of terms is decreasing. Furthermore, the sequence $\{na_n\} =: \{\sqrt{n} \arctan n \}$ does not converge, so I don't see how the above fact applies directly.
In each case, what is the text looking for that the average Calculus II student is supposed to see?
EDIT: The main thing throwing me off here was that students were supposed to somehow "quickly" show that $\frac{\arctan n}{\sqrt{n}}$ is decreasing (for example, in order for solving this problem with this fact to be justified, the proof should be quicker than direct use of the Limit Comparison Test, which it is not). However, as pointed out below, the sequence $\{a_n\}$ actually does NOT need to be decreasing. That makes the problem trivial!
| If $na_n\to c\ne 0$ use limit comparison with $b_n = c/n$. (By the way, the decreasing hypothesis is not needed.)
For the second one, direct limit comparison is obvious. I’m not sure what they had in mind.
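As a numeric sanity check (my addition, not part of the original answer): the ratio $a_n/(1/\sqrt n) = \arctan n$ tends to $\pi/2 \ne 0$, so limit comparison with the divergent $p$-series $\sum 1/\sqrt n$ applies directly, and the partial sums of $\sum \arctan(n)/\sqrt n$ keep growing.

```python
import math

# Limit comparison for a_n = arctan(n)/sqrt(n) against b_n = 1/sqrt(n):
# a_n / b_n = arctan(n) -> pi/2, a finite nonzero limit, so sum a_n diverges
# together with the p-series sum 1/sqrt(n) (p = 1/2 <= 1).
def a(n):
    return math.atan(n) / math.sqrt(n)

ratio = a(10**6) * math.sqrt(10**6)   # = arctan(10^6), close to pi/2
print(ratio, math.pi / 2)

# Partial sums show no sign of settling down:
partials = [sum(a(n) for n in range(1, N + 1)) for N in (100, 1000, 10000)]
print(partials)
```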
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Fuglede's theorem in finite-dimensional vector space Let $V$ be a finite-dimensional vector space, let $A$ be a normal operator on $V$, and let $B$ be an operator such that $AB=BA$. Show that $BA^*=A^*B$.
I guess that this problem should not be so difficult. I have tried different approaches and I got some identities which do not lead to desired equality.
So I would be thankful if you show the solution to this problem, please!
| One way to think about this problem: when $B$ is diagonalizable, then since $A$, being normal, is diagonalizable (over $\mathbb C$), we can call on simultaneous diagonalizability; being normal, $A^*$ may also be simultaneously diagonalized with $B$ (via the same similarity transform that we'd use on $AB$), which implies that $A^*B = BA^*$. However, it is conceivable that $B$ might be defective, so a more direct argument can be employed: compute the norm of the commutator
$\Big\Vert A^*B - BA^*\Big\Vert_F^2$
$=\text{trace}\Big(\big(A^*B - BA^*\big)^*\big(A^*B - BA^*\big)\Big)$
$=\text{trace}\Big(\big(B^*A - AB^*\big)\big(A^*B - BA^*\big)\Big)$
$=\text{trace}\Big(B^*AA^*B\Big) + \text{trace}\Big(AB^*BA^*\Big)- \text{trace}\Big(B^*ABA^*\Big) -\text{trace}\Big(AB^*A^*B\Big) $
$=\text{trace}\Big(AA^*BB^*\Big) + \text{trace}\Big(B^*BA^*A\Big)- \text{trace}\Big(B^*ABA^*\Big) -\text{trace}\Big(BAB^*A^*\Big) $
$=\text{trace}\Big(AA^*BB^*\Big) + \text{trace}\Big(B^*BA^*A\Big) - \text{trace}\Big(B^*BAA^*\Big) -\text{trace}\Big(ABB^*A^*\Big)$
$=\text{trace}\Big(AA^*BB^*\Big) + \text{trace}\Big(B^*BA^*A\Big) - \text{trace}\Big(B^*BAA^*\Big) -\text{trace}\Big(A^*ABB^*\Big)$
$=\text{trace}\Big(AA^*BB^*\Big) + \text{trace}\Big(B^*BA^*A\Big) - \text{trace}\Big(B^*BA^*A\Big) -\text{trace}\Big(AA^*BB^*\Big)$
$=0$
thus by positive definiteness of the (squared) Frobenius norm we have
$\Big\Vert A^*B - BA^*\Big\Vert_F^2 = 0 \longrightarrow A^*B - BA^* = \mathbf 0\longrightarrow A^*B = BA^*$
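As a concrete sanity check (my own addition, not part of the answer): take $A$ diagonal, hence normal, with a repeated eigenvalue, and a defective $B$ (a Jordan block on the repeated eigenspace) that commutes with $A$; the conclusion $A^*B = BA^*$ holds, as Fuglede's theorem promises.

```python
# Verify Fuglede's theorem on a concrete example: A is diagonal (hence normal)
# with a repeated eigenvalue, and B is a defective (non-diagonalizable) matrix
# that commutes with A.  We check AB = BA and then A*B = BA*.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(X):
    n = len(X)
    return [[X[j][i].conjugate() for j in range(n)] for i in range(n)]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol
               for i in range(len(X)) for j in range(len(X)))

A = [[2 + 1j, 0, 0],
     [0, 2 + 1j, 0],
     [0, 0, 5 + 0j]]           # normal (diagonal), eigenvalue 2+i repeated
B = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 3]]                 # Jordan block on the repeated eigenspace: defective

Astar = adjoint(A)
print(close(matmul(A, B), matmul(B, A)))          # True: AB = BA
print(close(matmul(Astar, B), matmul(B, Astar)))  # True: A*B = BA*
```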
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Functional equation $f(x+1)=af(x)+b$
Functional equation $f(x+1)=af(x)+b$
There was a question I solved a few days back that asked for a closed form of an equation for a given system. The function came down to this equation which I solved by noting the pattern. By the way $f(0)=10$
So here's how I saw it:
$f(1)=10a+b(1)$
$f(2)=10a^2+b(1+a)$
$f(3)=10a^3+b(1+a+a^2)$
So I saw the pattern and the geometric series in brackets and I managed to figure it out partly because the question format was leading me in that direction.
My question now is, presented purely with a functional equation $f(x+1)=af(x)+b$ for some constants $a, b \in \mathbb R$, and some starting value $f(0)=5$ maybe, would you solve it the way I did or there's a different approach?
| $$f(x+1)-af(x)=b~~~~(1)$$
Let $f(x)=g(x)+c$, then
$$g(x+1)+c-ag(x)-ac=b$$
$$g(x+1)-ag(x)=0, c=b/(1-a)$$
Let $$g(x)=d t^x \implies t=a$$
So the solution of (1) is
$$f(x)=da^x+\frac{b}{1-a}$$
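A quick numeric check of this closed form (my addition): fixing $d$ from an initial value as in the question ($f(0)=10$ gives $d = f(0) - \frac{b}{1-a}$); the constants $a=1.5$, $b=2$ are arbitrary sample values with $a \ne 1$.

```python
# Verify f(x) = d*a**x + b/(1-a) against direct iteration of f(x+1) = a*f(x) + b.
a, b, f0 = 1.5, 2.0, 10.0       # sample constants with a != 1
c = b / (1 - a)
d = f0 - c                      # d is pinned down by the initial value f(0)

def closed_form(n):
    return d * a**n + c

f = f0
for n in range(1, 8):
    f = a * f + b               # the recurrence itself
    print(n, f, closed_form(n))

diff = abs(f - closed_form(7))
print(diff)                     # ~ 0: iteration and closed form agree
```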
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Different solutions with different results for an inequality Find m such that the following inequality:
$$\left|4x-2m-\frac{1}{2}\right| > -x^2 +2x + \frac{1}{2} - m$$
is always true for $\forall x \in R$.
1st solution:
1st case
$$4x-2m-\frac{1}{2} > -x^2 + 2x +\frac{1}{2} -m$$
$$\Leftrightarrow x^2+2x-m-1>0$$
$$\Leftrightarrow 1^2+(m+1)< 0$$
$$\Leftrightarrow m<- 2$$
2nd case
$$4x-2m-\frac{1}{2}< -(-x^2 + 2x +\frac{1}{2} -m)$$
$$\Leftrightarrow x^2-6x+3m>0$$
$$\Leftrightarrow 3^2-3m<0$$
$$\Leftrightarrow m>3$$
2nd solution:
The inequality is the same as:
$$(x-1)^2+|4x-2m-\frac{1}{2}|>\frac{3}{2}-m$$
Since the left-hand side is always positive, in order for the inequality to be always true, $\frac{3}{2}-m$ has to be negative, or $m > \frac{3}{2}$
The 2 solutions give different answers, so I was quite confused
But I get more confused as Wolfram Alpha gives me the solution:
$$m > \sqrt{3} - \frac{1}{4} \text{ or } m < -\sqrt{3} - \frac{1}{4} $$
There's a high chance that Wolfram Alpha's solution is correct (after testing out some $m$ value). How do I approach their solution? (Or maybe if you believe that solution is wrong, then what's the exact solution to the problem?)
| Your mistake in the first solution: at some stage you drop the variable $x$ by using $\forall x\in\mathbb R$. But as you are doing this case analysis, the $\forall$ no longer holds.
Your mistake in the second solution: from $a>0, a>b$, you conclude $0>b$, which is not tight. (Take $a=2,b=1$.)
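A brute-force numeric scan (my own addition) supports Wolfram Alpha's answer: the inequality holds for every $x$ exactly when $m > \sqrt3 - \frac14$ or $m < -\sqrt3 - \frac14$. A bounded grid suffices because the right-hand side tends to $-\infty$ as $|x| \to \infty$.

```python
import math

# For a given m, test |4x - 2m - 1/2| > -x^2 + 2x + 1/2 - m on a fine x-grid.
def holds_for_all_x(m, lo=-20.0, hi=20.0, steps=40000):
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if abs(4 * x - 2 * m - 0.5) <= -x * x + 2 * x + 0.5 - m:
            return False
    return True

upper = math.sqrt(3) - 0.25     # ~  1.482
lower = -math.sqrt(3) - 0.25    # ~ -1.982
print(holds_for_all_x(2.0))     # True:  2.0 > upper
print(holds_for_all_x(0.0))     # False: 0.0 lies between lower and upper
print(holds_for_all_x(-3.0))    # True: -3.0 < lower
```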
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3612123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Finding radius of curvature of a catenary with length of string given I had the following question, asking about the radius of curvature (in cm) of the string, at the bottommost point, fixed at two ends, with the angle with the vertical being $\pi/4$ at the fixed points, and length of string $40 \text{ cm}$.
My Attempt:
Let the catenary be $y=a\cosh(x)=\mathrm d^2y/\mathrm dx^2$. Then $\mathrm dy=a\sinh(x)\mathrm dx$. As per constraints, $a\sinh(x)=1$ and the length of the half string is $20\text{ cm}$. So that can be used by writing the arc length integral as follows.
$$\int_{0}^{x}\sqrt{1+a^2\sinh^2(x)}\mathrm dx=20$$
$$\boxed{r=\frac{\left(1+\left(\frac{\mathrm dy}{\mathrm dx}\right)^{2}\right)^{3/2}}{\left|\frac{\mathrm d^2y}{\mathrm dx^2}\right|}}$$
How to go about solving this? The $a$ is the cause of trouble inside the integrand. For $a=1$, it evaluates to $\sinh(x)$, but that does not satisfy the slope condition at $x$. How to proceed? Any hints are appreciated. Thanks
Edit $1$:
I would like to put forth a general version of the problem. Say we have a hanging chain of length $l$ making angle $\theta$ with the fixed supports. Knowing that it is a catenary, we can say the equation of the curve will be of the form $y=a\cosh(bx)$ with the coordinate system origin at the mid-point of the hanging chain and $x$-axis in the horizontal direction and $y$ in the vertical. If we let the $x$-coordinate at the point from the which the chain is hanging to be $x_0$, then we have the following system of equations.
$$\begin{aligned}ab\sinh(bx_0)&=\tan(\theta)\\ 2\int_{0}^{x_0}\sqrt{1+a^2b^2\sinh^2(bx)}\,\mathrm dx&=l\end{aligned}$$
Any ideas on how to solve $a,b$ in terms of $l,\theta$. Thanks
| Using your equations
Hoping that I properly understood, we have
$$y(x)=\int_{0}^{x}\sqrt{1+a^2\sinh^2(t)}\,dt=20$$
Making $t=i u$
$$\int\sqrt{1+a^2\sinh^2(t)}\,dt=i \int\sqrt{1-a^2\sin^2(u)}\,du=i E\left(u\left|a^2\right.\right)=-i E\left(i t\left|a^2\right.\right)$$
$$y(x)=-i E\left(i x\left|a^2\right.\right)=20$$
$$\frac {dy(x)}{dx}=\sqrt{1+a^2 \sinh ^2(x)} \qquad \text{and} \qquad \frac {d^2y(x)}{dx^2}=\frac{a^2 \sinh (x) \cosh (x)}{\sqrt{1+a^2 \sinh ^2(x)}}$$
$$r^2=\frac{\left(1+\left(\frac{ dy}{ dx}\right)^{2}\right)^{3}}{\left(\frac{d^2y}{ dx^2}\right)^2}=\frac{\left(a^2 \sinh ^2(x)+2\right)^3 } {\frac{a^4 \sinh ^2(x) \cosh ^2(x)}{1+a^2 \sinh ^2(x)} }$$
$$r^2=\frac 1 {a^4}\left(\text{csch}^2(x)\, \text{sech}^2(x) \left(1+a^2 \sinh ^2(x)\right) \left(2+a^2 \sinh ^2(x)\right) \right)$$
You gave as constraint
$$a\sinh(x)=1 \implies x=\sinh ^{-1}\left(\frac{1}{a}\right)\implies r^2=\frac{6}{1+a^2}$$ but we also have
$$-i E\left(i\ \text{csch}^{-1}(a)|a^2\right)=20$$
I am probably mistaken since this would lead to something close to $a=10^{-9}$
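For comparison (my addition), the standard parametrization $y = c\cosh(x/c)$ sidesteps the trouble: the slope at the support is $\sinh(x_0/c)=\tan(\pi/4)=1$ while the half arc length is $c\sinh(x_0/c)$, so $c = 20$ and the radius of curvature at the lowest point comes out to $r = c = 20\text{ cm}$.

```python
import math

# Standard catenary y = c*cosh(x/c): slope y' = sinh(x/c), arc length from the
# vertex s(x) = c*sinh(x/c).  Support condition: slope = tan(pi/4) = 1, so
# sinh(x0/c) = 1 and half-length 20 = c*sinh(x0/c) = c, giving c = 20.
c = 20.0
x0 = c * math.asinh(1.0)        # x-coordinate of the support

# Check the half arc length by midpoint-rule integration of sqrt(1 + y'^2):
n = 100000
h = x0 / n
arc = sum(math.hypot(1.0, math.sinh((i + 0.5) * h / c)) * h for i in range(n))
print(arc)                      # ~ 20

# Radius of curvature at the bottom: r = (1 + y'^2)^(3/2) / |y''| at x = 0,
# where y' = 0 and y'' = 1/c, so r = c.
r = (1 + 0.0**2)**1.5 / (1.0 / c)
print(r)                        # 20.0
```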
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3612307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prob. 4, Sec. 29, in Munkres' TOPOLOGY, 2nd ed: $[0, 1]^\omega$ with uniform topology is not locally compact Here is Prob. 4, Sec. 29, in the book Topology by James R. Munkres, 2nd edition:
Show that $[0, 1]^\omega$ is not locally compact in the uniform topology.
Here is a Math Stack Exchange (MSE) post that is of course relevant. However, here I would like to present my own attempt:
First of all, here is an MSE post of mine on $[0, 1]^\omega$ with the uniform topology.
Suppose if possible that $[0, 1]^\omega$ with the uniform metric topology is locally compact.
Let
$$ \mathbf{a} \colon= \left( \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \ldots \right). \tag{Definition 0} $$
As $[0, 1]^\omega$ is locally compact at point $\mathbf{a}$, so there exists a compact subspace $C$ of $[0, 1]^\omega$ and an open set $U$ in $[0, 1]^\omega$ such that
$$ \mathbf{a} \in U \subset C. \tag{0} $$
Now as $U$ is an open set in the uniform metric space $[0, 1]^\omega$ and as $\mathbf{a} \in U$, so there exists a real number $\delta > 0$ such that
$$ B ( \mathbf{a}, \delta ) \subset U, \tag{1} $$
where
$$ B ( \mathbf{a}, \delta ) \colon= \{ \, \mathbf{x} \in [0, 1]^\omega \, \colon \, \bar{\rho}( \mathbf{x}, \mathbf{a} ) < \delta \, \}. \tag{ Definition 1 } $$
Since reducing $\delta$ will make the set $B ( \mathbf{a}, \delta )$ smaller, we can assume without any loss of generality that our $\delta$ satisfies
$$ 0 < \delta < \frac{1}{2}. \tag{1*} $$
From (0) and (1) above we also obtain
$$ B ( \mathbf{a}, \delta ) \subset C. \tag{2} $$
Since $C$ is a compact subspace of the Hausdorff space $[0, 1]^\omega$ with the uniform metric topology, therefore $C$ is also closed in $[0, 1]^\omega$, by Theorem 26.3 in Munkres.
Now as $C$ is a closed set in $[0, 1]^\omega$ and as $B ( \mathbf{a}, \delta ) \subset C$ by (2) above, so we also have
$$ \overline{B ( \mathbf{a}, \delta ) } \subset C, $$
that is,
$$ \bar{B} ( \mathbf{a}, \delta ) \subset C, \tag{3} $$
where
$$ \bar{B} ( \mathbf{a}, \delta ) \colon= \{ \, \mathbf{x} \in [0, 1]^\omega \, \colon \, \bar{\rho}( \mathbf{x}, \mathbf{a} ) \leq \delta \, \}. \tag{ Definition 2} $$
Moreover, as $\bar{B} ( \mathbf{a}, \delta )$ is a closed set in the compact space $C$, so $\bar{B} ( \mathbf{a}, \delta )$ is also compact, by Theorem 26.2 in Munkres.
Finally, as $\bar{B} ( \mathbf{a}, \delta )$ is a compact (metrizable) space, so it is also limit point compact, by Theorem 28.1 in Munkres.
Now let us take
$$ \alpha \colon= \frac{1}{2} - \frac{\delta}{2}, \qquad \mbox{ and } \qquad \beta \colon= \frac{1}{2} + \frac{\delta}{2}. \tag{Definition 3*} $$
And, then let us define the set $A$ as
$$ A \colon= \{ \, \alpha, \beta \, \}^\omega. \tag{Definition 3 } $$
Then $A$ is an infinite subset of $\bar{B} ( \mathbf{a}, \delta )$, but $A$ has no limit points in $\bar{B} ( \mathbf{a}, \delta )$, as has been shown in my post here. This contradicts the fact that
$\bar{B} ( \mathbf{a}, \delta )$ is limit point compact.
Thus our supposition at the start of this proof is wrong. Hence $[0, 1]^\omega$ in the uniform topology is not locally compact.
Is this proof correct? Is it easy enough to understand? Or, are there issues of accuracy or clarity?
| I think the proof is fine, and easy enough. It's a generalisation of the idea to show the unit ball in $\ell^\infty$ not being compact.
Such infinite-dimensional linear-like spaces will almost never be locally compact.
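A finite truncation (my own illustration, not part of the answer) makes the key $\delta$-separation concrete: any two distinct elements of $A = \{\alpha,\beta\}^\omega$ are at uniform distance exactly $\delta = \beta - \alpha$, which is why $A$ can have no limit point. The values $\delta = 1/4$ and truncation length $6$ are arbitrary choices.

```python
import itertools

# Two distinct sequences with entries in {alpha, beta} agree or differ by
# exactly delta in each coordinate, and must differ somewhere, so their
# uniform (sup) distance is exactly delta.
delta = 0.25
alpha, beta = 0.5 - delta / 2, 0.5 + delta / 2   # 0.375 and 0.625, exact floats

points = list(itertools.product((alpha, beta), repeat=6))  # finite truncation
dists = {max(abs(u - v) for u, v in zip(p, q))
         for p, q in itertools.combinations(points, 2)}
print(dists)      # a single value: every pair of distinct points is delta apart
```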
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3612458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solution of a first order non linear differential equation I am trying to find the extrema of the integral below
$$
I= \int_0^1 y^2 \mathrm dx
$$
under the conditions
$$
\int_0^1 \left(\frac{dy}{dx}\right)^2 \mathrm dx =1
$$
and $y(0)=y(1)=0$.
Using a Lagrange multiplier $\lambda$, equivalently, I can find the extrema of the quantity
$$
I^*= \int_0^1 y^2 \mathrm dx+\lambda\int_0^1 \left(\frac{dy}{dx}\right)^2 \mathrm dx
$$
Since there is no explicit dependence on $x$, by using the Beltrami identity
$$
F-y'\frac{\partial F}{\partial y'} = C
$$
the problem above reduced to the solution of the 2nd order differential equation
$$
2\lambda (y')^2y''-\lambda (y')^{2}- y^{2}-c=0.
$$
And after the substitution $u=\mathrm dy/\mathrm dx$,
The second-order differential equation above becomes the first-order one below
$$
2\lambda u^{3}u'-\lambda u^{2}- y^{2}-c=0.
$$
where the differentiation refers to y.
And here is where I got stuck.
Can someone give me some hint on how I can proceed with that equation?
Many thanks in advance!
| The Beltrami identity cannot contain a second derivative; where would it come from? It should evaluate to
$$
y^2-λy'^2=C
$$
which easily leads to solutions in terms of trigonometric or hyperbolic functions.
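For instance (my own check, under the assumption that the lowest trigonometric mode is the relevant extremal): with $\lambda = -1/\pi^2$, the Beltrami identity $y^2 - \lambda y'^2 = C$ is satisfied by $y(x) = \frac{\sqrt 2}{\pi}\sin(\pi x)$, which also meets $y(0)=y(1)=0$ and the constraint $\int_0^1 y'^2\,dx = 1$; the general family is $y = \frac{\sqrt2}{n\pi}\sin(n\pi x)$.

```python
import math

# Candidate extremal: y = (sqrt(2)/pi) * sin(pi x), with lambda = -1/pi^2.
lam = -1.0 / math.pi**2
A = math.sqrt(2) / math.pi

def y(x):
    return A * math.sin(math.pi * x)

def yp(x):
    return A * math.pi * math.cos(math.pi * x)

# Beltrami identity y^2 - lam*y'^2 should be constant along the curve:
vals = [y(x)**2 - lam * yp(x)**2 for x in (0.1, 0.3, 0.7, 0.95)]
spread = max(vals) - min(vals)
print(spread)                    # ~ 0: the identity holds with C = A^2

# Constraint int_0^1 y'^2 dx = 1 (midpoint rule):
n = 20000
constraint = sum(yp((i + 0.5) / n)**2 for i in range(n)) / n
print(constraint)                # ~ 1

# Value of the functional: I = int_0^1 y^2 dx = A^2 / 2 = 1/pi^2.
I = sum(y((i + 0.5) / n)**2 for i in range(n)) / n
print(I, 1 / math.pi**2)
```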
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3612565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
The value of $\lim_{n\rightarrow \infty}(-0.5)^n$ is
The value of $\lim_{n\rightarrow \infty}(-0.5)^n$ is
What i have tried
As I know, $\lim_{n\rightarrow \infty}(a)^n=0$ when $0<a<1$.
If it is $\lim_{n\rightarrow \infty}(0.5)^n$, then it is $0$.
But how do I solve it for a negative base? Help me please.
| $-((0.5)^n) \le (-0.5)^n \le (0.5)^n.$
Squeeze.
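Numerically (my addition), the squeeze is visible: $(-0.5)^n$ is trapped between $-(0.5)^n$ and $(0.5)^n$, and both bounds tend to $0$.

```python
# The squeeze: -(0.5)**n <= (-0.5)**n <= (0.5)**n, and both bounds tend to 0.
for n in (1, 5, 10, 20, 50):
    print(n, -(0.5)**n, (-0.5)**n, (0.5)**n)

tail = (-0.5)**50   # already below machine-epsilon scale
```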
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3613000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$f: X \to Y$ order preserving implies $Ord(X) \leq Ord(Y)$ Let $X,Y$ be well-ordered sets and let $f:X \to Y$ be a map that preserves the strict order. I would like to prove that $Ord(X) \leq Ord(Y)$.
You can assume I know the 'basic' results about maps on well-ordered sets and segments etc.
I feel like there should be an easy, short solution.
Attempt:
Composing with isomorphisms $X \cong Ord(X), Y \cong Ord(Y)$ we get a map $f: Ord(X) \to Ord(Y)$ that preserves the strict order. So it suffices to prove:
If $f: \alpha \to \beta$ is a map between ordinals that preserves the strict order, then $\alpha \leq \beta$.
Assume to the contrary that $\beta < \alpha$. Then $\beta$ is a segment of $\alpha$. Thus there is $a \in \alpha$ with $\beta =\alpha_a$. Then I'm stuck.
Thanks in advance.
| If $\beta<\alpha$, you can view $f$ as a map from $\alpha$ into $\alpha$. $\{\xi\in\alpha:f(\xi)<\xi\}\ne\varnothing$, since clearly $f(\beta)<\beta$. Let $\eta=\inf\{\xi\in\alpha:f(\xi)<\xi\}$, and derive a contradiction with the assumption that $f$ is strictly order-preserving.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3613190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove $\forall xA(x) \to B \therefore \exists x(A(x) \to B)$. Working on P.D. Magnus. "forallX: an Introduction to Formal Logic" (p. 297, exercise C. 1):
$
\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}
\def\Ae#1{\qquad\mathbf{\forall E} \: #1 \\}
\def\Ai#1{\qquad\mathbf{\forall I} \: #1 \\}
\def\Ee#1{\qquad\mathbf{\exists E} \: #1 \\}
\def\Ei#1{\qquad\mathbf{\exists I} \: #1 \\}
\def\R#1{\qquad\mathbf{R} \: #1 \\}
\def\ii#1{\qquad\mathbf{\to I} \: #1 \\}
\def\ie#1{\qquad\mathbf{\to E} \: #1 \\}
\def\ne#1{\qquad\mathbf{\neg E} \: #1 \\}
\def\IP#1{\qquad\mathbf{IP} \: #1 \\}
\def\X#1{\qquad\mathbf{X} \: #1 \\}
$
$
\fitch{1.\, \forall xA(x) \to B}{
\fitch{2.\,\neg\exists x(A(x) \to B)}{
\fitch{3.\, A(a)}{
\fitch{4.\, \neg \forall xA(x)}{
\fitch{5.\, \neg A(a)}{
6.\, \bot \ne{3,5}
}\\
7.\, A(a) \IP{5-6}
8.\, \forall xA(x) \Ai{7}
9.\, \bot \ne{4,8}
}\\
10.\, \forall xA(x) \IP{4-9}
11.\, B \ie{1,10}
}\\
12.\, A(a) \to B \ii{3-11}
13.\, \exists x(A(x) \to B) \Ei{12}
14.\, \bot \ne{2,13}
}\\
15.\, \exists x(A(x) \to B) \IP{2-14}
}
$
I am not sure about my use of the $\mathbf{\forall I}$ rule on line 7. $A(a)$ is discharged on line 8, but $a$ appears in an open assumption on line 3. Is this proof correct?
EDIT:
From @Graham Kemp answer, I modified the proof to reach this one:
$
\fitch{1.\, \forall xA(x) \to B}{
\fitch{2.\, \neg \exists x(A(x) \to B)}{
\fitch{3.\, A(b)}{
\fitch{4.\, \neg A(a)}{
\fitch{5.\, A(a)}{
6.\, \bot \ne{4,5}
7.\, B \X{6}
}\\
8.\, A(a) \to B \ii{5-7}
9.\, \exists x(A(x) \to B) \Ei{8}
10.\, \bot \ne{2,9}
}\\
11.\, A(a) \IP{4-10}
12.\, \forall xA(x) \Ai{11}
13.\, B \ie{1,12}
}\\
14.\, A(b) \to B \ii{3-13}
15.\, \exists x(A(x) \to B) \Ei{14}
16.\, \bot \ne{2,15}
}\\
17.\, \exists x(A(x) \to B) \IP{2-16}
}
$
| You are quite correct; that is invalid.
Try building the proof with these assumptions.
$$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}\fitch{~~1.~\forall x~A(x)\to B}{\fitch{~~2.~\neg\exists x~(A(x)\to B)}{\fitch{~~3.~A(b)}{\fitch{~~4.~\neg A(a)}{\fitch{~~5.~A(a)}{~~\vdots}\\~~\vdots}\\~~\vdots}\\~~\vdots}\\~~~.~\exists x~(A(x)\to B)}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3613356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing that $\tau(t) = (t^2, t^3)$ is not a submanifold
Let $\tau : \mathbb{C} \to \mathbb{C}^2$ be the map $\tau(t) := (t^2, t^3)$. Show that $\tau$ defines an embedding map from $\mathbb{C}^*$ to $\mathbb{C}^2 \setminus \{0\}$. Is $\tau(\mathbb{C})$ a submanifold of $\mathbb{C}^2$?
Note that the definition of an embedding map is a map that is holomorphic (i.e. termwise holomorphic), injective and proper. Furthermore, a submanifold of $\mathbb{C}^2$ is defined to be the embedding map of some manifold into $\mathbb{C}^2$ such that each point has Jacobian of maximal rank.
It's clear that $\tau$ is an embedding map. I want to show that its image is not a submanifold. Intuitively, by drawing $\tau(\mathbb{C})$, one can see a "sharp point" at $(0,0)$, which suggests a singularity. Indeed, one can easily check that the Jacobian at $(0,0)$ is the zero matrix. However, this does not count as a proof, as we need to show that no such embedding map exists, not just $\tau$. I'm not sure how to proceed.
Any help is appreciated.
| HINT: if it were a submanifold then around $(0,0)$ it would be the graph of a function of $x$ or a function of $y$. Check neither is true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3613599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is my proposition correct? I have concluded (proved) the following, but haven’t seen it stated anywhere, hence would like to get it verified by you mathematicians.
Let $(a_n)_{n=m}^\infty$ be a sequence of reals for some $m\in\mathbb{Z}$ such that $(|a_n|)_{n=m}^\infty$ converges to some $L\in\mathbb{R}$. (Hence, $L\ge 0$.) Then $L$ or $-L$ (or both) is a limit point of $(a_n)_{n=m}^\infty$. Further, if $L$ is a limit point, then $L$ is the limit superior of $(a_n)_{n=m}^\infty$ and if $-L$ is a limit point, then $-L$ is the limit inferior of $(a_n)_{n=m}^\infty$.
| (Yes it's true) Hint : At least one of the two sets $A_+ := \{n : a_n > 0\}$ and $A_- := \{n : a_n \leq 0\}$ is infinite. W.L.O.G suppose that $A_+$ is infinite; then it immediately follows that $L$ is a limit point of $(a_n)_n$. And therefore $$\varlimsup (a_n)_n = \varlimsup_{n \in A_+} (a_n)_n = \lim_{n \in A_+}(a_n)_n = L. $$
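A concrete instance of the proposition (my addition): $a_n = (-1)^n\left(1+\frac1n\right)$ has $|a_n| \to 1$, both $1$ and $-1$ are limit points, and they are the limit superior and limit inferior respectively.

```python
# a_n = (-1)^n * (1 + 1/n): |a_n| -> 1, and the even/odd subsequences pick out
# the limit points 1 and -1, which are limsup and liminf respectively.
N = 10**6

def a(n):
    return (-1)**n * (1 + 1 / n)

tail = [a(n) for n in range(N, N + 1000)]
sup_tail = max(tail)   # approximates limsup: slightly above 1
inf_tail = min(tail)   # approximates liminf: slightly below -1
print(sup_tail, inf_tail)
```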
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3613777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the range of the function $\frac{3}{2-x^2}$ I'm so, so very confused about finding the ranges of real functions, no concept in Mathematics has yet confused me more than this, please tell me what's wrong in my solution for finding the range of the function : $\dfrac{3}{2-x^2}$
Here's how I do it and get a partial answer, please check it out...
$x^2 \geq 0$
$-x^2 \leq 0$
$2 - x^2 \leq 2$
$\dfrac {1}{2 - x^2} \geq \dfrac{1}{2}$
So, $\dfrac {3}{2 - x^2} \geq \dfrac{3}{2}$
So, $f(x) \geq \dfrac{3}{2}$
By this, $Range(f) = [\dfrac{3}{2}, \infty)$
But as per my textbook, the answer is $(-\infty,0)\cup[\dfrac {3}{2},\infty)$, which is (obviously) correct
What my main question here is : How can I add the proof of the negative values in the range in my proof?
I would be very, very grateful to you if you help (no exaggeration, I would be so very thankful cause this topic is frustrating me)
Also, this is a general question : Am I the only one so confused about finding domains and ranges? I mean did you, when you began, also face problems with this concept?
Thanks
| We have: $$y=f(x)=\frac{3}{2-x^2} ; x \ne \pm\sqrt{2}$$
Write $x$ as a function of $y$ as: $$x=\sqrt{2-\frac{3}{y}}$$
Now we have to find the set of values of $y$ for which $x$ is $\mathbf {real}$.
For this, we need $2-{3\over y} \ge 0$. This can be achieved in two ways:
$\mathbf {Case\ 1:}$ Substituting $x=0$ into the second equation we get $y={3\over 2}$. Hence talking of positive $y$, it can go from $3\over 2$ all the way up to $\infty$, this is because no matter how large the value of $y$ you take, $x$ will always be a little less than $\sqrt 2$, which is safe. $y$ cannot be less than $3\over 2$ because that would mean $x$ is imaginary. $\mathbf {So},\ \mathbf {y\in [{3\over 2}, \infty)}$
$\mathbf {Case\ 2}:$ We note that from the above step, we get $x\in [0,\sqrt 2)$, when $y\in [{3\over 2},\infty)$. But a little observation shows us that if $y\lt 0$, the whole expression $(2-{3\over y})$ can and will take every value $\mathbf {between}$ $2$ and $\infty$, as $y$ goes from $-\infty$ to $0$ (but never equal to $0$), thus giving $x\in (\sqrt 2,\infty)$.
Thus we have $range(f)= (-\infty,0)\cup[{3\over 2}, \infty)$ that gives us real $x$.
There is no need to care about the other possible values of $x$, as our main focus is $range(f)$, i.e. we need the values of $y$ as expressed in the second equation.
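A numeric scan (my addition) confirms the textbook range: sampled values of $f$ are either negative or at least $\frac32$, never in the gap $[0, \frac32)$.

```python
# Sample f(x) = 3 / (2 - x^2) away from the poles x = +-sqrt(2) and check
# that no value lands in the "forbidden band" [0, 3/2).
def f(x):
    return 3 / (2 - x**2)

xs = [-10 + 20 * i / 100000 for i in range(100001)]
vals = [f(x) for x in xs if abs(2 - x**2) > 1e-9]   # skip near the poles

in_gap = [v for v in vals if 0 <= v < 1.5]
pos_min = min(v for v in vals if v > 0)
print(len(in_gap))   # 0: the range misses [0, 3/2)
print(pos_min)       # 1.5, attained at x = 0
```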
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3613930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
Series Trouble / Diverging Does anyone mind explaining both of these questions, I’m stuck.
- Determine whether the sequence converges or diverges. If it converges, find the limit it converges to. $$\left\lbrace\sqrt[n]{2^{1 + 2n}}\right\rbrace_{n = 1}^\infty$$
- Find the general $n^{th}$ term of the sequence. Then determine whether the sequence converges or diverges. If it converges, find the limit it converges to. $$\left\lbrace 1, \frac42, \frac96, \frac{16}{24}, \frac{25}{120}, \cdots\right\rbrace$$
| The second one (problem 4) seems to be $u_n = \frac{n^2}{n!}$ and as $$0 \le u_n \le \frac{n}{n-1}\frac{1}{n-2} \le \frac{4}{n}$$
for $n \ge 4$, it converges to zero.
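Numerically (my addition, covering the first sequence the answer did not address): $\sqrt[n]{2^{1+2n}} = 2^{2 + 1/n} \to 2^2 = 4$, while the second, $n^2/n!$, indeed tends to $0$.

```python
import math

# First sequence: nth root of 2^(1+2n) equals 2**((1 + 2n)/n) -> 2**2 = 4.
first = [2 ** ((1 + 2 * n) / n) for n in (1, 10, 100, 10**6)]
print(first)              # tends to 4

# Second sequence: n^2 / n! -> 0 (the factorial swamps the polynomial).
second = [n**2 / math.factorial(n) for n in (1, 5, 10, 20)]
print(second)             # tends to 0
```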
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3614232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
For $f\in L^1(\mathbb{R})$, show that $\lim_{\varepsilon \to 0}\int_{-\infty}^\infty \cos(\varepsilon x)f(x) \, dx=\int_{-\infty}^\infty f(x)\,dx$ Question: For $f\in L^1(\mathbb{R})$, show that
$$\lim_{\varepsilon \rightarrow 0}\int_{-\infty}^\infty \cos(\varepsilon x)f(x)\,dx=\int_{-\infty}^\infty f(x)\,dx $$
where the integral is the Riemann integral. End Question
I first thought this was pretty easy, using the dominated convergence theorem for
$$f_n(x) = \cos\left(\frac{1}{2^n}x\right)f(x),$$
but I realized I have to first change the integral to a Lebesgue integral and then change the order of limits, i.e.
$$\lim_{n\to \infty}\lim_{A\to \infty}\int^A_{-A}f_n(x) \, dx = \lim_{A\to \infty}\lim_{n\to \infty}\int^A_{-A}f_n(x) \, dx$$
since I only proved that Riemann = Lebesgue holds for a closed interval and bounded $f$, and
the dominated convergence theorem works for the Lebesgue integral.
Can I easily change the order of limits here? Is there any general rule for changing the order of limits?
| Crucially, for an $L^1$ function, most of the mass will be concentrated on a large interval and it is this same large interval where each integrand in your sequence will have most of their mass.
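A numeric illustration (my addition) with the particular integrable function $f(x) = e^{-|x|}$: here $\int_{-\infty}^\infty \cos(\varepsilon x)e^{-|x|}\,dx = \frac{2}{1+\varepsilon^2}$, which tends to $\int_{-\infty}^\infty e^{-|x|}\,dx = 2$ as $\varepsilon \to 0$, exactly as the statement predicts.

```python
import math

# int_{-R}^{R} cos(eps x) e^{-|x|} dx by the midpoint rule.  The exact value
# over the whole line is 2/(1 + eps^2), which tends to 2 as eps -> 0; the tail
# beyond R = 40 is below e^{-40}, hence negligible.
def integral(eps, R=40.0, n=100000):
    h = 2 * R / n
    total = 0.0
    for i in range(n):
        x = -R + (i + 0.5) * h
        total += math.cos(eps * x) * math.exp(-abs(x)) * h
    return total

for eps in (1.0, 0.1, 0.01):
    print(eps, integral(eps), 2 / (1 + eps**2))   # the two columns agree

val = integral(0.01)
```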
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3614477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prob. 6, Sec. 29, in Munkres' TOPOLOGY, 2nd ed: Is this map a homeomorphism? Let $n$ be any given natural number, and let
$$ S^n \colon= \left\{ \, \left( x_1, \ldots, x_{n+1} \right) \in \mathbb{R}^{n+1} \, \colon \, \sum_{i=1}^{n+1} x_i^2 = 1 \, \right\}. $$
Let point $\mathbf{p} \in \mathbb{R}^{n+1}$ be given by
$$ \mathbf{p} \colon= \left( 0, \ldots, 0, 1 \right). $$
Then of course $\mathbf{p} \in S^n$.
Now let the map $f \colon S^n \setminus \mathbf{p} \rightarrow \mathbb{R}^n$ be given by
$$ f \left( x_1, \ldots, x_n, x_{n+1} \right) \colon= \frac{1}{1-x_{n+1} } \left( x_1, \ldots, x_n \right). $$
Is this map $f$ a homeomorphism?
My Attempt:
Let $\left( u_1, \ldots, u_n, u_{n+1} \right)$ and $\left( v_1, \ldots, v_n, v_{n+1} \right)$ be any points in $S^n \setminus \mathbf{p}$ for which
$$ f\left( u_1, \ldots, u_n, u_{n+1} \right) = f \left( v_1, \ldots, v_n, v_{n+1} \right). $$
Then we have
$$ \frac{1}{1-u_{n+1}} \left( u_1, \ldots, u_n \right) = \frac{1}{1-v_{n+1}} \left( v_1, \ldots, v_n \right). $$
So for each $i = 1, \ldots, n$, we have
$$ \frac{u_i}{1 - u_{n+1} } = \frac{v_i}{1-v_{n+1} }, $$
which is the same as
$$ \frac{u_i}{1 - \sqrt{ 1 - \sum_{j=1}^n u_j^2 } } = \frac{ v_i }{ 1 - \sqrt{ 1 - \sum_{j=1}^n v_j^2 } }, \tag{1} $$
because we have the equalities
$$ \sum_{j=1}^{n+1} u_j^2 = 1 = \sum_{j=1}^{n+1} v_j^2. $$
What next? How to show from here that
$$ \left( u_1, \ldots, u_n, u_{n+1} \right) = \left( v_1, \ldots, v_n, v_{n+1} \right)? $$
Now let $\left( y_1, \ldots, y_n \right)$ be any point in $\mathbb{R}^n$. We need to find a point $\left( x_1, \ldots, x_n, x_{n+1} \right) \in S^n \setminus \mathbf{p}$ such that
$$ f\left( x_1, \ldots, x_n, x_{n+1} \right) = \left( y_1, \ldots, y_n \right). $$
How to find such a point $\left( x_1, \ldots, x_n, x_{n+1} \right) \in S^n \setminus \mathbf{p}$?
We find that if the map $g \colon \mathbb{R}^{n+1} \setminus \mathbf{p} \rightarrow \mathbb{R}^n$ given by
$$ g \left( x_1, \ldots, x_n, x_{n+1} \right) \colon= \frac{1}{1-x_{n+1} } \left( x_1, \ldots, x_n \right). $$
is continuous, then the restriction of $g$ to the subset $S^n \setminus \mathbf{p}$ of $\mathbb{R}^{n+1} \setminus \mathbf{p}$ is also continuous, and this restriction is of course our map $f$.
How to rigorously show that the map $g$ is indeed continuous?
Finally, how to show that $f^{-1}$ is also continuous? Equivalently, how to show that $f$ is an open (or closed) map?
| Goal of this Answer
This isn't a complete solution, rather it serves as some notes to help you get over some of the humps in this analysis. I will cover
1. Injection of $f$
2. Surjection of $f$
3. Obtaining $f^{-1}$
4. Small Conclusion
hope you find this helpful.
Injection
Use the fact that
$$\sum_{i=1}^{n+1}u_{i}^2 =1 $$
to prove this.
We want to prove that
$$\frac{u_i}{1-u_{n+1}}=\frac{v_i}{1-v_{n+1}} \to u_i=v_i$$
So to do this square both sides of the equation:
$$\frac{u_i^2}{(1-u_{n+1})^2}=\frac{v_i^2}{(1-v_{n+1})^2}$$
and then sum both sides
$$\frac{\sum_{i=1}^{n}u_i^2}{(1-u_{n+1})^2}=\frac{\sum_{i=1}^{n}v_i^2}{(1-v_{n+1})^2}$$
to get
$$\frac{1-u_{n+1}^2}{(1-u_{n+1})^2}=\frac{1-v_{n+1}^2}{(1-v_{n+1})^2}$$
which using some difference of squares gives us:
$$\frac{1-u_{n+1}}{1+u_{n+1}}=\frac{1-v_{n+1}}{1+v_{n+1}}$$
from here this is similar to proving that $h(x)=\frac{1-x}{1+x}$ is injective. After you prove that $u_{n+1}=v_{n+1}$, everything else follows from the identities given.
Surjection
We want to prove that for each fixed $(a_1,\ldots,a_n)\in\mathbb{R}^n$ we can find a point $(u_1,...,u_{n+1})\in S^n \setminus \mathbf{p}$ such that:
$$\frac{u_i}{1-u_{n+1}}=a_i \quad \text{for } i=1,\ldots,n$$
The explicit inverse constructed next does exactly this.
Inverse Function
To find the inverse function, we start with the identity:
$$y_i = \frac{u_i}{1-u_{n+1}}$$
The goal here is to write $u_i = g_i(y_1,...,y_n)$.
The problem in our way is that $u_{n+1}$ is residual information from a larger space, so we first need to express it in terms of the coordinates on $\mathbb{R}^n$. To be specific, we need to find $g_{n+1}$ where $$u_{n+1} = g_{n+1}(y_1,...,y_n)$$
to do this we use a similar trick to what we did with the injection to obtain:
$$\sum_{i=1}^n y_i^2= \frac{1+u_{n+1}}{1-u_{n+1}}$$
using a similar trick to proving the surjectivity of $h(x)=\frac{1+x}{1-x}$ we get
$$u_{n+1}= \frac{\sum_{i=1}^n y_i^2-1}{\sum_{i=1}^n y_i^2+1}$$
using this and
$$1-u_{n+1}= \frac{2}{\sum_{i=1}^n y_i^2+1}$$
you can get your inverse function.
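As a sanity check (not part of the original derivation), here is a small Python sketch that tests the derived formulas numerically for $n=2$, the sphere $S^2$ with north pole $\mathbf{p}=(0,0,1)$; the names `f` and `f_inv` are ours.

```python
import random

# f is the stereographic projection; f_inv uses the two identities derived
# above: u_{n+1} = (s-1)/(s+1) and 1 - u_{n+1} = 2/(s+1), where s = sum y_i^2.
def f(u):
    return [ui / (1 - u[-1]) for ui in u[:-1]]

def f_inv(y):
    s = sum(yi * yi for yi in y)
    scale = 2 / (s + 1)               # this is 1 - u_{n+1}
    return [yi * scale for yi in y] + [(s - 1) / (s + 1)]

random.seed(0)
for _ in range(100):
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    u = f_inv(y)
    assert abs(sum(ui * ui for ui in u) - 1) < 1e-12        # u lies on S^2
    assert all(abs(a - b) < 1e-9 for a, b in zip(f(u), y))  # f(f_inv(y)) = y
```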
What is left?
After that all you need to do is prove:
* Continuity of $f$
* Continuity of $f^{-1}$
* Surjection of $f^{-1}$
and you are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3614658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Trouble with $4\times4$ matrix determinant $$
\begin{vmatrix}
1 & -6 & 7 & 5 \\
0 & 0 & 3 & 0 \\
3 & -2 & -8 & 6\\
2 & 0 & 5 & 4\\
\end{vmatrix}
$$
Clearly I want to expand along the second row yielding:
$((-1)^{2+3})\cdot 3 = -3$ times the following determinant
$$
\begin{vmatrix}
1 & -6 & 5 \\
3 & -2 & 6 \\
2 & 0 & 4 \\
\end{vmatrix}
$$
which then breaks down into smaller determinants (expanding along the third row):
2 times
$$
\begin{vmatrix}
-6 & 5 \\
-2 & 6 \\
\end{vmatrix}
$$
and 4 times
$$
\begin{vmatrix}
1 & -6 \\
3 & -2 \\
\end{vmatrix}
$$
which should come out to be $-3[(2(-36+10))+(4(-2+18))]$
$-3[(2(-16))+(4(16))]$
$-3(-32+64)=32 \times -3$
but the answer is $-36$; I don't know what went wrong.
| Another way to approach this 4x4 matrix is to row reduce to a triangular matrix with zeros underneath the diagonal.
\begin{bmatrix}1&-6&7&5\\0&0&3&0\\3&-2&-8&6\\2&0&5&4\end{bmatrix}
Row reducing to the triangular matrix yields:
\begin{bmatrix}1&-6&7&5\\0&16&-29&-9\\0&0&3&0\\0&0&0&\frac{3}{4}\end{bmatrix}
Note that the reduction uses one row swap, which flips the sign of the determinant. From here, we multiply the entries on the diagonal and restore that sign:
$-(1 \cdot 16 \cdot 3 \cdot \frac{3}{4}) = -36$
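A brute-force cofactor expansion (a quick Python sketch, not part of either solution) confirms the value $-36$ and also locates the slip in the question's arithmetic: the $2\times 2$ minor is $-36+10=-26$, not $-16$.

```python
# Laplace (cofactor) expansion along the first row; fine for small matrices.
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[1, -6, 7, 5],
     [0, 0, 3, 0],
     [3, -2, -8, 6],
     [2, 0, 5, 4]]
assert det(A) == -36
assert det([[-6, 5], [-2, 6]]) == -26   # the question evaluated this as -16
```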
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3614807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Constructing an equilateral triangle of a given side length inscribed in a given triangle
I am trying to solve the problem of constructing, with straightedge and compass, an equilateral triangle of given side length $a$ inscribed in a given triangle.
I found this post "Inscribe an equilateral triangle inside a triangle" and this other post "How to draw an equilateral triangle inscribed in another triangle?" but the construction must be made with straightedge and compass, using simple constructions such as arcs, parallel lines, perpendicular lines and that kind of thing.
I tried constructing the arcs capable of $120^{\circ}$ on the sides of the given triangles and noticed that the centers of the arcs form an equilateral triangle, but I don't know what to do after that.
| I believe the following diagrams and incorporated explanation will suffice. Let me know if it is not clear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3614940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Good way of explaining how to find all solutions to $\sin\theta+a=0$ I'm not sure exactly how to properly describe this trigonometry problem, so I will just write it out. It is less of a "problem" and more of a "I can't find a really good way to explain it".
We want to find a value $\theta$ such that $$\sin\theta + a = 0,$$ where $|a| < 1$. Subtracting $a$ from both sides gives us $\sin\theta = -a$, and one solution is easily seen to be $\theta = \arcsin(-a)$. However, because $-1 < -a < 1$, there should be two possible values of $\theta$ that give $\sin\theta = -a$. I'll call the one we already found $\theta_1$, and the one we are looking for $\theta_2$.
The best explanation I could think of is, when looking at the unit circle, $\sin\theta_1$ is considered the $y$ coordinate on the circle, while $\cos\theta_1$ is the corresponding $x$ coordinate. However, if you reflect across the $y$ axis, that other angle will also satisfy $\sin\theta_2 = -a$. This reflection across the $y$ axis corresponds to $\theta_2 = \pi - \theta_1$, and this is correct, because $$\sin\theta_2 = \sin(\pi - \theta_1) = \sin\pi\cos\theta_1 - \cos\pi\sin\theta_1 = \sin\theta_1 = -a.$$ So we have our two solutions: $$\theta_1 = \arcsin(-a) \\ \theta_2 = \pi - \arcsin(-a).$$
My problem is I feel like my solution is too "wordy" and not elegant enough. Maybe it's not possible, but is there a better, more analytic way of deriving this result? Since $|\arctan x| \le \pi/2$, I feel like it would be impossible to easily derive $\theta_2$ like we did $\theta_1$, since $|\theta_2| \ge \pi/2$, so there is no way for $\arctan x$ to ever map to that value.
| One way of deriving this result is graphically: if we plot the graphs $y = \sin x$ and $y=a$, we see the following behaviour
It's easy to infer a pattern from this graph: $a$ equals $\sin x$ when $$x \in \{... -\pi - \arcsin a, 0 + \arcsin a, \pi - \arcsin a, 2\pi + \arcsin a ...\}$$
the pattern is thus
$$x = n\pi + (-1)^n \arcsin a, \quad n \in \mathbb{Z}$$
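A quick numerical check of this family (a Python sketch; the value $a=-0.37$ is an arbitrary sample):

```python
import math

# Every member of x = n*pi + (-1)^n * arcsin(a) should satisfy sin(x) = a.
a = -0.37
for n in range(-10, 11):
    x = n * math.pi + (-1) ** n * math.asin(a)
    assert abs(math.sin(x) - a) < 1e-12
```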
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3615097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to convert the probability $P/Q$ to $P\cdot Q^{-1}$ where $Q$ is co-prime with the modulus I was dealing with probability in programming but I was stuck on the final answer part.
Below is the statement describing the format in which I have to give the answer:
Can you find the probabilities? It can be proved that each of these values can be expressed as a fraction $P/Q$, where $P$ and $Q$ are integers ($P\geq 0$, $Q>0$) and $Q$ is co-prime with $998{,}244{,}353$. You should compute $P\cdot Q^{-1}$ modulo $998{,}244{,}353$ for each of these values.
Below is the probability that can be calculated on paper, and the second line is the answer I have to print. Can anyone explain how I should calculate $P\cdot Q^{-1}$?
For 1st Input
Calculated Probability = 1/4
Answer : 748683265
For 2nd Input
Calculated probabilities = 1/16, 3/16, 3/16, 9/16
Answer that was given = 436731905 935854081 811073537 811073537
If anything is unclear then please comment, as I am new to the community and not very good at asking questions.
Thank you in advance for your replies.
| // p = 998,244,353 is prime, so by Fermat's little theorem n^(p-2) = n^(-1) (mod p).
// Use 64-bit integers: for this modulus, intermediate products like x*x overflow 32-bit int.
const long long p = 998244353;

// Computes x^y mod p by binary (fast) exponentiation in O(log y)
long long power(long long x, long long y, long long p)
{
    long long res = 1; // Initialize result
    x = x % p;         // Reduce x if it is more than or equal to p
    while (y > 0)
    {
        // If the current bit of y is set, multiply x into the result
        if (y & 1)
            res = (res * x) % p;
        y = y >> 1;        // y = y/2
        x = (x * x) % p;   // square the base
    }
    return res;
}

// Returns n^(-1) mod p (valid since p is prime and p does not divide n)
long long modInverse(long long n, long long p)
{
    return power(n, p - 2, p);
}
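For comparison, the same computation in Python is a one-liner via the built-in three-argument `pow`; the function name `mod_frac` is ours. The asserted values reproduce exactly the expected answers quoted in the question (the second set matches as a multiset).

```python
P = 998244353   # prime, so Fermat's little theorem gives Q^(P-2) = Q^(-1) (mod P)

def mod_frac(p_num, q_den, mod=P):
    """Return p_num * q_den^(-1) modulo `mod`."""
    return p_num * pow(q_den, mod - 2, mod) % mod

assert mod_frac(1, 4) == 748683265              # first input
assert sorted(mod_frac(p, 16) for p in (1, 3, 3, 9)) == \
       sorted([436731905, 935854081, 811073537, 811073537])   # second input
```

Since Python 3.8, `pow(q_den, -1, mod)` also computes the modular inverse directly, without relying on the modulus being prime.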
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3615223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
$12$ men can finish a job in $16$ days. $5$ men work at the start; and after 8 days, 3 men were added. How many days needed to finish the whole job?
Twelve men can finish a job in 16 days. 5 men were working at the start and after 8 days, 3 men were added. How many days will it take to finish the whole job?
Solution:
So, the job will take $12 \times 16 = 192$ man days to finish.
In the first $8$ days we have done $8 \times 5 = 40$ man days.
Now we are doing $8 \times 8 = 64$ man days per day and need to do the remaining $192 - 40 = 152 $man days.
* Day $1$: $152 - 64 = 88$ man days left
* Day $2$: $88 - 64 = 24$
So, we’ll finish on day $3$ after the extra $3$ men are added.
So based on my solution, I came up with $11$ days but I feel like it's wrong. Can someone point out my mistakes if any?
| $12$ men can finish the job in $16$ days.
In $8$ days, $24$ men can finish the job.
So $5$ men would've finished $\frac 5{24}$ of the job within those $8$ days, leaving $\frac{19}{24}$ of the job remaining.
Add $3$ men, you get $8$ men now. $8$ men would take $24$ days to finish the job.
So the reinforced workforce would take an additional $\frac{19}{24} \times 24 = 19$ days to finish up.
The total time from the start is $8 + 19 = 27$ days.
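The same man-days bookkeeping can be checked with a short day-by-day simulation (a sketch, not part of the answer):

```python
# Whole job = 12 men * 16 days = 192 man-days; 5 men for 8 days, then 8 men.
total = 12 * 16
done = 5 * 8
days = 8
while done < total:
    done += 5 + 3          # 8 men contribute 8 man-days per day
    days += 1
assert days == 27 and done == total
```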
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3615378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 4
} |
How to express matrix multiplication as the sum of individual elements? I want to express $$\sum_{i=1}^N \sum_{j=1}^N a_{ij} x_i x_j$$ as matrix/vector multiplication. I've managed to get that the expression above is equal to $$x'Ax$$ where x is a vector, by expanding the sum but it took a while. Is there a faster way, which I can use to calculate it (and also be able to express matrix multiplication as a sum)?
| Yes, it depends upon what values of $a_{ij}$ and $x$ you choose. Relate your question to this example: suppose $x_i = 1$ and $x_j = 1$ for every value of $i$ and $j$, and $$a_{ij} = \frac{a_i a_j }{i+j+1}$$ where $1\le i,j\le N$ and $a_i, a_j \in \Bbb N$. Then $$\sum_{i=1}^N \sum_{j=1}^N a_{ij} x_i x_j = \sum_{i=1}^N \sum_{j=1}^N \frac{a_i a_j }{i+j+1} = \int _{0}^{1} f(x)^2 \, dx $$ where $f(x) = a_1x + a_2 x^2 +\cdots+ a_N x^N $. You can relate $x' A x$ to the standard inner product of $Ax$ and $x$ in real space.
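For the general identity the question asks about, a quick numerical check (plain Python with random data; all names are ours) confirms that the double sum agrees with $x'Ax$:

```python
import random

# sum_{i,j} a_ij x_i x_j should equal x^T (A x), with A applied first.
random.seed(1)
N = 6
A = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
x = [random.uniform(-1, 1) for _ in range(N)]

double_sum = sum(A[i][j] * x[i] * x[j] for i in range(N) for j in range(N))
Ax = [sum(A[i][j] * x[j] for j in range(N)) for i in range(N)]
xTAx = sum(x[i] * Ax[i] for i in range(N))
assert abs(double_sum - xTAx) < 1e-12
```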
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3615554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the Fourier series for $f(\theta)=\theta^2$ and prove $\Sigma^\infty_{n=1} \frac{1}{n^2}=\frac{\pi^2}{6}$ Find the Fourier series for $f(\theta)=\theta^2$ and use Parseval's identity for $f$ to derive the identity:
$$\sum^\infty_{n=1} \frac{1}{n^2}=\frac{\pi^2}{6}$$
In addition, find the expansion for $f$ in terms of the functions $\{1, \cos(2\pi \theta), \sin(2\pi \theta), \cos(4\pi \theta),....\}$
Here is a link explaining Parsevals identity:
https://en.wikipedia.org/wiki/Parseval%27s_identity
Can somebody help me with this one? There seems to be a lot going on and a lot of these ideas are really new to me. I'm trying to follow a proof we got in class that we can use the Fourier expansion of $f(x)=\theta$ on $[0,1)$ to show that
$$\sum^\infty_{n=1} \frac{1}{n^2}=\frac{\pi^2}{6}$$
But the proof is sort of confusing, and I don't see how I could adapt it. I'd really appreciate some help on this one! Thanks MSE!!
| You don't need Parseval (which will square your coefficients so it will give you something with $n^4$). From this question,
$$
f(x)=\frac{\pi^2}{3}+4 \ \sum_{n=1}^{+\infty} \frac{(-1)^n}{n^2} \ \cos(nx).
$$
As $f$ is differentiable everywhere, we have pointwise convergence. Then, evaluating at $\pi$,
$$
\pi^2=f(\pi)=\frac{\pi^2}{3}+4 \ \sum_{n=1}^{+\infty} \frac{(-1)^n}{n^2} \ \cos(n\pi)
=\frac{\pi^2}{3}+4 \ \sum_{n=1}^{+\infty} \frac{1}{n^2} .
$$
Then
$$
\sum_{n=1}^{+\infty} \frac{1}{n^2} =\frac14\,\left(\pi^2-\frac{\pi^2}3\right)=\frac{\pi^2}6.
$$
If you were to use Parseval for this function, with constant term $a_0=\frac{\pi^2}{3}$ and coefficients $a_n=\frac{4(-1)^n}{n^2}$ from the series above, what you get is
$$
\frac{2\pi^5}5=\int_{-\pi}^\pi (t^2)^2\,dt=\pi\,\left(2a_0^2+\sum_{n=1}^\infty a_n^2\right)=\pi\left(\frac{2\pi^4}{9}+\sum_{n=1}^\infty\frac{16}{n^4}\right).
$$
Solving, you get
$$
\sum_{n=1}^\infty\frac{1}{n^4}=\frac1{16}\left(\frac{2\pi^4}5-\frac{2\pi^4}9\right)=\frac{\pi^4}{90}
$$
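Both identities are easy to sanity-check numerically with partial sums (a Python sketch):

```python
import math

# Partial sums of both series derived above.
N = 200_000
s2 = sum(1 / n**2 for n in range(1, N + 1))
s4 = sum(1 / n**4 for n in range(1, N + 1))
assert abs(s2 - math.pi**2 / 6) < 1e-4     # tail of 1/n^2 is about 1/N
assert abs(s4 - math.pi**4 / 90) < 1e-10   # tail of 1/n^4 is about 1/(3N^3)
```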
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3615779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Density of $X_1 + \cdots+X_n$ when $X_i$'s are independent $U(-1,1)$ variables How can we show for independent random variables uniformly distributed over $(-1,1)$ that $X_1 + \cdots+X_n$
has density
$$\pi^{-1}\int^\infty_0 \left(\frac{\sin t}{t}\right)^n \cos tx \; dt \textrm{ for }n \geq 2\text{?} $$
This is Problem 26.6 from the chapter "Convergence of Distributions" of Billingsley (3rd edition).
I would really appreciate it if you could show it analytically.
| Take $Y_n:=\sum_{k=1}^nX_k$
Compute the characteristic function of $Y_n$ :
we know that $\varphi_{X_1}(x)=\frac{\sin(x)}{x}$ if $x \neq 0,$ and $\varphi_{X_1}(0)=1,$ using independence we obtain that $\varphi_{Y_n}(x)=\frac{\sin^n(x)}{x^n}$ if $x \neq 0$ and $\varphi_{Y_n}(0)=1.$
Notice that $\varphi_{Y_n}$ is a real, even function; this is what reduces the inversion integral to a cosine integral below.
Show it is an integrable function:
notice that $n \geq2,$ we have
$$\int_{\mathbb{R}}\frac{|\sin^n(x)|}{|x^n|}dx=\int_{|x| \leq 1}\frac{|\sin^n(x)|}{|x^n|}dx+\int_{|x|>1}\frac{|\sin^n(x)|}{|x^n|}dx\leq \int_{|x| \leq 1}1dx+\int_{|x|>1}\frac{1}{|x|^n}dx<\infty.$$
Conclude using Fourier inversion formula:
Then $Y_n$ has a density such that $$f_{Y_n}(x)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-ixy}\frac{\sin^n(y)}{y^n}dy=\frac{1}{\pi}\int_{0}^{+\infty}\frac{\sin^n(y)}{y^n}\cos(xy)dy.$$
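As a numerical sanity check (not part of the answer): for $n=2$ the sum of two $U(-1,1)$ variables has the triangular density $(2-|x|)/4$ on $[-2,2]$, and a crude quadrature of the formula reproduces it. The cutoff `T` and step count below are arbitrary choices; the tail beyond `T` is $O(1/T)$ since the integrand decays like $1/t^2$.

```python
import math

def integrand(t, x, n):
    return (math.sin(t) / t) ** n * math.cos(t * x) if t else 1.0

def density(x, n=2, T=1000.0, steps=200_000):
    # trapezoid rule on (0, T]; the integrand extends continuously to t = 0
    h = T / steps
    s = 0.5 * (integrand(0.0, x, n) + integrand(T, x, n))
    s += sum(integrand(k * h, x, n) for k in range(1, steps))
    return s * h / math.pi

assert abs(density(0.0) - 0.5) < 1e-3    # triangular density: (2 - 0)/4
assert abs(density(1.0) - 0.25) < 1e-3   # (2 - 1)/4
assert abs(density(3.0)) < 1e-3          # outside the support [-2, 2]
```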
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3615897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
mapping class group of the real projective plane In most literature I've read about the mapping class group, I found that many authors have stated without any explanation that any homeomorphism of a real projective 2-space to itself is isotopic to the identity. I'm guessing it is obvious but I can't seems to come up with a sound explanation, this is what i have:
$\mathbb{R}P^2$ can be constructed by gluing the boundary of a disk $D^2$ to the boundary of a Möbius band; since the mapping class groups of $D^2$ and the Möbius band are both trivial, the mapping class group of $\mathbb{R}P^2$ is trivial.
The reason this doesn't seem sound is that the Klein bottle can be constructed by gluing two Möbius bands together along their boundaries, but the mapping class group of the Klein bottle is not trivial.
| The free mapping class group (that is, the path components of its full group of homeomorphisms) of the Möbius band $M$ is not trivial: it has a self-homeomorphism $h \colon M \to M$ whose effect $h_* \colon H_1(M;\mathbb{Z}) \to H_1(M;\mathbb{Z})$ on the first homology group (which is infinite cyclic) is multiplication by $-1$.
The projective plane $\mathbb{R}P^2$ can be modeled as the $2$-disk $D^2$ with each pair of antipodal points on its boundary $S^1$ identified. From this model it's easy to see that there is an isotopy of the identity $\mathrm{id} \colon \mathbb{R}P^2 \to \mathbb{R}P^2$ to a homeomorphism taking the generator of $H_1(\mathbb{R}P^2;\mathbb{Z})$ (as a simple closed curve) to itself going in the reverse direction. (Just rotate the $2$-disk by $180^\circ$.) This shows that the inclusion of a Möbius band into $\mathbb{R}P^2$ can be reversed by an isotopy within $\mathbb{R}P^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3616057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
ODE help with initial conditions The equation is $y'' + 9y = \cos(2x)$ with initial conditions $f(0)=1, f'(0)=0$.
Applying the Laplace transform to both sides and simplifying I reach the form
$$ L(y) = \frac{s^3 + 5s}{(s^2 + 4)(s^2 + 9)}, $$
which I then further simplify in the following steps
$$ L(y) = \frac{s(s^2 + 5)}{(s^2 + 4)(s^2 + 9)} $$
$$ L(y) = \frac{s(s^2 + 4 + 1)}{(s^2 + 4)(s^2 + 9)}$$
$$ L(y) = \frac{s}{s^2 + 9} + \frac{s}{(s^2 + 4)(s^2 + 9)}. $$
I know the first term to be the Laplace of $\cos(3x)$ , but when I solve for the other term I reach a result of
$$ L^{-1}\biggl[\frac{s}{(s^2 + 4)(s^2 + 9)}\biggr] = \frac{4}{5}\cos(3x) + \frac{1}{5}\cos(2x) $$
Giving a final result of
$ y = \cos(3x) + \frac{4}{5}\cos(3x) + \frac{1}{5}\cos(2x) $
Which I know to be incorrect after approaching the problem with the method of undetermined coefficients. Any help would be appreciated.
| $$y'' + 9y = \cos(2x)$$
$$\implies r=\pm3i$$
$$y(x)=c_1 \cos 3x + c_2 \sin 3x$$
Then for the particular solution
$$y_p=A\cos (2x) \implies A=\frac 15$$
The solution is therefore:
$$y(x)=c_1 \cos (3x) + c_2 \sin (3x) +\dfrac 15 \cos (2x)$$
Apply initial conditions:
$$c_2=0 \text { and } c_1=\frac 45$$
So that:
$$y(x)=\frac 45 \cos (3x)+\dfrac 15 \cos (2x)$$
Note that you have a little mistake in the fraction decomposition
$$ g(s)=\frac{s}{(s^2 + 4)(s^2 + 9)}=\frac s 5 \left(\frac{1}{(s^2 + 4)}-\frac{1}{(s^2 + 9)} \right)$$
$$\implies g(x)=\frac 1 5 (\cos (2x)-\cos(3x))$$
And you get the same answer:
$$y(x)=\frac 45 \cos (3x)+\dfrac 15 \cos (2x)$$
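A quick finite-difference check (a Python sketch) confirms that this solution satisfies the equation and both initial conditions:

```python
import math

# y(x) = (4/5)cos(3x) + (1/5)cos(2x) should satisfy y'' + 9y = cos(2x),
# y(0) = 1, y'(0) = 0.
def y(x):
    return 0.8 * math.cos(3 * x) + 0.2 * math.cos(2 * x)

h = 1e-5
assert abs(y(0) - 1) < 1e-12                         # y(0) = 1
assert abs((y(h) - y(-h)) / (2 * h)) < 1e-6          # y'(0) = 0
for x in (0.0, 0.7, 1.9, -2.3):
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # central 2nd difference
    assert abs(y2 + 9 * y(x) - math.cos(2 * x)) < 1e-4
```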
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3616410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why is $f(x)g'(x)+f'(x)g(x)$ a linear map? Assume $f:E \rightarrow \mathbb{R}^m$ and $g:E \rightarrow \mathbb{R}$, and $E \in \mathbb{R}^n$ is open. Assume $x \in E$, and $f$ and $g$ are differentiable at $x$. To get the derivative of $f(x)g(x)$, I get a product rule version, namely $f'(x)g(x)+g'(x)f(x)$.
According to the definition of a derivative (in Rudin's Principles of Mathematical Analysis), $f'(x)g(x)+g'(x)f(x)$ has to be a linear map. But why is it a linear map? Doesn't that imply we need $g$ and $f$ to be linear as well?
| The derivative of $f$ at $x_0$ is a linear map $A$ such that $f(x) - f(x_0) - A(x-x_0) = o(x-x_0)$ as $x \to x_0$. That linear map is what we call $f'(x_0)$. The actual mapping $x_0 \mapsto f'(x_0)$ is usually not linear.
Now when you look at $fg$, the situation is essentially the same. You get a linear map $A$ such that $f(x) g(x) - f(x_0) g(x_0) - A(x-x_0)=o(x-x_0)$ as $x \to x_0$. The product rule winds up telling you that the linear map is $f'(x_0) g(x_0) + f(x_0) g'(x_0)$. You should try to be careful about signatures to make sure this makes sense: $f'(x_0)$ is a map from $\mathbb{R}^n$ to $\mathbb{R}^m$ and $g(x_0)$ is a scalar, so that is the right type of thing (you multiply by $g(x_0)$ after applying $f'(x_0)$ to the input). Now $f(x_0)$ is a fixed vector in $\mathbb{R}^m$ and $g'(x_0)$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}$, so that is also the right type of thing (you multiply by $f(x_0)$ after applying $g'(x_0)$ to the input).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3616538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is $ f(z)= |z|.\bar z $ analytic? I want to know whether it is analytic - and if so, to find $ f´(z)$
What I do:
I use the polar form.
$ z= x + iy, \qquad |z|=r, \qquad \bar z= r(\cos (\phi) - i\sin(\phi))$
then: $ f(z)= r^2(\cos (\phi) - i\sin(\phi)) = r^2\cos (\phi) - i r^2\sin(\phi)$
Cauchy-Riemann:
$\begin{cases}
\dfrac{\partial u}{\partial r} = \dfrac{1}{ r}\dfrac{\partial v}{\partial \phi} \\[2ex] \dfrac{\partial v}{\partial r} = -\dfrac{1}{ r}\dfrac{\partial u}{\partial \phi}
\end{cases} \quad \Rightarrow \quad
\begin{cases}
2r\cos \phi= -r\cos\phi \\[2ex] -2r\sin \phi=r\sin\phi
\end{cases} $
So the Cauchy-Riemann equations hold only when $r=0$ (for every $\phi$). Is $f$ differentiable only there?
Any help would be greatly appreciated. Thanks!
| Put $z=re^{i\theta}$ in polar coordinates; then $f(re^{i\theta})=r^2(\cos\theta-i\sin\theta)$.
You can see that the real part $u(r,\theta)=r^2\cos\theta$ and imaginary part $v(r,\theta)=-r^2\sin\theta$ satisfy the CR-equations
$ru_r=v_\theta, \quad u_\theta=-rv_r$ iff
$3r^2\cos\theta=0$ and $3r^2\sin\theta=0$, i.e. $r=0$.
Since the CR-equations hold merely at the point $r=0$ (the origin) and not in any open neighborhood of it, $f$ is differentiable at $z=0$ but not analytic.
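Both halves of this conclusion can be seen numerically (a Python sketch; the sample points are arbitrary choices): the difference quotient at $0$ has modulus $|z|$ and so tends to $0$, while at $z=1$ the quotient has different limits along the real and imaginary directions.

```python
# f(z) = |z| * conj(z); (f(z) - f(0))/z = |z| conj(z)/z has modulus |z| -> 0.
f = lambda z: abs(z) * z.conjugate()

for r in (1e-1, 1e-4, 1e-8):
    z = r * complex(0.6, 0.8)                    # |0.6 + 0.8i| = 1
    assert abs(abs(f(z) / z) - r) < 1e-12        # quotient modulus equals |z|

# At z = 1 the directional quotients disagree, so f is not differentiable there.
h = 1e-6
q_real = (f(1 + h) - f(1)) / h                   # tends to 2 along the real axis
q_imag = (f(1 + 1j * h) - f(1)) / (1j * h)       # tends to -1 along the imaginary axis
assert abs(q_real - 2) < 1e-4 and abs(q_imag + 1) < 1e-4
```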
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3616655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Prove that $\mathrm{rank}(A)=\dim(\mathbb{Q}\otimes_{\mathbb{Z}}A)$ Call a subset $X$ of an abelian group $A$ independent if, whenever $\Sigma m_ix_i=0$, where $m_i\in \mathbb{Z} $ and almost all $m_i =0$, then $m_i = 0$ for all $i$. Define $\mathrm{rank}(A)$ to be the number of elements in a maximal independent subset of $A$.
Prove that $\mathrm{rank}(A)=\dim(\mathbb{Q}\otimes_{\mathbb{Z}}A)$ and conclude that every two maximal independent subsets of $A$ have the same number of elements.
My attempt: $\mathrm{rank}(A)$ counts the free (non-torsion) part of $A$. So, does $\mathbb{Q}\otimes_{\mathbb{Z}}A$ kill the torsion part of $A$? And why?
| A priori, the general element of $\Bbb Q\otimes A$ is a rational linear combination of elements of the form $q\otimes a$ with $q\in\Bbb Q$ and $a\in A$. As $$q\otimes a=q\cdot 1\otimes a$$ and $$\frac nm\cdot 1\otimes a+\frac rs\cdot 1\otimes b=\frac 1{ms}\cdot 1\otimes(nsa+rmb)$$
we can write each element of $\Bbb Q\otimes A$ more specifically in the form $q\cdot 1\otimes a$ with $q\in \Bbb Q$ and $a\in A$.
If $\{x_i\}_{i\in I}$ are independent in $A$, then $\{1\otimes x_i\}_{i\in I}$ are linearly independent in $\Bbb Q\otimes A$. Indeed, if $\sum q_i\cdot 1\otimes x_i=0$ (with almost all $q_i=0$), then with $N$ as common denominator of all $q_i$, we have $n_i:=Nq_i\in\Bbb Z$ and $$\begin{align}0&=N\sum (q_i\cdot 1\otimes x_i)\\&=\sum(n_i\cdot 1\otimes x_i)\\&=\sum 1\otimes n_ix_i\\&=1\otimes\sum n_ix_i\end{align}$$
From $1\otimes\sum n_ix_i=0$ in $\Bbb Q\otimes A$ we conclude that $\sum n_ix_i$ is a torsion element, i.e. $M\cdot \sum n_ix_i=0$ holds in $A$ for some nonzero integer $M$. Then with $m_i:=Mn_i$, $\sum m_ix_i=0$ and so all $m_i$ are $=0$ and also all $q_i=0$, as was to be shown.
Conversely, let $\{\alpha_i\}_{i\in I}$ with $\alpha_i\in\Bbb Q\otimes A$ be linearly independent. As seen above, we can write $\alpha_i=q_i\cdot 1\otimes a_i$ with $a_i\in A$ and $q_i\in \Bbb Q$. Of course, $q_i\ne 0$ for our linearly independent family. If we multiply each $\alpha_i$ by a non-zero rational, we still have a linearly independent family. Hence we may assume wlog $\alpha_i=1\otimes a_i$.
Then the $\{a_i\}_{i\in I}$ are independent. Indeed, if $\sum m_ia_i=0$ (with almost all $m_i=0$), then
$$\begin{align}0&=1\otimes 0
\\&=1\otimes \sum m_ia_i\\
&=\sum 1\otimes m_ia_i\\
&=\sum m_i\cdot 1\otimes a_i\\&=\sum m_i\alpha_i \end{align}$$
and hence all $m_i=0$, as was to be shown.
Specifically, if $a\in A$ is a torsion element and $ma=0$ for some non-zero integer $m$, then
$$1\otimes a=\frac 1m\otimes ma=\frac1m\otimes 0=0. $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3616793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\frac{3}{5} + i\frac{4}{5}$ isn't a root of unity Intuitively, I see why $\frac{3}{5} + i\frac{4}{5}$ is not a root of unity because $\frac{2\pi}{\arctan(4/3)}$ appears to be irrational when I plug it into my calculator. But how do I show this rigorously? I think contradiction should work, but still I wasn't able to show this rigorously. Any help will be appreciated. (Also: I first tried using the idea that roots of unity have angle $\frac{2\pi}{n}$ for $n$ in the natural numbers, but then I realized that $n$ should be replaced by a rational number because $e^{\frac{i4\pi}{7}}$ is a root of unity too.)
| Hint: An $n$th root of unity has the form $e^{2\pi ik/n}$ with $0\leq k\leq n-1$.
By Euler's formula, $r(\cos \phi + i\sin\phi) = re^{i\phi}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3617176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
How to solve this Riemann integral problem? Suppose $f$ is Riemann integrable on $[a,b]$. Is there a real number $c\in [a,b]$ such that
$$\int_a^c f(x)\,dx=\int_c^b f(x)\,dx\,?$$ If so, prove it; otherwise, give a counterexample.
| If $f$ is Riemann integrable on $[a,b]$ then $f$ is bounded, i.e. there exists an $M$ such that $|f(x)|\leq M$ for all $x\in[a,b]$. This implies that the function
$$F(x):=\int_a^x f(t)\>dt\qquad(a\leq x\leq b)$$
is Lipschitz continuous with constant $M$. Let $\int_a^b f(t)\>dt=:C$. Then $F(a)=0$ and $F(b)=C$. By the intermediate value theorem there is then a $c\in[a,b]$ with $F(c)={C\over2}$ (when $C=0$ choose $c=a$). It is easy to see that we then have
$$\int_a^c f(t)\>dt={C\over2}=\int_c^b f(t)\>dt\ .$$
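The argument is easy to illustrate numerically (a Python sketch with a sample integrand of our choosing, $f(t)=t^2-t$ on $[0,2]$): approximate $F$ by a trapezoid sum and bisect for $F(c)=C/2$.

```python
# F(x) approximates int_0^x f; bisection finds c with F(c) = C/2,
# which exists by the intermediate value theorem since F is continuous.
def f(t):
    return t * t - t

def F(x, steps=2000):
    h = x / steps
    return h * (f(0) / 2 + sum(f(k * h) for k in range(1, steps)) + f(x) / 2)

a, b = 0.0, 2.0
C = F(b)
lo, hi = a, b                      # F(lo) < C/2 <= F(hi) is maintained below
for _ in range(60):
    mid = (lo + hi) / 2
    if F(mid) < C / 2:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
assert abs(F(c) - C / 2) < 1e-6    # equal areas on both sides of c
```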
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3617316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$(\varepsilon, \delta)$ proof for $f(x)=\begin{cases} \frac{3-x}{2} & x<1 \\ x & x \geqslant1 \end{cases} $
Show using $(\varepsilon, \delta)$- definition of continuity that
$$f(x)=\begin{cases}
\frac{3-x}{2} & x<1 \\
x & x \geqslant1
\end{cases}
$$
is continuous at $x=1$.
I had a problem like this on my class and wasn't aware of the $(\varepsilon, \delta)-$definition for continuity and approached the problem a bit differently.
In order for $f(x)$ to be continuous at $x=1$ we would have to have the left- and right-hand limits equal to each other and to $f(1)=1$.
Since we have:
$\lim_{x\to1^-} \frac{3-x}{2}=1$ (1)
$\lim_{x\to1^+} x= 1$ (2)
we can proceed on proving this using $(\varepsilon, \delta)$ for the limits.
for (1) we can pick $\delta=2\varepsilon$ and since $|\frac{3-x}{2}-1| = |\frac{-x+1}{2}| =|\frac{x-1}{2}| \overset{\mathrm{(x < 1)}}{=}
\frac{1-x}{2} < \frac{\delta}{2} = \frac{2\varepsilon}{2} =\varepsilon$ the limit holds.
similarly, for (2) we can pick $\delta=\varepsilon$ and since $|x-1|< \delta=\varepsilon$ the limit holds as well.
I know this isn't what they asked for, but shouldn't it be pretty much the same thing?
| I would say that you are almost done with the problem, but not quite there yet. What you have is $\delta_\text{left}$ and $\delta_\text{right}$ such that the following holds: if $0<x-1<\delta_\text{right}$ then $|f(x)-f(1)|<\varepsilon$, and if $0<1-x<\delta_\text{left}$ then $|f(x)-f(1)|<\varepsilon$. All that's left is finding some $\delta$ such that $|f(x)-f(1)|<\varepsilon$ whenever $|x-1|<\delta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3617470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Acceleration of a ball on a plane I have a plane ($ax+by+cz+d=0$) in a 3D world, and a gravity vector $\vec{g}$ (say it's $[0, 0, -9.81]$.) How would I find the acceleration vector of an object on this plane, ignoring friction?
| What you want to do is project $\vec{g}$ on the plane.
First we will consider a new plane: $ax+by+cz=0$. This plane is parallel to our previous one, so our result won't change, but it goes through the origin.
Then we need a normal vector of this plane. In this case, we're lucky because we have this particular form of the equation of the plane. A normal vector is in this case $\vec{n}=[a,b,c]$.
Now we will create a line going through $\vec{g}$ and parallel to $\vec{n}$. Then we get the following parametric equation:
$$\vec{x}=\lambda\vec{n}+\vec{g}$$
Where $\lambda$ is the parameter. The intersection of this line and our plane will be $\vec{g}$ projected on our plane.
Let's call this intersection-point $\vec{p}$. For $\vec{p}$ it must be true that:
$$\vec{p}\text{ is on our plane: }ap_x+bp_y+cp_z=\vec{p}\cdot\vec{n}=0$$
$$\vec{p}\text{ is on our line: }\lambda\vec{n}+\vec{g}=\vec{p}$$
Plugging the second equation into the first we get:
$$(\lambda\vec{n}+\vec{g})\cdot\vec{n}=\lambda\vec{n}\cdot\vec{n}+\vec{g}\cdot\vec{n}=0$$
$$\Rightarrow\lambda=-\frac{\vec{g}\cdot\vec{n}}{\vec{n}\cdot\vec{n}}$$
Plugging this into the line's equation gives us:
$$\vec{p}=\vec{g}-\frac{\vec{g}\cdot\vec{n}}{\vec{n}\cdot\vec{n}}\vec{n}$$
In your example ($\vec{g}=[0, 0, -9.81]$) it looks like:
$$\vec{p}=\begin{bmatrix}
\frac{9.81c}{a^2+b^2+c^2}a\\
\frac{9.81c}{a^2+b^2+c^2}b\\
\frac{9.81c}{a^2+b^2+c^2}c-9.81\\
\end{bmatrix}$$
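A short numerical check (Python; the plane coefficients $a,b,c=2,3,6$ are a sample of our choosing) confirms that the projected vector is orthogonal to the normal and matches the final formula:

```python
# Project g onto the plane with normal n via p = g - (g.n / n.n) n.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c = 2.0, 3.0, 6.0
n = [a, b, c]
g = [0.0, 0.0, -9.81]

lam = -dot(g, n) / dot(n, n)
p = [gi + lam * ni for gi, ni in zip(g, n)]

assert abs(dot(p, n)) < 1e-9                     # p has no component along n
k = 9.81 * c / (a * a + b * b + c * c)
expected = [k * a, k * b, k * c - 9.81]          # the displayed component formula
assert all(abs(pi - ei) < 1e-12 for pi, ei in zip(p, expected))
```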
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3617617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How to prove that elements in the main diagonal of PD matrix are all positive? Reminder: in the definition of positive definite matrices, the vector $x$ is required to be nonzero.
Given a positive definite matrix $A$ which is symmetric,
we need to prove that the following elements on the main diagonal are all positive:
$(A_{1,1}, A_{2,2}, \ldots, A_{n,n})$
I started solving this but got stuck at the end. Any help?
| $A=[a_{i,j}]$ is the matrix of the scalar product $\langle x,y\rangle=x^TAy$.
Then $\langle e_i,e_i\rangle=\|e_i\|^2=e_i^TAe_i=a_{i,i}>0$, since $e_i\neq 0$ and $A$ is positive definite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3617773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Level Curves Problem Show that $x^2+y^2=6$ is a level curve of $f(x,y)=\sqrt{x^2+y^2}-x^2-y^2+2$.
I know that the first equation is a circle, but I do not know how to check that it is a level curve of the second.
Thanks for the help.
(Sorry my English is not good).
| With
$x^2 + y^2 = c, \; \text{ a constant}, \tag 1$
for any $c$, we have
$f(x, y) = \sqrt{x^2 + y^2} - x^2 - y^2 + 2 = \sqrt{x^2 + y^2} - (x^2 + y^2) + 2 = \sqrt c - c + 2; \tag 2$
thus the circle (1) lies in the $\sqrt c - c +2$-level set of $f(x, y)$; note we needn't prove
$f(x, y) = c_0, \; \text{a constant} \tag 3$
describes a circle to establish this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3617915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $|\{x\}^2-\{x\}+1/6|\leq \frac{1}{6}$ I am trying to show the following inequality holds for $x>0$:
$$\left|\{x\}^2-\{x\}+\tfrac16\right|\leq \frac{1}{6}.$$
I was able to show that $\{x\}^2-\{x\}+1/6 \leq 1/6$ because $\{x\}^2-\{x\}\leq 0.$ However I am having trouble with showing $\{x\}^2-\{x\}+1/6\geq -\frac{1}{6}$. This seems to be untrue because this would mean $\{x\}^2-\{x\}\geq -\frac{1}{3}$, which seems impossible if $x$ is chosen just right. But I am not able to find a counterexample either.
Could anyone please help me with the second half of the proof above. I appreciate the help!
| Set $y=\{x\}$. As $x>0$, we have $0\le y <1$, so, as the roots of $y^2-y$ are $0$ and $1$, its minimum is attained at $\frac12$, and we have $-\frac14\le y^2 -y\le 0\:$ on $[0,1)$. Therefore
$$-\frac1{12} \le y^2-y+\frac 16\le \frac16.$$
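A grid check (a Python sketch) agrees with the two-sided bound, including the minimum value $-1/12$ at $y=1/2$:

```python
# Sample y = {x} on a fine grid over [0, 1) and bound y^2 - y + 1/6.
vals = [(k / 10000) ** 2 - (k / 10000) + 1 / 6 for k in range(10000)]
assert min(vals) >= -1 / 12 - 1e-12
assert max(vals) <= 1 / 6 + 1e-12
assert abs(min(vals) - (-1 / 12)) < 1e-9   # k = 5000 gives y = 1/2 exactly
```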
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3618084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
When is it true that $x^2 < \lfloor{x}\rfloor \lceil{x}\rceil$? When is it true that $x^2 < \lfloor{x}\rfloor \lceil{x}\rceil$? It seems like this should be true whenever $x$ is closer to $\lfloor{x}\rfloor$ than to $\lceil{x}\rceil$, but I'm not sure how to prove this. I am trying to show that this is equivalent to $\frac{x - \lfloor{x}\rfloor}{1 - (x - \lfloor{x}\rfloor)} < 1$, but I am having trouble. If someone could give me a hint about how to proceed it would be much appreciated.
Edit: Write $r = x - \lfloor{x}\rfloor$ so that $\lfloor{x}\rfloor = x - r$ and $\lceil{x}\rceil = x + (1 - r)$. Then using AM-GM, we have that
$$\frac{1}{4}((x - r) + (x + 1 - r))^2 \leq (x - r)(x + 1 - r)$$
which implies that $$\frac{1}{4}\left(2x + (1 - 2r) \right)^2 \leq \lfloor{x}\rfloor \lceil{x}\rceil$$
and it's easy to see that if $r < \frac{1}{2}$ then the LHS is larger than $x^2$. My proof does not work in the other direction though.
| Hint:
First notice, that when $x$ is an integer, the inequality does not hold.
Let's write $x=n+\alpha$, where $n$ is an integer and $0 < \alpha < 1$, then we can rewrite the inequality as $(n+\alpha)^2 < n(n+1)$. Now the problem is reduced to solving the following inequality:
$$\alpha^2 + 2n \alpha - n < 0$$
Can you take it from here?
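Following the hint, for non-integer $x=n+\alpha$ the two conditions are algebraically identical, since $(n+\alpha)^2-n(n+1)=\alpha^2+2n\alpha-n$. A small brute-force check of this equivalence (my own illustration, using exact rational arithmetic):

```python
from fractions import Fraction
from math import floor

def checks_match(x: Fraction) -> bool:
    n = Fraction(floor(x))           # floor(x); for non-integer x, ceil(x) = n + 1
    a = x - n                        # fractional part, 0 < a < 1
    lhs = x * x < n * (n + 1)        # x^2 < floor(x) * ceil(x)
    rhs = a * a + 2 * n * a - n < 0  # the reduced inequality from the hint
    return lhs == rhs

# Sample non-integer rationals, positive and negative.
samples = [Fraction(p, 100) for p in range(-500, 500) if p % 100 != 0]
assert all(checks_match(x) for x in samples)
```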
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3618227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the closed-form formula to a recurrence with summation of terms I am currently trying to work through this recurrence problem but am having a hard time coming up with the solution:
$g\left(n\right)=\left(\sum_{i=1}^{n-1}g\left(i\right)g\left(n-i\right)\right)+1$
Where the base $g(0)=0$.
One thing I noticed was that the values involved in the summation are sort of symmetrical: there would be a $g(2)g(3)$ as well as a $g(3)g(2)$, and this duplicate would exist for each value of $i$ except when $i=n-i$, which I wanted to take advantage of. I typically approach closed-form recurrences by guessing a polynomial form and solving for its constants, but I'm not sure how to extend this idea to something like this.
| We can obtain a generating function for your sequence. First we modify the recurrence relation to state $$g(n) = 1 + \sum_{i=0}^ng(i)g(n-i).$$ Note that this gives the same relation because $g(0)=0$. Also note that this equality does not hold for $n=0$ (that gives $g(0) = g(0)^2 + 1$). Now we define the generating function.
Let
$$f(x) = \sum_{n=0}^\infty g(n) \cdot x^n.$$
Then, by taking the Cauchy product of $f$ with itself we see:
\begin{align*}
f(x)^2 &= \sum_{n=0}^\infty \left(\sum_{i=0}^n g(n-i)g(i)\right)x^n\\
&= \sum_{n=1}^\infty \left(\sum_{i=0}^n g(n-i)g(i)\right)x^n\\
&= \sum_{n=1}^\infty \left(g(n) - 1\right)x^n\\
&= \sum_{n=1}^\infty g(n)x^n - \sum_{n=1}^\infty x^n\\
&= f(x) - \frac{x}{1-x}.
\end{align*}
We obtain that the generating function satisfies the relation
$$f(x)^2 = f(x) - \frac{x}{1-x}.$$ There are two functions satisfying this relation. Namely:
$$f(x) = \frac{1}{2} \left(1-\frac{\sqrt{1-5 x}}{\sqrt{1-x}}\right) \quad\text{ or }\quad f(x) = \frac{1}{2} \left(\frac{\sqrt{1-5 x}}{\sqrt{1-x}}+1\right).$$
Only the first choice has its zeroth term equal to zero and thus we conclude that our generating function is $$f(x) = \frac{1}{2} \left(1-\frac{\sqrt{1-5 x}}{\sqrt{1-x}}\right) = x + 2x^2 + 5x^3 + 15x^4 + \cdots.$$ As to a closed form of your sequence. We have now shown that your sequence (up to a different zeroth term because they record the coefficients of $f(x)+1$) is sequence A181768 in the OEIS. You can proceed your research there. The site gives several other ways to calculate the terms of the sequence (see also A007317). A simple formula might be a bit much to ask.
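As a sketch (my own check, not part of the original answer), one can compare the recurrence against the series coefficients of the claimed generating function using sympy:

```python
from sympy import symbols, sqrt, series

# Terms from the recurrence g(n) = 1 + sum_{i=1}^{n-1} g(i) g(n-i), g(0) = 0.
N = 8
g = [0] * (N + 1)
for n in range(1, N + 1):
    g[n] = 1 + sum(g[i] * g[n - i] for i in range(1, n))

# Coefficients of f(x) = (1 - sqrt(1 - 5x)/sqrt(1 - x)) / 2.
x = symbols('x')
f = (1 - sqrt(1 - 5 * x) / sqrt(1 - x)) / 2
coeffs = series(f, x, 0, N + 1).removeO().as_poly(x).all_coeffs()[::-1]

# The two agree: 0, 1, 2, 5, 15, 51, ...
assert [int(c) for c in coeffs] == g[:len(coeffs)]
```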
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3618418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Combinatorics and integer solutions to equations Here is the question:
How many non-negative integer solutions are there to the equation $x_1 + x_2 + x_3 + x_4 = 74$ with each $x_j \leq 26$?
We've been instructed to use the $C(n+r-1,r-1)$ identity for the amount of integer solutions for the equation $x_1 + x_2 + \cdots + x_r = n$. They also provided a hint on how to start that makes no sense to me.
Given hint for step 1: Let $U$ be the set of solutions without any restriction, and let $S_j$ be the set of solutions where $x_j > 26$. The size of $S_1$ can be found by replacing $x_1$ with $y_1 = x_1 - 27 \geq 0$ and applying the given formula. What is the size of $S_1$?
I am completely stumped on this question, and I'm not actually sure what the hint is actually pointing me towards. Any help or direction would be appreciated!
| Let $U$ be the number of solutions where you don't restrict any of the $x_j$.
Now, let $S_j$ be the number of solutions restricting only $x_j$. Define $y_j = x_j - 27$. Then you have the equation for, for $S_1$ and $x_1$ and $y_1$ for instance,
$$(y_1 + 27) + x_2 + x_3 + x_4 = 74$$
Simplify to $y_1 + x_2 + x_3 + x_4 = 47$. Again, since you only restricted $x_1$, all variables are unrestricted again. ($x_1 \ge 27 \implies y_1 + 27 \ge 27 \implies y_1 \ge 0$)
Why did we let $y_j = x_j - 27$? Because we're counting the number of invalid solutions in every single variable first. $x_j \ge 27$ if it is invalid for any given $j$, and redefining it this way lets us simplify to an equation in unrestricted variables. You can apply your formula to each of the $S_j$.
Bear in mind you will need inclusion-exclusion for this problem, since $S_1$ and $S_2$ for instance could have cases where $x_1$ and $x_2$ both exceed $26$, so you need to account for these duplicates. The process and calculation should be analogous.
The final solution will be $U$ minus a sum of the $S_j$ and their various intersections as a result. This is because what you're ultimately doing is taking the complementary approach: finding the unrestricted solutions, and subtracting from those the number of solutions that are invalid.
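To make the procedure concrete, here is a sketch (my addition) that carries out the inclusion–exclusion count for this specific problem and cross-checks it by brute force:

```python
from math import comb
from itertools import product

# Unrestricted count of x1+...+x4 = n in non-negative integers: C(n+3, 3).
def unrestricted(n: int) -> int:
    return comb(n + 3, 3) if n >= 0 else 0

# Inclusion-exclusion over which of the 4 variables are invalid (>= 27):
# each invalid variable is replaced by y = x - 27, reducing the target sum.
total = sum((-1) ** j * comb(4, j) * unrestricted(74 - 27 * j) for j in range(5))

# Brute-force verification over all 27^4 candidate tuples with 0 <= xj <= 26.
brute = sum(1 for t in product(range(27), repeat=4) if sum(t) == 74)

assert total == brute == 5376
```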
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3618568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the missing angle in the triangle Given the following triangle $ABC$, find the angle of $ACD$.
Notice that $AC = BC$, so I can find the orthocenter of $ABC$. However, I am stuck and do not know what the next step is. Any hint would be appreciated.
| Applying the trigonometric form of the Ceva's theorem we obtain:
$$\frac{\sin\angle ACD}{\sin\angle BCD}=\frac{\sin40^\circ\sin50^\circ}{\sin30^\circ\sin20^\circ}=\frac{2\sin40^\circ\cos40^\circ}{\sin20^\circ}
=\frac{\sin80^\circ}{\sin20^\circ}=\frac{\cos10^\circ}{2\sin10^\circ\cos10^\circ}=\frac{\sin30^\circ}{\sin10^\circ}.$$
From this and $$\angle ACD+\angle BCD=40^\circ$$ one concludes:
$$
\angle ACD=30^\circ,\quad \angle BCD=10^\circ.
$$
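The chain of identities can be verified numerically — a quick sketch of my own:

```python
from math import sin, radians, isclose

# Each step of the displayed identity chain, evaluated numerically.
lhs  = sin(radians(40)) * sin(radians(50)) / (sin(radians(30)) * sin(radians(20)))
mid  = sin(radians(80)) / sin(radians(20))
rhs  = sin(radians(30)) / sin(radians(10))

assert isclose(lhs, mid) and isclose(mid, rhs)

# With angle ACD = 30 and angle BCD = 10, the Ceva ratio is matched exactly.
assert isclose(sin(radians(30)) / sin(radians(10)), lhs)
```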
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3618767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving $\lfloor|x+1|-|x|\rfloor\geq x^2$ I need to solve this inequality:$$\lfloor|x+1|-|x|\rfloor\geq x^2.$$
I checked the domains:
*
*$x\geq0$ resulted in $[0,1]$.
*$x\leq-1$ resulted in a contradiction.
But when trying to solve the inequality for $-1<x<0$ I get stuck:
$$
\lfloor|x + 1| - |x|\rfloor = \lfloor x + 1 - (-x)\rfloor = \lfloor2x + 1\rfloor \geq x^2,
$$
and I don't know how to proceed from here…
Your help is appreciated, thanks in advance!
| $\lfloor 2x + 1\rfloor$ takes on two possible values for $-1 < x < 0$: $-1$ and $0$.
It takes the value $-1$ if $2x + 1 < 0$, which happens when $x < -\frac12$; in this case, $\lfloor 2x + 1\rfloor$ cannot be at least $x^2$, since $x^2$ is positive and $-1$ is not.
It takes the value $0$ if $0 \leq 2x + 1 < 1$, which happens when $-\frac12 \le x < 0$. In that case, $\lfloor 2x + 1\rfloor$ cannot be at least $x^2$ either, since $x^2$ is positive and $0$ is not.
In other words, there are no solutions to the inequality in the interval $(-1,0)$.
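A sweep with exact rational arithmetic (my own sketch) is consistent with this case analysis: the inequality fails on a grid over $(-1,0)$ and holds on $[0,1]$:

```python
from fractions import Fraction
from math import floor

def holds(x: Fraction) -> bool:
    # floor(|x+1| - |x|) >= x^2, computed exactly with Fractions.
    return floor(abs(x + 1) - abs(x)) >= x * x

# No solutions on a grid over (-1, 0) ...
assert not any(holds(Fraction(-1) + Fraction(2 * k + 1, 2000)) for k in range(1000))
# ... but every grid point of [0, 1] is a solution (there |x+1| - |x| = 1 >= x^2).
assert all(holds(Fraction(k, 1000)) for k in range(1001))
```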
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3618960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Structure of set invariant under multiplication by a multiplicative subgroup Suppose $p$ is prime. What can be said about the structure and the size of a set $ A \subseteq F^*_p$ ($F^*_p $is a multiplicative group of integers modulo $p $), provided that it does not change under multiplication by a multiplicative subgroup $ G \subseteq F^*_p$, i.e.
$$ GA = \{x*a: x \in G, a \in A\} = A $$
I tried representing $G$ as a set of successive powers of a primitive root modulo $p$, and I guess that a set satisfying this condition is a larger multiplicative subgroup of $F^*_p$. Is this correct?
| In general, for any group $U$, let $G$ be a subgroup of $U$ and let $A$ be any subset of $U$. It is easy to check that the condition $GA=A$ is equivalent to: $A$ is a union of right cosets of $G$. This includes "$A$ is a subgroup of $U$ containing $G$" but is broader.
For example, let $p=7$, and let $G=\{1,6\}$ (considering $F_p \cong \Bbb Z/7\Bbb Z$ to be represented by $\{0,1,2,3,4,5,6\}$). Note that the cosets of $G$ in $F_p^*$ are $\{1,6\}$, $\{2,5\}$, and $\{3,4\}$. Then besides $A=G$ and $A=F_p^*$, every other nonempty union of these cosets also satisfies $GA=A$, for instance
$$
A = \{1,2,5,6\} \quad\text{and}\quad A = \{1,3,4,6\}.
$$
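The $p=7$ example can be checked exhaustively — a small sketch of my own, which finds that the invariant nonempty subsets are exactly the $2^3-1=7$ nonempty unions of the three cosets:

```python
p = 7
G = {1, 6}

def invariant(A):
    # GA = A, with GA computed elementwise mod p.
    return {x * a % p for x in G for a in A} == A

# Enumerate all nonempty subsets of F_p^* = {1, ..., 6}.
found = []
for mask in range(1, 1 << 6):
    A = {k + 1 for k in range(6) if mask >> k & 1}
    if invariant(A):
        found.append(frozenset(A))

cosets = [frozenset({1, 6}), frozenset({2, 5}), frozenset({3, 4})]
assert len(found) == 7
# Each invariant set is a union of cosets: every coset is inside it or disjoint.
assert all(all((c <= A) or not (c & A) for c in cosets) for A in found)
```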
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3619083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inequality between $\mathbb{E}[XY]$ and $\mathbb{E}[X^2]$ if $X$ and $Y$ have the same distribution Does there exist an inequality connecting $\mathbb{E}[XY]$ and $\mathbb{E}[X^2]$ if $X$ and $Y$ have the same distribution, regardless of whether they are independent or not?
| Note that $0\le E(X-cY)^2=c^2EY^2-2cEXY+EX^2$ for all $c\in\Bbb R$, so this quadratic in $c$ has discriminant $\le0$, i.e. $4(EXY)^2-4\,EX^2\,EY^2\le0$. This simplifies to $(EXY)^2\le EX^2\,EY^2$; since $X$ and $Y$ have the same distribution, $EY^2=EX^2$, and we conclude $|EXY|\le EX^2$.
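A finite-sample illustration (my own): if the sample of $Y$ is a permutation of the sample of $X$ (so the two empirical distributions coincide), the vector Cauchy–Schwarz inequality gives exactly $|\widehat{E}[XY]|\le\widehat{E}[X^2]$:

```python
import random

random.seed(0)
n = 10_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = xs[:]                      # same multiset of values => same empirical distribution
random.shuffle(ys)              # but X and Y are far from independent or equal

e_xy = sum(a * b for a, b in zip(xs, ys)) / n
e_x2 = sum(a * a for a in xs) / n

# |E[XY]| <= sqrt(E[X^2] E[Y^2]) = E[X^2], since E[Y^2] = E[X^2] here.
assert abs(e_xy) <= e_x2 + 1e-9
```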
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3619417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is the assumption $(dx)^2 = 0$ actually correct instead of just approximately correct? Imagine dividing a sphere into concentric spherical shells of thickness $dr$ and inner radius $r$.
The volume of each shell is $$dV = \frac{4\pi}{3} [ (r + dr)^3 - r^3]$$
Expand the cubic expressions, we get:
$$
(r + dr)^3 - r^3 = r^3 + dr^3 + 3r^2 dr + 3rdr^2 - r^3 = 3r^2dr + 3rdr^2 + dr^3
$$
Assuming that $dr^3 = 0$ and $rdr^2 = 0$, we get:
$$(r + dr)^3 -r^3= 3r^2dr$$
Thus, the volume of each shell is $dV = 4\pi r^2dr$.
If we integrate along the radius, then we get $$\int_0^R 4\pi r^2dr = \frac43\pi R^3$$ This confirms that our analysis of the spherical shell volumes is correct. However, this analysis relies on the assumption that $dr^3 = 0$ and $rdr^2 = 0$. My question is why are these assumptions correct? If we assume those values are zero, shouldn't the final value just be approximately correct by an infinitesimal amount instead of being absolutely correct?
| Instead of thinking of $dr$ as a number, think of the manipulations you've used to get $dV=4\pi r^2dr$ as a shortcut for computing $\frac{dV}{dr}$ using the limit definition of the derivative: $\lim_{h\to0}\frac{3r^2h+3rh^2+h^3}h=3r^2$. What you've calculated isn't really the volume of any particular shell, but the limit of the ratio $\frac{\text{shell volume}}{\text{shell thickness}}$, as the thickness approaches $0$.
Then you're using the fact that $\int_0^R\frac{dV}{dr}{dr}=V(R)-V(0)$. You've basically just taken a derivative followed by an antiderivative.
The formula above is exact because of the fundamental theorem of calculus. The integral is (by definition) the limit of a sequence of approximations obtained by subdividing the domain into smaller and smaller intervals (corresponding to thinner and thinner shells in your example), and it turns out that if you estimate each shell volume in this process by $\Delta V=\frac{dV}{dr}\Delta r$, then the sequence of approximate volumes approaches the exact total volume you want in the limit, because the "error" corresponding to the "erased" higher-degree terms approaches $0$.
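The "erased higher-degree terms" point is exactly the limit computation — a sketch with sympy (my addition):

```python
from sympy import symbols, limit, integrate, pi, Rational

r, h, R = symbols('r h R', positive=True)

# dV/dr as the limit of (shell volume)/(shell thickness): the 3rh^2 and h^3
# terms vanish after dividing by h and letting h -> 0.
shell_ratio = Rational(4, 3) * pi * ((r + h) ** 3 - r ** 3) / h
dV_dr = limit(shell_ratio, h, 0)
assert dV_dr == 4 * pi * r ** 2

# Integrating dV/dr recovers the exact sphere volume, per the FTC.
assert integrate(dV_dr, (r, 0, R)) == Rational(4, 3) * pi * R ** 3
```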
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3619562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there typographical error in Stephen Willard's General Topology proof of Theorem 28.11 Here is 28.11:
The proof on page 206 initially refers to separation order E(a,b). In the second paragraph, it supposes distinct points in E(a,c) - {a,b}. I am reading this on my own so I have no-one else to ask. I believe it should read E(a,b) - {a,b}. Am I correct?
It turns out, I believe, that there is an error in the following proof as well. Here's 28.12:
The third line of the proof should read "q > p." The fourth line is correct.
There may be yet a third typo in section 28 in the proof of 28.14. The proof reads "...form a chain of connected sets whose union is K - {x,y}..."
Regardless, I am only asking for an answer to the question regarding theorem 28.11. Any comments regarding the other 2 proofs would be welcome but are not needed.
If I am correct, I hope this will be helpful for those who read General Topology in the future.
| Yes. There is an error. Brian Scott confirmed this in the comments section.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3619687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Maximizing $f(x,y) = (a + x)(b + y)$ under the constraint $d=x+y$, where $a$, $b$, $d$ are known.
Find the maximum of the function $f(x,y) = (a + x)(b + y)$ under the constraint $d = x + y$, where $a$, $b$, $d$ are known.
It seems "obvious" to me that you'd want to split it between the two so that both factors of the product are as equal as possible, but I have no idea how to prove the result.
Sorry I don't know what kind of math this is so I tagged a few.
| It's labeled as pre-calculus problem so I'll solve it without calculus.
$f(x,y)=(a+x)(b+y)=(a+x)(b+d-x)=-x^2+(b+d-a)x+(ab+ad)=-(x-(b+d-a)/2)^2+(ab+ad+(b+d-a)^2/4)$
The maximum can be obtained at $x=(b+d-a)/2$ and the maximum value is $ab+ad+(b+d-a)^2/4$
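The completed square can be double-checked symbolically — a sketch of my own:

```python
from sympy import symbols, simplify

a, b, d, x = symbols('a b d x')
f = (a + x) * (b + d - x)                    # y eliminated via the constraint y = d - x

x_star = (b + d - a) / 2                     # claimed maximizer
max_val = a * b + a * d + (b + d - a) ** 2 / 4

# The claimed maximum value is attained at x_star ...
assert simplify(f.subs(x, x_star) - max_val) == 0
# ... and f(x_star) - f(x) = (x - x_star)^2 >= 0, so it is a global maximum.
assert simplify(f.subs(x, x_star) - f - (x - x_star) ** 2) == 0
```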
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3619835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a self-map of the disk with Jacobian everywhere greater than $1$? It might be silly, but I am not sure how to approach this problem.
Let $D \subseteq \mathbb{R}^2$ be the closed unit disk. Does there exist a smooth map $f:D \to D$ such that $\det df >1$ everywhere?
I don't assume $f$ maps boundary to boundary.
The area formula implies that such a map cannot be injective.
I guess this is somewhat related to the question of existence of maps of non-zero degree between disks.
| Choose an $N\gg1$, and map $D$ to an ellipse $E\subset R:=[-2N,2N]\times\bigl[-{1\over N},{1\over N}\bigr]$ by putting
$$f_1:\quad (x,y)\mapsto \bigl( 2Nx,{1\over N}y\bigr)\ .$$
We have ${\rm det}(df_1)=2$. Now use a map $f_2$ with ${\rm det}(df_2)\approx1$ to wrap the long rectangle $R$ essentially area preserving around the annulus
$$A_\rho:=\bigl\{(x,y)\in D\bigm| \rho\leq\sqrt{x^2+y^2}\leq1\bigr\}$$
with $\rho$ slightly smaller than $1-{2\over N}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3619972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What are the continuous functions $ x f(y)+y f(x)=(x+y) f(x) f(y) ? $ question -
What are the continuous functions on $\mathbb{R}$ which are solutions of the equation
$$
x f(y)+y f(x)=(x+y) f(x) f(y) ?
$$
my try -
by putting $y=x$ I get $f(x)=0$ or $1$ for all $x$ not equal to $0$...
Now my answer is the same as the one in the book, but I think it is incomplete because it is not valid for all $x$.
Can someone tell me how to fix this gap with the help of continuity?
I know this is a simple question, but I want to clear up my doubt.
Thank you.
| With $x=y$, we arrive at the necessary condition
$$2xf(x)=2xf(x)^2 $$
and hence $$\tag1f(x)\in\{0,1\}\quad\text{for }x\ne0. $$
The only continuous functions with this property are the constant functions
$$f(x)=0$$
and
$$f(x)=1. $$
Both are directly verified to indeed solve the original functional equation.
What if we drop continuity?
We still have $(1)$, but note that $f$ may jump discontinuously between $0$ and $1$.
Suppose $f(x_0)=0$ for at least one $x_0\ne 0$. Then with $y=x_0$, we get
$x_0f(x)=0 $
and hence
$$ f(x)=0\quad\text{for all } x$$
as one solution (again).
So assume $f(x)=1$ for all $x\ne 0$.
Then nothing can be said about $f(0)$, i.e., for any $c$,
$$f(x)=\begin{cases}c&x=0\\1&x\ne 0\end{cases} $$
is a solution, as one readily verifies.
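The claimed solutions are easy to verify mechanically — a small sketch of mine checking the functional equation on sample points:

```python
def satisfies(f, points) -> bool:
    # x f(y) + y f(x) = (x + y) f(x) f(y) on all pairs from `points`.
    return all(
        abs(x * f(y) + y * f(x) - (x + y) * f(x) * f(y)) < 1e-12
        for x in points for y in points
    )

pts = [-2.0, -0.5, 0.0, 0.3, 1.0, 4.0]

assert satisfies(lambda t: 0.0, pts)                      # f = 0
assert satisfies(lambda t: 1.0, pts)                      # f = 1
assert satisfies(lambda t: 7.0 if t == 0 else 1.0, pts)   # f(0) = c arbitrary, f = 1 elsewhere
```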
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3620340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Classical v/s Bayesian Hypothesis Testing This question has 2 parts:
(1) What is the fundamental difference between classical and bayesian hypothesis testing? How do I interpret this difference.
(2) Here is a paragraph quoted from Casella and Berger Statistical Inference (Section 8.2):
I don't understand:
(i) Why is $P(H_0 \text{ is true} \mid X)$ either $0$ or $1$? If I toss a coin I know that I'll get either heads or tails, but I do not say that the probability of getting heads is $0$ or $1$ while the outcome is unknown.
(ii) Why do these probabilities not depend on X?
| With the coin tossing, the quantity of interest in the distribution is the probability the outcome is heads/tails. This is a fixed and constant number in the classical paradigm. For a fair coin, this probability is $0.5$ irrespective of whether you get an actual heads or tails (i.e. irrespective of the data X) and $P(H_0: p=0.5|X)=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3620494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Remove minimum number of nodes to make graph disconnected Find the minimum number of nodes that need to be removed to make graph disconnected( there exists no path from some node x to all other nodes). Number of nodes can be 105
| You are searching for the minimum $k$ such that your graph $G = (V, E)$ is $k$-vertex-connected.
To solve this problem consider Menger's theorem:
Let $G$ be a finite undirected graph and $x$ and $y$ two nonadjacent vertices. Then the size of the minimum vertex cut for $x$ and $y$ (the minimum number of vertices, distinct from $x$ and $y$, whose removal disconnects $x$ and $y$) is equal to the maximum number of pairwise internally vertex-disjoint paths from $x$ to $y$.
(from Wikipedia)
You can find such paths using a reduction of the vertex-version of Menger's theorem to the edge-version of Menger's theorem, which may be solved in a variety of ways, e.g. by solving a max-flow problem.
$k$ is the minimum over the cardinality of a minimum vertex cut for $(u, v)$ for every pair $(u, v)$ of non-adjacent nodes in $G$.
$$ k = \min \left\{ \text{cardinality of minimum $u$-$v$-cut} \mid (u, v) \in V \times V, u~\text{and}~v~\text{are non-adjacent} \right\} $$
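Menger's theorem justifies computing $k$ via max-flow; for intuition, here is a tiny brute-force sketch of mine (stdlib only, hopeless at $10^5$ nodes — real implementations such as networkx's `node_connectivity` use the flow-based reduction described above):

```python
from itertools import combinations

def is_connected(nodes, edges):
    # DFS on the subgraph induced by `nodes`.
    nodes = set(nodes)
    if len(nodes) <= 1:
        return True
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

def vertex_connectivity(nodes, edges):
    # Smallest k such that removing some k vertices disconnects the graph;
    # brute force over all vertex subsets of each size.
    nodes = list(nodes)
    for k in range(len(nodes) - 1):
        if any(not is_connected(set(nodes) - set(cut), edges)
               for cut in combinations(nodes, k)):
            return k
    return len(nodes) - 1  # complete graphs: conventionally n - 1

cycle6 = [(i, (i + 1) % 6) for i in range(6)]
path6 = [(i, i + 1) for i in range(5)]
assert vertex_connectivity(range(6), cycle6) == 2   # cycles are 2-connected
assert vertex_connectivity(range(6), path6) == 1    # any inner vertex is a cut vertex
assert vertex_connectivity(range(4), [(i, j) for i in range(4) for j in range(i)]) == 3
```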
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3620664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to calculate the centre of a circle given two points and the equation of a line going through the centre? A(3,5) and b(9,-3) lie on a circle. show that the centre of the circle lies on the line 4y-3x+14 = 0
Is it possible to work out the centre of the circle algebraically? The only method I can think of is drawing the points and the line on a set of axes and estimating the circle's radius.
That was my attempt; i.e. I worked out the centre by plotting the points and then estimating it. Helpfully, the centre in this question has whole-number coordinates, but I want to know how to answer the question properly (give an algebraic solution).
| You must remember this theorem:
In a circle, the centre lies on the perpendicular bisector of any chord.
So, we first compute the midpoint of $AB$, that is:
$$M(6,1)$$
Then we find the line passing through $A$ and $B$; its slope is $\frac{-3-5}{9-3}=-\frac43$, so:
$$y=-\frac{4}{3}x+9$$
Now, we can calculate the slope of the perpendicular line using $m\cdot m'=-1$, and so:
$$m'=\frac{3}{4}$$
This line must pass through $M$, and so the equation becomes:
$$y=\frac{3}{4}x-\frac{7}{2}$$
Passing to the canonical form:
$$4y-3x+14=0$$
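A quick exact-arithmetic check of the construction (my own sketch): the midpoint is $M(6,1)$, $M$ lies on $4y-3x+14=0$, and every point of that line is equidistant from $A$ and $B$:

```python
from fractions import Fraction as F

A, B = (F(3), F(5)), (F(9), F(-3))
M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
assert M == (6, 1)
assert 4 * M[1] - 3 * M[0] + 14 == 0        # M lies on 4y - 3x + 14 = 0

def dist2(P, Q):
    return (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2

# Every point (t, (3t - 14)/4) of the line is equidistant from A and B,
# confirming it is the perpendicular bisector of chord AB.
for t in range(-10, 11):
    P = (F(t), F(3 * t - 14, 4))
    assert dist2(P, A) == dist2(P, B)
```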
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3620779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Gradient vector for a function of two variables For the function $z = x^2 + y^2 , z = f(x,y)$
the gradient comes out to be $(2x,2y)$.
However since the function is in 3D , shouldn't the gradient also be a 3D vector?
How would the gradient change if $w = f(x,y,z) = x^2 + y^2 -z =0$?
Will it be $(2x,2y, -1)$?
| Maybe you confuse $f$ with its graph. The graph of $f$ is three dimensional, i.e., a subset of $\mathbb{R}^3$. But $f$ has only two entries. For every partial differentiable function $f = f(x, y)$ the gradient of $f$ is defined as $(\partial_x f, \partial_y f)$, so the gradient is a planar vector in this case.
If you consider $w(x, y, z) = x^2 + y^2 - z$, then $w$ depends on three variables and therefore the gradient is a three dimensional vector $(\partial_x w, \partial_y w, \partial_z w) = (2x, 2y, -1)$.
In the most general case, if $g = g(x_1, \dots, x_n)$ is a function of $n$ real variables, then $g$ has a graph
$$\Gamma = \{ (x_1, \dots, x_n, y) : y = g(x_1, \dots, x_n) \} \subseteq \mathbb{R}^{n + 1}$$
and a gradient
$$\nabla g = (\partial_{x_1} g, \dots, \partial_{x_n} g) \in \mathbb{R}^n.$$
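A sympy sketch (my own) computing both gradients from the answer:

```python
from sympy import symbols

x, y, z = symbols('x y z')

f = x ** 2 + y ** 2                  # f : R^2 -> R, so its gradient is a planar vector
grad_f = [f.diff(v) for v in (x, y)]
assert grad_f == [2 * x, 2 * y]

w = x ** 2 + y ** 2 - z              # w : R^3 -> R, so its gradient is 3-dimensional
grad_w = [w.diff(v) for v in (x, y, z)]
assert grad_w == [2 * x, 2 * y, -1]
```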
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3620921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
sufficient condition for mean convergence in $L^1$:prove or disprove Let $f_n \; \colon (0,1) \to \mathbb{R} $ a sequence of Lebesgue integrable functions which converges almost everywhere in $(0,1)$ to $0$. Prove or Disprove: if there exists $p \in (1,+ \infty)$ such that $(f_n)$ is bounded in $L^p(0,1)$,then $\lim_{ n \to \infty} \int_0^1|f_n(x)| \, dx =0$.
| I think the property is true and Egorov's theorem will help here. Fix $\varepsilon >0$. Since $f_n \to 0$ a.e., there is a set $E \subset (0,1)$ such that $\lvert E\rvert < \varepsilon$ and $f_n \to 0$ uniformly on $(0,1)\setminus E$. Now fix $\delta > 0$ and $N$ large enough that $\lvert f_n \rvert < \delta$ on $(0,1)\setminus E$ whenever $n \ge N$. Then for $n \ge N$, \begin{align*} \int^1_0 \lvert f_n(x)\rvert dx &= \int_{(0,1)\setminus E} \lvert f_n(x)\rvert dx + \int_E \lvert f_n(x)\rvert dx \\
&\le \delta \int_{(0,1)\setminus E}dx + \lvert E \rvert^{1/q}\|f_n\|_{L^p(0,1)}\\
&\le \delta + \varepsilon^{1/q}M
\end{align*} where $M$ is the bound on $f_n$ in $L^p(0,1)$, and $q$ is the dual exponent to $p$. Since $\varepsilon$ and $\delta$ are arbitrary, this shows that $\int^1_0 \lvert f_n(x)\rvert dx \to 0.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3621069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
The right way to use the chain rule for composite functions like $f(g(x,y),h(x,y))$ Im in doubt about a resolution. I made it in a way that gave me the right answer but I don't think it's the right way to answer. I wish some one can help me understand the right way to make.
So, $f:\Bbb{R}^2\rightarrow\Bbb{R}$ is a derivable function and $z = f(x-y,y-x)$ so what is the value of :$$\frac{\partial{z}}{\partial{x}} + \frac{\partial{z}}{\partial{y}}$$
The way I did was:
$$\frac{\partial{z}}{\partial{x}} = \nabla{f}.\bigg(\frac{\partial{(x-y)}}{\partial{x}},\frac{\partial{(y-x)}}{\partial{x}}\bigg) = \frac{\partial{f}}{\partial{x}} - \frac{\partial{f}}{\partial{y}} \Rightarrow\frac{\partial{f}}{\partial{y}} = 0$$
$$\frac{\partial{z}}{\partial{y}} = \nabla{f}.\bigg(\frac{\partial{(x-y)}}{\partial{y}},\frac{\partial{(y-x)}}{\partial{y}}\bigg) = \frac{\partial{f}}{\partial{y}} - \frac{\partial{f}}{\partial{x}}\Rightarrow\frac{\partial{f}}{\partial{x}} = 0$$
Is the way I did correct? Did the use of chain rule is correct?
| Let’s try a concrete example: suppose we have $f:(x,y)\mapsto x^2+y^2$. Then $$z = f(x-y,y-x) = (x-y)^2+(y-x)^2=2(x-y)^2$$ and ${\partial z\over\partial x}=4(x-y)$, which is not identically zero.
What went wrong with your calculations? You’ve made the common mistake of using the same names to mean different things, which I think is encouraged by the conventional notation you’re using. The $x$ and $y$ in $f(x,y)$ are not the same $x$ and $y$ that are in the expressions $x-y$ and $y-x$. Since the former are just placeholders, anyway, let’s call them $u$ and $v$ instead, and further, let’s give some names to the two unnamed functions, $\phi:(x,y)\mapsto x-y$ and $\psi:(x,y)\mapsto y-x$. We then have $z=f(\phi(x,y),\psi(x,y))$ and the chain rule says that \begin{align} {\partial z\over\partial x}(x,y) &= {\partial f\over \partial u}\left(\phi(x,y),\psi(x,y)\right) {\partial \phi\over\partial x}(x,y)+{\partial f\over\partial v}\left(\phi(x,y),\psi(x,y)\right) {\partial\psi\over\partial x}(x,y) \\ &= {\partial f\over \partial u}(x-y,y-x)-{\partial f\over\partial v}(x-y,y-x). \end{align} This is not in general identically zero. Similarly, $${\partial z\over\partial y}(x,y) = {\partial f\over\partial v}(x-y,y-x)-{\partial f\over\partial u}(x-y,y-x).$$ Their sum obviously vanishes.
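The concrete example, and the vanishing of the sum for a generic $f$, can be confirmed with sympy — a sketch of my own:

```python
from sympy import symbols, simplify, Function

x, y = symbols('x y')

# Concrete f(u, v) = u^2 + v^2 from the example above: z = 2(x - y)^2.
z = (x - y) ** 2 + (y - x) ** 2
assert simplify(z.diff(x) - 4 * (x - y)) == 0   # dz/dx = 4(x - y), not identically 0
assert simplify(z.diff(x) + z.diff(y)) == 0     # but the sum always vanishes

# The sum also vanishes for a generic differentiable f, as the chain rule shows.
f = Function('f')
w = f(x - y, y - x)
assert simplify(w.diff(x) + w.diff(y)) == 0
```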
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3621355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculate $\lim_{n\to\infty} \frac{ (1^{1^p}2^{2^p}\cdot...\cdot n^{n^p})^{ 1/n^{p+1} }}{n^{1/(p+1)}}$
Calculate:
$$\lim_{n\to\infty} \frac{ (1^{1^p}2^{2^p}\cdot...\cdot n^{n^p})^{ 1/n^{p+1} }}{n^{1/(p+1)}}$$
I've done some steps as follows: $$a_n:=\frac{ (1^{1^p}2^{2^p}\cdot...\cdot n^{n^p})^{ 1/n^{p+1} }}{n^{1/(p+1)}} \iff \ln a_n=\frac{1}{n^{p+1}}\big(\sum_{k=1}^nk^p\ln k-\frac{n^{p+1}}{p+1}\ln n\big) \iff \\\ln a_n =\frac{1}{n}\sum_{k=1}^n\big[\big(\frac{k}{n}\big)^p\ln \frac{k}{n}\big]+\frac{1}{n}\sum_{k=1}^n\big(\frac{k}{n}\big)^p\ln n-\frac{\ln n}{p+1}.$$
Then, I was wondering if I could make some integrals out of it but still there are some odd terms.
I think my approach isn't so good...
| I seem to remember answering this question some time ago, but I couldn't find it! So I am writing the answer again; I did not copy my previous answer. Thanks @metamorphy for pointing this out! The following is my previous answer.
Computing limit of a product
$$\frac{1}{n}\sum_{k=1}^n\big[\big(\frac{k}{n}\big)^p\ln \frac{k}{n}\big]
\to\int_{0}^{1}x^p\ln x dx.$$
is not difficult.
What you really need is the limit:
$$\lim_{n\to\infty}\left(\frac{1}{n}\sum_{k=1}^n\Big(\frac{k}{n}\Big)^p\ln n-\frac{\ln n}{p+1}\right)
=\lim_{n\to\infty}\left(\frac{1}{n}\sum_{k=1}^n\left(\frac{k}{n}\right)^p-\frac{1}{p+1}\right)\ln n=0.$$
To get this, we have the following result(https://math.stackexchange.com/a/149174/72031):
Suppose $f'$ exists on $[a,b]$, let
$$A_n=\frac{b-a}{n}\sum_{k=1}^{n}f\bigg(a+\frac{k(b-a)}{n}\bigg)
-\int_{a}^{b}f(x)\mathrm{d}x,$$
then
$$\color{red}{\lim_{n\to \infty}nA_n=\frac{f(b)-f(a)}{2}(b-a).}$$
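The quoted result can be sanity-checked numerically, e.g. for $f(x)=x^2$ on $[0,1]$, where $nA_n \to \frac{f(1)-f(0)}{2}(1-0) = \frac12$ — a sketch of my own:

```python
# A_n = (1/n) * sum f(k/n) - integral of f, with f(x) = x^2 on [0, 1].
def nA(n: int) -> float:
    riemann = sum((k / n) ** 2 for k in range(1, n + 1)) / n
    return n * (riemann - 1 / 3)

# n * A_n = 1/2 + 1/(6n), which indeed tends to (f(1) - f(0)) / 2 = 1/2.
assert abs(nA(1000) - 0.5) < 1e-3
assert abs(nA(100000) - 0.5) < 1e-5
```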
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3621545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Getting the differential equation back from its solutions.
Find a linear differential equation with constant coefficients satisfied by all the given
functions.$u_1(x) = x^2$, $u_2(x) = e^x$, $u_3(x) = xe^x$.
How do I proceed with this? I know that $1$ is a twice repeated root of the characteristic equation, but what can I say about $x^2$? If $1$ and $x$ were also given as solutions, I could have claimed $0$ is a thrice repeated root of the characteristic equation. Can I still do this, or is there a third-order DE that all the functions satisfy?
|
''I know that $1$ is a twice repeated root of characteristic equation''
So you know that $k=1$ solves the characteristic equation $ak^2 + bk + c =0$. Then, you could have, for example $a = 1, b = -2, c = 1$. Since the characteristic equation corresponds directly to the homogeneous linear SODE, we have
\begin{equation}
y'' - 2y' + y = 0
\end{equation}
This has the complementary solution:
\begin{equation}
y(x) = C_1e^x + C_2xe^x
\end{equation}
To match your given solution $u_2(x) = e^x$ and $u_3(x) = xe^x$, you need to thrown in an initial condition like $y(0) = 3$ and $y'(0) = \pi$ or something so that $C_1 = C_2 = 1$. This is easy, I'll leave you to do it.
The tricky bit is how to handle $u_1(x) = x^2$.
One might try to make $x^2$ a particular solution of an inhomogeneous equation: if $y = x^2$, then $y' = 2x$ and $y'' = 2$, so
\begin{equation}
y'' - 2y' + y = x^2 - 4x + 2
\end{equation}
But this does not answer the question: $u_2(x) = e^x$ and $u_3(x) = xe^x$ solve the associated homogeneous equation, not this inhomogeneous one (plugging $e^x$ into the left-hand side gives $0$, not $x^2 - 4x + 2$), and we want a single equation satisfied by all three functions.
Instead, note that a homogeneous constant-coefficient equation has $x^2$ among its solutions precisely when $k = 0$ is a root of its characteristic equation of multiplicity at least $3$: a root $0$ of multiplicity $m$ contributes the solutions $1, x, \dots, x^{m-1}$. So yes — even though $1$ and $x$ were not listed, you may still take $0$ as a thrice repeated root. Combining it with the double root $k = 1$ gives the characteristic polynomial
\begin{equation}
k^3(k-1)^2 = k^5 - 2k^4 + k^3,
\end{equation}
and hence the fifth-order equation
\begin{equation}
y^{(5)} - 2y^{(4)} + y''' = 0,
\end{equation}
which is satisfied by $u_1$, $u_2$, and $u_3$ alike.
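As a cross-check of the root analysis (my own sketch): taking $0$ as a triple root and $1$ as a double root, i.e. characteristic polynomial $k^3(k-1)^2$, yields the homogeneous equation $y^{(5)}-2y^{(4)}+y'''=0$, and all three given functions satisfy it:

```python
from sympy import symbols, exp, diff, simplify

x = symbols('x')

def L(y):
    # y''''' - 2 y'''' + y''', from characteristic polynomial k^3 (k - 1)^2.
    return diff(y, x, 5) - 2 * diff(y, x, 4) + diff(y, x, 3)

for u in (x ** 2, exp(x), x * exp(x)):
    assert simplify(L(u)) == 0
```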
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3621741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the minimum value of $f$
Find the minimum value of $$f(x)=\frac{\tan \left(x+\frac{\pi}{6}\right)}{\tan x}, \qquad x\in \left(0,\frac{\pi}{3}\right).$$
My approach is as follows. I tried to solve it by splitting the expression up:
$$f(x)=\frac{1}{\sqrt{3}\tan x}+\left({\sqrt{3}}+\frac{1}{\sqrt{3}}\right)\frac{1}{\sqrt{3}-\tan x},$$ but $f'(x)$ is getting more and more complicated.
| We know that if $\tan x\tan y=1$ then $x+y=\frac{\pi}{2}$; since $\big(x+\frac{\pi}{6}\big)+\big(\frac{\pi}{3}-x\big)=\frac{\pi}{2}$, we have $\tan\big(x+\frac{\pi}{6}\big)\tan\big(\frac{\pi}{3}-x\big)=1$ and thus
$$f(x)=\frac{1}{\tan x \tan\big(\frac{\pi}{3}-x\big)}.$$
To minimize $f$ we need to maximize the denominator (I'll be referring to it as $k$). Using the fact that $\tan(x+y)=\frac{\tan x+\tan y}{1-\tan x\tan y}$, we have
$$\tan\frac{\pi}{3}=\frac{\tan\big(\frac{\pi}{3}-x\big)+\tan x}{1-k},$$
so
$$k=g(x)=1-\frac{1}{\sqrt{3}}\Big(\tan\big(\tfrac{\pi}{3}-x\big)+\tan x\Big).$$
The maximum of $g$ occurs where $g'(x)=\frac{1}{\sqrt{3}}\big(\sec^2\big(\frac{\pi}{3}-x\big)-\sec^2 x\big)=0$, which has only one solution in the given domain, $x=\frac{\pi}{6}$. There $k=\tan^2\frac{\pi}{6}=\frac13$, so the minimum value of $f$ is $f\big(\frac{\pi}{6}\big)=3$.
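A numeric sweep (my addition) supports this:

```python
from math import tan, pi, isclose

def f(x: float) -> float:
    return tan(x + pi / 6) / tan(x)

assert isclose(f(pi / 6), 3.0)                         # value at the critical point
xs = [pi / 3 * (k + 0.5) / 2000 for k in range(2000)]  # grid over (0, pi/3)
assert min(f(x) for x in xs) >= 3.0 - 1e-6             # 3 is the minimum value
```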
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3621911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Trouble to understand an Analysis proof. I am currently studying the following proof, which may be found on this article (page 12):
However, I'm facing difficulties in properly understanding some of the steps. Here are my questions:
*
*Why is it possible to say that "there exists an integer $N$ and $\alpha > 0$ such that $\Sigma_n(x) \leq n^{1 - \alpha}$, for all $n \geq N$"?
*How does integrating ${f_n}^{\prime} \geq n^{\alpha} f_n$ between $x^{\prime}$ and $x$ gives me that $f_n(x^{\prime}) \leq M e^{-\delta n^{\alpha}}$, for all $n \geq N$? Same question applies to the integration of ${f_n}^{\prime} \geq \frac{n}{\Sigma}f_n$ between $x^{\prime}$ and $x$.
*Why does "$T_n(x)$ converge to $f(x)$ as $n$ tends to infinity"? And why does this imply that $f(x) - f(x^{\prime}) \geq (x - x^{\prime}) \left(\limsup_{n \rightarrow +\infty}\frac{\ln\Sigma_n(x^{\prime})}{\ln(n)}\right)$?
EDIT:
Question 1 remains unsolved.
Question 2 is easily answered by noticing that $\frac{f_n^{\prime}}{f_n} = \left(\ln f_n\right)^{\prime}$ for both cases.
The second part of question 3 may by answered by applying $\limsup$ on both sides of the given inequality; however, the first part remains unsolved. Why does $T_n(x)$ converge to $f(x)$ as $n$ tends to infinity?
Thanks in advance.
| (1) $x < x_1$, so by defn of inf, $\limsup_{n \to \infty} \frac{\log\Sigma_n(x)}{\log n} < 1$, call the limsup $1-2\alpha$ for $\alpha > 0$. Then there is some $N$ so that for all $n \ge N$, $\frac{\log \Sigma_n(x)}{\log n} \le (1-2\alpha)+\alpha = 1-\alpha$. In other words, there is some $N$ so that for all $n \ge N$, $\Sigma_n(x) \le n^{1-\alpha}$.
(3) part 1: It suffices to prove the following general statement: Let $(a_n)_n$ be a sequence of real numbers such that $a_n \to a$. Then $\frac{1}{\log n}\sum_{i=1}^n \frac{a_i}{i} \to a$. Below is a hint.
$a_n \approx a$ for large $n$, so $\sum_{i=1}^n \frac{a_i}{i} \approx \sum_{i=1}^n \frac{a}{i} \approx a\log n$.
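A numerical illustration of that general statement (an informal check, not part of the proof; note that the convergence is only logarithmically fast), with the arbitrary choice $a_n = 2 + 1/\sqrt{n} \to 2$:

```python
import math

def weighted_avg(a, n):
    """(1 / log n) * sum_{i=1}^n a(i) / i, which should tend to lim a(i)."""
    return sum(a(i) / i for i in range(1, n + 1)) / math.log(n)

a = lambda i: 2 + 1 / math.sqrt(i)  # a_n -> 2
for n in (10**2, 10**3, 10**5):
    print(n, weighted_avg(a, n))  # drifts down toward 2 as n grows
```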
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $\alpha(f)=|f(0)|^2+\int_0^1f(x)^2dx$; find the maximum value of $\alpha(fg)/\alpha(g)$ Let $\alpha(f)=|f(0)|^2+\int_0^1|f(x)|^2dx$ be a functional on $C^2[0,1]$, the space of real functions on $[0,1]$ whose second-order derivative is continuous. Determine $\textrm{sup}\{\alpha(fg)/\alpha(g): g\not=0\}$, where $f$ is fixed in $C^2[0,1]$.
If we consider $||f||:=\sqrt{\alpha(f)}$, then $||f||$ is indeed a norm on $C^2[0,1]$, so we may consider $||fg||/||g||$ instead of $\alpha(fg)/\alpha(g)$. It seems that its sup is $||f||$, but I still have some confusion.
| The supremum is equal to $$\max\{|f(t)|^2:\ t\in[0,1]\},$$ i.e., the square of the infinity norm of $f$.
We need to maximize $\alpha(fg)$ for $g$ with $\alpha(g)=1$. So
$$
\alpha(fg)=|f(0)|^2\,|g(0)|^2+\int_0^1|f(t)|^2\,|g(t)|^2\,dt\leq\|f\|_{\infty}^2.
$$
Now if $\|f\|_\infty=|f(0)|$, choose a twice-differentiable function $g_n$ with $g_n(0)=\sqrt{1-\tfrac1n}$, $g_n(t)=0$ for all $t>\tfrac1n$, and $\int_0^1 g_n(t)^2\,dt=\tfrac1n$ (so that $\alpha(g_n)=1$). Then
\begin{align}
\alpha(fg_n)&=\left(1-\tfrac1n\right)|f(0)|^2+\int_0^1 |f(t)|^2\,|g_n(t)|^2\,dt\\[0.3cm]
&=\left(1-\tfrac1n\right)\|f\|_\infty^2+\int_0^{1/n} |f(t)|^2\,|g_n(t)|^2\,dt\\[0.3cm]
&\geq \left(1-\tfrac1n\right)\|f\|_\infty^2.
\end{align}
And if $\|f\|_\infty=|f(t_0)|$ for some $t_0>0$, fix $\varepsilon>0$. By continuity there exists $\delta\in(0,t_0)$ such that $|f(s)|^2\geq \|f\|_\infty^2-\varepsilon$ on $(t_0-\delta,t_0+\delta)$. Now construct a twice-differentiable $g$, supported on $(t_0-\delta,t_0+\delta)$ (so in particular $g(0)=0$), with $\int_0^1 |g(s)|^2\,ds =1$. Then
$$
\alpha(fg)=\int_{t_0-\delta}^{t_0+\delta}|f(s)|^2\,|g(s)|^2\,ds\geq(\|f\|_\infty^2-\varepsilon)\int_{t_0-\delta}^{t_0+\delta}|g(s)|^2\,ds=\|f\|_\infty^2-\varepsilon.
$$
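For a concrete illustration that the supremum equals $\|f\|_\infty^2$ but is only approached in the limit, take $f(t)=t$ (so $\|f\|_\infty^2=1$) and the concentrating family $g_n(t)=t^n$, for which both integrals are exact:

```python
from fractions import Fraction

# f(t) = t on [0,1], g_n(t) = t^n (smooth, with g_n(0) = 0):
#   alpha(g_n)     = 0 + integral of t^(2n)   over [0,1] = 1/(2n+1)
#   alpha(f * g_n) = 0 + integral of t^(2n+2) over [0,1] = 1/(2n+3)
# so the ratio is (2n+1)/(2n+3): strictly below 1 but tending to 1.
for n in (1, 10, 100, 1000):
    ratio = Fraction(2 * n + 1, 2 * n + 3)
    print(n, float(ratio))
```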
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Understanding the Fundamental theorem of Calculus in plain english I am learning Calculus. I am trying to understand the fundamental theorem of calculus. I am following this wikipedia article: https://en.wikipedia.org/wiki/Integral.
I am having a hard time understanding what they refer to as the the Fundamental theorem of Calculus. Could someone kindly explain to me what it is in plain english. The wikipedia article is quite gibberish.
| The FTC says that integration and differentiation are inverse operations. If you differentiate the right kind of integral, then you get the integrand back. If you integrate a derivative, you get the original function back.
D(I(f)) = f
I(D(f)) = f.
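A quick numerical illustration of both directions (a sketch using a midpoint-rule integral and a central-difference derivative; not part of the theorem itself):

```python
import math

def I(f, a, x, n=20000):
    """Approximate the integral of f from a to x (midpoint rule)."""
    h = (x - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def D(F, x, h=1e-5):
    """Approximate the derivative of F at x (central difference)."""
    return (F(x + h) - F(x - h)) / (2 * h)

f = math.sin
# D(I(f)) = f : differentiating the accumulated area recovers the integrand.
print(D(lambda x: I(f, 0.0, x), 1.0), f(1.0))
# I(D(f)) = f(b) - f(a) : integrating the derivative recovers the net change.
print(I(lambda x: D(f, x), 0.0, 1.0), f(1.0) - f(0.0))
```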
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
Question about $f(x)=\sum_{k=1}^\infty (-1)^{k+1}\sin (\frac{x}{k}) $ This function is rather peculiar. It is easy to establish the following:
$$f(x) =\sum_{k=0}^\infty (-1)^k \frac{A_{2k+1}}{(2k+1)!}\, x^{2k+1}, \mbox{ with } A_k=\Big(1-\frac{1}{2^{k}} + \frac{1}{3^{k}}- \frac{1}{4^{k}}+\cdots\Big).$$
Note that $A_1=\log 2$, and for $k>1$, we have
$$A_k= \Big(1-\frac{1}{2^{k-1}}\Big)\zeta(k)$$
where $\zeta$ is the Riemann Zeta function. Also, $f(-x) = - f(x)$ and we have the following approximation when $x$ is large, using a value of $K$ such that $x/K < 0.01$:
$$f(x) \approx \sum_{k=1}^K (-1)^{k+1}\sin \Big(\frac{x}{k}\Big) - x\cdot\sum_{k=K+1}^\infty \frac{(-1)^{k}}{k}$$
The function is smooth but exhibits infinitely many roots, maxima and minima. I am in particular interested in the following quantity:
$$g(x) = \sup_{0\leq y\leq x}f(y).$$
What is the growth rate for $g(x)$? Is it linear, sub-linear, or super-linear? Another question of interest is the average spacing between two roots or two extrema.
Below are two plots of $f(x)$, the first one for $0\leq x\leq 200$, the second one for $0\leq x\leq 2000$.
Addendum: Failed attempt to solve this
I used the Euler-Maclaurin summation formula to get a good approximation for $f(x)$ when $x$ is large, and this leads to
$$f(x) \approx \int_1^\infty \Big(\sin\frac{x}{2u} - \sin\frac{x}{2u+1}\Big) du.$$
A closed form for this integral exists, involving the cosine integral, see WolframAlpha here. Lots of asymptotic formulas are available (see here) but when I apply them, I end up with $f(x)$ being bounded, which is very clearly not the case based on my observations.
As an illustration, below is the computation of $f(x)$ for $x = 52,000,001$. The first chart shows $f(x)$ based on the first $n=2000$ terms in the series. Here the X-axis represents $n$, and the Y-axis represents $f(x)$ for the particular value of $x$ in question, when using a growing number of terms. In the second chart, $n$ goes to $200,000$. Stability is reached after adding about $4,100$ terms, and oscillations are slowly dampening then.
One promising approach is this. Let
$$ f_k(x)=\sum_{i=1}^k (-1)^{i+1}\sin \Big(\frac{x}{i}\Big).$$ Define $h_k(x) =\frac{1}{2}(f_k(x) + f_{k-1}(x))$. Then $f(x) = \lim_{k\rightarrow\infty} h_k(x)$. The iterates $h_k$ are much smoother than the $f_k$, and convergence is much faster.
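For instance, a quick numerical check (an informal sketch) of how much faster the averaged iterates stabilize:

```python
import math

def f_k(x, k):
    """Partial sum f_k(x) = sum_{i=1}^k (-1)^(i+1) sin(x/i)."""
    return sum((-1) ** (i + 1) * math.sin(x / i) for i in range(1, k + 1))

def h_k(x, k):
    """Averaged iterate h_k = (f_k + f_{k-1}) / 2."""
    return f_k(x, k) - 0.5 * (-1) ** (k + 1) * math.sin(x / k)

x = 10.0
print(abs(f_k(x, 1001) - f_k(x, 1000)))  # raw partial sums still oscillate, ~ 1e-2
print(abs(h_k(x, 1001) - h_k(x, 1000)))  # orders of magnitude smaller
```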
| Distribution of Roots - Example
There was a question about the distribution of roots and extrema values of this function. From the analysis above, it is sufficient to study the first $4K^{\prime}$ elements of the series, as they should give a good approximation.
It is intuitive to group values of $x$ having the same integral part of $K^{\prime}$ and to check the distribution of function values for them.
Our study, by no means a complete one, considers this sequence of values
$$x = \pi / 4 * i, i = 10000, 10001, ..., 100000.$$
We check how terms in this sum
$$
2 \sum_{k < 2K^{\prime}} \sin\big(\frac{x}{4k(2k-1)}\big) \cos\big(\frac{x(4k -1)}{4k(2k-1)}\big)
$$
balance out: if sine and cosine have the same sign for most of the pairs, then we could expect a local maximum, and a local minimum otherwise. The root values must correspond to a more or less equal number of pairs with same and opposite signs.
The corresponding number of elements in the sum above for values of $x$ ranges from 63 to 197.
A plot below shows the distribution of function values by total number of elements in the sum
We see that we have at least one root for $x$ values with the same number of $2K^{\prime}$ pairs.
Local Minima and Maxima of $f$
In this section we will show that the distribution of local minima and maxima values for $f$ is
more or less homogeneous, i.e. evenly spread along the $x$-axis.
For that, let's consider the first derivative of $f$
$$
f^{\prime}(x) = \sum_{k} \frac{1}{2k-1} \cos{\frac{x}{2k-1}} - \frac{1}{2k} \cos{\frac{x}{2k}} =
$$
$$
\sum_{k} \big(\frac{1}{2k-1} - \frac{1}{2k}\big) \cos{\frac{x}{2k-1}} + \frac{1}{2k} \cos{\frac{x}{2k-1}} - \frac{1}{2k} \cos{\frac{x}{2k}} =
$$
$$
\sum_{k} \frac{1}{2k(2k-1)} \cos{\frac{x}{2k-1}} + \frac{1}{2k} \big(\cos{\frac{x}{2k-1}} - \cos{\frac{x}{2k}}\big) =
$$
$$
\sum_{k} \frac{1}{2k(2k-1)} \cos{\frac{x}{2k-1}} - \frac{1}{k} \sin{\frac{x}{4k(2k-1)}} \sin{\frac{x(4k-1)}{4k(2k-1)}}.
$$
Now the last formula suggests that the series for the derivative converges more rapidly than that of the original function; see my previous answer.
Indeed, the first term converges absolutely, and the last term suggests rapidly decreasing oscillations. So a good approximation of the derivative depends on just a few terms of the series.
We are going to show an equally spaced sequence of $x$, where $f^{\prime}(x)$ changes signs 59% of the time between two consecutive
values. Let's consider interval $[51,989,419; 52,009,776]$ and plot the function there for 1000 uniformly selected points.
We used 5099 pairs (see the definition in my previous answer) to approximate function values.
Now let's turn to a sequence along which the derivative oscillates about zero.
The first value is $x_{0} = 51,989,402$, which is approximately $\pi/2 \pmod{2\pi}$.
Then $x_{i} = x_{0} + 2\pi i$, $i = 0, 1, \ldots$, is defined on the interval above, and it has 3247 values. Here is a plot of derivative values for this sequence
The function oscillates about zero, and for 1920 points the derivative changes sign between the current value and the next one, suggesting a local extremum somewhere in between.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Maximum number of iterations of a simple algorithm Suppose there is a 0-1 string of length n. We can perform the following operation on the string:
We can choose two zeros and invert the subsequence between them. The inversion includes the two zeros as well. For example, if the string is 011010 and we choose the zeros at positions 1 and 4, it becomes 100110. We can also choose just one 0 and turn it into 1.
It can be proved that after some iterations the whole string will only consist of 1s.
So my question is: What is the maximum number of iterations we can perform before it becomes the all 1 string.
My approach was to construct a sequence of iterations that seems to be optimal, but I can't prove that it is.
(Obviously the maximum can be achieved if we start from the all 0 string.)
If the length of the string is even: I would choose the middle 2 bits and change them to 11 in two iterations $(00 \rightarrow 01 \rightarrow 11)$. After that I would reset the middle by choosing the bits next to these two $(0110 \rightarrow 1001)$. Then I could do the first step again, and so on.
If n is an odd number, I would do almost the same: convert the middle bit into a one, then reset it using the two bits next to it. $(00000 \rightarrow 00100 \rightarrow 01010 \rightarrow 01110 \rightarrow 10001 \rightarrow 10101 \rightarrow 11011 \rightarrow 11111)$
We can calculate that the number of iterations for this algorithm is:
$$\begin{cases}
2^{{\frac{n+1}{2}}}-1, & \text{for odd } n \\
2^{\frac{n}{2}}+2^{\frac{n}{2}-1}-1, & \text{for even } n
\end{cases}$$
So we can conclude that the maximum number of iterations is greater than this amount. But I think this is the maximum, so this sequence of iterations optimal, but I can't prove it.
Could you please give me some hints on how to prove this, or if it is not true, give me a counterexample.
| Let $n$ be the length of the word. For even $n$, a working idea is to build a metric for the word so that every transition increases the metric, and so that the strategy you proposed increases the metric in every step exactly by $1$.
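As a sanity check (not a proof), one can brute-force the longest path for small $n$: every operation strictly increases the string read as a binary number (the most significant changed bit always goes from 0 to 1), so the state graph is acyclic and a memoized search terminates. The `conjecture` function below encodes the counts from the question; I have verified agreement by hand only for $n \le 3$, and the printout lets you check further values:

```python
from functools import lru_cache

def moves(s):
    """All strings reachable from bit-tuple s in one operation."""
    zeros = [i for i, b in enumerate(s) if b == 0]
    out = []
    for i in zeros:                          # turn a single 0 into a 1
        t = list(s)
        t[i] = 1
        out.append(tuple(t))
    for a in range(len(zeros)):              # pick two 0s, invert everything between them
        for b in range(a + 1, len(zeros)):
            i, j = zeros[a], zeros[b]
            t = list(s)
            for k in range(i, j + 1):
                t[k] = 1 - t[k]
            out.append(tuple(t))
    return out

@lru_cache(maxsize=None)
def longest(s):
    """Length of the longest operation sequence from s to the all-ones string."""
    if all(s):
        return 0
    return 1 + max(longest(t) for t in moves(s))

def conjecture(n):
    return 2 ** ((n + 1) // 2) - 1 if n % 2 else 2 ** (n // 2) + 2 ** (n // 2 - 1) - 1

for n in range(1, 8):
    print(n, longest(tuple([0] * n)), conjecture(n))
```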
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Decomposing a function's variable and adding up the partials of the parts equals the original partial? Theorem
Let $f(x)$ and $g(x_1, x_2, \ldots, x_n)$ be differentiable and equal when
$x_1 = x_2 = \ldots = x_n = x$. Then
$$\frac{\partial f}{\partial x} = \frac{\partial g}{\partial x_1} + \frac{\partial g}{\partial x_2} + \ldots + \frac{\partial g}{\partial x_n}$$
when $x_1 = x_2 = \ldots = x_n = x$.
Example
\begin{align*}
f(x) &= x^3 + x^2 + x \\
g(x_1, x_2, x_3) &= x_1 x_2 x_3 + x_1 x_2 + x_1
\end{align*}
Now the sums of partials are shown to equal the partial of the original polynomial when all the $x_i$'s are equal to $x$.
\begin{align*}
\frac{\partial f}{\partial x} &= \frac{\partial g}{\partial x_1} + \frac{\partial g}{\partial x_2} + \frac{\partial g}{\partial x_3} \\
3x^2 + 2x + 1 &= (x_2 x_3 + x_2 + 1) + (x_1 x_3 + x_1) + (x_1 x_2) \\
&= (x_1 x_2 + x_1 x_3 + x_2 x_3) + (x_1 + x_2) + 1\\
&= 3x^2 + 2x + 1 \tag*{$x_i = x$}
\end{align*}
Use Case
The Backpropagation Through Time algorithm used for RNNs seems to assume this when it calculates the partial of the error function $E$ with respect to a certain weight matrix by adding the partials with respect to the matrix at each time step.
$$\frac{\partial E}{\partial W_{hh}} = \frac{\partial E}{\partial W_{hh_t}} + \frac{\partial E}{\partial W_{hh_{t-1}}} + \ldots +\frac{\partial E}{\partial W_{hh_{t-s}}}$$
Here $W_{hh}$ is the weight matrix between the hidden layers of two timesteps, $t$ is the latest timestep, and $s$ is the number of timesteps backwards at which the backpropagation is truncated.
Question
What is this property of partials called? And where might I find a proof of it?
Or alternatively, how might it be proven?
| Define $\mathbf{X}(x)=(x,\ldots,x)$; now we have $f(x) =g(\mathbf{X}(x))$. Try taking an $x$ derivative of both sides, making sure to use the multivariable chain rule, and you'll obtain your theorem!
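A numerical check of the example with central finite differences (nothing here is specific to the example; any smooth $g$ agreeing with $f$ on the diagonal works):

```python
def f(x):
    return x**3 + x**2 + x

def g(x1, x2, x3):
    return x1 * x2 * x3 + x1 * x2 + x1

h, x = 1e-6, 2.0
# f'(x) versus the sum of the three partials of g evaluated on the diagonal.
df = (f(x + h) - f(x - h)) / (2 * h)
dg = ((g(x + h, x, x) - g(x - h, x, x))
      + (g(x, x + h, x) - g(x, x - h, x))
      + (g(x, x, x + h) - g(x, x, x - h))) / (2 * h)
print(df, dg)  # both ≈ 17 = f'(2)
```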
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
a PDE problem involving divergence
Consider the PDE on a bounded smooth domain $\Omega \subset R^n$, $\triangle u(x)= 0$ on $\Omega$, $\frac{\partial u(x)}{\partial n}|_{\partial \Omega}=g(x)$. Prove that if it admits a smooth solution u, we have $\int_{\partial \Omega} g d \sigma=0$
I know that $\triangle u=\operatorname{div}(\nabla u)$. Can someone give me a hint to do this?
| We have
$\nabla \cdot \nabla u = \triangle u = 0 \tag 1$
on $\Omega$. Thus the divergence theorem yields
$\displaystyle \int_{\partial \Omega} \nabla u \cdot \mathbf n \; dS = \int_\Omega \nabla \cdot \nabla u \; dV = \int_\Omega 0 \; dV = 0, \tag 2$
where $\mathbf n$ is the outward-pointing unit normal vector field on $\partial \Omega$ and $dS$ is the area element on $\partial \Omega$; but on $\partial \Omega$ we have
$\nabla u \cdot \mathbf n = \dfrac{\partial u}{\partial n} = g, \tag 3$
whence
$\displaystyle \int_{\partial \Omega} g \; dS = 0. \tag 4$
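A quick numerical illustration on the unit disk (a sketch; $u(x,y)=x^2-y^2$ is harmonic, and on the unit circle $g = \partial u/\partial n = 2\cos 2t$):

```python
import math

# On the unit circle, parametrized by t, the outward normal is (cos t, sin t), so
#   g(t) = grad(u) . n = (2x, -2y) . (cos t, sin t) = 2cos^2(t) - 2sin^2(t) = 2cos(2t).
n = 100000
dt = 2 * math.pi / n
flux = sum(2 * math.cos(2 * ((k + 0.5) * dt)) * dt for k in range(n))
print(flux)  # ≈ 0, as the theorem requires
```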
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3622912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are there geodesic triangles on surfaces with non-constant curvature with angle sum 180? I've been reading about differential geometry and the Gauss-Bonnet theorem to write a paper for my geometry class and am interested specifically in geodesic triangles on surfaces.
I was wondering if it is possible to create a geodesic triangle on a surface with non-constant curvature such that the interior angles sum to $π$, $$\theta _{1} + \theta_{2}+\theta_{3}= \pi,$$ even though the Gaussian curvature $K \neq 0.$
By this I mean, is it possible to place part of the triangle on a positive curvature section of the surface and part of it on a negative curvature section of the surface so that one or two vertices are affected by the positive curvature and the other two or one vertices are affected by the negative curvature, causing the interior angle sum to still be $π$?
For example, here's a surface with positive and negative curvature from Kristopher Tapp's Differential Geometry of Curves and Surfaces. Can the top triangle be moved downward so its bottom two angles, $\theta_{1}$ and $\theta_{2}$, are on positive curvature and the top angle $\theta_{3}$ is on negative curvature so that $\theta_{1} + \theta_{2} + \theta_{3} = \pi$ and its sides are still geodesics?
If so, does anyone have any resources they know of that I could read that specifically talk about this?
| Just consider your favorite flat triangle $T$ in a Euclidean plane. Then consider a $C^\infty$ function $f: T\to \mathbf R$ which is $0$ in a neighbourhood of the boundary of $T$. Now the surface which is the graph of this function has $>0$ curvature near the points where $f$ is maximal or minimal. The curvature cannot be everywhere positive, because by Gauss-Bonnet the integral of the curvature is $0$ (the boundary is geodesic and the angle sum is $\pi$). The boundary is still geodesic, as you did not change the metric near it, and the angles did not change.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3623080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\lim_{x\to 1}\frac{x^{x^a}-x^{x^b}}{\ln^2 x}$
Evaluate $$\lim_{x\to 1}\frac{x^{x^a}-x^{x^b}}{\ln^2 x}$$
I've tried to use fundamental limits to solve it and conclude that the limit is $a-b$ as follows:
\begin{align*}
\lim_{x\to 1}\frac{e^{x^a\ln x}-e^{x^b\ln x}}{\ln^2 x}&=\lim_{x\to 1}\frac{e^{x^a\ln x}-1-(e^{x^b\ln x}-1)}{\ln^2 x}\\
&=\lim_{x\to 1}\left(\frac{e^{x^a\ln x}-1}{\ln^2 x}-\frac{e^{x^b\ln x}-1}{\ln^2 x}\right)\\
&=\lim_{x\to 1}\frac{x^a-x^b}{\ln x}=\lim_{x\to 1}\left(\frac{a(e^{a\ln x}-1)}{a\ln x}-\frac{b(e^{b\ln x}-1)}{b\ln x}\right)\\
&= a-b
\end{align*}
I used the fact that $\lim_{x\to a}\dfrac{e^{u(x)}-1}{u(x)}=1$ when $\lim_{x\to a} u(x)=0$.
| It's not clear to me what happens in the step from blue to red:
$$\begin{align*}
\lim_{x\to 1}\frac{e^{x^a\ln x}-e^{x^b\ln x}}{\ln^2 x}&=\lim_{x\to 1}\frac{e^{x^a\ln x}-1-(e^{x^b\ln x}-1)}{\ln^2 x}\\
&=\color{blue}{\lim_{x\to 1}\left(\frac{e^{x^a\ln x}-1}{\ln^2 x}-\frac{e^{x^b\ln x}-1}{\ln^2 x}\right)}\\
& = \color{red}{\lim_{x\to 1}\frac{x^a-x^b}{\ln x}}=\lim_{x\to 1}\left(\frac{a(e^{a\ln x}-1)}{a\ln x}-\frac{b(e^{b\ln x}-1)}{b\ln x}\right)\\
&= a-b
\end{align*}$$
If you take $x = e^u$, then:
$$\lim_{x\to 1}\frac{e^{x^a\ln x}-e^{x^b\ln x}}{\ln^2 x}
=\lim_{u\to 0}\frac{e^{ue^{au}}-e^{ue^{bu}}}{u^2}$$
Now if $u \to 0$, then $ue^{au} \to 0$ so you can use a Taylor expansion on $e^{ue^{au}}$ and $e^{ue^{bu}}$.
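For completeness, here is a sketch of how that expansion finishes the computation (keeping terms to order $u^2$):
$$e^{ue^{au}}=e^{u+au^2+O(u^3)}=1+u+\Big(a+\tfrac12\Big)u^2+O(u^3),$$
and likewise with $b$ in place of $a$, so
$$\frac{e^{ue^{au}}-e^{ue^{bu}}}{u^2}=\frac{(a-b)u^2+O(u^3)}{u^2}\longrightarrow a-b,$$
which recovers the claimed value $a-b$.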
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3623286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
f(x) increasing or decreasing
Let $f(x) = x + 2x^2\sin(1/x)$, with $x\ne0$,
and $f(x)= 0$ when $x= 0$. Determine if $f(x)$ is increasing or decreasing at $x= 0$.
My attempt: using first principles it is easy to see that $f'(0)=1$. However, if we compute the derivative of $f(x)$ as $f'(x) = 1+4x\sin(1/x) -2\cos(1/x)$, then since $f'(1/(2k\pi)) = -1$ for integral $k$, there is no interval around $0$ on which $f(x)$ is increasing. So it shouldn't make sense that $f(x)$ is increasing at $x=0$. How should we handle this? Any help is greatly appreciated.
| $$f(0)=0$$ and
$$f(\epsilon)=\epsilon+2\epsilon^2\sin\frac1{\epsilon}=\epsilon\left(1+2\epsilon\sin\frac1\epsilon\right)>f(0)$$ for all $0<\epsilon<\frac12$. Similarly $f(-\epsilon)<f(0)$, and the function is indeed increasing at $x=0$.
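A small numerical check of both phenomena (a sketch; the points $x=1/(2k\pi)$ are where $\cos(1/x)=1$, so $f'(x)=-1$ there):

```python
import math

def f(x):
    return x + 2 * x * x * math.sin(1 / x) if x != 0 else 0.0

# f(eps) > 0 > f(-eps) for every 0 < eps < 1/2, so f is increasing at 0 ...
for eps in (0.3, 1e-3, 1e-7):
    assert f(eps) > 0 > f(-eps)

# ... yet arbitrarily close to 0 there are points with negative derivative.
h = 1e-12
for k in (10, 1000):
    x = 1 / (2 * k * math.pi)
    assert (f(x + h) - f(x - h)) / (2 * h) < 0
print("ok")
```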
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3623433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there a functor $F$ of left modules preserving $\oplus$ by arbitrary isomorphism, but its restriction to fin. gen. proj. modules isn't additive? Are there rings $R$, $S$ and a functor $F:{_R\textbf{Mod}}\to{_S\textbf{Mod}}$ such that
*
*For all left $R$-modules $M,N$, we have $F(M\oplus N)\cong F(M)\oplus F(N)$
via an arbitrary isomorphism, and such that $F(0)=0$
*$F(R)$ is a finitely generated, projective left $S$-module,
but when we restrict and corestrict $F$ to the full subcategory of finitely generated projective left $R$- resp. $S$-modules, which we denote by $\mathcal{P}(R)$ resp. $\mathcal{P}(S)$, then this restricted functor $F:\mathcal{P}(R)\to\mathcal{P}(S)$ isn't additive, in the sense that it doesn't preserve split exact sequences?
With the help of this forum, I found out that when $S$ is stably finite (i.e whenever $M\oplus S^n\cong S^n$ for a left $S$-module $M$ and a $n\geq 0$, then $M=0$), then such a functor $F$ as above cannot exist. The proof of this is very similar to the answer of Jeremy Rickard in this post, and the condition of $S$ being stably finite is exactly what makes the proof work in this more general situation. Also note that if $S$ is commutative then it is stably finite.
One idea I had was to use the functor $F$ constructed in this answer, and give the abelian group $FA$ somehow naturally the structure of a e.g. $\operatorname{End}_{\mathbb{Z}}(\mathbb{Z}^{\oplus \mathbb{N}})$ module, such that $F\mathbb{Z}$ becomes finitely generated and free, but maybe this approach is too optimistic.
| Let $R$ be a field $k$, $S=\operatorname{End}_k(k^{\oplus\mathbb{N}})$, and $F(M)=S\otimes_k (M\otimes_k M)$. Note that $S$ has the property that $S\cong S^n$ as an $S$-module for all finite $n>0$. If $M$ is nontrivial and finite-dimensional, then so is $M\otimes_k M$, so $F(M)\cong S$. If $M$ is infinite-dimensional, then $M\otimes_k M$ has the same dimension, so $F(M)$ is free over $S$ of rank $\dim M$. It follows that $F(M\oplus N)\cong F(M)\oplus F(N)$ for any $M$ and $N$. However, $F$ is not additive, even when restricted to finite-dimensional vector spaces.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3623570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A structure which looks almost like a semi-ring. Today I have encountered an interesting structure, similar to that of a ring or a semi-ring.
It is a structure $(S, +, \cdot, 1)$, where $S$ is a set, $+, \cdot$ are binary operations, and $1\in S$.
$(S, \cdot, 1)$ is a commutative monoid, $(S, +)$ is a commutative semigroup, and $\cdot$ is distributive with respect to $+$, i.e. $a(b+c) = ab+ac$ for any $a, b, c\in S$.
Does this structure have any names in the literature?
| Hopefully, I am not saying anything stupid here.
Consider such a set $S$. Define $R= S \cup \{ 0_R \}$ with the operations extended by
$$
0_R+x =x \\
0_R\cdot x= 0_R$$
Then $R$ becomes a commutative semi-ring without zero divisors (i.e. $xy=0$ implies $x=0$ or $y=0$).
Conversely, let $R$ be any commutative semi-ring without zero divisors. Then
$$S= R \backslash \{ 0 \}$$
satisfies your given conditions.
In other words, your structures are just commutative semi-rings without zero divisors, with the zero removed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3623902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove a formula for the Gamma function I made the following observation about the Gamma function.
Suppose
$$x=a+i b,a\in \mathbb{R},b\in \mathbb{R}$$
Then
$$
\left| \cos \left(\frac{\pi (a+i b)}{2}\right) \Gamma (a+i b)\right|\to\left| \sqrt{\frac{\pi }{2}} (a+i b)^{a-\frac{1}{2}}\right|
$$
When
$$
a\in [0,1],b\to\infty
$$
How can I prove this?
| Let
$$
x=a +i b,a \in \mathbb{R},b\in \mathbb{R}
$$
Use following formula from
Bateman, Harry (1953), Higher Transcendental Functions, Volume I, p. 47, (6):
$$
\left| \Gamma (x)\right| \to \left| \sqrt{2 \pi } e^{-\frac{1}{2} (\pi b)} x^{a -\frac{1}{2}}\right|,a \in [0,1]
$$
Expand cosine
$$
\left| \cos \left(\frac{\pi x}{2}\right) \Gamma (x)\right| \to \left| \sqrt{2 \pi } e^{-\frac{1}{2} (\pi b)} x^{a -\frac{1}{2}} \left(\frac{1}{2} e^{-\frac{1}{2} i \pi (a +i b)}+\frac{1}{2} e^{\frac{1}{2} i \pi (a +i b)}\right)\right|
$$
Due to the monotonicity of the two factors $\left| e^{-\frac{1}{2} (\pi b)} \left(\frac{1}{2} e^{-\frac{1}{2} i \pi (a +i b)}+\frac{1}{2} e^{\frac{1}{2} i \pi (a +i b)}\right) \right|$ and $\left| \sqrt{2 \pi } x^{a-\frac{1}{2}}\right|$, $a \in [0,1]$, we can take their limits separately.
Take the limit of the exponential factor:
$$
\lim_{b\to \infty } \left[e^{-\frac{1}{2} (\pi b)} \left(\frac{1}{2} e^{-\frac{1}{2} i \pi (a +i b)}+\frac{1}{2} e^{\frac{1}{2} i \pi (a +i b)}\right)\right] = \frac{1}{2} e^{-\frac{1}{2} i \pi a }
$$
And then
$$
\lim_{b\to \infty } \left[\left| \frac{1}{2} e^{-\frac{1}{2} i \pi a }\right| \right]=\frac{1}{2}
$$
Hence
$$
\left| \cos \left(\frac{\pi x}{2}\right) \Gamma (x)\right| \to \left| \frac{1}{2} \sqrt{2 \pi } x^{a -\frac{1}{2}}\right|
$$
It equals
$$
\left| \cos \left(\frac{\pi x}{2}\right) \Gamma (x)\right| \to \left| \sqrt{\frac{\pi }{2}} x^{a -\frac{1}{2}}\right|, a \in [0,1]
$$
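As a numerical sanity check (not a proof), consider the special case $a=\tfrac12$: there $x^{a-\frac12}=1$, and the reflection formula gives the closed form $|\Gamma(\tfrac12+ib)|^2=\pi/\cosh(\pi b)$, so the claim reduces to $\left|\cos\left(\frac{\pi x}{2}\right)\Gamma(x)\right|\to\sqrt{\frac{\pi}{2}}$, which can be evaluated with elementary functions:

```python
import cmath
import math

def lhs(b):
    """|cos(pi*x/2) * Gamma(x)| at x = 1/2 + i*b, using
    |Gamma(1/2 + i*b)| = sqrt(pi / cosh(pi*b)) from the reflection formula."""
    gamma_abs = math.sqrt(math.pi / math.cosh(math.pi * b))
    cos_abs = abs(cmath.cos(math.pi * (0.5 + 1j * b) / 2))
    return gamma_abs * cos_abs

target = math.sqrt(math.pi / 2)
for b in (5.0, 50.0, 200.0):
    print(b, lhs(b), target)  # agrees to float precision for a = 1/2
```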
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\int_{-5}^{\sqrt{x}}(\frac{\cos t}{t^{10}})dt$
Evaluate $y=\int_{-5}^{\sqrt{x}}(\frac{\cos t}{t^{10}})dt$
I've tried differentiating both parts of the fraction until the denominator was 1, and then integrating that by parts, but this was marked wrong. I know I can't just integrate by parts right off the bat, because differentiating $t^{-b}$ always yields another reciprocal power; it would just make things worse and worse.
How can I solve this?
| First note that this integral is improper because $-5 < 0<\sqrt x.$ Split off a bad piece near $0$ by writing $c=\min(5,\sqrt x,\pi/3)$, so that $$\int_{-5}^\sqrt x \frac{\cos t}{t^{10}}dt=\int_{-5}^{-c} \frac{\cos t}{t^{10}}dt+\int_{-c}^c \frac{\cos t}{t^{10}}dt+\int_c^\sqrt x \frac{\cos t}{t^{10}}dt.$$
Of the three integrals on the right side of the equation above, the first and third can be expressed in terms of the special functions $\operatorname{Si}$ and $\operatorname{Ci}$, called the sine integral and cosine integral, but we shall see that there is no need to do so. For now, note that in both the first and third integrals a bounded function is integrated over a finite interval, so both are definite real numbers.
Now look at the second of the three integrals on the right side. That integral is improper, with a discontinuity at $0$. By definition, $$\int_{-c}^c \frac{\cos t}{t^{10}}dt=\lim_{u,v \to 0}\left(\int_{-c}^u \frac{\cos t}{t^{10}}dt+\int_v^c \frac{\cos t}{t^{10}}dt\right)$$ where $u<0,v>0.$ The function $\frac{\cos t}{t^{10}}$ is even, so it is enough to look at the integral $$\int_v^c \frac{\cos t}{t^{10}}dt.$$ (Alternatively, it will turn out that all we have to do is note that $\frac{\cos t}{t^{10}}$ is positive on $[-c,u]$.)
On the interval $[v,c]$ we have $\cos t \ge 1/2$, so $$\int_v^c \frac{\cos t}{t^{10}}dt \ge \frac{1}{2}\int_v^c \frac{1}{t^{10}}dt,$$ and $$\lim_{v \to 0^+}\int_v^c\frac{1}{t^{10}}dt=\lim_{v \to 0^+}\frac{1}{9}\left(\frac{1}{v^9}-\frac{1}{c^9}\right)=\infty.$$ Thus $$\lim_{v\to 0^+}\int_v^c \frac{\cos t}{t^{10}}dt=\infty.$$ We conclude that
$$\int_{-5}^\sqrt x \frac{\cos t}{t^{10}}dt=\infty.$$
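One can also see the divergence numerically (a rough midpoint-rule sketch; the values track $\frac{1}{9v^9}$):

```python
import math

def tail(v, c=0.5, n=100000):
    """Midpoint-rule approximation of the integral of cos(t)/t^10 from v to c."""
    h = (c - v) / n
    return sum(math.cos(v + (k + 0.5) * h) / (v + (k + 0.5) * h) ** 10 * h
               for k in range(n))

for v in (0.1, 0.05, 0.025):
    print(v, tail(v), 1 / (9 * v**9))  # blows up roughly like 1/(9 v^9) as v -> 0
```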
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Finding det(A) with standard basis vectors For example, let $e_1 = [1 \quad 0]$ and $e_2 = [0 \quad 1]$ be standard basis vectors. $A$ is a $2 \times 2$ matrix, $Ae_1 = [-3\quad 7]$, and $Ae_2 = [3 \quad 5]$.
How do I find the $\det(A)$?
| Since A is a 2x2 matrix, simply use the formula for finding determinants for 2x2 matrices.
$$\begin{bmatrix}a&b\\c&d\end{bmatrix}$$
The formula is $ad-bc$. In this case, $Ae_1 = \begin{bmatrix}-3\\7\end{bmatrix}$
and $Ae_2 = \begin{bmatrix}3\\5\end{bmatrix}$.
Combine these vectors to get: $\begin{bmatrix}-3&3\\7&5\end{bmatrix}$
Using the formula, $\det(A) = (-3 \cdot 5) - (3 \cdot 7) = (-15) - (21) = -36$.
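In code, the same computation (the two given images are the columns of $A$):

```python
# Columns of A are A.e1 = (-3, 7) and A.e2 = (3, 5).
a, b = -3, 3   # first row of A
c, d = 7, 5    # second row of A
det = a * d - b * c
print(det)  # -36
```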
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
About the theorem of dual basis Theorem for dual basis: Let $V$ be a finite dimensional vector space and $\beta=\{u_1,u_2,...,u_n\}$ be an ordered basis for $V$. Then there exists a basis $\beta^*=\{f_1,...,f_n\}$ of $V^*$ such that $f_i(u_j)=\delta_{ij}$.
So what is the significance of $f_i(u_j)=\delta_{ij}$ for a linear functional? Is $f_i$ a coordinate function, similar to a coordinate vector in a vector space?
| In all of the contexts I've seen it being used, you have
$$\delta_{ij} = \begin{cases}
1, & \text{ for } i = j \\
0, & \text{ for } i \neq j
\end{cases}\tag{1}\label{eq1A}$$
So yes, $f_i$ is precisely the $i$-th coordinate function with respect to $\beta$: if $v=\sum_{j} a_j u_j$, then $f_i(v)=a_i$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Hilbert's Hotel Paradox: Guests moving to new room every day? Suppose there are infinitely many coaches with infinitely many members in each coach. They stay at the hotel for infinitely many days. I know that guests can be accommodated using various methods like the prime powers method, but there's a slight variation in the question which is that the guests have to change their room every day such that one guest can't occupy the same room again (i.e., they have to occupy unique rooms every day). How can we achieve that?
I tried solving the problem using the following method:
*
*I allotted rooms using the prime powers method.
*The next day, guests move from their current room $x$ to the new room $x+c$.
I'm struggling after this step. Can someone please help me out?
| A far simpler approach than prime powers is as follows. Number the rooms starting from $0$. Each day,
*
*guests in even-numbered rooms move two rooms up
*guests in odd-numbered rooms move two rooms down, except for the one in room $1$, who moves to room $0$
This creates an infinite cycle linking every room, on which all guests move:
$$\dots\to5\to3\to1\to0\to2\to4\to6\cdots$$
So not only is it possible to have all guests occupy different rooms on infinitely many days, it is possible to achieve full utilisation while doing so for all eternity.
If all rooms can only ever be occupied by at most one guest, the following construction (also simpler than prime powers) still ensures that every room is eventually used. Arrange the rooms in an array like this, and this time start from $1$:
$$\begin{array}{cccccc}
1&2&4&7&11&\dots\\
01&2&4&7&11&\dots\\
3&5&8&12&17&\dots\\
6&9&13&18&24&\dots\\
10&14&19&25&32&\dots\\
15&20&26&33&41&\dots\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}$$
On the first day, the guests stay in triangular-numbered rooms, i.e. the first column of the above array. Each subsequent day, all guests move to the room that is to their immediate right in the array, e.g. $6$ moves to $9$, then to $13$ and $18$, etc.
Both solutions here are based on canonical bijections to $\mathbb N$, from $\mathbb Z$ and $\mathbb N^2$ respectively.
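A small simulation of the first scheme (a sketch; rooms are numbered from 0 as in the answer):

```python
def move(room):
    """One day's move: evens go two up; odds go two down, except room 1 -> room 0."""
    if room % 2 == 0:
        return room + 2
    return 0 if room == 1 else room - 2

# Each guest visits pairwise-distinct rooms day after day ...
r, seen = 7, set()
for _ in range(100):
    assert r not in seen
    seen.add(r)
    r = move(r)

# ... and each day's reassignment is injective: no two guests collide.
images = [move(room) for room in range(1000)]
assert len(set(images)) == len(images)
print("ok")
```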
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Combinations problem - Finding the number I would like to ask for confirmation of my thinking regarding the following exercise:
One man has 30 different statues, 27 genuine and 3 fake. He sold 10 of these statues in museum A, 10 in museum B and 10 in museum C. What is the probability that each of these museums bought fake statue from the man?
I found the probability for each museum, through combinations, that is :
$$
\frac{{3 \choose 1} {27 \choose 9}+{3 \choose 2 } {27 \choose 8}+{3 \choose 3} {27 \choose 7}}{{30 \choose 10}}
$$
Is my way of thinking correct?
Thank you very much in advance.
| Your answer seems wrong. Although I am unable to judge from your answer exactly how you arrived at it, here's a hint for how you could have approached the problem:
Just in case you don't know: the number of ways to divide $n$ objects into $n_1$ groups of $m_1$ objects, $n_2$ groups of $m_2$ objects, and so on up to $n_k$ groups of $m_k$ objects, such that $\sum_i n_i m_i = n$, can be given as
$$\frac{n!}{(m_1 !)^{n_1} (n_1 !) \cdots (m_k !)^{n_k} (n_k !)}$$
(This result can be arrived at using simple product rules and a bit of intuition or more formally by using set theory and providing appropriate bijections)
You can use this to frame your answer with the following thought process:
Compute the total number of possible ways to distribute the statues (without any restrictions) and use this as the denominator of your probability fraction.
Compute the number of ways to distribute the 27 genuine pieces into 3 groups of 9 each. Also, compute the number of ways to distribute the 3 fake pieces into 3 groups of 1 each. Multiply these two counts (product rule) to get the total number of ways to distribute the 30 pieces such that each museum gets 9 genuine pieces and 1 fake one. This gives you the numerator.
Finally, obtain the probability in terms of a fraction.
Hope this helps!
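For what it's worth, the recipe above can be carried out with exact arithmetic; the sequential hypergeometric product in the second computation is my own cross-check, not part of the hint itself:

```python
from math import comb
from fractions import Fraction

# Multinomial count: museums are distinguishable, so the favourable count is
# 3! ways to hand out the fakes times 27!/(9! 9! 9!) for the genuine pieces.
favourable = 6 * comb(27, 9) * comb(18, 9)
total = comb(30, 10) * comb(20, 10)          # 30!/(10! 10! 10!)
p_multinomial = Fraction(favourable, total)

# Cross-check (my own route): museum A gets exactly one fake, then museum B
# gets exactly one of the remaining two fakes among the remaining 20 statues.
p_sequential = (Fraction(comb(3, 1) * comb(27, 9), comb(30, 10))
                * Fraction(comb(2, 1) * comb(18, 9), comb(20, 10)))

assert p_multinomial == p_sequential == Fraction(50, 203)
```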
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Cramer-Rao bound for LS estimator It's a problem from Machine Learning: A Bayesian And Optimization Perspective (problem 3.7):
Derive the Cramer-Rao bound for the LS estimator, when the training data result from the linear model $$y_n=\theta x_n+\eta_n, n=1, 2, ..., N$$ where $x_n$ and $\eta_n$ are i.i.d sample of a zero mean random variable, with variance of $\sigma^2_x$ and a Gaussian one with zero mean and variance of $\sigma^2_{\eta}$, respectively. Assume, also, that x and η are independent. Then show that the LS estimator $$\theta=\frac{\sum^N_{n=1}{x_n y_n}}{\sum^N_{n=1}{x_n^2}}$$ achieves the CR bound only asymptotically.
It needs the pdf of $y_n$, which is the sum of two independent random variables; am I supposed to use the convolution formula (the theorem on the sum of two random variables)? It seems the calculation would get extremely difficult because of the integration: $$\text{y}\sim\int_{\mathbb{R}}{\frac{1}{\theta}p\left(\frac{u}{\theta}\right)\frac{1}{\sqrt{2\pi \sigma_{\eta}^2}}\text{exp}\left(-\frac{(y-u)^2}{2\sigma_{\eta}^2}\right)\,\text{du}}, \text{ where}\,\,p\,\,\text{is the pdf of x}$$ there is a $\theta$ in $p$ which mostly causes the computational difficulty.
Any help or hints will be appreciated.
| This is partly a summary, and thanks to works by @a_student, @StubbornAtom, and @jld in https://stats.stackexchange.com/q/320600. This may be the final answer to this problem, please point out mistakes if you find some.
First, we compute the C-R bound of the estimator, by the definition of
$$
I_{(X, Y)}(\theta)=-\mathbb{E}_{\left( X,Y \right)}\text{[}\frac{\partial ^2}{\partial \theta ^2}\ln p\left( X,Y;\theta \right) \text{]}, \text{where}\,\,X=\left( x_1,x_2,...,x_N \right) ^T, Y=\left( y_1,y_2,...,y_N \right) ^T
$$
and the chain rule, we have that
$$
I_{\left( X,Y \right)}\left( \theta \right)
=-\mathbb{E}_{(X, Y)}\left[ \frac{\partial ^2}{\partial \theta ^2}\ln p\left( Y|X;\theta \right) \right] +0
$$
since the distribution of $X$ does not depend on $\theta$, and we have $Y|X\sim \mathcal{N}\left( \theta X, \sigma _{\eta}^{2}I \right) $, after computation we get
$$
\text{C-R bound}=\frac{1}{I\left( \theta \right)}=\frac{\sigma _{\eta}^{2}}{N\sigma _{x}^{2}}
$$
Then, compute the variance of the estimator $\hat{\theta}$. By the computation in @jld 's work we have
$$
Var(\hat{\theta})=σ^2_\eta\,\mathbb{E}_X[\frac{1}{X^TX}]
$$
we don't have a Gaussian $X$ here, but it suffices to prove that $\mathbb{E}_X[\frac{1}{X^TX}]\rightarrow 0$ as $N\rightarrow\infty$.
By the law of large numbers, $\overline{X^T X}\xrightarrow{P}\sigma^2_x$, then by continuous mapping theorem we have
$$
\frac{1}{X^TX}=\frac{1}{N}\frac{1}{\overline{X^TX}}\xrightarrow{P}0
$$
we finally get the result that $\hat{\theta}$ asymptotically attains the bound, since both tend to $0$ as $N\rightarrow\infty$.
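The Fisher-information step can be double-checked symbolically; this is only a sketch of the single-observation computation, with variable names of my own choosing:

```python
import sympy as sp

theta, x, y, s2 = sp.symbols('theta x y sigma2', real=True)
# log-density of one observation: y_n | x_n ~ N(theta * x_n, sigma_eta^2)
logp = -(y - theta * x)**2 / (2 * s2) - sp.log(2 * sp.pi * s2) / 2

# The observed information -d^2/dtheta^2 log p equals x^2 / sigma_eta^2 (no y
# left), so taking E[x^2] = sigma_x^2 and summing over the N i.i.d. pairs
# gives I(theta) = N sigma_x^2 / sigma_eta^2, hence the bound above.
assert sp.simplify(-sp.diff(logp, theta, 2) - x**2 / s2) == 0
```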
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3624887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Likely fake proof of the irrationality of a rational multiple of pi/pi I would like to know what has gone wrong in this 'proof'.
Suppose that $$k\frac 1\pi\pi=\frac ab\text{, where } a,b\in \mathbb Z \text{ and } k\in \mathbb Q.$$
Then, we multiply both sides by $i$ and raise $e$ to the power of both sides. We find that $$e^{i(k\frac 1\pi)\pi}=e^{i\frac{2a}{2b}}$$ This implies that $$e^{i(2bk\frac 1\pi)\pi}=e^{i2a}.$$ Using laws of exponents, we rewrite this as $$\left(e^{i2b\pi}\right)^{\frac k\pi}=e^{i2a}.$$ Since $b$ is an integer, the left hand side is equal to $1$. We take the natural log of both sides using the definition of the natural log in the complex domain. Thus, $$0=i(2a+2n\pi)\text{ for some integer } n.$$ Since the imaginary part of $0$ is $0$, $$a+n\pi=0.$$ Rearranging the equation for $\pi,$ we find that $\pi=-\frac an$, which is a contradiction as $\pi$ is irrational.
This is very odd because $k\frac 1\pi\pi=k$ and $k$ is assumed to be a rational number. I have no idea what I have proven despite trying to be careful, particularly when using complex numbers. Could someone point out an error?
| The main problem is that some of the rules for working with powers from real numbers are no longer true with complex numbers.
While for two complex numbers $z_1,z_2$ the rule $e^{z_1+z_2}=e^{z_1}e^{z_2}$ is still correct, the version dealing with multiplied exponents isn't; in general we have
$$e^{z_1z_2} \color{red}\neq \left(e^{z_1}\right)^{z_2}.$$
A simple example is $z_1=2\pi i, z_2=\frac12$, where the left hand side is $e^{\pi i}=-1$ while the right hand side is $1^{\frac12}=1$.
The reason why that is so is that even defining what the right hand side means is non-trivial in the general case. I refer you to the Wikipedia entry for Complex exponentiation for further information.
In your proof, where you go wrong is in rewriting $e^{i(2bk\frac1{\pi})\pi}$ as $\left(e^{i2b\pi}\right)^{\frac{k}\pi}$, because that is using exactly the above incorrect formula for multiplied exponents.
Note that you did a similar step before, when you went from
$$e^{i(k\frac 1\pi)\pi}=e^{i\frac{2a}{2b}}$$
to
$$e^{i(2bk\frac 1\pi)\pi}=e^{i2a}.$$
This is not incorrect, though I guess just "by accident". You multiplied both exponents by the integer $2b$, which works: for any complex $z$ and any integer $n$ we have
$$e^{nz}=\left(e^z\right)^n$$
That's a consequence of $n$ being an integer and using the valid rule about adding exponents:
$$e^{nz}=e^{z+z+\ldots+z}=e^ze^z\ldots e^z=(e^z)^n,$$
if $n$ is positive and for negative integer $n$ it then follows from $e^{-z}=\frac1{e^z}$.
To summarize, dealing with complex exponentiation can be hard because internalized rules that were valid for real numbers no longer apply, or only in special cases. Any time you see a potentially non-real complex number raised to a non-integer power is cause for thinking really hard what that means in the given context. Same goes for a real number raised to a complex power.
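The failing identity is easy to witness numerically; here is a small check of the counterexample above (Python's `**` on complex numbers uses the principal power):

```python
import cmath

z1, z2 = 2j * cmath.pi, 0.5
lhs = cmath.exp(z1 * z2)      # e^(i*pi) = -1
rhs = cmath.exp(z1) ** z2     # 1 ** 0.5 = 1 (principal power)
assert abs(lhs - (-1)) < 1e-9
assert abs(rhs - 1) < 1e-9    # so e^(z1*z2) != (e^(z1))^(z2)
```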
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proof equation $\lim_{n \to \infty}^{}\sum_{k=1}^{n}k^{p}/(n+1)^{p}= \frac{1}{p+1}$ I have just tried some approaches (the Stolz-Cesàro theorem and the sandwich theorem), but
I can't prove this equation:
$$\lim_{n \to \infty}^{}\frac{\sum\limits_{k=1}^{n}k^{p}}{(n+1)^{p}} =\frac{1}{p+1}$$
| [Assuming the correction in Eeyore Ho's answer is correct]
The easiest way must be by using the bounds: $$\int_1^n x^p\; dx\leq \sum_{k=1}^n k^p\leq \int_1^{n+1} x^p\; dx$$
However, you can also avoid using calculus. Instead it can be proven by induction.
We know that each sum of $p$th powers up till $n$ equals some polynomial of degree $p+1$ in $n$ (you don't need to assume this for the following argument). This is clear for $p=0$. Now, we can find a recurrence relation for the coefficients in general.
Let $m$ be a positive integer. Suppose we have found an expression $\sum_{k=1}^n k^p=\sum_{j=0}^{p+1} a_{j,p}n^j$ for all $p<m$, where $(a_{j,p})_j$ is the list of associated constant coefficients.
We rearrange the sum of $m$th powers as follows $$\sum_{k=1}^n k^m=\sum_{k=1}^n k\cdot k^{m-1}=\sum_{k=1}^n\sum_{j=1}^k k^{m-1}=\sum_{l=1}^n\sum_{q=l}^n q^{m-1}=\sum_{l=1}^n\left(\sum_{q=1}^nq^{m-1}-\sum_{q=1}^{l-1}q^{m-1}\right)$$$$=\sum_{l=1}^{n+1}\left(\sum_{q=1}^nq^{m-1}-\sum_{q=1}^{l-1}q^{m-1}\right)=(n+1)\sum_{q=1}^nq^{m-1}-\sum_{l=1}^{n+1}\sum_{q=1}^{l-1}q^{m-1}$$$$=(n+1)\sum_{q=1}^nq^{m-1}-\sum_{r=1}^n\sum_{q=1}^rq^{m-1}$$
By the induction hypothesis, then $$\sum_{k=1}^n k^m=(n+1)\sum_{b=0}^m(a_{b,m-1}n^b)-\sum_{r=1}^n\sum_{c=0}^m (a_{c,m-1}r^c)$$
Further evaluating, we get $$=(n+1)\sum_{b=0}^m(a_{b,m-1}n^b)-\sum_{c=0}^m \left(a_{c,m-1}\left(\sum_{r=1}^n r^c\right)\right)$$$$=(n+1)\sum_{b=0}^m(a_{b,m-1}n^b)-\sum_{c=0}^m \left(a_{c,m-1}\left(\sum_{d=0}^{c+1}a_{d,c}n^d\right)\right)$$$$=(n+1)\sum_{b=0}^m(a_{b,m-1}n^b)-\sum_{c=0}^m\sum_{d=0}^{c+1}(a_{c,m-1}a_{d,c}n^d)$$
This affirms (by strong induction) that the general sum of $p$th powers up till the number $n$ is a polynomial in $n$ of degree $p+1$.
Now assume also that we have proven for the case of $p=m-1$ that the term $a_{p+1,p}=a_{m,m-1}$ (i.e. the coefficient of $n^m$ in the polynomial expansion of the sum of $(m-1)$th powers up till $n$) is equal to $\frac{1}{m}$ (it clearly holds for $m-1=0$).
Then, picking the coefficient of the $n^{m+1}$ in our final summation expression, we have $$a_{m+1,m}=a_{m,m-1}-a_{m,m-1}a_{m+1,m}$$
But, by assumption, we already have $a_{m,m-1}=\frac{1}{m}$. So rearranging, we get $a_{m+1,m}=\frac{1}{m+1}$.
So the coefficient of the $(p+1)$th power of $n$ appearing in $\sum_{k=1}^n k^p$ is $\frac{1}{p+1}$ for all nonnegative $p$. The desired result in your question then immediately follows.
(feel free to comment or edit for any corrections)
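As a numerical sanity check of the limit (with the corrected denominator $n^{p+1}$; replacing it by $(n+1)^{p+1}$ gives the same limit):

```python
def ratio(p, n):
    return sum(k**p for k in range(1, n + 1)) / n**(p + 1)

for p in (1, 2, 3, 7):
    assert abs(ratio(p, 10**5) - 1 / (p + 1)) < 1e-4
# Non-integer p works too (the integral bounds still give the limit):
assert abs(ratio(0.5, 10**5) - 1 / 1.5) < 1e-3
```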
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If pair of tangents to a circle in the first quadrant is $6x^2-5xy+y^2=0$ and if one point of contact is $(1,2)$, find the radius. The tangents are $2x-y=0$ and $3x-y=0$. Let the radius be $r$ and centre be $(h,k)$
$$r=\frac{|3h-k|}{\sqrt {10}}$$
$$r=\frac{|2h-k|}{\sqrt 5}$$
$$(h-1)^2+(k-2)^2=r^2$$
I invested a considerable amount of effort in solving this equations, but to no result. My method was to square all terms to avoid the modulus, but that complicated things adding the $hk$ term. This also led me to believe that there must be a better way to solve this. Can I know how this problem must be approached?
| The perpendicular to $y=2x$ through $(1,2)$ is:
$$y=-\frac{1}{2}x+\frac{5}{2}$$
The line bisector of the two lines $y=2x$ and $y=3x$ is:
$$y=(\sqrt2+1)x$$
The other bisector line is:
$$y=(\sqrt2-1)x$$
but in this case the circunference wouldn't be tangent to either $y=2x$ and $y=3x$ lines.
Now, we have to intersect these two lines:
$$(\sqrt2+1)x=-\frac{1}{2}x+\frac{5}{2} \leftrightarrow x=5(3-2\sqrt2) \land y=-5+5\sqrt2$$
From this we can compute:
$$r=\sqrt{(14-10\sqrt2)^2+(-7+5\sqrt2)^2}=\sqrt{(5\sqrt{10}-7\sqrt5)^2}=5\sqrt{10}-7\sqrt5$$
Note that we pass from $(14-10\sqrt2)^2+(-7+5\sqrt2)^2$ to $(5\sqrt{10}-7\sqrt5)^2$ simply by calculating the first sum and then solving this system for $a,b$:
$$\left\{\begin{matrix}
a^2+b^2=495
\\ 2ab=-350\sqrt2
\end{matrix}\right.$$
And the solutions are:
$$a=\pm5\sqrt{10} \land b=\mp7\sqrt{5}$$
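A quick floating-point check that the centre and radius found above are consistent, i.e. that the circle is tangent to both lines and passes through $(1,2)$:

```python
from math import sqrt, hypot

cx, cy = 5 * (3 - 2 * sqrt(2)), 5 * sqrt(2) - 5   # centre found above
r = 5 * sqrt(10) - 7 * sqrt(5)                    # claimed radius

d1 = abs(2 * cx - cy) / sqrt(5)     # distance to the line 2x - y = 0
d2 = abs(3 * cx - cy) / sqrt(10)    # distance to the line 3x - y = 0
d3 = hypot(cx - 1, cy - 2)          # distance to the contact point (1, 2)
assert max(abs(d1 - r), abs(d2 - r), abs(d3 - r)) < 1e-12
```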
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why $a$ is not an essential singularity and how $f$ can be extended In our online lecture we had the following statement, but I don't see exactly why it's true. If anyone could explain why, it would be appreciated.
We have $a \in \mathbb{C}$ a complex number and $r \in \mathbb{R}$ with $r>0$. Let $U=\mathbb{D}(a,r)\setminus \left\{a \right\}$ and $f \in O(U)$ a holomorphic function such that $Re(f(z)) \geq 0 \; \forall z \in U$.
And it says that $f$ can be extended (to a holomorphic function) on all of the disk $\mathbb{D}(a,r)$, and that $a$ isn't an essential singular point.
Thanks in advance for your help .
| The singularity of $f$ at $a$ is either a pole, a removable singularity or an essential singularity.
If it were a pole of order $n$, you'd have $f(z) = c (z-a)^{-n} + O((z-a)^{-n+1})$ for some $c \ne 0$. By having $z$ approach $a$ from a direction such that $c (z-a)^{-n}$ is on the negative real axis you'd get $\text{Re}(f(z)) < 0$.
If it were an essential singularity, Casorati-Weierstrass theorem would say there is $z$ near $a$ such that $f(z)$ is near, say, $-1$.
The only other possibility is a removable singularity. That says that $f$ can be extended to be analytic at $a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why does stability of $\varphi(x,y)$ imply that for the Shelah local-2-rank, $R_\varphi(x=x)$ is finite. I was reading Artem Chernikov's "Lecture notes on stability theory".
He defines Shelah's local-2-rank $R_{\Delta}$ (taking values in $\mathbb{N}\cup\{\pm \infty\}$) recursively. By definition, $R_\Delta(p) \ge 0$ if the type $p(x)$ is consistent, and $R_\Delta(p) \ge n + 1$ if for some $\Delta$-formula $\varphi(x,a)$ (with $a$ being a parameter), both $R_\Delta\left(p\cup\{\varphi\}\right)\ge n$ and $R_\Delta\left(p\cup\{\neg\varphi\}\right)\ge n$.
The text proceeds to prove (2.17) that a formula $\varphi(x, y)$ is stable (that is, does not have the $m$-order property) iff $R_\varphi(x=x) < \infty$.
There is a part in the "only if" direction I am having trouble with. The idea is to show that for some set of parameters $B$, $\left|B\right| < \left|S_\varphi(B)\right|$ where $S_\varphi(B)$ is the set of $\{\varphi\}$-types over $B$. The proof goes:
Conversely, assume that the rank is infinite, then we can find an infinite tree of parameters $B = (B^\eta: \eta \in 2^{<\omega})$ such that for every $\eta \in 2^{\omega}$ the set of formulas $\{\varphi^{\eta(i)}(x, b_{\eta |_i}): i<\omega\}$ is consistent (rank being $\ge k$ guarantees that we can find such a tree of height $k$, and then use compactness to find one of infinite height).
How is compactness used here? I couldn't fill in the details.
| We can express the properties of the tree we want with a set $\Sigma$ of first-order formulas. Introduce a constant symbol $b_\sigma$ for every $\sigma \in 2^{<\omega}$. Then for every branch $\eta \in 2^{\omega}$ we can add formulas to $\Sigma$ saying that every finite part of that branch is consistent. That is, for every $n < \omega$ we add
$$
\exists x \bigwedge_{0 \leq i \leq n} \varphi^{\eta(i)}(x, b_{\eta |_i})
$$
to $\Sigma$.
Every finite part of $\Sigma$ will only say something about finitely many $b_\sigma$, say up to height $k$. Then using the assumption on the rank we can already build a tree of height $k$ (as the proof you quoted suggests). So $\Sigma$ is finitely consistent, and hence by compactness $\Sigma$ is consistent.
Any realisation of $\Sigma$ then assigns actual elements to the $b_\sigma$ such that $\{\varphi^{\eta(i)}(x, b_{\eta |_i}) : i < \omega\}$ is consistent for all branches $\eta \in 2^{\omega}$, as required.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integral by parts $\int_0^\infty e^{-st}\frac{\sin(t)}{t} dt $ I want to solve the following integral by parts:
$$\int_0^\infty e^{-st}\frac{\sin(t)}{t} dt $$
I have been trying but I don't know what else to do. The result should be $\frac{\pi}{2}-\arctan\left(s\right) $. I took $\frac{\sin(t)}{t}$ as u and $ e^{-st} $ as dv, obtaining this:
$$ du = \bigg[\frac{\cos(t)}{t}-\frac{\sin(t)}{t^2}\bigg]dt $$
$$ v = -\frac{1}{s} e^{-st}$$
Applying the formula for definite integration by parts:
$$ uv\bigg|_0^\infty -\int_0^\infty vdu $$
$$ -\frac{1}{s}e^{-st}\frac{\sin(t)}{t}\bigg|_0^\infty +\frac{1}{s}\int_0^\infty e^{-st}\bigg[\frac{\cos(t)}{t}-\frac{\sin(t)}{t^2}\bigg]dt $$
From that point onwards, I am stuck. I would be most grateful if you may help me.
Thanks in advance.
| Via a double integral,
$$\begin{align}
\int_0^\infty \frac{e^{-st}}t \sin t \>dt
=\int_0^\infty \int_s^\infty e^{-xt}\sin t \>dx \>dt
= \int_s^\infty \frac1{1+x^2}dx = \tan^{-1}\frac1s\\
\end{align}$$
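Since $\tan^{-1}\frac1s=\frac\pi2-\tan^{-1}s$ for $s>0$, this agrees with the expected result. A crude numerical cross-check (midpoint rule on a truncated interval; the truncation length and step size are my choices):

```python
from math import exp, sin, atan

def F(s, T=60.0, n=200_000):
    """Midpoint rule for the integral of e^(-st) sin(t)/t over [0, T];
    note sin(t)/t -> 1 as t -> 0, and the midpoints avoid t = 0."""
    h = T / n
    return h * sum(exp(-s * (i + 0.5) * h) * sin((i + 0.5) * h) / ((i + 0.5) * h)
                   for i in range(n))

for s in (0.5, 1.0, 2.0):
    assert abs(F(s) - atan(1 / s)) < 1e-5
```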
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Showing uniqueness property over a vector space that is the direct sum of two of its subspaces. Let $V$ be a vector space over a field $F$, and let $S$ and $T$ be subspaces of $V$ such that
$V=S⊕T$.
Show that for every $x∈V$ ,there are unique $y_1∈S$ and $z_1∈T$ such that $x=y_1+z_1$. In other words, show that, if $y_2 ∈ S$ and $z_2 ∈ T$ also satisfy $y_1 +z_1 = x = y_2+z_2$, then
$y_1 =y_2$ and $z_1 =z_2$
$P(x):=$unique $y∈S$ such that $x=y+z$ for some $z∈T$
Prove that $P$ is a linear map, and also that we have $P^2(= P ◦ P) = P$. Show also
that $Range(P) = S$ and $Ker(P) = T$.
Attempt for the first part:
By definition of direct sum $S \cap T$={$0$}.
If $V=S⊕T$ then for every $x \in V$, $x=s+t$ for some $s \in S$ and $t\in T$.
Let $n \in S$ and $m \in T$.
Now assume $$n+m=s+t$$ Rearranging we have $0=(s-n)+(t-m)$
Then since $S,T$ are subspaces of $V$ and $s,n \in S$ and $t,m \in T$, then $s-n \in S$ and $t-m\in T$ Therefore both $s-n$ and $t-m$ are equal to $0$ which gives us our uniqueness property.
For the second part I'm stuck I know that this can be rearranged to get
$P(x):=$ the unique $y ∈ S$ such that $x − y ∈ T$. But I'm unsure how to continue from here. I know I need to prove the properties below, but I'm confused about where to begin with this function, and about how the range and kernel would be $S$ and $T$ respectively.
$P(\alpha x)=\alpha P(x)$
$P(x+y)=P(x)+P(y)$
| If $x = a+b, y=c+d$ are the unique decompositions for x and y respectively, then
*
*$P(x+y) = P((a+c)+(b+d)) = a+c = P(x) + P(y).$
*$P(\alpha x) = P(\alpha a+ \alpha b) = \alpha a.$
*$P(P(a+b)) = P(a) = a.$
*$S \subseteq Range(P)$ since $a = a+0$ in $V$.
*$P(a+b) = 0 \Leftrightarrow$ $a=0$. Therefore the kernel is T.
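A concrete instance may help: in $\mathbb R^2$ take $S=\operatorname{span}\{(1,0)\}$ and $T=\operatorname{span}\{(1,1)\}$. The matrix below is computed by hand from the unique decomposition $(x_1,x_2)=(x_1-x_2,0)+(x_2,x_2)$:

```python
import numpy as np

# P sends (x1, x2) to its S-component (x1 - x2, 0)
P = np.array([[1.0, -1.0], [0.0, 0.0]])

assert np.allclose(P @ P, P)                           # P о P = P
assert np.allclose(P @ np.array([1.0, 0.0]), [1, 0])   # Range(P) contains S
assert np.allclose(P @ np.array([1.0, 1.0]), [0, 0])   # T lies in Ker(P)
```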
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3625983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Could $\int\frac{1}{x^{N+1}(x-1)}dx$ be solved analytically? I am trying to solve this integral:
$$\int\frac{1}{x^{N+1}(x-1)}dx$$
I have tried integration by partial fraction, substitution and by parts. But, I can't solve it. So, I would like to ask could this be solved?
Also, may I know when a partial fraction decomposition does not exist?
Thank you very much.
Update:
N is any number that is greater than 0. Sorry for forgetting to include such information, and apology for any inconvenience caused.
| Hint:
Assuming that $N$ is natural, you may do the following
$$\int\frac{1}{x^{N+1}(x-1)}dx = \int\frac{1-x^{N+1}+x^{N+1}}{x^{N+1}(x-1)}dx$$
$$= -\int \frac 1{x^{N+1}}\sum_{n=0}^Nx^n \; dx + \int \frac{dx}{x-1}$$
The first sum comes from
$$\frac 1{x^{N+1}}\frac{1-x^{N+1}}{x-1}=-\frac 1{x^{N+1}}\frac{x^{N+1}-1}{x-1}=-\frac 1{x^{N+1}}(1+x+\cdots + x^N)$$
Basically, the above is the partial fraction decomposition of the integrand.
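The resulting antiderivative can be verified symbolically for a specific $N$; a sketch with SymPy, choosing $N=2$ and ignoring constants of integration:

```python
import sympy as sp

x = sp.symbols('x')
N = 2
# antiderivative from the hint: -int (1 + x + ... + x^N) / x^(N+1) dx + ln(x - 1)
F = -sp.integrate(sum(x**n for n in range(N + 1)) / x**(N + 1), x) + sp.log(x - 1)
# its derivative is exactly the original integrand
assert sp.cancel(sp.diff(F, x) - 1 / (x**(N + 1) * (x - 1))) == 0
```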
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3626129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluating the ratio of beta functions I came across a question which asks for the value of $\alpha$ in the result to the ratio of two $\mathrm{B}$ functions:
$$\frac{\mathrm{B}(m, \frac{1}{2})}{\mathrm{B}(m, m)}=2^{\alpha}$$
I know the results for integer values of $m$, but the question demands that $m>0$ be any real number.
I also tried changing the functions to $\Gamma$ ones, but that didn't lead anywhere meaningful.
Does anyone know how to approach this?
P.S. It seems to be an easy question: the marks to be awarded against it is unity in modulus.
| For any value of $m$
$$\frac{{B}(m, \frac{1}{2})}{{B}(m, m)}=2^{2m-1}$$
If you use the gamma function
$$B(m,n)=\frac{\Gamma (m) \Gamma (n)}{\Gamma (m+n)}$$
$$\frac{{B}(m, \frac{1}{2})}{{B}(m, m)}=\frac {\sqrt{\pi }\frac{ \Gamma (m)}{\Gamma \left(m+\frac{1}{2}\right)} } {\frac{\Gamma (m)^2}{\Gamma (2 m)} }=\sqrt{\pi }\frac{ \Gamma (2 m)}{\Gamma (m) \Gamma \left(m+\frac{1}{2}\right)}$$ This is exactly the Legendre duplication formula $\Gamma(m)\,\Gamma\left(m+\frac{1}{2}\right)=2^{1-2m}\sqrt{\pi }\,\Gamma(2m)$, which gives $2^{2m-1}$ directly; alternatively, use the Stirling approximation up to any order of your choice and continue with Taylor series to prove it.
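A quick numerical check of the identity for a few integer and non-integer values of $m$ (plain Python; `beta` here is just a helper built from `math.gamma`):

```python
from math import gamma, isclose

def beta(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

for m in (0.3, 1.0, 2.5, 7.0):
    assert isclose(beta(m, 0.5) / beta(m, m), 2**(2 * m - 1), rel_tol=1e-12)
```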
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3626271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Shortest distance from circle to a line
Let $C$ be a circle with center $(2, 1)$ and radius $2$. Find the shortest distance from the line $3y=4x+20$.
This should be very simple, but I seem to end up with no real solutions.
The shortest distance would be from the center of the circle perpendicular to the line right?
Solving the line for $y$ we get $y=\frac{4}{3}x+\frac{20}{3}$
Substituting this to the equation of the circle we get $(x-2)^2+(\frac{4}{3}x+\frac{20}{3}-1)^2=2^2$, but solving this for $x$ ended up with no real roots. What am I missing here?
| Hint. You have shown that your line doesn't intersect the circle. Therefore the shortest distance between the circle and the line is given by the distance between that line and a line such that $$y=\frac43x+c,$$ where the last line is tangential to the circle -- in other words it intersects the circle in just one point. There are two such lines, and you can easily select the closest to the given line. Thus, you want to solve for values of $c$ such that $$(x-2)^2+\left(\frac43x+c-1\right)^2=4$$ has only one root. That is, set the discriminant of this quadratic in $x$ equal to $0.$ Then take the bigger value for $c.$ You then have your line. Compute the distance between this line and the given line, and you'd be done!
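Carrying out this hint, say with SymPy, gives a shortest distance of $3$ (the variable names are mine; the final line measures the gap between the two parallel lines):

```python
import sympy as sp

x, c = sp.symbols('x c')
# tangency: y = 4x/3 + c meets (x - 2)^2 + (y - 1)^2 = 4 in a double root
quad = sp.expand((x - 2)**2 + (sp.Rational(4, 3) * x + c - 1)**2 - 4)
tangents = sp.solve(sp.Eq(sp.discriminant(quad, x), 0), c)
assert sorted(tangents) == [-5, sp.Rational(5, 3)]

c_near = max(tangents)   # take the bigger value of c, as the hint says
# distance between the parallel lines y = 4x/3 + 20/3 and y = 4x/3 + c_near
dist = sp.Abs(sp.Rational(20, 3) - c_near) / sp.sqrt(1 + sp.Rational(16, 9))
assert dist == 3
```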
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3626410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 5
} |
Are field embeddings unique? Sorry if this is a simple question, as I'm not well versed in field theory.
Suppose a field $K$ has an embedding into $\mathbb R$: $f:K\hookrightarrow\mathbb R$. Is $f$ unique? And if $\mathbb R$ is replaced by an arbitrary field $F$, is the answer still the same?
| $f$ is not unique in general. For instance, there are two embeddings of $\mathbb Q[x]/(x^2-2)$ into $\mathbb R$.
In general, if $K=\mathbb F(\alpha)$ is a simple algebraic extension, where $\mathbb F$ is the prime field of $K$, and $F$ is a field with the same prime field, then the number of $\mathbb F$-embeddings of $K$ into $F$ is the number of roots in $F$ of the minimal polynomial of $\alpha$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3626745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Every square matrix is a sum of two diagonalisable matrices I've been stuck with this question for quite a while and am not sure where to start:
Prove that if $A$ is an $n \times n$ matrix, then $A$ can be written as $B + C$ where both $B$ and $C$ have $n$ distinct eigenvalues. (Hence every square matrix is a sum of two diagonalisable matrices)
I'm thinking that maybe we can split into two triangular matrices but not sure if that's going in a right direction.
| You're almost there. Assuming you're working with real matrices, let us denote the diagonal entries as $d_1, d_2, \cdots, d_n$. All you have to do is "split" each $d_i$ into the sum of two numbers $u_i + t_i$, and ensure that all of the $u_i$ and $t_i$ are all different. This is always possible, since for any fixed real $d$ there are infinitely many pairs $(t, u)$ of real numbers such that $t + u = d$.
Note that this works because then you can just represent the matrix as the sum of an upper triangular matrix U with distinct diagonal entries and a lower triangular matrix T with distinct diagonal entries. Since the eigenvalues of such matrices are exactly the diagonal elements, and all diagonal elements are distinct by construction, we know that $U$ and $T$ are both diagonalizable, achieving your result.
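A sketch of this construction in code; the gap $M$ and the arithmetic progression for the $u_i$ are one concrete way to make both diagonals distinct:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

d = np.diag(A)
M = 1 + d.max() - d.min()            # gap bigger than any |d_i - d_j|
u = M * np.arange(n)                 # distinct, and t = d - u is distinct too
B = np.triu(A, 1) + np.diag(u)       # upper triangular, diagonal u
C = np.tril(A, -1) + np.diag(d - u)  # lower triangular, diagonal d - u

assert np.allclose(B + C, A)
# triangular matrices have their diagonal entries as eigenvalues; all distinct:
assert len(set(np.round(np.diag(B), 9))) == n
assert len(set(np.round(np.diag(C), 9))) == n
```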
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3626913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
I need help understanding the use of | in a specific context. I am reading the wikipedia article on Multiple Sequence Alignments and came across some notation I haven't seen yet, specifically $x_i|i = 1,...,r$ in the statement $L > max \{x_i|i = 1,...,r \}$. I was wondering if anyone can shed light onto what that means. Thanks for any help.
| Given the context of your question, it would mean 'such that' (or 'where').
Or in other words: 'L is greater than the maximum of the set $x_1, x_2, ...x_r$'
In other contexts, it can mean various other things. I suggest you take a good look at:
https://en.wikipedia.org/wiki/List_of_mathematical_symbols
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3627133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using the Divergence Theorem on the surface of a unit sphere
Using the Divergence Theorem, evaluate $\int_S F\cdot dS$ , where $F=(3xy^2 , 3yx^2 , z^3)$, where $S$ is the surface of the unit sphere.
My Attempt
$$ \text{div} F = \left(\frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z} \right) = 3y^{2} + 3x^{2} + 3z^{2} = 3(y^{2} + x^{2} +z^{2}).$$
Let us use spherical coordinates
$$ x = \rho \sin \varphi \cos \theta \quad, y = \rho\sin\varphi \sin\theta \quad, z = \rho\cos\varphi. $$
We get ,
\begin{align*}
\int\limits_{S} \vec{F} \cdot ds &= 3 \int_{0}^{\pi}\int_{0}^{2\pi} \int_{0}^{1} (\rho^{2} \sin^{2}\varphi \sin^{2} \theta + \rho^{2} \sin^{2}\varphi \cos^{2}\theta + \rho^{2} \cos^{2} \varphi ) \rho^{2} \sin\varphi \ d\rho d\theta d\varphi \\
&= 3 \int_{0}^{\pi}\int_{0}^{2\pi} \int_{0}^{1} \rho^{4} (\sin^{3}\varphi (\sin^{2} \theta + \cos^{2}\theta) + \cos^{2}\varphi \sin\varphi) \ d\rho d\theta d\varphi \\
&= \frac35 \int_{0}^{\pi} \int_{0}^{2\pi} \sin^{3}\varphi + \sin\varphi \cos^{2}\varphi \ d\theta d\varphi \\
&= \frac35 \int_{0}^{\pi} \int_{0}^{2\pi} \sin\varphi \ d\theta d\varphi \\
&= \frac{6\pi}{5} \int_{0}^{\pi} \sin\varphi \ d\varphi = \frac{12\pi}{5}.
\end{align*}
Could someone confirm if my reasoning is correct? I feel like I have missed a step somewhere in the beginning.
| It is correct; in fact, it is even simpler than that, because you could have directly substituted $\rho^{2}=x^{2}+y^{2}+z^{2}$
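For completeness, the triple integral itself can be checked symbolically:

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi')
# div F = 3*rho^2 and the spherical Jacobian is rho^2 sin(phi)
integrand = 3 * rho**4 * sp.sin(phi)
result = sp.integrate(integrand, (rho, 0, 1), (theta, 0, 2 * sp.pi), (phi, 0, sp.pi))
assert result == sp.Rational(12, 5) * sp.pi
```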
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3627276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Applying propagator to Laplace eigenfunctions Let $\Omega \subseteq \mathbb{R}^n$ be a non-empty domain with piece-wise smooth boundary and consider a Dirichlet eigenfunction $\varphi$ on $\Omega$. That is, $\varphi : \Omega \to \mathbb{C}$ is a non-trivial solution to $\Delta \varphi = \lambda \varphi$ for some $\lambda \geq 0$ and $\varphi = 0$ on $\partial\Omega$.
Consider the propagator $U(t) = e^{-it\sqrt{-\Delta}}$. Is it true that $U(t)\varphi = \varphi$? This fact seems to be part of a proof I'm reading but I am not sure why this is true.
Note: It is known that $U(t)$ commutes with the Laplacian, hence
$$
\Delta \left(U(t)\varphi\right) = U(t)\left(\Delta \varphi\right) = \lambda\left(U(t)\varphi\right).
$$
That is, $U(t)\varphi$ is itself an eigenfunction.
I'm relatively new to this subject, so I might be misunderstanding what is needed in the proof. I appreciate any input!
| My answer will be related to the link. I'll provide the necessary context.
Here, $\Delta_g u_j=\lambda_j^2 u_j,$ $p=\frac{1}{2}\left(|\xi|_g^2-1\right)+\mathcal{O}(h),$ $P:=\text{Op}_h(p)=\frac{1}{2}\left(h^2\Delta_g-1\right),$ and $U(t;h)=\exp (-itP/h).$ Note that if we let $h_j=1/\lambda_j,$ then $\text{Op}_{h_j}(p)u_j=0,$ and we can compute directly that $$\partial_t (U(t;h_j)u_j-u_j)=0.$$ So, the difference is constant in time. Evaluating at $t=0$ yields that the constant is zero, so that $U(t;h_j)u_j=u_j.$ It was very important that $h_j=\lambda_j^{-1}.$ For example, if $P(h)=-h^2\Delta_g+V(x)$ with eigenfunctions $$P(h)u_j(h)=E_j(h)u_j(h)$$ normalized in $L^2,$ then $$U(t;h)u_j(h)=\exp \left(\frac{-itE_j}{h}\right) u_j(h).$$
Also, note that this is using the Schrodinger propagator, not the wave propagator (like in the original).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3627531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$I-cP$ Invertible Matrix Question: Assume P is a nonzero $n$ x $n$ matrix, $n \ge 2$, such that $P^2=P$. Let $c\in\mathbb R, c\ne1$. Show that the matrix $I-cP$ is invertible and find its inverse.
I'm having trouble going about this question. By manipulating $Pv=\lambda v$, I get that $P$ has eigenvalues $0$ and $1$. I know that if the determinant of $I-cP$ is nonzero then it is invertible, and that if it has a trivial kernel it is invertible. But I don't know how to proceed. Any help would be appreciated.
| If $c = 0$ then $I-cP$ is clearly invertible. So assume $c \neq 0$.
If $I-cP$ is not invertible, then there is a vector $x \neq 0$ with $(I-cP)x =0$, that is, $x = cPx$. This implies $Px \neq 0$. Multiplying both sides by $P$ we have $Px = cPx$, i.e., $(1-c)Px = 0$, since $Px \neq 0$ we must have $1-c=0$, i.e., $c=1$. A contradiction.
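In fact, the inverse asked for can be written down explicitly: multiplying out and using $P^2=P$ shows $(I-cP)^{-1}=I+\frac{c}{1-c}P$. A quick numeric check with a rank-one projection:

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])
P = v @ v.T / (v.T @ v)            # projection onto span(v): P @ P == P
I = np.eye(3)

for c in (0.5, -2.0, 10.0):        # any c != 1
    inv = I + c / (1 - c) * P
    assert np.allclose((I - c * P) @ inv, I)
```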
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3627705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
possibility of choosing three from a pool and choosing three from a different pool At the local zoo, a new exhibit consisting of 3 different species of birds and 3 different species of reptiles is to be formed from a pool of 8 bird species and 6 reptile species. How many
exhibits are possible if
a. there are no additional restrictions on which species can be selected?
b. 2 particular bird species cannot be placed together (e.g., they have a predator-prey
relationship)?
c. 1 particular bird species and 1 particular reptile species cannot be placed together?
(A) Because we're choosing three from 8 and three from 6, I got
8*7*6 + 6*5*4. Is my reasoning correct?
(B) For reptiles we get the same as above (6*5*4) but for birds, because I got
8*6*5 because two birds cannot be paired with together. The complete answer would be 8*6*5 + 6*5*4
(C) 8*7*6 + 5*4*3 because reptile numbers depend on what we choose for birds.
Is my reasoning for above answers correct?
| It does not matter in which order the species are selected. Thus we will (usually) need to divide by $3!$ for the count of ways of selecting the bird and reptile species, something which you didn't do for all the questions.
You also added the counts for the birds and reptiles, which is wrong (it would be true if you were choosing either the birds or the reptiles, instead of both at once). Instead you should multiply them. For (a) you should get an answer of $\frac{8×7×6}{3×2×1}×\frac{6×5×4}{3×2×1}$; this forms the base count, which we will rely on in solving (b) and (c).
The other two questions are better solved by counting the number of possibilities that are not allowed, then subtracting from the base count. For (b), if we select the forbidden pair of birds, we have $6$ possibilities for the remaining bird, so there are $6×\frac{6×5×4}{3×2×1}$ disallowed selections.
For (c), if we select the forbidden reptile/bird pair, we still have $2$ birds to select from $7$ and $2$ reptiles from $5$. Again, order does not matter, so we divide by $2$ for each. The number of disallowed selections is $\frac{7×6}2×\frac{5×4}2$.
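As a quick sanity check, the three counts above can be computed directly (a small Python sketch using `math.comb`):

```python
from math import comb

base = comb(8, 3) * comb(6, 3)           # (a) choose 3 birds and 3 reptiles
bad_b = 6 * comb(6, 3)                   # (b) both forbidden birds chosen
bad_c = comb(7, 2) * comb(5, 2)          # (c) forbidden bird and reptile chosen
print(base, base - bad_b, base - bad_c)  # 1120 1000 910
```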
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3627816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding a polynomial whose roots are connected to the roots of a different polynomial Suppose we have a polynomial function $$f(x) =x^5-4x^4+3x^3-2x^2+5x+1$$ Function $f$ will have 5 roots which can be denoted by $a, b, c, d, e$. I was interested in trying to find a degree 10 polynomial whose roots are given by $abc, abd, abe, acd, ace, ade, bcd, bce, bde, cde$. My idea was that we can relate the coefficients of the degree 10 polynomial to the coefficients of the degree 5 polynomial using Vieta's relations. However, I soon realised that this led to expressions that were extremely difficult to simplify and the method, in general, was time-consuming. I was interested in knowing if general techniques exist to solve such problems or if brute force is the only way to go about it.
Thanks
| Let
*$g(x) = x^5 f\left(\frac1x\right) = x^5+5x^4-2x^3+3x^2-4x+1$.
*$S = \{ a,b,c,d,e \}$ be the roots of $f(x)$.
*$T = \{ \frac1a, \frac1b, \frac1c, \frac1d, \frac1e \}$ be the roots of $g(x)$.
*For $I \subset S$ and $J \subset T$, let $\lambda_I = \prod_{\lambda \in I}\lambda$ and $\mu_J = \prod_{\mu \in J}\mu$.
The polynomial we seek is
$\quad\displaystyle\;F(x) \stackrel{def}{=} \prod_{I \subset S,|I| = 3}(x - \lambda_I)$.
Define a similar polynomial for $g$,
$\quad\displaystyle\;G(x) \stackrel{def}{=} \prod_{J \subset T,|J| = 2}(x - \mu_J)$.
By Vieta's formulas, we have $abcde = -1$; this implies
$$F(x) = \prod_{I\subset S,|I|=3} \left(x + \frac{1}{\lambda_{S \setminus I}}\right) = \prod_{J\subset T,|J|=2}(x + \mu_J) = G(-x)$$
The problem comes down to given $g(x)$, how to compute $G(x)$ whose roots are
product of distinct pairs of roots of $g(x)$.
It will be hard to relate the coefficients of $g$ and $G$ directly. However, there is a simple relation between the power sums. More precisely, for any
$k \in \mathbb{Z}_{+}$, let
*$P_k(g) \stackrel{def}{=} \sum_{\mu \in T} \mu^k$ be the sum of the roots of $g(x)$ raised to power $k$.
*$P_k(G) \stackrel{def}{=} \sum_{J \subset T,|J|=2} \mu_J^k$ be the sum of roots of $G(x)$ raised to power $k$.
We have
$$P_k(G) = \frac12( P_k(g)^2 - P_{2k}(g))\tag{*1}$$
To make the following description more generic, let $n = 5$ and $m = \frac{n(n-1)}{2}$.
Define coefficients $\alpha_k, \beta_k$ as follow:
$$g(x) = x^n - \sum\limits_{k=1}^n \alpha_k x^{n-k}
\quad\text{ and }\quad
G(x) = x^m - \sum\limits_{k=1}^m \beta_k x^{m-k}$$
Following are the steps to compute coefficients $\beta_k$ from coefficients $\alpha_k$ manually.
*Compute $P_k(g)$ using
Newton's identity for $1 \le k \le 2m$.
$$P_k(g) = \sum_{j=1}^{\min(n,k-1)} \alpha_j P_{k-j}(g) + \begin{cases}
k \alpha_k, & k \le n\\
0, & \text{otherwise}\end{cases}
$$
*Compute $P_k(G)$ from $P_k(g)$ using $(*1)$.
*Compute $\beta_k$ from $P_k(G)$ using Newton's identities again:
$$\beta_k = \frac1k\left( P_k(G) - \sum_{j=1}^{k-1} \beta_j P_{k-j}(G) \right)$$
Being lazy, I implemented the above logic in maxima (the CAS I use) to compute these numbers. The end result is
$$F(x) = x^{10}-2x^9+19x^8-112x^7+82x^6+97x^5-15x^4+58x^3+3x^2+3x+1$$
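For the record, the pipeline above is short enough to run with exact arithmetic in plain Python (a sketch; the names `alpha`, `P`, `PG`, `beta` mirror the notation above, and the output should reproduce the coefficients of $F(x)$):

```python
from fractions import Fraction

n, m = 5, 10
alpha = {1: -5, 2: 2, 3: -3, 4: 4, 5: -1}   # g(x) = x^n - sum_k alpha_k x^(n-k)

# Step 1: power sums P_k(g) for 1 <= k <= 2m via Newton's identities
P = {}
for k in range(1, 2 * m + 1):
    P[k] = sum(alpha[j] * P[k - j] for j in range(1, min(n, k - 1) + 1))
    if k <= n:
        P[k] += k * alpha[k]

# Step 2: power sums of G via (*1)
PG = {k: Fraction(P[k] ** 2 - P[2 * k], 2) for k in range(1, m + 1)}

# Step 3: coefficients beta_k of G via Newton's identities again
beta = {}
for k in range(1, m + 1):
    beta[k] = (PG[k] - sum(beta[j] * PG[k - j] for j in range(1, k))) / k

# F(x) = G(-x): the coefficient of x^(m-k) is (-1)^(k+1) * beta_k
F = [1] + [int((-1) ** (k + 1) * beta[k]) for k in range(1, m + 1)]
print(F)   # coefficients of F(x), highest degree first
```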
If one has access to a CAS, there is a quicker way to get the result.
For example, in maxima, one can compute the resultant between $g(t)$ and $g\left(-\frac{x}{t}\right)$ using the command resultant(g(t), g(-x/t), t).
The resultant of two polynomials vanishes exactly when they share a common root. When the resultant between $g(t)$ and $g\left(-\frac{x}{t}\right)$ vanishes, $x$ either equals $-\mu^2$ for a root $\mu \in T$ or $-\mu\nu$ for some $\mu, \nu$ in $T$.
If one asks maxima to factor the output of the above command, the result is
$$-(x^5+29x^4-34x^3+3x^2+10x+1)F(x)^2$$
The first factor is nothing but $\prod\limits_{\mu \in T}(x + \mu^2)$; this confirms that the expression we obtained for $F(x)$ is the desired product $\prod\limits_{J \subset T,|J| = 2}(x + \mu_J)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Solving $e^z = 1 $ in the complex plane I am solving the equation $e^z = 1 $ in $\mathbb{C}$. The book says that, besides $z = 0$, $z = 2 \pi k i$ for $ k \in \mathbb{Z}$ are also solutions. It explains this by saying that $e^z$ is a periodic function, so that
$1 = e^z = e^{2\pi k i}$
However, I want to know how the identity is derived, so that I can solve other cases such as $e^z = 2$.
| Let $z=x+iy$ where $x, y$ are real. Then $e^z = 2$ means
$$
e^x\cos y + i e^x\sin y = 2,
\\
e^x\cos y = 2\quad\text{and}\quad e^x\sin y = 0
$$
Now $e^x \ne 0$ for all $x$, so from $e^x\sin y = 0$ we get $\sin y = 0$, and thus
$y = n\pi$ for some $ n \in \mathbb Z$. From this we get $\cos y = \cos(n\pi) = (-1)^n$. But $e^x > 0$ for all $x$, so we must have $n$ even, say $n=2k$. Finally,
$2 = e^x\cos y = e^x\cos(2k\pi)= e^x$, so $x=\ln 2$. Conclusion: $z = \ln 2 + i2k\pi$.
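A quick numerical check of the conclusion (just a sanity test with Python's `cmath`):

```python
import cmath

# Every z = ln 2 + 2k*pi*i should satisfy e^z = 2
for k in range(-3, 4):
    z = cmath.log(2) + 2j * cmath.pi * k
    assert abs(cmath.exp(z) - 2) < 1e-12

# ...while odd multiples of pi*i give e^z = -2, which is why n must be even
z = cmath.log(2) + 1j * cmath.pi
print(cmath.exp(z))   # ≈ -2
```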
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Complex Analysis. How to use the Cauchy integral formula Given the contour $\gamma$ over the paths $[0,2]$ and $[-2,2]$, we have the integral $$ \int_\gamma \frac{z} {(z^2-1)(z-3)}dz$$
I set it up like
$$ \int_\gamma \frac{\frac{z}{(z+1)(z-3)}}{z-1}=2\pi i $$
and I get that it equals
$$ \frac{-\pi i}{2} $$
and then I did the following for $[-2,2]$, but I don't believe they are set up correctly and was wondering if I could have some guidance or the error pointed out
$$ \int_\gamma \frac{\frac{z}{(z^2-1)}}{z-3}=2\pi i $$
then after everything I get that it equals $0$ or $\pi i/4$
| If I understand your notation, by $[-2,2]$ you mean the circle of radius $2$ centered at $-2$. The only pole of the integrand inside that circle is then at $z=-1$.
So you get $\oint_Cf(z)/(z+1)$, where $f(z)=z/((z-1)(z-3))$. So the integral is equal to $2\pi if(-1)=2\pi i(-1/8)=-\pi i/4$.
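One can also confirm the value numerically by discretizing the contour (a sketch; the circle $|z+2|=2$ is parametrized as $z(t) = -2 + 2e^{it}$):

```python
import cmath

def f(z):
    return z / ((z**2 - 1) * (z - 3))

# Midpoint-rule approximation of the contour integral over |z + 2| = 2
N = 5000
total = 0j
for k in range(N):
    t = 2 * cmath.pi * (k + 0.5) / N
    z = -2 + 2 * cmath.exp(1j * t)
    dz = 2j * cmath.exp(1j * t) * (2 * cmath.pi / N)   # z'(t) dt
    total += f(z) * dz

print(total)   # ≈ -πi/4 ≈ -0.7854j
```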
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
linear least squares -- complex observations, real estimate constraint Consider the following least squares optimization problem:
$$
\hat{x} = \arg\min_x \| y - A x\|^2
$$
where the observations are complex $y\in{\cal C}^{N\times 1}$, and the complex design matrix $A\in {\cal C}^{N\times K}$ is full rank ($K$). Is there a simple closed-form solution if $x$ is constrained to be real (i.e., $x\in{\cal R}^{K \times 1}$)?
| Let
\begin{align}
f(x) := \|y - Ax\|_2^2 = \left( y - Ax \right)^*: \left( y - Ax \right),
\end{align}
where $()^*$ denotes the (elementwise) complex conjugate and $a:b$ the bilinear pairing $\sum_i a_i b_i$.
Now, let us compute the gradient of $f(x)$ (by computing the differential first), i.e.,
\begin{align}
df(x)
&= \left[ -A^*dx: \left( y - Ax \right) \right] + \left[ \left( y - Ax \right)^*: -A dx \right] \\
&= \left[ \left( y - Ax \right): -A^*dx \right] + \left[ -A^T\left( y - Ax \right)^*: dx \right] \\
&= \left[ -A^H\left( y - Ax \right):dx \right] + \left[ -A^T\left( y - Ax \right)^*: dx \right] \\
&= \left[\left( -A^H\left( y - Ax \right) \right) + \left( -A^T\left( y - Ax \right)^*\right) \right]: dx
\end{align}
Setting the gradient to zero,
\begin{align}
\frac{\partial f(x)}{\partial x} &= -A^H\left( y - Ax \right) - A^T\left( y - Ax \right)^* = 0 \\
&\Rightarrow x = \left( A^HA + A^TA^*\right)^{-1} \left(A^Hy + A^Ty^* \right).
\end{align}
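As a sanity check, the closed form agrees with solving the equivalent real-valued least-squares problem obtained by stacking real and imaginary parts (a numpy sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 3
A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Closed form derived above: (A^H A + A^T A^*) x = A^H y + A^T y^*
x_hat = np.linalg.solve(A.conj().T @ A + A.T @ A.conj(),
                        A.conj().T @ y + A.T @ y.conj())

# Equivalent real problem: min || [Re y; Im y] - [Re A; Im A] x ||
A_r = np.vstack([A.real, A.imag])
y_r = np.concatenate([y.real, y.imag])
x_stack, *_ = np.linalg.lstsq(A_r, y_r, rcond=None)

print(np.allclose(x_hat.real, x_stack))   # True
```

Note that $A^HA + A^TA^* = 2\operatorname{Re}(A^HA)$ and $A^Hy + A^Ty^* = 2\operatorname{Re}(A^Hy)$, so the solution is also $x = \operatorname{Re}(A^HA)^{-1}\operatorname{Re}(A^Hy)$.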
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Does there exist a sequence $\{a_n\}_{n \ge 0}$ of nonnegative reals such that $ \sum_{j \ge 0} a_{nj} = \frac{1}{n}$ holds for all naturals $n$? Does there exist a sequence $\{a_n\}_{n \ge 0}$ of nonnegative reals such that
$$ \sum_{j \ge 0} a_{nj} = \dfrac{1}{n}$$
holds for all naturals $n$?
My progress: I could show that $a_n\le \frac{1}{2n}$. I am not sure if this is even useful.
| Let's put
$$
f(z) = \sum\limits_{0\, \le \,j} {\,a_{\,j} z^{\,j} }
$$
Then the application of Series Multisection tells us that
$$
\sum\limits_{0\, \le \,j} {\,a_{\,n\,j} z^{\,n\,j} } = {1 \over n}\sum\limits_{0\, \le \,k\, \le \,n - 1} {f(\omega _{\,n} ^{\,k} z)} \quad \left| {\;\omega _{\,n} = e^{\,i2\pi /n} } \right.
$$
and we are looking for a function $f(z)$ such that
$$
\sum\limits_{0\, \le \,k\, \le \,n - 1} {f(\omega _{\,n} ^{\,k} )} = 1
$$
which means that the values the function takes at the $n$-th roots of unity must sum to $1$ for all $n$.
Now, the roots corresponding to $k/n = 1/2, 2/4, 3/6, \ldots$ coincide, as do all those for which $k/n$ is constant.
Consequently the function must take the same value at each such set of coincident roots.
I am not an expert in the theory of functions, but definitely $f(z)$ cannot be continuous.
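As an aside, the multisection identity used above is easy to test numerically, e.g. with $f(z) = e^z$ and $n = 3$ (a small sketch):

```python
import cmath
import math

# Series multisection check: sum_{j>=0} z^(3j)/(3j)! = (1/3) sum_k exp(w^k z)
z, n = 0.7 + 0.2j, 3
w = cmath.exp(2j * cmath.pi / n)          # primitive n-th root of unity

lhs = sum(z ** (n * j) / math.factorial(n * j) for j in range(25))
rhs = sum(cmath.exp(w ** k * z) for k in range(n)) / n
print(abs(lhs - rhs))                     # ≈ 0
```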
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Solution verification: $(2x\ln y)\mathrm{d}x+(\frac{x^2}{y}+3y^2)\mathrm{d}y=0$ I've come across this problem while solving an exercise on Exact Differential equations and it's as follows:
Solve the differential equation:
$$(2x\ln y)\mathrm{d}x+\left(\dfrac{x^2}{y}+3y^2\right)\mathrm{d}y=0$$
So what I've noticed, if we expand the right-side bracket as such:
$$(2x\ln y)\mathrm{d}x+ \dfrac{x^2}{y}\mathrm{d}y+3y^2\mathrm{d}y=0
$$
is that I can express the first two terms of the above equation as
$$\mathrm{d}\left(x^2 \ln y\right)$$
Form here, simple integration leads us to:
$$x^2\ln y + y^3= \mathrm{C}$$
where $\mathrm{C}$ is the constant of integration. The solution, on the other hand, is given as:
$$x^2+\ln y+y^3=\mathrm{C}$$
Is this a misprint or am I missing something somewhere?
| $$(2x\ln y)\mathrm{d}x+ \dfrac{x^2}{y}\mathrm{d}y+3y^2\mathrm{d}y=0$$
$$\ln y\,\mathrm{d}(x^2)+x^2\,\mathrm{d}(\ln y)+3y^2\,\mathrm{d}y=0$$
$$\mathrm{d}(x^2\ln y)+3y^2\,\mathrm{d}y=0$$
Integration gives:
$$x^2\ln y +y^3=C$$
Your answer is perfectly correct; the book's printed solution looks like a misprint.
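One more way to convince yourself: integrate the ODE numerically and watch the quantity $x^2\ln y + y^3$ stay constant along the solution (a quick RK4 sketch; the starting point $(1,2)$ is arbitrary):

```python
import math

# The ODE gives dy/dx = -M/N with M = 2x ln y, N = x^2/y + 3y^2
def dydx(x, y):
    return -(2 * x * math.log(y)) / (x**2 / y + 3 * y**2)

x, y = 1.0, 2.0
C0 = x**2 * math.log(y) + y**3        # value of the conserved quantity
h = 1e-3
for _ in range(1000):                  # RK4 steps from x = 1 to x = 2
    k1 = dydx(x, y)
    k2 = dydx(x + h / 2, y + h * k1 / 2)
    k3 = dydx(x + h / 2, y + h * k2 / 2)
    k4 = dydx(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(abs(x**2 * math.log(y) + y**3 - C0))   # ≈ 0, up to RK4 error
```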
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |