Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Second derivative of a third degree polynomial function Let $f(x) = ax^3 + bx^2 + cx +d $ be a third degree polynomial function.
$$f'' = 6ax + 2b = 0 \Longrightarrow x = \frac{-b}{3a} $$
This is equal to $\frac{1}{3}$ of the sum of the roots of $f(x)$. So my question is: can we say that the root of the second derivative is equal to $\frac{1}{3}$ of the sum (of roots) for all third degree polynomial functions? If not, why?
| Suppose the zero of $f''(x)$ is $p$ and that $p$ is a root of $f(x)$ with multiplicity $3$, i.e. $(x-p)^3 \mid f(x)$ and $(x-p)^4$ does not divide $f(x)$.
$(x-p)^3 \mid f(x)$ and $f(x)$ is a cubic polynomial $\implies$ $f(x)=k(x-p)^3$, where $k$ is a real number.
Therefore the roots of $f(x)$ are $p,p,p$. $\quad(1)$
Sum of the roots $=\frac{-b}{a} = p+p+p=3p$ $\implies$
$\frac{1}{3}(\text{sum of the roots})=p$. $\quad(2)$
From (1) and (2), $\frac{1}{3}(\text{sum of the roots})$ is also a root of $f(x)$.
This property does not hold for all cubic equations.
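As a quick sanity check (assuming sympy is available; the sample cubic is my arbitrary choice), one can verify that the zero of $f''$ equals one third of the root sum, which is automatic from Vieta's formula $-b/a$ for the sum of the roots, while that point need not itself be a root of $f$:

```python
import sympy as sp

x = sp.symbols('x')
a, b, c, d = 1, -6, 3, 1          # sample cubic x^3 - 6x^2 + 3x + 1
f = a*x**3 + b*x**2 + c*x + d

# zero of the second derivative: -b/(3a)
inflection = sp.solve(sp.diff(f, x, 2), x)[0]

# Vieta: the sum of the roots of a cubic is -b/a
root_sum = -sp.Rational(b, a)

print(inflection, root_sum / 3)   # both are 2
print(f.subs(x, inflection))      # -9: here the inflection x is NOT a root of f
```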
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Number of solutions of the function
Let $f$ be an even periodic function with period $4$ such that $f(x) = 2^x -1$ for $0\le x \le2$. What is the number of solutions of the equation $f(x) = 1$ in $[-10,20]$?
Since the given function is even,
$f(x) = f(-x)$
since the given function is periodic,
$f(x+4) = f(x)$
So,
$f(x+4) = f(-x)$
Well, according to the function defined for $x \in [0,2]$ we get one solution for $f(x) = 1$ at $x=1$
I don't know how to proceed from here.
Any help would be appreciated.
| First, we observe that as $f$ is even, the solutions of $f(x) = 1$ on $[-2,2]$ are $x=1$ and $x=-1$. Then use the periodicity $f(x+4) = f(x)$ to extend this and find all the solutions in your interval. For instance, we have $f(1) = f(5) = f(9) = \cdots$
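To make the count concrete, here is a small Python sketch (the helper `f` is my own construction): $f(x)=1$ forces $2^{|t|}-1=1$ for the reduction $t$ of $x$ into $[-2,2]$, i.e. $|t|=1$, so every real solution is an odd integer and scanning the integers in $[-10,20]$ finds them all:

```python
def f(x):
    t = ((x + 2) % 4) - 2       # reduce x into the fundamental window [-2, 2)
    return 2 ** abs(t) - 1      # evenness: the value depends only on |t|

# f(x) = 1 forces 2**|t| - 1 = 1, i.e. |t| = 1, so solutions are the odd integers
solutions = [x for x in range(-10, 21) if f(x) == 1]
print(len(solutions))           # 15
```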
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Induction principle: $n^2-7n+12≥0$ for every $n≥3$ How can I prove that $n^2-7n+12≥0$ for every $n≥3$?
I know that for $n=3$ I have $0≥0$, so the base case holds.
Now for $n+1$ I have $(n+1)^2-7(n+1)+12=n^2-5n+6$ and now I don't know how to go on...
| Note that
$$n^2-5n+6=n^2-7n+12+2n-6 \stackrel{\color{red}{n^2-7n+12\ge 0}}\ge 0 + 2n-6\ge 0$$
for $n\ge3$.
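A brute-force check of both the inequality and the induction-step identity used above (plain Python, no assumptions beyond the statement):

```python
# Check n^2 - 7n + 12 >= 0 for the first few thousand n >= 3,
# together with the rewriting used in the induction step.
for n in range(3, 5000):
    assert n**2 - 7*n + 12 >= 0
    # (n+1)^2 - 7(n+1) + 12 == (n^2 - 7n + 12) + (2n - 6)
    assert (n + 1)**2 - 7*(n + 1) + 12 == (n**2 - 7*n + 12) + 2*n - 6
print("checked")
```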
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
} |
Program for drawing geometry What program do math teachers use to draw geometry, or what is used in books?
I want the drawing to look EXACTLY (I mean the same font, the same line thickness, etc.) as in the screenshot I am attaching. I found many programs, but none that does it this way. Please help! Here is the image:
| I have not used this program in a few years, so I don't know what changes have occurred lately, but Geometer's Sketchpad was really nice and probably still is. It is not free, but I think there is a free trial.
http://www.keycurriculum.com/sketchpad.1.html
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Evaluation of a definite integral with square root in denominator The definite integral
$\displaystyle \int_0^{\pi } \frac{x \cos (x)}{\sqrt{\alpha^2-x^2}} \, dx$
is evaluated numerically.
After numerical evaluation with a CAS, it is found that the integral has real numerical values for $\alpha \geq \pi$, and complex numerical values for $\alpha<\pi$.
How is this proven or explained?
Does it hold that for any definite integral
$\displaystyle \int_0^{\beta } \frac{x \cos (x)}{\sqrt{\alpha^2-x^2}} \, dx$
the numerical values are real for $\alpha \geq \beta$? ($\alpha$ and $\beta$ are real)
| If $0<\alpha < \beta$, then $\alpha^2-x^2<0$ for $\alpha < x \le \beta$, so on that part of the range of integration you are taking the square root of negative numbers, and the integral picks up an imaginary part. If $\alpha \geq \beta$, then $\alpha^2-x^2 \geq 0$ on all of $[0,\beta]$ and the value is real.
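A numerical illustration (assuming numpy is available; the midpoint rule and the sample values $\alpha=4$ and $\alpha=2$ are my choices): with the complex square root, the imaginary part is zero for $\alpha=4>\pi$ and clearly nonzero for $\alpha=2<\pi$:

```python
import numpy as np

def I(alpha, beta=np.pi, n=200000):
    h = beta / n
    x = (np.arange(n) + 0.5) * h   # midpoints avoid hitting x = alpha exactly
    y = x * np.cos(x) / np.sqrt(alpha**2 - x**2 + 0j)   # complex square root
    return np.sum(y) * h

print(I(4.0))   # alpha > pi: the integrand is real throughout, imag part 0
print(I(2.0))   # alpha < pi: a nonzero imaginary part appears for x > alpha
```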
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solving the problem : $z z_x + z_y = 0, \quad z(x,0) = x^2$ Exercise :
For the problem :
$$\begin{cases} zz_x + z_y = 0 \\ z(x,0) = x^2\end{cases}$$
derive the solution :
$$z(x,y) = \begin{cases} x^2, \quad y = 0\\ \frac{1+2xy - \sqrt{1+4xy}}{2y^2}, \quad y \neq 0 \; \text{and} \; 1+4xy >0 \end{cases}$$
When do shocks develop ? Use the Taylor series for $\sqrt{1+\epsilon}$ about $\epsilon= 0$ to verify that $\lim_{y\to 0} z(x,y) = x^2$.
Attempt :
$$\frac{\mathrm{d}x}{z} = \frac{\mathrm{d}y}{1} = \frac{\mathrm{d}z}{0}$$
We yield the integral curves :
$$\frac{\mathrm{d}y}{1} = \frac{\mathrm{d}z}{0} \implies z_1 = z $$
$$\frac{\mathrm{d}x}{z} = \frac{\mathrm{d}y}{1} \implies z_2 = x -zy$$
Thus, the general solution will involve a $F \in C^1$ function, such that :
$$z(x,y) = F(x-zy)$$
For $y=0$ :
$$z(x,0) = F(x) \Rightarrow F(x) = x^2$$
How would one proceed now to find the second branch of the solution ?
(The shock-taylor part is easy)
| The general solution is :
$$z(x,y) = F(x-zy)\qquad\text{OK}.$$
$F$ is an arbitrary function, to be determined according to the boundary condition.
Condition : $\quad z(x,0)=x^2=F(x-0y)$
So, the function $F$ is determined :
$$F(X)=X^2\qquad\text{any }X$$
We put this function into the general solution where $X=x-zy$.
$$z(x,y)=F(x-zy)=(x-zy)^2$$
$z=x^2-2xyz+y^2z^2$
$y^2z^2-(2xy+1)z+x^2=0$
Solving for $z$ leads to :
$$z=\frac{2xy+1\pm\sqrt{(2xy+1)^2-4x^2y^2}}{2y^2}$$
$$z=\frac{2xy+1\pm\sqrt{1+4xy}}{2y^2}$$
For $y\to 0$ :
Let $\quad 4xy=\epsilon>0 \quad$ because $1+4xy>0$.
$$z=\frac{\frac{\epsilon}{2}+1\pm\sqrt{1+\epsilon}}{2y^2}=\frac{\frac{\epsilon}{2}+1\pm\sqrt{1+\epsilon}}{2\left(\frac{\epsilon}{4x}\right)^2}=8x^2\left(\frac{\frac{\epsilon}{2}+1\pm\sqrt{1+\epsilon}}{\epsilon^2}\right)$$
$\sqrt{1+\epsilon}\simeq 1+\frac12\epsilon-\frac18\epsilon^2+...$
$$z\simeq 8x^2\left(\frac{\frac{\epsilon}{2}+1\pm\left(1+\frac12\epsilon-\frac18\epsilon^2+...\right)}{\epsilon^2}\right)$$
Case of sign $+$ :
$z\simeq 8x^2\left(\frac{2+\epsilon-\frac18\epsilon^2+...}{\epsilon^2}\right)\to\infty\quad$ when $\epsilon\to 0$. This case is rejected because we need $z\to x^2$.
Case of sign $-$ :
$z\simeq 8x^2\left(\frac{\frac18\epsilon^2+...}{\epsilon^2}\right)\to x^2\quad$when $\epsilon\to 0$. This agrees. Thus the final result is :
$$z=\frac{2xy+1-\sqrt{1+4xy}}{2y^2}$$
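The final formula can also be verified symbolically (assuming sympy is available), both against the PDE itself and against the boundary limit:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
z = (1 + 2*x*y - sp.sqrt(1 + 4*x*y)) / (2*y**2)

# check the PDE: z*z_x + z_y = 0
pde = z * sp.diff(z, x) + sp.diff(z, y)
print(sp.simplify(pde))          # 0

# boundary behaviour: z -> x^2 as y -> 0
print(sp.limit(z, y, 0))         # x**2
```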
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$Z_5$ isomorphic to a subgroup of $S_4$ The question asks me if $Z_5$ is isomorphic to a subgroup of $S_4$.
What I was thinking of doing is writing down all the elements of $S_4$ and then finding the subgroup generated by each element. But that is just very long. I assume there should be a quicker way to check this. Any hint?
| No. First, we know $|S_4|=4!=24,$ and $|\Bbb{Z}_5|=5$. Then it is clear that $5$ does not divide $24,$ and by Lagrange's theorem, the order of any subgroup of $S_4$ must divide $24.$ So no such subgroup can exist.
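For completeness, one can also confirm by brute force (plain Python; the `order` helper is my own) that $S_4$ has no element of order $5$, so it has no cyclic subgroup of order $5$ either:

```python
from itertools import permutations

def order(p):
    # order of a permutation given in one-line notation
    identity = tuple(range(len(p)))
    k, q = 1, p
    while q != identity:
        q = tuple(p[i] for i in q)   # compose p with q
        k += 1
    return k

orders = {order(p) for p in permutations(range(4))}
print(sorted(orders))   # [1, 2, 3, 4]: no element of order 5
```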
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If three points are chosen at random on a circle's edge, what is the probability that the triangle contains the circle's center?
If I created a triangle with 3 random points on the outside edge of a circle, then what's the probability that the triangle contains the center point of the circle?
Please answer in as many ways as possible. I’m only in 8th grade. You can use calculus, because my math teacher said that’s how he would solve it; however, I only know a little bit of calculus, so I would also like alternatives.
Also, I'm sorry if my question was confusing. I was having trouble with the wording.
| The center will be included if the three points are not all within the same semicircle. This question shows the chance they are within a semicircle is $\frac 34$ so the chance the center is inside your triangle is $\frac 14$
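A Monte Carlo check of the $\frac14$ answer (plain Python; the gap criterion below encodes "not all three points in one semicircle"):

```python
import math, random

random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    a = sorted(random.uniform(0, 2 * math.pi) for _ in range(3))
    # arc lengths between consecutive points around the circle
    gaps = [a[1] - a[0], a[2] - a[1], 2 * math.pi - (a[2] - a[0])]
    # the center is inside the triangle iff every arc is shorter than pi,
    # i.e. the three points do NOT all lie in one semicircle
    if max(gaps) < math.pi:
        hits += 1
print(hits / trials)   # close to 0.25
```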
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that the area, $A$, of the rectangle PQRS is given by $2p(16-p^2)$ Show that the area, $A$, of the rectangle PQRS is given by $2p(16-p^2)$.
For question 10a) I am having trouble finding the $p^2$.
I understand the $2p$ and am able to find the $16$ using $\frac{dy}{dx}=0$.
Please help.
(https://i.stack.imgur.com/GemEv.jpg)
| Let $S(4-p,0)$, then $P(4-p,(4-p)(4+p))$ or $P(4-p,16-p^2)$. Then you have $A=SR\times SP$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Normal linear maps over $\mathbb C$
Prove that if $\alpha: V \to V$ is a normal linear map on a finite-dimensional inner product space $V$ over $\mathbb C$ then $\alpha = \alpha_1 + i\alpha_2$ where $\alpha_1$ and $\alpha_2$ are self-adjoint, and $\alpha_1 \alpha_2 = \alpha_2 \alpha_1$.
Since every normal linear map is diagonalisable this is easy to show by diagonalising the matrix representing $\alpha$, but I have a feeling it can be shown directly from the definition of normal: $\alpha \alpha^* = \alpha^* \alpha$. Unfortunately I cannot see how to do it. A hint would be appreciated.
| If we set
$\alpha_1 = \dfrac{\alpha + \alpha^\ast}{2}, \tag 1$
then evidently
$\alpha_1^\ast = \alpha_1, \tag 2$
i.e., $\alpha_1$ is self-adjoint; likewise, setting
$\alpha_2 = \dfrac{\alpha - \alpha^\ast}{2i}, \tag 3$
we see that
$\alpha_2^\ast = \dfrac{\alpha^\ast - \alpha}{-2i} = \dfrac{\alpha- \alpha^\ast}{2i}; \tag 4$
so $\alpha_2$ is also self-adjoint; also,
$\alpha = \alpha_1 + i \alpha_2, \tag 5$
as is easily seen. Now
$\alpha_1 \alpha_2 = \dfrac{1}{4i}(\alpha + \alpha^\ast)(\alpha - \alpha^\ast) = \dfrac{1}{4i}(\alpha^2 - (\alpha^\ast)^2) = \dfrac{1}{4i}(\alpha - \alpha^\ast)(\alpha + \alpha^\ast) = \alpha_2 \alpha_1, \tag{6}$
where we have used $\alpha \alpha^\ast = \alpha^\ast \alpha$ in performing the algebraic operations of (6). It follows that $\alpha_1$ and $\alpha_2$
meet the stated requirements.
No diagonalization needed!
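A numerical spot-check of these identities (assuming numpy is available; building the normal matrix as unitary · diagonal · unitary$^*$ is my choice of construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# build a normal (but not Hermitian) matrix: Q D Q^H with Q unitary, D diagonal
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
D = np.diag(rng.normal(size=4) + 1j * rng.normal(size=4))
a = Q @ D @ Q.conj().T
assert np.allclose(a @ a.conj().T, a.conj().T @ a)   # normality

a1 = (a + a.conj().T) / 2        # formula (1)
a2 = (a - a.conj().T) / 2j       # formula (3)

assert np.allclose(a1, a1.conj().T)     # a1 self-adjoint
assert np.allclose(a2, a2.conj().T)     # a2 self-adjoint
assert np.allclose(a, a1 + 1j * a2)     # alpha = alpha_1 + i alpha_2
assert np.allclose(a1 @ a2, a2 @ a1)    # they commute, using normality
print("all identities verified")
```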
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$X$ a compact Hausdorff space and a sequence $A_1 \supset A_2 \supset \dots$ of closed connected subsets. Show that $\bigcap A_n$ is also connected. X a compact Hausdorff space and a sequence $A_1 \supset A_2 \supset \dots$ of closed connected subsets.
I need to show that the intersection $\bigcap _{n\in\Bbb N}A_n$ is also connected.
I know from the compactness of $X$ that the intersection is non-empty by the finite intersection property. If I choose a convergent sequence of points $(x_n)$ with $x_n\in A_n$ for every $n\in \Bbb N$, it has a single limit $x \in \bigcap_{n\in\Bbb N}A_n$ (by the fact that $X$ is Hausdorff), but I don't know how to show that $\bigcap _{n\in\Bbb N}A_n$ is connected...
| Let $U$ be an open set containing $A=\cap_n A_n$. Then there exists $n_0$ such that $A_n \subset U$ for $n\ge n_0$ ( otherwise the sets $A_n \backslash U$ would be closed, non void and decreasing so their intersection would be nonvoid).
Assume now that $A=\cap_n A_n$ is not connected, say $A=A'\cup A''$ with $A'$, $A''$ non-empty, closed and disjoint. Since a compact Hausdorff space is normal, we can take disjoint open sets $U'\supset A'$ and $U''\supset A''$. There exists $n_0$ such that $A_n\subset U'\cup U''$ for $n\ge n_0$. Since $A_n$ meets both $U'$ and $U''$ (as $A'\subset A_n$ and $A''\subset A_n$), we conclude that $A_n$ is disconnected for $n\ge n_0$, contradicting the hypothesis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Binary symmetric channel probability
Given are vectors $x$ and $y$ with $d (x, y) = k ≤ n$, compute $P \left \{Y = y | X = x \right \}$. Does this probability depend on the concrete choice of the vectors x and y?
I'm a little bit confused about what I should do with $d$, the Hamming distance. Also, what formula should I use?
Usually I would use a probability mass function, but what would $n$ and $k$ be?
| If the Hamming distance between $x$ and $y$ is $k$ then there are $k$ positions at which $x$ and $y$ are different.
If $x$ is given and we would like to see $y$ at the output then we would like to see $k$ changes and $n-k$ non-changes.
Assuming that the channel works independently of the input and independently of its own history then the probability we seek is
$$(1-p)^{n-k}p^k$$
because we want $k$ changes and $n-k$ non-changes.
This probability does not depend on $x$, it depends only on its Hamming distance from $y$.
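A minimal sketch of the formula (plain Python; the function name and sample numbers are illustrative):

```python
from math import comb

def bsc_prob(n, k, p):
    """P{Y = y | X = x} for a memoryless binary symmetric channel with
    crossover probability p, when x and y differ in exactly k of n bits."""
    return p**k * (1 - p)**(n - k)

print(bsc_prob(3, 1, 0.1))    # 0.9^2 * 0.1 = 0.081

# sanity check: summing over all 2^n possible outputs y gives 1
total = sum(comb(3, k) * bsc_prob(3, k, 0.1) for k in range(4))
print(total)                  # 1.0 (up to floating point)
```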
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$A^2=-I_4$. Find possible values of minimal polynomial and characteristic polynomial Let $A\in\mathbb{R}^{4\times 4}$ satisfy $$A^2=-I_4 .$$
(a) Find possible values of $m_a$ (minimal polynomial) and $p_a$ (characteristic polynomial).
(b) Find an example for A satisfying the condition.
Please help me approach the first question. I can assume (b) would immediately follow.
| We have
$$
A^2 = -I\\
A^2 +I = 0
$$
and we directly read off the polynomial $f(x) = x^2 + 1$, with $f(A) = 0$. Now you just need to figure out how the minimal and characteristic polynomials each relate to a polynomial that annihilates $A$.
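For part (b), one standard witness (assuming numpy is available for the check; the block construction is one choice among many): two copies of the 90-degree rotation matrix $J=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ on the diagonal.

```python
import numpy as np

# J is a rotation by 90 degrees, so J^2 = -I_2; two diagonal copies give A^2 = -I_4
J = np.array([[0, -1],
              [1,  0]])
A = np.block([[J, np.zeros((2, 2))],
              [np.zeros((2, 2)), J]])
print(np.array_equal(A @ A, -np.eye(4)))   # True

# eigenvalues are +/- i, each with multiplicity 2
print(np.round(np.linalg.eigvals(A), 8))
```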
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Show that $g'(x)+g(x)-2e^x=0$
Given a function $g$ which has derivative $g'$ for all $x \in \mathbb{R}$ and satisfies $g'(0)=2$ and $g(x+y)=e^yg(x)+e^xg(y)$ for all $x,y\in \mathbb{R}$.
Show that $g'(x)+g(x)-2e^x=0$
$\dfrac{g(x+y)}{e^{x+y}}=\dfrac{g(x)}{e^{x}}+\dfrac{g(y)}{e^{y}}$
Differentiating with respect to $x$ and putting $x=0$,
$g'(y)=2e^y+g(y)$
Also putting $y=0$,we get,
$0=e^xg(0)\implies g(0)=0$
I had a strong hunch that $g(x)=e^x-e^{-x}$ but it does not satisfy $g(x+y)=e^yg(x)+e^xg(y)$
Please help.
| $$g(x+y) = e^y g(x) + e^x g(y)$$
Differentiating w.r.t. y
$$g'(x+y) = e^y g(x) + e^x g'(y)$$
Now put $y=0$
$$g'(x) = e^0 g(x) + e^x g'(0)\implies g'(x) - g(x) - 2e^x = 0$$
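Solving the resulting linear ODE $g'-g=2e^x$ with $g(0)=0$ gives the candidate $g(x)=2xe^x$, which can be checked against all the given conditions (assuming sympy is available). Note it satisfies $g'-g-2e^x=0$, the identity actually derived above:

```python
import sympy as sp

x, y = sp.symbols('x y')
g = 2 * x * sp.exp(x)       # candidate from solving g' - g = 2e^x with g(0) = 0

# functional equation g(x+y) = e^y g(x) + e^x g(y)
lhs = g.subs(x, x + y)
rhs = sp.exp(y) * g + sp.exp(x) * g.subs(x, y)
print(sp.simplify(lhs - rhs))            # 0

print(sp.diff(g, x).subs(x, 0))          # 2, matching g'(0) = 2
print(sp.simplify(sp.diff(g, x) - g - 2 * sp.exp(x)))   # 0
```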
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
The intuition of the dual space? The dual space of X is defined to be the space of all linear and continuous functionals that map X to R. But, What exactly is a dual space intuitively?
In my current self-guided understanding, I think of a space of functions as a set of points (or a region) in the infinite-dimensional space $\mathbb R^\infty$. Let $f(x)$ be an element of a space of functions $X$; can I think of each value $f(x)$ as the magnitude in the dimension $x$?
If my assumption above is correct, then what does it mean to have a space consists of functionals? Functionals take a function as input and spit out a scalar, right? There are many functionals that involve differentiation and are not continuous. These functionals in no sense correspond to any functions, right?
Since all linear functionals that are bounded are also continuous, can I say that the only class of functionals that is linear and continuous is simple convolution with certain bounded function g(x)? Namely, $\int f(x)g(x)dx$?
And so, all g(x) that make the integral mapping continuous are the elements of the dual space? This is the best explanation I can come up with so far.
If all my assumptions are incorrect, can someone explain to me what it means to have a space which consists of functionals?
|
In my current self-guided understanding, I think of a space of functions as a set of points (or a region) in the infinite-dimensional space $\Bbb R^\infty$. Let $f$ be an element of a space of functions $X$; can I think of each value $f(x)$ as the magnitude in the dimension $x$?
This is a very good intuition. In fact, the usual notation for the set of functions from $A$ to $\Bbb R$ is $\Bbb R^A$.
Since all linear functionals that are bounded are also continuous, can I say that the only class of functionals that is linear and continuous is simple convolution with certain bounded function $g$?
Definitely wrong (true in a few cases). Counterexample: the dual of $C_b(\Omega,V)$, the space of bounded continuous functions $f:\Omega\rightarrow V$ into a Banach space $V$ with the $\sup$ norm, is the space of regular bounded finitely additive measures. See Dual space of continuous functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
How would one set up barycentric coordinates for a trapezoid? Barycentric coordinates are great for triangles, but I'm interested in how to construct a barycentric coordinate system for an arbitrary trapezoid.
I've seen this done for an arbitrary quadrilateral, but it ought to be simpler in the case of a trapezoid because of the two parallel sides. There does not appear to be information on this specific case online.
This source has a good solution in general, but has this note: "Note that the special case $\vec{c}\times\vec{d}=0$ must be treated separately", but gives no explanation of how the special case must be treated. This special case turns out to be when two opposite sides are parallel — the definition of a trapezoid.
| The source derives that
$$(\mathbf{c} \times \mathbf{d})\mu^2 + (\mathbf{c} \times \mathbf{b} + \mathbf{a} \times \mathbf{d})\mu + \mathbf{a} \times \mathbf{b} = 0.$$
When $\mathbf{c} \times \mathbf{d}$ is zero, we should just be able to conclude that
$$\mu = -\frac{\mathbf{a} \times \mathbf{b}}{\mathbf{c} \times \mathbf{b} + \mathbf{a} \times \mathbf{d}}.$$
I think perhaps they mean that one cannot apply the quadratic formula in this case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2776961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding integers and the limit of a sequence - from a 100 sequence and more (Russian book) Consider the sequence with $a_{1}=1$ and $a_k=a_{k-1}+\dfrac{1}{a_{k-1}}$ for every integer $k>1$.
a) How many positive integers $n$ are there such that $a_n$ is an integer?
b) Find the limit (if it exists) of the sequence $(a_k)$.
Please help! It looks like the answer to a) is 2. Thanks in advance!
| Let's prove that
$$
1 \le a_n \le n
$$
Since $a_1 = 1$ and $a_{n+1} > a_n$ we clearly have $a_n \ge 1$. We can prove that $a_n \le n$ by induction. We have $a_1 = 1 \le 1$, and the induction step is
$$
a_{n+1} = a_n + \frac{1}{a_n} \le n + \frac{1}{a_n} \le n + 1,
$$
since $a_n \ge 1$.
About question a). If $a_n$ is an integer then $a_{n+1}=a_n+1/a_n$ is an integer only if $a_n = 1$. That gives us $a_2 = 2$. Also, by definition $a_1 = 1$. There are no other integers in this sequence: let $a_n = \frac{p}{q}$ where $p$ and $q$ are coprime and $q>1$. Then
$$
a_{n+1} = a_n + \frac{1}{a_n} = \frac{p^2 + q^2}{pq}.
$$
So we must have $p^2 + q^2 = m\cdot pq$ for some $m \in \mathbb{N}$. But that's not possible if $p$ and $q$ are coprime, because $q$ must divide $p^2$ and $q > 1$. So there are only two indices ($n=1,2$) for which $a_n$ is integer.
Now regarding question b). We have
$$
a_{n+1} = a_n + \frac{1}{a_n} = a_1 + \sum_{k=1}^{n}\frac{1}{a_k} \ge \frac{1}{a_n} + \frac{1}{a_{n-1}} + \ldots + \frac{1}{a_1} \ge \frac{1}{n} + \frac{1}{n-1} + \ldots + 1 = H_n,
$$
where $H_n$ is the $n$-th harmonic number (we used $a_k \le k$ in the last step). Since the harmonic series diverges, $a_n \to \infty$, so the sequence has no finite limit.
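Both parts can be checked with exact rational arithmetic (plain Python; twelve terms keep the numerators and denominators manageable, since they roughly square at each step):

```python
from fractions import Fraction

a = [None, Fraction(1)]                  # 1-indexed: a[1] = 1
for k in range(2, 13):
    a.append(a[-1] + 1 / a[-1])          # exact: a_k = a_{k-1} + 1/a_{k-1}

integer_indices = [n for n in range(1, 13) if a[n].denominator == 1]
print(integer_indices)                   # [1, 2]: only a_1 and a_2 are integers
print([float(a[n]) for n in (1, 2, 3, 4)])   # 1.0, 2.0, 2.5, 2.9
```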
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How is this proof for the scalar product rule of limits valid? If we let $K=\lim\limits_{x \to a} f(x),$
and let $c$ be a constant,
Then in order to show that $\lim\limits_{x \to a} cf(x) = cK$,
we must show that for every $\epsilon > 0$ there is a $\delta > 0$ such that
$\lvert cf(x)-cK \rvert < \epsilon$ whenever $0 < \lvert x-a \rvert < \delta$.
The proof claims to prove this by stating:
$$\lvert cf(x)-cK \rvert = \lvert c \rvert \lvert f(x)-K \rvert < \lvert c \rvert \frac{\epsilon}{\lvert c \rvert} = \epsilon.$$
However, I don't see how this proves anything other than basic manipulation of terms, and I don't see how it relates $\epsilon$ to $\delta$ in any way.
| The step missing is saying "So take $\delta>0$ such that $|f(x)-K|<\epsilon/|c|$ whenever $|x-a|<\delta$. Thus, that $\epsilon$ and $\delta$ will work to show that $|cf(x)-cK|<\epsilon$ whenever $|x-a|<\delta$."
Most of the time, this step is left out of proof involving limits since it has the same flavor and is largely the same every time.
One other thing worth mentioning is that this proof doesn't work when $c=0$, but that exceptional case is rather trivial since it just leads to the claim that $\lim_{x \to a}{0}=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Do Real Symmetric Matrices have 'n' linearly independent eigenvectors? I know that Real Symmetric Matrices have real eigenvalues and the vectors corresponding to each distinct eigenvalue is orthogonal from this answer. But what if the matrix has repeated eigenvalues? Does it have linearly independent (and orthogonal) eigenvectors? How to prove that?
PS: The question I referred to has another answer which might have answered this question. I'm not sure whether it did, since I didn't understand it. If it did, can anyone please explain it?
Thanks!
| Real Symmetric Matrices have $n$ linearly independent and orthogonal eigenvectors.
There are two parts here.
1. The eigenvectors corresponding to distinct eigenvalues are orthogonal which is proved here.
2. If some of the eigenvalues are repeated then, since the matrix is Real Symmetric, each eigenvalue has as many independent eigenvectors as its multiplicity. (Proof here and here.) As John Ma pointed out, in this case we can use Gram–Schmidt orthogonalization to get orthogonal vectors within each eigenspace.
So, all $n$ eigenvectors of a Real Symmetric matrix are linearly independent and orthogonal
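A quick numerical illustration with a deliberately repeated eigenvalue (assuming numpy is available; the construction of $S$ is my own choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# symmetric matrix with eigenvalues (1, 1, 2): eigenvalue 1 is repeated
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal eigenbasis
S = Q @ np.diag([1.0, 1.0, 2.0]) @ Q.T
assert np.allclose(S, S.T)

w, V = np.linalg.eigh(S)          # eigh returns an orthonormal eigenbasis
print(np.round(w, 8))             # [1. 1. 2.]
assert np.allclose(V.T @ V, np.eye(3))        # n orthonormal eigenvectors
assert np.allclose(S @ V, V @ np.diag(w))     # they really are eigenvectors
print("ok")
```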
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Lebesgue outer measure intervals may all be assumed to be open On page 6-7 of Invitation to Ergodic Theory by C. E. Silva
Proposition 2.1.1 Lebesgue outer measure satisfies the following properties.
(1) The intervals $I_j$ in the definition of outer measure may all be assumed to be open.
With the proof for (1) being:
Let $\alpha(A)$ denote the outer measure of $A$ when computed using only open bounded intervals in the covering. Clearly, $\lambda^*(A)\leq\alpha(A).$
I don't understand why $\lambda^*(A)\leq\alpha(A)$ is true, since a closed interval $I_k = [a, b]$ contains $(a,b)$.
edit:
Following that snippet. The book says:
Now fix $\epsilon>0$. For any covering $\{I_j\}$ of $A$, let $K_j$ be an open interval containing $I_j$ such that $\mid K_j\mid < \mid I_j\mid + \frac{\epsilon}{2^j}$, $j\geq1$. Then
$$\sum_{j=1}^\infty\mid K_j\mid<\sum_{j=1}^\infty\mid I_j\mid+\epsilon$$
Taking the infimum of each side gives $\alpha(A)\leq\lambda^*(A)+\epsilon$, as this holds for all $\epsilon$, $\alpha(A)\leq\lambda^*(A)$.
| The outer measure is defined as an infimum. If you take the infimum over less sets (only the open ones) then the infimum is possibly greater.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Gradient of distance vector length Suppose that there is a surface described by:
$\phi(x,y,z)=c$
And suppose that there is a fixed point A:
$\vec{r_A}=(x_A,y_A,z_A)$
Let $\vec r$ be position vector of any point on the surface so that:
$R=|\vec r-\vec r_A|$
Show that $\nabla R$ is a unit vector whose direction is along $\vec r-\vec r_A$
I tried to write $z=f(x,y)$ and then calculate $\nabla R$, but I got that it has no $z$ component, which leads to a contradiction...
| By writing $\vec r = (x,y,z)$ you have $R = \sqrt{(x-x_A)^2+(y-y_A)^2+(z-z_A)^2}$. Then, since $\nabla = (\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z})$, I will just do the calculation for the $x$-coordinate, as it is completely analogous for $y,z$:
$\frac{\partial R}{\partial x} = \frac{1}{2} R^{-1} \cdot 2 (x-x_A) = \frac{x-x_A}{R}$
And by putting all coordinates together that gives you:
$\nabla R = \frac{1}{R} (\vec r - \vec r_A)$, which is just $\vec r - \vec r_A$ divided by its norm and therefore has unit length.
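The same computation can be checked symbolically (assuming sympy is available):

```python
import sympy as sp

x, y, z, xA, yA, zA = sp.symbols('x y z x_A y_A z_A', real=True)
R = sp.sqrt((x - xA)**2 + (y - yA)**2 + (z - zA)**2)

grad = sp.Matrix([sp.diff(R, v) for v in (x, y, z)])
diff_vec = sp.Matrix([x - xA, y - yA, z - zA])

# gradient equals (r - r_A)/R ...
print(sp.simplify(grad - diff_vec / R))      # zero vector

# ... and has unit length
print(sp.simplify(grad.dot(grad)))           # 1
```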
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How is the derivative of $\textrm{Trace}\left\{ X^T A X B\right\}$ with respect to $X$ equal to $AXB + A^TXB^T$? How is the derivative of $\mbox{Trace}\left\{ X^T A X B\right\}$ with respect to matrix $X$ equal to $AXB + A^TXB^T$?
\begin{align}
\nabla_X \ \textrm{Trace}\left\{ X^T A X B\right\} = AXB + A^TXB^T
\end{align}
where $A$ and $B$ matrices are given.
| With implicit summation over repeated indices,
$$\frac{\partial\operatorname{tr}X^T AXB}{\partial X_{ij}}=A_{lm}B_{nk}\frac{\partial}{\partial X_{ij}}(X_{lk}X_{mn})=A_{lm}B_{nk}(\delta_{il}\delta_{jk}X_{mn}+X_{lk}\delta_{im}\delta_{jn}),$$where $\delta_{rs}$ is the Kronecker delta ($1$ if $r=s$, $0$ is $r\neq s$). The right-hand side is $$A_{im}X_{mn}B_{nj}+A_{li}X_{lk}B_{jk}=(AXB+A^T XB^T)_{ij}.$$
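The formula can be spot-checked against central finite differences (assuming numpy is available; the matrix sizes, seed, and step are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, X = (rng.normal(size=(3, 3)) for _ in range(3))

f = lambda M: np.trace(M.T @ A @ M @ B)
analytic = A @ X @ B + A.T @ X @ B.T

h = 1e-6
numeric = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3)); E[i, j] = h
        numeric[i, j] = (f(X + E) - f(X - E)) / (2 * h)   # central difference

print(np.max(np.abs(numeric - analytic)))   # small: the trace is quadratic in X
```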
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
A common tangent line
The graph of $f(x)=x^4+4x^3-16x^2+6x-5$ has a common tangent line at $x=p$ and $x=q$. Compute the product $pq$.
So what I did is I took the derivative and, from equating the slopes $f'(p)=f'(q)$, found that $p^2+pq+q^2+3p+3q-8=0$. However, when I tried to factorize it I didn't find an obvious solution. Can someone give me a hint what to do next? Thanks in advance
| Let the equation of the common tangent be $y=mx+b$.
Solving simultaneously the equations of the common tangent and $f(x)$, we get
$x^4+4x^3-16x^2+(6-m)x-(b+5)=0$
This equation will have two double roots (since a line is touching a curve twice), so let the roots be $p,\,p,\,q,\,q$.
Can you take it from here?
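Continuing the hint symbolically (assuming sympy is available): matching coefficients of $f(x)-(mx+b)=(x-p)^2(x-q)^2$ forces $p+q=-2$ and then $pq=-10$:

```python
import sympy as sp

x, p, q, m, b = sp.symbols('x p q m b')
f = x**4 + 4*x**3 - 16*x**2 + 6*x - 5

# tangency at x = p and x = q means f(x) - (m*x + b) = (x - p)^2 (x - q)^2
eqs = sp.Poly(f - (m*x + b) - (x - p)**2 * (x - q)**2, x).all_coeffs()
sols = sp.solve(eqs, [p, q, m, b], dict=True)
print([sp.expand(s[p] * s[q]) for s in sols])   # pq = -10 in every solution
```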
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Surface described by ${\bf r^\top A r + b^\top r}=1$? I was asked to describe the surface described by
$${\bf r}^\top {\bf A} {\bf r} + {\bf b}^\top {\bf r} = 1,$$
where $3 \times 3$ positive definite matrix ${\bf A}$ and vector $\bf b$ are given.
My intuition tells me that it is a rotated ellipsoid with a centre that is off the origin. However, I am told to show this via the substitution ${\bf r} = {\bf x} + {\bf a}$, with $\bf a$ being a constant vector, and dictate the conditions on this vector to obtain a new quadric surface ${\bf x}^\top{\bf A}{\bf x} = C$. However, upon substitution, I get a ridiculously messy answer involving combinations of position and constant vectors. Is there a trick I am missing out on? Thank you!
| Note: for convenience we use $2b$ instead of $b$.
$$(x+a)^TA(x+a)+2b^T(x+a)=x^TAx+a^TAx+x^TAa+a^TAa+2b^Tx+2b^Ta.$$
Notice that by symmetry of $A$, $x^TAa=a^TAx$. Collecting all the $x$ terms,
$$(2a^TA+2b^T)x$$ can be cancelled with the choice
$$a=-A^{-1}b.$$
Then
$$C=1-a^TAa-2b^Ta=1+b^TA^{-1}b>1.$$
(As $A$ is positive definite, it is invertible. Note that this is a matrix version of the "complete the square" paradigm.)
Going further, you can diagonalize the matrix and write, in the basis defined by the Eigenvectors
$$y^T\Lambda y=C$$
or
$$\lambda_0u^2+\lambda_1v^2+\lambda_2w^2=C.$$
As the three Eigenvalues are positive,
$$(\sqrt{\lambda_0}\,u)^2+(\sqrt{\lambda_1}\,v)^2+(\sqrt{\lambda_2}\,w)^2=C$$ describes a stretched sphere, i.e. an ellipsoid.
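A numerical check of the completion-of-the-square step (assuming numpy is available; note the $2b$ convention used above, and that $A$, $b$ here are random samples):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
A = M @ M.T + 3 * np.eye(3)          # positive definite by construction
b = rng.normal(size=3)

a = -np.linalg.solve(A, b)           # a = -A^{-1} b kills the linear term
C = 1 + b @ np.linalg.solve(A, b)    # predicted constant 1 + b^T A^{-1} b

# the surface r^T A r + 2 b^T r = 1 becomes x^T A x = C under r = x + a
for _ in range(5):
    x = rng.normal(size=3)
    r = x + a
    lhs = r @ A @ r + 2 * b @ r
    assert np.isclose(lhs - 1, x @ A @ x - C)
print("completion of the square verified")
```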
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2777982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that unit circle is not homeomorphic to the real line Show that $S^1$ is not homeomorphic to either $\mathbb{R}^1$ or $\mathbb{R}^2$
$\mathbf{My \ solution}$:
So first we will show that $S^1$ is not homeomorphic to $\mathbb{R}^1$.
To show that they are not homeomorphic we need to find a property that holds in $S^1$ but does not hold in $\mathbb{R}^1$ or vice-versa.
$S^1$ is compact however $\mathbb{R}^1$ is not compact.
The set $\{1\} $ is closed, and the map
$$f: \Bbb R^2 \longrightarrow \Bbb R,$$
$$(x, y) \mapsto x^2 + y^2$$
is continuous. Therefore the circle
$$\{(x,y) \in \Bbb R^2 : x^2 + y^2 = 1\} = f^{-1}(\{1\})$$
is closed in $\Bbb R^2$.
The set $S^1$ is also bounded since, for example, it is contained within the ball of radius $2$ centered at $0$ in $\Bbb R^2$ (in the standard topology of $\Bbb R^2$).
Hence it is also compact.
The real line $\Bbb R^1$, however, is not compact, because there is a cover by open intervals that has no finite subcover. For example, the intervals $(n-1, n+1)$, where $n$ ranges over $\mathbb{Z}$, cover $\mathbb{R}$, but there is no finite subcover.
Hence $S^1$ cannot be homeomorphic to $\mathbb{R}^1$.
How to show now that $S^1$ is not homeomorphic to $\mathbb{R}^2$? Can I show it in the same way?
They can not be homeomorphic since $S^1$ is compact however $\mathbb{R}^2$ not.
How to show that $\mathbb{R}^2$ is not compact?
| To prove that $\mathbb R^2$ is not compact: Assume that it is. The image of a compact space under a continuous map is compact. The mapping $f:\mathbb R^2 \to \mathbb R, (x,y)\mapsto x$ is continuous and has image $\mathbb R$. Hence $\mathbb R$ is compact. But you yourself showed that $\mathbb R$ is not compact. Contradiction.
To show that $S^1$ is not homeomorphic to $\mathbb R^2$: Observe that $S^1$ is compact but $\mathbb R^2$ isn't. Done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Are the probabilities the same? Is the probability that at least 3 out of 10 people like doing something the same as the probability that at least 30 out of 100 people do? I was wondering whether, as the sample grows larger, the probability would grow or shrink. Is it exponential?
| You are probably assuming that each person independently either does it or not with some probability $p$. Fluctuations get smaller as the sample size goes up, so if $p \gt 0.3$ you would expect more chance for $30$ out of $100$ and if $p \lt 0.3$ you would expect more chance for $3$ out of $10$. If $p$ is just slightly above $0.3$ you might get a reversal because of the granularity of cases with $10$ people.
If $p=0.3$ the probability will be close to $0.5$, a little higher because you accepted exactly $0.3$ of the people doing it. It will be higher for $3$ in $10$ because the chance of exactly $3$ of $10$ is higher than the chance of exactly $30$ of $100$ due to granularity.
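The size effect described above can be checked numerically. Here is a sketch (assuming, as the answer does, that each person independently likes the activity with probability $p$, and that scipy is available) using the binomial survival function:

```python
from scipy.stats import binom

def p_at_least(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return binom.sf(k - 1, n, p)

for p in (0.25, 0.3, 0.35):
    small = p_at_least(3, 10, p)    # at least 3 of 10
    large = p_at_least(30, 100, p)  # at least 30 of 100
    print(f"p={p}: P(>=3/10)={small:.4f}, P(>=30/100)={large:.4f}")
```

For $p<0.3$ the small sample wins, for $p>0.3$ the large sample wins, matching the fluctuation argument.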
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using expectation/variance algebra in normal distribution A shop sells apples and pears. The masses, in grams, of the apples may be assumed to have a A~$N(180, 12^2)$ distribution and the masses of the pears, in grams, may be assumed to have a P~$N(100, 10^2)$ distribution. Find the probability that the mass of a randomly chosen apple is more than double the mass of a randomly chosen pear.
Although this question appeared in the linear combination of normal variables section, I wasn't quite sure how to incorporate it into this particular problem. I thought that it could just simply be
$P(A>2P)$, where 2P can be found by multiplying the mean, 100, by 2. So it would be
$P(A>200)$. But I don't think this approach is correct. How should I solve this? The answer is 0.196.
| Guide:
If assume independence, then we have $A-2P$ is normal distribution with mean $$\mathbb{E}[A-2P]=\mathbb{E}[A]-2\mathbb{E}[P]=180-2(100)$$
and
$$Var[A-2P]=Var[A]+4Var[P]=12^2+4(10^2)$$
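Finishing the guide numerically (assuming independence of $A$ and $P$, and that scipy is available), the requested probability is $P(A-2P>0)$:

```python
from math import sqrt
from scipy.stats import norm

mu = 180 - 2 * 100               # E[A - 2P] = -20
sigma = sqrt(12**2 + 4 * 10**2)  # sd of A - 2P = sqrt(544)
p = norm.sf(0, loc=mu, scale=sigma)  # P(A - 2P > 0)
print(round(p, 3))  # -> 0.196
```

This reproduces the stated answer of 0.196.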
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
consequence of Hahn-Banach theorem In Wikipedia, it says that Hahn-Banach Theorem shows there are "enough" continuous linear functionals. But, why is that so in a space that is not necessarily normed? How does the statement of Hahn-Banach show this?
| Let $X$ be a normed space and $M$ a finite-dimensional subspace of $X$. Take any linear functional $f : M \to \mathbb{F}$. It will be continuous because $M$ is finite-dimensional.
Hahn-Banach lets you extend $f$ to a continuous linear functional defined on all of $X$, so you obtain a nontrivial element of $X^*$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Series 1 - 1/2^2 + 1/3 - 1/4^2 + 1/5 - 1/6^2 + 1/7... This is exercise 2.7.2 e) of the book "Understanding Analysis 2nd edition" by Stephen Abbott, and it asks to decide whether this series converges or diverges:
$$ 1-\dfrac{1}{2^2}+\dfrac{1}{3}-\dfrac{1}{4^2}+\dfrac{1}{5}-\dfrac{1}{6^2}+\dfrac{1}{7}-\dfrac{1}{8^2}\dots $$
I have noticed that
$$\dfrac{1}{3}< 1-\dfrac{1}{2^2}+\dfrac{1}{3};\\
\dfrac{1}{3}+\dfrac{1}{5}<1-\dfrac{1}{2^2}+\dfrac{1}{3}-\dfrac{1}{4^2}+\dfrac{1}{5};\\
\dfrac{1}{3}+\dfrac{1}{5}+\dfrac{1}{7}<1-\dfrac{1}{2^2}+\dfrac{1}{3}-\dfrac{1}{4^2}+\dfrac{1}{5}-\dfrac{1}{6^2}+\dfrac{1}{7}\\
$$
which is true in general because
$$1-\dfrac{1}{4}-\dfrac{1}{16}-\dfrac{1}{36}-\dfrac{1}{64}-\dots=1-\sum_{n=1}^\infty \dfrac{1}{(2n)^2}=1-\dfrac{1}{4}\sum_{n=1}^\infty\dfrac{1}{n^2}=1-\dfrac{\pi^2}{24}>0.
$$
Thus
$$
\sum_{n=1}^\infty \dfrac{1}{2n+1}<1-\dfrac{1}{2^2}+\dfrac{1}{3}-\dfrac{1}{4^2}+\dfrac{1}{5}-\dfrac{1}{6^2}+\dfrac{1}{7}-\dfrac{1}{8^2}\dots
$$
Finally, as the series $\sum_{n=1}^\infty \dfrac{1}{2n+1}$ diverges, so does the series requested.
My two questions are:
I) Is this reasoning correct?
II) Can this exercise be done without using that $\sum_{n=1}^\infty\dfrac{1}{n^2}=\dfrac{\pi^2}{6}$? I would like to find a solution with more elementary tools.
Thank you.
| Don't be swayed by the particular case. You have the following
Lemma Let $\sum a_n$ be a series and $\sum b_n$ a convergent series. Then the interleaved series $\sum x_n$ given by $x_{2n}=a_n$ and $x_{2n+1}=b_n$ converges iff $\sum a_n$ does.
Proof Let us, for short, suppose the indexing starting from zero. Then, we get
$$
\sum_{n=0}^{2N+1}x_n=\sum_{n=0}^N a_n + \sum_{n=0}^N b_n
$$
this proves the equivalence
$$
\sum x_n\mbox{ converges }\Longleftrightarrow \sum a_n\mbox{ converges }
$$
end of proof
Here
$$
1+\dfrac{1}{3}+\dfrac{1}{5}+\dfrac{1}{7}+\dfrac{1}{9}\ldots
$$
diverges while
$$\dfrac{1}{2^2}+\dfrac{1}{4^2}+\dfrac{1}{6^2}+\dfrac{1}{8^2}\ldots
$$
converges. Hence your series diverges.
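A quick numerical illustration of this divergence (a sketch, not a proof): the partial sums keep growing like $\tfrac12\ln N$, because the positive odd-reciprocal part diverges while the subtracted squares stay bounded.

```python
def partial_sum(terms):
    # pairs of terms: + 1/(2k+1), then - 1/(2(k+1))^2
    s = 0.0
    for k in range(terms):
        s += 1.0 / (2 * k + 1)         # 1, 1/3, 1/5, ...
        s -= 1.0 / (2 * (k + 1)) ** 2  # -1/2^2, -1/4^2, ...
    return s

print(partial_sum(10**2), partial_sum(10**4), partial_sum(10**6))
```

Each factor-of-100 increase in the number of term pairs adds roughly $\tfrac12\ln 100 \approx 2.3$ to the partial sum.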
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Finding $\int^{1}_{0}\frac{\ln^2(x)}{\sqrt{4-x^2}}dx$
Finding $$\int^{1}_{0}\frac{\ln^2(x)}{\sqrt{4-x^2}}dx$$
Try: Let $$I=\frac{1}{2}\int^{1}_{0}\frac{\ln^2(x)}{\sqrt{1-\frac{x^2}{4}}}\,dx=\frac{1}{2}\int^{1}_{0}\sum^{\infty}_{n=0}\binom{-1/2}{n}\bigg(-\frac{1}{4}\bigg)^nx^{2n}\ln^2(x)\,dx$$
$$I=\frac{1}{2}\sum^{\infty}_{n=0}\binom{-1/2}{n}\bigg(-\frac{1}{4}\bigg)^n\int^{1}_{0}x^{2n}\ln^2(x)dx$$
Using By parts , We have
$$I=\frac{1}{2}\sum^{\infty}_{n=0}\binom{-1/2}{n}\bigg(-\frac{1}{4}\bigg)^n\frac{2}{(2n+1)^3}$$
But the answer is given as $\displaystyle\frac{7\pi^3}{216}$.
I do not understand how to get it. Could someone help me? Thanks.
| This is as far as I've gotten with the integral. I will admit, I underestimated the difficulty of this integral. The four in the denominator proved to be a lot more of a nuisance than I thought. Nevertheless, this integral can be painstakingly computed by hand with the help of our best friend
$$\log\sin x=-\sum\limits_{n\geq1}\frac {\cos 2nx}n-\log 2\tag1$$
First, we make the substitution $x\mapsto 2\sin x$ to clear the four in the denominator$$\begin{align*}I & =\int\limits_0^{\pi/6}dx\,\log^2(2\sin x)\\ & =\int\limits_0^{\pi/6}dx\,\log^22+\log 4\int\limits_0^{\pi/6}dx\,\log\sin x+\int\limits_0^{\pi/6}dx\,\log^2\sin x\tag2\end{align*}$$Call the remaining integrals in (2) $I_1$, $I_2$, and $I_3$ respectively. The first integral $I_1$ is trivial
$$I_1\color{blue}{=\frac {\pi}6\log^22}\tag3$$
The second and third integrals $I_2$ and $I_3$ don't have to be fully computed to simplify the result. First, let's tackle $I_2$. Using the expansion for (1), we get
$$\begin{align*}I_2 & =-\log 4\int\limits_0^{\pi/6}dx\,\sum\limits_{n\geq1}\frac {\cos 2nx}n+\log 2\\ & \color{red}{=-\frac {\pi}3\log^22-\log 4\int\limits_0^{\pi/6}dx\,\sum\limits_{n\geq1}\frac {\cos 2nx}n}\tag4\end{align*}$$
Leave $I_2$ as is, because we can expand $I_3$ courtesy of the square and find out that a portion of $I_3$ cancels out with $I_1+I_2$. Doing the math gives
$$\begin{align*}I_3 & \color{brown}{=\sum\limits_{n\geq1}\sum\limits_{m\geq1}\int\limits_0^{\pi/6}dx\,\frac {\cos 2mx\cos 2nx}{mn}+\log 4\int\limits_0^{\pi/6}dx\,\sum\limits_{n\geq1}\frac {\cos 2nx}n+\frac {\pi}6\log^22}\tag5\end{align*}$$
Immediately, notice how the two infinite series cancel out, leaving us with a much more nicer sum to deal with. Adding (3), (4), and (5) together and canceling out all the terms leaves us with
$$I=\sum\limits_{n\geq1}\sum\limits_{m\geq1}\int\limits_0^{\pi/6}dx\,\frac {\cos 2mx\cos 2nx}{mn}$$
Now, what's left is to show that the nested sum actually equals $\frac {7\pi^3}{216}$.
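Whatever route one takes, the claimed closed form can at least be sanity-checked numerically (a sketch, assuming scipy is available; `quad` copes with the integrable $\ln^2$ singularity at $0$):

```python
from math import log, sqrt, pi
from scipy.integrate import quad

val, err = quad(lambda x: log(x)**2 / sqrt(4 - x**2), 0, 1)
target = 7 * pi**3 / 216
print(val, target)  # both ~ 1.00483
```

The agreement to many digits supports the stated value $\frac{7\pi^3}{216}$.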
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 2
} |
How to show that ${(\ln n)}^{\ln n}=n^{\ln(\ln n)}$ How to show that $${(\ln n)}^{\ln n}=n^{\ln(\ln n)}$$
Attempt:
$y={(\ln n)}^{\ln n}$ then $\ln y=\ln n\ln(\ln n)$ what to do next?
| HINT
We have
$${(\ln n)}^{\ln n}=e^{\ln n\cdot \ln (\ln n)}=(e^{\ln n})^{\ln (\ln n)}$$
and recall that by definition $e^{\ln n}=n$.
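The identity is easy to spot-check numerically (floating point, so compare with a relative tolerance):

```python
from math import log, isclose

for n in (3, 10, 1000, 12345):   # need n > e so that log(log(n)) is defined
    lhs = log(n) ** log(n)       # (ln n)^(ln n)
    rhs = n ** log(log(n))       # n^(ln ln n)
    assert isclose(lhs, rhs, rel_tol=1e-9), (n, lhs, rhs)
print("identity verified numerically")
```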
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2778879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Euler's totient function - prove the product formula Let $n = \prod_{i=1}^k p_{i}^{e_i}$ with $k\in \mathbb{N}$, $p_i\neq p_j \in \mathbb{P}, \quad \forall i\neq j$ and $e_i \in \mathbb{N}^+$. Then for Euler's totient function follows:
$$\phi(n)=\prod_{i=1}^k (p_i-1)p_{i}^{e_i-1}$$
I have to prove this theorem in the 3 following steps:
1) The formula holds for $n=p^{e_i}$.
This step is done by considering that all numbers $k$ with $\gcd(n,k)\neq 1$ have to be multiples of $p$, and there are $p^{e_i-1}$ of them; therefore $\phi(n)=p^{e_i}-p^{e_i-1}$.
2) A residue class is prime modulo $m$, iff it is a unit in the multiplicative semigroup of $\mathbb{Z}_m$.
3) Use the Chinese remainder theorem and (2) to reduce the problem for arbitrary $n$ to the one of the prime power.
My problem is that I don't know how to prove that any unit in $\mathbb{Z}_m$ must be a prime residue class. The other direction is clear to me; I can state it if needed.
I would really appreciate some help!
| Good strategy.
It's not quite true that the residue class must be prime, but instead that it must be coprime to $m$.
For $x$ to be a unit in the multiplicative group $(\mathbb{Z}/m\mathbb{Z})^\times$ is the same as saying there exists $y$ such that $xy \equiv 1 \bmod m$. In this phrasing, the question is to show that such a $y$ exists exactly when $x$ is coprime to $m$.
We show that if there exists such a $y$, then $x$ must be coprime to $m$ first. The congruence $xy \equiv 1 \bmod m$ implies that there is a $z$ such that $xy + mz = 1$. Any common divisor of $x$ and $m$ must divide $1$, and thus $x$ and $m$ are coprime.
For the converse, we must show that if $\gcd(x,m) = 1$ then $xy \equiv 1 \pmod m$ has a solution (for $y$). This is a classical question (perhaps generally called finding modular inverses), and one typically constructively proves this using the Extended Euclidean Algorithm. See this answer for example.
Together, these imply that the size of the multiplicative group of units for a prime power is $\lvert(\mathbb{Z}/p^n\mathbb{Z})^\times \rvert = p^n - p^{n-1}$, completing your middle step.
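The resulting product formula can be cross-checked against a brute-force count of the units (a small sketch, using only the standard library):

```python
from math import gcd

def factorize(n):
    # prime factorization by trial division: {p: e}
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def phi_formula(n):
    result = 1
    for p, e in factorize(n).items():
        result *= (p - 1) * p ** (e - 1)
    return result

def phi_bruteforce(n):
    # count k in 1..n coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

for n in (1, 12, 97, 360, 1024):
    assert phi_formula(n) == phi_bruteforce(n)
print(phi_formula(360))  # -> 96
```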
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
show whether $\frac {xy}{x^2+y^2}$ is differentiable in $0$ or not? (multivariable) Q: $f(x,y)=\frac {xy}{x^2+y^2}$ if $(x,y)\not=(0,0)$, and $0$ if $(x,y)=(0,0)$.
Is $f$ differentiable at $(0,0)$?
Attempt: $$\lim_{(x,y) \to (0,0)} \frac {f(x,y)-f(0,0)}{||(x,y)-(0,0)||} = \lim_{(x,y) \to (0,0)} \frac {\frac{xy}{x^2+y^2}}{\frac {\sqrt {x^2+y^2}}{1}}$$
$$=\lim_{(x,y) \to (0,0)} \frac {xy}{(x^2+y^2)^{3/2}}.$$
Let $x=r\cos(\theta)$ and $y=r\sin(\theta)$. Then,
$$ \frac {xy}{(x^2+y^2)^{3/2}}= \frac {r^2\cos(\theta)\sin(\theta)}{r^{3/2}}=\sqrt r\cos(\theta)\sin(\theta).$$
What should be the next step? If I use polar coordinates, how can I transform $$\lim_{(x,y) \to (0,0)}\rightarrow\lim_{(r,\theta) \to (?,?)}$$??
| Hint:
AM-GM gives
$$\left|\frac{xy}{x^2+y^2}\right| = \frac{|x||y|}{x^2+y^2} \le \frac{\frac{x^2+y^2}2}{x^2+y^2} = \frac12$$
with equality when $x=y$.
Along the line $x=y$ we have $f(x,x)=\frac12$ for every $x\neq 0$, while $f(x,0)=0$, so $\lim_{(x,y)\to (0,0)} f(x,y)$ does not exist and in particular does not equal $f(0,0)=0$. Hence $f$ is not continuous at $(0,0)$, and therefore not differentiable there.
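The failure of continuity shows up immediately if you evaluate $f$ along two different paths into the origin:

```python
def f(x, y):
    return x * y / (x**2 + y**2) if (x, y) != (0, 0) else 0.0

for t in (0.1, 0.01, 0.001):
    print(f(t, t), f(t, 0.0))  # 0.5 along y=x, 0.0 along y=0
```

Two paths, two different limiting values, so no limit exists at the origin.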
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving Brooks' Theorem in Graph Theory Okay, so I'm mainly concerned with this lemma we do beforehand (although a similar, albeit less severe, issue comes up in the actual proof of Brooks' Theorem).
"Let $ G $ be a connected graph with maximum degree $ \Delta $ which has a vertex of degree less than $ \Delta $. Then the chromatic number of $ G $ is at most $ \Delta $."
So the proof goes like this: we pick a vertex $ x $ of degree less than $ \Delta $ and for each vertex we determine its distance to $ x $. Then we look at the subgraph induced by the furthest vertices. This has max degree at most $\Delta - 1 $ since all of its vertices are adjacent to a vertex closer to $ x $. Therefore we can colour it with $ \Delta $ colours (since we can colour a graph of max degree $ \delta $ with $ \delta + 1 $ colours).
Now consider the vertices one step closer to $ x $. Each is adjacent to a vertex closer to $ x $ so we need only $ \Delta $ colours to colour these.
Continue until we get to $x$; then since $x$ has degree at most $\Delta - 1$ we have a spare colour. QED, supposedly.
But how do we know that the colourings agree? For instance, how do we know that when we colour in this way, a vertex of distance $ n - 1 $, where $ n $ is the max distance from $ x $, is not coloured the same way as a vertex of distance $ n $? I'm having trouble even trying to prove that this holds if the max distance is 2, let alone prove it in general.
| When you color a vertex $v$ in layer $k$ then it is adjacent to at most $\Delta-1$ vertices in total in layers $k,k+1$ (and some yet uncolored vertices in layer $k-1$). Therefore $v$ is adjacent to at most $\Delta-1$ vertices already colored and you have one color available to color $v$ greedily.
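The layered greedy argument can be sketched in code: order the vertices by decreasing distance from $x$ and colour greedily; every vertex then has at most $\Delta-1$ already-coloured neighbours when its turn comes. This is only an illustration on one small example graph of my own choosing, not a proof.

```python
from collections import deque

def greedy_layer_coloring(adj, x):
    # BFS distances from x
    dist = {x: 0}
    q = deque([x])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    # colour the farthest vertices first, x last
    order = sorted(adj, key=lambda v: -dist[v])
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(len(adj)) if c not in used)
    return color

# example: a triangle 0-1-2 with a pendant path 2-3-4; Delta = 3, deg(4) = 1
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
col = greedy_layer_coloring(adj, 4)
assert all(col[u] != col[v] for u in adj for v in adj[u])  # proper colouring
assert len(set(col.values())) <= 3                          # at most Delta colours
```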
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Sigmoid functions for a biased random number generator I'm trying to create a random number generator biased toward certain values. To do this I'm using a random number [0,1] as the parameter for a sigmoid curve. I've found the perfect curve to get the results I want,
$c\left( \frac{1}{1+\exp(-ax + b(a-16) + 8)}\right) + (1-c)x$,
where $a = [16,\infty)$ is the tightness of the bias, $b = [0,1]$ is the position of the bias and $c = [0,1]$ is the strength of the bias.
However, I need this as a function of y, as it biases away from the number if I plug in an x value at the moment. Here's an example of how I want to find a random number.
I've tried to inverse this function and have got stuck. When $c=1$, it's a lot simpler and I've got the function,
$-c\frac{ln|x^{-1} - 1|+b(16-a)-8}{a}$.
However, when $c\neq1$, I cannot find the inverse, and simply adding $(1-c)x$ as before doesn't work due to the asymptotes at $x=0$ and $x=1$.
Am I missing something obvious or is there a better way to do what I want? I want this to be as computationally simple as possible! You can play around with the curves in Desmos here.
| I ended up using the function $\frac{1}{a^{|x-m|}}$ where $m$ is the value I'm biased toward and $a = 1 + \frac{b^2}{n}$ where n is the interval of x, and b is the strength of the bias. Using integration, I picked a random area between the min and max $x$ values and solved for $x$.
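The procedure described in this answer is inverse-transform sampling. Below is a hedged numerical sketch with a grid-based CDF inversion and parameter values of my own choosing (m = 5, b = 5, n = 10) — not the answerer's actual code:

```python
import numpy as np

def biased_samples(m, b, n, size, rng):
    a = 1 + b**2 / n                      # tightness derived from bias strength b
    xs = np.linspace(0, n, 2001)
    pdf = a ** (-np.abs(xs - m))          # unnormalized density 1/a^|x-m|
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    u = rng.random(size)
    return np.interp(u, cdf, xs)          # invert the CDF numerically

rng = np.random.default_rng(0)
s = biased_samples(m=5.0, b=5.0, n=10.0, size=10_000, rng=rng)
print(s.min(), s.max(), np.mean(np.abs(s - 5.0) < 1))
```

With these parameters, well over half the samples land within 1 unit of the bias point m = 5.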
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
I'm not getting Wilson's theorem to work? Wilson's theorem states that:
$p$ is a prime if and only if $(p - 1)! \equiv -1 \pmod p$
Obviously 5 is a prime so this should be true:
$(5 - 1)! \equiv -1 \pmod 5$
But when I tried to test it it doesn't work:
$(5 - 1)! = 4*3*2*1 = 24$
I get the results:
24 % 5 = 4
$24 \equiv 4 \pmod 5$
Edit: I'm so stupid
| Do not worry, you are correct.
Note that $$24 \equiv 4 \equiv -1 \pmod 5$$
Remember, $$24-(-1)=24+1 =25=5k$$
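A quick computational check that $p-1$ and $-1$ name the same residue class for several primes:

```python
from math import factorial

for p in (2, 3, 5, 7, 11, 13):
    assert factorial(p - 1) % p == p - 1  # p-1 is the class of -1 mod p
print(factorial(4) % 5)  # -> 4, i.e. -1 (mod 5)
```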
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do I construct a segment $a^2$? The length of a segment is given. How can we construct a segment equal to the square of the given segment?
| Revised attempt:
1) Construct a $\triangle DBC$ with :
$|DB|=1;$ $ \angle BDC =90°$; $|DC|=a$ .
2) Construct a right angle at $C$, with one leg $BC$.
The other leg of this angle intersects $BD$
at $A$, i.e. $\angle BCA =90°$.
3) We have :
$|AD| \cdot 1 = a^2.$
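A coordinate check of this construction (it is the altitude-on-hypotenuse relation $|DC|^2 = |DB|\cdot|DA|$), with a sample length $a$ of my own choosing:

```python
a = 1.7                       # sample segment length (my choice)
# D at the origin, B on the x-axis with |DB| = 1, C with |DC| = a, angle BDC = 90°
D, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, a)
# A lies on line BD (the x-axis); angle BCA = 90° forces CB . CA = 0:
# CB = (1, -a), CA = (x, -a)  =>  x + a^2 = 0  =>  A = (-a^2, 0)
A = (-a * a, 0.0)
dot = (B[0] - C[0]) * (A[0] - C[0]) + (B[1] - C[1]) * (A[1] - C[1])
print(dot, abs(A[0] - D[0]))  # dot = 0 (right angle at C) and |AD| = a^2
```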
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is there a shape that can be wrapped perfectly? Wrapping presents in the real world always involves overlapping paper (due to folds, etc).
Is there any shape that can (theoretically) be wrapped by a rectangular piece of paper without any overlap (the shape and the paper have the same surface area)?
If such a thing exists, I imagine it would have to have angles to allow the paper to wrap to another side. I don't care if the shape is concave or convex.
The shape must have a volume greater than 0
| One solution is a regular tetrahedron. We can even generalize this to tetrahedra constructed from regular ones by pulling two opposing edges apart. The following pictures show 3D models in Blender. The red edges show where we cut the surface apart (called seams), and on the right side we see the unwrapped net of each model (done using UV-unwrapping, usually done for texturing objects). We need to cut one triangle in half in order to get a rectangle (otherwise we'd just get a parallelogram).
We can easily observe that this technique can be used for any side ratio of rectangles.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Linear Algebra - Complete solution for Ax = b Alright, I'm having some trouble understanding the "complete" solution for Ax = b.
For instance, suppose
$$A = \pmatrix{ 1 & 2 & 2 & 2 \\ 2 & 4 & 6 & 8 \\ 3 & 6 & 8 & 10}$$
I can already see that $b = \pmatrix{1 \\ 5 \\ 6}$ makes the system $Ax=b$ solvable, but after elimination we get
$$\pmatrix{ 1 & 2 & 2 & 2 &b_1\\ 0 & 0 & 2 & 4 & b_2 - 2b_1 \\ 0 & 0 & 0 & 0 & b_3 - b_2 - b_1}$$
From there my textbook shows how to find a particular solution by setting all free variables to 0, yielding:
$$ x_1+ 2x_3 = 1$$
$$2x_3 = 3 $$
So that
$$x_\text{particular} = \pmatrix{-2 \\ 0 \\ \frac{3}{2} \\ 0}$$
It then claims that the complete solution to Ax = b is given by
$$x_\text{particular} + x_n$$
where $x_n$ is a "generic vector in the nullspace",and since $$x_n = c_1 * \pmatrix{-2 \\ 1 \\ 0 \\ 0} + c_2 * \pmatrix{2 \\ 0 \\ -2 \\ 1}$$
$$x_\text{complete} = \pmatrix{-2 \\ 0 \\ \frac{3}{2} \\ 0} + c_1 * \pmatrix{-2 \\ 1 \\ 0 \\ 0} + c_2 * \pmatrix{2 \\ 0 \\ -2 \\ 1}$$
What I don't understand is:
*
*what is this "generic vector in the nullspace?
*Do we have to set the free variables to 0 and 1 respectively in order to find the nullspace? If not, how can this solution be considered complete if we're not describing all the possibles values the free variables could've taken?
| Since $m<n$ (fewer equations than unknowns), the system $Ax=b$ has infinitely many or zero solutions depending upon the augmented RREF
$$A = \pmatrix{ 1 & 2 & 2 & 2 &b_1\\ 0 & 0 & 2 & 4 & b_2 - 2b_1 \\ 0 & 0 & 0 & 0 & b_3 - b_2 - b_1 }$$
Notably, if $b_3-b_2-b_1\neq 0$ we have no solution; otherwise the general solution is given by $x_P+x_H$, that is, the sum of
*
*one particular soution to $Ax_P=b$
*the homogeneous solution to $Ax_H=0$
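The decomposition can be verified numerically (a sketch, numpy assumed; note that for the elimination shown, with its zero third row and the condition $b_3-b_2-b_1=0$, the $(3,3)$ entry of $A$ must be $8$):

```python
import numpy as np

A = np.array([[1, 2, 2, 2], [2, 4, 6, 8], [3, 6, 8, 10]], dtype=float)
b = np.array([1, 5, 6], dtype=float)

x_p = np.array([-2, 0, 1.5, 0])   # particular solution (free variables set to 0)
n1 = np.array([-2, 1, 0, 0])      # special solutions spanning the nullspace
n2 = np.array([2, 0, -2, 1])

assert np.allclose(A @ x_p, b)
assert np.allclose(A @ n1, 0) and np.allclose(A @ n2, 0)
# any combination x_p + c1*n1 + c2*n2 also solves Ax = b
c1, c2 = 0.3, -1.2
assert np.allclose(A @ (x_p + c1 * n1 + c2 * n2), b)
```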
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability that the millionth decay occurs within 100.2 seconds?
Radioactive decay of an element occurs according to a Poisson process
with rate $10,000$ per second. What is the approximate probability
that the millionth decay occurs within $100.2$ seconds?
Let $X$ be the number of decays; the expected number of decays within $100.2$ seconds is $\lambda=100.2\cdot10000=1002000.$ Thus $X\sim \text{Poi}(1002000)$ and $\mu=1002000, \ \sigma=\sqrt{1002000}\approx 1001.0.$
How do I formulate "probability that the millionth decay occurs within $100.2$ seconds?"
Is it $P(X\ge 1000000)?$ I don't see how.
| You have a Poisson process with expected value $1,002,000$. The chance that the millionth decay has not happened yet is the sum of the probabilities of exactly $0,1,2,\ldots ,999,999$ decays having happened. I believe you are supposed to use the normal approximation. Based on the figures you quote, a million decays is just about $2\sigma$ low, so you need the chance that a normal random variable is greater than its mean minus $2\sigma$.
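Equivalently, the waiting time of the millionth decay is $\text{Gamma}(10^6,\ \text{rate }10^4)$, so the answer can be computed exactly and compared with the normal approximation (a sketch, scipy assumed):

```python
from scipy.stats import gamma, norm

# T ~ Gamma(shape=1e6, scale=1/10000): mean 100 s, sd 0.1 s
exact = gamma.cdf(100.2, a=1_000_000, scale=1e-4)  # P(millionth decay by 100.2 s)
approx = norm.sf(-2.0)  # a million counts sits ~2 sigma below the mean count
print(exact, approx)    # both ~ 0.977
```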
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2779997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What is the difference between a discrete interval and a continuous interval? I'm looking at my math textbook and it says for a discrete distribution where the range is from $a$ to $b$, $$f(x) = \frac{1}{b-a+1}$$
While for continuous distribution it states that $$f(x) = \frac{1}{b-a}$$
Why is it different?
| The discrete interval from $a$ to $b$ (when both are integers) consists of the integers
$$
a, a+1, \ldots, b-1, b;$$
there are $s = b-a + 1$ of these, so with equal weighting, they each get probability $1/s$.
The continuous interval consists of all real numbers $x$ with $a \le x \le b$. Its length is $b-a$, so the uniform probability density function must be $\frac{1}{b-a}$ in order to have it integrate to $1$.
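Both weightings are exactly what is needed for total probability 1; a quick check with sample endpoints $a=3$, $b=7$ of my own choosing:

```python
a, b = 3, 7
# discrete: b - a + 1 = 5 points, each with probability 1/5
discrete_total = sum(1 / (b - a + 1) for _ in range(a, b + 1))
# continuous: constant density 1/(b-a) integrated over an interval of length b-a
continuous_total = (1 / (b - a)) * (b - a)
print(discrete_total, continuous_total)  # each ~ 1
```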
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\frac{\alpha}{\alpha + \beta}\|u\|^2 + \frac{\beta}{\alpha + \beta}\|v\|^2 > \langle u, v\rangle $ is true? I have the following question:
It's true that $$ \frac{\alpha}{\alpha + \beta}\|u\|^2 + \frac{\beta}{\alpha + \beta} \|v\|^2 > \langle u, v\rangle $$
for all $u,v \in \mathbb R^N $ with $\{u, v\} $
linearly independent and all $\alpha, \beta \in (0, \infty)$?
where $\langle \cdot, \cdot \rangle $ is the usual inner product in $\mathbb R^N$ and $\|\cdot\|$ is the euclidean norm.
In the particular case that $\alpha = \beta = \frac{1}{2}$
is easy. If $\{u, v\} $
linearly independent then by the Hölder inequality $\frac{1}{2}\|u\|^2 + \frac{1}{2}\|v\|^2 > \|u\|\cdot\|v\| \geq |\langle u, v\rangle| \geq \langle u, v\rangle $. but in the general case I do not know how to do.
| Unfortunately that is not true. What you are asking is equivalent to
$$\begin{align}
\frac{\alpha}{\alpha + \beta}\|u\|^2 + \frac{\beta}{\alpha + \beta} \|v\|^2 &> \langle u, v\rangle \\
\alpha\|u\|^2 + \beta \|v\|^2 &> (\alpha + \beta)\langle u, v\rangle \\
\alpha\langle u, u\rangle - (\alpha + \beta)\langle u, v\rangle + \beta \langle v, v\rangle &> 0 \\
\langle \alpha u - \beta v, u - v\rangle &> 0
\end{align}$$
Now since $u$ and $v$ are arbitrary, we can replace $v$ by $-v$ and drop the minus sign, thus obtaining
$$ \langle \alpha u + \beta v, u + v\rangle > 0 $$
To see that the above inequality does not hold, in $\mathbb{R}^2$, take $u = (2, -\frac{1}{2})$, $v = (-1, \frac{1}{2})$, $\alpha = 1$ and $\beta = 3$. Hence $ \alpha u + \beta v = (-1, 1)$ and $u + v = (1,0)$, which clearly does not satisfy the inequality.
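The counterexample is easy to verify numerically. Translated back to the original inequality (undoing the sign change, i.e. negating $v$), it is $u=(2,-\tfrac12)$, $v=(1,-\tfrac12)$, $\alpha=1$, $\beta=3$:

```python
import numpy as np

alpha, beta = 1.0, 3.0
u = np.array([2.0, -0.5])
v = np.array([1.0, -0.5])   # the answer's v, negated, back in the original form

lhs = (alpha * (u @ u) + beta * (v @ v)) / (alpha + beta)
rhs = u @ v
print(lhs, rhs)  # 2.0 vs 2.25: the claimed inequality lhs > rhs fails
assert not lhs > rhs
# and {u, v} is linearly independent:
assert abs(np.linalg.det(np.column_stack([u, v]))) > 1e-12
```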
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is there a function that gives unique values when a unique sequence of numbers is given as input? Consider a random sequence of numbers, like 1, 4, 15, 21, 27, 15... There are no constraints on what numbers may appear in the sequence. Think of it as each element in the sequence is obtained using a random number generator. The question is, do we have a function that will give unique output by performing mathematical operations on this sequence? By unique, I mean when the function is applied on sequence A, it must output a value that's different from the output obtained by applying the same function on any other sequence (or the same sequence but numbers placed in different order) in the world. If we don't have such functions, can you tell me if it is even possible? Do we have anything that gets close?
| Assume each input sequence has finitely many terms, and all terms are nonnegative integers.
Let the function $f$ be given by
$$f\bigl((x_1,...,x_n)\bigr) = p_1^{1+x_1}\cdots p_n^{1+x_n}$$
where $p_k$ is the $k$-th prime number.
Then by the law of unique factorization, the function $f$ has the property you specified.
More generally, if negative integer values are also allowed, then define $f$ by
$$f\bigl((x_1,...,x_n)\bigr) =p_1^{e(x_1)}\cdots p_n^{e(x_n)}$$
where
$$
e(x_k)=
\begin{cases}
1+x_k&\text{if}\;x_k\ge 0\\[4pt]
x_k&\text{if}\;x_k < 0\\[4pt]
\end{cases}
$$
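This is the classic Gödel-numbering trick; a small sketch for nonnegative integer sequences of length at most 10 (the hardcoded prime list is my own shortcut):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # enough for length <= 10

def encode(seq):
    # injective by the fundamental theorem of arithmetic
    result = 1
    for k, x in enumerate(seq):
        result *= PRIMES[k] ** (1 + x)
    return result

print(encode([1, 4]), encode([4, 1]))  # 972 and 288: order matters
assert encode([1, 4]) != encode([4, 1])
assert encode([0]) != encode([0, 0])   # length matters too, since 1 + x >= 1
```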
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Probability with candies My problem is: A jar containing $1000$ candies, $750$ are red and $250$ are yellow. If you randomly select $10$ candies from the jar, record the amount of red and yellow then replace the candies and repeat this process $50$ times how many times do you think you will get $0$ red candies, $1$ red candy, $2$ red candies,..., and $10$ red candies?
I thought I was doing this right, but I seem to have run into an issue that makes me think I am doing it wrong. If I am calculating the number of times we draw 10 red candies, I use this process, and a similar one for other amounts of red candies: $(750!*990!)/(740!*1000!)$ gives the probability of drawing 10 red candies, and $(750!*250!*990!)/(741!*249!*1000!)$ the chance of getting 9 red candies, and so on. Then I would take that probability and multiply it by 50 to get the number of times I would draw 10 red candies in my 50 trials, and so on for the other cases, but those numbers are far too small. When I sum all of those percentages I calculated, I get a number far less than $100\%$, which means I cannot use those percentages to find out what number of trials out of 50 would contain 10 red candies. Where am I going wrong? Thank you!
| Your formula for all $10$ candies being red is correct. However, the formula you give for exactly $9$ red candies is not correct. What you have written actually calculates the probability that the first $9$ candies are red and the last is yellow - it can equivalently be written as
$$\frac{750}{1000}.\frac{749}{999}.\frac{748}{998}.\frac{747}{997}.\frac{746}{996}.\frac{745}{995}.\frac{744}{994}.\frac{743}{993}.\frac{742}{992}.\frac{250}{991}.$$
Since any of the $10$ candies could be the yellow one, you need to multiply this probability by $10$. In general, for $r$ red candies, you will need to multiply by $\binom{10}r=\frac{10!}{r!(10-r)!}$.
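This is the hypergeometric distribution; once the binomial-coefficient factors are included, the probabilities sum to 1, and multiplying by 50 gives expected counts. A sketch using scipy's parametrization (M = population size, n = number of red candies, N = draws):

```python
from scipy.stats import hypergeom

# 1000 candies total, 750 red, 10 drawn without replacement
rv = hypergeom(M=1000, n=750, N=10)
probs = [rv.pmf(r) for r in range(11)]
print(sum(probs))                  # ~ 1.0, as a distribution should
expected_counts = [50 * p for p in probs]
print(expected_counts[10])         # expected trials (out of 50) with all 10 red
```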
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
matrices $A$ and $B$ such that $AB = -BA$? I was trying to find matrices non-singular $A$ and $B$ such that $AB = -BA$.I tried taking $A$ and $B$ to be general matrices and started with an order of the matrix as $2$ but I go into a bit of lengthy calculation.
This made me think: while it is intuitive for me to calculate the inverse of simple $2 \times 2$ or $3 \times 3$ matrices, is it similarly intuitive to find, say, matrices $A$ such that $A^2 = 0$, or pairs with $AB = BA$, or similar types of questions?
Can such interesting generalizations and results be worked out?
EDIT -
From the answers below and the comments, we see that taking determinants simplifies the problem a bit: it can work only for even-order square matrices. But a hint toward guessing such matrices would still help.
| For $2 \times 2$ matrices you can use $$A=
\begin{pmatrix}
a & 0 \\
0 & -a \\
\end{pmatrix},$$
with $a\not =0$ and
$$B=
\begin{pmatrix}
0 & x \\
y & 0 \\
\end{pmatrix}.$$
Then $AB=-BA$ for arbitrary $x,y$ (take $x,y\neq0$ so that $B$ is non-singular).
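A quick numpy verification with sample values $a=2$, $x=3$, $y=5$ (nonzero, so both matrices are non-singular):

```python
import numpy as np

a, x, y = 2.0, 3.0, 5.0
A = np.array([[a, 0.0], [0.0, -a]])
B = np.array([[0.0, x], [y, 0.0]])

assert np.allclose(A @ B, -(B @ A))                      # AB = -BA
assert np.linalg.det(A) != 0 and np.linalg.det(B) != 0   # both non-singular
```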
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Anti symmetric matrix and rotations Studying the lagrangian formulation of Noether's theorem and came upon how the invariance under rotations gives conservation of angular momentum.
Whilst setting up the problem the notes state that if a potential only depends on the distance between 2 points, namely $V(|r_i-r_j|)$, then you can apply the transformation:
$$\textbf{r}\rightarrow \textbf{r}+\epsilon T\textbf{r}$$
where $\epsilon$ is a small variation, $\textbf{r}$ is just a vector and $T$ is a rotation matrix. I'm confused about the fact that the notes state that $T$ is an anti-symmetric matrix; I thought rotation matrices were orthogonal.
| If you consider the set of $n$-by-$n$ rotation matrices $SO(n)$ as a Lie group, then the corresponding Lie algebra is the set of antisymmetric (skew-symmetric) $n$-by-$n$ matrices. I.e., in the limit $\epsilon \to 0$, any rotation matrix $U$ near the identity is equal to $I + \epsilon T$ up to first order, for some antisymmetric $T$. This is known as an "infinitesimal rotation". See this Wiki article for more details and references:
https://en.wikipedia.org/wiki/Skew-symmetric_matrix#Infinitesimal_rotations
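The relationship can be checked numerically: the matrix exponential of an antisymmetric $T$ is an orthogonal matrix with determinant 1, and $I + \epsilon T$ is orthogonal up to $O(\epsilon^2)$ (a sketch, scipy assumed):

```python
import numpy as np
from scipy.linalg import expm

T = np.array([[0.0, -1.0], [1.0, 0.0]])  # antisymmetric generator of 2D rotations
R = expm(T)                              # exact rotation (by 1 radian)

assert np.allclose(R.T @ R, np.eye(2))   # orthogonal
assert np.isclose(np.linalg.det(R), 1.0) # proper rotation

eps = 1e-4
U = np.eye(2) + eps * T                  # infinitesimal rotation
# U.T @ U = I + eps*(T + T.T) + eps^2*(T.T @ T) = I + O(eps^2), since T + T.T = 0
assert np.allclose(U.T @ U, np.eye(2), atol=1e-7)
```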
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to calculate the geometric moments of a log-normal distribution? If I only know geometric mean and geometric standard deviation of a log-normal distribution how can I calculate the $n$-th moment of the distribution?
In the Wikipedia article I can only see a relationship for the $n$-th moment if the (arithmetic) mean and standard deviation are known:
$$
E\left[X^n\right]=e^{n\mu+\frac{1}{2}n^2\sigma^2}
$$
If only the geometric mean and geometric standard deviation are known, is there still a way to calculate the moments?
The geometric mean and standard deviation are defined by:
$$
GM = e^{\mu_l}\quad\mathrm{where}\quad\mu_l = \frac{\sum_{i=1}^N\ln(x_i)}{N}
$$
and
$$
GSD = e^{\sigma_l}\quad\mathrm{where}\quad\sigma_l=\sqrt{\frac{\sum_{i=1}^N\left[\ln(x_i)-\mu_l\right]^2}{N}}
$$
| For simplicity I will call $\mathcal{N}_k$ the $k$-th moment of the normal distribution with parameters $\mu$ and $\sigma$, so for example $\mathcal{N}_1 = \mu$, $\mathcal{N}_2 = \mu^2 + \sigma^2$, $\cdots$
Now if $X$ follows a lognormal distribution, then
\begin{eqnarray}
\mathbb{E}[\ln^k X] &=& \int\frac{{\rm d}x~}{x}\frac{\ln^k x}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(\ln x - \mu)^2}{2\sigma^2}\right] \\
&\stackrel{y=\ln x}{=}&\int{\rm d}y ~\frac{y^k}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(y-\mu)^2}{2\sigma^2} \right] \\
&=& \mathcal{N}_k \tag{1}
\end{eqnarray}
In you case, you have then
$$
\mu_l = \ln{\rm GM} = \mathbb{E}[\ln X] = \mathcal{N}_1 = \mu
$$
and
$$
\sigma_l^2 = \mathbb{E}[(\ln X - \mu_l)^2] = (\cdots) = \sigma^2
$$
$\mu_l$ and $\sigma_l$ thus allow you to estimate $\mu$ and $\sigma$
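So given GM and GSD, one sets $\mu=\ln\mathrm{GM}$, $\sigma=\ln\mathrm{GSD}$ and plugs into the moment formula. A Monte-Carlo sanity check (a sketch, numpy assumed, with GM/GSD values of my own choosing):

```python
import numpy as np

GM, GSD = np.exp(0.1), np.exp(0.25)   # example geometric mean / geometric sd
mu, sigma = np.log(GM), np.log(GSD)   # recovers mu = 0.1, sigma = 0.25

def moment(n):
    return np.exp(n * mu + 0.5 * n**2 * sigma**2)  # E[X^n]

rng = np.random.default_rng(1)
x = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
print(moment(2), np.mean(x**2))  # formula vs. sample moment, agree to ~1%
```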
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding shape preserving positioning of a 'piston' I have a chain of rods that are setup in a default, ideal orientation. Points are labelled with a 'p' prefix, their lengths with an r:
I then stretch them to reach a desired end location, but they lose their shape:
I'd like to see if there's a better 'result' that tries to respect the shape of the original setup, with the following constraints:
*
*r1, r2 and r3 can't change length
*p1 and p4 are fixed
My intuition here says that I could build an equation for p2 based on the circle described by p1 and r1, and do the same thing for p4 and r3. I have an equation for p3 and p2 (satisfying length), but I'm unsure how to use the math to describe "maintaining the shape".
| As in your bottom drawing you know p2 is on the left circle and p3 is on the right. Put p1 at the origin and measure the angle of the p1p2 segment by $\theta$ with $0$ to the right and increasing counterclockwise. The position of p2 is then $(r1\cos \theta, r1\sin \theta)$ You can put p4 on the positive $x$ axis. Also define $\phi$ as the angle of p3p4 measured the same way. The position of p3 is $(p4+r3\cos \phi,r3\sin \phi)$. For a given $\theta$ there will be $0,1,$ or $2$ points that p3 can be to maintain the length r2. Probably the best measure of similarity is to compare the two angles of the linkage from start to finish. You might add the absolute differences, but you have to watch out for wrapping around.
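The "for a given $\theta$ there will be $0,1,$ or $2$ points" step is a circle–circle intersection. Here is a hedged sketch of it; the function name, argument convention, and degenerate-case handling are my own assumptions:

```python
import math

def p3_candidates(theta, p4x, r1, r2, r3):
    """For a given angle theta of segment p1p2 (p1 at the origin, p4 at
    (p4x, 0)), return the 0, 1 or 2 positions of p3 with |p2 p3| = r2
    and |p3 p4| = r3 -- the circle-circle intersection the answer describes."""
    p2 = (r1 * math.cos(theta), r1 * math.sin(theta))
    dx, dy = p4x - p2[0], -p2[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r2 + r3 or d < abs(r2 - r3):
        return []                       # the two circles do not meet
    a = (d*d + r2*r2 - r3*r3) / (2*d)   # distance from p2 to the chord midpoint
    h = math.sqrt(max(r2*r2 - a*a, 0.0))
    mx, my = p2[0] + a*dx/d, p2[1] + a*dy/d
    pts = [(mx + h*dy/d, my - h*dx/d), (mx - h*dy/d, my + h*dx/d)]
    return pts if h > 0 else pts[:1]
```

One could then sweep $\theta$, collect the feasible configurations, and score each against the original linkage angles as suggested.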
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2780940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that a relation R is an equivalence relation While I fully understand what it means to be an equivalence relation, I have a difficulty establishing proof that $R$ is an equivalence relation without just listing all pairs that $R$ creates and testing them.
However this method is greatly time consuming and is not possible during exams as we usually have only 2 minutes (exam is 120 min long and is out of 120 marks, the question below is worth 2 marks only) to show that R is an equivalence relation.
For the following relation, can someone show me a fast method for proving that $R$ is an equivalence relation?
Let $\mathcal{P}(S)$ be the power set of $S =\{0,1,2,...,9\}$ and define $$R = \{ (A,B) \in \mathcal{P}(S) \times \mathcal{P}(S) : A=S\backslash B \text{ or } A=B\}.$$
I know we can use $A=B$ from the relation definition to assert that it is reflexive, but what about symmetry and transitivity?
If I prove that $xRy$ is the same as $yRx$ for one example, that doesn't prove that all $A$s and $B$s have symmetric relations as there might be a contradiction somewhere, or is just proving one example symmetric enough to assert that all $A$s and $B$s have a symmetric relation?
| $A$ and $B$ are related iff they are equal or complements.
Reflexivity: For every subset $A$ of $S$ we have $A=A$
Symmetry: If $A$ is related to $B$ then either they are equal or complements, so $B$ is also related to $A$
Transitivity: If $A$ is related to $B$ and $B$ is related to $C$, then either $B=A$ or $B$ is the complement of $A$, and either $C=B$ or $C$ is the complement of $B$.
In each of the four cases we get $A=C$ or $C$ is the complement of $A$.
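The three checks can also be confirmed by brute force; the ground set below is shrunk from $\{0,\dots,9\}$ purely to keep the triple loop small:

```python
from itertools import combinations

S = frozenset(range(3))  # small stand-in for {0, ..., 9}
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(sorted(S), r)]

def related(A, B):
    return A == B or A == S - B

reflexive = all(related(A, A) for A in subsets)
symmetric = all(related(B, A) for A in subsets for B in subsets
                if related(A, B))
transitive = all(related(A, C) for A in subsets for B in subsets
                 for C in subsets if related(A, B) and related(B, C))
print(reflexive, symmetric, transitive)  # True True True
```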
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Maximal Ideals in $K[X,Y]$ So, given a field $K$ and a polynomial $f(x,y)$ in the ring of polynomials $K[X,Y]$, I am trying to understand why the following statement is true:
If $f(a,b) = 0$ for $(a,b) \in K \times K$, then $(f) \subset (x-a,y-b)$. I know that roots of a polynomial $g(x)$ give linear factors, but that's not true for polynomials in more than one indeterminate, is it?
| A useful trick is to think about the isomorphism $K[x,y] \cong K[x][y]$, i.e.
a polynomial in $x$ and $y$ can be thought of as a polynomial in $y$ with coefficients that are polynomials in $x$.
For a polynomial $g(x)$, we can use the division algorithm to write $g(x) = q(x)(x-a) + r$, where $r$ is a constant. This shows $g(a) = 0$ iff $r=0$ iff $x-a$ divides $g$.
For a polynomial $f(x,y)$ we can do the same thing, using our trick. Thinking of $f(x,y)$ as a polynomial in $y$ with coefficients in $x$, the $r$ above is now a constant in $K[x][y]$, i.e. a polynomial in $x$. Thus we can write $f(x,y) = q(x,y)(y-b) + r(x)$. Now dividing $r(x)$ by $(x-a)$ we have that $f(x,y) = q(x,y)(y-b) + s(x)(x-a) + t$, with $t \in K$. We see that $f(a,b) = 0$ iff $t=0$.
So we have actually proved the more precise statement that $$(f) \subset (x-a)K[x] + (y-b)K[x,y]$$
Indeed this argument generalizes perfectly well to any amount of variables and any commutative ring with identity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Calculating Residues using L'Hopital? I am given that the complex function $$f(z)=\frac{(e^{z-1}-1)(\cos(z)-1)}{z^3(z-1)^2}$$ has 2 simple poles, one at $z=0$ and another at $z=1$, and asked to calculate the Residues of the function at the singularities. I know that the residue of a pole $z_0$ of $f(z)$ with order $n$ is given by the formula $\frac{1}{(n-1)!}\lim_{z\rightarrow z_0}(z-z_0)^nf(z)$ and that sometimes using L'Hopital's rule is necessary to calculate the values, however, with the poles in this equation, using L'Hopital's rule seems to make it more difficult.
So far, I've done the following:
$$\text{Res}(f,0)=\lim_{z\rightarrow 0}zf(z)=\lim_{z\rightarrow 0}\frac{(e^{z-1}-1)(\cos(z)-1)}{z^2(z-1)^2}=\lim_{z\rightarrow 0}\frac{(e^{z-1}-1)}{(z-1)^2}\cdot \lim_{z\rightarrow 0}\frac{\cos(z)-1}{z^2}\\=(e^{-1}-1)\lim_{z\rightarrow 0}\frac{\cos(z)-1}{z^2}.$$
From here, I'm not sure how to continue. I checked the answer according to the mark scheme and from this step, the marker jumps to $\text{Res}(f,0)=(e^{-1}-1)\cdot(\frac{-1}{2})$. I can't see where the $\frac{-1}{2}$ has come from.
This happens in a similar way with the residue at $z=1$.
$$\text{Res}(f,1)=\lim_{z\rightarrow 1}(z-1)f(z)=\lim_{z\rightarrow 1}\frac{(e^{z-1}-1)(\cos(z)-1)}{z^3(z-1)}=\lim_{z\rightarrow 1}\frac{e^{z-1}-1}{z-1}\cdot\lim_{z\rightarrow 1}\frac{\cos(z)-1}{z^3}\\=(\cos(1) -1)\cdot\lim_{z\rightarrow 1}\frac{e^{z-1}-1}{z-1}.$$
Again, the marker jumps from this step to $Res(f,1)=(\cos(1)-1))\cdot 1$.
If anyone can help me see how to go from my working out to the answer, that would be much appreciated.
Thank you.
| Hint. Once we note that both poles are of order $1$ (simple poles), your computations are correct. What you need now is that
$$\cos(z)=1-\frac{z^2}{2}+o(z^2)\quad\mbox{and}\quad e^{z-1}=1+(z-1)+o(z-1).$$
Or equivalently, by L'Hopital's rule,
$$\lim_{z\rightarrow 0}\frac{\cos(z)-1}{z^2}=\lim_{z\rightarrow 0}\frac{-\sin(z)}{2z}=\lim_{z\rightarrow 0}\frac{-\cos(z)}{2}=-\frac{1}{2}
\quad\mbox{and}\quad\lim_{z\rightarrow 1}\frac{e^{z-1}-1}{z-1}=\lim_{z\rightarrow 1}\frac{e^{z-1}}{1}=1.$$
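A numerical spot check of both limits, evaluating $(z-z_0)f(z)$ at a real point close to each pole; the step size $10^{-3}$ and the tolerance are arbitrary choices of mine:

```python
import math

def f_times(z, pole):
    """(z - pole) * f(z) for real z, with f as in the question."""
    num = (math.exp(z - 1) - 1) * (math.cos(z) - 1)
    return (z - pole) * num / (z**3 * (z - 1)**2)

res0 = f_times(1e-3, 0.0)        # approximates Res(f, 0)
res1 = f_times(1 + 1e-3, 1.0)    # approximates Res(f, 1)
print(res0, (math.exp(-1) - 1) * (-1 / 2))  # both about 0.316
print(res1, math.cos(1) - 1)                # both about -0.4597
```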
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $A^*x_n \to y$, there exists a sequence such that $A^*Ay_n \to y$. I'm struggling with a problem from Young's Introduction to Hilbert Space (7.30 to be more specific)
Let $\mathbb{H}$ be a Hilbert space, $A\in B(\mathbb{H})$ (ie a bounded operator) and $(x_n)_{n=1}^{\infty}$ a sequence in $\mathbb{H}$.
Prove that if $A^*x_n\rightarrow y$, there exists a sequence $(y_n)_{n=1}^{\infty}$ such that $A^*Ay_n \rightarrow y$.
I'm 99% sure this has something to do with the relationship between the kernel and the image of $A$ and $A^*$ ($\ker A^*$ being the orthogonal of the image of $A$, etc), but I haven't been able to make much progress. Could I get a hint on how to proceed?
| Recall that $\ker A = \ker A^*A$. Namely, clearly $\ker A \subseteq \ker A^*A$. Conversely
$$x \in \ker A^*A \implies A^*Ax = 0 \implies 0 = \langle A^*Ax, x\rangle = \langle Ax, Ax\rangle \implies Ax = 0\implies x \in \ker A$$
Since $(\ker T)^\perp = \overline{\operatorname{Im} T^*}$, taking orthogonal complements in the above relation gives
$$\overline{\operatorname{Im}A^*} = \overline{\operatorname{Im}A^*A}$$
Therefore, $A^*x_n \to y$ implies that $y \in \overline{\operatorname{Im}A^*}$, so $y \in \overline{\operatorname{Im}A^*A}$ implying that there exists a sequence $(y_n)$ such that $A^*Ay_n \to y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Factorial of odds I am trying to find a simple factorial of all the preceding odd numbers. If $9$ were to be picked the equation would read $9\times 7\times 5\times 3\times 1$ (only odd numbers can be picked). Would the following fraction work?
$$\dfrac{x!}{2^{\left(\frac{x-1}{2}\right)}\left(\frac{x-1}{2}\right)!}$$
| Hint:
\begin{align}
9\cdot7\cdot5\cdot3\cdot1&= \frac{9\cdot8\cdot7\cdot6 \cdot5\cdot4\cdot3\cdot2\cdot1}{8\cdot6\cdot4\cdot2}\\&= \frac{9\cdot8\cdot7\cdot6 \cdot5\cdot4\cdot3\cdot2\cdot1}{2^4(4\cdot3\cdot2\cdot1)}\end{align}
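The closed form in the question can be checked directly; the function name below is my own:

```python
import math

def odd_factorial(x):
    """x * (x-2) * ... * 3 * 1 for odd x, via x! / (2^m * m!) with m = (x-1)/2."""
    m = (x - 1) // 2
    return math.factorial(x) // (2**m * math.factorial(m))

print(odd_factorial(9))  # 945 = 9*7*5*3*1
```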
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
What should I use in order to get the required function $g \in L^1$? Let $f \in L^1$ be such that $f$ is not equivalent to any bounded function. Prove there exists a function $g \in L^1$ such that $fg \notin L^1$.
I know that $m( \{x:|f(x)|>M\} )>0$, $\forall M>0$, but I can't see to come up with anything else useful to use for this problem in order to get the required function.
Thank you for any hints as to how may I go about approaching this problem.
| One can give a simple constructive proof of existence of such $g$.
Assume we have $f\in L^1(0,1)$ which is not $L^\infty$ (we fix $(0,1)$ for the sake of clarity, this does not impose any restriction on the construction below). For each $n\in \mathbb{N}$ define
$$
E_n = \{ x \in (0,1): n\leq |f(x)| \}.
$$
Since $f\notin L^\infty$, then $\mu(E_n) >0$ for all $n\in \mathbb{N}$.
The aim now is to construct a function $g\in L^1$ such that $fg \notin L^1$.
To this end, set
$$
(1) \qquad g = \sum_{n=1}^\infty \frac 1n a_n \chi_{E_n},
$$
where $a_n>0$ is a sequence of positive numbers to be fixed in a moment.
We thus have
$$
(2) \qquad \int|fg| d\mu = \sum_{n=1}^\infty \frac{a_n}{n} \int_{E_n}|f| d\mu \geq \sum_{n=1}^\infty \frac{a_n}{n} n \mu(E_n) = \sum_{n=1}^\infty a_n\mu(E_n).
$$
Hence, to complete the construction we need to fix $a_n$ so that
(a) $\sum_{n=1}^\infty \frac{1}{n} a_n \mu (E_n) < \infty$, which is the condition for $g\in L^1$ (see $(1)$ )
(b) $\sum_{n=1}^\infty a_n \mu (E_n) = \infty$, which amounts to $fg \notin L^1$
(see $(2)$ ).
Thanks to $f\notin L^\infty$, we have $\mu(E_n) >0 $ for all $n$ and hence
setting $a_n = \frac{1}{\mu(E_n)} \frac{1}{n^{1/2}}$ we get both (a) and (b) and hence complete the construction of $g$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Proof about relatively prime numbers.
Let $a,m,n, \in \mathbb{N}$. I want to show that if $a$ and $mn$ are relatively prime, then $a$ and $m$ are relatively prime.
To start us off, To say $a$ and $mn$ are relatively prime means that gcd($a,mn) = 1.$ I've tried using Bezout's Identity, but have not gotten anywhere. Also, can we assume that $a$ and $n$ are relatively prime?
| Try the contrapositive: If $a$ and $m$ are not relatively prime, then $d=\gcd(a,m)>1$ and clearly $d$ divides both $a$ and $mn$, so $\gcd(a,mn)\ge d > 1$, and thus $a$ and $mn$ are not relatively prime.
Since Not B implies Not A is logically equivalent to A implies B, you are done.
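A small exhaustive check of the contrapositive's conclusion; the range bound is an arbitrary choice:

```python
from itertools import product
from math import gcd

# Exhaustive check for small values: gcd(a, m*n) == 1 forces gcd(a, m) == 1,
# since any common divisor of a and m also divides m*n.
ok = all(gcd(a, m) == 1
         for a, m, n in product(range(1, 25), repeat=3)
         if gcd(a, m * n) == 1)
print(ok)  # True
```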
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Under what conditions can the exponent product rule $a^{nm}=(a^n)^m$ be used. Under what conditions can you use the exponent product rule $a^{nm}=(a^n)^{^m}$?
For example $i=\sqrt{-1}=(-1)^{1/2}=(-1)^{2\times 1/4}=((-1)^2)^{1/4}=(1)^{1/4}=1$
What's going wrong here?
| What's wrong in your string of equalities is that:
*
*$i=\sqrt{-1}$ is wrong since $i$ is one of the square roots of $-1$ (the other one being $-i$, of course);
*$(-1)^{2\times(1/4)}=\bigl((-1)^2\bigr)^{1/4}$ is wrong because $(-1)^{2\times(1/4)}$ can be $i$ or $-i$, whereas $\bigl((-1)^2\bigr)^{1/4}$ can be one of four different numbers ($\pm1$ and $\pm i$);
*$1^{1/4}=1$ is wrong because $1^{1/4}$ can be any fourth root of $1$: again, $\pm1$ and $\pm i$.
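A quick numerical illustration that $1^{1/4}$ has four candidate values, so the chain of equalities cannot single out $1$:

```python
import cmath

# The four complex fourth roots of 1 are exp(2*pi*i*k/4) = 1, i, -1, -i.
roots = [cmath.exp(2j * cmath.pi * k / 4) for k in range(4)]
print(all(abs(w**4 - 1) < 1e-12 for w in roots))  # True
# In particular i**4 == 1, so i is just as good a fourth root of 1 as 1 is.
print(abs(1j**4 - 1) < 1e-12)  # True
```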
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Galois group of $\mathbb Q(\sqrt{2+\sqrt{2}})$ I am interested in the elements of the Galois group of $\mathbb Q(\sqrt{2+\sqrt{2}})/\mathbb Q$.
Let $\alpha:=\sqrt{2+\sqrt{2}}$, then the minimal polynomial $m_{\alpha,\mathbb Q}(X)=X^4-4X^2+2$ has roots $$\pm\sqrt{2+\sqrt{2}}=\pm\alpha\\\pm\sqrt{2-\sqrt{2}}=\pm\beta$$ where $\beta=\frac{\alpha^2-2}{\alpha}\in\mathbb Q(\alpha)$ so the field extension is normal and we can extend the identity $id:\mathbb Q\to\mathbb Q$ to an automorphism $\phi:\mathbb Q(\alpha)\to\mathbb Q(\alpha)$ by permuting the $4$ roots:$$\phi_\alpha=\begin{cases}\alpha\mapsto\alpha\\\beta\mapsto\beta\end{cases}\\\phi_\beta=\begin{cases}\alpha\mapsto\beta\\\beta\mapsto-\alpha\end{cases}\\\phi_{-\alpha}=\begin{cases}\alpha\mapsto-\alpha\\\beta\mapsto-\beta\end{cases}\\ \phi_{-\beta}=\begin{cases}\alpha\mapsto-\beta\\\beta\mapsto\alpha\end{cases} $$ as the images of $\beta$ are determined by the images of $\alpha$ already. In total we have that $\phi_\alpha$ acts as the identity on $\mathbb Q(\alpha)$ and $$\text{Gal}(\mathbb Q(\sqrt{2+\sqrt{2}})/\mathbb Q)=\langle\phi_\beta\rangle=\langle\phi_{-\beta}\rangle\cong\mathbb Z_4$$.
| Everything looks correct. I would add an argument that the polynomial you got is really irreducible (Eisenstein, for example), and some calculation to support the claim that, for example, $\alpha\mapsto \beta \implies \beta\mapsto -\alpha$, but I'm guessing you've already done that.
Let me just write this one example, in case you weren't sure about it. Let's say we have $\alpha\mapsto \beta$. We know that automorphisms act as permutations on the set of roots of the minimal polynomial. We also know that the image of $\beta$ is determined by the action on $\alpha$, so it is enough to check that $\beta\mapsto -\alpha$ is indeed a permutation consistent with $\beta = \frac{\alpha^2-2}\alpha$. Maybe it involves some case checking, but once you've found the correct thing, you immediately know it's the only possibility.
However, this is not really a viable strategy for bigger extensions. What I would do is write $\beta$ as a polynomial in $\alpha$. We have
\begin{align}\alpha^4-4\alpha^2 = -2 &\implies \alpha(\alpha^3 - 4\alpha) = -2\\
& \implies \frac 1\alpha = \frac{4\alpha-\alpha^3}{2} \\
&\implies \beta = \alpha - \frac 2\alpha = \alpha - (4\alpha-\alpha^3) = \alpha^3-3\alpha\end{align}
which can simplify your calculations for automorphisms.
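Both identities can be sanity-checked numerically:

```python
import math

alpha = math.sqrt(2 + math.sqrt(2))
beta = math.sqrt(2 - math.sqrt(2))

# alpha is a root of the minimal polynomial x^4 - 4x^2 + 2
print(abs(alpha**4 - 4*alpha**2 + 2) < 1e-12)    # True
# the other positive root satisfies beta = alpha^3 - 3*alpha
print(abs(alpha**3 - 3*alpha - beta) < 1e-12)    # True
```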
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2781979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
When Higher Dimensions Help I'm interested in examples of situations that are easier in higher dimensions. To give a flavor of what I am looking for, here are two of my favorites:
(a) In dimensions 2 and higher, one can characterize the standard normal distribution (up to a constant multiple) as the spherically symmetric distribution with independent marginals. Obviously, such a characterization fails miserably in one dimension.
(b) In dimension 3 and higher, you can prove Desargues' Theorem using the incidence axioms. No such proof works in two dimensions (and there are non-Desarguesian planes).
What are some other nice results where having at least $n$ dimensions allows one to prove things or characterize things in ways that are not possible in fewer than $n$ dimensions?
| The Poincaré conjecture is much easier to prove for its generalization to higher dimensions. Actually, the first proof was for dimension $n\ge 5$ by Smale in $1960$.
Michael Freedman solved the case $n = 4$ in 1982 and received a Fields Medal in 1986.
Grigori Perelman solved case $n = 3$ in 2003. This was still possible to prove, but barely so.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there a correlation between the degree of an extension and it being an algebraic extension? $\Bbb R/ \Bbb Q$ has degree $\infty$ and is not algebraic
$\Bbb C/ \Bbb R$ has degree 2 and is algebraic
Is the degree of $\Bbb Q(x)$ over $\Bbb Q$ infinity or $1$?
The degree of an extension over a finite field is a positive integer, (if finite degree meant algebraic) does that mean $\Bbb F_{p^n}$ is algebraic over $\Bbb F_p$ for any $n\in \Bbb N$?
Now if an element $\alpha$ is algebraic over a field $F$ what can we say about $F(\alpha)$? The degree of that extension is certainly going to be finite (equal to the degree of the minimal polynomial $m_{\alpha,F}$). But does every element in $F(\alpha)$ have a corresponding polynomial in $F[x]$ that it is a root of?
If $\alpha$ was transcendental then the degree would be infinite and of course $F(\alpha)$ would not be algebraic since it would contain $\alpha$ itself...
Am I getting this right?
| Every element of $F(\alpha)$ would be algebraic over $F$.
To see this, suppose that $[F(\alpha):F]=n$ and suppose $a \in F(\alpha)$. Then, for each $i$, $a^i=a_{i,0}+a_{i,1}\alpha+\cdots+a_{i,n-1}\alpha^{n-1}$ for some $a_{i,0},\dots,a_{i,n-1} \in F$. By linear algebra, the $n+1$ elements $a^0,\dots,a^n$ are linearly dependent (when $F(\alpha)$ is thought of as an $n$-dimensional vector space over $F$). This means that there exist $c_0,\dots,c_n \in F$, not all zero, such that $c_0+c_1a+\cdots+c_na^n=0$.
I have found that a lot of results about field extensions (and about Galois theory) are found by using linear algebra like this.
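A concrete instance of this linear-algebra argument, taking $F=\mathbb Q$ and $\alpha=\sqrt 2$; representing elements exactly as pairs $(p,q)\leftrightarrow p+q\sqrt 2$ is my own illustration, not part of the answer:

```python
# Elements of Q(sqrt(2)) as pairs (p, q) standing for p + q*sqrt(2).
def mul(u, v):
    return (u[0]*v[0] + 2*u[1]*v[1], u[0]*v[1] + u[1]*v[0])

a = (1, 1)      # a = 1 + sqrt(2), an element of F(alpha) with alpha = sqrt(2)
a2 = mul(a, a)  # a^2 = 3 + 2*sqrt(2)

# 1, a, a^2 are linearly dependent over Q: here a^2 - 2a - 1 = 0,
# so a is a root of x^2 - 2x - 1 and hence algebraic over Q.
print((a2[0] - 2*a[0] - 1, a2[1] - 2*a[1]))  # (0, 0)
```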
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the $10^5\pmod{35}$? How to evaluate $10^5 \pmod {35}$ ?
I tried this $a=(10^2\cdot 10^3)\pmod{35}$
then again $a$ mod $35$. This is very lengthy; please tell me a shorter way.
| $$
10^2 \equiv 30 \left[35\right] \Rightarrow 10^3 \equiv 300 \left[35\right]\equiv 20 \left[35\right]
$$
So
$$
10^4 \equiv 200 \left[35\right] \equiv 25 \left[35\right] \Rightarrow 10^5 \equiv 250 \left[35\right]
$$
So
$$
10^5 \equiv 5 \left[35\right]
$$
( and in fact $ 10^5=2857*35 +5$ )
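For reference, Python's built-in modular exponentiation reproduces the hand computation in one call:

```python
print(pow(10, 5, 35))     # 5
print(divmod(10**5, 35))  # (2857, 5), matching 10^5 = 2857*35 + 5
```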
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Invertible matrix of inner product values Let $\{v_1,v_2,\ldots,v_k\}$ be a basis in inner product space $V.$
I need to prove that the matrix
$$A=
\begin{pmatrix}
(v_1,v_1) & (v_1,v_2) & \cdots &(v_1,v_k) \\
\vdots & \vdots & \ddots&\vdots\\
(v_k,v_1) & (v_k,v_2) & \cdots&(v_k,v_k)
\end{pmatrix}
$$
is invertible.
Any hints?
| Hint 1
Take a linear combination of the rows which is zero.
Hint 2
So there are coefficients $a_{i}$ such that $\sum_{i=1}^{k} a_{i} (v_{i}, v_{j}) = 0$ for all $j$.
Hint 3
Your aim is to show that all $a_{i}$ are zero.
Hint 4
So for the vector $v = \sum_{i=1}^{k} a_{i} v_{i}$ you have $(v, v_{j}) = 0$ for all $j$.
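Following the hints for one concrete basis of $\mathbb R^3$ with the dot product; the basis is an arbitrary choice of mine:

```python
# Gram matrix of the basis {(1,0,0), (1,1,0), (1,1,1)} of R^3 (dot product).
v = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
G = [[sum(a*b for a, b in zip(v[i], v[j])) for j in range(3)] for i in range(3)]

# 3x3 determinant by cofactor expansion along the first row
det = (G[0][0] * (G[1][1]*G[2][2] - G[1][2]*G[2][1])
     - G[0][1] * (G[1][0]*G[2][2] - G[1][2]*G[2][0])
     + G[0][2] * (G[1][0]*G[2][1] - G[1][1]*G[2][0]))
print(det)  # 1 -- nonzero, as the hints predict for a basis
```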
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to get all the factors of a number using its prime factorization? For example, I have the number $420$. This can be broken down into its prime factorization of $$2^2 \times3^1\times5^1\times7^1 = 420 $$
Using $$\prod_{i=1}^r (a_i + 1)$$ where $a_i$ is the exponent of the $i$-th prime factor and $r$ is the number of distinct prime factors, I get $24$ possible factors.
Is there an easy way to iterate through those $4$ prime factors to obtain all $24$ divisors? I know this can be easily done using a table with numbers having only $2$ prime factors. But, as this one has $4$, I obviously can't implement the table method. So, any general solution to do this? Thanks!
| If $n = \prod_{i=1}^r p_i^{a_i}$ is the prime factorization of $n$, there are $\prod_{i=1}^r (a_i + 1)$ divisors. Look at this as counting an $r$-digit number in a variable base, with the base of the $i$-th digit being $a_i+1$, so that digit goes from $0$ to $a_i$. If $b_i$ is the $i$-th digit, then the value corresponding to that digit is $p_i^{b_i}$.
Here is my take on a moderately efficient algorithm to compute all the possible divisors. The divisor starts at $1$. When a digit is incremented, the value of the divisor is multiplied by $p_i$. If the $i$-th digit exceeds $a_i$, it is set to zero, the divisor is divided by $p_i^{a_i}$, and the next digit is examined.

Initialize $d = 1$ (the divisor) and, for $i=1$ to $r$, set $b_i = 0$ and $c_i = p_i^{a_i}$ (the digits and the maximal prime powers).
$\text{do forever}\\
\quad\text{output } d\\
\quad\text{for }i=1\text{ to }r\\
\qquad \text{if } b_i<a_i\text{ then } b_i=b_i+1; d=d\cdot p_i; \text{ exit for loop}
\quad\text{(digit did not overflow)}\\
\qquad\text{else } b_i=0; d=d/c_i
\quad\text{(digit overflowed - reset and look at next digit)}\\
\quad\text{end for}\\
\quad\text{if }d=1\text{ then exit do loop}
\quad\text{(all digits overflowed - done)}\\
\text{end do}\\
$
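The pseudocode above translates directly into a generator; a sketch in Python, assuming the factorization is passed as (prime, exponent) pairs:

```python
def divisors(factorization):
    """Yield all divisors of n given its prime factorization as
    (prime, exponent) pairs, following the variable-base counter above."""
    primes = [p for p, _ in factorization]
    exps = [a for _, a in factorization]
    caps = [p**a for p, a in factorization]  # the c_i = p_i^{a_i}
    b = [0] * len(primes)                    # the digits b_i
    d = 1
    while True:
        yield d
        for i in range(len(primes)):
            if b[i] < exps[i]:
                b[i] += 1
                d *= primes[i]
                break                        # digit did not overflow
            b[i] = 0
            d //= caps[i]                    # digit overflowed: reset it
        else:
            return                           # all digits overflowed: done

print(sorted(divisors([(2, 2), (3, 1), (5, 1), (7, 1)])))  # 24 divisors of 420
```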
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Finding roots of trigonometric function I have been given the following functions (where $\omega_0$ is some positive constant)
$$\begin{align}
\Gamma(t) &= \frac{1}{t^2} \left(\; \frac{1}{2} \omega_0^2 t^2 - \cos(\omega_0 t) - \omega_0 t \sin(\omega_0 t) + 1 \;\right) \\[4pt]
\gamma(t) = \frac{d \Gamma(t)}{dt} &= \frac{1}{t^3} \left(\; 2 \omega_0 t \sin(\omega_0 t) - (\omega_0^2 t^2 - 2) \cos(\omega_0 t) - 2 \;\right)
\end{align}$$
and need to find the intervals $(a_n, b_n)$ such that $\gamma(t)<0$ for $ t \in (a_n,b_n)$.
I then need to calculate
$$N(\Phi)= \int_{\gamma(t)<0} - \gamma(t) e^{-\Gamma(t)}dt =\sum_n \left(e^{-\Gamma(b_n)}-e^{-\Gamma(a_n)}\right)$$
I have been working on this and can't get it to work.
The question really only is if $N(\Phi)=\infty$ or if it is finite. Both are possible. I tried working on the roots but don't get anywhere for an explicit interval. I know that
$$\lim_{t \to \infty} \Gamma(t) = \frac12\omega_0^2$$
I am not sure though what this tells me about my sum. Clearly individually the sums are divergent. But if I take the difference I get that
$$\lim_{n \rightarrow \infty} e^{-\Gamma(b_n)} - e^{-\Gamma(a_n)}=0$$
and thus I have the possibility of convergence.
If it is not possible to calculate the roots explicitly but know that $N(\Phi)=\infty$ that is sufficient. Otherwise I guess I will have to use numerical methods.
I would really appreciate any kind of help or hint. Thank you.
Edit: What I gather so far from Maxim's answer is this.
$\gamma(t)$ behaves like $\frac{-\omega_0^2 \cos(\omega_0 t)}{t}$. I assume this is the case because the $\sin(\omega_0 t)/t^2$ and $2/t^3$ terms decrease faster than my other term. So finding the roots of $\frac{-\omega_0^2 \cos(\omega_0 t)}{t}$ gives me
$$ r_k=\frac{\pi (2k +1)}{2 \omega_0} + O(k^{-3})$$
Shouldn't there be $O(k^{-3})$ as I have the term $2/t^3$? If not, why not?
The calculations for $\Gamma(r_k)$ are clear for the roots. But then calculating
$$ \exp(-\Gamma(r_{2k})) - \exp(-\Gamma(r_{2k-1})) = e^{\frac{-\omega_0^2}{2}}(e^{\frac{\omega_0^2}{2 \pi k}}-e^{-\frac{\omega_0^2}{2 \pi k}}) + O(k^{-2}) $$
Thus leaving me with the following sum
$$e^{\frac{-\omega_0^2}{2}} \sum_k (e^{\frac{\omega_0^2}{2 \pi k}}-e^{-\frac{\omega_0^2}{2 \pi k}}) + O(k^{-2}) $$
But then $\sum_k O(k^{-2})$ converges, and what happens to the other sum? Thank you for your time and effort.
| The general idea goes like this. $\gamma(t)$ behaves like $-\omega_0^2 \cos(\omega_0 t)/t$, and the position of the $k$th root of $\gamma(t)$ is
$$r_k = \frac {\pi (2 k + 1)} {2 \omega_0} + O(k^{-1}).$$
(The next order term is $-2 / (\pi \omega_0 k)$, but we only need the $O$ estimate.) Then
$$\Gamma(r_{2 k}) =
\frac {\omega_0^2} 2 - \frac {\omega_0^2} {2 \pi k} + O(k^{-2}), \\
\Gamma(r_{2 k - 1}) =
\frac {\omega_0^2} 2 + \frac {\omega_0^2} {2 \pi k} + O(k^{-2}), \\
\exp(-\Gamma(r_{2 k})) - \exp(-\Gamma(r_{2 k - 1})) =
\frac {\omega_0^2 e^{-\omega_0^2 / 2}} {\pi k} + O(k^{-2})$$
(the remainder is in fact $O(k^{-3})$, but that isn't important either), and the sum over the first $n$ negative intervals grows as $\ln n$ times the coefficient at $k^{-1}$.
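A numeric check of these root asymptotics, with $\omega_0 = 1$ chosen arbitrarily; the printed shift $r_k - \pi(2k+1)/2$ should be negative and shrink roughly like $-2/(\pi k)$:

```python
import math

w0 = 1.0  # arbitrary positive value of the constant omega_0

def gamma(t):
    return (2*w0*t*math.sin(w0*t) - (w0*w0*t*t - 2)*math.cos(w0*t) - 2) / t**3

def bisect(f, a, b, iters=80):
    """Simple sign-based bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

shifts = {}
for k in (5, 20, 80):
    guess = math.pi * (2*k + 1) / (2*w0)
    root = bisect(gamma, guess - 1.0, guess + 1.0)
    shifts[k] = root - guess
    print(k, shifts[k])  # negative, shrinking in magnitude as k grows
```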
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why are skew-symmetric matrices of interest? I am currently following a course on nonlinear algebra (topics include varieties, elimination, linear spaces, grassmannians etc.). Especially in the exercises we work a lot with skew-symmetric matrices, however, I do not yet understand why they are of such importance.
So my question is: How do skew-symmetric matrices tie in with the topics mentioned above, and also, where else in mathematics would we be interested in them and why?
| This is not the area of math you're interested in, but here's an example I might as well write down. In convex optimization we are interested in the canonical form problem
$$
\text{minimize} \quad f(x) + g(Ax)
$$
where $f$ and $g$ are closed convex proper functions and $A$ is a real $m \times n$ matrix. The optimization variable is $x \in \mathbb R^n$. This canonical form problem is the starting point for the Fenchel-Rockafellar approach to duality.
The KKT optimality conditions for this optimization problem can be written as
$$
\tag{$\spadesuit$} 0 \in \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix} \begin{bmatrix} x \\ z \end{bmatrix} + \begin{bmatrix} \partial f(x) \\ \partial g^*(z) \end{bmatrix},
$$
where $g^*$ is the convex conjugate of $g$ and $\partial f(x)$ is the subdifferential of $f$ at $x$ and $\partial g^*(z)$ is the subdifferential of $g^*$ at $z$. The notation $\begin{bmatrix} \partial f(x) \\ \partial g^*(z) \end{bmatrix}$ denotes the cartesian product $\partial f(x) \times \partial g^*(z)$.
The condition $(\spadesuit)$ is a great example of a "monotone inclusion problem", which is a type of problem that generalizes convex optimization problems. The subdifferential $\partial f$ is the motivating example of a "monotone operator", but the operator
$$
\begin{bmatrix} x \\ z \end{bmatrix} \mapsto
\begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix}
$$
is a good example of a monotone operator which is not the subdifferential of a convex function.
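The key property behind monotonicity here is that $\langle Ku, u\rangle = 0$ when $K$ is the skew-symmetric block matrix above; a numeric spot check, with random sizes and seed as arbitrary choices:

```python
import random

random.seed(1)
m, n = 3, 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]
z = [random.uniform(-1, 1) for _ in range(m)]

# K(x, z) = (A^T z, -A x) is the skew-symmetric block operator above
ATz = [sum(A[i][j] * z[i] for i in range(m)) for j in range(n)]
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

# <K u, u> = <A^T z, x> - <A x, z> = 0, up to floating-point error
inner = sum(p * q for p, q in zip(ATz, x)) - sum(p * q for p, q in zip(Ax, z))
print(abs(inner) < 1e-9)  # True
```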
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 0
} |
Let $f_n \to f$ pointwise on $[0,1]$. If $f_n, f$ are continuous, is it true that $\int_0^1 f_n(x) dx \to \int_0^1 f(x)dx$?
Let $\{f_n\}$ be a sequence of continuous functions on on $[0,1]$. Let $f_n \to f$ pointwise. If $f$ is continuous on $[0,1]$, is it true that $$\int_0^1 f_n(x) dx \to \int_0^1 f(x)dx?$$
I couldn't think of a counter-example, so my inclination is that it is true. If I can show that $f_n \to f$ uniformly, then I would be done, since I can choose an $N$ such that for all $n > N$ and $x\in [0,1]$ we have $|f_n(x) - f(x)| < \varepsilon$, which gives
$$ \left| \int_0^1 f_n(x) - f(x) dx \right| \leq \int_0^1 |f_n(x) - f(x)|dx < \varepsilon$$
Can it be shown that $f_n \to f$ uniformly since we're working on a compact set and $f$ is continuous? Or can a counter-example be constructed from here?
| Let $f_n(x) = n^2x^n(1-x).$ Then $f_n\to 0$ pointwise everywhere in $[0,1],$ but $\int_0^1 f_n(x)\,dx \to 1.$
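A quick check of this counterexample: the integral evaluates exactly to $n^2/((n+1)(n+2)) \to 1$, while $f_n(x)\to 0$ at each fixed $x\in[0,1]$:

```python
def f(n, x):
    return n * n * x**n * (1 - x)

def exact_integral(n):
    # n^2 * (1/(n+1) - 1/(n+2)) = n^2 / ((n+1)(n+2))
    return n * n / ((n + 1) * (n + 2))

print(exact_integral(1000))  # close to 1
print(f(1000, 0.9))          # essentially 0: the pointwise limit vanishes
```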
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2782921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Integrating integer powers of $\cos(\theta)$ Consider the following integral for $n\in\mathbb{N}$:
$$I_n = \int_0^\pi\cos^n\theta\,d\theta \tag1$$
which, using integration by parts, one can show to be $I_n = 0$ for $n$ odd and to be equal to
$$I_{2m} = \frac{(2m-1)!!}{(2m)!!}\pi \tag2$$
for $n = 2m$ even.
However, I've tried to evaluate $(1)$ using the binomial theorem to write powers of cosine as a sum over powers of exponential functions:
\begin{align}
I_n &= \frac{1}{2^n}\int_0^\pi\left(e^{i\theta}+e^{-i\theta}\right)^n\,d\theta\\
&=\frac{1}{2^n}\sum_{k=0}^n{n\choose k}\int_0^\pi e^{ik\theta}e^{-i(n-k)\theta}\,d\theta\\
&=\frac{1}{2^n}\sum_{k=0}^n{n\choose k}\int_0^\pi e^{i(2k-n)\theta}\,d\theta\\
&=\frac{1}{2^n}\sum_{k=0}^n{n\choose k}\frac{e^{i\pi(2k-n)}-1}{i(2k-n)}
\end{align}
However, this result seems to contradict $(2)$ - note that if $n = 2m$ is even, then $2k-2m$ is an even integer and $e^{2\pi i(k-m)} = 1$ for any value of $k$, so every numerator $e^{i\pi(2k-n)}-1$ vanishes, which would imply that $I_n$ is nonzero only for odd values of $n$.
Furthermore, if $n = 2m -1$, then we have $e^{i\pi(2k-2m+1)} = -1$ and we thus have
\begin{align}
I_{2m-1} &= \frac{1}{2^{2m-1}}\sum_{k=0}^{2m-1}{{2m-1}\choose k}\frac{-2}{i(2k-2m-1)}\\
&=\frac{i}{2^{2m}}\sum_{k=0}^{2m-1}{{2m-1}\choose k}\frac{1}{2(k-m)-1} \tag3
\end{align}
which is purely imaginary, which is obviously wrong.
So what's the problem here? Am I somehow wrong in using the binomial theorem? Have I made a computational error that explains this odd result? Can this approach to computing $(1)$ be salvaged?
EDIT.
Taking into consideration that the sum is actually non-zero if the argument of the exponential function is itself zero, which occurs if, for $n$ = $2m$, we have $k = m$, this gives us:
\begin{align}
I_{2m} &= \frac{1}{2^{2m}}{{2m}\choose m}\pi = \frac{(2m)!}{2^{2m}(m!)^2}\pi
\end{align}
However, I don't see how this is equal to $(2)$.
I have also tried to show that $(3)$ is $0$, by symmetry, but I have no managed to show this yet either.
| There seems to be a sign error in your expansion of the odd-power integral. It should be
$$
I_{2m-1} = \frac{1}{2^{2m-1}}\sum_{k=0}^{2m-1}{{2m-1}\choose k}\frac{-2}{i(2k-2m+1)}.
$$
Now notice that
$\binom{2m-1}{2m-1-k} = \binom{2m-1}{k}$
and that
$2(2m-1-k)-2m+1 = -(2k-2m+1),$
and perhaps it will be clearer that the last $m$ terms of the sum exactly cancel the first $m$ terms, leaving a final sum equal to zero.
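The pairing $k \leftrightarrow 2m-1-k$ can be verified exactly with rational arithmetic; the range of $m$ below is arbitrary:

```python
from fractions import Fraction
from math import comb

# exact check that the odd-power sum telescopes to zero:
# sum_k C(2m-1, k) / (2k - 2m + 1) = 0
for m in range(1, 7):
    s = sum(Fraction(comb(2*m - 1, k), 2*k - 2*m + 1) for k in range(2*m))
    assert s == 0
print("cancellation verified for m = 1..6")
```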
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Finding the intersection of two lines, in polar coordinates The sticking point is figuring out the substitutions for a ratio of cosines of differences.
I have a pair of lines in polar coords:
$$r = \frac{s_1}{\cos(\theta - \alpha_1)} \qquad r = \frac{s_2}{\cos(\theta - \alpha_2)}$$
where
$$\begin{align}
\alpha_1 &= 6^\circ \\
s_1 &= 0.9945218953682733 \\
\alpha_2 &= 74^\circ \\
s_2 &= 0.27563735581699916
\end{align}$$
I then need to do trigonometric substitution to solve for $\theta$.
$$\begin{align}
\frac{s_1}{\cos(\theta - \alpha_1)} &= \frac{s_2}{\cos(\theta - \alpha_2)} \\[4pt]
\frac{s_1}{s_2} &= \frac{\cos(\theta - \alpha_1)}{\cos(\theta - \alpha_2)}
\end{align}$$
I am stumped after that point.
| HINT
We have
$$s_1 \cos(\theta - \alpha_2) = s_2 \cos(\theta - \alpha_1)$$
$$s_1 \cos\theta\cos\alpha_2 + s_1\sin\theta\sin\alpha_2 = s_2\cos\theta\cos\alpha_1 + s_2\sin\theta\sin\alpha_1$$
$$s_1\cos\theta\cos\alpha_2 - s_2\cos\theta\cos\alpha_1 = s_2\sin\theta\sin\alpha_1 - s_1\sin\theta\sin\alpha_2$$
$$\cos\theta\,[s_1\cos\alpha_2 - s_2\cos\alpha_1] = \sin\theta\,[s_2\sin\alpha_1 - s_1\sin\alpha_2]$$
so that $\tan\theta = \dfrac{s_1\cos\alpha_2 - s_2\cos\alpha_1}{s_2\sin\alpha_1 - s_1\sin\alpha_2}$.
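Numerically, the resulting formula can be checked directly with the values given in the question, using the expansion $\cos(\theta-\alpha)=\cos\theta\cos\alpha+\sin\theta\sin\alpha$. A small sketch (variable names are my own):

```python
import math

# line parameters from the question: r = s / cos(theta - alpha)
a1, s1 = math.radians(6), 0.9945218953682733
a2, s2 = math.radians(74), 0.27563735581699916

# cos(t)*[s1*cos(a2) - s2*cos(a1)] = sin(t)*[s2*sin(a1) - s1*sin(a2)]
theta = math.atan((s1 * math.cos(a2) - s2 * math.cos(a1))
                  / (s2 * math.sin(a1) - s1 * math.sin(a2)))
r = s1 / math.cos(theta - a1)

# both line equations must give the same r at the intersection point
assert abs(r - s2 / math.cos(theta - a2)) < 1e-9
```

With these particular values $s_1=\cos 6^\circ$ and $s_2=\cos 74^\circ$, so both lines are tangent-style lines through the same point, and the intersection comes out at $\theta=0$, $r=1$.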
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Gradient In Complex space I have a triangle on the upper half-plane. One corner is at infinity (0,inf), one corner is at zero (0,0) and one corner is at (0,1). However when displayed using the poincare disk model the triangle is equilateral.
I want to put a gradient on the triangle so that (0,1) has a value of red, while zero and infinity have a value of blue. That is easy, however the hard part is to make a symmetrical gradation.
By gradient I mean a color gradient of course. Going from blue to red, say. Let's set red equal to 1 and blue equal to zero to keep this simple. Gradient is a function of X and Y where X is the real component and Y is the imaginary component.
The best I got was this:
gradient = x*(1./(1.+y))
This was close, but as you see below there is more red in the top half then in the bottom half.
| Let us denote the Poincaré disk coordinates with $(x_P,y_P)$ and the half-plane coordinates with $(x_H,y_H)$. Some possible solutions:
*
*Inspection of your Poincaré disk picture shows that gradient = $x_P$ clearly gives us the solution. To obtain the same gradient in half-plane coordinates, we need to use the mapping from half-plane to Poincaré disk, i.e., inversion (plus some scaling and shifting). You seem to be using inversion centered at point (0,-1), which gives gradient = $x_P = 2x_H/(x_H^2+(y_H+1)^2)$.
*In the solution above the "lines of constant colors" do not seem to have any strong interpretations in hyperbolic geometry (they are equidistants, but not equidistants from the "blue line"). If we want the "lines of constant colors" to be equidistant from the vertical line $(0,0)-\infty$ (or, in other words, the color to be the function of hyperbolic distance from that line), we need to recall that equidistants from such a line are represented in the halfplane model by straight lines passing through $(0,0)$. So, our gradient should be a function of the angle between the vertical line $(0,0)-\infty$ and the line $(0,0)-(x_H,y_H)$, for example, $\frac{2}{\pi}\arctan(x/y)$.
*We could also make our "lines of constant colors" to be hyperbolic straight lines -- just like in the first solution, but use the $x$ coordinate in the Klein model for the gradient. It seems this approach yields more complex formulas.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $0Given that $f(x)=e^x(x^2-6x+12)-(x^2+6x+12),\;\;x>0$ is an increasing function. I want to prove that:
If $0<x<\infty$, then $0<\frac{1}{e^x-1}-\frac{1}{x}+\frac{1}{2}<\frac{x}{12}$.
Here is what I have done:
If $0<x<\infty$, then by Mean Value Theorem, $\exists\; c\in(0,x)$ such that
$$f'(c)=\frac{e^x(x^2-6x+12)-(x^2+6x+12)- 0}{x- 0}>0$$
but how do I get the desired inequality? Can anyone help out?
| We'll prove that $$0<\frac{1}{e^x-1}-\frac{1}{x}+\frac{1}{2}$$ or
$$\frac{1}{e^x-1}>\frac{2-x}{2x},$$ which is obvious for $x\geq2$.
But, for $0<x<2$ we need to prove that
$$e^x-1<\frac{2x}{2-x}$$ or $f(x)>0,$ where
$$f(x)=\ln(x+2)-\ln(2-x)-x.$$
Indeed, $$f'(x)=\frac{x^2}{4-x^2}>0,$$ which says
$$f(x)>\lim_{x\rightarrow0^+}f(x)=0$$ and the left inequality is proven.
By the same way we can prove a right inequality.
Indeed, we need to prove that
$$\frac{1}{e^x-1}<\frac{x}{12}+\frac{1}{x}-\frac{1}{2}$$ or
$$e^x-1>\frac{12x}{x^2-6x+12}$$ or $g(x)>0,$ where
$$g(x)=x-\ln(x^2+6x+12)+\ln(x^2-6x+12)$$ and since
$$g'(x)=\frac{x^4}{(x^2+6x+12)(x^2-6x+12)}>0,$$ we obtain:
$$g(x)>\lim_{x\rightarrow0^+}g(x)=0$$ and we are done.
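As a quick numerical sanity check of both bounds (a sketch; the sample grid is arbitrary and the helper name is mine):

```python
import math

def middle(x):
    # the quantity 1/(e^x - 1) - 1/x + 1/2 from the statement
    return 1 / (math.exp(x) - 1) - 1 / x + 0.5

for k in range(1, 201):
    x = 0.05 * k            # sample points in (0, 10]
    assert 0 < middle(x) < x / 12
```

Near $0$ the two sides get very close, consistent with the expansion $\frac{1}{e^x-1}=\frac1x-\frac12+\frac{x}{12}-\frac{x^3}{720}+\dots$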
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Integration limits problem Here is a part of a problem I have a hard time with:
Let $$f(x)= 10e^{-0.201x}+3$$
Let $$g(x)= -x^2+12x-24$$
Find the area enclosed by the graphs of f and g
Here is the answer as explained by the teacher:
Finding limits $3.8953$ and $8.6940$
Evidence of integrating and subtracting functions
Correct expression is....
And then he integrates both the functions with these two limits above.
He finds an area of $19.5$.
But how did he find these two limits in the first place?
Thanks.
| The limits are found as the intersections of the two curves, a decaying exponential and a downward parabola. There will be no closed-form of the roots and you need to use numerical methods.
To obtain good starting estimates, you can replace the exponential (blue) by its second order development (magenta), to obtain a quadratic approximation. For convenience we will shift the origin of the coordinates to the vertex of the parabola, at $x=6$. Now with $z=x-6$,
$$-z^2+12=10e^{-0.201(z+6)}+3=10e^{-1.206}e^{-0.201z}+3\\
\approx10e^{-1.206}(1-0.201z+0.0202005z^2)+3.$$
Solving the quadratic equation, we find
$$x=z+6\approx3.887\text{ or }x\approx 8.680.$$
Then you can refine with Newton's method.
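The Newton refinement can be sketched in a few lines of Python (helper names are mine; the decay constant $0.201$ is the one from the question):

```python
import math

f = lambda x: 10 * math.exp(-0.201 * x) + 3      # the exponential
g = lambda x: -x**2 + 12 * x - 24                # the parabola
h = lambda x: f(x) - g(x)                        # roots of h = intersections
dh = lambda x: -2.01 * math.exp(-0.201 * x) + 2 * x - 12

def newton(x, steps=25):
    for _ in range(steps):
        x -= h(x) / dh(x)
    return x

lo, hi = newton(3.9), newton(8.7)
```

Starting from the quadratic estimates, a few iterations land on the limits $3.8953$ and $8.6940$ quoted by the teacher.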
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
USAMO 2018: Show that $2(ab+bc+ca) + 4 \min(a^2,b^2,c^2) \geq a^2 + b^2 + c^2$ Here is question 1 from USAMO 2018 Q1 (held in April):
Let $a,b,c$ be positive real numbers such that $a+b+c = 4 \sqrt[3]{abc}$.
Prove that: $$2(ab+bc+ca) + 4 \min(a^2,b^2,c^2) \geq a^2 + b^2 + c^2$$
This question is on symmetric polynomials. I recall as many facts as I can think of regarding inequalities. (This test was closed book).
*
*AM-GM inequality: $a + b + c \geq 3 \sqrt[3]{abc}$ so the equality there is strange.
*quadratic mean inequality gives $3(a^2 + b^2 + c^2) \geq (a + b + c)^2 $.
*The $\min(a^2,b^2,c^2)$ on the left side makes things difficult since we can't make it smaller.
*I am still looking for other inequalities that might work.
It's tempting to race through this problem with the first solution that comes to mind. I'm especially interested in some kind of organizing principle.
| Without loss of generality, let $a\le b=ax\le c=axy, x\ge 1, y\ge 1$.
Then:
$$a+b+c = 4 \sqrt[3]{abc} \Rightarrow a+ax+axy=4\sqrt[3]{a(ax)(axy)} \Rightarrow 1+x+xy=4x^{\frac23}y^{\frac13} \qquad (1)$$
Also:
$$a+b+c = 4 \sqrt[3]{abc} \Rightarrow a^2+b^2+c^2=16\sqrt[3]{(abc)^2}-2(ab+bc+ca) \qquad (2)$$
Plugging $(2)$ and then $(1)$ into the given inequality:
$$2(ab+bc+ca) + 4 \min(a^2,b^2,c^2) \geq a^2 + b^2 + c^2 \overbrace{\Rightarrow}^{(2)}\\
2(ab+bc+ca)+4a^2\ge 16\sqrt[3]{(abc)^2}-2(ab+bc+ca) \Rightarrow \\
(a(ax)+(ax)(axy)+(axy)a)+a^2\ge 4\sqrt[3]{(a(ax)(axy))^2} \Rightarrow\\
x+x^2y+xy+1\ge 4x\sqrt[3]{xy^2} \overbrace{\Rightarrow}^{(1)}\\
4x^{\frac23}y^{\frac13}+x^2y\ge 4x^{\frac43}y^{\frac23} \Rightarrow\\
\left(x^{\frac23}y^{\frac13}\right)^2-4\left(x^{\frac23}y^{\frac13}\right)+4\ge 0 \Rightarrow \\
\left(x^{\frac23}y^{\frac13}-2\right)^2\ge 0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
find ratio of area of triangle $\Delta{AFG}$ to area of $\Delta {ABC}$ In the figure below, $BD=DE=EC$, $F$ divides $AD$ so that $FA:FD=1:2$ and $G$ divides $AE$ so that $GA:GE=2:1$.
Find ratio of area of triangle $\Delta{AFG}$ to area of $\Delta {ABC}$
My Try:
I noticed that $G$ is centroid of $\Delta{ADC}$ and
$$Ar(ABD)=Ar(ADE)=Ar(AEC)$$
any clue?
| HINT:
Lemma: Prove that $S_{ABC}=\dfrac{1}{2}AB\times AC\times\sin{BAC}$ and apply similarly for other angles
I will use the lemma above, but I will not prove it here.
Firstly, notice that triangle $ADE$ has the same altitude from $A$ as triangle $ABC$, but $\dfrac{DE}{BC}=\dfrac{1}{3}$, so using the area formula $\dfrac{\text{base}\times\text{height}}{2}$, we will have $\dfrac{S_{ADE}}{S_{ABC}}=\dfrac{1}{3}$ (this step does not use the lemma).
$\Rightarrow S_{ADE}=\dfrac{1}{3}S_{ABC}$
Secondly, using the lemma above, we can prove that $S_{ADE}=\dfrac{1}{2}AD\times AE\times\sin{DAE}$ and $S_{AFG}=\dfrac{1}{2}AF\times AG\times\sin{DAE}$
$\Rightarrow \dfrac{S_{AFG}}{S_{ADE}}=\dfrac{\frac{1}{2}AF\times AG\times\sin{DAE}}{\frac{1}{2}AD\times AE\times\sin{DAE}}=\dfrac{AF}{AD}\times\dfrac{AG}{AE}$
Hope these hints are enough.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
General request for a book on mathematical history, for a VERY advanced reader. I am aware that there are answered similar questions on here, however I am specifically after a text that would be engaging for a professor of mathematics, also Fellow of the Royal Society (FRS).
He is unwell and in the hospital, and I would like to get him something to pass the time. However anything aimed at undergraduate / postgraduate level is going to be far too patronising. Honestly, I'm not sure if there exists such a book, but if anyone has any recommendations, I would be extremely grateful.
| I suggest
*
*Mathematical Thought from Ancient to Modern Times, Vol. 1&2, by Morris Kline
*Mathematics and Its History, by John Stillwell
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2783869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "83",
"answer_count": 29,
"answer_id": 3
} |
Concluding a proof ($\pi$ is irrational) I am making a proof for irrationality of $\pi$ and i proceeded as follows:
Let $\pi=\frac{u}{v}$ for some $u,v\in \Bbb{N}$, define family of integrals:
$$I_n=\frac{v^{2n}}{n!}\int_0^\pi x^n(\pi-x)^n \sin x\,dx$$ By some elementary estimates we have
$$0<I_n\leq\frac{v^{2n}}{n!}\pi^{2n+1}$$ thus by squeeze lemma we have $\lim_{n\to\infty}I_n=0$.
In the next part I prove that $I_0,I_1\in \Bbb{N}$ and applying some integration by parts i get recursive formula $$I_n=(4n-2)v^2I_{n-1}-u^2v^2I_{n-2}\tag{1}$$
Now I should conclude the proof and I have two ideas but I'm not really sure whether both of them are correct (if both, which one is better?):
a) Because of (1), we can say that $\lim_{n\to\infty}I_n=\infty$ because of the factor $4n-2$ thus implying there are two different limits for $I_n$ which is impossible thus contradiction.
b) Because $I_0,I_1\in \Bbb{N}$. Then because $u,v\in\Bbb{N}$ too, recursive formula shows that $\forall n\in\Bbb{N}:I_n\in\Bbb{N}$ but one cannot have infinite sequence of natural numbers tending to zero from above (obviously they can't approach from below).
| Good question! Given the fact that your estimate and (1) are correct, you should choose (b) rather than (a) because the latter is not correct.
As @InterstellarProbe has mentioned in the comment, there are sequences $a_n$ tending to $0$ for which $(4n-2)a_n$ still tends to zero.
The point is how fast $a_n$ tends to zero. If it tends to zero "faster than" $(4n-2)$ tends to $\infty$, then their product $(4n-2)a_n$ still tends to zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2784007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof verification: $x_n \rightarrow a$ and $x_n \rightarrow b$ then $a=b$ Let $(x_n)$ be a sequence in a metric space $S$. Prove: if $x_n \rightarrow a$ and $x_n \rightarrow b$ then $a=b$.
Assume $a \neq b$. Take two balls $B_r(a)$ and $B_r(b)$ with such an $r$ that $B_r(a) \cap B_r(b)=\emptyset$. Then WLOG assume there is a sequence $(x_n)$ that converges to $a$. Then for all $\varepsilon > 0$, there is $N \in \mathbb{N}$ so that $d(x_n,a) < \varepsilon$ for all $n \geq N$. Then for $\varepsilon < r$, all the terms of the sequence past the corresponding $N$ are in $B_r(a)$ and not in $B_r(b)$, so the sequence certainly does not converge to $b$.
Is it correct enough?
| Here is a simpler argument.
Take $\varepsilon>0$.
Then $d(x_n,a) < \varepsilon$ and $d(x_n,b) < \varepsilon$ for all $n$ sufficiently large.
But then $d(a,b) \le d(a,x_n)+ d(x_n,b) = d(x_n,a)+ d(x_n,b) < 2\varepsilon$.
Since $\varepsilon$ is arbitrary, this can only happen if $d(a,b)=0$, which implies $a=b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2784117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Doubts about a question I asked a long time ago (eigenvalues) Here I posted a question about the eigenvalues of the matrix $A:=vv^t$ (where $v\in\mathbb{R}^n$).
The question was answered but I think (after some time) that I am not satisfied.
Can someone please expand the answer? I don't understand why $A$ has rank at most $1$ and why this fact implies that $\lambda=\sum x_i^2$ is the unique eigenvalue. In addition, can I conclude that $A$ is diagonalizable?
| Note that $$(vv^T)v = v(v^Tv) = \|v\|^2 v$$
so $v$ is an eigenvector with the eigenvalue $\|v\|^2 = \sum_{i=1}^n x_i^2$.
Also, explicitly
$$vv^T = \begin{pmatrix} x_1^2 & x_1x_2 & \ldots & x_1x_n \\
x_2x_1 & x_2^2 & \ldots & x_2x_n\\
\vdots & \vdots & \ddots & \vdots \\
x_nx_1 & x_nx_2 & \ldots & x_n^2\end{pmatrix}$$
so every column is a multiple of $v$, therefore the rank of $vv^T$ is at most $1$.
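Both facts, and the diagonalizability question, can be illustrated numerically. Here is a small NumPy sketch with a randomly chosen $v$ (note that $vv^T$ is symmetric, hence always diagonalizable):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal((5, 1))          # a generic nonzero vector
A = v @ v.T

assert np.linalg.matrix_rank(A) == 1     # rank at most 1 (exactly 1 here)

lam = (v.T @ v).item()                   # ||v||^2 = sum of squares
assert np.allclose(A @ v, lam * v)       # v is an eigenvector for ||v||^2

eig = np.sort(np.linalg.eigvalsh(A))     # symmetric matrix: real spectrum
assert np.allclose(eig[:-1], 0) and np.isclose(eig[-1], lam)
```

So the spectrum is $\{\|v\|^2, 0, \dots, 0\}$: the only nonzero eigenvalue is $\sum x_i^2$, and $0$ is also an eigenvalue whenever $n>1$.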
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2784186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 2
} |
Misunderstanding Cayley-Hamilton Theorem - Characteristic Polynomial in the Standard/Factorized Form so my question is about the Cayley-Hamilton theorem. Consider the following matrix $A$.
$$A =\begin{pmatrix}
-1 & 0 & 4 \\
2 & -1 & 0 \\
3 & 2 & -1
\end{pmatrix}$$
The characteristic polynomial is (according to WolframAlpha)
$$\chi_A(X)=-X^3-3X^2+9X+27$$
or in factorized form
$$\chi_A(X)=-(X-3)(X+3)^2.$$
Now, according to CHT, $\chi_A(A)=0$. This is true in this particular case if the characteristic polynomial is in the standard form.
$$\chi_A(A)=-A^3-3A^2+9A+27I_3=0$$
But apparently, if we use the factorized form of the polynomial, it is false.
$$\chi_A(A)=-(A-3I_3)(A+3I_3)^2\neq 0$$
So, I must be doing something wrong or misunderstanding something. I just don't get where I made a mistake. Any help would be greatly appreciated.
PS: I'm sure that I'm not the first one to ask this question, but I didn't know how to phrase it correctly to find an old answer. Also, I hope my tags are okay.
| You have to check your algebra. The same manipulations that show that $$-x^3-3x^2+9x+27=-(x-3)(x+3)^2$$ will work when writing $A$ instead of $x$.
In WA, the matrix product is denoted with a period. And the square is interpreted entrywise. Here is the computation.
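A quick NumPy check (my own sketch) confirms that both forms vanish once every power and product is a genuine matrix operation, and that the entrywise interpretation is what breaks the factorized computation:

```python
import numpy as np

A = np.array([[-1, 0, 4],
              [ 2, -1, 0],
              [ 3,  2, -1]])
I = np.eye(3)
mp = np.linalg.matrix_power

standard = -mp(A, 3) - 3 * mp(A, 2) + 9 * A + 27 * I
factored = -(A - 3 * I) @ mp(A + 3 * I, 2)

assert np.allclose(standard, 0)
assert np.allclose(factored, 0)

# the entrywise "product" and "square" do NOT give zero
entrywise = -(A - 3 * I) * (A + 3 * I)**2
assert not np.allclose(entrywise, 0)
```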
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2784539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Determinant of the matrix associated with the quadratic form If A is the matrix associated with the quadratic form $4x^2+9y^2+2z^2+8yz+6zx+6xy$ then what is the determinant of A? I don't know how to solve quadratic form of a matrix pls help me
| Consider the matrix $$A= \begin{pmatrix}
4 & 3 & 3 \\
3 & 9 & 4 \\
3 & 4 & 2 \\
\end{pmatrix}.$$
Note that the coefficients relating to $x^2,y^2$ and $z^2$ lie on its diagonal. The other entries correspond to half of the coefficients of the interaction terms. Multiplying this matrix on the left with the row vector $\begin{pmatrix}
x&y&z \\
\end{pmatrix}$ and on the right with the column vector $\begin{pmatrix}
x\\y\\z \\
\end{pmatrix}$ yields
\begin{align} \begin{pmatrix}
x&y&z \\
\end{pmatrix} A \begin{pmatrix}
x\\y\\z \\
\end{pmatrix} &= \begin{pmatrix}
x&y&z \\
\end{pmatrix} \begin{pmatrix}
4 & 3 & 3 \\
3 & 9 & 4 \\
3 & 4 & 2 \\
\end{pmatrix}\begin{pmatrix}
x\\y\\z \\
\end{pmatrix}\\
&=\begin{pmatrix}
x&y&z \\
\end{pmatrix} \begin{pmatrix}
4x + 3y + 3z \\
3x + 9y + 4z \\
3x + 4y + 2z \\
\end{pmatrix}\\
&= 4x^2 +3xy + 3xz +3xy +9y^2+4yz+3xz+4yz +2z^2\\
&= 4x^2 +9y^2 +2z^2 +6xy + 6xz + 8yz.
\end{align}
Hence, $A$ is the matrix we are looking for which produces the desired function.
However, we are interested in the determinant of $A$. Luckily, $A$ is a $3 \times 3$ matrix. Hence, we can use the rule of Sarrus to calculate the determinant:
\begin{align}
\det(A) &= \begin{vmatrix}
4 & 3 & 3 \\
3 & 9 & 4 \\
3 & 4 & 2 \\
\end{vmatrix}\\
&= 4\cdot 9 \cdot 2 +3\cdot4\cdot3 +3\cdot4\cdot3 - 3\cdot9\cdot3 -3\cdot3\cdot2 -4\cdot4\cdot4\\
&= 72 +36+36 - 81-18 -64\\
&=-19.
\end{align}
Hence, det$(A)=-19$.
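Both the construction and the determinant are easy to verify numerically; a small NumPy sketch (variable names are mine):

```python
import numpy as np

A = np.array([[4, 3, 3],
              [3, 9, 4],
              [3, 4, 2]])

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal(3)
v = np.array([x, y, z])
form = 4*x**2 + 9*y**2 + 2*z**2 + 8*y*z + 6*z*x + 6*x*y
assert np.isclose(v @ A @ v, form)        # A reproduces the quadratic form

assert np.isclose(np.linalg.det(A), -19)  # matches the Sarrus computation
```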
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2784685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it only the generator of the group that commutes with all the other elements? If a group is generated by an element does that mean the generator commutes with all the other elements or does it mean that because the group is cyclic(as it has a generator) that all elements commute with each other.
For example, I am trying to find the conjugacy classes of the group D4 and am not sure if I could use the property that elements commute with each other. It seems to be taking too long so I was wondering what would be some facts I could be using?
From the notation below I understand that D4 is generated by a and b. So, is it only these two elements that commute with the others?
$$
D_4=\langle a, b\rangle=\{e, a, a^2, a^3, b, ab, a^2b, a^3b\}
$$
| The short answer is: the idea of "generators" and "commutativity" are completely disjoint. I don't really know what else to say...
In $D_4$, the generators $a$ and $b$ do not commute with each other, so they cannot each commute with every element. For example, $ba=a^3b\neq ab$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2784813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Inverse projective transformation: given $\varphi_A:\mathbb{P^3}\to \mathbb{P^3}$ and a line $r$, find $\varphi_A^{-1}(r)$ The transformation $A:\mathbb{R^4}\to \mathbb{R^4}$, represented by the matrix:
$A$ =$
\begin{bmatrix}
3 & 0 & 1 &0 \\
0 & -3 & 0 & 1 \\
-1 & 0 & 1 & 0 \\ 0&-1&0&-1
\end{bmatrix}
$ induces a projectivity $\varphi_A:\mathbb{P^3}\to \mathbb{P^3}$
consider the line $r =
\begin{cases}
x_0-x_1+x_3=0 \\
2x_0-x_1-2x_2=0
\end{cases}$, $r\in \mathbb{P^3}$
How to find the equation of the line $s=\varphi_A^{-1}(r)$?
Could I say (?): let $P = [x_0, x_1, x_2, x_3]$ be the generic point on $\mathbb{P^3}$, $P \in \varphi_A^{-1}(r) \Leftarrow\Rightarrow \varphi_A(P)\in r.$ So I find $A(P)$ = $
\begin{bmatrix}
3x_0+x_2 \\
-3x_1+x_3 \\
-x_0+x_2 \\ -x_1-x_3
\end{bmatrix}
$ and, replacing in $r$, I obtain: $s =
\begin{cases}
3x_0+2x_1+x_2-2x_3=0 \\
8x_0+3x_1-x_3=0
\end{cases}$
Thank you.
| Your method works. Another method is to use $A$ directly: If you have a point transform given by the invertible homogeneous matrix $A$, i.e., $\mathbf p' = A\mathbf p$, then planes transform as $\mathbf\pi'=A^{-T}\mathbf\pi$ because $$\mathbf\pi^T\mathbf p = 0 \iff \mathbf\pi^T(A^{-1}\mathbf p') = (A^{-T}\mathbf\pi)^T\mathbf p'=0.$$
Your line is described as the meet of the planes $\mathbf\pi_1=[1:-1:0:1]^T$ and $\mathbf\pi_2=[2:-1:-2:0]^T$, so the inverse image of this line is the meet of $A^T\mathbf\pi_1 = [3:2:1:-2]^T$ and $A^T\mathbf\pi_2 = [8:3:0:-1]^T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I compute $\mathbb P\{T_nLet $(X_n)_{n}$ a random walk over $\mathbb Z$ starting at $0$, i.e. $\mathbb P\{X_0=0\}=1$. I denote $T_k=\inf\{n\geq 1\mid X_n=k\}$. I suppose that $$\mathbb P\{X_{n+1}=X_n+1\mid X_n,...,X_0\}=p\quad \text{and}\quad \mathbb P\{X_{n+1}=X_n-1\mid X_n,...,X_0\}=q.$$
Remark that $q=1-p$. How can I compute $\mathbb P\{T_n<T_0\}$, i.e. the probability to touch $n$ before touching $0$ ? In fact I have problem to interpret $\{T_n<T_0\}$ using $(X_n)_n$.
I tried as follows:
*
*$\mathbb P\{T_1<T_0\}=\mathbb P\{X_1=1\mid X_0=0\}=p$.
*$\mathbb P\{T_2<T_0\}=\mathbb P\{X_1=1\mid X_0=0\}+\mathbb P\{X_2=2\mid X_1=1\}=2p$
*$\mathbb P\{T_3<T_0\}=\mathbb P\{X_1=1\mid X_0=0\}+\mathbb P\{X_2=2\mid X_1=1\}+\mathbb P\{X_3=3\mid X_2=2\}+\mathbb P\{X_3=2\mid X_2=2\}(\mathbb P\{X_4=3\mid X_3=2\}+\mathbb P\{X_4=1\mid X_3=2\})$
I know that the last one is not clear at all, but I don't know how to account for the fact that the walker can pass back and forth between $1$ and $2$ many times before arriving at $3$.
Any explanation would be appreciated.
| Good question! For your three examples, I think you shouldn't condition on $X_0 =0$ because it does not change anything ($P(X_0=0)=1$). Instead we should condition on the value of $X_1$ (for $n\geq 2$) and use the law of total probability. $P(T_1<T_0) = P(X_1 = 1) = p$ is obvious. For $T_2$, we have
$$
P(T_2<T_0) = P(T_2<T_0|X_1=1) \cdot P(X_1=1)+P(T_2<T_0|X_1=-1) \cdot P(X_1=-1).
$$
Since $P(T_2<T_0|X_1=1) = P(X_2=2|X_1=1) = p$, and $P(T_2<T_0|X_1=-1)=0$ (starting at $-1$, you must pass $0$ before arrive at $2$), we have $P(T_2<T_0) = p^2$.
For general $n>2$, we can still do
$$
P(T_n<T_0) = P(T_n<T_0|X_1=1) \cdot P(X_1=1)+P(T_n<T_0|X_1=-1) \cdot P(X_1=-1)
$$
and get $P(T_n<T_0) = P(T_n<T_0|X_1=1) \cdot P(X_1=1)$ because $P(T_n<T_0|X_1=-1) = 0$. So it remains to get $P(T_n<T_0|X_1=1)$.
Now it turns to a special case of the Gambler's ruin problem: we start at $1$, and we want to find the probability that we hit $n$ before hitting $0$. Doing a shift, it's equivalent to the probability that we start at $0$ and hit $n-1$ before hitting $-1$.
Thus by the well-known result (see at the bottom of page 3) for Gambler's ruin problem, where $a = n-1$ and $b=1$ for our problem, we get
$$
P(T_n<T_0|X_1=1) = \begin{cases}
\frac{1-(q/p)}{1-(q/p)^{n}},\ p\neq 0.5\\
\frac{1}{n},\ p=0.5.
\end{cases}
$$
In conclusion, we have
$$
P(T_n<T_0) = P(T_n<T_0|X_1=1) \cdot P(X_1=1) = \begin{cases}
p\,\frac{1-(q/p)}{1-(q/p)^{n}},\ p\neq 0.5\\
\frac{1}{2n},\ p=0.5.
\end{cases}
$$
(As a check, $n=1$ gives $p$ and $n=2$ gives $p^2$, matching the direct computations above.)
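A Monte Carlo sanity check of the hand-computed values ($P(T_1<T_0)=p$, $P(T_2<T_0)=p^2$, and $\frac{1}{2n}$ for $p=\frac12$). It uses the decomposition $P(T_n<T_0)=p\cdot P(\text{hit } n\text{ before }0\mid X_1=1)$ so that every simulated path stays in the bounded strip $[0,n]$; helper names are mine:

```python
import random

def hit_n_first(n, p, trials=200_000, seed=42):
    """Estimate P(hit n before 0 | start at 1) for a +-1 walk, up-prob p."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = 1
        while 0 < x < n:
            x += 1 if rng.random() < p else -1
        wins += (x == n)
    return wins / trials

def prob_Tn_before_T0(n, p):
    return p * hit_n_first(n, p)

assert abs(prob_Tn_before_T0(1, 0.3) - 0.3) < 0.01    # = p
assert abs(prob_Tn_before_T0(2, 0.3) - 0.09) < 0.01   # = p^2
assert abs(prob_Tn_before_T0(4, 0.5) - 1 / 8) < 0.01  # = 1/(2n)
```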
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Derivative of L-1 norm of matrix Assume you want to find the derivative respect to X ($p \times p$) matrix of
$$
\frac{\partial}{\partial X} || X - A ||_1
$$
where A is ($p \times p$) matrix.
How can I do it?
| Define
$$\eqalign{
Y &= (X-A) \cr
B &= {\rm abs}(Y) \cr
G &= {\rm signum}(Y) \cr
B &= Y\odot G \cr
}$$ where the functions are applied element-wise.
Then find the differential and gradient of the norm as
$$\eqalign{
\phi &= 1:B = 1:Y\odot G = G:Y \cr
d\phi &= G:dY = G:dX \cr
\frac{\partial\phi}{\partial X} &= G = {\rm signum}\big(X-A\big) \cr\cr
}$$
In the above, the symbols {$\,:\,, \odot$} are used to denote the {Frobenius, Hadamard} products, respectively.
Also note that signum has a discontinuity at zero, where its value jumps between $-1$ and $+1$.
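A finite-difference sketch of the result in NumPy (I deliberately keep every entry of $X-A$ away from zero, where the norm is not differentiable; names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
D = rng.standard_normal((4, 4))
D += np.sign(D)                 # push every entry of X - A away from 0
X = A + D

phi = lambda M: np.abs(M - A).sum()   # entrywise L1 norm of M - A
G = np.sign(X - A)                    # claimed gradient

h = 1e-6
num = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = h
        num[i, j] = (phi(X + E) - phi(X - E)) / (2 * h)

assert np.allclose(num, G, atol=1e-4)  # central differences match signum
```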
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
About a continuous function satisfying a given integral equation
Question: Let $f(x):[0,2] \to \mathbb{R}$ be a continuous function, satisfying the equation
$$
\int_{0}^{2} f(x)(x-f(x)) \,dx = \frac{2}{3}.
$$
Find $2f(1)$.
The solution took $f(x)=\frac {x}{2}$. Yes, I know it does not contradict the condition but how can we be sure that $f(x)=\frac{x}{2}$ and not any other function?
| By A.M-G.M inequality we have $$f(x)(x-f(x))\leq \Big({f(x)+x-f(x)\over 2}\Big)^2 = {x^2\over 4}$$
so we always have $$
\int_{0}^{2} f(x)(x-f(x)) \,dx \leq \int_{0}^{2} {x^2\over 4} \,dx=\frac{2}{3}.
$$
Since the integrals are equal and the integrand $\frac{x^2}{4}-f(x)\,(x-f(x))$ is continuous and non-negative, it must vanish identically; hence $f(x)=x-f(x)$, i.e. $f(x)=\frac{x}{2}$, for each $x$ (remember that we have equality in A.M.-G.M. iff the numbers are equal). In particular $f(1)=\frac12$, so $2f(1)=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Confused about diagonal matix notation Reading a book of physics I found the following definition of diagonal matrix:
$$A_{ij}= A_{ii}\delta_{ij}$$
I understand a diagonal matrix has only diagonal elements nonzero, but is the previous notation correct?
I'm somehow confused because if we choose $A_{ij}$ with $i\neq j $ (e.g. $A_{12}$) then we need to use $A_{11}$ from the right side, which seems a bit forced.
| It's correct: if $i=j$, then $A_{ij}=A_{ii}\delta_{ij}=A_{ii}\cdot 1 = A_{ii}$;
if $i\ne j$, then $A_{ij}=A_{ii}\delta_{ij}=A_{ii}\cdot 0 = 0$, so the value of $A_{ii}$ never matters in that case: the factor $\delta_{ij}$ makes the product vanish before $A_{ii}$ is ever used.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Bayes theorem P(A) for second step Here is the problem. Alice went to doctor and doctor said test will produce 99% true result for ill people and 99% true negative for non-ill people. For this particular illness there is 1 per 1000 who get ill.
So I implemented this in bayes theorem formula and found out Alice chance that she is ill are about 9%, considering tests were positive.
I am trying to figure out how to calculate prob. that Alice is ill if she would go to second doctor, who has same test BUT Alice would be negative there.
For P(A) I used 9%, because this is the result from first testing. Is this a way to do it? For this parameter, I get prob that Alice is ill with score 0,09%, which is very small percent. Note, it's 0.09%. Please let me know if I calculated everything ok.
| $\newcommand{\ill}{\text{ill}}\newcommand{\well}{\text{well}}$
First we'll do it by turning the crank. Then further below, we'll see a simpler way.
\begin{align}
\Pr(\ill\mid +) & = \frac{\Pr(+\mid \ill)\Pr(\ill)}{\Pr(+\mid\ill)\Pr(\ill) + \Pr(+\mid\well)\Pr(\well)} \\[10pt]
& = \frac{99\times 1}{(99\times 1) + (1\times999)} = \frac{11}{122} = 0.0901639\ldots \\[20pt]
\Pr(\ill\mid+,-) & = \frac{\Pr(+,-\mid \ill)\Pr(\ill)}{\Pr(+,-\mid\ill)\Pr(\ill) + \Pr(+,-\mid\well)\Pr(\well)} \\[10pt]
& = \frac{(99\times 1\times 1)}{(99\times1\times1)+(1\times99\times 999)} = \frac 1 {1000}.
\end{align}
But now take the prior probability of illness to be $11/122:$
\begin{align}
\Pr(\ill\mid -) & = \frac{\Pr(-\mid \ill)\Pr(\ill)}{\Pr(-\mid\ill)\Pr(\ill) + \Pr(-\mid\well)\Pr(\well)} \\[10pt]
& = \frac{1\times 11}{(1\times 11) + (99\times 111)} = \frac{11}{11000} = \frac 1 {1000}.
\end{align}
It's an exercise in algebra to see why these two methods yield identical results.
Here's a simpler way to look at it:
\begin{align}
\frac{\Pr(\ill\mid +)}{\Pr(\well\mid +)} = \frac{\Pr(\ill)}{\Pr(\well)} \cdot \frac{\Pr(+\mid\ill)}{\Pr(+\mid\well)}.
\end{align}
Looking at it this way makes that algebra exercise far simpler.
This simpler point of view can be written in this way:
$$
\text{posterior odds} = \text{prior odds} \times \text{likelihood ratio} \\
(\text{where “odds'' means }\frac p {1-p} \text{ where } p \text{ is probability}).
$$
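The chained update can be reproduced exactly with Python fractions (a sketch; the names are mine):

```python
from fractions import Fraction

prior = Fraction(1, 1000)          # P(ill)
sens = Fraction(99, 100)           # P(+ | ill)
spec = Fraction(99, 100)           # P(- | well)

def update(prior, positive):
    like_ill = sens if positive else 1 - sens
    like_well = (1 - spec) if positive else spec
    num = like_ill * prior
    return num / (num + like_well * (1 - prior))

after_pos = update(prior, True)
after_pos_neg = update(after_pos, False)   # feed the posterior back in

assert after_pos == Fraction(11, 122)      # about 9%
assert after_pos_neg == Fraction(1, 1000)  # back to the base rate
```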
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
AP Calculus Fundamental Theorem of Calculus Please help me go over this problem; I am a bit confused.
Find ${\displaystyle \frac{\mathrm d}{\mathrm dx} \int_2^{x^2}e^{t^3}\,\mathrm dt}$.
| The Fundamental Theorem of Calculus states that if
$$g(x) = \int_{a}^{f(x)} h(t)~{\rm d}t$$
where $a$ is any constant, then
$$g'(x) = h(f(x)) \cdot f'(x)$$
Using this with our integral, take $f(x) = x^2$ and $h(t) = e^{t^3}$.
So, $$\frac{\mathrm d}{\mathrm dx} \int_2^{x^2}e^{t^3}\,\mathrm dt=(2x)e^{x^6}$$
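A numerical cross-check of this formula, comparing a central difference of the integral against $(2x)e^{x^6}$ at a test point. The quadrature routine below is a plain composite Simpson's rule, stdlib only, and all names are mine:

```python
import math

def simpson(f, a, b, n=2000):          # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                          for k in range(1, n))
    return s * h / 3

F = lambda x: simpson(lambda t: math.exp(t**3), 2.0, x**2)

x, h = 1.1, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)
exact = 2 * x * math.exp(x**6)
assert abs(numeric - exact) < 1e-4 * abs(exact)
```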
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a proof for $\lim_{x \to a} \frac{1}{x-a} = \infty$? I am an adult software developer who is trying to do a math reboot. I am working through the exercises in the following book.
Ayres, Frank , Jr. and Elliott Mendelson. 2013. Schaum's Outlines Calculus Sixth Edition (1,105 fully solved problems, 30 problem-solving videos online). New York: McGraw Hill. ISBN 978-0-07-179553-1.
So far as I can tell, the following question either has a misprint or the book does not cover the material. It is entirely possible that I failed to grasp a key important sentence.
Chapter 7 Limits, problem 24.
Use the precise definition to prove:
$$
\text{a) }\lim_{x \to 0} \frac{1}{x} = \infty \\
\text{b) }\lim_{x \to 1} \frac{x}{x-1} = \infty \\
$$
My understanding.
It is possible to prove $\lim_{x \to 0^+} \frac{1}{x} = +\infty$ or $\lim_{x \to 0^-} \frac{1}{x} = -\infty$, but not $\lim_{x \to 0} \frac{1}{x} = \infty$ because $\frac{1}{x}$ is a hyperbola with no limit at 0. A similar argument can be made for $\frac{x}{x-1}$ at 1.
Is there a proof for $\lim_{x \to a} \frac{1}{x-a} = \infty$?
| A function is not defined properly unless you specify its domain. When we say the function $\frac{1}{x}$ we usually mean the function $f: \mathbb{R}\setminus \lbrace 0 \rbrace \rightarrow \mathbb{R} $ defined by $f(x)=\frac{1}{x}$ for all $x \in \mathbb{R} \setminus \lbrace 0 \rbrace $. Here $\lim_{x \to 0} f(x) $ does not exist.
But when you take $g=f_{|(0, \infty)}$ then we can say that $\lim_{x \to 0} g(x) = \infty$.
So try to find out the domain of the mapping from the context.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2785912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Transformation of a square Having fun with some integrals, I caught myself thinking about transforming of regions. So I have the following questions.
Suppose we have the square determined by inequalities $0<x<1, 0<y<1$ and a transformation rule $u=xy,v=x+y$.
The question is: what form will this square have in new coordinates $(u,v)$?
I tried to express $x$ and $y$ in terms of $u$ and $v$ and got the following: $$x=\frac{v-\sqrt{v^2-4u}}{2}$$ $$y=\frac{v+\sqrt{v^2-4u}}{2}$$
And I don't know what my next step should be.
| Perhaps you can get some insight from mapping some specific sets
$(x, y_0)\mapsto (x y_0, x + y_0)$
where $0\leq y_0 \leq 1$ is a constant number. In the $xy$-plane this a horizontal line. In the $uv$-plane a couple of things can happen, if $y_0 =0$, we obtain a vertical line running through the origin. If $y_0\not = 0$ then $u=x y_0$ and $v = x + y_0$, equivalently
$$
u = y_0(v-y_0)
$$
These are straight lines with slope $1/y_0$ and intercept $y_0$.
From this we then know we are bounded in the $uv$-plane from above by the line $v = u + 1$, which is the result of setting $y_0 = 1$. A similar analysis can be done for points of the form $(x_0,y)$.
$(x,\alpha x)\mapsto (\alpha x^2, (\alpha + 1)x)$
In the $xy$-plane these are straight lines going through the origin with slope $\alpha > 0$. Since we want to stay inside the unit square we need to put some constraints on the domain of $x$: $ x \leq \min(1, 1/\alpha)$. In the $uv$-plane this is mapped to $u = \alpha x^2$ and $v = (\alpha + 1) x$, or equivalently
$$u = \frac{\alpha}{(1 + \alpha)^2} v^2$$
These are parabolas opening along the $u$-axis. The maximum value of the coefficient happens with $\alpha=1$, in that case the parabola has the form $v=2u^{1/2}$.
From this we then learn we are bounded in the $uv$-plane from below by the parabola $v=2u^{1/2}$
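As a numeric sanity check (my own sketch, not part of the original answer), one can sample points of the open unit square and verify that their images lie between the two boundaries found above:

```python
import math
import random

# Sample points of the open unit square under (x, y) -> (u, v) = (xy, x + y)
# and check the two claimed boundaries of the image region:
#   above:  v <= u + 1       (the line obtained from y_0 = 1)
#   below:  v >= 2*sqrt(u)   (the parabola from alpha = 1; this is AM-GM)
random.seed(0)
for _ in range(10_000):
    x, y = random.random(), random.random()
    u, v = x * y, x + y
    assert v <= u + 1
    assert v >= 2 * math.sqrt(u)
print("all sampled image points lie between the parabola and the line")
```

The lower bound is exactly the AM-GM inequality $x + y \geq 2\sqrt{xy}$, which is why the parabola $v = 2u^{1/2}$ is attained along the diagonal $y = x$.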
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Name for geometry that differs by a translation I have a simplistic question: If I have two triangles, and there exists a translation that makes them equivalent (all their vertices would be the same after the translation), then is there a special term in geometry that I would use to describe the relationship between the two triangles?
| One says that each triangle is a translate of the other.
See https://en.wikipedia.org/wiki/Translation_(geometry):
"If $T$ is a translation, then the image of a subset $A$ under the function $T$ is the translate of $A$ by $T$. The translate of $A$ by $T_v$ is often written $A + v$."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
A formula for higher order derivatives of inverse function The formula for higher order derivatives of compound functions is known as Faà di Bruno's formula. Does there exist a similar formula for higher order derivatives of an inverse function, i.e. $D^k(f^{-1}(x))$?
I would be most interested in a non-recursive formula, if such exists. A combinatorial term seems unavoidable.
Related: The first derivative of the inverse function is very well known, and the second one is also not that difficult to determine.
| As requested by the OP, I add a link to a paper containing the formula and the bibliographic data (in Japanese).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
multivariable quadratic form I see the multivariable quadratic form given by 2 different expressions:
$$f(X)=X^{T}AX$$
versus
$$f(X)=\frac{1}{2}X^{T}AX +B^{T}X+C$$
Which is right? and crucially why the difference? What affect does it have?
| $X^{T}AX$ is a quadratic form.
$\frac{1}{2}X^{T}AX +B^{T}X+C$ is a quadratic polynomial, of which the quadratic term, $\frac{1}{2}X^{T}AX$, is a quadratic form.
The factor of $\frac{1}{2}$ in the quadratic polynomial is there "for convenience" because it makes the Hessian of the quadratic polynomial equal to $A$ (presuming $A$ is symmetric, which it always can be made to be). However, the factor of $\frac{1}{2}$ could instead be absorbed into (included in) $A$, in which case the Hessian would be $2A$ (again assuming $A$ is symmetric).
In the case of optimization with quadratic objective, the version $\frac{1}{2}X^{T}AX +B^{T}X+C$ can serve as a general objective (although the presence or absence of the constant $C$ has no effect on the optimal argument values, i.e. the argmin or argmax). $X^{T}AX$ could be used as a quadratic objective, but only for the special case in which there is no linear term.
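An illustrative sketch (my addition, not from the answer) of why the factor $\frac{1}{2}$ is convenient: with symmetric $A$, the gradient of $\frac{1}{2}X^{T}AX + B^{T}X + C$ is $AX + B$, so the Hessian is exactly $A$. A finite-difference check:

```python
import numpy as np

# Finite-difference check: for symmetric A, the gradient of
# f(x) = 0.5 x^T A x + b^T x + c is A x + b (hence the Hessian is A).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2                      # symmetrize A
b = rng.standard_normal(3)
c = 1.7

def f(x):
    return 0.5 * x @ A @ x + b @ x + c

x0 = rng.standard_normal(3)
h = 1e-6
grad = np.array([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h) for e in np.eye(3)])
assert np.allclose(grad, A @ x0 + b, atol=1e-5)
```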
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I show that this set is compact and not compact at the same time? Given:
1.) For each $n \in \mathbb{N}, L_n$ is a line segment from $(0,0)$ to $(1, \frac{1}{n}) $
2.)$ L_\infty$ is a line segment from $(0,0)$ to $(1,0) $
3.) Both $L_n$ and $L_\infty$ are equipped with the subspace topology induced on $\mathbb{R}^2$
4.) $X= \bigcup_{n=1}^\infty L_n $, where $X$ carries the topology in which a set $Y \subseteq X$ is open iff $Y \cap L_n$ is open in $L_n$ for all $n \in \mathbb{N}$
How do I show that $X$ is not compact in the defined topology but is compact with respect to the subspace topology induced by the usual topology on $\mathbb{R}^2$?
With that being said, what possible open subset $Y$ in $X$ is open with respect to the defined topology on X but not with respect to the subspace topology on X induced by the usual topology on $\mathbb{R}^2$?
| In the "defined topology", $\{(1,1/n):n\in\Bbb N\}$ is a closed
discrete subset. Compact spaces cannot have closed
infinite discrete subsets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
How to get the exact value of $\sin(x)$ if $\sin(2x) = \frac{24}{25}$ How to get the exact value of $\sin(x)$ if $\sin(2x) = \frac{24}{25}$ ?
I checked various trigonometric identities, but I am unable to derive $\sin(x)$ based on the given information.
For instance:
$\sin(2x) = 2 \sin(x) \cos(x)$
| Refer to the diagram below.
$AD$ is the angle bisector of the right-angled triangle $\Delta ABC$.
Given $BC=24$ and $AC=25$.
Let $\angle DAB = x$.
From $\Delta ABC$ we see that $\sin(2x) = \dfrac{24}{25}$.
Now, by angle bisector theorem, $BD:DC = 7:25$.
Therefore, $BD = \dfrac{7}{7+25} \times 24 = \dfrac{21}4$.
Observing that $AB:BD = 4:3$, we see that $AD = \dfrac{35}4$
Therefore, $\sin x = \dfrac {BD} {DA} = \dfrac 3 5$.
Another possible angle is $x+180^\circ$ (which leaves $\sin(2x)$ unchanged, since sine has period $360^\circ$), for which $\sin(x + 180^\circ) = -\dfrac35$.
In conclusion, $\sin x = \pm \dfrac 35$.
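A quick numeric check of the geometric answer (my addition) on the principal branch used in the diagram, where $\cos(2x) = \frac{7}{25}$:

```python
import math

# If sin(2x) = 24/25 with 0 < x < pi/4 (the configuration of the diagram,
# where cos(2x) = 7/25), then sin(x) = 3/5.
x = math.asin(24 / 25) / 2
assert math.isclose(math.sin(2 * x), 24 / 25)
assert math.isclose(math.sin(x), 3 / 5)
```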
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
Maximizing $3\sin^2 x + 8\sin x\cos x + 9\cos^2 x$. What went wrong?
Let $f(x) = 3\sin^2 x + 8\sin x\cos x + 9\cos^2 x$. For some $x \in \left[0,\frac{\pi}{2}\right]$, $f$ attains its maximum value, $m$. Compute $m + 100 \cos^2 x$.
What I did was rewrite the equation as $f(x)=6\cos^2x+8\sin x\cos x+3$. Then I let $\mathbf{a}=\left<6\cos x,8\cos x\right>$ and $\mathbf{b}=\left<\cos x,\sin x\right>$.
Using Cauchy-Schwarz, I got that the maximum occurs when $\tan x=\frac{4}{3}$, and that the maximum value is $10\cos x$. However, that produces a maximum of $9$ for $f(x)$, instead of the actual answer of $11$.
What did I do wrong, and how do I go about finding the second part? Thanks!
| Using the identity $$\cos^2 x=\frac {1+\cos 2x}{2}$$ and $$2\sin x\cos x=\sin 2x$$ the question changes to finding the maximum value of the function
$$6+3\cos 2x+4\sin 2x$$
And now using a standard result that the range of a function $a\sin \alpha\pm b\cos \alpha$ is $[-\sqrt {a^2+b^2},\sqrt {a^2+b^2}]$
Hence the range of the given expression becomes $[1,11]$
Hope you can continue further
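A grid check (my own sketch, not part of the answer) confirming the maximum numerically on $\left[0,\frac{\pi}{2}\right]$:

```python
import math

# f(x) = 3 sin^2 x + 8 sin x cos x + 9 cos^2 x = 6 + 3 cos 2x + 4 sin 2x,
# so the maximum should be 6 + sqrt(3^2 + 4^2) = 11.
def f(x):
    return 3 * math.sin(x)**2 + 8 * math.sin(x) * math.cos(x) + 9 * math.cos(x)**2

m = max(f(k * (math.pi / 2) / 100_000) for k in range(100_001))
assert abs(m - 11) < 1e-6
```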
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Summation of $\sum_{r=1}^{n} \frac{\cos (2rx)}{\sin((2r+1)x) \sin((2r-1)x)}$ Summation of $$S=\sum_{r=1}^{n} \frac{\cos (2rx)}{\sin((2r+1)x) \sin((2r-1)x)}$$
My Try:
$$S=\sum_{r=1}^{n} \frac{\cos (2rx) \sin((2r+1)x-(2r-1)x)}{\sin 2x \:\sin((2r+1)x) \sin((2r-1)x)}$$
$$S=\sum_{r=1}^{n} \frac{\cos (2rx) \left(\sin((2r+1)x) \cos((2r-1)x)-\cos((2r+1)x)\sin((2r-1)x)\right)}{\sin 2x \:\sin((2r+1)x) \sin((2r-1)x)}$$
$$S=\sum_{r=1}^n \frac{\cos(2rx)}{\sin 2x}\left(\cot(2r-1)x-\cot(2r+1)x\right)$$
Any clue here?
| Good question! Here is one possible approach. First we rewrite the numerator as
$$
\cos(2rx) = \cos[(r+\frac{1}{2})x+(r-\frac{1}{2})x] = \cos\frac{2r+1}{2}x\cos\frac{2r-1}{2}x - \sin\frac{2r+1}{2}x\sin\frac{2r-1}{2}x
$$
and the denominator as
$$
4\sin\frac{2r+1}{2}x\cos\frac{2r+1}{2}x\cdot \sin\frac{2r-1}{2}x\cos\frac{2r-1}{2}x.
$$
Therefore the original summation can be written as
$$
S_n = \frac{1}{4}\sum_{r=1}^n \frac{1}{\sin\frac{2r+1}{2}x\sin\frac{2r-1}{2}x} -\frac{1}{4}\sum_{r=1}^n \frac{1}{\cos\frac{2r+1}{2}x\cos\frac{2r-1}{2}x}.
$$
Now we can use the trick of multiplying by a factor $\frac{\sin x}{\sin x}$ and using $$\sin x = \sin\frac{2r+1}{2}x\cos\frac{2r-1}{2}x - \sin\frac{2r-1}{2}x\cos\frac{2r+1}{2}x.$$ For the first sum,
$$
\frac{\sin x}{\sin\frac{2r+1}{2}x\sin\frac{2r-1}{2}x} = \cot \frac{2r-1}{2}x-\cot \frac{2r+1}{2}x.
$$
For the second sum,
$$
\frac{\sin x}{\cos\frac{2r+1}{2}x\cos\frac{2r-1}{2}x} = \tan \frac{2r+1}{2}x-\tan \frac{2r-1}{2}x.
$$
Thus the original sum is
$$
S_n = \frac{1}{4\sin x}(\cot \frac{1}{2}x-\cot \frac{2n+1}{2}x-\tan \frac{2n+1}{2}x+\tan \frac{1}{2}x).
$$
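A short numeric comparison (my addition) of the direct sum against the derived closed form, for a sample value of $x$:

```python
import math

# Compare the direct sum S_n with the derived closed form for x = 0.2.
def direct(n, x):
    return sum(math.cos(2 * r * x)
               / (math.sin((2 * r + 1) * x) * math.sin((2 * r - 1) * x))
               for r in range(1, n + 1))

def closed(n, x):
    cot = lambda t: math.cos(t) / math.sin(t)
    return (cot(x / 2) - cot((2 * n + 1) * x / 2)
            - math.tan((2 * n + 1) * x / 2) + math.tan(x / 2)) / (4 * math.sin(x))

for n in (1, 2, 5, 10):
    assert math.isclose(direct(n, 0.2), closed(n, 0.2),
                        rel_tol=1e-9, abs_tol=1e-9)
```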
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Valuations and different ideal in local fields We know that if we have a totally ramified finite extension of local fields $L/K$ then the different ideal is $D_{L/K} = ( h'(\pi))$ i.e. the ideal generated by te derivative of the minimal polinomial of $\pi$ (a uniformizer of L). My question is: since the uniformizer is not unique then if we consider another one uniformizer $\tilde{\pi}$ what happens to $D_{L/K}$ and its valuation is still the same? (i.e. the valuation of the minimal polynomial ?)
Thank you for the answers!
| Another uniformizer has the form $\tilde{\pi} = u \pi$ for some unit $u \in O_L^{\times}$. The minimal polynomial $\tilde h$ is just $\tilde h(X) = h(u^{-1}X)$, and in particular we get
$$\tilde h'(\tilde{\pi}) = u^{-1} h'(u^{-1} \tilde{\pi}) = u^{-1} h'(\pi).$$
Thus the ideal generated by $\tilde h'(\tilde{\pi})$ in $O_L$ is the same as the ideal generated by $h'({\pi})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to show $e^t$ is the unique solution to the integral equation $\int_0^1 e^{ts} x(s) ds = \frac{e^{t+1}-1}{t+1}$? How to show $x(t) = e^t$ is the unique solution to the following integral equation? $$\int_0^1 e^{ts} x(s) ds = \frac{e^{t+1}-1}{t+1}$$
My thoughts:
$$(t+1)\int_0^1 e^{ts} x(s) ds = e^{t+1}-1 $$
Integrating by parts gives:
$$\int_0^1 e^{ts}(x(s)-x'(s)) ds = (e - x(1))e^{t} + x(0) - 1$$
Taking the $k$-th derivative of both sides with respect to $t$ (for $k \geq 1$):
$$\int_0^1 s^k e^{ts}(x(s)-x'(s)) ds = (e - x(1))e^{t}$$
Let $t = 0$
$$\int_0^1 s^k (x(s)-x'(s)) ds = e - x(1)$$
I cannot figure out how to proceed next.
| Suppose $x(s)$ and $y(s)$ both satisfy the integral equation. Then $\int_0^1 e^{ts}(x(s)-y(s))ds=0$ for all $t$. Then the same is true if we replace $e^{ts}$ with a finite linear combination of elements of the family
$\left\{e^{ts}\right\}_{t\in \mathbb{R}}$. Thus
$$\int_0^1 f(s)(x(s)-y(s))ds=0,\qquad \forall f\in \operatorname{span}\left\{e^{ts}\right\}_{t\in \mathbb{R}} $$
But $\operatorname{span}\left\{e^{ts}\right\}_{t\in \mathbb{R}}$ is a subalgebra of $C([0,1])$ which separates points and contains a nonzero constant function (let $t=0$), and therefore by the Stone-Weierstrass theorem, it is dense in $C([0,1])$. Thus, using a standard limiting under the integral sign procedure,
$$\int_0^1 f(s)(x(s)-y(s))ds=0,\qquad \forall f\in C([0,1]) $$
It is now a known fact in analysis that this implies $x(s)-y(s)\equiv 0$, i.e. $x=y$. Thus the solution to your integral equation is unique.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability approach in the expected payoff of a dice game I am trying to understand the problem of expected payoff of a dice game explained here. I can roll the dice up to three times, but after each one I decide if I want to try once again or not. The idea is to find an optimal strategy that maximizes the expected payoff (expected number of spots; they do not sum up).
Let's say I know the optimal strategy: If in the first roll I get 1,2 or 3 I roll the dice once again. If in the second I get 1,2,3 or 4 I roll the dice once again. The end.
I wanted to calculate the expected number using probability $P(X=k)$ that I end up with $k$ spots. I am probably doing something wrong because I don't get the correct answer. My reasoning is the following:
$$P(X=\{1,2,3\}) = \underbrace{\frac{3}{6} \cdot \frac{4}{6} \cdot\frac{1}{6}}_{\text{3rd roll}}$$
$$P(X=4) = \underbrace{\frac{1}{6}}_{\text{1st roll}}+\underbrace{\frac{3}{6} \cdot \frac{4}{6} \cdot\frac{1}{6}}_{\text{3rd roll}}$$
$$P(X=5) = \underbrace{\frac{1}{6}}_{\text{1st roll}}+\underbrace{\frac{3}{6}\cdot\frac{1}{6}}_{\text{2nd roll}}+\underbrace{\frac{3}{6} \cdot\frac{4}{6}\cdot\frac{1}{6}}_{\text{3rd roll}}$$
$$P(X=6) = \frac{1}{6} + \frac{3}{6}\cdot\frac{1}{6}+\frac{3}{6} \cdot \frac{4}{6} \cdot\frac{1}{6}$$
Expected number is $E[X] = \sum_{k=1}^{6}kP(X=k) \approx 4.58$, which is not correct.
I appreciate any help.
| Using your method:
$1$-roll game:
$$E(X)=\sum_{k=1}^6 kP(X=k)=1\cdot \frac{1}{6}+2\cdot \frac16+3\cdot \frac16+4\cdot \frac16+5\cdot \frac16+6\cdot \frac16=3.5.$$
$2$-roll game:
$$E(X)=\sum_{k=1}^6 kP(X=k)=1\cdot \frac{3}{36}+2\cdot \frac{3}{36}+3\cdot \frac{3}{36}+\\
4\cdot \left(\frac16+\frac{3}{36}\right)+5\cdot \left(\frac16+\frac{3}{36}\right)+6\cdot \left(\frac16+\frac{3}{36}\right)=4.25.$$
Note: The player rerolls if the first roll was $1,2,3$.
$3$-roll game:
$$E(X)=\sum_{k=1}^6 kP(X=k)=1\cdot \frac{12}{216}+2\cdot \frac{12}{216}+3\cdot \frac{12}{216}+\\
4\cdot \left(\frac{4}{36}+\frac{12}{216}\right)+5\cdot \left(\frac16+\frac{4}{36}+\frac{12}{216}\right)+6\cdot \left(\frac16+\frac{4}{36}+\frac{12}{216}\right)=4\frac23.$$
Note: The player rerolls if the first roll is $1,2,3,4$ and the second roll is $1,2,3$. You did the other way around!
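The three expectations can also be obtained by a tiny exact computation (my own sketch; the `keep` thresholds encode the strategy from this answer):

```python
from fractions import Fraction

# Exact expected payoff: reroll any face below keep[r] when r further
# rolls remain (keep[2] = 5: reroll 1-4; keep[1] = 4: reroll 1-3).
def ev(rolls_left, keep):
    total = Fraction(0)
    for face in range(1, 7):
        if rolls_left > 1 and face < keep[rolls_left - 1]:
            total += Fraction(1, 6) * ev(rolls_left - 1, keep)
        else:
            total += Fraction(1, 6) * face
    return total

keep = {2: 5, 1: 4}
assert ev(1, keep) == Fraction(7, 2)    # 1-roll game: 3.5
assert ev(2, keep) == Fraction(17, 4)   # 2-roll game: 4.25
assert ev(3, keep) == Fraction(14, 3)   # 3-roll game: 4 2/3
```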
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Topological "closure" of a binary relation Let $f$ be a binary relation on a set $U$.
Topology $T f = \{ E \in \mathscr{P} U \mid
f [E] \subseteq E \}$ (here $f[E]$ is the image of a set $E$ by binary relation $f$).
Conjecture Closure operator $\operatorname{cl}$ of $T f$ is equal to $E \mapsto ( \operatorname{id}_U \cup f \cup
f^2 \cup f^3 \cup \ldots ) [E]$.
| $E \mapsto ( \operatorname{id}_U \cup f \cup f^2 \cup f^3 \cup \ldots ) [E]$ maps open sets to themselves. So it can be the closure operator only if all open sets are closed.
For a counterexample for the conjecture take $f = \{ (0, 0), (1, 1), (0, 1) \}$. Open sets are $\{ \}$, $\{ 1 \}$, $\{ 0, 1 \}$. $\{ 1 \}$ is open but not closed.
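The counterexample can be brute-forced in a few lines (my addition, not from the answer):

```python
from itertools import combinations

# Brute-force the counterexample on U = {0, 1} with
# f = {(0,0), (1,1), (0,1)}: a set E is open in Tf iff f[E] is a subset of E.
U = [0, 1]
f = {(0, 0), (1, 1), (0, 1)}
image = lambda E: {b for (a, b) in f if a in E}
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]
open_sets = [E for E in subsets if image(E) <= E]
assert open_sets == [set(), {1}, {0, 1}]
assert {0} not in open_sets   # complement of {1} is not open, so {1} is not closed
```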
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving a set is bounded. Let $E = \{x \in\mathbb R^p : \sum_{i = 1}^p x_i^2/\alpha_i^2 \leq 1\}$
Prove that $E$ is closed and bounded.
To prove that $E$ is closed I used the fact that the boundary of the set $E$ equals $\{x \in\mathbb R^p : \sum_{i = 1}^p x_i^2/\alpha_i^2 = 1\}$ and this boundary is contained in $E$, so $E$ is closed.
However I do not know how to prove the fact that it is bounded.
| Try to get a lower bound on $\sum\frac{x_i^2}{\alpha_i^2}$ involving the Euclidean norm of $x$. For example:
$$\sum\frac{x_i^2}{\alpha_i^2}\geq\min_i\left(\frac1{\alpha_i^2}\right)||x||_2^2$$
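A numeric spot-check of this inequality (my own sketch, for illustration):

```python
import random

# Spot-check: sum x_i^2/a_i^2 >= (min_i 1/a_i^2) * ||x||_2^2.
# Hence x in E implies ||x||_2^2 <= max_i a_i^2, so E is bounded.
random.seed(1)
alphas = [1.5, 2.0, 0.5]
m = min(1 / a**2 for a in alphas)
for _ in range(1000):
    x = [random.uniform(-3, 3) for _ in alphas]
    lhs = sum(xi**2 / a**2 for xi, a in zip(x, alphas))
    assert lhs >= m * sum(xi**2 for xi in x) - 1e-12
```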
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2787757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |