Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Area of triangle using double integrals I have one (rather simple) problem, but I'm stuck and can't figure out what I'm constantly doing wrong. I need to calculate the area of the triangle with vertices at $(0,0)$, $(t,0)$, $(t,\frac{t}{2})$, in other words the triangle under the function $y=\frac{x}{2}$ for $x\in [0,t]$. I thought it is calculated with
$$ \int_0^t \int_0^\frac{t}{2} dudv$$
But it turns out that this equals $\frac{t^2}{2}$, while obviously the area is $\frac{t\times\frac{t}{2}}{2} = \frac{t^2}{4}$.
What am I doing wrong here?
I need to calculate it this way, not with single integral, or geometrically.
| The integral that you actually computed corresponds to the area of a rectangle. You should actually compute$$\int_0^t\int_0^{\frac x2}\,\mathrm dy\,\mathrm dx.$$
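As a sanity check, both integrals can be evaluated symbolically (a sympy sketch; the symbol names are mine):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', positive=True)

# The asker's integral: both bounds are constants, so it describes
# the rectangle [0, t] x [0, t/2], not the triangle.
rectangle = sp.integrate(1, (y, 0, t / 2), (x, 0, t))

# The corrected integral: the inner bound y = x/2 depends on x,
# which is exactly what carves out the triangle under y = x/2.
triangle = sp.integrate(1, (y, 0, x / 2), (x, 0, t))

print(rectangle)  # t**2/2
print(triangle)   # t**2/4
```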
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3025762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Number of solutions of the equation $\cos(\pi\sqrt{x-4})\cos(\pi\sqrt{x})=1$
Find the number of solutions of the equation $\cos(\pi\sqrt{x-4})\cos(\pi\sqrt{x})=1$
\begin{align}
2\cos(\pi\sqrt{x-4})&\cdot\cos(\pi\sqrt{x})=2\\\implies\cos\Big[\pi(\sqrt{x-4}+\sqrt{x})\Big]&+\cos\Big[\pi(\sqrt{x-4}-\sqrt{x})\Big]=2\\
\implies\cos\Big[\pi(\sqrt{x-4}+\sqrt{x})\Big]=1\quad&\&\quad\cos\Big[\pi(\sqrt{x-4}-\sqrt{x})\Big]=1\\
\pi(\sqrt{x-4}+\sqrt{x})=2n\pi\quad&\&\quad\pi(\sqrt{x-4}-\sqrt{x})=2m\pi\\
\sqrt{x-4}+\sqrt{x}=2n\quad&\&\quad\sqrt{x-4}-\sqrt{x}=2m\\
2\sqrt{x}=2(n-m)\quad&\&\quad2\sqrt{x-4}=2(n+m)\\
\sqrt{x}=n-m\quad&\&\quad\sqrt{x-4}=n+m\quad\&\quad x\geq4
\end{align}
How do I properly find the solutions ?
Or can I simply say
$$
x=(n-m)^2=(n+m)^2-4nm=x-4-4nm\implies nm=-1\\
\implies x=\bigg[n+\frac{1}{n}\bigg]^2\in\mathbb{Z}\implies n,\frac{1}{n}\in\mathbb{Z}\\
\implies n\neq0\implies n=1,x=4\text{ is the only solution}
$$
| I suppose $x$ is real in the following and that $\cos$ is the function $\cos:\Bbb R\to\Bbb R$. (There is another function $\cos:\Bbb C\to\Bbb C$; to use it I would first have to ask for the branch of the square root(s).)
The two cosine factors in the product (evaluated at those two arguments) must, in a correlated way, both take the value $1$ or both the value $-1$. Either way, $\sqrt x$ and $\sqrt {x-4}$ must be two integers of the same parity; in particular $x\ge 4$. Starting with the perfect square $2^2=4$, the gap between consecutive perfect squares is at least $3^2-2^2=9-4=5$, so two perfect squares cannot differ by exactly $4$ unless the smaller one lies below $4$. Hence the bigger perfect square $x$ (among $x$ and $x-4$) is at most $4$, and we get the solution $x=2^2=4$, with $x-4=0^2$. (There is no solution with $x=1^2$ or $x=0^2$, because then $x-4<0$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3025913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Expected values of squares Question A fair coin is tossed three times. Let Y be the random variable that denotes the square of the number of heads. For example, in the outcome HTH, there are two heads and Y = 4. What is E[Y]?
My answer:
possible outcomes to toss a coin three times : 0, 1, 2, 3
possible outcomes of Y : 0, 1, 4, 9
E[Y] = (1/6 * 0) + (1/6 * 1) + (1/6 * 4) + (1/6 * 9)
Is it ok? Thanks!
| In general for $n$ tosses
$$
\mathbf{E}[Y] = \sum_{i=0}^n{n \choose i}\left(\frac1{2}\right)^{\!\!n} i^2
$$
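For $n=3$ one can also just enumerate the eight equally likely outcomes; note the head-count probabilities are $1/8, 3/8, 3/8, 1/8$, not $1/6$ each. A quick sketch:

```python
from itertools import product

# All 2^3 = 8 equally likely outcomes of three fair coin tosses.
outcomes = list(product('HT', repeat=3))

# Y is the square of the number of heads; average Y over the outcomes.
expectation = sum(seq.count('H') ** 2 for seq in outcomes) / len(outcomes)
print(expectation)  # 3.0
```

This agrees with the general formula above for $n=3$: $(0\cdot 1 + 1\cdot 3 + 4\cdot 3 + 9\cdot 1)/8 = 3$.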
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
On the definition of local connectedness. Why don't we just define local connectedness the same way we define local compactness, that is, every point has a connected neighborhood?
On the wiki page, weak local connectedness and local connectedness are proved to be "almost identical", but it does not mention the motivation for the definition.
| I think the crucial fact is that (reference)
A space is locally connected if and only if for every open set U, the connected components of U (in the subspace topology) are open.
With the other definition (the weak one), this would be false.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Is this a sufficient proof for "For all $k \in\Bbb R$, if $k$ is odd, then $4k + 7$ is odd"? Can I just say $4k$ is obviously even, and even + odd is always odd, so the result is odd?
Or is that too simple, am I missing something?
| Yes, it's as simple as that. But if you are taking a test or something like that, you have to be sure that you can use the fact that "even + odd is odd" without proving it... otherwise just prove it! (Not so difficult.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Calculating convergence of a sum I have the sum:
$$S=\sum_{n=1}^\infty\frac{n!}{n^n}$$
and I am using D'Alembert's test for convergence which states for some sum:
$$\sum_{n=a}^\infty u_n\,\,(a\neq\pm\infty)$$
that it is convergent if:
$$\lim_{n\to\infty}\left|\frac{u_{n+1}}{u_n}\right|<1$$
so to begin with I know that:
$$L=\lim_{n\to\infty}\frac{\frac{(n+1)!}{(n+1)^{n+1}}}{\frac{n!}{n^n}}=\lim_{n\to\infty}\frac{n^n}{(n+1)^n}$$
Up to this point is fine but I am unsure if what I have done next is correct:
$$\ln(L)=\lim_{n\to\infty}n\ln\left(\frac{n}{n+1}\right)=\lim_{n\to\infty}\frac{\ln\left(1-\frac{1}{n+1}\right)}{\frac{1}{n}}$$
now I used the substitution $u=\frac{1}{n+1}$ and with rearrangement I believe I can obtain:
$$\ln(L)=\lim_{u\to0}\frac{(1-u)\ln(1-u)}{u}$$
now since when $u\to0$ both the top and bottom also tend to $0$ I can use L'Hopitals rule:
$$\ln(L)=\lim_{u\to0}\frac{-1-\ln(1-u)}{1}=\lim_{u\to0}-\bigl(1+\ln(1-u)\bigr)=-1$$
which now gives:
$$\ln(L)=-1\therefore L=e^{-1}\approx0.368<1$$
and so there is convergence
In the question it states that I will have to use:
$$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e$$
| Yes, that's correct. More simply, from here
$$\frac{n^n}{(n+1)^n}=\frac1{\left(1+\frac1n\right)^n} \to \frac1e$$
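A quick numerical sanity check of this ratio (a sketch):

```python
import math

def ratio(n):
    # u_{n+1}/u_n simplifies to n^n/(n+1)^n = 1/(1 + 1/n)^n.
    return (n / (n + 1)) ** n

for n in (10, 1_000, 100_000):
    print(n, ratio(n))

print("1/e =", 1 / math.e)  # 0.36787944...
```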
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $\det(A)=0$ a good indicator to say that a matrix is not invertible? In finite elements, for example, huge sparse (CRS) matrices (matrices with a lot of zeros) appear. Is it possible that Matlab (or some other program) calculates $\det(A)=0$ even though the matrix is invertible?
| Computing determinant of a matrix is quite sensitive to round-off. On top of that, it is easy to obtain a zero or infinite determinant as output of computational procedures due to floating precision underflow or overflow.
Consider, e.g., $A_n=0.1\times I_n$, where $I_n$ is the $n\times n$ identity matrix. We have $\det(A_n)=10^{-n}$. If $n$ is large enough (324 for double precision), standard techniques to compute the determinant will report you zero although the matrix $A_n$ itself is perfectly conditioned and invertible.
Conditioning of the matrix is a better measure of "(non)singularity" in numerical computations. It gives you information on the sensitivity of the matrix "inversion". This is the usual definition of the condition number. The higher the condition number, the more sensitive the solution of $Ax=b$ is to perturbations of the input and to round-off.
On top of that, you know how far the matrix is from the nearest singular matrix. If $\kappa(A)$ is the condition number of a nonsingular $A$ in some suitable norm (usually one of the three popular $p$-norms), we know that there is a perturbation $\delta A$ with $\|\delta A\|/\|A\|=1/\kappa(A)$ such that $A+\delta A$ is singular. The higher the condition number, the closer we are to a singular matrix. Eventually, if $1/\kappa(A)\approx\epsilon$, where $\epsilon$ is the machine precision (e.g., $\approx 10^{-16}$ for double precision floating point arithmetic), the matrix is considered numerically singular.
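The $A_n=0.1\times I_n$ example is easy to reproduce (a NumPy sketch; `n = 400` is just a convenient value past the underflow threshold):

```python
import numpy as np

n = 400
A = 0.1 * np.eye(n)

# det(A) = 10^(-400) underflows to zero in double precision ...
print(np.linalg.det(A))   # 0.0

# ... yet A is perfectly conditioned (kappa = 1) and trivially invertible.
print(np.linalg.cond(A))  # 1.0
```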
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Show that $\text{sin}(\bar z)$ is not holomorphic using uniqueness theorem. I want to show that $\text{sin}(\bar z)$ is not analytic using the uniqueness theorem.
The theorem essentially states that if we have a sequence $(z_n)$ such that a non-constant $f$ satisfies $f(z_n)=0$ for each $n$, then the function is not holomorphic if the limit of the sequence exists, even if it is not equal to any $z_n$.
The problem is $\text{sin}(\bar z)$ has zeros at $z=n\pi$. The theorem is directly of no help. What transform should be performed?
| If $\sin (\overline {z})$ is holomorphic then it must coincide everywhere with $\sin \, z$ because these two holomorphic functions are equal on the real line (which has limit points). This is a contradiction because these functions are not equal when $z=i$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $\mathbb E[X\mid Y]$ can be seen as a projection why $\mathbb E[X\mid Y]=\frac{\mathbb E[XY]}{\mathbb E[Y^2]}Y$ is not always true? We know that $\mathbb E[XY]$ is a scalaire product on $L^2(\mathbb P)$. In a book (an introduction to stochastic differential equation of Evans) page 30-31, it's written that $\mathbb E[X\mid Y]$ can be seen as the orthogonal projection of $X$ on $Y$. So, why $$\mathbb E[X\mid Y]=\frac{\mathbb E[XY]}{\mathbb E[Y^2]}Y,$$
not always true ?
| Well, if you take $X$ and $Y$ to be independent (with $\mathbb E[X]\neq 0$), then $\mathbb E[X\mid Y]=\mathbb E[X]$ and $\mathbb E[XY]=\mathbb E[X]\,\mathbb E[Y]$, so after dividing both sides by $\mathbb E[X]$ the equation reads $$1 = \frac{Y\mathbb{E}(Y)}{\mathbb{E}(Y^2)},$$ which seems a little bit weird.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Does exist a compact connected $K' \subset U$ such that $K \subset K'$, if $U$ is an open connected and $K$ in $U$ a compact? Let $U$ be an open and connected set in $\mathbb{R}^n$.
Suppose $K \subset U$ is a compact set. Is it true that there exists a compact and connected set $K' \subset U$ such that $K \subset K'$?
I know that there exists a compact set $K' \subset U$ such that $K \subset K'$.
But how can I guarantee that $K'$ is connected?
| For any $x\in K$ there is $\epsilon >0$ such that $B(x,\epsilon)\subset U$. Let $A_x=B(x,\epsilon/2)$. Then the closure of $A_x$ is a compact connected set contained in $U$.
Since $\{A_x, x\in K\}$ is an open cover of $K$, by compactness it has a finite subcover, say $A_1,\dots,A_n$. Choose $x_0\in U$ and connect each $A_i$ to $x_0$ with a path. This is possible because open connected subsets of $\mathbb R^n$ are path-connected.
Now set $B_i$ to be the union of $\overline{A_i}$ and the path from $A_i$ to $x_0$. It is a connected set, being a union of (two) connected sets with nonempty intersection. It is compact because it is a union of two compact sets.
Now let $K'$ be the union of the $B_i$'s. It is connected because it is a union of connected sets with non-empty intersection (they intersect at least at $x_0$), and it is compact because it is a finite union of compact sets.
By construction $K'\subset U$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3026934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Solve for the exponent of a matrix we discussed matrices in class and had the following task: Given
$$U=\begin{bmatrix}0 & 0 & 0 & 0 & 0 & 0\\1 & \frac{1}{15} & 0 & 0 & 0 &0\\0 & \frac{8}{15} & \frac{3}{15} & 0 & 0 & 0\\0 & \frac{6}{15} & \frac{9}{15} & \frac{6}{15} & 0 & 0\\ 0 & 0 & \frac{3}{15} & \frac{8}{15} & \frac{10}{15} & 0\\0 & 0 & 0 & \frac{1}{15} & \frac{5}{15} & 1\end{bmatrix},
\quad \overrightarrow{s}=\begin{bmatrix}1\\0\\0\\0\\0\\0\end{bmatrix},$$
solve for $x$ such that $U^x\overrightarrow{s}=\overrightarrow{s_x}$ where the last element (Row) of $\overrightarrow{s_x}$ should be equal or greater than 0.99.
We were told that the only way to get $x$ is by inserting random numbers and "searching" for it. By doing this, we actually found:
$$\overrightarrow{s_{15}}=\begin{bmatrix}0\\0\\0\\0\\0,0137\\0,9863\end{bmatrix},\quad \overrightarrow{s_{16}}=\begin{bmatrix}0\\0\\0\\0\\0,0091\\0,9909\end{bmatrix}.$$
So the answer is pretty much $x=16$, but is there no way to solve for $x$ instead of inserting random numbers until you find the answer?
Thanks for any answers
| Hint :
Diagonalize $U$ as $U=PDP^{-1}$; then $U^x = PDP^{-1} \cdots PDP^{-1} = P D^x P^{-1}$, where $D^x$ is the diagonal matrix whose entries are those of $D$ raised to the power $x$. (Here $U$ is lower triangular with six distinct diagonal entries, hence six distinct eigenvalues, so it is diagonalizable.) Using this you can solve for the last element being greater than $0.99$.
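For this particular task, simple iteration also settles it immediately (a NumPy sketch reproducing the search described above):

```python
import numpy as np

U = np.array([
    [0, 0,      0,      0,      0,       0],
    [1, 1 / 15, 0,      0,      0,       0],
    [0, 8 / 15, 3 / 15, 0,      0,       0],
    [0, 6 / 15, 9 / 15, 6 / 15, 0,       0],
    [0, 0,      3 / 15, 8 / 15, 10 / 15, 0],
    [0, 0,      0,      1 / 15, 5 / 15,  1],
])
s = np.array([1.0, 0, 0, 0, 0, 0])

# Repeatedly apply U until the last entry of U^x s reaches 0.99.
x, v = 0, s
while v[-1] < 0.99:
    v = U @ v
    x += 1
print(x, v[-1])  # x = 16, matching the search above
```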
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A counterexample to the epsilon-delta criterion for Absolute Continuity of Measures Let $p>0$, and let $\mu$ be a Borel measure on $[0,\infty)$ defined by $\mu(E)=\int_Ex^pd\lambda$ where $\lambda$ denotes Lebesgue measure. Show that $\mu$ is absolutely continuous with respect to $\lambda$, but $\mu$ does not meet the epsilon-delta criterion for absolute continuity, namely for every $\epsilon>0$ there's a $\delta>0$ such that $\mu(E)<\epsilon$ whenever $\lambda(E)<\delta$.
I've managed to prove that $\mu$ is absolutely continuous with respect to $\lambda$, but I'm not sure how to approach the second part. I basically need to find a sequence of sets $E_n$ (in $\mathcal B([1,\infty))$) such that $\lambda(E_n)\rightarrow 0$ but $\int_{E_n}x^pd\lambda$ does not go to zero. But I can't even think of such a sequence in the case where $p=1$, let alone the general case.
| Hopefully the case $p=1$ will help:
$$\lambda([a,b])= b-a$$
and
$$
\mu([a,b]) = \frac{1}{2}(b^2-a^2) = \frac{1}{2}(b-a)(b+a)
$$
Taking, for example, $a=3^n$ and $b=3^n+\frac{1}{2^n}$ will give you sets $E_n$ which you can show the desired properties of.
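Numerically, for $p=1$, the two measures of $E_n$ pull apart quickly (a small sketch):

```python
# E_n = [3^n, 3^n + 2^(-n)]: Lebesgue measure shrinks, mu blows up.
for n in range(1, 8):
    a, b = 3 ** n, 3 ** n + 2.0 ** -n
    lam = b - a                    # lambda(E_n) = 2^(-n) -> 0
    mu = 0.5 * (b ** 2 - a ** 2)   # mu(E_n) ~ (3/2)^n -> infinity
    print(n, lam, mu)
```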
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Recurrence and Fibonacci: $a_{n+1}=\frac {1+a_n}{2+a_n}$ For the recurrence relation
$$a_{n+1}=\frac {1+a_n}{2+a_n}$$
where $a_1=1$, the solution is $a_n=\frac {F_{2n}}{F_{2n+1}}$, where $F_n$ is the $n$-th Fibonacci number, according to the convention where $F_1=0, F_2=1,\cdots$
This can easily be proven by substituting the solution back into the recurrence relation, or alternatively by induction.
Can the solution be derived directly from the recurrence relation itself, i.e. not by substituting the solution into the recurrence, or by induction?
| Yes, it can be derived directly, assuming some familiarity with the Fibonacci numbers. I am using the initial conditions $F_1=F_2=1$ for the Fibonacci numbers, which implies that
$$
a_{n}=\frac{F_{2n-1}}{F_{2n}}
$$
There is a nice property involving functions of the form
$$
f(x) = \frac{ax+b}{cx+d}
$$
If you compose such an $f$ with another $g(x)=\frac{px+q}{sx+t}$ of the same form, the result is again a function of the same form, whose coefficients are given by the matrix product of the coefficient matrices $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ and $\begin{bmatrix}p&q\\s&t\end{bmatrix}$ of $f$ and $g$:
$$
f\circ g = \frac{(ap+bs)x+(aq+bt)}{(cp+ds)x+(cq+dt)}
$$
Letting
$$
f(x)=\frac{1+x}{2+x}
$$
this implies that
$$a_n=f\circ f\circ \dots \circ f\big(1\big),$$ with $n-1$ functions composed. Using the matrix connection, and the observation that substituting $x=1$ produces the fraction $\frac{a+b}{c+d}$, i.e. the coefficient matrix applied to $\begin{bmatrix}1\\1\end{bmatrix}$, this implies that $a_n$ is a fraction whose numerator and denominator are given by
$$
a_n=\frac{b_n}{c_n},\qquad \begin{bmatrix}b_n\\c_n\end{bmatrix}=\begin{bmatrix}1&1\\1&2\end{bmatrix}^{n-1}\begin{bmatrix}1\\1\end{bmatrix}
=\begin{bmatrix}0&1\\1&1\end{bmatrix}^{2(n-1)}\begin{bmatrix}1\\1\end{bmatrix}
$$
Finally, recall that $\begin{bmatrix}0&1\\1&1\end{bmatrix}$ is the "Fibonacci matrix" which satisfies
$$
\begin{bmatrix}0&1\\1&1\end{bmatrix}^n=\begin{bmatrix}F_{n-1}&F_n\\F_{n}&F_{n+1}\end{bmatrix},
$$
an identity which follows directly from the recurrence $F_n=F_{n-1}+F_{n-2}$ and the base cases $F_0=0,F_1=1$. Applying it with exponent $2(n-1)$ gives $b_n=F_{2n-3}+F_{2n-2}=F_{2n-1}$ and $c_n=F_{2n-2}+F_{2n-1}=F_{2n}$, as claimed.
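The claimed closed form is easy to verify with exact rational arithmetic (a sketch, using the answer's convention $F_1=F_2=1$):

```python
from fractions import Fraction

# Fibonacci numbers, indexed so that fib[k] = F_k with F_1 = F_2 = 1.
fib = [0, 1]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])

# Iterate a_{n+1} = (1 + a_n)/(2 + a_n) exactly, starting from a_1 = 1.
a = Fraction(1)
for n in range(1, 15):
    assert a == Fraction(fib[2 * n - 1], fib[2 * n])  # a_n = F_{2n-1}/F_{2n}
    a = (1 + a) / (2 + a)
print(a)
```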
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Group Isomorphism regarding Sylow Subgroups Suppose I am given two groups, say $G_1,G_2$, of the same order; I'm assuming they are non-commutative. Then their Sylow subgroups clearly have the same order. If I'm also given that the numbers of Sylow subgroups of these are the same, are $G_1,G_2$ isomorphic? I have always found this statement to be true when considering lower-order groups, but I can't prove it. Is it true, or are there counterexamples? Thanks for reading.
| This is easily seen to fail for abelian groups, since all abelian groups of a given order have the same number of Sylow subgroups. For a nonabelian example, consider two distinct nonabelian groups of order $p^n$ for some prime $p$ and integer $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$\mathbb{F}_{p^d}\subseteq\mathbb{F}_{p^n}$ if and only if $d$ divides $n$ I am trying to solve the following exercise from the book of Dummit and Foote (page 551).
Let $a>1$ be an integer. Prove for any positive integers $n,d$ that $d$ divides $n$ if and only if $a^d-1$ divides $a^n-1$. Conclude in particular that $\mathbb{F}_{p^d}\subseteq\mathbb{F}_{p^n}$ if and only if $d$ divides $n$.
I did the first part, and I know that for all $\alpha\in \mathbb{F}_{p^d}$, $\alpha^{p^d}=\alpha$. How can I apply the first part to the second? Any help is greatly appreciated. Thank you.
| If $\mathbb{F}_{p^d}\subseteq\mathbb{F}_{p^n}$, then $n=[\mathbb{F}_{p^n}:\mathbb{F_p}]=[\mathbb{F}_{p^n}:\mathbb{F}_{p^d}][\mathbb{F}_{p^d}:\mathbb{F_p}]=[\mathbb{F}_{p^n}:\mathbb{F}_{p^d}]d$, and so $d$ divides $n$.
If $d$ divides $n$, then $\mathbb{F}_{p^d}^\times$ is a subgroup of $\mathbb{F}_{p^n}^\times$ and so $\mathbb{F}_{p^d}\subseteq\mathbb{F}_{p^n}$. Here we use that $\mathbb{F}_{p^n}^\times$ is cyclic and that there is at most one finite field of a given size.
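The first part (the integer divisibility statement) can be spot-checked by brute force (a small sketch):

```python
# d | n  <=>  (a^d - 1) | (a^n - 1), checked for small a, d, n.
for a in (2, 3, 5, 10):
    for d in range(1, 13):
        for n in range(1, 13):
            assert (n % d == 0) == ((a ** n - 1) % (a ** d - 1) == 0)
print("all cases agree")
```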
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Invertibility of $(\textbf{A}^T\textbf{A}+\epsilon \textbf{I})$? I'm given a problem:
$\sigma_1 \geq \sigma_2 \geq ... \geq \sigma_r$ are the nonzero singular values of $\textbf{A}\in\mathbb{R}^{M\times N}$. If $\epsilon \neq 0$ is a real scalar, s.t. $|\epsilon| < \sigma^{2}_r$, show that $(\textbf{A}^T\textbf{A}+\epsilon \textbf{I})$ is invertible.
I found the resources Why is $A^TA$ invertible if $A$ has independent columns? and Matrix inverse of $A + \epsilon I$, where $A$ is invertible
But I'm not sure how useful they are. The first is the case where $A$ has independent columns, which is not necessarily true here, and the second presumes $A$ is invertible. I believe that $\textbf{A}^T\textbf{A}$ is invertible by definition, but I'm not sure if I can just plug $\textbf{A}^T\textbf{A}$ in everywhere that post uses $A$ and follow through. That also wouldn't help me understand the problem; it would just be blindly substituting into a solution.
Can anyone help me understand WHY $(\textbf{A}^T\textbf{A}+\epsilon \textbf{I})$ is invertible? And/or point me in the right direction to construct a proof of it?
| You can show that $A^\top A$ is positive semi-definite (specifically, that it has nonnegative eigenvalues $\sigma_1^2, \ldots, \sigma_r^2, 0, \ldots, 0$).
[It is not always invertible. Specifically, if some of its eigenvalues are zero, then it is not invertible.]
Knowing this fact about $A^\top A$, can you explicitly write down the eigenvalues of $A^\top A + \epsilon I$? What values of $\epsilon$ make this matrix invertible or non-invertible?
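Here is a NumPy sketch of that hint, with a random $A$ and the sign-flipped near-worst case $\epsilon = -0.9\,\sigma_r^2$ (the matrix sizes and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

sigma = np.linalg.svd(A, compute_uv=False)
sr2 = sigma.min() ** 2   # sigma_r^2 (this random A has full column rank)

# Eigenvalues of A^T A are the sigma_i^2 (and possibly 0), so for
# |eps| < sigma_r^2 every eigenvalue of A^T A + eps*I is nonzero.
eps = -0.9 * sr2
eigs = np.linalg.eigvalsh(A.T @ A + eps * np.eye(4))
print(np.abs(eigs).min() > 0)  # True: the shifted matrix is invertible
```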
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What do brackets mean for mod operation? I'm solving the equation 5 = (6 * 8 + 9 * b)(mod 10). I tried to use WolframAlpha, and it gives me the answer b = 3. But if I remove the brackets around mod, as in 5 = (6 * 8 + 9 * b) mod 10, it makes a plot and doesn't give me any real answer. I have no idea how to solve this without guessing b, so I assume there is some meaning behind those brackets?
| Okay.
$\pmod n$ means we are doing modulo arithmetic on equivalence classes.
$5 \equiv (6*8 + 9*b)\pmod {10}$ means to find which modulo class $b$ belongs to.
Perhaps a less confusing notation is $5 \equiv_{10} (6*8+9b)$. The $\pmod {10}$ isn't something you do. It's a statement about what "universe" of arithmetic you are working in. And we can solve it:
$5 \equiv_{10} (6*8+9b)$
$5 \equiv_{10} 48 + 9b$
$5 \equiv_{10} 8 + (-1)b$
$-3\equiv_{10} -b$
$3 \equiv_{10} b$
$b \equiv 3 \pmod {10}$.
And without the parentheses it means the similar but entirely different "give me the remainder" operation in the "universe" of regular old arithmetic.
$5 = (6*8 + 9b) \mod 10$ means:
The remainder of $(6*8+9b)\div 10$ is $5$
So $48 + 9b = 10n + 5$ for some number $n$
$9b = -43 + 10n$
$9b = 27 + 10(n-7)$
$b = 3 + 10\frac {n-7}9$ for some integer $\frac {n-7}9$. We don't actually care what $n$ is; the point is that whenever $b= 3 + 10k$ we will get that remainder.
So Wolfram is programmed to solve those in different manners. Even though in practice the results are very very similar.
Note: $5 = 6*8 + 9b \mod 10$ would be different.
$-43 = (9b \mod 10)$, but $0 \le 9b \mod 10 < 10$, which can't be $-43$, so this has no solutions. (Whereas $5 = (6*8 + 9b)\mod 10$ has infinitely many solutions, and $5 \equiv 6*8 + 9b \pmod{10}$ has one solution, an equivalence class representing infinitely many equivalent integers.)
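A brute-force check of the "give me the remainder" reading (a sketch):

```python
# All b in [0, 100) with remainder 5: exactly the residue class 3 (mod 10).
solutions = [b for b in range(100) if (6 * 8 + 9 * b) % 10 == 5]
print(solutions)  # [3, 13, 23, ..., 93]
```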
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3027798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why are open sets denoted $U$, $G$, and measurable sets $E$? Why are open sets usually denoted by $U$?
Is there a reference about this?
Sometimes open set uses the letter $G$, such as $G_{\delta} $ set.
I also wonder the meaning of $G$.
Additional question: Why do we use or who first used $E$ to denote a subset in measure theory?
| For the $G_\delta$ set and the $F_\sigma$ set, the letters come from the German word Gebiet (a region, i.e. an open set) and the French word fermé (closed), respectively. The subscripts $\delta$ and $\sigma$ come from Durchschnitt (German: intersection) and somme (French: sum, i.e. union).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3028215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
$\Bbb Q(\sqrt 2)$ and $\Bbb Q(\sqrt 3)$ are not isomorphic How do I prove that $\Bbb Q(\sqrt 2)$ and $\Bbb Q(\sqrt 3)$ are not isomorphic? I thought that they were, but I got this problem in Dummit and Foote, Section 14.1, Question 4. They are extensions over $\Bbb Q$ by the polynomials $x^2-2$ and $x^2-3$ respectively.
| Hint:
Take $\;w=a+b\sqrt2\in\Bbb Q(\sqrt2)\;$ s.t. $\;\phi w=\sqrt3\in\Bbb Q(\sqrt3)\;$ , with $\;\phi\;$ an isomorphism. This means that
$$3=\phi w^2=\phi(a^2+2b^2+2ab\sqrt2)=a^2+2b^2+2ab\phi\sqrt2\implies\phi\sqrt2\in\Bbb Q$$
and now get a contradiction...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3028343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the division symbol $\div$ acceptable based on international standards? The division symbol $\div$ is found on almost all calculators; however, I seldom see it in any formal writing. It seems people almost exclusively prefer $\frac{a}{b}$, $a/b$ or $ab^{-1}$ to $a\div b$. Is the symbol $\div$ considered outdated today? Is it all right to use it in formal writing (for example, to denote $ab^{-1}$ by $a\div b$ when $a$ and $b$ are elements of a field such that $b\neq 0$)?
Edit: Since the question was on hold since it is "opinion based", I would like to reask my question in the following way:
Is the usage of the symbol $\div$ in professional mathematical writing acceptable based on objective international standards, such as ISO 80000-2?
| The $\div$ symbol is outdated and should be avoided.
Quoting from Florian Cajori's book A History of Mathematical Notations.
(Volume I, Chapter III, Part B, Paragraph $243$ A critical Estimate of $:$ and $\div$ as Symbols)
In 1923 the National Committee on Mathematical Requirements voiced the following opinion:
"Since neither $\div$ nor $:$, as signs of division plays any part in business life, it seems proper to consider only the needs of algebra, and to make more use of the fractional form and (where meaning is clear) of the symbol $/$, and to drop the symbol $\div$ in writing algebraic expressions."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3028499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Can a linear isometry always be expressed in terms of an orthogonal matrix? Is the following true?
Let $S: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation such that $||S(v)|| = ||v|| \ \text{for all} \ v \in \mathbb{R}^n$, where $||\cdot||$ denotes the Euclidean norm. Then, for some $A \in O(n)$ and for all $v \in \mathbb{R}^n$, we have $S(v) = Av$. Here $O(n)$ denotes the set of all orthogonal $n \times n$ matrices.
If so, how can one prove it?
| Let $A$ be the matrix of $S$. Since $\langle u,v\rangle=\frac14\lVert u+v\rVert^2-\frac14\lVert u-v\rVert^2$, we have that $\langle Au,Av\rangle=\langle u,v\rangle$ for all $u,v$. Id est, $u^t(A^tA)v=(Au)^t(Av)=u^tv=u^tIv$ for all $u,v$. Since $e_i^tXe_j=X_{ij}$, that identity implies $A^tA=I$.
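A numeric illustration (sketch): any orthogonal matrix, e.g. the $Q$ factor of a QR decomposition, preserves norms, and conversely the argument above shows that a norm-preserving linear map satisfies $A^tA=I$.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # a concrete linear isometry

v = rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # True: norm preserved
print(np.allclose(Q.T @ Q, np.eye(4)))                       # True: Q is orthogonal
```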
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3028687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Weird thing about z-transform and difference Here is my doubt: we were told that the ROC of the Z-transform of the sum of two sequences is the intersection of the respective ROCs, since the sum is guaranteed to converge only where both summands do. Now I had to solve an exercise where I had to compute the z-transform of the difference of two sequences and establish the ROC. It looked like this: $u[n] - u[n-k]$, where $u[n]$ is the Heaviside step function and k=10. I tried in two different ways: the first was by using the linearity of the z-transform, and I got
$X(z) = \frac{z}{z-1} - z^{-10}\frac{z}{z-1} = \frac{z}{z-1} + z^{-10}\frac{z}{1-z}$
And it would look like we need to have for the first to be finite that $ \lvert z\rvert > 1$ and for the second one as well. Instead if I compute this by definition I get that $X(z) = \sum\limits_{k=0}^{9} z^{-k}$ that looks to be finite for each $z\neq 0$.
Why does it happen? Is there some particular configuration that caused this?
| You are right. The ROC of the sum of $u[n]$ and $-u[n-10]$ is the full plane except $z=0$. And the ROC of each summand is $|z|>1$.
we were told that the ROC of the Z-transform of the sum of two sequences is the intersection of the respective ROCs
Not quite. More precisely: given the ROCs of two signals, the ROC of the sum is "at least" the intersection of the two. That is the best you can say, if you are not given more data. But it can be larger. In other words: $ROC(x_1) \cap ROC(x_2) \subseteq ROC(x_1 +x_2)$
In terms of zeros and poles (if we are dealing with rational Z-transforms, as is usual, and as is the case here), if our sequences are right-sided (zero for $n<0$) the ROC is given by $|z|> |z_0|$, where $z_0$ is the outermost pole.
Hence, the ROC of the sum will depend on the largest pole of the sum (intersection of ROCs)... unless there is a pole cancellation. That's what happens here: the pole at $z=1$ disappears.
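The cancellation can be confirmed symbolically (a sympy sketch):

```python
import sympy as sp

z = sp.symbols('z')

# Each term has a pole at z = 1 (so ROC |z| > 1 individually) ...
X = z / (z - 1) - z ** (-10) * z / (z - 1)

# ... but the difference is a finite polynomial in 1/z: the pole cancels,
# so the ROC of the sum is the whole plane except z = 0.
finite_sum = sum(z ** (-k) for k in range(10))
print(sp.simplify(X - finite_sum))  # 0
```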
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3028842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Every inner product space is a metric space. Show that every inner product space is a metric space.
To show this, should I set the distance metric as $d(x,y) = \langle x-y,x-y\rangle$ and then show the properties of being a metric space, such as $d(x,y) = d(y,x)$, etc.?
If so, the point I do not understand is why we set the metric as $d(x,y) = \langle x-y,x-y\rangle$ (this metric is mentioned on Wolfram).
| That is wrong. It should be $d(x,y)=\sqrt{\langle x-y,x-y\rangle}$ because the map $x\mapsto\sqrt{\langle x,x\rangle}$ is a norm. And, whenever you have a norm $\lVert\cdot\rVert$, the map $(x,y)\mapsto\lVert x-y\rVert$ is a distance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3028948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Sigma notation for iterating through number of members of a set with constant expression Say I have a graph G and I want to sum some constant C (like the minimum degree of the graph) for every vertex. Can I use the following notation?
$$\sum_{x \in V(G)}C $$
Is this an appropriate way to use sigma notation?
There is a similar question here Notation of the summation of a set of numbers
but it doesn't account for the fact that the expression could be a constant. A person I am working with questioned it and I couldn't find any resources where it is used in this manner. I don't see why it would be improper because you could have an expression like $\sum_{i=1}^{n}C$.
Thank you
| Indeed, you are correct, you can use $\sum_{x \in S} C$ for any set $S$ and constant $C$, and since $C$ does not depend on $x$, this simplifies to
$$
\sum_{x \in S} C = C \cdot |S|,
$$
for any finite set $S$.
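A trivial sketch of this simplification:

```python
# Summing a constant C over a finite index set S gives C * |S|.
S = {'u', 'v', 'w', 'x'}
C = 7
total = sum(C for _ in S)
print(total)  # 28
assert total == C * len(S)
```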
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3029105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\lim_{x\to \infty}(\frac{x}{x-1})^x$ I am going over a solution given to solving the follow limit,
$$\lim_{x\to \infty}(\frac{x}{x-1})^x$$
The solution continues as follows,
Consider rewriting the function as $e^{\ln(\cdots)}$, i.e. $e^{x \ln\left(\frac{x}{x-1}\right)}$.
We can find the limit as follows,
$$\lim_{x\to \infty} x \ln(\frac{x}{x-1}) = \lim_{x\to \infty} \frac{\ln(\frac{x}{x-1})}{\frac{1}{x}}$$
The solution argues this is just $\frac{0}{0}$, and as such we can apply L'Hospital's rule. It continues on to find that this limit equals 1, so the limit of the original function is $e$.
However, I don't understand how that expression evaluates to $\frac{0}{0}$, in fact it seems to express
$$\frac{\ln(\frac{\infty}{\infty})}{0}$$
I assume the argument is that $\frac{\infty}{\infty}$ equals 1, and $\ln(1) = 0$, so we have $\frac{0}{0}$. But I thought we cannot evaluate $\frac{\infty}{\infty}$?
| L'Hopital is rarely the method of choice. In this case, let $y = x-1$. Then
$$
\left(\frac{x }{x-1}\right)^x
=
\left(\frac{y+1}{y}\right)^{y+1}
=
\left(1 + \frac{ 1}{y}\right)^{y }\left(1 + \frac{ 1}{y}\right)^{ 1}.
$$
Now you can recognize the limit as $y \to \infty$ as $e \times 1 = e$.
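A quick numerical check of the limit (my own addition, not part of the answer): evaluating $\left(\frac{x}{x-1}\right)^x$ at large $x$ approaches $e$.

```python
import math

# Numerically check lim_{x->inf} (x/(x-1))^x = e by evaluating
# the expression at increasingly large x.
for x in [1e2, 1e4, 1e6]:
    val = (x / (x - 1)) ** x
    print(x, val)

assert abs((1e6 / (1e6 - 1)) ** 1e6 - math.e) < 1e-4
```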
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3029295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
${x^4}$ as "tesseracting" a number $x$ So, this strange thought popped up into my head. You know how we call ${x^2}$ squaring due to the fact that what you're essentially doing is finding the area of a square with side length $x$? The same goes for cubing. Saying ${x^3}$ is really going to give you the volume of a cube with side length $x$. Now, what if I tried to coin a term that would take this -ing pattern to another level with ${x^4}$? This would technically give me the 4D volume, per se, of a tesseract(a 4D cube). So, couldn't this really be thought of as "tesseracting" a number?
In fact, I may have a deduction/thought. Saying ${x^n}$ could just be thought of as n-cubing a number. A square can be thought of as a cube in 2D. As in, a cube with only length and height, no depth. So, saying ${x^2}$ can be seen as 2-cubing, or squaring a number. The same goes for ${x^3}$. You are 3-cubing, or just cubing, a number. So it seems this n-cubing pattern holds. So, why not extend it to the tesseract? Why isn't ${x^4}$ just thought of as 4-cubing or "tesseracting"? The pattern I thought of would still hold.
Also, do you mind going easy with the criticisms? I'm not trying to sound like a you-know-what, but just keep in mind I'm extremely amateur and only in 11th grade. And since it seems like this is original to my thoughts, I'm a bit overexcited about this thought.
| I think this is a really good way to bring geometric intuition into exponentiation. I'm not sure "tesseract" is universally the term for the "4-cube," but "4-cubing" seems great.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3029414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Show that $\mathbb{Q}(\sqrt{3},\sqrt[4]{3}, \sqrt[8]{3},...)$ is algebraic over $\mathbb{Q}$ but not a finite extension.
Show that $\mathbb{Q}(\sqrt{3},\sqrt[4]{3}, \sqrt[8]{3},...)$ is algebraic over $\mathbb{Q}$ but not a finite extension.
I think for the algebraic part, since for every simple extension, each of those elements can be adjoined and each of these simple extensions has a minimal polynomial that cannot be reduced in $\mathbb{Q}$. For example, the simple extension $\mathbb{Q}(\sqrt{3}, \sqrt[4]{3})(\sqrt[8]{3})$ has minimal polynomial $x^{8}-3$. And since each simple extension has an increasingly large degree, the degree of the simple extensions over the previous extension gets larger for each attachment. But I am not sure how to express this formally...
For the infinite degree part, I was thinking because the set $\left \{\sqrt{3},\sqrt[4]{3}, \sqrt[8]{3},...\right \}$ is linearly independent?
| Any element $\alpha$ of $F$ is a rational expression in the numbers adjoined. As such, it can involve only finitely many of the $\sqrt[2^k]3$. If in such an expression, $\sqrt[2^n]3$ is the one with maximal exponent, then all the other $\sqrt[2^k]3$ (with $k\le n$) are powers of $\sqrt[2^n]3$. It follows that $\alpha\in\Bbb Q(\sqrt[2^n]3)$ and $\alpha$ is algebraic.
A different approach for the infinity part: By Eisenstein, the polynomial $X^{2^n}-3$ is irreducible. Hence $[F:\Bbb Q]\ge[\Bbb Q(\sqrt[2^n]3):\Bbb Q]= 2^n$, where $n$ is arbitrary. It follows that $[F:\Bbb Q]$ is infinite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3029568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Roots of a polynomial with holomorphic coefficients Let $f_1,f_2,\dots,f_n : \mathbb{D} \to \mathbb{C}$ be holomorphic functions and consider the polynomial
$$ w^n + f_1(z)w^{n-1} + \dots + f_n(z). $$
Suppose, I happen to know that
*
*For each $z$, the roots of the above polynomial are all in $\mathbb{D}$.
*For each $z$, one of the roots is an $n$-th root of $z$.
Given these two conditions, is it true that for each $z$, all the roots of the above polynomial are nothing but $n$-th roots of $z$?
| It is true. Here's the proof. Let $\zeta=e^{\frac{2\pi i}{n}}$ be the $n$-th root of unity. Note that by the assumption, for each $z\in \mathbb{D}$, there is $k\in \{0,1,\ldots,n-1\}$ such that
$$
p(z^n, z\zeta^k) = 0
$$ where $$
p(z,w) = w^n + f_1(z)w^{n-1} + \cdots + f_n(z).
$$ Let $D_k$ be the set of all $z\in \mathbb{D}$ such that $p(z^n, z\zeta^k) = 0$. Then each $D_k$ is closed in $\mathbb{D}$ and we have
$$
\mathbb{D} = \bigcup_{0\leq k\leq n-1} D_k.
$$ Note that by pigeonhole principle, one of $D_k$ has $0$ as its limit point. Then Identity theorem implies that for some $k$,
$$
p(z^n, z\zeta^k) \equiv 0\quad\cdots(*),
$$ for all $z\in\mathbb{D}$. Note that change of variable $z\mapsto z\zeta^j$ yields
$$
p(z^n, z\zeta^{k+j})=0,\quad\forall j,
$$ and hence
$$
p(z^n, z\zeta^{j})=0, \quad \forall j=0,1,\ldots, n-1.
$$ It is saying that $p(z,w)=0$ for $\textbf{all}$ roots of $w^n = z$, giving us the desired result that
$$
p(z,w) = w^n -z.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3029748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Spectrum of the product of two bounded operators
If $T$ is a non-invertible normal operator and $S$ is a bounded operator, why do $TS$ and $ST$ have the same spectrum?
Proof: Assume that $T$ is a non-invertible normal operator; then $0 \in \sigma(T)$. Since $0$ is in the approximate point spectrum of $T$, it is clear that $0 \in \sigma(ST).$ Since $T$ is normal, $\|Tx \| = \|T^*x \|$ holds for any vector $x$. Hence $0$ is in the approximate point spectrum of $T^*$, and hence $0 \in \sigma(S^*T^*) = \sigma((TS)^*) = \overline{\sigma(TS)}$. Hence $0 \in \sigma(TS)$.
Recall the following definition:
Definition: Let $T$ be a bounded linear operator of a complex Hilbert space $\mathcal{H}$. The approximate point spectrum of $T$ is the set of all values $\lambda \in \mathbb{C}$ such that there exists a sequence of unit vectors $(x_n)_n\subset \mathcal{H}$ so that $\|(T-\lambda)x_n\|\to 0$ as $n\to \infty$.
| Without the assumption of normality, we can at least prove the following:
If $T,S\in \mathcal{B}(\mathcal{H})$, then $\{0\}\cup\sigma(ST)=\{0\}\cup\sigma(TS)$.
(This holds more generally in unital Banach algebras.) The second case is saying that $0$ is already in $\sigma(ST)$ and $\sigma(TS)$.
EDIT
This follows from the general statement:
If $T\in \mathcal{B}(\mathcal{H})$ is normal, then $\sigma(T)=\sigma_{ap}(T)$.
Indeed, fix $\lambda\in\sigma(T)$. If $\lambda$ is an eigenvalue, we are done, so assume it is not, i.e., assume $\ker(T-\lambda)=\{0\}$. Since $T$ is normal, we have $\ker(T^*-\overline\lambda)=\{0\}$, and thus
$$\overline{\operatorname{Range}(T-\lambda)}=\ker(T^*-\overline\lambda)^\perp=\mathcal{H}.$$
Hence $\operatorname{Range}(T-\lambda)$ is a proper dense subspace of $\mathcal{H}$. Thus $T-\lambda$ is not bounded below, and the result follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3029923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Show that $\lim_{n\to\infty}\frac{\ln(n!)}{n} = +\infty$
Show that
$$
\lim_{n\to\infty}\frac{\ln(n!)}{n} = +\infty
$$
The only way i've been able to show that is using Stirling's approximation:
$$
n! \sim\sqrt{2\pi n}\left(\frac{n}{e}\right)^n
$$
Let:
$$
\begin{cases}
x_n = \frac{\ln(n!)}{n}\\
n \in \Bbb N
\end{cases}
$$
So we may rewrite $x_n$ as:
$$
x_n \sim \frac{\ln(2\pi n)}{2n} + \frac{n\ln(\frac{n}{e})}{n}
$$
Now using the fact that $\lim(x_n + y_n) = \lim x_n + \lim y_n$ :
$$
\lim_{n\to\infty}x_n = \lim_{n\to\infty}\frac{\ln(2\pi n)}{2n} + \lim_{n\to\infty}\frac{n\ln(\frac{n}{e})}{n} = 0 + \infty
$$
I'm looking for another way to show this, since Stirling's approximation has not been introduced at the point where i took the exercise from yet.
| The Cesaro-Stolz criterion is your easiest and cleanest way out of this. It states that given sequences $x, y \in \mathbb{R}^{\mathbb{N}}$ such that $y$ is strictly increasing and unbounded and the sequence of successive increments converges in the extended real line
$$\lim_{n \to \infty} \frac{x_{n+1}-x_{n}}{y_{n+1}-y_{n}}=t \in \overline{\mathbb{R}}$$
then $$\lim_{n \to \infty}\frac{x_{n}}{y_{n}}=t$$
You can apply this to $x=(\mathrm{ln}(n!))_{n \in \mathbb{N}}$ and $y=(n)_{n \in \mathbb{N}}$.
A similar argument would rely on a version of the ratio criterion: if $x \in (0, \infty)^{\mathbb{N}}$ is a sequence of strictly positive reals such that
$$\lim_{n \to \infty} \frac{x_{n+1}}{x_n}=a \in [0, \infty]$$
then
$$\lim_{n \to \infty} \sqrt [n]{x_{n}}=a$$
This criterion itself can be proved by the Cesaro-Stolz criterion (there are also other methods) and you can apply it to conclude that
$$\sqrt[n]{n!} \xrightarrow{n \to \infty} \infty$$
as $\frac{(n+1)!}{n!}=n+1 \xrightarrow{n \to \infty} \infty$.
In the same vein of employing ratios, one can settle the convergence of the sequence
$$\left(\frac{\sqrt[n]{n!}}{n}\right)_{n \in \mathbb{N}^{*}}=\left(\sqrt[n]{\frac{n!}{n^n}}\right)_{n \in \mathbb{N}^{*}}$$
by studying the sequence of successive ratios:
$$ \frac{(n+1)!}{(n+1)^{n+1}} \cdot \frac{n^n}{n!}=\left(\frac{n}{n+1}\right)^{n} \xrightarrow{n \to \infty} \frac{1}{\mathrm{e}}$$
Hence,
$$\sqrt[n]{n!}=n \cdot \frac{\sqrt[n]{n!}}{n} \xrightarrow{n \to \infty} \infty \cdot \frac{1}{\mathrm{e}}=\infty$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
$a = d$ implies $a^b = d^b$
Prove that $a = d$ implies $a^b = d^b$, where $a, d$ are arbitrary
nonnegative integers and $b$ is any positive integer.
If I could use division I think it could be something like that:
$a^b / d^b = a ^{b-b} = a^0 = 1$ (assuming $a = d$), but I'm trying to figure out how to prove this using only multiplication and addition properties (natural numbers).
Here's my idea:
If $a = d$ then there exists an integer $k$ such that $x = ak$ and $x = dk$; that also implies $(a^b)k = (d^b)k$ and $a^b = d^b$ by using the cancellation law of multiplication.
| There must be something I don't understand. Are you sure you have asked the question you intended?
The equals sign in the assumption $a=d$ means that "$a$" and "$d$" are essentially just different names for the same number. So you can substitute one for the other in any formula. There is nothing to prove.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that $r(A)=\operatorname{tr}(A^2)$ Let $A\in M_n(\mathbb{C})$. Show that if $A^3=A$, then $r(A)=\operatorname{tr}(A^2)$.
Since $A^3=A$, the possible eigenvalues are $0,1,-1$. I don't know from here how to compute the rank of $A$.
Edited
Since the eigenvalues of $A$ are among $0,1,-1$, the eigenvalues of $A^2$ are among $0,1$, so $r(A^2)=\operatorname{tr}(A^2)$. Now we have to show that $r(A^2)=r(A)$, where the rank of $A^2$ equals the number of non-zero eigenvalues.
| Since $A^3=A$, the minimal polynomial of $A$ divides $x^3-x$, which has distinct roots, so $A$ is diagonalizable. The rank is then the number of nonzero eigenvalues, and as their squares are $0$ or $1$, this number is just the same as the sum of the squares, i.e. $\operatorname{tr}(A^2)$.
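As a sanity check (my own, with an arbitrary random construction): matrices satisfying $A^3=A$ can be generated by conjugating a diagonal matrix with entries in $\{-1,0,1\}$, and the rank can be compared with $\operatorname{tr}(A^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a matrix with A^3 = A: conjugate a diagonal matrix whose
# entries are drawn from {-1, 0, 1} by a random invertible P.
n = 6
D = np.diag(rng.choice([-1, 0, 1], size=n).astype(float))
P = rng.normal(size=(n, n))          # almost surely invertible
A = P @ D @ np.linalg.inv(P)

assert np.allclose(A @ A @ A, A)                     # A^3 = A
rank = np.linalg.matrix_rank(A)
tr = np.trace(A @ A)
assert rank == round(tr)                             # r(A) = tr(A^2)
print(rank, tr)
```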
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating the limit using Taylor Series We're asked to find the following limit by using Taylor expansions $$\lim_{x\to{}0}\frac{e^{3x}-\sin(x)-\cos(x)+\ln(1-2x)}{-1+\cos(5x)}$$
My Attempt:
Expressing $e^{3x}$, $\sin(x)$, $\cos(x)$, $\ln(1-2x)$ and $\cos(5x)$ in their respective taylor expansions yielded the following monstrous fraction, https://imgur.com/a/xGyfIyL (Picture size too big to be uploaded here for some reason, plus fraction too large to be expressed in the space given :/) But anyways, I can't seem to factorize this thing and evaluate the limit as $x\to{}0$, any help would be appreciated.
| We need only quote the numerator and denominator up to $x^2$ terms: $$\lim_{x\to 0}\frac{1+3x+\color{blue}{9x^2/2}-x-1+\color{blue}{x^2/2}-2x\color{blue}{-2x^2}+O(x^3)}{-1+1\color{blue}{-25x^2/2}+O(x^3)}.$$You'll find only $x^2$ terms survive in each.
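Carrying the expansion through (a completion of mine, not spelled out in the answer): the numerator is $3x^2+O(x^3)$ and the denominator is $-\frac{25}{2}x^2+O(x^4)$, so the limit is $-\frac{6}{25}$. A quick numerical check agrees:

```python
import math

def f(x):
    num = math.exp(3*x) - math.sin(x) - math.cos(x) + math.log(1 - 2*x)
    den = -1 + math.cos(5*x)
    return num / den

# The surviving x^2 terms give 3x^2 / (-25x^2/2) = -6/25 = -0.24.
print(f(1e-4))
assert abs(f(1e-4) - (-6/25)) < 1e-2
```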
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
de Rham cohomology of doubly punctured torus Let $T^2=S^1\times S^1$. I'd like to know all de Rham cohomology groups of $M=T^2-\{a,b\}$ but I couldn't find a result. So I want to compute it and I'm thinking of using Mayer Vietoris sequence. I need two open sets whose union covers $M$. I'm having difficulty choosing these open sets. Any help is appreciated.
| It's easy to visualize by looking at the fundamental domain of the torus:
$U$ is a neighbourhood of $x$, $V$ is a neighbourhood of $y$. Removing $x$ and $y$ from the torus is homotopy equivalent to removing the whole neighbourhoods $U$ and $V$. Further we can choose $U,V$ so large, that they fill the entire triangle they lie in.
So the fundamental domain is homotopy equivalent to the union of the boundary of the square with the diagonal.
But some parts of the boundary are identified with each other.
Doing the identifications, we obtain the wedge sum of three circles (their common point is $A$). One circle corresponds to the left side = right side of the bundary, one circle corresponds to the top side = bottom side and one circle corresponds to the diagonal.
More precisely, first identifying top with bottom, we get two distinct edges connecting the bottom/top left vertex with the bottom/top right vertex (corresponding to the bottom/top edge and the diagonal) and two circles (corresponding to the left and right edges).
Then we identify the left and right edges. The two circles we already had get identified, and as the bottom/top left edge gets identified with the bottom/top right edge, the edges which previously connected these two vetices become circles.
So in the end we get three circles which are connected in their common point $A$, the bottom/top now left/right vertex.
I hope the detailled description did not make things more confusing. I probably should've also drawn this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can every connected reductive group over a char $p$ field be defined over $\mathbb F_p$? If I have a connected reductive group $G$ over a field with characteristic $p>0$, can it always be defined over $\mathbb F_p$? For split groups like $GL_n, GSp_{2n}$ it's trivial, how about general case?
| No. Here's a simple example. Let $T:=\mathsf{Res}^1_{\mathbb{F}_{p^4}/\mathbb{F}_{p^2}}\mathbb{G}_{m,\mathbb{F}_{p^4}}$. Then, $T$ is a non-split one-dimensional torus over $\mathbb{F}_{p^2}$ which does not have a model over $\mathbb{F}_p$. Indeed, to say that $T$ has a model over $\mathbb{F}_p$ would mean that there was some torus $T'$ over $\mathbb{F}_p$ such that $T'_{\mathbb{F}_{p^2}}\cong T$. But evidently $\dim T'=1$ and, up to isomorphism, the only one-dimensional tori over $\mathbb{F}_p$ are $\mathsf{Res}^1_{\mathbb{F}_{p^2}/\mathbb{F}_p}\mathbb{G}_{m,\mathbb{F}_{p^2}}$ and $\mathbb{G}_{m,\mathbb{F}_p}$. Both of these split over $\mathbb{F}_{p^2}$ so can't be models of $T$.
EDIT: To be clear, I was answering the question in the first sentence of the body of the post. The answer to the question in the title is yes, as Tobias Kildetoft pointed out in the above comments. Every group over $\overline{\mathbb{F}_p}$ is split, and every split group has a model over $\mathrm{Spec}(\mathbb{Z})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
If the square of every element of a ring is in the center, must the ring be commutative? Let $R$ be a ring with identity such that the square of any element belongs to the center of $R$. Is it necessary true that $R$ is commutative?
(I can show that for any $x,y\in R$, $2(xy-yx) =0 $ but I cannot prove commutativity of $R$.)
| Here's a counterexample. Consider the $\mathbb{F}_2$-algebra $R$ generated by two elements $x,y$ modulo relations that $x^2=y^2=0$ and every word of length $3$ formed by $x$ and $y$ is $0$. Explicitly, $R$ has $\{1,x,y,xy,yx\}$ as a basis and any product of basis elements that would give a word not in this set is $0$. Since $xy\neq yx$, $R$ is not commutative.
I now claim that the square of every element of $R$ is central. Indeed, for an element $r=a+bx+cy+dxy+eyx\in R$ ($a,b,c,d,e\in\mathbb{F}_2$) we have
$$r^2 = a^2 + bcxy + bcyx.$$ To show that such an element is central, it suffices to show that $xy+yx$ is central. But this is trivial, since $xy+yx$ annihilates both $x$ and $y$ on both sides.
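The five-dimensional algebra above is small enough to verify exhaustively by machine. The sketch below is my own encoding of the relations (not part of the answer): elements are sets of basis words, since coefficients live in $\mathbb{F}_2$.

```python
from itertools import combinations

BASIS = ["", "x", "y", "xy", "yx"]  # basis words: 1, x, y, xy, yx

def word_mul(u, v):
    """Multiply two basis words; words of length >= 3 and words
    containing xx or yy are zero by the defining relations."""
    w = u + v
    if len(w) >= 3 or "xx" in w or "yy" in w:
        return None  # the zero element of the algebra
    return w

def mul(r, s):
    """Multiply two elements, each a frozenset of basis words
    (a set records which F_2-coefficients equal 1)."""
    out = set()
    for u in r:
        for v in s:
            w = word_mul(u, v)
            if w is not None:
                out ^= {w}  # adding 1 mod 2 toggles membership
    return frozenset(out)

x, y = frozenset({"x"}), frozenset({"y"})
assert mul(x, y) != mul(y, x)  # the algebra is not commutative

# Every element r is a sum of a subset of the basis; check r^2 central.
for k in range(6):
    for subset in combinations(BASIS, k):
        r = frozenset(subset)
        r2 = mul(r, r)
        assert mul(r2, x) == mul(x, r2) and mul(r2, y) == mul(y, r2)
print("r^2 is central for all 32 elements")
```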
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
How to prove that there is a differentiable function $f$ such that $[f(x)]^{153}+f(x)+x=0$ for all $x$ Prove that there is a differentiable function $f$ such that $[f(x)]^{153}+f(x)+x=0$ for all $x$. Furthermore, find $f'$ in terms of $f$.
To me I just write $y$ instead of $f(x)$ and find that $x=-(y^{153}+y)$. If I just differentiate that, would that be the solution of this question? If not, what should I do to answer it appropriately?
| HINT:
If $x = -[f(x)]^{153} - f(x)$,
then $g(x) = -x^{153} - x$ is the inverse function of $f(x)$.
Then if you prove $g(x)$ is differentiable, what does it tell about its inverse function ?
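Following the hint, implicit differentiation of $[f(x)]^{153}+f(x)+x=0$ gives $f'(x) = -1/(153\,f(x)^{152}+1)$; this closed form is my own completion of the hint. A numerical sketch with plain bisection:

```python
def f(x, tol=1e-14):
    """Solve y^153 + y + x = 0 for y by bisection; the map
    y -> y^153 + y is strictly increasing, so the root is unique."""
    lo, hi = -abs(x) - 1, abs(x) + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**153 + mid + x > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x0 = 0.5
y0 = f(x0)
assert abs(y0**153 + y0 + x0) < 1e-12          # f satisfies the equation

# Compare a central-difference derivative with the implicit formula.
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
formula = -1 / (153 * y0**152 + 1)
assert abs(numeric - formula) < 1e-5
print(y0, numeric, formula)
```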
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3030957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that the set-theoretic difference operation $\setminus$ cannot be defined through $\cap$ and $\cup$ How does one go about proving that the set-theoretic difference operation $\setminus$ cannot be defined through the operations $\cap$ and $\cup$?
My thoughts: I first assumed $A$ and $B$ are two non-disjoint non-empty sets since if they are disjoint and non-empty, then we have that $A\setminus B= A =A\cap A=(A\cap B) \cup A$. Therefore we have defined, in this case, $\setminus$ in terms of $\cap$ and $\cup$.
Next, I drew three Venn diagrams for $A\cap B $, $A\cup B $ and $A\setminus B $ and made the observation that $A \setminus B$ involves an exclusion of a part of $A$. When I looked at the definitions, I could see this clearer:
$$A\cap B = \{x\mid (x\in A) \land (x\in B)\}$$
$$A\cup B = \{x\mid (x\in A) \lor (x\in B)\}$$
$$A\setminus B = \{x\mid (x\in A) \land \mathbf{(x\notin B)}\}$$
From this, I decided to conclude that since the intersection and union operations have the condition that $(x\in B)$ whereas the set difference operation requires $(x\notin B)$, we cannot define set difference through the operations union and intersection only.
This is as far as I could go. How can I prove this formally?
| All you need is a counterexample.
Let $A = B = \{ 0 \}$. Then all of the sets
$$A,~ B,~ A \cup A,~ A \cap A,~ A \cup B,~ A \cap B,~ B \cup A,~ B \cap A,~ B \cup B,~ B \cap B$$
are equal to $\{ 0 \}$, and so any set built out of $A$, $B$ and the operations $\cup$ and $\cap$ is equal to $\{ 0 \}$.
But $B \setminus A = \varnothing \ne \{ 0 \}$, and so the set difference operator $\setminus$ can't be expressed in terms of $\cup$ and $\cap$ alone.
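The counterexample can also be checked exhaustively (a tiny script of my own): the closure of $\{A,B\}$ under $\cup$ and $\cap$ never leaves $\{\{0\}\}$.

```python
# With A = B = {0}, compute the closure of {A, B} under union and
# intersection: every set we can build this way is still {0}.
A = B = frozenset({0})

closure = {A, B}
while True:
    new = {op(s, t) for s in closure for t in closure
           for op in (frozenset.union, frozenset.intersection)}
    if new <= closure:
        break
    closure |= new

assert closure == {frozenset({0})}       # only {0} is reachable
assert B - A == frozenset()              # but B \ A is empty
print(closure, B - A)
```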
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does a sequence of random variables constructed in a certain manner converge in distribution to a Gaussian? Let $\{X_n\}_{n \in \mathbb{N}}$ be a sequence of of IID random variables taken for simplicity with mean zero and variance one.
The Central Limit Theorem give us that
$$
\frac{X_1 + \dots + X_n}{\sqrt{n}} \xrightarrow {d} N\left(0,1\right)
$$
If one constructs a new sequence $\{Y_n\}_{n \in \mathbb{N}}$ from the first one given by the square of the sum of two consecutive terms, i.e.
$$Y_1 = (X_1 + X_2)^2, Y_2 = (X_2 + X_3)^2, \dots , Y_n = (X_n + X_{n+1})^2 $$
do there exist two sequences $\{\mu_n\}_{n \in \mathbb{N}}$ and $\{\sigma_n\}_{n \in \mathbb{N}}$ s.t.
$$\frac{1}{\sigma_n} \sum_{i=1}^n ( Y_i - \mu_i) \rightarrow N(0,1) $$
I was thinking of using some Lyapunov type central limit theorem to prove this but there is an obvious (weak) dependence in the sequence. Is it possible to show this or is it not true?
| Let $\left(X_i\right)_{i\geqslant 1}$ be an i.i.d. sequence and let $f\colon \mathbb R^2\to \mathbb R$ be a function such that such that the random variable $Y_i:=f(X_i,X_{i+1})$ is centered and square integrable.
Let $n$ be a fixed integer and $q\in\left\{1,\dots,n\right\}$. We write
\begin{align}
\sum_{i=1}^nY_i&= \sum_{i=1}^{q\left\lfloor \frac nq\right\rfloor}Y_i+
\sum_{i=q\left\lfloor \frac nq\right\rfloor+1}^nY_i\\
&= \sum_{k=1}^{\left\lfloor \frac nq\right\rfloor}
\sum_{i=(k-1)q+1}^{kq}Y_i+\sum_{i=q\left\lfloor \frac nq\right\rfloor+1}^nY_i\\
&= \sum_{k=1}^{\left\lfloor \frac nq\right\rfloor}
\sum_{i=(k-1)q+2}^{kq}Y_i+\sum_{k=1}^{\left\lfloor \frac nq\right\rfloor}Y_{(k-1)q+1}+\sum_{i=q\left\lfloor \frac nq\right\rfloor+1}^nY_i.
\end{align}
Denoting
$Z^q_k:= \sum_{i=(k-1)q+2}^{kq}Y_i$, the sequence $\left(Z^q_k\right)_{k\geqslant 1}$ is i.i.d., hence we could apply the central limit theorem. The remaining problem is to make the contribution of $n^{-1/2}\sum_{k=1}^{\left\lfloor \frac nq\right\rfloor}Y_{(k-1)q+1}$ small; this can be arranged by choosing $q$ depending on $n$ and applying the central limit theorem for arrays.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find all positive integers $a$ and $b$ such that $(1 + a)(8 + b)(a + b) = 27ab$. Here's the problem I'm having difficulties with:
Find all positive integers $a$ and $b$ such that $$(1 + a)(8 + b)(a + b) = 27ab\,.$$
Does anyone have an idea how to do this? Any detailed solution is welcome! :)
| Using Hölder's inequality,
$$27ab = (a+1)(8+b)(b+a) \geqslant \left(2\sqrt[3]{ab}+\sqrt[3]{ab} \right)^3=27ab$$
Hence we are looking for the equality case for Hölder, which is when $a:8:b=1:b:a \implies (a, b)=(2, 4)$.
In fact, this is the only solution among positive reals, not just positive integers.
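A brute-force search (my own check) confirms $(2,4)$ is the only solution among small positive integers:

```python
# Brute-force search for positive integer solutions of
# (1 + a)(8 + b)(a + b) = 27ab in a small range.
solutions = [(a, b)
             for a in range(1, 201)
             for b in range(1, 201)
             if (1 + a) * (8 + b) * (a + b) == 27 * a * b]

assert solutions == [(2, 4)]
print(solutions)  # [(2, 4)]
```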
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Using Wallis' product to derive $\sqrt\pi$ Recall Wallis' product:
$$\lim_{n\to\infty}\Big(\frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdots\frac{2n}{2n-1}\cdot\frac{2n}{2n+1}\Big)=\frac{\pi}{2}$$
We have to show that $$\lim_{n\to\infty}\frac{(n!)^22^{2n}}{(2n)!\sqrt n}=\sqrt\pi$$
The hint I got was to use $$P_n=\frac{(n!)^42^{4n}}{[(2n)!]^2(2n+1)}$$
which is just simply the inside of the limit in Wallis' product, multiplied and divided by $2\cdot2\cdot4\cdot4\cdots(2n)\cdot(2n)$ alternatively. How do I use $P_n$ to derive $\sqrt\pi\:$?
| As you noticed, $P_n \to \frac{\pi}{2}$ since it's the inside of limit of the L.H.S of the Wallis Product Formula multiplied by $1$. Since continuous maps preserve limits, this implies $\sqrt{2P_n} \to \sqrt{\pi} $ and note that
$$\lim_{n \to \infty}\sqrt{2P_n} = \lim_{n \to \infty}\frac{\sqrt{2}(n!)^2 2^{2n}}{(2n)!\sqrt{2n+1 }} \\ = \lim_{n \to \infty} \frac{(n!)^22^{2n}}{(2n)!\sqrt{n+ \frac{1}{2}}} \\ = \lim_{n \to \infty} \frac{(n!)^22^{2n}}{(2n)! \sqrt{n}}$$
where the last equality holds because $\sqrt{n+\frac{1}{2}}\,\big/\sqrt{n} \to 1$ as $n\to\infty$.
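A numerical check of the derived limit (my own addition, computed in log space via `lgamma` to avoid factorial overflow):

```python
from math import lgamma, log, exp, sqrt, pi

def ratio(n):
    """(n!)^2 * 2^(2n) / ((2n)! * sqrt(n)), computed in log space."""
    return exp(2 * lgamma(n + 1) + 2 * n * log(2)
               - lgamma(2 * n + 1) - 0.5 * log(n))

for n in [10, 1_000, 1_000_000]:
    print(n, ratio(n))

assert abs(ratio(1_000_000) - sqrt(pi)) < 1e-5
```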
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Positive divisors of n = $2^{14} \cdot 3^9 \cdot 5^8 \cdot 7^{10} \cdot 11^3 \cdot 13^5 \cdot 37^{10}$ How do I find the positive divisors of $n$ that are perfect cubes and that are multiples of $2^{10} \cdot 3^9 \cdot 5^2 \cdot 7^5 \cdot 11^2 \cdot 13^2 \cdot 37^2$?
The answer is (1)(1)(2)(2)(1)(1)(3) = 12
I don't understand though because I would have done something like:
2: [(14-10)/3]+1 = 2 (taking the floor)
3: [(9-9)/3]+1 = 1
5: [(8-2)/3]+1 = 3
7: [(10-5)/3]+1 = 2
11: [(3-2)/3]+1 = 1
13: [(5-2)/3]+1 = 2
37: [(10-2)/3]+1 =3
2*1*3*2*1*2*3
| For $m$ to be a divisor of $n$, $m$ must be of the form $2^a \cdot 3^b \cdot 5^c \cdot 7^d \cdot 11^e \cdot 13^f \cdot 37^g$, where $0 \leq a \leq 14, 0 \leq b \leq 9, 0 \leq c \leq 8, 0 \leq d \leq 10, 0 \leq e\leq 3, 0\leq f \leq 5, 0 \leq g \leq 10$.
Now we want $m$ to be a multiple of the number given; that means $m$ must be of the form $2^a \cdot 3^\color{red}{9} \cdot 5^c \cdot 7^d \cdot 11^e \cdot 13^f \cdot 37^g$, where $\color{red}{10} \leq a \leq 14, \color{red}{2} \leq c \leq 8, \color{red}{5} \leq d \leq 10, \color{red}{2} \leq e\leq 3, \color{red}{2}\leq f \leq 5, \color{red}{2} \leq g \leq 10$.
Now we want $m$ to be a cube as well. This means all exponents appearing must be divisible by $3$. Thus
$$a=12, b=9, c \in \{3,6\}, d \in \{6,9\}, e=3, f=3, g \in \{3,6,9\}.$$
Thus the total number of choices we have are
$$2 \cdot 2 \cdot 3=12.$$
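The counting in the answer can be mirrored mechanically (my own script; the bounds encode the inequalities highlighted above, and a cube needs every exponent divisible by 3):

```python
from math import prod

# For each prime, (lo, hi) are the exponent bounds forced by
# "multiple of 2^10 * 3^9 * 5^2 * 7^5 * 11^2 * 13^2 * 37^2" (lo)
# and "divisor of n" (hi).
bounds = {2: (10, 14), 3: (9, 9), 5: (2, 8), 7: (5, 10),
          11: (2, 3), 13: (2, 5), 37: (2, 10)}

choices = {p: [e for e in range(lo, hi + 1) if e % 3 == 0]
           for p, (lo, hi) in bounds.items()}
count = prod(len(v) for v in choices.values())

print(choices)
assert count == 12
```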
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does $\min\sum\dots$ indicate? What does "min" indicate infront of a sigma sign?
$$\min \sum_{e\in E} c_e x_e$$
Source: https://www.math.unipd.it/~luigi/courses/metmodoc1718/m08.01.TSPexact.en.pdf
| The $c_e$ are some constants, the $x_e$ are variables that satisfy some constraints and you are minimizing the sum $\sum_{e\in E} c_e x_e$ over all feasible choices of the $x_e$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cone with height $9$ cm and radius $3$ cm is filled at a rate of $1.2~\text{cm}^3/\text{s}$. Find the rate of change when $h=3$. A cone with radius $3$ cm and height $9$ cm is filled with water at a rate of $1.2~\text{cm}^3/\text{s}$. Find the rate of change of the height of the water when the height of the water is $3$ cm.
I differentiated both sides to get $$\frac{dV}{dt}= \frac{1}{3} \pi \cdot 2(3)~\frac{dh}{dt}$$ Solving for $dh/dt$ I got $5.23599$. My textbook says to use similar triangles but I didn't, I am wondering if there is another way to solve of the rate of change of the height of the water at $h=3$?
| Much simpler is to note that when $h=3$ cm the radius is $1$ cm. The area of the water is then $\pi$ cm$^2$ so the rate of rise is $\frac {1.2}\pi$ cm/sec
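A numerical cross-check of both routes (my own sketch): by similar triangles $r(h)=h/3$, so $V(h)=\pi h^3/27$, and $dh/dt = (dV/dt)/V'(h)$ reproduces the $1.2/\pi$ answer.

```python
import math

# V(h) = (1/3) * pi * r(h)^2 * h with r(h) = h/3 by similar triangles
# (radius : height = 3 : 9), so V(h) = pi * h^3 / 27.
def V(h):
    return math.pi * h**3 / 27

# dh/dt = (dV/dt) / V'(h); estimate V'(3) by a central difference.
h, eps = 3.0, 1e-6
dV_dh = (V(h + eps) - V(h - eps)) / (2 * eps)   # should be pi * h^2 / 9 = pi
dh_dt = 1.2 / dV_dh

assert abs(dV_dh - math.pi) < 1e-6
assert abs(dh_dt - 1.2 / math.pi) < 1e-6
print(dh_dt)  # ~0.3820 cm/s
```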
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How many obtuse angles can be formed from the 15 rays on a single point on a same plane? Consider 15 rays that originate from a point. What is the maximum number of obtuse angles they can form, assuming that the angles between two rays is less than or equal to 180 degrees?
| Call two rays near if they form a non-obtuse angle. If there are $n$ ordered pairs of near rays, there are exactly ${15\choose 2}-\frac n2$ obtuse angles among the rays. Hence we want to minimize $n$.
Claim. $n\ge 60$.
Proof. Suppose one of the rays (wlog the positive $x$ axis) is near $a\le 2$ other rays (i.e., we have $a$ other rays within the (closed) first or fourth quadrant). Then there are $b$ rays in the second and $c$ rays in the third quadrant (with the negative $x$ axis being counted as either of these quadrants) where $b+c\ge12$.
Then we have (e.g., by Jensen's inequality) $$n\ge b(b-1)+c(c-1)\ge 2\cdot 6\cdot 5=60.$$
Therefore, we need only consider configurations where each ray is near $\ge 3$ other rays.
Again, suppose some ray is near $a=3$ other rays, and define $b$ and $c$ as above, where now $b+c=11$. This time, we have
$\ge b(b-1)+c(c-1)=(b-5)^2+(b-6)^2+49\ge50$ near pairs within the left half plane. Additionally, each of the $4$ rays in the right half plane is the first component of at least $3$ near pairs. Hence,
$$ n\ge 50+4\cdot 3=62.$$
Remains the case that each ray is near at least $4$ other rays. Then clearly,
$$n\ge 4\cdot 15=60.$$
$\square$
As we can achieve the lower bound $n=60$ (e.g., with five rays per each of the directions $0^\circ$, $120^\circ$, $240^\circ$), the maximal number of obtuse angles is
$$ 75.$$
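The extremal configuration at the end can be checked by direct count (my own script):

```python
from itertools import combinations

# The extremal configuration: five rays in each of the directions
# 0, 120, 240 degrees.  Count pairs forming an obtuse angle.
directions = [0] * 5 + [120] * 5 + [240] * 5

def angle(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)   # angle between two rays, in [0, 180]

obtuse = sum(1 for a, b in combinations(directions, 2)
             if 90 < angle(a, b) <= 180)

assert obtuse == 75
print(obtuse)  # 75
```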
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3031951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Correlation between parallel lines and volumes of tetrahedrons. Given 4 parallel lines $d_1$, $d_2$, $d_3$, $d_4$ , no more than 2 of which can be on a same plane. Plane (P) intersects the 4 lines at 4 points A, B, C, D. Plane (Q) ( not identical to plane (P)) intersects the 4 lines at $A_1$, $B_1$, $C_1$, $D_1$. Proof that the volumes of the 2 tetrahedra ABC
$D_1$ and $A_1$$B_1$$C_1$$D$ are equal.
| It is not explicitly stated that one of the given four parallel lines passes through both $D$ and $D_1,$ but I will assume this is given since otherwise
the two tetrahedra might have different volumes.
If you know that a shear transformation of three-dimensional Euclidean space preserves volume, you can construct a plane $P'$ perpendicular to $d_1$
(hence also perpendicular to the other three given lines)
and apply a shear transformation parallel to $d_1$ that maps $P$ to $P'.$
This transformation maps $A,$ $B,$ and $C$ to points $A',$ $B',$ and $C'$
in plane $P'$, and the tetrahedron $ABCD_1$ has volume equal to $A'B'C'D_1,$
which has base $\triangle A'B'C'$ and height $DD_1.$
Likewise you can construct a plane $Q'$ perpendicular to $d_1$ and apply a shear transformation parallel to $d_1$ that maps $Q$ to $Q',$
and in particular maps $A_1,$ $B_1,$ and $C_1$ to $A_1',$ $B_1',$ and $C_1'.$
Then the tetrahedron $A_1B_1C_1D$ has volume equal to $A_1'B_1'C_1'D,$
which has base $\triangle A_1'B_1'C_1'$ and height $DD_1.$
But since $\triangle A_1'B_1'C_1'$ is congruent to $\triangle A'B'C',$
the tetrahedron $\triangle A_1'B_1'C_1'D$ has the same volume as $A'B'C'D_1,$
and hence all four tetrahedra have the same volume.
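A numerical spot-check of the statement (my own sketch; the line direction, base points, and cutting planes are arbitrary choices, assumed generic enough that no two lines are coplanar):

```python
import numpy as np

rng = np.random.default_rng(1)

# Four parallel lines: base points p_i, common direction d.
d = np.array([0.3, 0.2, 1.0])
p = rng.normal(size=(4, 3))

def cut(normal, c):
    """Intersect each line p_i + t*d with the plane {x : x.normal = c}."""
    ts = (c - p @ normal) / (d @ normal)
    return p + np.outer(ts, d)

def vol(a, b, c_, e):
    return abs(np.linalg.det(np.array([b - a, c_ - a, e - a]))) / 6

A, B, C, D = cut(np.array([0.1, 0.4, 1.0]), 0.7)        # plane P
A1, B1, C1, D1 = cut(np.array([-0.2, 0.5, 1.0]), -0.3)  # plane Q

assert np.isclose(vol(A, B, C, D1), vol(A1, B1, C1, D))
print(vol(A, B, C, D1), vol(A1, B1, C1, D))
```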
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3032077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Distance from eigenspace of matrix In linear algebra, is there a separate name / concept for the notion of distance between linear vector subspaces?
I'm asking this because I'm considering a problem in numerical linear algebra where a Krylov subspace iterative method is used. Since for every subsequent $n$ a Krylov subspace method implicitly generates an additional basis vector in Krylov subspace, which approaches the eigenspace of the matrix for which the problem $$Ax=b$$ is being solved, it must be true that if $b$ is in the span of the eigenspace of $A$ then the convergence will happen faster.
But what if $b$ is very "far" from the eigenspace? I'm trying to think about what the notion of a distance between two vector subspaces could mean or how it could be defined. Would a vector $b$ contained in a subspace "far away" from the eigenspace of $A$ make iteration of a Krylov subspace method take longer than in a general case?
| The common notion of distance is to consider an orthogonal projection $P$ onto the first linear subspace $V$, and an orthogonal projection $Q$ onto the other subspace $W$.
At this point we can define
$$d(V,W) = \| P - Q \|$$ as the distance between these subspaces, where the norm used is the operator norm. For properties and applications see Section 2.5.3 of Golub and Van Loan.
This distance metric is used throughout GVL’s exposition on unsymmetrical eigenvalue problems (which involve Krylov methods) — see Chapter 7.
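As a small illustration (my own toy example, not from Golub and Van Loan): for two lines through the origin in $\mathbb R^2$, this distance equals the sine of the angle between them, which one can check directly:

```python
import math

def proj_line(theta):
    # orthogonal projection in R^2 onto the line spanned by (cos t, sin t)
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

def opnorm_sym2(M):
    # operator norm of a symmetric 2x2 matrix = largest |eigenvalue|
    a, b, d = M[0][0], M[0][1], M[1][1]
    mid, rad = (a + d) / 2.0, math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return max(abs(mid + rad), abs(mid - rad))

t1, t2 = 0.4, 1.1
P, Q = proj_line(t1), proj_line(t2)
D = [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]
gap = opnorm_sym2(D)
print(gap, abs(math.sin(t2 - t1)))  # the two numbers agree
```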
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3032192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
The minimum edge cover of a tree is at least the maximum degree Let $T$ be a tree with maximum degree $\Delta(T)$, and let $\beta'(T)$ denote the size of the minimum edge cover of $T$. The question is to prove that $\beta'(T) \ge \Delta(T)$.
I started by proving that each tree has at least $\Delta(T)$ leaves, using induction on $n$. Then I tried using the fact that if we take the edge connected to each leaf then we will have a minimum edge cover equal to $\Delta(T)$ in case all the leaves are connected to the vertex of max degree; if not, then we need at least one more edge. Does it make sense?
| Let $v$ be a vertex of maximal degree $\Delta$ of the tree $T$, and let $N(v)$ be the set of neighbors of $v$. If some edge of $T$ joined two vertices $u,w$ of $N(v)$, then $u-v-w-u$ would be a cycle, which cannot occur in a tree. So no single edge covers two vertices of $N(v)$, and each edge cover needs at least $|N(v)|=\Delta$ distinct edges just to cover $N(v)$.
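If it helps to build confidence in the statement, here is a brute-force check on small random trees (a sketch; the tree model and the sizes are arbitrary choices):

```python
import random
from itertools import combinations

def random_tree(n):
    # attach each new vertex to a uniformly random earlier vertex
    return [(v, random.randrange(v)) for v in range(1, n)]

def min_edge_cover_size(n, edges):
    # brute force over edge subsets (fine for these tiny trees)
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            if {v for e in sub for v in e} == set(range(n)):
                return k

random.seed(1)
ok = True
for _ in range(200):
    n = random.randrange(3, 9)
    edges = random_tree(n)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    ok = ok and min_edge_cover_size(n, edges) >= max(deg)
print(ok)  # -> True
```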
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3032549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $a+\sqrt{a^2+1}= b+\sqrt{b^2+1}$, then $a=b$ or not? It might be a silly question but if $$a+\sqrt{a^2+1}= b+\sqrt{b^2+1},$$ then can I conclude that $a=b$? I thought about squaring both sides but I think it is wrong! Because radicals will not be removed by doing that! Can you help me with proving that $a=b$ or not?
Actually I'm going to prove that $x+\sqrt{x^2+1}$ is a $1$-$1$ function.
| Alternatively, for $a,b\in \Bbb R$,
\begin{align}a+\sqrt{a^2+1}&=b+\sqrt{b^2+1}\\&\implies \frac{1}{a+\sqrt{a^2+1}}=\frac{1}{b+\sqrt{b^2+1}}\wedge a+\sqrt{a^2+1}=b+\sqrt{b^2+1}\\
&\implies \sqrt{a^2+1}-a=\sqrt{b^2+1}-b \wedge a+\sqrt{a^2+1}=b+\sqrt{b^2+1}\\&\implies (a+\sqrt{a^2+1})-(\sqrt{a^2+1}-a)=(b+\sqrt{b^2+1})-(\sqrt{b^2+1}-b)
\\&\implies 2a=2b\implies a=b. \end{align}
It is also easy to see that $a=b\implies a+\sqrt{a^2+1}=b+\sqrt{b^2+1}$. That is,
$$a+\sqrt{a^2+1}=b+\sqrt{b^2+1}\iff a=b.$$
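Since the original goal is to show $x\mapsto x+\sqrt{x^2+1}$ is $1$-$1$, here is a quick numerical illustration that the function is strictly increasing on a grid of sample points (the grid is an arbitrary choice):

```python
import math

f = lambda t: t + math.hypot(t, 1.0)  # t + sqrt(t^2 + 1)

xs = [x / 10.0 for x in range(-50, 51)]
vals = [f(x) for x in xs]
strictly_increasing = all(u < v for u, v in zip(vals, vals[1:]))
print(strictly_increasing)  # -> True
```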
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3032720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
What are some advanced books on metric space? Metric space, with the additional notion of “distance between points”, has properties that are more “concrete” than a topological structure. After a basic study I saw a number of strange and interesting results which depend heavily on the metric structure. So I wonder if there is any advanced book on metric space after learning general topology, supposing that one is familiar with the metrizability theorems and have a basic knowledge on metric structure, say, up to Willard’s General Topology. Any recommendation would be helpful. Thank you.
| I can recommend the following books which deal mainly with metric spaces:
* Heinonen: Lectures on Analysis in Metric Spaces. This is a short and well written book talking about various topics and well suited for self-study.
* Bridson, Martin R., Häfliger, André: Metric Spaces of Non-Positive Curvature. A more advanced book dealing with non-positively curved spaces: CAT($\kappa$) spaces and (Gromov) $\delta$-hyperbolic spaces. It is kind of the standard reference for those spaces.
* Stephanie Alexander, Vitali Kapovitch, Anton Petrunin: Invitation to Alexandrov Geometry. Dealing with CAT(0) spaces. Freely available online.
* S. Alexander, V. Kapovitch, A. Petrunin: Alexandrov Geometry. Spaces with curvature bounded from below.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3032835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can distinct elements of a $C^*$-algebra be separated by a maximal left ideal? Let $A$ be a $C^*$-algebra, and let $f\neq g\in A$. Does there exist a maximal ideal $J\trianglelefteq A$ with $f+J\neq g+J$?
I'm particularly interested in the case of $A=B(\mathcal H)$, and why things aren't obvious. In this case, $f\neq g$ implies that there is some $h\in\mathcal H$ with $f(h)\neq g(h)$, so the left ideal of the annihilator of $h$ certainly separates them. However, $\text{Ann}_{B(\mathcal H)}(h)$ is not maximal, and I don't immediately see how to upgrade this to a maximal ideal.
An alternative approach is to attempt to find an irreducible representation. We have a good family of representations from GNS: for each positive linear functional $\rho$ on $A$, we have a representation $\pi_\rho$ on a Hilbert space, with the image of $1$ being cyclic for the representation. If $f\neq g$, then at least one of these representations has $f\cdot [1]\neq g\cdot [1]$, because the GNS representation is faithful. I know that the annihilator of a simple module is the intersection of maximal left ideals, so if I believe that if I can find a simple $A$-module on which $f$ and $g$ act differently, there is such an ideal.
| Your ideal is maximal when $A=B(H)$. Let $J=\{T:\ Th=0\}$. Let $S\in B(H)\setminus J$. Then $Sh\ne0$. Choose $k$ with $\langle Sh,k\rangle=1$, and let $Wx=\langle x,k\rangle\,h$. Put $R=I-WS$. Then
$$ Rh=h-WSh=h-h=0, $$ so $R\in J$. Then $$ I=R+WS\in J+B(H)\,S, $$ so any left ideal that properly contains $J$ contains some such $S$, hence contains $I$, and is therefore all of $B(H)$. This shows $J$ is maximal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3033276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove $(b-a) \cdot \int_a^b \alpha(x) \beta(x) dx \ge \int_a^b \alpha(x) dx \cdot \int_a^b \beta(x) dx$.
Suppose $\alpha: [a, b] \to (-\infty, \infty)$ and $\beta:[a,b] \to (-\infty, \infty)$ are both nondecreasing. Then,
$$(b-a) \cdot \int_a^b \alpha(x) \beta(x) dx \ge \int_a^b \alpha(x) dx \cdot \int_a^b \beta(x) dx.$$
This is Lemma A.3 from Robson, A.J., 1992. Status, the distribution of wealth, private and social attitudes to risk. Econometrica: Journal of the Econometric Society, pp.837-857.
The author omit this proof since it is straightforward, but I am struggling to prove this. I appreciate if you give some help, and please tell me if more context is needed.
| Proof.
Let $D = [a,b]^2$. Then
$$
\begin{aligned}
\int_a^b 1 \int_a^b \alpha\cdot \beta- \int_a^b \alpha \int_a^b \beta
&\stackrel{(1)}= \iint_D\big( \alpha(y)\beta(y)-\alpha(x)\beta(y) \big)\,\mathrm dx \,\mathrm dy \\
&\stackrel{(2)}= \iint_D \big(\alpha (x)\beta(x)-\alpha(y)\beta(x)\big)\,\mathrm dx\,\mathrm dy \\
&= \frac 12 \iint_D \big(\alpha(x)\beta(x)+\alpha(y)\beta(y)-\alpha (x)\beta(y)-\alpha(y)\beta(x)\big) \,\mathrm dx \,\mathrm dy \\
&= \frac 12 \iint_D \big(\alpha(x)-\alpha(y)\big)\big(\beta(x)-\beta(y)\big)\,\mathrm dx\,\mathrm dy \geqslant 0,
\end{aligned}
$$
since both $\alpha, \beta$ are nondecreasing, i.e. $(\alpha(x) - \alpha (y)), (\beta(x)- \beta(y))$ have the same sign, equivalently $(\alpha(x)-\alpha(y))(\beta(x)-\beta(y)) \geqslant 0$ for all $(x,y)\in [a,b]^2$.
EXPLANATION
* For two functions $f,g \colon [a,b] \to \mathbb R$,
$$
\int_a^b f \int_a^b g = \int_a^b f(x)\,\mathrm dx \int_a^b g(x)\,\mathrm dx = \int_a^b f(x)\,\mathrm dx \int_a^b g(y)\,\mathrm dy \;[\text{change the integration parameter}] = \iint_D f(x)g(y)\,\mathrm dx \,\mathrm dy \;[\text{transfer to a double integral}].
$$
Hence $(1)$ holds.
* Using a different integration variable, we have
$$
\int_a^b f \int_a^b g = \int_a^b f(y)\,\mathrm dy \int_a^b g(x)\, \mathrm dx = \iint_D f(y)g(x)\,\mathrm dx \,\mathrm dy.
$$
Go back to the question, we have
$$
(b-a)\int_a^b \alpha\cdot \beta - \int_a^b \alpha \int_a^b \beta = \iint_D (\alpha(y)\beta(y) - \alpha(x)\beta(y)),
$$
also
$$
(b-a)\int_a^b \alpha\cdot \beta - \int_a^b \alpha \int_a^b \beta = \iint_D (\alpha(x)\beta(x) - \alpha(y)\beta(x)),
$$
adding these two equations, we get $(2)$.
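A numerical spot-check of the inequality with one (arbitrarily chosen) pair of nondecreasing functions, using a simple midpoint rule:

```python
import math

def integral(f, a, b, N=20000):
    # simple midpoint rule
    h = (b - a) / N
    return sum(f(a + (j + 0.5) * h) for j in range(N)) * h

a, b = 0.0, 2.0
alpha = math.atan               # nondecreasing on [0, 2]
beta = lambda x: x ** 3         # nondecreasing on [0, 2]

lhs = (b - a) * integral(lambda x: alpha(x) * beta(x), a, b)
rhs = integral(alpha, a, b) * integral(beta, a, b)
print(lhs >= rhs, lhs, rhs)
```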
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3033379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding $\frac{1}{2\pi}\int_{0}^{2\pi}\phi^\prime(x) dx$, where $\phi(x)=\arctan\frac{3\cos x}{4(\cos x+\sin x)}$. Why isn't it $\phi(2\pi)-\phi(0)$? I'm tasked with the following problem:
Evaluate
$$I_C=\frac{1}{2\pi}\int_{0}^{2\pi}\left(\frac{d}{d\theta}\phi(\theta)\right) d\theta,\quad\text{where}\; \phi(\theta)=\arctan\left[\frac{3\cos(\theta)}{4(\cos(\theta)+\sin(\theta))}\right]$$
Am I correct in assuming that this is simply $\phi(2\pi)-\phi(0)$? When I do that I get zero, but when I take the derivative, then evaluate the integral I get -1. How do I use the fundamental theorem of calculus to get the correct answer?
| Hint: what happens at $\theta =3\pi/4, 7\pi/4$? See the graph if required.
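To see the hint in action numerically: differentiating by hand (quotient rule plus chain rule; re-derive this simplification before trusting it) gives the everywhere-smooth closed form $$\phi'(\theta)=\frac{-12}{16(\cos\theta+\sin\theta)^2+9\cos^2\theta},$$ whose denominator never vanishes, and integrating this over $[0,2\pi]$ numerically gives $-1$ instead of the naive $0$:

```python
import math

def dphi(t):
    # phi'(t) = -12 / (16 (cos t + sin t)^2 + 9 cos^2 t); smooth everywhere,
    # even though phi itself jumps at t = 3*pi/4 and t = 7*pi/4
    c, s = math.cos(t), math.sin(t)
    return -12.0 / (16.0 * (c + s) ** 2 + 9.0 * c * c)

N = 200000                       # midpoint rule over one full period
h = 2.0 * math.pi / N
I_C = sum(dphi((k + 0.5) * h) for k in range(N)) * h / (2.0 * math.pi)
print(I_C)  # -> approximately -1.0
```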
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3033554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\int_{0}^{\pi/6} {\cos (x^2)}\mathrm{d}x\ge\frac12$.
Prove that $\displaystyle\int_{0}^{\pi/6} \cos (x^2)\,\mathrm{d}x\ge\frac12$.
I know this is a Fresnel integral but without going into advanced calculus is there a way to show that this is true? using calculus 1 knowledge, I tried Riemann's sum to prove this and got stuck. Thanks for any help.
| For $0 < x \le \frac \pi 6 < 1$ we have $x^2 < x$ and therefore
$$
\int_{0}^{\pi/6} \cos (x^2) \, dx > \int_{0}^{\pi/6} \cos (x) \ dx
= \sin( \frac \pi 6) = \frac 12
$$
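A direct numerical check of the bound (midpoint rule; the panel count is an arbitrary choice):

```python
import math

N = 100000
a, b = 0.0, math.pi / 6.0
h = (b - a) / N
val = sum(math.cos(((j + 0.5) * h) ** 2) for j in range(N)) * h
print(val)  # comfortably above 1/2
```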
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3033674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Compute $\sum\limits_{n=0}^\infty a_nx^n$ if $a_0=3$, $a_1=5$, and $na_n=\frac23a_{n-1}-(n-1)a_{n-1}$ for every $n>1$
Assume that $a_0=3$, $a_1=5$, and, for arbitrary $n>1$ , $na_n=\frac{2}{3}a_{n-1}-(n-1)a_{n-1}$. Prove that, when $|x|<1$, the series $\sum\limits_{n=0}^\infty a_nx^n$ converges, and compute its sum.
I tried to let $\displaystyle a_n-a_1=\sum_{k=2}^{n}\frac{5-6k}{3k}a_{k-1}$ , and $\displaystyle a_n=(\frac{5-3n}{3n})a_{n-1}$
$$
a_{n-1}(\frac{5-3n}{3n}-\sum_{k=2}^{n}\frac{5-6k}{3k})=5
$$
I want to know how to continue it.
Edit: (after reading ideas by @JV.Stalker)
I made the following supplement
$$
S(x)=\sum_{n=0}^{\infty}a_nx^n\\
S'(x)=\sum_{n=1}^{\infty}na_nx^{n-1}\\
\sum_{n=2}^{\infty}na_nx^n=\sum_{n=2}^{\infty}\frac{2}{3}a_{n-1}x^n-\sum_{n=2}^{\infty}(n-1)a_{n-1}x^n\\
[xS'(x)-5x]=\frac{2}{3}x·\sum_{n=2}^{\infty}(n-1)a_{n-1}x^{n-1}-x\sum_{n=2}^{\infty}(n-1)a_{n-1}x^{n-1}\\
x[S'(x)-5]=\frac{2}{3}x(S(x)-3)-x(xS'(x))\\
S'(x)-5=\frac{2}{3}(S(x)-3)-xS'(x)\\
(x+1)S'(x)=\frac{2}{3}S(x)+3\\
S'(x)-\frac{2}{3}\frac{1}{x+1}S(x)=\frac{3}{x+1}\\
S(x)=c(x+1)^{\frac{2}{3}}-\frac{9}{2}\\
S(0)=a_0=3\\
c=\frac{15}{2}\\
S(x)=\frac{15}{2}(x+1)^{\frac{2}{3}}-\frac{9}{2}
$$
| $na_n=\frac{5}{3}a_{n-1}-na_{n-1}$
Multiply by $x^n$ both sides and sum from $n=1$ to $\infty$
$\sum\limits_{n=1}^\infty na_n x^n=\frac{5}{3}\sum\limits_{n=1}^\infty a_{n-1}x^n-\sum\limits_{n=1}^\infty na_{n-1}x^n$
Reindex of the RHS:
$\sum\limits_{n=1}^\infty na_nx^n=\frac{5}{3}x\sum\limits_{n=0}^\infty a_{n}x^n-x\sum\limits_{n=0}^\infty (n+1)a_{n}x^n$
After rearranging the equation:
$\sum\limits_{n=1}^\infty na_nx^n+x\sum\limits_{n=0}^\infty na_{n}x^n=\frac{2}{3}x\sum\limits_{n=0}^\infty a_{n}x^n$
$(1+x)\sum\limits_{n=0}^\infty na_nx^n=\frac{2}{3}x\sum\limits_{n=0}^\infty a_{n}x^n$
Use that $nx^{n-1}=\frac{dx^n}{dx}$, so with $f(x)=\sum\limits_{n=0}^\infty a_nx^n$ we have $\sum\limits_{n=0}^\infty na_nx^n=x\,\frac{df(x)}{dx}$.
Dividing both sides by $x$, we obtain the following differential equation:
$(x+1)\frac{df(x)}{dx}=\frac{2}{3}f(x)$
Finally $f(x)=c\,(x+1)^\frac{2}{3}$ for some constant $c$.
Note that the recurrence only holds for $n>1$ (at $n=1$ it would force $a_1=\frac{2}{3}a_0=2\neq 5$), so the sums should really start at $n=2$; carrying the boundary terms $a_0=3$, $a_1=5$ along, as in the edit above, gives the inhomogeneous equation $(x+1)S'(x)=\frac{2}{3}S(x)+3$, whence $S(x)=\frac{15}{2}(x+1)^{\frac{2}{3}}-\frac{9}{2}$.
$\sum\limits_{n=0}^\infty a_n x^n$ is convergent for $|x|<1$, since $|a_n/a_{n-1}|\to 1$.
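One can double-check the closed form from the question's edit against the recurrence numerically (the test point $x=1/2$ and the cutoff of $200$ terms are arbitrary choices):

```python
a = [3.0, 5.0]                               # a_0 = 3, a_1 = 5
for n in range(2, 200):
    a.append((5.0 / 3.0 - n) / n * a[-1])    # n a_n = (5/3 - n) a_{n-1}

x = 0.5
series = sum(c * x ** k for k, c in enumerate(a))
closed = 7.5 * (1.0 + x) ** (2.0 / 3.0) - 4.5   # (15/2)(1+x)^{2/3} - 9/2
print(series, closed)  # the two values agree
```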
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3033773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that there are infinitely many prime numbers $p$ such that $\left(\frac{a}{p}\right)=1$ for fixed $a$. I already proved this is true for all prime numbers and clearly see how this is true for all perfect squares, I'm just having trouble expanding it to any prime factorization. If we let $a$ have prime factorization $a=p_1^{a_1}p_2^{a_2}...p_n^{a_n}$, then since the Legendre Symbol is multiplicative, we know that:
$$ \left(\frac{a}{p}\right)=\left(\frac{p_1}{p}\right)^{a_1}\left(\frac{p_2}{p}\right)^{a_2}...\left(\frac{p_n}{p}\right)^{a_n} $$
I don't, however, understand where to go from here.
| Through quadratic reciprocity and Dirichlet's theorem we have a straightforward proof: for any $a\in\mathbb{N}^+$ there are infinitely many primes $p$ with $p\equiv 1\pmod{4}$ and $p\equiv 1\pmod{a}$ (namely the primes $p\equiv 1\pmod{4a}$, of which there are infinitely many by Dirichlet's theorem). For each such prime
$$ \left(\frac{a}{p}\right)=\left(\frac{p}{a}\right)=\left(\frac{1}{a}\right)=1.$$
Yet another overkill: by Chebotarev's density theorem the polynomial $x^2-a$ has a root in $\mathbb{F}_p$ for approximately half the primes $p$. In particular an $a\in\mathbb{N}^+$ that is a quadratic non-residue for every sufficiently large prime $p$ cannot exist.
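A quick computational check of the first argument, using Euler's criterion ($a$ is a quadratic residue modulo an odd prime $p$ iff $a^{(p-1)/2}\equiv 1 \pmod p$); the search bound is an arbitrary choice:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

ok = True
for a in range(2, 12):
    primes = [p for p in range(3, 5000) if is_prime(p) and p % (4 * a) == 1]
    # every prime p = 1 (mod 4a) should have Legendre symbol (a/p) = 1
    ok = ok and len(primes) > 0
    ok = ok and all(pow(a, (p - 1) // 2, p) == 1 for p in primes)
print(ok)  # -> True
```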
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3033908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this Diophantine problem solvable without invoking Fermat's Last Theorem? Let $a,b,c,n$ be positive integers with $a<b<c$ and $n\geq 3$ odd. Given that $a^n + b^n < 2c^n$, can one prove that $a^{n+2}+b^{n+2}\neq c^{n+2}$ without invoking Wiles' theorem ? Or is this actually equivalent to Fermat's Last Theorem ?
| As stated in the comments, $a<b<c$ trivially implies $a^n+b^n<2c^n$, hence this condition is extraneous. The equation also immediately implies $a,b<c$, so that is not useful either. Finally, the equation is symmetric in $a,b$, so we may WLOG assume $a\leq b$, and your condition just becomes $a\neq b$. In fact, $a\neq b$ is quite extraneous too: if the equation had a solution with $a=b$, then $2a^{n+2}=c^{n+2}$ would make $\sqrt[n+2]{2}=c/a$ rational, which is impossible. So what you're left with is, "prove that if $n$ is odd, then $a^{n+2}+b^{n+2}=c^{n+2}$ has no solutions." Now it is entirely obvious that this is just Fermat's Last Theorem in the odd-exponent case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Review on my method for $Number$ $of$ $diagonals$ in a regular $n$-gon is $\frac12n(n-3)$ I have an assignment on permutations and combinations topics. In that there is a question-
The number of interior angles of a regular polygon is $150^\circ$ each. The number of diagonals of the polygon is _____.
Attempt
I don't know anything about the P&C method for diagonals. So I found the number of sides from the angle-sum formula $$(n-2)(180^\circ)=\theta\,n,$$ where $n$ is the number of sides and $\theta$ is the interior angle; with $\theta=150^\circ$ this gives $n=12$.
But the main problem came to be the diagonal part because I was not able to understand the logic given on platforms like MSE itself. (Maybe it is due to language problem because I am still learning English language.)
But I tried to count the polygon diagonal one by one to generate some pattern which may be useful. So I observed this-
In quads, if we start making diagonals then for first it will be $1$, then again $1$ and $0$ and $0$.
In pents, the similar process gave $2$ then again $2$ then $1$ and then $0$, and $0$.
In hexes, $3$ then $3$ then $2$ then $1$ then $0$ and again $0$.
Also, I observed that for the last two vertices it came to be $0$ always.
So, observing this, I first did the same for heptagons, and then checked the answers for the above cases against the formula $$\frac{n(n-3)}{2}$$ (though I didn't understand how it was derived). To my surprise, it came out the same. I even checked for $n=12, 13, 50, 40, \ldots$ It all gave the same value as that formula.
Now my doubt is: what I observed is right, but why? Also, I wrote it in terms of $$2(n-3)+(n-4)+\cdots+1$$ which gives $$\frac{n(n-3)}{2}.$$
| You can also use the Handshake Lemma from graph theory. Let $G(V,E)$ be a graph on $n$ vertices, where the vertices form a regular $n$-gon and the edges are the diagonals of the $n$-gon. Then, prove that each vertex of $G$ has degree $n-3$. By the Handshake Lemma, $G$ has
$$|E|=\frac{1}{2}\,\sum_{v\in V}\,\deg(v)=\frac{1}{2}\,n\,(n-3)=\frac{n(n-3)}{2}$$ edges.
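A brute-force cross-check of the formula, including the $150^\circ$ polygon from the question, which has $n=12$ sides and hence $54$ diagonals:

```python
from itertools import combinations

def diagonals(n):
    # all vertex pairs of a convex n-gon, minus the n sides (adjacent pairs mod n)
    return sum(1 for i, j in combinations(range(n), 2)
               if (j - i) % n not in (1, n - 1))

checks = all(diagonals(n) == n * (n - 3) // 2 for n in range(4, 30))
d12 = diagonals(12)
print(checks, d12)  # -> True 54
```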
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
How to solve this system of equations systematically? This might seem a trivial problem, but I have some trouble in arranging the data. So suppose you are given $f(x,y)=x^2y^2(1+x+2y)$ and you want to find it's critical points. Thus we find
$$\frac{\partial f}{\partial x}(x,y)=xy^2(2+3x+4y)\textrm{ and }\frac{\partial f}{\partial y}(x,y)=2x^2y(1+x+3y).$$
Now we set $f_x= 0$ and $f_y=0.$ Thus we get a system
$$
\begin{split}
xy^2(2+3x+4y) &=0\\
2x^2y(1+x+3y) &=0
\end{split}
$$
Now we have a bunch of cases. The way I think about this is as follows:
$$((x=0)\lor(y=0)\lor(3x+4y=-2))\land((x=0)\lor(y=0)\lor(x+3y=-1)).$$
Then I consider each possibility separately, but this seems to be slow and sometimes I forget some solutions. Thus I was wondering if there are any other methods which one can use to solve such type of problems.
| Use the converse of the distributive property:
$((x=0)\lor(y=0)\lor(3x+4y=-2))\land((x=0)\lor(y=0)\lor(x+3y=-1))\\\equiv(x=0)\lor(y=0)\lor[(3x+4y=-2)\land(x+3y=-1)]$
$3x+4y+2=0=x+3y+1$ is just a pair of straight lines (linear equations) intersecting at $(-2/5,-1/5)$. Therefore, you have $(x=0)\lor(y=0)\lor(x=-2/5\land y=-1/5)$
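Plugging the intersection point back into the partial derivatives confirms it is a critical point:

```python
fx = lambda x, y: x * y ** 2 * (2 + 3 * x + 4 * y)
fy = lambda x, y: 2 * x ** 2 * y * (1 + x + 3 * y)

x0, y0 = -2.0 / 5.0, -1.0 / 5.0
gx, gy = fx(x0, y0), fy(x0, y0)
print(gx, gy)  # both vanish up to float roundoff
```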
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prove inverse of strictly monotone increasing function is continuous over the range of original function Let $f:[a,b] \rightarrow \Bbb R$ be a strictly monotone increasing. Then $f$ has an inverse function $g:[c,d]\rightarrow \Bbb R,$ where $[c,d]$ is the range of $f$. I'm trying to prove that $g$ is continuous at d.
My initial thoughts for an attempt of a proof:
Strictly monotone functions are injective. So if $\alpha, \beta \in [a,b]$ and $\alpha \not= \beta $ then $\alpha < \beta$. Since $f$ is strictly monotone increasing $f(\alpha) < f(\beta)$ and $f(\alpha) \not= f(\beta)$.
Since $f$ is strictly increasing, so is $f^{-1}$. So if $\alpha < \beta$ then $f^{-1}(f(\alpha)) < f^{-1}(f(\beta))$.
This is because if there exists $\alpha$ and $\beta $ $\in (a,b)$ with $\alpha < \beta$ such that $f^{-1}(\alpha)$ = $\alpha '$ and $f^{-1}(\beta)$ = $\beta '$ and $\alpha ' < \beta '$ then
$\beta = f^{-1}(\beta ') \le f^{-1}(\alpha ') = \alpha$
which is a contradiction if $f$ is strictly increasing.
The remainder of the proof is some form of an epsilon delta proof to show that the inverse function is continuous from the left at the right end point. My attempt:
Let $b$ be the upper limit $ \in [a,b]$ and define $d = f(b)$.
Next, I want to show that $\lim_{x\rightarrow d^{-}}f^{-1}(x) = b$. Take any $\epsilon >0$ such that $b-\epsilon \in [a,b]$.
So, $f(b-\epsilon) < f(b)$.
Let $\delta = 1/2 (f(b)-f(b-\epsilon))$
Then $f(x_0-\epsilon) < f(x_0)-\delta$
So if $|x-d| < \delta$, then $|f^{-1}(x)-f^{-1}(d)|<\epsilon$
then continuity holds at $f^{-1}(d)$, which is possible by the Archimedean principle. Currently, I'm having trouble with the epsilon-delta proof. I don't think the argument is strong enough.
| Let $R = f([a,b])$ be the range of $f$. Since $f$ is strictly increasing, we have $R \subset [f(a),f(b)]$, but in general $R \ne [f(a),f(b)]$. For example, let $f : [0,2] \to \mathbb{R}, f(x) = x$ for $x \in [0,1)$, $f(1) = 2$, $f(x) = x + 2$ for $x \in (1,2]$.
But although $R$ is in general not an interval, the usual definition of continuity makes sense for $f^{-1} : R \to [a,b]$. Moreover, as you remarked in your question, $f^{-1}$ is strictly increasing, i.e. for $y,y'\in R$ with $y < y'$ we have $f^{-1}(y) < f^{-1}(y')$.
Let us assume that $f^{-1}$ is not continuous. This means that there exist $y \in R$ and $\epsilon > 0$ such that for all $\delta > 0$ there exists $y_\delta \in R$ such that $\lvert y - y_\delta \rvert < \delta$ and $\lvert f^{-1}(y) - f^{-1}(y_\delta) \rvert \ge \epsilon$. We can therefore find a sequence $(y_n)$ in $R \setminus \{ y \}$ such that $y_n \to y$ and $\lvert f^{-1}(y) - f^{-1}(y_n) \rvert \ge \epsilon$. W.l.o.g. we may assume that infinitely many $y_n < y$. Passing to a suitable subsequence, we may assume that all $y_n < y$ and that $(y_n)$ is strictly increasing. Write $x_n = f^{-1}(y_n)$, $x = f^{-1}(y)$. The sequence $(x_n)$ is strictly increasing with $x_n < x$. It therefore converges to some $\xi \le x$. We have $y_n = f(x_n) < f(\xi) \le f(x) = y$, and this implies $f(\xi) = y$ because $y_n \to y$. Hence $\xi = f^{-1}(y) = x$. We conclude $x_n \to x$. But $\lvert x - x_n \rvert = \lvert f^{-1}(y) - f^{-1}(y_n) \rvert \ge \epsilon$, which is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proof about inequalities Let $a,b,c \in \mathbb{R}$. Prove that if $a<c$ for all $c>b$, then $a \leq b$.
My attempted proof was that if we take the contrapositive of the statement, then we get the inequality $c \leq b$ for all $a \geq c$. Then if we add the given inequality to this we get $a+c \leq b+c$, which proves that $a \leq b$. I am almost certain that there is something wrong with this approach. Any hints?
| To take the contrapositive of a statement you need to be extra careful about quantifiers.
The contrapositive of $$"a<c \text{ holding for all } c>b \text{ implies } a \le b."$$ is instead
$$"a > b \text{ implies that there exists } c>b \text{ such that }a \ge c."$$ Note that the 'for all $c>b$' turned into a 'there exists $c>b$' in the process of negation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
how to integrate $\frac{-1+e^{-i k x}}{x^2}$ How do I integrate the following?$$\int_{-\infty }^{\infty } \frac{-1+e^{-i k x}}{x^2} \, dx$$
I am not very familiar with complex analysis, but I did try to use contour integral to do this but I couldn't get any success with that.
| Another way to do it if you want to use contour integration is to use an indented semicircle going around the singularity at $z=0$: in the upper half plane if $k\le 0$, or in the lower half plane if $k>0$, so that $e^{-ikz}$ stays bounded on the large arc. For this contour, the function $f(z)=(e^{-ikz}-1)/z^2$ will integrate to $0$ by Cauchy's theorem, since it is holomorphic in the region enclosed by the indented semicircle. Now you simply split $\int_\gamma=\int_{C_R}+\int_{C_\epsilon}+\int_{-R}^{-\epsilon}+\int_{\epsilon}^{R}$, and you're left with justifying the exchange of limits and integrals when you let $\epsilon$ (the radius of the small semicircle) go to $0$ and $R$ (the radius of the large semicircle) go to $\infty$.
Here are some examples : https://web.williams.edu/Mathematics/sjmiller/public_html/302/coursenotes/Trapper_MethodsContourIntegrals.pdf
There are also nice people explaining it in full detail on youtube, have fun !
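As a numerical sanity check on the end result (which for this integral comes out to $-\pi\lvert k\rvert$, interpreted as a principal value), one can integrate the real part directly; the truncation radius and panel count below are arbitrary choices:

```python
import math

k = 1.0

def real_integrand(x):
    # Re[(e^{-ikx} - 1)/x^2] = (cos(kx) - 1)/x^2, absolutely integrable;
    # the imaginary part -sin(kx)/x^2 is odd, so its principal value is 0
    return (math.cos(k * x) - 1.0) / (x * x)

R, N = 2000.0, 1000000           # truncate to [-R, R], midpoint rule
h = 2.0 * R / N
total = sum(real_integrand(-R + (j + 0.5) * h) for j in range(N)) * h
print(total, -math.pi * abs(k))  # close to -pi
```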
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Homotheties: Let $A$ and $B$ be distinct points of a circle $o$. What is the set of possible centroids of triangles $ABC$ with $C\in o$? Question: Let $A$ and $B$ be distinct points of a circle $o$. What is the set of possible centroids of triangles $ABC$ with $C\in o$?
Here is what I have:
The angle at $C$ will always be the same as it is always subtended by the same arc as $A$ and $B$ are fixed.
There are 2 cases: Either $C$ lies on the small arc of $AB$ or $C$ lies on the big arc of $AB$.
In the case where $C$ lies on the large arc, by looking at the possible positions of $C$ one can observe that at some point $C$ and $A$ are on the same diameter and at another point $C$ and $B$ are on the same diameter. Also, it is worth mentioning that the midpoint of the chord $AB$, through which the median from $C$ passes, does not depend on $C$ and is thus always the same.
One can observe, by plotting various points that satisfy the criteria in a drawing, that the possible centroids all seem to lie on a smaller circle contained in the original circle $o$.
One can then guess that the center of this smaller circle is a possible center of a homothety mapping the smaller circle to the larger circle with scale $r_1/r_2$, where $r_1$ is the radius of the larger circle and $r_2$ is the radius of the smaller circle (I am really not sure about this).
I am not sure what to say about the case where $C$ lies on the small arc and am not sure where to continue with the problem.
Any help is appreciated.
| Here's a moderately obnoxious idea:
If you use complex numbers and set your circle to be the unit circle, then the centroid of the triangle determined by $a,b,c$ is simply $\frac{a+b+c}{3}$. Thus, the locus of possible centroids, as $A$ and $B$ are fixed and $C$ varies, is simply the circle with radius $\frac{1}{3}$ centered at $\frac{a+b}{3}$ (I think you need to get rid of two of the points because $C\neq A,B$, but oh well).
There should be a geometric interpretation of this idea as well, if you essentially try to phrase everything with homotheties.
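A quick numeric verification of this locus (the two fixed points and the sample angles are arbitrary choices):

```python
import cmath

a, b = cmath.exp(0.3j), cmath.exp(2.1j)   # two fixed points on the unit circle
center = (a + b) / 3.0

# the centroid (a + b + c)/3 should stay at distance 1/3 from (a + b)/3
on_circle = all(
    abs(abs((a + b + cmath.exp(1j * t)) / 3.0 - center) - 1.0 / 3.0) < 1e-12
    for t in (0.5, 1.7, 3.9, 5.2)
)
print(on_circle)  # -> True
```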
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to calculate area of an ellipse based on its formula? How can I determine the area of a half-ellipse if all that is given is $y = \sqrt{1-n^2x^2}$? I have tried both geometry and calculus, but without convincing results…
Thank you
| With the use of generalized polar coordinates
\begin{aligned}x&=ar\cos t\\
y&=br\sin t\end{aligned}
where $a=\frac 1n,\; b=1,\; t \in [0,\pi]\; \text{and}\; r\in [0,1]$ in the given case. The Jacobian is ${r\over n}$ and the area
$$\cal{A}=\int_0^{\pi} \int_0^1 1\cdot {r\over n}\;dr \;dt=\frac{\pi}{2n}$$
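The same area can be checked against the original single-variable form $y=\sqrt{1-n^2x^2}$ numerically (midpoint rule; the panel count is an arbitrary choice):

```python
import math

def half_ellipse_area(n, N=100000):
    # midpoint rule for the integral of sqrt(1 - n^2 x^2) over [-1/n, 1/n]
    lo = -1.0 / n
    h = (2.0 / n) / N
    return sum(
        math.sqrt(max(0.0, 1.0 - (n * (lo + (j + 0.5) * h)) ** 2))
        for j in range(N)
    ) * h

results = [(half_ellipse_area(n), math.pi / (2 * n)) for n in (1, 2, 3)]
print(results)  # each pair agrees
```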
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question on graph theory regarding roads connecting a pair of cities I have the following question with me from "Problem Solving Strategies" by Arthur Engel, given as an example in page 44
"Every road in Sikinia is one-way. Every pair of cities is connected exactly by one direct road. Show that there exists a city which can be reached from every city directly or via at most one another city"
Isn't the statement that is required to be proved obvious from the given condition, if I connect any two cities, then obviously I can reach that city by the road right?
| Considering the way the question was put, the answer is “no”, since the roads are one-way. But I suppose you need help with the problem, so there’s my proof below.
I’ll proceed with a proof that is, basically, an inductive algorithm. In graph theory this is very common, so take a moment to understand what I did.
For $n=2$ the claim holds trivially. Suppose by induction that it holds for all graphs on at most $n'$ vertices. For each vertex $v_i$, take the graph $G_i$ obtained by removing $v_i$ and its edges. By hypothesis, $G_i$ has at least one vertex $v'_i$ with the desired property. If $v_i$ has only incoming edges, we're done ($v_i$ itself works); it's also done if $v_i \to v'_i$ or $v_i \to v_j \to v'_i$ for some $j$ (then $v'_i$ works for the whole graph). We'll proceed to prove that there must be such a $v_i$ (in an algorithm, you can just test case by case; here we prove it exists).
Suppose now that for some $i$ none of the cases above holds. Then $v'_i \to v_i$. Take any vertex $w$ with $w \to v'_i$: since $v_i$ does not reach $v'_i$ in at most two steps, we cannot have $v_i \to w$, so $w \to v_i$. Now take any vertex $w'$ reaching $v'_i$ in exactly two steps, say $w' \to w \to v'_i$: by the previous observation $w \to v_i$, so $w' \to w \to v_i$. Since every vertex other than $v_i$ reaches $v'_i$ in at most two steps, and hence is $v'_i$ itself or of one of these two types, all the vertices of the graph reach $v_i$ in two or fewer steps, as we wanted. $\blacksquare$
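The statement itself can also be checked by brute force on random tournaments (a sketch; the sizes and seed are arbitrary choices):

```python
import random

def has_2step_king(n, beats):
    # is there a city v reachable from every other city in at most 2 steps?
    return any(
        all(u == v or beats[u][v] or
            any(beats[u][w] and beats[w][v] for w in range(n))
            for u in range(n))
        for v in range(n)
    )

random.seed(7)
always = True
for _ in range(300):
    n = random.randrange(2, 9)
    beats = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < 0.5:
                beats[i][j] = True
            else:
                beats[j][i] = True
    always = always and has_2step_king(n, beats)
print(always)  # -> True
```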
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Checking if a vector is in a subspace? I have
v1= $$
\begin{pmatrix}
1 \\
0 \\
-1\\
\end{pmatrix}
$$
v2=
\begin{pmatrix}
2 \\
1 \\
3\\
\end{pmatrix}
v3=\begin{pmatrix}
4 \\
2 \\
6\\
\end{pmatrix}
and
w=\begin{pmatrix}
3 \\
1 \\
2\\
\end{pmatrix}
I need to check whether w is in the subspace spanned by (v1,v2,v3)
I know that w is in the subspace spanned by (v1,v2,v3) if x1v1+x2v2+x3v3=w has a solution .
I write:
x1+2x2+4x3=3
x2+2x3=1
-x1+3x2+6x3=2
I write down the augmented matrix, which is
A= $$
\begin{pmatrix}
1 & 2 & 4&3 \\
0 & 1 & 2&1 \\
-1 & 3 & 6&2 \\
\end{pmatrix}
$$
And row reduce it to get
$$
\begin{pmatrix}
1 & 2 & 4&3 \\
0 & 1 & 2&1 \\
0 & 0 & 0&0 \\
\end{pmatrix}
$$
On the answer sheet it states:
since the dimension of the space of the columns of the augmented matrix coincides with the dimension of the space of the matrix coefficients, the system admits a non trivial solution and w exists in (v1,v2,v3)
I am new to studying matrices and my dyscalculia certainly does not help. My question is: what is the dimension of the space of the columns of the augmented matrix? What is the dimension of the column space of the coefficient matrix? How can I show that they are the same?
Moreover, can you show me an example where the dimension of the column space of the augmented matrix DOES NOT coincide with the dimension of the column space of the coefficient matrix?
I would greatly appreciate an answer that is as clear and simple as possible ... thank you guys !
| The dimension of the space of columns of a matrix is the maximal number of column vectors that are linearly independent.
In your example, both dimensions are $2$, as each of the last two columns can be written as a linear combination of the first two columns.
An example where the dimensions are not equal can be given by
$$\begin{cases}
x_1=0\\ x_1=1
\end{cases}.$$
The augmented matrix is
$$\begin{pmatrix}1 & 0 \\ 1 & 1 \end{pmatrix}.$$
The dimension of the space of columns of the coefficient matrix is $1$, while that of the augmented matrix is $2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Evaluate $\int_{0}^{\frac{\pi}{4}} \ln(\sec x)dx$ Evaluate $$P=\int_{0}^{\frac{\pi}{4}} \ln(\sec x)dx$$
My try: I tried using its complimentary integral:
Let $$Q=\int_{0}^{\frac{\pi}{4}} \ln(\csc x)dx$$
Adding both we get:
$$P+Q=\int_{0}^{\frac{\pi}{4}}\ln(\sec x\csc x)dx$$ $\implies$
$$2P+2Q=\int_{0}^{\frac{\pi}{4}}\ln(\sec^2 x\csc^2 x)dx=\int_{0}^{\frac{\pi}{4}}\ln\left(\frac{4}{4\sin^2 x\cos^2 x}\right)dx$$ $\implies$
$$2P+2Q=\frac{\pi}{4}\ln 4-\int_{0}^{\frac{\pi}{4}}\ln\left(\sin^2 2x\right)dx$$
$$2P+2Q=\frac{\pi}{2}\ln 2-2 \int_{0}^{\frac{\pi}{4}}\ln(\sin 2x)dx$$
Using the substitution $2x=t$ we get
$$2P+2Q=\frac{\pi}{2}\ln 2- \int_{0}^{\frac{\pi}{2}}\ln(\sin t)dt$$
Using the formula:
$$\int_{0}^{\frac{\pi}{2}}\ln(\sin t)dt=\frac{-\pi}{2}\ln 2$$ we get
$$2P+2Q=\pi \ln 2$$
$$P+Q=\frac{\pi}{2}\ln 2$$
Is there any way to find $P-Q$
| In fact
\begin{eqnarray*}
P-Q&=&\int_0^{\frac{\pi}{4}}\ln(\tan t)dt\\
&=&\int_0^1\frac{\ln u}{1+u^2}du\\
&=&\int_0^1\ln u\sum_{n=0}^\infty(-1)^nu^{2n}du\\
&=&\sum_{n=0}^\infty(-1)^n\int_0^1u^{2n}\ln udu\\
&=&-\sum_{n=0}^\infty(-1)^n\frac{1}{(2n+1)^2}\\
&=&-C
\end{eqnarray*}
where $C$ is the Catalan constant.
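A numerical sanity check (Python, not from the original answer), comparing a midpoint-rule estimate of the integral with a partial sum for $C$:

```python
import math

# Partial sum of C = sum_{n>=0} (-1)^n / (2n+1)^2 (alternating series,
# so the error is below the first omitted term).
C = sum((-1) ** n / (2 * n + 1) ** 2 for n in range(200_000))

# Midpoint rule for P - Q = ∫_0^{π/4} ln(tan t) dt (midpoints avoid the
# logarithmic singularity at t = 0).
N = 200_000
h = (math.pi / 4) / N
integral = h * sum(math.log(math.tan((i + 0.5) * h)) for i in range(N))

print(C, integral)  # C ≈ 0.9159655 (Catalan's constant), integral ≈ -C
```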
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Conditional probability - mistake in interpretation? Assume 3 events $A, B, C$ with success rates $p_1, p_2, p_3$. Let $X$ define an event, where exactly one of the three events had a success.
For me it's clear that $P(X) = P(X|A)P(A) + P(X|B)P(B) + P(X|C)P(C)$.
Further, given that we know that $X$ holds (exactly one success was seen), the probability that $A$ was the one successful is $$ P(A|X) = p_1 \cdot (1-p_2) \cdot (1-p_3)$$
On the other hand, $$ P(X|A) = \frac{P(A|X) P(X)}{P(A)}$$
Logically it seems that $P(X|A)$, the probability that exactly one success was seen, given that A was successful, is equal to $p_1 \cdot (1-p_2) \cdot (1-p_3)$, because it depends on the other two events failing. But this is equal to $P(A|X)$.
Clearly, $P(X) \neq P(A)$, hence I'm wrong somewhere.
Probably in the interpretation of the probability $P(X|A)$. What am I missing?
| Careful! By your same formula we have that
$$P(A|X)=\frac{P(A\cap X)}{P(X)} = \frac{P(X|A)P(A)}{P(X|A)P(A) + P(X|B)P(B) + P(X|C)P(C)}$$
which is not equal to $p_1(1-p_2)(1-p_3)$.
The denominator in the above formula is exactly what you would interpret as "given that we know that $X$ holds". Indeed the denominator contains all possible outcomes. This is both the beauty and subtleness of Bayes' theorem.
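A concrete illustration of the difference (the success rates below are sample values chosen for illustration, not from the question):

```python
# Sample success rates, chosen only for illustration.
p1, p2, p3 = 0.5, 0.3, 0.8

only_A = p1 * (1 - p2) * (1 - p3)    # A succeeds, B and C fail
only_B = (1 - p1) * p2 * (1 - p3)
only_C = (1 - p1) * (1 - p2) * p3
P_X = only_A + only_B + only_C       # exactly one success

P_A_given_X = only_A / P_X
print(round(P_A_given_X, 3), round(only_A, 3))  # 0.184 0.07 -- not equal
```

So $p_1(1-p_2)(1-p_3)$ is $P(A\cap X)$, which only becomes $P(A|X)$ after dividing by $P(X)$.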
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Differential of Hopf's map Let $$h : \mathbb{C^2} \rightarrow \mathbb{C \times R} $$
$$h(z_1, z_2) = (2z_1z_2^*, |z_1|^2-|z_2|^2)$$
How do you find the differential of $h$ and show it is onto/surjective?
I know that I can express $h$ as $\mathbb{R^4}$ instead of $\mathbb{C^2}$, but then how do I proceed? Do I just differentiate with respect to each $x_1, x_2, x_3, x_4$ instead of $z_1, z_2$ given that $x_2, x_4$ are the imaginary part?
So, let's say $h(z_1, z_2)$ becomes $$h(x_1,x_2,x_3,x_4) = (2(x_1+x_2i)(x_3-x_4i),\ x_1^2+x_2^2-x_3^2-x_4^2) \\ \rightarrow (2(x_1x_3 + x_2x_4), 2(x_2x_3-x_1x_4), x_1^2+x_2^2-x_3^2-x_4^2)$$
But if I start differentiating $h$ with respect to each variable, I am getting a Jacobian matrix $\mathbb{R^{3 \times 4}}$. I think I am doing something wrong here.
| In fact you must understand $h$ as a map from $\mathbb{R}^4$ to $\mathbb{R}^3$ and the derivative of $h$ as the derivative in the sense of real multivariable calculus. You have done this almost correctly (I corrected a typo), and you are right that the Jacobian $Jh(x)$ of $h$ at $x$ is a $3 \times 4$-matrix. With respect to the standard bases of $\mathbb{R}^4, \mathbb{R}^3$ it is the matrix representation of the differential $Dh(x)$ of $h$ at $x$, which is a linear map $Dh(x) : \mathbb{R}^4 \to \mathbb{R}^3$.
You have to determine for which $x$ the map $Dh(x)$ is a surjection. This is equivalent to determining when $Jh(x)$ has maximal rank, i.e. rank $3$. You have
$$Jh(x) = \left( \begin{array}{rrrr}
2x_3 & 2x_4 & 2x_1 & 2x_2 \\
-2x_4 & 2x_3 & 2x_2 & -2x_1 \\
2x_1 & 2x_2 & -2x_3 & -2x_4 \\
\end{array}\right) = 2 \left( \begin{array}{rrrr}
x_3 & x_4 & x_1 & x_2 \\
-x_4 & x_3 & x_2 & -x_1 \\
x_1 & x_2 & -x_3 & -x_4 \\
\end{array}\right) = 2 M(x) .$$
You see that $Jh(0) = 0$. i.e. $Jh(0)$ has rank $0$. Let us show that the rank is $3$ if $x \ne 0$. Denote by $M_i(x)$ the matrix obtained from $M(x)$ by deleting the $i$-th column. Easy computations show that
$$\det M_1(x) = -x_2(x_1^2 + x_2^2 + x_3^2 + x_4^2)$$
$$\det M_2(x) = -x_1(x_1^2 + x_2^2 + x_3^2 + x_4^2)$$
$$\det M_3(x) = -x_4(x_1^2 + x_2^2 + x_3^2 + x_4^2)$$
$$\det M_4(x) = -x_3(x_1^2 + x_2^2 + x_3^2 + x_4^2)$$
At least one of these four expressions is $\ne 0$ which proves our claim.
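These four minor determinants can be verified numerically (a sanity check, not part of the original answer):

```python
import itertools

def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def M(x1, x2, x3, x4):
    return [[x3, x4, x1, x2],
            [-x4, x3, x2, -x1],
            [x1, x2, -x3, -x4]]

for x1, x2, x3, x4 in itertools.product([-2, 0, 1, 3], repeat=4):
    s = x1**2 + x2**2 + x3**2 + x4**2
    m = M(x1, x2, x3, x4)
    # M_i(x): delete the i-th column, then take the determinant.
    minors = [det3([row[:i] + row[i + 1:] for row in m]) for i in range(4)]
    assert minors == [-x2 * s, -x1 * s, -x4 * s, -x3 * s]
```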
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is 2 $\cdot$ $\sin(\alpha)$ $\cos(\alpha)$=$\sin(2\alpha)$? I am solving a physics problem involving a 2-dimensional throw. However, I have hit a bump trying to understand
$$2\cdot \sin(\alpha)\cdot\cos(\alpha)=\sin(2\alpha)$$
I have googled and searched on Stack Exhange but found nothing. I hope someone can explain how those two equal each other.
|
Consider the semicircle over the diameter $[AB]$ of radius 1 and center $E$. Let $C$ be an arbitrary point on the semicircle and $D$ the foot of the altitude from $C$ to the side $[AB]$ in the triangle $\Delta ABC$. Denote furthermore by $\alpha$ the angle $\angle BAC$. It follows that $\angle BEC=2\alpha$.
By the definition of the sine and cosine $$\sin(\alpha)=\frac{[CB]}{[AB]}$$
$$\cos(\alpha)=\frac{[AC]}{[AB]}$$ Thus $$\sin(\alpha)·\cos(\alpha)=\frac{[CB]·[AC]}{[AB]^2}$$
Now, since $[CD]·[AB]=[AC]·[CB]$ (different ways to get the area of $\Delta ABC$)
$$\sin(\alpha)·\cos(\alpha)=\frac{[CB]·[AC]}{[AB]^2}=\frac{[CD]}{[AB]}=\frac{[CD]}{2}$$ $$\Rightarrow 2·\sin(\alpha)·\cos(\alpha)=[CD]$$
Finally $$\sin(\angle BEC)=\sin(2\alpha)=\frac{[CD]}{[EC]}=[CD] \Rightarrow \sin(2\alpha)=2·\sin(\alpha)·\cos(\alpha)$$
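The identity can also be checked numerically at a few angles (a sanity check, not part of the geometric proof):

```python
import math

# The identity 2*sin(a)*cos(a) == sin(2a), checked at sample angles.
for a in (0.3, 0.7, 1.2, 2.5):
    assert abs(2 * math.sin(a) * math.cos(a) - math.sin(2 * a)) < 1e-12
print("2*sin(a)*cos(a) == sin(2a) at all sampled angles")
```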
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3035934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Need help with some basic math/exponential rules I don't understand. How do I get from a to b here?
See image. I've looked up just about every rule I can find and I can't figure out how I am supposed to arrive at that answer. Can someone explain to me what has been done step by step here?
| $$0.5=\frac12$$
$$0.5-1=-0.5$$
$$x^{-n}=\frac1{x^n}$$
That should be enough to figure it out
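A one-line check of these rules (using Python floats, as an illustration):

```python
x = 9.0
# 0.5 - 1 = -0.5, and x**(-n) = 1/x**n:
assert abs(x ** (0.5 - 1) - 1 / x ** 0.5) < 1e-12
print(x ** -0.5)  # 1/sqrt(9) = 0.333...
```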
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Why doesn't a 20 degree rotation change the slopes of $y=x$ and $y=\frac{x}{2}$ by the same amount? It seems that if I rotate different lines (lying in the same quadrant) the same number of degrees they move different amounts (in terms of their slope). (where the rotation is such that all the lines do not enter a different quadrant)
Can someone give me intuition why this is?
My guess was that maybe it has something to do with the nature of a circle.
| Slope is $\tan\theta$, $\theta$ being the angle the line makes with the $x$-axis. $\theta$ is $45°$ for the line $y=x$ and around $26.6°$ when $2y=x$. As you know, $\tan\theta$ approaches infinity as $\theta$ approaches $90°$, so the value of $\tan$ increases at a greater rate when we add $20°$ to $45°$ than when we add $20°$ to $26.6°$.
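Numerically (a Python illustration), the same $20°$ rotation applied to both lines:

```python
import math

for slope in (1.0, 0.5):
    angle = math.degrees(math.atan(slope))        # 45° and ~26.57°
    new_slope = math.tan(math.radians(angle + 20))
    print(slope, "->", round(new_slope, 3))
# 1.0 -> 2.145  (slope changed by about 1.14)
# 0.5 -> 1.056  (slope changed by about 0.56)
```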
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
multivariable calculus and architecture It's often said that architecture involves a lot of multivariable calculus, and for my (high school) Multivariable calculus project, I wanted to do further research on that. However, so far I haven't been able to exactly determine what specific maths architects use and how they use it. For instance, I heard quite often that architects use integral calculus, but how exactly do they use it? Does anyone know a lot about these or can someone recommend me any books/articles that go pretty in-depth relating to this subject? Any help would really be appreciated!
| *I should clarify that I might be missing the point of the question entirely, and if so, please excuse me.
Source: I have worked professionally as an Architect for the past 5 years, and have completed my Master of Architecture degree.
Outside of the profession, the perceived level of mathematics is often overestimated. I never took calculus, and only went up to trigonometry. I do project design and management, and the only math that I use on a routine basis is basic geometry. The computer programs that we work with these days (sketchup, revit, autocad) do the heavy mathematic lifting, and we primarily focus on the 2D/3D representation and constructability concerns.
An example of a frequent calculation I do is determining the riser height of stairs between two floors of a building. 11'-6" between floors: 11 x 12" = 132". 132" + 6" = 138". The building code dictates a maximum riser height of 7" in commercial spaces, so 138" / 20 = 6.9" Very rudimentary stuff. Roof pitches are determined by well-established industry standards such as a 7:12 slope is about as steep as anyone can walk, and anything below 3:12 slope requires extra cost and waterproofing.
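The riser calculation above, written out as a tiny script (the numbers are exactly those in the example):

```python
import math

floor_to_floor = 11 * 12 + 6              # 11'-6" = 138 inches
max_riser = 7                             # code maximum riser, inches
risers = math.ceil(floor_to_floor / max_riser)
riser_height = floor_to_floor / risers
print(risers, riser_height)               # 20 risers of 6.9" each
```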
The myriad of concerns that Architects consider on a daily basis frequently revolve around numbers, but not in an intense number crunching way (we rely on engineers for that), and are more guidelines with which we work and not the primary focus.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A question about a primitive root mod $p=2^{2^k}+1$, where $p$ is prime. Let $p=2^{2^k}+1$ be a prime where $k\ge1$. Prove that the set of quadratic non-residues mod $p$ is the same as the set of primitive roots mod $p$. Use this to show that $7$ is a primitive root mod $p$.
I've already shown the theorem to be true. The second part asks to use the first part to show the result which leads me to think that I have to show $7$ is a quadratic non-residue mod $p$ then use the first part to imply that it must be a primitive root.
To show $7$ to be a quadratic non-residue for $k\ge1$ is to show that the Legendre symbol $\left(\frac{7}{p}\right) = -1$. Now, $$\left(\frac{7}{p}\right)=\left(\frac{p}{7}\right)(-1)^{\left(\frac{7-1}{2}\right)\left(\frac{p-1}{2}\right)} = \left(\frac{p}{7}\right)(-1)^{3(2^{(2^k)-1})} = \left(\frac{p}{7}\right)$$
since $2^{2^k-1}$ is even (as $k\ge1$).
Then it suffices to know $p$ mod $7$ to determine the Legendre symbol. Since $\left(\frac{p}{7}\right) = -1$ when $p\equiv 3,5,6$ mod $7$, I suspect I somehow have to show that $p$ must be congruent to those values but I don't know how to do that. Although, trivially, $p\not\equiv 1$ mod $7$ otherwise, $7|2^{2^k}$ which is not possible.
Unfortunately, I don't know where to go from here.
Any guidance would be appreciated. However, assuming I’ve taken the right approach, I would prefer a constructive hint to a full blown solution, as I think I may be able to work it out on my own, given a nudge in the right direction.
Thank you for taking the time.
| Powers of 2 are congruent to $1, 2, $ or $4$ modulo $7$ according as the power is congruent to $0, 1$ or $2$ modulo $3$ (as $3$ is the order of $2$ modulo $7$). As $3\nmid 2^k,$ $p\equiv 3\ or\ 5 \pmod 7.$
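A computational check (Python, not part of the original answer) for the first few Fermat primes $p=2^{2^k}+1$:

```python
for k in (1, 2, 3, 4):
    p = 2 ** (2 ** k) + 1            # 5, 17, 257, 65537 -- all prime
    assert p % 7 in (3, 5)           # so (7/p) = (p/7) = -1
    # p - 1 = 2^(2^k) is a power of 2, so 7 is a primitive root mod p
    # iff 7^((p-1)/2) = -1 (mod p).
    assert pow(7, (p - 1) // 2, p) == p - 1
```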
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Evaluate the limit of the sequence: $\lim_{n_\to\infty}\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}$ Evaluate the limit of the sequence:
$$\lim_{n\to\infty}\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}$$
My try:
Stolz-cesaro: The limit of the sequence is $\frac{\infty}{\infty}$
$$\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$
For our sequence:
$\lim_{n\to\infty}\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}=\lim_{n\to\infty}\frac{\sqrt{n!}-\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})\cdot(1+\sqrt{n+1})-(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})}=\lim_{n\to\infty}\frac{\sqrt{(n-1)!}\cdot(\sqrt{n}-1)}{\left((1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})\right)\cdot\sqrt{n+1}}$
Which got me nowhere.
| Consider:
$$
(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})
$$
Take the root from each pair of parentheses and multiply them, then:
$$
(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n}) > \sqrt{n!} \iff \\
\iff \frac{1}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})} < \frac{1}{\sqrt{n!}}
$$
Going back to original we have that:
$$
\frac{\sqrt{(n-1)!}}{(1+\sqrt{1})\cdot(1+\sqrt{2})\cdot (1+\sqrt{3})\cdots (1+\sqrt{n})} \le \frac{\sqrt{(n-1)!}}{\sqrt{n!}} = \frac{1}{\sqrt n}
$$
But the function is greater than $0$ and hence using squeeze theorem we conclude that:
$$
0 \le \lim_{n\to\infty}x_n \le \lim_{n\to\infty}\frac{1}{\sqrt n} = 0
$$
Hence the limit is $0$.
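The proved bound $0 < x_n \le \frac{1}{\sqrt n}$ can be checked numerically (an illustration, not part of the proof):

```python
import math

for n in (5, 10, 20, 40):
    num = math.sqrt(math.factorial(n - 1))
    den = math.prod(1 + math.sqrt(k) for k in range(1, n + 1))
    ratio = num / den
    assert 0 < ratio <= 1 / math.sqrt(n)
    print(n, ratio)   # the ratio shrinks rapidly towards 0
```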
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
prove there exist postive integers $a,b$ such $p^2|a^2+ab+b^2$ Problem 1: Let prime $p\equiv 1\pmod 3$.show that:there exist postive integers $a\le b<p$ such
$$p^2|a^2+ab+b^2$$
I have only prove there $a,b$ such $$p|a^2+ab+b^2$$
Problem 1 from this:
Problem 2.3 (Noam Elkies). Prove that there are infinitely many triples $(a,b,p)$ of integers, with $p$ prime and $0<a\leq b<p$, for which $p^5$ divides $(a+b)^p-a^p-b^p$.
The key claim is that if $p\equiv1\pmod3$, then $$p(x^2+xy+y^2)^2\;\mathrm{divides}\;(x+y)^p-x^p-y^p$$ as polynomials in $x$ and $y$. $\color{red}{\underline{\color{black}{\text{Since it's known that}}}}$ one can select $a$ and $b$ such that $\color{red}{\underline{\color{black}{p^2\mid a^2+ab+b^2}}}$, the conclusion follows. (The theory of quadratic forms tells us we can do it with $p^2=a^2+ab+b^2$; Thue's lemma lets us do it by solving $x^2+x+1\equiv0\pmod{p^2}$.)
To prove this, it is the same to show that $$(x^2+x+1)^2\;\mathrm{divides}\;F(x)\overset{\mathrm{def}}=(x+1)^p-x^p-1.$$ since the binomial coefficients $\binom pk$ are clearly divisible by $p$. Let $\zeta$ be a third root of unity. Then $F(\zeta)=(1+\zeta)^p-\zeta^p-1=-\zeta^2-\zeta-1=0$. Moreover, $F'(x)=p(x+1)^{p-1}-px^{p-1}$, so $F'(\zeta)=p-p=0$. Hence $\zeta$ is a double root of $F$ as needed.
(Incidentally, $p=2017$ works!)
| This relies on the following statement: if some residue class $a$ is a square mod $p$, then it is a square mod $p^2$.
Indeed, if $b^2=a+tp [p^2]$, then $(b+kp)^2=a+(t+2k)p [p^2]$.
So you know that there is some $x$ such that $p|x^2+x+1$, thus $-3=(2x+1)^2$ is a square mod $p$, thus mod $p^2$, and if $y^2=-3[p^2]$, then $a=(y-1)/2$ (mod $p^2$) and $b=1$ work.
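The lifting step can be checked by brute force for small primes $p\equiv 1\pmod 3$ (a sanity check of the answer's claim, not part of the argument):

```python
for p in (7, 13, 19, 31):            # primes with p % 3 == 1
    sols = [a for a in range(p * p) if (a*a + a + 1) % (p * p) == 0]
    assert sols                       # x^2 + x + 1 = 0 is solvable mod p^2
    a = sols[0]
    assert (a*a + a*1 + 1*1) % (p * p) == 0   # so (a, b) = (a, 1) works
```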
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Average distance between point in a disc and line segment What is the average distance between a (randomly chosen) point in a disc of radius r and a line segment of length $a < 2r$ whose midpoint is at the center of the disc? ["Distance" here being the shortest distance to any point on the line segment.]
|
We devote our calculations to only the first quadrant because all the other quadrants are symmetrical in terms of calculations.
Dividing the quadrant into three regions, we will use the concept of expectation value of a variable denoted as follows $$\lt x\gt = \frac{\int xP(x)dx}{\int P(x)dx}$$
where $P(x)$ is the number of times a particular value $x$ of the quantity we seek occurs. (which we will contemplate as area of infinitesimal strips)
For region $\mathbf I$:
The distances of all the points on a vertical line are the same ($x$).
So the $N^r$ contributed to the final formula of expected value or $N^r_{\mathbf I}$ is calculated as $$\int_0^{\sqrt{R^2-\frac{a^2}{4}}}x\frac a2dx=\frac a4\Biggl(R^2-\frac{a^2}{4}\Biggr)$$
where $\frac a2dx$ is the area of the strip on which all the points are at a distance $x$ from the line of length $a\lt 2R$.
And obviously the denominator contributed $$D^r_{\mathbf I} = area(rectangle\ OABC) =\int_0^{\sqrt{R^2-\frac{a^2}{4}}}\frac a2dx=\frac a2\sqrt{R^2-\frac{a^2}{4}}$$
For region $\mathbf {II}$:
$$N^r_{\mathbf {II}}=\int_{\sqrt{R^2-\frac{a^2}{4}}}^R\ x\sqrt{R^2-x^2}dx$$
Subtituting $t=x^2$ and $dt=2xdx$, it becomes at last
$$-\frac13\biggl[(R^2-x^2)^{\frac32}\biggr]_{\sqrt{R^2-\frac{a^2}{4}}}^R=\frac{a^3}{24}$$
where $\sqrt{R^2-x^2}$ is the height of each strip.
And $D^r_{\mathbf {II}}$ = area of half the segment $\mathbf {II}$ $$= \int_{\sqrt{R^2-\frac{a^2}{4}}}^R\ \sqrt{R^2-x^2}dx$$
which can also simply be calculated by $\frac12R^2(\angle BOD)^c - area (\triangle OBC)$
$$=\frac12R^2\sin^{-1}(\frac a{2R})-\frac a4\sqrt{R^2-\frac{a^2}{4}}$$.
For region $\mathbf {III}$: ($\mathbf {My\ attempt}$)
Since the equal distances between line and collection of random points are radial in sense, we now switch to polar coordinates.
Also you'd notice that there is some relation between $\theta$ and the radial distance $r$ from point $A$, i.e. $\max(r)$ depends on which $\theta$ line you are looking at.
So, applying $\mathit {Law \ of \ cosines}$ in $\triangle OAE$:
$$\cos(\frac{\pi}2+\theta)=\frac{r^2+\frac{a^2}4-R^2}{ra}$$
or
$$\sin(\theta)=\frac{R^2-\big(r^2+\frac{a^2}4\big)}{ra}$$
or
$$r=\frac{\sqrt{4R^2-a^2\cos^2{\theta}}-a\sin{\theta}}2$$
which is strictly decreasing in $\big[0,\frac{\pi}2\big)$.
Now, there are two integral sums that can lead us to $D^r_{\mathbf {III}}$:
$$D^r_{\mathbf {III}}=\int_0^{R-\frac a2}\int_0^{\frac{\pi}2}rd{\theta}dr+\int_{R-\frac a2}^{\sqrt{R^2-\frac{a^2}4}}\int_0^{\sin^{-1}\biggl(\frac{R^2-\big(r^2+\frac{a^2}4\big)}{ra}\biggr)}rd{\theta}dr=\frac{\pi}4\big(R-\frac a2\big)^2+\int_{R-\frac a2}^{\sqrt{R^2-\frac{a^2}4}}r\ {\sin^{-1}\Biggl(\frac{R^2-\big(r^2+\frac{a^2}4\big)}{ra}\Biggr)}dr$$
or
$$D^r_{\mathbf {III}}=\int_0^{\frac{\pi}2}\int_{R-\frac a2}^{\frac{\sqrt{4R^2-a^2\cos^2{\theta}}-a\sin{\theta}}2}rdrd\theta+\int_0^{\frac{\pi}2}\int_0^{R-\frac a2}rdrd{\theta}=\int_0^{\frac{\pi}2}\int_0^{\frac{\sqrt{4R^2-a^2\cos^2{\theta}}-a\sin{\theta}}2}rdrd\theta$$
(I prefer latter to check if we are on the right track and we are! That's what I checked through above)
which can also simply be calculated by the area of half the segment $\mathbf {III}$
$$=\frac12R^2(\angle BOA)^c - area (\triangle OBA)=\frac12R^2\cos^{-1}\Bigl(\frac a{2R}\Bigr)-\frac a4\biggl(\sqrt{R^2-\frac{a^2}4}\biggr)$$
$$\sum_{i=1,2,3} D^r_{\mathbf {i}}= area(quadrant)=\frac{\pi}4R^2$$
since $\sin^{-1}x+\cos^{-1}x=\frac{\pi}2$
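That the three pieces tile the quadrant can be checked numerically (Python, with sample values $R=1$, $a=0.8$ chosen for illustration):

```python
import math

R, a = 1.0, 0.8                      # sample values with 0 < a < 2R
s = math.sqrt(R**2 - a**2 / 4)

D1 = (a / 2) * s                                        # rectangle OABC
D2 = 0.5 * R**2 * math.asin(a / (2*R)) - (a / 4) * s    # half segment II
D3 = 0.5 * R**2 * math.acos(a / (2*R)) - (a / 4) * s    # half segment III

assert abs((D1 + D2 + D3) - math.pi * R**2 / 4) < 1e-12
```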
Since the distance between points at radial distance of $r$ from point $A$ is the same, thus multiplying $r$ in the integrand would mean $distance\times (number\ of\ points\ having\ such\ distance)$ thing:
$$N^r_{\mathbf {III}}=\int_0^{\frac{\pi}2}\int_0^{\frac{\sqrt{4R^2-a^2\cos^2{\theta}}-a\sin{\theta}}2}r\cdot r\,drd\theta=\frac1{24}\int_0^{\frac{\pi}2}\Big({\sqrt{4R^2-a^2\cos^2{\theta}}-a\sin{\theta}}\Big)^3d\theta$$
Average distance(as defined) of a random point from the given line is $$\frac{\underset{i=1,2,3}{\sum} N^r_{\mathbf {i}}}{\frac{\pi}4R^2}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Find $\lim_{n\to\infty} \cos(\frac{\pi}{4}) \cos(\frac{\pi}{8})\ldots \cos(\frac{\pi}{2^n}) $ I already know that $$ a_n = \cos\left(\frac{\pi}{2^{n+1}}\right) = \overbrace{\frac{\sqrt{2+\sqrt{2+\ldots + \sqrt{2}}}}{2}}^{n\text{ roots}}$$
Also I know that $$\lim_{n\to\infty} 2\cos\left(\frac{\pi}{2^n}\right) = 2
\text{ and if } a_n \xrightarrow {n\to\infty} a \text{ then } \sqrt[n]{a_1 a_2 \ldots a_n} \xrightarrow{n\to\infty} a $$
With that method I only got indeterminate form
$$ \lim_{n\to\infty} \cos\left(\frac{\pi}{4}\right) \cos\left(\frac{\pi}{8}\right)\ldots \cos\left(\frac{\pi}{2^n}\right) = \Big(\frac{\sqrt[n]{a_1 a_2 \ldots a_n}}{2}\Big)^n = 1^\infty $$
Anyone knows a working solution?
| What you are trying to prove is Viète's formula. What he did was trying to compare areas of regular polygons that are inscribed in a unit circle. The area of a regular polygon with $n$ sides is given by
$$ A_n = \frac12 n \sin\left(\frac{2\pi}{n}\right)$$
If you now compute the ratio between two regular polygons, one with $2^{n-1}$ sides and one with $2^n$ sides, then you get:
$$B_n=\frac{A_{2^{n-1}}}{A_{2^n}} = \frac{2^{n-1} \sin\left(\frac{2\pi}{2^{n-1}}\right)}{2^{n} \sin\left(\frac{2\pi}{2^{n}}\right)} = \frac{2\sin\left(\frac{\pi}{2^{n-1}}\right)\cos\left(\frac{\pi}{2^{n-1}}\right)}{2\sin\left(\frac{\pi}{2^{n-1}}\right)} = \cos\left(\frac{\pi}{2^{n-1}}\right)$$
This now implies that the product the OP tries to compute is equal to
$$C_n=B_3 B_4 \cdots B_n = \frac{A_4}{A_8}\cdot\frac{A_8}{A_{16}}\cdot\cdots\cdot\frac{A_{2^{n-1}}}{A_{2^n}}=\frac{A_4}{A_{2^n}}=\cos\left(\frac{\pi}{4}\right)\cos\left(\frac{\pi}{8}\right)\cdots\cos\left(\frac{\pi}{2^{n-1}}\right)$$
Since a regular polygon with an infinite amount of sides is equivalent to a circle, you have $$\lim_{n\rightarrow\infty}A_n=\pi$$ In essence, the complete product is nothing more than comparing the size of a circle with respect to its inscribed square, whose area is $A_4=2$. Hence,
$$\prod_{n=2}^\infty\cos\left(\frac{\pi}{2^n}\right)=\lim_{n\rightarrow\infty}C_n=\frac{A_4}{\pi}=\frac 2 \pi$$
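A quick numerical check of the product (an illustration, not part of the proof):

```python
import math

prod = 1.0
for n in range(2, 40):
    prod *= math.cos(math.pi / 2**n)
print(prod, 2 / math.pi)  # both ≈ 0.6366197723...
```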
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3036917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
Determine the Expected Value of Uniformly random elements of sets
Answer is D
The way I attempted this was that for X = MAX(a,b), the random variable X is equivalent to the max value of a and b. So, from the 2 sets, the probability of getting k from set {1,2...100} is $\frac{k}{100}$ and for the second set we have to select $\frac{k-1}{100}$ so that it is less than the first one.
E(X) would then just be the summation of $k\cdot\frac{k(k-1)}{100^2}$, which is option B.
Don't understand how its D? Any explanations?
| Your general logic is fine, although you calculate the probability incorrectly. Note that, if $\max(a,b) = k$, then we have one of three cases:
*
*$a = k$ and $b < k$.
*$a < k$ and $b = k$.
*$a = b = k$
It should be readily apparent that these cases are disjoint, so to find the probability $P(X = k)$, we can add the probabilities of these cases.
Since $a,b$ are uniform, it's really a matter of counting how many ways this can happen. There's one way for $a = k$ to happen and $k-1$ ways for $b < k$, so multiplying these gives $k-1$ ways; similarly, there's also $k-1$ ways to get $a < k$ and $b = k$. Finally, there's exactly one way to get $a = b = k$. Adding these gives you $1 + 2(k - 1)$; we divide by $100^2$ as the total number of outcomes for $a,b$. This give the probability $P(X = k) = \frac{1 + 2(k-1)}{100^2}$, from which you can derive the expected value in the way you do above.
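Brute force over all $100^2$ equally likely pairs confirms this formula (a sanity check, not part of the original answer):

```python
vals = range(1, 101)

# Direct enumeration of E[max(a, b)] over all pairs.
brute = sum(max(a, b) for a in vals for b in vals) / 100**2

# The answer's formula: P(X = k) = (1 + 2(k-1)) / 100^2.
formula = sum(k * (1 + 2 * (k - 1)) for k in vals) / 100**2

print(brute, formula)  # both 67.165
```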
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $A \& B \multimap A$ derivable? Intuitively, the sentence $A \& B \multimap A$ seems to mean "Using a choice between $A$ and $B$, get an $A$." This feels like it should be derivable for any $A$ and $B$, but I haven't found any way to derive it from the definition of $\&$. Is it possible to establish this in linear logic? Or, if not, what makes this sentence different from the definition of $\&$?
| $\DeclareMathOperator{\par}{\unicode{8523}}$
Yes, $A \& B \multimap A$ is provable in linear logic sequent calculus, but the derivation in the answer above is wrong, because there is no rule that allows one to derive $\vdash (A^\bot\oplus B^\bot) \par A$ from $\vdash A^\bot \par A$ (an inference rule in the sequent calculus can only introduce a new principal connective in a formula).
A correct derivation of $A \& B \multimap A$ in the one-sided sequent calculus for linear logic is below.
Note that, according to the one-sided formulation, $A \& B \multimap A$ is the same formula as $(A^\bot\oplus B^\bot) \par A$.
\begin{align}
\dfrac{\dfrac{\dfrac{}{A^\bot, A}\text{ax}}{A^\bot \oplus B^\bot, A}\oplus}{(A^\bot \oplus B^\bot) \par A} \par
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is continuous increasing function in $H^1([0,1])$ Consider a function $f(x):[0,1]\rightarrow \mathbb{R}$. If $f(x)$ is continuous and increasing, is $f(x)$ in $H^1(\Omega)$, the Sobolev space with norm $\sqrt{\int_0^1 (|f(x)|^2 + |Df(x)|^2) dx}$?
| As in the comments, the Devil's staircase, $\mathcal D=\mathcal D(x)$ is a non-decreasing function with no weak derivative; therefore it cannot lie in any Sobolev space. Since the OP wants a function that is increasing, one can consider $f(x) := x + \mathcal D(x)$ instead.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A function that is continuous almost everywhere is Lebesgue measurable If $f: E \to \mathfrak{M}$ (where $\mathfrak{M}$ is the Lebesgue measurable sets) is continuous a.e., is it true that $f$ is Lebesgue measurable?
I know that continuous functions on $E \in \mathfrak{M}$ are Lebesgue measurable, but I am wondering if this can be extended to functions that are continuous a.e.?
My intuition is that the answer is yes.
Let $D = \{x \in E: f(x) \text{ discontinuous}\}$ and $\alpha \in \mathbb{R}$. Then:
$f^{-1}((-\infty, \alpha)) = ((\{x \in E: f(x) < \alpha\} \setminus D) \cup (\{x \in E: f(x) < \alpha\} \cap D))$
The second set is a subset of $D$, which has measure 0, so it is measurable. But is the first set also measurable? Is there any easier way to prove (or disprove) the statement?
| The first set is open, hence measurable.
Edit: indeed, the first set is not open.
However, let us denote $S_1$ the first set, $S_2$ the second one, $S=S_1 \cup S_2$. Then $S_2$ has null measure and $S_1 \subset S’ \subset S=S_1 \cup S_2$ where $S’$ is the interior of $S$.
So $S$ has symmetric difference of null measure with its interior, thus is measurable.
Edit2: Let $x \in S_1$. Then $f(x) < \alpha$ and $f$ is continuous at $x$. Thus, there exists an open interval $J$ containing $x$ such that if $y \in J$, $f(y) < \alpha$, hence $x \in J \subset S$, and since $J$ is open, $x \in S’$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Looking for reference on cup and cap product without invoking acyclic model theorem I am looking for reference on cup and cap product without invoking acyclic model theorem. To me, acyclic model theorem is very strange phenomena though I could understand it but I do not see direct construction.
$\textbf{Q:}$ Is there a reference on cup and cap product construction without invoking acyclic model theorem(relative and $C(X)\otimes_Z C(Y)\cong C(X\times Y)$ as quasi isomorphism)? I would like to see a direct (computable) construction which will demonstrate non-commuativity of cup, associativity of both cap and cup whenever they are well defined. I am having trouble to see "obviously" $u\cup v=(-1)^{deg(v)deg(u)}v\cup u$ as well.(Note here I should not have written in this way as $u\in H^i(X), v\in H^j(X)$ but I have identified $H^{i+j}(X\times Y)=H^{i+j}(Y\times X)$ in the image. This is indicating diagram is commutative up to a sign.) Most of time, the book proves this by acyclic model via producing homotopy to a chain map with a sign.
| The cup product is graded commutative in homology, but not on the chain level. Maps witnessing higher non-commutativity in a coherent way are known as $i$-cup products, and were introduced by N. Steenrod in this paper. Computations there are very explicit. The fundamental result for (usual) cup products is that if $a$ and $b$ are cochains in degree $p$ and $q$, there is a cochain $a\smile_1 b$, called the 1-cup product of $a$ with $b$, so that
$$d(a\smile_1 b) -da\smile_1 b-(-1)^p a\smile_1 db= (-1)^{p+q+1}[a,b]$$
where the right hand side is the graded commutator. This MO post contains more information on these operations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $(u(x,y))^2+u(x,y)v(x,y)$ has a local maximum or minimum in $D$, then $f$ must be constant?
Let $f(z) = u(x,y)+iv(x,y)$ be an analytic function on a connected open set $D$ with $u(x,y)$ and $v(x,y)$ being the real and imaginary parts of $f(z)$, respectively. If $(u(x,y))^2+u(x,y)v(x,y)$ has a local maximum or minimum in $D$, then $f(z)$ must be a constant.
I am preparing for an upcoming final in complex analysis, and this question was given as a practice problem. The solution given seems very tedious and I suspect this can be proven with a simple contradiction. Since $D$ is open and connected, maybe we can assume $f$ is not constant and apply the open mapping theorem to arrive at a contradiction? Maybe apply the maximum modulus principle?
My apologies for the lack of work, I am not too sure how to attempt the problem. Any help would be much appreciated!
| I didn't think about it much, so I don't know if it helps for the answer, but we can show that $(u(x,y))^2+u(x,y)v(x,y)$ is constant.
For seeing this you should first calculate the Laplace operator of $g:=u^2+uv$, i.e. $\triangle g$, and see that $-\triangle g\leq 0$ everywhere. You need to use the Cauchy-Riemann equations for this.
Since an analytic function is smooth, we can use that $g\in C^2$ is subharmonic iff. $-\triangle g\leq 0$ everywhere.
You can now use the maximum principle for subharmonic functions.
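A finite-difference check for one concrete analytic function, $f(z)=z^2$ (chosen here purely for illustration; $u=x^2-y^2$, $v=2xy$), where the Cauchy-Riemann equations give $\triangle(u^2+uv)=2|\nabla u|^2\ge 0$:

```python
def g(x, y):
    u, v = x*x - y*y, 2*x*y     # real/imaginary parts of z^2
    return u*u + u*v

x0, y0, h = 0.7, -0.4, 1e-3
# Five-point stencil approximation of the Laplacian of g at (x0, y0).
lap = (g(x0 + h, y0) + g(x0 - h, y0) + g(x0, y0 + h) + g(x0, y0 - h)
       - 4 * g(x0, y0)) / h**2
grad_u_sq = (2*x0)**2 + (-2*y0)**2      # |grad u|^2 for u = x^2 - y^2

print(lap, 2 * grad_u_sq)  # both ≈ 5.2 >= 0
```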
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solving independent linear equations
\begin{align}
&{-}2y+2z-1=0 \tag{4} \\[4px]
&{-}2x+4y-2z-2=0 \tag{5} \\[4px]
&\phantom{-2}x-y+3/2=0 \tag{6}
\end{align}
Equation (6) is the sum of (4) and (5). There are only two independent equations.
Putting $z=0$ in (5) and (6) and solving for x and y, we have
\begin{align}
x&=-2 \\[4px]
y&=-1/2
\end{align}
*
*equation (6) is the sum of (4) and (5): OK, I see it
*There are only two independent equations: I didn't get; what does this sentence mean?
*Putting $z=0$ in (5) and (6): why putting z=0 in equation (5) and (6)?
Please help
| You can also think of it as follows.
*
*A single-variable linear equation represents a single point.
*A two-variable linear equation represents a line. If there are two two-variable linear equations, their solution is their intersection point, as long as the equations are independent.
*A three-variable linear equation represents a plane. If there are three three-variable equations, their solution is their intersection point, as long as they are independent.
So if you have two 3-variable linear equations (or three 3-variable linear equations in which two of them are dependent), you cannot get a single point of intersection but a collection of points which are on a line; in your case the solution line is $x=z-2$, $y=z-\frac12$, with $z$ free.
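A quick sanity check of the dependence argument (my own sketch, plain Python): the sum of the coefficient rows of (4) and (5) equals $-2$ times the row of (6), so only two equations are independent, and putting $z=0$ in (5) and (6) leaves a solvable $2\times 2$ system:

```python
# Coefficient rows [x, y, z, constant] for equations (4), (5), (6),
# each written in the form ax + by + cz + d = 0.
eq4 = [0, -2, 2, -1]
eq5 = [-2, 4, -2, -2]
eq6 = [1, -1, 0, 1.5]

# (4) + (5) is a multiple of (6), so the three equations are dependent.
assert all(a + b == -2 * c for a, b, c in zip(eq4, eq5, eq6))

# Put z = 0 in (5) and (6) and solve the 2x2 system by Cramer's rule:
# -2x + 4y = 2  and  x - y = -1.5
a11, a12, b1 = -2, 4, 2
a21, a22, b2 = 1, -1, -1.5
det = a11 * a22 - a12 * a21
x = (b1 * a22 - a12 * b2) / det
y = (a11 * b2 - b1 * a21) / det
print(x, y)  # -2.0 -0.5
```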
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3037882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
When for given $n$ and $m$ we get $p^n-1=k(p^m-1)$ provided that $k$ is an even number. Consider an odd prime number $p$.
Assume that $n$ and $m$ are two positive integer numbers
provided that $m \mid n$ which results in $p^m-1 \mid p^n-1$.
Therefore, we get $p^n-1=k(p^m-1)$ where $k$ is a positive integer number.
My question: Suppose that $n$ is fixed.
Under which condition on $m$ is $k$ an even number?
(In general, $k$ may be either odd or even.)
Thanks for any suggestions
| You can write $$\frac{p^n-1}{p^m-1}=\frac{(p^{m})^l-1}{p^m-1}=\sum_{i=0}^{l-1}(p^m)^i$$ There are $l$ terms on the RHS all being odd, so you need $l$ to be even, that is $\frac{n}{m}$ must be even.
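A brute-force check of this criterion (my own sketch, plain Python): for each divisor $m$ of $n$, the quotient $k=(p^n-1)/(p^m-1)$ is even exactly when $n/m$ is even.

```python
p, n = 3, 12
for m in range(1, n + 1):
    if n % m != 0:
        continue
    k, r = divmod(p**n - 1, p**m - 1)
    assert r == 0                                   # p^m - 1 divides p^n - 1
    assert (k % 2 == 0) == ((n // m) % 2 == 0)      # k even <=> n/m even
    print(m, k, "even" if k % 2 == 0 else "odd")
```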
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $f(x) = \ln x - 5x$, for $x > 0$. I’m in IB Math and we are working on some calculus problems but I wanted to get extra practice so this is a problem in my book. The number in parenthesis next to the parts are the “marks” we get for the question if we get it right. So, usually that’s about how much work we have to show or how many steps to take.
Let $f(x) = \ln x - 5x$, for $x > 0$.
a) Find $f’(x)$. (2)
Would this be $1/x - 5$?
b) Find $f’’(x)$. (1)
Would this be simply $1/x$?
c) Solve for $f’(x) = f’’(x)$. (2)
We set them up equal to each other. So,
$1/x - 5$ = $1/x$
The $1/x$’s cancel out and we get $-5$. Is this correct?
| Point "a)" is ok.
For point "b)" recall that
$$\frac1x = x^{-1} \implies \frac{d}{dx}(x^{-1})=-x^{-2}=-\frac1{x^2}$$
Note also that for point "c)", in any case
$$\frac1x - 5 = \frac1x \iff -5=0,$$
which would mean that the equation has no solutions.
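A quick numerical check of parts a) and b) (my own sketch, standard library only), comparing central-difference derivatives of $f(x)=\ln x-5x$ against $f'(x)=\frac1x-5$ and the corrected $f''(x)=-\frac1{x^2}$:

```python
import math

def f(x):
    return math.log(x) - 5 * x

x, h = 2.0, 1e-4
f1 = (f(x + h) - f(x - h)) / (2 * h)          # numerical f'
f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # numerical f''

print(f1, 1 / x - 5)      # both about -4.5
print(f2, -1 / x**2)      # both about -0.25
```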
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why don't we always consider complete measure spaces? Let $(\Omega ,\mathcal F,\mathbb P)$ a probability space and let $X=(X_t)$ and $Y=(Y_t)$ two stochastic processes. I know for example that $X$ and $Y$ are indistinguishable if there is a set $N$ of measure $0$ s.t. for all $\omega \notin N$ we have $X_t=Y_t$ for all $t$, but we can't write $\mathbb P\{\forall t, \ X_t=Y_t\}$ since $\{\forall t, X_t=Y_t\}$ may be not $\mathcal F-$ measurable.
*
*The thing is if $Y$ is a copy of $X$ and $(\Omega ,\mathcal F,\mathbb P)$ is complete, then $\{\forall t,X_t=Y_t\}$ is $\mathcal F-$measurable.
*I also know that each measure space can be completed by adding sets of measure zero.
Questions :
So, why don't we always work with complete measure spaces (since they can always be completed), and avoid for example the problem of the measurability of $\{\forall t, X_t=Y_t\}$ if $Y$ is a copy of $X$ (or many other measurability problems)?
In what way can working in a non-complete measure space be interesting, or at least more interesting than working with its completion? (A non-complete measure space can always be completed.)
Do you have an example where it is worth working with the uncompleted measure space rather than with the completed one?
| The main advantage I can think of is that the composition of two $(\mathcal B,\mathcal B)$-measurable functions are also $(\mathcal B,\mathcal B)$-measurable whereas the composition of two $(\mathcal L,\mathcal B)$-measurable functions need not be $(\mathcal L,\mathcal B)$-measurable. Of course, here $\mathcal B$ is the family of Borel sets and $\mathcal L$ is the family of Lebesgue measurable sets.
Many analysis books mean to say $(\mathcal L,\mathcal B)$-measurable when they say measurable. This is why those books state that we need a continuous function $\varphi$ so that $\varphi\circ f$ is measurable, provided that $f$ is a measurable function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Extra factor of 2 when evaluating an infinite sum using Fourier series and Parseval's theorem. I'm asked to find the Fourier series of the $2\pi$-periodic function $f(x)$ which is $\sin(x)$ between $0$ and $\pi$ and $0$ between $\pi$ and $2\pi$.
I use the complex form to proceed and get $$\frac{1}{2\pi}\int_{0}^{\pi}\sin(x)e^{-ikx}dx.$$ The complex coefficient $c_k$ I get as a result is $\frac{-1}{\pi(k^2-1)}$ where $k=2n$ ($k$ even), which is also correct according to WolframAlpha.
But then, I'm asked to use this result to evaluate $\sum_{n=1}^{\infty}\frac{1}{(4n^2-1)^2}$. For that, I switch to the real coefficients using $a_k=c_k+c_{-k}, b_k=i(c_k-c_{-k})$. I get: $a_k=\frac{-2}{\pi(k^2-1)}, a_0=\frac{2}{\pi}$ and $b_k$ is $0$.
So $f(x)=\frac{a_0}{2}+\sum_{k=1}^{\infty}a_k\cos(kx)$
I then use Parseval's theorem to evaluate the sum we are looking for, remembering that k is even, i.e. $k=2n$ and that the function is $0$ between $\pi$ and $2\pi$:
$$\frac{1}{\pi}\int_{0}^{2\pi}|f(x)|^2 dx=\frac{a_0^2}{2}+\sum_{k=1}^{\infty}a_k^2$$ (1)
$$\frac{1}{\pi}\int_{0}^{\pi}\sin^2(x)dx=\frac{\frac{2^2}{\pi^2}}{2}+\sum_{n=1}^{\infty}\frac{4}{\pi^2(4n^2-1)^2}$$ (2)
$$\frac{1}{2}-\frac{2}{\pi^2}=\sum_{n=1}^{\infty}\frac{4}{\pi^2(4n^2-1)^2}$$ (3)
So finally I get $\frac{\pi^2}{8}-\frac{1}{2}=\sum_{n=1}^{\infty}\frac{1}{(4n^2-1)^2}$ (4)
However, WolframAlpha gets $\frac{\pi^2}{16}-\frac{1}{2}$ so I must somehow have forgotten a factor of $\frac{1}{2}$ or put an extra factor of $2$ by $\frac{\pi^2}{8}$.
Logically, this missing/extra factor must have happened while I was evaluating $\frac{1}{\pi}\int_{0}^{\pi}\sin^2(x)dx$, because the $-\frac{1}{2}$ at the end that came from the right side of the equality is correct according to WolframAlpha. But even when I evaluate $\frac{1}{\pi}\int_{0}^{\pi}\sin^2(x)dx$ in WolframAlpha, I get the $\frac{1}{2}$ from step (3) which finally becomes $\frac{\pi^2}{8}$, and again a factor of $\frac{1}{2}$ is missing, so I'm a bit perplexed about what's wrong.
Thanks for your help !
Edit: the strange thing is that when I evaluate this sum using Parseval but with the Fourier series of $|\sin(x)|$ between $0$ and $2\pi$, I get the correct result.
| I can't check your calculations since you haven't included them, but it is clear that the Fourier series you found is not the Fourier series of $f$. Your function is not even, so it cannot have a Fourier cosine series. For example,
$$ b_1 = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin(x) \, dx = \frac{1}{\pi} \int_0^{\pi} \sin^2(x) \, dx = \frac{1}{2}. $$
My guess is that you haven't been careful in checking the special case when integrating the complex form (that is, $\int e^{ikx} \, dx = \frac{e^{ikx}}{ik} + C$ only when $k \neq 0$). In fact, the Fourier series of $f$ is given by
$$ \sum_{k = 1}^{\infty} \frac{2}{\pi(1-4k^2)} \cos(2kx) + \frac{1}{\pi} + \frac{1}{2} \sin(x) $$
and you'll get your missing factor from the extra $\sin$ term.
The complex coefficient $c_1$ is given by
$$ c_1 = \frac{1}{2\pi} \int_0^{2\pi} f(x) e^{-ix} \, dx = \frac{1}{2\pi} \int_0^{\pi} \frac{e^{ix} - e^{-ix}}{2i} e^{-ix} \, dx = \frac{1}{4\pi i} \int_0^{\pi} (1 - e^{-2ix}) \ dx = \frac{1}{4\pi i} \left[x - \frac{e^{-2ix}}{-2i} \right]_{x = 0}^{x = \pi} = -\frac{1}{4}i. $$
Similarly,
$$ c_{-1} = \frac{1}{2\pi} \int_0^{2\pi} f(x) e^{ix} \, dx = \frac{1}{2\pi} \int_0^{\pi} \frac{e^{ix} - e^{-ix}}{2i} e^{ix} \, dx = \frac{1}{4\pi i} \int_0^{\pi} (e^{2ix} - 1) \ dx = \frac{1}{4\pi i} \left[\frac{e^{2ix}}{2i} - x \right]_{x = 0}^{x = \pi} = \frac{1}{4}i. $$
Hence,
$$ b_1 = i(c_1 - c_{-1}) = i(-\frac{1}{4}i - \frac{1}{4}i) = \frac{1}{2}. $$
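As a numerical cross-check (my own sketch, standard library only): with the $\frac12\sin(x)$ term included, Parseval gives $\frac12=\frac{2}{\pi^2}+\frac14+\frac{4}{\pi^2}\sum_{n\ge1}\frac{1}{(4n^2-1)^2}$, i.e. $\sum_{n\ge1}\frac{1}{(4n^2-1)^2}=\frac{\pi^2}{16}-\frac12$, matching WolframAlpha's value rather than $\frac{\pi^2}{8}-\frac12$:

```python
import math

# partial sum of 1/(4n^2-1)^2; the tail beyond n = 200000 is negligible
S = sum(1 / (4 * n * n - 1) ** 2 for n in range(1, 200_000))
closed_form = math.pi ** 2 / 16 - 0.5

print(S, closed_form)  # both about 0.11685
```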
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Closed from of $\int_0^{\infty} \frac{e^{iax}}{x^{n}+1}dx$? I've been trying to find the general form of a certain group of integrals of the form$$I(a,n)=\int_0^{\infty} \frac{e^{iax}}{x^{n}+1}dx$$
I know that the real part of $I(a,2)$ can be calculated using Fourier Transform or residues, and $I(a,1)$ reduces to a form of the exponential integral.
I thought about approaching the integral via Fourier Transform but I did not know how to apply it to this integral. It might be able to be calculated with residues but I am not that great at complex analysis. I'm very interested in a closed form for this integral so any help would be appreciated.
| Since the integrand is a product of two Meijer G-functions and the integration range is $[0, \infty)$, there is a closed form, but it involves the Fox H-function:
$$\int_0^\infty \frac {e^{i a x}} {x^n + 1} dx =
\int_0^\infty
G_{0, 1}^{1, 0} {\left(- i a x \middle| { - \atop 0} \right)}
G_{1, 1}^{1, 1} {\left(x^n \middle| { 0 \atop 0} \right)} dx =
\frac i a H_{2, 1}^{1, 2}
{\left(
\left( \frac i a \right)^{\!n} \middle| {(0, 1), (0, n) \atop (0, 1)}
\right)}.$$
This becomes a G-function if $n$ is rational, but a rational $n$ produces an infinite number of double poles. This gives an infinite sum of polygamma terms instead of gamma terms when the H-function is evaluated by applying the residue theorem. Such a sum may have a closed form in terms of simpler functions in some special cases, which happens for $n = 1$ and $n = 2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Given $f(x)$ is integrable on $[0, 1]$ and $0 < f(x) < 1$, prove that $\int_{0}^{1} (f(x))^{n} \mathop{dx}$ converges to $0$.
Given $f(x)$ is integrable on $[0, 1]$ and $0 < f(x) < 1$, prove that
$\int_{0}^{1} (f(x))^{n} \mathop{dx}$ converges to $0$.
I understand why the statement is true intuitively because as $n \to \infty$, since $f$ lies between $0$ and $1$, it will be like a fractional value, which converges to $0$ since the fractions get smaller and smaller.
However, I am not sure about how to prove this rigorously.
| You may use the following theorem, due to Arzelà:
Let $\{f_n\}$ be a sequence of Riemann integrable functions on $[a,b]$ that converges pointwise to $f$, and suppose there is a positive number $M$ such that $|f_n(x)|\leq M$ for all $x\in [a,b]$ and all $n\in \Bbb N$. If moreover $f$ is Riemann integrable over $[a,b]$, then $$\lim_{n\rightarrow \infty}\int_a^bf_n(x)dx=\int_a^b\lim_{n\rightarrow \infty} f_n(x)dx=\int_a^b f(x) dx.$$
Here $f_n(x)=(f(x))^n\rightarrow 0$ as $n\rightarrow \infty$ for all $x\in [0,1]$.
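A small numerical illustration (my own example, plain Python): take $f(x)=\frac{1+x}{3}$, which satisfies $0<f(x)<1$ on $[0,1]$; a Riemann-sum approximation of $\int_0^1 f(x)^n\,dx$ visibly tends to $0$ as $n$ grows.

```python
def riemann(fn, a, b, m=10_000):
    # midpoint-rule approximation of the integral of fn over [a, b]
    h = (b - a) / m
    return sum(fn(a + (i + 0.5) * h) for i in range(m)) * h

f = lambda x: (1 + x) / 3
for n in (1, 5, 20, 100):
    print(n, riemann(lambda x: f(x) ** n, 0, 1))  # values decrease toward 0
```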
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
What is the convex hull of $\text{conv}(u_1,u_2,\cdots,u_p)+\text{conv}(v_1,v_2,\cdots,v_s)$? Let $u_i, i= 1,\cdots,p$ and $v_j, j= 1,\cdots,s$ be finitely many vectors in $\mathbb{R}^n$. Show that
$$
\text{conv}(u_1,u_2,\cdots,u_p)+\text{conv}(v_1,v_2,\cdots,v_s)=\text{conv}\{u_i+v_j \mid i= 1,\cdots,p, \,\, j= 1,\cdots,s\}
$$
We need to show
$$
x+y \in \text{conv}\{u_i+v_j \mid i= 1,\cdots,p, \,\, j= 1,\cdots,s\}
$$
where $x \in \text{conv}(u_1,u_2,\cdots,u_p)$ and $y \in \text{conv}(v_1,v_2,\cdots,v_s)$. Also, we need to show
$$
z \in \text{conv}(u_1,u_2,\cdots,u_p)+\text{conv}(v_1,v_2,\cdots,v_s)
$$
where $z \in \text{conv}\{u_i+v_j \mid i= 1,\cdots,p, \,\, j= 1,\cdots,s\}$.
I have tried the following for the first one:
Let $x \in \text{conv}(u_1,u_2,\cdots,u_p)$, so $x=\sum_{i=1}^p\lambda_iu_i$ where $\lambda_i\ge 0$ and $\sum_{i=1}^p\lambda_i=1$. Also, let $y \in \text{conv}(v_1,v_2,\cdots,v_s)$, so $y=\sum_{j=1}^s\mu_jv_j$ where $\mu_j\ge 0$ and $\sum_{j=1}^s\mu_j=1$.
Summing them
$$x+y=\lambda_1u_1+\lambda_2u_2+\cdots+\lambda_pu_p+\mu_1v_1+\mu_2v_2+\cdots+\mu_sv_s.$$
Now the question is how we can get something in the form of $\text{conv}\{u_i+v_j \mid i= 1,\cdots,p, \,\, j= 1,\cdots,s\}$?
| For two sets
$$\operatorname{co}(S_1 \times S_2) = \operatorname{co}(S_1) \times \operatorname{co}(S_2)$$
The inclusion from left to right is clear; for the other direction, notice the equality
$$(\sum \lambda_i u_i, \sum \mu_j v_j) = \sum \lambda_i \mu_j (u_i , v_j)$$
if $\lambda_i$, $\mu_j$ positive with sum $1$. Now apply the affine map $+\colon V\times V \to V$.
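A quick numeric check of that identity (my own example, plain Python): a convex combination of the $u_i$ plus a convex combination of the $v_j$ equals the convex combination of the sums $u_i+v_j$ with weights $\lambda_i\mu_j$, which are nonnegative and sum to $1$:

```python
U = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
V = [(1.0, 1.0), (3.0, 1.0)]
lam = (0.5, 0.3, 0.2)   # convex weights for U
mu = (0.6, 0.4)         # convex weights for V

x = tuple(sum(l * u[k] for l, u in zip(lam, U)) for k in range(2))
y = tuple(sum(m * v[k] for m, v in zip(mu, V)) for k in range(2))

w = [l * m for l in lam for m in mu]                  # weights lambda_i * mu_j
sums = [(u[0] + v[0], u[1] + v[1]) for u in U for v in V]  # points u_i + v_j
z = tuple(sum(wi * s[k] for wi, s in zip(w, sums)) for k in range(2))

assert abs(sum(w) - 1.0) < 1e-12                      # still convex weights
assert all(abs(x[k] + y[k] - z[k]) < 1e-12 for k in range(2))
print(x, y, z)
```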
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3038918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How to show polyhedral cone of nonnegative vectors contains finitely generated cone? Let $P=\{x \in \mathbb{R}^n \mid Ax \geq b, x \geq 0 \}$ be a nonempty polyhedron for matrix $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
According to Minkowski-Weyl theorem $P$ can be written as
$$
P=\text{conv}(v_1,\cdots,v_p)+ \text{cone}(d_1,\cdots,d_l)
$$
for some $v_i \in \mathbb{R}^n$ and $d_j \in \mathbb{R}^n$.
Let $C=\{x \in \mathbb{R}^n \mid Ax \geq 0, x \geq 0 \}$.
Show that $\text{cone}(d_1,\cdots,d_l) \subseteq C$.
The thing that I cannot cope with is how to connect the finite number $l$, which can be any natural number, with the dimensions of the matrix $A$.
I tried the following:
Let $z \in \text{cone}(d_1,\cdots,d_l)$, so there exist non-negative $\mu_i$'s such that
$$
z= \sum_{i=1}^l \mu_id_i
$$
where $\mu_1,\mu_2,\cdots,\mu_l \geq 0$.
We can write $z$ as the following:
$$
z=
\begin{bmatrix}
d_1 & d_2 & \cdots & d_l
\end{bmatrix}
\begin{bmatrix}
\mu_1 \\
\mu_2 \\
\cdots \\
\mu_l
\end{bmatrix}
$$
Now, we should come up with an $m \times n$ matrix $A$ for which we have $Az \geq 0$ and $z \geq 0$ to prove the claim. But the problem is we do not have $z \geq 0$ necessarily.
| We know that $P$ can be written as
$$
P=\operatorname{conv}(v_1,\cdots,v_p)+ \operatorname{cone}(d_1,\cdots,d_l)=V+D.
$$
The set $D$ is a cone, hence, for every $v\in V$ and $d\in D$ we have that $v+td\in P$, $\forall t\ge 0$. That is
$$
A(v+td)\ge b,\quad v+td\ge 0,\quad\forall t\ge 0.
$$
Now divide by $t$ and let $t\to +\infty$
\begin{align}
\frac{1}{t}Av+Ad\ge\frac{1}{t}b\quad&\Rightarrow\quad Ad\ge 0,\\
\frac{1}{t}v+d\ge 0 \quad&\Rightarrow\quad d\ge 0.
\end{align}
Therefore, every $d\in D$ belongs to $C$.
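A tiny numerical illustration of the limiting argument (my own example, plain Python): take $P=\{x\in\mathbb{R}^2 : x\ge 0,\ x_1+x_2\ge 1\}$, so $A=[1\ \ 1]$ and $b=1$. The direction $d=(0,1)$ stays in $P$ from $v=(1,0)$ for every $t\ge 0$, and dividing $A(v+td)\ge b$ by $t$ leaves exactly the conditions $Ad\ge 0$, $d\ge 0$:

```python
A = [1.0, 1.0]          # single constraint row: x1 + x2 >= 1
b = 1.0
v = (1.0, 0.0)          # a point of P
d = (0.0, 1.0)          # a recession direction of P

for t in (1.0, 10.0, 1e6):
    p = (v[0] + t * d[0], v[1] + t * d[1])
    assert A[0] * p[0] + A[1] * p[1] >= b and min(p) >= 0  # p stays in P

# dividing A(v + t d) >= b by t and letting t grow leaves:
Ad = A[0] * d[0] + A[1] * d[1]
assert Ad >= 0 and min(d) >= 0   # d belongs to C = {x : Ax >= 0, x >= 0}
print(Ad, d)
```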
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Upper bound of expected maximum of weighted sub-gaussian r.v.s Let $X_1, X_2, \ldots$ be an infinite sequence of sub-Gaussian random variables which are not necessarily independent.
My question is how to prove
\begin{eqnarray}
\mathbb{E}\max_i \frac{|X_i|}{\sqrt{1+\log i}} \leq C K,
\end{eqnarray}
where $K=\max_i \|X_i\|_{\psi_2}$. Note that $\|\cdot\|_{\psi_2}$ is the Orlicz norm for sub-Gaussian random variable.
Here is my thought that confuses me.... Consider the finite case with $i\leq N$, we have
\begin{eqnarray}
\mathbb{E}\max_{i\leq N} \frac{|X_i|}{\sqrt{1+\log i}} &=& \int_0^\infty \mathbb{P}\left(\max_{i\leq N} \frac{|X_i|}{\sqrt{1+\log i}} > t \right) dt \\
&\leq& \int_0^\infty \sum_{i=1}^N\mathbb{P}\left( \frac{|X_i|}{\sqrt{1+\log i}} > t \right) dt \\
&\leq& \sum_{i=1}^N \frac{2}{\sqrt{1+\log i}} \int_0^\infty e^{-cs^2/K^2}ds \\
&=& K\sqrt{\frac{\pi}{c}} \sum_{i=1}^N \frac{1}{\sqrt{1+\log i}}
\end{eqnarray}
where the first inequality holds by a simple union bound and the second inequality holds by sub-Gaussianity of $X_i$ (i.e. we have $\mathbb{P}\{|X_i|\geq t\} \leq 2 e^{-ct^2/\|X_i\|_{\psi_2}^2}$ and $c$ is an absolute constant) and a simple trick of change-of-variable (i.e. let $s := t\sqrt{1+\log i}$).
However, the problem of my proof above is that the sum $\sum_{i=1}^N \frac{1}{\sqrt{1+\log i}}\to\infty$ as $N\to\infty$. Intuitively, I think the inequalities I used here are not very sharp. But what is the right inequality to use in this case???
This question comes from Exercise 2.5.10 of Prof. Roman Vershynin's book titled as "High-Dimensional Probability". The electric version of this book is downloadable from his personal webpage.
| Without loss of generality, assume that $K = c$ (the constant in the exponent of the subgaussian tail bound).
\begin{eqnarray}
\mathbb{E}\max \frac{|X_i|}{\sqrt{1+\log i}} &=& \int_0^\infty \mathbb{P}\left(\max \frac{|X_i|}{\sqrt{1+\log i}} > t \right) dt\\
&\leq& \int_0^2 \mathbb{P}\left(\max \frac{|X_i|}{\sqrt{1+\log i}} > t \right) dt + \int_2^\infty \mathbb{P}\left(\max \frac{|X_i|}{\sqrt{1+\log i}} > t \right) dt
\\&\leq& 2 + \int_2^\infty \sum_{i=1}^N\mathbb{P}\left( \frac{|X_i|}{\sqrt{1+\log i}} > t \right) dt \\
&\leq& 2 + \int_2^\infty \sum_{i=1}^N 2 \exp\big(-t^2(1+\log(i))\big) dt\\
&\leq& 2 + 2\sum_{i=1}^N \int_2^\infty \exp(-t^2) \;\;i^{-t^2} dt \\
&\leq&
2 + 2\sum_{i=1}^N \int_2^\infty \exp\Big(-\frac{ct^2}{K}\Big) \;\;i^{-4}\, dt < \infty
\end{eqnarray}
I chose $2$ as the point at which to split the integral so as to make the sum convergent (other points would work as well).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Show that if $(A+2I)^2=0$, then $A+\lambda I$ is invertible for $\lambda \ne 2$.
Show that if $(A+2I)^2=0$, then $A+\lambda I$ is invertible for $\lambda \ne 2$.
I tried to solve this by treating $(A+\lambda I)v=0$ as a linear equation system and proving that $v$ must be $0$ (the trivial solution); therefore the echelon form of $A+\lambda I$ is $I$ and it is invertible.
Would love to hear another solutions, and tips to my own proof:
$$A+\lambda I=A+2I+(\lambda-2) I.$$
$$(A+\lambda I)v=0 \Rightarrow (A+2I)v+(\lambda-2)Iv=0.$$
$$(A+2I)(A+2I)v+(A+2I)(\lambda-2)Iv=0 \Rightarrow (\lambda-2)(A+2I)Iv=0 \Rightarrow (A+2I)Iv=0$$ (multiply by $A+2I$ and we know $\lambda \ne 2$).
$$(A+2I)v+(\lambda-2)Iv=0 \wedge (A+2I)Iv=0\Rightarrow (λ−2)Iv=0 \Rightarrow v=0$$
Therefore if v is solution for $(A+\lambda I)v=0$ it must be 0.
(We have also shown that for $\lambda = 2$, $v$ can be $(A+2I)$ which shows $A+2I$ isn't invertible)
BTW, the question "If $A$ is such that $(A+2I)^2=0$, prove that $A+I$ is invertible." solves the basic case $\lambda = 1$.
| Have you heard about eigenvalues? Your equation for $A$ shows that $-2$ is the only possible eigenvalue of $A$ (because any eigenvalue $x$ would satisfy $(x+2)^2=0$, the equation satisfied by $A$), so for $\lambda\neq 2$ the number $-\lambda$ is not an eigenvalue of $A$, hence $A+\lambda I$ is invertible, QED.
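A concrete sanity check (my own example, plain Python): $A=\begin{pmatrix}-2&1\\0&-2\end{pmatrix}$ satisfies $(A+2I)^2=0$, and $\det(A+\lambda I)=(\lambda-2)^2\neq 0$ for every $\lambda\neq 2$, so $A+\lambda I$ is invertible there:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

I = [[1, 0], [0, 1]]
A = [[-2, 1], [0, -2]]

B = add(A, [[2 * e for e in row] for row in I])     # A + 2I
assert matmul(B, B) == [[0, 0], [0, 0]]             # (A + 2I)^2 = 0

for lam in (-3, 0, 1, 5):                           # sample lambdas != 2
    M = add(A, [[lam * e for e in row] for row in I])
    assert det(M) == (lam - 2) ** 2 != 0            # A + lam*I invertible
print("checks passed")
```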
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Apply Rolle's theorem to find real roots
Suppose the function $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$ such that $f(a)=f(b)=0$. Prove that there exist a point $c\in(a,b)$ such that
$$f(c)-f'(c)=0$$
From the question above, or otherwise, show that the equation
$$1+x+\frac{x^2}{2!}+\cdots+\frac{x^{2n+1}}{(2n+1)!}=0$$
has real roots on $\mathbb{R}$ but not more than two.
Additionally, show that the equation
$$e^x-x^n=0$$
has at most three real roots on $\mathbb{R}$, where $n
\in \mathbb{N}$.
My attempt: Suppose the function
$$h(x)=e^{-x}f(x)$$
Then $$h'(x)=e^{-x}[f(x)-f'(x)]$$
Since $h(a)=h(b)=0$ and $e^{-x}>0$ for all real $x$, there exists a point $c\in(a,b)$ such that $h'(c)=0$, i.e. $f(c)-f'(c)=0$. Then I get stuck on the following question. Should I start by constructing a function
$$h(x)=e^{-x}[1+x+\frac{x^2}{2!}+\cdots+\frac{x^{2n+1}}{(2n+1)!}]$$
and follow my previous procedure?
| The answer above addressed the second part of your question, so I guess I'll answer the third. We break it into cases for $n$ even and $n$ odd.
Let $f(x)=e^x-x^n$, so that $f'(x)=e^x-nx^{n-1}$
First, assume $n$ is odd, so that $x<0\rightarrow x^n<0$; this along with the fact that $0$ is not a root of $f$ means our roots must be positive. Assume we have two roots $a \neq b$ and assume w.l.o.g that $a<b$, from the above result, we have that $\exists_{c\in\mathbb{R}}f(c)=f'(c)$ so we have that $e^c-c^n=e^c-nc^{n-1}\rightarrow c=n$,and obviously $c$ can only belong to one interval of roots of $f$.
Now, let $n$ be even, we apply the exact same reasoning as above to show that $f$ can have at most $2$ positive roots. To show that there can be at most $1$ negative root, we note that $x^n$ is strictly decreasing for $x\in\mathbb{R}^{-}$ and $e^x$ is strictly increasing for $x\in\mathbb{R}$ so they can (and do by IVT) intersect at most once.
So then, the total number of real roots is at most $3$ for even $n$ and at most $2$ for odd $n$, so in all cases at most $3$.
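A quick numeric illustration (my own sketch, standard library only): counting sign changes of $f(x)=e^x-x^n$ on a grid gives exactly $1$ real root for $n=2$, $2$ for $n=3$, and $3$ for $n=4$, all within the bound:

```python
import math

def count_roots(n, lo=-5.0, hi=15.0, steps=50_000):
    # count sign changes of f(x) = e^x - x^n on a uniform grid
    f = lambda x: math.exp(x) - x ** n
    h = (hi - lo) / steps
    changes = 0
    prev = f(lo)
    for i in range(1, steps + 1):
        cur = f(lo + i * h)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

for n in (2, 3, 4):
    print(n, count_roots(n))   # expected: 1, 2, 3 sign changes
```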
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Is $f_{n}$ is analytic on $(a, b)$ and $f_{n} \rightarrow f$ uniformly on $(a, b)$ then is $f$ analytic on $(a, b)$?
Is $f_{n}$ is analytic on $(a, b)$ and $f_{n} \rightarrow f$ uniformly
on $(a, b)$ then is $f$ analytic on $(a, b)$?
Intuitively, I think that the answer is no. I know that the statement holds for integrability and continuity; however, I don't think it holds for analyticity. Am I correct?
| Yes, you are correct. Just consider $$\begin{array}{rccc}f_n\colon&(-1,1)&\longrightarrow&\mathbb R\\&x&\mapsto&\sqrt{x^2+\frac1{n^2}}.\end{array}$$The sequence $(f_n)_{n\in\mathbb N}$ is a sequence of analytic functions that converges uniformly to the absolute value function, which isn't differentiable at $0$.
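A quick numeric check of the uniform convergence (my own sketch, standard library only): $0\le\sqrt{x^2+1/n^2}-|x|\le 1/n$ for every $x$, with equality at $x=0$, so $\sup_x|f_n(x)-|x||=1/n\to 0$:

```python
import math

def sup_gap(n, steps=2000):
    # max over a grid of f_n(x) - |x| on (-1, 1); the grid includes x = 0
    xs = [-1 + 2 * i / steps for i in range(steps + 1)]
    return max(math.sqrt(x * x + 1 / n**2) - abs(x) for x in xs)

for n in (1, 10, 100):
    gap = sup_gap(n)
    print(n, gap)              # gap equals 1/n, attained at x = 0
    assert abs(gap - 1 / n) < 1e-12
```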
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\lim_{n \to \infty}(1+\frac{1}{n^2})(1+\frac{2}{n^2})...(1+\frac{n}{n^2})=e^{\frac{1}{2}}$. Here is the beginning of a proof:
Suppose $0<k \leq n$,
$1+\frac{1}{n}<(1+\frac{k}{n^2})(1+\frac{n+1-k}{n^2})=1+\frac{n+1}{n^2}+\frac{k(n+1-k)}{n^4}\leq 1+\frac{1}{n}+\frac{1}{n^2}+\frac{(n+1)^2}{4n^4}$.
I'm confused by the second inequality above.
| Hint: By AM-GM Inequality
$$\frac{k+(n+1-k)}{2} \geq \sqrt{k(n+1-k)}$$
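A direct numerical check of the limit itself (my own sketch, standard library only): for large $n$ the product $\prod_{k=1}^n\left(1+\frac{k}{n^2}\right)$ is close to $e^{1/2}$:

```python
import math

def prod(n):
    # summing logs avoids accumulating rounding error in the long product
    return math.exp(sum(math.log1p(k / n**2) for k in range(1, n + 1)))

for n in (10, 1000, 100_000):
    print(n, prod(n))          # tends to e**0.5, about 1.6487
```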
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
In what dimensions are PL and Diff equivalent? I have read in different places, that for $n\leq 4$, PL and Diff are equivalent (passing through PDIFF). I believe Milnor gave an example of the inequivalence for $n=7$. What about for $n=5$ or $n=6$? Are there examples of PL manifolds which are not smoothable?
| A more general question is References on the relations between Top, Diff and PL. It gives a reference to https://mathoverflow.net/q/96670 which gives a complete (positive) answer for $n = 5, 6$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find all matrices which satisfy $M^2-3M+3I = 0$ I am trying to find all matrices which solve the matrix equation
$$M^2 -3M +3I=0$$
Since this doesn't factor I tried expanding this in terms of the coordinates of the matrix. It also occurs to me to put it into "vertex" form:
$$M^2 - 3M + \frac{9}{4}I+\frac{3}{4}I=0$$
$$(M-\frac{3}{2}I)^2 = -\frac{3}{4}I$$
but this doesn't look much better.
What I found from expanding by coordinates was, if $M=\pmatrix{a & b \\ c & d}$ then
$$\pmatrix{a^2+bc -3a + 3& ab + bd - 3b \\ ac+cd-3c & bc+d^2-3d+3} = \pmatrix{0&0\\0&0}$$
From the off-diagonal entries I get that either
$$a+d-3=0$$
or
$$b=c=0$$
If $a+d-3\not=0$ then $a^2-3a+3=0$ and likewise for $d$. Then we get more cases for $a$ and $d$.
If $a+d-3=0$ the upper-left is unchanged and the lower-right is
$$bc + (3-a)^2-3(3-a)+3 = 0$$
which simplifies to the same thing from the upper-left and so is redundant. In the off-diagonals
$$ac+c(a-3)-3c = 0 \Rightarrow $$
$$2ac-6c = 0$$
We again get cases, and I suppose after chasing cases enough you get the solution set.
However, it just feels like this can't be the intended solution given how tedious and uninformative all of this case-chasing is. Is there some bigger idea I'm missing?
| The minimal polynomial of $M$, $m_M(x)$, is a factor of $x^2-3x+3=\left[x-\frac{3+i\sqrt3}2\right]\left[x-\frac{3-i\sqrt3}2\right]$.
Either $m_M(x)=x-\frac{3+i\sqrt3}2\implies M=\frac{3+i\sqrt3}2 I$
or $m_M(x)=x-\frac{3-i\sqrt3}2\implies M=\frac{3-i\sqrt3}2 I$
or $m_M(x)=x^2-3x+3\implies$ the eigenvalues of $M$ are $\frac{3\pm i\sqrt3}2$
In case of $2\times2$ matrices, product of eigenvalues $=\det(M)=3$, sum of eigenvalues $=\text{Tr}(M)=3$
We have $M=\begin{bmatrix}a&b\\c&3-a\end{bmatrix}$ and $3a-a^2-bc=3; a,b,c\in\Bbb C$.
You could go for $3\times3,4\times4,...$ matrices by defining the same eigenvalues and conditions. In case you are looking for real matrices, you just have to take the real subset of these matrices.
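A quick check of the $2\times2$ parameterization (my own sketch, plain Python): pick $a=0$, $b=1$, $c=-3$, so that $3a-a^2-bc=3$ holds, giving $M=\begin{pmatrix}0&1\\-3&3\end{pmatrix}$, and verify $M^2-3M+3I=0$:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c = 0, 1, -3
assert 3 * a - a * a - b * c == 3      # the trace/determinant condition

M = [[a, b], [c, 3 - a]]
M2 = matmul(M, M)
R = [[M2[i][j] - 3 * M[i][j] + 3 * (i == j) for j in range(2)]
     for i in range(2)]
assert R == [[0, 0], [0, 0]]           # M^2 - 3M + 3I = 0
print(M)
```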
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3039910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 1
} |