| Q | A | meta |
|---|---|---|
Problem about subgroup of $D_n$ Prove that for every subgroup of $D_n$, either every member of the subgroup is a rotation or exactly half of its members are rotations.
Intuitively, if every member is a rotation then they can form a subgroup, because composing rotations gives rotations (closure) and the other group properties are also satisfied. But how do we prove that in the remaining case exactly half of the members are rotations? Please give hints to start!
Thanks
|
This is what the question is really asking you:
Let $D_n$ be the dihedral group of order $2n$ and $R$ the order $n$ group of rotations.
Then if $H$ is any subgroup of $D_n$, either $[H:H \cap R] = 1$ or $[H:H \cap R] = 2$.
If $H \subseteq R$, then $H \cap R = H$, and we have $[H:H \cap R] = [H:H] = 1$.
Otherwise, we can invoke (via the second isomorphism theorem):
$[H: H \cap R] = [HR:R]$.
Now $HR$ is a subgroup of $D_n$ properly containing $R$, and $[D_n:R] = 2$, so $HR = D_n$. Can you continue?
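If it helps to see the statement concretely before proving it, here is a small brute-force check (my own sketch in Python, not part of the hint above): it enumerates every subgroup of $D_4$ and confirms the all-or-half pattern, encoding an element as a pair $(k,s)$ meaning $r^k f^s$:

```python
from itertools import combinations

n = 4
elements = [(k, s) for k in range(n) for s in range(2)]  # (rotation power, flip bit)

def mul(x, y):
    # In D_n, r^a f^s * r^b f^t = r^(a + (-1)^s b) f^(s + t), since f r = r^(-1) f
    (a, s), (b, t) = x, y
    return ((a + (-1) ** s * b) % n, (s + t) % 2)

def is_subgroup(S):
    # a nonempty finite subset closed under the product is a subgroup
    return all(mul(x, y) in S for x in S for y in S)

subgroups = [S for r in range(1, len(elements) + 1)
             for S in map(frozenset, combinations(elements, r))
             if (0, 0) in S and is_subgroup(S)]

for H in subgroups:
    rotations = sum(1 for (_, s) in H if s == 0)
    assert rotations in (len(H), len(H) // 2)
print(f"all {len(subgroups)} subgroups of D_{n} are all-rotation or half-rotation")
```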
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1793972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Expand $(a-b)^3$ without formulas When I solve $(a-b)^2$ I take $aa-ab-ab+bb$. I do not use formulas at all, because I only forget them. To solve the above example, all I do is multiply one variable or constant at a time, but when I ask anyone or anything how to solve $(a-b)^3$, all they tell me is to use this or that formula, etc. And I hate formulas because I only forget them. I want to learn how to calculate math, not how to use math formulas. Since no one so far has answered my question, I am now turning my hope to you.
How do I calculate $(a-b)^3$ without formulas? Am I supposed to do $aaa+a(-2)(-2)$ and so on, or what?
I really appreciate all the help I can get, thanks!
|
Arrange $(a-b)^3$ as $(a-b)(a-b)(a-b)$. Now sum over all ways of picking either $a$ or $-b$ from each factor. So you get
$$aaa+(-b)aa+a(-b)a+aa(-b)+(-b)(-b)a+(-b)a(-b)+a(-b)(-b)+(-b)(-b)(-b)$$
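Grouping the equal products (three terms contain one $-b$, three contain two) then collapses this sum to
$$a^3-3a^2b+3ab^2-b^3,$$
and you can always re-derive that on the spot by the same counting instead of memorizing it.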
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Define a metric for an annulus, which makes it seem like the curved wall of a cylinder.
Can anybody please help me in understanding this question?
|
Hint: They want you to define a metric in the annulus so that the shortest path between 2 points with the same radius is not the length of the straight line between them, but the length of the shortest arc joining them, as it is if you're forced to walk along the wall of a cylinder. This can then be extended to a metric for points with different radii.
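To make this concrete, here is one metric of that kind (a sketch; the cylinder radius $R$ and the use of the radial coordinate as the cylinder's height are assumptions of mine, not part of the answer):

```python
import math

R = 1.0  # assumed cylinder radius; annulus points are given in polar form (r, theta)

def cylinder_metric(p, q):
    """Distance between annulus points as if walking on a cylinder's wall:
    the radial coordinate plays the role of height, angles contribute arc length."""
    (r1, t1), (r2, t2) = p, q
    dtheta = abs(t1 - t2) % (2 * math.pi)
    arc = R * min(dtheta, 2 * math.pi - dtheta)  # length of the shortest arc
    return math.hypot(r1 - r2, arc)

# Same radius, opposite angles: the planar chord (length 4) is irrelevant;
# the distance is the half-circumference pi * R.
print(cylinder_metric((2.0, 0.0), (2.0, math.pi)))  # ~3.1416
```

For two points at the same radius this returns the shortest arc length, as the hint requires; for different radii it extends to the product metric of the circle of angles with the radial line.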
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Estimate the sum of alternating harmonic series between $7/12$ and $47/60$ How can I prove that:
$$\frac{7}{12} < \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n} < \frac{47}{60}$$
? I don't even know how to start solving this...
|
First note that the series converges using Leibniz Test.
Next, denote by $S_N$ the partial sum $\sum_{n=1}^N\frac{(-1)^{n-1}}{n}$. Then, we must have $$S_{2N}<\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}<S_{2N+1}$$
Finally, we see that $\sum_{n=1}^4 \frac{(-1)^{n+1}}{n}=\frac{7}{12}$ and inasmuch as the next term is positive, the value of the series must exceed $7/12$. Similarly, we see that $\sum_{n=1}^5 \frac{(-1)^{n+1}}{n}=\frac{47}{60}$ and inasmuch as the next term is negative, the value of the series must be less than $47/60$.
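A quick exact check of those two partial sums (a sketch using Python's exact rational arithmetic):

```python
from fractions import Fraction

partial = lambda N: sum(Fraction((-1) ** (n + 1), n) for n in range(1, N + 1))
print(partial(4), partial(5))  # 7/12 and 47/60, bracketing log(2) ~ 0.6931
```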
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
Finding $z$ for Complex Convergence I am having an issue understanding how to go about solving a problem regarding complex sequences. The problem is as follows:
Find a $z$ for which the following sequence converges: $f_{n} (z) =e^{nz}$
My attempt thus far is:
$$f_{n} (z) = e^{nz} = e^{nx} \cdot (\cos(ny) + i\sin(ny))$$
Then I was planning to use the fact that both the real part and the complex part must converge if the sequence converges, so:
$$\text{Re}(f_{n}(z)) = e^{nx}\cos(ny)$$
$$\text{Im}(f_{n}(z)) = e^{nx}\sin(ny)$$
But where do I go from here?
|
Let $z=x+iy$. Then, note that $f_n(z)=e^{nx}e^{iny}$.
The real and imaginary parts of the sequence $f_n(z)$ are given respectively by
$$\text{Re}(f_n(z))=e^{nx}\cos(ny)$$
and
$$\text{Im}(f_n(z))=e^{nx}\sin(ny)$$
Note that if $x< 0$, the exponential term $e^{nx}$ approaches zero as $n\to \infty$. Since both the sine and cosine functions are bounded in absolute value by $1$, both real and imaginary parts of the sequence converge to zero for $x<0$.
If $x=0$ and $y=2\pi \ell$ for any integer $\ell$, then the sequence converges to $1$.
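A numerical illustration of the two regimes described above (a sketch; the sample points are my own choices):

```python
import cmath

def f(n, z):
    return cmath.exp(n * z)

z = complex(-0.5, 3.0)           # Re z < 0: modulus e^{nx} decays to 0
print([abs(f(n, z)) for n in (1, 10, 50)])

z = complex(0.0, 2 * cmath.pi)   # x = 0, y = 2*pi: e^{2*pi*i*n} = 1 for every n
print([f(n, z) for n in (1, 2, 3)])  # stays at 1 (up to rounding)
```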
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Evaluation of Definite Integral
Evaluation of $\displaystyle \int_{0}^{\frac{\pi}{2}}\frac{\cos x\sin 2x \sin 3x}{x}dx$
$\bf{My\;Try::}$ Let $\displaystyle \int_{0}^{\frac{\pi}{2}}\frac{\cos x\sin 2x \sin 3x}{x}dx = \frac{1}{2}\int_{0}^{\frac{\pi}{2}}\frac{2\sin 3x\cos x\sin 2x}{x}dx$
So we get $$I = \frac{1}{2}\int_{0}^{\frac{\pi}{2}}\frac{(\sin4x+\sin 2x)\sin 2x}{x}dx$$
So $$I=\frac{1}{4}\int_{0}^{\frac{\pi}{2}}\frac{\cos2x-\cos6x+1-\cos 4x}{x}dx$$
Now how can I proceed from here? Help me
Thanks
|
Hint 1: $$ \int f(x) = F(x) + C \Longrightarrow \int f(ax+b) = \frac{1}{a} \cdot F(ax+b) + C$$
Hint 2: $$ \int \left(f(x) + g(x)\right) = \int f(x) + \int g(x) $$
Hint 3: $$ \int \frac{\cos x}{x} = Ci(x) + C$$
First step:
(use hint 2, writing each $\cos ax$ as $(\cos ax - 1) + 1$; the constant terms contribute $1-1+1-1=0$, so no divergent $\int \frac{1}{x}\,dx$ appears)
$$
4I=\int_{0}^{\frac{\pi}{2}}\frac{\cos2x-\cos6x+1-\cos 4x}{x}dx =\\
=2\int_{0}^{\frac{\pi}{2}}\frac{\cos 2x -1}{2x}\,dx - 6 \int_{0}^{\frac{\pi}{2}} \frac{\cos 6x -1}{6x}\,dx
- 4\int_{0}^{\frac{\pi}{2}} \frac{\cos 4x -1}{4x}\,dx
$$
Now let $f(x) := \frac{\cos x - 1}{x}$ so
$$
4I = 2 \int_{0}^{\frac{\pi}{2}} f(2x)\,dx - 6 \int_{0}^{\frac{\pi}{2}} f(6x)\,dx - 4 \int_{0}^{\frac{\pi}{2}} f(4x)\,dx
$$
Edit after comment: You can also use Taylor expansion of $\cos x$. But then you will have series as result (without $Ci$ function).
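As a numerical sanity check of the product-to-sum reduction (a sketch assuming SciPy; `quad` never evaluates at the endpoint $x=0$, where both integrands extend continuously by $0$):

```python
import numpy as np
from scipy.integrate import quad

orig = lambda x: np.cos(x) * np.sin(2 * x) * np.sin(3 * x) / x
reduced = lambda x: (np.cos(2 * x) - np.cos(6 * x) + 1 - np.cos(4 * x)) / (4 * x)

I1, _ = quad(orig, 0, np.pi / 2)
I2, _ = quad(reduced, 0, np.pi / 2)
print(I1, I2)  # the two values agree: the trigonometric step is an identity
```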
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Asymptotic behaviours from Fourier transforms I have completely forgotten how one derives the asymptotic behaviour in frequency space, given the asymptotic behaviour of the function in real space (e.g. time). As an example, it is often said that when $f(t)\sim t^\alpha$ for $t\to\infty$, then $\hat f(\omega)\sim\omega^{-\alpha-1}$ for $\omega\to 0$. Aside from a dimensional analysis, how do you derive this result a bit more rigorously?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
See Method of Steepest Descent : With $\ds{\int_{-\infty}^{\infty}t^{\alpha}\expo{\ic\omega t}\dd t}$ you get the ${\it saddle\ point}$ $\ds{t_{s} = \alpha\ic/\omega}$ and the ${\it asymptotic\ behavior}$
$$
\bbx{\pars{\root{2\pi}\,\ic^{\alpha - 1}\,\alpha^{\alpha + 1/2}\,\,\expo{-\alpha}}\ {1 \over \omega^{\alpha + 1}}}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Uniform convergence of $\sum_{n=-\infty}^{\infty}\frac{1}{n^2 - z^2}$ on any disc contained in $\mathbb{C}\setminus\mathbb{Z}$ I'm currently revising some complex analysis, and need to show that the series $$\sum_{n=-\infty}^{\infty}\frac{1}{n^2 - z^2}$$ defines a holomorphic function on $\mathbb{C}\setminus\mathbb{Z}$. The hint that the question gives me is to consider any disc contained in $\mathbb{C}\setminus\mathbb{Z}$ and show that the series converges uniformly there.
I can see how to complete the question once I've done this, but it's late, I'm tired, and I can't figure out how to show the series converges uniformly. I've tried the Weierstrass M-Test, but this failed for me on discs where $z$ was close to $n$, for obvious reasons.
Any hints would be greatly appreciated.
|
Let $K \subset (B(0,R) \cap \mathbb{C} \setminus \mathbb{Z})$ be a compact set and set $d(K,\mathbb{Z}) = \delta > 0$.
If $|n| > R$, then for all $x \in K$ we have
$$ |n^2 - x^2| \geq ||n|^2 - |x|^2| = |n|^2 - |x|^2 \geq |n|^2 - R^2. $$
If $|n| \leq R$ then for all $x \in K$ we have
$$ |n^2 - x^2| = |x - n||x - (-n)| \geq \delta^2. $$
Hence,
$$ \sup_{x \in K} \left| \frac{1}{n^2 - x^2} \right| \leq
\begin{cases} \frac{1}{\delta^2} & |n| \leq R, \\
\frac{1}{|n|^2 - R^2} & |n| > R
\end{cases} $$
and so the series converges uniformly on $K$ by the Weierstrass $M$-test. Finish by taking $K = \overline{B(x_0,r)}$ for all $x_0 \in \mathbb{C} \setminus \mathbb{Z}$ and small enough $r$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
On changing limits of integration when there are domain problems. As an example say I have $$\int_{\pi/2}^\pi \frac{2}{1- \sin(2x)} dx$$
I would like to perform the substitution $2x = \arcsin(u)$ but I notice this would not be surjective on the interval given by the extremes of integration.
So, to solve the problem, I say instead of having the domain of $\sin(x)$ be $[0, 2 \pi]$ let it be $[3 \pi/2, 7 \pi/2]$ in this way $\arcsin(u)$ will take values in $[\pi, \pi/2]$.
But now performing the substitution I obtain
$$\int_{1}^{-1} \frac{1}{(1-u)\sqrt{1-u^2}} du$$
That does not give the same value as the original integral, which evaluates to $2$. Where is the mistake in my reasoning?
|
Your domain of integration should not extend to 1. Instead, work with
$\int_{\pi/2}^{3\pi/4} \frac{2}{1 - \sin(2x)} dx+ \int_{3\pi/4}^{\pi}\frac{2}{1 - \sin(2x)} dx$ such that your substitution of arcsin is well-defined.
Given $x \in [\pi/2, 3\pi/4]$, we know that $\cos(2x)$ is non-positive, so it must be the case that $du = 2\cos(2x) dx = -2\sqrt{1 - \sin^2(2x)} dx = -2\sqrt{1 - u^2} dx$. I feel that this subtlety may have been neglected in your derivation.
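A quick numerical check of the value $2$ quoted in the question (a sketch assuming SciPy; the integrand is continuous on $[\pi/2,\pi]$ since $\sin(2x)\le 0$ there):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 2 / (1 - np.sin(2 * x)), np.pi / 2, np.pi)
print(val)  # 2.0, matching the value stated in the question
```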
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why is a Lie algebra of a matrix Lie group not closed under complex scalar multiplication? Let the set $\mathfrak{g}$ be the Lie algebra of a matrix Lie group $G$. Then my book asserts that $\mathfrak{g}$ is a real vector space because it's closed under real scalar multiplication. My question is: why is it not closed under complex scalar multiplication?
If $X \in \mathfrak{g}$ and the corresponding exponentiated matrix $e^{tX}\in G$ ($t \in \mathbb{R}$), I don't see why multiplication of $X$ by a complex scalar $C$, namely $CX$, will make $CX\notin \mathfrak{g}$.
|
If $G$ is a real Lie group, so in particular it is a real manifold, then its tangent spaces will be real vector spaces. In particular its Lie algebra will be a real vector space. If, on the other hand, $G$ is a complex Lie group, then its Lie algebra will be a complex vector space.
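A concrete example may help (my own illustration): take $G=U(1)=\{e^{i\theta} : \theta\in\mathbb{R}\}$, a real Lie group whose Lie algebra is $\mathfrak{u}(1)=i\mathbb{R}$. The element $X=i$ belongs to $\mathfrak{u}(1)$, since $e^{ti}$ has modulus $1$ for every real $t$. But the complex multiple $CX=i\cdot i=-1$ does not: $e^{t(-1)}=e^{-t}$ has modulus $e^{-t}\neq 1$ for $t\neq 0$, so the curve leaves $U(1)$. Closure under real scalars is built into the definition, but multiplying by a complex scalar can rotate a tangent direction out of the group.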
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to prove a complex limit with epsilon delta definition? I have $$\lim_{z \to i} \frac {iz^3-1}{z+i}=0$$
To prove this I am trying to use the epsilon-delta definition: for every $\varepsilon >0$ there is a $\delta >0$ such that
$$\left|\frac {iz^3-1}{z+i}\right|< \varepsilon \quad \mathrm {whenever} \quad 0<|z-i|<\delta$$
I factor the implication on the right to get:
$$\left|\frac {(z-i)(z^2+iz-1)}{z+i}\right| < \varepsilon$$
I know that I need to "solve" for $z-i$ and in order to do so I need to see how the other terms on the LHS behave when in a close neighborhood of $z-i$. Therefore, I pick an arbitrary value such that $|z-i| < \delta \le 1$.
This is how I have seen other people do it with real limits, however, I get stuck here because when I solve for $z$ with use of the triangle inequality, I get that $|z| \le 0$ which is obviously not true. I can see that I could choose another bound, but is this not supposed to work for any arbitrary value? Also, if it works, how should I proceed?
|
If we assume $0 < |z-i| < \delta \le 1$, then
$$|z+i| = |-2i - (z-i)| \ge
\Big||2i| - |z-i|\Big| = 2 - |z-i| > 2 - \delta \ge 1$$
(using the reverse triangle inequality), and
$$|z^2 + iz - 1| = |(z-i)^2 + 3i(z-i) - 3| \le \delta^2 + 3 \delta + 3 \le 7$$
(using the regular triangle inequality), so
$$\Big| \frac{(z-i)(z^2 + iz - 1)}{z+i} \Big| \le 7\delta.$$
Given a fixed $\varepsilon > 0$ (small enough) you can let $\delta = \varepsilon / 7.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1794989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Factoring a degree 4 polynomial without power of 2 term For my hobby, I'm trying to solve for $x$ in $ax^4 + bx^3 + dx + e = 0$. (Note there's no $x^2$ term.) I hope there is a simple solution.
I'm trying to write it as $(fx + g)(hx^3+i) = 0$
It follows that
$fh=a; gh=b; if=d; gi=e$
At first sight it looks promising with 4 equations and 4 unknowns ($f,g,h,i$). Unfortunately when substituting them you'll find a dependency so that this only works when $db=ae$. Is there an easy solution for the more general case?
|
I suppose that you are searching for a decomposition of the given polynomial into factors with real coefficients.
First note that your condition $db=ae$ means
$$
\frac{a}{d}=\frac{b}{e}=k
$$
so the polynomial is obviously decomposable as:
$$
kdx^4+kex^3+dx+e=kx^3(dx+e)+dx+e=(dx+e)(kx^3+1)
$$
Also note that a degree $4$ polynomial without the $x^2$ term can decompose in other ways, for instance:
$$
3x^4+5x^3+5x-3=(x^2+1)(3x^2+5x-3)
$$
So, in general, for a quartic equation of the given form too, the solutions can only be found using the (not simple) general methods.
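To double-check the corrected example above (a sketch assuming SymPy):

```python
from sympy import symbols, expand, factor

x = symbols('x')
print(expand((x**2 + 1) * (3 * x**2 + 5 * x - 3)))  # 3*x**4 + 5*x**3 + 5*x - 3
print(factor(3 * x**4 + 5 * x**3 + 5 * x - 3))      # (x**2 + 1)*(3*x**2 + 5*x - 3)
```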
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Number of Subgroups of $C_p \times C_p$ and $C_{p^2}$ for $p$ prime As the title says, I am interested in finding all subgroups of $C_p \times C_p$ and $C_{p^2}$ for $p$ prime.
We did not cover the Sylow-theorem so far in the lecture.
What I noticed so far:
As $C_p$ is of order $p$, the elements can only have order $p$ or $1$ due to Lagrange's theorem. There is only one element of order 1, namely the neutral element. Hence there are $p^2-1$ elements in $C_p \times C_p$ with order $p$.
Because $p$ is prime, $C_p$ is cyclic and hence there exists an element, call it $a$ which is of order p and generates the whole group. All other powers of $a$ are generating $C_p$ as well, so $(a^i,1)$ for $i=1 \dots p-1$, generate one subgroup that is not trivial.
Using the same argumentation for the second factor, I conclude that there are $2$ nontrivial subgroups and $2$ trivial ones.
Consider the case $p=2$ now.
$\langle(a,a)\rangle$ is one as well, hence $5$ subgroups. But I am stuck with what happens for $(a^i, a^j)$ for $i \neq j$ and $i, j >0$ for general $p$ prime. Could you post some hints, please?
For $C_{p^2}$, we know that there is at least one element, call it $c$. This generates the whole group. According to Lagrange, there can only be elements that are either of order $p^2, p$ or $1$. All $p^2$ elements will generate the whole group, so they are quite uninteresting.
For $p=2$ again, $\langle 2\rangle$ is another subgroup, nontrivial, of order $p$. For this case, there are in total $3$ subgroups ($2$ trivial ones and $\langle 2\rangle$). I cannot find any meaningful generalisation. Any help is greatly appreciated.
|
Every element of order $p$ in $G=C_p\times C_p$, and there are $p^2-1$ of them, generates a cyclic subgroup of order $p$, and every such subgroup has $p-1$ generators. This implies that there are $\frac{p^2-1}{p-1}$ subgroups of order $p$, that is, $p+1$. As there are also the trivial group and the whole group (and no others, in view of Lagrange's theorem), the number of subgroups of $G$ is $p+3$.
On the other hand, a cyclic group $C_n$ has exactly one subgroup for each divisor of $n$, so $C_{p^2}$ has three subgroups, corresponding to the divisors $1$, $p$ and $p^2$ of $p^2$.
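Both counts are easy to confirm by brute force for small primes (a sketch in Python; it uses the fact noted above that every proper nontrivial subgroup of $C_p\times C_p$ is cyclic of order $p$):

```python
def subgroups_of_Cp_x_Cp(p):
    G = [(a, b) for a in range(p) for b in range(p)]
    # every proper nontrivial subgroup is cyclic, so collect <g> for each g, plus G
    cyclic = {frozenset(((k * a) % p, (k * b) % p) for k in range(p)) for (a, b) in G}
    return cyclic | {frozenset(G)}

for p in (2, 3, 5, 7):
    assert len(subgroups_of_Cp_x_Cp(p)) == p + 3
print("subgroup count is p + 3 for p = 2, 3, 5, 7")
```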
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Finding if a linear transformation is diagonalisable Hi, I am having some trouble tackling this question for my exam revision.
Let $M_{(2,2)}(\mathbb{R})$ denote the vector space of $2\times 2$ matrices over the real numbers. Also, let $A$ denote the matrix
$$\begin{bmatrix}2&\lambda\\1&0\end{bmatrix}$$
where $\lambda$ is a real number. Consider the map
$$\psi : V\rightarrow V, \psi(X)=AX-XA. $$
Compute the eigenvalues for $\psi$. For which values of $\lambda$ is $\psi$ diagonalisable?(check the dimension of the kernel of $\psi$.)
It is clear to see that $\psi$ is a linear transformation and hence it is diagonalisable if it has two distinct real eigenvalues.
The problem I am having is how to find the eigenvalues of this linear transformation. I have tried finding the eigenvalues of the matrix $A$ and obtained $\lambda > -1$, but I do not think this is correct, as I haven't considered the kernel or the actual map itself at all.
Secondly, I tried using the hint, and first I obtained the kernel:
$$AV - VA=0$$
$$\begin{bmatrix}2&\lambda\\1&0\end{bmatrix} \begin{bmatrix}v_1&v_2\\v_3&v_4\end{bmatrix}-\begin{bmatrix}v_1&v_2\\v_3&v_4\end{bmatrix}\begin{bmatrix}2&\lambda\\1&0\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
rearranging I obtained
$$\begin{bmatrix}-v_2+\lambda v_3&2v_2+\lambda v_4-\lambda v_1\\v_1-2v_3-v_4&v_2-\lambda v_3\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
from this i obtained $\lambda=\frac{v_2}{v_3}$
so $\ker(\psi)=c\begin{bmatrix}2&\frac{v_2}{v_3}\\1&0\end{bmatrix}$ for any $c,v_2,v_3 \in\mathbb{R}$. I am stuck on how I am supposed to compute the eigenvalues, and I can see that $\dim(\ker)=1$; using the rank-nullity theorem, this means $\dim(\operatorname{im}(\psi))$=1
|
$\psi$ is a linear transformation between $M_{2\times 2} \rightarrow M_{2\times 2},$ so you need to find a matrix representation $[\psi]$ of $\psi$ and compute the eigenvalues and eigenvectors of $[\psi]$. Since $M_{2\times 2}$ is 4-dimensional, $[\psi]$ will be a $4 \times 4$ matrix. It will be diagonalizable if and only if it has 4 linearly independent eigenvectors. It is not necessary that it has 4 distinct eigenvalues. It may have only 1 eigenvalue with an eigenspace of dimension 4, for example.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Drawing conclusions from a differential inequality Let $f(x)$ be a smooth real function defined on $x>0$. It is given that:
*
*$f$ is an increasing function ($f'(x)>0$ for all $x>0$).
*$x \cdot f'(x)$ is a decreasing function.
I am trying to prove that:
$$ \lim_{x\to 0}f(x) = -\infty $$
EXAMPLE: $f(x) = -x^{-q}$, for some constant $q>0$. Then $f$ is increasing, $x\cdot f'(x) = q x^{-q}$ is decreasing, and indeed $ \lim_{x\to 0}f(x) = -\infty $.
If this is not true, what other conditions are required to make it true?
|
This is an alternate proof via contradiction, using limits and the greatest lower bound property. If $\lim_{x \to 0}f(x)$ is not $-\infty,$ then since $f$ is increasing for positive $x$ it would follow that $\lim_{x \to 0}f(x)=L$ where $L$ is the greatest lower bound of the set of values $f(x),\ x>0.$
From that it follows that $\lim_{x \to 0} x\, f(x)=0$ and so we may apply L'Hopital's rule (for $x \to 0$) to the fraction
$$\frac{x \ f(x)}{x}.$$
The denominator derivative being $1,$ the L'Hopital equivalent limit is that of
$$D[x \ f(x)]=f(x)+xf'(x)$$
as $x \to 0,$ i.e. the equivalent limit is $L+\lim_{x \to 0}[x\ f'(x)].$
Now using that the fraction we applied L'Hopital to was just $f(x)$ (whose limit is $L$), we may conclude that
$$\lim_{x \to 0}x f'(x)=0.$$
However the other assumption of the problem is that $x\ f'(x)$ is decreasing, and combined with its limit at $0$ existing and being $0,$ we would get for positive $x$ that $x\ f'(x)<0,$ which implies $f'(x)<0$ against the other assumption that $f$ is increasing.
Note: The approaches of $x$ to zero here are all from the right, naturally; just didn't want to clutter the notation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Lower bound on quadratic form Suppose I have a non-symmetric matrix $A$ and I can prove that $x^T A x = x^T \left(\frac{A+A^T}{2}\right) x>0$ for any $x \ne 0$? Can I then say that $x^T A x \ge \lambda_{\text{min}}(A) \|x\|^2 > 0$?
|
Let quadratic form $f$ be defined by $f (x) = x^T A x$, where $A \in \mathbb{R}^{n \times n}$. Since $x^T A x$ is a scalar, then $(x^T A x)^T = x^T A x$, i.e., $x^T A^T x = x^T A x$. There are infinitely many matrix representations of $f$. We take affine combinations of $A$ and $A^T$, and any such combination yields $f$
$$x^T (\gamma A + (1-\gamma) A^T) x = f (x)$$
where $\gamma \in \mathbb{R}$. We choose $\gamma = \frac{1}{2}$, which yields the symmetric matrix $\frac{A+A^T}{2}$, which is diagonalizable, has real eigenvalues and orthogonal eigenvectors. Hence, it has the eigendecomposition
$$\frac{A+A^T}{2} = Q \Lambda Q^T$$
Thus,
$$x^T \left(\frac{A+A^T}{2}\right) x = x^T Q \Lambda Q^T x$$
If the eigenvalues are nonnegative, then we can take their square roots
$$x^T \left(\frac{A+A^T}{2}\right) x = x^T Q \Lambda Q^T x = \|\Lambda^{\frac{1}{2}} Q^T x\|_2^2 = \|\Lambda^{\frac{1}{2}} y\|_2^2 \geq 0$$
where $y = Q^T x$. We conclude that $f$ is positive semidefinite. If all the eigenvalues are positive, then $f$ is positive definite. Note that
$$\begin{array}{rl} \|\Lambda^{\frac{1}{2}} y\|_2^2 &= \displaystyle\sum_i \lambda_i \left(\frac{A+A^T}{2}\right) y_i^2\\ &\geq \displaystyle\sum_i \lambda_{\text{min}} \left(\frac{A+A^T}{2}\right) y_i^2 = \lambda_{\text{min}} \left(\frac{A+A^T}{2}\right) \|y\|_2^2\end{array}$$
Since the eigenvectors are orthogonal, $Q^T Q = I_n$, then
$$\|y\|_2^2 = \|Q x\|_2^2 = x^T Q^T Q x = \|x\|_2^2$$
We thus obtain
$$x^T A x = x^T \left(\frac{A+A^T}{2}\right) x \geq \lambda_{\text{min}} \left(\frac{A+A^T}{2}\right) \|x\|_2^2$$
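A quick numerical confirmation of the final bound (a sketch assuming NumPy; `eigvalsh` computes the eigenvalues of the symmetric part):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))                 # generic non-symmetric matrix
lam_min = np.linalg.eigvalsh((A + A.T) / 2).min()

for _ in range(1000):
    x = rng.standard_normal(5)
    assert x @ A @ x >= lam_min * (x @ x) - 1e-9   # the bound derived above
print("bound holds on 1000 random vectors")
```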
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Square root of both sides If you have the equation:
$x^2=2$
You get:
$x=\pm \sqrt{2}$
But what do you actually do? What do you multiply both sides by to get this answer? You take the square root of both sides, but the square root of what? Do you understand what I mean?
|
$$x^2=2 \Rightarrow \begin{cases}y=x^2\\y=2 \end{cases}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 1
}
|
Mean-value Theorem $f(x)=\sqrt{x+2}; [4,6]$
Verify that the hypothesis of the mean-value theorem is satisfied for the given function on the indicated interval. Then find a suitable value for $c$ that satisfies the conclusion of the mean-value theorem.
$$f(x)=\sqrt{x+2}; [4,6]$$
So,
$$f'(x) = {1 \over 2} (x+2)^{-{1\over 2}}$$
$f(x)$ is differentiable for all x. Now,
$$f'(c) = {f(b)-f(a)\over b- a} \\
= {f(6) - f(4)\over 6-4} \\
= {2\sqrt{2} - \sqrt{6}\over2}$$
Since $f'(c) = {2\sqrt{2} - \sqrt{6}\over2}$,
$${1\over 2}(c+2)^{-{1\over 2}} = {2\sqrt{2} - \sqrt{6}\over2}$$
Then, solving this will give me $c$, right?
I just want to know if I did it right so far. I tried simplifying the last equation, but it wasn't right. Please let me know if there is anything wrong in my steps, if not could anyone help me solve for $c$ at the end? Thank you.
|
You did everything correctly, let's solve for $c$ together.
You have
$$
\frac{1}{2} (c+2)^{-1/2} = \frac{a}{2}\\
(c+2)^{-1/2} = a \\
\frac{1}{\sqrt{c+2}} = a \\
c+2 = \frac{1}{a^2}
$$
so
$$
\begin{split}
c &= \frac{1}{a^2} - 2 = \frac{1}{\left(2 \sqrt{2} - \sqrt{6}\right)^2} - 2 \\
&= \frac{1}{8 + 6 - 4\sqrt{2}\sqrt{6}} - 2 \\
&= \frac{1}{14 - 8\sqrt{3}} - 2 \\
&= \frac{14 + 8 \sqrt{3}}{14^2 - 3 \cdot 8^2} - 2 \\
&= \frac{14 + 8 \sqrt{3}}{4} - 2 \\
&= \frac{7}{2} +2\sqrt{3} - 2 \\
&= \frac{3}{2} +2\sqrt{3}
\end{split}
$$
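A numerical check of this value (a sketch):

```python
import math

f4, f6 = math.sqrt(6), math.sqrt(8)   # f(4) = sqrt(4 + 2), f(6) = sqrt(6 + 2)
slope = (f6 - f4) / (6 - 4)           # right-hand side of the MVT equation
c = 1.5 + 2 * math.sqrt(3)
print(c)                              # ~4.9641, inside [4, 6] as required
print(0.5 / math.sqrt(c + 2), slope)  # f'(c) and the difference quotient agree
```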
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
how many distinct values does it have? I solved this problem by manually adding parentheses and counting, and got the correct answer of 32. Is there a simpler way to find the answer? Thanks.
The value of the expression $1÷2÷3÷5÷7÷11÷13$ can be altered by including parentheses. If we are allowed to place as many parentheses as we want, how many distinct values can be obtained for this expression?
|
In general, inserting parentheses in
$$ a_1 \div a_2 \div a_3 \div \cdots \div a_n $$
can produce every number of the form
$$ a_1^{\strut}a_2^{-1}a_3^{s_3} a_4^{s_4} \cdots a_n^{s_n}$$
(and only those), where each $s_i$ is either $1$ or $-1$. Note that the exponents of $a_1$ and $a_2$ are fixed.
If $a_3$ through $a_n$ are coprime (as is the case here), this gives $2^{n-2}$ different possible values.
Proof by induction on $n$. The base case is $n=2$ where there is only a single possibility.
For $n>2$, first parenthesize $a_1\div\cdots\div a_{n-1}$ in order to get the desired exponents for $a_1$ through $a_{n-1}$.
Then, if the desired $s_n$ is opposite to $s_{n-1}$ then just replace $a_{n-1}$ by $(a_{n-1}\div a_n)$.
Otherwise the desired exponents $s_n$ and $s_{n-1}$ are equal. Let $E$ be the fully parenthesized expression such that $(E\div a_{n-1})$ is a subexpression of what we got from the induction hypothesis (that is, $E$ is the left operand to the division whose immediate right operand is $a_{n-1}$, and such a division always exists because $n-1\ge 2$), and replace this entire subexpression with $((E \div a_{n-1})\div a_n)$.
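The count of $32 = 2^{7-2}$ for the question's numbers can also be confirmed by enumerating every full parenthesization directly (a sketch using exact rational arithmetic):

```python
from fractions import Fraction
from functools import lru_cache

nums = (1, 2, 3, 5, 7, 11, 13)

@lru_cache(maxsize=None)
def values(seq):
    """All values obtainable by fully parenthesizing seq[0] / seq[1] / ... / seq[-1]."""
    if len(seq) == 1:
        return frozenset({Fraction(seq[0])})
    out = set()
    for i in range(1, len(seq)):      # top-level split: (left part) / (right part)
        for l in values(seq[:i]):
            for r in values(seq[i:]):
                out.add(l / r)
    return frozenset(out)

print(len(values(nums)))  # 32, i.e. 2^(n-2) with n = 7
```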
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Let $A= \{1,2,3,4,5,6,7,8,9,0,20,30,40,50\}$. 1. How many subsets of size 2 are there? 2.How many subsets are there altogether?
Let $A= \{1,2,3,4,5,6,7,8,9,0,20,30,40,50\}$.
1. How many subsets of size $2$ are there?
2.How many subsets are there altogether?
Answer:
1) I think there are $7$ subsets of size two, since $14$ elements $/\,2=7$, and grouping the elements into pairs also gives $7$ pairs, as follows: $\{\{1,2\}, \{3,4\}, \{5,6\}, \{7,8\}, \{9,0\}, \{20,30\}, \{40,50\}\}$
2) I chose $15$, but I'm not sure.
However, are my answers valid? Hints are much appreciated.
|
The total number of subsets of size $2$ is $\binom{14}{2}$. To understand this, try to see how many ways there are to pick two distinct elements from the set.
For the second part, using the same idea as in the previous part, there are $\binom{14}{k}$ subsets of size $k$. So the total sum is:
$$\sum_{n=0}^{14} \binom{14}{n} = \sum_{n=0}^{14} \binom{14}{n}(1)^{14-n}(1)^{n} = (1+1)^{14} = 2^{14}$$
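Both counts are one-liners to confirm (a sketch):

```python
from math import comb

print(comb(14, 2))                                 # 91 subsets of size 2
print(sum(comb(14, k) for k in range(15)), 2**14)  # 16384 16384
```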
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1795982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Proof that a discrete space (with more than 1 element) is not connected I'm reading this proof that says that a non-trivial discrete space is not connected. I understood that the proof works because it separates the discrete set into a singleton $\{x\}$ and its complement. Since they're both open, their intersection is empty and their union is the entire space; this is a nontrivial separation, therefore the space is not connected. But why is a finite set of points open? I remember proving that such a set is closed, since I just have to pick a ball in the complement with radius the minimum of the distances to those points. I know that a set being closed doesn't mean it's not open, but how do I prove that it is open?
Update: what's the simplest proof that does not involve topology, only metrics?
|
It is true that finite sets are closed in every T$_1$ space, and thus they are closed in every discrete space. Also by the definition of the discrete topology, $\textit{every}$ subset of the space is open. So suppose $X$ is discrete and has more than one point. Let $x\in X$. Then $\{x\}$ is open. It is also closed (it is finite), and so its complement is also open (and nonempty). So $X$ is not connected.
If you want to prove this in terms of metrics, the discrete topology on $X$ is induced by the metric $d(x,y)=0$ if $x=y$ and $d(x,y)=1$ if $x\neq y$. So if $x\in X$ then $$B_d (x,1)=\{y\in X:d(x,y)<1\}=\{x\}$$ and $$X\setminus \{x\}=\bigcup _{y\neq x} \{y\}=\bigcup _{y\neq x} B_d (y,1),$$ so $X$ is the union of two disjoint nonempty open sets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Linear connection on a 1-form Let $M$ be a manifold with linear connection $\nabla$ and let $X$ be a vector field on $M$. Given a 1-form $\alpha \in \Omega^{1}(M)$, define $\nabla_{X} \alpha : \mathscr{X}(M) \to C^{\infty}(M)$ by $$[\nabla_{X} \alpha] (Z) = X(\alpha(Z)) - \alpha(\nabla_{X}Z)$$ for $Z \in \mathscr{X}(M)$. Show that $\nabla_{X} \alpha$ is a 1-form on $M$.
I tried writing everything in coordinate form. $X =\sum X^{i} \frac{\partial}{\partial x^{i}}$ and $\alpha = \sum f_{i} dx^{i}$ and then taking $Z$ to be the basis $\frac{\partial}{\partial x^{i}}$ (for each $i$ separately to see what it does to the basis) but I'm stuck. Not sure where to go from here. Thanks in advance.
|
$[\nabla_{X} \alpha] (fY) = X (\alpha(f Y)) - \alpha(\nabla_{X} (fY))$
$= X(f\, \alpha(Y)) - \alpha ((Xf)Y + f\,\nabla_{X}Y)$
$= fX(\alpha (Y)) + (Xf)\,\alpha (Y) - (Xf)\, \alpha (Y) - f\, \alpha(\nabla_{X} Y)$
$= f\left(X(\alpha (Y)) - \alpha(\nabla_{X} Y)\right) = f\, [\nabla_{X} \alpha](Y)$
Together with additivity in the argument (which is immediate), this shows that $\nabla_X\alpha$ is $C^\infty(M)$-linear, i.e. tensorial, and hence a 1-form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Examine convergence of $\int_0^{\infty} \frac{1}{x^a \cdot |\sin(x)| ^b}dx$ Examine the convergence of $\int_0^{\infty} \frac{1}{x^a \cdot |\sin(x)| ^b}dx$ for $a, b > 0$. There are two problems: $|\sin(x)|^b = 0$ for $x = k \pi$, and $x^a = 0$ for $x = 0$. We can write $\int_0^{\infty} \frac{1}{x^a \cdot |\sin(x)| ^b}dx = \int_0^{1} \frac{1}{x^a \cdot |\sin(x)| ^b}dx + \int_1^{\infty} \frac{1}{x^a \cdot |\sin(x)| ^b}dx$, but what to do next?
|
To even have a chance at convergence at $\infty$, you need $a > 1$. However, then near $x = 0$, we have $x^a \lvert \sin(x) \rvert^b \approx x^{a+b}$ and since $a+b > 1$, we will have divergence near $x=0$. Thus the integral diverges for all $a,b > 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Show that the sphere $S^2$ and $\mathbb{R}^2$ are not homeomorphic I am trying to show that the sphere $S^2$ and $\mathbb{R}^2$ are not homeomorphic. I understand that you can't 'compress' a 3D shape into a 2D plane, but I don't know how I would express this formally.
$S^2 = \{(x, y, z) ∈ \mathbb{R}^3: x^2 + y^2 + z^2 = 1\}$
As always, any help is appreciated!
|
Homeomorphism will preserve any "topological" property of spaces - in particular, $S^2$ is compact and $\mathbb R^2$ is not, so they can't be homeomorphic.
In fact, the image of a compact space under a continuous map is compact, so there is not even a surjective continuous map $S^2 \to \mathbb R^2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Show that no line with a y-int of 10 will ever be tangential to the curve with $y=3x^2+7x-2$ Show that no line with a y-int of 10 will ever be tangential to the curve with $y=3x^2+7x-2$.
I'm having trouble showing this. So far these are my steps:
*
*Let line be $y=mx+10$
*$mx+10 = 3x^2+7x-2$
*$3x^2+(7-m)x-12=0$
*Apply quadratic formula
*$x=\frac{(m-7)\pm\sqrt{m^2-14m+193}}{6}$
A bit stuck here. Maybe I've missed the whole point and complicated this. Any help is appreciated! Thanks in advance :)
|
You tagged calculus so with derivatives: the slope of a tangent to the given function is
$$y'=6x+7\implies\;\text{for any point on the graph }\;\;(a, 3a^2+7a-2)$$
the tangent line to the function at that point is
$$y-(3a^2+7a-2)=(6a+7)(x-a)\implies y=(6a+7)x-3a^2-2$$
and thus the $\;y\,-$ intercept is $\;-3a^2-2\;$ , and this is $\;10\;$ iff
$$-3a^2-2=10\iff a^2=-4$$
and this last equality can't be true in the real numbers
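For completeness, the approach begun in the question also finishes in one more step: a line through $(0,10)$ is tangent exactly when $3x^2+(7-m)x-12=0$ has a double root, i.e. when its discriminant
$$(7-m)^2+144=m^2-14m+193$$
vanishes. Being a square plus $144$, it is strictly positive for every real $m$, so every such line in fact meets the parabola in two distinct points and is never tangent.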
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
What is $\mathbb{Z_{n}}\left [ x \right ]$
Question: Show that $\mathbb{Z_{n}}\left [ x \right ]$ has characteristic $n$.
What does $\mathbb{Z_{n}}\left [ x \right ]$ stand for? I'm quite sure this is not the Gaussian ring.
|
Let $\mathbb{Z}_n$ be the set of integers $\{0,1,\ldots,n-1\}$ equipped with the operations of addition mod $n$ and multiplication mod $n$. It can be shown this structure is a ring. $\mathbb{Z}_n[x]$ is defined as the set of polynomials of the form $a_n x^n + \cdots + a_1 x + a_0$, where $a_i \in \mathbb{Z}_n$ equipped with the usual operations of addition and multiplication of polynomials. This structure is also a ring.
If we take any polynomial $f(x)$ in $\mathbb{Z}_n[x]$ and add $f(x)$ to itself $n$ times, we get $0$, because $a_i + a_i + \cdots + a_i = na_i = 0$ in $\mathbb{Z}_n$. Recall that the characteristic of a ring $R$ is defined to be the smallest positive integer $k$ such that $r+r+\cdots+r=kr=0$ for all $r \in R$. It is clear then that the characteristic of $\mathbb{Z}_n[x]$ is at most $n$. By considering the polynomial $f(x)=a_0=1$, we see that the number of times we need to add $f(x)=1$ to itself to get $0$ is at least $n$. Hence, the characteristic of $\mathbb{Z}_n[x]$ is exactly $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Roots of $x^{101}-100x^{100}+100=0$ I do not know how to prove that $x^{101}-100x^{100}+100=0$ has exactly two positive roots.
Can someone give me a hint for solving this, please? Thanks for your time.
|
Descartes' rule of signs indicates that $P(x)=x^{101}-100x^{100}+100$ has either zero or two positive roots.
But $P(0)>0$ and $P(2)<0$ so $P(x)$ has at least one positive root, hence it has exactly two positive roots.
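The sign of $P(2)$ is quick to check by factoring out $2^{100}$:
$$P(2)=2^{101}-100\cdot 2^{100}+100=2^{100}(2-100)+100=-98\cdot 2^{100}+100<0.$$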
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
}
|
find the maximum of the function $f(x)=a+b\sqrt{2}\sin{x}+c\sin{2x}$ Let $a,b,c\in \mathbb{R}$ be such that $a^2+b^2+c^2=100$. Find the maximum value and minimum value of the function
$$f(x)=a+b\sqrt{2}\sin{x}+c\sin{2x},\qquad 0<x<\dfrac{\pi}{2}$$
Use Cauchy-Schwarz inequality?
|
Use Cauchy-Schwarz inequality:
$$\left(a+b\sqrt{2}\sin{x}+c\sin{2x}\right)^2\le (a^2+b^2+c^2)(1+2\sin^2x+\sin^22x)$$
$$\left(a+b\sqrt{2}\sin{x}+c\sin{2x}\right)^2\le 100\cdot(1+2\sin^2x+\sin^22x)$$
$$|a+b\sqrt{2}\sin{x}+c\sin{2x}|\le 10\cdot\sqrt{1+2\sin^2x+\sin^22x}$$
$$1\le1+2\sin^2x+\sin^22x\le \frac{13}{4}$$
Then $$-5\sqrt{13}\le a+b\sqrt{2}\sin{x}+c\sin{2x}\le 5\sqrt{13}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A function and its Fourier transform cannot both be compactly supported unless $f=0$ Problem : Suppose that $f$ is continuous on $\mathbb{R}$. Show that $f$ and $\hat f$ cannot both be compactly supported unless $f=0$.
Hint : Assume $f$ is supported in $[0,1/2]$. Expand $f$ in a Fourier series in the interval $[-1,1]$, and note that as a result, $f$ is a trigonometric polynomial.
Using the hint, I proved that $f$ is a trigonometric polynomial.
But I don't see how to finish: why can't the Fourier transform also be compactly supported?
Can I get some hints?
|
Suppose the support of $f$ is contained in $[-1,1],$ and $\hat f (y) = 0$ for $|y|>N \in \mathbb N.$ Applying a standard Fourier series argument on $[-\pi,\pi]$ then shows
$$f(x) = \sum_{-N}^{N}\hat f (n) e^{inx}, x \in [-\pi,\pi].$$
Thus $f$ is a trigonometric polynomial that vanishes on $[1,\pi].$ But a trigonometric polynomial on $[-\pi,\pi]$ that vanishes at an infinite number of points must vanish identically. Thus $f\equiv 0.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1796949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How do I prove something without premises in a Fitch system? If asked “Prove in Fitch: From no premises, derive $A \lor (A \to B)$. Without using Taut Con?"
These are the Fitch rules, and this is what I have so far.
Should I aim to use $\lor$ Elim to isolate both sides and then derive with the method I'm currently trying? I'm unsure how to piece that part together.
|
Yuck! It looks like somebody is trying to give you a headache.
To solve this there are a couple of general tricks you'll need to implement.
*
*derive $\neg (C\lor D)\vdash \neg C$.
*derive $\neg(C\to D)\vdash C$.
Combining yields a derivation of $\neg(A\lor (A\to B))\vdash A\land\neg A$.
Toward 1, after assuming $\neg (C\lor D)$ you'll want temporarily to assume $C$. An application of $\lor$-intro gives you a contradiction.
Toward 2, you'll first want to have achieved
*derive $C,\neg C\vdash D$.
With 3 in hand, let's return to 2. Assume $\neg (C\to D)$. Temporarily assume $\neg C$. Further temporarily assume $C$. After reaching $D$ by trick 3, conditional proof gives $C\to D$. You now have a contradiction, returning $\neg\neg C$.
Finally toward 3, the strategy is to 'convict the innocent': after assuming $C$ and $\neg C$, further temporarily assume $\neg D$. Here you can assert $C\land \neg C$. Using this proof of a contradiction which begins from the assumption $\neg D$ you can infer $\neg\neg D$, and hence $D$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Describe, as a direct sum of cyclic groups, the cokernel of a given map $\phi: \mathbb{Z}^{3} \longrightarrow \mathbb{Z}^{3}$ I'm trying to solve the following:
Describe, as a direct sum of cyclic groups, the cokernel of the map $\phi: \mathbb{Z}^{3} \longrightarrow \mathbb{Z}^{3}$ given by left multiplication by the matrix
$$
\left(\begin{matrix} 15 & 6 & 9 \\ 6 & 6 & 6 \\ -3 & -12 & -12 \end{matrix}\right)
$$
So, the cokernel is $\mathbb{Z}^{3}/\phi(\mathbb{Z}^{3})$, I know that I can only get at most two groups of order 3, but I'm not able to describe it. Any hint?
|
Thanks to Derek Holt and SpamIAM for the recommendations and useful links; after a while spent reading and understanding modules over a PID, I finally got an answer.
Let $\phi$ be a $\mathbb{Z}$-linear map determined by $\phi(e_{1}) = f_{1}, \dots, \phi(e_{n}) = f_{n}$, where $e_{1}, \dots, e_{n}$ is the standard basis of $\mathbb{Z}^{n}$. Then $\phi(e_{j}) = \sum_{i=1}^{n} c_{ij}e_{i}$ for $j = 1, \dots, n$, so $(c_{ij})$ is the matrix representation of $\phi$ with respect to the basis. Then
$$
\phi(\mathbb{Z}^{n}) = \mathbb{Z}\phi(e_{1}) \oplus \dots \oplus \mathbb{Z}\phi(e_{n}) = \mathbb{Z}f_{1} \oplus \dots \oplus \mathbb{Z}f_{n},
$$
By choosing aligned bases for $\mathbb{Z}^{n}$ and its submodule $\phi(\mathbb{Z}^{n})$, we can say that
$$
\mathbb{Z}^{n} = \mathbb{Z}v_{1} \oplus \dots \oplus \mathbb{Z}v_{n}, \hspace{1 em} \phi(\mathbb{Z}^{n}) = \mathbb{Z}a_{1}v_{1} \oplus \dots \oplus \mathbb{Z}a_{n}v_{n}
$$
where $a_{i}$'s are nonzero integers. Then
$$
\mathbb{Z}^{n}/\phi(\mathbb{Z}^{n}) \cong \bigoplus_{i=1}^{n} \mathbb{Z}/a_{i}\mathbb{Z}
$$
Now, for our solution we need to compute the Smith normal form of the matrix, since the $a_{i}$ are exactly its diagonal entries $M_{i,i}$. Here the Smith normal form is:
$$
\left(\begin{matrix}
3 & 0 & 0 \\
0 & 3 & 0 \\
0 & 0 & 18
\end{matrix}\right)
$$
So, we can describe the cokernel as the direct sum of cyclic groups:
$$
\mathbb{Z}^{3}/\phi(\mathbb{Z}^{3}) \cong \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/18\mathbb{Z}
$$
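The Smith normal form step can be checked with a computer algebra system (a sketch assuming SymPy, whose `smith_normal_form` lives in `sympy.matrices.normalforms`):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[15, 6, 9], [6, 6, 6], [-3, -12, -12]])
print(smith_normal_form(M, domain=ZZ))  # diagonal entries 3, 3, 18
```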
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Show that $f(x)=\ln(x)$ is not uniformly continuous on $(0,\infty)$ I'm trying to show that $f(x)=\ln(x)$ is not uniformly continuous on the interval $(0,\infty).$
This is what I have so far:
Let $\epsilon=1.$
Choose $\delta=$
if $x,y\in(0,\infty)$ with $|y-x|<\delta$ then $|f(y)-f(x)|=|\ln\left(\frac{y}{x}\right)|$
I'm stuck at this point though, are there any well known inequalities I can use here?
|
Working with $\epsilon$ and $\delta$ quickly becomes tedious and annoying, it is thus better to learn more convenient and powerful techniques. Remember that $f$ is uniformly continuous on $S$ if and only if for every sequences $(x_n), (y_n) \subseteq S$ with $d(x_n, y_n) \to 0$ we have that $d(f(x_n), f(y_n)) \to 0$ (with $d$ denoting the distances in the domain the definition and the range of $f$).
In our case, we suspect that $\ln$ fails to be uniformly continuous near $0$. Choose, therefore, $x_n = \textrm e^{-n}$ and $y_n = \textrm e^{-n + 1}$. Notice that $|x_n - y_n| \to |0 - 0| = 0$, but $$| \ln x_n - \ln y_n | = | \ln \textrm e^{-n} - \ln \textrm e^{-n + 1} | = | -n - (-n + 1)| = 1 \not\to 0 ,$$
which shows that $\ln$ is not uniformly continuous on $(0,\infty)$. (It is, though, on every interval of the form $[a,b)$ with $a > 0$ and $b$ possibly infinite.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
}
|
Large cycles in bridgeless cubic graphs Wikipedia tells us that most cubic graphs have a Hamilton cycle (for instance, the proportion of Hamiltonian graphs among the cubic graphs on $2n$ vertices converges to 1 as $n$ goes to infinity), but it is also kind enough to provide us with some pictures of non-Hamiltonian cubic graphs:
Staring at these graphs it is easy to see a pattern: in all three cases there is a large cycle $C$ such that all the points not on $C$ have all three of their neighbours on $C$, instead of being connected to each other. (In the bottom two graphs this is most obvious.) Since the same is somewhat vacuously true for cubic graphs that do have a Hamilton cycle, my question is:
Is it true that in every connected bridgeless cubic graph there is a cycle $C$ such that each vertex is either on $C$ or has all three of its neighbors on $C$?
(Of course also many vertices on $C$ have all their neighbors on $C$ so in spite of writing 'either' I just mean the ordinary, inclusive or here, not the exclusive or.)
Edit: the graphs in the pictures are called 'Tutte graph', 'Coxeter graph' and 'Petersen graph' respectively.
|
Even though this question seems to be of no interest to anyone I still thought it would be good form to post here a counterexample I found since posting:
Suppose (aiming for a contradiction) that the graph contains a cycle $C$ of the form described in the question. Picking any of the red edges it is easy to conclude that it is absolutely necessary that this edge belongs to $C$. Now by symmetry the same arguments apply to the other red edges and hence all 8 of them must be part of $C$. This in turn implies that the green vertices are not on $C$ but have exactly 2 of their neighbors on $C$ instead of 3, contradicting the presumed nature of $C$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Using inverse of transpose matrix to cancel out terms? I am trying to solve the matrix equation $A = B^TC$ for $C$, where $A$, $B$, and $C$ are all non-square matrices. I know that I need to utilize $M^TM$ in order to take the inverse. I'm just not sure how to isolate $C$ in the equation provided.
|
We have the matrix equation in $\mathrm C$
$$\mathrm B^\top \mathrm C = \mathrm A$$
Let's left-multiply both sides by $\mathrm B$
$$\mathrm B \mathrm B^\top \mathrm C = \mathrm B \mathrm A$$
If $\mathrm B$ has full row rank, then $\mathrm B \mathrm B^\top$ is invertible. Hence,
$$\mathrm C = (\mathrm B \mathrm B^\top)^{-1} \mathrm B \mathrm A$$
If $\mathrm B$ does not have full row rank, then we vectorize $\mathrm B^\top \mathrm C = \mathrm A$, which gives us the linear system
$$(\mathrm I \otimes \mathrm B^\top) \operatorname{vec} (\mathrm C) = \operatorname{vec} (\mathrm A)$$
which can be solved using Gaussian elimination.
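A small numerical sketch of the full-row-rank case (assuming NumPy; the dimensions are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 5))       # 3 x 5, full row rank with probability 1
C_true = rng.standard_normal((3, 4))
A = B.T @ C_true                      # build a consistent system B^T C = A

C = np.linalg.solve(B @ B.T, B @ A)   # C = (B B^T)^{-1} B A
print(np.allclose(B.T @ C, A))        # True: the recovered C solves the equation
```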
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How many ways are there to select $15$ cookies if at most $2$ can be sugar cookies?
A cookie store sells 6 varieties of cookies. It has a large supply of each kind. How many ways are there to select $15$ cookies if at most $2$ can be sugar cookies?
For my answer, I put $6 \cdot 6 \cdot 5^{13}$. My logic was to assume that the first two are sugar cookies, so there are only $5$ choices for the next $13$ cookies, but I am not sure if this is correct.
|
Let $x_k$ be the number of cookies of type $k$, $1 \leq k \leq 6$. Since an order of $15$ cookies is placed,
$$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 15 \tag{1}$$
where equation 1 is an equation in the non-negative integers. A particular solution to equation 1 corresponds to the placement of five addition signs in a row of fifteen ones. For instance,
$$1 1 1 + + 1 1 1 1 + 1 + 1 1 + 1 1 1 1 1$$
corresponds to the solution $x_1 = 3$, $x_2 = 0$, $x_3 = 4$, $x_4 = 1$, $x_5 = 2$, $x_6 = 5$. Thus, the number of solutions of equation 1 is the number of ways five addition signs can be inserted into a row of fifteen ones, which is
$$\binom{15 + 5}{5} = \binom{20}{5}$$
since we must choose which five of the twenty symbols (five addition signs and fifteen ones) will be addition signs.
However, we have to exclude those orders in which more than two sugar cookies are selected. We must subtract the number of orders in which more than two sugar cookies are selected from the total.
Suppose $x_1$ denotes the number of sugar cookies in the order. If $x_1 > 2$, then $y_1 = x_1 - 3$ is a non-negative integer. Substituting $y_1 + 3$ for $x_1$ in equation 1 yields
\begin{align*}
y_1 + 3 + x_2 + x_3 + x_4 + x_5 + x_6 & = 15\\
y_1 + x_2 + x_3 + x_4 + x_5 + x_6 & = 12 \tag{2}
\end{align*}
Equation 2 is an equation in the non-negative integers with
$$\binom{12 + 5}{5} = \binom{17}{5}$$
solutions.
Subtracting the number of orders in which more than two sugar cookies are selected from the total number of ways of ordering fifteen cookies selected from the six varieties at the shop yields
$$\binom{20}{5} - \binom{17}{5}$$
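Both routes give the same number, which is easy to confirm (a sketch; the second line counts directly, case by case over the number $s\le 2$ of sugar cookies):

```python
from math import comb

print(comb(20, 5) - comb(17, 5))                   # 9316
print(sum(comb(15 - s + 4, 4) for s in range(3)))  # 9316: s sugar cookies, rest free
```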
Why is your solution incorrect?
*
*There are six varieties of cookies, of which just one is sugar cookies. Thus, there is one way of selecting a sugar cookie, not six.
*An order can contain up to two sugar cookies. You assumed that two were selected.
*If two sugar cookies are selected, they can be selected anywhere in the order. They do not need to be selected in the first two positions.
*By multiplying the number of ways of ordering two sugar cookies by $5^{13}$, you were counting ordered selections. However, the order in which the cookies are ordered is irrelevant.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Estimation of Integral $E(x)$
How can we prove $$\frac{1}{2}e^{-x}\ln\left(1+\frac{2}{x}\right)<\int_{x}^{\infty}\frac{e^{-t}}{t}dt<e^{-x}\ln\left(1+\frac{1}{x}\right)\;, x>0$$
$\bf{My\;Try::}$ Let $\displaystyle f(x)=\int_{x}^{\infty}\frac{e^{-t}}{t}dt\;,$ Then $\displaystyle f'(x) = -\frac{e^{-x}}{x}<0\;\forall x>0$
So $f(x)$ is a strictly decreasing function.
Now how can I estimate the above integral? Help required, thanks.
|
Upper Bound Inequality
Note that we have the elementary inequalities
$$\frac{1}{x+1}\le \log\left(1+\frac1x\right)\le \frac1x \tag 1$$
Using the left-hand side inequality in $(1)$, it is easy to see that
$$\frac{1}{x}\le \log\left(1+\frac1x\right) -\left(\frac{1}{x+1}-\frac{1}{x}\right) \tag 2$$
Multiplying $(2)$ by $e^{-x}$ and integrating, we find that
$$\begin{align}
\int_x^{\infty}\frac{e^{-t}}{t}\,dt& \le \int_x^\infty \left(e^{-t}\log\left(1+\frac1t\right) -e^{-t}\left(\frac{1}{t+1}-\frac{1}{t}\right)\right)\,dt\\\\
&=-\int_x^\infty \frac{d}{dt}\left(e^{-t}\log\left(1+\frac1t\right)\right)\,dt\\\\
&=e^{-x}\log\left(1+\frac1x\right)
\end{align}$$
Lower Bound Inequality
To establish the lower bound, we first show that the logarithm function satisfies the inequality
$$\log\left(1+\frac2x\right)\le \frac1x+\frac1{x+2} \tag 3$$
for $x>0$.
To show that $(3)$ is valid, we proceed as follows.
Define a function $f(x)$ to be
$$f(x)=\log\left(1+\frac2x\right)- \frac1x-\frac1{x+2}$$
Then, the derivative of $f(x)$ is given by
$$\begin{align}
f'(x)&=\frac{1}{x+2}-\frac1x+\frac{1}{x^2}+\frac{1}{(x+2)^2}\\\\
&=\frac{4}{x^2(x+2)^2}\\\\
&>0
\end{align}$$
Inasmuch as $f(x)$ is monotonically increasing with $\lim_{x\to \infty}f(x)=0$, then $f(x)< 0$.
Now, using $(3)$, it is easy to see that
$$\frac12\left(\frac1x-\frac{1}{x+2}\right)+\frac12\log\left(1+\frac2x\right)<\frac1x \tag 4$$
Multiplying both sides of $(4)$ by $e^{-x}$ and integrating yields
$$\begin{align}
\int_x^\infty \frac{e^{-t}}{t}\,dt &> \frac12\int_x^\infty \left(e^{-t}\left(\frac1t-\frac{1}{t+2}\right)+e^{-t}\log\left(1+\frac2t\right)\right)\,dt\\\\
&=-\frac12 \int_x^\infty \frac{d}{dt}\left(e^{-t}\log\left(1+\frac2t\right)\right)\,dt\\\\
&=\frac12 e^{-x}\log\left(1+\frac2x\right)
\end{align}$$
as was to be shown!
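Both bounds can also be checked numerically through the exponential integral $E_1(x)=\int_x^\infty t^{-1}e^{-t}\,dt$ (a sketch assuming SciPy):

```python
import numpy as np
from scipy.special import exp1  # exp1(x) = integral from x to infinity of e^(-t)/t dt

for x in (0.1, 1.0, 5.0, 20.0):
    lower = 0.5 * np.exp(-x) * np.log(1 + 2 / x)
    upper = np.exp(-x) * np.log(1 + 1 / x)
    assert lower < exp1(x) < upper
print("bounds verified at sample points")
```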
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
on finite division subring of a ring
Is there any example of a ring which is not a division ring but every subring of which is a division ring?
It seems to me that if $R$ is a ring and $S$ is a division subring, then $1\in S$ and hence $R=S$. Is that true?
|
A simple example is the product ring $\mathbb Z_2\times \Bbb Z_2$ (whose additive group is the Klein four-group). Every proper nonzero subring is isomorphic to $\Bbb Z_2$, which is a division ring.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why is radian so common in maths? I have learned about the correspondence of radians and degrees, so $360°$ equals $2\pi$ radians. Now we mostly use radians (in integrals and so on).
My question: Is it just mathematical convention that radians are much more used in higher maths than degrees or do radians have some intrinsic advantage over degrees?
For me personally it doesn't matter if I write $\cos(360°)$ or $\cos(2\pi)$. Both equal $1$, so why bother with two conventions?
|
The answer is simple, it's a distance measure. When you move in a straight line you use inches or metres, in circles it is radians.
If you are at Disneyland and ask how far it was to Anaheim Stadium [go, Angels!] and I tell you that from my house it's about 45º, you are probably not going to be happy.
You want the distance traveled: at 1 mile out from my house, from one point to another, that distance is $\pi/4 \cdot 1\text{ mi} \approx 0.8$ miles. Enjoy the game.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71",
"answer_count": 17,
"answer_id": 13
}
|
Compactness of $C([0,1])$ I have to determine whether $C([0,1])$, the space of all continuous functions defined on the interval $[0,1]$ with the supremum metric, is compact.
As I understand it, we have to check whether every sequence of functions $f_{n}(x)$ has a subsequence $f_{n_{k}}(x)$ that is convergent.
In this metric, of course, convergence implies uniform convergence, so there won't be a problem showing continuity of the limit function.
But I really don't know whether we can always extract a convergent subsequence.
|
The sequence $f_n(x)=x^n$ does not have a convergent subsequence: $f_n$ converges pointwise to the function $f$ with $f(x)=0$ for $x\neq 1$ and $f(1)=1$, which is not continuous. Any subsequence has the same pointwise limit, and a uniformly convergent subsequence of continuous functions would have a continuous limit. Hence, the space is not compact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Proof of a theorem regarding group homomorphisms and kernels I am looking for a proof of the following theorem:
"Let $H<G$ then $H\unlhd G$ $\iff$ there exists a group $K$ and a group homomorphism: $\phi : G \rightarrow K$ such that $ker(\phi) = H$
There is one on a french wikipedia page but I find it incomprehensible.
Edit: I unfortunately don't know how to delete questions from this site; I've found the answer I need.
If anyone is interested, here's the link: https://proofwiki.org/wiki/Kernel_is_Normal_Subgroup_of_Domain
|
The forward direction will be done by considering $K=G/H$ and for the converse part, kernel of a group homomorphism is always a normal subgroup.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1797999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Can someone explain the steps I don't understand in this proof? Theorem.
Let $E$ be a subset of $\mathbb{R}^n$.
Then, if $p\gt0$, $\int_E|f-f_k|^p\to0$, and $\displaystyle\int_E|f_k|^p\le{}M$ for all $k$, then $\displaystyle\int_E|f|^p\le{}M$.
For your information, $|\cdot|$ denotes Lebesgue measure.
*
*There is a Theorem A, $$\int_E|f|^p=p\int_0^\infty\alpha^{p-1}w_{|f|}(\alpha)d\alpha$$
where $w_{f}(\alpha)=\left|\left\{\mathbf{x}\in{}E : f(\mathbf{x})\gt\alpha\right\}\right|$ (so $w_{|f|}$ is the distribution function of $|f|$)
*Since $|f|\le|f-f_k|+|f_k|$ by the triangle inequality,
$$\int_E|f|\le\int_E\left(|f-f_k|+|f_k|\right)=\int_E|f-f_k|+\int_E|f_k|$$
*From the Theorem A when $p=1$,
$$\int_0^\infty w_{|f|}(\alpha)d\alpha \le \int_0^\infty w_{|f-f_k|}(\alpha)d\alpha + \int_0^\infty w_{|f_k|}(\alpha)d\alpha$$
*Since (?),
$$w_{|f|}(\alpha) \le w_{|f-f_k|}(\alpha) + w_{|f_k|}(\alpha)$$
*Thus,
\begin{align}
\int_E|f|^p &= p\int_0^\infty\alpha^{p-1}w_{|f|}(\alpha)d\alpha \\
&\le p\int_0^\infty\alpha^{p-1}\left(w_{|f-f_k|}(\alpha) + w_{|f_k|}(\alpha)\right)d\alpha \\
&= p\int_0^\infty\alpha^{p-1} w_{|f-f_k|}(\alpha) d\alpha + p\int_0^\infty\alpha^{p-1} w_{|f_k|}(\alpha) d\alpha \\
&=\int_E|f-f_k|^p + \int_E|f_k|^p
\end{align}
*Letting $k\to\infty$,
$$\int_E|f|^p \le M$$
I do not understand how the third process causes the fourth process.
Also, I do not understand the last process.
We are proving that $\displaystyle\int_E|f|^p\le{}M$, not proving that $\displaystyle\int_E|f|^p\le{}M$ as $k\to\infty$.
|
*
*For $0 < p \leq 1$ we have the pointwise inequality
$$ |f-g|^p \leq |f|^p + |g|^p $$
(for $p \geq 1$ this fails in general; use Minkowski's inequality $\|f\|_p \leq \|f-f_k\|_p + \|f_k\|_p$ instead and run the same argument with $p$-th roots).
*Using 1, we see
$$ \int |f|^p dx \leq \int | f- f_k|^p dx + \int |f_k|^p dx $$
for all $k \in \mathbb{N}$.
*Applying the bound $\int |f_k|^p dx \leq M$, we see
$$\int |f|^p dx \leq \int | f- f_k|^p dx + M $$
*The left-hand side does not depend on $k$, so we may apply the limit on the right:
$$\int |f|^p dx \leq\underbrace{ \lim_{k \to \infty} \int | f- f_k|^p dx}_{=0}+ M = M $$
as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
expressing contour integral in different form Hi I have a short question regarding contour integration: Given that $f(z)$ is a continuous function over a rectifiable contour $z = x + iy$. If $f(z) = u(x,y) + iv(x,y)$, why does it follow that the contour integral can be expressed as $$\int_{C}f(z)dz = \int_{C}(udx-vdy) + i \int_{C}(vdx +udy)$$
where on the right-hand side, $C$ is a rectifiable curve in the $xy$-plane.
|
Another way to see it is by expanding the integral
$$
\int_\gamma f(z)dz = \int_a^bf(\gamma(t))\gamma'(t)dt = \int_a^b(u(\gamma_1(t),\gamma_2(t)) + i v(\gamma_1(t),\gamma_2(t)))(\gamma_1'(t) + i\gamma_2'(t))dt,
$$
where $\gamma(t) = (\gamma_1(t),\gamma_2(t)), t\in [a,b],$
and applying the definition of the line integrals
$$
\int_\gamma p dx = \int_a^b p(\gamma(t))\gamma_1'(t)dt
$$
and
$$
\int_\gamma q dy = \int_a^b q(\gamma(t))\gamma_2'(t)dt
$$
from calculus.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does a repeated eigenvalue always mean that there is an eigenplane under the transformation matrix? If you have a 3x3 matrix, if you find that it has repeated eigenvalues, does this mean that there is an invariant plane (or plane of invariant points if eigenvalue=1)?
I always thought that there was an invariant plane if all 3 equations were the same when trying to find the eigenvectors, but does this only happen when there is a repeated eigenvalue, or does it happen also when there are 3 distinct eigenvalues?
|
If $Av=\lambda v$ and $Aw=\lambda w$, then for any linear combination $\alpha v+\beta w$ we have
$$
A(\alpha v+\beta w)=\alpha Av+\beta Aw=\alpha\lambda v+\beta\lambda w=\lambda(\alpha v+\beta w).
$$
In words, a linear combination of eigenvectors for the same eigenvalue is again an eigenvector for that eigenvalue.
That said, it could happen that no such linearly independent $v$ and $w$ exist: let
$$
A=\begin{bmatrix} 2&1&0\\0&2&0\\0&0&3\end{bmatrix}.
$$
Then, while $2$ is a repeated eigenvalue, its eigenspace is one-dimensional.
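A quick check of this example (a Python/numpy sketch, just to illustrate the deficient eigenspace):
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])
# eigenvalue 2 is repeated, yet A - 2I has rank 2, so its null space
# (the eigenspace of 2) is only 3 - 2 = 1 dimensional: no eigenplane
print(np.linalg.eigvals(A))                      # [2. 2. 3.]
print(np.linalg.matrix_rank(A - 2*np.eye(3)))    # 2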
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Question in Introductory Linear Algebra I really need help with this question. I am in an introductory linear algebra course. If you guys could help me, I would really appreciate it. Here is the question:
A large apartment building is to be built using modular construction
techniques. The arrangement of apartments on any particular floor is
to be chosen from one of the three basic floor plans.
*
*Each floor of
plan A includes: 3 three bedroom units, 7 two bedroom units and 8 one
bedroom unit.
*Each floor of plan B includes 4 three bedroom units, 4
two bedroom units and 8 one bedroom units.
*Each floor of plan C
includes: 5 three bedroom units 3 two bedroom units and 9 one bedroom
units.
Suppose the building contains a total of $x_1$ floors of plan A,
$x_2$ floors of plan B, and $x_3$ floors of plan C.
Is it possible to design
the building with exactly 66 three bedroom units, 14 two bedroom units
and 136 one bedroom units?
If it can be done, list two different ways
to design the building and if not, explain why.
|
Think about this in the context of a system of linear equations.
Plan A = (3,7,8)
Plan B = (4,4,8)
Plan C = (5,3,9)
Total = (66,14,136)
\begin{bmatrix}
3 & 4 & 5 & 66 \\
7 & 4 & 3 & 14 \\
8 & 8 & 9 & 136 \\
\end{bmatrix}
Is this system consistent?
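If you want to check your row reduction, here is one way to test consistency in Python with sympy (a sketch: a linear system is consistent iff the coefficient matrix and the augmented matrix have the same rank):
import sympy as sp

M = sp.Matrix([[3, 4, 5],
               [7, 4, 3],
               [8, 8, 9]])
b = sp.Matrix([66, 14, 136])
print(M.rank(), M.row_join(b).rank())   # 2 and 3 here, so: inconsistent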
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How do you calculate the smallest cycle possible for a given tile shape? If you connect together a bunch of regular hexagons, they neatly fit together (as everyone knows), each tile having six neighbors. Making a graph of the connectivity of these tiles, you can see that there are cycles all over the place. The smallest cycles in this graph have three nodes.
$f(\text{regular hexagon}) = 3$
If you use regular pentagons, they don't fully tile the plane, but they do still form cycles. The smallest cycles have 6 nodes.
$f(\text{regular pentagon}) = 6$
If you use regular heptagons, the smallest cycle also seems to require 6 tiles.
$f(\text{regular heptagon}) = 6$
And it's the same with nonagons, 6 tiles to a cycle:
$f(\text{regular nonagon}) = 6$
Is there a general way to calculate the minimum number of tiles needed to form a cycle?
$f(\text{polygon}) = ?$
Ideally, I'd love to find a method that worked for any equilateral non-intersecting polygon, but even a method that works for any regular polygon would be great.
Sorry for the crudeness of the drawings.
|
I did some experiments for regular polygons, for up to $n=24$; the resulting figures are omitted here.
These experiments suggest that
*
*If $6\vert n$ you get a $3$-cycle
*Else if $2\vert n$ you get a $4$-cycle
*Else you get a $6$-cycle
It is fairly obvious that you can get the $3$-cycle for anything that has edges aligned the way a hexagon's are. On the other hand, for reasons of symmetry any arrangement of $3$ regular $n$-gons has to be symmetric under $120°$ rotation, so you have to have those hexagon-aligned edges or else it won't work.
It's also easy to see that you can get a $4$-cycle if $4\vert n$, since that's the obvious arrangement for squares. The other $4$-cycles are more tricky. The most systematic (and reasonably symmetric) choice there would be a configuration where two polygons have opposite vertices aligned with one axis, while the other two polygons have an axis perpendicular to the first passing through the centers of two opposite edges (figure omitted).
Right now I have no obvious answer as to why you can't get $4$-cycles for odd $n$, nor why $6$-cycles are always possible. Looking at the angles it's easy to see that the length of the cycle has to be even if $n$ is odd, so once the $4$-cycles are ruled out for those cases, the $5$-cycles are out of the question and the $6$-cycles will provide the solution. Perhaps others can build more detailed answers with the help of my pictures.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
}
|
2048 Logic Puzzle I thought up this logic problem related to the 2048 game. If all 16 tiles on a 2048 board all had the value 1024, how many ways are there to get to the 2048 tile? Here is what I am talking about in an illustration:
I found a much simpler, but longer way to think about this: There are 3 ways to combine 2 tiles by going to the right, and 3 by going to the left. That means there are 6 ways to combine the tiles. So, for all the rows and columns, there are $$2 \cdot (4 \cdot 6) = 48$$ ways to get to the 2048 tile.
My question(s) are, is my logic correct? Also, is there a simpler way to approach this logical problem?
Notes
I found two Math.SE post related to 2048 logic, but they have nothing to do with my problem.
|
I believe you are correct.
There are 3 lines separating rows horizontally, 4 pairs of numbers across each line, and 2 ways to combine each pair (top-down or bottom-up), giving $2\cdot3\cdot4=24$ ways of making pairs.
Repeating for the columns and getting the exact same numbers, we now have $24+24 = 48$ total options for merging two numbers.
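For what it's worth, the count is easy to reproduce with a trivial Python sketch (it just re-expresses the counting above):
# 4x4 board of equal tiles: count (adjacent pair, merge direction) options
horizontal_pairs = 4 * 3   # 4 rows, 3 adjacent pairs per row
vertical_pairs = 3 * 4     # 4 columns, 3 adjacent pairs per column
print(2 * (horizontal_pairs + vertical_pairs))   # 48: 2 directions per pair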
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Find the value of $\frac{a^2}{a^4+a^2+1}$ if $\frac{a}{a^2+a+1}=\frac{1}{6}$ Is there an easy way to solve the problem? The way I did it was to find the value of $a$ from the second expression and then use it to find the value of the first expression. I believe there must be a simple and elegant approach to tackle the problem. Any help is appreciated.
Find the value of $$\frac{a^2}{a^4+a^2+1}$$ if $$\frac{a}{a^2+a+1}=\frac{1}{6}$$
|
From the first equation (inverted),
$$\frac{a^2+a+1}a=6$$ or $$\frac{a^2+1}a=5.$$
Then squaring,
$$\frac{a^4+2a^2+1}{a^2}=25$$
or
$$\frac{a^4+a^2+1}{a^2}=24.$$
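A quick symbolic verification (a Python/sympy sketch; it solves the constraint and evaluates the target expression at both roots):
import sympy as sp

a = sp.symbols('a')
roots = sp.solve(sp.Eq(a/(a**2 + a + 1), sp.Rational(1, 6)), a)
for r in roots:
    print(sp.simplify(r**2/(r**4 + r**2 + 1)))   # 1/24 for both roots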
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 2
}
|
Is a diagonalization of a matrix unique? I was solving problems of diagonalization of matrices and I wanted to know if a diagonalization of a matrix is always unique? but there's nothing about it in the books nor the net.
I was trying to look for counter examples but I found none.
Any hint would be much appreciated
Thanks!
|
The diagonal matrix is unique up to a permutation of the entries (assuming we use a similarity transformation to diagonalize). If we diagonalize a matrix $M = U\Lambda U^{-1}$, the $\Lambda$ are the eigenvalues of $M$, but they can appear in any order.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1798902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Geometrical Description of $ \arg\left(\frac{z+1+i}{z-1-i} \right) = \pm \frac{\pi}{2} $
The question is: in an Argand diagram, $P$ is a point represented by the complex number $z$. Give a geometrical description of the locus of $P$ as $z$ satisfies the equation:
$$ \arg\left(\frac{z+1+i}{z-1-i} \right) = \pm \frac{\pi}{2} $$
What I have done:
Consider $$\left(\frac{z+1+i}{z-1-i} \right)$$
Let $z=x+iy$
$$\left(\frac{x+iy+1+i}{x+iy-1-i} \right) \Leftrightarrow \left(\frac{(x+1)+i(y+1)}{(x-1)+i(y-1)} \right) $$
$$ \Leftrightarrow \left(\frac{(x+1)+i(y+1)}{(x-1)+i(y-1)} \right) \cdot \left(\frac{(x-1)-i(y-1)}{(x-1)-i(y-1)} \right) $$
$$ \Leftrightarrow \left( \frac{ (x+1)(x-1) -i(x+1)(y-1) + i(y+1)(x-1) - i^2(y+1)(y-1)}{(x-1)^2 -i^2(y-1)^2} \right)$$
$$ \Leftrightarrow \left (\frac{ x^2 -1+y^2-1 +i(xy-y+x-1)-i(xy-x+y-1)}{(x-1)^2 +(y-1)^2} \right) $$
$$ \Leftrightarrow \left (\frac{ (x^2 +y^2-2) +i(2x-2y)}{(x-1)^2 +(y-1)^2} \right) $$
$$ \Leftrightarrow \left( \frac{x^2+y^2-2}{(x-1)^2 +(y-1)^2} \right) + i \left( \frac{2x-2y}{(x-1)^2 +(y-1)^2} \right) $$
As $\arg = \pm \frac{\pi}{2}$, the quotient is purely imaginary, hence its real part is $0$:
$$\Rightarrow \frac{x^2+y^2-2}{(x-1)^2 +(y-1)^2} = 0 $$
$$ \therefore x^2 + y^2 = 2 $$
So the Locus of $P$ should be a circle with centre $(0,0)$ and radius $\sqrt{2}$
However my answer key states that the answer is $|z|=2$ which is a circle with centre $(0,0)$ and radius $2$ so where exactly did I go wrong?
|
Let the points $z_A = -1-i$ and $z_B=1+i$. Then we look for the locus of all points $P$ with $z_P=z$ such that $\arg \frac{z-z_A}{z-z_B} = \pm\frac{\pi}{2}$ (in other words, the angle $\angle APB=\frac{\pi}{2}$). This is the circle in the complex plane with diameter $AB$ (minus the points $A$, $B$ themselves), since by Thales' theorem the angle under which a diameter $AB$ is seen from any other point $P$ of that circle is $\frac{\pi}{2}$. Here the midpoint of $AB$ is the origin and $|AB|=|2+2i|=2\sqrt{2}$, so the locus is $|z|=\sqrt{2}$, in agreement with your $x^2+y^2=2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
disk of convergence for complex-valued series Find the disk of convergence of $\displaystyle \sum_{k=0}^{\infty} \frac{(z+2)^k}{(k+2)^3 4^{k+1}}$, where $z \in \mathbb{C}$.
I tried applying the ratio test: $\lim_{k \to \infty} \left| \frac{(z+2)^{k+1}}{(k+3)^3 4^{k+2}} \cdot \frac{(k+2)^3 4^{k+1}}{(z+2)^{k}} \right| = \lim_{k \to \infty} \left| \frac{(z+2) \cdot (k+2)^3}{4 (k+3)^3} \right| = \left|\frac{z+2}{4} \right|.$
Do we just check where $\left|\frac{z+2}{4} \right|<1$ to get the radius of convergence? Not entirely sure where to go from here
|
I'd rather go directly with the $n$-th root test (Cauchy–Hadamard formula) applied to the coefficients:
$$\sqrt[k]{|a_k|}=\sqrt[k]{\frac1{(k+2)^34^{k+1}}}=\frac1{4\sqrt[k]{4(k+2)^3}}\xrightarrow[k\to\infty]{}\frac14\;\implies R=4$$
and thus the region of convergence (centred at $-2$, of course) is the disk $\;|z+2|<4\;$.
As for the way you went, with the ratio test: yes, just check where $\;\frac{|z+2|}4<1\;$, which gives the same disk.
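A numerical check of the root-test limit (a Python sketch, working in log space to avoid underflow of $4^{k+1}$):
import numpy as np

k = np.array([10.0, 100.0, 1000.0, 10000.0])
log_a = -3*np.log(k + 2) - (k + 1)*np.log(4)   # log |a_k|
print(np.exp(log_a / k))   # tends to 0.25, so R = 4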
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the smallest value of the function $F:\alpha\in\mathbb R\rightarrow \int_0^2 f(x)f(\alpha+x)dx$
Let $f$ be a fixed function, continuous on the whole real axis, which is periodic with period $T = 2$; it is known that $f$ decreases monotonically on the segment $[0, 1]$, increases monotonically on the segment $[1, 2]$, and for all $x \in \mathbb R$ satisfies the equality $f(x)=f(2-x)$. Find the smallest value of the function
$$F:\alpha\in\mathbb R\rightarrow \int_0^2 f(x)f(\alpha+x)dx$$
I have no clue how to start. Any kind of help will be appreciated.
|
We intend to prove that the smallest value of the function $F$ defined by
\begin{equation*}
F({\alpha}) = \int_0^2f(x)f(x+{\alpha})\, dx\tag{1}
\end{equation*}
is $F(1)$.
At first we give $f$ the additional property to be differentiable with a continuous derivative. At the end we will fill that gap.
The function $F$ will be continuous and have the period $T = 2$. Thus there exists a smallest value of $F(\alpha).$
If $f$ is differentiable then $F$ will also be differentiable.
We intend to prove that
\begin{equation*}
F'(\alpha) \le 0 \text{ for } 0 \le \alpha \le 1. \tag{2}
\end{equation*}
Since
\begin{equation*}
f(x) = f(2-x) = f(-x)\tag{3}
\end{equation*}
then $F(\alpha) = F(2-\alpha)$. Then (2) implies that
\begin{equation*}
F'(\alpha) \ge 0 \text{ for } 1 \le \alpha \le 2.
\end{equation*}
Then we know that $F(1)$ is the smallest value.
We start by studying (1). If we change $x$ to $x+1$ and then $x$ to $1-x$ we get
\begin{gather*}
\int_1^2f(x)f(x+\alpha)\, dx = \int_0^1f(x+1)f(x+1+\alpha)\, dx = [(3)]\\ = \int_0^1f(1-x)f(1-x -\alpha)\, dx = \int_0^1f(x)f(x-\alpha)\, dx.
\end{gather*}
Now we can rewrite $F(\alpha)$ as
\begin{equation*}
F(\alpha) = \int_0^1f(x)(f(x+\alpha)+f(x-\alpha))\, dx.
\end{equation*}
Via differentiation under the integral sign (we now use that $f$ is differentiable) followed by integration by parts we get
\begin{gather*}
F'({\alpha}) = \int_0^1f(x)(f'(x+{\alpha})-f'(x-{\alpha}))\, dx = \left[f(x)(f(x+{\alpha})-f(x-{\alpha}))\right]_0^1 \\- \int_0^1f'(x)(f(x+{\alpha})-f(x-{\alpha}))\, dx = f(1)\underbrace{(f(1+{\alpha})-f(1-{\alpha}))}_{= 0}-f(0) \underbrace{(f({\alpha})-f(-{\alpha}))}_{= 0} \\- \int_0^1f'(x)(f(x+{\alpha})-f(x-{\alpha}))\, dx = - \int_0^1f'(x)(f(x+{\alpha})-f(x-{\alpha}))\, dx.
\end{gather*}
Since $f$ is decreasing and differentiable on $[0,1]$ then $f'(x) \le 0$ in the integral above. It remains to prove that
\begin{equation*}
g(x,\alpha) = f(x+{\alpha})-f(x-{\alpha}) \le 0
\end{equation*}
on the square $0 \le x \le 1, \, 0 \le \alpha \le 1.$ But \begin{equation*}
g(\alpha,x) = f(\alpha +x) - f(\alpha-x) = [(3)] = f(x+{\alpha})-f(x-{\alpha}) = g(x,\alpha).
\end{equation*}
Consequently it is sufficient to examine $g$ on the triangle $0 \le \alpha \le x \le 1.$ To do that we study $g$ on that part of the straight line $\alpha = x-k$, that runs inside the triangle. Here $k$ is constant in $[0,1]$.
To be more precise look at
\begin{equation*}
g(x,x-k) = f(2x-k)-f(k), \quad 0 \le k \le 1, \: k \le x \le 1.
\end{equation*}
We split the $x$-interval into two. If $k \le x \le \dfrac{k+1}{2}$ then $k \le 2x - k \le 1$. Thus $2x-k \in[0,1]$ and $k\in[0,1]$. But on $[0,1]$ the function $f$ is decreasing. Since $2x-k \ge k$ then $f(2x-k) \le f(k)$ and $g(x,x-k) \le 0.$
The second part of the $x$-interval is $\dfrac{k+1}{2} \le x \le 1$. Then $1 \le 2x-k \le 2-k \le 2$. Furthermore $f(k) = f(2-k)$ and $1 \le 2-k \le 2.$ But on the interval $[1,2]$ the function $f$ is increasing. Since $2x-k \le 2-k$ then $f(2x-k ) \le f(2-k)$ and $g(x,x-k) \le 0.$
We have proved that $F(1)$ is the smallest value under the additional assumption that $f$ is differentiable with a continuous derivative.
Now we will prove that the same is true if $f$ only is continuous. Define a smooth substitute $\tilde{f}$ for $f$ via
\begin{equation*}
\tilde{f}(x) = \dfrac{1}{2h}\int_{x-h}^{x+h}f(t)\, dt = \dfrac{1}{2h}\int_{-h}^{h}f(x+t)\, dt.\tag{4}
\end{equation*}
Then $\tilde{f}$ will be differentiable with the continuous derivative $\dfrac{f(x+h)-f(x-h)}{2h}$. Furthermore it inherits a lot of properties from $f$. It is obvious that $\tilde{f}$ will be periodic with $T=2$. It will also fulfil the condition $\tilde{f}(x) = \tilde{f}(2-x) = \tilde{f}(-x).$ It will also be decreasing on $[0,1]$. To realize that we need a little argument. Assume that $0 < x_1 < x_2 <1$. If $h$ in (4) is small enough then $|t|$ will be so small that
\begin{equation*}
0 <x_1+t<x_2+t <1.
\end{equation*}
From (4) we then get that $\tilde{f}(x_1) \ge \tilde{f}(x_2)$. We have proved that $\tilde{f}$ is decreasing on the open interval $0 < x < 1$. But since $\tilde{f}$ is differentiable it will also be decreasing on the closed interval $0 \le x \le 1$ (use the intermediate value theorem).
Analogously we prove that $\tilde{f}$ is increasing on $1\le x\le 2.$
Since $f$ is continuous and since we work on a compact interval $f$ will be uniformly continuous. Thus to a given $\varepsilon $ there exists a $\delta $ such that
\begin{equation*}
|f(x_1)-f(x_2)| < \varepsilon \text{ if } |x_1-x_2|< \delta .
\end{equation*}
With these $\varepsilon$ and $\delta$ and with $0<h<\delta$ we get
\begin{equation*}
|f(x)-\tilde{f}(x)| \le \dfrac{1}{2h}\int_{-h}^{h}|f(x)-f(x+t)|\, dt \le \dfrac{1}{2h}\int_{-h}^{h}\varepsilon\, dt = \varepsilon .\tag{5}
\end{equation*}
We are now prepared to study the function
\begin{equation*}
\tilde{F}(\alpha) = \int_0^2\tilde{f}(x)\tilde{f}(x+\alpha)\, dx.
\end{equation*}
According to what we have done above we know that
\begin{equation*}
\tilde{F}(\alpha) \ge \tilde{F}(1)
\end{equation*}
and that $\tilde{F}(1)$ is the smallest value.
From (5) we get
\begin{gather*}
|F(\alpha) - \tilde{F}(\alpha)| \le \int_0^2|f(x)f(x+\alpha)-\tilde{f}(x)\tilde{f}(x+\alpha)|\, dx \\= \int_0^2|(f(x)-\tilde{f}(x)) f(x+{\alpha})+\tilde{f}(x)(f(x+{\alpha})-\tilde{f}(x+{\alpha}))|\, dx \\
\le \int_0^2(\varepsilon C + C\varepsilon)\, dx = 4C\varepsilon
\end{gather*}
where $\displaystyle C = \max_{0 \le x\le 2}|f(x)|$.
Consequently, using the same bound at $\alpha = 1$ (so that $\tilde{F}(1) \ge F(1) - 4C\varepsilon$),
\begin{equation*}
F({\alpha}) = F({\alpha}) -\tilde{F}({\alpha}) +\tilde{F}({\alpha}) \ge -4C\varepsilon +\tilde{F}(1) \ge F(1) - 8C\varepsilon.
\end{equation*}
But $\varepsilon$ can be arbitrarily small.
In the limit we have that
\begin{equation*}
F({\alpha})\ge F(1)
\end{equation*}
i.e. $F(1)$ is the smallest value of $F(\alpha).$
I welcome shorter solutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Simplifying sum of powers of conjugate pairs The result of summing a conjugate pair of numbers each raised to the power $n$:
$$
(a + bi)^n + (a - bi)^n
$$
Produces a real number where $a + bi$ is a complex number.
Given the result is real, is there a simplified way to express the above expression in terms of $a$ and $b$ involving no imaginary number $i$ in the simplified expression?
|
Let $z=|z|(\cos\alpha + i\sin\alpha)$; then $z^n=|z|^n(\cos(n\alpha) + i\sin(n\alpha))$, so the expression becomes $2|z|^n\cos(n\alpha)$. We know $|z|=\sqrt{a^2+b^2}$, and $\alpha$ can also be expressed in terms of $a,b$ (e.g. via $\tan\alpha = b/a$, minding the quadrant).
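A quick numerical sanity check of this identity (a Python sketch; the values of $a,b,n$ are arbitrary):
import cmath, math

a, b, n = 1.2, -0.7, 5
z = complex(a, b)
lhs = z**n + z.conjugate()**n
rhs = 2 * abs(z)**n * math.cos(n * cmath.phase(z))   # phase(z) = atan2(b, a)
print(lhs.real, rhs, abs(lhs.imag))   # real parts agree, imaginary part ~ 0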
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
Lower bound for norm of matrix I have the following problem: $A$ is a positive definite, symmetric matrix.
Firstly I was required to find a matrix $B$ such that $B^n = A$. I believe this to be $C(D^{\frac1n}) C'$ where C is the orthogonal matrix of eigenvectors of $A$, and $A = CDC'$.
After this I am asked to find a lower bound for the norm of $B$ as a function of the norm of $A$. It is not specified which norm to take, but by default I took the spectral norm which gave me an equality rather than a bound, because the eigenvalues of $A$ correspond to those of $B$.
Is there something I am missing here? Thanks in advance!
|
For the spectral norm, you can write a direct relation between the norm of $B$ and the norm of $A$. Since $A$ is symmetric and positive-definite, the spectral norm of $A$ is just the maximal eigenvalue of $A$. Your $B$ is also symmetric and positive-definite, and its eigenvalues are the $n$-th roots of those of $A$; so its norm also equals its maximal eigenvalue, which is $\|A\|^{\frac{1}{n}}$.
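A numerical illustration (a Python/numpy sketch on a random positive-definite matrix, built as in the question):
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4*np.eye(4)                 # symmetric positive definite
w, C = np.linalg.eigh(A)                  # A = C diag(w) C^T
n = 3
B = C @ np.diag(w**(1.0/n)) @ C.T         # B^n = A
print(np.allclose(np.linalg.matrix_power(B, n), A))          # True
print(np.linalg.norm(B, 2), np.linalg.norm(A, 2)**(1.0/n))   # equal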
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Example to show that unconditional convergence does not imply absolute convergence in infinite-dimensional normed spaces. I tried to show that unconditional convergence does not imply absolute convergence in infinite-dimensional normed spaces using a direct proof, but unfortunately I did not succeed. The definition of unconditional convergence I am using is the following:
A series $\sum_n x_n$ is called unconditionally convergent if the series $\sum_n x_{\pi(n)}$ converges for all permutations $\pi : \mathbb{N} \to \mathbb{N}$.
I considered the sequence $(x_n)_{n \in \mathbb{N}} \subset c_0 (\mathbb{N})$, where $x_n = \frac{\delta_n}{n}$, with $\delta_n$ denoting the standard unit vector. It is clear that $\sum_n \| x_n \|_{\infty}$ does not converge, but I have trouble showing that the series $\sum_n x_n$ converges unconditionally.
Any help is appreciated. Thanks in advance.
|
Denote by $y = (y^n)_{n \in \mathbb{N}}$ the sequence $y = (1, \frac{1}{2}, \frac{1}{3}, \dots)$ (we will use upper indices for the terms of an element in $\ell^{\infty}(\mathbb{N})$ in order to not confuse ourselves when considered sequences of elements in $\ell^{\infty}(\mathbb{N})$). Then we have
$$ \left( y - \sum_{k=1}^n x_{\pi(k)} \right)^i = \left( y - \sum_{k=1}^n \frac{\delta_{\pi(k)}}{\pi(k)} \right)^i = \begin{cases} 0 & i \in \{ \pi(1), \dots, \pi(n) \}, \\ \frac{1}{i} & \textrm{otherwise}. \end{cases} $$
Given $n > 0$, choose $N_0$ such that $\{ 1, \dots, n \} \subseteq \{\pi(1), \dots, \pi(N_0) \}$ (any $N_0 > \max \{ \pi^{-1}(1), \dots, \pi^{-1}(n) \}$ will do). Then, for all $N \geq N_0$, we will still have
$\{ 1, \dots, n \} \subseteq \{\pi(1), \dots, \pi(N) \}$ and so
$$ \left| \left| y - \sum_{k=1}^N x_{\pi(k)} \right| \right|_{\infty} = \sup_{i \in \mathbb{N}} \left| \left( y - \sum_{k=1}^N x_{\pi(k)} \right)^{i} \right| = \sup_{i \in \mathbb{N} \setminus \{ \pi(1), \dots, \pi(N) \}} \left| \frac{1}{i} \right| \leq \frac{1}{n+1}. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Functor is of the form Set(-,A) Let $F: Set^{op} \rightarrow Set$ be a functor such that for corresponding functor $\overline{F}: Set \rightarrow Set^{op}$ we have $\overline{F} \dashv F$. With corresponding functor I mean that $F$ and $\overline{F}$ are just basically the same functor (just written differently).
An exercise says that from this information it follows that $F$ is naturally isomorphic to $Set(-,A)$. Clearly we need to use Yoneda's Lemma, but I can't really see how. I have no clue how to show this $A$ exists at all or where it comes from. Anybody have an idea?
|
Yoneda's lemma doesn't help to characterize when a given functor is representable or not.
Here just write the fact that you have an adjunction :
$$Hom_{Set^{op}}(\overline{F}(A),B) = Hom_{Set}(A,F(B))$$
which means in $Set$:
$$Hom(B,F(A)) = Hom(A,F(B)).$$
Now take $B = \ast$ (a point) :
$$F(A)\simeq Hom(\ast,F(A)) = Hom(A,F(\ast))$$
so you get a natural isomorphism
$$F\simeq Hom(\bullet,F(\ast)).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
ordered set notation in functions Do please forgive me, if this question is a duplicate.
How does one correctly notate a function $f$ which takes an ordered subset $S$ from the field $\mathbb{K}$ and returns another (ordered) subset from the same field?
Obviously, the notation $f:S \mapsto T : \text{ ...} \qquad (S,T \subseteq \mathbb{K})$ is invalid, as the function would then take one element from the subset $S$ instead of the subset itself.
I believe that the correct representation of an ordered set/list would be a tuple $ a := (a_1, a_2, \dots , a_n) \in \mathbb{K}^n$, but my issue with the function $f$ is that the tuple size $n$ is not always defined.
This is not a specific task/problem where I need the requested notation; it is more out of general interest.
|
$$f:\mathcal{P}(\mathbb{K})\to \mathcal{P}(\mathbb{K})$$
Where $\mathcal{P}(\mathbb{K})$ denotes the power set of $\mathbb{K}$. If the order (and the variable length) matters, one can instead use finite tuples of arbitrary length: $f:\mathbb{K}^*\to \mathbb{K}^*$, where $\mathbb{K}^* = \bigcup_{n\ge 0}\mathbb{K}^n$ is the set of all finite tuples over $\mathbb{K}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I show that a contraction mapping in a metric space is continuous? I start out by letting $V$ be an arbitrary open set in $X$. Then
$$
f^{-1}(V) = \{x\in X\mid f(x) \in B_\epsilon(f(a))\}.
$$
This can be re-written as:
$$
f^{-1}(V) = \{x\in X\mid d(f(a), f(x)) < \epsilon \}.
$$ I realize that contraction mappings have an $0<r<1$ such that
$$
d(f(x_1), f(x_2)) \leq r\cdot d(x_1,x_2),\quad \forall x_1,x_2 \in X.
$$ I construct an open ball
$$
B_{\frac{\delta}{r}}(a) = \{x\in X\mid r\cdot d(a, x) \lt \delta \}
$$ but from here I'm unsure as to how to show that $f^{-1}(V)$ is open.
|
Let $(X, d)$ be a metric space, $S \subset X$, and $f:S \longrightarrow S$ a function be such that $d(f(x), f(y)) \leq c d(x, y)$, for all $x, y \in S$, where $0 \leq c < 1$ is given.
Fix $\epsilon > 0$ and choose $a \in S$.
The case where $c = 0$ is trivial. Assume $c > 0$ and let $\delta = \epsilon / c$.
For all $x \in S$ with $d(x, a) < \delta$, we have $d(f(x), f(a)) \leq c d(x, a) < \epsilon$, i.e. $f$ is continuous.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1799921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Integrate $\int_0^\infty \frac{e^{-x/\sqrt3}-e^{-x/\sqrt2}}{x}\,\mathrm{d}x$ I can't solve the integral
$$\int_0^\infty \frac{e^{-x/\sqrt3}-e^{-x/\sqrt2}}{x}\,\mathrm{d}x$$
I tried using the Beta and Gamma functions and integration by parts. Please help me to solve it.
|
An alternative approach to Marco Cantarini's perfectly sound answer.
If we set, for any $\alpha>1$,
$$ I(\alpha) = \int_{0}^{+\infty}\frac{e^{-x}-e^{-\alpha x}}{x}\,dx $$
differentation under the integral sign/Feynman's trick gives
$$ I'(\alpha) = \int_{0}^{+\infty} e^{-\alpha x}\,dx = \frac{1}{\alpha}, $$
and since $\lim_{\alpha\to 1^+}I(\alpha) = 0$, it follows that $I(\alpha)=\log\alpha$.
On the other hand, by setting $x=z\sqrt{6}$ in the original integral, we get:
$$ J = \int_{0}^{+\infty}\frac{e^{-z\sqrt{2}}-e^{-z\sqrt{3}}}{z}\,dz = I(\sqrt{3})-I(\sqrt{2}) = \color{red}{\frac{1}{2}\,\log\left(\frac{3}{2}\right)}.$$
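A quick numerical confirmation (a Python/scipy sketch):
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: (np.exp(-x/np.sqrt(3)) - np.exp(-x/np.sqrt(2)))/x,
                0, np.inf)
print(val, 0.5*np.log(1.5))   # both ~0.202733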
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Ito formula when g(t,x) is an integral Suppose we have a stochastic process which is written as an Ito process.
$$dX_t=\mu_t\ dt +\sigma_t\ dB_t$$.
If $Y_t$ is defined as a stochastic process as a function of $X_t$, then we can find $dY_t$ using the Ito formula. The key is to have the function $g(t,x)$ which relates $X_t$ to $Y_t$. Then we can find the derivatives with respect to $t$ and $x$ to plug into the Ito formula. For instance, if we write
$Y_t=tX_t$, we use the Ito formula with the function $g(t,x)=tx$.
However, what if we want to define $Y_t$ as a time integral of $X_t$? That is, $$Y_t=\int^t_0 X_u \ du.$$ Then how do we write $g(t,x)$ in order to find $Y_t$ as an Ito process? Is it simply
$$\int^t_0 x\ du = tx-0*x=tx?$$ That doesn't feel right. Or is it maybe $$\int^t_0 x\ dx = \frac{t^2}{2}?$$ I definitely don't think that is right though since we changed $du$ to $dx$.
Edit: To be clear, I want to write $Y_t$ as a function of $\mu_t$ and $\sigma_t$.
Thanks for the help!
|
Express $Y$ as an Ito process:
$$
dY_t=X_t\,dt = X_t\,dt +\ 0\,dB_t.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Derivation of Dirac delta function Could anyone give me a hint on how to find the distributional derivative of the delta function $\delta$? I don't know how to deal with the infinite point.
|
Upon request in the comments:
There is a large class of distributions which are given by integration against locally integrable functions. Specifically, given a locally integrable $f$ and a smooth compactly supported $g$, one can define $T_f(g)=\int_{-\infty}^\infty f(x) g(x) dx$. This leads to a common abuse of notation, where we write the same thing for distributions which are not given by integration against a locally integrable function. Thus we write things like $\delta(f)=\int_{-\infty}^\infty \delta(x) f(x) dx$, even though "$\delta(x)$" is meaningless by itself.
This abuse of notation turns out to be productive, because we can often define operations in distribution theory by passing to an approximating sequence, performing the operation on the approximating sequence (where it is defined the way we want) and then taking a limit. For example, if $T_{f_n} \to T$ and $f_n$ are smooth, then we can define $T'$ (the distributional derivative) to be $\lim_{n \to \infty} T_{f'_n}$. But with $T'_{f_n}$ we can really integrate by parts, which gives the formula $T'(g)=-T(g')$. Note that the result does not involve the approximation scheme, so we can just call this formula the "distributional derivative" and forget about approximating it.
The abuse of notation hints at trying such manipulations. Often people write them out without mentioning any approximation scheme. In this case the manipulations are called "formal", because they only pay attention to "form" without worrying about semantics. (This is the same use of the word as in "formal power series".)
Anyway, in the case of the Dirac delta, this procedure winds up telling you that $\delta'(f)=-f'(0)$ since by definition $\delta(f)=f(0)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Prove the fractions aren't integers
Prove that if $p$ and $q$ are distinct primes then $\dfrac{pq-1}{(p-1)(q-1)}$ is never an integer. Is it similarly true that if $p,q,r$ are distinct primes then $\dfrac{pqr-1}{(p-1)(q-1)(r-1)}$ is also never an integer?
I think using a modular arithmetic argument here would help. In other words, we must have for the first fraction $pq-1 \equiv 0 \pmod{(p-1)(q-1)}$. Then I am unsure how to proceed next since we can't really use the Chinese Remainder Theorem.
|
Suppose, for the sake of contradiction, such distinct $p$ and $q$ exist.
First of all observe that the statement implies that $p-1|pq-1$. So,
$$p-1|pq-1-q(p-1) \implies p-1|q-1$$
Similarly we get,
$$q-1|p-1$$
These observations imply that $p-1 = q-1$. This implies that $p = q$. Contradiction. They aren't distinct.
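A brute-force check of the first statement for small primes (a Python/sympy sketch; nothing is printed, i.e. no counterexample appears):
from sympy import primerange

primes = list(primerange(2, 60))
for p in primes:
    for q in primes:
        if p != q and (p*q - 1) % ((p - 1)*(q - 1)) == 0:
            print(p, q)   # never reached for these primes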
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 3
}
|
Is the Lie derivative $L_{X}(\omega \wedge \mu)$ an exact form?
Let $\omega$ be an $n$-form and $\mu$ be an $m$-form where both are acting on a manifold $M$. Is the Lie derivative $L_{X}(\omega \wedge \mu)$ where $X$ is a smooth vector field acting on $M$ an exact form?
I think it is but I've been unable to prove it, so any help would be greatly appreciated.
|
In general this is not true. Recall that
$$
L_X(\omega) = i_X d\omega + d\, i_X\omega
$$
where you see that the second term is exact but the first need not be. As an example for your case, take $N$ a manifold with a non-exact form $\mu$,
let $\omega$ be a 0-form (function) on $\mathbb{R}$, define $M=N\times\mathbb{R}$, and note that the pullback of $\mu$ to $M$ is still not exact. Taking $\omega=x$ and $X=\partial_x$, where $x$ is the variable in $\mathbb{R}$, one sees
$$
L_X(\omega\wedge \mu) = L_X(\omega)\wedge\mu +(-1)^0\omega\wedge L_X(\mu)=\mu
$$
since $L_X(\mu)=0$: both $\mu$ and $d\mu$ are pulled back from $N$, while the projection of $X$ onto $TN$ vanishes.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Probability of having a Girl A and B are married. They have two kids. One of them is a girl. What is the probability that the other kid is also a girl?
Someone says $\frac{1}{2}$, someone says $\frac{1}{3}$. Which is correct?
Now A and B have 4 children and all of them are boys. B is pregnant. So what is the probability that A and B are gifted with a baby girl?
Is it $\frac{1}{2}$ or there will be some conditional probability?
|
Conditional probability:
Let $A$ and $B$ be two events.
$$P(A|B) = \frac{P(A\cap B)}{P(B)}$$
which means: the probability of $A$ occurring given that $B$ occurred is the probability of both $A$ and $B$ occurring, divided by the probability that $B$ occurs.
In this case, $A$ is the event that the other kid is a girl, and $B$ is the event that one of the kids is a girl.
$A\cap B$ would be both kids are girls, which has a probability of $\frac14$.
$B$ would be the event that one of the kids is a girl, which has a probability of $\frac34$.
Therefore, the required probability is $\frac{1/4}{3/4}=\frac13$.
And for the second question, we can do it using conditional probability or without.
Using conditional probability:
$A$ be the event that the fifth child is a girl.
$B$ be the event that the first four children are boys.
$P(A\cap B)$ would be $\left(\frac12\right)^5=\frac1{32}$.
$P(B)$ would be $\left(\frac12\right)^4=\frac1{16}$.
The required probability would be $\frac{1/32}{1/16}=\frac12$.
Without using conditional probability:
Notice that event $A$ is independent of event $B$.
Therefore, $P(A)=\frac12$.
Notes
In the first question, the probability in question is the probability that "the other kid is also a girl", making the two events dependent.
However, in the second question, the word other is gone, leaving us with independent events.
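A Monte Carlo check of the first computation (a Python sketch; each child's gender is modeled as an independent fair coin flip):
import random

random.seed(1)
kids = [(random.random() < 0.5, random.random() < 0.5) for _ in range(10**6)]
at_least_one_girl = [k for k in kids if k[0] or k[1]]
both_girls = [k for k in at_least_one_girl if k[0] and k[1]]
print(len(both_girls) / len(at_least_one_girl))   # ~0.333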
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
Eigen-values of a matrix $P^{-1}AP$
QUESTION: If $A$ and $P$ are two non-singular $n\times n$ matrices and $\lambda$ is an eigenvalue of $A$, then show that $\lambda$ is also
an eigenvalue of the matrix $P^{-1}AP$.
I could simply show that $\lambda$ being the eigen-value of $A$, we have that
$$det (A-\lambda I_n)=0$$
But I could not proceed further and make any comment on the question asked.
NOTE: I am unaware of diagonalisation of matrices, if at all it is playing any part in this problem. And also I require a method which does not utilise this principle of diagonalisation to solve this.
|
Actually $A$ and $P^{-1}AP$ share the same characteristic polynomial, hence they have the same eigenvalues. Note that
$$\begin{align*}
\det(P^{-1}AP-\lambda I_n) & = \det(P^{-1}(A-\lambda I_n)P))\\
& = \det(P^{-1})\det(A-\lambda I_n)\det(P)\\
& = \det(A-\lambda I_n).
\end{align*}
$$
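A numerical illustration (a Python/numpy sketch with random matrices; a random $P$ is invertible with probability 1):
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))
B = np.linalg.inv(P) @ A @ P
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))   # same spectrum up to rounding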
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding coefficient of $x^n$ in this series I was doing an assignment question and came across this (the problem and its worked solution were given as an image, omitted here).
I understand everything (I know the theorem used in the answer), but I don't get how the solution switched from the first "i.e." to the second "i.e.". I mean, how did it happen? And what is happening after that Geometric Progression formula.
Thank you!
|
They are using the fact that $(1-x^6)^4=1-4x^6+{4\choose2}x^{12}-\cdots$ and noting that since you're only looking for the coefficient of $x^8$, you can drop all those higher-order terms, including the ${4\choose2}x^{12}$.
As for the rest, it's a matter of observing that
$${1\over(1-x)^4}={1\over3!}\left(1\over1-x\right)'''={1\over3!}(1+x+x^2+x^3+\cdots)'''={1\over3!}\left({3!\over0!}+{4!\over1!}x+{5!\over2!}x^2+\cdots \right)\\={3\choose3}+{4\choose3}x+{5\choose3}x^2+\cdots+{11\choose3}x^8+\cdots$$
When you multiply by $1-4x^6$ and pick out the resulting coefficient for $x^8$. you get the $11\choose3$ from the multiplication by the $1$ and $-4{5\choose3}$ from the multiplication by the $-4x^6$.
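For checking, one can ask sympy for the coefficient directly (a Python sketch, assuming the generating function being expanded is $(1-x^6)^4/(1-x)^4$, the one appearing in the identities above):
import sympy as sp

x = sp.symbols('x')
expr = (1 - x**6)**4 / (1 - x)**4
s = sp.series(expr, x, 0, 9).removeO()
print(s.coeff(x, 8))                               # 125
print(sp.binomial(11, 3) - 4*sp.binomial(5, 3))    # 125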
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1800867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How quickly can we detect if $\sqrt{(-1)}$ exists modulo a prime $p$? How quickly can we detect if $\sqrt{(-1)}$ exists modulo a prime $p$? In other words, how quickly can we determine if a natural, $n$ exists where $n^2 \equiv -1 \bmod p$?
NOTE
This $n$ is essentially the imaginary unit modulo $p$.
|
Let's do some experimentation.
$p = 3$: no.
$p = 5$: yes, $2^2 \equiv -1$.
$p = 7$: no.
$p = 11$: no.
$p = 13$: yes, $5^2 \equiv -1$.
$p = 17$: yes, $4^2 \equiv -1$.
$p = 19$: no.
$p = 23$: no.
It appears that only those prime numbers which are congruent to $1$ modulo $4$ have this property.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Induced homomorphism of a covering space How can I determine the induced homology homomorphism of the covering $S^{n} \rightarrow RP^{n}$? I suppose that the Hurewicz homomorphism would be pretty effective, but since I know nothing about higher homotopy groups of spheres and their generators I'd rather avoid it.
|
All of the homology groups of $\mathbb{S}^n$ are trivial, except the top and bottom ones. The induced map $H_0(\mathbb{S}^n) \to H_0(\mathbb{R}\mathbb{P}^n)$ will always be an isomorphism (this is very easy to calculate). The top homology group $H_n(\mathbb{R}\mathbb{P}^n)$ is either trivial or $\mathbb{Z}$, depending on whether $\mathbb{R}\mathbb{P}^n$ is orientable or not.
To calculate the map $\mathbb{Z} \to \mathbb{Z}$ between the top homology groups in the orientable case, you can use the local degree method, described e.g. in Hatcher's book on page 136. In the case of a covering, since the map is a local homeomorphism, the local degrees will all be 1 (or -1, depending on the choice of orientations), so the degree will be, up to sign, the multiplicity of the covering, which is 2 in this case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Surface of the intersection of $n$ balls Suppose there are $n$ balls (possibly, of different sizes) in $\mathbb R^3$ such that their intersection $\mathfrak C$ is non-empty and has a positive volume (i.e. is not a single point). Apparently, $\mathfrak C$ is a convex body with a piecewise smooth surface — a "quilt" of sphere fragments. Let $f(n)$ be the maximal number of fragments that can be achieved for a given $n$.
Is there a simple formula or recurrence relation for $f(n)$?
|
Surfaces and volumes can be calculated analytically for any value of $n$. For intersections of $n=5$ spheres and more, they can be calculated from the 4-by-4 intersections: see theorems 4.5 and 4.6 in my book chapter:
Spheres Unions and Intersections and some of their Applications in Molecular Modeling, In: Distance Geometry: Theory, Methods, and Applications, chap. 4, pp. 61--83. Mucherino, A.; Lavor, C.; Liberti, L.; Maculan, N. (Eds.), Springer, 2013. ISBN 978-1-4614-5127-3, DOI 10.1007/978-1-4614-5128-0_4
A free preprint is available at: https://hal.archives-ouvertes.fr/hal-01955983
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\det(A)=p_1p_2-ba={bf(a)-af(b)\over b-a}$ Let $f(x)=(p_1-x)\cdots (p_n-x)$ $p_1,\ldots, p_n\in \mathbb R$ and let $a,b\in \mathbb R$ such that $a\neq b$
Prove that $\det A={bf(a)-af(b)\over b-a}$ where $A$ is the matrix:
$$\begin{pmatrix}p_1 & a & a & \cdots & a \\ b & p_2 & a & \cdots & a \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b & b & b & \cdots & p_n \end{pmatrix}$$
that is the entries $k_{ij}=a$ if $i<j$, $k_{ij}=p_i$ if $i=j$ and $k_{ij}=b$ if $i>j$
I tried to do it by induction over $n$. The base case $n=2$ is easy: $\det(A)=p_1p_2-ba={bf(a)-af(b)\over b-a}$
The induction step is where I don't know what to do. I tried to compute the determinant by brute force (applying my induction hypothesis for $n$ and proving it for $n+1$), but I don't know how to reduce it. It gets horrible.
I would really appreciate it if you could help me with this problem. Any comments, suggestions or hints would be highly appreciated.
|
Here is a possible proof without induction. The idea is to consider $\det A$ as a function of $p_n$.
We define the function $F: \Bbb R \to \Bbb R$ as
$$
F(p) = \begin{vmatrix}
p_1 &a &\ldots &a &a \\
b &p_2 &\ldots &a &a \\
\vdots &\vdots &\ddots &\vdots &\vdots \\
b &b &\ldots &p_{n-1} &a\\
b &b &\ldots &b &p
\end{vmatrix}
$$
$F$ is a linear function of $p$ and therefore completely determined by its values at two different arguments.
$F(a)$ and $F(b)$ can be computed easily:
By subtracting the last row from all previous rows we get
$$
F(a) = \begin{vmatrix}
p_1 &a &\ldots &a &a \\
b &p_2 &\ldots &a &a \\
\vdots &\vdots &\ddots &\vdots &\vdots \\
b &b &\ldots &p_{n-1} &a\\
b &b &\ldots &b &a
\end{vmatrix}
=
\begin{vmatrix}
p_1-b &a-b &\ldots &a-b &0\\
0 &p_2-b &\ldots &a-b &0 \\
\vdots &\vdots &\ddots &\vdots &\vdots\\
0 &0 &\ldots &p_{n-1}-b &0 \\
b &b &\ldots &b &a
\end{vmatrix} \\
$$
i.e.
$$
F(a) = a(p_1-b)\cdots (p_{n-1}-b) \, .
$$
In the same way (or by using the symmetry in $a$ and $b$) we get
$$
F(b) = b (p_1-a)\cdots (p_{n-1}-a) \, .
$$
Now we can compute $\det A = F(p_n)$ with linear interpolation:
$$
\det A = \frac{b-p_n}{b-a} F(a) + \frac{p_n-a}{b-a} F(b) \\
= \frac{- a(p_1-b)\cdots (p_{n-1}-b)(p_n-b) + b(p_1-a)\cdots (p_{n-1}-a)(p_n-a) }{b-a} \\
= \frac{-af(b) + bf(a)}{b-a} \, .
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
Statistical significance and sample size I have a device that is said to succeed at doing some task at least 99% of attempts, and fails no more than 1% of attempts.
If I want to be 98% sure that it achieves that success rate, how many results would I need to check at minimum?
And what would be the maximum number of failures allowed in that number of results?
|
This is how I would approach the question (Even if I made a mistake, you'll get the idea):
At first, we should choose the model. Let's do hypothesis testing for the binomial distribution. Our device has a binomial distribution with some constant probability of success $p$ and probability of failure $1-p$. We are checking the hypothesis $H_0: p \geq 0.99$ vs $H_1: p < 0.99$. The significance level is $\alpha = 0.02$.
Suppose we have done $n$ trials and counted the number of successful events $x$. It should have distribution $X \sim Bin(n, 0.99)$. Using the binomial cumulative distribution function (or approximations) we can compute $P(X\leq x)$ and compare this value with $\alpha$. If it is bigger, then the null hypothesis is accepted (not enough evidence to reject it).
So let's now imagine that a $0.985$ success rate is not satisfactory for you. To obtain $n$ we need to solve this equation:
$$P(X<0.985n)=0.02$$
where $P$ is the binomial CDF for $Bin(n,0.99)$ and the "trials" were taken from a distribution with $p=0.985$. We will approximate the binomial CDF with a normal distribution to make it continuous: $F_{Bin(n,p)} \approx N(np,\sqrt{np(1-p)})$. Direct computation in R gives:
# P(observed successes < 0.985*n) under Bin(n, 0.99), normal approximation
bar<-function(x){pnorm(0.985*x,mean=x*0.99,sd=sqrt(0.99*0.01*x))}
# solve bar(n) = 0.02 for n
uniroot(function(x){bar(x)-0.02},lower = 1, upper = 10000000)
$n = 1670$. If you agree to consider the possibility of a $0.98$ success rate, then $n=417$.
Note that the function is decreasing, and only after about 1670 attempts can you notice that $p=0.985$ rather than $0.99$ (plot omitted).
And this is all at the $\alpha=0.02$ significance level, so it's hard to be truly "sure".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$a_n = \frac{1}{n}b_n$, $\lim b_n = L>0, L\in\mathbb{R}$, prove $\sum a_n$ diverges I have to prove that if
$$a_n = \frac{1}{n}b_n$$for $n\ge 1$ and $$\lim_{n\to\infty} b_n = L>0, L\in\mathbb{R}$$ then $$\sum_{n=1}^{\infty} a_n$$ diverges.
My idea was to show that it's not true that $a_n\to 0$, but I guess it is true: in $\frac{1}{n}b_n$, the factor $b_n$ is bounded because it converges, $\frac{1}{n}$ goes to $0$, and there is a theorem that says the product of a bounded sequence and a null sequence goes to $0$. So I cannot conclude anything from this. I guess it has something to do with comparison, but I cannot find any good comparison for $a_n = \frac{1}{n}b_n$.
|
Hint: what is $$\lim_{ n\rightarrow\infty}\sum_{x=n}^{kn}a_{x}\,?$$ where $k$ is some fixed integer.
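To see the hint numerically, here is a Python sketch (with $b_n = 2 + 1/n$ chosen purely for illustration, so $L = 2$): the partial sums grow like $2\ln N$, so the block sums in the hint stay bounded away from $0$ and the series diverges.
import numpy as np

N = 10**6
n = np.arange(1, N + 1)
partial = np.cumsum((2 + 1/n)/n)           # partial sums of a_n = b_n/n
for k in [10**2, 10**4, 10**6]:
    print(k, partial[k-1], 2*np.log(k))    # grows like 2 log k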
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Determinant of determinant is determinant? Looking at this question, I am thinking of considering the map $R\to M_n(R)$, where $R$ is a ring, sending $r\in R$ to $rI_n\in M_n(R).$ Then this induces a map. $$f:M_n(R)\rightarrow M_n(M_n(R))$$
Then we consider another map $g:M_n(M_n(R))\rightarrow M_{n^2}(R)$ sending, e.g. $$\begin{pmatrix}
\begin{pmatrix}1&0\\0&1\end{pmatrix}&\begin{pmatrix}2&1\\3&0\end{pmatrix}\\ \begin{pmatrix} 0&0\\0&0 \end{pmatrix}&\begin{pmatrix} 2&3\\5&2\end{pmatrix} \end{pmatrix}$$ to $$\begin{pmatrix}1&0&2&1\\
0&1&3&0\\
0&0&2&3\\
0&0&5&2\end{pmatrix}.$$
Is it true that $$\det_{M_n(R)}(\det_{M_n(M_n(R))}A)=\det_{M_{n^2}(R)}g(A)$$ for some properly-defined determinant on $M_n(M_n(R))?$
If this is true, then $\det_{M_n(R)}\operatorname{ch}_{A}(B)=\det_{M_n(R)}\circ\det_{M_n(M_n(R))}(f(A)-B\cdot I_{M_n(M_n(R))})=\det_{M_{n^2}}\circ g(f(A)-B\cdot I_{M_{n^2}(R)}),$ which is what OP of the linked question wants to prove.
Any hint or reference is greatly appreciated, thanks in advance.
P.S. @user1551 pointed out that determinant is defined on commutative rings only and $M_n(R)$ is in general a non-commutative ring. So I am thinking maybe we could use the Dieudonné determinant. In any case, I changed the question accordingly.
|
(Edit: the OP has modified their question; this answer no longer applies.)
Your question is not well posed because determinant is defined on commutative rings only, but $M_n(R)$ in general is not a commutative ring. But there is indeed something similar to what you ask. See
*
*M. H. Ingraham, A note on determinants, Bull. Amer. Math. Soc., vol. 43, no. 8 (1937), pp.579-580, or
*John Silvester, Determinants of Block Matrices, theorem 1.
Briefly speaking, suppose $B\in M_m(R)$, where $R$ is a commutative subring $R\subseteq M_n(F)$ for some field $F$ (note: I follow Silvester's notation here; his $R$ is not your $R$). If you "de-partition" $B$, you get a matrix $A\in M_{mn}(F)$. In other words, the entries of $A$ are taken from a field $F$, and if you partition $A$ into a block matrix $B$, all blocks commute.
Now Silvester's result says that $\det_F A=\det_F(\det_R B)$. Put another way, if you take the determinant of $B$, the result is a "scalar" in $R$, which is by itself an $n\times n$ matrix over $F$. If you further take the determinant of this resulting matrix, you get a scalar value in $F$. As shown by Silvester, this value must be equal to the determinant of $A$.
Silvester's proof still applies if $F$ is an integral domain instead. I don't know how far the assumption on $F$ can be weakened.
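Here is a small numerical illustration of Silvester's statement in the $2\times2$-block case (a Python/numpy sketch; the blocks are chosen as polynomials in one fixed matrix $J$, so they all commute):
import numpy as np

J = np.array([[0., 1.],
              [2., 1.]])
blk = lambda s, t: s*np.eye(2) + t*J       # all such blocks commute pairwise
B11, B12, B21, B22 = blk(1, 2), blk(3, -1), blk(0, 4), blk(2, 5)
A = np.block([[B11, B12],
              [B21, B22]])
inner = B11 @ B22 - B12 @ B21              # det of B over the commutative ring R
print(np.linalg.det(A), np.linalg.det(inner))   # equal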
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Find two fractions such that their sum added to their product equals $1$ This is a very interesting word problem that I came across in an old textbook of mine. I managed to write a formula restating the question, but other than that, the textbook gave no hints, and I'm really not sure how to approach it. Any guidance, hints or help would be truly appreciated. Thanks in advance :) So anyway, here's how the problem goes:
Find two fractions so that their sum added to their product equals $1$.
In other words:
Given integers $a$ and $b$ with $\frac ab < 1,$
find integers $c$ and $d$ such that $\frac ab + \frac cd + \frac ab \cdot\frac cd = 1$
|
A simple solution is $(a,b,c,d) = (1,1,0,1)$.
$$
\frac ab + \frac cd + \frac ab \cdot\frac cd = 1 \Leftrightarrow
\frac{ad+cb+ac}{bd}=1 \Leftrightarrow\\
a(d+c) = b(d-c)\Leftrightarrow
\frac{a}{b} = \frac{d - c}{d+c} \
$$
Answer to:
Find two fractions so that their sum added to their product equals 1.
Take any $d,c$ satisfying $d \neq 0 \wedge d+c \neq 0$, e.g. $(d,c) = (7,3)$; then
$
\frac{a}{b} = \frac{d-c}{d+c} = \frac{2}{5}
$.
Possible (example) solution is $(a,b,c,d) = (2,5,3,7)$, you can check it.
Answer to:
Given integers a and b with $\frac ab<1$, find integers c and d such that $\frac ab + \frac cd + \frac ab \cdot\frac cd = 1$
You have to solve
$$
\begin{cases}
a = d - c\\
b = d + c\\
\end{cases}
$$
which is solved (up to a common nonzero factor in $c,d$) by $c = b-a$ and $d = a+b$: indeed $\frac{d-c}{d+c} = \frac{2a}{2b} = \frac ab$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1801986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Enough projectives and $F$ preserves limits implies $G$ preserves epi's. Exercise: Let $\mathcal{C}, \mathcal{D}$ be categories, $G : \mathcal{C} \to \mathcal{D}$ and $F : \mathcal{D} \to \mathcal{C}$ an adjunction $F \dashv G$. Suppose $\mathcal{D}$ has enough projectives and $F$ preserves projectives. Prove that $G$ preserves epimorphisms.
My try: Suppose $f : x \to y \in \mathcal{C}$ is epi, then we have to show that $Gf : Gx \to Gy$ is epi. That is, for all $g, h : Gy \to z$ in $\mathcal{D}$ we must have $g \circ Gf = h \circ Gf$ implies $g = h$. I have tried writing down a projective for $Fx$ and $Fy$ but that seems to lead nowhere.
|
Suppose $f : X \to Y$ is an epimorphism in $\mathcal{C}$. We wish to show that $G f : G X \to G Y$ is an epimorphism.
Let $q : B \to G Y$ be an epimorphism in $\mathcal{D}$ where $B$ is projective. Then $F B$ is projective, so we have a morphism $x : F B \to X$ in $\mathcal{C}$ such that $f \circ x = \epsilon_Y \circ F q$, where $\epsilon_Y : F G Y \to Y$ is the adjunction counit. Hence, $G f \circ G x \circ \eta_B = q$. But $q : B \to G Y$ is an epimorphism, so $G f : G X \to G Y$ is also an epimorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to differentiate product of vectors (that gives scalar) by vector? I'm trying to understand the derivation of the least squares method in matrix terms:
$$S(\beta) = y^Ty - 2 \beta^T X^Ty + \beta ^ T X^TX \beta$$
Where $\beta$ is $m \times 1$ vertical vector, $X$ is $n \times m$ matrix and $y$ is $n \times 1$ vector.
The question is: why $$\frac{d(2\beta^T X^Ty)}{d \beta} = 2X^Ty$$
I tried to derive it directly via definition of derivative:
$$\frac{d(2\beta X^Ty)}{d \beta} = \lim_{\Delta \beta \to 0} \frac{2\Delta\beta X^T y}{\Delta \beta} = \lim_{\Delta \beta \to 0} 2\Delta\beta X^T y \cdot \Delta \beta^{-1}$$
Maybe the last equality should be as in the next line, but anyway I don't understand why $$2\Delta\beta \Delta \beta^{-1} X^T y $$ And what is $\Delta \beta^{-1}$? Vectors don't have inverses.
I have the same questions about this equation:
$$(\beta ^ T X^TX \beta)' =2 X^T X \beta$$
|
Recall that the multiple regression linear model is the equation given by
$$Y_i = \beta_0 + \sum_{j=1}^{p}X_{ij}\beta_{j} + \epsilon_i\text{, } i = 1, 2, \dots, N\tag{1}$$
where $\epsilon_i$ is a random variable for each $i$. This can be written in matrix form like so.
\begin{equation*}
\begin{array}{c@{}c@{}c@{}c@{}c@{}c}
\begin{bmatrix}
Y_1 \\
Y_2 \\
\vdots \\
Y_N
\end{bmatrix} &{}={} &\begin{bmatrix}
1 & X_{11} & X_{12} & \cdots & X_{1p} \\
1 & X_{21} & X_{22} & \cdots & X_{2p} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
1 & X_{N1} & X_{N2} & \cdots & X_{Np}
\end{bmatrix}
&\begin{bmatrix}
\beta_0 \\
\beta_1 \\
\vdots \\
\beta_p
\end{bmatrix} &{}+{} &\begin{bmatrix}
\epsilon_1 \\
\epsilon_2 \\
\vdots \\
\epsilon_N
\end{bmatrix} \\
\\[0.1ex]
\mathbf{y} &{}={} &\mathbf{X} &\boldsymbol{\beta} &{}+{} &\boldsymbol{\epsilon}\text{.}
\end{array}
\end{equation*}
Since $\boldsymbol{\epsilon}$ is a vector of random variables, note that we call $\boldsymbol{\epsilon}$ a random vector. Our aim is to find an estimate for $\boldsymbol{\beta}$. One way to do this would be by minimizing the residual sum of squares, or $\text{RSS}$, defined by
$$\text{RSS}(\boldsymbol{\beta}) = \sum_{i=1}^{N}\left(y_i - \sum_{j=0}^{p}x_{ij}\beta_{j}\right)^2$$
where we have defined $x_{i0} = 1$ for all $i$. (These are merely the entries of the first column of the matrix $\mathbf{X}$.) Notice here we are using lowercase letters, to indicate that we are working with observed values from data. To minimize this, let us find the critical values for the components of $\boldsymbol{\beta}$. For $k = 0, 1, \dots, p$, notice that
$$\dfrac{\partial\text{RSS}}{\partial\beta_k}(\boldsymbol{\beta}) = \sum_{i=1}^{N}2\left(y_i - \sum_{j=0}^{p}x_{ij}\beta_{j}\right)(-x_{ik}) = -2\sum_{i=1}^{N}\left(y_ix_{ik} - \sum_{j=0}^{p}x_{ij}x_{ik}\beta_{j}\right)\text{.}$$
Setting this equal to $0$, we get what are known as the normal equations:
$$\sum_{i=1}^{N}y_ix_{ik} = \sum_{i=1}^{N}\sum_{j=0}^{p}x_{ij}x_{ik}\beta_{j}\text{.}\tag{2}$$
for $k = 0, 1, \dots, p$. This can be represented in matrix notation.
For $k = 0, 1, \dots, p$,
$$\begin{align*}
\sum_{i=1}^{N}y_ix_{ik} &= \begin{bmatrix}
x_{1k} & x_{2k} & \cdots & x_{Nk}
\end{bmatrix}
\begin{bmatrix}
y_1 \\
y_2 \\
\vdots \\
y_{N}
\end{bmatrix} = \mathbf{c}_{k+1}^{T}\mathbf{y}
\end{align*}$$
where $\mathbf{c}_\ell$ denotes the $\ell$th column of $\mathbf{X}$, $\ell = 1, \dots, p+1$. We can then represent each equation for $k = 0, 1, \dots, p$ as a matrix. Then
$$\begin{bmatrix}
\mathbf{c}_{1}^{T}\mathbf{y} \\
\mathbf{c}_{2}^{T}\mathbf{y} \\
\vdots \\
\mathbf{c}_{p+1}^{T}\mathbf{y}
\end{bmatrix} = \begin{bmatrix}
\mathbf{c}_{1}^{T} \\
\mathbf{c}_{2}^{T} \\
\vdots \\
\mathbf{c}_{p+1}^{T}
\end{bmatrix}\mathbf{y}
= \begin{bmatrix}
\mathbf{c}_{1} & \mathbf{c}_{2} & \cdots & \mathbf{c}_{p+1}
\end{bmatrix}^{T}\mathbf{y} = \mathbf{X}^{T}\mathbf{y}\text{.}
$$
For justification of "factoring out" $\mathbf{y}$, see this link, on page 2. For the right-hand side of $(2)$ ($k = 0, 1, \dots, p$),
$$\begin{align*}
\sum_{i=1}^{N}\sum_{j=0}^{p}x_{ij}x_{ik}\beta_{j} &= \sum_{j=0}^{p}\left(\sum_{i=1}^{N}x_{ij}x_{ik}\right)\beta_{j} \\
&= \sum_{j=0}^{p}\left(\sum_{i=1}^{N}x_{ik}x_{ij}\right)\beta_{j} \\
&=\sum_{j=0}^{p}\begin{bmatrix}
x_{1k} & x_{2k} & \cdots & x_{Nk}
\end{bmatrix}
\begin{bmatrix}
x_{1j} \\
x_{2j} \\
\vdots \\
x_{Nj}
\end{bmatrix}\beta_j \\
&= \sum_{j=0}^{p}\mathbf{c}^{T}_{k+1}\mathbf{c}_{j+1}\beta_j \\
&= \mathbf{c}^{T}_{k+1}\sum_{j=0}^{p}\mathbf{c}_{j+1}\beta_j \\
&= \mathbf{c}^{T}_{k+1}\begin{bmatrix}
\mathbf{c}_{1} & \mathbf{c}_2 & \cdots & \mathbf{c}_{p+1}
\end{bmatrix}\begin{bmatrix}
\beta_0 \\
\beta_1 \\
\vdots \\
\beta_p
\end{bmatrix}
\\
&= \mathbf{c}^{T}_{k+1}\mathbf{X}\boldsymbol{\beta}\text{.}
\end{align*} $$
Bringing each case into a single matrix, we have
$$\begin{bmatrix}
\mathbf{c}^{T}_{1}\mathbf{X}\boldsymbol{\beta}\\
\mathbf{c}^{T}_{2}\mathbf{X}\boldsymbol{\beta}\\
\vdots \\
\mathbf{c}^{T}_{p+1}\mathbf{X}\boldsymbol{\beta}\\
\end{bmatrix} = \begin{bmatrix}
\mathbf{c}^{T}_{1}\\
\mathbf{c}^{T}_{2}\\
\vdots \\
\mathbf{c}^{T}_{p+1}\\
\end{bmatrix}\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta}\text{.}$$
Thus, in matrix form, we have the normal equations as
$$\mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^{T}\mathbf{y}\text{.}\tag{3}$$
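As a quick sanity check (not part of the derivation above), one can verify numerically that solving the normal equations $(3)$ reproduces the least-squares solution; the data below are arbitrary illustrative values:
import numpy as np

rng = np.random.default_rng(1)
N, p = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])  # first column of ones
y = X @ np.array([2.0, 1.0, -1.0, 0.5]) + 0.1 * rng.normal(size=N)

beta_normal = np.linalg.solve(X.T @ X, X.T @ y)    # the normal equations (3)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None) # direct least squares
print(np.allclose(beta_normal, beta_lstsq))        # True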
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
Prove similarity of matrix $A^{-1}$ to matrix $A^{*}$ which is Hermitian adjoint Let $A \in \mathcal M_{n}(\Bbb C)$ and $A$ is similar to unitary matrix.
Prove that $A^{-1}$ is similiar to $A^{*}$, where $A^{*}$ is Hermitian adjoint.
$A = C^{-1}UC$, where $U$ is unitary matrix
So $A^{-1} = (C^{-1}UC)^{-1} = C^{-1}U^{-1}C \iff U^{-1}=CA^{-1}C^{-1}$
$A^{*} = \overline{(A^{T})} = \overline{(C^{-1}UC)^T} = \overline{C^T}\cdot\overline{U^T}\cdot\overline{(C^{-1})^T}$, but U is unitary so
$\overline{U^T} = U^{-1}$
Hence $A^{*} = \overline{C^T}\cdot CA^{-1}C^{-1} \cdot\overline{(C^{-1})^T}$
What can I do next? Moreover, is it at least correct?
|
What you have is correct. You can now say that
$$
A^*= (C^*C)A^{-1}(C^*C)^{-1}
$$
By definition, this means $A^*$ is similar to $A^{-1}$.
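If you want to see the identity in action, here is a minimal numerical check (the dimension and seed are arbitrary); it builds a random unitary $U$ as the $Q$ factor of a QR factorization:
import numpy as np

rng = np.random.default_rng(0)
k = 4
Z = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
U, _ = np.linalg.qr(Z)                      # Q of a QR factorization is unitary
C = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))  # generically invertible
A = np.linalg.inv(C) @ U @ C                # A is similar to a unitary matrix

S = C.conj().T @ C                          # the similarity S = C* C
print(np.allclose(A.conj().T, S @ np.linalg.inv(A) @ np.linalg.inv(S)))  # True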
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Check if $f(x)=2 [x]+\cos x$ is many-one and into or not? If $f(x)=2 [x]+\cos x$
Then $f:R \to R$ is:
$(A)$ One-One and onto
$(B)$ One-One and into
$(C)$ Many-One and into
$(D)$ Many-One and onto
$[.]$ represents the floor function (also known as the greatest integer function).
Clearly $f(x)$ is into as $2[x]$ is an even integer and $f(x)$ will not be able to achieve every real number.
The answer says option $(C)$ is correct, but I cannot see how $f(x)$ is many-one, since it does not look like the value of $f(x)$ is the same for any two values of $x$.
E.g. if $f(x)= [x]+\cos x$, then $f(0)=f(\frac{\pi}{2})=1$, making that function many-one, but I can't see it happening for $f(x)= 2[x]+\cos x$.
Could someone help me with this?
|
You are right with respect to surjectiveness (it is not onto).
Hint:
For injectiveness (one to one), look in a neighbourhood around $x = 3\pi$ for example.
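To make the hint concrete: $f(3\pi - t) = 18 + \cos(3\pi - t) = 18 - \cos t = f(3\pi + t)$ for small $t > 0$, since both points lie in $[9, 10)$ where $\lfloor x\rfloor = 9$. A two-line numerical check (the offset $0.3$ is an arbitrary choice):
import math

f = lambda x: 2 * math.floor(x) + math.cos(x)
a, b = 3 * math.pi - 0.3, 3 * math.pi + 0.3    # both lie in [9, 10), so floor = 9
print(f(a), f(b), math.isclose(f(a), f(b)))    # equal values -> f is many-one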
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$((n-K)s^2)/\sigma^2$ what is this in terms of matrix linear regression? $$
\frac{(n-K)s^2}{\sigma^2}
$$
what is this in terms of matrix linear regression? Has Chi Squared Distribution with (n-K) df
|
I don't know what you mean by "matrix" linear regression, and your question isn't all that clear. However, suppose you're doing multiple linear regression with $K$ predictors (including the constant predictor) and $n$ cases.
Suppose the errors (not to be confused with the (observable) residuals) all are independent and each is distributed as $N(0,\sigma^2)$. Then
$$
s^2 = \frac{\text{sum of squares of residuals}}{n-K}
$$
is an unbiased estimator of $\sigma^2$, and
$$
\frac{(n-K)s^2}{\sigma^2} \sim \chi^2_{n-K}.
$$
The notation here is quite conventional, except that I don't recall having seen a capital $K$ used for this before.
PS: Alright, let's say our model is
$$
Y = \mathbf X\beta + \varepsilon
$$
where $Y\in\mathbb R^{n\times 1}$, $\mathbf X \in \mathbb R^{n\times K}$, $\beta\in\mathbb R^{K\times 1}$, and $\varepsilon \sim N_n(0, \sigma^2\mathbf I)$ with $0\in\mathbb R^n$ and $\mathbf I \in\mathbb R^{n\times n}$. We can observe $Y$ and $\mathbf X$ and we want to estimate $\beta$ by least squares.
The estimate is $\widehat\beta = (\mathbf X^T \mathbf X)^{-1} \mathbf X^T Y$. The vector of residuals is $Y-\mathbf X\widehat\beta = (\mathbf I - \mathbf X(\mathbf X^T\mathbf X)^{-1}\mathbf X^T)Y = (\mathbf I - \mathbf H)Y$. The matrix $\mathbf{I-H}$ is symmetric and idempotent, so one has
$$\|(\mathbf{I-H})Y\|^2 = Y^T(\mathbf{I-H})Y = \text{sum of squares of residuals}.$$
Then
$$
\frac{Y^T(\mathbf{I-H})Y}{\sigma^2} = \frac{(n-K)s^2}{\sigma^2} \sim \chi^2_{n-K}.
$$
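A simulation makes this concrete; the sketch below (arbitrary design matrix, $\sigma$, and sample sizes) checks that $Y^T(\mathbf{I-H})Y/\sigma^2$ has mean $n-K$ and variance $2(n-K)$, as a $\chi^2_{n-K}$ variable should:
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma = 50, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
H = X @ np.linalg.inv(X.T @ X) @ X.T           # the hat matrix
M = np.eye(n) - H                              # I - H, symmetric idempotent
stats = []
for _ in range(20000):
    Y = X @ np.array([1.0, -2.0, 0.5]) + sigma * rng.normal(size=n)
    stats.append(Y @ M @ Y / sigma**2)         # (n-K) s^2 / sigma^2
stats = np.array(stats)
print(stats.mean(), n - K)                     # both about 47
print(stats.var(), 2 * (n - K))                # both about 94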
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Awesome riddle including independence and exponential distribution The life cycles of 3 devices $A, B$ and $C$ are independent and exponentially distributed with parameters $\alpha,\beta,\gamma$. These three devices form a system that fails once device A has failed and, in addition, device B or device C has failed. Maybe $a \land (b \lor c) $ is easier to understand.
Calculate the probability that the system fails before time $t$.
This riddle is driving me insane; I have spent about 5 hours thinking about it and I just can't seem to find the answer. I am pretty sure, though, that there is a simple solution. Anyone creative here?
|
We first go after the complementary event, the event the system is still alive at time $t$. This event can happen in two disjoint ways: (i) $A$ is alive or (ii) $A$ is dead but $B$ and $C$ are alive.
The probability of (i) is $e^{-\alpha t}$.
The probability of (ii) is $(1-e^{-\alpha t})e^{-\beta t}e^{-\gamma t}$.
Thus the probability the system is dead is
$$1-\left[e^{-\alpha t}+(1-e^{-\alpha t})e^{-\beta t}e^{-\gamma t}\right].$$
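A Monte Carlo check of this formula, with arbitrary rates and time horizon; note that numpy's exponential sampler takes the scale $1/\text{rate}$:
import math
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, gamma, t, N = 1.0, 0.5, 2.0, 0.7, 10**6
A = rng.exponential(1 / alpha, N)
B = rng.exponential(1 / beta, N)
C = rng.exponential(1 / gamma, N)
sim = np.mean((A < t) & ((B < t) | (C < t)))   # A failed and (B or C) failed by t
exact = 1 - (math.exp(-alpha * t)
             + (1 - math.exp(-alpha * t)) * math.exp(-(beta + gamma) * t))
print(sim, exact)                              # agree to about 3 decimals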
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
When a set of functions becomes complete? I know that a set of functions is said to form a complete basis on an interval if any function on that interval can be expressed as a linear combination of the functions in the set. I also know that the functions in the set are mutually orthogonal.
Now, what are the conditions that a set of functions has to satisfy to be complete?
That is, how do we prove that a set of orthogonal functions spans a space?
|
A metric space is said to be "complete" if every Cauchy sequence converges.
For example: Let $(X, \mu)$ be a measure space. Then $L^p(X)$ is complete under the $L^p$ norm, for $p \in [1,\infty]$. [It is a Banach space.]
Every finite dimensional normed vector space is also complete. (This can be explained by the Lipschitz equivalence to the Euclidean norm.)
Notions of completeness need not be restricted to a set of functions. For example, $\mathbb{R}$ is complete, since every Cauchy sequence $\{a_n\}$ converges in $\mathbb{R}$.
In fact, it can be proven that $\mathbb{R}$ is the completion of $\mathbb{Q}$; i.e.: take a sequence of rationals that is Cauchy and define its limit to be a real number.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How is $[Q(\sqrt2, \sqrt3 ) : Q(\sqrt2)]=2$? $\mathbb{Q}$ is the rationals. I know that $\sqrt3 \notin \mathbb{Q}(\sqrt2)$ but so what? The answer to this question seems to be based upon that. Really don't understand what that means in finding the minimal polynomial.
|
If you know that $\sqrt{3}$ is not in $\mathbb{Q}(\sqrt{2})$, then you know the degree is greater than $1$. But $\sqrt{3}$ is a root of the equation $x^2-3=0$, which has coefficients in $\mathbb{Q}(\sqrt{2})$, so the degree is exactly $2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
$\sum_{n=0}^{\infty}\frac{a^2}{(1+a^2)^n}$ converges for all $a\in \mathbb{R}$ $$\sum_{n=0}^{\infty}\frac{a^2}{(1+a^2)^n}$$
Can I just see this series as a geometric series? Since $c = \frac{1}{1+a^2}<1$, we can see this as the geometric series:
$$\sum_{n=0}^{\infty}bc^n = \sum_{n=0}^{\infty}a^2\left(\frac{1}{1+a^2}\right)^n$$
that converges because $c<1$. The sum of this series is:
$$S_n = b(c^0+c^1+c^2+\cdots c^n)\rightarrow cS_n = b(c^1+c^2 + c^3 + \cdots + c^{n}+c^{n+1})\rightarrow $$$$cS_n - S_n = b(c^{n+1}-1)\rightarrow S_n(c-1) = b(c^{n+1}-1)\rightarrow S_n = b\frac{c^{n+1}-1}{c-1}$$
$$S = \lim S_n = b\frac{1}{1-c}$$
So $$\sum_{n=0}^{\infty}\frac{a^2}{(1+a^2)^n} = b\frac{1}{1-c} = a^2\frac{1}{1-\frac{1}{1+a^2}} =$$
|
Let $a=0$. Then the series obviously converges to $0$.
Now suppose that $a\ne 0$. Then our series is the geometric series
$$a^2+a^2r+a^2r^2+a^2r^3+\cdots,$$
where $r=\frac{1}{1+a^2}\lt 1$. Since $|r|\lt 1$, the series converges. It is probably by now a familiar fact that when $|r|\lt 1$ the series $1+r+r^2+r^3+\cdots$ converges to $\frac{1}{1-r}$. So our series converges to
$$\frac{a^2}{1-\frac{1}{1+a^2}}.$$
This simplifies to $1+a^2$.
Remark: Your calculation included an almost correct proof that the series that you called $1+c+c^2+c^3+\cdots$ converges to $\frac{1}{1-c}$. There was a little glitch. At one stage you had $\frac{c^{n+1}-1}{c-1}$. Since $c^{n+1}$ has limit $0$, the sum that you called $S_n$ converges to $\frac{-1}{c-1}$, which I prefer to call $\frac{1}{1-c}$.
I think that part of your calculation is not necessary, since the convergence of $1+c+c^2+\cdots$ to $\frac{1}{1-c}$ if $|c|\lt 1$ can probably by now be viewed as a standard fact.
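A quick numerical confirmation of the closed form $1+a^2$ (the value of $a$ and the number of terms are arbitrary):
a = 1.7
c = 1 / (1 + a**2)
partial = sum(a**2 * c**n for n in range(200))  # partial sum of the geometric series
print(partial, 1 + a**2)                        # both approximately 3.89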
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is the rotation matrix unique for one rotation? I have a test for rotations, and found two rotation matrices that behave the same at one point
rot1 = [ 0.8736 0.2915 -0.3897;
-0.4011 0.8848 -0.2373;
0.2756 0.3636 0.8898]
rot2 = [ 0.9874 -0.1420 -0.0700;
0.0700 0.7880 -0.6117;
0.1420 0.5991 0.7880]
yet they give the same result when rotating
wpt = [200 200 200]
with result
cpt = [ 155.0812 49.2660 305.8148]
can anyone explain this? :)
|
Two distinct rotation matrices about a point will act differently on a generic vector. That being said, two distinct rotations could certainly map some specific vector to the same image vector. That's what JeanMarie's answer addresses.
I think your issue, though, may have to do with the limitations of computational science: You don't have true rotation matrices. You're using MATLAB, right? Well, your matrices are stored as floating point arrays. That is, they suffer from limited computational precision. It wouldn't surprise me that two rotations that you might expect to be distinct don't compute that way.
I suspect the matrices in your question have been truncated after you copied and pasted them. If I attempt use the matrices as displayed in your question, I do see a difference:
rot1*wpt'
ans =
155.0800
49.2800
305.8000
rot2*wpt'
ans =
155.0800
49.2600
305.8200
Something you can look into is your machine and software limitations on floating point precision.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1802967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why is the index $i(\mathcal{L})$ of an ample line bundle on an abelian variety equal to $0$? I've seen that here https://www.math.uchicago.edu/~ngo/Shimura.pdf there's a theorem called Mumford's Vanishing Theorem (Theorem 2.2.2) which says:
Let $\mathcal{L}$ be a line bundle on $X$ (abelian variety) such that $K(\mathcal{L})$ is finite. There exists a unique integer $i=i(\mathcal{L})$, $0\leq i(\mathcal{L})\leq g=\dim X$, such that $H^p(X,\mathcal{L})=(0)$ for $p\neq i$ and $H^i(X,\mathcal{L})\neq (0)$. Moreover, $\mathcal{L}$ is ample if and only if $i(\mathcal{L})=0$.
This theorem has no proof here, and I'm interested in understanding the last claim because, in the book "Abelian Varieties" by Mumford, the Vanishing Theorem doesn't state this result but actually uses it implicitly at the beginning of the proof of the theorem on page 163 (old edition).
So my question is: why is it true that $\mathcal{L}$ is ample if and only if $i(\mathcal{L})=0$? In particular I'm interested in the implication
$\mathcal{L}$ ample implies $i(\mathcal{L})=0$.
Thank you!
|
If $\mathcal{L}$ is ample, then it is nondegenerate (cf. page 84 of Mumford's book) and $h^0(\mathcal{L}^n)>0$ for some $n>0,$ where $h^q(\mathcal{L})=\dim_k H^q(A, \mathcal{L}),$ so $i(\mathcal{L}^n)=0$ and hence also $i(\mathcal{L})=0$ by the Corollary of Mumford on page 159.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Probability of $m$ failed trades in series of $n$ trades This is a trading problem:
Let's say I have an automated trading system with a probability of
success of $70\%$ on any individual trade. I run $100$ trades a year.
What is the probability of getting $5$ or more consecutive failed
trades?
More generally, for a probability of success $p$ on the individual
trade, and a total of $n$ trades per annum, what is the probability of
a series of $m$ or more consecutive failures, where $m \leq n$ and $0 \leq p \leq 1$?
I realize this can be converted into a problem about getting a run of heads with a biased coin, but I looked around and could not find a response matching my exact needs. By the way, this is a real-world problem, not schoolwork.
|
Let your trades be $t_1\cdots t_n$, with $t_i\in \{0,1\}$. What is the probability that this sequence contains $k$ consecutive zeroes, if the probability of a zero is $p$? Well, the run could start at $t_1$, or at $t_2,\ldots, t_{n-k+1}$. The probability that a run of $k$ zeroes starts at $t_i$ is $p^k$, and there are $n-k+1$ points where the run might start, so the first instinct is to add up these probabilities as $(n-k+1)p^k$. But that wouldn't be correct, because you haven't accounted for the fact that these runs might overlap: you can only add probabilities if they represent disjoint events.
If you were looking for two consecutive fails, then that probability is
$$ P = (n-1)p^2 - (n-2)p^3 $$
The second term subtracts the cases you counted twice. If you are looking for three consecutive fails, then
$$ P = (n-2)p^3 - (n-3)p^4 + (n-4)p^5 $$
You add the term $(n-4)p^5$ to compensate for the length-$5$ sequences you subtracted once too many in the $(n-3)p^4$ term. See a Venn diagram if you want to know in more detail why exactly.
Continuing this inclusion–exclusion pattern suggests, for a run of $k$ consecutive tails in $n$ trials (if $n\geq k$):
$$ P = \sum_{u=0}^{k-1}(-1)^{u}(n-k-u+1)p^{k+u} $$
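For concrete numbers such as the question's $n=100$, $m=5$, success rate $70\%$, the probability can also be computed exactly with a short dynamic program over the length of the current failure run; this is a sketch, independent of the formula above, so it can also serve as a cross-check:
def prob_run(n, k, p):
    # state[j] = P(no run of k yet, current failure streak has length j)
    state = [0.0] * k
    state[0] = 1.0
    hit = 0.0                      # P(a run of k failures has occurred)
    for _ in range(n):
        new = [0.0] * k
        for j, q in enumerate(state):
            new[0] += q * (1 - p)  # a success resets the streak
            if j + 1 == k:
                hit += q * p       # a failure completes a run of k
            else:
                new[j + 1] += q * p
        state = new
    return hit

print(prob_run(3, 2, 0.5))    # 0.375, matching (n-1)p^2 - (n-2)p^3 for n = 3
print(prob_run(100, 5, 0.3))  # the question's case: 30% failure rate per trade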
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
A question about a route of a point that travels in a particular way through the plane I don't know exactly how to classify this question. It is not from any homeworks, just something I've been wondering about.
Let's say that in the beginning of an experiment ( the beginning is $t=0$ secs) we have two points on the plane: one on $(0,0)$ and one on $(0, 1)$.
They start moving by the following rules:
*
*The point that was on $(0,0)$ moves right, along the $x$-axis, at a constant speed of $1~\text{m}/\text{s}$.
*The point that was on $(0,1)$ also moves at a constant speed of $1~\text{m}/\text{s}$, but its direction changes so that its velocity is always directed toward the first point (like a missile that follows a moving object).
I hope I made myself clear.
My question is: Is there a nice formula for the location of the second point on the plane for a given time $t$? How can it be derived? And what if we change the ratio between the constant speeds?
Thank you for your time reading my question...
|
Let the first point's position be $(t,0)$, and the second $(x(t),y(t))$.
We have that the direction of the second point is proportional to the difference between the points:
$$(x'(t), y'(t))\propto(t - x(t), -y(t))$$
But, since the speed is constant, we must have that:
$$(x'(t), y'(t))=\left(\frac{t-x(t)}{\sqrt{(t - x(t))^2 + y^2(t)}}, -\frac{y(t)}{\sqrt{(t - x(t))^2 + y^2(t)}}\right)$$
With $x(0) = 0, y(0) = 1$.
Now we must solve the differential equation.
Addendum
This equation does have a closed-form solution involving the Lambert $W$-function, which can be obtained by moving to polar coordinates, but it gets very ugly; if anyone can offer a simpler solution, I'd be glad to hear of one. The solution is also closely related to a similar problem, that of the tractrix, which has constant distance rather than constant speed.
In any case, here is what the resulting curve looks like, for $t\in[0,1]$:
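The curve can be reproduced numerically; here is a minimal sketch integrating the system above with forward Euler (the step size is an illustrative choice):
import numpy as np

def pursuit(T=1.0, dt=1e-4):
    x, y = 0.0, 1.0                 # the pursuer starts at (0, 1)
    xs, ys = [x], [y]
    t = 0.0
    while t < T:
        dx, dy = t - x, -y          # direction toward the target at (t, 0)
        d = np.hypot(dx, dy)
        x += dt * dx / d            # unit speed: normalize the direction
        y += dt * dy / d
        t += dt
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

xs, ys = pursuit()                  # plot (xs, ys) to see the trajectory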
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
the connection between $\gamma_m(a)$ and $\gamma_m(b)$ when $a\cdot b\equiv 1\pmod m$
show the connection between the order $\gamma_m(a)$ of $a$ and the order $\gamma_m(b)$ of $b$ when
$$a\cdot b\equiv 1\pmod m$$
I took $a=5$ and $b=4$
$$5\cdot 4\equiv 1\pmod{19}$$
$$\gamma_m(a)=9\text{ and } \gamma_m(b)=9$$
So I think that always $\gamma_m(a)=\gamma_m(b)$ when $a\cdot b\equiv 1\pmod m$
Is there a more formal way to show this?
|
One has $a^{\gamma_m(a)}\equiv 1$ and $b^{\gamma_m(b)}\equiv 1\pmod{m}$. Suppose, toward a contradiction and by symmetry in $a$ and $b$, that $\gamma_m(a)\gt \gamma_m(b)$. Multiplying the two congruences we can write
$$a^{\gamma_m(a)-\gamma_m(b)}\left(a\cdot b\right)^{\gamma_m(b)}\equiv 1\pmod{m}$$
Since $a\cdot b\equiv 1\pmod{m}$, this means $a^\alpha\equiv 1\pmod{m}$ with $0\lt\alpha=\gamma_m(a)-\gamma_m(b)\lt \gamma_m(a)$, which contradicts the minimality of the order. So the orders are equal.
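A brute-force check of the conclusion for all invertible residues mod $19$ (any modulus would do):
def order(a, m):
    # multiplicative order of a modulo m; assumes gcd(a, m) = 1
    x, k = a % m, 1
    while x != 1:
        x = x * a % m
        k += 1
    return k

m = 19
for a in range(2, m):
    b = pow(a, -1, m)               # modular inverse (Python 3.8+)
    assert order(a, m) == order(b, m)
print(order(5, m), order(4, m))     # 9 9, as in the question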
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Chevalier de mere paradox with game with three dice Chevalier de Mere asked Blaise Pascal why in a game with three dice the sum $11$ is more favorable than $12$, when both sums have exactly the same possible combinations:
For $11$ we have $(5,5,1), (5,4,2), (5,3,3), (4, 4, 3), (6,4,1), (6,3,2)$ and for $12$ we have $(6,5,1), (6,4,2), (6,3,3), (5,4,3), (4,4,4), (5,5,2)$, so both sums should be equiprobable.
My attempt: I think Chevalier de Mere made the mistake of thinking all the dice are indistinguishable. I tried to compute the exact probabilities.
Let $$\Omega = \left\{(x,y,z) \mid 1 \leq x,y,z \leq 6, \quad 3 \leq x + y + z \leq 18\right\} $$ be the sample space. We are interested in the events $$A = \left\{(x,y,z) \mid x + y + z = 11 \right\} $$ and $$ B = \left\{(x,y,z) \mid x + y + z = 12 \right\}. $$ For the sum $11$, we have $27$ possible permutations of all the triples. For the sum $12$, there are two less, that is $25$. So $\#A = 27$ and $\#B = 25$. Since $\#\Omega = 6 \cdot 6 \cdot 6 = 216$, we have $$ P(A) = \frac{27}{216} = 0.125, \qquad P(B) = \frac{25}{216} = 0.1157. $$
Is this reasoning correct?
|
You're making the very common mistake of confusing issues of distinguishability with issues of equiprobability. If you roll three indistinguishable dice, the probabilities of the various sums are exactly the same as if you roll three distinguishable dice. The dynamics of the dice are not influenced by your ability to distinguish among them.
Chevalier de Méré's error was not to think of the dice as indistinguishable, but to regard the wrong sorts of events as equiprobable. Absent prior information, events should be regarded as equiprobable when there is symmetry among them. In the present case the six possible results of each individual die should be regarded as equiprobable, since there is symmetry among the six sides. It then follows that each combination of these independent equiprobable events, that is, each ordered triple of dice results should also be considered equiprobable, as you did in your calculation. Instead, Chevalier de Méré regarded each unordered triple as equiprobable. This is wrong since unordered triples with different numbers of coinciding dice correspond to different numbers of ordered triples, and thus to different numbers of equiprobable elementary events.
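The point is easy to verify by enumerating the $216$ equiprobable ordered triples (a short sketch using itertools):
from itertools import product

counts = {11: 0, 12: 0}
for roll in product(range(1, 7), repeat=3):   # 6^3 = 216 ordered outcomes
    if sum(roll) in counts:
        counts[sum(roll)] += 1
print(counts)                                 # {11: 27, 12: 25}
print(counts[11] / 216, counts[12] / 216)     # 0.125 vs about 0.1157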
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $x-a \sin(x)=b$ has one real solution, where $0\lt a \lt 1 $ $a,b \in \mathbb{R}$. Prove that $x-a\sin(x)=b$ has one real solution, where $0\lt a \lt 1 $.
I need some sort of starting hint as to how to prove this.
I can define $g(x)= x-a\sin(x)-b$, but beyond that I'm having difficulty making progress. What theorem could I use?
|
Consider the function:
$$
f(x)=x-a\sin(x)-b
$$
This has derivative:
$$
f'(x)=1-a\cos(x)>0
$$
since $0<a<1$, so $f(x)$ is increasing on $\mathbb{R}$. Hence as $f(x)$ is continuous increasing and is negative for sufficiently negative $x$ and positive for sufficiently positive $x$ it has a zero on $\mathbb{R}$, and it is unique.
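Numerically, the unique root is easy to bracket: since $f(b-2)\leq -2+a<0$ and $f(b+2)\geq 2-a>0$, the interval $[b-2,\,b+2]$ always works. A sketch with arbitrary values of $a$ and $b$:
import math
from scipy.optimize import brentq

a, b = 0.6, 2.0
f = lambda x: x - a * math.sin(x) - b
root = brentq(f, b - 2, b + 2)     # [b-2, b+2] brackets the unique zero
print(root, f(root))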
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solution of $4 \cos x(\cos 2x+\cos 3x)+1=0$ Find the solution of the equation:
$$4 \cos x(\cos 2x+\cos 3x)+1=0$$
Applying trigonometric identity leads to
$$\cos (x) \cos \bigg(\frac{x}{2} \bigg) \cos \bigg(\frac{5x}{2} \bigg)=-\frac{1}{8}$$
But I can't understand what to do from here. Could some suggest how to proceed from here?
|
Thinking about the answer, we might notice that if $\theta=\frac{2\pi k}9$, then $\cos9\theta=1$. We can write this as
$$\begin{align}\cos9\theta-1&=4(4\cos^3\theta-3\cos\theta)^3-3(4\cos^3\theta-3\cos\theta)-1\\
&=(16\cos^4\theta+8\cos^3\theta-12\cos^2\theta-4\cos\theta+1)^2(\cos\theta-1)=0\end{align}$$
From this we can see that the solutions were all the solutions to $\cos9\theta=1$ except for $\cos\theta=1$. If $\theta=\frac{\pm2\pi}3$, then $\cos\theta=-\frac12$, and if $\theta=\frac{2\pi(3k\pm1)}9$, then $\cos3\theta=-\frac12$ so all the cases in the big factor are taken into account by $(2\cos\theta+1)(2\cos3\theta+1)=0$.
Indeed we can go back to the original equation and find that $4\cos\theta\cos2\theta=2(\cos(1+2)\theta+\cos(1-2)\theta)=2\cos3\theta+2\cos\theta$, so it reads
$$\begin{align}4\cos x(\cos2x+\cos3x)+1&=2\cos3x+2\cos x+4\cos3x\cos x+1\\
&=(2\cos3x+1)(2\cos x+1)=0\end{align}$$
Thus if we could have seen through this at the outset, we could have made quick work of the problem.
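The factorization at the end is easy to confirm numerically over a grid of sample points:
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
lhs = 4 * np.cos(x) * (np.cos(2 * x) + np.cos(3 * x)) + 1
rhs = (2 * np.cos(3 * x) + 1) * (2 * np.cos(x) + 1)
print(np.allclose(lhs, rhs))   # True: the two sides agree identically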
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Show that the sequence of norms of inverses of a convergent sequence of matrices diverges to infinity. This is a question I found while working on the book "Analysis in Euclidean Spaces" by Ken Hoffman.
Suppose $(A_n)$ is a sequence of invertible matrices from $\mathbb{R}^{k \times k}$ that converges to the matrix $A$. Show that if $A$ is not invertible, then $$\lim_{n \to \infty} \| A_n^{-1} \| = \infty.$$
I can easily show that if the sequence $(A_n^{-1})$ converges to some matrix $B$, then $B = A^{-1}$, but I don't know how to proceed if the sequence is not convergent (I can also prove the case where $A = \textbf{0}$, the zero matrix).
|
Since all norms are equivalent here (you are in finite dimension), you are free to pick the one that is the most convenient. In particular, let $\lVert\cdot \rVert\colon \mathbb{R}^{k\times k}\to [0,\infty)$ be the norm defined by $\lVert A\rVert = \max_{1\leq i,j\leq k} \lvert A_{i,j}\rvert$.
Since for all $n\geq 1$ we have $I_{k} = A_n\cdot A_n^{-1}$, we can apply the determinant to get
$$
1 = \det I_{k} = \det (A_n\cdot A_n^{-1}) = \det A_n \cdot \det A_n^{-1}
$$
for all $n\geq 1$, so that (as usual)
$$
\forall n\geq 1,\qquad \lvert \det A_n^{-1}\rvert = \frac{1}{\lvert \det A_n \rvert} \xrightarrow[n\to\infty]{} +\infty
$$
since $\det A_n \xrightarrow[n\to\infty]{} \det A = 0$, the determinant being continuous. But again, the determinant is a polynomial in the $k^2$ coefficients of the matrix, homogeneous of degree $k$.
Write $\det M$ with the Leibniz formula:
$$
\det M = \sum_{\sigma\in S_k} \operatorname{sgn}(\sigma)\prod_{i=1}^{k} M_{i,\sigma(i)}
$$
Then $\lvert \det M\rvert \leq \sum_{\sigma\in S_k}\prod_{i=1}^{k} \lvert M_{i,\sigma(i)}\rvert \leq k!\,\lVert M\rVert^{k}$, i.e. there exists a constant $\gamma_k>0$ (only depending on $k$; here $\gamma_k = k!$ works) such that
$$
\lvert \det M\rvert \leq \gamma_k\cdot \lVert M\rVert^{k}
$$
for every $M\in\mathbb{R}^{k\times k}$. Applying this to $M = A_n^{-1}$,
$$
\lVert A_n^{-1}\rVert \geq \left(\frac{\lvert \det A_n^{-1}\rvert}{\gamma_k}\right)^{1/k}\xrightarrow[n\to\infty]{} \infty
$$
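A quick numerical illustration of this divergence, with an arbitrary concrete sequence (not part of the argument above): take $A_n = \operatorname{diag}(1, 1/n)$, which converges to the singular matrix $\operatorname{diag}(1, 0)$.
import numpy as np

for n in [10, 100, 1000, 10000]:
    A_n = np.diag([1.0, 1.0 / n])                # A_n -> diag(1, 0), not invertible
    print(n, abs(np.linalg.det(A_n)),            # determinant -> 0
          np.linalg.norm(np.linalg.inv(A_n), 2)) # operator norm of inverse = n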
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Let $A = \{ \frac{1}{n} : n \in \Bbb{N} \}$, show that $\overline{A} = A \cup \{0\}$
Let $A = \{ \frac{1}{n} : n \in \Bbb{N} \}$, show that $\overline{A} = A \cup \{0\}.$
We have $A \subseteq \overline{A}$ by the definition of closures. To show that $0 \in \overline{A}$ we need to show that for every open set $U$ containing $0$, $U \cap A \not= \emptyset$. The elements of $A$ converge to $0$, since $\forall \epsilon > 0 \ \exists N \in \Bbb{N} \ s.t. \ \forall n > N, \ \frac{1}{n} \in B_{\epsilon}(0).$ So for any open set $U$ containing $0$, there is an $\epsilon$ such that $0 \in B_{\epsilon}(0) \subseteq U$. And since $B_{\epsilon}(0)$ contains points of $A$, we have $A \cap U \not= \emptyset$ for every such $U$.
This establishes that $A \cup \{0\} \subseteq \overline{A}$.
I'm not too sure how to show this direction $\overline{A} \subseteq A \cup \{0\}$ .
|
Here is another way to do it which I find easier:
Since $\overline{A}$ is defined as the set $A$ and all its limit points, by definition $\overline{A}=A\cup L$, where $L$ is the set of limit points of $A$.
Now you only need to show that $L=\{0\}$, i.e. that $A$ has only $0$ as a limit point. The result then follows immediately.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
$\sum (-1)^{n+1}\log\left(1+\frac{1}{n}\right)$ convergent but not absolutely convergent I need to prove that:
$$\sum_{n=1}^{\infty} (-1)^{n+1}\log\left(1+\frac{1}{n}\right)$$
is convergent, but not absolutely convergent.
I tried the ratio test:
$$\frac{a_{n+1}}{a_n} = -\frac{\log\left(1+\frac{1}{n+1}\right)}{\log\left(1+\frac{1}{n}\right)} = -\log\left({\frac{1}{n+1}-\frac{1}{n}}\right)$$
I know that the thing inside the $\log$ converges to $1$, so $-\log$ converges to $0$? This is not right, I cannot conclude that this series is divergent.
Also, for the sequence without the $(-1)^{n+1}$ it would give $0$ too.
|
First, show convergence using the alternating series test: the series is alternating since $\log\left(1+\frac1n\right)>0$ for all $n$, the terms decrease monotonically, and $$ \log\left(1+ \frac{1}{n} \right) \rightarrow 0, \quad n \rightarrow \infty, $$ so the series converges. To show it is not absolutely convergent, remove the factor $(-1)^{n+1}$ and compare with the divergent harmonic series $$ \sum \frac{1}{n}: $$ since $\log\left(1+\frac1n\right)\sim\frac1n$, the limit comparison test shows that $\sum \log\left(1+\frac1n\right)$ diverges.
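Numerically, consecutive partial sums bracket the limit; as an aside (not needed for the argument), the even partial sums telescope into the Wallis product, so the value of the series is $\log(\pi/2)$. A quick check:
import math

s, sign, partials = 0.0, 1.0, []
for n in range(1, 200001):
    s += sign * math.log(1 + 1 / n)
    sign = -sign
    partials.append(s)
print((partials[-1] + partials[-2]) / 2)   # average of two consecutive partial sums
print(math.log(math.pi / 2))               # 0.4515827... — they agree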
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1803975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
If (X,T) is perfect and A is a dense subset of X, then A has no isolated points. If $(X,T)$ is perfect and $A \subseteq X$ is a dense subset of X, then A has no isolated points.
Since $A$ is dense $\Rightarrow (\forall U \in T, U \neq \emptyset)(A \cap U \neq \emptyset)$ and
since $(X,T)$ is perfect $\Rightarrow$ $(\forall x \in X)(\{x\} \notin T)$ but I can't figure a way to show that $(\forall x \in A) (\forall V \in T)(A \cap V \neq \{x\}) $. It seems from that that all I can do is show that $A \cap U$ is nonempty. I can't see how I'd show that it's $\neq \{x\}$.
|
This is true if $X$ is $T_1$. Otherwise it need not be true: $X = \{0,1,2,3\}$, with topology $\left\{\emptyset, X, \{0,1\},\{2,3\}\right\}$ is perfect but $A = \{0,2\}$ is dense and consists of two isolated points. For a $T_0$ example, consider $X = [0,\infty)$ in the topology generated by all sets $[0,a) ,a > 0$. Here $X$ is perfect but $\{0\}$ is dense and trivially isolated. So I'll assume $X$ is $T_1$.
Suppose $x$ were an isolated point of $A$. So there is an open $U$ in $X$ such that $U \cap A = \{x\}$. But $\{x\}$ is not isolated in $X$, as $X$ is perfect. So pick $y \neq x$ with $y \in U$. As $X$ is $T_1$, we find an open set $V$ that contains $y$ but not $x$. But then $U \cap V$ is non-empty (it contains $y$), open, and it does not intersect $A$, as $(U \cap V) \cap A = V \cap (U \cap A) = V \cap \{x\} = \emptyset$. This contradicts $A$ being dense. So $A$ has no isolated points.
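The finite counterexample can even be verified mechanically (a brute-force sketch):
from itertools import combinations

X = {0, 1, 2, 3}
T = [set(), {0, 1}, {2, 3}, X]
A = {0, 2}

# T is closed under pairwise unions and intersections
assert all(u | v in T and u & v in T for u, v in combinations(T, 2))
# X is perfect: every open set containing a point has more than one point
assert all(len(u) > 1 for u in T for x in X if x in u)
# A is dense: it meets every nonempty open set
assert all(u & A for u in T if u)
# yet both points of A are isolated in A
assert {0, 1} & A == {0} and {2, 3} & A == {2}
print("counterexample verified")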
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Weight Modification for Computationally-Efficient Nonlinear Least Squares Optimization There was a time where I could figure this out for myself, but my math skills are rustier than I thought, so I have to humbly beg for help. Thank you in advance.
I am solving a weighted nonlinear least-squares problem of the usual form:
$$
\mathbf{\theta}^* = \arg \min_\mathbf{\theta} \sum_i \left[ \frac{y_i-\hat{y_i}\left(\mathbf{\theta}\right)}{w_i} \right]^2
$$
I have programmed an algorithm that can solve this very well*. However, in my particular problem, it is easier** to deal with $y_i^2$ and $\hat{y_i}^2\left(\mathbf{\theta}\right)$.
So my question is: can I modify the weights $w_i \rightarrow w_i^\prime$ so that the following modified formulation gives the same result as before:
$$
\mathbf{\theta}^* = \arg \min_\mathbf{\theta} \sum_i \left[ \frac{y_i^2-\hat{y_i}^2\left(\mathbf{\theta}\right)}{w_i^\prime} \right]^2
$$
*It is not really relevant to the question, but I am using Levenberg-Marquardt.
**The reason is that my $y$'s are geometrical distances that come from a Euclidean norm. I'm programming a microcontroller, where a square root is a computationally expensive function.
|
It's not going to work, I think:
$$
\mathbf{\theta}^* = \arg \min_\mathbf{\theta} \sum_i \left[ \frac{y_i^2-\hat{y_i}^2\left(\mathbf{\theta}\right)}{w_i^\prime} \right]^2
= \arg \min_\mathbf{\theta} \sum_i \left[ \frac{y_i+\hat{y_i}\left(\mathbf{\theta}\right)}{w_i^\prime}\right]^2\left[
\frac{y_i-\hat{y_i}\left(\mathbf{\theta}\right)}{w_i^\prime} \right]^2 \\=
\arg \min_\mathbf{\theta} \sum_i \left[\frac{y_i-\hat{y_i}\left(\mathbf{\theta}\right)}{w_i} \right]^2
$$
Where:
$$
w_i = \frac{(w_i^\prime)^2}{y_i+\hat{y_i}\left(\mathbf{\theta}\right)}
$$
Unless the weights $w_i$ are allowed to be dependent on $y_i$ and $\hat{y_i}\left(\mathbf{\theta}\right)$.
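A three-line numerical check of the algebra (arbitrary positive values, with $w_i^\prime = w_i\,(y_i + \hat y_i)$ evaluated at the same $\boldsymbol\theta$):
import numpy as np

rng = np.random.default_rng(0)
y, yhat, w = rng.uniform(1, 2, 5), rng.uniform(1, 2, 5), rng.uniform(1, 2, 5)
wp = w * (y + yhat)                         # w' = w (y + yhat)
print(np.allclose(((y**2 - yhat**2) / wp) ** 2,
                  ((y - yhat) / w) ** 2))   # True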
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probabilities ant cube I have attached a picture of the cube in the question.
An ant moves along the edges of the cube always starting at $A$ and never repeating an edge. This defines a trail of edges. For example, $ABFE$ and $ABCDAE$ are trails, but $ABCB$ is not a trail. The number of edges in a trail is known as its length.
At each vertex, the ant must proceed along one of the edges that has not yet been traced, if there is one. If there is a choice of untraced edges, the following probabilities for taking each of them apply.
If only one edge at a vertex has been traced and that edge is vertical, then the probability of the ant taking each horizontal edge is $\frac12$.
If only one edge at a vertex has been traced and that edge is horizontal, then the probability of the ant taking the vertical edge is $\frac23$ and the probability of the ant taking the horizontal edge is $\frac13$.
If no edge at a vertex has been traced, then the probability of the ant taking the vertical edge is $\frac23$ and the probability of the ant taking each of the horizontal edges is $\frac16$.
In your solutions to the following problems use exact fractions not decimals.
a) If the ant moves from $A$ to $D$, what is the probability it will then move to $H$? If the ant moves from $A$ to $E$, what is the probability it will then move to $H$?
My answer:
$A$ to $D$ then to $H = \dfrac23$
$A$ to $E$ then to $H = \dfrac12$
b) What is the probability the ant takes the trail $ABCG$?
My answer:
Multiply the probabilities: $$\frac16\times\frac13\times\frac23 = \frac1{27}$$
c) Find two trails of length $3$ from $A$ to $G$ that have probabilities of being traced by the ant that are different to each other and to the probability for the trail $ABCG$.
My answer:
$$\begin{align}
ABFG&=\frac16\times\frac23\times\frac12=\frac1{18}\\[5pt]
AEHG&=\frac23\times\frac12\times\frac13=\frac19
\end{align}$$
d) What is the probability that the ant will trace a trail of length $3$ from $A$ to $G$?
I don't know how to do d). Do I just multiply every single probability?
Also, could you please check to see if I have done the a) to c) correctly? I am not completely sure if this is the correct application of the multiplicative principle.
|
Your answers for a) to c) are correct (except for your loose use of the equals sign).
For d), note that any trail of length $3$ to $G$ will contain exactly two horizontal steps and one vertical step; the vertical step can come at any of the three positions, the probabilities are fully determined by where the vertical step comes, and all trails are mutually exclusive events. You've already determined the probabilities for the three types of trail, so now you just need to count how many of each there are and add up the probabilities multiplied by those multiplicities.
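For reference, the bookkeeping can be done with exact fractions; the three trail probabilities are those from parts b) and c), and the factor $2$ reflects the cube's mirror symmetry swapping $B\leftrightarrow D$ and $F\leftrightarrow H$ (so, e.g., $ADCG$ has the same probability as $ABCG$):
from fractions import Fraction as F

ABCG = F(1, 6) * F(1, 3) * F(2, 3)    # 1/27, vertical step last
ABFG = F(1, 6) * F(2, 3) * F(1, 2)    # 1/18, vertical step in the middle
AEHG = F(2, 3) * F(1, 2) * F(1, 3)    # 1/9,  vertical step first
print(2 * (ABCG + ABFG + AEHG))       # 11/27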
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Variable Change In A Differential Equation If I have the following differential equation:
$\dfrac{dy}{dx} = \dfrac{y}{x} - (\dfrac{y}{x})^2$
And if I make the variable change: $\dfrac{y}{x} \rightarrow z$
I now have $\dfrac{dy}{dx} = z-z^2$
What is $\dfrac{dx}{dy}$ after the variablechange?
|
Suppose you have a differential equation that looks like this: $$y'=F\left ( \frac{y}{x}\right )$$
then you can make a substitution $v(x)=\frac{y}{x} \iff y=vx \implies y'=v+xv'$ to transform your ODE into an ODE in $v$ $$\implies v+xv'=F(v) \iff \frac{dv}{F(v)-v}=\frac{dx}{x}$$
This equation is separated and you can solve it by the usual methods.
In your case you have: $$y'=\frac{y}{x}-\left (\frac{y}{x}\right)^2$$
A substitution $u=\frac{y}{x} \iff ux=y \iff y'=u+xu'$ will lead to the differential equation:
$$xu'=-u^2$$
Can you solve it from here?
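If you want to check your hand computation afterwards, a computer algebra system confirms the general solution of the remaining separable equation; this is a sketch using sympy (the variable names are illustrative):
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.Function('u')
sol = sp.dsolve(sp.Eq(x * u(x).diff(x), -u(x)**2), u(x))
print(sol)    # u(x) = 1/(C1 + log(x)), hence y = x*u = x/(C1 + log(x))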
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Unitization of Suspension Let $A$ a C*-algebra (unital or not). Its suspension is defined to be: $$ S(A) \equiv A\otimes C_0((0,1);\,\mathbb{C}) $$where $C_0$ denotes all continuous functions which vanish at infinity.
We know that if $X$ is locally compact and if $X^{+}$ is its one-point compactification, then $$ \widetilde{C_0(X;\,\mathbb{C})}\cong C(X^{+};\,\mathbb{C})$$
My question is:
1) Given two C*-algebras $A$ and $B$ (unital or not), is it true that $$ \widetilde{A\otimes B} \cong \widetilde{A}\otimes\widetilde{B} $$?
2) Does it then follow that $$ \widetilde{S(A)}\cong \widetilde{A}\otimes C(S^1;\,\mathbb{C})$$?
|
As noted in the comments, this is false. Intuitively, on the left-hand side in 1) you add one point to both spaces and then take the cartesian product whereas on the right-hand side you first take the product and then add one point.
For example let $A=B=C_0((0,1))$. Then $\widetilde{A\otimes B}=\widetilde{C_0((0,1)\times (0,1))}=C(S^2)$ and $\tilde A\otimes \tilde B=C(S^1)\otimes C(S^1)=C(\mathbb{T}^2)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Can integration relate real and complex numbers? eg, considering $\int\frac{dx}{1+x^2}$ vs $\int\frac{dx}{(1+ix)(1-ix)}$ We all know that
$$\int\frac{dx}{1+x^2}=\tan^{-1}x+C$$
Let's evaluate this a bit differently,
$$\int\frac{dx}{1+x^2}=\int\frac{dx}{(1+ix)(1-ix)}$$
$$=\frac{1}{2}\int\frac{(1+ix+1-ix)dx}{(1+ix)(1-ix)}$$
$$=\frac{i}{2}\ln \left(\frac{1-ix}{1+ix}\right)+\mathbb C$$
but that must mean
$$\tan^{-1}x=\frac{i}{2}\ln \left(\frac{1-ix}{1+ix}\right)+\mathbb C$$
Plugging in $x=0$, we get
$\mathbb C=0$
$$\implies i\ln \left(\frac{1-ix}{1+ix}\right)=2\tan^{-1} x$$
Question:
Is this valid?
If yes then can someone give more examples which lead to such relations?
Thanks!
|
Let $\,\,\mathsf{y=tan^{-1}(x)}$
$\mathsf{\implies\,x=tan(y)}$
$\mathsf{\implies\,x=\dfrac{sin(y)}{cos(y)}}$
$\mathsf{\implies\,x=\dfrac{\dfrac{e^{iy}-e^{-iy}}{2i}}{\dfrac{e^{iy}+e^{-iy}}{2}}}$
$\mathsf{\implies\,x=\dfrac{e^{iy}-e^{-iy}}{i\left(e^{iy}+e^{-iy}\right)}}$
$\mathsf{\implies\,ix=\dfrac{e^{iy}-e^{-iy}}{e^{iy}+e^{-iy}}}$
$\mathsf{\implies\,\dfrac{1}{ix}=\dfrac{e^{iy}+e^{-iy}}{e^{iy}-e^{-iy}}}$
Using componendo and dividendo,
$\mathsf{\implies\,\dfrac{1+ix}{1-ix}=\dfrac{e^{iy}+e^{-iy}+e^{iy}-e^{-iy}}{e^{iy}+e^{-iy}-e^{iy}+e^{-iy}}}$
$\mathsf{\implies\,\dfrac{1+ix}{1-ix}=\dfrac{2\,e^{iy}}{2\,e^{-iy}}}$
$\mathsf{\implies\,\dfrac{1+ix}{1-ix}=\dfrac{e^{iy}}{e^{-iy}}}$
$\mathsf{\implies\,\dfrac{1+ix}{1-ix}=e^{2iy}}$
Taking log both sides,
$\mathsf{\implies\,\ln\left(\dfrac{1+ix}{1-ix}\right)=\ln\left(e^{2iy}\right)}$
$\mathsf{\implies\,\ln\left(\dfrac{1+ix}{1-ix}\right)=2iy}$
$\mathsf{\implies\,y=\dfrac{1}{2\,i}\ln\left(\dfrac{1+ix}{1-ix}\right)}$
$\mathsf{\implies\,y=-\dfrac{i}{2}\ln\left(\dfrac{1+ix}{1-ix}\right)}$
$\mathsf{\implies\,y=\dfrac{i}{2}\ln\left(\dfrac{1-ix}{1+ix}\right)}$
$\mathsf{\implies\,tan^{-1}(x)=\dfrac{i}{2}\ln\left(\dfrac{1-ix}{1+ix}\right)}$
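The identity can be sanity-checked with complex arithmetic (using the principal branch of the logarithm; the sample points are arbitrary):
import cmath, math

for x in (0.3, 1.0, -2.5):
    rhs = (1j / 2) * cmath.log((1 - 1j * x) / (1 + 1j * x))
    print(math.atan(x), rhs)   # rhs has zero imaginary part and equals atan(x)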
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to evaluate $\int_0^1\frac{\ln(1-2t+2t^2)}{t}dt$? The question starts with:
$$\int_0^1\frac{-2t^2+t}{-t^2+t}\ln(1-2t+2t^2)dt\text{ = ?}$$
My attempt is as follows:
$$\int_0^1\frac{-2t^2+t}{-t^2+t}\ln(1-2t+2t^2)dt$$
$$=2\int_0^1\ln(1-2t+2t^2)dt+\int_0^1\frac{-t}{-t^2+t}\ln(1-2t+2t^2)dt$$
$$=-4+\pi-\frac{1}{2}\int_0^1\frac{\ln(1-2t+2t^2)}{-t^2+t}dt$$
$$=-4+\pi-\frac{1}{2}(\int_0^1\frac{\ln(1-2t+2t^2)}{t}dt+\int_0^1\frac{\ln(1-2t+2t^2)}{1-t}dt)$$
$$=-4+\pi-\int_0^1\frac{\ln(1-2t+2t^2)}{t}dt$$
The question left with: how can I evaluate:
$$\int_0^1\frac{\ln(1-2t+2t^2)}{t}dt$$
Wolfram Alpha gives me the result: $-\frac{\pi^2}{8}$, but I am not able to obtain this result by hand.
|
As OP has found
\begin{equation*}
I = \int_0^1\dfrac{\ln(1-2t+2t^2)}{t}\, dt = \dfrac{1}{2}\int_0^1\dfrac{\ln(1-2t+2t^2)}{t(1-t)}\, dt.
\end{equation*}
Via the transformation $s= \dfrac{t}{1-t}$ and a partial integration we get
\begin{equation*}
I = \dfrac{1}{2}\int_0^{\infty}\dfrac{\ln(1+s^2)-2\ln(1+s)}{s}\, ds = \int_0^{\infty}\dfrac{(1-s)\ln(s)}{(s+1)(s^2+1)}\, ds.
\end{equation*}
After integrating
\begin{equation*}
f(s) = \dfrac{(1-s)\log^2(s)}{(s+1)(s^2+1)}
\end{equation*}
along a keyhole or a branch cut we have
\begin{equation*}
\int_0^{\infty}\dfrac{(1-s)(\ln^2(s)-(\ln(s)+i2\pi)^2)}{(s^2+1)(s+1)}\, ds = 2\pi i\sum_{s=-1,\pm i}{\rm Res} f(s).
\end{equation*}
From this we can extract that $I = -\dfrac{\pi^{2}}{8}.$
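Numerical quadrature confirms the final value (a sketch with scipy; the integrand extends continuously to $t=0$ with limit $-2$):
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: np.log(1 - 2 * t + 2 * t**2) / t, 0, 1)
print(val, -np.pi**2 / 8)     # both -1.2337...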
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1804604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
}
|