| Q | A | meta |
|---|---|---|
| string (lengths 18–13.7k) | string (lengths 1–16.1k) | dict |
What is going on with the symmetry group of the cube? I've been studying the symmetry groups of the platonic solids, and the cube has me confused. I've been considering the four diagonals on the cube. By writing out all permutations in $S_4$, it's clear that almost all permutations of the four diagonals can be reached by one rotational symmetry on the cube, except 3 permutations. These are the three of order $2$, namely $$(1,2)(3,4), (1,3)(2,4), (1,4)(2,3)$$ which are only reached by applying two consecutive rotations on the cube. They are reached by rotations around the axis between two opposite edges. Since the direct symmetry group of the cube is isomorphic to $S_4$, what additional information is gained by considering the reflections?
Consider this sketch: Cube with diagonals. The reflection through the plane in which diagonals $2$ and $4$ lie will give me the permutation $(1,3)$, but the same permutation can be reached by rotation around the axis between the edges connecting the endpoints of diagonals $1$ and $3$. I'm struggling to see what the reflections do to this group, because it seems that the full symmetry group should be larger than the direct symmetry group, which already is isomorphic to $S_4$.
I also, on the advice of my advisor, considered the symmetries of the cube on its faces. I labeled them as you would see on a six-sided die. By composing the reflection $(1,6)$ and the rotation $(1,4,6,3)$, I got a permutation of order $6$, namely $(3,1,2,4,6,5)$. No permutations of order $6$ appear in the direct symmetry group of the cube when considering the faces, so this seems to me to suggest that the full symmetry group is in fact larger than $S_4$, but I can't figure out exactly how to formulate it.
Any help or tips is greatly appreciated.
|
The action of the orientation-preserving symmetry group of the cube on the four diagonals is indeed an isomorphism between two groups isomorphic to $S_4$. The orientation-preserving symmetry group has index $2$ in the full symmetry group of the cube (a group of order $48$). Note that the point reflection through the center swaps opposite faces, $(1,6)(2,5)(3,4)$, while leaving the four diagonals unaffected (it only reverses the direction within each diagonal).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $a^2+b^2=c^2$ then $a$ or $b$ is even. I am having trouble proving this directly. I know that it is easy to prove by contradiction by assuming both $a$ and $b$ to be odd, but how should I start to try to prove this? This is a homework problem and I am just looking for help getting started, not the full answer.
|
It's a bit of a fine line between proving directly and by contradiction. Try breaking it into cases, such as
Suppose $c^2$ is even/odd and WLOG $b$ is odd, then show that $a$ must be even.
Since the conclusion isn't a contradiction, but rather that $a$ is even, this is a direct proof. You should end with something like $a = 2n$ where $n$ is an integer (definition of even).
It might help if you have a theorem that says that $a$ and $a^2$ have the same parity. Maybe worth a lemma.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Picking Multiples of 4 I recently came up with and tried to solve the following problem: If you are randomly picking integers in the range $[1,30]$ out of a hat without replacement, on average, how many integers will you have to pick until you have picked all of the multiples of $4$?
There are $7$ multiples of $4$ that can be chosen. I know that the expected value of picks until you pick a multiple of 4 is the smallest value of $n$ such that $1-\displaystyle\prod_{i=0}^{n-1}\dfrac{23-i}{30-i}>0.5$, which is $3$. However, I don't know how to figure out how many picks are needed until all multiples of $4$ have been chosen. Can I please have some assistance?
|
The probability of getting all multiples of $4$ precisely on draw $k$ is
$$
\frac{\binom{k-1}{6}}{\binom{30}{7}}
$$
Thus, the expected number of draws would be
$$
\begin{align}
\frac1{\binom{30}{7}}\sum_{k=7}^{30}k\binom{k-1}{6}
&=\frac7{\binom{30}{7}}\sum_{k=7}^{30}\binom{k}{7}\\
&=\frac7{\binom{30}{7}}\binom{31}{8}\\
&=\frac{217}{8}\\[9pt]
&=27.125
\end{align}
$$
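As a cross-check I've added (not part of the original answer; it assumes Python with only the standard library), the closed form can be evaluated directly and compared against a draw-without-replacement simulation:

```python
from math import comb
import random

# Exact value of sum_{k=7}^{30} k*C(k-1,6) / C(30,7)
exact = sum(k * comb(k - 1, 6) for k in range(7, 31)) / comb(30, 7)
print(exact)  # 27.125

# Monte Carlo: shuffle 1..30 and record the position of the last multiple of 4.
multiples = {4, 8, 12, 16, 20, 24, 28}
trials = 100_000
total = 0
for _ in range(trials):
    order = random.sample(range(1, 31), 30)
    total += max(i + 1 for i, v in enumerate(order) if v in multiples)
print(total / trials)  # close to 27.125
```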
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
}
|
Proving density of a set I want to show that $A=\{(x_n)\in c_{00}:\sum\limits_{n=1}^{\infty}x_n=0\}$ is dense in $(c_0,\|.\|_{\infty})$.
Let $x=(x_n)\in c_0$ and let $\epsilon >0$. Thus there exists $n_0\in \mathbb N$ such that $|x_n|<\epsilon $ for all $n\geq n_0$. Now the question is how to choose an element from $A$? I wanted to choose $y=(x_1-x_2,x_2-x_3,\ldots,x_{n_0-1}-x_1,0,0,\ldots)\in A$ but it didn't work as $\|x-y\|_{\infty}=\|x\|_{\infty}$ and we can't say it is less than $\epsilon$. Now what to do?
|
We must show that $$\forall a_n\in c_0\quad,\quad\forall\epsilon>0\quad,\quad\exists b_n\in A\quad,\quad||a_n-b_n||_\infty<\epsilon$$and, by definition, $$||a_n-b_n||_\infty<\epsilon\iff \sup_{n}|a_n-b_n|<\epsilon\iff|a_n-b_n|<\epsilon\quad,\quad\forall n.$$We try to make such a sequence $b_n$ for any given $a_n\in c_0$. Since $a_n$ tends to $0$ there exists some $M_\epsilon$ such that $$\forall n>M_\epsilon\quad,\quad|a_n|<\dfrac{\epsilon}{2}$$For $n\le M_\epsilon$ define $b_n$ such that $$a_n-\epsilon<b_n<a_n+\epsilon$$Now define $$\Large S=\sum_{n=1}^{M_\epsilon}b_n$$If $S\ge 0$, then for $M_\epsilon+1\le n \le M_\epsilon+\left[\dfrac{2S}{\epsilon}\right]$ define $b_n=-\dfrac{\epsilon}{2}$ and $$\Large b_{M_\epsilon+\left[\dfrac{2S}{\epsilon}\right]+1}=-\sum_{n=1}^{M_\epsilon+\left[\dfrac{2S}{\epsilon}\right]}b_n$$
If $S< 0$, then for $M_\epsilon+1\le n \le M_\epsilon+\left[\dfrac{2|S|}{\epsilon}\right]$ define $b_n=\dfrac{\epsilon}{2}$ and $$\Large b_{M_\epsilon+\left[\dfrac{2|S|}{\epsilon}\right]+1}=-\sum_{n=1}^{M_\epsilon+\left[\dfrac{2|S|}{\epsilon}\right]}b_n$$ and $$b_n=0\quad,\quad\forall n\ge M_\epsilon+\left[\dfrac{2|S|}{\epsilon}\right]+2$$Using this approach we make sure that $$|a_n-b_n|<\epsilon\quad,\quad \forall n\in\Bbb N,$$which completes our proof of the existence of such a sequence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What's so special about a prime ideal? An ideal is defined something like follows:
Let $R$ be a ring, and $J$ an ideal in $R$. For all $a\in R$ and $b\in J$, $ab\in J$ and $ba\in J$.
Now, $J$ would be considered a prime ideal if
For $a,b\in R$, if $ab\in J$ then $a\in J$ or $b\in J$.
To my (admittedly naive) eyes, this isn't saying much. More or less, I guess it just sounds like a backwards way of describing a regular ideal.
*
*$a,b$ are always elements of $R$, though the prime ideal definition doesn't specify that one has to be in $J$...
*... but the definition of a normal ideal already tells that the product is in $J$ if one of the elements is in $J$.
So, in both cases, the product is in $J$, and either of the elements is in $J$, making them seem like incredibly similar statements to me, and not saying much about the interesting "prime-like" properties of a prime ideal.
What makes these two different?
|
The easiest way to answer this, I think, is with an example. Let $R=\mathbb{Z}$, and let's consider the ideal $I=(6)$. This is the set of all integers that are multiples of $6$. You can see that it's an ideal because if you take any multiple of $6$ and multiply it by any other integer, the result is still a multiple of $6$. So $I$ is closed under multiplication by any element of the ring.
But $I=(6)$ is not a prime ideal. You can see this because $2 \notin (6)$ and $3 \notin (6)$, but $2 \cdot 3 \in (6)$.
On the other hand, the ideal $(3)$ is prime: If you multiply two numbers together and the result is a multiple of $3$, then at least one of the two numbers you began with must also be a multiple of $3$.
If you ponder this example, you will also understand why the word "prime" is used for this property.
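To make the definition concrete, here is a small brute-force check I've added (a sketch in Python, not part of the original answer): search for a pair $a,b$ with $ab\in(n)$ but $a,b\notin(n)$.

```python
def counterexample(n, bound=100):
    """Return (a, b) with a*b in (n) but neither a nor b in (n), or None."""
    for a in range(1, bound + 1):
        for b in range(1, bound + 1):
            if (a * b) % n == 0 and a % n != 0 and b % n != 0:
                return (a, b)
    return None

print(counterexample(6))  # (2, 3): the ideal (6) is not prime
print(counterexample(3))  # None: consistent with (3) being prime
```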
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 0
}
|
How to show that this cone doesn't have a generating set that forms a basis of $\mathbb Z^2$? Consider the cone $\sigma=\mathbb R_{\ge0} e_1 +\mathbb R_{\ge0}(e_1+2e_2)$ in $\mathbb R^2$. I am trying to show that $\sigma$ is not a smooth cone. That is I want to show that $\sigma$ doesn't have a generating set that forms a $\mathbb Z$ - basis of $\mathbb Z^2$.
Clearly $e_1,e_1+2e_2$ don't form a $\mathbb Z$-basis of $\mathbb Z^2$. But how do I show that it has no such generating set? I understand intuitively that checking only the generators on the boundary of the cone is enough, but how do I say that more concretely?
|
Here's a concrete way to explain it: show that there is no linear combination (over $\Bbb Z$) of $e_1$ and $e_1 + 2e_2$ that produces the vector $e_2 \in \Bbb Z^2$. Indeed, any such combination $a\,e_1 + b\,(e_1+2e_2) = (a+b,\,2b)$ has an even second coordinate, so it can never equal $e_2=(0,1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Splitting Lemma where $C=\mathbb{Z}.$ Given a short exact sequence
$$ 0 \xrightarrow{\theta_3} A \xrightarrow{\theta_2} B \xrightarrow{\theta_1} \mathbb{Z} \xrightarrow{\theta_0} 0 $$
show that $B \cong A \oplus \mathbb{Z}.$
So far I have that $\theta_2$ is injective and as $0 \to \operatorname{Im}(\theta_3) \to A \to \operatorname{Im}(\theta_2) \to 0 $ is exact, then $A \cong \operatorname{Im}(\theta_2).$ Similarly, $\operatorname{Im}(\theta_1) \cong \mathbb{Z}$ from the surjectivity of $\theta_1.$
Also from exactness we have that $\operatorname{Im}(\theta_2) \cong \operatorname{Ker}(\theta_1).$
|
This is the elementary way to prove it:
Let $b \in B$ with $\theta_1(b)=1$ and define a homomorphism $s: \mathbb Z \to B, 1 \mapsto b$.
Then you can show that
$$A \oplus \mathbb Z \to B, (a,z) \mapsto \theta_2(a) + s(z)$$
is an isomorphism.
Using results from homological algebra, one would just say that $\mathbb Z$ is free, hence projective. Thus the sequence splits and we obtain the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
integrating this infinite gaussian integral How does one integrate
$\int_{-\infty}^{+\infty}x e^{-\lambda ( x-a )^2 }dx $
where $\lambda$ is a positive constant.
My integral tables are not returning anything usable. The best they return is indefinite Gaussian integrals. Useless!
Help please
|
$$
\begin{align}
\int_{-\infty}^{+\infty}x e^{-\lambda(x-a)^2}\,\mathrm{d}x
&=\int_{-\infty}^{+\infty}(x+a) e^{-\lambda x^2}\,\mathrm{d}x\tag{1}\\
&=\frac{a}{\sqrt{\lambda}}\int_{-\infty}^{+\infty}e^{-x^2}\,\mathrm{d}x\tag{2}\\
&=a\sqrt{\frac\pi\lambda}\tag{3}
\end{align}
$$
Explanation:
$(1)$: substitute $x\mapsto x+a$
$(2)$: $\int_{-\infty}^{+\infty}xe^{-\lambda x^2}\,\mathrm{d}x=0\because$ odd integrand, then substitute $x\mapsto\frac x{\sqrt\lambda}$
$(3)$: $\int_{-\infty}^{+\infty}e^{-x^2}\,\mathrm{d}x=\sqrt\pi$
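A numerical sanity check I've added (a sketch assuming numpy and scipy, with arbitrarily chosen values of $\lambda$ and $a$):

```python
import numpy as np
from scipy.integrate import quad

lam, a = 2.0, 1.5  # arbitrary test values, lambda > 0

# Integrate x * exp(-lam*(x-a)^2) over the whole real line.
numeric, _ = quad(lambda x: x * np.exp(-lam * (x - a) ** 2), -np.inf, np.inf)
closed_form = a * np.sqrt(np.pi / lam)
print(numeric, closed_form)  # both roughly 1.88
```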
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
$L^p-L^q$ estimates for heat equation - regularizing effect Where can I find a proof of the following estimate
$$\|S(t)v\|_{L^p(\Omega)}\leq C t^{-\frac{N}{2}\left(\frac{1}{q}-\frac{1}{p}\right)}\|v\|_{L^q(\Omega)}, $$
where $1\leq q\leq p<+\infty$, $\Omega\subset \mathbb{R}^N$ is an open bounded set and $\{S(t)\}_{t\geq 0}$ is the semigroup generated by the heat equation with Dirichlet boundary conditions.
|
Let $N_t:\mathbb{R}^N\to \mathbb{R}$, $t>0$, be the function defined by
$$N_t(x)=(4\pi t)^{-N/2}e^{-|x|^2/4t}.$$
Since
$$\int_{\mathbb{R}^N} e^{-a|x|^2}dx=\left(\frac{\pi}{a}\right)^{N/2},\tag{1}\label{1}$$
we can see that $N_t\in L^1(\mathbb{R}^N)$ and $\|N_t\|_{ L^1(\mathbb{R}^N)}=1$.
We know that $|S(t)v|\leq N_t\ast |\tilde v|$, where $\tilde v$ denotes the extension of $v$ by zero to $\mathbb{R}^N$ (the Dirichlet heat semigroup is dominated by the whole-space one). From Young's inequality, we have
$$\|S(t)v\|_{ L^p(\Omega)}\leq \|N_t\ast |\tilde v|\,\|_{ L^p(\mathbb{R}^N)}\leq \|N_t\|_{ L^m(\mathbb{R}^N)}\|\tilde v\|_{ L^q(\mathbb{R}^N)}=\|N_t\|_{ L^m(\mathbb{R}^N)}\|v\|_{ L^q(\Omega)},$$
where $1+\frac{1}{p}=\frac{1}{m}+\frac{1}{q}.$
Now, we just have to estimate $\|N_t\|_{ L^m(\mathbb{R}^N)}$. From \eqref{1}, we can see that
$$\|N_t\|_{ L^m(\mathbb{R}^N)}=(4\pi t)^{-N/2}\left(\int_{\mathbb{R}^N} e^{-\frac{m}{4t}|x|^2}dx\right)^{1/m}=(4\pi t)^{-N/2}\left(\frac{\pi}{\frac{m}{4t}}\right)^{N/2m}=C_{m,N}t^{-\frac{N}{2}\left(1-\frac{1}{m}\right)}=C_{m,N}t^{-\frac{N}{2}\left(\frac{1}{q}-\frac{1}{p}\right)}.$$
Hence, we have the result.
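As a numerical sanity check on the kernel computation (my own sketch for $N=1$, assuming scipy; $t$ and $m$ are arbitrary test values), the $L^m(\mathbb R)$ norm of $N_t$ should match $(4\pi t)^{-1/2}\left(\frac{4\pi t}{m}\right)^{1/(2m)}$:

```python
import numpy as np
from scipy.integrate import quad

def lm_norm_of_kernel(t, m):
    """Numerical L^m norm of the 1D heat kernel N_t over the real line."""
    kernel = lambda x: (4 * np.pi * t) ** (-0.5) * np.exp(-x**2 / (4 * t))
    integral, _ = quad(lambda x: kernel(x) ** m, -np.inf, np.inf)
    return integral ** (1 / m)

t, m = 0.37, 2.5  # arbitrary test values
numeric = lm_norm_of_kernel(t, m)
closed_form = (4 * np.pi * t) ** (-0.5) * (4 * np.pi * t / m) ** (1 / (2 * m))
print(numeric, closed_form)  # should agree to high precision
```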
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Proof that $\sum_{i=0}^n 2^i = 2^{n+1} - 1$ $\sum_{i=0}^n 2^i = 2^{n+1} - 1$
I can't seem to find the proof of this. I think it has something to do with combinations and Pascal's triangle. Could someone show me the proof? Thanks
|
Mathematical induction will also help you.
*
*(Base step) When $n=0$, $\sum_{i=0}^0 2^i = 2^0 = 1= 2^{0+1}-1$.
*(Induction step) Suppose that there exists $n$ such that $\sum_{i=0}^n 2^i = 2^{n+1}-1$. Then $\sum_{i=0}^{n+1}2^i=\sum_{i=0}^n 2^i + 2^{n+1}= (2^{n+1}-1)+2^{n+1}=2^{n+2}-1.$
Therefore the given identity holds for all $n\in \mathbb{N}_0$.
Edit: If you want to apply combinations and Pascal's triangle, observe
\begin{align}
2^0&=\binom{0}{0}\\
2^1&=\binom{1}{0}+\binom{1}{1}\\
2^2&=\binom{2}{0}+\binom{2}{1}+\binom{2}{2}\\
2^3&=\binom{3}{0}+\binom{3}{1}+\binom{3}{2}+\binom{3}{3}\\
\vdots&=\vdots\\
2^n&=\binom{n}{0}+\binom{n}{1}+\binom{n}{2}+\binom{n}{3}+\cdots+\binom{n}{n}
\end{align}
Hockey stick identity says that
$$
\sum_{i=r}^n \binom{i}{r}=\binom{n+1}{r+1}.
$$
and so
\begin{align}
\binom{0}{0}+\binom{1}{0}+\cdots+\binom{n}{0}&=\binom{n+1}{1}\\
\binom{1}{1}+\binom{2}{1}+\cdots+\binom{n}{1}&=\binom{n+1}{2}\\
\binom{2}{2}+\binom{3}{2}+\cdots+\binom{n}{2}&=\binom{n+1}{3}\\
\vdots&=\vdots\\
\binom{n}{n}&=\binom{n+1}{n+1}
\end{align}
Add all terms, then we get
\begin{align}
\sum_{i=0}^n 2^i &= \sum_{i=1}^{n+1} \binom{n+1}{i}\\
&=\sum_{i=0}^{n+1}\binom{n+1}{i}-1\\
&=2^{n+1}-1.
\end{align}
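A one-line numerical check of the identity I've added (plain Python):

```python
# Verify sum_{i=0}^{n} 2^i == 2^(n+1) - 1 for n = 0..20.
assert all(sum(2**i for i in range(n + 1)) == 2**(n + 1) - 1 for n in range(21))
print("identity holds for n = 0..20")
```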
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Are there order statistics for a Gaussian variable raised to a power? Let $X$ be a random variable with a standard normal distribution. Let $Y = |X|^{2p}$. I am trying to find the distribution for $Y_{(n)}$, i.e., the largest value of $Y$ out of $n$ samples.
I have derived the pdf to be:
$$f_{Y_{(n)}} = n \left(\frac{1}{p\sqrt{2\pi}} y^{\frac{1}{2p} - 1} \exp\left(-\frac{1}{2}y^{1/p} \right)\right) \left(\int_0^y \frac{1}{p\sqrt{2\pi}} t^{\frac{1}{2p} - 1} \exp\left(-\frac{1}{2}t^{1/p}\right) \, dt \right)^{n-1}$$
But Mathematica says $EY_{(n)}$ is infinite. Intuitively, I feel that it should be some finite value in terms of p and n. Any ideas?
|
In this answer I will try to derive an analytic formula for you. I think my answer is the same as yours. Anyway, you can use it to compare your result and check what might have gone wrong.
Let $\Phi$ denote the cumulative distribution function of the standard normal distribution. That is
$$
\Phi(x) = \int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}e^{-t^2/2}dt
$$
In the following let the $X_i$ be standard normally distributed and $Y_i = |X_i|^{2p}$. Then
$$
P(Y_{(n)}\le y) = P(Y_1\le y;\cdots;Y_n\le y) = \prod_{i=1}^{n}P(Y_i\le y) =: F_Y(y)^n
$$
and thus
$$
f_{Y_{(n)}}(y) = \frac{d}{dy}F_Y(y)^n = nF_Y(y)^{n-1}f_Y(y).
$$
We conclude that if we can determine $F_Y(y)$, then we have the pdf of $Y_{(n)}$.
$$
F_Y(y) = P(Y\le y) = P(|X|^{2p}\le y) = P(-y^{1/2p}\le X \le y^{1/2p}) = 2\Phi(y^{1/2p})-1
$$
From this we see that
$$
f_Y(y) = \frac{d}{dy}\left(2\Phi(y^{1/2p})-1\right) = 2f_X(y^{1/2p})\frac{1}{2p}y^{-\frac{2p-1}{2p}}.
$$
Finally we get that
\begin{align}
f_{Y_{(n)}}(y) &= n\left(2\Phi(y^{1/2p})-1\right)^{n-1}2f_X(y^{1/2p})\frac{1}{2p}y^{-\frac{2p-1}{2p}}\\
&= n\left(2\int_{-\infty}^{y^{1/2p}}\frac{1}{\sqrt{2\pi}}e^{-t^2/2}dt-1\right)^{n-1}\left(\frac{1}{\sqrt{2\pi}}e^{-y^{1/p}/2}\right)\frac{1}{p}y^{-\frac{2p-1}{2p}}
\end{align}
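A Monte Carlo cross-check I've added (a sketch assuming numpy and scipy; $p$ and $n$ are arbitrary test values): the empirical cdf of $Y_{(n)}$ should match $\left(2\Phi(y^{1/2p})-1\right)^n$, and the sample mean comes out finite, in line with the intuition in the question.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, n = 1.5, 5           # arbitrary test values
samples = 200_000

# Y_(n) = max of n iid |X|^(2p), X standard normal.
X = rng.standard_normal((samples, n))
Yn = np.max(np.abs(X) ** (2 * p), axis=1)

# Compare empirical and analytic cdf at a few points.
for y in [0.5, 1.0, 2.0, 5.0]:
    empirical = np.mean(Yn <= y)
    analytic = (2 * norm.cdf(y ** (1 / (2 * p))) - 1) ** n
    print(y, round(empirical, 4), round(analytic, 4))

print("sample mean of Y_(n):", Yn.mean())  # a finite value
```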
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
If $R$ is an equivalence relation, is $R^2$ one too? I think that yes,
$I_A \subseteq R$
$R = R^{-1}$
$R^2 \subseteq R$
And now we can show.
Reflex: $I_A = I_A^2 \subseteq R \subseteq R^2$
I'm struggling a bit with symmetry and transitivity. Can you show me how you would prove them?
|
The key is that $R_1\subseteq R_2$ implies $R_1^2\subseteq R_2^2$. Together with $I_A^2=I_A$ and $(R^{-1})^2=(R^2)^{-1}$ the three properties for $R^2$ follow.
But actually the situation is much simpler: While transitivity (alone) of $R$ in fact just means $R^2\subseteq R$, transitivity plus reflexivity means that $$R^2=(R\cup I_A)^2=R^2\cup RI_A\cup I_AR\cup I_A^2=R^2\cup R\cup I_A=R. $$
Hence $R^2$ is an equivalence relation because it is the same relation as $R$. (We also note that we did not need symmetry to show $R^2=R$, only transitivity and reflexivity; hence $R^2=R$ holds as well for relations such as $\le$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Getting the sides of unit circle I'm very confused about how to get the sides of the unit circle. By that I mean the sin/cos of 0, 90, 180, 360... I can get the others by this logic, for example:
Image:
For example if it asks me
*
*Sin 150: I would first see the quadrant. It's Quadrant II, so that means Sin is the only one positive... OK, so I know it's positive, and next I would know that sin is always the height, so according to that picture it is 1/2. So I would know the sin of 150 is 1/2...
I can do that for every angle presented on the picture except the 0, 90, 180, 360... How do you possibly calculate the sin/cos for the angles I just mentioned? Thank you
|
In the following triangles, use the Pythagorean theorem to find the missing lengths, then find the trigonometric ratios for $0,\pi/2, \pi/3,\pi/4,$ and $\pi/6$. Don't even bother memorizing, just draw the appropriate triangle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Defining term with $3$-vectors and $3 \times 3$ matrices I don't normally ask questions, but my professor isn't responding back to me. I do not want an answer or anything, just a pointer in the right direction so I can figure this set of questions out.
The question is:
$3$-vectors with Cartesian coordinates and $3 \times 3$ matrices:
Scalar multiplication (scalar times matrix)
Matrix multiplication (matrix times matrix)
At the moment, I think I have the following answer as
$s[M]$ which is $s \cdot M_{ij}$ for the matrix
$s[V]$ which is $s\cdot V_i$ for the Cartesian in a vector form.
However I'm not so sure if that what the question want. Does it want the $VM$, where $V$ is the scalar in question?
Sorry if this is confusing. To be honest, I'm very confused about the problem too.
|
Let $M$ be a $3 \times 3$ matrix given by columns $M_1, M_2, M_3$ (so each of $M_1, M_2, M_3$ is a three vector in column format). Then we have $$M = [M_1|M_2|M_3].$$
Let $W$ be a $3 \times 3$ matrix given by rows $W_1, W_2, W_3$ (so each of $W_1, W_2, W_3$ is a three vector in row format). Then we have $$W = \pmatrix{W_1\\W_2\\W_3}.$$
Let $$V = \pmatrix{v_1\\v_2\\v_3}$$ be a 3-vector in column format. Let $Y$ also be a vector, this time in row form: $$Y = \pmatrix{y_1&y_2&y_3}.$$
Let $s$ be a scalar. Then, as you said, $s[V] = s\cdot V = \pmatrix{sv_1\\sv_2\\sv_3}$
Also, $$s[M] = s\cdot M = [s\cdot M_1|s\cdot M_2|s\cdot M_3],$$ where multiplication of a vector by a scalar is defined above.
Multiplication of two vectors $$Y\cdot V = y_1\cdot v_1 + y_2\cdot v_2 + y_3\cdot v_3.$$
Finally, we have matrix multiplication:
$$W\cdot M =\pmatrix{W_1\cdot M_1 & W_1\cdot M_2 & W_1\cdot M_3\\ W_2\cdot M_1 & W_2\cdot M_2 & W_2\cdot M_3\\ W_3\cdot M_1 & W_3\cdot M_2 & W_3\cdot M_3},$$ where the entry in row $i$ and column $j$ is the product $W_i\cdot M_j$ of the row vector $W_i$ with the column vector $M_j$, computed as described above in the multiplication of $Y$ and $V$.
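A small numerical illustration of these definitions I've added (a sketch assuming numpy; the particular matrices are made up):

```python
import numpy as np

s = 2.0
V = np.array([1.0, 2.0, 3.0])            # a 3-vector
M = np.arange(1.0, 10.0).reshape(3, 3)   # a 3x3 matrix
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [3.0, 0.0, 1.0]])           # another 3x3 matrix

print(s * V)   # scalar times vector: each component scaled
print(s * M)   # scalar times matrix: each entry scaled
print(W @ V)   # matrix times vector
print(W @ M)   # matrix times matrix

# Entry (i, j) of W.M really is the dot product of row i of W with column j of M.
i, j = 1, 2
assert np.isclose((W @ M)[i, j], W[i, :] @ M[:, j])
```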
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability that three are males and two are female? The human sex ratio at birth is commonly thought to be 107 boys to 100 girls. Suppose five infants are chosen at random.
(A) What is the probability that three are males and two are female?
(B) What is the probability that at least one of them is a male?
My work: (A) $(5C2 * 5C3)$ / $207C5$
(B) $(5C1 * 202C4)$ / $207C5$
I'm not sure if these are correct.
|
107:100 implies that the chance of being male is $p = \frac{107}{207}$.
Notice that we have $n = 5$ (presumably independent) trials (children) with probability $p = 107/207$ of success (if I consider selecting a boy being as a success). Then the number of boys selected $N$ follows a binomial distribution with $n = 5, p = 107/207$
a) Notice that selecting 3 males in 5 tries forces us to have 2 females. Hence we only need to consider getting 3 boys in five trials. Can you find that?
$P(N = 3) =\binom{5}{3}p^3(1-p)^2 = 0.3223292$
b) In terms of notation, this asks for $P(N\geq 1)$, but it is easier to use the complementary probability. Can you do it?
$P(N\geq 1) = 1-P(N = 0) = 1-\binom{5}{0}p^0(1-p)^5 =1- 0.02631166=0.9736883$.
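These two values are easy to reproduce numerically (my addition, assuming scipy):

```python
from scipy.stats import binom

n, p = 5, 107 / 207
print(binom.pmf(3, n, p))      # P(N = 3)  ~ 0.3223
print(1 - binom.pmf(0, n, p))  # P(N >= 1) ~ 0.9737
```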
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1677894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the probability that Andrea, Melissa and Carol end up being on the same team? I am stuck on a word problem about picking teams. I thought it would be very simple, but to my surprise, I could not solve it. So here's the problem..
Andrea, Melissa, and Carol are in a class of 27 girls. The teacher chooses
students at random to make up teams of three. What is the probability that Andrea, Melissa and Carol end up being on the same team?
|
Consider that there are $27$ slots available.
$\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;\Large\boxed{.}\Large\boxed{.}\Large\boxed{.}\;\;$
Andrea can occupy any slot.
To be in the same group, Melissa can choose any $2$ of the $26$ remaining,
and Carol is left with only $1$ choice out of the $25$ left.
$Pr = \dfrac2{26}\dfrac1{25}$
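The value $\frac{2}{26}\cdot\frac{1}{25}=\frac{1}{325}\approx 0.00308$ is easy to confirm by simulation (a sketch I've added in Python): shuffle the 27 girls into nine consecutive blocks of three and count how often the three friends share a block.

```python
import random

trials = 300_000
together = 0
for _ in range(trials):
    order = list(range(27))   # girls 0..26; say girls 0, 1, 2 are the friends
    random.shuffle(order)
    team = {girl: slot // 3 for slot, girl in enumerate(order)}
    if team[0] == team[1] == team[2]:
        together += 1

print(together / trials, 1 / 325)  # both about 0.00308
```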
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Prove that the "theta-space" is not homeomorphic to $S^{1}$. Let $X$ be the "$\theta$-space":
\begin{equation*}
X = \{ (x,y) \in \mathbb{R}^{2} \colon x^{2} + y^{2} = 1 \} \cup \{ (x,0) \colon -1\leq x \leq 1 \}.
\end{equation*}
Prove that $X$ is not homeomorphic to $S^{1}$.
My initial thought was to show that $X$ is not compact. I was thinking that the collection of open balls with radius $r>0$, $\{ B(0,r) \}$, is an open cover with no finite subcover. I am not very confident with my level of understanding so can someone tell me if I am on the right track or not.
|
You can also give an alternative proof using $\pi_1$. Since the theta-space is homotopy equivalent to the figure eight space, i.e. $\mathbb{S}^1 \vee \mathbb{S}^1$, it has fundamental group $\mathbb{Z} * \mathbb{Z}$. However, the fundamental group of $\mathbb{S}^1$ is $\mathbb{Z}$. Since homeomorphic spaces have isomorphic fundamental groups, your result follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Proving a nash equilibria does not exist At a certain warehouse, the price of tobacco per pound in dollars, $p$, is related
to the supply of tobacco in pounds, $q$, by the formula
$p=10−(q/100000)$
Thus the more tobacco farmers bring to the warehouse, the lower the price. However,
a price support program ensures that the price never falls below .25 per pound. In
other words, if the supply is so high that the price would be below .25 per pound,
the price is set at $p = .25$, and a government agency purchases whatever cannot be
sold at that price.
One day three farmers are the only ones bringing their tobacco to this warehouse.
Each has harvested 600,000 pounds and can bring as much of his harvest as
he wants. Whatever is not brought must be discarded. Show
that there are no Nash equilibria in which $q_1 +q_2 +q_3 < 975000$, exactly one $q_i$ equals 600,000, and the other $q_i$'s are strictly between 0 and 600,000.
I am aware of what a Nash equilibrium is and how to find one. What I am unsure of is how to show a Nash equilibrium doesn't exist under a certain set of conditions.
|
Without loss of generality, let $q_1'=600000$ and let $q_2'+q_3'=Q<375000$, $q_2',q_3'\in(0,600000)$.
To show that a NE does not exist, you need to demonstrate that at least one player $i$ can earn more by not playing $q_i'$. In other words, you'd show that $q_1',q_2',q_3'$ are not mutual best responses.
Obviously one such candidate is either $2$ or $3$. By increasing his quantity (which he can, because $q_2'$ is necessarily smaller than $600000$), say $q_2'+\epsilon$, Farmer $2$'s payoff is strictly higher by $.25\epsilon$. Therefore, he would not find $q_2'$ a best response to $1$ and $3$'s strategies. So no NE exists under the given conditions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove a useful formula for computing expectations Suppose $X$ is a non-negative random variable, and $h$ is a non-decreasing function on $\mathbb{R}_+$ such that $h(0)=0$ and $h$ is absolutely continuous on each bounded interval. ($h(a) = \int_0^a h'(s) ds$ for all $0\leq a <\infty$) Then,
\begin{align}
\mathbb{E}[h(X)] = \int_0^\infty h'(s) P(X>s) ds.
\end{align}
I am thinking about partial integration, but it is not really obvious how to use this. Can somebody help me?
|
Since $X\geqslant0$ a.s. and $h$ is increasing, we have
$$\mathbb E[h(X)] = \int_0^\infty (1-F_{h(X)}(y))\ \mathsf dy = \int_0^\infty (1-F_X(h^{-1}(y)))\ \mathsf dy, $$
where $$h^{-1}(y)=\min\{x: h(x)=y\}. $$
Applying the change of variables $s=h^{-1}(y)$, we have
$$\mathbb E[h(X)]=\int_0^\infty(1-F_X(s))h'(s)\ \mathsf ds=\int_0^\infty h'(s)\mathbb P(X>s)\ \mathsf ds. $$
Alternatively, from @Did's hint we see that
$$
\mathbb E[h(X)]
= \mathbb E\left[\int_0^\infty h'(s)\mathsf 1_{\{X>s\}}\ \mathsf ds \right]=\int_0^\infty h'(s)\mathbb P(X>s)\ \mathsf ds.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$|.|$ and $d'$ are not equivalent (in metric sense)? Please, I need a counterexample showing why $|.|$ and $d'$ are not metrically equivalent, where:
$$d'(x,y)=\left\{
\begin{array}{cc}
0, & x=y, \\
|x|+|y|, & x\neq y.
\end{array}\right.$$
Thank you very much .
|
Hint: This can also help, since metric equivalence implies topological equivalence:
Given any $x \in \mathbb{R}\setminus\{0\}$, let $\varepsilon =\frac{|x|}{2}$, then the epsilon ball $B_{d'}(x,\varepsilon)$ consists only of $\{x\}$. For, if $y\not= x$ and $y \in B_{d'}(x,\varepsilon)$, we must have $\varepsilon < |x|+|y| < \varepsilon$, a contradiction. For $x=0$, $B_{d'}(0,\varepsilon) = \{x \in \mathbb{R}\ |\ |x|< \varepsilon\} = (-\varepsilon,\varepsilon)$. Thus, $\{0\}$ is the only singleton that is not open in $(\mathbb{R},d')$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Limit of sum of the series What would be the sum of following ?
$$\lim_{n\to\infty} \left[\frac{1}{(n+1)^{2}} + \frac{1}{(n+2)^{2}} + \frac{1}{(n+3)^{2}} + \cdots + \frac{1}{(n+n)^{2}}\right]$$
I tried to turn it into integral :
$\displaystyle\int \frac{1}{(1+\frac{r}{n})^{2}}\frac{1}{n^{2}} $ but
I can't figure out how to deal with $\frac{1}{n^{2}}$
|
$$\lim_{n\to\infty}0\le\lim_{n\to\infty} [\frac{1}{(n+1)^{2}} + \frac{1}{(n+2)^{2}} + \frac{1}{(n+3)^{2}} + ... + \frac{1}{(n+n)^{2}}]\le\lim_{n\to\infty}\frac{n}{(n+1)^2}$$
$$\lim_{n\to\infty}0\le\lim_{n\to\infty} [\frac{1}{(n+1)^{2}} + \frac{1}{(n+2)^{2}} + \frac{1}{(n+3)^{2}} + ... + \frac{1}{(n+n)^{2}}]\le\lim_{n\to\infty}\frac{\frac1n}{(1+\frac1n)^2}$$
$$0\le\lim_{n\to\infty} [\frac{1}{(n+1)^{2}} + \frac{1}{(n+2)^{2}} + \frac{1}{(n+3)^{2}} + ... + \frac{1}{(n+n)^{2}}]\le0$$
$$\lim_{n\to\infty} [\frac{1}{(n+1)^{2}} + \frac{1}{(n+2)^{2}} + \frac{1}{(n+3)^{2}} + ... + \frac{1}{(n+n)^{2}}]=0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Show that $\int\cdots\int{p_1^{x_1}\cdots p_m^{x_m} \, dp_1\cdots dp_m}=\frac{x_{1}!\cdots x_{m}!}{\left(m-1+\sum_{i=1}^{m}{x_i}\right)!}$
How to prove that,$$\int\int\cdots\int p_1^{x_1} p_2^{x_2} \cdots p_m^{x_m} \, dp_1 \, dp_2 \cdots dp_m = \frac{x_1!\cdots x_m!}{\left(m-1+\sum_{i=1}^m x_i\right)!}$$
where $0\leq p_i\leq 1$ and $\sum\limits_{i=1}^m p_i=1$,
My attempt: If we have, $$\int_{0}^{1}\int_{0}^{1}{p_{1}^{x_{1}}p_{2}^{x_{2}}dp_{1}dp_{2}}=\int_{0}^{1}\int_{0}^{1}{p_{1}^{x_{1}}(1-p_{1})^{x_{2}}dp_{1}dp_{2}}$$
$$=\int_{0}^{1}{p_{1}^{x_{1}}(1-p_{1})^{x_{2}}dp_{1}}=\beta(x_{1}+1,x_{2}+1)=\dfrac{x_{1}!x_{2}!}{(x_{1}+x_{2}+2)!}$$
Since, $p_{1}+p_{2}=1$ and $\beta(x,y)=\dfrac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$. Now, for the next case
$$\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}{p_{1}^{x_{1}}p_{2}^{x_{2}}p_{3}^{x_{3}}dp_{1}dp_{2}dp_{3}}=\int_{0}^{1}\int_{0}^{1}{p_{1}^{x_{1}}p_{2}^{x_{2}}(1-(p_{1}+p_{2}))^{x_{3}}dp_{1}dp_{2}}$$
I tried the change of variable $p_{1}+p_{2}=t$, but it doesn't work. Any help to prove the general statement would be appreciated... Regards!
|
I know this question has existed for a long time. It happens that I also came across this formula learning the Bose-Einstein distribution and was confused for a moment. However, it turns out that the step-by-step calculation isn't that daunting in a lower-dimensional case, and indeed the whole formula can be derived by induction.
Let us embark on calculating $\int_{0}^{1}\int_{0}^{1} p_1^{x_1}p_2^{x_2}(1-p_1-p_2)^{x_3} dp_1dp_2 $.
Starting with a change of variable $p_1+p_2=t$, we have $\int_{0}^{1}\int_{0}^{t} p_1^{x_1}(t-p_1)^{x_2}(1-t)^{x_3} dp_1dt$
= $\int_{0}^{1}(1-t)^{x_3} (\int_{0}^{t} p_1^{x_1}(t-p_1)^{x_2}dp_1)dt$ (1.1)
Now, $\int_{0}^{t} p_1^{x_1}(t-p_1)^{x_2}dp_1$ do look like the $\int_{0}^{1} p_1^{x_1}(1-p_1)^{x_2}dp_1$, and if you remember the last formula, you know that
$\int_{0}^{1} p_1^{x_1}(1-p_1)^{x_2}dp_1 = \frac{x_1!x_2!}{(x_1+x_2+1)!}$ either by the Beta function or recursively using integration by parts.
Note that we have used the condition that $x_1+x_2 = n$.
You have very good reason to guess $\int_{0}^{t} p_1^{x_1}(t-p_1)^{x_2}dp_1$ can be calculated in an identical manner. Consider the base case, where we have
$\int_{0}^{t} (t-p_1)^{m-x_3}dp_1 = \int_{0}^{t} (t-p_1)^{x_1+x_2}dp_1 = \frac{1}{x_1+x_2+1}t^{x_1+x_2+1}$
Now $\int_{0}^{t} p_1^{1}(t-p_1)^{x_1+x_2-1}dp_1 = -\frac{1}{x_1+x_2}\int_{0}^{t} p_1d(t-p_1)^{x_1+x_2} = -\frac{1}{x_1+x_2}[p_1(t-p_1)^{x_1+x_2}|_{0}^{t}-\int_{0}^{t} (t-p_1)^{x_1+x_2}dp_1]$
= $\frac{1}{(x_1+x_2)(x_1+x_2+1)}t^{x_1+x_2+1}$, you can continue to do for another step so that the pattern is clear, but here we have
$\int_{0}^{t} p_1^{x_1}(t-p_1)^{x_2}dp_1 = \frac{x_1!x_2!}{(x_1+x_2+1)!}t^{x_1+x_2+1}$, plug this back to (1.1), and you will find the desired result.
Using this idea, you can prove your statement by induction in the general case.
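For concrete exponents the three-variable case can be confirmed symbolically (a sketch I've added, assuming sympy): integrating $p_1^{x_1}p_2^{x_2}(1-p_1-p_2)^{x_3}$ over the simplex should give $\frac{x_1!\,x_2!\,x_3!}{(x_1+x_2+x_3+2)!}$.

```python
import sympy as sp
from math import factorial

p1, p2 = sp.symbols('p1 p2', nonnegative=True)

def simplex_integral(x1, x2, x3):
    # Integrate over {p1, p2 >= 0, p1 + p2 <= 1}.
    inner = sp.integrate(p1**x1 * p2**x2 * (1 - p1 - p2)**x3, (p2, 0, 1 - p1))
    return sp.integrate(inner, (p1, 0, 1))

for (x1, x2, x3) in [(1, 2, 3), (2, 2, 1), (0, 4, 2)]:
    lhs = simplex_integral(x1, x2, x3)
    rhs = sp.Rational(factorial(x1) * factorial(x2) * factorial(x3),
                      factorial(x1 + x2 + x3 + 2))
    assert sp.simplify(lhs - rhs) == 0
    print((x1, x2, x3), lhs)
```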
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Reflection of a Light Ray I found this problem to be very hard while studying for the exam:
Let
$$L: \vec r(t)=<1,-2,3>+t<-5,4,1>, \qquad t \in \mathbb{R}$$
be a line. Light is traveling along the line $L$ in the direction of increasing $t$ value. Let $\Omega: x=0$ (the $yz$ plane) be a mirror. Find the vector equation of the line $\hat L$ that is the reflection of line $L$ by $\Omega$
Any help would be helpful!
My solution
$$L: \vec r(t)= 1/5<0,-6,16>+t<5,4,1>, \qquad t \in \mathbb{R}$$
|
Perhaps I’ve misunderstood something in the problem statement, but this seems rather straightforward. The problem talks about lines, not rays, so the point at which the given line intersects the mirror isn’t directly relevant.
Forget about the line for the moment. What is the image of an arbitrary vector $\langle x,y,z \rangle$ when reflected in the $yz$-plane? It is simply $\langle -x,y,z \rangle$. To find the equation of the line’s reflection, apply this same transformation to the given equation, resulting in the line $$\langle -1,-2,3\rangle + t\langle 5,4,1\rangle.$$
Now, if indeed you’re really supposed to find the incident and reflected rays, then you do need to reparametrize by finding where the line intersects the reflective plane (which is also the point of intersection of the two lines), but it seems to me that you then also have to specify that $t\ge 0$ so that you get a ray rather than a line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Uniform convergence of $\frac{n^2\sin(x)}{1+n^2x}$ My goal is to show that
$$\frac{n^2\sin(x)}{1+n^2x}$$
does not converge uniformly on $S=(0,\infty)$ but does so on any compact subset of $S$. First, we find the limit function
$$\frac{n^2\sin(x)}{1+n^2x} \to \frac{\sin(x)}{x} \qquad \text{ as } n \to \infty$$
Now, to show that this doesn't uniformly converge on the whole interval I am really only interested in the boundary points (since any compact subset supposedly makes it uniformly convergent). As $x \to 0$ our function goes to $1$ and as $x \to \infty$ our function goes to $0$. I am unsure of how to calculate
$$\left|\left|\frac{n^2\sin(x)}{1+n^2x}-\frac{\sin(x)}{x} \right|\right|_S$$
and, moreover, show that it is $0$. From there, on any compact subset of $S$ I am guessing that
$$\left|\left|\frac{n^2\sin(x)}{1+n^2x}-\frac{\sin(x)}{x} \right|\right|=0$$
Will be zero since if we consider $[a,b] \subset S$ we have
$$\left|\left|\frac{n^2\sin(x)}{1+n^2x}-\frac{\sin(x)}{x} \right|\right| \leq \left|\frac{n^2\sin(a)}{1+n^2a}\right|+\left|\frac{\sin(a)}{a}\right|=0$$
Thanks for your help!
|
For $0 < a \leqslant x$ we have
$$\left|\frac{n^2\sin(x)}{1+n^2x}-\frac{\sin(x)}{x} \right| = \left|\frac{\sin (x)}{n^2x^2 + x}\right| \leqslant \frac{1}{n^2a^2 + a} \to 0
$$
and convergence is uniform on $[a, \infty)$.
For $x > 0$
we have
$$\left|\frac{n^2\sin(x)}{1+n^2x}-\frac{\sin(x)}{x} \right| = \left|\frac{\sin (x) /x}{n^2x + 1}\right|
$$
Choose a sequence $x_n = 1/n^2$ to show that the convergence is not uniform on $(0,\infty)$.
Here we have $\sin (x_n) /x_n \geqslant \sin 1$ and
$$\left|\frac{n^2\sin(x_n)}{1+n^2x_n}-\frac{\sin(x_n)}{x_n} \right| \geqslant \frac{\sin 1}{2} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Prove $\gcd(a^3,b^3)$ = $\gcd(a,b)^3$ $\gcd(a^3,b^3)$ = $\gcd(a,b)^3$
Let there be integers $s,t,x,y$
$a^3s + b^3t = (ax + by)^3 $
Should I start like from the above?
|
Write $a=\prod_{1 \le k \le n} p_k^{r_k}$ and $b=\prod_{1 \le k \le n} p_k^{s_k}$ as products over a common set of primes. Then
$\displaystyle ~ \gcd(a^3, b^3) = \prod_{1 \le k \le n} p_k^{\min(3r_k, 3s_k)} = \prod_{1 \le k \le n} p_k^{3\min(r_k, s_k)} = \bigg(\prod_{1 \le k \le n} p_k^{\min(r_k, s_k)} \bigg)^3 = (\gcd(a, b))^3.$
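A quick brute-force confirmation I've added (plain Python):

```python
from math import gcd

# Check gcd(a^3, b^3) == gcd(a, b)^3 over a small range.
assert all(gcd(a**3, b**3) == gcd(a, b)**3
           for a in range(1, 200) for b in range(1, 200))
print("gcd(a^3, b^3) == gcd(a, b)^3 for 1 <= a, b < 200")
```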
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1678959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the proper way to set up this integral to find the area bounded by the curves?
Sketch the region enclosed by the given curves. Decide whether to integrate with respect to x or y. Draw a typical approximating rectangle. Then find the area.
$$y = 2x^2, y = 8x^2, 3x + y = 5, x ≥ 0$$
Here is my drawing: http://www.webassign.net/waplots/d/a/986f170c68ee080b539049f410cbba.gif
If I'm integrating with respect to x, I know I'm going to need more than one integral, but I don't know how to set it up. Any help is appreciated
|
You can use a double integral;
the intersection points:
between $y=8x^2$ and $y=5-3x$ are at $x=-1$ and $x=5/8$
between $y=2x^2$ and $y=5-3x$ are at $x=-5/2$ and $x=1$
let's assume that we want $\text{d$y$d$x$}$
for $I_1$: when $y$ goes from $y=2x^2$ to $y=8x^2$ then $x$ goes from $0$ to $5/8$
for $I_2$: when $y$ goes from $y=2x^2$ to $y=5-3x$ then $x$ goes from $5/8$ to $1$
$$\underbrace{\int_{x=0}^{x=5/8}\int_{y=2x^2}^{y=8x^2}1\text{d$y$d$x$}}_{I_1}+\underbrace{\int_{x=5/8}^{x=1}\int_{y=2x^2}^{y=5-3x}1\text{d$y$d$x$}}_{I_2}$$
$$=\int_{0}^{5/8} \left(8x^2 - 2x^2\right)\text{d}x + \int_{5/8}^{1} \left(-3x + 5 - 2x^2\right)\text{d}x = \frac{125}{256}+\frac{117}{256}$$
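The two pieces can be double-checked symbolically (a sketch I've added, assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')

I1 = sp.integrate(8*x**2 - 2*x**2, (x, 0, sp.Rational(5, 8)))
I2 = sp.integrate((5 - 3*x) - 2*x**2, (x, sp.Rational(5, 8), 1))
print(I1, I2, I1 + I2)  # 125/256, 117/256, 121/128
```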
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why is it important to find both solutions to a second order linear differential equation? Given the equation $$y'' + y=0$$
A solution is $y=\sin(t)$
Why can't we stop there since we know a way to solve the system? Why should we consider all of the ways to solve the system?
I would really like to see a real world example when having a single solution is inadequate. I know this is asking a lot, but I often find mathematics only becomes easier to understand once I need to use it to solve something and it becomes relate-able to real things.
|
Consider a point mass $m= 1 \ \mathrm{kg}$ attached to one end of a spring with spring constant $k = 1 \ \mathrm{N/m}$. Suppose that the spring is suspended vertically from an immovable support. The oscillations of the mass around its equilibrium position can then be described by the equation
$$y''+y = 0$$
where $'$ denotes time derivative, $y$ the displacement and where we measure distance in $\mathrm{m}$ and time in $\mathrm{s}$.
As you have said, $y = \sin t$ is a solution to this equation. For this solution, we have $y(0) = 0$ and $y'(0) = 1$. This means that the mass starts from rest with unit speed. Furthermore, the maximum displacement of the mass is $1$.
But what if the mass doesn't start from rest and what if it doesn't have unit speed? What if we release the mass $1.5 \ \mathrm{m}$ from its equilibrium at $t=0$ and with zero initial speed? Then your solution would be $y = 1.5 \cos t$. This is physically different from $y = \sin t$.
In order to be able to account for different initial conditions, you need the general solution, which can be written in many ways, one of which is $y = A\sin (t) + B\cos (t)$. We could also write this as $y = C\sin(t+ \phi)$, as $C\sin(t+\phi) = C\cos(\phi)\sin(t) + C\sin(\phi)\cos(t)$. However, this is still very different from simply $y = \sin t$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
How canonical is Gauss's law of composition of forms Gauss defined the composition of binary quadratic forms $f$ and $g$ to be another binary quadratic form $F$ such that there exist integral quadratic forms
$$ \begin{align} r(x_0,x_1,y_0,y_1) &= p_0 x_0y_0 + p_1 x_0y_1 + p_2 x_1y_0 + p_3 x_1y_1\\
s(x_0,x_1,y_0,y_1) &= q_0 x_0y_0 + q_1 x_0y_1 + q_2 x_1y_0 + q_3 x_1y_1 \end{align}$$
for which
$$ F(r(x_0,x_1,y_0,y_1),s(x_0,x_1,y_0,y_1)) = f(x_0,y_0)g(x_1,y_1) $$
(and satisfying one extra technical condition).
In Section 236 of Gauss's Disquisitiones Arithmeticae, he shows how to find a form which is composed of two other forms. After laying out the notation, he begins by saying that you can take ad libitum four integers satisfying a pretty minor condition (minor enough that there are usually infinitely many ways to select these integers). He then constructs a composition of two forms, which depends on how you choose the four integers.
In Section 242, he moves to considering the composition of two binary quadratic forms of the same discriminant. At that point, he says that we can take the four integers to be $-1$, $0$, $0$, and $0$ and does so seemingly for the remainder of the work. With this choice, we get the composition operation on forms (and not just on ideal classes) that is common in other works. Namely, the first coefficient of the composition is a divisor of the product of the first coefficients of the forms to be composed, and the middle coefficient of the composition is obtained by solving a certain system of congruences. (A special case of this is Dirichlet's definition of composition.)
By choosing the four integers differently, you can produce other forms that are compositions of two given forms. For instance, the following identities are readily checked:
$$ (5x_0^2 + 7x_0y_0 + y_0^2)(5x_1^2 + 7x_1y_1 + y_1^2) =
\begin{cases}
5r_0^2 + 13r_0s_0 + 7s_0^2, \\
7r_1^2 + 15r_1s_1 + 7s_1^2, \end{cases}$$
with
$$ \begin{align} r_0&=x_0y_0-2x_0y_1-2x_1y_0-3x_1y_1, & s_0 &=x_0y_0+3x_0y_1+3x_1y_0+4x_1y_1\\
r_1&=3x_0y_0 - x_0y_1 - x_1y_0 - 2x_1y_1, & s_1&=-x_0y_0+2x_0y_1+2x_1y_0+3x_1y_1\end{align}$$
Thus, we see both $5x^2 + 13xy + 7y^2$ and $7x^2 + 13xy + 7y^2$ exhibited as compositions of the form $5x^2 + 7xy + y^2$ with itself.
When Gauss turns to defining the class group for a given discriminant beginning in Section 242, he seems to make a rather arbitrary choice of which form to use as the composition of two given forms of the same discriminant. My question is: is there anything canonical about Gauss's definition (other than -1, 0, 0, and 0 being nice, simple numbers)? His operation turns the classes of primitive forms under the action of the matrices of the form $\begin{bmatrix} 1 & c\\ 0 & 1 \end{bmatrix}$ into a group. But could the compositions have been systematically chosen in a different way to produce a group?
Or if it is not canonical, are there several functions from the ideals in a quadratic field to the corresponding binary quadratic forms through which multiplication of ideals translates into different operations on forms?
|
The composition of binary quadratic forms that has always been used is a version developed by Dirichlet. This can be found on page 49 in the first edition of Cox, with a correction in the second edition.
It is quite recent that Bhargava showed that there were actually 14 distinct ways to, say, implement the conditions described by Gauss. Bhargava got a Fields Medal for the job.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Exist unique $g_0 \in H^1(0, 1)$ such that $f(0) = \int_0^1 (f'g_0' + fg_0) \text{ for all }f \in H^1(0, 1)$? The mapping $f \mapsto f(0)$ from $H^1(0, 1)$ into $\mathbb{R}$ is a continuous linear functional on $H^1(0, 1)$. Does there exist a unique $g_0 \in H^1(0, 1)$ such that$$f(0) = \int_0^1 (f'g_0' + fg_0) \text{ for all }f \in H^1(0, 1)?$$
|
Hint:
$H^1(0,1)$ is a Hilbert space. If you can show that
$$L : H^1(0,1) \to \mathbb R, \ \ \ L(f) = f(0)$$
is a bounded linear functional, the Riesz representation theorem tells you that
$$L(f) = \langle f, g_0\rangle\ \ \ \forall f\in H^1(0,1),$$
which is exactly what you want.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $\sup{(A+B)} = \sup{A}+\sup{B}$
Let $A,B$ be subsets of $\mathbb{R}$. Prove that $\sup{(A+B)} = \sup{A}+\sup{B}$.
I think in order to solve this we are going to have to use the mathematical definition of supremum. Maybe we can break this up into $4$ cases: $A$ is finite, $B$ infinite; etc. This would allow us to relate supremum to the maximal value of a set and make it easier to work with.
|
For every $a\in A$ and $b\in B$ we have $a \leq \sup A$ and $b \leq \sup B$, so $a+b \leq \sup A + \sup B$; hence $\sup(A+B) \leq \sup A + \sup B$.
Also, $a + b \leq \sup (A+B)$, so $a \leq \sup(A+B) - b$; taking the supremum over $a\in A$ gives $\sup A \leq \sup (A+B) - b$, and then taking the supremum over $b\in B$ gives $\sup B \leq \sup (A+B) - \sup A$. Done
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Correct set notation for "all integers which are not multiples of 7"? What is correct set notation for "all integers which are not multiples of $7$"? My best guess is:
$$ \{ x : (\forall k \in \mathbb{Z})(\neg(7k = x)) \}$$
Or
$$ \{ x : \neg(\exists k \in \mathbb{Z})(7k = x) \}$$
However this seems unlike other examples I have seen.
Is there are proper way to denote this set in set notation?
|
The most accurate translation of the English condition would be
$$
\left\{n\in\mathbb{Z}:\frac n7\not\in\mathbb{Z}\right\}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 9,
"answer_id": 5
}
|
Why is one relation transitive but the other is not? From what I have read, a relation is transitive if whenever xRy and yRz are both true then xRz has to be true.
I'm doing some practice problems and I'm a little confused with identifying a transitive relation.
My first example is a "equivalence relation"
$S=\{1,2,3\}$ and $R = \{(1,1),(1,3),(2,2),(2,3),(3,1),(3,2),(3,3)\}$
My Book solutions say that this relations is
Reflexive and Symmetric
My Second example is "partial order"
$S=\{1,2,3\}$ and $R =\{(1,1),(2,3),(1,3)\}$
My Book solutions says is
Antisymmetric and Transitive
I got confused about why the partial order (second example) is transitive. So what I did is apply $1R1$ and $1R3$, so $1R3$ ($xRy$ and $yRz$, so $xRz$).
I tried to apply this to my first example (the equivalence relation). What I did is $1R1$ and $1R3$, so $1R3$ ($xRy$ and $yRz$, so $xRz$).
Can someone explain what I'm missing or doing wrong? What can I do to identify a transitive relation? As you can see, both practice examples contain the same chain $1R1$ and $1R3$, so $1R3$ ($xRy$ and $yRz$, so $xRz$), but one is transitive and the other is not.
|
A relation $R$ of the set $S$ is transitive if:
$$\forall a{\in} S~\forall b{\in}S~\forall c{\in}S: \Big(\big((a,b){\in}R\wedge(b,c){\in}R\big)\to (a,c){\in}R\Big)$$
That definition is equivalent to:
$$\neg \exists a{\in}S~\exists b{\in}S~\exists c{\in}S: \Big(\big((a,b){\in}R\wedge(b,c){\in}R\big)\wedge (a,c){\notin}R\Big)$$
Thus what you are looking for are counter examples. Look for any $(a,b), (b, c)$ in the relation without a matching $(a,c)$ in the relation. Just one shows the relation is not transitive, but you have to be sure there are none to claim transitivity.
My first example is a "equivalence relation" $S=\{1,2,3\}$ and $R = \{(1,1),(1,3),(2,2),(2,3),(3,1),(3,2),(3,3)\}$ My Book solutions say that this relations is Reflexive and Symmetric
This is not transitive because while $(1,3)$ and $(3,2)$ are in the relation, $(1,2)$ is not. One counter example is all we need.
PS: Because this is not transitive, it is not an equivalence relation. Equivalence relations are those which are reflexive, symmetric, and transitive.
My Second example is "partial order" $S=\{1,2,3\}$ and $R =\{(1,1),(2,3),(1,3)\}$ My Book solutions says is Antisymmetric and Transitive
This is transitive because there is only one pair of $(a,b),(b,c)$ elements from which a counter example could be formed, $(\color{red}1,\color{blue}1),(\color{blue}1,\color{red}3)$, but $(\color{red}1,\color{red}3)$ is indeed in the relation; so there is no counter example.
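This counterexample search is mechanical, so here is a short script I've added (plain Python) that applies it to both relations:

```python
def transitivity_counterexample(R):
    """Return a pair ((a,b),(b,c)) witnessing non-transitivity, or None."""
    for (a, b) in R:
        for (b2, c) in R:
            if b == b2 and (a, c) not in R:
                return ((a, b), (b2, c))
    return None

R1 = {(1, 1), (1, 3), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)}
R2 = {(1, 1), (2, 3), (1, 3)}

print(transitivity_counterexample(R1))  # e.g. ((1, 3), (3, 2)): not transitive
print(transitivity_counterexample(R2))  # None: transitive
```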
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Solving an IVP with Laplace Transforms I'm trying to solve the following IVP (differential equations) with the Laplace Transform method:
\begin{cases}
y''+9y=36t\sin(3t)\\
y(0) = 0\\
y'(0) = 3
\end{cases}
After taking the Laplace Transform of both sides, I obtain
$$s^2Y(s) - 3 + 9sY(s) = \frac{216}{(s^2+9)^2}$$
where $Y(s) = \mathcal{L}[y(t)]$.
Solving for $Y(s)$, I get the following equation:
$$Y(s) = \frac{216}{(s^2+9)^2(s^2+9s)} + \frac{3}{s^2+9s}$$
When I try to take the inverse Laplace transform of both sides to solve for $y(t)$, I can't figure out what to do with the first term on the RHS (second term is a simple partial fraction decomposition). The algorithm for partial fractions with linear terms is problematic because the numerator of the first term has a $\frac{1}{0}$ issue. Without partial fractions, the only way I can think to do this one is through Reduction of Order, but my book only gives examples of the case where we only have $(s^2+b^2)^{k+1}$ in the denominator. The thing that's messing me up so much is the $s^2+9s$ term. Any help or insight in how to solve this problem from here would be much appreciated.
EDIT: After probably 2 hours of mulling over this problem, I realized that I had forgotten the $s$ after 216 because of the chain rule when taking the derivative of transforms. That makes this so much easier. I think I can do it.
|
Notice:
*
*$$\mathcal{L}_{t}\left[y'(t)\right]_{(s)}=sy(s)-y(0)$$
*$$\mathcal{L}_{t}\left[y''(t)\right]_{(s)}=s^2y(s)-sy(0)-y'(0)$$
*$$\mathcal{L}_{t}\left[t\right]_{(s)}=\frac{1}{s^2}$$
*$$\mathcal{L}_{t}\left[\sin(t)\right]_{(s)}=\frac{1}{1+s^2}$$
*$$\mathcal{L}_{t}\left[\sin(at)\right]_{(s)}=\frac{a}{a^2+s^2}$$
*$$\mathcal{L}_{t}\left[t\sin(at)\right]_{(s)}=\frac{2as}{\left(a^2+s^2\right)^2}$$
Solving your question:
$$y''(t)+9y(t)=36t\sin(3t)\Longleftrightarrow$$
$$\mathcal{L}_{t}\left[y''(t)+9y(t)\right]_{(s)}=\mathcal{L}_{t}\left[36t\sin(3t)\right]_{(s)}\Longleftrightarrow$$
$$\mathcal{L}_{t}\left[y''(t)\right]_{(s)}+9\mathcal{L}_{t}\left[y(t)\right]_{(s)}=36\mathcal{L}_{t}\left[t\sin(3t)\right]_{(s)}\Longleftrightarrow$$
$$s^2y(s)-sy(0)-y'(0)+9y(s)=36\cdot\frac{6s}{\left(9+s^2\right)^2}\Longleftrightarrow$$
Now, set $y(0)=0$ and $y'(0)=3$:
$$s^2y(s)-s\cdot0-3+9y(s)=\frac{216s}{\left(9+s^2\right)^2}\Longleftrightarrow$$
$$s^2y(s)-3+9y(s)=\frac{216s}{\left(9+s^2\right)^2}\Longleftrightarrow$$
$$s^2y(s)+9y(s)=\frac{216s}{\left(9+s^2\right)^2}+3\Longleftrightarrow$$
$$y(s)\left[s^2+9\right]=\frac{216s}{\left(9+s^2\right)^2}+3\Longleftrightarrow$$
$$y(s)=\frac{\frac{216s}{\left(9+s^2\right)^2}+3}{s^2+9}\Longleftrightarrow$$
$$y(s)=\frac{3\left(72s+\left(9+s^2\right)^2\right)}{\left(9+s^2\right)^3}\Longleftrightarrow$$
$$\mathcal{L}_{s}^{-1}\left[y(s)\right]_{(t)}=\mathcal{L}_{s}^{-1}\left[\frac{3\left(72s+\left(9+s^2\right)^2\right)}{\left(9+s^2\right)^3}\right]_{(t)}\Longleftrightarrow$$
$$y(t)=3\mathcal{L}_{s}^{-1}\left[\frac{72s+\left(9+s^2\right)^2}{\left(9+s^2\right)^3}\right]_{(t)}\Longleftrightarrow$$
$$y(t)=3\mathcal{L}_{s}^{-1}\left[\frac{72s}{\left(9+s^2\right)^3}+\frac{1}{s^2+9}\right]_{(t)}\Longleftrightarrow$$
$$y(t)=3\left[72\mathcal{L}_{s}^{-1}\left[\frac{s}{\left(9+s^2\right)^3}\right]_{(t)}+\mathcal{L}_{s}^{-1}\left[\frac{1}{s^2+9}\right]_{(t)}\right]\Longleftrightarrow$$
$$y(t)=3\left[\frac{t\left(\sin(3t)-3t\cos(3t)\right)}{3}+\frac{\sin(3t)}{3}\right]\Longleftrightarrow$$
$$y(t)=t\left(\sin(3t)-3t\cos(3t)\right)+\sin(3t)\Longleftrightarrow$$
$$y(t)=(1+t)\sin(3t)-3t^2\cos(3t)$$
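One can confirm that this expression satisfies the original IVP (a sketch I've added, assuming sympy):

```python
import sympy as sp

t = sp.symbols('t')
y = (1 + t) * sp.sin(3*t) - 3*t**2 * sp.cos(3*t)

# Check y'' + 9y = 36 t sin(3t) and the initial conditions y(0)=0, y'(0)=3.
residual = sp.simplify(sp.diff(y, t, 2) + 9*y - 36*t*sp.sin(3*t))
print(residual)                                 # 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))   # 0 3
```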
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove $\forall x:P(x) \implies \exists x:P(x)$ without using UI? In standard FOL, can we prove $\forall x: P(x) \implies \exists x:P(x)$ without introducing a new free variable by universal instantiation, i.e. without using $\forall x: P(x) \vdash P(y)$ where $y$ does not occur in $P(x)$?
I have tried direct proof, proof by contradiction, and proof by contrapositive. It looks impossible to me, but can't prove it.
|
If you use the following system from A Primer for Logic and Proof, Holly P. Hirst and Jeffry L. Hirst,
Axioms
Axiom 1: $A \implies (B \implies A)$
Axiom 2: $(A \implies (B \implies C)) \implies ((A \implies B) \implies (A \implies C))$
Axiom 3: $(\lnot B \implies \lnot A) \implies ((\lnot B \implies A) \implies B)$
Axiom 4: $(\forall x ~:~ A(x)) \implies A(t)$, provided that $t$ is free for $x$ in $A(x)$.
Axiom 5: $\forall x ~:~ (A \implies B) \implies (A \implies \forall x ~:~ B)$, provided that $x$ does not occur free
in $A$.
Rules of inference
Modus Ponens (MP): From $A$ and $A \implies B$, deduce $B$.
Generalization (GEN): From $A$, deduce $\forall x ~:~ A$.
Notice that axiom 4 is the only axiom that doesn't hold in an empty universe (substitute false for A). Also notice the theorem to prove doesn't hold in an empty universe. So this axiom must be used somewhere in the proof. So a new variable must be introduced if you use these axioms.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Is inversion ever a field automorphism? Is the function sending every element of a field to its multiplicative inverse (and 0 to 0) ever a field automorphism?
|
Proposition. Inversion is an automorphism of a field $F$ if and and only if $F$ is one of $\mathbb F_2$, $\mathbb F_3$, or $\mathbb F_4$.
Proof:
Let $F$ be a field and suppose that inversion is a field automorphism of $F$. If $x \in F \setminus \{0,1\}$, then $x$ and $1-x$ are both invertible, and hence, by our assumption,
$$
1=1^{-1}=(x+(1-x))^{-1}=x^{-1} + (1-x)^{-1}.
$$
Multiplying out, it follows that $x(1-x)=(1-x) + x=1$, and hence
$$
x^2-x+1=0.
$$
Since a quadratic polynomial over a field has at most two roots, this means that there are at most two possibilities for $x$. Hence $F$ has at most $4$ elements.
If $F=\mathbb F_2$ or $F=\mathbb F_3$, then inversion is the identity map, and hence an automorphism. If $F=\mathbb F_4$, there is only one non-trivial field automorphism $\varphi$, namely the one which satisfies $\varphi(x)=x^2$. We know that the multiplicative group $\mathbb F_4^\times$ is cyclic of order $3$. The field automorphism $\varphi$, when restricted to $\mathbb F_4^\times$, induces an automorphism of the multiplicative group $\mathbb F_4^\times$, which must also be non-trivial. However, the only non-trivial automorphism of a cyclic group of order $3$ is inversion. Hence $\varphi$ is the inversion automorphism. (Of course, one can also check this directly for $\mathbb F_4$, if one is more comfortable doing the explicit calculation.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1679965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Integrating $\int\frac{x^2-1}{(x^2+1)\sqrt{x^4+1}}\,dx$ I came across a question today...
Find $$\displaystyle\int\dfrac{x^2-1}{(x^2+1)\sqrt{x^4+1}}\,dx$$
How to do this? I tried to take $x^4+1=u^2$ but no result. Then I tried to take $x^2+1=\frac{1}{u}$, but even that didn't work. Then I manipulated it to $\int \dfrac{1}{\sqrt{1+x^4}}\,dx-\int\dfrac{2}{(x^2+1)\sqrt{1+x^4}}\,dx$, but I have no idea how to solve it.
Wolframalpha gives some imaginary result...but the answer is $\dfrac{1}{\sqrt2}\arccos\dfrac{x\sqrt2}{x^2+1}+C$
|
Hint Divide the numerator and denominator by $x^2$, to get:
$$\int \frac{(1-\frac{1}{x^2})dx}{(x+\frac{1}{x})(\sqrt {(x+\frac{1}{x})^2-2} )}$$
Then put $x+\frac{1}{x}=t$
$$\int \frac{dt}{t(\sqrt{t^2-2})}$$
Which is easily taken care of by putting $t=\sqrt2 \sec\theta$,
$$\int \frac{(\sqrt2 \sec\theta\tan \theta)d\theta}{\sqrt2 \sec\theta \sqrt2 \tan\theta}$$
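For completeness, the last integral finishes cleanly (a quick sketch, assuming $x>0$ so that $t=x+\frac1x\ge 2$ and the substitution is valid):
$$\int \frac{\sqrt2 \sec\theta\tan \theta\,d\theta}{\sqrt2 \sec\theta\cdot \sqrt2 \tan\theta}=\frac{\theta}{\sqrt2}+C=\frac{1}{\sqrt2}\arccos\frac{\sqrt2}{t}+C=\frac{1}{\sqrt2}\arccos\frac{\sqrt2\,x}{x^2+1}+C,$$
which matches the answer quoted in the question.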
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Solving two specific limits without L'Hôpital's Rule Greetings to any and all that might read this. I am a 12-th grade student from Portugal and, here, it is not allowed for us to solve limits through L'Hôpital's Rule at this level (in fact, it isn't even taught, though we learn derivatives as well...), which places me (us) in a somewhat difficult position regarding the evaluation of some limits. Specifically, I've come across two particular limits that, somehow, have proven quite elusive to solve without recurring to L'Hôpital's Rule. Namely,
$$\lim _{x\to -2}\left(\frac{e^{x+2}-1}{\ln\left(7x+15\right)}\right)$$
and
$$\lim _{x\to 0}\left(\frac{\ln\left(x+1\right)-x}{x^2\left(1-x\right)} \right)$$
Through L'Hôpital's Rule, I've been able to determine that the first limit should yield $\frac{1}{7}$ and the second $-\frac{1}{2}$, but I haven't been able to reproduce these results without resorting to L'Hôpital's Rule, which is the way they demand the exercise to be solved. I've tried several variable substitutions, but to no avail; I might be missing something quite obvious, which is a mistake I often make, but I seem not to be able to solve them. In that sense, I was expecting that the Math Stack Exchange community could help me with this problem.
I thank you in advance for your answers and I apologize for any inconvenience my question might represent.
|
The first one is elementary if we set $f(x) = e^{x+2}, g(x) = \ln (7x+15).$ The expression then equals
$$\frac{(f(x)-f(-2))/(x-(-2))}{(g(x)-g(-2))/(x-(-2))}.$$
By definition of the derivative, the above $\to f'(-2)/g'(-2),$ which is easy to compute.
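Explicitly, $f'(x)=e^{x+2}$ and $g'(x)=\frac{7}{7x+15}$, so $f'(-2)=e^{0}=1$, $g'(-2)=\frac{7}{1}=7$, and the limit is $\frac{1}{7}$. (For the second limit one can use the standard expansion $\ln(1+x)=x-\frac{x^2}{2}+o(x^2)$, which gives $-\frac{1}{2}$.)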
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How do I show that if $f$ is entire and $\{\lvert f(z)\rvert < M\}$ is connected for all $M$, then $f$ is a power function?
Let $f$ be a non constant entire function satisfying the following conditions :
*
*$f(0)=0$
*for every positive real $M$, the set $\{z: \left|f(z)\right|<M\}$ is connected.
Prove that $f(z)=cz^n$ for some constant $c$ and positive integer $n$.
Let $f(z)=a_nz^n+\cdots+a_1z+a_0$ be a function that satisfies the given conditions. As $f(0)=0$ we have $a_0=0$ and $f(z)=a_nz^n+\cdots+a_1z$.
As $f$ is a non-constant function, its zeros are isolated. So there exists an $r>0$ such that $f$ is non-zero on $B_r\setminus\{0\}$, where $B_r=\{z:|z|<r\}$. I was thinking of connecting this to the connectedness of $\{z: \left|f(z)\right|<M\}$.
I wanted to check what goes wrong in case of $f(z)=z^2+z$. I want to check if the given set is connected for this but failed in doing so.
|
Let $D_M=\{z:|f(z)|<M\}$, an open connected set; since $f$ is non-constant, $D_M$ is not the whole plane for any $M$; given any Jordan curve $J \subset D_M$, the interior of $J$ is contained in $D_M$ by the maximum modulus principle, hence $D_M$ is simply connected.
Now let $B_{2r}$ be a small disc of radius $2r$ around $0$ in which $f$ vanishes only at $z=0$, and set $2a=\min_{|z|=r}|f(z)| >0$; since then $\partial B_r \cap D_a = \emptyset$, it follows that $D_a \subset B_r$ (otherwise, if there were $w$ with $|w|>r$ and $|f(w)| <a$, there would be a path in the connected set $D_a$ joining $w$ to $0 \in B_r$, and that path would have to intersect $\partial B_r$). So in particular $D_a$ is bounded and $f$ vanishes only at zero.
It follows that $f$ is a polynomial, since $\infty$ is not an essential singularity ($|f(z)| > a$ for $|z|>r$), and since $f$ vanishes only at $0$ we are done!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
Finding the value of $\sum_{k=1}^{\infty} \frac{x^{4k-3}}{4k-3}$
This is my power series: $$\sum_{k=1}^{\infty} \frac{x^{4k-3}}{4k-3}$$
I need to find the sum of it. Unfortunately, I have kind of no idea how to do it. I think I need to substitute with something. One of the series that I do know of is $\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}$ which looks similar, $(4k-3)$ instead of $k$ but I really don't know how to substitute them.
Maybe I can do this, which will help, but I'm not sure?
$$\sum_{k=1}^{\infty} \frac{x^{4k-3}}{4k-3} = \sum_{k=1}^{\infty} \int_0^x \frac{x^{4k-4}}{4k-4} dx$$
Edit: I can see how I made a mistake with the integral.
At least with that I could have something like ${(x^4)}^{k-1}$? Any ideas?
|
$$ \frac{x^{4k-3}}{4k-3} = \int_{0}^{x} z^{4k-4}\,dz $$
hence:
$$ \sum_{k\geq 1}\frac{x^{4k-3}}{4k-3} = \int_{0}^{x}\sum_{k\geq 1} z^{4k-4}\,dz = \int_{0}^{x}\frac{dz}{1-z^4} $$
but:
$$ \frac{1}{1-z^4} = \frac{1}{2}\left(\frac{1}{1+z^2}+\frac{1}{1-z^2}\right) $$
so:
$$ \sum_{k\geq 1}\frac{x^{4k-3}}{4k-3} = \frac{1}{2}\,\arctan(x)+\frac{1}{4}\,\log\left(\frac{1+x}{1-x}\right)$$
as soon as $|x|<1$.
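If you want a quick numerical sanity check of the closed form, here is a rough sketch in Python (standard library only; the sample point $x=0.5$ and the truncation at $k=200$ are arbitrary choices):

import math

x = 0.5
series = sum(x**(4*k - 3) / (4*k - 3) for k in range(1, 201))   # truncated partial sum
closed = 0.5 * math.atan(x) + 0.25 * math.log((1 + x) / (1 - x))
print(series, closed)   # both should agree to many decimal places (about 0.5065)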
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Random Variable - Probability and Statistic Suppose that $f(x)$ is a continuous and symmetric pdf, where symmetry is the property that $f(x) = f(-x)$ for all $x$. Show that $P(-a ≤ X ≤ a) = 2F(a) - 1$
Does anyone have any idea what this is even asking? I have honestly no idea what to do.
|
$$
P(-a\le X \le a) = F_X(a) - F_X(-a) = F_X(a) - (1-F_X(a)) = 2F_X(a) -1
$$
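The middle step is exactly where the symmetry hypothesis enters: substituting $u=-x$ and using $f(-u)=f(u)$,
$$
F_X(-a)=\int_{-\infty}^{-a}f(x)\,dx=\int_{a}^{\infty}f(-u)\,du=\int_{a}^{\infty}f(u)\,du=1-F_X(a).
$$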
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why does the Axiom of Selection solve Russell's Paradox in Set Theory? I am a beginner in mathematics and I was reading a text on Set Theory that talked about how Zermelo's Axiom of Selection "solves" Russell's Paradox.
I understand that the the axiom does not allow constructions of the form
$$\{x \:: \text S(x) \}$$ and only allows$$\{x \in \text A \:: \text S(x) \}$$
but how does this change the outcome of the paradox when we have:
$$S = \{x \in \text A \:: \text x \notin \text x \}$$ where $S$ is still the set of all sets that do not contain themselves.
Won't we still get the paradox?
|
The way that the Axiom of Selection prevents Russell's Paradox is by preventing you from selecting from all sets. Rather, you are selecting from the elements of $A$. Since $A \in A$ is forbidden by the Axiom of Regularity, the paradox can't arise.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
}
|
Calculating the expected value and variance of $n$ independent observations of $X$ I am attempting to find the expected value and variance of the random variable $X$ analytically (in addition to a decimal answer). $X$ is the random variable meander(100)[-1], where meander is defined by:
import random

def meander(n):
    # start at 0; each step adds 3*random.random(), a uniform draw from [0, 3)
    x = [0]
    for t in range(n):
        x.append(x[-1] + 3*random.random())
    return x
For those that do not understand the Python, $X$ is essentially the sum of a sequence of $100$ values, each equal to 3*random.random(), where random.random() is uniformly distributed on $[0,1)$.
I am almost certain that I will need to apply the concepts of:
$$\text{mean}\left(\bar{X}\right)=E\left(\frac{1}{n}\left(X_1+X_2+...+X_n\right)\right)=E\left(X\right)$$
$$\text{and}$$
$$\text{var}\left(\bar{X}\right)=var\left(\frac{1}{n}\left(X_1+X_2+...+X_n\right)\right)=\frac{1}{n}\text{var}\left(X\right)$$
$$\text{where, }\bar{X}=\frac{1}{n}\left(X_1+X_2+...+X_n\right)$$
I am having difficulty understand how I should be plugging in this equation and representing it symbolically, let alone calculating it. I created a simulation in order to better understand the distribution of the data (in addition to getting an estimate of the expected value) and it seems to be a Gaussian distribution (histogram of distribution after 100,000 trials). The simulation suggests an estimated expected value of $150.038527551$.
These solutions will culminate in the usage of the Central Limit Theorem in finding an analytical expression that approximates the pdf of $X$.
Any guidance or help to point me in the right direction would be very much appreciated!
|
So, your random variable is $$
X = 3X_1+\dots+3X_{100} = \sum_{k=1}^n 3X_k
$$
with $n=100$, where $X_1,\dots, X_n$ are independent, identically distributed random variables that are uniform in $[0,1)$. In particular, $\mathbb{E}\left[ X_k \right] = \frac{1}{2}$ and $\operatorname{var} X_k = \frac{1}{12}$ for every $1\leq k\leq n$.
By linearity of expectation, you get
$$
\mathbb{E}[X] = \mathbb{E}\left[ \sum_{k=1}^n 3X_k \right]
= \sum_{k=1}^n 3\mathbb{E}\left[ X_k \right]
=\sum_{k=1}^n 3\cdot \frac{1}{2} = n\cdot \frac{3}{2} = 150.
$$
(this does not rely on the fact that the $X_k$'s are independent, only on the fact that they all have a well-defined expectation).
By properties of variance (detailed below), crucially relying on the fact that the $X_k$'s are independent, you obtain
$$
\operatorname{var}(X) = \operatorname{var}\left( \sum_{k=1}^n 3X_k \right)
= \sum_{k=1}^n \operatorname{var}(3 X_k)
= \sum_{k=1}^n 9\operatorname{var} X_k
=\sum_{k=1}^n 9\cdot \frac{1}{12} = n\cdot \frac{3}{4} = 75
$$
where we used first the fact that "the variance of the sum of (pairwise) independent random variables is the sum of their variances",* and then that $ \operatorname{var}(aY) = a^2 \operatorname{var}(Y)$ for any real number $a$.
(*) Provided the variances are well-defined, i.e. the random variables are in $L^2$.
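If you want to compare these values against a simulation like the one described in the question, here is a minimal Monte Carlo sketch (standard library only; the number of trials is arbitrary):

import random

trials = 100_000
samples = [sum(3 * random.random() for _ in range(100)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)   # sample variance
print(mean, var)   # should come out close to 150 and 75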
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Probability of sum to be divisible by 7 6 fair dice are thrown simultaneously. What is the probability that the sum of the numbers appeared on dice is divisible by 7 ?
|
One way is to add the coefficients of $x^7, x^{14}, x^{21}, x^{28}$ and $x^{35}$ in the expansion of $(x+x^2+x^3+x^4+x^5+x^6)^6$, whose coefficients are symmetric around the middle.
Another way is to use stars and bars, and apply inclusion-exclusion by preplacing $6$ in one or more of the $6$ cells, e.g. for a sum of $21$,
$\binom{20}{5} - \binom61\binom{14}{5} + \binom62\binom85 = 4332$
Once you have the number of favorable ways, you can compute $Pr$
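If you want to sanity-check the counts, a brute-force sketch over all $6^6$ outcomes is cheap (Python, standard library only):

from itertools import product

favorable = sum(sum(roll) % 7 == 0 for roll in product(range(1, 7), repeat=6))
total = 6 ** 6
print(favorable, total, favorable / total)   # favorable / 46656 is the required probability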
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1680976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Showing that a group with transposition acting transitively on ${1, \dots, n}$ is $S_n$ I am trying to prove that a group is $S_n$ where $n$ is any integer.
Right now I have gotten that it acts transitively on $\{1, \dots, n\}$, and contains a transposition.
Can I conclude from this that it is $S_n$? I think I'm missing something, but I can't find anything else about the group.
|
Like MooS said, it is not enough. I sum up some conditions that imply that a permutation group $H$ is $S_n$.
*
*If $n$ is prime, $H$ contains a transposition and acts transitively on $\{1,\dots,n\}$.
*If $n$ is an integer, $H$ contains a transposition and acts doubly transitively on $\{1,\dots,n\}$.
*If $n$ is an integer, $H$ contains a transposition and a $(n-1)$-cycle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Definition of connectedness? Connectedness is defined as: "A metric space $E$ is connected if the only subsets of $E$ which are both open and closed are $E$ and $\varnothing$. A subset $S$ of a metric space is a connected subset if the subspace $S$ is connected."
Can someone provide me with a more trivial/simple definition of connectedness?
|
A topological space is connected if and only if the only subsets of it that are simultaneously open and closed are the space itself and the empty subset.
In the case of a metric space, the topology is induced by the metric.
As a counter-example, consider, say, a pair of parallel lines or planes as a single topological space - it is not connected.
Note also that it is not the same as path-connectedness.
And note that the quality of being (dis)connected is not an intrinsic property of a (sub)set but is relative to the topological space with respect to which it is considered.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
}
|
CDF of absolute value of difference in random variables Let $X$ and $Y$ be independent random variables, uniformly distributed in the interval $[0,1]$. Find the CDF and the PDF of $|X - Y|$?
Attempt
Let $Z = |X - Y|$, so for $z \geq 0$, the CDF $F_{Z}(z) = \mathbf{P}(Z \leq z) = \mathbf{P}(|X - Y| \leq z) = \mathbf{P}(-z \leq X - Y \leq z)$, which is where the algebra becomes confusing. Since they are independent, the joint pdf of $X$ & $Y$ is simply 1, as long as $(X,Y)$ belong to the unit square.
The solution suggests plotting the event of interest as a subset of the unit square and finding its area. Any hints?
|
For fixed $z\in\mathbb{R}$ we have:$$F_Z(z)=P\left(\left|X-Y\right|\leq z\right)=\int\int1_{\left(-\infty,z\right]}\left(\left|x-y\right|\right)f_{X,Y}(x,y)dxdy$$
where $f_{X,Y}(x,y)$ denotes the density of $\langle X,Y\rangle$.
Substituting this density we arrive at:
$$F_Z(z)=P\left(\left|X-Y\right|\leq z\right)=\int_{0}^{1}\int_{0}^{1}1_{\left(-\infty,z\right]}\left(\left|x-y\right|\right)dxdy$$
Can you work this out?
If the CDF is found then the PDF can be found by differentiating.
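Spelling out the area computation: for $0\le z\le 1$, the event $\{|x-y|\le z\}$ is the unit square minus two right triangles with legs of length $1-z$, so
$$
F_Z(z)=1-(1-z)^2=2z-z^2,\qquad f_Z(z)=F_Z'(z)=2(1-z),\qquad 0\le z\le 1.
$$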
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
}
|
Prove or disprove $ab\mid ac\Longrightarrow b\mid c$ I need to prove or to give a counter-example:
$$ab\mid ac\Longrightarrow b\mid c$$
My attempt:
Yes, this is correct: dividing $ab$ and $ac$ by $a$ (assuming that $a\neq 0$) we get $$b\mid c$$ and that's it.
Is my attempt correct?
|
Yes. Your attempt is correct. Let me simplify it.
Proof:
Assuming $a$ is not 0.
$$ab|ac \implies kab = ac \implies kb = c \implies b|c$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Square root of $\sqrt{1-4\sqrt{3}i}$ How can we find square root of the complex number
$$\sqrt{1-4\sqrt{3}i}?$$
Now here if I assume square root to be $a+ib$ i.e.
$a+ib=\sqrt{\sqrt{1-4\sqrt{3}i}}$, then after squaring both sides, how to compare real and imaginary part?
Edit: I observed
$\sqrt{1-4\sqrt{3}i}=\sqrt{4-3-4\sqrt{3}i}=\sqrt{2^2+3i^2-4\sqrt{3}i}=\sqrt{(2-\sqrt{3}i)^2}$ which made calculation easier.
|
You need to first find the square root of $1 - 4 \sqrt{3}i$ using the same method: let $(c+di)^2 = 1 - 4 \sqrt{3} i$ and then compare real and imaginary parts to find $c$ and $d$ explicitly. This will give you two different answers. Then assume $(a+bi)^2 = c + di$, and compare real and imaginary parts to find $a$ and $b$ explicitly. This gives you two solutions for both solutions of $(c+di)^2 = 1 - 4 \sqrt{3} i$, so in total, you get four different answers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Elementary question about the limit $\big( 1 - \frac 1 {\sqrt n}\big )^n$, $n\to\infty$. When calculating the limit $L=\big( 1 - \frac 1 {\sqrt n}\big)^n$, $n\to\infty$, what allows me to do the following:
$$
L=\lim \left(\left(1-\frac {1}{\sqrt n}\right)^\sqrt{n} \right)^\sqrt{n}
$$
As the term inside the outer parenthesis goes to $e^{-1}$, we have $L=\lim e^{-\sqrt n}=0$.
It's like we're distributing the parenthesis somehow:
$$\lim \left(\left(1-\frac {1}{\sqrt n}\right)^\sqrt{n} \right)^\sqrt{n}=\lim \left(\lim\left (1-\frac {1}{\sqrt n}\right)^\sqrt{n} \right)^\sqrt{n}=\lim e^{-\sqrt n}$$
The question is: Why can we do this? Which property are we using?
|
You are looking for the following statement.
Let $(a_n)$ be a converging sequence of non-negative real numbers with $\lvert \lim_{n \to \infty} a_n \rvert < 1$. Furthermore, let $(e_n)$ be an unbounded, increasing sequence. Then $({a_n}^{e_n})$ converges to zero.
Proof idea. Let $a = \lim_{n \to \infty} a_n$. Let $K > 1$ be a real number. Then $({a_n}^K)$ converges to $a^K$ by limit theorems. Since $e_n > K$ and $a_n < 1$ for large enough $n$, the members of $({a_n}^{e_n})$ are eventually at most ${a_n}^{K}$, so both limes inferior and limes superior of the sequence $({a_n}^{e_n})$ lie within the interval $[0, a^K]$.
Because $K$ was arbitrary and $\lvert a \rvert < 1$, the sequence must converge to zero.
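Applied to the limit in the question: take $a_n=\left(1-\frac{1}{\sqrt n}\right)^{\sqrt n}\to e^{-1}<1$ and $e_n=\sqrt n$, so that
$$
\left(1-\frac{1}{\sqrt n}\right)^{n}={a_n}^{e_n}\longrightarrow 0.
$$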
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Any countable free group is embeddable into a free group of rank $2$ I have found this proof of the fact that any countable free group is embeddable in a free group of rank $2$ (see the last page, Proposition 2). But isn't this proof incorrect?
First off, it says $w=b^{-i_1}a^{\epsilon_1}b^{i_1}\dots$. Shouldn't it be $w=b^{-\epsilon_1 i_1}a^{\epsilon_1}b^{\epsilon_1 i_1}\dots$? Or are they somehow equivalent?
Second, it says $a^{\epsilon_j}$ and $a^{\epsilon_{j+1}}$ are present in the literal and so $w$ cannot collapse to $1$. But that is not true...it even says that $i_j$ may equal $i_{j+1}$, and if so then we must collapse $a^{\epsilon_j}a^{\epsilon_{j+1}}$ as $a^{\epsilon_j+\epsilon_{j+1}}$, and so clearly these two literals are not present in $w$ in this case, and so how then do we know that after perhaps collapsing it some more we do not get the exponent of this to equal $0$?
|
Note that $$x_{i_1}^{\epsilon_1}=b^{-i_1}a^{\epsilon_1}b^{i_1}$$
There is no collapsing when going from $a^{\epsilon_j}a^{\epsilon_{j+1}}$ to $a^{\epsilon_j+\epsilon_{j+1}}$. Note that we speak about words over the alphabet $\{a,b,a^{-1},b^{-1}\}$ and that $a^n$ is just a notational shorthand for $\underbrace{aaa\ldots a}_n$ if $n>0$ or $\underbrace{a^{-1}a^{-1}a^{-1}\ldots a^{-1}}_{|n|}$ if $n<0$ or the empty word if $n=0$; at any rate $a^n$ stands for $|n|$ specific letters of our alphabet. Collapsing would mean that the total number of letters changes, but in $a^{\epsilon_1}a^{\epsilon_2}$ we have $|\epsilon_1|+|\epsilon_2|$ letters and in $a^{\epsilon_1+\epsilon_2}$ we have $|\epsilon_1+\epsilon_2|$ letters. We have $|\epsilon_1+\epsilon_2|=|\epsilon_1|+|\epsilon_2|$ because either $\epsilon_1=\epsilon_2=+1$ or $\epsilon_1=\epsilon_2=-1$. (In other words, neither $aa$ nor $a^{-1}a^{-1}$ allows collapsing).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How would one solve the following equation? This equation is giving me a hard time.
$$e^x(x^2+2x+1)=2$$
Can you show me how to solve this problem algebraically or exactly? I managed to solve it using my calculator with one of its graph functions. But I would like to know how one would solve this without using the calculator.
Highly appreciated,
Bowser.
|
The answer given by Desmos for intersection of the two curves $y=e^x$ and $y=\frac {2}{(x+1)^2}$ is $\color{red}{x=0.249}$. Now we have
$$(x+1)^2=2e^{-x}\iff x^2+2x+1=2(1-x+\frac{x^2}{2}-\frac{x^3}{6}+\frac{x^4}{24}-\frac{x^5}{120}+O(x^6))$$ hence $$1-4x-\frac{x^3}{3}+\frac{x^4}{12}-\frac{x^5}{60}+O(x^6)=0$$
The first approximation $1-4x=0$ gives $\color{red}{x\approx 0.25}$
The second approximation $ 1-4x-\frac{x^3}{3}=0$ gives $\color{red}{x\approx 0.24872}$
And we can continue but we see that the first approach is already good enough.
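For a few more digits without a graphing tool, here is a crude bisection sketch in Python (any root-finder would do; the bracket $[0,1]$ is chosen by inspection, since the left side is below $2$ at $x=0$ and above $2$ at $x=1$):

from math import exp

def g(x):
    # g(x) = 0 exactly when e^x (x+1)^2 = 2
    return exp(x) * (x + 1) ** 2 - 2

lo, hi = 0.0, 1.0   # g(0) < 0 < g(1)
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print((lo + hi) / 2)   # about 0.2488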
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1681953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
If $f:\mathbb{R}\mapsto\mathbb{R}$ is a non-constant periodic function then $\lim_{x \to \infty} f(x)$ does not exist I am trying to prove that if $f:\mathbb{R} \mapsto \mathbb{R}$ is any non-constant periodic function then $\lim_{x \to \infty} f(x)$ does not exist. What I have so far is this:
Suppose $f$ is periodic with period $p$. Suppose the limit exists and is equal to $L$. Then for every $\epsilon > 0$ there exists $\delta > 0$ such that $x > \delta \implies |f(x) - L| < \epsilon.$ Since $f$ is periodic we have
$|f(x) - L| = |f(x) + f(x+p) - f(x+p) - L| = |(f(x) - f(x+p)) + (f(x+p) - L)| \leq |f(x) - f(x+p)| + |f(x+p) - L| = |f(x) - f(x)| + |f(x) - L| = |f(x) -L|$
Thus $|f(x) - L| < |f(x) - L|$ a contradiction. Therefore $\lim_{x \to \infty} f(x)$ does not exist.
|
Your reasoning implies that $|f(x)-L|\le|f(x)-L|$, which is not a contradiction. In fact, as the comments suggest, the supposition that $f$ is periodic and has a limit at infinity should imply that $f$ is constant.
So, let's suppose that $f$ is periodic and not constant. Then, there exist two real numbers $x,y$ such that $f(x)\neq f(y)$. Let $\epsilon=|f(x)-f(y)|/2$ and $p$ be the period. Suppose also that $L=\lim_{x\to\infty} f(x)$.
Now, for any $K\in\Bbb R$ we have some natural numbers $m,n$ such that $x+mp>K$ and $y+np>K$. You have to prove that $|f(x+mp)-L|<\epsilon$ and $|f(y+np)-L|<\epsilon$ together lead to a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
}
|
Weird Inequality that seems to be true Is it true that:
$$\left (3x+\frac{4}{x+1}+\frac{16}{y^2+3}\right )\left (3y+\frac{4}{y+1}+\frac{16}{x^2+3}\right )\geq 81,\ \forall x,y\geq 0$$
I have proved that $3x+\frac{4}{x+1}+\frac{16}{x^2+3}= 9 +\frac{(x-1)^2 (3x^2+1)}{(x+1)(x^2+3)}, \ \forall x\geq 0$, but I did not succeed in proving the initial inequality.
Thanks for the counterexample. This one is for sure good:
$$\left (3x+\frac{4}{x+1}+\frac{8}{\sqrt{2(y^2+1)}}\right )\left (3y+\frac{4}{y+1}+\frac{8}{\sqrt{2(x^2+1)}}\right )\geq 81,\ \forall x,y\geq 0$$
|
This inequality is false, e.g. $x=2.5$, $y=0.5$. You will obtain a value of $79.9901<81$.
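For the record, a two-line check of that evaluation in Python:

x, y = 2.5, 0.5
left = 3*x + 4/(x + 1) + 16/(y**2 + 3)
right = 3*y + 4/(y + 1) + 16/(x**2 + 3)
print(left * right)   # approximately 79.99, indeed less than 81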
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Bayesian Statistics: Finding Sufficient Statistic for Uniform Distribution The example: let $y_1,\dots,y_n \overset{\text{i.i.d.}}\sim U([0,\theta])$, where $\theta >0$ is unknown. Find a sufficient statistic for $\theta$.
Solution attempt:
$$g(y_1,\dots,y_n) = c\quad \text{(constant)}$$
$$P(y_i\mid\theta) = \frac{1}{\theta}\quad \text{ for } 0<y_i<\theta$$
$$P(y_1,\dots,y_n\mid\theta) = \prod_{i=1}^n P(y_i\mid\theta) = \frac{1}{\theta^n}\quad\text{ for } 0<y_1,\dots,y_n<\theta$$
Now this is where I got stuck. I have seen this post about Sufficient Statistic but I am still stuck. Could somebody help me find a sufficient statistic for this problem? (I think maybe taking the average or the maximum value of $y_i$s might work but not sure how to do the next step)
|
I find it at best irritating to use the same symbol to refer both to the random variable and to the argument to the density function. We can understand such things as $\Pr(Y\le y) = (\text{a certain function of } y)$ because capital $Y$ and lower-case $y$ mean two different things.
Write the density like this and see if you can do something with that:
$$
f_{Y_1,\ldots,Y_n}(y_1,\dots,y_n\mid\theta) = \prod_{i=1}^n P(y_i\mid\theta) = \begin{cases} 1/\theta^n & \text{if } \max\{y_1,\ldots,y_n\}\le\theta, \\
0 & \text{if } \max\{y_1,\ldots,y_n\} >\theta, \end{cases}
$$
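In case it helps to see where this leads (using the factorization theorem, which I assume is available): the density factors as
$$
f_{Y_1,\ldots,Y_n}(y_1,\dots,y_n\mid\theta)=\underbrace{\frac{1}{\theta^n}\,\mathbf 1\{\max_i y_i\le\theta\}}_{g(T(y),\,\theta)}\cdot\underbrace{\mathbf 1\{\min_i y_i\ge 0\}}_{h(y)},
$$
so $T(Y_1,\dots,Y_n)=\max_i Y_i$ is a sufficient statistic for $\theta$.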
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to define a sequence with an infinite amount of each natural number? I was trying to create a sequence such that for every $x \in \mathbb{N}$, that sequence has a subsequence that converges to $x$.
Basically I came up with the sequence that has each natural number an infinite amount of times. Now I am having trouble defining it. Any help?
|
How about just:
$$ a_n=1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,\ldots$$
Clearly each natural number appears an infinite amount of times, so you can extract a subsequence converging to whichever natural number you want.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
How to show that a map without fixed point from an annular region to an annular region is homotopic to the antipodal map $\Omega=\{x\in \mathbb{R}^3: 1\le||x||\le2\}$
If $L:\Omega\rightarrow \Omega$ is continuous and without a fixed point, how to show that $L$ is homotopic to the antipodal map $x\rightarrow -x$?
|
The homology of $\Omega$ is $H_0(\Omega,\Bbb{Q})=H_2(\Omega,\Bbb{Q})=\Bbb{Q}$ and $H_i(\Omega,\Bbb{Q})=0$ if $i\ne 0,2$.
The Lefschetz fixed point formula tells you that for a map $f$ without fixed points, necessarily
$0=\mathrm{Tr}(H_0(f))+\mathrm{Tr}(H_2(f))=H_0(f)+H_2(f)$. But $H_0(f)=H_0(id)=1$ for all $f$,
hence in our case $H_2(f)=-1=H_2(-id)$. By the rational
Hurewicz theorem and the naturality of the Hurewicz map this implies that $\pi_2(f)=\pi_2(-id)$, hence $f$ is homotopic to $-id$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Number of integer solutions to the equation $x_1+x_2+x_3+x_4=100$ $x_1+x_2+x_3+x_4 = 100$ with
$1 \le x_1\le10$
$2\le x_2\le15$
$x_3\ge5$
$0\le x_4\le10$
Apparently this is the same as
$y_1 + y_2 + y_3 + y_4 = 92$ with
$y_1 \le 9$
$y_2 \le13$
$y_4 \le10$
I understand the $3$ conditions, but what has happened to $x_3$?
I can see that it has been subtracted but what has happened to the conditions?
|
We are defining $y_1=x_1-1, y_2=x_2-2,y_3=x_3-5,y_4=x_4$ and demanding that all the $y$'s be $\ge 0$. Once we defined $y_3=x_3-5$, there is no constraint on $y_3$ except that it be nonnegative, so it is not listed in the constraints.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What is the difference between ($\tan x \sec^2x$) and ($\sin x/\cos^3x$)? Why is the answer to the integration different? $$\int \:\frac{\left(\sin x+\tan x\right)}{3\cos^2x}dx$$
I know I have to split the equation into
$$\frac{1}{3}\int \:\left(\:\frac{\sin x}{\cos x}\right)\left(\frac{1}{\cos x}\right)dx+\frac{1}{3}\int \:\left(\:\tan x\right)\left(\frac{1}{\cos^2x}\right)dx$$
I know that for the first part, it is $$\frac{1}{3}\int \tan x\sec xdx$$ which is $$\sec x$$.
However, for the second part, wouldn't it be $$\frac{1}{3}\int \tan x \sec^2xdx$$
If I used $$u=\tan x$$ then $$du=\sec^2xdx$$ so wouldn't the answer be $$\frac{1}{6}\tan^2x$$
However, the book is saying that the second part is supposed to be $$\frac{1}{6}\sec^2x$$ because I was supposed to convert the second part into $$\frac{1}{3}\int \frac{\sin x}{\cos^3x}dx$$ and let $$u=\cos x$$
What I am doing wrong? Why can't it be $$\tan \sec^2x$$ instead of $$\sin x/\cos^3x$$?
|
Both answers differ by a constant ($1/6$). So both answers and both methods are correct; the constant of integration takes care of that $1/6$.
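Explicitly, since $\sec^2x=1+\tan^2x$,
$$\frac{1}{6}\sec^2x-\frac{1}{6}\tan^2x=\frac{1}{6}.$$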
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Finding the zero-state output The input and output of a stable network are related via the following equation.
$$\frac{d^2y(t)}{dt^2} + 2\frac{dy(t)}{dt} + 10y(t) = \frac{dx(t)}{dt} + x(t)$$
x(t) = input, y(t) = output, u(t) = unit function.
The input is $$\frac{3u(t)}{e^t}$$
I want to find the zero-state output.
Now I have the transfer function as $$\frac{iw + 1}{-w^2 +2(iw) + 10}$$
But I'm not quite sure where to proceed from here. My intuition is to move the transfer function to the time domain through fourier transform, but I'm not sure how I would use that to continue the problem.
|
First find the solutions of $y''(t) + 2y'(t) + 10y = 0$. Passing to the characteristic polynomial, it has roots $\lambda_{1,2} = -1\pm 3i$, so every function of the form
$$
y_0(t) = Ae^{-t}e^{i3t} + Be^{-t}e^{-i3t}
$$
solves the homogenous equation.
Now consider the non-homogenous one. If $u$ is differentiable you get
$$
x'(t) + x(t) = 3u'(t)e^{-t}-3u(t)e^{-t}+3u(t)e^{-t} = 3u'(t)e^{-t}.
$$
I look for particular solutions $y(t) = a(t)e^{-t}$. So $y'(t) = a'(t)e^{-t}-a(t)e^{-t}$, $y''(t) = a''(t)e^{-t}-2a'(t)e^{-t} + a(t)e^{-t}$. Substituting into the equation you get
$$
a''(t) + 9a(t) = 3u'(t).
$$
If you know $u(t)$ there is some chance to solve it, and your solution will be
$$
y_0(t) + a(t)e^{-t}
$$
for suitable constants $A,B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is limit of function -1/0 ok? A quick question: I'm determining the limit of this function:
$$\lim_{x→1}\frac{x^2 - 2x}{x^2 -2x +1}$$
When I divide the numerator and denominator by $x^2$ and fill in $1$, I get $-1/0$. Is this an illegal form, or does it indicate that the limit goes to $\infty$ or $-\infty$?
|
Notice, $$\lim_{x\to 1}\frac{x^2-2x}{x^2-2x+1}$$
$$=\lim_{x\to 1}\frac{(x^2-2x+1)-1}{x^2-2x+1}$$
$$=\lim_{x\to 1}\left(1-\frac{1}{(x-1)^2}\right)\longrightarrow \color{red}{-\infty}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
}
|
Proof on set equality I was doing a few exercises on set proofs but I came across an exercise that I don't know how to start:
If $A \cap C = B \cap C $ and $ A-C=B-C $ then $A = B$
Where should I start? Should I start from $ A \subseteq B $ or should I start from this $ ((A\cap C = B\cap C) \land (A-C = B-C)) \Rightarrow (A = B)$ ?
|
You show $A\subseteq B$ and $B \subseteq A$, as one usually would when showing that two sets are equal. Since the conditions are symmetric in $A$ and $B$, the two proofs are completely analogous, so I will only do one of them.
To show $A \subseteq B$, take an $a \in A$, and note that either $a \in C$ or $a \notin C$. If $a \in C$, then we have $a \in A\cap C$. If $a\notin C$, then we have $a \in A-C$. In both cases, you may use the given set equalities to conclude that $a \in B$. This shows $A \subseteq B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1682961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Are any of these in the kernel? Let $T:P_2 \rightarrow P_3$ be the linear transformation with rule $T(p)(t) = tp(t)$. Which of the following (if any) are in the kernel of $T$?
*
*$p_1(t) = t^2$
*$p_2(t) = 0$
*$p_3(t) = 1+ t$
Here is my question: to be in the kernel of $T$, $T(u) = 0$. Would that mean only the second function is in the kernel, since it is the only function listed which results in $T(u) = 0$? Or does this mean that they are all in the kernel, because they all can result in $0$ if $t = 0$?
|
The equation $T(u)=0$ is to be read in a way that the right hand side refers to the zero polynomial function, i.e. $f:\mathbb R \rightarrow \mathbb R, t\mapsto 0$, which is equal to zero for all $t$.
Thus, the first interpretation is correct, the result has to be equal to zero for all $t$, not just a particular choice. Thus, only $p_2$ is in the kernel.
To draw an analogy with a vector $x \in \mathbb R^n$: in order for $x$ to be in the kernel of a matrix $A$, it does not suffice that one of the components of $Ax$ is zero; the whole vector must be zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Prove that the inequality $\sin^8(x) + \cos^8(x) \geq \frac{1}{8}$ is true for every real number.
|
$$\sin^8 x+\cos^8x \ge \frac 18;$$
$$ \left (\frac{1-\cos2x}{2} \right )^4+ \left (\frac{1+\cos2x}{2} \right )^4\ge \frac 18;$$
$$(1-\cos2x)^4+(1+\cos2x)^4 \ge2$$
$$1-4\cos2x+6\cos^22x-4\cos^32x+\cos^42x+$$
$$+1+4\cos2x+6\cos^22x+4\cos^32x+\cos^42x\ge 2$$
$$12\cos^22x+2\cos^42x \ge 0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Evalute $\lim\limits_{n\to\infty}\sin^2(\pi\sqrt{ n^2+n})$. I like to find the following limit:
$\lim\limits_{n\to\infty}\sin^2(\pi\sqrt{ n^2+n})$.
Any ideas or insight would be greatly appreciated.
|
Hint: Note that
$$\sqrt{n^2+n}=\sqrt{n^2+n}-n+n=\frac{n}{\sqrt{n^2+n}+n}+n=\frac{1}{\sqrt{1+1/n}+1}+n.$$
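To finish from the hint: write $\sqrt{n^2+n}=n+t_n$ with $t_n=\frac{1}{\sqrt{1+1/n}+1}\to\frac{1}{2}$, and use $\sin^2(\pi n+\pi t_n)=\sin^2(\pi t_n)$ for integer $n$, so that
$$
\lim_{n\to\infty}\sin^2\left(\pi\sqrt{n^2+n}\right)=\sin^2\frac{\pi}{2}=1.
$$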
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find a $2 \times 2$ matrix $A$ with each main diagonal entries $0$, and with $A^2 = -I.$ I'm not sure how to tackle this problem.
I'm not certain what is meant by "each main diagonal entries 0". Does this mean:
$$A=\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix}$$
I'm also not sure what it means by $A^2 = -I$?
From Wikipedia, I gathered that the identity matrix is an $n \times n$ square matrix with one's on the main diagonal and zero's elsewhere. What would would be considered the main diagonal?
|
The main diagonal runs from the top left to the bottom right, so you're looking for a matrix like
$$A = \left[ \begin{array}{cc} 0 & a \\ b & 0 \end{array}\right]$$
such that $A^2 = -I$. This can be done by trial and error, or by writing a system of a few equations after finding what $A^2$ is.
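Carrying out the multiplication:
$$A^2 = \left[ \begin{array}{cc} 0 & a \\ b & 0 \end{array}\right]\left[ \begin{array}{cc} 0 & a \\ b & 0 \end{array}\right] = \left[ \begin{array}{cc} ab & 0 \\ 0 & ab \end{array}\right],$$
so any choice with $ab=-1$ works, for instance $a=1$ and $b=-1$.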
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Why would the category of topological spaces be a balanced category (i.e. monic epimorphisms are isomorphisms)? I've just read on this page that
For example, $\mathsf {Set}$ (the category of sets), $\mathsf {Grp}$ (the category of groups), and $\mathsf {Top}$ (the category of topological spaces) are all balanced.
(Balanced means that all the monic epimorphisms are isomorphisms).
I clearly understand for $\mathsf{Set}$ and $\mathsf{Grp}$, but isn't this wrong for $\mathsf{Top}$? For instance,
$$f:[0,1[ \longrightarrow S^1 \qquad t \longmapsto e^{2πit}$$
is continuous and bijective but is not an isomorphism in $\mathsf{Top}$. Am I missing something there?
Thank you for your comments!
|
As it was pointed out in the comments (by Pedro Sánchez Terraf and Rob Arthan), the PlanetMath page is wrong. It is not true that every monic epimorphism in $\sf Top$ is an isomorphism.
Other examples of such morphisms can be found in the category of Hausdorff spaces $\sf Haus$ (looking at the inclusion $\Bbb Q \hookrightarrow \Bbb R$) or in $\sf Ring$ (looking at $\Bbb Z \hookrightarrow \Bbb Q$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Finding the mean time to second failure? So I can find the mean time to failure by using the equation $\sum k p_k$, which is the expected number of trials. I then used the formula $(1-x)^{-2}=1+2x+3x^2+\cdots$ to simplify the above formula to $1/p$, which is the mean time to failure. So what about the mean time to second failure? What I have so far is an example where we want to see the probability of getting two heads:
1/4 HH
1/8 THH
1/16 TTHH
1/32 TTTHH
1/64 TTTTHH
And I don't know where to go from there. I think I know what the answer is but I don't know how to prove it using the above methods.
|
If we are counting the number of trials, then notice that the 2nd success occurs on the $k$th trial. Hence, there is one success in the previous $k-1$ trials. There are $\binom{k-1}{1}$ ways to choose where that success happens, there are 2 successes with probability $p = 1/2$ each, and $k-2$ failures with probability $1-p$ each. Hence
$$P(X = k) = \binom{k-1}{1}(1-p)^{k-2}p^2.$$
This is a negative binomial distribution. It can be extended to $k$ trials with $r$ successes.
You could proceed the way you did to get the mean. Or notice that in this case,
$$X = X_1+X_2,$$
$X_i$ are the waiting times until the one success. Each is independent and follows a $\text{Geom}(p = 1/2)$ on $\{1,2,3,\dotsc\}$. Thus
$$E[X] = E[X_1] +E[X_2] = \frac{1}{p}+\frac{1}{p} = \frac{2}{1/2} = 4.$$
Second alternative is to use the tail sum formula, since $X$ is non-negative,
$$E[X] = \sum_x P(X\geq x).$$
Note: I treat/define the successes as 'failures' (that you want).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
prove or disprove if $\|AB - I\|<1$ then $\|BA - I\|<1$ Can we say that if $\|AB - I\|<1$ then $\|BA - I\|<1$ for an arbitrary norm?
I am trying to construct a counterexample but I am stuck; please help me.
|
For given matrices $A$ and $B$, it depends on the norm you use (see this for examples of matrix norms), so it is important to specify your norm. We have already seen in other answers that for some norms this is false. For the Frobenius norm, namely $\|A\|_F \overset{def}= \sqrt{\mathrm{tr}(A^{\top}A)}$, this is true for symmetric matrices $A$ and $B$ ; this follows from the fact that the Frobenius norm is symmetric, i.e. $\|A^{\top}\|_F = \|A\|_F$, and $(AB-I)^{\top} = BA-I$. For any symmetric norm the result is true on symmetric matrices. Other instances of symmetric norms include the $L_{p,q}$ norms with $q=p$ (see this again, it is described there).
Otherwise producing counter examples is just a matter of crunching numbers ; you don't need to be smart, the identity will generally fail. Just try a bunch of numbers until it works.
Hope that helps,
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Establish the identity $\frac{\cot\theta + \sec\theta}{\cos\theta + \tan\theta} = \sec\theta \cot\theta$ Establish the identity:
$$\dfrac{\cot\theta + \sec\theta}{\cos\theta + \tan\theta} = \sec\theta \cot\theta$$
The first step I got was:
$$\sec\theta \cot\theta = \dfrac{\sec\theta \cot\theta\,\big(\cos\theta + \tan\theta\big)}{\cos\theta + \tan\theta}$$
Then it tells me to rewrite the factor $$\cos\theta + \tan\theta$$
in the numerator using reciprocal identities.
How would I do that?
Here is what the assignment looked like:
|
Continuing from what you got:
$$\sec\theta \cot\theta = \dfrac{\sec\theta \cot\theta\,\big(\cos\theta + \tan\theta\big)}{\cos\theta + \tan\theta}$$
and since $\sec \theta \cos \theta = 1, \cot \theta \tan \theta = 1$, expand the brackets:
$$\sec\theta \cot\theta = \frac{\cot \theta + \sec \theta}{\cos \theta + \tan \theta}$$
There is no need to rewrite $\sec \theta \cot \theta$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How can trigonometric functions be negative? I cannot understand why $\cos(180-\theta)$, say, is $-\cos\theta$. This is probably because my teacher first introduced trigonometry in triangles. I do not understand it for obtuse angles because I cannot think of them in a right triangle.
I realised that I couldn't feel what I had read "in my spleen" when I was looking at the proof for the law of cosines in an obtuse-angled triangle. I have spent quite some time thinking about how the "$-\cos\theta$" entered the derivation. I cannot fully understand, why the negatives which work in the $XY$-plane work in triangles. For instance, since in a triangle, all the sides are positive while taking the ratio of sides we do not get any negative values but how then does $\cos 120^{\circ}=-0.5$. My brain is in a mess right now. I would appreciate it if someone could help me out or suggest something that I can do.
Let me illustrate what I can't get around.
It is given that in the triangle $\angle BAC=120$ degrees,$|AC|=3$ and that D is the foot of the perpendicular from C to BD. Then $\cos\angle BAC=-0.5=\dfrac{AD}{AC} \implies AD=-1.5$ ?
|
The cosine of an obtuse angle simply does not come from ratios of the lengths of the sides of a right triangle. It's defined to be the $x$ coordinate of the intersection of the terminal side of the angle with the unit circle. That's all. There's nothing forcing us to make this definition, except that it's immensely useful and agrees with the ratios-of-sides definition for acute angles. From this definition you can prove that $\cos(180-\theta)=-\cos(\theta)$, using pictures like the one you displayed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1683878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Maximum minimum values in trigonometry
Find minimum value of $2\sin^2a+3\cos^2a$
Solving it we get $2+ \cos^2a$
Answer: $3$ (taking $\cos a$ as $-1$)
Why are we using the minimum cosine value as $-1$ instead of using the cosine as $0$?
That would make the minimum value $2$.
|
$$2 \sin^2a+3 \cos^2 a= 3\cos^2a+2-2\cos^2 a=\cos^2a+2$$
$$0\le \cos^2a \le 1 \Rightarrow 2\le \cos^2a+2 \le 3$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Size issues in 2-categories I was playing a bit with the 2-category Cat, trying to get a better understanding of the notion of a 2-category (strict, I guess). The usual definition of a category that I use assumes that $Hom(A,B)$ is a set.
What is an analogue of that condition in 2-categories? I guess you need to have some size restrictions in order to have a higher-Yoneda. I think the class of natural transformations between two functors is not a set in general, am I right?
|
Paul Blain Levy has a short (1-page) note on the topic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
proving that $\frac{(n^2)!}{(n!)^n}$ is an integer How to prove that $$\frac{(n^2)!}{(n!)^n}$$ is always a positive integer when $n$ is also a positive integer? Note: I want to prove it without induction. I just cancelled $n!$ and split terms which are $n^2-a^2=(n-a)(n+a)$ where $a$ is a perfect square; nothing more I could do.
|
$S_n \times S_n \times \cdots \times S_n$ is a subgroup of $S_{n^2}$.
The number in question is the index of that subgroup and thus an integer by Lagrange's theorem.
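Explicitly, the index is
$$\left[S_{n^2} : S_n \times S_n \times \cdots \times S_n\right]=\frac{|S_{n^2}|}{|S_n|^n}=\frac{(n^2)!}{(n!)^n},$$
so the quotient is a positive integer.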
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Prove the following tautology: $\big[(p\leftrightarrow q) \land (\lnot q \to r) \land (p \to r)\big]\to r $
Prove the following tautology:
$$\big[(p\leftrightarrow q) \land (\lnot q \to r) \land (p \to
r)\big]\to r $$
My effort
I am trying to prove this with direct reasoning, i.e. without using truth tables.
Now since $ \lnot q \to r $ and $p \to r $ ,I think it follows that I can rewrite $$\big[(p\leftrightarrow q) \land (\lnot q \to r) \land (p \to
r)\big]=\big[(p \land \lnot q) \to r \big] \to r $$
which would be the proof for it, but I am not so sure: I just started studying math logic.
In what other ways (excluding truth table) could this problem be solved ?
|
Here is a natural deduction proof in a Fitch-style proof checker:
The law of the excluded middle (LEM) on $Q \lor \neg Q$ is referenced at the end.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Proof by induction: Show that $9^n-2^n$ for any natural number $n$ is divisible by $7$. Can someone please solve the following problem?
Show that $9^n-2^n$ for any natural number $n$ is divisible by $7$. ($9^n$ = $9$ to the power of $n$.)
I know the principle of induction but am stuck with setting up a formula for this.
|
Assume $7\mid(9^n-2^n)$; then you can write $9^n-2^n=7m$, for some integer $m$, or $9^n=7m+2^n$ as well. Then
$$
9^{n+1}-2^{n+1}=
9\cdot 9^n-2^{n+1}=
9(7m+2^n)-2^{n+1}=
63m+(9-2)2^n=
7(9m+2^n)
$$
which is divisible by $7$. Together with the base case $9^1-2^1=7$, induction gives the claim for every natural number $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
Find a non-trivial solution to $5x^2+7y^2=3z^2$. Find a non-trivial solution to $5x^2+7y^2=3z^2$. Note: $x,y,z$ are integers.
My attempt: By a theorem I have $x^2 \equiv 1 \pmod 5$, $x^2 \equiv 1 \pmod 7$ and $x^2 \equiv 1 \pmod 3$. I just guessed $x=1$. So I have $5+7y^2=3z^2$. I now guessed $y=1$ and $z=2$.
Is there a better way to find non-trivial solutions than guessing?
|
The existence of integer solutions of $ax^2+by^2+cz^2=0$ or other ternary quadratic forms can be decided by using Legendre's theorem.
Theorem (Legendre): Let $a,b,c$ be pairwise coprime, squarefree positive integers. Then $ax^2 + by^2 = cz^2$ has a nontrivial solution in integers (or rational numbers) $x,y,z$ if and only if $bc$ is a quadratic residue modulo $a$, $ac$ is a quadratic residue modulo $b$, and $-ab$ is a quadratic residue modulo $c$, i.e. $$\left(\frac{bc}{a}\right)=\left(\frac{ac}{b}\right)=\left(\frac{-ab}{c}\right)=1.$$
If such solutions exist one can use a theorem of Holzer, that $x,y,z$ can be taken relatively small (provided $a,b,c$ are squarefree), i.e., $|x|\le \sqrt{|bc|}$, $|y|\le \sqrt{|ca|}$, and $|z|\le \sqrt{|ab|}$, so that one can search by computer (there are more effective algorithms, though, than just trying here).
Reference for finding solutions explicitly: J. E. Cremona, D. Rusin: Efficient solution of rational conics. Math. Comp. 72 (2003), no. 243, 1417–1441.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Notation for double-integrals - partial or full differentials? When you are trying to find a volume for a function z = f(x,y), the common notation is to find:
$$\int\biggr(\int f(x,y)dx\biggr)dy$$
However, when you do this, you are actually keeping the $y$ constant on the first integral. To me, for this to be the way it works, it seems like you should actually be using the partial differential $\partial x$ and $\partial y$. So it seems like the notation for this should be:
$$\int\biggr(\int f(x,y)\partial_z{x}\biggr)\partial_z{y}$$
Is this an incorrect intuition? Why or why not?
|
Apparently, you are correct, because when you use implicit differentation to solve partial derivatives of multivariable functions in the form $z = f(x, y)$, you will take the partial derivative of $z$ with respect to $x$, which means you will take the derivative of the $x$-terms and leave all $y$ terms as constants. It will be reversed when you take the partial derivative of $z$ with respect to $y$; you will treat all $x$-terms as constants and calculate the derivative of all the $y$-terms. Since it looks like it applies to integrals, yes, it looks like you would use $\partial x$ and $\partial y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Four statements, One statement is false math problem When trying to recall some facts about the ages of his three aunts, Josh made the
following claims:
*
*Alice is fifteen years younger than twice Catherine’s age.
*Beatrice is twelve years older than half of Alice’s age.
*Catherine is eight years younger than Beatrice.
*The three women’s ages add to exactly one-hundred years.
However, Josh’s memory is not perfect, and in fact only three of these four
claims are true. If each aunt’s age is an integer number of years, how old is
Beatrice?
I have these following equations:
$a=2c-15$
$b=\frac12 a +12$
$c=b-8$
$a+b+c=100$
How would I solve this problem? Do I assume one statement at a time is false and try the situations one by one?
|
The approach you suggest would work - as long as you find that only 1 of the situations is possible, and the other 3 aren't (like for example if someone has a non-integer/negative age, or you reach another contradiction of some sort).
On the other hand, you may find that there are several possibilities for which Josh's claims are false. But they may all give the same age for Beatrice which still enables you to answer the question. (Note that I haven't done the question so this may not be true at all)
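If you want to double-check your casework at the end, a brute-force sketch in Python works too (the age cap of 120 is an arbitrary assumption, and claim 2 is written as $2b=a+24$ to keep the arithmetic in integers):

for a in range(121):
    for b in range(121):
        for c in range(121):
            claims = [a == 2*c - 15, 2*b == a + 24, c == b - 8, a + b + c == 100]
            if sum(claims) == 3:   # exactly one of Josh's claims is false
                print(a, b, c)     # Alice, Beatrice, Catherine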
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Show $\frac{3997}{4001}>\frac{4996}{5001}$ I wish to show that $$\frac{3997}{4001}>\frac{4996}{5001}.$$
Of course, with a calculator, this is incredibly simple. But is there anyway of showing this through pure analysis? So far, I just rewrote the fractions:
$$\frac{4000-3}{4000+1}>\frac{5000-4}{5000+1}.$$
|
Using long multiplication we get
$$3997\times5001=19988997>19988996=4996\times4001$$
which implies the desired result (because positive multiplication preserves order).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1684883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 8,
"answer_id": 4
}
|
Some Trouble Understanding set theory I'm currently in a discrete mathematics class and we've recently been discussing set theory. I feel like I have a basic understanding of how to actually prove set relations when a question asks me to do so. However, I am having a lot of trouble when initially presented with questions where I am asked to determine if a statement is true or false. The approach we were taught was to set up Venn diagrams in order to help us. However, I find that I get very lost in certain types of questions, especially ones where one set contains another. Here is an example of a problem I struggled with:
One of these is true and one is false, provide a proof for both:
(1) For all sets A, B and C, if A-B is a subset of A-C then C is a subset of B
(2) For all sets A, B and C, if C is a subset of B then A-B is a subset of A-C.
I understand that the first one is false and the second one is true. However when first presented with the problems the only way I was able to solve it was by plugging in sets, until one was false. I was hoping that someone would be able to provide me with a better approach to making sense of these type of problems, and possibly how I could represent these questions with a diagram.
Thanks!
|
Part (1):
The light gray area represents $A - B$. If you combine the dark and light gray areas it is $A - C$. Evidently, $A - B \subset A - C$ but $C \not\subset B$. How did I come up with this diagram? I purposely drew $B$ and $C$ so that $C \not\subset B$, and then I drew $A$ so that $B$ overlapped more with it than $C$ did. To prove that the statement is false, we have provided a counterexample, so we are done. (Remember that it is not true that $A - B \subset A - C \implies C \not\subset B$! A counterexample can be constructed for that too.)
Part (2):
Since you were asking about how to intuitively approach these questions, let us not use Venn diagrams here. Instead, let us logically proceed from the givens. We know $C \subset B$: if $x \in C$ then $x \in B$. Equivalently, we have the contrapositive: $x \not\in B \implies x \not\in C$. Now, take any $y \in A - B$, which means $y \in A$ and $y \not\in B$. We just said that $y \not\in B \implies y \not\in C$, so we have $y \in A$ and $y \not\in C$. But this means $y \in A - C$! Thus $y \in A - B \implies y \in A - C$ which is the same as $A - B \subset A - C$.
Now how did I get this proof? I just followed logical deductions and definitions and tried to coherently put them together. For you, this may be easier than thinking graphically to come up with a diagram like the one above, or it may be more difficult, but as @IttayWeiss said, both will be greatly helped by practice to develop intuition.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proving set equivalence Let $S =\{[12],[3]^{-1},[13][4]\}$ and $T= \{[6^{16}],[24]+[67],[-158]\}$ be subsets of $\mathbb Z_{17}$. I am trying to prove $S=T$
So far I have $S =\{[12],[6],[1]\}$ since $[a^{-1}]=[b] \iff ab \equiv 1(mod\,x)$ and $T= \{[1],[6],[12]\}$
But I am not sure if this is sufficient. Does anyone know?
|
While you arrived at the right conclusion, you may want to be a bit more verbose in your presentation.
*
*Since $6 \cdot 3 \equiv 18 \equiv 1 \mod 17$, we have $[3]^{-1} = [6]$.
*Since $13 \cdot 4 \equiv 52 \equiv 1 \mod 17$, we have $[13]\cdot [4] = [1]$.
This yields $S = \{[12],[6],[1]\}$ and we may also note that these elements are pairwise distinct.
Similarly
*
*$6^{16} \equiv (6^2)^8 \equiv 2^8 \equiv (2^4)^2 \equiv (-1)^2 \equiv 1 \mod 17$ yields $[6^{16}] = [1]$.
*$24+67 \equiv 7 + 16 \equiv 23 \equiv 6 \mod 17$ implies $[24]+[67] = [6]$.
*$-158 \equiv -158 + 170 \equiv 12 \mod 17$ implies $[-158]=[12]$ and thus
$T = \{[1],[6],[12]\}$.
Therefore $S = T = \{ [1], [6], [12] \}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can you apply the fundamental theorem of calculus with the variable inside the integrand? I was wondering if I can apply the FTOC: $\frac { d }{ dx } \left( \int _{ a }^{ x }{ f(t) } dt \right) =f(x)$ when the variable being differentiated also appears inside the integrand.
For example: $\frac { d }{ dx } \left( \int _{ a }^{ x }{ xf(t) } dt \right)$ or any function in the form: $\frac { d }{ dx } \left( \int _{ a }^{ x }{ f(x,t)f(t) } dt \right) $ and if so, what would we get.
|
By definition
$$\frac{d}{dx}\int_a^xf(x,t)dt=\lim_{\Delta x\to 0}\frac{1}{\Delta x}\left[\int_a^{x+\Delta x}f(x+\Delta x,t)dt-\int_a^x f(x,t)dt\right]$$
$$=\lim_{\Delta x\to 0}\frac{1}{\Delta x}\left[\int_a^{x+\Delta x}\left(f(x,t)+\frac{\partial f}{\partial x}\Delta x\right)dt-\int_a^x f(x,t)dt\right]$$
$$=\lim_{\Delta x\to 0}\frac{1}{\Delta x}\left[\int_a^{x+\Delta x}f(x,t)dt-\int_a^x f(x,t)dt + \int_a^{x+\Delta x}\frac{\partial f}{\partial x}\Delta x dt\right]$$
$$=\lim_{\Delta x\to 0}\frac{1}{\Delta x}\left[\int_x^{x+\Delta x}f(x,t)dt + \int_a^{x+\Delta x}\frac{\partial f}{\partial x}\Delta x dt\right]$$
$$=\lim_{\Delta x\to 0}\frac{1}{\Delta x}\left[f(x,x)\Delta x + \Delta x \int_a^{x+\Delta x}\frac{\partial f}{\partial x}dt\right]$$
$$=f(x,x)+\lim_{\Delta x\to 0} \int_a^{x+\Delta x}\frac{\partial f}{\partial x}dt$$
$$=f(x,x)+\int_a^x \frac{\partial}{\partial x}f(x,t)dt$$
In conclusion, you will only get the first term if you apply the FTOC blindly. Because the integrand contains $x$, you will have the second extra term.
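As a quick symbolic sanity check of the final formula, here is a small SymPy sketch with the example choice $f(x,t)=xt^2$ and $a=0$ (any smooth $f$ would do):

```python
# Sketch: check d/dx ∫_0^x f(x,t) dt = f(x,x) + ∫_0^x ∂f/∂x dt for f(x,t) = x*t**2.
import sympy as sp

x, t = sp.symbols("x t")
f = x * t ** 2

lhs = sp.diff(sp.integrate(f, (t, 0, x)), x)
rhs = f.subs(t, x) + sp.integrate(sp.diff(f, x), (t, 0, x))
print(sp.simplify(lhs - rhs))   # 0, both sides equal 4*x**3/3
```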
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Expansion of an expression. I want to know how to expand expressions like $(x+y+z)(a+b+c)$. I currently have a problem I want to solve; I know FOIL for the case $(x+y)(a+b)$, but what do I do when it is $(x+y+z)(a+b+c)$?
|
Use the distributive property.
We have that
$$(x+y+z)(a+b+c)=x(a+b+c)+y(a+b+c)+z(a+b+c)$$
Can you continue?
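If you want to check the result of finishing the expansion by hand, a quick symbolic check (just a sketch) is:

```python
# Sketch: a symbolic check of the fully expanded product (nine terms).
from sympy import symbols, expand

x, y, z, a, b, c = symbols("x y z a b c")
print(expand((x + y + z) * (a + b + c)))   # a*x + a*y + a*z + b*x + ... + c*z
```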
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the domain of $f(x)=x^x$? What is the domain of $f(x)=x^x$ ?
I used Wolfram Alpha, which says that the domain is all positive real numbers. Isn't $(-1)^{(-1)} = -1$? Why does the domain not include negative real numbers as well?
I also checked the graph, and it is visible only for $x>0$. Can someone help me clarify this?
|
The expression $x^y$ can be assigned a reasonable meaning for all real $x$ and all rational numbers of the form $y=m/n$, where $m$ is even and $n$ is odd and positive. Thus $x^y=(x^m)^{1/n}$, interpreted as the unique real $n$th root of $x^m$ (define $0^0$ to be $1$). Since every real number can be arbitrarily well approximated by such "even/odd" rationals, by continuity, a synthetic definition of $x^y$ can be obtained for all real $x$ and $y$. For example, using this definition, a graph can be plotted for the relation $y^y=x^x$, which runs smoothly as a loop through all four quadrants (along with the obvious line $y=x$).
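To see concretely why plotting software sticks to $x>0$ for the usual real-valued interpretation, note that a negative base with a non-integer exponent has no real value. For instance (a small sketch assuming Python 3 semantics, where such powers come out complex rather than raising an error):

```python
# Sketch (Python 3 semantics): negative bases only give real values at integer exponents.
print((-1.0) ** (-1.0))   # -1.0: integer exponent, so a real value exists
print((-0.5) ** (-0.5))   # a complex number -- there is no real value to plot
print((0.5) ** (0.5))     # ~0.7071: the usual real branch for x > 0
```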
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Coverings of connected sum of four copies of $\mathbb{R}P^2$ G. Baumslag in one of his papers asserts that the group $G = \langle a,b,c,d \mid a^2b^2c^2d^2 = 1 \rangle$ contains all fundamental groups of closed compact orientable surfaces of genus $g\geq 2$. I think this can be proved using covering space theory. The group $G$ is the fundamental group of $\mathbb{R}P^2\sharp\mathbb{R}P^2\sharp\mathbb{R}P^2\sharp\mathbb{R}P^2$. We know that $M_k$ (compact orientable surface of genus $k$) covers $M_2$ by a $\mathbb{Z}/{(k-1)}$-action. So a sufficient condition would be that $M_2$ covers $\mathbb{R}P^2\sharp\mathbb{R}P^2\sharp\mathbb{R}P^2\sharp\mathbb{R}P^2$?
However, I know that $M_k$ double covers the connected sum of $k+1$ projective planes $\sharp^{k+1} \mathbb{R}P^2$. In particular, $M_3$ (not $M_2$) covers $\mathbb{R}P^2\sharp\mathbb{R}P^2\sharp\mathbb{R}P^2\sharp\mathbb{R}P^2$. It follows that $G$ contains groups of all orientable surfaces of genus $2k+1$ with $k\geq 1$. What to do with even genus?
|
$M_2$ cannot cover your space, since they both have the same (non-zero) Euler characteristic.
EDIT: Just to expound a bit, this means $\pi_1(M_2)$ is not a subgroup of your group.
DOUBLE EDIT: This also rules out all even-genus closed orientable surfaces, since they would have to go through the orientable double-cover of your space, which has Euler characteristic $-4$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does there exist an analytic function such that $|f(z)|=x$ for $z=x+iy \in D$? Does there exist an analytic function $f=u+iv$ in $D=\{z:|z|<1\}$ such that $|f(z)|=x$ for $z=x+iy \in D$? Prove your response.
I am pretty stuck on this.
I know that $f(z)=u(z)+iv(z)$ and
$$u_x=v_y \quad u_y=-v_x$$
Further, I want to know if it's possible for $u^2(z)+v^2(z)=x^2$ and that's about the place I stick.
Attempt
We are looking for a function of the form $f(z)=u(z)+iv(z)$ such that
$$u^2+v^2=x^2$$
Differentiating with respect to $y$ we have
$$2uu_y+2vv_y=0$$
and so
$$uu_y=-vv_y$$
Supposing $f$ is analytic from the Cauchy-Riemann Equations we have
$$uv_x=vu_x$$
and so
$$uv_x-vu_x=0=\frac{\partial}{\partial y}\left( \frac{u}{v}\right)$$
What about the $v^2$ factor? Do we presume it is equal to $1$?
Integrating gives
$$\frac{u}{v}=c \Rightarrow u=cv$$
where $c$ is some constant. We have
$$u^2+c^2u^2=x^2$$
or
$$u=\frac{x}{\sqrt{1+c^2}}$$??
which says $v=\frac{cx}{\sqrt{1+c^2}}$ and since $v^2=1$ we have to have $c^2x^2=1+c^2$ or $x=\sqrt{\frac{1}{c^2}+1}$ which gives us $x$ constant which a contradiction of our assumption $z \in \{z:|z|<1\}$ and so this is not possible.
|
Consider $f$, restricted to the imaginary axis. However, if you mean $|f(z)| = |x|$ and only want to work with the Cauchy-Riemann differential equations (CRDE), derive $u^2 + v^2 = x^2$ with respect to $y$ and use CRDE to see that $u = cv$ for some $c$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find a value of $n$ such that the coefficients of $x^7$ and $x^8$ in the expansion of $\displaystyle \left(2+\frac{x}{3}\right)^{n}$ are equal
Question: Find a value of $n$ such that the coefficients of $x^7$ and $x^8$ in the expansion of $\displaystyle \left(2+\frac{x}{3}\right)^{n}$ are equal.
My attempt:
$\displaystyle \binom{n}{7}=\binom{n}{8} $
$$ n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6) \times 2^{n-7} \times (\frac{1}{3})^7= n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6)(n-7) \times 2^{n-8} \times (\frac{1}{3})^8 $$
$$ \frac{6}{7!} = \frac{n-7}{40320} $$
$$ n-7 = 48 $$
$$ n=55 $$
|
The coefficient of $x^7$ is
$$\binom{n}{7}\frac{2^{n-7}}{3^7}$$
And the coefficient of $x^8$ is
$$\binom{n}{8}\frac{2^{n-8}}{3^8}$$
Setting them equal and cancelling the common factor $2^{n-8}/3^{8}$, we get
$$\binom{n}{8}=6\binom{n}{7}$$
that is, $\dfrac{n-7}{8}=6$, so $n=55$.
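A quick symbolic check (just a sketch) confirms that $n=55$, the value the question also arrives at, really does equalize the two coefficients:

```python
# Sketch: expand (2 + x/3)^55 symbolically and compare the x^7 and x^8 coefficients.
from sympy import symbols, expand

x = symbols("x")
poly = expand((2 + x / 3) ** 55)
print(poly.coeff(x, 7) == poly.coeff(x, 8))   # True
```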
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1685895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
A random invertible matrix I am working on a project for which I need to generate a random square invertible matrix.
I found out how to generate a random square matrix, but I want to be sure that it is invertible without having to compute the determinant or generate the matrix multiple times. Can you please give me a tip?
|
One way to be sure that a matrix has a nonzero determinant is to make it strictly diagonally dominant (say, for example, on each column $j$, $|a_{jj}|> \sum_{i=1,\dots,n,\ i \neq j}|a_{ij}|$); see https://en.wikipedia.org/wiki/Diagonally_dominant_matrix. It can also be done on rows.
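If you are working in Python, a minimal NumPy sketch of this idea (the uniform range and the function name are just illustrative) is:

```python
# Sketch: draw random entries, then overwrite the diagonal so each |a_ii| strictly
# exceeds the sum of the other absolute values in its row (strict diagonal dominance).
import numpy as np

def random_invertible(n, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    A = rng.uniform(-1.0, 1.0, size=(n, n))
    off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    np.fill_diagonal(A, off_diag + 1.0)
    return A

A = random_invertible(5)
print(np.linalg.matrix_rank(A))   # 5, i.e. full rank / invertible
```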
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Solving an infinite series $$\sum_{c=1}^\infty \frac{1}{(2c+1)^2(2c-1)^2} = \frac{1}{16}(\pi^2 -8)$$
I got the result using wolfram alpha but I don't know how to calculate it. I tried breaking it into telescopic sums but it can't be separated like that. Any hints?
|
$$\frac1{(2n+1)^2(2n-1)^2}=\frac14\left(\frac1{2n-1}-\frac1{2n+1}\right)^2=\frac14\left(\frac1{(2n-1)^2}-\frac2{4n^2-1}+\frac1{(2n+1)^2}\right)$$
Now, we get
$$\frac{\pi^2}6=\sum_{n=1}^\infty\frac1{n^2}=\frac14\sum_{n=1}^\infty\frac1{n^2}+\sum_{n=1}^\infty\frac1{(2n-1)^2}\implies$$
$$\sum_{n=1}^\infty\frac1{(2n-1)^2}=\frac{\pi^2}8\;,\;\;\sum_{n=1}^\infty\frac1{(2n+1)^2}=\frac{\pi^2}8-1$$
Telescoping, we also get
$$\sum_{n=1}^\infty\frac1{4n^2-1}=\frac12$$
so all in all we get that your series equals
$$\frac14\left(\frac{\pi^2}8-1+\frac{\pi^2}8-1\right)=\frac14\left(\frac{\pi^2}4-2\right)=\frac1{16}\left(\pi^2-8\right)$$
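A quick numeric check of the closed form (a sketch that truncates the series at $N$ terms):

```python
# Sketch: partial sum of the series versus the claimed closed form.
import math

N = 200000
s = sum(1.0 / ((2 * c + 1) ** 2 * (2 * c - 1) ** 2) for c in range(1, N + 1))
print(s, (math.pi ** 2 - 8) / 16)   # both approximately 0.11685
```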
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that (p ∧ q) → (p ∨ q) is a tautology? I am having a little trouble understanding proofs without truth tables particularly when it comes to →
Here is a problem I am confused with:
Show that (p ∧ q) → (p ∨ q) is a tautology
The first step shows: (p ∧ q) → (p ∨ q) ≡ ¬(p ∧ q) ∨ (p ∨ q)
I've been reading my text book and looking at Equivalence Laws. I know the answer to this but I don't understand the first step.
How is (p ∧ q)→ ≡ ¬(p ∧ q)?
If someone could explain this I would be extremely grateful. I'm sure its something simple and I am overlooking it.
The first thing I want to do when seeing this is
(p ∧ q) → (p ∨ q) ≡ ¬(p → ¬q)→(p ∨ q)
but the answer shows:
¬ (p ∧ q) ∨ (p ∨ q) (by logical equivalence)
I don't see a equivalence law that explains this.
|
The step uses the conditional (material implication) law: $p\to q \equiv \lnot p\lor q$, i.e. an implication holds exactly when its hypothesis fails or its conclusion holds. Applying it with $p\wedge q$ in place of $p$ and $p\vee q$ in place of $q$ gives $(p \wedge q) \to (p \vee q) \equiv \lnot(p \wedge q) \lor (p \vee q)$, which is the first step you were asking about.
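If you want to convince yourself mechanically, a brute-force truth table over the four assignments settles it (a small Python sketch):

```python
# Brute-force truth table: the two forms agree and both are always True.
from itertools import product

for p, q in product([False, True], repeat=2):
    conditional = (not (p and q)) or (p or q)      # ¬(p ∧ q) ∨ (p ∨ q)
    as_if_then = (p or q) if (p and q) else True   # (p ∧ q) → (p ∨ q)
    print(p, q, conditional, as_if_then)           # last two columns: True in every row
```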
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 8,
"answer_id": 5
}
|
Program to create graph with modified bessel function $$e =\frac{1}{x}\frac{I_{1}(2x)}{I_{0}(2x)}$$
$$e =\frac{2}{1+\sqrt{1+\frac{8}{3}x^2}}$$
$$e =\frac{1}{\sqrt{x\frac{4}{3}}} \frac{I_{\frac{2}{3}}(x\frac{4}{3}^{3/2})}{I_{\frac{-1}{3}}(x\frac{4}{3}^{3/2})}$$
I want to plot these functions, but I'm having a hard time finding a program that can graph the modified Bessel function of the first kind. Can anyone recommend one?
|
Have you tried Wolfram Alpha? Just type Plot[BesselI(1,2x)/BesselI(0,2x)/x,{x,-5,5}] into the dialog box and press Enter. If you want to plot more than one function at once, the syntax is Plot[{... , ... , ...},{x, ... , ...}], where the dots are to be replaced by the definitions of the functions and the values of the horizontal plot limits.
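If you would rather script it, SciPy exposes the modified Bessel function of the first kind as scipy.special.iv, so a short Python/matplotlib sketch works as well (the plotting range below is just an example, and only the first two curves are shown):

```python
# Sketch: scipy.special.iv(v, z) is the modified Bessel function I_v(z).
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import iv

x = np.linspace(0.01, 5, 400)
plt.plot(x, iv(1, 2 * x) / (x * iv(0, 2 * x)), label=r"$I_1(2x)/(x\,I_0(2x))$")
plt.plot(x, 2 / (1 + np.sqrt(1 + 8 * x**2 / 3)), label=r"$2/(1+\sqrt{1+8x^2/3})$")
plt.legend()
plt.show()
```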
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Possible to determine position of number based on base 3 order of magnitude? I struggled with a good title for this - sorry if it ended up being confusing. I am attempting to partition a series of decimal numbers (starting at 1) by base 3 orders of magnitude, essentially breaking them into ordered groups of sizes 1, 3, 9, 27, etc.
I've had a hard time verbalizing what I am trying to do here (even to myself), so if this is still confusing perhaps the breakdown below will help?
1
--- 3^0 is 1, so first partition contains 1 number
2
3
4
--- 3^1 is 3, so second partition contains 3 numbers
5
6
7
8
9
10
11
12
13
--- 3^2 is 9, so third partition contains 9 numbers
14
...
40
--- 3^3 is 27, so fourth partition contains 27 numbers
and so on
Given a base 10 number, I would like to determine which partition it would be in, based on the breakdown above. So:
14 would return 3
13 would return 2
6 would return 2
1 would return 0
Can anyone suggest any ways to accomplish this or perhaps point me in the right direction?
My background isn't in mathematics so I apologize if it's unclear what I'm asking here or if this is a bad question.
|
Let's focus on the last number $a_n$ of the $n$th partition (where $n$ starts from $0$):
$$
1, 4, 13, 40, \ldots
$$
Observe that it is the partial sum of a geometric series:
$$
a_n = 1 + 3 + \cdots + 3^n = \frac{3^{n + 1} - 1}{3 - 1}
$$
Taking the inverse function and using a ceiling function, we conclude that the $m$th number must be in the partition given by:
$$
p(m) = \lceil\log_3(2m + 1)\rceil - 1
$$
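A direct integer implementation (a sketch) avoids floating-point logarithms, which can misround exactly at the boundaries $2m+1 = 3^k$, and reproduces the examples from the question:

```python
# Sketch: walk the partition boundaries 1, 4, 13, 40, ... = (3**(k+1) - 1) // 2.
def partition_index(m):
    k, last = 0, 1
    while m > last:
        k += 1
        last = (3 ** (k + 1) - 1) // 2
    return k

for m in (1, 6, 13, 14, 40, 41):
    print(m, partition_index(m))   # 0, 2, 2, 3, 3, 4
```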
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is a function satisfying these premises, ae. positive or zero? Let g be defined on $[0,1]$ such that the integral between $t_0$ and $t_1$, for all $t_0 < t_1$, is positive or zero. Does g satisfy $g(x) \ge 0$ ae. ?
If not, what if we add the continuity ?
I thought about saying that if it was negative on some measure-positive set A, then take an interval within A and obtain a contradiction. But not every measure-positive set contains an interval.
|
Suppose $g$ is integrable and $\int_a^b g \ge 0$ for all $0 \le a \le b \le 1$. Then $g(x) \ge 0$ ae. $x \in [0,1]$.
Let $\phi(t) = \int_0^t g$, then the Lebesgue differentiation theorem shows that
$\phi$ is ae. differentiable with $\phi'(t) = g(t)$ ae.
Suppose $x<y$, then $\phi(y) = \phi(x)+ \int_x^y g \ge \phi(x)$, so
$\phi$ is non decreasing, hence $\phi'(t) \ge 0$.
Hence $g(t) \ge 0$ ae.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving $T(n) = T(n-2) + \log_2 n$ to be $\Omega(n\log_2 n)$ As in the title, for the recurrence $T(n) = T(n-2) + \log_2 n$ I worked out how to prove that it belongs to $O(n\log n)$; however, I'm having trouble proving that it is also $\Omega(n\log n)$, i.e. lower-bounded by $n\log_2 n$ asymptotically.
Following the standard inductive procedure provided by CLRS algorithm text, I have:
Assume that for some integer $n$, $T(m) \geqq Cm\log_2 m$ holds for all $m<n$. Then:
$$T(n) \geqq C(n-2)\log_2(n-2) + \log_2 n$$
From this point on, it gets tricky (for me at least) to deduce the conclusion that $T(n) \geqq Cn\log_2n$
One possible way to move on is to give an observation that for all $n\geqq 4$, $\log_2(n-2)\geqq \log_2(n/2)=\log_2 n - 1$, but then this will generate a $-Cn$ term that I cannot get rid of.
The solutions manual of CLRS 2e suggests strengthening the induction hypothesis, i.e. assuming for all $m<n$, $T(m) \geqq Cm\log_2 m + Dm$ instead. It then arrives at $T(n) \geqq Cn\log_2 n$ and claims the proof is complete, which I think is incorrect as an inductive proof must arrive at exactly the same expression as the inductive hypothesis ($T(n) \geqq Cn\log_2 n + Dn$). In fact, I don't see how introducing a $Dm$ term here makes it easier at all to arrive at $T(n) \geqq Cn\log_2n + Dn$. I think I'm stuck. Any insight is much appreciated!
|
The recurrence relation is given as $T(n) = T(n-2) + \lg n$.
Let's unroll the formula half the way, that is, we will do $k = \lfloor n/4\rfloor$ steps
\begin{align}
T(n) &= T(n-2) + \lg n \\
&= T(n-4) + \lg (n-2) + \lg n \\
&\ \ \vdots \\
&= T(n-2k-2) + \underbrace{\lg (n-2k) + \lg \big(n-2(k-1)\big) + \ldots + \lg (n-2\cdot0)}_{k+1 \text{ summands, each }\geq\ \lg(n-2k)\ \simeq\ \lg(n/2)}\\
&\geq k\cdot \lg(n-2k).
\end{align}
Since both $k$ and $n-2k$ are of order $\Theta(n)$, $T(n)$ is bounded from below by $\Omega(n \log n)$.
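A quick numeric sanity check of the bound (a sketch with the arbitrary base cases $T(0)=T(1)=0$): the ratio $T(n)/(n\lg n)$ settles near $1/2$, consistent with $\Theta(n\log n)$.

```python
# Sketch: tabulate T(n) = T(n-2) + lg n with assumed base cases T(0) = T(1) = 0.
from math import log2

N = 100000
T = [0.0] * (N + 1)
for n in range(2, N + 1):
    T[n] = T[n - 2] + log2(n)

for n in (100, 1000, 10000, 100000):
    print(n, T[n] / (n * log2(n)))   # ratios approach 0.5
```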
I hope this helps $\ddot\smile$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Non-separated quotient of separated scheme I am reading Mumford's GIT book. I found the following claim there.
Let $X$ be an algebraic variety. Let $G$ be an algebraic group acting on $X$. Then the categorical quotient of $X$ by $G$ may be non-separated.
Question: Could you construct an example?
|
This answer is given by Takumi Murayama. I am just writing down the details.
Consider the quotient of $X = \mathbb{C}^2 \backslash \{ 0 \}$ by $\mathbb{C}^*$ action $\lambda (x, y) = ( \lambda x, \lambda^{-1} y )$.
We can consider two charts
$U_1 = \{ (x,y) \in \mathbb{C}^2 \backslash \{ 0 \}$ such that $x \neq 0 \}$
$U_2 = \{ (x,y) \in \mathbb{C}^2 \backslash \{ 0 \}$ such that $y \neq 0 \}$
Let me consider $z= xy$ and $t=x$ as coordinates on $U_1$.
$U_1 = \mathbb{C} \times \Big( \mathbb{C} \backslash \{ 0 \} \Big) $ where $z \in \mathbb{C}$ and $t \in \mathbb{C} \backslash \{ 0 \}$. The action is given by $\lambda(z, t ) = (z, \lambda t)$. So the quotient $U_1 / G$ is $\mathbb{C}$ with coordinate $z = xy$.
If one swaps $x$ and $y$, the same argument applies to $U_2$. So the quotient $U_2 / G$ is also $\mathbb{C}$ with coordinate $xy$. The quotient $( U_1 \cap U_2 ) / G$ is $\mathbb{C} \backslash \{ 0 \}$.
The quotient of the whole space, $(U_1 \cup U_2)/G$, can be obtained by gluing $U_1 / G$ and $U_2 / G$ along $(U_1 \cap U_2)/G$. So we indeed get a line with two origins.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to use Borel-Cantelli specifically to show that the probability of a simple random walk returning to the origin in finite time is 1? Suppose we have that $X_i$ are iid random variables with $P(X_i =1) = P(X_i = -1) = 1/2$ and that $X_0 = 0$. Then, we define the simple symmetric random walk to be $S_n = \sum_{i=1}^n X_i$. We define the hitting time as $\tau_{i,j} = min\{n \geq 1 : S_n = j \ |\ S_0 = i\}$. I would like to show that $P(\tau_{0,0} < \infty) = 1$.
Specifically, I would like to use the Borel-Cantelli lemma: if I can define some independent sequence of events here and show that the sum of their probabilities is infinite, then moves of some finite length happen infinitely often, and so I can show that the walk must return to $0$. However, I am not sure how to define such an indexed event. Could someone give me a hint? Thank you
|
As pointed out in comments, there could be good ways of doing this using not Borel-Cantelli, but the properties of either Markov Chains or (I would recommend) martingales, since your random walk is both a Markov chain and a martingale.
However, there is a neat argument which uses the Borel-Cantelli lemma. Suppose that the random walk is at $n$. The probability of reaching $0$ before reaching $n+1$ is $\frac{1}{n+1}$.
For the random walk to go back to $0$, it must reach some maximum $k$, then go back to $0$ before ever reaching $k+1$. (Or the same argument by symmetry if it goes to $-1$ before $1$.) So the events that it reaches $n$ for the first time, and then reaches $0$ before $n+1$, are disjoint and independent. The sum of their probabilities is infinite, hence at least one of them must occur with probability $1$.
To see that the probability, when at $n$, of reaching $0$ before $n+1$ is $\frac{1}{n+1}$, you could use a simple martingale argument. If win \$1 for heads and lose \$1 for tails, and stop gambling when you win \$1 or lose \$n, the probability of winning or losing must be such that your expectation is \$0.
One can also see that remaining in a bounded range forever, without ever reaching either endpoint, has probability $0$. To see this, just consider that there is a positive probability of getting $m$ heads or $m$ tails in a row, where $m$ is the length of the range. Given infinitely many tries, we must, with probability $1$, get such a run eventually.
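For a quick numerical reassurance of the gambler's-ruin step above, here is a small Monte Carlo sketch (the choices $n=9$ and the number of trials are arbitrary):

```python
# Monte Carlo sketch: starting at n, estimate P(hit 0 before n + 1); theory says 1/(n+1).
import random

rng = random.Random(0)

def hits_zero_first(n):
    pos = n
    while 0 < pos < n + 1:
        pos += rng.choice((-1, 1))
    return pos == 0

n, trials = 9, 20000
freq = sum(hits_zero_first(n) for _ in range(trials)) / trials
print(freq, 1 / (n + 1))   # both close to 0.1
```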
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1686977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show that a regularizing operator $K : C_c(\Omega) \to \mathcal{D}'(\Omega)$ has kernel $k \in C^\infty(\Omega \times \Omega)$. I am reading Francois Treves' Introduction to pseudodifferential and Fourier integral operators, vol. I.
Let $\Omega \subseteq \mathbb{R}^n$ be open. On page 11, Treves defines what it means for a continuous linear map $K : C_c^\infty(\Omega) \to \mathcal{D}'(\Omega)$ to be regularizing. $K$ is called regularizing if and only if $K$ extends to a continuous linear map $\mathcal{E}'(\Omega) \to C^\infty(\Omega)$.
Then Treves states the following proposition.
If $K : C^\infty_c(\Omega) \to \mathcal{D}'(\Omega)$ is regularizing, then the associated Schwartz kernel $k \in \mathcal{D}'(\Omega \times \Omega)$ is in $C^\infty(\Omega \times \Omega)$.
I am trying to prove this, and here's what I have so far. It is enough to show that $k$ is $C^\infty(\Omega \times \Omega)$ on tensor products since they are dense in $\mathcal{D}(\Omega \times \Omega)$. So let $\varphi, \psi \in C_c^\infty(\Omega)$. Use the statement of the Schwartz Kernel Theorem, and the fact that $K\psi \in C^\infty (\Omega)$ (since $\psi \in \mathcal{E}'(\Omega))$, to get
$$\langle k, \varphi \otimes \psi \rangle = \langle K\psi , \varphi \rangle = \int_\Omega (K\psi)(x)\varphi(x)dx.$$
But what I really want is some smooth function $k(x,y)$ of two variables, integrated against $\varphi \otimes \psi$. That is, what I want is
$$\langle k, \varphi \otimes \psi \rangle = \int_\Omega \int_\Omega k(x,y)(\varphi \otimes \psi)(x,y)dxdy$$.
Unfortunately, I am stuck and can't move further with the calculation. Hints or solutions are greatly appreciated!
|
This result can be found in Hörmander's book, The Analysis of Linear Partial Differential Operators I; more precisely, Theorem 5.2.6, page 132.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Identity having to do with weak derivative For $a<b \in \mathbb{R}$, let $(a,b) = G \subset \mathbb{R}$ be a bounded interval in the real numbers.
Show that there exists no $v \in L^2(G)$ and no $y \in G$ such that $$ \int_G v \varphi \text{d}x = \varphi(y) $$
holds for all test functions $\varphi \in C^{\infty}_0(G) $.
Since I don't have a mathematics background, I have trouble with this kind of problem. Can someone give me a generous hint on how to solve such a problem (and this one in particular)?
|
HINT: Argue by contradiction: suppose such $v,y$ exist. By Cauchy-Schwarz, the set $$A=\{ \varphi (y) : \varphi \in C^{\infty}_0 (G), ||\varphi||_2 \le 1\}$$ is bounded (it is contained in the interval $[-||v||_2, ||v||_2]$).
Now, try to construct some sequence $\{ \varphi_n \}_{n \ge 1}$ of bell-shaped smooth functions with compact support satisfying:
* $||\varphi_n||_2 \le 1$ for all $n$,
* the support of $\varphi_n$ becomes thinner and thinner around $y$ (for example, it is contained in $(y-1/n^2, y +1/n^2)$),
* $\varphi_n (y) \to + \infty$.
This will contradict the boundedness of $A$.
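For concreteness, one possible construction (only an example; any fixed bump works, and here the support shrinks like $1/n$ rather than $1/n^2$, which is just as good): pick $\psi \in C_c^\infty(\mathbb{R})$ with $\psi(0)=1$, $\operatorname{supp}\psi \subset (-1,1)$ and $\|\psi\|_2 \le 1$, and set
$$\varphi_n(x) = n^{1/4}\,\psi\bigl(n(x-y)\bigr).$$
Then $\operatorname{supp}\varphi_n \subset (y-1/n,\,y+1/n) \subset G$ for large $n$, while
$$\|\varphi_n\|_2^2 = n^{1/2}\int_{\mathbb{R}} \bigl|\psi\bigl(n(x-y)\bigr)\bigr|^2\,dx = n^{-1/2}\|\psi\|_2^2 \le 1 \qquad\text{and}\qquad \varphi_n(y) = n^{1/4} \to +\infty.$$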
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is a good notation for an “even falling factorial”? It has been suggested to me that I use this notation:
$$
\lfloor n \rfloor_2 = 2 \left\lfloor \frac n 2 \right\rfloor = \text{“even floor of $n$''} = \text{largest even integer}\le n.
$$
I also want to write about an “even falling factorial” that, for example, given the inputs $57$ and $6$, or $56$ and $6$, has this value:
$$
56\times54\times52\times50\times48\times46,
$$
i.e. it is
$$
\lfloor 57 \rfloor_2 \cdot (\lfloor 57 \rfloor_2 - 2) \cdot (\lfloor 57 \rfloor_2 - 4) \cdot (\lfloor 57 \rfloor_2 - 6) \cdot (\lfloor 57 \rfloor_2 - 8) \cdot (\lfloor 57 \rfloor_2 - 10)$$
so in general, given $n$ and $k$, it is this:
$$
\prod_{j=0}^{k-1} \lfloor n - 2j \rfloor_2.
$$
I could just call it $n\mathbin{\sharp}k$ or something like that. But my questions are:
* Is there some standard notation for this? and
* What notation would be easiest for the reader to follow when the topic is neither the notation nor the concept that it denotes but rather the notation and the concept are merely being used in the course of discussing a topic for which they are useful?
|
Could you just define it as a two-variable function $f(n,k)?$
You could avoid introducing new notation if you wrote it
$$f(n,k)=\displaystyle\prod_{i=0}^{k-1} \left(2\left\lfloor\frac{n}{2}\right\rfloor - 2i\right)$$
Another way would be
$$f(n,k) = k! \cdot 2^k \cdot \binom{\left\lfloor\frac{n}{2}\right\rfloor}{k}$$
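A few lines of Python (a sketch using math.comb and math.prod, so Python 3.8+) confirm that both expressions reproduce the $n=57$, $k=6$ example from the question:

```python
# Sketch: both formulas give 56*54*52*50*48*46 for n = 57, k = 6.
from math import comb, factorial, prod

def even_falling_factorial(n, k):
    m = 2 * (n // 2)                      # the "even floor" of n
    return prod(m - 2 * j for j in range(k))

print(even_falling_factorial(57, 6))              # 17360179200
print(factorial(6) * 2 ** 6 * comb(57 // 2, 6))   # 17360179200 as well
```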
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Show that if $ \lim_{x \to a} f(x) = -\infty$ then $\lim_{x \to a} (f(x))^2 = \infty$
Let $a \in \mathbb{R}$. Use the $\epsilon-\delta$ definition of the limit to show that if $\displaystyle \lim_{x \to a}f(x) = -\infty$ then $\displaystyle \lim_{x \to a} (f(x))^2 = \infty$.
We are given that $$\forall M < 0, \exists \delta \quad 0 <|x-a| < \delta \quad \implies \quad f(x) < M$$ and need to show $$\forall M > 0, \exists \delta \quad 0<|x-a|<\delta \quad \implies \quad (f(x))^2 > M$$. How can I get from the first to the second?
|
The two $M$s can be distinct provided they can be arbitrarily large. Here is how I would write it.
$\forall\,M > 0, \exists\,\delta > 0:\ 0 < |x - a| < \delta \Rightarrow f(x) < -M \iff -f(x) > M \Rightarrow (f(x))^{2} > M^{2}$, and $M^{2}$ can be made arbitrarily large.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.