If one of two sets has larger cardinality, there is a map onto the other set
Let A and B be sets with the cardinality of A less than or equal to B. Show there exists an onto map from B to A.
I am struggling with this proof. I don't know how to show this. Any help would be greatly appreciated
|
$A$ is not empty, so fix some $a\in A$.
Since $|A|\le|B|$, there exists an injective map $f:A\rightarrow B$, and $f$ has an inverse $g:f(A)\rightarrow A$ on its image. Define $h:B\rightarrow A$ by $h(x)=g(x)$ for $x\in f(A)$, and $h(x)=a$ otherwise. Then $h$ is onto, since $h(f(a'))=a'$ for every $a'\in A$.
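For what it's worth, here is the construction in miniature (a toy sketch with made-up finite sets; not needed for the proof):

```python
# Toy illustration of the construction (sets and names made up here):
# an injection f: A -> B yields a surjection h: B -> A.
A = {1, 2}
B = {"x", "y", "z"}
f = {1: "x", 2: "y"}                 # an injective map A -> B
g = {v: k for k, v in f.items()}     # its inverse on the image f(A)

a0 = next(iter(A))                   # any fixed element of A (A is nonempty)
h = {b: g.get(b, a0) for b in B}     # h(b) = g(b) if b in f(A), else a0

assert set(h.values()) == A          # h is onto A
```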
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
How to evaluate $ \int_0^\infty \frac{\log x}{(x^2+a^2)^2} dx $ Evaluate $$ \int_0^\infty \frac{\log x}{(x^2+a^2)^2} dx $$ $$(a>0) $$
How can I use contour appropriately?
What is the meaning of this integral?
(additionally posted)
I tried to solve this problem.
First, I take a branch $$ \Omega=\mathbb C - \{z|\text{Re}(z)=0\; \text{and} \; \text{Im}(z)\le0\} $$
Then ${\log_\Omega z}=\log r +i\theta (-\frac{\pi}{2}\lt\theta\lt\frac{3\pi}{2})$
Now, $\frac{\log z}{(z^2+a^2)^2}$ is holomorphic in $\Omega - \{ai\}$ with double poles at $ai$.
Now I'll take the contour which forms an indented semicircle.
For any $0\lt\epsilon\lt{a}$, where $\max (1,a)\lt R$, $\Gamma_{R,\epsilon}\subseteq\Omega - \{ai\}$ and in $\Omega$, $i=e^{i\pi/2}$.
Now using the residue formula, $$2\pi{i}\operatorname*{Res}_{z=ai}\frac{\log_\Omega{z}}{(z^2+a^2)^2}=2\pi{i}\operatorname*{lim}_{z\to ai}\frac{d}{dz}(z-ai)^2\frac{\log_\Omega{z}}{(z^2+a^2)^2}=\frac{\pi}{2a^3}(\log_\Omega{ai}-1)$$
Now, for the last part, taking $i=e^{i\pi/2}$, this is equal to $\frac{\pi}{2a^3}(\log{a}-1+i\pi/2)$.
So, I can split integrals by four parts,
$$\int_{\epsilon}^R dz + \int_{\Gamma_R} dz + \int_{-R}^{-\epsilon} dz + \int_{\Gamma_\epsilon} dz$$
First, evaluate the second part,
$$\left|\int_{\Gamma_R} dz\right|\le\int_0^{\pi}\left|\frac{\log_\Omega{Re^{i\theta}}}{(R^2e^{2i\theta}+a^2)^2}iRe^{i\theta}\right|d\theta$$
Note that
$$\left|\log_\Omega{Re^{i\theta}}\right|=\left|\log R+i\theta\right|\le\left|\log R\right|+|\theta|$$
$$\left|R^2e^{2i\theta}+a^2\right|\ge R^2-a^2\quad (R\gt a)$$
Then, the 2nd part $\le\frac{R(\pi R+\frac{\pi^2}{2})}{(R^2-a^2)^2}\to 0\;\text{as}\;R\to\infty$, using $\left|\log R\right|\lt R$ for $R\gt 1$.
So the 4th part similarly goes to $0$.
For the 3rd part, substitute $t=-z$:
$$\int_\epsilon^{R}\frac{\log t}{(t^2+a^2)^2}dt + i\pi\int_\epsilon^{R}\frac{dt}{(t^2+a^2)^2}$$
And $\;i\pi\lim\limits_{{\epsilon \to 0},\;{R\to\infty}}\int_\epsilon^{R}\frac{dt}{(t^2+a^2)^2}=\frac{i\pi^2}{4a^3}$, since $\int_0^\infty\frac{dt}{(t^2+a^2)^2}=\frac{\pi}{4a^3}$.
With tedious calculations, I got $\frac{\pi}{4a^3}(\log a -1)$.
|
I thought that it might be instructive to add to the answer posted by @RonGordon. We note that the integral of interest $I_1(a^2)$ can be written
$$I_1(a^2)=\int_0^\infty \frac{\log x}{(x^2+a^2)^2}\,dx=-\frac{dI_2(a^2)}{d(a^2)}$$
where
$$I_2(a^2)=\int_0^\infty\frac{\log x}{x^2+a^2}\,dx$$
Now, we can evaluate the integral $J(a^2)$
$$J(a^2)=\oint_C \frac{\log^2z}{z^2+a^2}\,dz$$
where $C$ is the key-hole contour defined in the aforementioned post. There, we have
$$\begin{align}
J(a^2)&=-4\pi i\,I_2(a^2)+4\pi^2\int_0^\infty \frac{1}{x^2+a^2}\,dx \\\\
&=-4\pi i\, I_2(a^2)+\frac{2\pi^3}{a}\\\\
&=2\pi i \left(\text{Res}\left(\frac{\log^2 z}{z^2+a^2},ia\right)+\text{Res}\left(\frac{\log^2 z}{z^2+a^2},-ia\right)\right)
\end{align}$$
Finally, after calculating the residues, and simplifying, we obtain the integral $I_2(a^2)$ whereupon differentiating with respect to $a^2$ recovers the integral of interest $I_1(a^2)$. And we are done.
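The closed form obtained above, $\frac{\pi}{4a^3}(\log a-1)$, is easy to sanity-check numerically (a sketch assuming `scipy` is available):

```python
import math
from scipy.integrate import quad

def check(a):
    # numeric value of the integral of interest vs the closed form
    val, _ = quad(lambda x: math.log(x) / (x**2 + a**2)**2, 0, math.inf)
    closed = math.pi / (4 * a**3) * (math.log(a) - 1)
    return val, closed

for a in (0.5, 1.0, 2.0):
    val, closed = check(a)
    assert abs(val - closed) < 1e-6, (a, val, closed)
```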
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Given a 95% confidence interval why are we using 1.96 and not 1.64? I'm studying for my test and am reviewing the solution to some example problems. The problem is:
You are told that a new standardized test is given to 100 randomly selected third grade students in New Jersey. The sample average score is $\overline Y$ on the test is 58 and the sample standard deviation is $s_y$ = 8.
The first part of the question asks you to construct a 95% confidence interval for the mean score of all New Jersey third graders.
First I found $$\text{SE}_y = \frac{s_y}{\sqrt{n}} = \frac{8}{\sqrt{100}} = 0.8.$$
Then I did
$$58 \pm (0.8)(1.64).$$
However, instead of using $1.64$ they used $1.96$. Can someone explain why?
|
Two reasons:
1) Students are in New Jersey.
2) The $z$-value associated with 95% is $z=1.96$; this can be seen as part of the 68-95-99.7 rule, which tells you that values within $\pm 1$, $2$, or $3$ standard deviations of the mean comprise roughly 68%, 95%, or 99.7% of all values in the distribution. (Strictly, $1.96$ is the exact two-sided 95% value; $1.64$ is the one-sided 95% value.) Or look it up in a table.
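If it helps, here is where the two numbers come from computationally (a sketch assuming `scipy` is available):

```python
# Sketch: where 1.96 vs 1.64 come from, plus the interval itself.
import math
from scipy.stats import norm

two_sided_95 = norm.ppf(0.975)   # central 95% leaves 2.5% in each tail
one_sided_95 = norm.ppf(0.95)    # 95% below, all 5% in one tail

n, s, ybar = 100, 8, 58
se = s / math.sqrt(n)            # 8 / sqrt(100) = 0.8
ci = (ybar - two_sided_95 * se, ybar + two_sided_95 * se)

print(round(two_sided_95, 2), round(one_sided_95, 2))  # 1.96 1.64
print(tuple(round(c, 2) for c in ci))                  # (56.43, 59.57)
```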
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1480904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
}
|
If $f\in C^{1}(0,1]\cap C[0,1]$ and $f'\not\in L^{1}(0,1)$, then $f$ oscillates at $0$? Please think it easy because it is not an assignment.
Question : Let $f\in C^{1}(0,1]\cap C[0,1]$ and $f'\not\in L^{1}(0,1)$. Then, does $f$ oscillate frequently and $f'$ is unbounded at $0$?
When I asked a similar question before, Daniel Fischer taught me that such functions satisfy the above conditions.
I accepted it at the time, but an abstract proof seems difficult after further thought.
Of course, it is clear that such function is Riemann integrable by the fundamental theorem of calculus and $f'$ should be unbounded at $0$.
The problem is whether $f$ should oscillate at $0$ when thinking in the framework of Lebesgue integral.
I think that an image is what the infinite sum of slope diverges due to oscillation but I don't know well how it can prove.
I'm glad if you give me the strategy of proof when you can prove.
It's good even only hints.
Thank you in advance.
|
If $f'$ were bounded near $0$, then since $f'$ is continuous on $(0,1]$,
it would follow that $f'$ is bounded on $[0,1]$ and hence integrable.
Since $\int_x^1 f' = f(1)-f(x)$ for $x>0$, we see that $\lim_{x \downarrow 0} \int_x^1 f'$ exists, hence $f'$ must change sign 'frequently' near zero (if $f'$ does not change sign on $(0,\epsilon)$, then it would be integrable).
The description 'oscillates frequently' is a bit vague. However $f$ cannot
be monotone on any interval $[0,\epsilon)$, otherwise $f'$ would not change
sign (near zero) and would be integrable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Understanding of countable set short theorem. I am struggling to understand what the theorem is stating.
Question 1
Let n equals 2, would that mean that $B_n$ would be $B_2$= $\left\{\left\{a_1\right\},\left\{a_1,a_2\right\}\right\}$?
Question 2
Given n=2 and 3 how would small b, be described?
In the case of $B_{n-1}$,would the sets (b,a), be
(0,$a_1$),($b_1$,$a_2$).
I have a feeling I am totally lost, and I am hoping someone could provide a concrete example so that I understand what it is stating.
|
My answer for Question 1:
Let $A = \{a,b,c\}$; then $B_2 = \{(a,a),(a,b),(a,c),(b,a),(b,b),(b,c),(c,a),(c,b),(c,c)\}$.
That's why the proof says $B_1 = A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that the set {→, ¬} is functionally complete I am not sure how to do this question. I have looked at some of the other similar questions but to no avail
I know that for a set of operators to be functionally complete, the set can be used to express all possible truth tables by combining members of the set into a Boolean expression
|
The implication $\to$ is defined by
$$ a\to b \equiv \neg a \vee b. $$
This means, that
$$ \neg a \to b \equiv a \vee b$$
and thus you can express logical or using $\to$ and $\neg$. Furthermore, you know De Morgan's laws, and you have
$$ \neg(\neg a \vee \neg b) \equiv \neg\neg a\wedge \neg\neg b \equiv a\wedge b.$$
Thus you can express all logical operators using $\to$ and $\neg$. The set of these is known to be functionally complete.
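The two reductions are easy to confirm exhaustively with a small truth-table check (my own sketch):

```python
from itertools import product

def implies(p, q):
    return (not p) or q              # the definition: a -> b = (not a) or b

for a, b in product([False, True], repeat=2):
    # or:  a v b  is equivalent to  (not a) -> b
    assert (a or b) == implies(not a, b)
    # and: a ^ b  is equivalent to  not(a -> not b)  (De Morgan via -> and not)
    assert (a and b) == (not implies(a, not b))
```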
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is $x^6-3$ irreducible over $\mathbb{F}_7$? I know that $\mathbb{F}_7=\mathbb{Z}_7$, and all the solutions of $x^6-1=0$ over $\mathbb{Z}_7$ are $1,\dots,6$, so if we let a root of $x^6-3$ be $t$, then the solutions of $x^6-3=0$ are $t,\dots,6t$, which shows that none of the solutions lies in $\mathbb{Z}_7$. However, I can't conclude from this fact alone that $x^6-3$ is irreducible (since a polynomial of degree 2 or more could divide $x^6-3$). Is it irreducible over $\mathbb{Z}_7$? Why?
|
Assume that you have an irreducible polynomial $P$ of degree $d$ dividing your polynomial. Then taking $r$ to be the class of $X$ in the field :
$$\mathbb{F}_{7^d}:=\frac{\mathbb{F}_7[X]}{(P)} $$
We know that $r^6=3$ in this new field (since $P$ divides $X^6-3$). On the other (because you know Lagrange's theorem) you have that :
$$r^{7^d-1}=1 $$
Now I claim that the two equations are incompatible when $d=2$ or $3$. Let us do the case $d=2$. In that case we have $r^{48}=1$ and $r^6=3$, but $48=6\cdot 8$, so $r^{48}=3^8\equiv 2 \pmod 7$, which is not $1$.
In the case $d=3$, we have $r^{342}=1$ and $r^6=3$, but $342=6\cdot 57$, so $r^{342}=3^{57}\equiv 6 \pmod 7$, which is not $1$.
Hence no irreducible polynomial of degree $1$, $2$ or $3$ divides $X^6-3$; since any nontrivial factorization of a degree-$6$ polynomial must involve an irreducible factor of degree at most $3$, your polynomial is irreducible.
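A quick computational sanity check of the argument (plain Python; `pow(x, k, 7)` computes $x^k \bmod 7$):

```python
# No roots in F_7: 3 is not a 6th power mod 7
assert all(pow(x, 6, 7) != 3 for x in range(7))

# Degree-2 case: r^48 = (r^6)^8 = 3^8, which is 2 mod 7, not 1
assert pow(3, 8, 7) == 2

# Degree-3 case: r^342 = (r^6)^57 = 3^57, which is 6 mod 7, not 1
assert pow(3, 57, 7) == 6
```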
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Subspace topology of matrices I have been given homework in which the $2\times 2$ matrices of determinant $1$ are equipped with the subspace topology of $\mathbb{R}^4$. However, $\mathbb{R}^4$ is a space of 4-tuples, while 2x2 matrices are not n-tuples. How do I get the open sets of this topology?
How is this set of matrices a subset of $\mathbb{R}^4$?
|
Any matrix
$$\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}\in\mbox{Mat}^{2\times 2}(\mathbb{R})$$
can be identified with the tuple $(a,b,c,d)\in\mathbb{R}^4$, so $\mbox{Mat}^{2\times 2}(\mathbb{R})$ can be thought of as a subspace of $\mathbb{R}^4$ and given the subspace topology.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
To find ratio of Length and Breadth of a Rectangle Given a rectangular paper sheet. The diagonal vertices of the sheet are brought together and folded so that a line (mark) is formed on the sheet. If this mark length is same as the length of the sheet, what is the ratio of length to breadth of the sheet?
|
Let the length be $l$, and the width be $w$. When the sheet is folded so that the diagonals meet, the fold crosses the sheet from one of the long sides to the other, and meets the long sides the same distance from the opposite corners. Let the distance from the intersection of the fold and the long side to the nearest corner be $d$, and the length of the fold be $f$.
Then, from the geometry of the fold, we can write:
$$w^2+(l-2d)^2=f^2$$
$$w^2+d^2=(l-d)^2$$
Solving for $d$ from the second equation results in:
$$d=\frac {l^2-w^2}{2l}$$
Substituting it into the first equation, and using your stipulation that $f=l$:
$$w^2-2(l^2-w^2)+\frac{(l^2-w^2)^2}{l^2}=0$$
Finally if we define the ratio $r = \frac l w$ and rearrange the above equation, we arrive at:
$$r^4-r^2-1=0$$
Thus $r^2$ is the golden ratio, so $r$ is the square root of the golden ratio.
Interesting problem!
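A numeric check of the conclusion (my own sketch): set $w=1$ and $l=\sqrt{\varphi}$ with $\varphi$ the golden ratio, and confirm that the fold length equals the length.

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio
w = 1.0
l = math.sqrt(phi)                    # claimed ratio l/w = sqrt(golden ratio)

d = (l**2 - w**2) / (2 * l)           # from the second equation
f = math.sqrt(w**2 + (l - 2*d)**2)    # fold length, from the first equation

assert abs(f - l) < 1e-12             # fold length equals the length
assert abs(l**4 - l**2 - 1) < 1e-12   # r^4 - r^2 - 1 = 0 with w = 1
```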
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
1 Form Integral along the curve Let $\alpha$ be the $1$-form on $D=\mathbb{R}^2-\{(0,0)\}$ defined by,
$$
\alpha=\frac{xdx+ydy}{x^2+y^2},
$$
where $(x,y)$ are cartesian coordinates on $D$.
*
*Evaluate the integral of the $1$-form $\alpha$ along the curve $c$ defined $c(t)=(t\cos\phi,t\sin\phi)$, where $1 \le t \le 2$ and $\phi$ is a constant.
So far:
\begin{eqnarray}
c(t)&=&(t\cos\phi, t\sin\phi)\\
c'(t)&=& (\cos\phi, \sin\phi)\\
x^2+y^2 &=&t^2\cos^2\phi+t^2\sin^2\phi=t^2\\
x\,x'+y\,y' &=& t\cos^2\phi+t\sin^2\phi=t\\
\int_1^2 \alpha\big(c'(t)\big)\,dt &=& \int^2_1{t^{-1}}\,dt = \ln{2}
\end{eqnarray}
I am not even slightly convinced about this, any input appreciated!
|
In polar coordinates $\alpha=\frac{1}{r}dr=d\ln r$ and the integral along the straight radial line is simply $\ln2-\ln1$.
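Your value checks out numerically (a sketch assuming `scipy`; the angle $\phi=0.7$ is an arbitrary choice):

```python
import math
from scipy.integrate import quad

phi = 0.7  # any fixed constant angle

def pullback(t):
    # alpha(c'(t)) = (x x' + y y') / (x^2 + y^2) along c(t) = (t cos phi, t sin phi)
    x, y = t * math.cos(phi), t * math.sin(phi)
    dx, dy = math.cos(phi), math.sin(phi)
    return (x * dx + y * dy) / (x**2 + y**2)

val, _ = quad(pullback, 1, 2)
assert abs(val - math.log(2)) < 1e-10   # equals ln 2, independent of phi
```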
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Centre of the circle Another approach to the curvature of a unit-speed plane curve $\gamma$ at a point $\gamma (s_0)$ is to look for the ‘best approximating circle’ at this point. We can then define the curvature of $\gamma$ to be the reciprocal of the radius of this circle.
Carry out this programme by showing that the centre of the circle which passes through three nearby points $\gamma (s_0)$ and $\gamma (s_0 \pm \delta_s)$ on $\gamma$ approaches the point $$\epsilon (s_0) = \gamma (s_0) + \frac{1}{\kappa_s (s_0)}n_s(s_0)$$
as $\delta_s$ tends to zero. The circle $C$ with centre $\epsilon (s_0)$ passing through $\gamma (s_0)$ is called the osculating circle to $\gamma$ at the point $\gamma (s_0)$, and $\epsilon (s_0)$ is called the centre of curvature of $\gamma$ at $\gamma (s_0)$. The radius of $C$ is $\frac{1}{|\kappa_s (s_0)|} = \frac{1}{\kappa (s_0)}$, where $\kappa$ is the curvature of $\gamma$– this is called the radius of curvature of $\gamma$ at $\gamma (s_0)$.
I have done the following: The three points $\gamma (s_0), \gamma (s_0 + \delta_s), \gamma (s_0 - \delta_s)$ are on the circle with radius $r$ and centre $\epsilon$. So $$r^2=\|\gamma (s_0)-\epsilon\|^2=\|\gamma (s_0 + \delta_s)-\epsilon\|^2=\|\gamma (s_0 - \delta_s)-\epsilon\|^2$$
Since we want show that the centre of the circle tends to $\epsilon (s_0)$ we do the following:
\begin{align}
|\epsilon (s_0)-\epsilon|
&=|\gamma (s_0) + \frac{1}{\kappa_s (s_0)}n_s(s_0)-\epsilon| \\
&\leq | \gamma (s_0) -\epsilon|+\frac{1}{|\kappa_s (s_0)|}|n_s(s_0)| \\ &=r+\frac{1}{|\kappa_s (s_0)|}|n_s(s_0)|.
\end{align}
Is this correct so far? How could we continue?
EDIT:
We have that the radius of $C$ is $\frac{1}{|\kappa_s (s_0)|}$ so we get
\begin{align}
|\epsilon (s_0)-\epsilon|
&=|\gamma (s_0) + \frac{1}{\kappa_s (s_0)}n_s(s_0)-\epsilon| \\
&\leq | \gamma (s_0) -\epsilon|+\frac{1}{|\kappa_s (s_0)|}|n_s(s_0)| \\
&=r+\frac{1}{|\kappa_s (s_0)|}|n_s(s_0)| \\
&=\frac{1}{|\kappa_s (s_0)|}+\frac{1}{|\kappa_s (s_0)|}|n_s(s_0)| \\
&=\frac{1}{|\kappa_s (s_0)|}(1+|n_s(s_0)|) \\
&=\frac{1}{|\kappa (s_0)|}(1+|n_s(s_0)|).
\end{align}
What do we get from that?
|
$\newcommand{\e}{\epsilon}
\renewcommand{\d}{\pm\Delta s}
\renewcommand{\n}[1][]{\,n\left(s_{0}#1\right)}
\newcommand{\gs}{\gamma\left(s_{0}\right)}
\newcommand{\gd}{\gamma\left(s_{0}\d\right)}
\newcommand{\dg}{\gamma\,'\left(s_{0}\right)}
\newcommand{\ddg}{\gamma\,''\left(s_{0}\right)}
\newcommand{\O}[1][]{\mathcal{O}\big(\Delta^{#1}\big)}$
Assume a circle with radius $ R $ passes through the points $\,\gs,\,\gd$.
Then vectors connecting these points with center $\e$ have the same length $R$.
\begin{align}
\left\|\gs-\e\right\| = \left\|\gd-\e\right\| = R
\end{align}
Observe that these vectors are, in fact, unit normals $\,\n,\,\n[\pm\Delta],$ at points $\,\gs,\,\gd$ multiplied by radius $ R $:
\begin{align}
\gs-\e &= R\n \\
\gd-\e &= R\n[\d]
\end{align}
These vectors point to the same place, which is center of circle, i.e. $$\gs-R\n=\gd-R\n[\d]=\e$$
Therefore
\begin{align}
\gs - \gd = R\,\Big(\n-\n[\d]\Big) \implies
R = \dfrac{\gd - \gs}{\n[\d]-\n}
\end{align}
Consider Taylor expansion $\, \gd = \gs + \Delta\,\dg + \dfrac{\Delta^{2}}{2}\,\ddg + \O[3]$.
Using Taylor expansion $\,\gd\,$ and $\,\n[\d] \,$ we get
\begin{align}
R = \dfrac{\dg + \O[2]}{n\,'\left(s_0\right) + \O[2]} \approx
\dfrac{\dg}{n\,'\left(s_0\right)} = \dfrac{\dg}{-\dg\cdot\kappa} = -\dfrac{1}{\kappa}
\end{align}
where $\, \dfrac{dn}{ds} = -\kappa\left(s_{0}\right)\,\dg\,$ by Frenet formula.
Therefore the center of the circle can be expressed as
\begin{align}
\boxed{\;\e = \gs - R\n = \gs + \dfrac{\n}{\kappa\left(s_{0}\right)\,}\;}
\end{align}
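As a numeric illustration of the statement being proved (my own example, using the non-unit-speed parabola $\gamma(t)=(t,t^2)$ and the general curvature formula rather than the arc-length one; assumes `numpy`):

```python
import numpy as np

def gamma(t):
    return np.array([t, t * t])          # the parabola y = x^2

def circumcenter(p1, p2, p3):
    # solve |c - p1|^2 = |c - p2|^2 = |c - p3|^2, a linear system in c
    A = 2 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    return np.linalg.solve(A, b)

t0, dt = 1.0, 1e-4
c = circumcenter(gamma(t0 - dt), gamma(t0), gamma(t0 + dt))

# center of curvature at t0: gamma + (1/kappa) * unit normal, with
# kappa = 2 / (1 + 4 t^2)^(3/2) and unit normal (-2t, 1)/sqrt(1 + 4 t^2)
center = gamma(t0) + (1 + 4 * t0**2) / 2 * np.array([-2 * t0, 1.0])
assert np.allclose(c, center, atol=1e-4)  # circumcenter -> center of curvature
```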
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1481950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
}
|
Fourier transform invariant functions other than the bell curve? Are there any functions that are their own Fourier transforms other than $e^{-\pi x^2} $?
|
There is a complete characterization of the probability densities that are (modulo a constant factor) their own Fourier transforms (aka characteristic functions) in a paper of K. Schladitz and H.J. Engelbert:
"On probability density functions which are their own characteristic functions",
Theory Probab. Appl., vol. 40 (1995) pp. 577–581. The class is surprisingly large.
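For what it's worth, the Gaussian case itself is easy to verify numerically: with the convention $\hat f(k)=\int_{-\infty}^{\infty} f(x)e^{-2\pi ikx}\,dx$, one can check $\hat f = f$ for $f(x)=e^{-\pi x^2}$ (a sketch assuming `scipy`):

```python
import math
from scipy.integrate import quad

def f(x):
    return math.exp(-math.pi * x * x)

def fhat(k):
    # f is even and real, so the transform reduces to a cosine integral
    val, _ = quad(lambda x: f(x) * math.cos(2 * math.pi * k * x),
                  -math.inf, math.inf)
    return val

for k in (0.0, 0.5, 1.0, 2.0):
    assert abs(fhat(k) - f(k)) < 1e-7    # f is its own Fourier transform
```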
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help with understanding definition and how to visualize spherical basis vectors? Can someone please help me understand the definition and how I can visualize these spherical basis vectors in 3-space?
https://en.wikipedia.org/wiki/Spherical_basis#Spherical_basis_in_three_dimensions
$e_+ = -\frac{1}{\sqrt{2}}e_x - \frac{i}{\sqrt{2}} e_y$
$e_0 = e_z$
$e_- = +\frac{1}{\sqrt{2}} e_x - \frac{i}{\sqrt{2}} e_y$
Where $e_x$, $e_y$, $e_z$ are the standard, Cartesian basis vectors in 3D.
So an arbitrary vector $r \in \mathbb{R}^3$ can be expressed as:
$r = a_+ e_+ + a_- e_- + a_0 e_0$.
What's most confusing to me is the use of imaginary numbers in $e_+$ and $e_-$. How does one go $i$ units in the $e_y$ direction? And even after I assume that $i$ is in the $e_z$ direction, I'm confused by the use of complex coefficients in the x-y plane. Wouldn't that mean that if we allowed complex coefficients that $Span(e_+) = Span(e_-)$, making $e_+$ and $e_-$ linearly dependent?
|
$a_+$ and $a_-$ are in general not real for real vectors $r$. The values they may take so that the vector is real form a subspace of $\Bbb C \times \Bbb C$ of two real dimensions. This is inconvenient for ordinary representations of real vectors, but this particular representation is useful in some situations.
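A small numeric sketch may help (my own illustration; the component formula $a_\lambda = e_\lambda^* \cdot r$ follows from the orthonormality of this basis under the Hermitian inner product): a real vector acquires genuinely complex coefficients, yet is recovered exactly.

```python
import numpy as np

s = 1 / np.sqrt(2)
e_plus = np.array([-s, -1j * s, 0])
e_zero = np.array([0, 0, 1], dtype=complex)
e_minus = np.array([s, -1j * s, 0])

r = np.array([3.0, -2.0, 5.0], dtype=complex)   # an ordinary real vector

# the basis is orthonormal w.r.t. the Hermitian inner product,
# so the components are a_lambda = conj(e_lambda) . r
a_plus = e_plus.conj() @ r
a_zero = e_zero.conj() @ r
a_minus = e_minus.conj() @ r

rec = a_plus * e_plus + a_zero * e_zero + a_minus * e_minus
assert np.allclose(rec, r)                       # r is recovered exactly
assert abs(a_plus.imag) > 0                      # yet a_+ is not real
assert np.isclose(a_minus, -a_plus.conjugate())  # a real r forces this relation
```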
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Disjunctive Normal Form I need some help understanding how to convert a formula into disjunctive normal form.
Can anybody explain how one would write φ = ((p ∨ q ∨ r) → (¬p ∧ r)) in disjunctive normal form?
Is it possible to use truth tables to help converting to DNF form?
|
I think a truth table could work for simpler propositions, but the one in question is relatively complicated; using logical equivalences might be easier.
Consider a conditional proposition $P \implies Q$, this is equivalent to $(\neg P \vee Q)$, which can be verified using a truth table.
Thus considering, $\varphi = ((P \vee Q \vee R) \implies (\neg P \land R))$,
\begin{align}
&\phantom{{}\equiv{}} ((P \vee Q \vee R) \implies (\neg P \land R))\\
&\equiv \neg (P \vee Q \vee R) \vee (\neg P \land R) \\
&\equiv ((\neg P) \land (\neg Q) \land (\neg R)) \vee (\neg P \land R) \\
&\equiv ((\neg P \land R) \vee \neg P) \land ((\neg P \land R) \vee \neg Q) \land ((\neg P \land R) \vee \neg R) \\
& \phantom{((P \vee Q \vee R) \implies} \vdots
\end{align}
which can be simplified using logical equivalences.
Hope this helps.
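On the truth-table question: a brute-force check over all eight assignments confirms that the second displayed line, $(\neg P \land \neg Q \land \neg R) \vee (\neg P \land R)$, is equivalent to $\varphi$ (and is itself already a disjunction of conjunctions):

```python
from itertools import product

def phi(p, q, r):
    return (not (p or q or r)) or ((not p) and r)   # (P v Q v R) -> (~P ^ R)

def dnf(p, q, r):
    return ((not p) and (not q) and (not r)) or ((not p) and r)

# exhaustive truth-table comparison over all 8 assignments
assert all(phi(*v) == dnf(*v) for v in product([False, True], repeat=3))
```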
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Problem with congruence equation $893x \equiv 266 \pmod{2432}$ I'm trying to solve $893x \equiv 266 \pmod{2432}$.
Firstly, I find $\operatorname{gcd}(893, 2432)$ using the extended Euclidean Algorithm. When I calculate this, I receive that the gcd is (correctly) $19$, and that $19 = 18(2432) - 49(893)$. From this, I know that there are $19$ distinct solutions.
I then divide my above, initial congruence by the gcd, obtaining $47x \equiv 14 \pmod{128}$.
I know that $\operatorname{gcd}(128, 47) = 1$ and that $1 = 18(128) - 49(47)$.
Therefore $14 = 14(18)(128) -14(49)(47)$.
This implies that a solution to the congruence $47x \equiv 14 \pmod{128}$ is $x = -14(49) = -686$.
$-686 \equiv 82 \pmod{128}$, so I substitute $x = -14(49)$ for $x = 82$.
From this, I gather then that the solution to the congruence is $82 + 128t$, where $t$ is one of $0,1,2,...,18$. However, I believe this is not correct.
Where did I go wrong, and how might I go about fixing this?
Thank you so much!
|
Where did I go wrong ?
Nowhere.
How might I go about fixing this ?
There's nothing to fix.
However, I believe this is not correct.
Next time, have more faith in yourself. ;-)
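Indeed, a direct check confirms both the solutions and the count (my own sketch):

```python
solutions = [82 + 128 * t for t in range(19)]

# every candidate solves the original congruence ...
assert all((893 * x - 266) % 2432 == 0 for x in solutions)
# ... and they are the only solutions in [0, 2432)
assert [x for x in range(2432) if (893 * x - 266) % 2432 == 0] == solutions
```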
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finite OUTER measure and measurable set Royden's Real Analysis (4th edition), problem #19 (Chapter 2.5):
Let $E$ have a finite OUTER measure. Show that if $E$ is not measurable, then there is an open set $O$ containing $E$ that has a finite outer measure and for which $m^{*}(O - E) > m^*(O)-m^*(E)$.
My question is how can a set of finite measure be not measurable? I know that every set of finite positive measure harbors non-measurable subsets, but how could the whole set $E$ be not measurable when it has a finite measure by assumption?
Thanks.
I have corrected the problem statement above. Sorry all!
|
It's probably supposed to say "Let $E$ have finite outer measure".
There's an errata list here. There is no entry for this problem, but the entry for problem 18 on page 43 looks similar to this problem and is supposed to start "Let $E$ have finite outer measure".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
If $ab$ is an element of group $G$, are $a$ and $b$ both elements of group $G$ as well? Obviously, if $a$ and $b$ are elements of group $G$, then $ab$ is in $G$ as well.
Is the converse true?
I tried to think about it in terms of the inverse of the original statement (considering it's the contrapositive of the statement I'm trying to prove), but I wasn't sure how to prove that that was true (i.e. if one or both of the elements are not in $G$, then $ab$ is not in $G$).
|
Let $H$ be a subgroup of a group $G$ with the property that whenever $ab \in H$ for some $a,b \in G$, then $a \in H$ or $b \in H$. Then $H=G$. For suppose $H \subsetneq G$. Let $a \in G-H$ and put $b=a^{-1}$. Then $ab=1 \in H$, since $H$ is a subgroup. By the property, $a \in H$ or $a^{-1} \in H$; in either case $a \in H$ (subgroups are closed under inverses), contradicting the choice of $a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Find the locus of intersection of tangents to an ellipse if the lines joining the points of contact to the centre be perpendicular. Find the locus of intersection of tangents to an ellipse if the lines joining the points of contact to the centre be perpendicular.
Let the equation of the tangent be $$y=mx+\sqrt{a^2m^2+b^2}.$$ This has two roots for $m$, namely $m_1$ and $m_2$.
The perpendicular to this line passing through $(0,0)$ is $$my+x=0,$$ whose slope is $\frac{-1}{m}$.
so for perpendiculars $$\frac{(-1)(-1)}{m_1m_2} =-1 $$ so $m_1m_2=-1 $
from equation of tangent $y=mx+\sqrt{a^2m^2+b^2} $
$$m^2(x^2-a^2)-2mxy+y^2-b^2=0 $$and hence locus is $$\frac{y^2-b^2}{x^2-a^2}=-1 $$ But this is not the correct answer. The answer is $b^4x^2+a^4y^2=a^2b^2(a^2+b^2) $. What's the error ?
|
Let $(x_1,y_1)$ be the generic point of the locus.
It is well known that $$\frac {xx_1}{a^2}+\frac {yy_1}{b^2}=1$$ represents the line passing through the points of contact of the tangents from $(x_1,y_1)$.
Now consider the equation $$\frac {x^2}{a^2}+\frac {y^2}{b^2}-\left(\frac {xx_1}{a^2}+\frac {yy_1}{b^2}\right)^2=0$$ It is satisfied by the coordinates of the center and the points of contact.
Since it can be written $$x^2 \left(\frac {x_1^2}{a^4}-\frac 1{a^2}\right) + y^2 \left(\frac {y_1^2}{b^4}-\frac 1{b^2}\right) + \frac {2\,x\,y\,x_1y_1}{a^2b^2}=0$$
it is quadratic homogeneous so represents a pair of lines (degenerate conic)
, clearly the lines joining the points of contact to the centre.
It is not difficult to prove that the lines are mutually perpendicular iff the sum of the coefficients of $x^2$ and $y^2$ is zero, that is $$\frac {x_1^2}{a^4}+\frac {y_1^2}{b^4}=\frac 1{a^2}+\frac 1{b^2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can we talk about the continuity/discontinuity of a function at a point which is not in its domain? Let us say that I have a function $ f(x)=\tan(x)$ we say that this function is continuous in its domain.
If I have a simple function like $$ f(x)=\frac{1}{(x-1)(x-2)} $$
Can we really talk about its continuity/discontinuity at $x=1$ or at $x=2$.
From what I know we can't since it is not in its domain.
But doesn't that make every function of the form $$ f(x)=\frac{1}{g(x)}$$ continuous on its domain, where $g(x)$ is any polynomial with $g(x)=0$ at $n$ points (let's say)?
|
A function by definition must be defined for all points in the domain. So formally speaking a function like $\tan(x)$ doesn't even know what $\pi/2$ is (other than the codomain maybe). So no, it does not make sense to talk about continuity at points outside the domain.
For example is the function $f\colon \Bbb{R} \to \Bbb{R}$, $x\mapsto x$ continuous at the point $x=\text{New York}$? It is just as meaningless to talk about $\tan(x)$ being continuous at $\pi/2$ as it is to talk about it being continuous at $\text{New York}$.
Another way to see this is the definition. A function is continuous at a point $a$ iff:
$$\lim\limits_{x\to a} f(x) = f(a)$$
Well if the right side doesn't exist, this clearly can't be true.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
How to find $f(15/2)$ if $f(x+1)=f(x)$? Suppose $f(x+1)=f(x)$ for all real $x$. If $f$ is a polynomial and $f(5)=11$, then $f(15/2)$ is ??
How do I approach questions like this?
|
The point is: how can a polynomial be periodic? Is there any way other than being constant? So your function is really $f(x)=11$, and hence $f(15/2)=11$. That's all.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1482911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
$a,b\in A$ are selfadjoint elements of $C^*$-algebras, such that $a\le b$, why is $\|a\|\le \|b\|$ Let $A$ be a unital $C^\ast$-algebra with unit $1_A$.
a) Why is $a\le \|a\|1_A$ for all selfadjoint $a\in A$ and
b) If $a,b\in A$ are selfadjoint such that $a\le b$, why is $\|a\|\le \|b\|$?
I would try it with the continuous functional calculus $$\phi:C(\sigma(a))\to A,\; f\mapsto f(a),$$ for a) with $f(x)=|x|-x$, which is a positive function on $\sigma(a)\subseteq \mathbb{R}$, is it correct?
For b): it is $a\le \|a\|1_A$ and $b\le \|b\|1_A$, therefore $a\le \|b\|1_A$. How can I continue?
Regards
|
For a) you don't need functional calculus. Note that $\|a\|\,1_A-a$ is selfadjoint with spectrum contained in $[0,\infty)$, so it is positive (using $\sigma(a+k\,1_A)=\{\lambda+k:\ \lambda\in\sigma(a)\}$).
As stated, b) is not true: $-3<-2$, but $\|-2\|=2<\|-3\|$. It is true for positives, though. For $0\leq a \leq b$, we represent $A\subset B(H)$, then
$$
\langle a\xi,\xi\rangle\leq\langle b\xi,\xi\rangle
$$
for all $\xi\in H$. Then
$$
\|a\|=\sup\{|\langle a\xi,\xi\rangle|:\ \|\xi\|=1\}
\leq
\sup\{|\langle b\xi,\xi\rangle|:\ \|\xi\|=1\}
=\|b\|.
$$
If you want to do this without representing, the same idea can be achieved using states.
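Both points are easy to test numerically with matrices standing in for elements of $B(H)$ (a finite-dimensional sketch; the random matrices are my own choice, assuming `numpy`):

```python
import numpy as np

rng = np.random.default_rng(0)
opnorm = lambda m: np.abs(np.linalg.eigvalsh(m)).max()  # norm of a selfadjoint matrix

def random_psd(n):
    m = rng.normal(size=(n, n))
    return m @ m.T                      # positive semidefinite

a = random_psd(5)
b = a + random_psd(5)                   # then 0 <= a <= b
assert opnorm(a) <= opnorm(b) + 1e-12   # b) holds for positive elements

# b) fails for merely selfadjoint elements: -3*I <= -2*I, yet the norms reverse
a_bad, b_bad = -3 * np.eye(2), -2 * np.eye(2)
assert opnorm(a_bad) > opnorm(b_bad)
```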
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Limit as $n$ tends to infinity for arbitrary non-negative real numbers Evaluate $$\lim_{n\to\infty}\left(a_1^n + a_2^n + \dots + a_k^n\right)^{1/n}$$ for arbitrary non-negative real numbers $a_1,\dots,a_k$.
I tried evaluating the limit by taking logarithms, but I couldn't proceed further. I also tried specific numbers instead of variables and did not find a pattern; any help in this matter is appreciated.
|
Here is a version with L'Hôpital's rule. It's not quite as strong (logically) as the other answers, as we'll explain.
Assume that $a_1 \leq a_2 \leq \dots \leq a_k$. Starting from
$$
L = \lim_{n\to\infty} \left(a_1^n + a_2^n + \dots + a_k^n\right)^{1/n}
$$
we know
$$\begin{split}
\ln L &= \ln \lim_{n\to\infty} \left(a_1^n + a_2^n + \dots + a_k^n\right)^{1/n} \\
&= \lim_{n\to\infty}\ln\left(a_1^n + a_2^n + \dots + a_k^n\right)^{1/n} \\
&= \lim_{n\to\infty}\frac{\ln\left(a_1^n + a_2^n + \dots + a_k^n\right)}{n}\\
&\stackrel{\text{H}}{=} \lim_{n\to\infty}\frac{(\ln a_1) a_1^n + (\ln a_2)a_2^n + \dots + (\ln a_k)a_k^n}{a_1^n + a_2^n + \dots + a_k^n} \\
&= \lim_{n\to\infty} \frac{a_k^n}{a_k^n}\cdot \frac{(\ln a_1) \left(\frac{a_1}{a_k}\right)^n + (\ln a_2)\left(\frac{a_2}{a_k}\right)^n + \dots + (\ln a_{k-1})\left(\frac{a_{k-1}}{a_k}\right)^n+ \ln a_k}{\left(\frac{a_1}{a_k}\right)^n + \left(\frac{a_2}{a_k}\right)^n + \dots + \left(\frac{a_{k-1}}{a_k}\right)^n+1} \\
&= 1 \cdot \frac{0 + 0 + \dots + 0 + \ln a_k}{0 + 0 + \dots + 0 + 1} = \ln a_k
\end{split}$$
(“H” denotes the invocation of L'Hôpital's rule). Therefore $L=a_k$.
Remarks:
*
*Limit proofs with L'Hôpital's rule are “backwards” in the sense that we don't know that the limit $L$ exists until we get to the end.
*To verify the power sequence is indeterminate requires a check of cases. If $a_k > 1$ then the base $a_1^n + \dots + a_k^n$ tends to $\infty$, so the limit is of the indeterminate form $\infty^0$. If $a_k < 1$ then the base tends to $0$, so the limit is of the indeterminate form $0^0$. If $a_k = 1$, the base tends to the number $m$ of indices with $a_i = 1$. This form ($m^0$ with $m \geq 1$ finite) is not indeterminate; the limit is $1$, but that requires a separate proof without L'Hôpital's rule.
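The convergence to $a_k = \max_i a_i$ is easy to watch numerically (my own sketch; the maximum is factored out to avoid overflow):

```python
def power_mean(a, n):
    m = max(a)                          # factor out the max to avoid overflow
    return m * sum((x / m) ** n for x in a) ** (1.0 / n)

a = [0.3, 1.7, 2.0, 2.0]
for n in (10, 100, 1000, 10000):
    # (a_1^n + ... + a_k^n)^(1/n) lies between max and max * k^(1/n)
    assert max(a) <= power_mean(a, n) <= max(a) * len(a) ** (1.0 / n) + 1e-12

assert abs(power_mean(a, 10000) - max(a)) < 1e-3
```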
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Compactness of a set in order topology Consider a partial order in $R^2$ given by the relation $(x_1,y_1)<(x_2,y_2)$ EITHER if $x_1<x_2$ OR if $x_1=x_2$ and $y_1<y_2$.
Then in the order topology on $R^2$ defined by the above order, how can I conclude that $[0,1]$×$[0,1]$ is not compact?
My thought:I have two doubts.
*
*I am not getting how the given relation < is a partial ordering on $R^2$ when it doesn't seem to be reflexive to me. However, it is antisymmetric and transitive.
*If I agree that the relation < is a partial ordering then am I right to say that all the points of unit square [0,1]×[0,1] belong to the order topology of $R^2$? Then how it is not compact?
Please tell me at what point I am wrong and help me to reach the result. Thanks in advance.
|
Hint: $[0,1]^2$ is a subset of $\mathbb R^2$, so it is given a subspace topology (coming from the order topology of $\mathbb R^2$). Note that in this topology, $\{x\} \times [0,1]$ is open as
$$\{x\} \times [0,1] = [0,1]^2 \cap\big( \{x\} \times (-0.1,1.1)\big)$$
and $\{x\} \times (-0.1,1.1)$ is open in the order topology. Then the subset of open sets
$$\{ \{x\}\times [0,1]| x\in [0,1]\}$$
is an open cover of $[0,1]\times [0,1]$. But this open cover has no finite subcover.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the number of integer solutions for $x_1+x_2+x_3 = 15$ under some constraints by IEP. For equation
$$
x_1+x_2+x_3 = 15
$$
Find number of positive integer solutions on conditions:
$$
x_1<6, x_2 > 6
$$
Let: $y_1 = x_1, y_2 = x_2 - 6, y_3 = x_3$
Then, to solve the problem, the equation $y_1+y_2+y_3 = 9$ where $y_1 < 6$, $0<y_2$, $0<y_3$ has to be solved. Is this correct?

To solve this equation by inclusion-exclusion, the number of solutions without restriction has to be found, $C_1 (3+9-1,9)$, and from this value $C_2 (3+9-7-1,2)$ should be subtracted (as the negation of $y_1 < 6$ is $y_1 \geq 7$).
Thus:
$$
55-6=49
$$
Is this the correct answer ?
Problem must be solved using inclusion-exclusion...
|
Here is a different way to break it down
$$
x_1\in\{1,2,3,4,5\}
$$
and given $x_1$ we then have $x_1+x_2<15$ and $x_2>6$ combined as
$$
6<x_2<15-x_1
$$
And whenever $x_1$ and $x_2$ are given, the value of $x_3$ follows from them.
For $x_1=5$ we then have $x_2\in\{7,8,9\}$ so three choices for $x_2$. Each time $x_1$ is decreased by $1$ we gain one option for $x_2$. Thus we have a total of
$$
3+4+5+6+7 = 25
$$
sets of integer solutions under the given constraints.
I ran the following code snippet in Python which confirmed the figure of 25:
n = 0
for x1 in range(1, 16):
    for x2 in range(1, 16):
        for x3 in range(1, 16):
            if x1 < 6 and x2 > 6 and x1 + x2 + x3 == 15:
                n += 1
                print(n, ":", x1, x2, x3)
I understand that I did not answer the question using the method required, but I wonder why I find the number of solutions to be $25$ whereas the OP and the other answer find it to be $49$. Did I misunderstand the question in the first place?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Proving non compactness of a space I'm trying to show that the space $\mathbb R^p$, endowed with a metric $d'(x,y) = \frac{d_2(x,y)}{1 + d_2(x,y)}$, where $d_2(x,y)$ is the Euclidean distance, is closed and bounded but not compact.
I've had no problem with the first two proofs, but I cannot make progress with the proof of non-compactness. I only know that I have to use the Bolzano-Weierstrass property about subsequences and to proceed by contradiction, assuming that the space is compact.
|
Other answer using sequences.
The sequence $(a_n)$ with $a_n=(n,0, \dots, 0)$ is bounded (by $1$) but has no converging subsequence. Hence applying Bolzano–Weierstrass theorem, our metric space is not compact.
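A small numerical illustration (a Python sketch restricted to the first coordinate, where all the action happens): every term of the sequence is within distance $1$ of every other in the metric $d'$, yet any two distinct terms stay at distance at least $1/2$, so no subsequence can be Cauchy, hence none can converge.

```python
def d_prime(x, y):
    """The metric d'(x, y) = d2(x, y) / (1 + d2(x, y)), here along one axis."""
    d2 = abs(x - y)
    return d2 / (1 + d2)

# a_n = (n, 0, ..., 0); only the first coordinate matters for these distances
pairs = [(n, m) for n in range(1, 40) for m in range(n + 1, 40)]
bounded = all(d_prime(n, m) < 1 for n, m in pairs)
separated = all(d_prime(n, m) >= 0.5 for n, m in pairs)
```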
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
If $F$ is a field with $\operatorname{char} F = 0$ and $f(x) \in F[x]$ irreducible, all zeros of $f(x)$ in any extension have multiplicity $s = 1$. What I have been trying is:
Suppose that $f(x)$ has at least one zero $\alpha$ such that $f(x) = (x - \alpha)^sq(x)$, $s > 1$ in some extension. Then I guess that $(x-\alpha)^{s-1} \mid f(x)$.
So, $f(x)$ is not irreducible, where $f(x) = (x-\alpha)^{s-1}h(x)$.
But it seems wrong, since I never used the hypothesis $\operatorname{char} F = 0$.
What am I missing? Could someone help me?
Thanks a lot.
|
This isn't quite right, first of all $(x-\alpha)^{s-1}$ might not be a polynomial with coefficients in $F$ when $\alpha\not\in F$. However, you do know in characteristic $0$ that the derivative of a non-constant polynomial is not $0$. If
$$f(x)=(x-\alpha)^s\prod_{i=1}^n (x-\alpha_i)^{e_i}, \quad s>1,\ e_i\ge 1$$
then we have that $\gcd(f'(x),f(x))$--both of which have coefficients in $F$--cannot be $1$, since this same relationship would hold in an extension where clearly $(x-\alpha)\big|\gcd(f(x),f'(x))$, i.e. $\alpha$ is a root of this gcd, which is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
On partition of integers I came across an example in my textbook where it was asked to find the generating function for the number of integer solutions of:
${2w+3x+5y+7z=n}$ where ${0\le w, 4\le x,y, 5\le z}$
The proposed solution is:
$$\frac1{1-t^2}\cdot\frac{t^{12}}{1-t^3}\cdot\frac{t^{20}}{1-t^5}\cdot\frac{t^{35}}{1-t^7}$$
I do understand the generating functions for these partitions, I do not however grasp the reason why the numerator for instance when ${4\le x,y}$ is ${t^{12}}$ and ${t^{20 }}$ respectively. Or why it is ${t^{35}}$ when ${5\le z}$
|
Take $x$ as an example; it corresponds to the factor $\dfrac{t^{12}}{1-t^3}$. Now
$$\frac1{1-t^3}=\sum_{n\ge 0}t^{3n}=1+t^3+t^6+t^9+\ldots+t^{3k}+\ldots\;.\tag{1}$$
When you multiply the four series together, a typical term $t^{3k}$ of this series would contribute $3k$ to the total exponent of any term in the product in which it’s involved; that would correspond to $x=k$. However, you don’t want to allow $x$ to be $0,1,2$, or $3$, so you need to get rid of the terms $1,t^3,t^6$, and $t^9$: you want to start with $t^{12}=t^{3\cdot4}$. Multiplying $(1)$ by $t^{12}$ does exactly that: you get
$$\frac{t^{12}}{1-t^3}=t^{12}\sum_{n\ge 0}t^{3n}=\sum_{n\ge 0}t^{12+3n}=t^{12}+t^{15}+t^{18}+\ldots\;.\tag{2}$$
A similar explanation applies to the other numerators that aren’t $1$: $20=5\cdot 4$, and $35=7\cdot 5$, in each case giving you the minimum acceptable exponent.
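One can also verify the proposed generating function numerically — a Python sketch (truncating each factor at degree $90$, an arbitrary cutoff above the minimum exponent $12+20+35=67$) comparing the series coefficients with a brute-force count of the constrained solutions:

```python
N = 90  # truncation degree (arbitrary, chosen above the minimum exponent 67)

def geometric(step, shift):
    """Coefficients of t^shift / (1 - t^step), truncated at degree N."""
    c = [0] * (N + 1)
    for e in range(shift, N + 1, step):
        c[e] = 1
    return c

def multiply(a, b):
    """Product of two truncated power series."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

coeff = geometric(2, 0)                          # w >= 0, weight 2
for step, shift in [(3, 12), (5, 20), (7, 35)]:  # x >= 4, y >= 4, z >= 5
    coeff = multiply(coeff, geometric(step, shift))

# brute-force count of solutions of 2w + 3x + 5y + 7z = n under the constraints
counts = [0] * (N + 1)
for w in range(N // 2 + 1):
    for x in range(4, N // 3 + 1):
        for y in range(4, N // 5 + 1):
            for z in range(5, N // 7 + 1):
                n = 2 * w + 3 * x + 5 * y + 7 * z
                if n <= N:
                    counts[n] += 1
```

The two tallies agree coefficient by coefficient, and the smallest $n$ with a solution is $67$ (from $w=0$, $x=y$ minimal, $z$ minimal), as the numerators predict.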
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
In a ring, how do we prove that a * 0 = 0? In a ring, I was trying to prove that for all $a$, $a0 = 0$.
But I found that this depended on a lemma, that is, for all $a$ and $b$, $a(-b) = -ab = (-a)b$.
I am wondering how to prove these directly from the definition of a ring.
Many thanks!
|
Proceed like this
*
*$a0 = a(0+0)$, property of $0$.
*$a0 = a0 + a0$, property of distributivity.
*Thus $a0+ (-a0) = (a0 + a0) +(-a0)$, using existence of additive inverse.
*$a0+ (-a0) = a0 + (a0 + (-a0))$ by associativity.
*$0 = a0 + 0$ by properties of additive inverse.
*Finally $0 = a0$ by property of $0$.
Your lemma is also true, you can now prove it easily:
Just note that $ab +a(-b)= a(b + (-b))= a0= 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
}
|
Find the inverse of the cubic function What is the resulting equation when $y=x^3 + 2x^2$ is reflected in the line $y=x$ ?
I have tried and tried and am unable to come up with the answer.
The furthest I was able to get without making any mistakes or getting confused was $x= y^3 + 2y^2$. What am I supposed to do after that step?
|
If you plot the equation $y = x^3 + 2x^2$, you'll find that it fails the horizontal line test, and thus that it is not a one-to-one function and its inverse is not a function. So user130558's answer must be wrong since it doesn't include any $\pm$ signs.
Unless your textbook/teacher tells you otherwise, they probably expect you to simply give the result $x = y^3 + 2y^2$ that you already got.
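For instance, a two-line Python check confirms the failure of the horizontal line test: $x=0$ and $x=-2$ give the same output, so no inverse function can exist.

```python
def f(x):
    return x**3 + 2 * x**2

# two distinct inputs with the same output, so f is not one-to-one
collision = (f(0), f(-2))
```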
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Two real rectangular matrices $AB=I$ Let $A$ be an $m×n$ real valued matrix and $B$ be an $n×m$ real valued matrix so that $AB$=$I$. Thus we must have
*
*$n>m$
*$m\ge n$
*if $BA=I$ then $m>n$
*either $BA=I$ or $n>m$
What I tried: Either $n<m$ or $n>m$ → either $\operatorname{Rank}(A) = \operatorname{Rank}(B)=n$ or $n>m$ → either $BA=I_n$ or $n>m$. So option 4 seems to be true to me. But how can I reject option 3?
|
Answering my own question for the sake of completeness for the learners in mathematics community, the proof is as follows:
$\operatorname{Rank}(AB)=\operatorname{Rank}(I_m)=m\implies m\le n$
$\therefore$ either $m<n$ or $m=n$.
In case $m=n$: $AB=I\implies B=A^{-1}\implies BA=I$.
Thus, either $n>m$ or $BA=I$ is true, which is option 4.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1483949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving for the radius of the Earth based on distance to horizon problem Let's say you are standing on top of a hill of height $d$, and from the top of this hill you can see the very top of a radio tower on the horizon. Your goal is to determine the radius of the Earth, so you drive from the base of the hill, in a straight line (along an arc), to the base of the radio tower. Your odometer measures an arc length, $s$, and you can measure the height of the radio tower, $a$. So, what is $R$?
I may have just forgotten my basic algebra but I am trying to solve this by using three equations and three unknowns:
$$s = R(\phi_1+\phi_2) $$
where
$$ \cos(\phi_1) = \frac{R}{R+a} $$
and
$$\cos(\phi_2) = \frac{R}{R+d}.$$
Here, $a$, $d$ and $s$ are all known. So we have three equations and three unknowns where $R$ is the radius of the Earth to be determined, $\phi_1$ and $\phi_2$ are the angles of their respective arc lengths.
How does one solve this?
As a note, this is not a homework problem. I actually want to go perform this experiment to see how accurate my determined radius is.
|
Treating each side of the symmetric situation separately, we have $\cos\phi_1=R/(R+a)$, and thus for $\phi\ll1$ (which holds for realistic hills and radio towers) $1-\frac12\phi_1^2\approx R/(R+a)$. Solving for $\phi_1$ yields $\phi_1\approx\sqrt{2a/(R+a)}\approx\sqrt{2a/R}$. Likewise, $\phi_2\approx\sqrt{2d/R}$. Then $s=R(\phi_1+\phi_2)\approx\sqrt{2R}(\sqrt a+\sqrt d)$, so
$$
R\approx\frac{s^2}{2\left(\sqrt a+\sqrt d\right)^2}\;.
$$
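A quick numerical experiment (a Python sketch; the values of $R$, $a$, and $d$ are made up for illustration) shows how accurate the approximation is: we generate the exact arc length $s$ from the geometry, then recover $R$ from $s$, $a$, $d$ with the approximate formula.

```python
import math

R_true = 6371.0   # assumed Earth radius in km
a, d = 0.1, 0.05  # a 100 m tower and a 50 m hill (illustrative values)

# exact geometry: the arc length s a surveyor would measure
phi1 = math.acos(R_true / (R_true + a))
phi2 = math.acos(R_true / (R_true + d))
s = R_true * (phi1 + phi2)

# recover R from the measurable quantities s, a, d
R_est = s**2 / (2 * (math.sqrt(a) + math.sqrt(d)) ** 2)
relative_error = abs(R_est - R_true) / R_true
```

For realistic heights the relative error is far below a tenth of a percent, so the small-angle approximation is harmless in practice.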
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
$f \in L^p \cap L^q$ implies $f \in L^r$ for $p\leq r \leq q$. Let $(X, \mathfrak{M}, \mu)$ is a measure space. Let $f:X \rightarrow \mathbb{C}$ be a measurable function. Prove that the set $\{1 \le p \le \infty \, | \, f \in L^p(X)\}$ is connected.
In other words prove that if $f \in L^p(X) \cap L^q(X)$ with$ 1\le p < q \le \infty$ and if $p\le r \le q$, then $f \in L^r(X)$.
I've tried to prove this using Holder's Inequality, here's my attempt:
Since $r \in [p,q]$, let $\lambda$ be such that $(1-\lambda)p+\lambda q = r.$ We use the fact that $f \in L^r(X)$ iff $|f|^r \in L^1(X)$. Let $m = \frac{1}{1-\lambda}$ and $n=\frac{1}{\lambda}$ so that $m$ and $n$ are conjugates.
Then, \begin{eqnarray*}
\int_{X} |f|^r \, d\mu &=& \int_{X} |f|^{(1-\lambda)p+\lambda q} \, d\mu \\
&=& \int_X |f|^{(1-\lambda)p}|f|^{\lambda q} d\mu \\
&\stackrel{\text{H$\ddot{o}$lder}}{\leq}& \left(\int_X |f|^p \, d\mu\right)^{(1-\lambda)}\left(\int_X |f|^q \, d\mu \right)^\lambda \, \\
&<& \infty
\end{eqnarray*}
Since surely the RHS is finite since $f \in L^p(X) \cap L^q(X)$.
Is this correct? Is this the intended method of proof?
|
Alternative:
$$\int\left|f\right|^{r}d\mu=\int_{\left\{ \left|f\right|\leq1\right\} }\left|f\right|^{r}d\mu+\int_{\left\{ \left|f\right|>1\right\} }\left|f\right|^{r}d\mu\leq\int_{\left\{ \left|f\right|\leq1\right\} }\left|f\right|^{p}d\mu+\int_{\left\{ \left|f\right|>1\right\} }\left|f\right|^{q}d\mu$$$$\leq\int\left|f\right|^{p}d\mu+\int\left|f\right|^{q}d\mu<\infty$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
The partial fraction expansion of a 4x4 matrix The partial fraction expansion of a matrix is given by $$(I\xi-A)^{-1}=\sum_{i=1}^{N}\sum_{j=1}^{n_{i}}T_{ij}\frac{1}{(\xi-\lambda_{i})^{j}}$$,
$T_{ij}\in\mathbb{R}^{n\times n}$, $\lambda_{i}$ the eigenvalues of the matrix $A$ and $n_{i}$ the multiplicity of the respective eigenvalues.
Take $$A=\begin{bmatrix}0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -4 & 0\end{bmatrix}$$
I already calculated the eigenvalues as $\lambda_{1}=-2i$, $\lambda_{2}=-i$, $\lambda_{3}=i$ and $\lambda_{4}=2i$, each with multiplicity of 1. But I am having trouble with determining the respective $T_{ij}$.
|
In your case, the multiplicity of each eigenvalue is one so you have
$$ (I\xi-A)^{-1}=\sum_{i=1}^{N} T_{i}\frac{1}{(\xi-\lambda_{i})}. $$
The matrix $A$ is diagonalizable so you have a basis $(v_1, \ldots, v_N)$ of eigenvectors of $A$ with $Av_i = \lambda_i v_i$. Let us multiply the equation above by $v_j$:
$$ \frac{v_j}{\xi - \lambda_j} = (I\xi - A)^{-1}(v_j) = \sum_{i=1}^N \frac{T_j v_j}{\xi - \lambda_i}. $$
It is easy to see that this equation will be satisfied if $T_i v_j = \delta_{ij} v_j$. Since $(v_1, \ldots, v_n)$ is a basis, this determines $T_i$ uniquely and allows you to find them explicitly given the eigenvectors.
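Here is a numerical sketch in Python of the same construction, using a $2\times 2$ analogue to keep the hand-rolled linear algebra short: for $A=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$, with eigenpairs $(\pm i,\ (1,\pm i))$, the residue matrices are the outer products $T_i = v_i w_i$, where $w_i$ is the $i$-th row of $V^{-1}$ and $V$ has the eigenvectors as columns.

```python
# Verify (xi*I - A)^{-1} = sum_i T_i / (xi - lambda_i) for A = [[0, 1], [-1, 0]].
lams = [1j, -1j]
V = [[1, 1], [1j, -1j]]  # eigenvectors (1, i) and (1, -i) as columns

det_V = V[0][0] * V[1][1] - V[0][1] * V[1][0]
V_inv = [[V[1][1] / det_V, -V[0][1] / det_V],
         [-V[1][0] / det_V, V[0][0] / det_V]]

# T_i = (i-th column of V) outer (i-th row of V_inv)
T = [[[V[r][i] * V_inv[i][c] for c in range(2)] for r in range(2)]
     for i in range(2)]

def resolvent(xi):
    """Direct inverse of (xi*I - A)."""
    m = [[xi, -1], [1, xi]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def partial_fractions(xi):
    return [[sum(T[i][r][c] / (xi - lams[i]) for i in range(2))
             for c in range(2)] for r in range(2)]

xi = 2.0
max_diff = max(abs(resolvent(xi)[r][c] - partial_fractions(xi)[r][c])
               for r in range(2) for c in range(2))
```

The same recipe applies verbatim to the $4\times 4$ matrix of the question once its four eigenvectors are computed.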
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Can you integrate by parts with one integral inside another? From the definition of the Laplace transform:
$$\mathcal{L}[f(t)]\equiv \int_{t=0}^{\infty}f(t)e^{-st}\mathrm{d}t$$
where $s \in \mathbb{R^+}$.
$$\mathcal{L}\left[\int_{u=0}^{t}f(u)\mathrm{d}u\right]=\int_{t=0}^{\infty}e^{-st}\mathrm{d}t\int_{u=0}^{t}f(u)\mathrm{d}u$$
$$=\int_{t=0}^{\infty}e^{-st}\mathrm{d}t\int_{u=0}^{t} f(u)\mathrm{d}u=\int_{u=0}^{t} f(u) \mathrm{d}u\int_{t=0}^{\infty}e^{-st}\mathrm{d}t$$
$$\left[-\cfrac{1}{s}e^{-st} \int_{u=0}^{t} f(u)\mathrm{d}u \right]_{\color{red}{0}}^{\color{red}{\infty}}+\int_{\color{red}{0}}^{\color{red}{\infty}}\cfrac{1}{s}e^{-st}f(\color{blue}{t})\mathrm{d}\color{blue}{t}$$
$$=\cfrac{1}{s}\mathcal{L}\left[\color{#180}{f}\right]$$
Assuming that on page 486 in this book it was done by parts I have $\mathbf{3}$ questions about the calculation above:
$\mathbf{\color{red}{1)}}$ For the $2$ sets of limits marked red above for which variable ($u$ or $t$) do they belong?
$\mathbf{\color{blue}{2)}}$ For the variables marked blue, shouldn't the $t$'s be $u$'s?
$\mathbf{\color{#180}{3)}}$ For the part marked green should this be $f(t)$ instead of $t$?
Please explain your answers.
Thank you.
|
1) $t$
2) No: The $f(t)$ there should be thought of as ${d\over dt}\int_0^t f(u)\,du$, in accordance with the integration by parts.
3) No, $\mathcal L[f]$ is correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
SHORTEST method for finding the third vertex of an equilateral triangle given two vertices? I know the usual method of calculating the third vertex by using the distance formula, forming a quadratic and solving, but I was wondering if there is a shortcut method for finding it without much havoc.
E.g.: equilateral triangle ABC, with A(3,2) and B(5,1); find the third vertex C.
I tried considering two circles centred at A and B respectively, but ended up with another hopeless equation. How do I approach these kinds of questions analytically?
|
midpoint of $AB$ = $(4, 1.5)$
slope of $AB = -\frac{1}{2} $
right bisector of $AB$ ... $(y-1.5)=2(x-4)$
parametrize bisector ... $$\vec \ell(t)= (4, 1.5) + \frac{t}{\sqrt 5}(1,2) $$ where I have put in the factor of $\sqrt 5$ so that the distance from (4, 1.5) is given by $|t|$
now the altitude of an equilateral triangle is $\frac{\sqrt 3}{2}$ times the length of each side ( in this case $\sqrt 5$)
so the co-ordinates of the point $C$ will be given by $\vec\ell( \frac{\pm\sqrt{15}}{2})$
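The recipe checks out numerically — a short Python sketch for the given points, verifying that both candidate vertices are at distance $\sqrt 5$ from $A$ and $B$:

```python
import math

A = (3.0, 2.0)
B = (5.0, 1.0)
M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)  # midpoint (4, 1.5)
side = math.dist(A, B)                       # sqrt(5)

# unit direction of the right bisector, and the altitude length sqrt(15)/2
u = (1 / math.sqrt(5), 2 / math.sqrt(5))
t = math.sqrt(3) / 2 * side

candidates = [(M[0] + s * t * u[0], M[1] + s * t * u[1]) for s in (1, -1)]
equilateral = all(
    abs(math.dist(A, C) - side) < 1e-9 and abs(math.dist(B, C) - side) < 1e-9
    for C in candidates
)
```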
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How many telephone numbers that are seven digits in length have exactly five 6's? How many phone numbers that are seven digits in length, have exactly five 6's?
My attempt: {{6,6,6,6,6}{ , }}
$\binom{5}{5} \cdot \binom{18}{2} = 153$
My reasoning is that the first subset containing the 6's must all be 6, so when I do the permutation I get 1. Now on the second subset, there are 18 numbers remaining (9+9) since 6 is not allowed, and I can only pick two, so when I do the permutation I get 153.
so 1*153=153.
However, I feel like this is wrong, if so can someone point me down the right direction?
|
The locations of the $6$'s can be chosen in $\binom{7}{5}$ ways. For each such choice, the remaining two slots can be filled in $9^2$ ways.
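In numbers, that is $\binom{7}{5}\cdot 9^2 = 21\cdot 81 = 1701$. A Python sketch (treating a phone number as any 7-digit string, leading zeros included) checks the same formula by brute force on a smaller analogue:

```python
from itertools import product
from math import comb

def count_by_formula(length, sixes):
    # choose positions for the 6's, then fill the rest with any of the 9 non-6 digits
    return comb(length, sixes) * 9 ** (length - sixes)

# brute force on a smaller analogue: strings of length 4 with exactly two 6's
brute = sum(1 for digits in product(range(10), repeat=4)
            if digits.count(6) == 2)

answer = count_by_formula(7, 5)
```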
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Question about possible relations Hello I have a question about possible equivalence relations.
I know that a relation can be Reflexive, Symmetric , Transitive.
But my question is: are there any strict limitations that one property places on the others?
For example, for a given relation there are eight possible combinations of the above: it could be (R, S, T), or (not R, S, T), or (not R, not S, T), and so on.
To me they all seem possible except for a relation that is not reflexive but symmetric and transitive.
Any insight?
|
What about a relation on a set where nothing relates to anything? This is symmetric and transitive, but not reflexive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many non-isomorphic simple graphs are there on n vertices when n is... How many non-isomorphic simple graphs are there on n vertices when n is 2? 3?
4? and 5?
So I got...
$2$ when $n=2$
$4$ when $n=3$
$13$? (so far) when $n = 4$. But I have a feeling it will be closer to 16.
I was wondering if there is any sort of formula that would make finding the answer easier than just drawing them all out. And if not, if anyone could confirm my findings so far.
|
There is no nice formula, I’m afraid. What you want is the number of simple graphs on $n$ unlabelled vertices. For questions like this the On-Line Encyclopedia of Integer Sequences can be very helpful. I searched on the words unlabeled graphs, and the very first entry returned was OEIS A000088, whose header is Number of graphs on n unlabeled nodes. A quick check of the smaller numbers verifies that graphs here means simple graphs, so this is exactly what you want. It tells you that your $1,2$, and $4$ are correct, and that there are $11$ simple graphs on $4$ vertices. You should check your list to see where you’ve drawn the same graph in two different ways. If you get stuck, this picture shows all of the non-isomorphic simple graphs on $1,2,3$, or $4$ nodes. The OEIS entry also tells you how many you should get for $5$ vertices, though I can’t at the moment point you at a picture for a final check of whatever you come up with.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1484974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the purpose of showing some numbers exist? For example in my Analysis class the professor showed $\sqrt{2}$ exists using Archimedean properties of $\mathbb{R}$ and we showed $e$ exists. I want to know why it's important to show their existence?
|
You believe $\sqrt{2}$ is a number... so the question is whether or not it's a real number.
If the real numbers didn't have a number whose square is 2, that would be a rather serious defect; it would mean that the real numbers cannot be used as the setting for the kinds of mathematics where one wants to take a square root of $2$.
Basically, things cut both ways; while you could use the argument as justifying the idea of taking the square root of 2 is a useful notion, the more important aspect is that it justifies the idea of the real numbers is a useful notion.
Also, the argument is useful as a demonstration of how to use the completeness properties to prove things.
Furthermore, it reinforces the notion that this type of reasoning can be used to define specific things. Some people have a lot of difficulty with this type of argument; e.g. "you haven't defined $\sqrt{2}$, you've just defined a way to get arbitrarily close to a square root of 2 without ever reaching it". It's a lot easier to dispel such misconceptions when the subject is something as clearly understood as $\sqrt{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 7,
"answer_id": 4
}
|
Show that a group of order 7 has no subgroups of order 4. I can do this with the use of Lagrange's theorem but my professor says its possible without it. I can't find how to go about solving it. Any hints would be appreciated.
|
Okay, suppose your subgroup is $G=\{1,a,b,c\}$, and your big group is $H=G\cup \{d,e,f\}$.
Now, what can $ad$ be? Not $1$, since $a^{-1}\in G$ and $d\notin G$. Not $a$, since $d\neq 1$. Not $b$ (resp. $c$) since $a^{-1}b\in G$ (resp. $a^{-1}c\in G$). Also not $d$, since $a\neq 1$. Hence $ad\in \{e,f\}$. By similar logic, $bd, cd\in \{e,f\}$. By the pigeonhole principle, two of these must agree. Without loss, suppose $ad=e=bd$. But now $add^{-1}=bdd^{-1}$ so $a=b$, a contradiction.
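Since the only group of order $7$ is $\mathbb{Z}/7\mathbb{Z}$ (any group of prime order is cyclic, though proving that typically uses Lagrange), and a nonempty finite subset closed under the operation is automatically a subgroup, the claim can also be brute-forced — a Python sketch:

```python
from itertools import combinations

n = 7
# every 4-element subset containing the identity 0 fails closure mod 7
closed_subsets = [
    s for s in map(set, combinations(range(n), 4))
    if 0 in s and all((a + b) % n in s for a in s for b in s)
]
```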
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
}
|
Very fascinating probability game about maximising greed? Two people play a mathematical game. Each person chooses a number between 1 and 100 inclusive, with both numbers revealed at the same time. The person who has a smaller number will keep their number value while the person who has a larger number will halve their number value. Disregard any draws.
For example, if the two players play 50 and 70, the first player will retain 50 points while the second will only get 35.
There are five turns in total and each person receives a score equal to the sum of their five values.
What is the optimum winning strategy?
Obviously playing 100 each turn is a bad strategy since if the other player plays 70 then they gain 20 points more than you. Similarly, playing 1 is also a bad move since you are guaranteed to receive less points than your opponent.
If we assume that our opponent is a computer that picks numbers from 1 to 100 with equal probability, we can work out the expected value which will maximise our score relative to the computer's. (I have worked out this to be 60 something - I think)
But, if this is true then the computer will realise that it is pointless to play anything less than 30 something so we can further assume the computer will not play such low numbers.
This gives a different optimal number to play each time. Repeating this method will give different values of the 'best' number to play. I'm just wondering what this number is.
Also, the 'five turns' thing is of course irrelevant, but with a human it is interesting to predict the other player's strategy and moves.
So does there exist a number, which will maximise the total expected value? (We can assume our opponent has the same amount of knowledge as us)
|
As @Greg Martin pointed out, you can solve such games using linear programming under the assumption that the goal is to win by the largest margin over the other player.
I used an online zero-sum game solver to find the following solution; I'm not sure if this optimal strategy is unique.
Never choose numbers in $[1,25]$. Choose even numbers in $[26,100]$ with decreasing probability ($P(26)\approx0.0575$; $P(100)=0.\overline{01}=\frac1{99}$) and choose odd numbers in $[27,49]$ even less often ($P(27)\approx0.0166$; $P(49)\approx0.000372$). Never choose odd numbers greater than $49$.
This is a fairly curious result, and perhaps someone else can offer some insight into it.
The strategy above is appropriate when playing over several rounds and the scores accumulate. But if you're playing only one round and care only for winning but not the margin of victory, then the strategy is very different! Simply play either $26$ or $52$ with equal probability and you are guaranteed to win at least half the time. (If your opponent adopts the same strategy, you will tie every single game.) Again, I'm not sure if this strategy is unique.
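The one-round claim is easy to verify by exhaustion — a Python sketch of the payoff rule (the smaller number is kept, the larger is halved, draws scored zero), confirming that mixing $26$ and $52$ never loses more often than it wins against any pure strategy:

```python
def margin(x, y):
    """Score margin for the player choosing x against an opponent playing y."""
    if x == y:
        return 0           # draws disregarded
    if x < y:
        return x - y / 2   # we keep x, the opponent halves y
    return x / 2 - y       # we halve x, the opponent keeps y

never_behind = True
for y in range(1, 101):
    outcomes = [margin(x, y) for x in (26, 52)]
    wins = sum(m > 0 for m in outcomes)
    losses = sum(m < 0 for m in outcomes)
    never_behind = never_behind and wins >= losses
```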
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Prove that Riemann Sum is larger Let $ f:[a,b] \rightarrow \mathbb{R}$ be a differentiable function. Show that if $ P = (x_0, x_1, ..., x_n) $ is a partition of [a,b] then
$$ U(P, f') := \sum_{j=1}^n M_j \Delta x_j \ge f(b) - f(a) $$ where $$ M_j := \sup\{f'(t) : t \in [x_{j-1}, x_j] \}$$
How do I go about proving this? I can draw a graph of it and clearly the Riemann sum is always larger than the definite integral but I'm not sure how to go about proving it otherwise.
|
If $f$ is continuously differentiable, then by fundamental theorem of calculus we have $\int_{a}^{b}f' = f(b) - f(a)$; the result follows from the fact that Darboux integration is equivalent to Riemann integration and that $\inf U(f'; P) = \int_{a}^{b}f'$ by definition.
If $f$ is simply differentiable, then by mean-value theorem we have
$f(x_{i}) - f(x_{i-1}) \leq (x_{i}-x_{i-1})M_{i}$ for all $i$; summing over all $i$ gives the desired result.
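A numeric illustration (a Python sketch with the arbitrary choice $f(x)=e^{-x}$, whose derivative is increasing, so each supremum $M_j$ is attained at the right endpoint):

```python
import math

a, b, n = 0.0, 2.0, 100

def f(x):
    return math.exp(-x)

def f_prime(x):
    return -math.exp(-x)

xs = [a + (b - a) * j / n for j in range(n + 1)]
# f' is increasing here, so sup of f' on [x_{j-1}, x_j] is f'(x_j)
U = sum(f_prime(xs[j]) * (xs[j] - xs[j - 1]) for j in range(1, n + 1))
```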
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $f(x)$ is positive and $f'(x)<0$, $f''(x)>0$ for all $x>0$, prove that $\frac{1}{2}f(1)+\int_1^nf(x)dx<\sum_{r=1}^nf(r)$
If $f(x)$ is positive and $f'(x)<0$, $f''(x)>0$ for all $x>0$, prove that $$\frac{1}{2}f(1)+\int_1^nf(x)dx<\sum_{r=1}^nf(r)<f(1)+\int_1^nf(x)dx$$
Since function is decreasing, then area of rectangles $$f(2)[2-1]+f(3)[3-2]+...<\int_1^nf(x)dx$$
Hence, $$\sum_{r=2}^nf(r)<\int_1^nf(x)dx$$
Adding $f(1)$ on both sides,
$$\sum_{r=1}^nf(r)<f(1)+\int_1^nf(x)dx$$
But how do I get the second inequality : $$\frac{1}{2}f(1)+\int_1^nf(x)dx<\sum_{r=1}^nf(r)$$
|
$f'' > 0$ means that $f$ is strictly convex, so the graph of $f$ restricted to each interval $[r, r+1]$ lies below the straight line connecting
$(r, f(r))$ and $(r+1, f(r+1))$:
$$
f(x) \le (r+1-x) \, f(r) + (x-r) \, f(r+1)
$$
with equality only at the endpoints of the interval.
The integral of a linear function over an interval is just
the value at the midpoint, multiplied by the length of the interval.
It follows that
$$
\int_r^{r+1}f(x)dx < \frac 12 \bigl( f(r) + f(r+1)\bigr )
$$
and therefore
$$
\int_1^nf(x)dx < \frac 12 f(1) + f(2) + \dots + f(n-1) + \frac 12 f(n)
$$
or
$$
\frac 12 f(1) + \int_1^nf(x)dx + \frac 12 f(n) < \sum_{r=1}^n f(r) \, .
$$
The conclusion now follows since $\frac 12 f(n) > 0$.
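A concrete check with $f(x)=1/x$ (positive, $f'<0$, $f''>0$), in a short Python sketch:

```python
import math

n = 10

def f(x):
    return 1 / x

integral = math.log(n)  # exact value of the integral of 1/x over [1, n]
partial_sum = sum(f(r) for r in range(1, n + 1))

lower = 0.5 * f(1) + integral
upper = f(1) + integral
```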
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Lyapunov invariant set for affine systems Given a linear system $\dot{x}=Ax$ such that the real part of every eigenvalue of $A$ is less than $0$, Lyapunov's equation $A^T P + P A = -Q$ with $Q$ being any suitably sized positive definite matrix gives us an invariant ellipsoid $x^T P x \leq 1$, i.e. for any initial state $x_0$ such that $x_0^T P x_0 \leq 1$ we know that the states $x$ or rather $x(t)$ (making dependency to $t$ explicit) reachable from $x_0$ remain inside the invariant ellipsoid, i.e. $x(t)^T P x(t) \leq 1 ~\forall t \geq t_0$.
How can this be generalized to affine systems $\dot{x}=Ax + b$ where the real part of every eigenvalue of $A$ is less than $0$? Clearly, transforming the affine system into a linear system by extending state vector by $b$ with $\dot{b}=0$ does not help since we will have eigenvalue(s) $0$ which violates our assumption.
|
As Evgeny suggested, it is enough to translate the coordinate system, i.e. for $x=y-A^{-1} b$ we have $\dot{y} = \dot{x} = A x + b = A (y - A^{-1} b) + b = A y - b + b = A y$ and thus the invariant ellipsoid is $y^T P y = (x+A^{-1}b)^T P (x+A^{-1}b) \leq 1$. Note that $A^{-1}$ exists since we demanded that the real part of every eigenvalue of $A$ is less than $0$ and it holds that a matrix is invertible if and only if it has no eigenvalue which is $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why is "similarity" more specific than "equivalence"? At least regarding matrices, we have
*
*$A$ is similar to $B$ if $\exists S: B=S^{-1}AS$
*$A$ is equivalent to $B$ if $\exists P,Q: B=Q^{-1}AP$
I am confused about the usage of the terms "similar" and "equivalent". I would have thought that equivalence is more specific than similarity, i.e. equivalence is a special case (subset) of similarity. However, the above indicates the other way around.
How can this be explained? Is this specific to linear algebra, or is this a math-wide phenomenon?
|
$A$ and $B$ are similar if they represent the same endomorphism $f$ of a finite dimensional $K$-vector space $E$ in different bases. Thus similar matrices are equivalent, but the converse is false:
They're similar if and only if they have the same Jordan normal form, and also if and only if they have the same similarity invariants, which are the invariant factors of $E$, seen as a $K[X]$-module through $f$ (i.e. for any $v\in E$, $\;X\cdot v=f(v)$).
On another hand, $A$ and $B$ are equivalent if and only if $\;\operatorname{rank}A=\operatorname{rank}B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Trying to understand a remark from Atiyah and Macdonald's *Introduction to Commutative Algebra* Can someone explain the following remark from page 52 of Atiyah and Macdonald's Introduction to Commutative Algebra book.
The names isolated and embedded come from geometry. Thus if $A=k[x_1,...,x_n]$, where $k$ is a field, the ideal $\mathfrak a$ gives rise to a variety $X \subset k^n$. The minimal primes $p_i$ corresponds to the irreducible components of $X$ and the embedded primes corresponds to subvarieties of these, i.e., varieties embedded in the irreducible components.
Moreover, is the above remark useful in finding primary decomposition of an ideal? How do we use this in finding a decomposition?
|
Let $\mathfrak a=\mathfrak q_1\cap\cdots\cap\mathfrak q_n$ be a reduced primary decomposition. Then $V(\mathfrak a)=V(\mathfrak q_1)\cup\cdots\cup V(\mathfrak q_n)$. Let $\mathfrak p_i=\sqrt{\mathfrak q_i}$. Now we have $X=V(\mathfrak p_1)\cup\cdots\cup V(\mathfrak p_n)$. If $\mathfrak p_i\subset\mathfrak p_j$, then $V(\mathfrak p_j)\subset V(\mathfrak p_i)$, so an embedded prime gives rise to a subvariety of a minimal (isolated) prime. In fact, $X=\bigcup V(\mathfrak p_i)$ where $\mathfrak p_i$ is running through the minimal primes over $\mathfrak a$.
I doubt that in general this can be useful to find out a primary decomposition since you loose a lot of information when passing from $\mathfrak a$ to $V(\mathfrak a)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is a coordinate system a requirement for a vector space? In other words, can a vector space exist, and not have a coordinate system?
I'm asking because in the definitions that I've seen of a vector space, there's no mention of a coordinate system.
|
For any finite dimensional vector space $V$, there exists a coordinate system (it is not unique). Indeed, let $ n := \dim V$. Then you can find a linearly independent set $\mathcal{B} = \{u_1, \dots, u_n\}$ of $n$ vectors that will generate $V$, and $\mathcal{B}$ can be taken as a coordinate system if you order its elements.
However, the vector space structure doesn't include one particular specified coordinate system. In other words, to define a finite dimensional vector space, you don't need to specify a coordinate system, but you can always find one if you want.
In the general case of a (possibly infinite-dimensional) vector space, the existence of a coordinate system is not a requirement, and there are examples of vector spaces with no coordinate system, as @Hagen von Eitzen pointed out in a comment.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1485951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Limit of a the function $f(x,y)=x \log(y-x)$ at $(0,0)$ Let $D=\{(x,y) \in \mathbb{R}^2 :y>x\}$ and $f:D\to \mathbb{R}$ with $f(x,y)=x \log(y-x)$.
Does the limit $\lim_{(x,y)\rightarrow(0,0)} f(x,y)$ exist?
I consider $E=\{(-t,t) : t>0\}$ so $f\vert_{E}=-t\log(2t) \to 0 $ for $t \to 0$. So the limit is null?
|
Hint: Approach $(0,0)$ along a path with $x$ positive and $y=x+e^{-1/x}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many participants required?
A test consisting of 20 problems is given at a math competition. Each correct answer to each problem gains 4 points; each wrong answer takes away 1 point, and each problem left without an answer gets 0 points. What is the lowest possible number of participants in the competition that is needed, so that at least two of them will get an equal number of points?
I don't even see how this is an accurate question. You can have two participants who get an equal number of points.
If you have 3, it works as well. What is the point of this problem?
Thanks!
EDIT:
I think small cases should be good:
C - correct, W - Wrong, M - Omitted
$20C \to 80$.
$19C, 1W, 0M \implies 75$; $19C, 0W, 1M \implies 76$.
$18C, 1W, 1M \implies 71, 18C, 2W, 0M \implies 70, 18C, 0W, 2M \implies 72$.
$17C, 1W, 2M \to 67$; $17C, 2W, 1M \to 66$; $17C, 3W, 0M \to 65$; $17C, 0W, 3M \to 68$.
The difference, from the highest to highest is $4$. For each $xC$ there are $(21-x)$ terms. So the series is:
So for $0C$ there are $21-0 = 21$ terms.
Thus, the series is:
$1 + 2 + 3 + ... + 21 = \frac{21 \cdot 22}{2} = 231$.
We need at least $232$ participants.
|
I understand the problem as follows: What is the minimum number of participants such that we can be sure to see two participants with the same score.
Denote by $c$, $w$ the number of correct, resp., wrong answers of some participant, and by $s$ his score. Then $$c\geq 0,\quad w\geq0,\quad c+w\leq20,\quad s=4c-w\ .$$
Given $c\in[0\ ..\ 20]$ we have $w\in[0\ ..\ 20-c]$, so that $$s=4c-w\in [5c-20\ ..\ 4c]=:J_c\ .$$ The set $S$ of possible scores is then given by
$$S=\bigcup_{c=0}^{20}J_c\ .$$
The $J_c$ are intervals of successive integers, beginning with $J_0=[-20\ ..\ 0]$, moving steadily to the right and becoming shorter. The last interval is the singleton $J_{20}=[80\ ..\ 80]$. As long as the left endpoint of $J_c$ is at most one more than the right endpoint of $J_{c-1}$ we obtain an uninterrupted sequence of integers in $S$. This condition is violated as soon as
$$5c-20\geq 4(c-1)+2\ ,$$
or $c\geq18$. In the transition $17\rightsquigarrow18$ the number $69$ is left out; in the transition $18\rightsquigarrow19$ the numbers $73$ and $74$ are left out, and in the transition $19\rightsquigarrow20$ the numbers $77$, $78$, $79$ are left out. It follows that
$$S=[-20\ ..\ 80]\setminus\{69, 73,74, 77, 78, 79\}\ ,$$
so that $|S|=101-6=95$.
In order to guarantee two contestants with the same score we therefore need $96$ participants.
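The count of attainable scores can be double-checked by brute force; here is a short Python sketch (not part of the original answer) that enumerates every pair $(c,w)$:

```python
# Enumerate all achievable scores: c correct answers, w wrong answers,
# c + w <= 20, score = 4c - w.
scores = {4 * c - w for c in range(21) for w in range(21 - c)}

missing = sorted(set(range(-20, 81)) - scores)
print(len(scores))   # 95 distinct achievable scores
print(missing)       # [69, 73, 74, 77, 78, 79]
```

With $95$ possible scores, $96$ participants force a repeated score by the pigeonhole principle.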
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Evaluate the limit of $\lim_{n\to\infty}\frac{n^{1/3}\sin(n!)}{1+n}$ I cannot seem to wrap my head around this problem due to its complexity. The answer seems to be that the limit goes to zero for the following reasons:
*
*$\sin(x)$ will never be greater than $1$, and thus will only lessen the value of the numerator.
*If $\sin(x)$ is omitted from the equation, the equation reads $\frac{n^{1/3}}{1+n}$. Therefore, because the power of the denominator is greater than that of the numerator, the denominator will reach infinity faster than the numerator, causing the limit to converge to zero.
This answer seems a bit trivial, but I cannot disprove the rationale. Can someone confirm or deny my thinking process? If denied, could you theorize a potential, more rigorous proof?
|
Use the sandwich/squeeze theorem, the algebra of limits, and (as you already mentioned) the boundedness of the sine function if you want a more rigorous way to establish the limit. Alternatively, you could formally prove that it has the limit you suggest using the definition of a limit.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
Easiest way in general to find the sin, cos, arcsin, arccos, of "not so easy" angles/values without using a calculator? I was wondering if there are any easy ways in general to find the sin, cos, arcsin, arccos, of "not so easy" angles/values without using a calculator. By "not so easy" I mean just not things like 0, $\pi/6$, $\pi/4$, $\pi/3$, or $\pi/2$, which one encounters routinely. And also not $\sin(\theta+2n\pi)$, $\sin(n\pi)$, $\sin(-\theta)$, and the like. The reason I'm asking this terribly general question (please forgive me for that) is because in Calculus class I always make some sort of error when these sorts of calculations need to be done to solve a problem.
For example, on a recent question I had to find $0<\theta<2\pi$ satisfying $\sin\theta=-1/2$.
We know $\sin(\pi/6)=1/2$ and $\sin(-\pi/6)=-1/2$ since $-\sin(x)=\sin(-x)$. Also we know sin has period $2\pi$ so $\sin(-\pi/6+12\pi/6)=\sin(11\pi/6)$=$-1/2$.
But how would one get $\sin(7\pi/6)=-1/2$ for example?
Or similarly, how would one know the value of $\sin 4\pi/3$, $\sin 5\pi/3$, $\sin 5\pi/4$, $\sin 6\pi/5$, $\sin 5\pi/6$, etc.? Or the cos, arcsin, arccos of such "nasty" values?
Again, I apologize for the ridiculously general question, it's just that I know no other way of asking it.
|
Since $\pi/4$ is an angle of a $1:1:\sqrt{2}$ right-angled triangle, and $\pi/6$ and $\pi/3$ are angles of a $1:\sqrt{3}:2$ right-angled triangle, the $\sin / \cos / \tan$ of these and their multiples are easy to calculate on the fly.
$\sin 6\pi/5$ or something similar would be harder.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
$L^q(X) \subset L^p(X)$ if $p\leq q$ and $\mu(X) < \infty$. Let $(X,\mathfrak{M},\mu)$ be a measure space. Show that if $\mu(X) < \infty$ then $L^q(X) \subset L^p(X)$ for $1\le p \le q \le \infty$.
Is it enough to define $$E_0 = \{x \in X \, : \, 0 \leq |f(x)| < 1\}$$ and
$E_1 = X \setminus E_0$ so that $\mu(E_0), \, \mu(E_1) < \infty$ and $E_0$ is measurable as $E_0 = \{x \in X : |f(x)| \geq 0 \} \cap \{x \in X : |f(x)| \lt 1 \}$ and $f$ is measurable. Therefore if $f \in L^q$,
\begin{eqnarray*}
||f||^p_{L^p} &=& \int_{E_0}|f|^p \, d\mu + \int_{E_1} |f|^p \, d\mu \\
&\leq& \int_{E_0}|f|^p \, d\mu + \int_{E_1} |f|^q \, d\mu \\
&<& \mu(E_0) \,\, + ||f||_{L^q}^q \\
&<& \infty.
\end{eqnarray*}
which implies $f \in L^p$.
Does this suffice? I've seen a much shorter proof using Hölder's inequality (now given as an answer for those interested), but does the above suffice?
|
Your proof is fine. You can also try to show that if $p<q$ then $$\lVert f\rVert_p \leqslant \lVert f\rVert_q \mu(X)^{1/p-1/q}$$
by using Hölder's inequality. This gives a bit more information than your proof; in particular, for $X$ a probability space it shows that $p\mapsto \lVert f\rVert_p$ is increasing.
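The norm inequality can be sanity-checked on a finite measure space, where the integrals become weighted sums. A small Python sketch (illustrative only; the weights and function values below are arbitrary choices):

```python
# Finite measure space: three points with weights mu_i, so mu(X) = 2.0.
mu = [0.5, 1.2, 0.3]
f = [3.0, -1.5, 4.2]   # an arbitrary function on the three points

def norm(f, mu, p):
    """The L^p norm with respect to the weights mu."""
    return sum(m * abs(v) ** p for v, m in zip(f, mu)) ** (1.0 / p)

p, q = 2.0, 3.0
muX = sum(mu)
lhs = norm(f, mu, p)
rhs = norm(f, mu, q) * muX ** (1.0 / p - 1.0 / q)
print(lhs <= rhs)   # True, as the Hölder bound predicts
```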
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
How can I get eigenvalues of infinite dimensional linear operator? What I want to prove is that for infinite dimensional vector space, $0$ is the only eigenvalue doesn't imply $T$ is nilpotent.
But I am not sure how to find the eigenvalues of an infinite dimensional linear operator $T$. We normally find eigenvalues by finding the zeros of the characteristic polynomial, but we cannot even form a characteristic polynomial in this situation.
I am specifically interested in the differential operator on the vector space of all formal power series.
|
There's no general way to find eigenvalues of an operator in an infinite dimensional space. Actually, some operators don't have any eigenvalue.
You may consider the vector space $k[X]$ of polynomials over the field $k$, and the operator $D : P \mapsto P'$. This operator has only $0$ as an eigenvalue, but is not nilpotent (although it's not far from being nilpotent since for all $P$, there exists $n_P$ such that $D^{n_P}(P) = 0$ if you take $n_P = \deg P + 1$).
The operator $\sigma : P(X) \mapsto X P(X)$ is an example of operator with no eigenvalues.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
Proof about the subgroup of injective functions being the subgroup of group of even permutations I have the following problem:
Let $H$ be a subgroup of $S_n$, where $S_n$ denotes the group of bijections of an $n$-element set to itself, and suppose the order of $H$ is odd. I need to prove that $H$ is a subgroup of $A_n$, where $A_n$ is the alternating group of degree $n$.
My idea: if the order of $H$ is odd, then the order of every element of $H$ is odd, i.e. the least common multiple of the lengths of the disjoint cycles making up each element is odd, which implies that those cycles all have odd length. A cycle of odd length can be written as a product of an even number of 2-cycles, so can I conclude that $H$ is a subgroup of the group of even permutations?
Am I right? Any help would be appreciated!
Thanks in advance!
|
Have you seen in your course (or maybe an earlier one) that if $A, B$ are subgroups of a finite group $G$, then the set $AB$ (which need not be a subgroup in general) has cardinality $\frac{|A||B|}{|A \cap B|}$? If you already know that $[S_{n}:A_{n}] = 2$, you can then check that when $H$ is a subgroup of $S_{n}$ of odd order, the set $HA_{n}$ has cardinality $[H :H \cap A_{n}]|A_{n}|$, which is an odd integer multiple of $|A_{n}|$. Since $|S_{n}| = 2|A_{n}|$, we can only have $|HA_{n}| = |A_{n}|,$ so that $H \subseteq A_{n}$.
Alternatively, you can proceed along the lines you were thinking: any $k$-cycle in $S_{n}$ may be written as a product of $k-1$ $2$-cycles. Hence when $k$ is odd, any $k$-cycle is an even permutation, and lies in $A_{n}$. If $H$ has odd order, as you came close to saying, it is the case that every element $h \in H$ can be written as a product of disjoint cycles, all of odd length, and every such $h$ lies in $A_{n}$.
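The second argument can be checked exhaustively for a small symmetric group. A Python sketch (an illustration, not part of the original answer), representing a permutation of $\{0,\dots,n-1\}$ as a tuple $p$ with $p[i]$ the image of $i$:

```python
from itertools import permutations
from math import lcm

def cycle_lengths(p):
    """Lengths of the disjoint cycles of the permutation tuple p."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                n += 1
            lengths.append(n)
    return lengths

def sign(p):
    # a k-cycle is a product of k - 1 transpositions
    return (-1) ** sum(k - 1 for k in cycle_lengths(p))

def order(p):
    return lcm(*cycle_lengths(p))

# Every element of odd order in S_5 is an even permutation:
ok = all(sign(p) == 1 for p in permutations(range(5)) if order(p) % 2 == 1)
print(ok)   # True
```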
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Proving $\Gamma (\frac{1}{2}) = \sqrt{\pi}$ There are already proofs out there, however, the problem I am working on requires doing the proof by using the gamma function with $y = \sqrt{2x}$ to show that $\Gamma(\frac{1}{2}) = \sqrt{\pi}$.
Here is what I have so far:
$\Gamma(\frac{1}{2}) = \frac{1}{\sqrt{\beta}}\int_0^\infty x^{-\frac{1}{2}}e^{-\frac{x}{\beta}} \, dx.$
Substituting $x = \frac{y^2}{2}$, $dx = y\,dy$, we get $\Gamma(\frac{1}{2}) = \frac{\sqrt{2}}{\sqrt{\beta}}\int_0^\infty e^{-\frac{y^2}{2\beta}} \, dy$.
At this point, the book pulled some trick that I don't understand.
Can anyone explain to me what the book did above? Thanks.
|
It's using the Gaussian integral, or the fact that the area under the normal density is $1$. Since you have the statistics tag, I'm guessing the latter should be familiar already.
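For a quick numerical sanity check (not part of the original answer), one can evaluate the Gaussian integral from the question's substitution with $\beta = 1$, i.e. $\Gamma(\tfrac12) = \sqrt{2}\int_0^\infty e^{-y^2/2}\,dy$, by a midpoint rule in Python:

```python
import math

# Midpoint-rule approximation of sqrt(2) * integral_0^inf exp(-y^2/2) dy;
# the tail beyond y = 10 is negligible (of size exp(-50)).
n, upper = 10_000, 10.0
h = upper / n
gauss = sum(math.exp(-(((k + 0.5) * h) ** 2) / 2) * h for k in range(n))
approx = math.sqrt(2) * gauss

print(approx)               # close to 1.7724538509...
print(math.sqrt(math.pi))   # 1.7724538509...
```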
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Trying to compute an integral using Dirichlet's problem solution I want to compute for $r < 1$ that
$$ r \cos \phi = \frac{1}{2 \pi} \int_0^{2 \pi} \frac{ (1-r^2) \cos \theta }{1 - 2r \cos( \phi - \theta) + r^2 } d \theta $$
In my notes, it says that the way to show this is by solving the Dirichlet Problem directly for the boundary condition $u_0(z) = x $. How can we do that?
|
Note that if in general $f(e^{i\theta})$ is a function defined on $\{|x| = 1\}$ in $\mathbb R^2$, then the function
$$u(re^{i\phi}) = \frac{1}{2\pi} \int_0^{2\pi} \frac{1 - r^2}{1 - 2r \cos(\phi - \theta) + r^2} f(e^{i\theta}) d\theta $$
defined in $\{|x| \le 1\}$ solves the Dirichlet's problem. That is, $u$ satisfies
$$\begin{cases} \Delta u = 0 & \\ u|_{\{|x| = 1\}} = f & \end{cases}$$
So if now $f =x$, then in terms of polar coordinate, $f(e^{i\theta}) = \cos \theta$, and we see that $u =x$ (by the uniqueness of the solution of the Dirichlet problem, note $u = x$ is clearly harmonic). So we have for all $1>r$ and $\phi$,
$$r\cos\phi= u(re^{i\phi}) = \frac{1}{2\pi} \int_0^{2\pi} \frac{(1 - r^2)\cos\theta}{1 - 2r \cos(\phi - \theta) + r^2} d\theta.$$
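Since the integrand is smooth and $2\pi$-periodic, the identity is easy to confirm numerically: the trapezoidal rule (which for periodic integrands is just the mean of equally spaced samples) converges extremely fast. A Python sketch, not part of the original answer:

```python
import math

def poisson_integral(r, phi, n=4096):
    """(1/2pi) * integral of the Poisson kernel times cos(theta)."""
    total = 0.0
    for k in range(n):
        theta = 2 * math.pi * k / n
        kernel = (1 - r * r) / (1 - 2 * r * math.cos(phi - theta) + r * r)
        total += kernel * math.cos(theta)
    return total / n   # dtheta = 2*pi/n, and we divide by 2*pi

r, phi = 0.5, 1.0
print(abs(poisson_integral(r, phi) - r * math.cos(phi)))  # essentially 0
```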
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1486963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The page numbers of an open book totalled $871$. At what pages was the book open? So far I have $x + (x-1) = 871$. This has led to one page being $435$ and the other, $436$. Is there anything needing alteration, or is this equation workable?
|
This equation (the one you wrote) works:
$$x + (x-1) = 871.$$
However, the equation $x + (x+1) = 871$ also works, since these two numbers are also consecutive.
Edit: The solution to the second equation yields the exact same pair of pages.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
If $x$ divides $x-z$, then $x$ divides $z$
For any integer x and z , if $x|(x-z)$ then $x|z$
My attempt: suppose $x\mid(x-z)$, and let $y= x-z$.
$x\mid y$ means there is some integer $r$ such that $y=rx$.
So $x-z=rx$, which gives $\frac{x-z}{x} = r$; this is where I get stuck.
|
You can write:
$$\frac {x-z}{x}=k$$ with $k$ an integer. This expression is equivalent to $$\frac {x}{x}-\frac {z}{x}=k,$$ that is, $$1-\frac {z}{x}=k.$$ Since $k$ is an integer, so is $\frac{z}{x}=1-k$; therefore $x\mid z$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Give an example to show that converse is not true: I know that if $ T: X \rightarrow Y$ be a continuous then $Gr(T)=\{(x,Tx):x\in X\}$ is closed in $X\times Y$.
Indeed, let $(x_n,T(x_n)) \subseteq Gr(T)$ be a sequence such that $(x_n,T(x_n)) \rightarrow (x,y) \in X\times Y$. Thus $x_n\rightarrow x$ and $T(x_n) \rightarrow y$.
But $T$ is continuous, so $T(x_n)\rightarrow Tx$.
I.e. $y=T(x)$ and $(x,y)\in Gr(T)$.
But the converse is not true. For example $ f(x) =
\begin{cases}
\sin\frac{1}{x} & x\neq0 \\
0 & x=0
\end{cases}$
is not continuous, but I cannot prove that $Gr(f)$ is closed.
Can you show this?
|
It seems that you're not necessarily looking for a linear map.
Consider the example $f:[-1,1] \to \Bbb R$ given by
$$
f(x) =
\begin{cases}
1/x & x \neq 0\\
0 & x = 0
\end{cases}
$$
Note that the graph of the function is closed, but the function is discontinuous.
Interesting fact: functions on a compact metric space are continuous if and only if their graph is compact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How can you disprove the statement $4=5$? I know this sounds insane. But bear with me, my friend said this to me with a straight face:
Can you disprove the statement: $4=5$?
And I was like: is that even a question? That's obvious, $5$ is $4+1$ and $5$ comes after $4$.
He was like: but you haven't still disproved my statement.
Something like this went on for around 30 minutes... and he wasn't happy at the end at all.
Is he correct? If not, how does one actually give a good sensible answer to such a person?
|
Subtract 4 from both sides of $4=5$, yielding $0=1$. Now $0$ is defined to be the number such that $0+x=x$ for every $x$ and $1$ the number such that $1\cdot x = x$ for every $x$. But with that definition of $0$ we get that $0\cdot x = (0+0)\cdot x = 0\cdot x + 0\cdot x$, so $0\cdot x =0$ for every $x$. So since $1$ is the multiplicative identity (the property described above), but $0$ behaves differently multiplicatively, they can't be equal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 0
}
|
Proving homogenous quadratic inequalities Okay, I'm having trouble proving this:
$$5x^2-4xy+6y^2 \ge 0, \text{ where } x,y \in \mathbb{R}$$
I have tried a few values of $x$ and $y$ and found that it is true. E.g. $x=y=0$, which makes the expression equal to $0$.
I have tried factoring it, but I find that the factors will involve imaginary numbers.
So I would like a way to prove this inequality.
|
$$
5x^2-4xy+6y^2 = 2(x-y)^2 + 3x^2 + 4y^2,
$$
where each term is non-negative.
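Since this is a polynomial identity in $x$ and $y$, it can be spot-checked in exact integer arithmetic; a small illustrative Python script (not part of the original answer):

```python
import random

# Spot-check the sum-of-squares decomposition at many integer points.
random.seed(0)
for _ in range(1000):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    lhs = 5 * x * x - 4 * x * y + 6 * y * y
    rhs = 2 * (x - y) ** 2 + 3 * x * x + 4 * y * y
    assert lhs == rhs and lhs >= 0   # identity holds and the form is >= 0
print("identity verified on 1000 integer samples")
```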
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How do I find an integer value for which an expression is non-prime? I've just begun Robert S. Wolf's, Proof, Logic and Conjecture. At the end of the first chapter there are some exercises to warm you up for the proof techniques he will eventually introduce. I only mention this so that you are aware that I have yet to encounter formal proof techniques.
The first part of the question simply asks you to substitute small values of $n$ into the expression $n^2-n+41$ and to test if these values are prime. I did this for $n=1$ to $12$, and all the values seemed prime. This leads me onto the second part of the question where I am stuck.
I'll paraphrase the question:
(1)(b)
Find a positive integer value of $n$ for which the expression $n^2-n+41$ is not a prime number.
My attempt
I will decompose the expression $n^2-n+41$ into symbolic and numeric parts i.e. $n^2-n$ and $41$, in order to obtain a better understanding of it.
The symbolic part of the expression $n^2-n$ can be factorised to $n(n-1)$. From this it is apparent that this portion of the expression will only ever return even values, because it will always be of the form where we have an odd number multiplied by an even number. For example: for $n=5$, an odd number, we have $5(5-1)=5(4)=20$; similarly for $n=4$, an even number, we have $4(4-1)=4(3)=12$.
The addition of an odd number and even number returns an odd number, thus the addition of $41$ (an odd number) to $n^2-n$ (an expression which always returns even numbers) will give an odd number for all integer values of $n$.
As $n^2-n+41$ always returns odd numbers, it then stands to reason that if we are to find any non-prime value of this expression it will also be odd.
The only way I could think of doing this was by defining the odd numbers as $2n+1$ (where $2n$ is an even number and $n$ is an integer) and equating this to the expression, in the hope that the intersection would return values that are odd and non-prime; however, this equation did not return an integer value of $n$ for which the expression is not prime.
Where have I gone wrong?
|
How about $n=41$?
In general, if you choose $n$ so that all of the terms in a sum are divisible by the same number, then the whole sum will be divisible by that number.
Edit: My understanding is that your approach was to set $n^2-n+41 = 2n+1$ and look for integer solutions. But this is quite a strong condition: you're saying not just that $n^2-n+41$ is odd, but that it's the particular odd number $2n+1$. This is a quadratic in $n$, so it has at most $2$ solutions - it's not particularly surprising that it doesn't have integer solutions.
But for any integer value of $n$, $n^2-n+41$ is odd. If you just want to express that $n^2-n+41$ is odd, the relevant equation is $n^2 -n +41 = 2k+1$. This equation has exactly one integer solution for every value of $n$: an example is $n = 41$, $k = 840$.
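A quick computational check (illustrative, not part of the original answer) confirms both claims: the polynomial is prime for $n = 1, \dots, 40$ and composite at $n = 41$, where every term is divisible by $41$:

```python
def is_prime(m):
    """Trial-division primality test, fine for numbers this small."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

values = [n * n - n + 41 for n in range(1, 42)]
print(all(is_prime(v) for v in values[:40]))   # True for n = 1..40
print(values[40], is_prime(values[40]))        # 1681 = 41**2, so False
```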
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 0
}
|
An example showing that van der Waerden's theorem is not true for infinite arithmetic progressions One of the possible formulations of Van der Waerden's theorem is the following:
If $\mathbb N=A_1\cup \dots\cup A_k$ is a partition of the set $\mathbb N$, then one of the sets $A_1,\dots,A_k$ contains finite arithmetic progressions of arbitrary length.
In the other words, if we color the set of all positive integers by finitely many colors, there must be a monochromatic set containing arbitrarily long finite arithmetic progressions.
I assume that the same result is not true if we require that one of the sets contain an infinite arithmetic progression. (Otherwise this result would be well-known.)
What is an example showing that this stronger version of the above theorem is not true? (I.e., an example of a coloring of $\mathbb N$ by finitely many colors with no monochromatic infinite arithmetic progression.)
|
Let's build an example with just two sets in the partition, $A$ and $B$.
Enumerate the set $\mathbb{N}\times\mathbb{N}^{>0}$ as $\{(n_k,d_k)\mid k\in \mathbb{N}\}$ (think of $n$ as the starting point of an arithmetic progression and $d$ as the constant difference between the terms). We start with $A = B = \emptyset$ and ensure that at every stage of the construction (indexed by $k\in \mathbb{N}$), $A$ and $B$ are finite subsets of $\mathbb{N}$.
At stage $k$ of the construction, look at the pair $(n_k,d_k)$.
Case $1$: $n_k\in A$. Let $m$ be the least natural number such that $n_k + md_k\notin A$, and add $n_k+md_k$ to $B$.
Case $2$: $n_k\in B$. Repeat Case $1$, but with the roles of $A$ and $B$ reversed.
Case $3$: $n_k$ is not in $A$ or $B$. Add $n_k$ to $A$ and repeat Case $1$.
To complete stage $k$ of the construction, if the natural number $k$ is not in $A$ or $B$, add it to $A$.
At the end of the infinite construction, we have a partition of all of $\mathbb{N}$ into the two sets $A$ and $B$ (since each $k$ was either in $A$ or in $B$ at the end of stage $k$). Suppose there is an infinite arithmetic progression contained in $A$. It begins with some $n$ and the terms have constant difference $d$. Now $(n,d) = (n_k,d_k)$ for some $k\in \mathbb{N}$. But at the end of stage $k$, we ensured that some element $n+md$ of the arithmetic progression was in $B$, contradiction.
The symmetric argument shows that there is no infinite arithmetic progression contained in $B$.
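A finite-stage simulation of this construction can be written down directly; the following Python sketch is illustrative only (the real construction runs through infinitely many stages, and the diagonal enumeration of pairs is one arbitrary choice):

```python
def build_partition(stages):
    # Enumerate pairs (n, d) with n >= 0, d >= 1 along diagonals.
    pairs = []
    s = 0
    while len(pairs) < stages:
        for n in range(s + 1):
            pairs.append((n, s - n + 1))
        s += 1
    pairs = pairs[:stages]

    A, B = set(), set()
    for k in range(stages):
        n, d = pairs[k]
        if n not in A and n not in B:   # Case 3: colour n first
            A.add(n)
        if n in A:                      # Case 1: spoil the AP inside A
            m = 1
            while n + m * d in A:
                m += 1
            B.add(n + m * d)
        else:                           # Case 2: spoil the AP inside B
            m = 1
            while n + m * d in B:
                m += 1
            A.add(n + m * d)
        if k not in A and k not in B:   # end of stage k: colour k itself
            A.add(k)
    return A, B, pairs

A, B, pairs = build_partition(200)
print(A.isdisjoint(B))              # True: the colours never clash
print(set(range(200)) <= (A | B))   # True: 0..199 are all coloured
```

Each processed pair $(n, d)$ gets a witness in the opposite colour, so no arithmetic progression handled so far lies entirely in $A$ or entirely in $B$.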
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 7,
"answer_id": 2
}
|
What is the pmf of rolling a die until obtaining three consecutive $6$s? I am trying to find the pmf of rolling a die until 3 consecutive 6s turn up. I am able to find the expected value using a tree diagram, but the pmf is not obvious to me.
Let A be the event of not rolling 6, and let B be the event of rolling a 6.
The geometric distribution does not work, because we could have any number of As and Bs (ex. ABBAAABBA...) until we reach BBB. But using a binomial doesn't make sense to me either, because we don't care how many As or Bs we have, we just care about the final 3 consecutive Bs.
|
The probabilities can be calculated recursively by $p(1)=p(2)=0$, $p(3)={1\over 216}$ and for $n>3$, $$p(n)={5\over 6}\,p(n-1)+{5\over 36}\,p(n-2)+{5\over 216}\,p(n-3).\tag1$$
I suspect that an explicit formula for $p(n)$ will be too complicated to be useful, but you may get useful information via the probability generating function $$G(s)={s^3\over 216-180s-30s^2-5s^3}.\tag 2$$ For example, differentiating $G$ and setting $s=1$ shows us that the expected number of throws until we see three 6s in a row is $258$.
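The recurrence is easy to run numerically; a Python sketch (not part of the original answer) that checks the total probability mass and the expected value $258$ obtained from $G'(1)$:

```python
# Compute p(n) = P(first run of three 6s ends at roll n) via the
# recurrence, then check total mass ~ 1 and mean ~ 258.
def pmf(N):
    p = [0.0] * (N + 1)
    p[3] = 1.0 / 216.0
    for n in range(4, N + 1):
        p[n] = (5 / 6) * p[n - 1] + (5 / 36) * p[n - 2] + (5 / 216) * p[n - 3]
    return p

p = pmf(20_000)
mean = sum(n * q for n, q in enumerate(p))
print(sum(p))   # very close to 1: three 6s in a row occur eventually
print(mean)     # very close to 258, matching G'(1)
```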
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1487911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Why is $h(z)$ tending to $0$ $?$ To prove the uniqueness of Laurent's Decomposition :
$f(z)$ is analytic in the region $\rho\lt |z-z_0|\lt \delta$. It is decomposed as $$f(z)=f_0(z)+f_1(z)$$ where $f_0(z)$ is analytic for $|z-z_0|\lt \delta$ and $f_1(z)$ is analytic for $|z-z_0|\gt \rho$.
Let $f$ has another decomposition $$f(z)=g_0(z)+g_1(z)$$
Then $$g_0-f_0=f_1-g_1$$ . Let $h(z)$ be defined such that $$h=g_0-f_0 \ \ on\ \ |z-z_0|\lt \delta \\ h=g_1-f_1 \ \ on\ \ |z-z_0|\gt \rho $$ The two definitions overlap on the domain $\rho\lt |z-z_0|\lt \delta $ . So $h$ is entire on the whole of $\mathbb C$ .
Now what needs to be shown is that $h$ is bounded and tends to $0$ as $z$ tends to $\infty$. Then by Liouville's theorem $h(z)=0$ on all of $\mathbb C$, and the uniqueness is proved. That is the point I'm stuck at. How can I show $h(z)\rightarrow 0$ as $z\rightarrow \infty$?
|
The theorem you are citing has another condition:
We normalize the decomposition so that $f_1(\infty) = 0$
Also, the proof of the claim says
And $h(z)$ tends to $0$ as $z\to\infty$.
So, and I don't mean this in a mean way, but all your problems originated from sloppy reading. A very common and fatal flaw in mathematics.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Continuous function $\geq 0$ on dense subset I'd like to use this fact, but I couldn't find out, how to prove it. Assume we have a continuous function $f : X \rightarrow \mathbf{R}$ with $X$ a topological space and $f \vert_D \geq 0$ on a dense subset $D \subset X$. Can I conclude that $f \geq 0$ on the whole space $X$?
I know that a function is uniquely determined by its restriction to a dense subset. But how can I use that to prove the fact?
Thanks in advance!
|
Take $x\in X$. If $x\in D$, you done. If not, there exists $(x_n)\in D^{\mathbb{N}}$ converging to $x$, since $\bar{D}=X$. Then $$0\leq f(x_n)\xrightarrow[n\to\infty]{}f(x)$$
using continuity of $f$.
Edit: not sufficient for general topological spaces, cf. comment below. I'm leaving this answer here exactly because of this (I found the discussion in the comments useful, at least for me, to understand some of the concepts at play.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
A concentration problem Consider $N$ variables $X_1,X_2,\ldots,X_N$ with $\Pr(X_i=a_i)=\Pr(X_i=-a_i)=1/2$. Here $a_1,a_2,\ldots,a_N\in [0,1]$.
Does there exist some concentration result about $\sum_{i=1}^N X_i$?
|
If the $a_i$ are deterministic, then you can think about the new random variables $Y_i=\frac{X_i}{a_i}$, and you already have the lower bound for this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Determine the convergence of a recursive sequence Determine the convergence of the series $\sum_{n \in \mathbb{N}}a_n$, where
$$a_1=1,\quad a_{n+1}=\frac{2+\cos(n)}{\sqrt{n}}a_n$$
By the test of the general term, we have
$$\frac{1}{\sqrt{n}} \le \frac{2 + \cos{(n)}}{\sqrt{n}} \le \frac{3}{\sqrt{n}} $$
So, by the theorem of the sandwich,
$$ \lim_{n \rightarrow \infty}\frac{1}{\sqrt{n}}
=\lim_{n \rightarrow \infty}\frac{2 + \cos{(n)}}{\sqrt{n}}
=\lim_{n \rightarrow \infty}\frac{3}{\sqrt{n}} $$
We get, $\lim_{n \rightarrow \infty}\frac{2+\cos(n)}{\sqrt{n}}a_n=0$.
How do I continue my proof?
|
For $n\geq 16 $ we have $|a_{n+1}|=(|2+\cos n|/\sqrt n)|a_n|\leq(3/4)|a_n|$ so for $n\geq 16$ we have $|a_{n+1}|\leq (3/4)^{n-15}|a_{16}|.$
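The geometric bound can be watched numerically; a Python sketch (illustrative only) that generates the terms and checks the decay factor for $n \ge 16$:

```python
import math

a = [0.0, 1.0]   # a[1] = 1; index 0 is unused padding
for n in range(1, 200):
    a.append((2 + math.cos(n)) / math.sqrt(n) * a[n])

# For n >= 16: (2 + cos n)/sqrt(n) <= 3/4, so the terms decay geometrically.
print(all(abs(a[n + 1]) <= 0.75 * abs(a[n]) for n in range(16, 200)))  # True
print(abs(a[150]))   # tiny: bounded by |a_16| * (3/4)**134
print(sum(a[1:]))    # the partial sums of the series settle down here
```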
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Isomorphism between two groups mapping the same elements I need to show that there is an isomorphism $f:\Bbb Z_6 \to U(14)$ such that $f(5)=5$.
I can show that $U(14)$ is isomorphic to $\Bbb Z_6$ by showing that $U(14)$ is cyclic of order 6, but I do not know whether this would imply that $\Bbb Z_6$ is isomorphic to $U(14)$. Can I make this inference? Also, how do I now show that $f(5)=5$?
Help would be greatly appreciated.
Thanks in advance!
|
Note that if there exists an isomorphism $f: G \to G'$, then $f^{-1}$ is an isomorphism from $G'$ to $G$. (If $f$ is a bijection, then its inverse is as well, and it is easy to check that the condition for isomorphisms holds for $f^{-1}$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
meaning of $B \in \left \{ \mathcal P(A)|A \in \mathcal F \right \}$
$$B \in \left \{ \mathcal P(A)|A \in \mathcal F \right \}$$
I am having a hard time understanding what this is saying. I know how to rewrite this to $\exists A \in \mathcal F(B=\mathcal P(A))$, which is the same meaning, but still I am confused about the meaning.
Translating the original: "the set $B$ is an element of the power set of $A$ such that $A$ is an element of the family of sets," correct?
I think that the family of sets is mainly confusing me. Also, If $B=\mathcal P(A)$, how can $B \in \mathcal P(A)$? I thought that because of Russell's paradox, a set cannot be an element of itself, i.e., $T \notin T$.
To sum up, I am super confused and super dense and appreciate help.
|
$B$ is not an element of the power set of $A,$ where $A\in \mathcal F.$ Rather, $B$ is the power set of $A$ for some $A\in\mathcal F.$ As you wrote yourself, there exists $A\in\mathcal F$ such that $B=\mathcal P(A).$
When we say that $B\in\{\mathcal P(A)\mid A\in \mathcal F\},$ we're saying that $B$ is an element of the set $\{\mathcal P(A)\mid A\in\mathcal F\}.$ (As opposed to $B\in\mathcal P(A).$) Each element of this set is the power set of $A$ for some $A\in\mathcal F.$ Hence $B$ is such a power set.
For instance, if I told you $x\in\{y\mid y\in\mathbb R \land y>0\},$ we'd know that $x$ equals some such $y,$ so $x\in\mathbb R$ and $x>0.$ We wouldn't try to conclude that $x\in y.$
Edit: You also seem to be thinking that there is some set in this description that necessarily contains itself, but this is not so.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Overview of Linear Algebra Very new student tackling this course, and I've never been this terrified from Math before. I cannot grasp the meaning of things in Linear Algebra, most of what's stated is either obscure, meaningless, and abstract when I am first tackling them. What sort of advice can you give?
I'll be more specific and state my misconceptions.
First and foremost, the simplest question of all; What in the world is a subspace? I am NOT looking for the criteria that makes a subspace so. I want to cling into any fundamental definition. Why for instance should it include the 0 vector? What does a subspace even look like (assuming we are working with R2 and R3)
When I am told that it possess closure under addition and multiplication, I always imagine an infinite plane or a line -- basically; the coordinate space as whole (If that really is the case, then what is the point of defining a subspace).
These lingering thoughts are really dragging me behind...I have a LOT more questions to come, but let this be an impression.
Thanks for Clarifying~
|
First of all, while it is good to have an intuition and visualization for what mathematical objects are, you cannot remove yourself from the formal definitions. When things get too complicated to visualize, knowing and being able to apply definitions will save you.
That being said, recall that a vector space $X$ is a set which satisfies a long list of axioms. A subspace of $X$ is a subset of $X$ which also satisfies all those axioms. One of the axioms is that a vector space must include a zero vector. Thus, any subspace must also include the zero vector.
A subspace of $\mathbb R^2$ is either just the origin or a line through the origin (or even all of $\mathbb R^2$ is a subspace of itself). A subspace of $\mathbb R^3$ is either the origin, a line containing the origin, or a plane containing the origin (or again, all of $\mathbb R^3$). The reason for defining a subspace is that not all vector spaces can be visualized as $\mathbb R^2$ or $\mathbb R^3$. Some are vastly more complicated. By having a formal definition, we can find properties true of all subspaces, not just planes and lines.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Why proximal gradient instead of plain subgradient methods for LASSO? I was thinking to solve LASSO via vanilla subgradient methods. But,I have read people suggesting to use Proximal GD. Can somebody highlight why proximal GD instead of vanilla subgradient methods be used for LASSO?
|
OK, since you liked my comments... :-)
1. Better performance. That's it.
2. Longer, but more handwaving, answer: proximal methods allow you to use a lot more information about the function at each iteration. Also, proximal gradient methods take into account a much larger neighborhood around the initial point, enabling longer steps.
3. Third, practical answer: you should not be implementing your own LASSO solver. There are too many good implementations out there. If you need to port one from one language to another, that's one thing, but to write one from scratch at this point makes no sense unless you have very specific problem structure you need to exploit.
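To make the performance point concrete, here is a minimal, hypothetical one-dimensional sketch (not from any particular solver) for $\min_x \frac12(x-3)^2+\lambda|x|$: the proximal gradient step handles the $\ell_1$ term exactly via soft-thresholding and lands on the minimizer $x^*=3-\lambda$ immediately, while a plain subgradient method with diminishing steps only creeps toward it:

```python
import math

def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward 0 by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

lam = 1.0  # minimizer of 0.5*(x - 3)^2 + lam*|x| is x* = 3 - lam = 2

# proximal gradient (ISTA), step size 1 (the smooth part has L = 1)
x_prox = 10.0
for _ in range(5):
    x_prox = soft_threshold(x_prox - (x_prox - 3.0), lam)

# plain subgradient method with diminishing steps 1/k
x_sub = 10.0
for k in range(2, 5000):
    g = (x_sub - 3.0) + lam * (1.0 if x_sub > 0 else -1.0)
    x_sub -= g / k

print(x_prox)            # lands on the exact minimizer 2.0 immediately
print(abs(x_sub - 2.0))  # the subgradient iterate is still visibly off
```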
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Largest number among the given numbers How to find the largest number among the following numbers.
$(a)\;3^{210}$
$(b)\;7^{140}$
$(c)\;17^{105}$
$(d)\;31^{84}$
Any help will be greatly appreciated.
|
If you are allowed a calculator, take the logs and compute them. $\log 3^{210}=210 \log 3$ and so on
If you must do it by hand, you need to be clever. To compare a and b, note that the exponents are both divisible by $70$, so you are comparing $(3^3)^{70}$ and $(7^2)^{70}$. You may know $3^3$ and $7^2$, or they are not hard to compute. Similar techniques will deal with the others.
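For what it's worth, both approaches can be checked directly in a language with arbitrary-precision integers; a quick sketch:

```python
import math

values = {"3^210": 3**210, "7^140": 7**140, "17^105": 17**105, "31^84": 31**84}

# exact comparison with arbitrary-precision integers
print(max(values, key=values.get))  # 17^105

# same ordering via logarithms, as suggested above
logs = {name: math.log(v) for name, v in values.items()}
print(max(logs, key=logs.get))      # 17^105 again
```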
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1488925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How do I isolate $y$ in $y = 4y + 9$? $y = 4y + 9$
How do I isolate y?
Can I do
$y = 4y + 9$
$\frac{y}{4y} = 9$ etc
Also some other questions please:
1. $\frac{5x + 1}{3} - 4 = 5 - 7x$
In the above (1), if I want to remove the '3' from the denominator of the LHS, do I multiply everything on the RHS by 3? What about the '-4' on the LHS, do I do anything to that?
2. In the above
$\frac{y}{4y}$
If I want to solve it, do I cross a y out from the top, or from the bottom? Does that make sense ? :\
Thanks in advance..I have an exam tomorrow..
|
You have the right idea to get $y$ by itself on the left. However, instead of dividing by $4y$, you would subtract $4y$ from both sides. This will give you
$$y - 4y = 9.$$
Now, if you combine the like $y$-terms on the left. What would $y - 4y$ turn into? You can think of this as: $1y - 4y$.
For your next problem, if you want to multiply both sides by $3$, then you multiply everything by $3$, including the $-4$ on the left-hand side.
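For completeness, a quick check of where the first computation leads ($y - 4y = -3y$, so $-3y = 9$ and $y = -3$):

```python
y = 9 / (1 - 4)        # y - 4y = 9  =>  -3y = 9  =>  y = -3
print(y)               # -3.0
print(y == 4 * y + 9)  # True, since 4*(-3) + 9 = -3
```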
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
An inequality involving the $L^1$ norm and a Fourier transform Can someone please tell me where this inequality comes from:
$$\int_{-\infty}^{\infty} |g(\xi)|\cdot | \hat{\eta}(\xi) |^2 \;d\xi \leq \frac{1}{2\pi} \| \eta \|_{L^1(\mathbb{R})}^2 \int_{-\infty}^{\infty} |g(\xi)| \; d\xi$$
where the hat denotes the Fourier transform.
I came across this when reading "On the Korteweg-de Vries-Kurumoto-Sivashinsky equation" (Biagioni, Bona, Iorio and Scialom 1996).
I'd really like the Fourier series analogue of this inequality.
|
Starting with Hölder, we have
$$ \def\norm#1{\left\|#1\right\|} \norm{g \hat \eta^2}_{L^1} \le \norm{g}_{L^1} \norm{\hat \eta^2}_{L^\infty} $$
By definition of $\hat \eta$, we have
\begin{align*}
\def\abs#1{\left|#1\right|}\abs{\hat \eta(\xi)} &=\abs{ \frac 1{(2\pi)^{1/2}} \int_{\mathbf R} \eta(x) \exp(-ix\xi)\, dx}\\
&\le \frac 1{(2\pi)^{1/2}} \int_{\mathbf R} \abs{\eta(x)}\, dx\\
&\le \frac 1{(2\pi)^{1/2}} \norm{\eta}_{L^1}
\end{align*}
and hence
$$ \norm{\hat \eta^2}_{L^\infty} = \norm{\hat \eta}_{L^\infty}^2 \le (2\pi)^{-1}\norm{\eta}_{L^1}^2 $$
Altogether
$$ \norm{g \hat \eta^2}_{L^1} \le \norm{g}_{L^1} (2\pi)^{-1} \norm{\eta}_{L^1}^2 $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Write $n^2$ real numbers into $n \times n$ square grid Let $k$ and $n$ be two positive integers such that $k<n$ and $k$ does not divide $n$. Show that one can fill a $n \times n$ square grid with $n^2$ real numbers such that sum of numbers in an arbitrary $k \times k$ square grid is negative, but sum of all $n^2$ numbers is positive.
Source: Homework
My idea: I wrote $n = mk + r$ and put $-k^2$ into each square with coordinates of the form $(ik, jk)$, where $i, j$ are positive integers, and $1$ into all other squares. Then the sum of each $k \times k$ square is $-1$ and the sum of all $n^2$ numbers is $n^2 - m^2(k^2+1)$. But this doesn't work for large $m$.
|
This can be done so that all rows are the same. Here's the idea of my solution: let the first column all have a value of $a > 0$, to be chosen later. Let the next $k - 1$ columns all have $-1$ in them. Then repeat this pattern: one column of $a$'s, $k-1$ column's of $-1$'s. Then every $k \times k$ square has exactly one column of $a$'s and the rest are $-1$'s. Thus, the sum over this square is $ ka - k(k-1)$. If we want this value to be negative, then we need $a < k - 1$, i.e. $a = k-1 - \epsilon$ for some $\epsilon > 0$, to be decided later.
Now let's look at the sum over the whole matrix. If we write $n = mk + r$ with $1 \le r \le k-1$ (possible since $k \nmid n$), then the repeating pattern gives $m+1$ columns of $a$'s and $n - m - 1$ columns of $-1$'s. The total sum is then $$n(m+1)a - (n - m - 1)n.$$
Note that $n = mk + r \implies n - m - 1 = m(k - 1) + r - 1$. Thus, we have that the sum over the whole matrix is $$n(m+1)\left((k - 1) - \epsilon\right) - \left(m(k - 1) + r - 1\right)n = n\left(k - r - (m+1)\epsilon\right).$$
Thus, if we take $\epsilon = \frac{k-r}{2(m+1)}$ (which is positive precisely because $k \nmid n$ forces $r < k$), then we have that the sum is $$n\left(k - r - (m+1)\epsilon\right) = \frac{n(k-r)}{2} > 0$$
as desired.
EDIT: The final matrix looks like: $$ \left(\begin{array}{cccc|cccc|c|ccc}
a& -1 & \cdots & -1 &a & -1 & \cdots & -1 &\cdots &a & \cdots & -1 \\
a& -1 & \cdots & -1 &a & -1 & \cdots & -1 &\cdots &a & \cdots & -1 \\
\vdots& \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots &\ddots &\vdots & \ddots & \vdots \\
a& -1 & \cdots & -1 &a & -1 & \cdots & -1 &\cdots &a & \cdots & -1 \\
\end{array} \right)$$
where $a = k-1 -\frac{k-r}{2(m+1)}$. In each full block I have one column of $a$'s and $(k-1)$ columns of $-1$'s; the last, partial block of $r$ columns consists of one column of $a$'s followed by $r-1$ columns of $-1$'s. Note that if I multiply the whole matrix by $2(m+1)$, then it has integer entries.
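A quick computational sanity check of a construction of this type (a sketch; note that the repeating one-$a$-then-$(k-1)$-minus-ones pattern produces $m+1$ columns of $a$'s when $k\nmid n$, so the margin is taken here as $\epsilon = \frac{k-r}{2(m+1)}$), run with the hypothetical values $k=3$, $n=7$:

```python
k, n = 3, 7
m, r = divmod(n, k)            # n = m*k + r with 0 < r < k, since k does not divide n
eps = (k - r) / (2 * (m + 1))
a = (k - 1) - eps              # value in the "a" columns; -1 everywhere else

# identical rows: column j holds a iff j starts a length-k block
row = [a if j % k == 0 else -1.0 for j in range(n)]
grid = [row[:] for _ in range(n)]

total = sum(sum(rw) for rw in grid)
window_sums = [
    sum(grid[i + di][j + dj] for di in range(k) for dj in range(k))
    for i in range(n - k + 1)
    for j in range(n - k + 1)
]

print(total > 0)                        # overall sum is positive
print(all(s < 0 for s in window_sums))  # every k x k square sums to a negative number
```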
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
representation of a complex number in polar from
Write the following in polar form: $\frac{1+\sqrt{3}i}{1-\sqrt{3}i}$
$$\frac{1+\sqrt{3}i}{1-\sqrt{3}i}=\frac{1+\sqrt{3}i}{1-\sqrt{3}i}\cdot\frac{1+\sqrt{3}i}{1+\sqrt{3}i}=\frac{(1+\sqrt{3}i)^2}{1^2+(\sqrt{3})^2}=\frac{1+2\sqrt{3}i-3}{4}=\frac{-2+2\sqrt{3}i}{4}=-\frac{1}{2}+\frac{\sqrt{3}}{2}i$$
$|z|=r=\sqrt{(-\frac{1}{2})^2+(\frac{\sqrt{3}}{2})^2}=\sqrt{\frac{1}{4}+\frac{3}{4}}=1$
arg$z$=$\tan^{-1}(\frac{\sqrt{3}}{2}\cdot (-2))=-\frac{\pi}{3}$
according to Wolfram $\theta=0$
|
HINT:
$$\frac{1+i\sqrt{3}}{1-i\sqrt{3}}=$$
$$\left|\frac{1+i\sqrt{3}}{1-i\sqrt{3}}\right|e^{\arg\left(\frac{1+i\sqrt{3}}{1-i\sqrt{3}}\right)i}=$$
$$\frac{|1+i\sqrt{3}|}{|1-i\sqrt{3}|}e^{\left(\arg\left(1+i\sqrt{3}\right)-\arg\left(1-i\sqrt{3}\right)\right)i}=$$
$$\frac{\sqrt{1^2+\left(\sqrt{3}\right)^2}}{\sqrt{1^2+\left(\sqrt{3}\right)^2}}e^{\left(\tan^{-1}\left(\frac{\sqrt{3}}{1}\right)--\tan^{-1}\left(\frac{\sqrt{3}}{1}\right)\right)i}=$$
$$\frac{\sqrt{4}}{\sqrt{4}}e^{\left(\tan^{-1}\left(\frac{\sqrt{3}}{1}\right)+\tan^{-1}\left(\frac{\sqrt{3}}{1}\right)\right)i}=$$
$$\frac{2}{2}e^{\frac{2\pi}{3}i}=$$
$$1\cdot e^{\frac{2\pi}{3}i}=$$
$$e^{\frac{2\pi}{3}i}$$
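As a numerical cross-check (and to see where the arctangent slip in the question comes from): the quotient equals $-\frac12+\frac{\sqrt3}{2}i$, a point in the second quadrant, so its argument is $\pi-\frac{\pi}{3}=\frac{2\pi}{3}$ rather than $-\frac{\pi}{3}$. A quick sketch:

```python
import cmath
import math

z = (1 + math.sqrt(3) * 1j) / (1 - math.sqrt(3) * 1j)
print(abs(abs(z) - 1) < 1e-12)                        # |z| = 1
print(abs(cmath.phase(z) - 2 * math.pi / 3) < 1e-12)  # arg z = 2*pi/3, not -pi/3
print(abs(z - cmath.exp(2j * math.pi / 3)) < 1e-12)   # z = e^{2*pi*i/3}
```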
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Find $\ \limsup_{n\to \infty}(\frac{2^n}{n!}) $ Find$\ \limsup_{n\to \infty}(\frac{2^n}{n!}) $
The following 2 facts have already been proven.
1. $\ 2^n < n! $ for $n \ge 4$ (Proof by induction)
2. $\ \frac{2^{n+1}}{(n+1)!} < \frac{2^n}{n!} $ for $\ n\ge 4 $
It would be sufficient to show that $\ \lim_{n\to \infty} \frac{2^n}{n!} = 0 $
However, what are some ways to show this?
I have found a proof using the definition of convergence.
(For all $\ \epsilon > 0 $, Find N such that if n >= N, $\ |\frac{2^n}{n!} - 0| < \epsilon $)
However, I was wondering if there was a simpler proof.
Thanks!
|
This might be a little simpler: For large $n,$
$$\frac{2^n}{n!} = \frac{2}{n}\frac{2}{n-1}\cdots \frac{2}{3}\frac{2}{2}\frac{2}{1}\le \frac{4}{n(n-1)}\cdot 2 = \frac{8}{n(n-1)}\to 0.$$
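The inequality chain can be checked exactly with rational arithmetic; a quick sketch:

```python
from fractions import Fraction
from math import factorial

# exact check of 2^n/n! <= 8/(n(n-1)); equality holds at n = 3 and n = 4
ok = all(
    Fraction(2**n, factorial(n)) <= Fraction(8, n * (n - 1))
    for n in range(3, 200)
)
print(ok)  # True, and the right-hand side clearly tends to 0
```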
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
What would this region even look like and how can I sketch it? I need to sketch the following region in $\mathbb{R^3}$.
$$D=\{(x,y,z) : 0 \leq z \leq 1-|x|-|y|\}$$
I really have no idea how to go about sketching this and I can never visualise in 3D.
I have tried letting $x,y,z=0$ and seeing if that can help me see what it looks like but I'm very stuck.
Thank you.
|
If you are comfortable sketching functions in the plane, first think in horizontal planes at different heights. This amounts to fixing $z = h$ (a given height) and letting $x$ and $y$ vary.
At height $h$ what you see is the plot of the function $h = 1 - |x| - |y|$, that is, you should plot
$$|y| = 1-h -|x|$$
$$y = \pm (1-h -|x|).$$
With the plus sign in front, this looks like a "$\wedge$" character; with the minus sign in front, it looks like a "$\lor$" character.
Note that the $\pm$ equation only makes sense as long as the right hand side is positive in the first equation.
If you take that into account (you can do the calculations) you will finally have a diamond shape with vertices at $1-h$ to the north, south, east and west of the origin in the $x,y$ plane. The diamond gets smaller as the height increases, finally becoming a point when $h=1$.
This description corresponds to the graph of $z=1-|x|-|y|$, but you need $0\leq z\leq 1-|x|-|y|$. This means that for each $x,y$ your set includes all points with height between the base $z=0$ and the "diamond shaped hat" just described. This furnishes a solid pyramid with a diamond (square) base.
The functions in each quadrant $\{x>0, y>0\}$, $\{x<0, y>0\}$, etc. are planes, so that the pyramid will have a linear or planar vertical profile in any direction. (Looking at it from the side, $y=0$, for example, gives $z\leq 1-|x|$)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Number of words with length 3 with certain restrictions In today's quiz (about combinatorics) there was a question I was not able to solve mathematically:
Find the number of words with length 3 over an alphabet $ \Sigma = \{1,2,...,10\} $ where $x \lt y \le z$. ($x$ is the first letter, and so on).
How can I solve this in an efficient way? Of course you can just count it like:
$9 \times 10 \times 10 \rightarrow$ 1 possiblity
$8 \times \{9,10\} \times \{9,10,10\} \rightarrow$ 2 + 1 possibilities
....
But how do I solve it with "combinatoric" methods?
|
Think about it like this: If we had to choose $a,b,c$ from $\{1,2,\ldots,10\}$ so that $a < b < c$, we could just choose a subset of $3$ elements from $\{1,2,\ldots,10\}$ and sort them, giving $a < b < c$. The number of such subsets is $\binom{10}{3}$. Now, since we have $x < y \leq z$, we have $x < y < (z + 1)$; we may thus choose $3$ distinct elements from $\{1,2,\ldots,11\}$ (the three elements are $x$, $y$, and $z+1$). We thus have $\binom{11}{3}$ ways to do so, yielding $\binom{11}{3}$ as our answer. In other words, counting $x < y \leq z$ from $\{1,2,\ldots ,10\}$ is the same as counting $x < y < (z + 1)$ from $\{1,2,\ldots,11\}$.
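A brute-force sketch confirming the count:

```python
from itertools import product
from math import comb

# count triples (x, y, z) in {1,...,10}^3 with x < y <= z
count = sum(1 for x, y, z in product(range(1, 11), repeat=3) if x < y <= z)
print(count, comb(11, 3))  # both 165
```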
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
I know R is closed and open but is it a closed and open interval? I know $\mathbb{R}$ is open and closed but is this the same as saying it's an open interval and a closed interval?
|
This really depends on your working definition of "interval" — that is, your textbook's definition or your teacher's.
If "interval" is defined as:
1. $I \subseteq \mathbb{R}$ is an interval $\iff$ there are $a, b \in \mathbb{R}$ such that $I$ is one of $[a,b], [a,b), (a,b]$ or $(a,b)$,
then No, $\mathbb{R}$ is not an interval.
However, if "interval" is defined as:
2. $I \subseteq \mathbb{R}$ is an interval $\iff$ for all $a, b \in \mathbb{R}$ with $a \le b$, if $a, b \in I$ then $[a, b] \subseteq I$,
then Yes, $\mathbb{R}$ is an interval.
NOTES
1. According to 2., not just $\mathbb{R}$ itself but also all half-unbounded "rays" like $(-\infty, a)$ and $[b, +\infty)$ are intervals. According to 1., however, they are not.
2. Definition 1. is more restrictive. BUT, if 1. is relaxed to allow $a, b \in \overline {\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$, then the resulting definition is equivalent to 2.
In any case, $\mathbb{R}$ is not a closed interval, because any "closed interval" contains its inf and sup; however, the extended real $-\infty = \inf(\mathbb{R})$ is not an element of $\mathbb{R}$ (and similarly for $\sup$ and $+\infty$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1489883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Represent $ f(x) = 1/x $ as a power series around $ x = 1 $ As stated on the title, my question is: (a) represent the function $ f(x) = 1/x $ as a power series around $ x = 1 $. (b) represent the function $ f(x) = \ln (x) $ as a power series around $ x = 1 $.
Here's what I tried:
(a) We can rewrite $ 1/x $ as $ \frac{1}{1 - (1-x)} $ and thus using the series $ \frac{1}{1-k} = \sum_{n=0}^\infty k^n, |k| < 1 $, we can write that:
$ \frac{1}{x} = \frac{1}{1 - (1-x)} = \sum_{n=0}^\infty (1-x)^n, |1-x| < 1 $
I have a doubt because when I type "power series of 1/x when x = 1" on WolframAlpha the result is $ \sum_{n=1}^\infty (-1)^n \cdot (-1+x)^n $.
Am I wrong?
(b) Since $ (\ln (x))' = \frac{1}{x} $, all I have to do is integrate both sides of (a)' answer:
$ \int \frac{1}{x} dx = \int \sum_{n=0}^\infty (1-x)^n dx \therefore \ln(x) = \sum_{n=0}^\infty \frac{(1-x)^{n+1}}{n+1} + C $ and by putting $ x = 1 $ we get $ C =0 $ and thus $ \ln(x) = \sum_{n=0}^\infty \frac{(1-x)^{n+1}}{n+1} $.
Are my answers correct?
Really appreciate the help.
Have a good night, mates.
|
$$
\begin{align}
\frac1x
&=\frac1{1+(x-1)}\\
&=1-(x-1)+(x-1)^2-(x-1)^3+\dots\\
&=\sum_{k=0}^\infty(-1)^k(x-1)^k
\end{align}
$$
(a) You are correct; your series is the same as mine, however, usually we expand in powers of $(x-a)^n$.
(b) integrating $\frac1t$ between $t=1$ and $t=x$ gives
$$
\begin{align}
\log(x)
&=\sum_{k=0}^\infty\frac{(-1)^k}{k+1}(x-1)^{k+1}\\
&=\sum_{k=1}^\infty\frac{(-1)^{k-1}}k(x-1)^k
\end{align}
$$
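As a quick numerical check (a sketch, not part of the derivation), the partial sums of both series match $1/x$ and $\log x$ near $x=1$:

```python
import math

def log_series(x, terms=80):
    # sum_{k>=1} (-1)^(k-1) (x-1)^k / k
    return sum((-1) ** (k - 1) * (x - 1) ** k / k for k in range(1, terms + 1))

def recip_series(x, terms=80):
    # sum_{k>=0} (-1)^k (x-1)^k
    return sum((-1) ** k * (x - 1) ** k for k in range(terms))

for x in (0.7, 1.0, 1.5):
    assert abs(log_series(x) - math.log(x)) < 1e-9
    assert abs(recip_series(x) - 1 / x) < 1e-9
print("partial sums agree with log(x) and 1/x for |x - 1| < 1")
```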
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Taylor Series Expansion of $ f(x) = \sqrt{x} $ around $ a = 4 $ guys.
Here's the exercise: find a series representation for the function $ f(x) = \sqrt{x} $ around $ a = 4 $ and find it's radius of convergence.
My doubt is on the first part: I can't seem to find a pattern.
$ \circ f(x) = \sqrt x \rightarrow f(4) = 2 \\ \circ f'(x) = \frac{1}{2} \cdot \frac{1}{\sqrt x} \rightarrow f'(4) = \frac{1}{4} \\ \circ f''(x) = \frac{1}{2} \cdot \left( -\frac{1}{2} \right) \cdot x^{-\frac{3}{2}} \rightarrow f''(4) = \frac{-1}{4 \cdot 8} = \frac{-1}{4 \cdot (2 \cdot 4^1)} = \frac{-1}{2 \cdot 4^2} \\\\ \circ f^{(3)}(x) = -\frac{1}{4} \cdot \left(- \frac{3}{2} \right) \cdot x^{-\frac{5}{2}} \rightarrow f^{(3)}(4) = \frac{3}{4 \cdot 32} = \frac{3}{4 \cdot (4^2 \cdot 2)} = \frac{3}{4^3 \cdot 2} \\\\ \circ f^{(4)}(x) = \frac{3}{8} \cdot \left(- \frac{5}{2} \right) \cdot x^{-\frac{7}{2}} \rightarrow f^{(4)}(4) = \frac{-3 \cdot 5}{4 \cdot 128} = -\frac{3 \cdot 5}{4 \cdot (4^3 \cdot 2)} = -\frac{3 \cdot 5}{4^4 \cdot 2} $
hmm.. Can I say then that:
$ f(x) = 2 + \frac{(x-4)}{4} + \frac{1}{2} \cdot \sum_{n=2}^{\infty} \frac{(-1)^{n-1} \cdot (1 \cdot 3 \cdot 5 \dots (2n-3)) \cdot (x-4)^n}{4^n} $ and using the ratio test:
$ \lim_{n \to \infty} \left| \frac{(x-4)^{n+1} \cdot (2n-1)}{4^{n+1}} \cdot \frac{4^n}{(x-4)^n \cdot (2n-3)} \right| = \frac{|x-4|}{4} \rightarrow -4 < x-4 < 4 \therefore 0 < x < 8 $ and thus R = 4.
But here's the thing: the answer on the book is:
$ 2 + \frac{(x-4)}{4} + 2 \cdot \sum_{n=2}^{\infty} (-1)^{n-1} \cdot \frac{1 \cdot 3 \cdot 5 \dots (2n-3) \cdot (x-4)^n}{2 \cdot 4 \cdot 6 \dots (2n) \cdot 4^n} $
Am I missing something?
Thanks in advance, guys!
|
You were on the right track. We have for $n\ge 2$
$$\begin{align}
f^{(n)}(x)&=(-1)^{n-1}\frac12 \frac12 \frac32 \frac52 \cdots \frac{2n-3}{2}x^{-(2n-1)/2}\\\\
&=(-1)^{n-1}\frac{(2n-3)!!}{2^n}x^{-(2n-1)/2}\tag 1
\end{align}$$
where $(2n-3)!! = 1\cdot 3\cdot 5\cdots (2n-3)$ is the double factorial of $2n-3$. Evaluating $(1)$ at $x=4$ yields
$$f^{(n)}(4)=(-1)^{n-1}\frac{2(2n-3)!!}{2^n4^n}$$
Therefore, we have
$$\begin{align}
f(x)&=f(4)+f'(4)(x-4)+2\sum_{n=2}^{\infty}(-1)^{n-1}\frac{(2n-3)!!}{(2^n)(4^n)n!}(x-4)^n\\\\
&=2+\frac{(x-4)}{4}+2\sum_{n=2}^{\infty}(-1)^{n-1}\frac{(2n-3)!!}{(2n)!!(4^n)}(x-4)^n\\\\
\end{align}$$
where $(2^n)n!=(2n)!!=2\cdot 4\cdot 6\cdots 2n$ and again $(2n-3)!!=1\cdot3\cdot 5\cdots(2n-3)$. And we are done!
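As a quick sanity check of the final formula (a sketch; the double factorials are built up iteratively), the partial sums converge to $\sqrt{x}$ for $|x-4|<4$:

```python
import math

def sqrt_series(x, terms=60):
    """Partial sum of the Taylor series of sqrt(x) about a = 4."""
    s = 2.0 + (x - 4) / 4.0
    odd, even = 1, 8          # (2n-3)!! and (2n)!! starting at n = 2
    for n in range(2, terms + 1):
        s += 2 * (-1) ** (n - 1) * odd / (even * 4**n) * (x - 4) ** n
        odd *= 2 * n - 1      # (2(n+1)-3)!! = (2n-3)!! * (2n-1)
        even *= 2 * n + 2     # (2(n+1))!!  = (2n)!!   * (2n+2)
    return s

for x in (3.0, 4.0, 5.0, 6.5):
    assert abs(sqrt_series(x) - math.sqrt(x)) < 1e-8
print("series matches sqrt(x) inside the radius of convergence |x - 4| < 4")
```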
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the probability that $A$ occurs before $B$ Given an experiment has $3$ possible outcomes $A, B$ and $C$ with respective probabilities $p, q$ and $r$ where $p + q + r = 1.$ The experiment is repeated until either outcome $A$ or outcome $B$ occurs. How can I show that $A$ occurs before B with probability $\dfrac{p}{p+q}$?
|
Let $a$ denote the probability that $A$ occurs before $B$. Then, conditioning on the outcome of the first trial and using independence of the trials, we see that $$a = p + ra,$$
which yields
$$a = \frac p{1-r}= \frac p{p+q}. $$
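The same value drops out of summing over the number of initial $C$'s directly: $P(A\text{ first}) = \sum_{j\ge 0} r^j p = \frac{p}{1-r}$. A quick numerical sketch (with hypothetical values $p=0.2$, $q=0.3$, $r=0.5$):

```python
p, q, r = 0.2, 0.3, 0.5  # p + q + r = 1

# P(A before B) = sum_j P(j C's, then A) = sum_j r^j * p
series = sum(r**j * p for j in range(200))
print(series, p / (p + q))  # both approximately 0.4
```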
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Find the area of the triangle There are two points $N$ and $M$ on the sides $AB$ and $BC$ of the triangle $ABC$ respectively. The lines $AM$ and $CN$ intersect at point $P$. Find the area of the triangle $ABC$, if areas of triangles $ANP, CMP, CPA$ are $6,8,7$ respectively.
|
From the hypothesis, it follows that
1. $CN:CP = area(\triangle CAN):area(\triangle CPA)=13:7$
2. $MP:MA = area(\triangle CMP):area(\triangle CMA)=8:15$
Applying Menelaus' Theorem to $\triangle APN$ with $B, M,C$ we have that
$$\frac{BN}{BA}\cdot \frac{CN}{CP}\cdot \frac{MP}{MA}=1,\tag{3}$$
(note that we use non-directed length here). So
$$\frac{area(\triangle ABC)}{area(\triangle BCN)}=\frac{BA}{BN}=\frac{CP}{CN}\cdot \frac{MA}{MP}=\frac{7}{13}\cdot\frac{15}{8}=\frac{105}{104}.$$
It follows that $$area(\triangle ABC)=105\times area(\triangle ACN)=\color{red}{1365}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Does this integral have a closed form? $\int^{\infty}_{-\infty} \frac{e^{ikx} dx}{1-\mu e^{-x^2}}$ The original integral contains $\sin [n(x+a)] \sin [l(x+a)]$ but I think this form is simpler:
$$\int^{\infty}_{-\infty} \frac{e^{ikx} dx}{1-\mu e^{-x^2}}$$
$\mu <1$, so I can use the Taylor expansion to calculate the integral with very good accuracy.
$$\frac{1}{1-\mu e^{-x^2}}=1+\sum^{\infty}_{j=1} \mu^j e^{-jx^2}$$
And then of course I complete the square in the exponent and it becomes Poisson integral.
The series will be:
$$\sum^{\infty}_{j=1}\frac{ \mu^j}{\sqrt{j}} e^{-b^2/j}$$
($b$ depends on $k$, but in the original expression it is more complex, so its exact form doesn't matter, it's sufficient that it is real. )
If this series or the original integral have a closed form it will be more convenient. I wasn't able to find it in the literature.
|
We have:
$$\begin{eqnarray*}\int_{-\infty}^{+\infty}\frac{e^{ikx}}{1-\mu e^{-x^2}}\,dx = \int_{-\infty}^{+\infty}\frac{\cos(kx)}{1-\mu e^{-x^2}}\,dx &=& 2\sum_{r=1}^{+\infty}\mu^r\int_{0}^{+\infty}\cos(kx)e^{-rx^2}\,dx\\&=&2\sum_{r\geq 1}\mu^r \cdot\frac{1}{2}\sqrt{\frac{\pi}{r}}\,e^{-\frac{k^2}{4r}}\\&=&\sqrt{\pi}\sum_{r\geq 1}\frac{\mu^r}{\sqrt{r}}\,e^{-\frac{k^2}{4r}}\end{eqnarray*}$$
(the $r=0$ term of the geometric series, the constant $1$, is discarded here: for $k\neq 0$ it only contributes the distribution $2\pi\delta(k)$)
that can be re-written through the Poisson summation formula, but does not have a "nice" closed form.
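As a sanity check (a sketch; note that $\int_0^\infty \cos(kx)e^{-rx^2}\,dx=\frac12\sqrt{\pi/r}\,e^{-k^2/(4r)}$, so the overall prefactor of the series works out to $\sqrt{\pi}$, and the divergent constant term of the geometric expansion is subtracted before integrating numerically):

```python
import math

mu, k = 0.5, 1.0

# numerically integrate cos(kx) * (1/(1 - mu*e^{-x^2}) - 1); the constant 1
# (the r = 0 term of the geometric series) is subtracted, since for k != 0
# it only contributes a delta distribution at k = 0
f = lambda x: math.cos(k * x) * (1.0 / (1.0 - mu * math.exp(-x * x)) - 1.0)
h, L = 5e-4, 8.0
xs = [-L + i * h for i in range(int(round(2 * L / h)) + 1)]
integral = h * (sum(f(x) for x in xs) - 0.5 * (f(xs[0]) + f(xs[-1])))

# term-by-term Gaussian integrals: sqrt(pi) * sum_r mu^r r^{-1/2} e^{-k^2/(4r)}
series = math.sqrt(math.pi) * sum(
    mu**r / math.sqrt(r) * math.exp(-k * k / (4 * r)) for r in range(1, 300)
)

print(abs(integral - series) < 1e-5)  # the two computations agree
```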
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why E is the same for hypergeometric and binomial laws I'm confused a bit, not sure if it's tiredness or something but...
Why is the mathematical expectation of the hypergeometric law the same as the binomial one?
After all, the hypergeometric law corresponds to sampling without replacement, while the binomial one is with replacement.
In my book it's justified quickly this way:
$$
E[V] = E[X_{1} + X_{2} + ... + X_{n}]
= E[X_{1}]+E[X_{2}]+...+E[X_{n}]
= \frac{N}{N+M}+\frac{N}{N+M}+...+\frac{N}{N+M} = n \frac{N}{N+M}
$$
Say we have $N$ black balls and $M$ white balls; then that would mean the probability of picking a black ball is the same at the $i$th sampling as it is at the first?! But how can the denominator still be $N+M$ when in fact at the 2nd sampling there are $N+M-1$ balls in the box?
Shouldn't the probability differ depending on the sampling iteration? I thought it was a conditional case...
I thought the last equality would more resemble:
$$
E = \frac{N}{N+M} + \frac{N-1}{N+M-1} + ... + \frac{N-k}{N+M-k}
$$
Maybe someone could shed some light on this for me?
Why is the expectation $np$ for $n$ samplings without replacement?
Thanks!
|
This can be regarded as a consequence of linearity of expectation, even for non-independent random variables. The marginal (i.e. unconditional) probability for each ball remains the same even without replacement.
For example, the first ball has a probability $\dfrac{N}{N+M}$ of being black and $\dfrac{M}{N+M}$ of being white. So the second ball has a probability $\dfrac{N-1}{N+M-1} \times \dfrac{N}{N+M}+\dfrac{N}{N+M-1} \times \dfrac{M}{N+M} = \dfrac{N}{N+M}$ of being black, since the first ball can be black or white, and something similar is true for all the later balls.
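This "same marginal at every draw" claim can be checked exactly by brute force; here is a small sketch (with hypothetical values $N=3$, $M=2$) enumerating all equally likely drawing orders:

```python
from fractions import Fraction
from itertools import permutations

N, M = 3, 2
balls = ["B"] * N + ["W"] * M

total = 0
black_at = [0] * (N + M)
for order in permutations(range(N + M)):  # every drawing order, equally likely
    total += 1
    for i, idx in enumerate(order):
        if balls[idx] == "B":
            black_at[i] += 1

marginals = [Fraction(c, total) for c in black_at]
print(marginals)  # the same marginal N/(N+M) = 3/5 at every draw
```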
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Using $dx$ for $h$. Is it mathematically correct to write
$$f'(x)=\lim_{dx\to0}\frac{f(x+dx)-f(x)}{dx},$$
rather than
$$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}?$$
If not, what is the difference? If so, why isn't this notation used from the beginning? My feeling about the $dx$ notation is that it would align the derivative more with the inverse of the indefinite integral.
|
No, it doesn't matter. As pointed out, what you write doesn't matter as long as the concept gets across: $dx$, $f$, $john$, $\oplus$, you can use any of them. $h$ is just the common one, used because everyone recognises it; but in certain contexts, to be clear, you must use other letters when you're doing more than one thing at once.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
if $(X,A)$ has homotopy extension, so does $(X \cup CA,CA)$ I guess I could use the property: The homotopy extension property for $(X,A)$ is equivalent to $X\times \{0\}\cup A\times I\ $ being a retract of $X\times I$.
Then there is a retraction:$$X\times I\rightarrow X\times \{0\}\cup A\times I$$
I need to show that there is a retraction$$(X\cup CA)\times I\rightarrow (X\cup CA)\times \{0\}\cup CA\times I$$as well. But I don't know what to do next. Thank you for your time
|
Putting it more generally, if $(X,A)$ has the HEP, and $f:A \to B$ is a map, then $(B\cup_f X,B)$ has the HEP. The proof is analogous to that of Stefan, which is the case $B=CA$.
You need the method of defining homotopies on adjunction spaces, which again uses the local compactness of $I$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proof of the hockey stick/Zhu Shijie identity $\sum\limits_{t=0}^n \binom tk = \binom{n+1}{k+1}$ After reading this question, the most popular answer use the identity
$$\sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1},$$
or, what is equivalent,
$$\sum_{t=k}^n \binom{t}{k} = \binom{n+1}{k+1}.$$
What's the name of this identity? Is it a modified form of the Pascal's triangle identity?
How can we prove it? I tried by induction, but without success. Can we also prove it algebraically?
Thanks for your help.
EDIT 01 : This identity is known as the hockey-stick identity because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a hockey-stick shape is revealed.
|
This is essentially the same as the induction answer already mentioned, but it brings a pictorial perspective so I thought to add it as an alternative answer here.
Here's a restatement of the identity (which you can verify to be equivalent easily): On Pascal's triangle, start from a number (one of the $1$s) on the left edge and move diagonally rightwards and downwards, summing the terms as we go along. We can decide to stop at any point, and the sum of all these terms will be the number directly below and to the left of the final summand.
This actually trivialises the proof of the identity. Note that if we decided to add one more term to the sum (the term to the right of the current sum), on the one hand this "lengthens" the stick by $1$ tile, but on the other hand it adds the term adjacent to the sum---which by definition of Pascal's triangle, produces the number in the tile directly below and horizontally in between the sum and this new term. This can be rigorously translated to the inductive step in a formal induction proof.
To illustrate, let's refer to the picture in the question, and focus on the yellow hexagonal tiles. (Note that this is a reflected case of what I described above since it starts from the right edge, but this doesn't affect the discussion.) Currently, we have $1+6+21+56=84$, which is a true identity. If I added $126$ to the LHS, I would also be adding $126$ to the RHS, which by definition gives us $210$, the term below and in between them on Pascal's triangle. Once you really convince yourself of the validity of this argument, the (formal) proof of the identity should come naturally!
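A quick computational check of the identity over a range of $n$ and $k$:

```python
from math import comb

# hockey-stick identity: sum_{t=k}^{n} C(t, k) == C(n+1, k+1)
ok = all(
    sum(comb(t, k) for t in range(k, n + 1)) == comb(n + 1, k + 1)
    for n in range(0, 30)
    for k in range(0, n + 1)
)
print(ok)  # True
```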
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73",
"answer_count": 18,
"answer_id": 15
}
|
Finding a triangle ratio.
In the triangle ABC, the point P is found on the side AB. AC = 6 cm, AP = 4 cm, AB = 9 cm. Calculate BC:CP.
For some reason, I cannot get this even though I tried for half an hour.
I got that $BC/CP < 17/10 = 1.7$ by the triangle inequality, and $AP/PB = 4/5$.
But that does not help one bit; I'm very stuck!
|
Let $\angle{BAC}=\theta$. Then, by the law of cosines,
$$\begin{align}BC:CP&=\sqrt{9^2+6^2-2\cdot 9\cdot 6\cos\theta}:\sqrt{4^2+6^2-2\cdot 4\cdot 6\cos\theta}\\&=\sqrt{9(13-12\cos\theta)}:\sqrt{4(13-12\cos\theta)}\\&=\color{red}{3:2}\end{align}$$
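A quick numerical confirmation that the $13-12\cos\theta$ factor cancels for any angle (the angle values below are arbitrary):

```python
from math import sqrt, cos

# BC/CP should equal 3/2 for any angle theta = angle BAC
for theta in [0.3, 0.7, 1.2, 2.0]:
    BC = sqrt(9**2 + 6**2 - 2 * 9 * 6 * cos(theta))
    CP = sqrt(4**2 + 6**2 - 2 * 4 * 6 * cos(theta))
    assert abs(BC / CP - 1.5) < 1e-12
```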
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1490876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Probabilistic approach to prove a graph theory theorem A theorem in graph theory is as the following,
Let $G=(V,E)$ be a finite graph where $V$ is the set of vertices and $E$ is the set of edges. Then there exists a vertex subset $W$, i.e. $W \subset V$, such that the number of edges connecting $W$ and $W^C$ is at least $\frac{|E|}{2}$, where $W^C = V\backslash W$ and $|E|$ is the total number of edges in the graph $G$.
The question is to prove this theorem by a probabilistic approach.
My idea is as follows:
Let $|V|=n,|W|=m$, then the maximum number of possible edges between $W,W^C$ is $m\times(n-m)$. The maximum number of possible edges in graph $G$ is $C_n^2 = \frac{n!}{2!\times(n-2)!}$. We can treat the edges as if they were randomly scattered in the $C_n^2$ positions, and with probability $p=\frac{m\times(n-m)}{C_n^2}$ one edge would connect $W,W^C$.
Then it is like a Bernoulli trial of $|E|$ times with success probability $p=\frac{m\times(n-m)}{C_n^2}$, and the probability there are at least $\frac{|E|}{2}$ edges connecting $W,W^C$ is $\sum\limits_{k = \left\lceil {\frac{1}{2}|E|} \right\rceil }^{|E|} {C_{|E|}^k{p^k}{{(1 - p)}^{|E| - k}}} $. But this probability is for a particular $W$. We need to show with probability there exists one or more such $W$ satisfying the conditions of the above-mentioned theorem. I got stuck here. Anyone can help with this proof? Thank you!
|
Take the graph $G = (V,E)$ and for each vertex, randomly select whether it will lie in $W$ with independent probability $p = \frac{1}{2}$. For each edge $e_i \in E$, define the random variable $X_i$, which is $1$ if $e_i$ connects $W$ to $W^C$, and $0$ otherwise. Then, $\mathbb{P}(X_i = 1) = \frac{1}{2}$, and $\mathbb{E}(X_i) = \frac{1}{2}$.
By linearity of expectation: $$\mathbb{E}(\text{number of edges connecting $W$ and $W^C$}) = \mathbb{E}\left(\sum X_i\right) = \sum \mathbb{E}(X_i) = \sum \frac{1}{2} = \frac{|E|}{2}$$
Since the expected number of edges crossing the cut is $\frac{|E|}{2}$, there must be at least one choice of $W$ that has at least $\frac{|E|}{2}$ edges crossing the cut.
You can see that the claim holds with certainty because if you assume that all possible choices for $W$ have fewer than $\frac{|E|}{2}$ edges crossing the cut, then we get the contradiction:
$$\mathbb{E}(\text{number of edges connecting $W$ and $W^C$}) < \frac{|E|}{2}$$
ETA: To those who are still confused - here is a small explicit example:
Suppose $V = \{v_1, v_2\}$ and $E = \{(v_1,v_2)\}$.
We consider all possible choices for $W$, where vertices are independently placed in $W$ with probability $\frac{1}{2}$:
*
*$W = \emptyset$. No edges cross the cut. This has probability $\mathbb{P}(v_1 \notin W \cap v_2 \notin W) = \frac{1}{2}\cdot \frac{1}{2} = \frac{1}{4}$.
*$W = \{v_1\}$. One edge crosses the cut. This has probability $\mathbb{P}(v_1 \in W \cap v_2 \notin W) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$.
*$W = \{v_2\}$. One edge crosses the cut. Again, this has probability $\mathbb{P}(v_1 \notin W \cap v_2 \in W) = \frac{1}{4}$.
*$W = \{v_1,v_2\}$. No edge crosses the cut. Again, probability is $\frac{1}{4}$.
Then, $\mathbb{P}((v_1,v_2) \textrm{ crosses the cut}) = \frac{1}{2}$ and the expected number of crossing edges is $$0\cdot \frac{1}{4} + 1\cdot \frac{1}{4}+1\cdot \frac{1}{4}+0\cdot \frac{1}{4} = \frac{1}{2} = \frac{|E|}{2}$$
If all of the possibilities had fewer than $\frac{|E|}{2}$ edges crossing the cut, the expectation would have to be less than $\frac{|E|}{2}$. Therefore, there must be at least one choice for $W$ such that at least $\frac{|E|}{2}$ edges crosses the cut. In this case, one such choice is $W = \{v_1\}$.
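The argument is nonconstructive, but for a small graph one can enumerate all $2^{|V|}$ choices of $W$ and confirm both the expectation and the existence claim. A sketch in Python (the example graph, a $5$-cycle plus a chord, is arbitrary):

```python
from itertools import product

# Arbitrary small graph: a 5-cycle plus one chord
V = range(5)
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

total_cross, best = 0, 0
subsets = list(product([0, 1], repeat=len(V)))  # bits[v] == 1  <=>  v in W
for bits in subsets:
    cross = sum(1 for u, v in E if bits[u] != bits[v])
    total_cross += cross
    best = max(best, cross)

# Average over a uniformly random W equals |E|/2, so the max is at least |E|/2
assert total_cross / len(subsets) == len(E) / 2
assert best >= len(E) / 2
```

Each edge crosses the cut in exactly half of the subsets, which is precisely the $\mathbb{P}(X_i=1)=\frac{1}{2}$ step above.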
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Is Abelian Group Subset consisting of elements of finite order a Subgroup? I know this question has been asked and answered before, so my apologies for posting a duplicate query. Instead of the solution, I'm seeking a "nudge" in the right direction to complete the proof on my own. So please, hints or criticism only. Thanks!
"Show that in an Abelian group $G$, the set consisting of all elements of $G$ of finite order is a subgroup of $G$."
Let $H\subset G$ consist of all elements of finite order. Then $H$ contains the identity element $e \in G$, since $e^1 = e$.
Next, let $a \in G$ such that $a^n = e$ for some $n \in \mathbb{Z}$. So $a \in H$, and $a^{-1} \in H$ since $(a^{-1})^n = (a^{n})^{-1} = e^{-1} = e$. Hence the order of $a^{-1}$ is finite, and $H$ is closed under inverses.
Last, let $b^{m} = e \in G$ for some $m \in \mathbb{Z}$. Then $b \in H$ and $ab \in G$. To show $ab \in H$, notice that $e = ee = a^{n}b^{m}$ and
$$a^{n}b^{m} = (ab)a^{n-1}b^{m-1}$$
since $G$ is Abelian. So $ab \in H$ and $H$ is closed under the operation of $G$. So $H$ is a subgroup of $G$.
This last part I'm nearly certain is wrong. I've tried to use the fact that $G$ is Abelian to manipulate the equation $$ab = ba$$ using left and right multiplication in as many ways as I can think of to yield a proof that the order of $ab$ is finite. As usual, I'm sure I'm missing something obvious, and would appreciate any advice pointing towards, rather than disclosing, the solution.
|
HINT: Suppose that $a,b\in H$; you need to show that $ab\in H$. There are positive integers $m,n$ such that $a^m=b^n=e$; use the fact that $G$ is Abelian to show that $(ab)^{mn}=e$ and hence that $ab\in H$. Basically this requires showing by induction that $(ab)^k=a^kb^k$ for $k\ge 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding the value of a summation Question:
For positive integers $n$, let the numbers $c(n)$ be determined by the rules $c(1)=1$, $c(2n)=c(n)$, and $c(2n+1)=(-1)^nc(n)$. Find the value of $\displaystyle\sum_{n=1}^{2013}c(n)c(n+2)$.
To solve this question, I tried to figure out some patterns.
We are given $c(1)=1$, but observe that if $n=1$ then $c(2)=c(1)$, which means $c(2)=1$ as well.
Also, $c(4)=c(2)=1$ and $c(8)=c(4)=1$, and so on: $c(16)=c(8)$, etc.
Then I tried working with the third rule which is $c(2n+1)=(-1)^n c(n)$
and I notice that if $n=1$ then $c(3)=(-1)^{1} c(1)=-1$. However, if $n=2$ then $c(2(2)+1)=c(5)=(-1)^2 c(2)=1$, so not every odd argument gives $-1$.
So I thought the signs might alternate, and I tested the third rule with $n=3$, which gives $c(7)=(-1)^3 c(3)=1$; another positive $1$.
Then I tried one more time with $n=4$, which gives $c(9)=(-1)^4 c(4)=1$, but I get $1$ again.
So does anyone have any ideas on how I can solve this problem? The pattern doesn't seem obvious to me with $c(3)$ being negative.
|
Rewrite the sum as
$$S = \sum_{m=1}^{1006}c(2m)c(2m+2) + \sum_{m=0}^{1006}c(2m+1)c(2m+3).$$
The first sum covers the even $n$; the second covers the odd $n$.
Then apply the recurrences $c(2m)=c(m)$ and $c(2m+1)=(-1)^m c(m)$, which are valid for $m \ge 1$; the $m=0$ term of the second sum is just $c(1)c(3)$, kept as-is:
$$S = \sum_{m=1}^{1006}c(m)c(m+1) + c(1)c(3) + \sum_{m=1}^{1006}(-1)^m c(m) (-1)^{m+1} c(m+1)$$
$$S = c(1)c(3) + \sum_{m=1}^{1006} [1 + (-1)^m(-1)^{m+1}] c(m)c(m+1).$$
Since $(-1)^m(-1)^{m+1} = (-1)^{2m+1} = -1$ for all integer $m$, every term in the sum vanishes and we're left with
$$S = c(1)c(3) = -1.$$
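A direct computation confirms the result. This sketch implements the recurrences for $c(n)$ from the question and evaluates the sum:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n):
    # c(1) = 1, c(2n) = c(n), c(2n+1) = (-1)^n c(n)
    if n == 1:
        return 1
    if n % 2 == 0:
        return c(n // 2)
    m = n // 2          # n = 2m + 1
    return (-1)**m * c(m)

S = sum(c(n) * c(n + 2) for n in range(1, 2014))
assert c(3) == -1
assert S == -1          # matches the closed-form reduction S = c(1)c(3)
```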
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find $2^{3^{100}}$ (mod 5) and its last digit The question is to find $2^{3^{100}} \pmod 5$ and its last digit.
I think we have to find $2 \pmod 5$ and $3^{100}\pmod 5$ separately, right?
$$2 = 2 \pmod 5$$
$$3^4 = 1 \pmod 5$$
$$3^{100} = 1 \pmod 5$$
$$2^{3^{100}} = 2^1 = 2 \pmod 5$$
Is this solution correct? And to find the last digit do we just solve modulo $10$?
|
Yes, you solve modulo $10$ for the last digit. Note that the number is even, so its last digit is one of $2, 4, 6, 8$--it's not divisible by $5$, so the last digit cannot be $0$. Your original solution is a bit off, though: while $3^4\equiv 1\mod 5$, what matters in the exponent is multiples of $4$ (the order of $2$ mod $5$), so we reduce the exponent mod $4$ using $3^2\equiv 1\mod 4$. Then $3^{100}\equiv 1\mod 4$. So
$$2^{3^{100}}=2^{4k+1}\equiv (16)^k\cdot 2\mod 5$$
which gives the same result, but by sounder methods. Since it is $2$ mod $5$ as you have said, the only choices are $2$ and $7$. $7$ is odd, so it must be $2$.
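Python's modular exponentiation handles the full tower directly, confirming both the mod-$5$ value and the last digit, as well as the exponent reduction used above:

```python
# Three-argument pow computes modular exponentiation efficiently
e = 3**100                      # ~48-digit integer, fine for Python ints
assert pow(2, e, 5) == 2        # 2^(3^100) mod 5
assert pow(2, e, 10) == 2       # last digit

# The key reduction: the order of 2 mod 5 is 4, and 3^100 ≡ 1 (mod 4)
assert pow(3, 100, 4) == 1
```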
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Intuitive understanding of the -b in this definition of a plane I am learning about SVMs in computer science. The book I'm reading defines a hyperplane with an "intercept term", b
$\vec{w}^T \vec{x}= -b$
What does this intercept mean, intuitively? From Khan Academy, I understand how the dot product of a vector on a plane, $\vec{a} $ and a normal vector to that plane $\vec{n}$, i.e. I see how $\vec{a}\cdot\vec{n}^T=0$ would define a plane. (because the normal dotted with any vector on the plane is zero).
But what is the $-b$, intuitively, in $\vec{w}^T \vec{x}= -b$?
|
That's not too complicated. Let us imagine $n=3$. Suppose we are looking for the plane whose unit normal is ${\bf{n}}$ and which passes through the point ${{{\bf{x}}_0}}$. Then the equation satisfied by all points lying on this plane is
$$\eqalign{
& {\bf{n}}.\left( {{\bf{x}} - {{\bf{x}}_0}} \right) = 0 \cr
& {\bf{n}}.{\bf{x}} - {\bf{n}}.{{\bf{x}}_0} = 0 \cr
& \left\{ \matrix{
{\bf{n}}.{\bf{x}} + b = 0 \hfill \cr
b = - {\bf{n}}.{{\bf{x}}_0} \hfill \cr} \right. \cr} $$
and hence your $b$ is minus the dot product of the unit normal and the position vector of a particular point on the plane. I emphasize that it is just a matter of notation and nothing more. See the below picture
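A small numerical illustration of $b = -\mathbf{n}\cdot\mathbf{x}_0$ (the specific vectors are arbitrary; $\mathbf{u}$ and $\mathbf{w}$ are chosen orthogonal to $\mathbf{n}$ so that $\mathbf{x}_0 + t_1\mathbf{u} + t_2\mathbf{w}$ sweeps out the plane):

```python
n  = (1.0, 2.0, -2.0)    # normal vector
x0 = (3.0, 0.0, 1.0)     # a point on the plane

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
b = -dot(n, x0)          # here b = -(3 + 0 - 2) = -1

# Two directions orthogonal to n, spanning the plane
u = (2.0, -1.0, 0.0)     # n·u = 2 - 2 + 0 = 0
w = (0.0, 1.0, 1.0)      # n·w = 0 + 2 - 2 = 0
for t1, t2 in [(0.5, -1.0), (2.0, 3.0)]:
    x = tuple(p + t1 * a + t2 * c for p, a, c in zip(x0, u, w))
    assert abs(dot(n, x) + b) < 1e-12   # every such x satisfies n·x + b = 0
```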
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $C(a) = \{r \in R : ra = ar\}$ is a subring of $R$ containing $a$. For a fixed element $a \in R$ define $C(a) = \{r \in R : ra = ar\}$. Prove that $C(a)$ is a subring of $R$ containing $a$.
Attempt: Recall that by definition, $B$ is a subring of $A$ if and only if $B$ is closed under subtraction and multiplication.
Suppose $r,c \in C(a)$. We show $C(a)$ is closed under subtraction (equivalently, closed with respect to both addition and negatives) and under multiplication.
Closed under +
$(r+c)a = ra + ca = ar + ac = a(r + c)$
Closed under multiplication:
$(rc)a = r(ca) = r(ac) = (ra)c = (ar)c = a(rc)$.
Then from the closure of addition we have $ra + ca = ar + ac$. So
$ra = ar$ and $ca = ac$, thus $a \in C(a)$ for all $r\in R$, and $-c \in C(a)$. so $C(a)$ is a subring of $R$.
Can someone please verify this? Any help or better approach would be really appreciated it ! thanks.
|
Let $c\in C (a)$. Then $ac=ca$. Now we have $-ca=-(ca)=-(ac)=-ac=a (-c)$. This shows that $-c\in C (a)$, and the above argument completes the proof.
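To see the closure properties concretely, one can test them in a noncommutative ring such as $2\times 2$ integer matrices. The particular matrices below are arbitrary examples:

```python
import numpy as np

# Fixed element a of the ring R of 2x2 integer matrices
a = np.array([[1, 1],
              [0, 1]])

def commutes(m):
    """True iff m is in C(a), i.e. ma == am."""
    return np.array_equal(m @ a, a @ m)

# Two elements of C(a): upper-triangular with equal diagonal entries
r = np.array([[2, 3],
              [0, 2]])
c = a.copy()
assert commutes(r) and commutes(c)
assert commutes(r - c)      # closed under subtraction
assert commutes(r @ c)      # closed under multiplication

# A matrix outside C(a), showing C(a) is a proper subring here
s = np.array([[0, 1],
              [1, 0]])
assert not commutes(s)
```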
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What does $P(A \triangle B)$ mean? I know it's a simple question, but it's hard to google these symbols. Anyway, anyone care to explain what $P(A \triangle B)$ means? I haven't seen this symbol before and I'm not sure how to interpret it.
|
The triangle is almost certainly being used to for symmetric difference:
$$A\mathrel{\triangle}B=(A\setminus B)\cup(B\setminus A)=(A\cup B)\setminus(A\cap B)\;.$$
The $P$ might be the power set operator, in which case the whole thing is the set of all subsets of $A\mathrel{\triangle}B$:
$$P(A\mathrel{\triangle}B)=\{X:X\subseteq A\mathrel{\triangle}B\}.$$
However, a script $P$ is more often used for this purpose, and as has just been pointed out to me, you used the probability tag, so $P(A\mathrel{\triangle}B)$ is probably the probability assigned to the subset $A\mathrel{\triangle}B$ of your sample space.
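Python sets make the two equivalent forms of the symmetric difference easy to check, along with the counting identity $|A\,\triangle\,B| = |A| + |B| - 2|A\cap B|$ that underlies $P(A\,\triangle\,B)$ for a uniform sample space (the sets below are arbitrary):

```python
# Arbitrary uniform sample space and events
omega = set(range(10))
A = {0, 1, 2, 3, 4}
B = {3, 4, 5, 6}

sym = A ^ B                      # Python's built-in symmetric difference
assert sym == (A - B) | (B - A)
assert sym == (A | B) - (A & B)
assert sym == {0, 1, 2, 5, 6}

# Counting version of P(A Δ B) = P(A) + P(B) - 2 P(A ∩ B)
assert len(sym) == len(A) + len(B) - 2 * len(A & B)
P = lambda S: len(S) / len(omega)
assert P(sym) == 0.5
```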
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
(Combinatorial) proof of an identity of McKay Lemma 2.1 of this paper claims that for integer $s>0$ and $v \in \mathbb{N}$, we have
$$
\sum_{k=1}^s \binom{2s-k}{s} \frac{k}{2s-k} v^k (v-1)^{s-k}
= v \sum_{k=0}^{s-1} \binom{2s}{k} \frac{s-k}{s} (v-1)^k
$$
The author gives a combinatorial interpretation of the left hand side in terms of closed walks on a class of graphs that are locally acyclic in a particular sense, but omits the proof of equality. I've hit a wall trying to prove this, so I was wondering what ideas others have. Combinatorial proofs welcomed!
|
Suppose we seek to verify that
$$\sum_{k=1}^n {2n-k\choose n} \frac{k}{2n-k} v^k (v-1)^{n-k}
= v \sum_{k=0}^{n-1} {2n\choose k} \frac{n-k}{n} (v-1)^k.$$
Now the LHS is
$$\frac{1}{n}\sum_{k=1}^n {2n-k-1\choose n-1} k v^k (v-1)^{n-k}.$$
Re-write this as
$$\frac{1}{n}\sum_{k=1}^n {2n-k-1\choose n-k} k v^k (v-1)^{n-k}.$$
Introduce
$${2n-k-1\choose n-k} =
\frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n-k+1}} (1+w)^{2n-k-1} \; dw.$$
Observe that this is zero when $k\gt n$ so we may extend $k$ to infinity
to obtain for the sum
$$\frac{1}{n} \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n+1}} (1+w)^{2n-1}
\sum_{k\ge 1} k v^k (v-1)^{n-k} \frac{w^k}{(1+w)^k}\; dw
\\ = \frac{(v-1)^n}{n} \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n+1}} (1+w)^{2n-1}
\frac{vw/(v-1)/(1+w)}{(1-vw/(v-1)/(1+w))^2} \; dw
\\ = \frac{(v-1)^n}{n} \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n+1}} (1+w)^{2n}
\frac{vw(v-1)}{((v-1)(1+w)-vw)^2} \; dw
\\ = v\frac{(v-1)^{n+1}}{n} \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n}} (1+w)^{2n}
\frac{1}{(-1-w+v)^2} \; dw
\\ = v\frac{(v-1)^{n-1}}{n} \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n}} (1+w)^{2n}
\frac{1}{(1-w/(v-1))^2} \; dw.$$
Extracting the coefficient we obtain
$$v\frac{(v-1)^{n-1}}{n}
\sum_{q=0}^{n-1} {2n\choose q}
\frac{(n-1-q+1)}{(v-1)^{n-1-q}}
\\ = v
\sum_{q=0}^{n-1} {2n\choose q}
\frac{(n-1-q+1)}{n} (v-1)^q
\\ = v
\sum_{q=0}^{n-1} {2n\choose q}
\frac{(n-q)}{n} (v-1)^q.$$
This concludes the argument.
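An exact-arithmetic check of the identity for small parameters (the ranges are arbitrary; `Fraction` avoids any rounding in the $k/(2s-k)$ and $(s-k)/s$ factors):

```python
from math import comb
from fractions import Fraction

def lhs(s, v):
    return sum(Fraction(comb(2*s - k, s) * k, 2*s - k) * v**k * (v - 1)**(s - k)
               for k in range(1, s + 1))

def rhs(s, v):
    return v * sum(Fraction(comb(2*s, k) * (s - k), s) * (v - 1)**k
                   for k in range(s))

for s in range(1, 8):
    for v in range(1, 6):
        assert lhs(s, v) == rhs(s, v)
```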
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Correct Terminology for Semi Inverse Mapping Suppose we have two finite sets $X$ and $Y$ and a many to one mapping $f:X\rightarrow Y$.
Now let me define another mapping $g:Y\rightarrow\mathcal{P}(X)$ where $\mathcal{P}$ denotes the power set. We define $g(y)=\{x|x\in X, f(x)=y\} \forall y$.
Can we call $g$ an inverse of $f$ in any meaningful way? I know it is stretching the definition, but is there any terminology for such a mapping?
|
More generally, there is a function $$f^{-1}:\mathcal{P}(Y) \to \mathcal{P}(X):S \mapsto \{x \in X \ \vert \ f(x) \in S\}$$ called the inverse image (or preimage) of $f$.
The map you described is the special case where you only consider singletons. This could be called the fiber map (this is not usual terminology though).
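A minimal sketch of both maps in Python (the function and sets are arbitrary examples; `preimage` and `g` are hypothetical helper names):

```python
def preimage(f, domain, S):
    """f^{-1}(S) = {x in domain : f(x) in S}."""
    return {x for x in domain if f(x) in S}

X = {-2, -1, 0, 1, 2}
f = lambda x: x * x            # many-to-one map X -> {0, 1, 4}

# The "fiber map" g(y) is the preimage of the singleton {y}
g = lambda y: preimage(f, X, {y})
assert g(4) == {-2, 2}
assert g(1) == {-1, 1}
assert g(3) == set()           # empty fiber for y not in the image
assert preimage(f, X, {1, 4}) == {-2, -1, 1, 2}
```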
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1491864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|