Prove that $x$ is a limit point of $A_1$
Let $A_m \subseteq \mathbb{R}^n$, $A_m \ne \emptyset$ and $A_{m+1} \subseteq A_m$. Suppose that $\bigcap\limits_{m=1}^{\infty} A_{m}=\emptyset$ and that $x \in \bigcap\limits_{m=1}^{\infty} \overline {A_{m}}$.
Prove that $x$ is a limit point of $A_1$.
My attempt: I'm trying to prove that $B_r^*(x)\cap A_1 \ne \emptyset$ for all $r > 0$, or that there exists $\{x_n\} \subseteq A_1\setminus\{x\}$ such that $x_n \to x$.
We have $x \in \bigcap\limits_{m=1}^{\infty} \overline {A_{m}}$ $\iff$ $x \in \overline {A_{m}}$ for all $m \in \mathbb{N}$ $\iff B_r(x)\cap A_m \ne \emptyset$ for all $m \in \mathbb{N}$ and all $r>0$.
But since $x \notin \bigcap\limits_{m=1}^{\infty} A_{m}$, there exists $m \in \mathbb{N}$ such that $x \notin A_m$. Since the sets are nested, we have $x \notin A_{m+1}$ as well, for this particular $m$.
From this, how do I get that $x \in {A_1}'$? Any help to finish this proof would be appreciated. Thank you!
| Hint
Deduce that $x \in A_{m+1}'$ (you know $x \in \overline{A_{m+1}}$ but $x \notin A_{m+1}$). Since $A_{m+1} \subseteq A_1$, you can then deduce that $x \in A_1'$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3372541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $x$ is the remainder when a multiple of $4$ is divided by $6$, and $y$ is the remainder when a multiple of $2$ is divided by $3$, maximise $x+y$. The question is: if $x$ is the remainder when a multiple of $4$ is divided by $6$, and $y$ is the remainder when a multiple of $2$ is divided by $3$, what is the greatest possible value of $x+y$?
The book says "the greatest value of $x$ is when $4$ is divided by $6$, which produces a remainder of $4$. The greatest value of $y$ is when $2$ is divided by $3$, which produces a remainder of $2$. Therefore, the greatest value of $x+y$ is $6$."
I think what's throwing me off is the phrase "multiple of $4$", because it makes me think that some multiple of $4$ can be divisible by $6$ (e.g. $24/6 = 4$). The book's answer doesn't use multiples, just the $4$ and $2$, respectively. I don't understand how this works. Can someone please clarify?
| In order to get to the book's conclusion, you can test out a few numbers:
$4$ leaves remainder $4$ when divided by $6$.
$8$ leaves remainder $2$ when divided by $6$.
$12$ leaves remainder $0$ when divided by $6$.
$16$ leaves remainder $4$ when divided by $6$.
Then you can observe the possible remainders are $0, 2$ and $4$. This is because $16$ is $12$ more than $4$, a multiple of $6$, so the remainders will follow the same pattern after $16$.
There is another way to think about this pattern. When you add $4$ to a number, the remainder will be the same as if you subtract $2$ from the number, which explains the 'decreasing by $2$ pattern'.
Now try this with small multiples of $2$ and find the remainder when they are divided by $3$. This gives the book's answer of $4 + 2 = 6$.
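A quick brute-force check in Python (a sketch; the bound of $100$ multiples is an arbitrary choice of mine) confirms the cycle of remainders and the maximum of $x+y$:

```python
# remainders of multiples of 4 upon division by 6, and of multiples of 2 by 3
x_remainders = {(4 * k) % 6 for k in range(1, 100)}
y_remainders = {(2 * k) % 3 for k in range(1, 100)}

print(sorted(x_remainders))   # the cycle {0, 2, 4} described above
print(sorted(y_remainders))   # multiples of 2 hit every residue mod 3
print(max(x_remainders) + max(y_remainders))
```

Since the remainders repeat periodically, checking a handful of multiples already covers every case.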
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3372654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Find the solution to the following differential equation: $ \frac{dy}{dx} = \frac{x - y}{xy} $ The instructor in our Differential Equations class gave us the following to solve:
$$ \frac{dy}{dx} = \frac{x - y}{xy} $$
It was an item under separable differential equations. I have gotten as far as $ \frac{dy}{dx} = \frac{1}{y} - \frac{1}{x} $ which to me doesn't really seem much. I don't even know if it really is a separable equation.
I tried treating it as a homogeneous equation, multiplying both sides with $y$ to get (Do note that I just did the following for what it's worth)...
$$ y\frac{dy}{dx} = 1 - \frac{y}{x} $$
$$ vx (v + x \frac{dv}{dx}) = 1 - v $$
$$ v^2x + vx^2 \frac{dv}{dx} = 1 - v $$
$$ vx^2 \frac{dv}{dx} = 1 - v - v^2x$$
I am unsure how to proceed at this point.
What should I first do to solve the given differential equation?
|
We write the differential equation as
\begin{align*}
xyy^\prime=x-y\tag{1}
\end{align*}
and follow the recipe I.237 in the German book Differentialgleichungen, Lösungsmethoden und Lösungen I by E. Kamke.
We consider $y$ as the independent variable, with $x=x(y)$, and use the substitution
\begin{align*}
v=v(y)=\frac{1}{y-x(y)}=\left(y-x(y)\right)^{-1}\tag{2}
\end{align*}
We obtain from (2)
\begin{align*}
v&=\frac{1}{y-x}\qquad\to\qquad x=y-\frac{1}{v}\\
v^{\prime}&=(-1)(y-x)^{-2}\left(1-x^{\prime}\right)=\left(\frac{1}{y^{\prime}}-1\right)v^2
\end{align*}
From (1), substituting $x = y - \frac{1}{v}$ from (2), we get:
\begin{align*}
\frac{1}{y^{\prime}}=\frac{xy}{x-y}=\left(y-\frac{1}{v}\right)y(-v)=y-y^2v\tag{3}
\end{align*}
Putting (2) and (3) together we get
\begin{align*}
v^{\prime}=\left(y-y^2v-1\right)v^2
\end{align*}
respectively
\begin{align*}
\color{blue}{v^{\prime}+y^2v^3-(y-1)v^2=0}\tag{4}
\end{align*}
and observe (4) is an instance of an Abel equation of the first kind.
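The reduction can be spot-checked without solving anything: once $x$ is eliminated via (2), the relation between $v'=\mathrm{d}v/\mathrm{d}y$ and the original ODE is a purely algebraic identity in $(y,v)$, namely equation (4). A plain-Python check at random points (a sketch, with names of my choosing):

```python
import random

def abel_residual(y, v):
    """Difference v'(y) - [(y-1)v^2 - y^2 v^3]; should vanish identically."""
    x = y - 1 / v                       # substitution (2) inverted: x = y - 1/v
    dxdy = x * y / (x - y)              # reciprocal of dy/dx = (x - y)/(x y)
    dvdy = -(y - x) ** -2 * (1 - dxdy)  # chain rule on v = 1/(y - x)
    return dvdy - ((y - 1) * v ** 2 - y ** 2 * v ** 3)

random.seed(42)
for _ in range(1000):
    y, v = random.uniform(0.5, 5.0), random.uniform(0.5, 5.0)
    assert abs(abel_residual(y, v)) < 1e-8 * max(1.0, y ** 2 * v ** 3)
```

No integration is involved; the check only verifies that the substitution algebra above is carried out correctly.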
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3372807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
} |
Which sets of sequences are countable and which are uncountable? Consider the sets
$$\displaystyle X=\left\{(x_n): x_n \in \left\{0,1\right\},n \in \mathbb{N} \right\}$$
and
$$\displaystyle Y=\left\{(x_n)\in X:x_n=1 \;\;\text{for at most finitely many n} \right\}$$
I have to choose which is uncountable and which is countable.
Solution I tried: Here $X$ is the set of sequences with entries from $\left\{0,1\right\}$; thus it has $2^{\aleph_0}$ elements, which is uncountable.
Now the set $Y$: it contains those sequences from $X$ in which '$1$' appears only finitely many times, so its cardinality will be less than $2^{\aleph_0}$; but by the $\textbf{continuum hypothesis}$ there is no set having cardinality in between ${\aleph_0}$ and $2^{\aleph_0}$, so the set $Y$ will be countable.
I wrote this proof but I don't even know whether it is correct or not. I am sure about the set $X$ but not sure about $Y$; please help me with the set $Y$.
Thank you.
| Let $A_n=\{1,\cdots,n\}$. For each $f \in \{0,1\}^{A_n}$, define $g_f:\Bbb N \to \{0,1\}$ by $$g_f(x)=\begin{cases}f(x)&\text{if}\;x \in \{1,2,..,n\}\\0&\text{otherwise} \end{cases}$$ Then each $g_f \in Y$. Then $Y$ can be written as $$Y=\cup_{n=1}^\infty Y_n$$ where $Y_n=\left\{g_f: f \in \{0,1\}^{A_n}\right\}$ . Here each $Y_n$ is finite and hence $Y$ is countable!
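The finite pieces $Y_n$ can be enumerated explicitly: identify each $g_f$ with its $0$–$1$ prefix stripped of trailing zeros. Then $Y_1\cup\dots\cup Y_N$ has exactly $2^N$ elements (it equals $Y_N$), exhibiting $Y$ as a countable union of finite sets. A Python sketch:

```python
from itertools import product

def strip_trailing_zeros(t):
    """Two g_f's are the same sequence iff they agree after padding with zeros."""
    while t and t[-1] == 0:
        t = t[:-1]
    return t

def union_up_to(N):
    """Distinct sequences in Y_1 ∪ ... ∪ Y_N, i.e. supports inside {1,...,N}."""
    seqs = set()
    for n in range(1, N + 1):
        for f in product((0, 1), repeat=n):
            seqs.add(strip_trailing_zeros(f))
    return seqs

for N in range(1, 8):
    assert len(union_up_to(N)) == 2 ** N   # finite at every stage
```

The set stays finite at each stage, so the full union over all $N$ is countable.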
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3372929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
$(a_n)_n\subset [c,d]$ with limit point $h$ and $f: [c,d] \to \mathbb{R}$ continuous $\implies$ $f(h)$ is a limit point of $(f(a_n))_n$
$(a_n)_n\subset [c,d]$ with limit point $h$ and $f: [c,d] \to
\mathbb{R}$ continuous $\implies$ $f(h)$ is a limit point of
$(f(a_n))_n$
My attempt:
Let $f: [c,d]\to \mathbb{R}$ be continuous. Since $(a_n)_n\subset [c,d]$ has a limit point $h$, there is $(a_{n_k})_k$ such that $a_{n_k}\to h$. Since $f$ is continuous, for every $\varepsilon >0$, there is $\delta >0$ such that for all $x,h\in [c,d]$: $$|x-h|<\delta \implies |f(x)-f(h)|<\varepsilon$$. Particularly, for every $k\in \mathbb{N}$, there is a $x_{n_k}\in [c,d]$ such that $|x_{n_k}-h|<\delta \implies |f(x_{n_k})-f(h)|<\varepsilon$.
| The last sentence is not correct. Since $a_{n_k}\to h$, there is $K$ s.t. $|a_{n_k}-h|<\delta $ for all $k\geq K$. Therefore, $|f(a_{n_k})-f(h)|<\varepsilon $ when $k\geq K$, which proves the claim.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3372997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Image of a normal $*$-homomorphism Let $\mathcal M$ be a von Neumann algebra and let $\pi:\mathcal M\to\mathcal M$ be a normal $*$-homomorphism. Is $\pi(\mathcal M)$ again a von Neumann algebra? By [J. Dixmier, Les algèbres d'opérateurs dans l'espace hilbertien, 2nd ed., Gauthier-Villars, Paris, 1969, Part I, Chapter 4.3, Corollary 2], $\pi(\mathcal M)$ is a weakly closed $*$-subalgebra of $\mathcal M.$ Clearly, $\pi(\mathcal M)$ has a unit $\pi(1).$ So is it not enough to say that $\pi(\mathcal M)$ is again a von Neumann algebra? But in many places, such as Sunder, V. (An Invitation to von Neumann Algebras), the author has emphasized the injectivity of $\pi$!
| The image $\pi(\mathcal{M})$ of a normal $*$-homomorphism $\pi\colon \mathcal{M}\to\mathcal{N}$ between von Neumann algebras $\mathcal{M}$ and $\mathcal{N}$ is indeed weakly closed in $\mathcal{N}$ (and thus a von Neumann algebra), also when $\pi$ is not injective.
In fact, one way to prove the general statement (in which $\pi$ need not be injective) is to reduce it to the 'injective case', as follows.
Note that the kernel $\mathop{Ker}(\pi)$ of $\pi$ is a weakly closed $*$-subalgebra of $\mathcal{M}$, and thus a von Neumann algebra. In particular, $\mathop{Ker}(\pi)$ has a greatest projection, $c$, (the unit of the von Neumann algebra $\mathop{Ker}(\pi)$.) Using the fact that $\mathop{Ker}(\pi)$ is in addition a two-sided ideal of $\mathcal{M}$ one can show that $c$ is central in $\mathcal{M}$ (see Theorem 6.8.8 of Kadison & Ringrose Vol. II, or perhaps 69{II,IV} of my thesis.)
Now the trick is to consider the von Neumann algebra $(1-c)\mathcal{M}$. Since $\pi(c)=0$, we have $$\pi(a)\ =\ \pi(ca+(1-c)a)\ =\ \pi((1-c)a)$$ for all $a\in\mathcal{M}$, and so $\pi$ can be written as the composition
$$ \mathcal{M} \stackrel{a\mapsto (1-c)a}\longrightarrow (1-c)\mathcal{M}\stackrel{\varrho}\longrightarrow \mathcal{N},
$$
where $\varrho$ is simply the restriction of $\pi$ to $(1-c)\mathcal{M}$. Note that, $\varrho$ is a normal $*$-homomorphism that is in addition injective. Since moreover the image $\varrho(\,(1-c)\mathcal{M}\,)$ of $\varrho$ coincides conveniently with the image of $\pi\equiv \varrho\circ ((1-c)(\,\cdot\,))$, we see that the image $\pi(\mathcal{M})$ of the (not necessarily injective) normal $*$-homomorphism $\pi$ is a von Neumann algebra when the image of the injective normal $*$-homomorphism $\varrho$ is a von Neumann algebra.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of Automorphisms of $S_6$ Trivial question about counting the number of automorphisms of $S_6$:
I know that for all $n \geq 3$, $Z(S_n)=1$, so Inn$(S_n) \cong S_n$.
I also know that $S_6$ has a nontrivial outer automorphism, $\operatorname{Out}(S_6) \cong \mathbb{Z}_2$.
Does this mean there are $|S_n| + |\mathbb{Z}_2|$ automorphisms in total, since $\operatorname{Inn}(S_n)$ and $\operatorname{Out}(S_n)$ are disjoint sets? So, other symmetric groups (excluding $S_2$) have $n!$ automorphisms, while $S_6$ has $6! + 2$.
| We have $\operatorname{Out}(S_6)=\operatorname{Aut}(S_6)/\operatorname{Inn}(S_6)\cong C_2$ and hence
$$
|\operatorname{Aut}(S_6)|=|S_6|\cdot |C_2|=6!\cdot 2.
$$
Here we have used that $|G/N|=\frac{|G|}{|N|}$.
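The facts used here are easy to check by brute force for small $n$: a Python sketch verifying that $Z(S_n)$ is trivial for $n \ge 3$ (so $\operatorname{Inn}(S_n) \cong S_n$) and computing the resulting count $6!\cdot 2$:

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

def center_size(n):
    """Number of permutations commuting with all of S_n."""
    perms = list(permutations(range(n)))
    return sum(all(compose(z, g) == compose(g, z) for g in perms) for z in perms)

assert center_size(2) == 2                        # S_2 is abelian
assert all(center_size(n) == 1 for n in (3, 4, 5))  # trivial center for n >= 3
print(factorial(6) * 2)                           # |Aut(S_6)| = 1440
```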
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
multiple choice question on number theory Let $a\in \mathbb{Z}$ be such that $a=b^2+c^2$ where, $b,c \in \mathbb{Z}-\{0\}$. Then $a$ cannot be written as
*$pd^2$ where $d \in \mathbb{Z}$ and $p$ is prime with $p \equiv 1 \pmod4$
*$pd^2$ where $d \in \mathbb{Z}$ and $p$ is prime with $p \equiv 3\pmod4$
*$pqd^2$ where $d \in \mathbb{Z}$ and $p,q$ are primes with $p \equiv 1\pmod4$ and $q \equiv 3\pmod4$
*$pqd^2$ where $d \in \mathbb{Z}$ and $p,q$ are primes with $p,q \equiv 3\pmod4$
A similar question (with some parts missing) was asked here, but I want to discuss the additional options too.
Here, I ruled out (1), since $13=3^2+2^2$ can be written as $pd^2$ with $p=13\equiv 1\pmod 4$ and $d=1$.
For the other options, what should I do?
| (2) is correct: in the prime factorization of a sum of two squares, every prime of the form $4n+3$ must occur to an even exponent. In $pd^2$ with $p \equiv 3 \pmod 4$, the prime $p$ occurs to an odd exponent, so such a number cannot be a sum of two squares.
Check here for more details.
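This can also be checked numerically: among sums of two nonzero squares up to a bound, option (1) is attainable ($13 = 3^2+2^2$), while no number of the form $pd^2$ with $p \equiv 3 \pmod 4$ appears. A Python sketch (the bound and the sample primes are my choices):

```python
LIMIT = 10_000
# all sums of two *nonzero* squares up to LIMIT
sums = {b * b + c * c
        for b in range(1, 100) for c in range(1, 100)
        if b * b + c * c <= LIMIT}

assert 13 in sums                         # 13 = 3^2 + 2^2, so (1) is possible
for p in (3, 7, 11, 19, 23):              # primes ≡ 3 (mod 4)
    d = 1
    while p * d * d <= LIMIT:
        assert p * d * d not in sums      # (2): never a sum of two nonzero squares
        d += 1
```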
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof that $3^{10^n}\equiv 1\pmod{10^n},\, n\ge 2$ This should be rather straightforward, but the goal is to prove that
$$3^{10^n}\equiv 1\pmod{10^n},\, n\ge 2.$$
A possibility is to use
$$\begin{align*}3^{10^{n+1}}-1&=\left(3^{10^n}-1\right)\left(1+\sum_{k=1}^9 3^{10^n k}\right)\\&=\left(3^{10^n}-1\right)\left(3^{9\cdot 10^n}-1+3^{8\cdot 10^n}-1+\cdots +3^{10^n}-1+10\right),\end{align*}$$
but I don't see how this proves the equality in the question. I got only
$$3^{10^{n}}\equiv 1\pmod{3^{10^{n-1}}-1}.$$
Perhaps someone here can explain.
| Use induction
Basis
$$3^{100}\equiv (3^{10})^{10}$$
$$\equiv 59049^{10}$$
$$\equiv 49^{10}$$
$$\equiv 2401^5$$
$$\equiv 1\pmod {100}$$
Induction hypothesis
$$\frac{3^{10^{n}}-1}{10^n}\in\mathbb Z$$
Inductive step
$$\frac{3^{10^{n+1}}-1}{10^{n+1}}$$
$$=\frac{3^{10^{n}}-1}{10^n}\times \frac{1+\sum_{k=1}^9 3^{10^nk}}{10}$$
Both of the terms multiplied are integers: the first follows directly from the induction hypothesis, and the second because $$\forall 1\leq k\leq 9, 10|10^n|(3^{10^n}-1)|(3^{10^nk}-1)$$
$$\Longrightarrow 3^{10^nk}\equiv 1\pmod {10}\forall 1\leq k\leq 9$$
$$\Longrightarrow 1+\sum_{k=1}^9 3^{10^nk}\equiv 0\pmod {10}$$
Hence proved
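Both the base case and the general claim can be confirmed directly with modular exponentiation; Python's three-argument `pow` handles these huge powers instantly:

```python
assert pow(3, 100, 100) == 1              # base case: 3^100 ≡ 1 (mod 100)
for n in range(2, 8):
    assert pow(3, 10**n, 10**n) == 1      # 3^(10^n) ≡ 1 (mod 10^n) for n ≥ 2
assert pow(3, 10, 10) == 9                # n = 1 fails, hence the hypothesis n ≥ 2
```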
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Difference of elementary sets is elementary / Difference of intervals are intervals We call a set $E$ in $\mathbb{R}^d$ elementary iff it can be represented as a finite union of boxes. Let $E,F$ be elementary sets. I need to demonstrate that $E/F$ is elementary.
My question: My proof (below) feels very messy and clumsy. Is there a way to simplify it, or maybe propose a different proof?
My proof
So using trivial set theoretic relations I rewrote the original equation as
\begin{align*}
E/F &= \left(\bigcup_{i=1}^{n} B_i\right) /\left(\bigcup_{j=1}^{m} C_j\right)\\[10pt]
&=\bigcup_{i=1}^{n} \bigcap_{j=1}^{m} B_i / C_j
\end{align*}
From here on it suffices to show that the difference of two boxes is elementary.
Now to prove this I went brute force.
Result 1. The set difference of two intervals is a union of at most two intervals.
Let $I_1 :=(a,b), I_2 = (c,d)$. We have permutations (unfortunately together with the cases where $b< a$ or $d<c$)
\begin{align*}
(a, b, c, d) &\implies N = (a,b)\\
(a, b, d, c) &\implies N = (a,b) \\
(a, c, b, d) &\implies N=(a,c) \\
(a, c, d, b) &\implies N=(a,c)\cup(d,b)\\
(a, d, b, c) &\implies N =(a,b)\\
(a, d, c, b) &\implies N=(a,b)\\
(b, a, c, d) &\implies N=\emptyset\\
(b, a, d, c) &\implies N=\emptyset\\
(b, c, a, d) &\implies N=\emptyset\\
(b, c, d, a) &\implies N=\emptyset\\
(b, d, a, c) &\implies N=\emptyset\\
(b, d, c, a) &\implies N=\emptyset\\
(c, a, b, d) &\implies N=\emptyset\\
(c, a, d, b) &\implies N=(d,b)\\
(c, b, a, d) &\implies N=\emptyset\\
(c, b, d, a) &\implies N=\emptyset\\
(c, d, a, b) &\implies N=(a,b)\\
(c, d, b, a) &\implies N=\emptyset\\
(d, a, b, c) &\implies N=(a,b)\\
(d, a, c, b) &\implies N=(a,b)\\
(d, b, a, c) &\implies N=\emptyset\\
(d, b, c, a) &\implies N=\emptyset\\
(d, c, a, b) &\implies N=(a,b)\\
(d, c, b, a) &\implies N=\emptyset\\
\end{align*}
I somehow have a feeling that this step can be justified very simply, but somehow I cannot see how.
Result 2. $(I_1\cup J_1) \times I_2 \times \dots \times I_n = (I_1\times I_2 \times \dots \times I_n) \cup (J_1 \times I_2 \times \dots \times I_n)$. Can be verified directly using the definition of a box.
Combining both we obtain
\begin{align*}
E/F &=\bigcup_{i=1}^{n} \bigcap_{j=1}^{m} B_i / C_j \\[10pt]
&=\bigcup_{i=1}^{n} \bigcap_{j=1}^{m} \left\{(x_1,\dots,x_d)\in\mathbb{R}^d: x_i \in I_i / J_i\right\} \\
&=\bigcup_{i=1}^{n} \bigcap_{j=1}^{m} \left\{(x_1,\dots,x_d)\in\mathbb{R}^d: x_i \in A_i \cup B_i\right\} \\
&=\bigcup_{i=1}^{n} \bigcap_{j=1}^{m} (A_1 \cup B_1) \times \dots \times (A_n \cup B_n) \\
\end{align*}
And applying Result 2 we see that this is a union of boxes, and thus elementary.
| Note that $C_j^c$ is a union of $2d$ unbounded boxes.
So $B_i \cap C_j^c$ is again a union of boxes, since the intersection of two boxes with edges parallel to the axes is a box or the empty set.
So finally you will have a finite union of finite intersections of boxes, which is a finite union of boxes.
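The 24-case analysis in Result 1 collapses to a two-line formula if one works with half-open intervals, for which the set difference is exact (with open intervals the difference additionally picks up endpoints). A Python sketch:

```python
def diff(i, j):
    """[a, b) \\ [c, d) is the union of at most two half-open intervals."""
    (a, b), (c, d) = i, j
    pieces = [(a, min(b, c)), (max(a, d), b)]   # the parts left of c and right of d
    return [(lo, hi) for lo, hi in pieces if lo < hi]

assert diff((0, 10), (3, 5)) == [(0, 3), (5, 10)]   # j inside i: two pieces
assert diff((0, 10), (-1, 11)) == []                # j covers i: empty
assert diff((0, 4), (6, 9)) == [(0, 4)]             # disjoint: i unchanged
assert diff((0, 10), (5, 20)) == [(0, 5)]           # overlap on the right
```

A point of $[a,b)$ survives the removal of $[c,d)$ exactly when it lies left of $c$ or at/right of $d$, which is where the two pieces come from.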
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
is the quadratic equation appropriate for this? I work in a paper mill as a tech. There is a formula for percent solvents in a liquor solution. It is $s = AP^2 + BP$.
$s$ is the percent solvent, $A$ and $B$ are constants, $3.21953$ and $8.117$ respectively. I would like to solve for $P$, as instrumentation can tell me percent solvents, but I would like to verify the percentage for calibration purposes. I started moving stuff around and got confused, as college math classes were quite some time ago. I get that there is a squared variable there, so I have to take a square root of something, and that percent solvents will never be negative, so that's probably an absolute value. This looks like a quadratic equation to me, if I rename the coefficients of $P^2$ and $P$ to $a$, $b$, and call percent solvents ($s$) $c$.
So then I subtracted $s$, and got $0= AP^2+BP -C$. Cool. So then I assumed I could use the quadratic formula,
$P= \frac{-b \pm \sqrt{b^2+4ac}}{2a}$.
Is this a safe assertion? Or have I misused this formula?
(Also, if someone who knows how to format math on this website were to come tidy this up for me, it would be much appreciated.)
| Yes, this is how it would be done.
You would then have $\displaystyle P = \frac{-B \pm \sqrt{B^2+4As}}{2A}$ (note the plus sign under the root: the constant term of the quadratic is $-s$, so the discriminant is $B^2-4A(-s)=B^2+4As$).
When you evaluate both the $+$ and $-$, make sure you pick the $P$ for which the percent solvent makes sense physically.
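Concretely, with the constants from the question, the inversion and a round-trip check look like this in Python (a sketch; only the positive root is physically meaningful since $P \ge 0$):

```python
import math

A, B = 3.21953, 8.117

def solvent(P):
    """Forward formula s = A*P^2 + B*P."""
    return A * P**2 + B * P

def percent(s):
    """Invert s = A*P^2 + B*P: the positive root of the quadratic."""
    return (-B + math.sqrt(B * B + 4 * A * s)) / (2 * A)

# round-trip: forward formula then inversion recovers P
for P in (0.5, 1.0, 2.0, 7.3):
    assert abs(percent(solvent(P)) - P) < 1e-9
```

Since $s \ge 0$, the quantity under the root is always at least $B^2$, so the square root never fails.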
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
For an estimate $\hat{f}$ of a regressor $f$, showing $\mathbb{E}[(f(x)-\hat{f}(x))^2]$ is equivalent to another expression. In the context of regression for machine learning, suppose I have a function from an instance space $I$ to $\mathbb{R}$, say $f:I \rightarrow R$, and that I have an estimator $\hat{f}:I \rightarrow R$.
The textbook I am reading states without proof that the expectation $\mathbb{E}[(f(x)-\hat{f}(x))^2] = (f(x)-\mathbb{E}[\hat{f}(x)])^2 + \mathbb{E}[(\hat{f}(x)-\mathbb{E}[\hat{f}(x)])^2]$.
Expanding the left side using linearity and the fact that expectation of a constant is the constant I get:
$\mathbb{E}[f(x)^2-2f(x)\hat{f}(x)+\hat{f}(x)^2] = f(x)^2-2f(x)\mathbb{E}[\hat{f}(x)]+\mathbb{E}[\hat{f}(x)^2]$, and I am not sure how this equals the right side. Any insights to easily see this are appreciated.
| Note that $\bigl(f(x)-\hat f(x)\bigr)^2=f(x)^2-2f(x)\hat f(x)+\hat f(x)^2$, which is different than what you wrote due to the sign of the last term. (EDIT: This was corrected in the question shortly after I pointed it out)
Thus,
$$
\mathbb E\bigl(f(x)-\hat f(x)\bigr)^2=\bigl(f(x)-\mathbb E\hat f(x)\bigr)^2+\mathbb E\bigl(\hat f(x)^2\bigr)-\bigl(\mathbb E\hat f(x)\bigr)^2.
$$
Finally, observe that
$$
\mathbb E\bigl(\hat f(x)^2\bigr)-\bigl(\mathbb E\hat f(x)\bigr)^2=\mathbb E\bigl(\hat f(x)-\mathbb E\hat f(x)\bigr)^2,
$$
by expanding out the right side and cancelling. Note that both expressions appearing in the last step equal the variance $\textrm{Var}\hat f(x)$.
Thus, I believe the intended identity is the following:
$$
\mathbb E\bigl(f(x)-\hat f(x)\bigr)^2=\bigl(f(x)-\mathbb E\hat f(x)\bigr)^2+\mathbb E\bigl(\hat f(x)-\mathbb E\hat f(x)\bigr)^2.
$$
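The decomposition is easy to confirm numerically. With empirical means in place of expectations the identity holds exactly, because the cross term cancels against the sample mean; the distribution of $\hat f(x)$ below is a hypothetical choice of mine:

```python
import random

random.seed(0)
f_x = 2.0                                                    # true value f(x)
fhat = [2.3 + random.gauss(0, 0.5) for _ in range(100_000)]  # draws of the estimator

m = sum(fhat) / len(fhat)                                    # empirical E[f^(x)]
mse = sum((f_x - s) ** 2 for s in fhat) / len(fhat)          # E[(f - f^)^2]
bias_sq = (f_x - m) ** 2                                     # (f - E f^)^2
var = sum((s - m) ** 2 for s in fhat) / len(fhat)            # E[(f^ - E f^)^2]

assert abs(mse - (bias_sq + var)) < 1e-8
```

This is the usual bias-variance split: mean squared error = squared bias + variance.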
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that conditional independence does not imply independence? I am trying to prove that conditional independence does not imply independence, ie that $P(A|C)P(B|C)=P(A \cap B|C) \nRightarrow P(A \cap B)=P(A)P(B)$
I guess I need a counter-example, but I am struggling to find a way of homing in on one.
So far I have tried drawing Venn diagrams, and I can see that there is no reason why the sizes of the relevant intersections should multiply as implied by the above, but I am not sure how to proceed from there.
| If $C = A^c,$ then as long as $P(A) \neq 1,$ the conditional independence equation will just be $0=0$, while we've learned nothing about whether $A$ and $B$ are independent.
That is, take any two $A, B$ that aren't independent; then $P(A) \neq 1$ (why?), and setting $C=A^c$ makes $A$ and $B$ conditionally independent given $C$.
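A concrete finite counterexample along these lines, checked with exact rational arithmetic (the sample space and events are my choices): on $\Omega=\{0,1,2,3\}$ with the uniform measure, take $A=\{0,1\}$, $B=\{0,1,2\}$, and $C=A^c$.

```python
from fractions import Fraction

omega = frozenset({0, 1, 2, 3})                   # uniform probability space

def P(E):
    return Fraction(len(E & omega), len(omega))

def P_given(E, C):
    return Fraction(len(E & C), len(C))           # P(E | C), valid since P(C) > 0

A = frozenset({0, 1})
B = frozenset({0, 1, 2})
C = omega - A                                     # C = A^c

assert P(A & B) != P(A) * P(B)                            # A, B are NOT independent
assert P_given(A & B, C) == P_given(A, C) * P_given(B, C)  # but cond. independent given C
```

Here $P(A\cap B)=1/2$ while $P(A)P(B)=3/8$, yet both sides of the conditional equation are $0$.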
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3373874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\zeta(0)$ and the limit of $(1-s)\zeta(s)$ as $s\to 1$ I am hoping to compute $\zeta(0)$ where $\zeta$ is of course the Riemann zeta function. My first attempt was to use the functional equation which yields:
$$\zeta(0) = \frac{1}{\pi}\cos\left(\frac{\pi}{2}\right)\zeta(1)~.$$
Now, since $\cos(\pi/2)=0$ and $\zeta(1)\to\pm\infty~,$ it looks like L'Hospital would be my best friend here, but alas: what on earth is $\zeta'(1)$? If anything this seems to make it worse.
I am aware that this question has been asked several times already so I tried to find a nice answer for it and I sort of did. How to Compute $\zeta (0)$? DonAntonio offers a very nice solution, but it relies on the equation:
$$\lim_{s\to1}~(1-s)\zeta(s)=-1~.$$
Again, my only tool for limit evaluation, L'Hospital, causes the same problem as above. Is there a nice elementary way of computing either of those limits? Apparently this is related to something called a residue, which I don't really know anything about. I tried to look it up and there seems to be some pretty heavy theory involved. I hope somebody can provide a more elementary explanation for all of this.
Thanks a lot,
Alex
| $\newcommand{\multichoose}[2]{{#1}^{[\!\underline{#2}\!]}}$
If you want to show
$$\lim_{s\to 1}(s-1)\zeta(s)=1\text{,}$$
here's a way that doesn't invoke other special functions, built on estimating the zeta sum by certain integrals.
Start with the rectangle rule for integration:
$$\int_{n}^{n+1}f(x)\mathrm{d}x\approx f(n+1)\text{.}$$
If we derive the error term using integration by parts, we get
$$\int_{n}^{n+1}f(x)\mathrm{d}x=f(n+1)-\int_{n}^{n+1}(x-n)f'(x)\mathrm{d}x\text{.}$$
This is just as much a relation expressing the value of $f$ in terms of its integrals:
$$f(n+1) = \int_{n}^{n+1}f(x)\mathrm{d}x+\int_{n}^{n+1}(x-n)f'(x)\mathrm{d}x\text{.}$$
Summing over $n$, we get
$$\sum_{n=1}^{\infty}f(n)=\int_{1}^{\infty}f(x)\mathrm{d}x +f(1) +\int_{1}^{\infty}(x-\lfloor x \rfloor)f'(x)\mathrm{d}x\text{.}$$
Let $f(k)=k^{-s}$. Then
$$\sum_{k=1}^{\infty}\frac{1}{k^s}=\int_{1}^{\infty}\frac{\mathrm{d}x}{x^s} +1 -s\int_{1}^{\infty}\frac{(x-\lfloor x \rfloor)}{x^{s+1}}\mathrm{d}x\text{,}$$
i.e.,
$$\boxed{\zeta(s)=\frac{1}{s-1} +1 -s\int_{1}^{\infty}\frac{x-\lfloor x \rfloor}{x^{s+1}}\mathrm{d}x}$$
(DLMF 25.2.8). The right side of this equation is defined for all $\Re s>0$, $s\neq 1$.
We can push this method farther. Rewrite the integral as
$$\begin{split}
\int_{1}^{\infty}\frac{x-\lfloor x \rfloor}{x^{s+1}}\mathrm{d}x&=\int_{1}^{\infty}\frac{x-\lfloor x \rfloor-\tfrac{1}{2}+\tfrac{1}{2}}{x^{s+1}}\mathrm{d}x\\
&=\frac{1}{2}\int_{1}^{\infty}\frac{\mathrm{d}x}{x^{s+1}}+\int_{1}^{\infty}\frac{x-\lfloor x \rfloor-\tfrac{1}{2}}{x^{s+1}}\mathrm{d}x \\
&=\frac{1}{2s}+(s+1)\int_{1}^{\infty}\frac{b_2(x-\lfloor x\rfloor)}{x^{s+2}}\mathrm{d}x
\end{split}$$
where $b_2(u)=\tfrac{1}{2}(u^2-u)$. In these steps, we separated out the mean value of $x-\lfloor x \rfloor$ then integrated by parts. Therefore
$$\boxed{\zeta(s)=\frac{1}{s-1} +1 -\frac{1}{2} -s(s+1)\int_{1}^{\infty}\frac{b_2(x-\lfloor x\rfloor)}{x^{s+2}}\mathrm{d}x}\text{.}$$
The right side of this equation is defined for all $\Re s > -1$, $s\neq 1$.
Then $\lim_{s\to 1}(s-1)\zeta(s)=1$ and $\zeta(0)=-\tfrac{1}{2}$ follow by direct substitution into the boxed equations.
The two expressions above are special cases of
$$\zeta(s)=\frac{1}{s-1}+1+\sum_{k=0}^{n-1}\multichoose{s}{k}\frac{B_{k+1}}{k+1}-\multichoose{s}{n+1}\int_1^{\infty}\frac{(B_{n+1}(x-\lfloor x \rfloor) -B_{n+1})\mathrm{d}x}{x^{s+n+1}}\text{,}$$
valid for $\Re s > -n$, $s\neq 1$ (DLMF 25.2.10); here $\multichoose{s}{k}$ are the multiset coefficients, $B_k(u)$ are the Bernoulli polynomials, and $B_k$ are the Bernoulli numbers:
$$\begin{align}
(1-t)^{-s}&=\sum_{k=0}^{\infty}\multichoose{s}{k}t^k \\
\frac{t\mathrm{e}^{tu}}{\mathrm{e}^t-1}&=\sum_{k=0}^{\infty}B_k(u) \frac{t^k}{k!} \\
\frac{t}{\mathrm{e}^t-1}&=\sum_{k=0}^{\infty}B_k \frac{t^k}{k!}\text{.}
\end{align}$$
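The first boxed formula can also be checked numerically without any complex analysis: integrate $\int_1^\infty (x-\lfloor x\rfloor)x^{-s-1}\,\mathrm{d}x$ by the midpoint rule and compare against known values (a rough sketch; the truncation point and step size are my choices):

```python
import math

def zeta(s, intervals=2000, steps=100):
    """ζ(s) = 1/(s-1) + 1 - s ∫_1^∞ (x - ⌊x⌋) x^(-s-1) dx, via the midpoint rule."""
    integral = 0.0
    h = 1.0 / steps
    for n in range(1, intervals + 1):       # integrate over [n, n+1], where ⌊x⌋ = n
        for k in range(steps):
            x = n + (k + 0.5) * h
            integral += (x - n) * x ** (-s - 1) * h
    return 1.0 / (s - 1) + 1.0 - s * integral

assert abs(zeta(2) - math.pi ** 2 / 6) < 1e-3       # ζ(2) = π²/6
assert abs((1.001 - 1) * zeta(1.001) - 1.0) < 1e-2  # (s-1)ζ(s) → 1 as s → 1
```

For $\zeta(0)=-\tfrac12$ no integral is even needed: in the second boxed formula the factor $s(s+1)$ kills the integral at $s=0$, leaving $-1+1-\tfrac12$.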
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3374153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Investigating whether a function is bounded I have the following function:
$$y=\frac{x^{2}-3}{x^{2}+7}$$
and I'm trying to determine whether the function is bounded or not. To find the upper bound, I rewrote the function as $y=\frac{x^{2}+7-10}{x^{2}+7}=1-\frac{10}{x^{2}+7}$, and it's obvious that the upper bound is $1$.
However, how would I find the lower bound?
| The graph of the function is symmetric about the $y$-axis, so we only need to think about the lower bound when $x\geq 0$. And $y$ is increasing for $x\geq 0$. So the minimum occurs when $x=0$, so the greatest lower bound is $-\dfrac{3}{7}$.
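A quick numerical scan agrees: the minimum value $-3/7$ occurs at $x=0$, and the values approach (but never reach) the upper bound $1$. A sketch, on a grid of my choosing:

```python
f = lambda x: (x * x - 3) / (x * x + 7)

xs = [k / 100 for k in range(-10_000, 10_001)]      # grid on [-100, 100]
vals = [f(x) for x in xs]

assert min(vals) == f(0) == -3 / 7                  # greatest lower bound, attained at 0
assert all(v < 1 for v in vals)                     # 1 is an upper bound...
assert f(1000) > 0.99998                            # ...approached as |x| grows
```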
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3374245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Interchange between expected value and infinite summation (Fubini theorem) Let $S_n = \sum_{i=1}^nX_i$ (where the $X_i$ are i.i.d.) and let N be a positive, integer valued r.v., independent from the sequence $X_n$.
Suppose also that $E[N]<\infty$ and $E[|X_i|]<\infty$.
What I want to prove is the following step:
$$E\biggr[\sum_{n=0}^\infty S_n\mathbf1_{(N=n)}\biggl] =\sum_{n=0}^\infty E[S_n\mathbf1_{(N=n)}]$$
The explanation should be the Fubini theorem, but for applied it, I should demonstrate that $$\sum_{n=0}^\infty E[|S_n\mathbf1_{(N=n)}|]<\infty $$
This is what I have to prove right?
Using independence of $N$ from the $X_i$, and the fact that the $X_i$ are identically distributed, I come up with $\sum_{n=0}^\infty\mu nP(N=n)$, where $\mu=E(X_i)$, which we know to be $<\infty$. But here I don't know how to proceed.
| I assume that all $X_i$'s are independent and identically distributed.
In order to apply Fubini, you have to show (like you said)
$$\sum_n E(|S_n|I_{\{N=n\}}) <\infty$$
We now prove this:
$$\sum_n E(|S_n|I_{\{N=n\}}) = \sum_n E\left(\left|\sum_{k=1}^n X_k\right|I_{\{N=n\}}\right) \leq \sum_n E\left(\sum_{k=1}^n |X_k|I_{\{N=n\}}\right)$$
$$= \sum_n \sum_{k=1}^n E(|X_k|)P(N=n) = \sum_n n E(|X_1|)P(N=n) $$
$$= E(|X_1|) \sum_n nP(N=n) = E(|X_1|) E(N) < \infty$$
You can also apply dominated convergence theorem:
$$E\left[\sum_{n=0}^\infty S_nI_{\{N=n\}}\right]= E\left[\lim_{k \to \infty}\sum_{n=0}^k S_nI_{\{N=n\}}\right] = \lim_{k \to \infty} E\left[\sum_{n=0}^k S_nI_{\{N=n\}}\right]$$$$= \sum_{n=0}^\infty E[S_n]P(N=n) = E[X_1]E[N]$$
here the sequence of functions $(\sum_{n=0}^k S_n I_{\{N=n\}})_{k=1}^\infty$ is dominated by the integrable function $\sum_{n=0}^\infty |S_n| I_{\{N=n\}}$ as the calculation above shows.
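The conclusion $E[S_N]=E[X_1]E[N]$ (Wald's identity) is easy to test by simulation, here with hypothetical choices of mine: $X_i \sim \mathrm{Exp}(2)$ with mean $1/2$, and $N$ uniform on $\{1,\dots,10\}$ with mean $5.5$, drawn independently of the $X_i$:

```python
import random

random.seed(1)

def S_N():
    """One draw of S_N = X_1 + ... + X_N with N independent of the X_i."""
    N = random.randint(1, 10)
    return sum(random.expovariate(2.0) for _ in range(N))

trials = 200_000
estimate = sum(S_N() for _ in range(trials)) / trials
assert abs(estimate - 0.5 * 5.5) < 0.05     # E[S_N] = E[X_1] E[N] = 2.75
```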
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3374356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In-/surjectivity of $R[X] \rightarrow \text{Map}(R,R)$ for infinite integral domain $R$. Let $R$ be a ring and consider
\begin{eqnarray*}
\phi : &R[X] &\longrightarrow \text{Map}(R,R) \\
&f &\mapsto \,\,\,(r \mapsto f(r))
\end{eqnarray*}
I have shown that $\phi$ is a ring homomorphism iff $R$ is commutative.
Now assume that $R$ is an infinite integral domain. (Then $R$ is commutative, so $\phi$ is a ring homomorphism.) I want to show that $\phi$ is injective, but not surjective. To do this, suppose that for arbitrary $f,g ∈ R[X]$, we have $\phi(f) = \phi(g)$. Then we have:
$$ (\forall r ∈ R)(f(r) = \phi(f)(r) = \phi(g)(r) = g(r)).
$$
Now if $f, g$ were functions, then we could conclude that $f =g$, and $\phi$ would be injective. I figured that the same would apply for polynomials (because they are equal at every point). However, we never used that $R$ is an infinite integral domain here.
For surjectivity (or actually: non-surjectivity) it would suffice to show an example of a function $f \notin \text{im} \phi$. I have the feeling that here is where the infinity of $R$ comes into play, but as $R$ remains abstract, I haven't found a concrete example of this yet.
No homework question, just an exercise. Any help will be greatly appreciated.
Edit
What happens when instead $R$ is finite? (i.e. when $R$ is a finite field.) Can $\phi$ still be injective? And surjective? The latter is at least possible (if not plausible) on grounds of cardinality: Map$(R,R)$ is finite, while $R[X]$ is not.
| For $R$ infinite, the set of all possible functions $f:R \to R$ has cardinality $2^{\vert R \vert}$, whereas (because polynomials can be put into $1-1$ correspondence with finite sequences from $R$) $\vert R[X] \vert = \vert R \vert$. Cantor's theorem tells us that for any set, $\vert R \vert \lt 2^{\vert R \vert}$, so $\phi$ can't be surjective.
If $\forall x \in R~f(x)=g(x)$, then because $R$ is an integral domain, it embeds in a field $F$, and because $R$ is infinite $f-g \in R[X] \subseteq F[X]$ has infinitely many roots, so it must be identically $0$.
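For the edit's question about finite $R$: over a finite field $\mathbb{F}_p$, the map $\phi$ is never injective, since the nonzero polynomial $X^p - X$ induces the zero function by Fermat's little theorem. A quick Python check:

```python
def induced_map(coeffs, p):
    """The function Z/pZ -> Z/pZ induced by the polynomial sum(coeffs[i] * X^i)."""
    return tuple(sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
                 for x in range(p))

for p in (2, 3, 5, 7):
    x_p_minus_x = [0, p - 1] + [0] * (p - 2) + [1]    # X^p - X, written as X^p + (p-1)X
    assert any(c != 0 for c in x_p_minus_x)           # a nonzero polynomial...
    assert induced_map(x_p_minus_x, p) == (0,) * p    # ...inducing the zero function
```

On the other hand $\phi$ is surjective over a finite field, since Lagrange interpolation produces a polynomial with any prescribed values, consistent with the cardinality remark in the edit.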
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3374515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Cities and Induction. Trying to find a dead end. There are $n > 1$ cities and every pair of cities is connected by exactly one road. The road can go only from A to B, only from B to A, or in both directions.
The goal is to find a dead-end city, if it exists, i.e., a city X to which there is a direct one-way road from every other city, but there is no direct road going from X to any other city. You are allowed to ask only one type of question – "Is there a direct road going from city A to city B?" The answer to this question will be a "Yes" or a "No". Use mathematical induction to show that if there are $n$ cities then one can find a dead-end city, if there is one, using at most $2(n-1)$ questions.
My thoughts:
Base case: $n = 2$
$2(2-1)=2$ questions. Ask if there is a road from A to B, and ask if there is a road from B to A.
Induction hypothesis: Assume that the claim is true when $n = k$, for some $k > 1$, i.e., a dead end can be found (if one exists) using at most $2(k-1)$ questions.
Inductive step: We want to prove that the claim is true when $n = k + 1$.
I also noted that there can only be a maximum of one dead-end city. But I'm unsure of where to proceed.
| If $n=1$, ask zero questions and know that the one city $A$ is vacuously a dead end (i.e., vacuously "every other" city has a one-way road to $A$, and vacuously there is no road from $A$ to any other city).
Assume $n>1$. Pick two cities $A,B$ and ask whether there is a direct road $A\to B$.
*
*If "yes", $A$ cannot be the dead-end, but perhaps $B$. Use at most $2((n-1)-1)$ questions to find a dead end $X$ in the graph obtained by removing $A$. Now in the original graph, the only possible dead ends are $X$ and $B$ (and it may happen that $X=B$, in which case we are already done). Ask about the existence $B\to X$ (thereby using up the $2(n-1)$ question budget). If "yes", the dead end must be $X$; if "no" the dead end must be $B$.
*If "no", $B$ cannot be the dead end, but $A$ can. Proceed as above, with $A,B$ swapped.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3374598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $\mathit f:X_1 \to X_2$ be continuous and surjective. With certain property of $d$, if $(X_1, d_1)$ is complete, then is $(X_2,d_2)$ complete? Let $\mathit f:X_1 \to X_2$ be continuous and surjective, and $d_1(p,q)\le d_2 \bigl(\mathit f(p),\mathit f(q)\bigl)$, $\forall p,q\in X_1$.
*
*If $(X_1, d_1)$ is complete, then is $(X_2,d_2)$ complete?
*If $(X_2,d_2)$ is complete, then is $(X_1,d_1)$ complete?
I have proved the second question as below:
Suppose that $(X_2,d_2)$ is complete. Then a Cauchy sequence $\bigl(\mathit f(x_i)\bigl)_{i=1}^\infty$ must converge, i.e. $\forall \epsilon \gt0$, $ \exists N\in\mathbb N$ such that $\forall n\gt m\gt N$, $d_2\bigl(f(x_n),f(x_m)\bigl)\lt\epsilon$, and $\lim_{i\to\infty}f(x_i)=f(x)$.
Since $d_1(x_n,x_m)\le d_2 \bigl(\mathit f(x_n),\mathit f(x_m)\bigl)\lt\epsilon$, $(x_i)_{i=1}^\infty$ is also a Cauchy sequence.
Since $f$ is continuous, $\lim_{i\to\infty}f(x_i)=f(x)$ implies $\lim_{i\to\infty}x_i=x$.
Therefore the Cauchy sequence $(x_i)_{i=1}^\infty$ converges, which implies that $(X_1,d_1)$ is complete.
However firstly I do not think I showed that EVERY Cauchy sequence converges, and secondly for the question 1, I think it is not necessarily true but I can not find a proof or an counterexample either.
Could anyone help me improve the proof above and also share some hints or counterexamples for the question 1?
| For question 1.
Let $(y_n)\subset X_2$ be a Cauchy sequence. As $f$ is surjective, there exists $(x_n)\subset X_1$ such that $f(x_n)=y_n$ for all $n$. Now $d_1(u,v)\leq d_2(f(u),f(v))$ for all $u,v$ implies that $(x_n)$ is a Cauchy sequence. The completeness of $(X_1,d_1)$ implies the convergence of $(x_n)$ towards $x\in X_1$. Finally, the continuity of $f$ implies that $(y_n)=(f(x_n))$ converges towards $y=f(x)\in X_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3374751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Handle finite integral of unbounded function I am trying to show that there exists a $\delta>0$ such that in the measure space $(X,\mathcal{A},\mu),\, u \in \mathcal{L}^1$:
$\forall E \in \mathcal{A}: \mu(E) < \delta \Rightarrow |\int_E u\,d\mu|< \frac{1}{100}$
I can show this if u is bounded. However, the problem is if u is unbounded. Then the integral can still be finite, e.g. $\frac{1}{\sqrt x}$.
I can't find a delta when u is unbounded.
Any hint/help would be appreciated
| Since $u\in L^1$, $u$ is finite almost everywhere.
Therefore $|u|\land N:=\min\{|u|, N\}$ monotonically increases and converges to $|u|$ a.e. as $N\rightarrow\infty$.
Choose sufficiently large $N$ so that $\int_{X}|u|d\mu-\int_{X}(|u|\land N)d\mu=\int_{X}(|u|-|u|\land N)d\mu<\epsilon$. Note that the integrand is always positive.
Since $\int_{E}(|u|\land N)d\mu\leq N\mu(E)$, you can now choose $E$ with sufficiently small $\mu(E)$ so that $\int_{E}(|u|\land N)d\mu<\epsilon$.
Combining the previous lines proves what you need. Just set $\epsilon=\frac{1}{200}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3375186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing monotone convergence of recursive relation with $ x_{n+1} = \sin(x_{n}) $ How can I show the following monotone relation, $x_{n}-x_{n+1}\ge 0$, for the recursion
$x_{n+1}=\sin(x_{n})$?
My idea:
With
$x_{n}-x_{n+1}=x_{n}-\sin(x_{n})=\dots$ here I am stuck.
It would be nice if someone could help me with this.
Greetings
| This requires an additional assumption. For example, if $x_1=-\frac {\pi} 2$ then $x_2=-1 >x_1$.
If $x_n \geq 0$ for all $x$ then this result is true and it follows from the inequality $\sin x \leq x$ for all $x \geq 0$.
Proof of $\sin x \leq x$ for $x \geq 0$: let $f(x)=x-\sin x$. Then $f(0)=0$ and $f'(x)=1-\cos x \geq 0$. Hence $f$ is montonically increasing so $f(x) \geq f(0)=0$ for $x \geq 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3375352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Function for finding the length of a curve connecting two points in a two-dimensional sphere I am trying to study differential geometry.
I am confused with regards to the following function for finding the length of
a curve $\gamma$ connecting two points $p, q ∈ S^2$
$$L(γ) = \int^1_0|\dot{γ}(t)| dt,γ(0) = p, γ(1) = q$$
Where $S^2$ is a 2-dimensional sphere sitting in the three dimensional Euclidean space $R^3$
I am unfamiliar with the "dot above function" notation (dot above $\gamma$), what does it mean? And from where is this function derived or what is it called?
| As @math.pr said, the dot denotes the derivative. As for the formula: if you take an element of the curve and assume it to be straight, its length can be computed with the Euclidean metric as below:
$$dL=\sqrt{dx^2+dy^2+dz^2} \Longrightarrow \int dL = \int \frac{\sqrt{dx^2+dy^2+dz^2}}{dt} \ dt \Longrightarrow$$
$$L = \int \sqrt{\frac{dx^2+dy^2+dz^2}{dt^2}} \ dt = \int \sqrt{\frac{dx^2}{dt^2}+\frac{dy^2}{dt^2}+\frac{dz^2}{dt^2}} \ dt = \int \sqrt{(\frac{dx}{dt})^2+(\frac{dy}{dt})^2+(\frac{dz}{dt})^2} \ dt$$
$$= \int \left|\left(\frac{dx}{dt},\frac{dy}{dt},\frac{dz}{dt}\right)\right| dt = \int \left| \overset{.}{\gamma}(t) \right| dt \Longrightarrow L = \int \left| \overset{.}{\gamma}(t) \right| dt$$
And the limits show the start and end points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3375459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove or disprove: ${\rm Aut}(\Bbb Z_8)$ is abelian and cyclic. $\DeclareMathOperator{\Aut}{Aut}$
So for this class I was introduced to automorphisms through the homework. We had to prove that the automorphisms of a group under composition form a group, and the next question was asking whether $\Aut(\Bbb Z_8)$ is abelian and/or cyclic.
I have no idea how to approach this problem. To disprove it I'd need an example but I don't know how to find the elements of $\Aut(\Bbb Z_8)$. And if it is true, I'd need to write a solid proof which I also don't know how to approach.
Ideas I have so far:
I have a hunch that $\Aut(\Bbb Z_8)$ is not abelian. If it were, I'd basically need to show that $f(g(x)) = g(f(x))$, which I think is only true if the functions are inverses of each other. So I'd need to prove that every function in $\Aut(\Bbb Z_8)$ is an inverse of every other function in $\Aut(\Bbb Z_8)$, which I don't think is possible.
I am unsure about whether $\Aut(\Bbb Z_8)$ is cyclic or not. $\Bbb Z_8$ is cyclic and generated by $1$, but I don't know how/if I could use that information to prove that $\Aut(\Bbb Z_8)$ is cyclic.
Any help/hints/resources would be appreciated. Thanks!
| Hint:
An automorphism $f$ of $\mathbf Z_8$ maps the generator $\bar 1$ onto another generator, and this image characterises $f$.
Now the generators of $\mathbf Z_8$ are $\;\{\bar 1,\bar 3,\bar 5,\bar 7\}$, hence $\operatorname{Aut}(\mathbf Z_8)$ has order $4$. Check that any automorphism $f$ satisfies $f^2=\text{id}$, and deduce from this relation that $\operatorname{Aut}(\mathbf Z_8)$ is commutative.
Note: if you know that there are, up to an isomorphism, only two groups of order $4$, and that they both are commutative, it's still shorter.
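To make the hint concrete, here is a quick computational check (my own addition; it uses the standard fact, implicit in the hint, that an automorphism of $\mathbf Z_8$ is a map $x \mapsto kx \bmod 8$ with $k$ coprime to $8$):

```python
from math import gcd

n = 8
# automorphisms of Z_8 correspond to multiplication by a unit k mod 8
units = [k for k in range(1, n) if gcd(k, n) == 1]

# composing x -> a*x with x -> b*x gives x -> (a*b)*x, so composition
# of automorphisms is multiplication of units mod 8
compose = lambda a, b: (a * b) % n

abelian = all(compose(a, b) == compose(b, a) for a in units for b in units)
involutive = all(compose(k, k) == 1 for k in units)  # every f satisfies f∘f = id

print(units)        # [1, 3, 5, 7]
print(abelian)      # True
print(involutive)   # True: no element of order 4, so the group is not cyclic
```

Since every non-identity element has order $2$, $\operatorname{Aut}(\mathbf Z_8)$ is the Klein four-group: abelian but not cyclic.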
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3375699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
In maximum likelihood estimation, why is it hard to directly optimize the likelihood function? In Boyd's Chapter 7, it writes
I am just wondering: what is the reason we do not maximize the likelihood function directly and instead construct the log-likelihood function?
What is the fundamental reason that makes the product of densities harder to maximize? Is it because it is difficult to test convexity, generate gradient, or something else?
| In small-$n$ problems, optimizing the likelihood may be tractable, and is in practice sometimes done. However optimizing a likelihood function that involves the product of many terms (for instance $n \sim 10^8$) is computationally difficult because you must take derivatives of extremely high powers of terms and cross terms. It is much simpler to optimize a sum of these terms.
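One concrete computational issue with the raw product, beyond the derivative bookkeeping described above, is floating-point underflow; a small illustration (the density values here are made up for the demonstration):

```python
import math

# 400 hypothetical density values, each around 0.01
densities = [0.01] * 400

likelihood = 1.0
for p in densities:
    likelihood *= p        # underflows to exactly 0.0 well before the end

log_likelihood = sum(math.log(p) for p in densities)

print(likelihood)          # 0.0 -- the product is numerically useless
print(log_likelihood)      # about -1842.07, perfectly well-behaved
```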
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3375817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many spheres can fit inside this larger sphere? I would like to know if there is a way to do the following: calculate the maximal number of spheres of unit radius that can fit inside a sphere of radius 200 times the unit radius.
This is a generalisation of a question that was asked in a biology class. I was wondering if there exist some theorems on this, since I don't know how to start on it.
| There is also a packing arrangement known as Random Close Pack. RCP depends on the object shape - for spheres it is 0.64, meaning that the packing efficiency is 64% (as you can also see in Jack D'Aurizio's link). Therefore, if the balls are randomly distributed, then you can fit approximately $0.64 \cdot \frac{\frac{4}{3}\pi (200r)^3}{\frac{4}{3}\pi r^3} = 0.64\cdot 200^3 \approx 5{,}120{,}000$, which is far from the highest estimation but still pretty good.
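A quick arithmetic check of this estimate, using the radius ratio of $200$ from the question:

```python
from math import pi

R, r = 200.0, 1.0          # big-sphere and unit-sphere radii
rcp = 0.64                 # random-close-packing fraction for equal spheres

volume_ratio = ((4/3) * pi * R**3) / ((4/3) * pi * r**3)   # the 4/3*pi cancels
estimate = rcp * volume_ratio

print(round(estimate))     # 5120000
```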
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3375911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
how to solve $\operatorname{rem}(6^{15},17)$ without using a calculator. I am trying to solve $\operatorname{rem}(6^{15}, 17)$.
I know that we have to use congruences but don't know how to go on.
$6 \equiv 6 \pmod{17}$??
Can anyone please point me in the right direction?
Do I have to use CRT in here?
| A low tech solution:
$6^2 = 36 \equiv 2 \pmod{17}$ ($34$ is a multiple of $17$).
So $6^4 = (6^2)^2 \equiv 2^2 = 4 \pmod{17}$
Hence: $6^8 = (6^4)^2 \equiv 4^2 = 16 \equiv -1 \pmod{17}$
Also: $6^{15}=6^1 \cdot 6^2 \cdot 6^4 \cdot 6^8$ ($15$ is $1111$ in binary), so modulo $17$ this becomes
$6 \times 2 \times 4 \times -1 = -48 \equiv -48 + 3\times 17 = 3$
So $3$ is the answer.
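The repeated-squaring steps above are easy to check in Python; the built-in three-argument `pow` does exactly this kind of modular exponentiation:

```python
p = 17
s2 = 6**2 % p          # 36 mod 17 = 2
s4 = s2**2 % p         # 4
s8 = s4**2 % p         # 16, i.e. -1 mod 17

# 15 = 1 + 2 + 4 + 8, so 6^15 = 6^1 * 6^2 * 6^4 * 6^8
result = (6 * s2 * s4 * s8) % p

print(result)          # 3
print(pow(6, 15, p))   # 3, via the built-in
```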
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Proof: If $x$ is odd, then $x+2$ is odd. I'm fairly new to writing proofs so any advice can help.
I'm asked to prove the following statement: "If $x$ is odd, then $x+2$ is odd". Here is my proof:
We will prove this by contraposition: if $x+2$ is not odd, then $x$ is not odd.
Let there be an integer $k$ such that $x+2 = 2k$.
Thus,
\begin{align}
x & = 2k-2 \\
& = 2(k-1)
\end{align}
Then $x = 2(k-1)$ is an even number.
Since the contrapositive is true, the statement "If $x$ is odd, then $x+2$ is odd" is true by logical equivalency.
The problem is: I don't know if my proof is enough or how to properly tackle them. Any advice?
| This seems fine as long as you know that "not odd" is the same as even for integers. Also, for your opening sentence in the proof, I might say "If $x+2$ is even then we can write $x+2=2k$ for some integer $k$."
You can also just prove this directly if you know that odd integers are of the form $2k+1$. That is, if $x=2k+1$, then $x+2=2(k+1)+1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove or construct counterexample for statement about measure. Let $(S,\mathcal S,u)$ be a measure space and $f,g \in L^0$ satisfy $u(\{x \in S : f(x) < g(x)\}) > 0$. Prove or construct a counterexample for the following statement: there exist constants $a, b \in \mathbb R$ s.t. $u(\{x \in S : f(x) \leq a < b \leq g(x)\}) > 0$.
My first instinct was to try to find a counterexample. I tried to do this by finding a situation where f(x) is always increasing s.t. for any choice of a, all f(x) < a would yield only a countable set of points, hence obtaining measure 0, but I am having a difficult time doing this. I also tried proving the statement using lim sups and infs, but I don't know if they even necessarily exist.
| $\{x:f(x)<g(x)\}=\bigcup_{p \in \Bbb{Q}} \bigcup_{q \in \Bbb{Q}}\{x:f(x) \leq p<q \leq g(x)\}$
There exist $p_0,q_0 \in \Bbb{Q}$ such that $u(\{x:f(x) \leq p_0<q_0 \leq g(x)\})>0$,
because if all these sets had measure zero then by countable subadditivity you would have that $u(\{x:f(x)<g(x)\})=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to define a group structure on a given arbitrary set? I am doing a course in Abstract Algebra, and my teacher gave me a question: "Find all possible group structures on a set X whose cardinality is ≤ 4."
I know basic group theory, but I am unable to understand what exactly does the question expect us to do (meaning of the question in simple language, and a possible technique or way to answer such questions, exact answer not needed.)
| As you are specifically wanting
*
*to understand what the question asks, and
*some hints.
I will address these and (in view of (2)) not give a worked solution.
1) As I read it, the question essentially wants you to fix a set with $4$ elements and find all group structures on this set. A different (easier) question is "find all groups with four elements" (but then the question should have read "Find all possible group structures up to isomorphism on a set $X$ whose cardinality is $\leq 4$).
Possibly this second interpretation is what was intended when the question was written, but I do not think this is what the question is actually asking. [This is discussed in the comments to the question.]
2) It is well-known that there are two groups of order $4$, up to isomorphism. If you are unaware of this fact then you should start by verifying it. Lets fix the set $X=\{a, b, c, d\}$. Every bijection from the set $X$ to the group $\mathbb{Z}_4$ defines a group structure on $X$. So, for example, $a\mapsto 0$, $b\mapsto 1$, $c\mapsto 2$, $d\mapsto 3$ gives a group structure, while $a\mapsto 1$, $b\mapsto 2$, $c\mapsto 3$, $d\mapsto 0$ gives a different group structure. There are $4!$ bijections between $X$ and $\mathbb{Z}_4$, and similarly $4!$ between $X$ and $\mathbb{Z}_2\times\mathbb{Z}_2$ (the Klein $4$-group). Hence, there are at most $4!+4!$ group structures which we can put on $X$. However, there is some double-counting going on. The actual number of group structures is $4!/2+4!/6=16$. Why?
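The final count can also be verified by brute force: generate every relabeling of the two Cayley tables ($\mathbb Z_4$ and the Klein four-group) on the set $\{0,1,2,3\}$ and count the distinct tables. This is my own sketch of that computation:

```python
from itertools import permutations

def relabel(table, p):
    """Cayley table of the same group after renaming element x to p[x]."""
    inv = [0] * 4
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(tuple(p[table[inv[a]][inv[b]]] for b in range(4)) for a in range(4))

z4 = tuple(tuple((a + b) % 4 for b in range(4)) for a in range(4))   # Z_4
v4 = tuple(tuple(a ^ b for b in range(4)) for a in range(4))         # Klein four (XOR)

structures = {relabel(t, p) for t in (z4, v4) for p in permutations(range(4))}
print(len(structures))   # 16 = 4!/2 + 4!/6
```

A permutation fixes a table exactly when it is an automorphism of that group structure, which is why $\mathbb Z_4$ contributes $4!/2$ distinct tables and the Klein four-group contributes $4!/6$.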
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $\lim_{x \to 0} \frac{(\tan(\tan x) - \sin (\sin x))}{ \tan x - \sin x}$ Find $$\lim_{x\to 0} \dfrac{\tan(\tan x) - \sin (\sin x)}{ \tan x - \sin x}$$
$$= \lim_{x \to 0} \dfrac{\frac{\tan x \tan (\tan x)}{\tan x}- \frac{\sin x \sin (\sin x)}{\sin x}}{ \tan x - \sin x} = \lim_{x \to 0} \dfrac{\tan x - \sin x}{\tan x - \sin x} = 1$$
But the correct answer is $2$. Where am I wrong?
| @Surb identified your error with the choice$$f_1=\tan(\tan x),\,g_1=\tan x,\,f_2=\sin(\sin x),\,g_2=\sin x.$$One method that would work is to use$$\tan x=x+\frac13 x^3+o(x^3),\,\sin x=x-\frac16 x^3+o(x^3)$$together with$$x+cx^3+c(x+cx^3)^3=x+2cx^3+o(x^3),$$viz.$$\frac{\tan(\tan x)-\sin(\sin x)}{\tan x-\sin x}=\frac{\tan(x+\frac13 x^3+o(x^3))-\sin(x-\frac16 x^3+o(x^3))}{\frac12x^3+o(x^3)}\\=\frac{(x+\frac23 x^3)-(x-\frac13 x^3)+o(x^3)}{\frac12x^3+o(x^3)}=2+o(1).$$
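A quick numerical sanity check of the limit, evaluating the ratio at small $x$ with plain `math`:

```python
import math

def ratio(x):
    num = math.tan(math.tan(x)) - math.sin(math.sin(x))
    den = math.tan(x) - math.sin(x)
    return num / den

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))   # tends to 2 as x -> 0
```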
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Confusion on integrability of stopping times I know the definition of a $\mathbb F-$stopping time $\tau$ is that for all $n \in\mathbb N $ that $\{ \tau \leq n\} \in \mathcal{F}_{n}$
How do the ideas of integrability and well-definedness of $\tau$ actually fit into the concept of a stopping time? I realize the question is not concise.
For example, if $P(\tau < \infty)=1$ does this mean that $\tau$ is integrable? I know this is the case if $\tau$ is a.s. bounded, i.e. there exists $c\in \mathbb N$ so that $\tau \leq c$ a.s. and hence $E[\tau] \leq E[c]=c<\infty$.
| You should just think of $\tau$ as a random variable which takes values in $\mathbb N$ with the additional measurability property that $\{\tau \le n\} \in\mathcal F_n$ for each $n\in\mathbb N$.
Hence, $\tau$ being integrable and well-defined means the same thing as what it does for any other random variable to be integrable or well-defined. For example, integrability means that $\mathbb E[\vert\tau\vert] = \mathbb E[\tau] < \infty$.
That $\mathbb P(\tau < \infty) = 1$ just means that $\tau$ is a.s. finite. As with any other random variable, this is weaker than integrability. That is, if $\tau$ is integrable, then it is almost surely finite, but not vice-versa. For an example of a random variable that is a.s. finite but not integrable, consider the one given in the St. Petersburg Paradox.
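To make the closing remark concrete: in one standard version of the St. Petersburg example, $T = 2^k$ with probability $2^{-k}$ for $k \ge 1$. Then $T$ is a.s. finite, yet the truncated expectation $E[T\,\mathbf 1_{\{T \le 2^K\}}] = K$ grows without bound, so $E[T]=\infty$. A small exact-arithmetic sketch:

```python
from fractions import Fraction

# T = 2^k with probability 2^{-k}, k = 1, 2, 3, ...

def tail_probability(K):
    # P(T > 2^K) = sum_{k > K} 2^{-k} = 2^{-K}, which tends to 0:
    # T is almost surely finite
    return Fraction(1, 2**K)

def truncated_mean(K):
    # E[T; T <= 2^K] = sum_{k=1}^{K} 2^k * 2^{-k} = K, which tends to infinity
    return sum(Fraction(2**k) * Fraction(1, 2**k) for k in range(1, K + 1))

print(tail_probability(10))   # 1/1024
print(truncated_mean(50))     # 50
```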
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Eigenvector of Matrix with Duplicate Columns I have this $5 \times 5$ matrix :
\begin{pmatrix}
1&1&1&1&1 \\ 2&2&2&2&2 \\ 3&3&3&3&3 \\
4&4&4&4&4 \\ 5&5&5&5&5
\end{pmatrix}
I need to find the eigenvalues and the eigenvectors. I found that the eigenvalues are $15$ and $0$, with $0$ having algebraic multiplicity $4$.
I am now calculating the eigenvectors and I was wondering is there a simple direct way of knowing the eigenvector of the eigenvalue of $15$, not by the usual calculation?
Is there a known eigenvector for an eigenvalue that is the sum of every column, as there is an eigenvector for an eigenvalue that is the sum of every row $[(1,1,1,1,...) ]$?
And what about cases in which the columns have the same sum, but are not identical? What is the eigenvector in that case?
For example:
\begin{pmatrix}
3&5&4 \\ 2&2&1 \\ 2&0&2
\end{pmatrix}
| This is obviously a rank-one matrix, which you’ve verified by finding that the algebraic multiplicity of $0$ is four. Its column space (image) is spanned by $v=(1,2,3,4,5)^T$, so the only possibility for an eigenvector with a nonzero eigenvalue is a multiple of $v$.
As for the second question, remember that there’s no such thing as “the” eigenvector: every nonzero scalar multiple of an eigenvector is also an eigenvector with the same eigenvalue. So, if the columns of a matrix are all multiples of some nonzero column, then the column space is again one-dimensional and every nonzero column is an eigenvector.
Algebraically, such a matrix can be decomposed into the outer product $uv^T$ of a pair of vectors. We then have $(uv^T)u=u(v^Tu)=(v^Tu)u=\lambda u$, therefore $u$ is an eigenvector with eigenvalue $v^Tu$.
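A plain-Python check of the first paragraph's claim (no linear-algebra library needed):

```python
A = [[i] * 5 for i in range(1, 6)]   # row i is (i, i, i, i, i)
v = [1, 2, 3, 4, 5]                  # spans the one-dimensional column space

Av = [sum(row[j] * v[j] for j in range(5)) for row in A]

print(Av)                    # [15, 30, 45, 60, 75]
print([15 * x for x in v])   # the same: A v = 15 v
```

This is exactly the outer-product identity above with $u=(1,2,3,4,5)^T$, $v=(1,1,1,1,1)^T$ and eigenvalue $v^Tu=15$.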
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
an ordered abelian group has no order units An element $e$ in $G^{+}$ is called an order unit in an ordered abelian group $(G,G^{+})$ if for any $g\in G$, there exists a positive integer $n$ such that $-ne\leq g \leq ne$.
In Rordam's book,there is an example to show that not all ordered abelian groups have order units.
He takes $G$ as $c_0(\Bbb N,\Bbb Z)$,which is the group of all sequences of integers such that eventually converge to $0$. Let $G^{+}$ be the set of those sequences $(x_n)$ such that $x_n\geq 0$. Then $(G,G^{+})$ is an ordered abelian group without order units.
Suppose $(G,G^{+})$ has an order unit $f\in G^{+}$. How do we choose $g\in G$ such that there does not exist $n$ with $-nf\leq g \leq nf$?
| Suppose $f\in c_0(\mathbb N,\mathbb Z)$ is an order unit. Put $k_0=\max\{k\in\mathbb N:f(k)\neq0\}$. Define $g\in c_0(\mathbb N,\mathbb Z)$ by $g(k_0+1)=1$ and $g(k)=0$ for $k\neq k_0+1$. Then there is no $n\in\mathbb N$ such that $g\leq nf$, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integrating both sides of an equation with respect to what? Let's say we have the following DE:
$ \frac{dy}{dx} = x $
1. That's how It could be solved:
$dy = x dx$
$\int{dy}=\int{xdx}$
$y=\frac{1}{2}x^2+c$
Is it mathematically correct to separate $dy$ and $dx$? Or does it make it appear as though the derivative of $y$ with respect to $x$ were a fraction, which is not true?
2. Another method of writing the solution can be:
$\int\frac{dy}{dx} dx=\int{xdx}$
$\int{dy}=\int{xdx}$
$y=\frac{1}{2}x^2+c$
That's integrating the DE with respect to x. Is this mathematically correct? It seems like the $dx$ cancels each other in the LHS, but we are not allowed to treat this as a fraction.
My Questions are:
*
*Which of these ways of writing the solution is the most accurate?
*And which of these 2 is wrong in terms of following mathematical structure?
*When we integrate both sides of the equation, can we just put the integral sign without integrating with respect to a variable? (for example in case 1)
| It is a separable equation, where $ y= y(x)$. Consider the problem
$$\frac{dy}{dx}=F(x)\cdot Q(y)$$ where $F(x)$ depends only on $x$ and $Q(y)$ only on $y$.
If $Q(y) \neq 0$ we can rewrite this as
$$\frac{y'(x)}{Q(y(x))}= F(x)$$
$$\int_{x_0}^x{\frac{y'(t)}{Q(y(t))}dt}=\int_{x_0}^x{F(t)dt}$$
Substituting $s=y(t)$, we get
$$\int_{y(x_0)}^{y(x)}{\frac{ds}{Q(s)}} = \int_{x_0}^x{F(t)dt}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Possible for $X_{n} \to - \infty$ when $E[\xi_{i}]=0$ and $X_{n}=\sum\limits_{i=1}^{n}\xi_{i}$ I am attempting to construct an example where $X_{n} \to - \infty$ a.s., $E[\xi_{i}]=0$, and $X_{n}=\sum\limits_{i=1}^{n}\xi_{i}$.
My idea: we would need a process that has greater weighting to the negative side, e.g.
$P(\xi_{i}=-\frac{1}{i})=1-\frac{1}{i}$ and $P(\xi_{i}=1-\frac{1}{i})=\frac{1}{i}$, so that $E[\xi_{i}]=0$.
This is the example I had in mind; however, I am not sure it assures that $X_{n} \to -\infty$ a.s., since I always have positive weighting (albeit with dwindling probability).
| Consider the random variable $\zeta_i$ with $P(\zeta_i=2^i)=2^{-i}$ and $P\left(\zeta_i=\frac{-2^i}{2^i-1}\right)=\frac{2^i-1}{2^i}$ for $i\geq 1$. Then it can be verified that $E\zeta_i=0$ and furthermore $P(\zeta_i>0\quad \text{i.o})=0$ by the Borel cantelli lemma. Hence eventually $\zeta_i<0$ with probability one so $X_n=\sum_{i=1}^n \zeta_i\to -\infty$ a.s. as $n\to \infty$.
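The mean-zero claim for $\zeta_i$ is quick to verify in exact arithmetic (my own check):

```python
from fractions import Fraction

def mean_zeta(i):
    p = Fraction(1, 2**i)                               # P(zeta_i = 2^i)
    positive_part = 2**i * p
    negative_part = Fraction(-(2**i), 2**i - 1) * (1 - p)
    return positive_part + negative_part

print(all(mean_zeta(i) == 0 for i in range(1, 11)))     # True
```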
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3377127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does there exist a figure $A$ in the plane $E^2$ whose isometry group is isomorphic to $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2$? I suspect that there is no such $A \subset E^2$ which satisfies
$$\text{Iso}(A) \simeq \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2$$
But I'm stuck on showing this in a formal way. Can we use the classification of Euclidean plane isometries theorem to prove it?
| One way to prove this is to prove that the isometry group of $E^2$ does not even contain a subgroup isomorphic to $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2$. Yes, the classification of Euclidean isometries will help, but you also need to know about some special subgroups of the group of isometries. In particular, you can use the following facts:
*
*For every finite group of isometries $G$ of $E^2$, there exists a point $x \in E^2$ fixed by each element of $G$.
So we can assume that your subgroup $G = \text{Iso}(A) \approx \mathbb Z_2 \oplus \mathbb Z_2 \oplus \mathbb Z_2$ fixes some point $x$.
*
*The subgroup $\Gamma_x < E^2$ of all isometries that fixes $x$ can be decomposed as a split short exact sequence
$$1 \to S^1_x \to \Gamma_x \to \mathbb Z_2 \to 1
$$
where $S^1_x$ is the circle group of rotations around $x$, and a splitting $\mathbb Z_2 \to \Gamma_x$ is given by any reflection across a line through $x$.
So, $G$ is contained in $\Gamma_x$.
*
*The normal subgroup $S^1_x$ contains a unique order $2$ element, namely the $180^\circ$ rotation around $x$. That element generates an order 2 subgroup $R_x$, which is a normal subgroup of $\Gamma_x$.
To finish the proof we consider two cases.
If $G$ contains $R_x$ then $G \cap S^1_x = R_x$, because of uniqueness of the order $2$ element in the group $S^1_x$. It follows that the homomorphic image of $G$ in $\mathbb Z_2$ is isomorphic to $G/R_x$ which has order $4$.
Whereas if $G$ does not contain $R_x$ then $G \cap S^1_x$ is trivial, by the same uniqueness argument. It follows the homomorphic image of $G$ in $\mathbb Z_2$ is isomorphic to $G$ which has order $8$.
In either case one gets a contradiction: $\mathbb Z_2$ only has order 2.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3377273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Sets and subsets: What is the difference between these two statements? Would it be correct to say that $\emptyset \subseteq \emptyset$ or $\emptyset \subseteq \{\emptyset\}$? To my understanding, the null set is just an empty set, so a null set is a subset of a set that contains the null set as an element, hence $\emptyset \subseteq \{\emptyset\}$ would be true. But wouldn't the first statement also be true?
| They are both true but mean different things altogether.
$\emptyset \subset \emptyset$ is true.
It is true for any of the following reasons and maybe more.
1) Every set is a subset of itself.
2) The emptyset is a subset of any set
3) The emptyset has no elements, so every element of the emptyset is vacuously in the emptyset
4) there are no elements in the emptyset that are not in the emptyset.
$\emptyset \subset \{\emptyset\}$ is also true but it means something entirely different.
It is true because
1) The emptyset is a subset of any set.
2) The emptyset has no elements at all so every element it has is vacuously an element of $\{\emptyset\}$.
3) The emptyset doesn't have any elements not in $\{\emptyset\}$.
etc.
However the reason you gave:
"so a null set is a subset of a set that contains the null set as an element, hence ∅⊆{∅} would be true."
is completely wrong.
A set, $A$, is NOT a subset of a set containing it.
$A\not \subset \{A\}=B$
$A$ is an element of $B$ but the elements within $A$ are not elements of $B$ at all. We never consider the elements within elements when determining subsets.
That $A\in B$ is no more relevant, and has no more bearing on whether $A$ is a subset of $B$, than whether $froofroothedog \in B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3377576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Chess Board (5×5) problem
25 small squares of a 5×5 chess board are coloured with 5 different
colours available, such that each row contains all 5 available colours
and no two adjacent squares have same colour. Then the no. of
different arrangements possible are?
My attempt:
Let the colours be R,B,G,W,V
To fill the first row I have 5×4×3×2×1
Now for the second row I am getting stuck: I am able to determine only 4 cases, and it all gets messed up again and again.
Help me with the editing as well please.
| As you noticed for the first we have $5!$ arrangement, for the second row we can use inclusion and exclusion principle as follows.
Notably, the number of arrangements of the second row in which at least one square has the same colour as the (vertically adjacent) square above it in the first row is, by the inclusion-exclusion principle:
$$5\cdot 4!-\binom{5}{2}\cdot 3!+\binom{5}{3}\cdot 2!-\binom{5}{4}\cdot 1!+\binom{5}{5}\cdot 0!=76$$
therefore the number of arrangements of the second row in which no square has the same colour as the square above it in the first row is:
$$5!-76=120-76=44$$
which is likewise valid for each of the three remaining rows; finally, by the multiplication rule, we obtain:
$$5!\cdot (44)^4$$
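The count $44$ (a derangement-style count relative to the row above) and the final answer can be checked by brute force over permutations:

```python
from itertools import permutations

first_row = tuple(range(5))   # any fixed colouring of the row above

# second rows differing from the row above in every column
valid = sum(
    1
    for p in permutations(range(5))
    if all(p[i] != first_row[i] for i in range(5))
)

total = 120 * valid**4        # 5! choices for the first row, then 44 per later row

print(valid)   # 44
print(total)   # 449771520
```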
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3377725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Calculating integral of signum I am supposed to calculate the following integral: $$\int _{0}^{1}\mathrm{sgn}(x-x^{3})dx.$$ I assumed that on the interval $(0,1)$ the signum is positive. So: $$\int _{0}^{1}\mathrm{sgn}(x-x^{3})dx=\left [ x-x^{3}\right ]_{0}^{1}=0.$$ Is this correct?
| We have $x-x^3 >0$ for $x \in (0,1).$ Hence $\mathrm{sgn}(x-x^3)=1$ for $x \in (0,1).$
Thus $\int _{0}^{1}\mathrm{sgn}(x-x^{3})dx= \int _{0}^{1}1dx=1.$
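A midpoint Riemann-sum check: every sample point lies in $(0,1)$, where $x-x^3>0$, so every summand equals $1$:

```python
def sgn(t):
    return (t > 0) - (t < 0)

n = 10_000
approx = sum(sgn(x - x**3) for x in ((k + 0.5) / n for k in range(n))) / n

print(approx)   # 1.0
```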
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3377856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Mean value theorem for vector valued function in $\mathcal{C}([0,a]\times E\times E, E)$ Let $a$ be a real number such that $a>0$, $E$ a Banach space, and $f :[0,a]\times E\times E\rightarrow E$ a continuous function.
Is the following statement correct?
For every $t\in [0,a]$, with $\overline{conv}$ denoting the closure of the convex hull: $$\int_{0}^{t}f(s,u(s),v(s))\, ds\in t\cdot\overline{conv} \{f(s,u(s),v(s))\::s\in [0,t] \}$$
with $u,v\in \mathcal C([0,a],E)$
If so, why is this correct?
| In the end, you are simply integrating a continuous function. You always have
$$
\int_0^a f(s)\,ds\in a\,\overline{\operatorname{conv}}\{f(s):\ s\in [0,a]\}.
$$
This is a straightforward consequence of the definition of the Riemann integral: the Riemann sums for your integral are of the form
$$
\sum_j f(s_j)\, \Delta_j=a\,\sum_j f(s_j)\, \tfrac{\Delta_j} a,
$$
where $\sum_j\Delta_j=a$. The right-hand-side is a convex combination of values of $f$, times $a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3377995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Wrong solution in commission problem. In one congress there are 15 physics and 15 math teachers. How many committees of 8 teachers can be formed with at least 4 math teachers and at least 2 physics teachers?
I know how to solve this problem. However, I can't explain why the following solution is incorrect.
1)
Commission with 6 mathematicians and 2 physicists:
possibilities to select 4 mathematicians * possibilities to select 2 physicists * possibilities to select the 2 remaining mathematicians from the 11 left:
${15 \choose 4} {15 \choose 2} {11 \choose 2}$
2)
Commission with 4 mathematicians and 4 physicists:
possibilities to select 4 mathematicians * possibilities to select 2 physicists * possibilities to select the 2 remaining physicists from the 13 left:
${15 \choose 4} {15 \choose 2} {13 \choose 2}$
3)
Commission with 5 mathematicians and 3 physicists:
possibilities to select 4 mathematicians * possibilities to select 2 physicists * possibilities to select the 1 remaining mathematicians from the 11 left * possibilities to select the 1 remaining physicists from the 13 left:
${15 \choose 4} {15 \choose 2} {11 \choose 1} {13 \choose 1}$
The answer would then be 1) + 2) + 3).
Why is this solution wrong?
| You're double counting. For instance, suppose we label the math teachers $M_1$ through $M_{15}$. In the 6 mathematicians case, you're treating choosing $M_1$ as the "4" as different from choosing $M_1$ as one of the "2 remaining mathematicians from the 11 left". But there's no difference between those two cases. For the 6 mathematicians case, you should just have ${15 \choose 6} {15 \choose 2}$.
Consider a simpler case. How many ways are there to choose three people with from 2 math teachers and 2 physics teachers, if you need at least one of each? For the case where we end up with two math teachers, the logic you present would say "there are ${2 \choose 1}=2$ ways to pick which math teacher we're using to satisfy the 'at least one math teacher' requirement, and ${1 \choose 1}=1$ ways to choose a math teacher from the remaining 1 math teacher". But there's no difference between choosing one math teacher versus the other to be the "at least one".
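To make the double counting concrete, one can compare the correct count with the flawed one using `math.comb` (the case labels are mine, and I use ${15\choose 4}$ for the first factor in each flawed case):

```python
from math import comb

# Correct count: split by the exact (math, physics) composition of the 8-person committee.
correct = sum(comb(15, m) * comb(15, 8 - m) for m in (4, 5, 6))  # (4,4), (5,3), (6,2)

# Flawed count from the question: pick 4 "required" mathematicians and 2 "required"
# physicists first, then fill the remaining seats -- each committee is counted many times.
flawed = (comb(15, 4) * comb(15, 2) * comb(11, 2)                 # 6 math, 2 phys
          + comb(15, 4) * comb(15, 2) * comb(13, 2)               # 4 math, 4 phys
          + comb(15, 4) * comb(15, 2) * comb(11, 1) * comb(13, 1))  # 5 math, 3 phys

# e.g. in the 6-math case every committee is counted comb(6, 4) = 15 times:
assert comb(15, 4) * comb(11, 2) == comb(15, 6) * comb(6, 4)
print(correct, flawed)
```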
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3378108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Question about Partial Fractions For example:
$$\frac{{{x^2} + 4}}{{x\left( {x + 2} \right)\left( {3x - 2} \right)}}\, = \frac{A}{x} + \frac{B}{{x + 2}} + \frac{C}{{3x - 2}}$$
first method is:
$${x^2} + 4 = A\left( {x + 2} \right)\left( {3x - 2} \right) + Bx\left( {3x - 2} \right) + Cx\left( {x + 2} \right)$$
but it is hard and takes much time to find $A, B, C$.
second method (substituting the roots, or "zeros") is:
\begin{align*}x & = 0 \,\,\,\,\, : & \hspace{0.5in}4 & = A\left( 2 \right)\left( { - 2} \right) & \hspace{0.5in} & \Rightarrow & \hspace{0.25in}A & = - 1\\ x & = - 2 : & \hspace{0.5in}8 & = B\left( { - 2} \right)\left( { - 8} \right) & \hspace{0.25in}&\Rightarrow & \hspace{0.25in}B & = \frac{1}{2}\\ x & = \frac{2}{3}\,\, : & \hspace{0.5in}\frac{{40}}{9} & = C\left( {\frac{2}{3}} \right)\left( {\frac{8}{3}} \right) & \hspace{0.25in} & \Rightarrow & \hspace{0.25in}C & = \frac{{40}}{{16}} = \frac{5}{2}\end{align*}
It is a better and easier method for partial fractions. My question is: why does this method work? Can you prove this method?
| It works from here:
$$f(x)={x^2} + 4 = A\left( {x + 2} \right)\left( {3x - 2} \right) + Bx\left( {3x - 2} \right) + Cx\left( {x + 2} \right)$$
indeed
$$f(0)=4=A(2)(-2)\implies A=-1$$
and so on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3378438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Groups with the subtraction operation Why do integer mod integer sets with the operation of subtraction not form groups?
For example, the integers mod 3 are $\{0,1,2\}$, which has an identity ($0$) and inverses (self-inverses). And subtraction is a binary operation because applying it to any arguments outputs something still within the integers mod 3. I suspect I am missing something as to why that is not a group.
| Have you checked associativity?
For example, is $(2-1)-1=2-(1-1)$?
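A two-line check in Python (working mod 3) confirms the failure:

```python
# Subtraction mod 3 is not associative, so ({0,1,2}, -) is not a group.
sub = lambda a, b: (a - b) % 3
left = sub(sub(2, 1), 1)   # (2 - 1) - 1 = 0
right = sub(2, sub(1, 1))  # 2 - (1 - 1) = 2
print(left, right)
```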
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3378544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Evaluate the following infinite sum with parameter So the sum actually originates from the following integral:
$$\int_0^\infty \frac{\left\lfloor x \right\rfloor}{x^{a + 1}}dx$$
where $a$ is a real parameter. I've managed to transform the integral into the following sum (except for the case $a = 0$, which is observed separately), which actually became a lot more interesting for me:
$$\sum_{n=0}^\infty \frac{n}{a}(\frac{(n+1)^a - n^a}{n^a(n+1)^a})$$
But I'm clueless as to how to approach that sum? Any ideas? I'm really keen on figuring this out! Many thanks in advance!
| $HINT$
$n\frac{(n+1)^a-n^a}{n^a(n+1)^a}=\frac{n}{n^a}-\frac{n+1-1}{(n+1)^a}=\frac{1}{n^{a-1}}-\frac{1}{(n+1)^{a-1}}+\frac{1}{(n+1)^a}$
Now take cases for $a$
For $a=1$ the series diverges.
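One can verify the algebraic identity in the hint numerically, and (taking $a=2$ as an illustration of my own) watch the partial sums settle at $\pi^2/6$: the telescoping part sums to $1$, and the leftover $\sum_{n\ge 1}\frac{1}{(n+1)^2}=\frac{\pi^2}{6}-1$.

```python
import math

def term(n, a):
    return n * ((n + 1) ** a - n ** a) / (n ** a * (n + 1) ** a)

# the identity from the hint, checked pointwise for a few values of a
for n in range(1, 50):
    for a in (0.5, 2.0, 3.0):
        rhs = 1 / n ** (a - 1) - 1 / (n + 1) ** (a - 1) + 1 / (n + 1) ** a
        assert math.isclose(term(n, a), rhs, rel_tol=1e-9)

# for a = 2 the sum is 1 + (pi^2/6 - 1) = pi^2/6
partial = sum(term(n, 2.0) for n in range(1, 200_000))
print(partial)  # ≈ 1.6449
```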
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3378660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is the sample quantile unbiased for the true quantile? I would like to find a way to show whether the sample quantile is an unbiased estimator of the true quantiles. Let $F$ be strictly increasing with density function $f$. I will define the $p$-th quantile for $0<p<1$ as $Q(p)=F^{-1}(p)$ and the sample quantile as $$\hat{F}_n^{-1}(p)=\inf\{x:\hat{F}_n(x)\geq p\},$$ where $\hat{F}_n(x)$ is the empirical distribution function, given by $$\hat{F}_n(x)=\frac{1}{n}\sum_{i=1}^n I(X_i \leq x).$$ Based on literature I have read, I expect the sample quantile to be biased, but I am having trouble figuring out how to take the expected value of $\hat{F}_n^{-1}(p)$, particularly since it is defined as the infimum of a set. I do know that the expected value of the empirical distribution function is $F(x)$. Any help or references that could guide me would be greatly appreciated!
$\hat{F}_n^{-1}(p)$ is the smallest value $x$ such that at least $p$ fraction of the sample points satisfy $X_i \leq x$. In other words, at least $np$ of the sample points satisfy $X_i \leq x$, and since $np$ may not be an integer we can actually say at least $\lceil np \rceil$. Thus $\hat{F}_n^{-1}(p)=x$ if and only if at least $\lceil np \rceil$ of the sample points satisfy $X_i \leq x$, and there exists $X_i$ such that $X_i=x$ (otherwise we'd be able to shrink $x$ a little and still have $\hat{F}_n(x)\geq p$).
This is still in a bit too complicated a form to take the expected value, but it may help to look at a small case, say $n=1$. In this case, if $p>0$ then $\hat{F}_n^{-1}(p)=x$ if and only if $X_1=x$. In other words, $\hat{F}_n^{-1}(p)=X_1$ for all $p>0$. This is not an unbiased estimator of $Q(p)$.
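The $n=1$ case is easy to check by simulation (exponential data, so the true median $\ln 2$ differs from the mean $1$; the distribution, seed, and sample size are choices of mine):

```python
import math
import random

random.seed(0)
# With n = 1, the sample p-quantile is just X_1 for every p > 0,
# so its expectation is E[X_1] -- the mean of the distribution, not Q(p).
samples = [random.expovariate(1.0) for _ in range(200_000)]
estimate = sum(samples) / len(samples)   # Monte Carlo E[sample median] for n = 1
true_median = math.log(2)                # Q(0.5) for Exp(1), about 0.693
print(estimate, true_median)
```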
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3378799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The slope of the hyperbola $b^2 x^2 - a^2y^2 = a^2 b^2$ at the upper end of its right-hand latus rectum is $4/3$. What is the eccentricity? How to approach this type of problem?
The slope of the curve $b^2 x^2 - a^2y^2 = a^2 b^2$ at the upper end of its latus rectum to the right of the origin is $4/3$. What is the eccentricity of the curve?
I get the derivative $y'= \dfrac{xb^2}{ya^2}$. I still don't have the $x$ and $y$ coordinates of the latus rectum endpoint to insert into the equation $e=\sqrt {a^2 + b^2}/a$.
| As mentioned by @Blue, the latus rectum is a vertical line through the focus $(c,0) \equiv (ae,0)$.
The abscissa of the latus rectum is $ae = \sqrt{a^2+b^2}.$ So, to find the $y$ coordinates of its termini, we have $$b^2a^2e^2 - a^2y^2 = a^2b^2$$ $$\implies y^2 = b^2(e^2-1)$$
$$\implies y' = \frac 43 = {aeb^2\over b\sqrt{e^2-1} a^2} =\frac ba\cdot{e\over\sqrt{e^2-1}}$$
Note that $$e^2 = 1+{b^2\over a^2}$$
$$\implies \sqrt{e^2-1} = \frac ba$$
$$\implies \boxed{e = \frac 43}$$
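A numerical spot check (taking $a=3$, so that $e=4/3$ forces $b=\sqrt 7$; these sample values are mine):

```python
import math

e = 4 / 3
a = 3.0
b = a * math.sqrt(e**2 - 1)     # from e^2 = 1 + b^2/a^2: b = a*sqrt(e^2 - 1) = sqrt(7)
c = a * e                       # abscissa of the focus / latus rectum: c = 4
y = b**2 / a                    # upper end of the right latus rectum: y = b^2/a
slope = c * b**2 / (y * a**2)   # y' = x*b^2 / (y*a^2) evaluated at (c, y)
print(slope)  # 4/3
```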
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3378909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability that two people share the same birthday?
Suppose a room contains $n$ people. What is the probability that at least two people share the same birthday?
Let $A$ be the event that at least two people have the same birthday. I know that the way to solve this question is actually to find the complement of $A$ and compute $1 - P(A^c)$. However, I'm confused on why $A^c$ is the event that no one shares the same birthday (everyone has different birthdays), and not the event that at most two people share the same birthday. Isn't the opposite or complement of "at least two people share the same birthday" equal to "at most two people share the same birthday?"
| Let $D$ denote the number of days in a year, so $D=365$ or $D=366$ (or something else) depending on how you are counting (and which planet you are living on).
The probability that no one shares the same birthday is the product of the probabilities that the second person doesn't share their birthday with the first $(D-1)/D$ times the probability the third doesn't share with the first two $(D-2)/D$ and so on down the line, until we get
$$
\mathbb P(\text{no common birthdays})=\frac{D-1}{D}\cdots \frac{D-n+1}{D}.
$$
So the probability that at least two people share a birthday is $1$ minus this.
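The product above is easy to evaluate; with $D=365$ the probability first exceeds $1/2$ at $n=23$ (the classic result):

```python
def p_shared(n, D=365):
    # 1 - P(no common birthdays) = 1 - (D-1)/D * (D-2)/D * ... * (D-n+1)/D
    p_none = 1.0
    for k in range(1, n):
        p_none *= (D - k) / D
    return 1 - p_none

print(p_shared(22), p_shared(23))  # ≈ 0.476 and ≈ 0.507
```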
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
For what $a$ does $h$ have an extreme point at $(1,0)$? I got this function:
$h(x, y) = (x − 1)^2 + 2a(x − 1)y + y^2 + y^4$
For what $a$ does $h$ have either a maximum or a minimum point, but not a saddle point, at $(1,0)$?
I have confirmed that the point is stationary, but it gets really tricky when trying to use the Quadratic form when having $a$ around. Do you have any ideas on how to solve this?
| HINT
We have that
*
*$h_x=2(x-1)+2ay\implies h_x(1,0)=0$
*$h_y=2a(x-1)+2y+4y^3\implies h_y(1,0)=0$
then $(1,0)$ is a stationary point as you have noticed.
Then we need to consider
*
*$h_{xx}=2\implies h_{xx}(1,0)=2$
*$h_{yy}=2+12y^2\implies h_{yy}(1,0)=2$
*$h_{xy}=h_{yx}=2a\implies h_{xy}(1,0)=2a$
Finally, proceed by the Hessian matrix test.
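Carrying the test out: from the second derivatives above, the Hessian at $(1,0)$ is $\begin{pmatrix}2&2a\\2a&2\end{pmatrix}$ with determinant $4-4a^2$, so (since $h_{xx}=2>0$) the determinant test gives a local minimum for $|a|<1$ and a saddle for $|a|>1$; the sample values below are mine:

```python
def hessian_det(a):
    # Hessian of h at (1, 0) is [[2, 2a], [2a, 2]]
    return 2 * 2 - (2 * a) ** 2

# h_xx = 2 > 0, so (1,0) is a local minimum when det > 0, a saddle when det < 0
inside = hessian_det(0.5)   # |a| < 1  -> positive determinant -> minimum
outside = hessian_det(2.0)  # |a| > 1  -> negative determinant -> saddle point
print(inside, outside)      # 3.0 -12.0
```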
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Topology on the set $\mathbb N$, $U$ is open iff either $1\not\in U$ or else $\sum_{n\not\in U}\frac1n\lt\infty$
Define a topology on the set $\mathbb N$ of all natural numbers by calling a set $U$ open if either $1\not\in U$ or else $\sum_{n\not\in U}\frac1n\lt\infty$. Take $A=\mathbb N\setminus\{1\}$. Then, show that there is no sequence with values in $A$ converges to $1$.
I found this question here.
(Look at the 'answer' part.)
[My attempt]
Since, for all $N \in \mathbb N$, $\{1,N,N+1,N+2,N+3,...\}$ is an open set containing $1$, if $\{x_n\}_{n=1}^{\infty}$ is a sequence converging to $1$, then the range of $\{x_n\}_{n=1}^{\infty}$ is unbounded.
So, what is next step?
| Since $x_n$ is unbounded we can find a subsequence $x_{n_k}$ such that $x_{n_k}>k^{2}$ for all $k$. Let $U=\{1\}\cup (\{x_{n_1},x_{n_2},...\})^{c}$. Then $U$ is an open set containing $1$. Since $x_i \to 1$ we must have $x_i \in U$ for all $i$ sufficiently large but this is clearly false.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
the existence of a compact subgroup Suppose $G$ is a locally compact Hausdorff topological group; must there exist a non-trivial compact subgroup?
| Say that a locally compact group is topologically torsion-free if it has no non-trivial compact subgroup. Elaborating on Moishe's comment, one sees that
A locally compact group $G$ is topologically torsion-free iff $G$ is Lie, the discrete quotient $G/G^\circ$ is torsion-free, and $G^\circ$ is contractible.
[And $G^\circ$ contractible means isomorphic to $S^k\ltimes R$ for some $k\ge 0$, some simply connected solvable Lie group $R$, and $S=\widetilde{\mathrm{SL}_2(\mathbf{R})}$].
To see this: any locally compact group $G$ has a connected-by-compact open subgroup $H$. If $G$ is topologically torsion-free, so is $H$. But every connected-by-compact locally compact group is compact-by-Lie (solution to Hilbert 5th problem). Hence $H$ is Lie, so $G$ is Lie too. From the connected Lie case, $G^\circ$ has the given form.
Now $G/G^\circ$ is torsion-free: indeed, otherwise, it has a nontrivial finite subgroup $L/G^\circ$. As a virtually connected Lie group, $L$ has a compact subgroup $K$ such that $KL^\circ=L$ (Mostow). Since $L$ is topologically torsion-free, $K=1$, and it follows that $L$ is connected, a contradiction. So $G/G^\circ$ is torsion-free.
Particular case (which can be checked directly using Pontryagin duality):
A locally compact abelian group is topologically torsion-free iff it is isomorphic to $\mathbf{R}^k\times \Lambda$ for some discrete torsion-free abelian group $\Lambda$ and some $k\ge 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Solving the congruence $7x + 3 = 1 \mod 31$? I am having a problem when the LHS has an added constant; if the equation is just a multiple of $x$, it's fine.
But when I have questions like $3x+3$ or $4x+7$, I don't seem to get the right answer at the end.
| By Gauss's algorithm $\bmod 31\!:\,\ 7x\equiv -2\iff x\equiv \dfrac{-2}7\equiv\dfrac{-8}{28}\equiv\dfrac{-39}{-3}\equiv \,\bbox[5px,border:1px solid #c00]{13}$
Or by Inverse Reciprocity
$\bmod 31\!:\,\ \dfrac{-2}{7}\equiv \dfrac{-2-31\!\!\!\!\overbrace{\left[\dfrac{-2}{\color{}{31}}\bmod 7\right]}^{\large -2/3\,\equiv\,-9/3 \,\equiv\, \color{#c00}{-3\ }}}7\equiv\dfrac{-2-31[\color{#c00}{-3}]}7\equiv\dfrac{91}7\equiv\,\bbox[5px,border:1px solid #c00]{13}$
Or by the forward extended Euclidean Algorithm (and its fractional form)
$\ \ \ \ \begin{array}{rr}
[\![1]\!] &31\, x\,\equiv\ 0 \\
[\![2]\!] &\ \color{#0a0}{7\,x\, \equiv -2}\!\!\!\\
[\![1]\!]-4\,[\![2]\!] \rightarrow [\![3]\!] & 3\,x\, \equiv\, 8 \\
[\![2]\!]-2\,[\![3]\!] \rightarrow [\![4]\!] & \bbox[5px,border:1px solid #c00]{x\, \equiv 13}\!\!\!\!
\end{array}$
said multi-fractionally $\ \ \dfrac{0}{31} \overset{\large\frown}\equiv \color{#0a0}{\dfrac{-2}7} \overset{\large\frown}\equiv \dfrac{8}3 \overset{\large\frown}\equiv\,\bbox[5px,border:1px solid #c00]{\dfrac{13}1}\ $ $\ \leftarrow\ \text{easiest}\, {\textit general} \text{ method}$
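All of these routes agree with a direct computation in Python (using the modular inverse built into three-argument `pow`, available for negative exponents in Python 3.8+):

```python
# Solve 7x + 3 ≡ 1 (mod 31), i.e. 7x ≡ -2 (mod 31).
inv7 = pow(7, -1, 31)        # modular inverse of 7 mod 31, which is 9
x = (-2 * inv7) % 31
print(x)                     # 13
assert (7 * x + 3) % 31 == 1
```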
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Understanding Almost Everywhere Convergence I want to better understand the following statement:
Assume $supp(f)=E=supp(f_n)$ for all $n$ with $m(E)<\infty$, $E$ measurable.
A sequence of measurable functions $\{f_n\}\rightarrow f$ almost everywhere on $E$.
*
*Does this mean:
$$\lim_{n\rightarrow\infty}f_n(x)=f(x) \text{ for }x\in B\subset E \text{ with } m(B)>0 $$
What I am confused about is how this compares with pointwise convergence.
I understand $f=g$ almost everywhere on $E$, but I don't understand what is the exact definition for "convergence".
| Convergence almost everywhere is not a type of convergence. We can say "$f$ converges point-wise to $g$ almost everywhere", or "$f$ converges uniformly to $g$ almost everywhere" etc. The "almost everywhere" is saying that convergence happens on the whole domain except for a set of points with measure $0$.
To give you an example, let's say $f$ and $g$ are defined on the real line $\mathbb{R}$. Now imagine a case where "$f$ converges point-wise to $g$ at every point except $x=5$". Well, this means $f$ converges point-wise to $g$ almost everywhere, because the only place it doesn't converge is the set $\{ 5 \}$, which has length $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Semigroup isomorphism between $(\{1,2,\dots \},\times)$ and $(\{0,1,2,\dots \},+)$. I know that the two semigroups $(\{0,1,2,\dots \},\times)$ and $(\{0,1,2,\dots \},+)$ are not isomorphic because if we want to map identity elements together then it can be seen that we can't have an injective function between them, but what can we say about $(\{1,2,\dots \},\times)$ and $(\{0,1,2,\dots \},+)$?
| Suppose there is an isomorphism $f:(\Bbb{N},+) \to ((\Bbb{N}-\{0\}, \times)$. Then since $f$ preserves idempotents, one has $f(0) = 1$. Let $a = f(1)$. Then for every $n >0$, $f(n) = a^n$. Thus $f(\Bbb{N}) = \{a^n \mid n \geqslant 0\}$ and hence $f$ is not a bijection, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is this true? If it is, how can I prove this?
$ (\forall x>1, \ \exists \delta) \ \frac{e^x}{1+x^n}<\frac{e^x}{1+\delta^n}$
I use this property to prove that for $a>1$, $\int_0^a \frac{e^x}{1+x^n}\,dx\to e-1$, but I'm not sure why this property is true.
| Answer for the original question: $\int_0^{1} \frac {e^{x}} {1+x^{n}}dx\to \int_0^{1} e^{x} dx=e-1$ and $\int_1^{a} \frac {e^{x}} {1+x^{n}}dx\to 0$; you can apply DCT for both integrals since the integrand is dominated by $e^{x}$ which is integrable.
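A numerical illustration of the limit (midpoint rule; the exponent values and mesh size are arbitrary choices of mine):

```python
import math

def integral(n, a=2.0, slices=4000):
    # midpoint rule for the integral of e^x / (1 + x^n) over [0, a]
    h = a / slices
    return sum(math.exp((k + 0.5) * h) / (1 + ((k + 0.5) * h) ** n) * h
               for k in range(slices))

# as n grows, the integrand tends to e^x on (0,1) and to 0 on (1, a)
print(integral(10), integral(1000), math.e - 1)
```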
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3379930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Cardinality of power set and binary sequence
Let $A$ be a set and $P(A)$ be the power set of $A$. Define $B(A)$ as
the set of all functions $F:A\rightarrow\{0,1\}$. For example,
$B(\mathbb{N})$ is the set of all binary sequences. Prove that $P(A)$
has the same cardinality as $B(A)$.
When $A$ is finite, this is easy to prove. I am interested in other cases; for instance when $A$ is countably infinite or uncountable. I am also a bit confused with the definition of $B(A)$. Could anyone help me with this one please?
| The bijection is given by defining the function $F_X:A\to\{0,1\}$ with $X\subseteq A$ as:
\begin{align}
F_X(a)=\begin{cases}
1&\text{if $a\in X$}\\
0&\text{if $a\notin X$}
\end{cases}
\end{align}
The things you have to show is that $G:\mathcal P(A)\to B(A)$ with $G(X)=F_X$ is a bijection.
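For a small finite $A$ one can enumerate $G$ and check it is a bijection (a pure illustration with $A=\{0,1,2\}$, my choice; indicator functions are encoded as tuples):

```python
from itertools import combinations

A = [0, 1, 2]
subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

def G(X):
    # G(X) = F_X, encoded as the tuple (F_X(a) for a in A)
    return tuple(1 if a in X else 0 for a in A)

images = {G(X) for X in subsets}
print(len(subsets), len(images))  # 8 8: G is injective, hence a bijection onto B(A)
```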
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $\alpha,\beta\in L$ algebraic over $K$ with degrees $m,n$, then $\alpha\pm\beta$ is algebraic with degree $\leq mn$ Sorry if this is a duplicate, I couldn't find anything on here with $m,n$ not being coprime.
My attempt thus far: first observe that $[k(\alpha):k]=m$, $[k(\beta):k]=n$.
Since $(k(\alpha,\beta):k(\alpha))$, $(k(\alpha,\beta):k(\beta))$ are finite extensions they are algebraic and we have $[k(\alpha,\beta):k(\alpha)]= k_1$ and $[k(\alpha,\beta):k(\beta)]= k_2$ for some $k_1,k_2$
Therefore $[k(\alpha,\beta):k]=mk_1=nk_2$ and $n\vert mk_1$ and $m\vert nk_2$.
But I don't see how to conclude that it's at most $nm$. Any hints would be appreciated.
| We are given that
$[K(\alpha):K] = m, \; [K(\beta):K] = n; \tag 1$
we observe that
$\alpha \pm \beta \in K(\alpha, \beta) = K(\alpha)(\beta); \tag 2$
using (1), by the tower law we have
$[ K(\alpha, \beta): K]$
$= [ K(\alpha, \beta):K(\alpha)][K(\alpha):K] = [ K(\alpha, \beta):K(\alpha)]m. \tag 3$
Now
$ [ K(\alpha, \beta):K(\alpha)] = [K(\alpha)(\beta): K(\alpha)]$
$= \deg m_\alpha(x) \in K(\alpha)[x], \tag 4$
where $m_\alpha(x)$ is the minimal polynomial of $\beta$ over $K(\alpha)$; denoting by
$m(x) \in K[x] \tag 5$
the minimal polynomial of $\beta$ over $K$, it is easily seen we also have
$m(x) \in K(\alpha)[x], \tag 6$
since
$K \subset K(\alpha) \Longrightarrow K[x] \subset K(\alpha)[x]; \tag 7$
it follows then from the minimality of $m_\alpha(x)$ over $K(\alpha)$ that
$\deg m_\alpha(x) \le \deg m(x); \tag 8$
but
$\deg m(x) = [K(\beta):K] = n; \tag 9$
we may then transform (4) into
$ [ K(\alpha, \beta):K(\alpha)] = \deg m_\alpha(x) \le \deg m(x) = n , \tag{10}$
and so (3) becomes
$[ K(\alpha, \beta): K] \le nm; \tag{11}$
we thus conclude in light of (2) that $\alpha \pm \beta$ is algebraic over $K$ with degree at most $mn$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Divergence of Yamabe soliton In the article
Ma, Li; Cheng, Liang, Properties of complete non-compact Yamabe solitons, Ann. Global Anal. Geom. 40, No. 3, 379-387 (2011). ZBL1225.53038. at page 382, there is a calculation. It says that if take divergence of both sides of the equation
$$\nabla^2f=Rg,$$ where $\nabla^2$ is the Hessian operator, $R$ is the scalar curvature and $f\in C^\infty(M)$, we get
$$\nabla_jR+\frac{1}{n-1}R_{jk}\nabla^kf=0.$$ Please show me the intermediate calculation.
| First, $\nabla^k(Rg_{jk})=\nabla_jR$ because $\nabla g=0$.
Second, commuting derivatives using the definition of the Ricci tensor implies that
$$ \nabla^k(f_{jk}) = \nabla^k\nabla_j\nabla_k f= \nabla_j\Delta f + R_{jk}\nabla^k f . $$
Third, the trace of the equation gives $\nabla_j\Delta f = n\nabla_j R$.
Combining these three equations gives the result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to Differentiate two equations to find Maximum Values I am stuck on this Differentiation problem, any help would be great!
If $A=xy$ and $x+5y=20$, find the maximum value of $A$ and the values of $x$ and $y$ for which this maximum value occurs.
Another way to handle constrained problems is the "Lagrange multiplier" method. Write the function to be, in this case, maximized as $f(x,y)= xy$ and write the constraint as $g(x,y)= x+ 5y- 20$. Then $\nabla f= y\vec{i}+ x\vec{j}$ and $\nabla g= \vec{i}+ 5\vec{j}$. An extreme point, either maximum or minimum, of f with constraint g= 0, occurs only when $\nabla f= \lambda \nabla g$ where $\lambda$, the "Lagrange multiplier", is a constant. That is, we must have $y\vec{i}+ x\vec{j}= \lambda \vec{i}+ 5\lambda\vec{j}$ so $y= \lambda$ and $x= 5\lambda$. We also have the constraint x+ 5y= 20- three equations to solve for x, y, and $\lambda$. We have $x+ 5y= 5\lambda+ 5\lambda= 10\lambda= 20$ so $\lambda= 2$ and then x= 10 and y= 2 for a maximum value of A equal to 20.
In this simple case, the simplest thing to do is what nicomezi suggested. Though I would solve the constraint for x= 20- 5y, rather than for y, and write the function to be maximized as $xy= (20- 5y)y= 20y- 5y^2$. Now either take the derivative with respect to y and set it equal to 0: 20- 10y= 0 so y= 20/10= 2 and then x= 20- 5(2)= 10 or "complete the square". $20y- 5y^2= -5(y^2- 4y+ 4- 4)= -5(y- 2)^2+ 20$. That is a parabola opening downward. It is "20 minus something" so is maximum, 20, when that "something", $5(y-2)^2$, is 0, at y= 2.
There you have three different methods, of different "sophistication" and different ease of calculation.
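All three methods agree with a brute-force grid search over the constraint line (grid granularity is my choice):

```python
# maximize A = x*y subject to x + 5y = 20, via the substitution x = 20 - 5y
best_A, best_x, best_y = 0.0, 0.0, 0.0
for k in range(0, 4001):          # y from 0 to 4 in steps of 0.001
    y = k / 1000
    x = 20 - 5 * y                # enforce the constraint x + 5y = 20
    if x * y > best_A:
        best_A, best_x, best_y = x * y, x, y
print(best_A, best_x, best_y)     # 20.0 10.0 2.0
```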
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Uniform convergence of series: $\sum_{n=1}^{\infty}{2^n\sin\left(\frac{x}{3^n}\right)}$ Task: we should find the set of $x\ge 0$ on which the series converges uniformly.
$$\sum_{n=1}^{\infty}{2^n\sin\left(\frac{x}{3^n}\right)}$$
There are some ways to prove uniform convergence of a sum.
I tried to use Dirichlet's and Abel's tests, but they are not suited to this case.
However, I checked the answers and got: it does not converge uniformly on the whole region.
| $$|\sin{\frac{x}{3^n}}| \leq \frac{|x|}{3^n}$$ so the series converges pointwise on the real line.
The series also converges uniformly in every bounded subset of $\Bbb{R}$ by the $M-$test of Weierstrass.
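On a bounded set $[0,R]$, the Weierstrass majorant is $|2^n\sin(x/3^n)|\le R\,(2/3)^n$ (since $|\sin t|\le t$ for $t\ge 0$), whose sum is at most $2R$; a grid check of that bound with $R=10$ (my sample choice):

```python
import math

R = 10.0
grid = [k * R / 500 for k in range(501)]
for n in range(1, 40):
    M_n = R * (2 / 3) ** n                      # Weierstrass majorant on [0, R]
    worst = max(abs(2 ** n * math.sin(x / 3 ** n)) for x in grid)
    assert worst <= M_n + 1e-12                 # each term is dominated by M_n
tail = sum(R * (2 / 3) ** n for n in range(1, 40))
print(tail)  # the majorant series sums to at most 2R = 20
```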
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Is it true that every convex set of the Euclidean space is the sublevel set of some convex function? Let $C \subset \mathbb{R}^n$ be a convex set.
Is it true, that there exists a convex function $f$ such that
$C = \{x | f(x) \leq a\}$ for some $a \in \mathbb{R}$
| No, the claim as written is false. In dimension $n=1$, convex functions are continuous, so if $f$ is convex then $C$ would have to be closed. So as a counterexample, let $C=(-1,1)$.
(I don't know whether a slight change could fix the claim to be true.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Range of convergence of series Find all real value $a$ for which the series $$ \sum_{n=1}^{\infty}
{(\frac{1}{n} -\sin(\frac{1}{n}))^a}$$ convergent.
I tried using the ratio test, the logarithmic test, etc., but I could not find it.
| Since $\lim\limits_{n\rightarrow +\infty}\frac{1}{n}=0$, you have
$$ \frac{1}{n}-\sin\left(\frac{1}{n}\right)\underset{n\rightarrow +\infty}{\sim}\frac{1}{6n^3}$$
Thus
$$ \left(\frac{1}{n}-\sin\left(\frac{1}{n}\right)\right)^a\underset{n\rightarrow +\infty}{\sim}\frac{1}{6^an^{3a}} $$
and the series converges iff $3a>1$ that is to say $a>\frac{1}{3}$.
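The equivalence $\frac1n-\sin\frac1n\sim\frac1{6n^3}$ is easy to confirm numerically (keeping $n$ moderate, since for very large $n$ floating-point cancellation would spoil the difference):

```python
import math

# n^3 * (1/n - sin(1/n)) should approach 1/6 as n grows
ratios = {n: n ** 3 * (1 / n - math.sin(1 / n)) for n in (10, 100, 1000)}
print(ratios)  # values approach 1/6 ≈ 0.166667
```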
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3380876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is a 3×4 coefficient matrix $A$ such that $[A \mid \vec{b}\,]$ has a solution for every 3×1 vector $\vec{b}$? I know that $A$ needs a pivot in every row and that every column vector $\vec{b}$ (with $m$ entries) is a linear combination of the columns of $A$. However, I am stuck on giving an example of a matrix that would have a solution for every vector $\vec{b}$.
| $$A=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the values of $x$ for which the geometric series $\sum_{n=0}^{\infty} (-\frac{1}{3})^n(x-7)^n$ converges, and the sum of the series (as a function of $x$). Find the values of $x$.
$\sum_{n=0}^{\infty} (-\frac{1}{3})^n(x-7)^n$
I'm not sure how to combine like terms in this case but I got this:
$(-\frac{x}{3}+\frac{7}{3})^{2n}$
Then separated, x as -1 < x < 1
I get: -10 < x < -4/3
But this isn't correct. Does anyone know how to solve this and may kindly show me how?
Edit:
Thanks for clearing that confusion up, Dr. Zafar. All I had to do was rearrange the fraction, to get $(\frac{7-x}{3})^n$ and then simplifying it to get $10 > x > 4$.
| You were so close!!
Instead of saying $(\frac{-1}{3})^{n}(x-7)^n=(-\frac{x}{3}+\frac{7}{3})^{2n}$
You should have said $(\frac{-1}{3})^{n}(x-7)^n=(-\frac{x}{3}+\frac{7}{3})^{n}$
This is a property of exponents such that $a^n*b^n=(ab)^n$
The inverse of this property is useful when finding the prime factorization of perfect powers.
$$2304=48^2=(16*3)^2=16^2*3^2=2^8*3^2$$
Can you carry on from here?
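Carrying on: the common ratio is $r=\frac{7-x}{3}$, so the series converges for $4<x<10$ with sum $\frac{1}{1-r}=\frac{3}{x-4}$; a quick numerical check at the sample point $x=5$ (my choice):

```python
x = 5.0
r = (7 - x) / 3                      # common ratio, here 2/3, with |r| < 1
partial = sum(r ** n for n in range(200))
closed_form = 3 / (x - 4)            # 1 / (1 - r) simplified
print(partial, closed_form)          # both ≈ 3.0
```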
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
RREF using mod 2 operations Can someone please help me calculate the reduced row echelon form of the following matrix: $$ \begin{bmatrix} 1&1&1&0 \\ 1&1&0&1 \\ 0&0&1&1 \end{bmatrix} \in M_{3,4}(F_2)$$
Where $F_2$ denotes the field of scalars $\{0,1\}$ with operations doen using mod $2$ arithmetic.
I am having problems because no matter what I do, I get no leading entry in column 2. For instance, adding $R_1$ to $R_2$ would make $R_2= \{0,0,1,1\}$.
Can there be no leading entry in the second column of second row? From what I have learned, each column must have a leading entry except for in the bottom row.
| There's no difference in the algorithm:
\begin{align}
\begin{bmatrix}
1&1&1&0 \\
1&1&0&1 \\
0&0&1&1
\end{bmatrix}
&\to
\begin{bmatrix}
1&1&1&0 \\
0&0&1&1 \\
0&0&1&1
\end{bmatrix} && R_2\gets R_2+R_1
\\[2ex]&\to
\begin{bmatrix}
1&1&1&0 \\
0&0&1&1 \\
0&0&0&0
\end{bmatrix} && R_3\gets R_3+R_2
\\[2ex]&\to
\begin{bmatrix}
1&1&0&1 \\
0&0&1&1 \\
0&0&0&0
\end{bmatrix} && R_1\gets R_1+R_2
\end{align}
Modulo $2$ one never has to rescale the pivot.
There is no pivot in the second column because it's equal to the first column, so it is a linear combination of the preceding pivot columns; in the RREF, pivot columns are those that are not a linear combination of the preceding (pivot) columns; a nonpivot column is a linear combination of the preceding pivot columns, and the coefficients yield precisely the needed coefficients; indeed
$$
C_2=1C_1,\qquad C_4=1C_2+1C_3
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
HHT and HTH in tossing a coin A coin is flipped infinitely until you or I win. If at any point, the last three tosses in the sequence are $HHT$, I win. If at any point, the last three tosses in the sequence are $HTH$, you win. Which sequence is more likely?
Unfortunately, this configuration does not seem like ones such as "$HHT$ versus $THH$" (where clearly $HHT$ wins iff the first two occurring $H$'s are consecutive). Of course, here we can still assume that $TT$ does not occur (as after such a thing the game restarts), but it does not seem to help me enough.
Any help appreciated!
| Hm, I think I actually have an answer. Within a round (that is, before any TT occurs — after TT the game effectively restarts, since both patterns begin with H), consider the first occurrence of HTH and suppose it is winning. Then just before it we can't have HH (else HHHTH has HHT in the beginning), we can't have TT (that would have restarted the round), we can't have HT (else we get TT), and if we have TH, then in THHTH we have HHT in the middle; a single preceding H also fails, since HHTH contains HHT. So for HTH to win without a restart, it must occur among the first four tosses: HTH or THTH, with total probability $\frac{1}{8} + \frac{1}{16} = \frac{3}{16}$. However, TT can occur before either pattern (with probability $\frac{7}{16}$, by conditioning on the first two tosses: $\frac14\left(0+\frac12+\frac14+1\right)=\frac{7}{16}$), and then the game restarts. So if $r$ is the probability that HTH wins, $r = \frac{3}{16} + \frac{7}{16}r$, giving $r = \frac13$. Hence HHT wins with probability $\frac{2}{3}$ (the classical Penney's game value for this pair), not $\frac{13}{16}$.
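As a sanity check on the probabilities, here is an exact computation propagating probability mass over the four possible "last two tosses" states (the state bookkeeping is mine); it produces the classical Penney's-game values $2/3$ for HHT and $1/3$ for HTH:

```python
# Track P(last two tosses = s and no pattern has occurred yet); absorb wins.
probs = {"HH": 0.25, "HT": 0.25, "TH": 0.25, "TT": 0.25}  # after the first 2 tosses
hht_wins = hth_wins = 0.0
for _ in range(500):                                  # residual mass decays geometrically
    nxt = dict.fromkeys(probs, 0.0)
    nxt["HH"] += 0.5 * probs["HH"]; hht_wins += 0.5 * probs["HH"]   # HH + T -> HHT
    hth_wins += 0.5 * probs["HT"];  nxt["TT"] += 0.5 * probs["HT"]  # HT + H -> HTH
    nxt["HH"] += 0.5 * probs["TH"]; nxt["HT"] += 0.5 * probs["TH"]
    nxt["TH"] += 0.5 * probs["TT"]; nxt["TT"] += 0.5 * probs["TT"]
    probs = nxt
print(hht_wins, hth_wins)  # ≈ 0.6667 and ≈ 0.3333
```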
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Solve $x-1 \ge \sin(x)$ I got as far as $-1 \ge \sin(x)-x$.
I don't know what to do next. Pretty sure I am forgetting some simplification rule.
| Others have suggested there is no analytical solutions, however;
Let $x=\frac{5\pi }{2}$, then $\frac{5\pi }{2}-1\geq \sin(\frac{5\pi }{2})=1$. Just an example, you can go from here.
It seems there are infinitely many solutions.
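In fact, since $g(x)=x-\sin x$ has $g'(x)=1-\cos x\ge 0$ (zero only at isolated points), $g$ is strictly increasing, so the solution set is a single ray $[x_0,\infty)$ where $x_0$ solves $x-\sin x=1$; a bisection sketch (the bracketing interval is my choice):

```python
import math

g = lambda x: x - math.sin(x) - 1    # the root of g marks the boundary of the solution set

lo, hi = 0.0, 3.0                    # g(0) = -1 < 0 and g(3) > 0
for _ in range(60):                  # bisection: halves the bracket each pass
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2
print(x0)  # ≈ 1.9346, so x - 1 >= sin(x) exactly when x >= x0
```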
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Do any two affine rotations with no common fixed point generate an infinite group? Assume we have two affine rotations of the plane around two different fixed points.
Do they generate an infinite group?
| If you have a finite group $G$ of affine self-transformations of a vector space $V$, then $G$ fixes the point $\frac1{|G|}\sum_{g\in G}g(0)$: each $h\in G$ permutes the orbit $\{g(0)\}_{g\in G}$, and affine maps preserve averages.
By contraposition, if $G$ is generated by a subset $S$ and there is no common fixed point for elements of $S$, then $G$ is infinite.
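The fixed-point lemma is easy to check numerically for a concrete finite group. Below is an illustration of my own (using complex numbers for the plane): the cyclic group generated by a rotation of $2\pi/5$ about a center $c$ fixes the barycenter of the orbit of $0$, which turns out to be $c$ itself.

```python
import cmath

c = 1 + 2j                         # center of rotation (arbitrary choice)
w = cmath.exp(2j * cmath.pi / 5)   # primitive 5th root of unity

def g(z, k):
    """k-th power of the rotation by 2*pi/5 about c."""
    return c + w**k * (z - c)

orbit = [g(0, k) for k in range(5)]
bary = sum(orbit) / 5              # (1/|G|) * sum_{g in G} g(0)

# every element of the group fixes the barycenter
residuals = [abs(g(bary, k) - bary) for k in range(5)]
```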
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What's the difference between deciding if a mathematical statement is true VS proving it? Isn't a statement only true if you can prove it?
Edit: To elaborate, when reading about the foundations of math, there seem to be concepts of completeness and decidability which suggest that proving a statement and deciding whether it is true are different...
| My opinion is that there are sometimes statements that appear to hold in every case but for which we still have no proof (and such cases are quite rare). There are also axioms, which are agreed to be true without proof, by convention. (I know this is a little different from "deciding" whether something is true, but I am just offering some ideas that may help.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How can a "proper" function have a vertical slope? Plotting the function $f(x)=x^{1/3}$ defined for any real number $x$ gives us:
Since $f$ is a function, for any given $x$ value it maps to a single y value (and not more than one $y$ value, because that would mean it's not a function as it fails the vertical line test).
This function also has a vertical tangent at $x=0$.
My question is: how can we have a function that also has a vertical tangent? To get a vertical tangent we need 2 vertical points, which means that we are not working with a "proper" function as it has multiple y values mapping to a single $x$. How is it possible for a "proper" function to have a vertical tangent?
As I understand, in the graph I pasted we cannot take the derivative of x=0 because the slope is vertical, hence we cannot see the instantaneous rate of change of x to y as the y value is not a value (or many values, which ever way you want to look at it). How is it possible to have a perfectly vertical slope on a function? In this case I can imagine a very steep curve at 0.... but vertical?!? I can't wrap my mind around it. How can we get a vertical slope on a non vertical function?
|
My question is: how can we have a function that also has a vertical
tangent? To get a vertical tangent we need 2 vertical points...
As others have pointed out, this is the crux of the misunderstanding. That said, I'd like to try and succinctly highlight the core issue: and that is that derivatives are not defined by the secant of two points, but rather by the limit of secants approaching a certain point.
In the OP example, letting $x = 0$, as we take other domain values $x'$ trending closer to $x$, inspecting the trend of secant slopes is the essential meaning of the derivative there. In this case, the secant slopes grow larger without bound, which is the indication of an undefined (or infinite) derivative, and hence a vertical tangent. The definition from Wikipedia:
Limit definition
A function $f$ has a vertical tangent at $x = a$ if the difference quotient used to define the derivative has infinite limit:
$$\lim_{h\to 0}\frac{f(a+h) - f(a)}{h} = {+\infty}\quad\text{or}\quad\lim_{h\to 0}\frac{f(a+h) - f(a)}{h} = {-\infty}.$$
The Wikipedia article on vertical tangents uses the same $f(x) = x^{1/3}$ example, so hopefully that's clarifying. I suspect that this kind of misunderstanding may be the result of a particular course of study failing to emphasize the limit definition of the derivative.
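One can watch this limit diverge numerically. For $f(x)=x^{1/3}$ the difference quotient at $0$ is $(f(h)-f(0))/h = h^{-2/3}$, which grows without bound as $h\to0^+$. A small sketch of my own:

```python
def f(x):
    # real cube root, valid for x >= 0 here
    return x ** (1.0 / 3.0)

def diff_quotient(h):
    return (f(h) - f(0.0)) / h

# secant slopes through (0, 0) for shrinking h: they blow up like h**(-2/3)
slopes = [diff_quotient(h) for h in (1e-2, 1e-4, 1e-6)]
```

The three slopes are roughly $21.5$, $464$, and $10{,}000$ — no finite limit, hence the vertical tangent.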
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 11,
"answer_id": 9
} |
What is signified by the use of a "big" integral sign? [photo example] I encountered what I'll call, for lack of a better term, a "big" integral sign in a generic form of Integral Product Rule. I call it "big" relative to those preceding, and especially to the one immediately following it, in this example. This notation is strange (unfamiliar) to me. Please share your experience with this symbol.
| It is the integration-by-parts formula, although demonstrated in a quite confusing way. Especially, it is unclear which function $\int$ is applied to. Using parentheses to emphasize the scope of $\int$ for each instance, we may instead write
$$\int(fg) = \left(\int f\right) g - \int \left(\left( \int f \right) g'\right). $$
It is still an unconventional way of demonstrating the IbP formula. So let me revert it to the good old way. Let $F $ denote an anti-derivative of $f$, i.e., $F' = f$. Then
$$ \int f(x)g(x) \, \mathrm{d}x = F(x)g(x) - \int F(x)g'(x) \, \mathrm{d}x. $$
Alternatively, it may help to adopt programmer-style notation for this. Write $\mathtt{Integrate}[f]$ for any anti-derivative of $f$ and $\mathtt{D}[g]$ for the derivative $g'$. Then the handwriting translates to
$$ \mathtt{Integrate}[f \cdot g] = \mathtt{Integrate}[f] \cdot g - \mathtt{Integrate}[\mathtt{Integrate}[f] \cdot \mathtt{D}[g]]. $$
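The identity is easy to verify with a computer algebra system. Below is a check of my own (using SymPy, with the concrete pair $f=x$, $g=\sin x$): both sides are confirmed to be antiderivatives of $fg$, so they agree up to a constant.

```python
import sympy as sp

x = sp.symbols('x')
f, g = x, sp.sin(x)
F = sp.integrate(f, x)                             # an antiderivative of f

lhs = sp.integrate(f * g, x)                       # Integrate[f * g]
rhs = F * g - sp.integrate(F * sp.diff(g, x), x)   # Integrate[f]*g - Integrate[Integrate[f]*D[g]]

# both differentiate back to f*g
check_lhs = sp.simplify(sp.diff(lhs, x) - f * g)
check_rhs = sp.simplify(sp.diff(rhs, x) - f * g)
```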
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3381977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question about how to define open sets for continuous rational functions I am having difficulty doing the following question using the language of open sets.
Let $(X,\mathcal{T})$ be a topological space, and let $f,g:X\rightarrow \mathbb{R}$ be continuous functions.
Let $A=\{x\in X:g(x)=0\}.$ Prove that the function $h:(X-A)\rightarrow \mathbb{R}$ defined by $h(x)=\frac{f(x)}{g(x)}$ is continuous
I know how to show that a rational function is continuous if the functions in both numerator and denominator are both continuous in the sense of how is done in a beginning real analysis course.
For this particular question, for the condition $\{x \in A^{c}:\frac{f(x)}{g(x)}<0\},$ if I want to use an open set to describe the inverse image of the range, how do I account for the two cases where $f(x) > 0$ and $f(x) < 0$ given that $g(x) < 0.$
Thank you in advance.
| As restrictions of continuous functions are continuous,
$f$ and $g$ are continuous on $X - A$.
As $g$ is never $0$ on $X - A$, $1/g$ is defined and continuous on $X - A$.
Since the product of two continuous functions is continuous,
$f/g$ is continuous on $X - A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the volume of the function $y=\frac{6}{x}$ Consider the function $y=\frac{6}{x}$ bounded by $y=0$, $x=1$,and $x=3$ and is rotated around the $x$-axis.
Using the disk method I setup the integral as
$$\pi\int_1^33^2-\left(\frac{6}{x} \right)^2dx $$
solving gave me $42\pi$ but the answer should be $24\pi$, my question is where did I mess up?
| The integrand is incorrect. It should be just $(\frac{6}{x})^2$. There's no reason to subtract this from $3^2$.
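With the corrected integrand, the stated answer drops out of $\pi\int_1^3 (6/x)^2\,dx = \pi\left[-36/x\right]_1^3$. A quick symbolic check of my own, via SymPy:

```python
import sympy as sp

x = sp.symbols('x')
# disk method: V = pi * integral of radius^2, radius = 6/x
volume = sp.pi * sp.integrate((6 / x) ** 2, (x, 1, 3))
# pi * (-36/3 + 36/1) = pi * (36 - 12) = 24*pi
```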
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that for $\alpha\not\in\mathbb{Q},\alpha>0$, $([n+n\alpha])_{n\in\mathbb{N}}\sqcup([n+n\alpha^{-1}])_{n\in\mathbb{N}}=\mathbb{N} $ My Question: Prove that for $\alpha\not\in\mathbb{Q},\alpha>0$, $([n+n\alpha])_{n\in\mathbb{N}}\sqcup([n+n\alpha^{-1}])_{n\in\mathbb{N}}=\mathbb{N}$, where $[k]$ means the integer part of $k\in\mathbb{R}$, and $\sqcup$ means disjoint union.
My Idea: I just realized that $$\frac{1}{1+\alpha}+\frac{1}{1+\alpha^{-1}}=1, \quad\text{where } 1+\alpha, 1+\alpha^{-1} \text{ are positive irrational numbers}$$
So the interested sequences are Beatty's sequences. And here is a proof (page 11-13) for Beatty sequences.
| "You already provided a complete solution to your problem" Yes, I just proved it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the probability mass function of X An airline operates a small 10-seat aircraft. It has just taken 12 reservations for the next flight: the first 7 bookings are certain to show up at takeoff. Each of the 5 other bookings will show up at takeoff independently, with probability $1/2$.
*
*What is the probability that more than 10 people will show up at takeoff?
*Let X be the number of people refused at takeoff. What is the mass function of X?
*What is E [X]?
For the 1st question, I got 6/32. There are 32 possible combinations (2^5) possible for 5 passengers to each either confirm or not. Getting > 10 passengers means 4 or 5 of the final 5 passengers show up. There is one way to get 5 passengers confirmed and 5 ways to get four passengers confirmed for 6 total outcomes with 4 or 5 passengers.
For the 2nd part, I don't really understand the approach, what values can X obtain? X= 2,3,4,5 ? because the number of passengers on the plane can't exceed 10, so they have to refuse at least 2 people, that's what I think.
Can someone explain to me this part please?
Thank you.
| The number of people that arrive is given by $7+W$ where $W\sim\mathrm{Bin}(5,1/2)$. We want the probability $\mathbb P(7+W>10)=\mathbb P(W>3)$. We compute this by
\begin{align}
\mathbb P(W=4)+\mathbb P(W=5) &= \binom 54(1/2)^5 +\binom 55(1/2)^5 \\
&= 6(1/2)^5\\
&= 3/16.
\end{align}
$X$ is simply $(W-3)^+:= \max\{W-3,0\}$, since $10-7=3$ seats remain for the $5$ uncertain passengers. So we have
\begin{align}
\mathbb P(X=0) &= \sum_{i=0}^3\mathbb P(W=i) = \sum_{i=0}^3 \binom 5i(1/2)^5 = 13/16\\
\mathbb P(X=1) &= \mathbb P(W=4) = \binom 54(1/2)^5 = 5/32\\
\mathbb P(X=2) &= \mathbb P(W=5) = \binom 55(1/2)^5 = 1/32.
\end{align}
We compute $\mathbb E[X]$ the usual way:
$$
\mathbb E[X] = \sum_{i=1}^2 i\cdot\mathbb P(X=i) = 1\cdot5/32+2\cdot1/32 = 7/32.
$$
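The whole distribution can be reproduced exactly in a few lines. This is a check of my own (names are mine), using exact rational arithmetic; the number of refusals is $\max\{W-3,0\}$ because $10-7=3$ seats remain for the uncertain passengers.

```python
from fractions import Fraction
from math import comb

def p_w(i):
    """P(W = i) for W ~ Bin(5, 1/2): number of uncertain passengers who show."""
    return Fraction(comb(5, i), 2 ** 5)

# X = max(W - 3, 0) = number of people refused at takeoff
pmf = {x: sum(p_w(i) for i in range(6) if max(i - 3, 0) == x) for x in range(3)}
mean = sum(x * p for x, p in pmf.items())
```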
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$Y = |X|$, where $X \sim \text{N}(\mu, \sigma^2)$: How does the PDF and CDF change? I just encountered the random variable $Y = |X|$, where $X \sim \text{N}(\mu, \sigma^2)$. Now, based on what we know about the absolute value function, this random variable is still continuous; however, the absolute value function means that there exists a cusp at $X = 0$, and so the derivative is undefined at this point.
This makes me wonder: How does this affect the PDF and CDF? How would we go about calculating such things in this case?
I would greatly appreciate it if people could please take the time to clarify this situation.
| $$P(\lvert X\rvert\le\alpha)=\begin{cases}P(-\alpha\le X\le\alpha)&\text{if }\alpha\ge0\\ 0&\text{if }\alpha<0\end{cases}$$
Therefore the cdf is $$F_{\lvert X\rvert}(\alpha)=\begin{cases}0&\text{if }\alpha<0\\ F_X(\alpha)-\sup_{\beta<-\alpha} F_X(\beta)&\text{if }\alpha\ge 0\end{cases}$$
Since $F_X$ is continuous, $F_{\lvert X\rvert}(\alpha)=F_X(\alpha)-F_X(-\alpha)$ for all $\alpha\ge 0$.
$F_{\lvert X\rvert}$ will therefore be differentiable on $\Bbb R\setminus\{0\}$, because $F_X$ is. Since $F_{\lvert X\rvert}$ also turns out to be continuous on the whole $\Bbb R$, any function in this form $$\begin{cases}F'_{\lvert X\rvert}(\alpha)&\text{if }\alpha\ne0\\ c&\text{if }\alpha=0\end{cases}$$ will be a pdf of $\lvert X\rvert$.
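Concretely, for $X\sim N(\mu,\sigma^2)$ the formula $F_{|X|}(\alpha)=F_X(\alpha)-F_X(-\alpha)$ can be checked against simulation. This is a sketch of my own ($\mu$, $\sigma$, $\alpha$ are arbitrary choices; $\Phi$ is written with `math.erf`):

```python
import math
import random

mu, sigma = 1.0, 2.0

def F_X(a):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((a - mu) / (sigma * math.sqrt(2.0))))

def F_abs(a):
    """CDF of |X|: F_X(a) - F_X(-a) for a >= 0, else 0."""
    return F_X(a) - F_X(-a) if a >= 0 else 0.0

rng = random.Random(1)
n = 200_000
alpha = 1.5
empirical = sum(abs(rng.gauss(mu, sigma)) <= alpha for _ in range(n)) / n
theoretical = F_abs(alpha)
```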
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove convergence of two series: I would like to determine whether the following two series are convergent.
First:
$$
\sum_{n=1}^{\infty}\log\left(\frac{n+1}{n}\right)\arcsin \left(\frac{1}{\sqrt{n}}\right)
$$
I think that this series is convergent, so $$\arcsin\left(\frac{1}{\sqrt{n}}\right)$$ is similar to $$\frac{1}{\sqrt {n}}$$. And $$\log\left(\frac{n+1}{n}\right)=\log\left(1+\frac{1}{n}\right)\sim \frac {1}{n} $$ if n goes to infinity.
So I compare with the series $$\sum_{n=1}^{\infty}\frac{1}{n}\frac{1}{\sqrt {n}},$$ which converges. Is this argument valid to prove the convergence?
Second:
$$\sum_{n=1}^{\infty}\left(1-\sec\left(\frac{1}{n}\right)\right).$$
Could you help me please? Give me some clue please!!!!
Thank you
| Answer for the second series: this series converges absolutely if $\sum \frac {1-\cos(\frac 1 n)} {|\cos(\frac 1 n)|}$ converges. Since the denominator tends to $1$ it is enough to prove convergence of $\sum {(1-\cos(\frac 1 n))}$. This series converges because $1-\cos \theta \leq \frac {\theta^{2}} {2}$ and $\sum \frac 1 {n^{2}} <\infty$. [ Proof of $1-\cos \theta \leq \frac {\theta^{2}} {2}$: $1-\cos \theta =2\sin^{2} \frac {\theta} 2 \leq \frac {\theta^{2}} {2}$ since $|\sin t| \leq t$ for all $t \geq 0$].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Division by $dx$ in multi-variable calculus .... I am stuck on this doubt :
Suppose $f=f(x,y,z).$ Hence, $ df= \frac {\partial f}{\partial x}dx + \frac { \partial f}{\partial y}dy + \frac {\partial f}{\partial z}dz.$
Then, is the following equation correct :
$$\frac {df}{dx}=\frac {\partial f}{\partial x}+\frac {\partial f}{\partial y}\frac{dy}{dx} + \frac {\partial f}{\partial z}\frac{dz}{dx} \,\,\,\,(*)$$
The reasoning used in obtaining $(*)$ is : "dividing" the whole equation by $dx$. Normally, the $\large \frac {d}{dx}$ operator is used in single variable calculus where only single-variable functions are differentiated wrt $x$. But it does look a bit awkward (at least to me) when used in multi-variable calculus. Do the expressions $\large \frac {df}{dx}$, $\large \frac {dy}{dx}$ and $\large \frac {dz}{dx}$ even make any sense when used like this ?
I know that "division" by $\partial x$ can cause problems in multi-variable calculus. But what about "division" by $dx$. It works fine in single-variable calculus.
If $\large \frac {df}{dx}$ makes any sense, then does it mean the "total" rate of change of $f$ wrt $x$ if $y$ and $z$ are allowed to change ?
Summary :
(1) Is division by $dx$ allowed in multi-variable calculus?
(2) What does $\frac {df(x,y,z)}{dx}$ mean if answer to (1) is "yes" ? Does it mean anything if the answer to $(1)$ is "no" ?
| If $$f=f(x,y,z)$$ where $x,y,z$ are independent variables, then you may divide $df$ by $dx$.
For example $$f(x,y,z)= xyz+x^2+y^2+z^2$$
$$df = (yz+2x)dx + (xz+2y)dy + (xy+2z)dz$$
$$\frac {df}{dx} = yz+2x + (xz+2y)\frac {dy}{dx} + (xy+2z)\frac {dz}{dx} =yz+2x, $$
since $y$ and $z$ are independent of $x$, so that $\frac{dy}{dx}=\frac{dz}{dx}=0$. This is the same thing as $\frac {\partial{f}}{\partial {x}}$.
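When $y$ and $z$ do depend on $x$, formula $(*)$ is exactly the multivariable chain rule. A CAS check of my own (using SymPy, the answer's example $f$, and arbitrarily chosen paths $y=\sin x$, $z=x^2$) confirms it:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x*y*z + x**2 + y**2 + z**2
yx, zx = sp.sin(x), x**2          # concrete paths y(x), z(x)

# total derivative via the chain rule (*): f_x + f_y*y'(x) + f_z*z'(x)
total = (sp.diff(f, x) + sp.diff(f, y)*sp.diff(yx, x)
         + sp.diff(f, z)*sp.diff(zx, x)).subs({y: yx, z: zx})

# direct derivative after substituting the paths into f
direct = sp.diff(f.subs({y: yx, z: zx}), x)
residual = sp.simplify(total - direct)
```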
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Green's function for Laplace Equation and the unit ball In order to find a solution to
$\Delta u(x) =0$ in $B(0,1)$ and $u(x)=g(x)$ on $\partial B(0,1)$
the book I am reading ((Graduate Studies in Mathematics) Lawrence C. Evans - Partial Differential Equations_ Second Edition -AMS (2010) Page 39) uses the Greens function to solve this problem. Finding a function that satifies
$\Delta \Phi^x(y) =0$ in $B(0,1)$ and $\Phi^x(y)=\Phi(y-x)$ on $\partial B(0,1)$
($\Phi $ being the fundamental solution to the laplace equation) will give us the Greens function $G(x,y)=\Phi(y-x)-\Phi^x(y)$, which we can solve the problem with.
For the unit ball $B(0,1)$ we used $\Phi^x(y)=\Phi(|x|(y-\tilde x))$ with $\tilde x =\frac{x}{|x|^2}$. And showed that this is harmonic and also satisfies the boundary condition - but only for $x \neq 0$. And that's where my question comes from:
Why isn't it a problem, that $G(x,y)=\Phi(y-x)-\Phi(|x|(y-\tilde x))$ is not well defined at $x = 0$ since the inversion used previously is not possible?
| Although the inversion is not possible, $G$ can be extended to $x = 0$, provided $y \neq 0$. This follows by observing that $|x|\tilde{x} = x/|x|$ has norm one for every $x \neq 0$ and upon recalling that the function $\Phi$ is radial. To be precise, for every sequence $x_n \to 0$, $x_n \neq 0$, we have that $$\lim_n \left[\Phi(x_n - y) - \Phi\left(|x_n|y - \tfrac{x_n}{|x_n|}\right)\right]$$exists and is independent of the choice of $\{x_n\}$: the second argument tends in norm to $1$, so by radial symmetry the second term tends to the value of $\Phi$ on the unit sphere. We then set $G(0,y)$ equal to this value.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3382986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Quadrilateral ABCD is inscribed in circle, $AB=4, BC=5, CD=6, DA=7$, how long is $AC$? Quadrilateral ABCD is inscribed in circle, $AB=4, BC=5, CD=6, DA=7$, how long is $AC$?
I think I'm probably supposed to use Ptolemy's to solve this, but I don't know if it's possible. Is there a way to do this problem using Ptolemy's?
| By the law of cosines we get
$$AC^2=4^2+5^2-2\times 4\times 5\cos(\beta)$$
$$AC^2=7^2+6^2-2\times7\times6\cos(180^{\circ}-\beta),$$ since opposite angles of a cyclic quadrilateral are supplementary, and $$\cos(180^{\circ}-\beta)=-\cos(\beta)$$
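Carrying the computation through (a numeric sketch of my own): eliminating $\cos\beta$ from the two equations gives $\cos\beta=-11/31$, hence $AC=\sqrt{1711/31}\approx7.43$. Doing the same for the other diagonal $BD$ also confirms Ptolemy's relation $AC\cdot BD = AB\cdot CD + BC\cdot DA = 59$ as a cross-check.

```python
import math
from fractions import Fraction

# AC^2 = 41 - 40*cos(beta) = 85 + 84*cos(beta)  =>  cos(beta) = -11/31
cos_beta = Fraction(41 - 85, 40 + 84)
AC = math.sqrt(41 - 40 * cos_beta)

# same elimination for BD (angles A and C are also supplementary):
# BD^2 = 65 - 56*cos(A) = 61 + 60*cos(A)  =>  cos(A) = 1/29
cos_A = Fraction(65 - 61, 56 + 60)
BD = math.sqrt(65 - 56 * cos_A)
```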
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3383116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the idea behind expm1 to avoid cancellation error? It is well known that when x is close to 0, computing exp(x) - 1 introduces significant cancellation errors. As such, we have expm1 implemented in c99 and python.
My question is how expm1 avoids cancellation error? Can anyone give me a general idea without too many nitty-gritty implementation details?
| Around $x=0$ the exponential is computed as some version of $1+xp(x)$ or $\frac{1+xg(x^2)}{1-xg(x^2)}$ or similar with some polynomials that approximate the exact term behind them in an uniform fashion on some interval.
For other values, identities such as $e^x=(e^{x/2^m})^{2^m}$ or $e^x=2^k3^me^{x-k\ln2-m\ln3}$ are used to reduce the argument to some small value.
Then it is easy to modify those approximating expressions to return the expm1(x) value, in the mentioned cases $xp(x)$ or $\frac{2xg(x^2)}{1-xg(x^2)}$.
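The cancellation itself is easy to exhibit; this demo is my own. For tiny $x$ the true value of $e^x-1$ is $x+x^2/2+\cdots\approx x$, but the naive `exp(x) - 1` first rounds $e^x$ to a double near $1$, destroying most of the significant digits, whereas `expm1` returns $xp(x)$-style results directly.

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0       # catastrophic cancellation near 1.0
good = math.expm1(x)            # computed without forming exp(x) first
reference = x + x * x / 2.0     # two series terms, ample accuracy at this size

rel_err_naive = abs(naive - reference) / reference
rel_err_good = abs(good - reference) / reference
```

At $x=10^{-12}$ the naive route loses roughly 12 of 16 significant digits, while `expm1` is accurate to machine precision.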
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3383316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Determine the intersection of equivalence relations Let $X$ be a set. We consider the relations on $X$ as subsets of $X\times X$. Let $U\subseteq X\times X$ be a subset, and let $S_U$ be the set of all equivalence relations on $X$ that contain $U$ as subset.
Let $$R:=\bigcap_{S\in S_U}S$$ which is an equivalence relation on $X$.
Suppose that $X=\mathbb{Z}$ and $U=\{(x,y)\in \mathbb{Z}^2\mid x+y\geq 100\}$, how can we define $R$ in that case?
| Well, $R$ is always going to be an equivalence relation and we can use this to help us.
Note that $(x,|x|+k)\in U$ for any $x\in\mathbb{Z}$ and any $k\geq 100$. Hence, if we let $x,y\in \mathbb{Z},$ then $(x,|x|+|y|+100)\in U$ by the previous and, likewise, $(y,|x|+|y|+100)\in U.$ Accordingly, these elements are also in $R$. Since $R$ is an equivalence relation, it's transitive, so this implies $(x,y)\in R$.
We conclude that $R$ is the trivial relation $\mathbb{Z}^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3383468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Modelling a horizontal mass damper using differential equations I am trying to solve this second-order differential equation:
$y'' + y'+ y + (y')^2 =0$
I was able to solve the equation $y'' + y'+ y $, by substituting $y$ as $Ae^{kt}$.
But now I have this new term $(y')^2$.
Note: This equation represents the simplified equation of forces on a horizontal mass damper.
| Let $u=\dfrac{dy}{dt}$ ,
Then $\dfrac{d^2y}{dt^2}=\dfrac{du}{dt}=\dfrac{du}{dy}\dfrac{dy}{dt}=u\dfrac{du}{dy}$
$\therefore u\dfrac{du}{dy}+u^2+u+y=0$
$(y+u^2+u)\dfrac{dy}{du}=-u$
This belongs to an Abel equation of the second kind.
Let $v=y+u^2+u$ ,
Then $y=v-u^2-u$
$\dfrac{dy}{du}=\dfrac{dv}{du}-2u-1$
$\therefore v\left(\dfrac{dv}{du}-2u-1\right)=-u$
$v\dfrac{dv}{du}-(2u+1)v=-u$
$v\dfrac{dv}{du}=(2u+1)v-u$
Let $r=u+\dfrac{1}{2}$ ,
Then $v\dfrac{dv}{dr}=2rv-r+\dfrac{1}{2}$
Let $s=r^2$ ,
Then $\dfrac{dv}{dr}=\dfrac{dv}{ds}\dfrac{ds}{dr}=2r\dfrac{dv}{ds}$
$\therefore2rv\dfrac{dv}{ds}=2rv-r+\dfrac{1}{2}$
$v\dfrac{dv}{ds}=v-\dfrac{1}{2}+\dfrac{1}{4r}$
$v\dfrac{dv}{ds}=v-\dfrac{1}{2}\pm\dfrac{1}{4\sqrt s}$
This belongs to an Abel equation of the second kind in the canonical form.
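Even without a closed form, the original IVP is easy to study numerically. Below is a sketch of my own (the initial condition $y(0)=1$, $y'(0)=0$ is an arbitrary choice): SciPy integrates $y''+y'+y+(y')^2=0$ as a first-order system, and the response decays as expected for a damped oscillator.

```python
from scipy.integrate import solve_ivp

def rhs(t, s):
    y, u = s                      # u = y'
    return [u, -u - y - u**2]     # y'' = -(y' + y + (y')^2)

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
y_final = sol.y[0, -1]            # displacement at t = 10
```

The energy $E=\frac12(y^2+y'^2)$ satisfies $\dot E=-y'^2(1+y')$, which is nonpositive while $y'>-1$, consistent with the observed decay.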
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3383605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove there are distinct $x_1,\,x_2,\cdots,\,x_n$ such that $ \sum_{i=1}^n\frac{p_i}{f'(x_i)}=\sum_{i=1}^n p_i. $
Suppose $f(x)$ is differentiable on $[0,\,1]$, $f(0)=0$, $f(1)=1$ and $p_1,\,p_2,\cdots,\,p_n$ are $n$ positive real numbers. Prove there are distinct $x_1,\,x_2,\cdots,\,x_n$ such that
$$
\sum_{i=1}^n\frac{p_i}{f'(x_i)}=\sum_{i=1}^n p_i.
$$
I can only prove some special cases.
Let $p=\sum_{i=1}^n p_i$. It suffice to prove that $\sum_{i=1}^n\frac{p_i}{pf'(x_i)}=1$. A proper choose is $f'(x_i)=\frac{np_i}{p}$. From Darboux theorem, if $f'$ is large enough, these values can attain.
| Proof. $\blacktriangleleft$ Assume $\sum p_j = 1$; otherwise replace $p_j$ by $p_j/p$ for each $j$. By continuity and the Intermediate Value Theorem, there is some $y_1 \in (0,1)$ such that $f(y_1) = p_1$, then some $y_2 \in (y_1, 1)$ such that $f(y_2) = p_1 + p_2$. Doing this $n-1$ times, we obtain
$$
0 = y_0 < y_1 < \dots < y_{n-1} < y_n = 1, f(y_j) = \sum_{k = 1}^j p_k.
$$
Now by the Mean Value Theorem, there is $x_j \in (y_{j-1} , y_j)$ where $f(y_j) - f(y_{j-1}) = (y_j - y_{j-1}) f'(x_j)$ for $j \leqslant n$, and
$$
\frac {p_j}{f'(x_j)} = \frac {f(y_j) - f(y_{j-1})} {f'(x_j)} = y_j - y_{j-1}, j \leqslant n,
$$
so sum these up and we are done. $\blacktriangleright$
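The proof is constructive enough to run numerically. In this sketch of my own, $f(x)=x^2$ (which satisfies $f(0)=0$, $f(1)=1$) and the weights $p=(0.2,0.3,0.5)$ are arbitrary choices; the $y_j$ come from the IVT step, and for $x^2$ the mean-value point of $[a,b]$ is explicit: $x_j=(a+b)/2$.

```python
import math

p = [0.2, 0.3, 0.5]                      # already sums to 1
def fprime(x):                            # f(x) = x^2
    return 2 * x

# IVT step: f(y_j) = p_1 + ... + p_j  =>  y_j = sqrt(partial sum)
ys = [0.0]
acc = 0.0
for pj in p:
    acc += pj
    ys.append(math.sqrt(acc))

# MVT step: for f = x^2 the mean-value point of [a, b] is the midpoint
xs = [(a + b) / 2 for a, b in zip(ys, ys[1:])]
total = sum(pj / fprime(xj) for pj, xj in zip(p, xs))  # should equal sum(p) = 1
```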
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3383736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\left ( 1+\frac{n^{\frac{1}{n}}}{n} \right )^\frac{1}{n}+\left ( 1-\frac{n^{\frac{1}{n}}}{n} \right )^\frac{1}{n}<2$ Prove that $$\left ( 1+\frac{n^{\frac{1}{n}}}{n} \right )^\frac{1}{n}+\left ( 1-\frac{n^{\frac{1}{n}}}{n} \right )^\frac{1}{n}<2 \tag{1} $$ $\forall$ $n \gt 1$
I tried using Induction:
For the Base Step $n=2$ we have:
$$x=\sqrt{1+\frac{1}{\sqrt{2}}}+\sqrt{1-\frac{1}{\sqrt{2}}}$$
Then we get:
$$x^2=2+\sqrt{2}\lt 4$$
So $x \lt 2$
Now assume $P(n)$ is true; we shall prove that $P(n+1)$ is also true.
We have $P(n+1)$ as:
$$\left ( 1+\frac{(n+1)^{\frac{1}{n+1}}}{n+1} \right )^\frac{1}{n+1}+\left ( 1-\frac{(n+1)^{\frac{1}{n+1}}}{n+1} \right )^\frac{1}{n+1}$$
Now i tried to use the fact that:
$$f(x)=x^{\frac{1}{x}}$$ is a Monotone Decreasing $\forall x \ge e$
Hence $\forall n \ge 3$ we have:
$$(n+1)^{\frac{1}{n+1}} \lt n^{\frac{1}{n}} \tag{2}$$ and also
$$\frac{1}{n+1} \lt \frac{1}{n} \tag{3}$$
Multiplying $(2),(3)$ We get:
$$1+\frac{(n+1)^{\frac{1}{n+1}}}{n+1}\lt 1+\frac{n^{\frac{1}{n}}}{n}$$
Can we proceed from here?
| Note that by Bernoulli's inequality in the form
$$(1+x)^a<1+ax, \quad 0<a<1,\ x>-1,\ x\neq0,$$
which follows from the strict concavity of $t\mapsto t^a$ on $(0,\infty)$, we obtain
$$\left ( 1+\frac{n^{\frac{1}{n}}}{n} \right )^\frac{1}{n}+\left ( 1-\frac{n^{\frac{1}{n}}}{n} \right )^\frac{1}{n}<1+\frac{n^{\frac{1}{n}}}{n^2}+1-\frac{n^{\frac{1}{n}}}{n^2} =2$$
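A quick numerical check of my own that the bound holds with room to spare, and that the left side approaches Bernoulli's estimate $2$ from below as $n$ grows:

```python
def lhs(n):
    t = n ** (1.0 / n) / n            # the quantity n^(1/n)/n, in (0, 1) for n > 1
    return (1 + t) ** (1.0 / n) + (1 - t) ** (1.0 / n)

values = [lhs(n) for n in range(2, 200)]
```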
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3383865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Boundary of a compact connected set in $\Bbb R^2$ If I have a compact connected set in $\Bbb R^2$, and I'm examining the boundary points of this set. Is it true that around every cusp/corner on the boundary, there's an open interval where the boundary is smooth? This seems intuitive to me, but I don't know if it's true.
I.e.: is it possible that the boundary is somehow badly behaved everywhere?
| Take a function $f : [0,1]\to \mathbb R$ which is continuous but nowhere differentiable with $f\geq 0, f(0)=f(1)=0$, and take $\{(x,y) \mid x\in [0,1] \land -f(x)\leq y \leq f(x)\}$.
This is clearly connected and compact. But the boundary is not smooth at any point, as the boundary is precisely the union of the graphs of $f$ and $-f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Will the graphs of equivalent indefinite integrals always look identical? I came across the following indefinite integral
$$\int {\sqrt{1-\sin{\left(2x\right)}}}\space \mathrm{d}x \space (0 \leq x \leq \pi )$$
I attempted to solve it as follows:
$$ u = 1 - \sin\left(2x\right) \implies \sin\left(2x\right) = 1- u$$
$$\implies x = \dfrac{\arcsin\left ( 1 - u\right)}{2} \implies \mathrm{d}x = \dfrac{-\mathrm{d}u }{2 \sqrt{1- (1-u)^2} }$$
$$\therefore \textrm{ we have }\space \dfrac{-1}{2} \int \dfrac{ \sqrt{u} } {\sqrt { 1 - (1 - u)^2}} \space \mathrm{d}u = \dfrac{-1}{2} \int \sqrt {\dfrac{ u } { 1 - (1 -2u + u^2)}} \space \mathrm{d}u $$
$$ = \dfrac{-1}{2} \int \sqrt {\dfrac{u} {u(2-u)}} \space \mathrm{d}u = \dfrac{-1}{2} \int \sqrt {\dfrac{ 1 } { 2-u}} \space \mathrm{d}u $$
Let $z = 2 - u$. Then $\mathrm{d}z = -\mathrm{d}u$
$$\implies \dfrac{1}{2} \int \sqrt {\dfrac{ 1 } {z}} \space \mathrm{d}z = \dfrac{1}{2} \int z^{-1/2}= \sqrt{z} + C = \sqrt{2-u} + C = \sqrt{1 + \sin\left(2x\right)} + C $$
I ran the problem through the site Integral Calculator and got
$$\int {\sqrt{1-\sin{\left(2x\right)}}}\space \mathrm{d}x = -\sin{x} - \cos{x} + C$$
which only coincides with my solution if I take its absolute value. Refer to the image below:
$\sqrt{1 + \sin\left(2x\right)} + C $ is in orange, whereas $-\sin{x} - \cos{x} + C$ is in purple. In the diagram, $C = 0$. The shaded region is the given range of $x$ values : $0 \leq x \leq \pi$.
Now, from what I know, the graphs of equivalent indefinite integrals should look similar and should differ only by a constant, which means they will coincide after a translation transformation. However, it seems not to be the case here. Hence my question; will the graphs of equivalent indefinite integrals always look similar, or perhaps there is a mistake in my calculations?
In addition, the answer that is given in the problem book is
$$2\sqrt{2} \left[ \dfrac{t}{\pi}\right] + \sqrt{2} \space \mathrm{sgn} \space t \cdot \left\{ \cos{ \dfrac{t}{\pi} } - \cos{t}\right\} \\ \mathrm{ where } \space t = x - \dfrac{\pi}{4} \space \mathrm{ and } \left[ \cdot \right] \textrm{ is the integer part of the expression inside } $$
This further confused me because now I don't know the right answer and where I went wrong. Any help is appreciated.
|
Note that the integration is over a periodic function with periodic non-differentiable points as shown in the plot. Follow the steps below to perform such indefinite integration.
$$I=\int {\sqrt{1-\sin{\left(2x\right)}}}\space \mathrm{d}x
= \int {\sqrt{(\sin{x} - \cos{x})^2 }}\space \mathrm{d}x $$
$$= \int |\sin{x} - \cos{x}| \mathrm{d}x =\sqrt 2 \int |\sin(x-\frac{\pi}{4})| \mathrm{d}x = \sqrt 2\int |\sin t|dt$$
where the variable change $t=x-\frac \pi4$ is made in the last step. Note that the last expression is a positive periodic function with periodicity $\pi$, and its integral over each period is
$$A=\int_0^\pi |\sin t| \, dt = 2$$
Then, for $t\ge 0$, write $k=\left[\frac t\pi\right]$ and re-express the integral as
$$I =\sqrt 2 \int_0^{\pi k} |\sin s|\, ds+\sqrt 2 \int_{\pi k}^t |\sin s|\, ds + C $$
$$=\sqrt 2 A\, k + \sqrt 2\,(-1)^k \int_{\pi k}^t \sin s\, ds + C$$
(on $[\pi k,\pi(k+1))$ the sine has constant sign $(-1)^k$)
$$=2\sqrt{2}\, k + \sqrt{2}\,(-1)^k \left( \cos(\pi k) - \cos{t}\right) +C $$
Since $(-1)^k\cos(\pi k)=1$, for $t\ge0$ this is $\sqrt2\left(2k+1-(-1)^k\cos t\right)+C$. The result for $t<0$ can be derived similarly (the antiderivative is an odd function of $t$, since $|\sin|$ is even). The combined integral result then reads,
$$I = \operatorname{sgn} t \cdot \sqrt{2}\left(2\left[ \dfrac{|t|}{\pi}\right] + 1 - (-1)^{\left[|t|/\pi\right]} \cos{t}\right) +C $$
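The closed form can be validated numerically. With $t=x-\pi/4$ and $k=\left[t/\pi\right]$, an antiderivative of $\sqrt{1-\sin 2x}$ vanishing at $x=\pi/4$ is $F(t)=\sqrt2\,(2k+1-(-1)^k\cos t)$ for $t\ge0$; below is a check of my own against direct quadrature with SciPy, across upper limits on both sides of the kink at $x=5\pi/4$.

```python
import math
from scipy.integrate import quad

def F(t):
    """Antiderivative of sqrt(2)*|sin t| with F(0) = 0, for t >= 0."""
    k = math.floor(t / math.pi)
    return math.sqrt(2) * (2 * k + 1 - (-1) ** k * math.cos(t))

def integrand(x):
    return math.sqrt(1 - math.sin(2 * x))

errors = []
for x_hi in (1.0, 2.8, 5.0):                    # spans both k = 0 and k = 1
    t = x_hi - math.pi / 4
    val, _ = quad(integrand, math.pi / 4, x_hi, limit=200)
    errors.append(abs(val - F(t)))
```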
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Volume integration Let
$$D = \left\{(x,y,z)\in\mathbb{R}^{3}\mid x\ge0,0\le y\le x, x^2+y^2\le {16}, 0\le z\le {5}\right\}.$$
I want to integrate
$$\displaystyle\iiint\limits_{D}\left({-4\,z+y^2+x^2}\right)\,\mathrm{d}V $$
We can see that $x^2+y^2=r^2$ so $r^2=16$. $r\to[0,16]$ and $\theta\to[0,2\pi]$ and $z \to[0,5]$
And then integration
$$\int_0^{2\pi}\int_0^5\int_0^{16}(-4z+y^2+x^2)\,dV= \int_0^{2\pi}\int_0^5\int_0^{16}(-4z+r^2)\,drdzd\theta=\frac{36160\pi}{3}$$
That is the wrong answer, and I don't know where I made a mistake.
| There are three errors.
*
*$r^2=16$ means $r=4$, not $16$
*$x\ge 0$ and $0\le y\le x$ means that $\theta$ varies between $0$ and $\pi/4$
*the volume element in cylindrical coordinates is $dV=r\,dr\,dz\,d\theta$, so the integrand should be $(-4z+r^2)\,r$
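With the corrected bounds and the Jacobian $r$ from $dV=r\,dr\,dz\,d\theta$, the integral evaluates to $-20\pi$. A symbolic check of my own, via SymPy:

```python
import sympy as sp

r, z, theta = sp.symbols('r z theta', nonnegative=True)

# region: 0 <= theta <= pi/4, 0 <= r <= 4, 0 <= z <= 5; dV = r dr dz dtheta
integral = sp.integrate((-4 * z + r**2) * r,
                        (r, 0, 4), (z, 0, 5), (theta, 0, sp.pi / 4))
# inner: -32*z + 64; in z: -80; times pi/4: -20*pi
```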
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Decomposition of Bilinear Form Let $E=\text{span}\{x_{1},...,x_{n}\}$ and $F=\text{span}\{y_{1},...,y_{n}\}$. Assume that $B$ is a bilinear form on $E\times F$, the author claims that one can write
\begin{align*}
B(x,y)=\sum_{j=1}^{m}\theta_{j}(x)\omega_{j}(y),
\end{align*}
where $\theta_{j}\in E^{\#}$, $\omega_{j}\in F^{\#}$, the (algebraic) linear forms.
I thought at the very beginning that the linear forms must be of the kind that $\varphi_{i}(e_{k})=\delta_{k,i}$ and $\psi_{j}(f_{k})=\delta_{k,j}$ for bases $\{e_{i}\}_{i=1}^{N}$ for $E$, and $\{f_{j}\}_{j=1}^{M}$ for $F$, but this does not give me any further to proceed to, as
\begin{align*}
B(x,y)&=B\left(\sum_{i=1}^{N}\lambda_{i}e_{i},\sum_{j=1}^{M}\nu_{j}f_{j}\right)\\
&=\sum_{i=1}^{N}\sum_{j=1}^{M}\lambda_{i}\nu_{j}B(e_{i},f_{j}),
\end{align*}
the crossed term $B(e_{i},f_{j})$ cannot be reduced, I wonder some magic must be going on, so how to get the author's claim?
| Consider the following: for each $(i,j)$, put $\theta_{(i,j)} = B(e_i, f_j) e_i^*$ (where $e_i^*(\sum_k \lambda_k e_k ) = \lambda_i$) and $\omega_{(i,j)} = f_j^*$.
Then compute $\sum_{(i,j)} \theta_{(i,j)}\omega_{(i,j)} (x,y) = \sum_i \sum_j e_i^*(x)f_j^*(y)B(e_i,f_j)$, so with your notations, $\sum_{(i,j)} \theta_{(i,j)}\omega_{(i,j)} (x,y) = \sum_i \sum_j \lambda_i \nu_j B(e_i,f_j) = B(x,y)$.
So $B= \sum_{(i,j)} \theta_{(i,j)}\omega_{(i,j)}$.
Note that you could have also chosen something else, such as $\theta'_{(i,j)} = e_i^*, \omega'_{(i,j)} = B(e_i,f_j)f_j^*$, or still other choices.
The underlying idea is that $(E\otimes F)^* \cong E^* \otimes F^*$ for finite dimensional vector spaces $E,F$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding $\sin 0.01$ to a first-order approximation (in the sense of a Taylor series expansion around $0$) I am trying to understand what I need to calculate here exactly:
To a first-order approximation (in the sense of a Taylor series expansion around 0), what is $\sin 0.01$?
If I understood it correctly, I have to calculate the first-order Taylor series for the function $f(x) = \sin(x)$ where $x = 0.01$.
I get the following:
$$f(x) = \sin(a) + \cos(x)(x-a)$$
and if I plug in $x = 0$ and $a = 0.01$ I just get $0.01$ as the answer again.
| You're exactly right (in your answer)! You should be expecting this because of the so-called Small Angle Approximation that $\sin x \approx x$ when $x \approx 0$. Then as $0.01 \approx 0$ we have $\sin(0.01) \approx 0.01$, whatever that all means.
Note however that the first order approximation is
$$
T_1(x)= f(a) + f'(a)(x-a)
$$
where you have $a=0$ and $x$ is a variable (which we will set to $0.01$). You have a slight mislabeling of your equation. So you would have $T_1(x)= 0 + 1(x-0)= x$ so that $\sin(0.01) \approx T_1(0.01)= 0.01$.
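A one-line numeric check (mine, not part of the answer) shows how accurate the first-order value already is; the error is roughly $x^3/6 \approx 1.7\times 10^{-7}$:

```python
import math

x = 0.01
t1 = x                      # T_1(x) = f(0) + f'(0) * x = 0 + 1 * x
exact = math.sin(x)
print(exact, abs(exact - t1))
```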
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
To use Mean Value Theorem to prove $f(x)=\tan(x)$ increases over $(-\pi/2,\pi/2)$, don't we need $f(\pm\pi/2)$? Yet these values are undefined.
Prove with the Mean Value Theorem that the function $\tan(x)$ increases in the interval $(\frac{-\pi}{2}, \frac{\pi}{2})$.
My problem is that to use the Mean Value Theorem you need $f(\frac{\pi}{2})$ and $f(\frac{-\pi}{2})$ but in those values it's undefined. I asked if I could use a smaller interval but I was told I must to use $(\frac{-\pi}{2}, \frac{\pi}{2})$.
Thanks for reading.
| We have $\tan(x) = \frac{\sin(x)}{\cos(x)}$, hence $\tan'(x) = \frac{\cos^2(x)+\sin^2(x)}{\cos^2(x)} = \frac 1{\cos^2(x)}$. Let $x,y\in (-\pi/2,\pi/2)$, $x<y$. Then there is some $\xi\in (x,y)$ such that
$$
\tan(y)-\tan(x) = \tan'(\xi)(y-x) = \frac{y-x}{\cos^2(\xi)}> 0.
$$
Hence, $\tan(x)<\tan(y)$.
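The conclusion can also be sanity-checked numerically (illustration only, not part of the MVT proof), by sampling $\tan$ on a grid strictly inside $(-\pi/2,\pi/2)$:

```python
import math

n = 1000
# grid points -pi/2 + k*pi/(n+2) for k = 1..n, strictly inside (-pi/2, pi/2)
pts = [-math.pi / 2 + (k + 1) * math.pi / (n + 2) for k in range(n)]
vals = [math.tan(t) for t in pts]

# tan is strictly increasing along the grid
assert all(a < b for a, b in zip(vals, vals[1:]))
print(vals[0], vals[-1])
```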
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Showing $x^4+x^3+2x+15$ is irreducible in $\mathbb{Q}[x]$ Specifically, I'm trying to solve this problem:
Prove that $p(x)=x^4+x^3+2x+15$ is an irreducible polynomial in $\mathbb{Q}[x]$ by considering $p(x)$ mod $3$ and showing that $p(x)$ has no rational roots.
I'm able to show this is irreducible by applying the rational root theorem to eliminate the possibility of a linear factor and then brute force eliminating the possible quadratic factors, but I don't see how to do this in the way the problem states. Taking $p(x)$ mod $3$, we have
$$x^4+x^3+2x+15\equiv x(x^3+x^2+2)\bmod 3.$$
Then, this cubic term is irreducible mod $3$, but how does this help me derive the desired conclusion?
| If I recall my algebra correctly, there's a theorem (a consequence of Gauss's lemma) that says that if $p(x) \in \mathbb{Z}[x]$ is irreducible over $\mathbb{Z}[x],$ then it's irreducible over $\mathbb{Q}[x].$ Therefore, if $p(x)$ has no rational roots but is reducible over $\mathbb{Z}[x],$ then it'll be the product of two quadratics with integer coefficients, but then it should still be the product of two quadratics when going mod $3.$
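The brute-force route the asker mentions can be written out mechanically (my own pure-Python sketch): rule out integer roots dividing $15$, then rule out a factorization $(x^2+ax+b)(x^2+cx+d)$ with integer coefficients by matching coefficients. Combined with Gauss's lemma, this gives irreducibility over $\mathbb{Q}$:

```python
def p(k):
    return k**4 + k**3 + 2*k + 15

# Rational root theorem: a rational root of a monic integer polynomial is an
# integer dividing the constant term 15.
roots = [r for r in (1, -1, 3, -3, 5, -5, 15, -15) if p(r) == 0]
assert roots == []

# Quadratic * quadratic: (x^2 + a x + b)(x^2 + c x + d) forces
#   a + c = 1,  b + d + a*c = 0,  a*d + b*c = 2,  b*d = 15.
# With c = 1 - a, search all divisor pairs (b, d) and a small range of a
# (|a| is bounded since a*(1-a) = -(b+d) and |b+d| <= 16).
found = [(a, b, 1 - a, d)
         for b in range(-15, 16) if b != 0 and 15 % b == 0
         for d in [15 // b]
         for a in range(-20, 21)
         if b + d + a * (1 - a) == 0 and a * d + b * (1 - a) == 2]
assert found == []
print("no factorization over Z, hence irreducible over Q")
```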
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3384951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Units in $R[x,x^{-1}]$ Let $R$ be an integral domain. I am looking for a general way to describe the units in
$$ R[x,x^{-1}] := R[x,y]/(xy-1).$$
Clearly
$$\{rx^n \mid n \in \mathbb{Z}, r \in R^\times \} \subseteq R[x,x^{-1}]^\times$$
is a subgroup, but how do I know whether it's all? I was trying to argue with degrees or to write down multiplication of general elements with Cauchy sums, but it all becomes pretty messy, having also negative exponents.
| When $R$ is not an integral domain it's not necessarily true that this is all there is: look at $R= \mathbb Z/4$ and $(2x+1)^2 = 4x^2+4x+1 = 1$: even in $R[x]$ there can be other units.
If $R$ is an integral domain, those are indeed the only ones : take $fg = 1$, let $k$ (resp. $j$) be the highest index for which $f_k$ (resp. $g_j$) is nonzero, $k',j'$ similarly but with lowest.
Then $f_kg_j x^{k+j}$ is the highest degree monomial of $fg$, and $f_kg_j \neq 0$ by integrality. It follows that $k+j = 0$ and $f_kg_j = 1$.
Similarly, $f_{k'}g_{j'}x^{k'+j'}$ is the lowest degree monomial of $fg$ and $f_{k'}g_{j'} \neq 0$ by integrality. It follows that $k'+j'= 0$.
But $k'\leq k, j'\leq j$ by definition, therefore since both sums are $0$ we must have $k=k', j=j'$ and in particular, $f$ has only one nonzero monomial (so $f= rx^n$ for some $r,n$), similarly for $g$, and the rest follows.
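The $\mathbb Z/4$ counterexample at the start can be verified in a few lines of Python (my illustration; polynomials are represented as coefficient lists, lowest degree first, reduced mod $4$):

```python
def mul_mod(p, q, m):
    """Multiply two polynomials (coefficient lists) with coefficients mod m."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % m
    # strip trailing zero coefficients
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return r

f = [1, 2]                       # 1 + 2x in (Z/4Z)[x]
assert mul_mod(f, f, 4) == [1]   # (2x+1)^2 == 1
print("(2x+1)^2 == 1 in (Z/4Z)[x]")
```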
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3385219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Expected Number of rolls of 1 before first 6 is rolled Say we have a fair six-sided die, and we want to find the expected number of 1s rolled before the first six is rolled.
Apparently this can be solved by conditioning/recursion. I know that by that same method the expected number of rolls before rolling a 6 is 5 (so the 6 itself is expected to appear on roll 6). Thus the expected number of 1s before that should be much less than 6.
Would I rewrite this problem to the expected number of 1s rolled in $6-1=5$ rolls (the rolls before the first six)? I'm not sure how to start solving this.
| If you know the number of expected rolls before first rolling a $6$, then this is the expected number of $1$s before the first $6$ plus the expected number of $2$s before the first $6$ plus ... plus the expected number of $5$s before the first $6$. But each of these is equal, by symmetry, so you just need to divide by $5$.
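A Monte Carlo sanity check of the symmetry argument (my own; the exact answer is $5/5 = 1$):

```python
import random

random.seed(1)

def ones_before_six():
    """Roll a fair die until the first 6; count the 1s seen along the way."""
    count = 0
    while True:
        roll = random.randint(1, 6)
        if roll == 6:
            return count
        if roll == 1:
            count += 1

trials = 200_000
avg = sum(ones_before_six() for _ in range(trials)) / trials
print(avg)   # should be close to 1: the 5 expected pre-6 rolls split evenly 5 ways
```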
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3385356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Determine $x+y$ where $x$ and $y$ are real numbers such that $(2x+1)^2+y^2+(y-2x)^2=\frac{1}{3}$ Determine $x+y$ where $x$ and $y$ are real numbers such that $(2x+1)^2+y^2+(y-2x)^2=\frac{1}{3}$
I used the quadratic formula to get $$x=\frac{y-1\pm\sqrt{-2y-3y^2-\frac{5}{3}}}{4}$$
But I don’t see how that helps; hints and solutions would be appreciated.
Taken from the 2006 IWYMIC
| Hint: Show that $\frac{1}{3}$ is the unique minimum of the function $f(x,y)=(2x+1)^2+y^2+(y-2x)^2$. Then find the $(x,y)$ where this minimum occurs.
Your expression under the radical is also incorrect. It should be $-3y^2-2y-\frac{1}{3}$, or $-\frac{1}{3}(9y^2+6y+1) = -\frac{1}{3}(3y+1)^2$. Thus there is exactly one value of $y$ for which the expression under the radical is nonnegative.
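Carrying the hint through (my own computation, not spelled out in the answer): setting the gradient of $f$ to zero gives $y=x$ from $\partial f/\partial y=0$ and $y=4x+1$ from $\partial f/\partial x=0$, hence $x=y=-\tfrac13$. Exact rational arithmetic confirms $f\left(-\tfrac13,-\tfrac13\right)=\tfrac13$, so $x+y=-\tfrac23$:

```python
from fractions import Fraction

# Critical point of f(x, y) = (2x+1)^2 + y^2 + (y-2x)^2:
#   df/dy = 0  =>  y = x;   df/dx = 0  =>  y = 4x + 1;   so x = y = -1/3.
x = y = Fraction(-1, 3)
f = (2 * x + 1) ** 2 + y ** 2 + (y - 2 * x) ** 2
assert f == Fraction(1, 3)
print("x + y =", x + y)   # -2/3
```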
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3385514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Approximation near singularity of $1/\sin$ I'm looking for an approximation $f(x)$ of $\frac{1}{\sin(x)}$ near the singularity at $x=0$.
Can you propose a function, some literature, or a keyword that leads me to $f(x)$? $f(x)$ must not have a singularity at $x=0$ and needs to be continuous.
| Multiply by anything that is close to $1$ far from $x=0$ and has a minimum at $(0,0)$.
Like
$$\frac{x^2}{(x^2+\epsilon)\sin(x)}.$$
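A quick numeric check of this regularisation (my own, with an arbitrary $\epsilon = 10^{-3}$; at $x=0$ the formula is $0/0$, so I take the limiting value $0$ there):

```python
import math

eps = 1e-3

def f(x):
    # regularised 1/sin; at x = 0 the limit is 0, so define it that way
    if x == 0.0:
        return 0.0
    return x * x / ((x * x + eps) * math.sin(x))

# Far from the singularity it tracks 1/sin(x) closely...
assert abs(f(1.0) - 1 / math.sin(1.0)) < 2e-3
# ...while near x = 0 it stays small and bounded instead of blowing up.
assert abs(f(1e-6)) < 1e-2
print(f(1.0), 1 / math.sin(1.0), f(1e-6))
```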
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3385686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
In how many ways can 10 blankets be given to 3 beggars such that each receives at least one blanket? The question was to find the number of ways in which 10 identical blankets can be given to 3 beggars such that each receives at least 1 blanket. So I thought about trying the multinomial theorem...this is the first time I've tried it so I'm stuck at a point...
So $$x_1+x_2+x_3 = 10$$
Subject to the condition that :
$$1\leq x_1 \leq8$$
$$1\leq x_2 \leq8$$
$$1\leq x_3 \leq8$$
As each beggar can get at most 8 blankets and at least 1.
So the number of ways must correspond to the coefficient of $x^{10}$ in:
$$(x^1+x^2+x^3+x^4+x^5+x^6+x^7+x^8)(x^1+x^2+x^3+x^4+x^5+x^6+x^7+x^8)(x^1+x^2+x^3+x^4+x^5+x^6+x^7+x^8)$$
= coeff of $x^{10}$ in $x^3(1+x^1+x^2+x^3+x^4+x^5+x^6+x^7)(1+x^1+x^2+x^3+x^4+x^5+x^6+x^7)(1+x^1+x^2+x^3+x^4+x^5+x^6+x^7)$
= coeff of $x^{10}$ in $x^3(1+x^1+x^2+x^3+x^4+x^5+x^6+x^7)^3$
= coeff of $x^{10}$ in $x^3(1-x^7)^3(1-x)^{-3}$
= coeff of $x^{10}$ in $x^3(1-x^{21}-3x^7(1-x^7))(1-x)^{-3}$
= coeff of $x^{10}$ in $(x^3-3x^{10})(1+\binom{3}{1}x + \binom{4}{2}x^2+...+ \binom{12}{10}x^{10})$
= $\binom{9}{7} - 3 = 33$
Is this right? From here I get the answer as $\binom{9}{7} - 3 = 33$, but the answer is stated as $36$. I don't understand where I'm making a mistake.
| Try stars and bars. You have $10$ stars for the $10$ blankets:
$**********$
Now you can use $2$ bars to split this into $3$ sections. For example
$**|*******|*$
would mean beggar $1$ gets $2$ blankets, beggar $2$ gets $7$ blankets, and beggar $3$ gets $1$ blanket
Since each beggar should get at least $1$ blanket, we can't put the bars on the outside of the stars, and you also can't have the two bars between the same two stars. In other words, you need to choose $2$ out of the $9$ in-between locations, giving $9 \choose 2$ possible ways to do this.
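A brute-force enumeration (mine) confirms the count, and incidentally locates the slip in the generating-function attempt above: $1+x+\cdots+x^7=\frac{1-x^8}{1-x}$, so the factor should be $(1-x^8)^3$ rather than $(1-x^7)^3$; with that correction the coefficient of $x^{10}$ also comes out to $\binom{9}{2}=36$.

```python
from math import comb

# Enumerate all (x1, x2, x3) with x1 + x2 + x3 = 10 and 1 <= xi <= 8.
count = sum(1 for x1 in range(1, 9)
              for x2 in range(1, 9)
              if 1 <= 10 - x1 - x2 <= 8)
assert count == comb(9, 2) == 36
print(count)
```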
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3385830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Geometry question involving the length of a chord Two chords $AB$ and $AC$ are drawn inside a circle with diameter $AD$. The angle $BAC = 60$, $AB = 24cm$, $EC = 3cm$, and $BE$ and $AC$ are perpendicular. What is the length of the chord $BD$?
Here's what I've tried:
$\angle ABE = 30^\circ$, which implies $AE = 12\,\text{cm}$ and therefore $BE = 12\sqrt3\,\text{cm}$ and $BC = 21\,\text{cm}$. Call the intersection of $EB$ and $AD$ point $O$. So we have that $AEO$ is similar to $ACD$, but I don't know where to go from here.
Thank you so much in advance!
| You are nearly there!
Note that angles CBD and CAD are equal. Angles BAD and EBC are then easily proved to be equal.
The right-angled triangle ABD is now similar to the triangle BEC of which you know all dimensions. Over to you?
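A coordinate check (my own, not part of the hint) confirms where the similarity leads, namely $BD = \frac{EC\cdot AB}{BE} = \frac{3\cdot 24}{12\sqrt3} = 2\sqrt3$:

```python
import math

# Put A at the origin with AC along the x-axis. From AB = 24 and angle BAC = 60:
# B = (12, 12*sqrt(3)); E = (12, 0) is the foot of the perpendicular; C = (15, 0)
# since EC = 3.
A = (0.0, 0.0)
B = (12.0, 12.0 * math.sqrt(3))
C = (15.0, 0.0)

# Circumcentre O of triangle ABC:
#   |OA| = |OC|  =>  Ox = 7.5;   |OA| = |OB|  =>  Ox + sqrt(3)*Oy = 24.
Ox = 7.5
Oy = (24 - Ox) / math.sqrt(3)

# D is diametrically opposite A, i.e. D = 2*O - A.
D = (2 * Ox - A[0], 2 * Oy - A[1])

BD = math.hypot(D[0] - B[0], D[1] - B[1])
print(BD)   # 2*sqrt(3) ~ 3.464
```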
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3385957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is $PSL_2(\mathbb Z)$ a Fuchsian group of the first kind? We know that as a discrete subgroup of $PSL_2(\mathbb R)$, $PSL_2(\mathbb Z)$ is a Fuchsian group. But how to prove/disprove that it is of the first kind, i.e. that every point on the extended real line is a limit point of some orbit?
| Hint: Prove that every rational point on the real line is fixed by a parabolic element of $PSL(2,{\mathbb Z})$. A sub-hint: Think first about stabilizers of nonzero elements of ${\mathbb Z}^2$ in $SL(2,{\mathbb Z})$.
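One explicit family realizing the hint (my construction; the answer leaves this as an exercise): for $p/q$ in lowest terms, the matrix $\begin{pmatrix}1-pq & p^2\\ -q^2 & 1+pq\end{pmatrix}$ has determinant $1$, trace $2$ (so it is parabolic), and fixes $p/q$ under the Möbius action. A quick exact check:

```python
from fractions import Fraction

def check(p, q):
    """Verify the parabolic element of SL(2, Z) fixing p/q (q != 0, gcd(p,q)=1)."""
    a, b, c, d = 1 - p * q, p * p, -q * q, 1 + p * q
    assert a * d - b * c == 1           # determinant 1: in SL(2, Z)
    assert a + d == 2                   # trace 2: parabolic (not +-identity here)
    z = Fraction(p, q)
    assert (a * z + b) / (c * z + d) == z   # Moebius action fixes p/q
    return True

assert all(check(p, q) for p, q in [(0, 1), (1, 2), (3, 5), (-7, 11)])
print("each tested rational is fixed by a parabolic element")
```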
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Finding all integers $k \geq 2$ such that $k^2 \equiv 5k \pmod{15}$. What is going on here? The question is as follows:
Find all integers $k \geq 2$ such that $k^2 \equiv 5k \pmod{15}$.
I have an issue related to this question (it's not about the solution to the question):
I know that $\overline{k} \in \mathbb{Z}_{15}$ is invertible if and only if $k$ and $15$ are relatively prime. So, assume $\overline{k}$ is invertible. Then, $\overline{k}^2 = \overline{5}\overline{k}$ implies $\overline{k} = \overline{5}.$ But isn't $\overline{5}$ not invertible, since $5$ is not relatively prime with 15? What am I missing?
| If $k^2\equiv 5k \pmod{15},$ then $3$ and $5$ divide $k^2-5k=k(k-5)$,
so $3$ divides $k$ or $k-5$ and $5$ divides $k$ or $k-5$.
$5 \mid k$ iff $5 \mid k-5$, so we have $3$ divides $k$ or $k-5$, and $5$ divides both $k$ and $k-5$.
That means $15$ divides $k$ or $k-5$; i.e., $k\equiv 0$ or $5 \mod 15$.
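A brute-force check over one full period (my own) confirms the residues:

```python
# k^2 == 5k (mod 15) depends only on k mod 15, so check one full period.
solutions = [k for k in range(15) if (k * k - 5 * k) % 15 == 0]
assert solutions == [0, 5]
print(solutions)
```

With the constraint $k \ge 2$, the integers are therefore exactly $k = 5, 15, 20, 30, 35, \dots$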
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Sum of squared eigenvalues is at most trace of adjoint product Specifically, I'm trying to solve the following:
Let $T$ be a complex $n\times n$ matrix. Let $\lambda_1,\cdots,\lambda_n$ be the eigenvalues of $T$, where each eigenvalue is repeated a number of times equal to its algebraic multiplicity. Prove that
$$\sum_{k=1}^n|\lambda_k|^2\leq\operatorname{tr}(T^*T),$$
with equality if and only if $T$ is normal.
It seems like this should be a straightforward proof using a Schur decomposition, but I'm confused by the inequality. It should be the case that $\operatorname{tr}(T^*T)=\operatorname{tr}(TT^*)$, even if $T$ isn't normal, right? So how can this inequality be strict?
| Alternatively, we can use the Frobenius inner product: $\langle A,B\rangle=\operatorname{tr}(A^*B)$.
Replace $T$ with $T=U(D+N)U^*$ for some unitary $U$, diagonal $D$, and strictly upper triangular nilpotent $N$ (Schur's theorem). Then
\begin{align*}
\operatorname{tr}(T^*T)&=\langle T,T\rangle\\
&=\langle U(D+N)U^*,U(D+N)U^*\rangle\\
&=\langle D+N,D+N\rangle&\mbox{(because unitary matrices preserve inner products)}\\
&=\|D\|^2+\|N\|^2+\underbrace{\langle D,N\rangle}_{=0}+\underbrace{\langle N,D\rangle}_{=0}.&\mbox{(by definition of $D$ and $N$)}\\
\end{align*}
Since $\|N\|^2=0$ if and only if $T$ is unitarily diagonalizable, and $T$ is unitarily diagonalizable if and only if $T$ is normal (spectral theorem), the desired result follows from the fact that the entries of $D$ are the eigenvalues of $T$.
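A numeric illustration with NumPy (mine; it uses `numpy.linalg.eigvals` rather than an explicit Schur factorization): for a random, generically non-normal $T$ the inequality is strict, while a Hermitian (hence normal) matrix attains equality.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

eigs = np.linalg.eigvals(T)
lhs = float(np.sum(np.abs(eigs) ** 2))            # sum of |lambda_k|^2
rhs = float(np.trace(T.conj().T @ T).real)        # tr(T* T) = Frobenius norm^2
assert lhs <= rhs + 1e-9                          # generically strict here
print(lhs, rhs)

# Equality for a normal matrix, e.g. the Hermitian T + T^*:
H = T + T.conj().T
lhs_H = float(np.sum(np.abs(np.linalg.eigvals(H)) ** 2))
rhs_H = float(np.trace(H.conj().T @ H).real)
assert abs(lhs_H - rhs_H) < 1e-6
```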
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |