Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Determining the limit: What does it mean when one obtains zero in the calculation? Consider the sequence $a_n = \sqrt{n+\sqrt{n}}-\sqrt{n-\sqrt{n}}$. To determine the limit I did the following:
\begin{aligned}
a_{n} &=\left(\sqrt{n+\sqrt{n}}-\sqrt{n-\sqrt{n}}\right) \frac{\sqrt{n+\sqrt{n}}+\sqrt{n-\sqrt{n}}}{\sqrt{n+\sqrt{n}}+\sqrt{n-\sqrt{n}}} \\[10pt]
&=\frac{2 \sqrt{n}}{\sqrt{n+\sqrt{n}}+\sqrt{n-\sqrt{n}}}=\frac{2 \sqrt{n}}{\sqrt{n} \sqrt{1+\frac{1}{\sqrt{n}}}+\sqrt{n} \sqrt{1-\frac{1}{\sqrt{n}}}} \\[10pt]
&=\frac{2}{\sqrt{1+\underbrace{\frac{1}{\sqrt{n}}}_{\rightarrow0 \text{ for }n \rightarrow \infty}}+\sqrt{1-\underbrace{\frac{1}{\sqrt{n}}}_{\rightarrow0 \text{ for }n \rightarrow \infty}}} \\[10pt] &= \dfrac{2}{2} = 1.
\end{aligned}
However, my first thought was $a_n = \sqrt{n+\sqrt{n}}-\sqrt{n-\sqrt{n}} = \sqrt{n}\left(\sqrt{1 + \dfrac{1}{\sqrt{n}}}\space - \sqrt{1 - \dfrac{1}{\sqrt{n}}}\right) = \sqrt{n}\space (1-1) = 0$
with the same argument as above. Is it correct that one can't make a statement about the convergence in the latter calculation because we have $\infty \cdot 0$ ? If yes, why exactly is this the case?
Edit: To see that $\lim\limits_{n\to \infty} \dfrac{1}{\sqrt{n}} = 0$ pick some arbitrary $\epsilon > 0$. We want to find a $N$ s.t. $\forall n\geq N\colon |\dfrac{1}{\sqrt{n}} - 0|< \epsilon \Longleftrightarrow \dfrac{1}{\sqrt{n}} < \epsilon \Longleftrightarrow n >\dfrac{1}{\epsilon^2}$ meaning we can choose $N = \dfrac{1}{\epsilon^2} + 1$ for example.
| When one factor of a product goes to $\infty$ and the other goes to 0, you can't immediately conclude what the limit is. The easiest way to see this is to consider $\lim_{n \to \infty} n \cdot \frac{1}{n}$. Now replace the 1 by any other number.
The error in your calculation is that when you evaluate a limit by substitution, you have to substitute all instances of $n$ at the same time. In your calculation, at the second-to-last equals sign, you substitute some instances to get $1-1$ while leaving the $\sqrt{n}$ unsubstituted. That is wrong because when you take a limit, all instances of $n$ tend to the limit at the same time.
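To make the conclusion concrete, here is a quick numerical sanity check (a Python sketch, not part of the original post) showing $a_n \to 1$:

```python
from math import sqrt

def a(n):
    # a_n = sqrt(n + sqrt(n)) - sqrt(n - sqrt(n))
    return sqrt(n + sqrt(n)) - sqrt(n - sqrt(n))

for n in [10, 1000, 1_000_000]:
    print(n, a(n))  # values approach 1 as n grows
```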
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is there a tight bound on the following binomial summations involving squares on arithmetic progressions? The summations of interest are the following:
$$\sum_{i=0}^{\lfloor\sqrt n\rfloor}\binom{n}{i^2}$$
$$\sum_{i\in\{a,q+a,2q+a\dots,\lfloor\sqrt n\rfloor\}}\binom{n}{i^2}$$
where $q<n$ and $a\in\{0,1,\dots,q-1\}.$
Is there a tight asymptotic bound on these?
| As stated under the OEIS entry $\texttt{A003099}$ mentioned in comments, the quantity
$$f(n)=\frac{\sqrt n}{2^n}\sum_{k\geqslant 0}\binom{n}{k^2}$$
stays bounded, in $(\approx 0.587,\approx 0.827)$ for large $n$. More precisely, one obtains
For any $x\in\mathbb{R}$ we have $\displaystyle\color{blue}{\lim_{n\to\infty}f\big(\lfloor2(n+x)^2\rfloor\big)=\sqrt\frac2\pi\sum_{m\in\mathbb{Z}}e^{-4(m-x)^2}}$.
The idea is basically to put $k=n+m$ above, and use the following consequence $$\lim_{n\to\infty}\frac{\sqrt{2n(n+a)+b}}{2^{2n(n+a)+b}}\binom{2n(n+a)+b}{(n+m)^2}=\sqrt\frac2\pi e^{-(2m-a)^2}$$ of Stirling's formula. Thus, the exact values of the bounds are
\begin{align*}\limsup_{n\to\infty}f(n)&=\sqrt\frac2\pi\sum_{m\in\mathbb{Z}}e^{-(2m)^2}={0.827112271364145742192103543257}\cdots\\
\liminf_{n\to\infty}f(n)&=\sqrt\frac2\pi\sum_{m\in\mathbb{Z}}e^{-(2m-1)^2}={0.587247586271786487501375771221}\cdots\end{align*}
I'm sure that the second (more general) sum in the question can be handled similarly.
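As a sanity check on the stated bounds, one can evaluate $f(n)$ directly for moderate $n$ (a Python sketch; not part of the original answer):

```python
from math import comb, isqrt, sqrt

def f(n):
    # f(n) = sqrt(n)/2^n * sum_{k>=0} C(n, k^2); only k <= sqrt(n) contributes
    s = sum(comb(n, k * k) for k in range(isqrt(n) + 1))
    return sqrt(n) * (s / 2 ** n)

for n in [200, 450, 800]:
    print(n, f(n))  # values already hover near the interval (0.587..., 0.827...)
```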
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Values of the infinite product $\prod_n\frac{(n+1)x}{1+nx}$ I am trying to compute the inverval of convergence and the explicit value of the infinite series $$\prod_{n=1}^{\infty}\left(\dfrac{(n+1)x}{1+nx}\right).$$ I believe the interval of convergence is $(-1,1)$ and exact value is $\dfrac{x}{1-x},$ but I might be wrong. Any help would be greatly appreciated.
| For numbers of the form $-\frac{1}{n}$ the product is not defined; for the other negative values the product diverges to $\infty$ or $-\infty$. For $x \in [0,1)$ the product diverges to $0$. For $x>1$ the product diverges to $\infty$. So the product only converges for $x=1$.
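The divergence to $0$ for $x\in[0,1)$ and the convergence at $x=1$ are easy to see numerically with partial products (a Python sketch, not from the original answer):

```python
def partial_product(x, N):
    # partial product of (n+1)*x / (1 + n*x) for n = 1..N
    p = 1.0
    for n in range(1, N + 1):
        p *= (n + 1) * x / (1 + n * x)
    return p

print(partial_product(1.0, 1000))  # every factor is 1, so the product stays 1
print(partial_product(0.5, 1000))  # tends to 0 (telescopes to 2/(N+2))
print(partial_product(2.0, 5000))  # grows without bound
```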
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4043975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Normal subgroup and non-equivalent representations I am looking for an example to illustrate that two representations are not equivalent in a particular situation.
Let $H$ be a normal subgroup of $G$ and $\pi$ a representation of $H$. The second representation is defined as follows: for $g \in G$, define $\pi^g : G \to GL(V)$ by $\pi^g (h)=\pi (g^{-1}hg)$; this gives a representation $\pi^g$ of $H$ for each $g \in G$.
So now I have to find an example of G, H a representation $\pi$ of H and $g \in G$ such that $\pi$ and $\pi^g$ are not equivalent.
However, I am struggling a bit to find a nice and simple example to show this.
I've tried with $G= (\mathbb{C}, +)$ and $H=(\mathbb{R}, +)$ as this is a normal subgroup of G (as it is abelian) and as the representation of H has been given in the lectures. However, it seems as if this doesn't work
| Since $\pi(g^{-1}hg) = \pi(g)^{-1}\pi(h)\pi(g)$, you're just conjugating the outputs, i.e. you're using $\pi(g)$ to change bases. So, they're equivalent representations.
Edit: As the comments noted, you intended to write $\pi^g \colon H \to GL(V)$, in which case they don't need to be equivalent.
For instance, take $G = S_3$, the symmetric group on 3 elements, and $H = A_3 = \{1, (123), (132)\}$, the alternating group on 3 elements, written in cycle notation. Use $V = \mathbb{C}$ and $\pi \colon A_3 \to \mathbb{C}^\times$ by $\pi((123)^k) = \exp(2\pi i k/3)$.
Set $g = (12)$. Easy calculations give $\pi^g((123)) = \pi((123)^2) = \exp(2\pi i \cdot 2/3) = \overline{\pi((123))}$, the complex conjugate. Hence they're inequivalent representations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4044160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Convergence of double sum on the lattice I am working with a commutator $T$ acting on the lattice $\ell^2(\mathbb{Z}^2;\mathbb{C})$, the function space spanned by the basis elements
\begin{align}\left|\vec{x}\right>\,:\,\mathbb{Z}^2&\rightarrow \mathbb{C}\\
\vec{y}&\mapsto \delta_{\vec{x},\vec{y}}
\end{align}
I am studying a paper in which the authors claim a particular (double) sum is convergent. It is the following,
\begin{equation}
C_N\sum_{r_1\in \mathbb{Z}}\sum_{x_1\in \mathbb{Z}}\left(1+\frac{1}{2}\left(|x_1+r_1|+|x_1|\right)\right)^{-N}.\qquad (1)
\end{equation}
Here, $N$ can be any real number; to each $N$ a certain constant $C_N$ is fitted. In the paper, the authors claim this sum is convergent, but I have a hard time seeing why that is the case. At first thought, for fixed $r_1\in \mathbb{Z}$, e.g. for $N=2$, the inner sum will look very much like the sum $\sum_{x_1\in \mathbb{Z}}\frac{1}{|x_1|^2}$, which I know converges to a finite number. But if I sum this finite number an infinite number of times, obviously the result will not converge. Is it possible to pick a proper $N$ such that the double sum in (1) does indeed converge?
| It converges for $N>2$. For this we can use the (stupid) estimate
$$ \frac{1}{(1+\vert x_1 + r_1 \vert + \vert x_1 \vert)^N} \leq \frac{1}{(1+\vert r_1 + x_1\vert)^{N/2}} \frac{1}{(1+\vert x_1 \vert)^{N/2}}.$$
Then we have
$$ \sum_{x_1, r_1 \in \mathbb{Z}} \frac{1}{(1+\vert x_1 + r_1 \vert + \vert x_1 \vert)^N}
\leq \sum_{x_1, r_1 \in \mathbb{Z}} \frac{1}{(1+\vert r_1 + x_1\vert)^{N/2}} \frac{1}{(1+\vert x_1 \vert)^{N/2}}
= \left(\sum_{k\in\mathbb{Z}} \frac{1}{(1+\vert k \vert)^{N/2}} \right)^2.$$
For the last equality we first fixed $x_1$ and summed over $r_1$ (shifting it by $x_1$). For $N>2$ we have $N/2>1$ and so the RHS does converge and hence so does the LHS.
We can also analyse it a bit more carefully to see that it does not converge for $1<N<2$ (clearly it does not converge for $N\leq 1$, not even the series over a single variable). For this we will use the integral test, which tells us
$$ \frac{N-1}{(2+\vert x_1 \vert)^{N-1}} = \int_1^\infty \frac{1}{(1+\vert x_1\vert +r)^N} dr \leq \sum_{r\in \mathbb{N}_{\geq 1}} \frac{1}{(1+\vert x_1 \vert + r)^N} . $$
Thus, we obtain
$$ \sum_{x_1\in \mathbb{Z}} \sum_{r_1\in \mathbb{Z}} \frac{1}{(1+\vert x_1 + r_1 \vert + \vert x_1 \vert)^N}
\geq \sum_{x_1\in \mathbb{Z}} \left( 2 \sum_{r_1\in \mathbb{N}} \frac{1}{(1+\vert x_1 + r_1 \vert + \vert x_1 \vert)^N} \right)
\geq 2\sum_{x_1\in \mathbb{Z}} \frac{N-1}{(2+\vert x_1 \vert)^{N-1}}.$$
Using the integral again, one sees that the RHS blows up and so our original series does not converge.
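The dichotomy at $N=2$ can also be observed numerically by comparing truncated double sums (a Python sketch using the summand of equation (1) with $C_N=1$):

```python
def truncated_sum(N, M):
    # partial sums of (1) over |x1| <= M, |r1| <= M
    return sum(
        (1 + 0.5 * (abs(x + r) + abs(x))) ** (-N)
        for x in range(-M, M + 1)
        for r in range(-M, M + 1)
    )

for M in [50, 100, 200]:
    # N = 3 > 2: partial sums level off; N = 1.5 < 2: they keep growing
    print(M, truncated_sum(3.0, M), truncated_sum(1.5, M))
```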
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4044304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove: $\lim_{x \to \infty} f'(x) = 2$ implies $\lim_{x \to \infty}f(x) = \infty$ This question is from last year's exam in calculus.
In the answers they mentioned more than one way to solve this question.
The method they used is to build a new function but it was a bit complicated.
I would like to get some help.
| The key is to show that $f$ is bounded below by a function which goes to infinity as $x \to \infty$. It follows from your hypothesis that there exists an $x_0$ such that for all $x \geq x_0$, $f'(x) \geq 1$.
Now let $x$ be any number bigger than $x_0$. Consider the average rate of change
$$\frac{f(x) - f(x_0)}{x-x_0}$$
between $x$ and $x_0$. By the mean value theorem, there exists a $c > x_0$ such that
$$\frac{f(x) - f(x_0)}{x-x_0} = f'(c)$$
Since $f'(c) \geq 1$, we get $f(x) \geq f(x_0) + (x - x_0)$; hence with $K = f(x_0) - x_0$ we have $f(x) \geq x + K$ for all $x > x_0$, and so $f(x) \to \infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4044456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Simple proof that one can obtain a difference of $1$ between multiples of coprimes $a$ and $b$ Given two coprimes $a$ and $b$ (assume wlog that $a < b$), there are non-negative integers $n_a$ and $n_b$ such that $n_b \cdot b = n_a \cdot a + 1$. This is easy to prove using Bézout's identity, but is there a simpler, more "intuitive" way of proving it?
Another way of putting it is that there exists $n \in \mathbb{N}$ such that the remainder of the Euclidean division of $nb$ by $a$ is $1$.
| $\gcd(a,b)=1$ implies $\bar b\in\mathbb Z/a\mathbb Z$ is invertible, so there exists $n_b\in\mathbb Z$ such that $n_bb\equiv1\pmod a$. Hence there exists $n_a\in\mathbb Z$ such that $n_aa=n_bb-1$, i.e. $n_bb=n_aa+1$.
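Computationally, the witness $n_b$ is just the modular inverse of $b$ modulo $a$. A Python sketch (Python 3.8+ supports `pow(b, -1, a)` for modular inverses; the helper name is my own):

```python
from math import gcd

def bezout_witness(a, b):
    # assumes gcd(a, b) == 1; returns (n_a, n_b) with n_b*b == n_a*a + 1
    assert gcd(a, b) == 1
    n_b = pow(b, -1, a)       # modular inverse of b mod a
    n_a = (n_b * b - 1) // a  # exact: a divides n_b*b - 1 by construction
    return n_a, n_b

for a, b in [(7, 12), (10, 21), (25, 36)]:
    n_a, n_b = bezout_witness(a, b)
    print(a, b, n_a, n_b, n_b * b == n_a * a + 1)
```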
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4044614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Chinese Remainder Theorem problem in ring theory I'm currently following my 2nd algebra course at uni where we are discussing rings.
The assignment goes like this
$$\text{Show that} \ \mathbb{R}[x]/(x^2-1) \ \text{is isomorphic to} \ \mathbb{R}\times\mathbb{R}$$
$\mathbb{R}[x]$ is the polynomial ring and $\mathbb{R}[x]/(x^2-1)$ is then a quotient ring. A hint to solve it is to use the Chinese Remainder Theorem.
I am stumped. The hint implies to me that $(x^2-1)$ must be the kernel of some ring homomorphism. It also to me implies that there is an ideal in $\mathbb{R}[x]$, let's call it K, so that $\mathbb{R}[x]/K=\mathbb{R}$ and I can't make either of those things to make sense. Is it also right to assume that $\mathbb{R}[x]/(x^2-1)=\{f(x)+x^2-1\vert f(x)\in\mathbb{R}[x]\}=\mathbb{R}[x]$, since all equivalence classes will contain just one element, since for two elements to be in the same equivalence class, the polynomials must be the same? Like if,
$$q(x)+x^2-1=r(x)+x^2-1\Leftrightarrow q(x)=r(x)$$
for $q(x),r(x)\in\mathbb{R}[x]$.
| Hint: Consider the map $\phi:\mathbb R[x] \to \mathbb{R}\times\mathbb{R}$ given by $\phi(f)=(f(1),f(-1))$. Prove that $\phi$ is a surjective ring homomorphism and compute its kernel.
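A quick computational illustration of the hint (a Python sketch; the example polynomial and helper names are my own): reducing a polynomial modulo $x^2-1$ does not change its values at $\pm 1$, which is why the evaluation map descends to the quotient.

```python
def evaluate(coeffs, x):
    # polynomial given by its coefficient list [c0, c1, c2, ...]
    return sum(c * x ** k for k, c in enumerate(coeffs))

def reduce_mod_x2_minus_1(coeffs):
    # in R[x]/(x^2 - 1) we have x^2 = 1, so every power folds to x^0 or x^1
    out = [0.0, 0.0]
    for k, c in enumerate(coeffs):
        out[k % 2] += c
    return out

f = [2.0, -3.0, 1.0, 5.0]             # 2 - 3x + x^2 + 5x^3
r = reduce_mod_x2_minus_1(f)          # [3.0, 2.0], i.e. 3 + 2x
print(evaluate(f, 1), evaluate(r, 1))    # both 5.0
print(evaluate(f, -1), evaluate(r, -1))  # both 1.0
```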
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4044763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Finding an expression for $M$ in terms of $t$: differential equations
The mass $M$ at time $t$ of the leaves of a certain plant varies according to the differential equation $$\frac{\mathrm{d}M}{\mathrm{d}t}= M-M^2$$
Given that at time $t=0$, $M=0.5$, find an expression for $M$ in terms of $t$
I wasn't sure how to start so I began by solving the differential equation to get $$\frac{1}{M-M^2}\mathrm{d}M=\mathrm{d}t$$
Then, simplifying, I get $$\ln\frac{M}{1-M}=t$$ using log rules. To cancel $\ln$ I get $$\frac{M}{1-M}=e^t$$ and then I get $M=e^t-Me^t$.
However the answer is $$\frac{e^t}{1+e^t}$$
I don't understand how they got that?
| $$\frac{dM}{M-M^2}=\mathrm{d}t$$
$$\int \frac{dM}{M(1-M)}=t+c$$
$$\int \frac{dM}{M}+\int \dfrac {dM}{1-M}=t+c$$
$$\int \frac{dM}{M}-\int \dfrac {dM}{M-1}=t+c$$
$$\ln |M|-\ln {|M-1|}=t+c$$
$$\dfrac M{M-1}=Ce^t$$
$$M={(M-1)}Ce^t$$
$$M(1-Ce^t)=-Ce^t$$
This gives us:
$$M(t)=\dfrac {-Ce^t}{1-Ce^t}$$
Then apply initial condition:
$$M(0)=\dfrac 12 \implies C=-1$$
$$M(t)=\dfrac {e^t}{1+e^t}$$
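One can verify the final formula numerically against the differential equation and the initial condition (a Python sketch, not part of the original answer):

```python
from math import exp

def M(t):
    return exp(t) / (1 + exp(t))

def dM(t, h=1e-6):
    # central-difference approximation of dM/dt
    return (M(t + h) - M(t - h)) / (2 * h)

print(M(0))  # 0.5, matching the initial condition
for t in [0.0, 1.0, 3.0]:
    print(t, dM(t), M(t) - M(t) ** 2)  # the two columns agree
```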
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4044876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Cumulative distribution function with 3 variables Let $X$ be the random variable whose cumulative distribution function is
$$
F_X (x) = \begin{cases}
0, & \text{for} \space x\lt 0 \\
\frac{1}{2}, & \text{for} \space 0\le x\le 1 \\
1, & \text{for} \space x\gt 1 \\
\end{cases}.
$$
Let $Y$ be a random variable independent of $X$ and uniformly distributed over the interval $(0,1)$. Define the random variable $Z$ as
$$
Z = \begin {cases}
X, & \text{if} \space X\le \frac{1}{2} \\
Y, & \text{if} \space X\gt \frac{1}{2} \\
\end{cases}
$$
Determine $\mathbb{P} (Z\le \frac{1}{5})$.
I believe that $X$ only takes the discrete values $0$ and $1$ with equal probability, but I'm not entirely sure. By intuition, I think that the answer is $\frac{1}{2}$. I'm unsure about this question, so any advice would be appreciated.
| You should define $F(1)$ as $1$ instead of $\frac 1 2$.
$$P(Z \leq \frac 1 5)$$ $$=P(X \leq \frac 1 5)+P(Y \leq \frac 1 5, X>\frac 1 2)$$ $$=P(X \leq \frac 1 5)+P(Y \leq \frac 1 5)P( X>\frac 1 2)$$
$$=\frac 1 2 +\frac 1 5 (1-\frac 1 2 )=\frac 3 5.$$
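A Monte Carlo simulation (a Python sketch, using the corrected value $F(1)=1$ so that $X$ is $0$ or $1$ with probability $\frac12$ each) reproduces the answer:

```python
import random

random.seed(0)

def sample_Z():
    X = random.choice([0, 1])  # X is 0 or 1 with probability 1/2 each
    Y = random.random()        # uniform on (0, 1), independent of X
    return X if X <= 0.5 else Y

N = 200_000
freq = sum(sample_Z() <= 0.2 for _ in range(N)) / N
print(freq)  # close to 3/5
```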
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proving that $SU(n)$ is a smooth manifold Considering this post: Show that $SL(n, \mathbb{R})$ is a $(n^2 -1)$ smooth submanifold of $M(n,\mathbb{R})$. I don't understand how the manipulations in the limit are done, for instance:
$$ \det(A+tA)=(1+t)^n \det(A)$$
and how the limit overall evaluates to: $n \det(A)$
| It's best to write out what you have a question about on this post explicitly, but anyway:
$$
\det(A+tA)=\det((1+t)A)=(1+t)^n\det(A)
$$
by the property of determinants of $n\times n$ matrices that says that $\det(\lambda A)=\lambda^n \det(A)$. Here we just set $\lambda=1+t$. As for the limit in the post:
$$
\lim_{t\to 0}\frac{\det(A+tA)-\det(A)}{t}=\lim_{t\to 0}\frac{(1+t)^n\det(A)-\det(A)}{t}=\det(A)\lim_{t\to 0}\frac{(1+t)^n-1}{t}$$
$$=\det(A)\lim_{t\to 0}\frac{\sum_{k=0}^n{n\choose k}t^k-1}{t}=\det(A)\lim_{t\to 0}\sum_{k=1}^n{n\choose k}t^{k-1}=\det(A){n\choose 1}=n\det(A).
$$
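The limit can be checked numerically with a small matrix (a Python sketch with an arbitrary $3\times 3$ example of my own choosing):

```python
# Numerical check of lim_{t->0} (det(A + tA) - det(A)) / t = n * det(A)
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]]
t = 1e-6
At = [[(1 + t) * x for x in row] for row in A]   # A + tA = (1 + t)A
approx = (det3(At) - det3(A)) / t
print(approx, 3 * det3(A))  # the two values agree up to O(t)
```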
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can a set have a complement in intuitionistic ZF? Does IZF (ZF formulated in intuitionistic logic) prove that for any set $a$, $\{x: x \notin a \}$ does not exist?
| Consider the rank function $\operatorname{rank}$ defined by
$$\operatorname{rank} a = \bigcup\{\operatorname{rank}x + 1 \mid x\in a\}.$$
Then every element of $a$ has a rank smaller than $\operatorname{rank} a$, but you can show that any ordinal $\alpha$ has rank $\alpha$, so any ordinal greater than $\operatorname{rank} a$ is not a member of $a$.
Now I claim that the class $C=\{\beta\in\mathrm{Ord}\mid \alpha\in\beta\}$ is a proper class for any $\alpha$. Assume that $C$ is a set, then $\gamma:=\bigcup C$ is also an ordinal since $C$ is a set of ordinals. Then $\gamma\in C$, which is a contradiction since it implies $\gamma\in C\subseteq \gamma$.
Here are some remarks on my proof:
*
*$\mathsf{CZF}^-$ (Constructive ZF without Subset Collection) suffices for the proof.
*In fact, we can derive more on $C=\{\beta\in\mathrm{Ord}\mid \alpha\in\beta\}$.
Define: a proper class $A$ is inexhaustible if for every subset $x\subseteq A$ there is $y\in A$ such that $y\notin x$. (That is, $A$ is greater than every subset of itself.) Classically, every proper class is inexhaustible, but $\mathsf{IZF}$ need not prove it.
We can see that if every set is subcountable (i.e., every set is an image of a subset of $\omega$), then $\mathcal{P}(1)$ is a proper class, but not an inexhaustible class. (This follows from the fact that $\mathcal{P}(1)$ is a double complement of $2$ and that no double complement of a set is inexhaustible.) However, the statement 'every set is subcountable' is not consistent with $\mathsf{IZF}$, although it is compatible with $\mathsf{CZF}$.
A slight modification of the previous proof shows $C$ is inexhaustible: let $x\subseteq C$, and take $\gamma:=\bigcup x$. Then we have $\gamma\in C$ but $\gamma\notin x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Torsion-free Lie groups Out of curiosity, I'm looking for some examples of Lie groups that are torsion-free.
For some reason (and perhaps there is a good reason), most torsion-free groups I've heard of are discrete. One Lie example that comes to mind is $\mathbb{R}^n$. Also, if $G$ is a Lie group and its torsion subgroup, call it $\Gamma$, is both normal and discrete, then $G/\Gamma$ provides another source of examples.
Question 1: What are some more examples of torsion-free Lie groups?
Question 2: Are there any examples that have non-trivial compact Lie subgroups?
| The group of real upper triangular matrices with $>0$ diagonal entries is a good example of a group without nontrivial elements of finite order.
If a Lie group contains a compact subgroup, that subgroup is a compact Lie group, as every closed subgroup of a Lie group is a Lie group (Chevalley). But a compact Lie group is either finite (hence torsion) or contains a maximal torus, hence $T^1$, the group of complex numbers of modulus 1, and therefore contains elements of order $n$ for every $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is the mathematical term and symbol for division without remainders? What is the mathematical term and symbol for division without remainders?
Take for example 322 / 100.
I know 322 / 100 is division and the result is 3.22.
I know 322 % 100 is a modulo and the result is 22.
But what is the proper term and symbol for division with no remainder?
Eg, 322 SYMBOL 100 would give 3.
| The term is 'integer division'. The symbol used is $\lfloor \frac{322}{100} \rfloor$, or sometimes $[\frac{322}{100}]$; relatedly, $100\,|\,300$ denotes that one number is a divisor of the other.
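In most programming languages, integer division and the remainder come as a pair of operators; for instance, in Python (shown as an illustration, not part of the original answer):

```python
q, r = 322 // 100, 322 % 100   # floor division and remainder
print(q, r)                    # 3 22
print(divmod(322, 100))        # (3, 22) in one call
print(322 == q * 100 + r)      # the division identity always holds
```

Note that Python's `//` floors toward $-\infty$ for negative operands, while languages such as C truncate toward zero, so the two conventions differ on negative inputs.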
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integration with respect to finite variation processes Hey, can anyone explain to me (or recommend a book on) how we construct the stochastic integral of a (not necessarily continuous) process $H$ with respect to a process with finite variation (also not continuous)?
| The stochastic integral is usually constructed for adapted càglàd (continuous on the left, limit on the right) processes as integrands and semimartingales $\big($which are càdlàg (continuous on the right, limit on the left) processes and include the FV-processes$\big)$ as integrators, by using predictable simple processes and the fact that the predictable simple processes are dense under the ucp-topology in the set of adapted càglàd processes. From this you can extend it to a bigger class of integrands. But all this you can read, for example, in the book of Protter ("Stochastic Integration and Differential Equations").
By the way, for integrators which are FV-processes you can also directly use Lebesgue-Stieltjes integration path-by-path. A theorem states (also in the book I recommended) that the Lebesgue-Stieltjes integral and the stochastic integral are indistinguishable in that case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the best book to learn probability theory through a linear algebra perspective? I have heard from my linear algebra professor in undergraduate studies that probability theory can be examined using linear algebra.
As a math student who enjoys linear algebra, does anyone have a good textbook that uses linear algebra to examine probability theory?
| One book that explores probability from a linear algebraic point of view is Hilbert Space Methods in Probability and Statistical Inference by Christopher G. Small and Don L. McLeish. From the introduction:
In the traditional approach to probability and statistics, the
starting point is the concept of the sample space and a class of
subsets called events. The Hilbert space approach shifts the starting
point away from the sample space and replaces it with a Hilbert space
whose elements can be variously interpreted as random variables,
estimating functions, or other quantities depending on the data and
the parameter.
But be aware that this book is almost 30 years old (published 1994), and I don't know to what extent practicing probabilists or mathematical statisticians work within this framework today. (Also note that basic linear and matrix algebra is the natural language for multivariate probability and statistics. The Hilbert space view here is more general.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4045965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Infinite series raised to a power being a power series After thinking about how
$$\left(\sum_{k=0}^\infty \frac{1}{k!}\right)^{z} = \sum_{k=0}^\infty \frac{z^k}{k!}$$
I wondered about what kind of sequence $(a_n)_{n=0}^\infty$ satisfies
$$\left(\sum_{k=0}^\infty a_k\right)^{z} = \sum_{k=0}^\infty a_k z^k \tag{1}$$
for all $z \in \mathbb{C}$. For instance, any such sequence would satisfy
$$\left(\sum_{k=0}^\infty a_k z^k\right) \left(\sum_{k=0}^\infty a_k w^k\right) = \left(\sum_{k=0}^\infty a_k\right)^z \left(\sum_{k=0}^\infty a_k\right)^w = \left(\sum_{k=0}^\infty a_k\right)^{z+w} = \sum_{k=0}^\infty a_k (z+w)^k$$
I will use the notation $(a_n)$ to denote $(a_n)_{n=0}^\infty$ and $(a_n)^z$ to denote $(a_n z^n)$. I would like to find a product $(a_n) \cdot (b_n)$ of such sequences which satisfies
$$(a_n)^z \cdot (a_n)^w = (a_n)^{z+w} \tag{2}$$
for all $z,w \in \mathbb{C}$. If we only consider sequences which satisfy $a_n \binom{n}{k} = a_k a_{n-k}$ for all $k$, then a discrete convolution would be such a product since
$$(a_n)^z \cdot (a_n)^w = (c_n)$$
where
$$c_n = \sum_{k=0}^n a_k a_{n-k} z^k w^{n-k} = a_n \sum_{k=0}^n \binom{n}{k} z^k w^{n-k} = a_n (z+w)^n$$
as required.
Are there any other sequences which satisfy (2) where the product is a discrete convolution? Are there any other products which satisfy (2), for some subset of sequences satisfying (1)? And more importantly, is $(\frac{1}{n!})$ the only sequence satisfying (1)?
| Suppose $(a_k)$ is a sequence such that $\sum_{k=0}^\infty a_k$ converges. Write $c = \sum_{k=0}^\infty a_k$. Assume $c>0$, and let $\log c$ be the natural logarithm of $c$. Then
$$
\left(\sum_{k=0}^\infty a_k\right)^z = c^z
= \exp(z\log c) = \sum_{k=0}^\infty \frac{(\log c)^k}{k!} z^k
$$
So, in order to get the equation you request, there must be a number $c>0$ such that
$a_k = \frac{(\log c)^k}{k!}$.
Of course the famous example is where $c=e$, and $\log c = 1$, so $(\log c)^k = 1$.
I suppose, in case $c > 0$ fails, we have to investigate choosing branches of the logarithm.
For example, we could define $(-1)^z = \exp(i \pi z)$ so that
$$
a_k = \frac{i^k \pi^k}{k!} .
$$
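One can check numerically that $a_k = \frac{(\log c)^k}{k!}$ satisfies $\left(\sum_k a_k\right)^z = \sum_k a_k z^k$ on the principal branch (a Python sketch with a few sample values of $c$ and $z$):

```python
from cmath import exp, log

def power_series(c, z, terms=60):
    # sum of a_k z^k with a_k = (log c)^k / k!, built iteratively
    a, total = 1.0 + 0j, 0j
    for k in range(terms):
        total += a * z ** k
        a *= log(c) / (k + 1)
    return total

for c, z in [(2.718281828459045, 2 + 1j), (5.0, -1.5), (10.0, 0.5 + 0.5j)]:
    direct = exp(z * log(c))  # principal branch of c^z
    print(abs(power_series(c, z) - direct))  # essentially zero
```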
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4046227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find a condition for the unilateral forward shift Let $H$ be an infinite dimensional, separable complex Hilbert space with orthonormal basis $(e_n)_n$.
Let $T$ be the unilateral forward shift with $Te_n= t_n e_{n+1}$, where $(t_n) \in \ell^{\infty}$
Let $\mathcal{K}(H)$ denote the set of compact operators acting on $H$.
Let $\pi:\mathcal{B(H)}\rightarrow\mathcal{B(H)}/\mathcal{K(H)}$ be the canonical quotient map,
Note that $\pi(T^*)= \pi(T)^*$.
Find a necessary condition on the sequence $(t_n)$ so that $\pi(T^*)\pi(T)=\pi(T)\pi(T^*)$.
I suspect that the condition will have something to do with the normality and compactness of $T$. But I'm not sure about this.
Any ideas or help will be appreciated!
| Since $\pi$ is multiplicative, the right point of view is to look at $\pi(T^*T)=\pi(TT^*)$. Even better, looking at $\pi(T^*T-TT^*)=0$, the condition required is that $T^*T-TT^*$ is compact.
In terms of the matrix units $\{E_{kj}\}$ coming from $\{e_n\}$,
$$
T=\sum_k t_kE_{k+1,k},\qquad\qquad T^*=\sum_k\overline{t_k}\,E_{k,k+1}.
$$
Then
$$
T^*T=\sum_k|t_k|^2\,E_{kk},\qquad\qquad TT^*=\sum_k|t_k|^2\,E_{k+1,k+1},
$$
and
$$
T^*T-TT^*=|t_1|^2\,E_{11}+\sum_{k>1}(|t_k|^2-|t_{k-1}|^2)\,E_{kk}.
$$
This is a diagonal operator, and it is compact if and only if $\lim|t_k|^2-|t_{k-1}|^2=0$.
Edit: Working on the basis elements
$$
\langle T^*Te_n,e_m\rangle=\langle Te_n,Te_m\rangle=t_n\overline{t_m}\langle e_{n+1},e_{m+1}\rangle=\langle |t_n|^2e_n,e_m\rangle,
$$
so $T^*Te_n=|t_n|^2e_n$. For $n,m>1$,
$$
\langle TT^*e_n,e_m\rangle=\langle T\overline{t_{n-1}}e_{n-1},e_m\rangle=\langle |t_{n-1}|^2e_n,e_m\rangle.
$$
So,
$$
(T^*T-TT^*)e_1=T^*Te_1=|t_1|^2e_1,
$$
and
$$\tag{$*$}
(T^*T-TT^*)e_n=(|t_n|^2-|t_{n-1}|^2)e_n,\qquad\qquad n>1.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4046433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Area of triangle within an ellipse and hyperbola that have the same foci
Question: An ellipse and a hyperbola have the same foci, $F_1$ and $F_2$. These curves cross at 4
points - let $P$ be one of the points. These curves also intersect the line $\overleftrightarrow{F_1F_2}$ at 4 points labelled $Q$, $R$, $S$ and $T$ in that order. If $RS = 20$, $ST = 14$ and $\triangle PF_1F_2$ is isosceles,
compute the area of $\triangle PF_1F_2$.
I have no idea how to go about solving this. I know that the easiest way of getting the area is probably by SSS and Heron's Formula, and I did get that the major axis is $14 + 20 + 14 = 48$, so $PF_1+PF_2=48$, and probably that $PF_1$ is congruent to $F_1F_2$.
|
If the length of transverse axis of hyperbola is $2a$ and of conjugate axis is $2b$ AND the length of major axis of ellipse is $2d$ and of minor axis is $2e$, $RS = 2a = 20 \implies a = 10$.
Also, $d = 10 + 14 = 24$.
As point $P$ is on both the hyperbola and the ellipse,
$PF_1 - PF_2 = 20$
$PF_1 + PF_2 = 48$
So $PF_1 = 34, PF_2 = 14$.
As $\triangle PF_1 F_2$ is isosceles triangle, either $F_1F_2 = PF_1$ or $F_1F_2 = PF_2$.
If distance of foci from the center is $c$, $c = \frac{F_1F_2}{2}$ and we know $c^2 = d^2 - e^2 = a^2 + b^2$. From this, we confirm that the only possibility is $F_1F_2 = PF_1 = 34$.
So the sides of the isosceles triangle $\triangle PF_1F_2$ are $34, 34$ and $14$. You can find altitude and then the area or use Heron's formula.
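Both routes — Heron's formula and the altitude — give the same area (a quick Python check, not part of the original answer):

```python
from math import sqrt

def heron(a, b, c):
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

area = heron(34, 34, 14)
altitude = sqrt(34 ** 2 - 7 ** 2)  # perpendicular dropped onto the base of length 14
print(area, 0.5 * 14 * altitude)   # both equal 7*sqrt(1107), about 232.9
```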
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4046740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Showing that a group has infinite order Let $G$ be an abelian group such that for all $g \in G$ and for all positive integers $k$, there exists some $h \in G$ such that $h^k = g$.
Now, I would like to show that the group is either the trivial group, or of infinite size. The trivial case is straightforward, but I am having trouble proving the infinite cardinality.
I start by supposing that $G$ has finite cardinality with $|G| = m>1$ where $m$ is some integer. I now pick an abitrary $g$ and $k=m+1$, and the corresponding $h$ so that we have $h^k=g$.
If I am on the right track, I'm lost as to where to go to from here. If I am not on the right track, well then I am quite lost!
Thank you for any help!
| If $m:=|G|$ is finite, then $h^m=e$ for all $h\in G$. So with $k=m$ (not $m+1$), we conclude from the existence of $h$ with $h^k=g$ that $g=e$ (and so $m=1$).
Alternative approach:
Suppose $G$ is not trivial and let $1\ne a\in G$. If $a$ has infinite order, $G$ is infinite and we are done. So assume $a^m=1$ for some $m>1$. Then $x\mapsto x^m$ is not injective, but by assumption it is surjective. This is possible only for infinite sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4046855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Range of numbers not divisible by $2$ or $3$ or $5$ How many numbers lie in the range of $1331$ and $3113$ (Endpoints Inclusive) such that they are not divisible by $2$ or $3$ or $5$?
I came across this Number System question and was not able to figure out the way to get the answer. At first I thought to separate out the numbers which are divisible by the LCM of $2$, $3$ & $5$ but this will still leave some numbers which will be exclusively divisible by any one of $2$ or $3$ or $5$.
Another method I found out to solve these type of questions is by using the Euler's Totient method but I am not aware of this.
So can someone please explain a general way to solve these types of questions?
A general question would be: how many numbers lie in the range of $A$ and $B$ such that they are not divisible by $x$ or $y$ or $z$ and so on?
| This is a classic application of the Inclusion-Exclusion Principle.
I will write $a|b$ as a shorthand for "$a$ divides $b$". You want to know how many numbers (between 1331 and 3113) are not divisible by 2, 3, or 5. First I will count the number divisible by either 2, 3, or 5. So let $1331 \le n \le 3113$ (the question does not specify whether the endpoints are included, but at this point it doesn't matter).
We want to count the number of $n$ satisfying:
$2|n$ OR $3|n$ OR $5|n$.
There are 891, 594, and 356 such numbers, respectively. This is the first term in the inclusion-exclusion formula. The answer is not just $891+594+356$, because we have double-counted those numbers in the overlap - namely, those numbers such that:
($2|n$ AND $3|n$), OR ($2|n$ AND $5|n$), OR ($3|n$ AND $5|n$)
This group can be simplified to:
$6|n$ OR $10|n$ OR $15|n$
There are 297, 178, 119 in each group. So to remove the double-counted elements, we should take them away from our sum, right?
Well, $(891+594+356) - (297 + 178 + 119)$ is still not the answer, because we have removed too many items. To cut a verbose story short, consider those numbers divisible by 2, 3 and 5 (i.e. by 30). We counted them three times in the first counting, and removed them three times in the second counting.
So we have to add them back in, and there are 59 such numbers. So the final answer is
$$891+594+356 - 297-178-119 + 59 = 1306$$
Since there are $3113 - 1331 + 1 = 1783$ numbers in the range, your final answer is $1783 - 1306 = 477$, if "between" includes the endpoints for you.
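As a sanity check (not part of the proof), both counts can be confirmed by brute force in a few lines:

```python
# Brute-force check over the inclusive range 1331..3113.
divisible = sum(1 for n in range(1331, 3114)
                if n % 2 == 0 or n % 3 == 0 or n % 5 == 0)
not_divisible = sum(1 for n in range(1331, 3114)
                    if n % 2 and n % 3 and n % 5)
print(divisible, not_divisible)  # 1306 477
```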
The general question
If $x, y, z$ are coprime numbers, then the general question is exactly the same as above. You just do
$$(\lfloor B/x \rfloor - \lfloor A/x \rfloor) + (\lfloor B/y \rfloor - \lfloor A/y \rfloor) + \ldots - (\lfloor B/xy \rfloor - \lfloor A/xy \rfloor) - \ldots + (\lfloor B/xyz \rfloor - \lfloor A/xyz \rfloor) .$$
If they aren't coprime, then it's slightly different - suppose for example $x=6$, $y=9$, $z=12$. Then "$6|n$ AND $9|n$" is equivalent to $18|n$, not to $54|n$ as you might naively guess.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4047057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Prove that the algebraic sum of the ordinates of intersection of a circle and a parabola is 0 I consider two curves $x^2=4ay$ and $x^2+y^2=\lambda^2$
So
$$y^2+4ay-\lambda^2=0$$
And $$y_1+y_2\not =0$$
I just want to know whether the question itself is right or not.
| For $x^2=4ay$, sum of abscissae will be zero due to symmetry about y-axis.
For $y^2=4ax$, sum of ordinates will be zero due to symmetry about x-axis.
(Perhaps the question has been mixed up? Or was it originally a true/false question?)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4047225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $f'\ge0, f''\ge0 \ \forall x \in \mathbf{R}$. If there is $c$ such that $f'(c)>0$, then $\lim_{x\to\infty}f(x)=\infty$
Let $f$ be a twice differentiable function. Suppose $f'(x)\ge0, f''(x)\ge0 \ \forall x \in \mathbf{R}$. If there exists $c \in \mathbf{R}$ such that $f'(c)>0$, show that $\lim_{x\to\infty}f(x)=\infty$. My attempt is a proof by contradiction, so I would like some feedback on my proof, please. The step where I require $f'(c_x)>0$ myself seems a little bit strange to me.
Suppose, for contradiction, that $f$ converges to $L$: $\lim_{x\to\infty}f(x)=L$. As $f''\ge0$, $f$ is convex and we can deduce that $f'$ is increasing.
Consider the interval $[x,x+a]$ with $a>0$. As $f$ is differentiable on $\mathbf{R}$, it is differentiable on $[x,x+a]$ and so continuous on $[x,x+a]$. By mean value theorem there exists $c_x\in ]x,x+a[$ such that: $f'(c_x)=\frac{f(x+a)-f(x)}{a}$. As $x$ is "arbitrary" in the interval, we can suppose that $f'(c_x)=\frac{f(x+a)-f(x)}{a}>0$. But,
$f'(c_x)=\frac{f(x+a)-f(x)}{a} \iff \underbrace{\lim_{x\to\infty}f'(c_x)}_{(*)}=\lim_{x\to\infty}\frac{f(x+a)-f(x)}{a}=\frac{L-L}{a}=0\ngtr 0$. So we got a contradiction. Thus, $f$ diverges.
(*) As $f'$ is increasing, as $x$ goes to infinity the inequality might hold and so $\lim_{x\to\infty}f'(c_x)>0$ might hold.
| I think your proof is fine, but the idea behind the proof is somewhat obscured. To me, it seems the key point is to prove there is a line of positive slope that bounds $f$ from below. Indeed, if $f'(c) > 0$, then $f(x) - f(c) = f'(c_x)(x-c) \geq f'(c)(x-c)$. The right side clearly tends to $\infty$ as $x \to \infty$, hence so must $f$.
Edit, I should also point out that this proof also works for convex $f$ (not assumed to be twice differentiable), as convex implies $f$ is above any tangent line, which is all we really used.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4047411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f'(t)$ exists then $\lim_{h,k\to0}\frac{f(t+h)-f(t-k)}{h+k}=f'(t)$ Consider $f'$ continuous...
I tried to prove it with Taylor's expansion:
$f(t+h)=f(t)+hf'(t)+h\phi(t,h)$
$f(t-k)=f(t)-kf'(t)-k\phi(t,-k)$
Then:
$\frac{f(t+h)-f(t-k)}{h+k}=f'(t)+\frac{h}{h+k}\phi(t,h)+\frac{k}{h+k}\phi(t,-k)$
I don't know how I can prove $\lim_{h,k\to0}\frac{h}{h+k}\phi(t,h)=0$
| If $f'$ is continuous, by the MVT, we have $\frac{f(t + h) - f(t - k)}{(t + h) - (t - k)} = f'(\xi)$ for some
$\xi$ between $t+h$ and $t - k$. As $h, k \to 0$, we have $\xi \to t$.
What if $f'$ is not continuous?
A counterexample
Let $f: (a, b) \to \mathbb{R}$ be differentiable. Let $t \in (a, b)$.
Can we say $\lim_{h, k\to 0} \frac{f(t + h) - f(t - k)}{h + k} = f'(t)$?
No. $\lim_{h, k\to 0} \frac{f(t + h) - f(t - k)}{h + k}$ may not exist.
Let
$$f(x) = \left\{\begin{array}{cc}
x^2\sin\frac{1}{x} & x\ne 0 \\
0 & x = 0.
\end{array}
\right.$$
Let $t = 0$. $f$ is differentiable on $\mathbb{R}$ and its derivative is given by
$$f'(x) = \left\{\begin{array}{cc}
2x\sin\frac{1}{x} - \cos \frac{1}{x} & x\ne 0 \\[5pt]
0 & x = 0.
\end{array}
\right.$$
Let us prove that
$\lim_{h, k\to 0} \frac{f(h) - f( - k)}{h + k}$ does not exist.
Consider $h_n = \frac{1}{n}$ and $k_n = -\frac{1}{n} + \frac{1}{n^2}$.
Then as $n \to \infty$, we have $(h_n, k_n) \to (0, 0)$.
However, we have
\begin{align}
&\frac{f(h_n) - f( - k_n)}{h_n + k_n} \\
=\ & \sin n - \sin\left(n + 1 + \frac{1}{n-1}\right) + \frac{2n - 1}{n^2}\sin \frac{n^2}{n-1}\\
=\ & -2\cos\left(n + \frac{1}{2} + \frac{1}{2n - 2}\right)
\sin\left(\frac{1}{2} + \frac{1}{2n - 2}\right) + \frac{2n - 1}{n^2}\sin \frac{n^2}{n-1}.
\end{align}
Clearly, it does not converge to zero as $n\to \infty$.
Discussion
However, the following is true:
Suppose that $f$ is differentiable at $x$. Then
$$f'(x) = \lim_{h\to 0^{+},\ k\to 0^{+}} \frac{f(x+h) - f(x-k)}{h+k}.$$
Calculus by Spivak, 3rd Ed., page 164, question 22(b).
See: Prove that $f'(x) = \lim_{h\to 0^+ \\k\to 0^+} \frac{f(x+h) - f(x-k)}{h+k}$
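As a numerical illustration of the counterexample above, the difference quotients along $(h_n, k_n)$ keep oscillating instead of settling down (the function and sequence are exactly those defined above):

```python
import math

def f(x):
    # f(x) = x^2 sin(1/x) for x != 0, and f(0) = 0
    return x * x * math.sin(1 / x) if x != 0 else 0.0

def quotient(n):
    h = 1 / n
    k = -1 / n + 1 / n ** 2   # so h + k = 1/n^2 > 0
    return (f(h) - f(-k)) / (h + k)

vals = [quotient(n) for n in range(10, 200)]
# The quotients wander over a wide interval rather than converging:
print(min(vals), max(vals))
```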
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4047593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $M$ is connected and $\int_M \omega = 0,$ then $\omega = 0$? Let $M$ be a connected, orientable, and compact $(r+1)$-dimensional manifold and $\omega$ a $(r+1)$-differential form on $M$. If $\int_M \omega = 0$, then $\omega =0$?
I want to say that the answer is yes. The intuitive reason behind it is that if $\omega\neq 0$, since $M$ is connected then "it must be $\omega>0$ or $\omega<0$", so we have $\int_M \omega \neq 0$. I've been trying to formalize this intuition by first stating what $\omega>0$ means. Since we can write $\omega(x) = a(u) du_1\wedge \cdots \wedge du_{r+1},x\in M,$ for $x = \varphi(u)$ and some chart $\varphi$, $\omega>0$ would mean that $a(u)>0$ for every $x\in M$. But I'm struggling to use the fact that $M$ is connected to ensure that $a(u)>0$ must hold.
How can I proper formalize this intuition?
| This is not true. For instance, take $M$ to be a manifold (without boundary) and $\omega$ an exact form, so that $\omega=d\eta$. Then by Stokes' Theorem
$$
\int_{M}d\eta=\int_{\partial M}\eta=0.
$$
but being exact does not imply being zero. E.g. we can find $\eta$ so that $d\eta\ne 0$. For a simple example, take any nonconstant function $f$ on $S^1$, then $df\in \Omega^1(S^1)$ is exact but not zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4047719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
What sequence is this $\prod_{i=1}^n \frac{2i}{2i + 1} = $? I'm trying to find a closed form solution for $\prod_{i=1}^n \frac{2i}{2i + 1} $, but I'm confused what kind of sequence this is. Does it have a closed form?
| Closed Form
$$
\begin{align}
a_n
&=\prod_{k=1}^n\frac{\color{#C00}{2k}}{\color{#090}{2k+1}}\tag{1a}\\
&=\frac{\color{#C00}{2^nn!}\,\color{#090}{2^nn!}}{\color{#090}{(2n+1)!}}\tag{1b}\\[3pt]
&=\frac{4^n}{(2n+1)\binom{2n}{n}}\tag{1c}
\end{align}
$$
Bounds
$$
\begin{align}
\frac{\sqrt{n+1}\ a_n}{\sqrt{n}\ a_{n-1}}
&=\sqrt{\frac{n+1}{n}}\frac{2n}{2n+1}\tag{2a}\\
&=\sqrt{\frac{4n^3+4n^2}{4n^3+4n^2+n}}\tag{2b}\\[9pt]
&\le1\tag{2c}
\end{align}
$$
Therefore, $\sqrt{n+1}\ a_n$ is decreasing.
$$
\begin{align}
\frac{\sqrt{n+\frac34}\ a_n}{\sqrt{n-\frac14}\ a_{n-1}}
&=\sqrt{\frac{4n+3}{4n-1}}\frac{2n}{2n+1}\tag{3a}\\
&=\sqrt{\frac{16n^3+12n^2}{16n^3+12n^2-1}}\tag{3b}\\[12pt]
&\ge1\tag{3c}
\end{align}
$$
Therefore, $\sqrt{n+\frac34}\ a_n$ is increasing.
Since $\sqrt{n+1}\ a_n$ is decreasing and greater than $\sqrt{n+\frac34}\ a_n$, which is increasing, and their ratio tends to $1$, they tend to a common limit, $L$.
$$
\begin{align}
L
&=\lim_{n\to\infty}\sqrt{n+1}\prod_{k=1}^n\frac{2k}{2k+1}\tag{4a}\\
&=\lim_{n\to\infty}\frac{\sqrt{n+1}}{2n+1}\frac{4^n}{\binom{2n}{n}}\tag{4b}\\
&=\frac{\sqrt\pi}2\tag{4c}
\end{align}
$$
Explanation:
$\text{(4a)}$: definition of $L$
$\text{(4b)}$: apply $(1)$
$\text{(4c)}$: Theorem $1$ from this answer
Thus $(2)$, $(3)$, and $(4)$ yield
$$
\bbox[5px,border:2px solid #C0A000]{\sqrt{\frac{\pi}{4n+4}}\le\prod_{k=1}^n\frac{2k}{2k+1}\le\sqrt{\frac{\pi}{4n+3}}}\tag5
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4048022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proof of relation between divided difference and interpolation polynomial coefficient Given data points $\{(x_i,f(x_i))\}_{i=0}^{m}$, if we define divided difference recursively as:
$$f[x_0,\cdots,x_{k+1}] = \frac{f[x_1,\cdots,x_{k+1}]-f[x_0,\cdots,x_k]}{x_{k+1}-x_0}, \text{ with the definition that } f[x_0] = f(x_0),$$
then it is true that the unique interpolating polynomial $L_m(x)$ of degree $m$ such that $L_m(x_i) = f(x_i)$ for all $i = 0\cdots m$ is given by:
$$
L_m(x) = \sum_{k=0}^mf[x_0,\cdots,x_k]\omega_k(x), \text{ where } \omega_k(x) = \prod_{i=0}^{k-1}(x-x_i).
$$
The question asks to show this. That is, show the coefficients of the interpolating polynomial of degree $m$ in the basis $\{\omega_k\}_{k=0}^{m}$ is exactly $a_k = f[x_0,\cdots,x_k]$, the divided differences.
To tackle this problem, I've attempted to use induction. The base step is trivial as $a_0 = f(x_0)$, but I am stuck on the inductive step. I supposed the degree $m$ interpolating polynomial of some data $\{(x_i,f(x_i))\}_{i=0}^{m}$ is indeed given by:
$$
L_m(x) = \sum_{k=0}^mf[x_0,\cdots,x_k]\omega_k(x)
$$
and I want to argue that $L_{m+1}(x) = \sum_{k=0}^{m+1}f[x_0,\cdots,x_k]\omega_k(x)$ is the interpolating polynomial of degree $m+1$ interpolating $\{(x_i,f(x_i))\}_{i=0}^{m+1}$, with any additional point $(x_{m+1},f(x_{m+1}))$ added.
Clearly, $L_{m+1}(x)$ interpolates $\{(x_i,f(x_i))\}_{i=0}^{m}$ by definition of the $\omega_k$'s, but I am stuck at showing $L_{m+1}(x_{m+1}) = f(x_{m+1})$. Hints will be extremely appreciated!
| This proof was more delicate than I expected.
First observe that by an easy induction, for all $x_0,\ldots, x_m$ we have
$$f[x_0,\ldots, x_m]=f[x_m,\ldots,x_0]\,. \quad \quad (1)$$
Recall $L_m$ is defined to be the degree $m$ polynomial interpolating $f$ at $x_0,\ldots,x_m$. Our goal is to verify by induction that
$$L_m(x)+f[x_0,\ldots,x_{m+1}] \, \omega_{m+1}(x)=L_{m+1}(x) \, \quad \quad (2) $$
for all choices of $x,x_0,\ldots, x_m.$
Since both sides of (2) are degree $m+1$ polynomials and (2) is clear for $x=x_0,\ldots,x_m$, it suffices to verify it also holds for $x=x_{m+1}$. To this end, define $Q_k$ as the degree $k$ polynomial interpolating $f$ at $x_1,\ldots,x_{k+1}$ and denote
$$\psi_m(x):=\prod_{j=1}^m (x-x_j)=\frac{\omega_{m+1}(x)}{x-x_0} \,. \quad \quad (3)$$
By the induction hypothesis, equation (2) holds for polynomials of lower degree, so
$$Q_m(x)=Q_{m-1}(x)+f[x_1,\ldots,x_{m+1}] \psi_{m}(x) \, \quad \quad (4) $$
and by considering the points in the reverse order $x_m,\ldots,x_1,x_0$
$$L_m(x)=Q_{m-1}(x)+f[x_m,\ldots,x_0] \psi_{m}(x) \, \quad \quad (5) $$
Subtracting the last two equations and setting $x=x_{m+1}$ gives
$$Q_m(x_{m+1})-L_m(x_{m+1})=\Bigl(f[x_1,\ldots, x_{m+1}]- f[x_m,\ldots,x_0] \Bigr)\psi_{m}(x_{m+1})\quad \quad \quad \quad \quad \; \; \; \;$$
$$=f[x_0,\ldots, x_{m+1}]\, \omega_{m+1}(x_{m+1}) \,, \quad \quad (6) $$
using equations (1) and (3) in the last step.
Since $Q_{m}(x_{m+1})=f(x_{m+1})=L_{m+1}(x_{m+1})$, equation (6) gives (2) for $x=x_{m+1}$. This completes the induction step and the proof of (2).
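For what it's worth, the identity can also be cross-checked numerically; a minimal sketch (the test function and nodes below are illustrative choices):

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xm]] via the recursion."""
    n = len(xs)
    row = list(ys)
    coeffs = [row[0]]
    for level in range(1, n):
        row = [(row[i + 1] - row[i]) / (xs[i + level] - xs[i])
               for i in range(n - level)]
        coeffs.append(row[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate sum_k f[x0..xk] prod_{i<k}(x - x_i), Horner-style."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

f = lambda x: x ** 3 - 2 * x + 1
xs = [0.0, 1.0, 2.5, 4.0]
coeffs = divided_differences(xs, [f(x) for x in xs])
# The Newton form interpolates f at every node:
print(all(abs(newton_eval(xs, coeffs, x) - f(x)) < 1e-9 for x in xs))  # True
```

Note that the leading coefficient `coeffs[-1]` comes out as $1$ here, matching the leading coefficient of the cubic, as the theory predicts.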
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4048158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the probability of winning this game?
A tennis player has a 60% chance of winning any given point in a tennis game. Calculate the probability that she will win the game within the first 6 points, stating any assumptions you make.
(A game is won when a player has won at least 4 points and won at least 2 more points than their opponent.)
So there are 3 choices:
*
*she wins 4 points with the probability 60%$^{4}$
*she wins 4 points and lose 1 point with probability 4 $\times$ 60%$^{4}$ $\times$ 40%
*she wins 4 points and lose 2 points 15 $\times$ 60%$^{4}$ $\times$ 40%$^{2}$
So the total probability of her winning the game is 0.69984. Is this right?
| To win in $\leq 6$ points, she must lose at most $2$ points in $6$, so
P(win) = $\binom 6 0\cdot 0.4^0\cdot 0.6^6 + \binom 6 1\cdot 0.4^1\cdot 0.6^5 + \binom 6 2\cdot 0.4^2 \cdot 0.6^4 = 0.54432$
PS
The above formulation is the simplest way to get the answer. However, just for corroboration, let us also solve it by summing up
P(win in exactly $4,5,\;or\; 6$) points
To win in exactly $4$ points, she must win all: $0.6^4 =0.1296$
To win in $5$ points, she must win the fifth point and three of the first four: $\left(\binom 4 3 \cdot 0.6^3\cdot0.4\right)\cdot0.6 = 0.20736$
To win in $6$ points, she must win the sixth point, and three of the first five:$\left(\binom 5 3 \cdot 0.6^3\cdot0.4^2\right)\cdot0.6 = 0.20736$
$0.1296+ 0.20736 +0.20736 = 0.54432$, as before
But that is an error-prone method, with possibly confusing binomial coefficients.
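Both computations are easy to verify with a few lines (an illustrative check only):

```python
from math import comb

p, q = 0.6, 0.4

# Way 1: lose at most 2 of the first 6 points.
win_le6 = sum(comb(6, losses) * q ** losses * p ** (6 - losses)
              for losses in range(3))

# Way 2: win on exactly the 4th, 5th or 6th point.
win_exact = (p ** 4
             + comb(4, 3) * p ** 3 * q * p
             + comb(5, 3) * p ** 3 * q ** 2 * p)

print(round(win_le6, 5), round(win_exact, 5))  # 0.54432 0.54432
```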
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4048393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that if $\lim_{n\to\infty} \frac{x_n}{y_n} = 0$, then $\lim_{n\to\infty} x_n \div \frac{x_n + y_n}{2} = 0$ Let $\{x_n\}$ and $\{y_n\}$ be positive sequences. Assume $\lim_{n\to\infty} \frac{x_n}{y_n} = 0$. I have to prove if the claim $\lim_{n\to\infty} x_n \div \frac{x_n + y_n}{2} = 0$ is always true, always false, or sometimes true and sometimes false depending on the sequences.
My attempt:
\begin{align*}
x_n + y_n &\geq y_n \\
0 < \frac{1}{x_n + y_n} &\leq \frac{1}{y_n} \\
0 < \frac{x_n}{x_n + y_n} &\leq \frac{x_n}{y_n}
\end{align*}
Then by the Squeeze Theorem, $\lim_{n\to\infty} \frac{x_n}{x_n + y_n} = 0 \implies \lim_{n\to\infty} \frac{2x_n}{x_n + y_n} = 0$, which is what we wanted. Could you check whether my proof makes sense? Are there any sequences $x_n$ and $y_n$ for which this statement is not true?
| Alternative approach
$\frac{y_n}{x_n} \to \infty \implies$
$\frac{1}{2} \times \frac{x_n + y_n}{x_n} = \frac{1}{2}
\times \left[1 + \frac{y_n}{x_n}\right] \to \infty.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4048522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
There appears to be a contradiction while solving recurrence relation Consider the equation :
$$
y[n]-\frac{1}{2}y[n-1]=x[n]
$$
where $x[n]:=\left(\frac{1}{3}\right)^{n}u[n]$. First, I was asked to find the homogeneous part of the solution, having the form $y_{h}:=A\frac{1}{2^{n}}$. Indeed the homogeneous solution
$$
y_{h}[n]=\frac{1}{2^{n}}y[0]
$$
where I insist $A=y[0]$. Furthermore, the particular solution is given in the form $y_{p}[n]:=B\left(\frac{1}{3}\right)^{n}u[n]$. I shall determine $B$ as follows:
$$
B\left(\frac{1}{3}\right)^{n}u[n]-\frac{1}{2}B\left(\frac{1}{3}\right)^{n-1}u[n-1]=\left(\frac{1}{3}\right)^{n}u[n]
$$
\begin{align*}
\implies B&=\frac{\displaystyle\left(\frac{1}{3}\right)^{n}u[n]}{\displaystyle\left(\frac{1}{3}\right)^{n}u[n]-\frac{1}{2}\displaystyle\left(\frac{1}{3}\right)^{n-1}u[n-1]}\\ \\
&=\frac{\displaystyle\left(\frac{1}{3}\right)^{n}}{\displaystyle\left(\frac{1}{3}\right)^{n}-\displaystyle\frac{1}{2}\left(\frac{1}{3}\right)^{n-1}}\\ \\
&=-2
\end{align*}
So now that we have $B=-2$, we are given the initial rest condition (a condition that states that if $x[n]=0$ $\forall n<0$, then $y[n]=0$ $\forall n<0$). So what I did was substitute $n=0$ into the difference equation
$$
y[0]-\frac{1}{2}y[-1]=x[0]
$$
$$
\implies y[0]=1
$$
$$
\text{So this means $A=y[0]=1$}
$$
However, the major issue is that :
$$
y[0]-\frac{1}{2}y[-1]=x[0]
$$
$$
\implies y[0]=1
$$
$$
\implies y[n]:= y_{h}[n]+y_{p}[n] \implies y[0]=y_{h}[0]+y_{p}[0]=1
$$
$$
\implies A\left(\frac{1}{2}\right)^{0} -2\left(\frac{1}{3}\right)^{0}=1
$$
$$
\implies A=3
$$
So I have $A=1$ and $A=3$, an obvious contradiction and I hope someone can help me.
Remark: $u[n]$ is the unit-step function where:
$$
u[n]:=
\begin{cases}
1&\text{if $n\geq0$}\\
0&\text{if $n<0$}
\end{cases}
$$
| The homogeneous solution gives
$$
y_h(n) = 2^{1-n}c_0
$$
now, making the variation-of-parameters ansatz $y_p(n) = 2^{1-n}c_0(n)$ for the particular solution and substituting into the complete recurrence, we have
$$
c_0(n)-c_0(n-1) = \frac{2^{n-1}}{3^n}
$$
and solving this recurrence we have
$$
c_0(n) = 1-\left(\frac23\right)^n
$$
hence
$$
y(n) = y_h(n)+y_p(n) = 2^{1-n}c_0 + 2^{1-n}\left(1-\left(\frac23\right)^n\right), \ \ \ n\ge 0
$$
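Under the initial rest condition of the question, $y[-1]=0$ forces $y(0)=1$ and hence $c_0=\tfrac12$ above, which simplifies to $y(n)=3\cdot 2^{-n}-2\cdot 3^{-n}$ (this last simplification is my own, not stated in the answer). A quick numerical cross-check against the recurrence:

```python
def x(n):
    # input x[n] = (1/3)^n u[n]
    return (1 / 3) ** n if n >= 0 else 0.0

y_prev = 0.0   # initial rest: y[-1] = 0
ok = True
for n in range(0, 30):
    y = x(n) + y_prev / 2             # y[n] = x[n] + (1/2) y[n-1]
    closed = 3 * 0.5 ** n - 2 * (1 / 3) ** n
    ok = ok and abs(y - closed) < 1e-12
    y_prev = y

print(ok)  # True
```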
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4049259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
is there any way to represent irrational numbers with a finite amount of integers? I know that rational numbers can be represented with two integers $\frac{a}{b}$.
But is there any way to represent irrational numbers with a finite amount of integers?
My best guess is $\frac{a}{b} ^ \frac{c}{d}$.
It can represent any root of any number, but I don't know if it can represent things like $\sqrt{2}^\sqrt{2}$.
| Suppose you have an encoding scheme allowing you to express a subset $C\subseteq \mathbb{R}$ of real numbers using, for each, a finite number of integers. Then you have an injection from $C$ into
$$
\cup_{k\in \mathbb{N}} \mathbb{N}^k
$$
and since this is countable, then so is $C$. But the irrational numbers are uncountable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4049395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Deconstructing the numerator in derivatives I'm trying to deconstruct $\frac{d(\sqrt{x^2+y^2})}{dt}$ into possibly $\frac{dx}{dt}$ and $\frac{dy}{dt}$, but I have no idea where to start or what to search.
| The chain rule says $\frac{d f(g(t))}{dt}=f'(g(t))g'(t)$. In this case, take $f(z)=\sqrt z$ and $g(t)=x(t)^2+y(t)^2$ so that
$$\frac{d\sqrt{x(t)^2+y(t)^2}}{dt}=\frac1{2\sqrt{x(t)^2+y(t)^2}}\cdot\frac d{dt}(x(t)^2+y(t)^2)$$
and the rest follows pretty quickly with another application of the chain rule. I would hesitate to call $f$ a numerator; I'd say it's the function which you're taking the derivative of. I'm not sure if there's a slick term like "integrand".
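Carrying the computation through gives $\frac{d}{dt}\sqrt{x^2+y^2}=\frac{x\,\frac{dx}{dt}+y\,\frac{dy}{dt}}{\sqrt{x^2+y^2}}$, which can be sanity-checked with a finite difference (the particular $x(t)$, $y(t)$ below are illustrative choices):

```python
import math

def x(t): return math.cos(t)   # illustrative curve
def y(t): return t * t         # illustrative curve
def r(t): return math.sqrt(x(t) ** 2 + y(t) ** 2)

def r_prime(t):
    # (x x' + y y') / sqrt(x^2 + y^2), with x'(t) = -sin t and y'(t) = 2t
    return (x(t) * -math.sin(t) + y(t) * 2 * t) / r(t)

t, h = 1.3, 1e-6
numeric = (r(t + h) - r(t - h)) / (2 * h)   # central difference
print(abs(numeric - r_prime(t)) < 1e-6)     # True
```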
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4049526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\mathcal{B}$ an algebra $\implies f^{-1}(\mathcal{B}) $ an algebra Im trying to prove the following statement:
Let $f: \Omega \to E$.
$\mathcal{B}$ an algebra on $E$ $\implies f^{-1}(\mathcal{B}) $ an algebra on $\Omega$.
To show that $f^{-1}(\mathcal{B})$ is an algebra, following has to hold:
*
*$\Omega \in f^{-1}(\mathcal{B})$
*Stable under complementation
*Stable under finite union
My question:
In the first point, since $\mathcal{B}$ is an algebra on $E$, it follows that $E \in \mathcal{B}$. But how do I even know that $f^{-1}(E)$ is defined? Don't we need the restriction that the function be bijective in order to guarantee that $f^{-1}$ exists?
EDIT (after suggestion by Henno Brandsma)
*
*$\Omega \in f^{-1}(\mathcal{B})$ since $f^{-1}[E] = \Omega$ and $E \in \mathcal{B}$.
*Then regarding the complement: I take any $A \in \mathcal{B}$ so the complement would be $A^c = E \setminus A$. Note that $A^c \in \mathcal{B}$. So $f^{-1}[E\setminus A]= \Omega \setminus f^{-1}[A]$ and $f^{-1}[A]$ is defined since we know $f^{-1}[E] = \Omega$ and $A \subset E$. So we have the complement of $f^{-1}[A]$ as well.
*This one follows from the fact that $f^{-1}[\bigcup_i A_i] = \bigcup_i f^{-1}[A_i]$.
| Use that
*
*$f^{-1}[E]=\Omega$
*$f^{-1}[E\setminus A] = \Omega\setminus f^{-1}[A]$ for all $A \subseteq E$,
*$f^{-1}[\bigcup_i A_i] = \bigcup_i f^{-1}[A_i]$
Note that $f^{-1}[A]$ is always defined, it's just the set $$\{x \in \Omega\mid f(x) \in A\}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4049832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Verification of solution: $\lambda\left(\bigcup\limits_{k=1}^\infty A_k\right) < \infty$
$A_1$, $A_2$, $\dots \subset \mathbb{R}^2$ given through $$ A_k := \{
(x,y)\in \mathbb{R}^2:\text{ } k\leq x^2+y^2 \leq k+\frac{1}{2k^2}\}
$$ Show:
a) $\bigcup\limits_{k=1}^\infty A_k$ is Borel-measurable
b) $\lambda \left(\bigcup\limits_{k=1}^\infty A_k\right) < \infty $
I do understand how a) works.
But b) is a bit weird in my opinion.
Edit: $\lambda$ is Lebesgue measure.
My solution:
I defined $r:= x^2+y^2$. Then $r\in [k,k+\frac{1}{2k^2}]$ for $A_k$. I also see $A_k \cap A_{k+1} = \varnothing$.
Then it is $$ \lambda\left(\bigcup\limits_{k=1}^\infty A_k\right) \leq \sum\limits_{k=1}^\infty \lambda(A_k) = \sum\limits_{k=1}^\infty \left(k+\frac{1}{2k^2}-k\right) = \sum\limits_{k=1}^\infty \frac{1}{2k^2} < \infty $$
Official solution:
In the official solution they use the closed ball $\overline{B_{r_k}}$ and the open ball $B_{\rho_k}$ with $r_k = \sqrt{k+\frac{1}{2k^2}}$ and $\rho_k = \sqrt{k}$ $\Rightarrow A_k = \overline{B_{r_k}}\setminus B_{\rho_k}$. Then $\sum\limits_{k=1}^\infty \lambda(A_k) \leq \sum\limits_{k=1}^\infty \frac{\pi}{2k^2} < \infty$
My question:
Is my way to prove this also right? And if it isn't: why not? I understand the official solution, but it didn't come to my mind at first and I am unsure whether it is the only right way to prove b).
| Let us start by defining $c = \lambda (\{(x,y): x^{2}+y^{2} \leq 1\})$ (it can be proved that $c= \pi$, but we don't need that for this solution). Let us prove the following lemma:
Lemma:
*
*If $s \geqslant 0$, then $\lambda (\{(x,y): x^{2}+y^{2} \leq s\})= cs$.
*If $0 \leqslant r \leqslant R$, then $\lambda (\{(x,y): r \leq x^{2}+y^{2} \leq R\})= c(R-r)$.
Proof: Item 1.
\begin{align*}
\lambda (\{(x,y): x^{2}+y^{2} \leq s\}) & = \lambda (\{(\sqrt{s}x,\sqrt{s}y): x^{2}+y^{2} \leq 1 \})= \\
& = \lambda (\sqrt{s}\,\{(x,y): x^{2}+y^{2} \leq 1 \})= \\
& =(\sqrt{s})^2 \lambda (\{(x,y): x^{2}+y^{2} \leq 1\})= \\
& = sc=cs
\end{align*}
where we used that, for any $E$ measurable and $\alpha \geqslant 0$, $\lambda(\alpha E) = \alpha^2 \lambda(E)$.
Item 2.
Using the fact that $\lambda (\{(x,y): x^{2}+y^{2} = r\})=0$, we have
\begin{align*}
\lambda (\{(x,y): r \leq x^{2}+y^{2} \leq R\}) &= \lambda (\{(x,y): x^{2}+y^{2} \leq R\}) - \lambda (\{(x,y): x^{2}+y^{2} < r\}) =\\
&= \lambda (\{(x,y): x^{2}+y^{2} \leq R\}) - \lambda (\{(x,y): x^{2}+y^{2} \leq r\}) =\\
&= cR-cr = c(R-r)
\end{align*}
This completes the proof of the lemma. $\square$
Now, using the lemma above and the fact that the sets $A_1, A_2, \dots$ are pairwise disjoint,
$$ \lambda\left(\bigcup\limits_{k=1}^\infty A_k\right) \leq \sum\limits_{k=1}^\infty \lambda(A_k) = \sum\limits_{k=1}^\infty c\left(k+\frac{1}{2k^2}-k\right) = \sum\limits_{k=1}^\infty \frac{c}{2k^2} < \infty $$
Remark: As you see, your answer is almost correct, it is missing some details and a factor $c$ (whose actual value is $\pi$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What we need to show, $ab=1\bmod k$ or $(ab)\bmod n=1\bmod k$? For proving $U_k(n)≤U(n)$ I need to show that,
For each divisor $k$ of $n$, $U_k(n)$ is subgroup of $U(n)$ where, $U_k(n)=\{x\in U(n) : x=1\bmod k\}$
My attempt: as $U(n)$ is a finite group for each $n\in\mathbb{Z}^+$,
we may use the "finite subgroup test" to show that $U_k(n)$ is a subgroup of $U(n)$.
Clearly, $1\in U_k(n)$ since $1\equiv 1\bmod k$.
Let $a,b\in U_k(n)$; then $a=1\bmod k$ and $b=1\bmod k$, so clearly $ab=1\bmod k$, so that $ab\in U_k(n)$, and by the finite subgroup test, $U_k(n)$ is a subgroup of $U(n)$.
But where did we use that $k$ divides $n$? Further, did we compute $ab$ under the operation of multiplication modulo $n$? By the finite subgroup test, we need to show that $U_k(n)$ is closed under the operation of $U(n)$. Hence, I think we need to show $(ab)\bmod n=1\bmod k$. Am I correct?
(If i am correct, then how to show $(ab)\bmod n=1\bmod k$)
Please help me......
| I assume by $U(n)$ you mean $\mathbb{Z}/n\mathbb{Z}$?
Consider the example $n=16$ and $k=3$ (where $k$ does not divide $n$): for $a \in \mathbb{Z}/n\mathbb{Z}$ we cannot say what $a \bmod k$ is, since we can choose another representative. In our example let $a=[4]$; clearly $4 \bmod 3 = 1 \bmod 3$, but if we use another representative, say $[4]=[20]$, then $20 \bmod 3 \neq 1 \bmod 3$. This is where you use that $k$ divides $n$.
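For the closure question itself: if $k\mid n$ and $a\equiv b\equiv 1 \pmod k$, then $(ab)\bmod n = ab - mn$ for some integer $m$, and since $k\mid n$ this is still $\equiv 1 \pmod k$. A small exhaustive check (the values $n=105$, $k=5$ are just an illustration):

```python
from math import gcd

def U(n):
    return [x for x in range(1, n) if gcd(x, n) == 1]

def Uk(n, k):
    return [x for x in U(n) if x % k == 1]

n, k = 105, 5                 # here k divides n
H = Uk(n, k)
closed = all((a * b) % n in H for a in H for b in H)
print(closed)  # True

# If k does not divide n (e.g. n=16, k=3), reduction mod k is not even
# well defined on residues mod n: 4 and 4 + 16 = 20 disagree mod 3.
print(4 % 3, 20 % 3)  # 1 2
```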
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Test whether a given function is polynomial You have a black box function to which you can give real number inputs and from which you can receive real number outputs. How would you test whether it is likely to be a polynomial?
One expensive idea is to use finite differences:
*
*Choose a maximum degree n of the "polynomial" you are testing.
*Choose a consecutive sequence with random step size, and evaluate the function there to get an output sequence. E.g., $$[ 2, 2.3, 2.6, 2.9,\dots] \to [ 4.81, 5.02, 5.05, 4.90,\dots]$$
*Using the output sequence as S[0], define S[n] so that its $k$-th entry is S[n][k] = S[n-1][k+1] - S[n-1][k]. E.g. S[1] = [5.02-4.81, 5.05-5.02, 4.90-5.05, ...] = [0.21, 0.03, -0.15, ...]
*If the function is a polynomial (of degree at most n), then the sequence S[n+1] should be all zeros.
Some issues about programming this method:
*
*Would be expensive for large n
*If S[0] has large values, computer arithmetic will produce bad results for S[1] and beyond.
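For what it's worth, steps 1-4 are only a few lines to prototype (the names, sample grid, and tolerance below are illustrative; the floating-point issues above still apply):

```python
import math

def is_poly_candidate(f, max_deg, x0=2.0, step=0.3, tol=1e-6):
    """Heuristic: the (max_deg+1)-th finite differences of a polynomial
    of degree <= max_deg vanish on any arithmetic grid."""
    s = [f(x0 + i * step) for i in range(max_deg + 2)]
    for _ in range(max_deg + 1):
        s = [s[i + 1] - s[i] for i in range(len(s) - 1)]
    return all(abs(v) < tol for v in s)

print(is_poly_candidate(lambda x: 3 * x ** 3 - x + 5, max_deg=3))  # True
print(is_poly_candidate(math.exp, max_deg=3))                      # False
```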
| I don't know much about "likely". But $N+1$ distinct points uniquely determine a degree $\le N$ polynomial by Lagrange interpolation. If the obtained points are $(x_i, y_i)$, for $i=0, \ldots, N$, then first define:
$$L_{N, j}(x) = \prod_{\substack{0 \le k \le N \\ k \ne j}}\frac{x-x_k}{x_j-x_k}$$
(which is clearly a polynomial of degree $N$). The unique polynomial passing through those points is $$p(x) = \sum_{j=0}^{N}y_j L_{N, j}(x),$$
which is a polynomial of degree at most $N$. If you compute one more point from your black box, then you can compare it with this polynomial to see whether it fits.
By the same argument, there is no way to deterministically rule out polynomials given a finite set of input values, since the polynomial described above can always be constructed.
If you choose a constant step size (like $1$, so that $x_j = j$), your $p(x_{N+1})$ may be easy to calculate. In that case, you would need to only compute
$$p(N+1) = \sum_{j=0}^{N}y_{j}\prod_{k \ne j}\frac{N+1-k}{j-k}$$
which doesn't sound over-complex.
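This "fit one more point" test is a short program (a heuristic sketch with an illustrative tolerance, in line with the caveat above that no finite set of samples can rule out polynomials deterministically):

```python
def lagrange_eval(pts, x):
    """Evaluate the Lagrange interpolant through pts = [(x_j, y_j)] at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(pts):
        term = yj
        for k, (xk, _) in enumerate(pts):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

def looks_like_poly(f, max_deg, tol=1e-6):
    xs = [float(i) for i in range(max_deg + 1)]   # N+1 sample points
    pts = [(x, f(x)) for x in xs]
    x_new = float(max_deg + 1)                    # one extra point
    return abs(lagrange_eval(pts, x_new) - f(x_new)) < tol

print(looks_like_poly(lambda x: x ** 3 - 2 * x, 3))  # True
print(looks_like_poly(lambda x: 2 ** x, 3))          # False
```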
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proof by counterexample
If $n$ is prime, then $\sqrt{n}$ is irrational. Prove this statement.
If I were to prove this using proof by contradiction, I would do:
Suppose $n$ is prime and $\sqrt{n}$ is rational. Let $\sqrt{n}=\frac{a}{b}$ where $a,b\in \mathbb{Z}$, $b \neq 0$, and $a,b$ have no common factor other than 1.
And then I would go on and get a contradiction like $a$ is even and $b$ is even.
However, I was wondering if I could also use proof by counterexample:
($n$ is prime) $\Rightarrow$ ($\sqrt{n}$ is irrational)
The negation of this statement is:
($n$ is prime) $\land$ ($\sqrt{n}$ is rational)
A counterexample to this is $n=2$.
Therefore, this statement is disproved and hence, its negation is proven to be true.
Is this also a valid proof?
| The claim is in plain text
"For every prime $n$, $\sqrt{n}$ is irrational."
The negation is in plain text
"There is some prime $n$, such that $\sqrt{n}$ is rational."
This is not a statement about all primes, but only about some prime. Therefore we cannot disprove it by a single counterexample.
To see the flaw: assume $\sqrt{5}$ were rational. Then the statement would be false, since $5$ is prime but $\sqrt{5}$ would not be irrational. You see that the case $n=2$ is not enough.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\lim_{x\to 0^{+}} \frac{\ln (1- \cos 2x)}{\ln \tan 2x}$ I have to solve the limit
$$\lim_{x\to 0^{+}} \frac{\ln (1- \cos 2x)}{\ln \tan 2x}$$
applying Taylor's series.
$$\lim_{x\to 0^{+}} \frac{\ln (1- \cos 2x)}{\ln \tan 2x}=\lim_{x\to 0^{+}} \frac{\ln (1- \cos 2x)}{\ln \frac{\sin 2x}{\cos 2x}}= \lim_{x\to 0^{+}} \frac{\ln (2 \cdot(\sin x)^2)}{\ln \sin 2x - \ln \cos 2x}= \lim_{x\to 0^{+}} \frac{\ln 2+ 2 \ln \sin x}{\ln \sin 2x - \ln \cos 2x}= \lim_{x\to 0^{+}} \frac{\ln 2+ 2 \ln \sin x}{\ln 2 + \ln \sin x + \ln \cos x - 2\ln \cos x + 2 \ln \sin x}= \lim_{x\to 0^{+}} \frac{\ln 2+ 2 \ln \sin x}{\ln 2 + 3\ln \sin x }= \lim_{x\to 0^{+}} \frac{\ln 2(\sin x)^2}{\ln 2(\sin x)^3}$$
$$\frac{\ln 2(\sin x)^2}{\ln 2(\sin x)^3} \sim \frac{\ln (2 x^2- \frac{2}{3} x^4+ o(x^4))}{\ln (2 x^3- x^5+ o(x^5))}= \frac{\ln (x^2)+ \ln(2 - \frac{2}{3} x^2+ o(x^2))}{\ln(x^3)+\ln (2 - x^2+ o(x^2))} \sim \frac{2\ln x+ \ln 2}{3\ln x+\ln 2} \sim \frac{2}{3}$$
The suggested solution in my book is $2$. can someone indicate where I made mistakes?
| The OP's error has been pegged to an error with the logarithm of a difference being equated to the difference of logarithms. An alternative approach, which avoids all this, is to note that
$${\ln(1-\cos2x)\over\ln\tan2x}={\ln(2\sin^2x)\over\ln\displaystyle\left({2\tan x\over1-\tan^2x}\right)}={2\ln\sin x+\ln2\over\ln\sin x-\ln\cos x+\ln2-\ln(1-\tan^2x)}$$
Since $\ln\sin x\to-\infty$ as $x\to0^+$ while all the other terms tend to finite limits (i.e. $\ln2\to\ln2$, $\ln\cos x\to\ln\cos0=\ln1=0$, and $\ln(1-\tan^2x)\to\ln(1-\tan^20)=\ln1=0$), the requested limit is easily seen to equal $2$.
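A numerical check that the limit is indeed $2$ (using the identity $1-\cos 2x = 2\sin^2 x$ from above to avoid catastrophic cancellation at tiny $x$; convergence is slow because the bounded terms only die off like $1/\ln x$):

```python
import math

def g(x):
    # ln(1 - cos 2x) / ln(tan 2x), with 1 - cos 2x rewritten as 2 sin^2 x
    return math.log(2 * math.sin(x) ** 2) / math.log(math.tan(2 * x))

for x in (1e-4, 1e-8, 1e-12):
    print(g(x))   # roughly 2.08, 2.04, 2.03 -- creeping toward 2, not 2/3
```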
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
combine equal pairs into equivalence classes Given a set $S$ of values, and a set $P \subset S \times S$ of pairwise equivalences, what is an algorithm for partitioning $S$ into equivalence classes?
$P$ is guaranteed to be an equivalence relation.
Assume that $P$ "covers" every member of $S$.
Efficiency isn't all that crucial: I'm interested in $\#S < 100000$ and $\#P < 10000$.
This is like building up pairs into clusters, but because this is for exact equality rather than mere similarity, I'm finding that clustering algorithms are red herrings.
| Equivalence classes on the set $S$ are the same as the connected components of the graph whose edges are given by $P$. Fortunately, there are standard algorithms for computing the connected components: here is one example.
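If one prefers not to build the graph explicitly, the same partition can be computed with a disjoint-set (union–find) structure. A minimal sketch (function and variable names are illustrative, not from the answer):

```python
def equivalence_classes(S, P):
    """Partition S into equivalence classes given pairwise equivalences P."""
    parent = {s: s for s in S}

    def find(s):
        # Find the representative of s's class, with path halving.
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    def union(a, b):
        # Merge the classes containing a and b.
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in P:
        union(a, b)

    classes = {}
    for s in S:
        classes.setdefault(find(s), set()).add(s)
    return list(classes.values())
```

For $\#S < 100000$ and $\#P < 10000$ this runs essentially instantly; path compression keeps each `find` nearly constant-time.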
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Map Two Polar Disks To Four Dimensional Spherical Coordinates ($\left(\mathbb{R}^2\right)^2\to\mathbb{R}^4$) We have two points $(x_1, y_1), (x_2, y_2) \in \mathbb{R}^2$ with polar parameterizations $(r_1, \theta_1), (r_2, \theta_2)$ (i.e., $x_1=r_1 \cos(\theta_1)$, and so on). How would the point $(x_1, y_1, x_2, y_2) \in \mathbb{R}^4$ be represented with four-dimensional spherical coordinates $(r, \phi_1, \phi_2, \phi_3)$?
Please correct my proposed answer if I have made a mistake or upvote it if you concur.
Finally, how would one inductively generalize equivalency to $\mathbb{R}^{2^n}$ or $\mathbb{R}^{2n}$?
| We begin by equating the components of the two numbers in polar coordinates and the single number in spherical coordinates. Here, $r_3 = \sqrt{r_1^2 + r_2^2}$.
$$r_1 \cos \left( \theta_1 \right) = r_3 \cos \left( \phi_1 \right)$$
$$r_1 \sin \left( \theta_1 \right) = r_3 \sin \left( \phi_1 \right) \cos \left( \phi_2 \right)$$
$$r_2 \cos \left( \theta_2 \right) = r_3 \sin \left( \phi_1 \right) \sin \left( \phi_2 \right) \cos \left( \phi_3 \right)$$
$$r_2 \sin \left( \theta_2 \right) = r_3 \sin \left( \phi_1 \right) \sin \left( \phi_2 \right) \sin \left( \phi_3 \right)$$
Next, the standard definition of $\arccos$ does not include the full input plane, so we form a continuous extension to $\mathbb{R}^2 \setminus \mathbb{R}^+$. (We exclude the point intersecting $\mathbb{R}^+$ with $\theta = 0$ or $\theta = 2 \pi$.) With an abuse of notation, we will still refer to this continuous extension as $\arccos$. An interesting application of this definition of $\arccos$ is that $\mathbb{R}^3 \setminus 0$ is not homologically trivial, which is an exercise in Munkres's Analysis on Manifolds.
$$\phi_1 = \arccos \left( r_1 r_3^{-1} \cos \left( \theta_1 \right) \right)$$
$$\phi_2 = \arccos \left( r_1 r_3^{-1} \frac{ \sin (\theta_1) }{\sin (\phi_1)} \right)$$
$$\phi_3 = \arccos \left( r_2 r_3^{-1} \frac{ \cos \left( \theta_2 \right) }{ \sin \left( \phi_1 \right) \sin \left( \phi_2 \right) } \right)$$
Fortunately, the other direction is simpler to compute.
$$\theta_{1} = \arccos(r_3 r_1^{-1} \cos \left( \phi_1 \right))=\arcsin \left( r_3 r_1^{-1} \sin \left( \phi_1 \right) \cos \left( \phi_2 \right) \right)$$
$$\theta_2 = \arccos \left( r_3 r_2^{-1} \sin \left( \phi_1 \right) \sin \left( \phi_2 \right) \cos \left( \phi_3 \right) \right)=...$$
$$\frac{ r_1 \cos \left( \theta_1 \right) }{ \cos \left( \phi_1 \right) } = \frac{ r_1 \sin \left( \theta_1 \right) }{ \sin \left( \phi_1 \right) \cos \left( \phi_2 \right) }$$
$r_1 = \left|\frac{ \cos \left( \theta_1 \right) }{ \cos \left( \phi_1 \right) } - \frac{ \sin \left( \theta_1 \right) }{ \sin \left( \phi_1 \right) \cos \left( \phi_2 \right) }\right|^{-1}$ provided that $\cos(\phi_1)\neq0$, $\cos(\phi_2)\neq 0$, and $\sin(\phi_1)\neq 0$.
$r_2 = \left|\frac{1}{\sin(\phi_1) \sin(\phi_2)} \left( \frac{\cos (\theta_2) }{\cos(\phi_3)} - \frac{\sin(\theta_2)}{\sin(\phi_3)} \right)\right| ^{-1}$ provided that $\sin(\phi_1)\neq 0$, $\sin (\phi_2) \neq 0$, $\cos(\phi_3)\neq 0$, and $\sin(\phi_3) \neq 0$.
I wonder if there would be an easier formula for $r_1$ and $r_2$ by taking the components of $r_3$ along two axes. (In this case, $r_1$, $r_2$, and $r_3$ would all be tangent vectors on the same plane, though finding a parameterization or normal vector for the plane is not trivial; one cannot simply compute $\vec{r_1} \times \vec{r_2}$.)
By stipulating $r_3>0$ and $\phi_1, \phi_2, \phi_3 \in [0, 2 \pi)$, proof of injection (being a one-to-one map) should follow immediately. Likewise, every pair of $(x_1, y_1)=(r_1, \theta_1)$ and $(x_2, y_2) = (r_2, \theta_2)$ should produce a value for $(r, \phi_1, \phi_2, \phi_3)$, allowing surjection to follow as well. $f(r_1, \theta_1, r_2, \theta_2)= (r, \phi_1, \phi_2, \phi_3)$ should then satisfy the criteria of a bijective map. I feel that I have missed something. Please correct me in the comments or in my answer directly if you have sufficient reputation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4050947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
If Anna goes from point A to point B, each step can only move up or move right. How many methods are there? (Reference the grid below.)
I’ve just recently learned permutations and combinations. Therefore, I understand how to answer problems regarding grids, but only the type of questions with a definite number of columns and rows. As shown in the picture above, the grids are different in size. I’m confused on how to solve this. I’d be happy if anyone could help. Thank you.
| Simpler such problems ask for the number of paths from $A$ to $B$ "in a $(7\times11)$-grid". Here the numbers $7$ and $11$ give the full information about the grid in question, and there is then also a simple answer in terms of a formula containing these two numbers.
But your grid is much more complicated. To describe the grid you needed an explicit figure, or the full contents of a $01$-matrix of size $5\times6$. A "formula" for the number of paths would have to operate with the full contents of this arbitrary matrix.
Instead you can think about an algorithm that solves this kind of problem for arbitrary input figures. Such an algorithm could begin with writing $1$ at $A$ and then writing recursively the sum of the "entering numbers" at each "ready" lattice point, until you are at $B$, as in Pascal's triangle.
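The recursive "Pascal" procedure described above can be written as a dynamic program over the lattice points. Since the original figure is not reproduced here, the grid is passed in as a set of allowed lattice points (an assumption made purely for illustration):

```python
def count_paths(allowed):
    """Count monotone (up/right) paths from the bottom-left to the top-right
    lattice point, where `allowed` is the set of usable lattice points (x, y)."""
    xs = [p[0] for p in allowed]
    ys = [p[1] for p in allowed]
    start, goal = (min(xs), min(ys)), (max(xs), max(ys))
    ways = {start: 1}
    # Each point receives the sum of the counts entering from its left and
    # lower neighbours, exactly as in Pascal's triangle.
    for x in range(start[0], goal[0] + 1):
        for y in range(start[1], goal[1] + 1):
            p = (x, y)
            if p not in allowed or p == start:
                continue
            ways[p] = ways.get((x - 1, y), 0) + ways.get((x, y - 1), 0)
    return ways.get(goal, 0)

demo = {(x, y) for x in range(3) for y in range(3)}
print(count_paths(demo))  # 6 = C(4, 2): monotone paths on a full 2x2 grid
```

On a full $(7\times11)$-grid of cells this reproduces the closed-form count $\binom{18}{7}=31824$, and removing points from `allowed` handles irregular figures like the one in the question.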
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why doesn't the multiplication rule work here? I have the following question:
How many ways are there to select $3$ candidates from $8$ equally
qualified recent graduates for openings in an accounting firm?
I was given the following solution to it: $\binom{8}{3}$. However, I got the answer $8 \times 7 \times 6$ by using the multiplication rule, and I was wondering why my answer doesn't work.
My initial reasoning is that in the multiplication rule there is the underlying assumption that order matters, and here obviously order doesn't matter. However, I am posting this here as this is the first time I have come to such a realization and I am not sure that my reasoning is correct, especially since I never learned the multiplication rule with the idea of order in mind. Any clarification would be appreciated.
$\binom{8}{3}$ is the same as $\frac{8!}{3!5!}$ which is $\frac{8\cdot 7\cdot 6}{3!}$, so you just need to divide by the extra $3!$
That's because you have overlapping choices: e.g. choosing ABC is the same as ACB, BAC, BCA, CAB and CBA, so there are $6$ orderings that give the same selection ABC.
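The relationship is easy to check with the standard-library combinatorics helpers (`math.comb` and `math.perm`, available in Python 3.8+):

```python
from math import comb, factorial, perm

ordered = 8 * 7 * 6        # ordered selections, i.e. 8 P 3
unordered = comb(8, 3)     # unordered selections, i.e. 8 C 3

# Each unordered committee of 3 is counted 3! = 6 times among the ordered ones.
assert ordered == unordered * factorial(3)
assert perm(8, 3) == ordered
print(unordered)  # 56
```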
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Proving a set is a manifold
Prove that the following set is a manifold and find it's dimension $$M=\{(x,y)\in\Bbb{R^3}\times\Bbb{R^3}|x\cdot y=0, \|x\|=\|y\|=1\}$$
So $M$ can be thought of as the intersection of the following equations: $$x_1y_1+x_2y_2+x_3y_3=0$$ $$x_1^2+x_2^2+x_3^2=1$$$$y_1^2+y_2^2+y_3^2=1$$
I tried defining an implicit function $F(x,y)$ s.t $F(x,y)=0$ iff $(x,y)\in M$ but didn't know how to take it from here. Any help would be appreciated.
| What if you consider the $\mathcal C^1$ function $\mathbf F : \mathbb R^6 \to \mathbb R^3$ defined as: $$\mathbf F(\mathbf x)= \begin{bmatrix}
x_1y_1+x_2y_2+x_3y_3 \\
x_1^2+x_2^2+x_3^2 -1 \\
y_1^2+y_2^2+y_3^2-1 \\
\end{bmatrix} $$
Can you check $\text{rank}(D \mathbf F(\mathbf x))$ now?
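As a concrete illustration of the hint (not part of the original answer), one can enter the analytic Jacobian of $\mathbf F$ and compute its rank at a sample point of $M$; full rank $3$ on all of $M$ gives $\dim M = 6 - 3 = 3$:

```python
def jacobian(x, y):
    """Analytic 3x6 Jacobian of F at (x, y); rows correspond to the
    constraints x . y = 0, |x|^2 = 1, |y|^2 = 1."""
    return [
        [y[0], y[1], y[2], x[0], x[1], x[2]],
        [2 * x[0], 2 * x[1], 2 * x[2], 0, 0, 0],
        [0, 0, 0, 2 * y[0], 2 * y[1], 2 * y[2]],
    ]

def rank(M, eps=1e-12):
    """Rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < eps:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Sample point on M: x = e1, y = e2 satisfy x . y = 0 and |x| = |y| = 1.
print(rank(jacobian((1, 0, 0), (0, 1, 0))))  # 3, so dim M = 6 - 3 = 3
```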
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tips on how to start calculating a definite integral I would appreciate it if you could help me with this problem.
I don't want the answer, only how I should proceed with solving an integral like this:
$$I=\int_{0}^{\pi}\frac{x \sin^{2018}(x)}{\cos^{2018}(x)+\sin^{2018}(x)}dx$$
I am completely overwhelmed on how to proceed with this and I am stuck, so I would appreciate some tips to start.
After reading the tips I have done this:
$$\int_{0}^{\pi}\frac{(\pi-x) \sin^{2018}(\pi-x)}{\cos^{2018}(\pi-x)+\sin^{2018}(\pi-x)}dx$$
After that I distributed it:
$$\int_{0}^{\pi}\frac{\pi\sin^{2018}(\pi-x)-x\sin^{2018}(\pi-x)}{\cos^{2018}(\pi-x)+\sin^{2018}(\pi-x)}dx=\int_{0}^{\pi}\frac{\pi\sin^{2018}(x)}{\cos^{2018}(x)+\sin^{2018}(x)}dx-I$$
But after that I'm lost again. I know I should be looking for a way to make some terms cancel, but that's as far as I got.
Please don't give me the direct solution; I would like to solve it by myself.
Thanks in advance!
| Start with $$\int_a^b f(x)\,\mathrm dx=\int_a^b f(a+b-x)\,\mathrm dx$$
Edit: Since you are still stuck after performing the above step, you get $$I=\int_{0}^{\pi}\frac{(\pi-x) \sin^{2018}(x)}{\cos^{2018}(x)+\sin^{2018}(x)}\, \,\mathrm dx$$
and add this to $I$ to get $$2I=\pi \int_{0}^{\pi}\frac{ \sin^{2018}(x)}{\cos^{2018}(x)+\sin^{2018}(x)}\,\mathrm dx $$
If you are still stuck, use the property
$\displaystyle \int_0^{2a}f(x)\,\mathrm dx=2\int_0^a f(x)\,\mathrm dx$, when $f(2a-x)=f(x)$. If you are unaware of this property, you may see its proof in this answer.
I hope, now you can take it from here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Help on how to proceed on this trig integral I would appreciate it if you could help me with this problem.
$$I=\int_{0}^{\pi}\frac{x \sin^{2018}(x)}{\cos^{2018}(x)+\sin^{2018}(x)}dx$$
I am completely overwhelmed on how to proceed with this and I am stuck, so I would appreciate some tips to start.
After reading some tips I have done this:
$$\int_{0}^{\pi}\frac{(\pi-x) \sin^{2018}(\pi-x)}{\cos^{2018}(\pi-x)+\sin^{2018}(\pi-x)}dx$$
After that I distributed it:
$$\int_{0}^{\pi}\frac{\pi\sin^{2018}(\pi-x)-x\sin^{2018}(\pi-x)}{\cos^{2018}(\pi-x)+\sin^{2018}(\pi-x)}dx=\int_{0}^{\pi}\frac{\pi\sin^{2018}(x)}{\cos^{2018}(x)+\sin^{2018}(x)}dx-I$$
But after that I'm lost again. I know I should be looking for a way to make some terms cancel, but that's as far as I got.
Please don't give me the direct solution; I would like to solve it by myself.
Thanks in advance!
| HINT
Note that for even $n$ (as in your case) we have
$$\int_0^{\pi} \frac{\sin^n x}{\sin^n{x}+\cos^n x}dx=\int_0^{\pi} \frac{\cos^n x}{\sin^n{x}+\cos^n x}dx$$
As asked for, this is a hint :) If you need any more help please don't hesitate to ask.
This is how I'd continue. We know that
$$\int_0^{\pi} \frac{\sin^n x}{\sin^n{x}+\cos^n x}dx=\int_0^{\pi} \frac{\cos^n x}{\sin^n{x}+\cos^n x}dx$$
Adding, we find that
$$2\int_0^{\pi} \frac{\sin^n x}{\sin^n{x}+\cos^n x}dx=\int_0^{\pi} \frac{\sin^n x}{\sin^n{x}+\cos^n x}dx+\int_0^{\pi} \frac{\cos^n x}{\sin^n{x}+\cos^n x}dx=\pi$$
So $$\int_0^{\pi} \frac{\sin^n x}{\sin^n{x}+\cos^n x}dx=\frac{\pi}{2}$$
and in particular $$\int_0^{\pi} \frac{\sin^{2018} x}{\sin^{2018}{x}+\cos^{2018}x}dx=\frac{\pi}{2}$$
We now know that
$$I=\pi\int_0^{\pi} \frac{\sin^{2018} x}{\sin^{2018}{x}+\cos^{2018}x}dx-I=\pi\times\frac{\pi}{2}-I$$
Hence $2I=\pi\times\frac{\pi}{2}=\frac{\pi^2}{2}$ and $I=\frac{\pi^2}{4}$.
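The final value can be sanity-checked numerically (an illustrative sketch; the integrand is rewritten algebraically so the huge powers never under- or overflow in floating point):

```python
import math

def integrand(x):
    # x * sin(x)**2018 / (sin(x)**2018 + cos(x)**2018), with numerator and
    # denominator divided through by the larger of the two powers.
    s, c = math.sin(x), math.cos(x)
    if abs(s) >= abs(c):
        r = (c / s) ** 2018      # |c/s| <= 1 and 2018 is even, so 0 <= r <= 1
        return x / (1.0 + r)
    r = (s / c) ** 2018
    return x * r / (1.0 + r)

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

approx = simpson(integrand, 0.0, math.pi, 100_000)
print(approx, math.pi ** 2 / 4)  # both ≈ 2.4674
```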
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$L^{P} ( \Omega ; W^{1,P}(Y))$ can someone define this space. $L^{P}$ and $W^{1,P}$ are Sobolev and $L^p$ spaces. How can the norm be defined? I have this space $L^{P} ( \Omega ; W^{1,P}_{per}(Y))$; can someone define it? As far as I understand, it takes a function $f$ from $L^{P}(\Omega)$ and maps it to a Sobolev space with periodicity $Y$.
Or $f : W^{1,P}_{per}(Y) \rightarrow L^{P}(\Omega)$.
I am confused here.
Here $L^{P}$ and $W^{1,P}$ are Sobolev and $L^p$ spaces, and $Y$ is the periodicity of the domain $\Omega$.
How can the norm be defined?
$||f||_{L^{P} ( \Omega ; W^{1,P}_{per}(Y))} = \int_{\Omega} ||f||_{W^{1,P}_{per}(Y)} dx$
| Functions in $L^P(\Omega; W_{per}^{1, P}(Y))$ are functions $f: \Omega \rightarrow W_{per}^{1, P}(Y)$ s.t. the norm on the space is well-defined at $f$. A suitable norm is
$$
\lVert f \rVert_{L^P(\Omega; W_{per}^{1, P}(Y))} := \left (\int_\Omega \lVert f(x) \rVert_{W_{per}^{1, P}(Y)}^p ~\mathrm{d}x \right)^\frac{1}{p}.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
are there infinitely many primes $p,q$ such that $pq=a^2+b^4$ Are there infinitely many primes $p<q$, $p,q\neq 2,3$ such that $pq=a^2+b^4$ where $a,b\in \mathbb{Z}$ ? I've no idea if this is a very easy or very hard question. Any known result about this ?
Thank you for your comments !
EDIT: using mod $4$ arguments you can very easily derive conditions on $a$ and $b$. Also, using Gaussian integers, the product $pq$ boils down to the product of $4$ Gaussian irreducible elements. Note that $p,q$ must be congruent to $1$ mod $4$.
| The answer should certainly be yes.
For example, with $b=1$ and $a = 2 + 5 x$, we'll have $a^2 + b^4 = 5 (1+4x+5x^2)$, and Bunyakovsky's conjecture implies there are infinitely many integers $x$ for which $1+4x+5x^2$ is prime.
However, no proof of this is known.
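A brute-force search (an illustrative sketch, not part of the argument) confirms that such semiprimes are easy to find, e.g. $8^2+1^4=65=5\cdot13$ and $7^2+2^4=65$ as well:

```python
def is_prime(n):
    # Trial division; fine for the small values searched here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def semiprime_examples(a_max=50, b_max=10):
    """Find primes p < q with p, q not in {2, 3} and p*q = a**2 + b**4."""
    found = []
    for b in range(1, b_max + 1):
        for a in range(1, a_max + 1):
            n = a * a + b ** 4
            p = 2
            while p * p < n:            # p is the smallest factor, hence prime
                if n % p == 0:
                    q = n // p
                    if p > 3 and is_prime(q):   # p < q holds automatically
                        found.append((p, q, a, b))
                    break
                p += 1
    return found

examples = semiprime_examples()
print(len(examples), examples[:2])
# e.g. (5, 13, 8, 1): 8**2 + 1**4 = 65 = 5 * 13
```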
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4051990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integrate $\frac{\log(x^2+4)}{(x^2+1)^2}$.
Using residue calculus show that
$$\int_0^{\infty}\frac{\log(x^2+4)}{(x^2+1)^2}dx=\frac{\pi}2\log 3-\frac{\pi}6.$$
I was thinking of using some keyhole or semi-circular contour here. But the problem is apart from poles at $x=-i$ and $x=i$, the logarithm has singularities when $x=\pm 4i$.
I consider $C_R$, a semicircle contour oriented clockwise with radius $R$ centered at origin. The semicircle resides in the lower half plane, so that it encloses $x=-i$ as a pole. I set
$$\int_{C_R} \frac{\log(2-ix)}{(x^2+1)^2}dx$$
But it seems like it leads to wrong answer.
| One thing you could do to avoid a keyhole is to denote
$$I[a] = \int_0^\infty \frac{\log\left(a(x^2+1)+3\right)}{(x^2+1)^2}\:dx \implies I'[a] = \frac{1}{2}\int_{-\infty}^\infty \frac{1}{x^2+1}\cdot \frac{1}{ax^2+a+3}\:dx$$
then apply a semicircular contour and calculate the residues as normal.
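Whichever route one takes, the claimed value is easy to sanity-check numerically (a sketch; the infinite range is truncated since the integrand decays like $2\ln x/x^4$):

```python
import math

def f(x):
    return math.log(x * x + 4) / (x * x + 1) ** 2

def simpson(g, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

# The tail beyond x = 500 contributes only about 3e-8.
approx = simpson(f, 0.0, 500.0, 200_000)
target = math.pi / 2 * math.log(3) - math.pi / 6
print(approx, target)  # both ≈ 1.20207
```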
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4052162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Evaluating $\lim_{m \to \infty} \binom{m}{k} \cdot 2^{-m}$ for $0 \leq k \leq m$. I am attempting to evaluate the limit $\lim_{m \to \infty} \binom{m}{k} \cdot 2^{-m}$ for $0 \leq k \leq m$, and I have been able to confirm via Mathematica that this limit is indeed 0. How would one actually go about proving this?
| For simplicity let $m$ be even. $\binom{m}{k}$ is largest when $k=\frac12m$ [see here]. Hence $$\binom{m}{k}2^{-m}\le\frac{m!}{(m/2)!(m/2)!}2^{-m}.$$ Now using this you can find the limit to be zero.
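The decay of the central term is easy to observe numerically (`math.comb` is the standard-library binomial coefficient):

```python
from math import comb, pi, sqrt

def max_prob(m):
    # max over 0 <= k <= m of C(m, k) * 2**-m, attained at k = m // 2.
    return comb(m, m // 2) / 2 ** m

for m in (100, 400, 1600):
    # Stirling gives C(m, m/2) * 2**-m ~ sqrt(2 / (pi * m)) -> 0.
    print(m, max_prob(m), sqrt(2 / (pi * m)))
```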
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4052324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Hamilton equations-Symplectic Euler method We know that $\dot{q} = \frac{\partial H}{\partial p}$ and $\dot{p} = -\frac{\partial H}{\partial q}$, and we also know the values $Q$ and $P$ respectively of $q$ and $p$ at a later time step $\Delta t$. How could we prove that the quantities
$$
\begin{align}
Q &= q + {\Delta}t\frac{\partial H}{\partial p}(q,p),\\
P &= p - {\Delta}t\frac{\partial H}{\partial q}(q,p)
\end{align}
$$
are not symplectic, while
$$
\begin{align}
Q &= q - {\Delta}t\frac{\partial H}{\partial p}(q,p),\\
P &= p + {\Delta}t\frac{\partial H}{\partial Q}(Q,p)
\end{align}
$$
are symplectic?
Clarification:
The sets of equations define different numerical integrators: in the first case $(q_{i+1},p_{i+1})$ directly in terms of $(q_i,p_i)$, and in the second case $q_{i+1}$ in terms of $(q_i,p_i)$, and $p_{i+1}$ in terms of $(q_{i+1},p_i)$.
| See canonical transformations via generating functions.
You might also directly compute the preservation of the symplectic form,
\begin{align}
\sum _i dP_i∧dQ_i &=\sum_i \left(dp_i-Δt \sum_j[H_{q_iq_j}(Q,p)dQ_j+H_{q_ip_j}(Q,p)dp_j]\right)∧ dQ_i
\\
&=\sum_i dp_i ∧ dQ_i-Δt \sum_i\sum_j[H_{q_iq_j}(Q,p)dQ_j+H_{q_ip_j}(Q,p)dp_j]∧ dQ_i
\\
&=\sum_i dp_i ∧ dQ_i-Δt \sum_jdp_j∧ \sum_i H_{q_ip_j}(Q,p) dQ_i
\\
&=\sum_i dp_i ∧ \left[dQ_i-Δt \sum_j H_{p_iq_j}(Q,p) dQ_j\right]
\\&=\sum_i dp_i ∧ dq_i
\end{align}
Due to the symmetric coefficient matrix, $\sum_i\sum_j H_{q_iq_j}(Q,p)\,dQ_j∧ dQ_i=0$, and the same for the $p$ coordinates.
Note that except for a separable Hamiltionian like $H(q,p)=T(p)+V(q)$, the first equation
$$
Q=q+ΔtH_p(Q,p)
$$
would be implicit.
For the Hamiltonian compare
\begin{align}
H(q,p)&=H(Q-ΔtH_p,p)=H-ΔtH_q·H_p+O(Δt^2)
\\
\text{ and }
H(Q,P)&=H(Q,p-ΔtH_q)=H-ΔtH_p·H_q+O(Δt^2),
\end{align}
the arguments on the right all $(Q,p)$, to see that it stays largely constant. One can modify this to $\tilde H=H+\tfrac12ΔtH_q·H_p$ where the Taylor expansion to one order higher gives
\begin{align}
\tilde H(q,p)&=\tilde H(Q-ΔtH_p,p)=H-\tfrac12ΔtH_q·H_p-\tfrac12H_{qp}[H_p,H_q]+O(Δt^3)
\\
\text{ and }
\tilde H(Q,P)&=\tilde H(Q,p-ΔtH_q)=H-\tfrac12ΔtH_p·H_q-\tfrac12H_{pq}[H_q,H_p]+O(Δt^3).
\end{align}
This means that this modified energy functional has a global error of $O(Δt^2)$, the global $O(Δt)$ first-order error will mainly manifest as a time dilation.
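For the harmonic oscillator $H(q,p)=\tfrac12(p^2+q^2)$ one symplectic Euler variant (update $q$ first, then $p$ using the new $q$) is explicit, and its bounded energy error, versus the steady drift of the fully explicit method, is easy to observe (an illustrative sketch):

```python
def symplectic_euler(q, p, dt, steps):
    # H(q, p) = (p**2 + q**2) / 2, so H_p = p and H_q = q.
    for _ in range(steps):
        q = q + dt * p      # Q = q + dt * H_p(q, p)
        p = p - dt * q      # P = p - dt * H_q(Q, p): uses the updated q
    return q, p

def explicit_euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q   # both derivatives at the old point
    return q, p

H = lambda q, p: 0.5 * (q * q + p * p)
q0, p0, dt, steps = 1.0, 0.0, 0.01, 10_000   # integrate to t = 100

qs, ps = symplectic_euler(q0, p0, dt, steps)
qe, pe = explicit_euler(q0, p0, dt, steps)
print(H(qs, ps))  # stays within O(dt) of H(q0, p0) = 0.5
print(H(qe, pe))  # grows like (1 + dt**2)**steps, ≈ 1.36 here
```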
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4052524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving integral with substitution gives wrong result I want to solve the integral:
$$ \int_{-\infty}^{\infty} \frac{x^2}{(1+|x|^3)^4} \ dx.$$
I thought I could solve that via the substitution $u=1+|x|^3$ since then one has the integral:
$$ \int_{-\infty}^{\infty} \frac{1}{3u^4} \ du.$$
But that seems to be wrong (probably the substitution does not work here since it is not differentiable at $0$), and that is why I get $0$, but the result should be $\frac29$. So how do I solve this correctly?
| The function is even, so rewrite the integral as
$$\int_{-\infty}^\infty \frac{x^2}{(1+|x|^3)^4}dx = 2\int_0^\infty\frac{x^2}{(1+x^3)^4}dx$$
Now try your substitution.
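A quick numerical confirmation of the value $\frac29$ (a sketch; the tail beyond the cutoff decays like $x^{-10}$ and is negligible):

```python
def f(x):
    return x * x / (1 + abs(x) ** 3) ** 4

def simpson(g, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

# Even integrand: integrate over [0, 50] and double.
approx = 2 * simpson(f, 0.0, 50.0, 100_000)
print(approx, 2 / 9)  # both ≈ 0.22222
```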
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4052723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Calculate the limit $\lim\limits_{n \to \infty}\left(\frac{1}{\sqrt[3]{(8n^{3}+2)}^{2}}+\cdots+\frac{1}{\sqrt[3]{(8n^{3}+6n^{2})}^{2}}\right)$ I have to calculate the limit $$\lim_{n \to \infty}\left(\frac{1}{\sqrt[3]{(8n^{3}+2)}^{2}}+\frac{1}{\sqrt[3]{(8n^{3}+4)}^{2}}+\frac{1}{\sqrt[3]{(8n^{3}+6)}^{2}}+\cdots+\frac{1}{\sqrt[3]{(8n^{3}+6n^{2}-2)}^{2}}+\frac{1}{\sqrt[3]{(8n^{3}+6n^{2})}^{2}}\right)$$
I tried to use Sandwich Theorem like this $$\frac{3n^{2}}{\sqrt[3]{(8n^{3}+6n^{2})}^{2}} \leq \Bigg(\frac{1}{\sqrt[3]{(8n^{3}+2)}^{2}}+\frac{1}{\sqrt[3]{(8n^{3}+4)}^{2}}+\cdots+\frac{1}{\sqrt[3]{(8n^{3}+6n^{2}-2)}^{2}}+\frac{1}{\sqrt[3]{(8n^{3}+6n^{2})}^{2}}\Bigg) \leq \frac{3n^{2}}{\sqrt[3]{(8n^{3}+2)}^{2}}$$
As a result I got that the limit is $\frac{3}{4}$.
Is this correct?
| You could have done it a bit faster and get more than the limit using generalized harmonic numbers.
$$S_n=\sum_{k=1}^{3n^2}\frac{1}{\left(8 n^3+2k\right)^{3/2}}=\frac 1{2\sqrt 2}\Bigg[H_{4 n^3+3 n^2}^{\left(\frac{3}{2}\right)}-H_{4 n^3}^{\left(\frac{3}{2}\right)} \Bigg]$$
Now, using the asymptotics
$$H_{p}^{\left(\frac{3}{2}\right)}=\zeta \left(\frac{3}{2}\right)-\frac 2{p^{1/2}}+\frac 1{2p^{3/2}}-\frac 1{8p^{5/2}}+O\left(\frac{1}{p^{9/2}}\right)$$ apply it twice and continue with Taylor series to get
$$S_n=\frac{3}{16 \sqrt{2}\, n^{5/2}}\Bigg[1-\frac{9}{16 n}+\frac{45}{128 n^2}-\frac{1713}{4096 n^3}+O\left(\frac{1}{n^4}\right) \Bigg]$$
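Reading the exponent as written in the question, i.e. $\left(\sqrt[3]{8n^3+2k}\right)^2=(8n^3+2k)^{2/3}$, the sandwich bounds and the limit $\frac34$ can be checked numerically (an illustrative sketch):

```python
def partial_sum(n):
    # Sum of (8n^3 + 2k)^(-2/3) for k = 1, ..., 3n^2, as in the question.
    return sum((8 * n ** 3 + 2 * k) ** (-2 / 3) for k in range(1, 3 * n ** 2 + 1))

for n in (10, 50, 200):
    lo = 3 * n ** 2 * (8 * n ** 3 + 6 * n ** 2) ** (-2 / 3)   # lower sandwich bound
    hi = 3 * n ** 2 * (8 * n ** 3 + 2) ** (-2 / 3)            # upper sandwich bound
    print(n, lo, partial_sum(n), hi)   # all three approach 3/4
```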
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4052938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Use Schur to prove that for any norm $\lim_{n \rightarrow \infty} ||A^n|| = 0 \text{ iff } \rho(A) < 1$ where $\rho$ denotes spectral radius. I know I need to prove both directions of the biconditional, but am having some trouble in both ways.
First, looking left to right: if we assume the limit equals zero, it means that $A^n$ keeps getting smaller each time we apply $A$. Thus, since eigenvalues represent the effect of the matrix, the maximum modulus would have to be less than one, but this doesn't seem very rigorous.
Then, I'm not sure how to proceed by assuming the spectral radius is less than 1 except for doing the backwards reasoning of what I talked about above. Any help is appreciated.
$\rho(A)=\limsup_n \|A^{n}\|^{1/n}$. If $\rho(A) <1$ and $r =\frac {1+\rho (A)} 2$, then $\|A^{n}\|<r^{n}$ for $n$ sufficiently large, and $r^{n} \to 0$ since $r<1$.
Suppose $\|A^{n}\| \to 0$. There exists $k$ such that $\|A^{k}\|< \frac 1 2$. If $\lambda$ is an eigenvalue of $A$ then $\lambda ^{k}$ is an eigenvalue of $A^{k}$, so $|\lambda^{k} |<1$, which implies $|\lambda|<1$. Hence, $\rho(A) <1$.
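A small numerical illustration (a sketch with hand-picked matrices): note that the one-step norm of $A$ can exceed $1$ while $\|A^n\|\to0$, which is exactly why the spectral radius, not $\|A\|$, decides the limit:

```python
def matmul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def norm_of_power(A, n):
    """Max-absolute-entry norm of A**n (all norms on 2x2 matrices are equivalent)."""
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        P = matmul(P, A)
    return max(abs(e) for row in P for e in row)

A = [[0.5, 1.0], [0.0, 0.5]]   # spectral radius 0.5 < 1, though ||A|| > 1
B = [[1.1, 0.0], [0.0, 0.5]]   # spectral radius 1.1 > 1

print(norm_of_power(A, 100))   # about 1.6e-28: tends to 0
print(norm_of_power(B, 100))   # about 1.4e4: blows up
```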
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4053097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Does there exist a differentiable function from $\mathbb R^3$ to $\mathbb R$ whose zeros are a line? Suppose we are given a line in $\mathbb R^3$. Is there a differentiable function, $f$, from $\mathbb R^3$ to $\mathbb R$ such that the solutions to $f(\vec{x})=0$ are precisely the points on the line?
I would like to be able to describe a line using the zeros of a single function rather than using a parametrization.
EDIT: I deleted a false claim about not being able to use a polynomial.
| Consider $f(x,y,z) = x^2 + y^2$. The solutions of $f(x,y,z)=0$ are the $z$ axis. Then just compose with an affine transformation that sends your line to the $z$ axis and you're done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4053273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How can I work this limit exercise? Consider points $M$, $N$ and $O$ with coordinates $(1, 0)$, $(0, 1)$ and $(0, 0)$, respectively, and additionally a point $P$ with coordinates $(x, y)$ on the graph of $y = \sqrt{x}$. Calculate:
(a) $\lim_{x \to 0^{+}} \frac{perimeter \bigtriangleup NOP}{perimeter \bigtriangleup MOP}$
(b) $\lim_{x \to 0^{+}} \frac{area \bigtriangleup NOP}{area \bigtriangleup MOP}$
One of my first ideas for this exercise, after drawing the points M, N, O and the $y = \sqrt{x}$ graph, was to see the formulas of perimeter and area for triangles.
perimeter $\bigtriangleup $: is the sum of all its three sides.
area $\bigtriangleup $: is always half the product of the height and base
However, I do not find how to use the formulas of perimeter and area of a triangle with a point P = (x, y) in the graph $y = \sqrt{x}$.
$\text{perimeter} \bigtriangleup NOP = \underbrace{1}_{NO} + \underbrace{\sqrt{x^2+(\sqrt{x}-1)^2}}_{NP} + \underbrace{\sqrt{x^2+(\sqrt{x})^2}}_{OP}$
$\text{perimeter} \bigtriangleup MOP = \underbrace{1}_{MO} + \underbrace{\sqrt{(x-1)^2+(\sqrt{x})^2}}_{MP} + \underbrace{\sqrt{x^2+(\sqrt{x})^2}}_{OP}$
$\text{area} \bigtriangleup MOP = \frac{1}{2} \cdot 1 \cdot \sqrt{x}$
$\text{area} \bigtriangleup NOP = \frac{1}{2} \cdot 1 \cdot x$
I think you can finish the rest.
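Plugging these formulas in numerically (a sketch) shows the perimeter ratio tending to $1$ and the area ratio tending to $0$ as $x\to0^+$:

```python
from math import sqrt, hypot

def ratios(x):
    y = sqrt(x)                                    # P = (x, sqrt(x)) on the curve
    per_NOP = 1 + hypot(x, y - 1) + hypot(x, y)    # NO + NP + OP
    per_MOP = 1 + hypot(x - 1, y) + hypot(x, y)    # MO + MP + OP
    area_NOP = 0.5 * 1 * x                         # base NO, height = x
    area_MOP = 0.5 * 1 * y                         # base MO, height = sqrt(x)
    return per_NOP / per_MOP, area_NOP / area_MOP

for x in (1e-2, 1e-4, 1e-8):
    print(x, ratios(x))   # perimeter ratio -> 1, area ratio -> 0
```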
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4053482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I evaluate $\int_{1/3}^3 \frac{\arctan x}{x^2 - x + 1} \; dx$? I need to calculate the following definite integral:
$$\int_{1/3}^3 \frac{\arctan x}{x^2 - x + 1} \; dx.$$
The only thing that I've found is:
$$\int_{1/3}^3 \frac{\arctan x}{x^2 - x + 1} \; dx = \int_{1/3}^3 \frac{\arctan \frac{1}{x}}{x^2 - x + 1} \; dx,$$
but it doesn't seem useful.
| Hint
$$\arctan{x}+\arctan{\dfrac{1}{x}}=\dfrac{\pi}{2}$$
so
$$\int_{\frac{1}{3}}^{3}\dfrac{\arctan{x}}{x^2-x+1}dx=I$$
let
$x=\dfrac{1}{u}$,then
$$I=\int_{\frac{1}{3}}^{3}\dfrac{\arctan{\frac{1}{x}}}{x^2-x+1}dx$$
so
$$2I=\dfrac{\pi}{2}\cdot\int_{\frac{1}{3}}^{3}\dfrac{1}{x^2-x+1}dx$$
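A numerical check of the resulting value (a sketch; the closed form uses the standard antiderivative $\int \frac{dx}{x^2-x+1} = \frac{2}{\sqrt3}\arctan\frac{2x-1}{\sqrt3}$):

```python
import math

def simpson(g, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

f = lambda x: math.atan(x) / (x * x - x + 1)
I_direct = simpson(f, 1 / 3, 3, 20_000)

# From 2I = (pi/2) * integral of 1/(x^2 - x + 1) over [1/3, 3]:
F = lambda x: 2 / math.sqrt(3) * math.atan((2 * x - 1) / math.sqrt(3))
I_closed = math.pi / 4 * (F(3) - F(1 / 3))

print(I_direct, I_closed)  # the two values agree
```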
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4053720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Proof of the Existence of Integer Solutions for $(f_1)^2 + (f_2)^2 +( f_3)^2... = I^2$
Using fractions ($f_n$), where the integer $n$ is the number of fractions, prove that the sum of the squares of $f_1$ to $f_n$ has no integer ($I$) solutions when $n \geq 1$, given each fraction has a distinct positive integer numerator and denominator and every fraction is a fully simplified non-integer:
$$\left(f_1\right)^2 + \left(f_2\right)^2 + \left(f_3\right)^2 + \cdots = I^2$$
I could not find a proof, but using distinct positive integers $a,b,c,d,e,f$ I managed to prove this for the case of
$$\left(\frac{a}{b}\right)^2 + \left(\frac{c}{d}\right) ^2 = \left(\frac{e}{f}\right)^2 $$
by simplifying:
$$\frac{(ad)^2 + (bc)^2}{(bd)^2} = (\frac{e}{f})^2 $$
Since $(ad)^2 + (bc)^2$ must satisfy a Pythagorean triple, we know that each side length (excluding the hypotenuse) must be the product of two distinct integers, which can be proven through Euclid's formula:
$$ac = (2m)(n)$$
$$bd = (m -n)(m + n)$$
Q.E.D.
Although this still did not prove useful to answering my question, I later found a proof that the sum of the squares of $f_1$ and $f_2$ has no integer solutions using distinct positive integers $a,b,c,d$ where $(\frac{a}{b})^2 + (\frac{c}{d} ) ^2 = I^2$, using the following:
$$\frac {a^2}{b^2} = \frac {I^2d^2 - c^2}{d^2}$$
Both sides are in their simplest form since
$$\gcd(a^2,b^2) = 1 = \gcd(c^2,d^2) = \gcd(I^2d^2 - c^2, d^2)$$
so $a^2 = I^2d^2-c^2,\; b^2 = d^2$.
Given $b,d > 0$, $b=d$. Therefore no solutions exists.
But this lead me to question what if there were more than $2$ integer fractions? This would mean that
the denominator of some integer fraction could be the product of $2$ or more distinct integers, which suggests that this proof, which uses $b = d$, does not necessarily need to be true for there to be an integer solution where $n > 2$.
So I made a conjecture that there are no integer solutions for the sum of any number of squared fractions where the numerator and denominator are distinct positive integers and the fractions are fully simplified non-integers.
Edit:
This turned out to be false,
New questions:
Provide a proof
A method to finding integer solutions
(The one that finds the most effective method will be more likely to receive the bounty)
| This proof is INVALID, but pieces of it might give insights to the forms of solutions...
In fact as long as just the denominators are distinct, the sum of squares is never an integer, so it can't be the square of an integer.
Let's call the fractions $\frac{a_i}{b_i}$, where $a_i, b_i \in \mathbb{N}, b_i \geq 2$, and we require that for each index $i$, $\gcd(a_i,b_i)=1$ and for all index pairs $i \neq j$ that $b_i \neq b_j$. We look for solutions where:
$$S = \sum_{i=1}^n \frac{a_i^2}{b_i^2} \in \mathbb{Z}$$
Suppose by way of contradiction that the set of such solutions is not empty. Let $N$ be the smallest count with at least one solution using exactly $n=N$ fractions in this set. Then let $B$ be the smallest positive integer which appears as a denominator $b_i$ in any solution of size $N$.
Consider a "minimal" solution which has $n=N$ and has $b_i=B$ for some $i$. Let $p$ be a prime factor of $B$ (which is at least $2$). Let $\beta$ be the set of denominators $b_i$ which are a multiple of $p$. This is not the empty set since $p \mid B \Rightarrow B \in \beta$. If $\beta$ is the entire set $\{b_1, \ldots b_N\}$, then we have
$$ \sum_{i=1}^N \frac{a_i^2}{(b_i/p)^2} = p^2 \sum_{i=1}^N \frac{a_i^2}{b_i^2} = p^2 S \in \mathbb{Z} $$
is a solution with the same $N$, and one of the denominators is $B/p$, but this contradicts the choice of $B$ as the smallest denominator in any solution of size $N$.
If $\beta$ is not the entire set $\{b_1, \ldots b_N\}$, then the set difference $\beta^c = \{b_1, \ldots b_N\} \setminus \beta$ is not empty. The least common multiples of these two subsets of denominators satisfy $p \mid \mathop{\rm lcm}(\beta)$ and $p \nmid \mathop{\rm lcm}(\beta^c)$. Splitting the sum $S$ over these two subsets:
$$ S = \sum_{b_i \in \beta} \frac{a_i^2}{b_i^2}
+ \sum_{b_i \in \beta^c} \frac{a_i^2}{b_i^2} \\
S (\mathop{\rm lcm} \beta^c)^2 = \sum_{b_i \in \beta} \frac{a_i^2 (\mathop{\rm lcm} \beta^c)^2}{b_i^2} + \sum_{b_i \in \beta^c} a_i^2 \left(\frac{\mathop{\rm lcm} \beta^c}{b_i}\right)^2 \\
\sum_{b_i \in \beta} \left(\frac{a_i \mathop{\rm lcm}(\beta^c)}{b_i}\right)^2 = S (\mathop{\rm lcm} \beta^c)^2 - \sum_{b_i \in \beta^c} a_i^2 \left(\frac{\mathop{\rm lcm} \beta^c}{b_i}\right)^2 \in \mathbb{Z} $$
For each addend term in the left side, $p \mid b_i$ by definition of $\beta$. Since $\gcd(a_i, b_i) = 1$, $p \nmid a_i$, and we already noted $p \nmid \mathop{\rm lcm}(\beta^c)$. So the fraction is not an integer. Every $b_i$ in these fractions is a distinct integer, since they're a subsequence of the entire sequence of $N$ different $b_i$. Therefore the left side is a solution to the problem with $n = |\beta| < N$, contradicting the choice of $N$ as minimal.
EDIT: Here's the error. The $b_i$ values are distinct, but they are not necessarily the denominators of the new fractions when reduced to lowest form. The actual reduced denominators could duplicate some values, meaning the formula is not another solution to the problem as I modified it.
All possibilities lead to contradiction, so there are no solutions at all.
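Independently of the flawed argument above, the $n=2$ case proved in the question can be confirmed by exhaustive search over small reduced fractions with distinct denominators (an illustrative sketch; it says nothing about larger $n$):

```python
from math import gcd, isqrt

def n2_solutions(limit):
    """Search (a/b)^2 + (c/d)^2 = I^2 with reduced non-integer fractions, b != d."""
    sols = []
    for b in range(2, limit):
        for d in range(2, limit):
            if b == d:
                continue
            for a in range(1, limit):
                if gcd(a, b) != 1:
                    continue
                for c in range(1, limit):
                    if gcd(c, d) != 1:
                        continue
                    # Over the common denominator (bd)^2 the sum is num/den.
                    num = (a * d) ** 2 + (c * b) ** 2
                    den = (b * d) ** 2
                    if num % den == 0 and isqrt(num // den) ** 2 == num // den:
                        sols.append((a, b, c, d))
    return sols

print(n2_solutions(12))  # [] -- consistent with the b = d argument
```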
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4053903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can someone give me an example of two compatible atlases for the same differential structure? I'm a student of physics and I just began studying this topic. I read the definition of "compatible atlas" but I can't make clear examples of it by myself. Can someone help me?
| Here's a simple example. Take the usual atlas for the unit circle consisting of the four charts by (open) semicircles (two projecting to the $x$-axis, two projecting to the $y$-axis). Call these maps $\phi_i$, $i=1,2,3,4$. Now take an atlas consisting of the two charts by stereographic projection from the north pole and south pole respectively. Call these maps $\psi_j$, $j=1,2$. The two atlases are compatible, because on overlaps the maps $\phi_i\circ\psi_j^{-1}$ and $\psi_j\circ\phi_i^{-1}$ are smooth maps from appropriate open subsets of $\Bbb R$ to $\Bbb R$.
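One of these transition maps can be checked concretely: with the upper-semicircle chart projecting to the $x$-axis and stereographic projection from the north pole $(0,1)$, the transition function works out to the smooth rational map $t \mapsto 2t/(1+t^2)$ on the overlap $|t|>1$. A small numerical sketch (the function names are mine; the inverse-projection formula is the standard one):

```python
def stereo_inv(t):
    # inverse stereographic projection from the north pole (0, 1)
    return (2*t/(t*t + 1), (t*t - 1)/(t*t + 1))

def chart_upper(p):
    # chart on the open upper semicircle: project to the x-axis
    x, y = p
    assert y > 0
    return x

# on the overlap (|t| > 1, so the point lands in the upper semicircle),
# the transition map equals the smooth rational function t -> 2t/(1+t^2)
for t in [1.5, 2.0, -3.0, 10.0]:
    p = stereo_inv(t)
    assert abs(p[0]**2 + p[1]**2 - 1) < 1e-12   # point lies on the circle
    assert abs(chart_upper(p) - 2*t/(1 + t*t)) < 1e-12
```

The same kind of check works for each of the other chart pairs; compatibility just means every such composite is smooth where defined.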
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4053999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\forall x,y, \, \, x^2+y^2+1 \geq xy+x+y$ Prove that $\forall x,y\in \mathbb{R}$ the inequality $x^2+y^2+1 \geq xy+x+y$ holds.
Attempt
First attempt: I tried to see the geometric meaning, but I failed.
Second attempt: Consider the equivalent inequality given by $x^2+y^2\geq (x+1)(y+1)$
and then compare $\frac{x}{y}+\frac{y}{x} \geq 2 $ with the inequality $(1+\frac{1}{x}) (1+\frac{1}{y})\leq 2$; unfortunately the last inequality is not true in general, and hence I can't conclude the first inequality.
Third attempt: comparing $x^2+y^2$ and $(\sqrt{x}+\sqrt{y})^2$, but unfortunately I can't bound the term $2\sqrt{xy}$ by $xy$.
Any hint or advice on how I should think about the problem would be very useful.
| Look for perfect squares
$x^2 + y^2 + 1 \ge xy + x+y\iff $
$x^2 - 2xy+y^2 \ge -xy + x + y - 1\iff$
$(x-y)^2 \ge x+y - xy - 1$.
Now as the LHS is $\ge 0$, if we can show the RHS is $\le 0$ that would be great. If not... well, we may hit an inspiration on the way. We can always factor
$x+y - xy - 1 = x-xy + y-1 = x(1-y) + (y-1) = x(1-y)-(1-y) = (x-1)(1-y)$.
Hmmm, there is no need for that to be negative, but if it is positive then $(x-1)$ and $(1-y)$ are either both positive or both negative.
If both are positive then $x > 1$ and $y< 1$. So $x-y > x-1$ and $x-y > 1-y$ and so $(x-y)(x-y) > (x-1)(1-y)$.
And if both are negative then $x < 1<y$ and so $(x-y)^2 = (y-x)^2$ and $y-x > y-1>0$ and $y-x > 1-x>0$ and $(y-x)^2 > (y-1)(1-x) = (x-1)(1-y)$.
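In fact the difference of the two sides is a sum of squares, $2\big[(x^2+y^2+1)-(xy+x+y)\big]=(x-y)^2+(x-1)^2+(y-1)^2$, which also shows equality holds only at $x=y=1$. A quick randomized check confirms the identity numerically:

```python
import random

random.seed(0)
# the difference (x^2 + y^2 + 1) - (xy + x + y) equals
# ((x - y)^2 + (x - 1)^2 + (y - 1)^2) / 2, hence is >= 0
for _ in range(100000):
    x = random.uniform(-100, 100)
    y = random.uniform(-100, 100)
    diff = x*x + y*y + 1 - (x*y + x + y)
    sos = ((x - y)**2 + (x - 1)**2 + (y - 1)**2) / 2
    assert abs(diff - sos) < 1e-6
    assert diff >= 0
```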
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 9,
"answer_id": 6
} |
Is the set $\{(z, \bar{z}), z\in\mathbb{C}\} \subseteq \mathbb{C}^2$ Zariski closed? Question is in the title. It feels like the answer should be no, but I don't see a simple way to prove it.
I mean, if it were Zariski closed, then there would exist a polynomial expression in $z$ and $\bar{z}$ that would be $0$ for all $z\in\mathbb{C}$. And it's somewhat clear from a complex analysis perspective that this cannot exist (by considering partial derivatives with respect to $z$ and $\bar{z}$). But it feels like it should be more elementary than this. Am I missing something obvious?
| Suppose for contradiction that this set is Zariski closed, i.e. there exist polynomials $p_1, \dots, p_k \in \mathbb{C}[x,y]$ such that $p_i(a,b) = 0$ for all $i$ if and only if $a = \overline{b}$.
Now for each $i$, we have that $p_i(z,z) \in \mathbb{C}[z]$ has infinitely many zeros (e.g. any real number), so $p_i(z,z) = 0$.
Finally, for any $z \in \mathbb{C}$, we have $p_i(z,z) = 0$ for all $i$, so $z = \overline{z}$. This is a contradiction.
Edit: made a silly mistake at first (assumed $k = 1$), should be fixed now.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of set containment Let $X$ and $Y$ be sets. Does "$X$ contains $Y$" mean $Y \subseteq X$ or $Y\in X$, or is it ambiguous?
| The symbols $\subset$ and $\subseteq$ are used for sets which are subsets of another, which can also be described as containment. The former is used when the sets are not equal, the latter when they can be.
The symbol $\in$ is used for elements of a set, rather than whole sets. For example, you could say: $$y\in Y\subseteq X \implies y\in X$$
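In Python the two readings of "contains" are literally different operators, which makes the distinction concrete (a small illustrative sketch):

```python
# Y ⊆ X (subset) and Y ∈ X (membership) are different relations,
# which is why the English word "contains" can be ambiguous
A = frozenset({1, 2})
B = {1, 2, 3}
assert A <= B            # A ⊆ B: every element of A lies in B
assert A not in B        # but A is not itself an element of B

C = {0, 3, A}            # here the set A *is* an element of C
assert A in C
assert not (A <= C)      # while A is not a subset of C (1 ∉ C)
```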
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How many numbers greater than $4000$ can be formed using $0,2,4,5,7$?
How many four digit or five digit numbers greater than $4000$ can be formed using $0,2,4,5,7$, if none of the digits are repeated and the numbers formed are even?
I've tried all the ways I can think of to do this question but I always get a different answer to the answer sheet (the correct answer is 108 according to the book). I'm not really sure how to guarantee that it ends in an even number. Any help would be greatly appreciated!
| The strategy is to take care of the first and last digits, then permute the remaining digits over the middle positions. To end with an even number, you can only use 0, 2 or 4 as the last digit. You cannot use 0 as the first digit.
Choosing from 0, 2, 4, 5, 7:

* For 5-digit even numbers (all of which exceed 4000):

a. First digit even (2 choices: 2 or 4), last digit even (2 remaining choices), the remaining 3 digits permute over the middle 3 positions (3×2×1 = 6 ways)
$\to$ 2×2×6 = 24

b. First digit odd (2 choices: 5 or 7), last digit even (3 choices), the remaining 3 digits permute over the middle 3 positions (6 ways)
$\to$ 2×3×6 = 36

* For 4-digit even numbers greater than 4000:

a. First digit odd (2 choices: 5 or 7), last digit even (3 choices), middle 2 digits chosen in order from the remaining 3 (3×2 = 6 ways)
$\to$ 2×3×6 = 36

b. First digit even (1 choice: 4), last digit even (2 remaining choices: 0 or 2), middle 2 digits chosen in order from the remaining 3 (6 ways)
$\to$ 1×2×6 = 12

Total: 24+36+36+12 = 108
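The count is small enough to confirm by brute force over all no-repetition digit strings:

```python
from itertools import permutations

digits = ['0', '2', '4', '5', '7']
count = 0
for k in (4, 5):                      # four- or five-digit numbers
    for p in permutations(digits, k): # no repeated digits
        s = ''.join(p)
        n = int(s)
        if s[0] != '0' and n > 4000 and n % 2 == 0:
            count += 1
print(count)  # 108
```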
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
95 vectors in a cube (APMC 1995) Let $C$ denote the cube $[-1,1] \times [-1,1] \times [-1,1]$. Let $v_1,v_2,...,v_{95}$ be points (treated as vectors from the origin to these points) inside or on the boundary of $C$. Consider the set $W$ of $2^{95}$ vectors of the form $s_1v_1+s_2v_2+...+s_{95}v_{95}$ where each $s_i=1$ or $-1$.
(a) If $d=48$, then show that there exists a vector $w=(a,b,c)$ in $W$ such that $a^2+b^2+c^2 \le d$.
(b) Does there exist $d<48$ that satisfies (a)?
(c) What is the smallest value of $d$ that satisfies (a)?
Source : Austrian-Polish Mathematical Competition 1995 Problem 8
Generalisation : If we have $n$ vectors $v_1,v_2,...,v_n$ instead and consider $2^n$ vectors of the form $s_1v_1+s_2v_2+...+s_n v_n$ where $s_i=1$ or $-1$, what is the smallest number $d=d_n$ such that there is a vector $w=(a,b,c)$ in $W$ st $a^2+b^2+c^2 \le d$? At least what is a good upper bound for $d_n$?
Attempt (for the generalisation) :
Since $\sum_{s_i=\pm 1} \left|\sum_{j=1}^n s_jv_j\right|^2=2^n \sum_{j=1}^n|v_j|^2$ and $|v_j|\le \sqrt 3$, it follows that for $d\ge 3n$, there exists $w\in W$ st $|w|^2\le d$. I don't see how to improve this bound.
|
Lemma: If $v_1, \dots, v_5$ are five vectors in the cube $C = [-1, 1]^3$, then there exist two indices $1\leq i < j \leq 5$ and $s, t \in \{-1, 1\}$ such that $s v_i + t v_j \in C$.
Proof: We cut the cube $C$ into eight small cubes: $[0, 1] \times [0, 1]\times [0, 1]$, $[-1, 0] \times [0, 1] \times [0, 1]$ ... etc.
Two small cubes are called "opposite to each other" if they are images of each other with respect to the symmetry around the origin. E.g. $[0, 1] \times [0, 1] \times [0, 1]$ is opposite to $[-1, 0] \times [-1, 0] \times [-1, 0]$.
The eight small cubes then form four groups, each group being two small cubes that are opposite to each other.
Since we have five vectors, there are two of them that lie in the same group, say $v_i, v_j$. If they live in the same small cube, then $v_i - v_j$ is in $C$; otherwise, they live in opposite small cubes, and $v_i + v_j$ is in $C$.
Lemma: If $n > 4$ and we have $n$ vectors $v_1, \dots, v_n \in C$, then there exist $n - 1$ vectors $w_1, \dots, w_{n - 1} \in C$ such that the set $\{\sum_i t_i w_i: t_i \in \{-1, 1\}\}$ is a subset of $\{\sum_i s_i v_i: s_i \in \{-1, 1\}\}$.
Proof: We apply the previous lemma to $v_1, \dots, v_5$. Without loss of generality, assume that $sv_1 + tv_2$ is in $C$, with $s, t \in \{-1, 1\}$. Then it suffices to choose $w_1 = sv_1 + tv_2,\ w_2 = v_3,\ w_3 = v_4, \dots$.
Lemma: If $n \geq 4$ and we have $n$ vectors $v_1, \dots, v_n \in C$, then there exist $4$ vectors $w_1, \dots, w_4\in C$ such that the set $\{\sum_i t_i w_i: t_i \in \{-1, 1\}\}$ is a subset of $\{\sum_i s_i v_i: s_i \in \{-1, 1\}\}$.
Proof: Use the previous lemma and induction on $n$.
Now it is enough to use your argument (applied to $n = 4$) to conclude that $d = 12$ works for any $n \geq 4$.
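The pairing lemma can be spot-checked numerically (a randomized sketch, not a proof):

```python
import random
from itertools import combinations

random.seed(1)

def in_cube(v):
    return all(-1 <= c <= 1 for c in v)

# check the pairing lemma: among any five vectors in [-1,1]^3 some
# signed combination s*v_i + t*v_j stays inside the cube
for _ in range(1000):
    vs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]
    found = any(
        in_cube([s*a + t*b for a, b in zip(vi, vj)])
        for vi, vj in combinations(vs, 2)
        for s in (1, -1) for t in (1, -1)
    )
    assert found
```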
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
function in $\mathbb{Q}$ with positive derivative A function $f: \mathbb{Q} \to \mathbb{Q}$ is differentiable over $\mathbb{Q}$ with $f'(q)>0$ for every $q$ in $\mathbb{Q}$.
Normally, when the derivative is positive everywhere on the domain, the function is increasing. I'm looking for a counterexample showing that this isn't always true for a function defined on $\mathbb{Q}$.
| Consider the function
$$f(q)=\begin{cases}q, & \textrm{if }q^3<2 \\ q-1, & \textrm{if }q^3\ge2.\end{cases}$$
We have
$$f'(q)=\lim_{h\to 0}\frac{f(q+h)-f(q)}{h}=1 \quad \forall q \in \mathbb{Q}.$$
However, $f(q)$ is not monotonically increasing.
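A concrete check with exact rational arithmetic (the sample points $1.2$ and $1.3$ are arbitrary rationals straddling the irrational cut point $\sqrt[3]{2}\approx 1.26$):

```python
from fractions import Fraction

def f(q):
    # q is rational; since 2^(1/3) is irrational, the condition
    # q**3 < 2 vs q**3 >= 2 splits Q into two clopen pieces
    return q if q**3 < 2 else q - 1

a, b = Fraction(12, 10), Fraction(13, 10)
assert a < b
assert f(a) == Fraction(12, 10)       # 1.2^3 = 1.728 < 2
assert f(b) == Fraction(3, 10)        # 1.3^3 = 2.197 >= 2
assert f(a) > f(b)                    # not increasing, despite f' = 1
```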
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I find the minimum of a quartic function with a missing coefficient? $$ f(x) = 7x^4 + kx^3 + 8 $$
I have a question asking me to state the range of this function. To do this it seems I have to find the relationship between k and the minimum of the function. I've been told that there is a way to do this with calculus, but this is an algebra class and I haven't learned any calculus yet. I've tried coming up with random factors and then using the remainder theorem to solve for k, but that just gives me an identity.
How do I find the range of this function without knowing k?
Alternatively, is it possible to find k using the given information?
| $$ f(x) = 7x^4 + kx^3 + 8 $$ has a stationary point $(\alpha,f(\alpha))$ whenever for some real $v,$ $$ g(x) = 7x^4 + kx^3 + 8 + v $$ has a repeated zero $\alpha,$ i.e., $$g(x)=7(x-\alpha)^2(x^2+\beta x+\gamma)\\=7x^4+(7\beta-14\alpha)x^3+(7\gamma-14\alpha\beta+7\alpha^2)x^2+(-14\alpha\gamma+7\alpha^2\beta)x+7\alpha^2\gamma.\\$$
Comparing coefficients (the $x^2$- and $x$-coefficients of $g$ are $0$) gives $$\alpha=0$$ or $$\gamma=\tfrac12\alpha\beta,\qquad \gamma=2\alpha\beta-\alpha^2\ \Rightarrow\ \beta=\tfrac23\alpha,$$ and then the $x^3$-coefficient gives $$k=7\left(\tfrac23\alpha\right)-14\alpha\ \Rightarrow\ \alpha=-\tfrac{3}{28}k.$$
Now, $f(x)=x^3(7x+k)+8,\ $ so $f(x)\rightarrow\infty$ as $x\rightarrow\pm\infty.$
Therefore,
* if $k=0,$ $f(x)$ has one stationary point, the minimum point $(0,8),$ so $$f(x)\in[8,\infty);$$
* otherwise, $f(x)$ has two stationary points, the inflection point $(0,8)$ and the minimum point $\left(-\frac{3}{28}k,\ 8-\frac{27}{87808}k^4\right),$ so $$f(x)\in\left[8-\frac{27}{87808}k^4,\infty\right).$$
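A numerical spot-check of the critical point $\alpha=-\frac{3}{28}k$ and the minimum value $8-\frac{27}{87808}k^4$ (the sample values of $k$ are arbitrary):

```python
# check the claimed minimum of f(x) = 7x^4 + k x^3 + 8 at x = -3k/28,
# with value 8 - 27 k^4 / 87808, by sampling around the critical point
def f(x, k):
    return 7*x**4 + k*x**3 + 8

for k in (-5, -1, 2, 7):
    alpha = -3*k/28
    fmin = 8 - 27*k**4/87808
    assert abs(f(alpha, k) - fmin) < 1e-9
    # nearby points are no smaller
    for h in (-0.1, -0.01, 0.01, 0.1):
        assert f(alpha + h, k) >= fmin - 1e-12
```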
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4055140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $\lim_{x\to-\infty} xe^{-x}$ Not sure how to think about this one.
$$\lim_{x\to-\infty} xe^{-x}$$
If I place "$-\infty$" in place for $x$ I get
$$(-\infty)e^{\infty}$$
the limit of which would seem to be $0$ as $x$ approaches $-\infty$.
But in fact the limit is $-\infty$ (if looking at the drawn graph) as $x$ approaches $-\infty$.
| Intuitively we can see the limit is $-\infty$: as $x\to-\infty$, the factor $e^{-x}$ grows without bound while $x$ is negative, so the product $xe^{-x}$ becomes arbitrarily large and negative.
To make it rigorous, for any given $\epsilon<0$ we want to find $x_0(\epsilon)<0$ such that
$$x<x_0(\epsilon)\implies xe^{-x}<\epsilon$$
We know that $$e^x \ge1+x \implies e^{-x}\ge 1-x$$So for $x\le 0 $ we have $$xe^{-x}\le x(1-x)$$Let $$x(1-x)\lt \epsilon \implies -x^2 +x - \epsilon \lt 0 \implies x^2 - x+ \epsilon\gt 0$$By choosing $x_0(\epsilon) = \frac{1 - \sqrt{1 - 4\epsilon}}{2} - 1$, the proof is complete. We can check it easily $$x_0(1-x_0) =\epsilon - \sqrt{1 - 4\epsilon} - 1 \lt \epsilon \iff 1 + \sqrt{1 - 4\epsilon} \gt 0$$which is true since $\sqrt{1 - 4\epsilon} \gt 0$.
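A quick numerical illustration (the sample points are arbitrary) of how fast $xe^{-x}$ heads to $-\infty$:

```python
import math

# x * e^(-x) for increasingly negative x: the factor e^(-x) blows up
# much faster than x shrinks, so the product tends to -infinity
prev = 0.0
for x in (-1, -5, -10, -20, -50):
    val = x * math.exp(-x)
    assert val < prev      # strictly more negative at each step
    prev = val
assert prev < -1e20        # already astronomically negative at x = -50
```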
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4055301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Very confused about the definition of subbasis for a topology in Munkres's book In Munkres's Topology textbook, it says,
Definition
A subbasis S for a topology on X is a collection of subsets of X whose union equals X.
The topology generated by the subbasis S is defined to be the collection T of all unions of finite intersections of elements of S.
I am having a very hard time understanding the first sentence. Let's say $X = \{1, 2, 3\}$. Then let $A = \{\{1\}, \{2\}, \{3\}\}$, or $A = \{\{1, 2\}, \{3\}\}$. Either way, $A$ is a collection of subsets of $X$, and the union of the elements in $A$ is $X$, but $A$ does not seem like a subbasis here...
What did I misunderstand here? What should be the correct interpretation of "a collection of subsets of X whose union equals X"?
Thank you in advance!
Edit:
So turned out $A$ is a subbasis in both cases, thanks @Surb for your comment.
But here is what I don't understand (which made me mistakenly think A may not even be subbasis in the first place). If $A = \{\{1\}, \{2\}, \{3\}\}$, then wouldn't "the collection T of all unions of finite intersections of elements of $A$" be {$\varnothing$}, since any intersection of any two elements in $A$ is empty? But since T is a topology, T must also contain $X$, but, in this case, it does not?
| The first "subbase" is {{1}, {2}, {3}}. Note that "finite intersections of elements of S" includes the intersection of a single element (the element itself) and, by the usual convention, the empty intersection, which is all of X — so the generated topology contains much more than just the empty set. The set of all possible unions is {{1}, {2}, {3}, {1,2}, {1, 3}, {2, 3}, {1, 2, 3}}.
The set of all possible intersections is {{1}, {2}, {3}, {}}.
The set of all possible unions and intersections is {{}, {1}, {2}, {3}, {1,2}, {1, 3}, {2, 3}, {1, 2, 3}}. That is a topology, in fact it contains all subsets, the "discrete topology" for {1, 2, 3}.
The second subbase is {{1, 2}, {3}}.
The set of all possible unions is {{1, 2}, {3}, {1, 2, 3}}.
The set of all possible intersections is {{1, 2}, {3}, {}}.
The set of all unions and intersections is {{}, {3}, {1, 2}, {1, 2, 3}}. Again, that is a topology for {1, 2, 3}.
If you could not see that, are you clear what a "topology" for a set is? A topology for a set, X, is a collection of subsets or X that has four properties:
* It contains X itself.
* It contains the empty set, {}.
* It contains all unions of its sets.
* It contains all finite intersections of its sets.
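The procedure above — take finite intersections, then arbitrary unions — can be automated for small examples (a sketch; `generated_topology` is my own helper name, and the empty intersection is taken to be X by convention):

```python
from itertools import chain, combinations

def generated_topology(X, subbasis):
    # finite intersections of subbasis elements
    # (the empty intersection is X itself, by convention)
    inters = {frozenset(X)}
    for r in range(1, len(subbasis) + 1):
        for combo in combinations(subbasis, r):
            inters.add(frozenset(set(X).intersection(*combo)))
    # arbitrary unions of those intersections (the empty union is {})
    topo = set()
    base = list(inters)
    for r in range(len(base) + 1):
        for combo in combinations(base, r):
            topo.add(frozenset(chain.from_iterable(combo)))
    return topo

X = {1, 2, 3}
t1 = generated_topology(X, [{1}, {2}, {3}])
t2 = generated_topology(X, [{1, 2}, {3}])
print(len(t1))                  # 8: the discrete topology
print(sorted(map(sorted, t2)))  # [[], [1, 2], [1, 2, 3], [3]]
```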
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4055477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Continuous function iff $p \not\in fr(A)$ Let $M$ be a metric space, $A \subset M$ and let $\chi_A: M \to \mathbb{R}$ be a map defined by:
$$\chi_A(x)=
\begin{cases}
1 & \text{if $x \in A$} \\
0 & \text{if $x \not\in A$}
\end{cases}$$
For a $p \in M$, show that $\chi_A$ is continuous at $p$ if and only if $p \not\in fr(A)$.
My idea was supposing that $p \in fr(A)$ and try to get a contradiction, but I couldn't reach a solid result. Any leads?
| $M$ splits into $3$ disjoint sets: $M=\operatorname{int}(A) \cup \operatorname{int}(A^\complement) \cup \operatorname{Fr}(A)$, based on whether $x$ has a ball inside $A$, a ball inside $A^\complement$, or whether every ball around $x$ intersects both $A$ and $A^\complement$.
In the former two cases there is one $\delta$ that works for all $\varepsilon>0$ (the function is locally constant). In the latter no $\delta>0$ can work for $\varepsilon = \frac12$.
So $\chi_A$ is discontinuous exactly at the points $x \in \operatorname{Fr}(A)$, i.e. continuous at $p$ if and only if $p \notin \operatorname{Fr}(A)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4055689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I know if two events are independent or not? Let's say we flip a coin twice:
The sample space (set of all possible outcomes) is:
$$\Omega = \{HH, HT, TH, TT\}$$
Now let's say we have the following two events (sets of outcomes):
$$A= \{HH, HT, TH\}, B = \{HT, TH, TT\}$$
where $A$ is the event that at least one coin is heads and $B$ is the event that at least one coin is tails.
Now the intersection between A and B ($A\cap B$) is the set of outcomes that are shared between A and B, the event (C) where at least one coin is heads AND at least one coin is tails:
$$C = A\cap B = AB = \{HT, TH\}$$
In this trivial example, I can calculate the probability of C by taking the size of the event over the size of the sample space: $$P(C) = \frac{size(C)}{size(\Omega)} = \frac{2}{4} = 0.5$$
But I am struggling to understand how I would find this answer if the sets were too large or complicated to count out.
The calculation $P(A)*P(B) = \frac{9}{16}$ clearly gives a different answer. I think this means that events A and B are not independent. Two events are dependent if the outcome of the first event affects the outcome of the second event, so that the probability is changed. So conceptually, how/why is A affecting B (or vice-versa)?
And more generally, if it's not conceptually clear that one event is influencing another, does that mean the only way to determine if they are independent is by collecting data from observations?
| I have a potential answer to my question.
I think the issue I am having is conceptualizing the definition of "dependent events."
I think this is a more useful definition: Events are dependent if the outcome of one affects the probability that the conditions of the other event will be satisfied (AKA, that it would occur).
In the example in the question, we can see that the outcome of event A will indeed affect the probability that event B occurs. If the outcome is HT or TH, then the probability of B occurring is 100%, because the condition of B — at least one Tail — is satisfied by both those outcomes. If the outcome is HH, then the probability of B occurring is 0%, because the condition of B is not met by that outcome.
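The numbers in the question are easy to confirm by enumerating the sample space exactly:

```python
from fractions import Fraction
from itertools import product

omega = list(product("HT", repeat=2))          # [('H','H'), ('H','T'), ...]
A = [o for o in omega if 'H' in o]             # at least one head
B = [o for o in omega if 'T' in o]             # at least one tail
AB = [o for o in omega if o in A and o in B]

P = lambda E: Fraction(len(E), len(omega))
assert P(AB) == Fraction(1, 2)
assert P(A) * P(B) == Fraction(9, 16)          # != P(AB): not independent
assert P(AB) != P(A) * P(B)
```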
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4055842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Existence condition for rational polynomials First of all sorry for the stupid question but my knowledge in mathematics is not big. I am studying by myself how determine the existence condition for a polynomial.
In particular I am not sure to perform correctly the following exercise.
My book says: determine the the existence condition of the following expression before simplifying this expression
$$\left(\frac{a^2}{4a^2+4ab+b^2}-\frac{a-b}{6a+3b}\right):\frac{a^3-b^3}{12a+6b}$$
First of all I would rewrite the expression in this way:
$$\frac{\left(\frac{a^2}{\color{red}{(2a+b)^2}}-\frac{a-b}{\color{green}{3(2a+b)}}\right)}{\frac{a^3-b^3}{\color{blue}{6(2a+b)}}}$$
so since I should have all denominators in the expression different from $0$, I have imposed the following:
$$\begin{cases} \color{red}{(2a+b)^2}\neq 0\\
\color{green}{3(2a+b)}\neq 0\\
\color{blue}{6(2a+b)}\neq 0\\
a^3-b^3\neq 0\\
\end{cases}\iff b\neq -2a; a\neq b $$
$\textbf{My doubt:}$ I am not sure on what I have done. In particular I have imposed, in addition to the first three condition (red, blue and green) referred to the denominator of the fractions in the expression, also that the term $\frac{a^3-b^3}{\color{blue}{6(2a+b)}}$ has to be different from $0$.
I have thought that since it is a factor that the exercise tell us to divide thanks to the symbol "$:$", in addition to the existence condition on this fraction I should be sure that this fraction is not identically $0$ since it represents a denominator for the term $\left(\frac{a^2}{4a^2+4ab+b^2}-\frac{a-b}{6a+3b}\right)$: so I have to impose also the condition in black, right?
Can you help me please?
| Everything that you have done is correct, though I'm going to add some extra notes here as you seem a little uncertain and say that you're teaching yourself.
You're looking at what are called rational polynomials as they are the ratio of polynomials (or "fractions") and so the existence condition is basically that these things are not allowed to be undefined. That happens if you divide by $0$, and so that is why you're checking the conditions that you're checking.
The exercise is first to determine if each of the polynomial fractions exists, and then to determine if their ratio (indicated by the ":" as you noted) exists, which again requires division as you have understood.
Everything else is fine; you can simplify your calculations a little by noticing that the term $(a+2b)$ is present and dominant in the red, green and blue terms, so all three are controlled by requiring $a\not=2b$.
Finally, yes, you need to be sure that $a^3 - b^3 \not= 0$ as well for the final ratio/division check. This is the first time when it might matter what number field you're working in: $a^3-b^3 = (a-b)(a^2+2ab+b^2)$ and so if you're working over $\mathbb C$ then there will be two additional complex roots that $a$ and $b$ are not allowed to take on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4056044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
On leap years and conditional probability Edit
(Copying this from the end of my post and pasting it at the start because people are still voting to close this.)
Before anyone flags this as a duplicate, I understand that the following exact problem has been posted before, but my post is about why my logic (for the second question below) is wrong (and not about what the correct answer should be).
Problem
A year in the 2020s (i.e. from 2020 to 2029 inclusive) is selected uniformly at random, a month is selected uniformly at random from that year and a day is selected uniformly at random from that month.
*
*What is the probability that the day is the 29th of February?
*Given that the day is the 29th, what is the probability that the month is February?
My idea for Question 1
The first question is very clear-cut.
First, we observe that only leap years have a "29th of February" and out of the entire decade, only 3 are leap years (i.e. 2020, 2024 and 2028), so we have a $\frac 3 {10}$ chance of picking a leap year.
By the same logic, we have a $\frac 1 {12}$ chance of picking February out of the leap year we have chosen and we have a further $\frac 1 {29}$ chance of picking the 29th of February from the February of the leap year we have chosen.
Thus, $\mathbb{P}(\mathrm{29th\ of\ February}) = (\frac 3 {10})(\frac 1 {12})(\frac 1 {29}) = \frac 1 {1160}$.
Okay. Easy.
My idea for Question 2
Now, I understand that the second question is on conditional probability and Bayes' Theorem should be invoked. However, smart me thought I saw a short-cut (obviously incorrectly) and I have been wondering why my logic (as explained below) is wrong.
Since we are "given that the day is the 29th", I thought that we need only pick a year and a month. In other words, once we have a year and a month set, the probability that the day is the 29th should be $1$ right?
Thus, $\mathbb{P}(\mathrm{February\ |\ 29th}) = (\frac 3 {10})(\frac 1 {12}) = \frac 1 {40}$.
Again. Easy. Or so I thought.
Suggested solution for Question 2
From the solution given (as explained below), it seems I have oversimplified Question 2. According to my professor, we need to break the question down much more than I had.
First, we must observe that, out of the entire decade (i.e. 120 months), we have 3 months with 29 days, 40 months with 30 days (since 4 months in a year have 30 days) and 70 months with 31 days (since 7 months in a year have 31 days). We do not count the "normal Februarys (i.e. the Februarys with 28 days)" as it is obvious that we cannot possibly have a 29th day from those months.
Thus, $\mathbb{P}(\mathrm{February\ |\ 29th}) = \frac {(\frac 3 {120})(\frac 1 {29})} {(\frac 3 {120})(\frac 1 {29}) + (\frac {40} {120})(\frac 1 {30}) + (\frac {70} {120})(\frac 1 {31})} = \frac {279} {9965}$.
While I perfectly understand my professor's logic for the second question, I cannot help but wonder why my idea is flawed. Any intuitive explanations will be greatly appreciated!
| For (1) you have a probability of 29th February chosen of $(\frac 3 {10})(\frac 1 {12})(\frac 1 {29}) = \frac 1 {1160}$
The equivalent probability of 29th March is $(\frac {10} {10})(\frac 1 {12})(\frac 1 {31})$ and there are $7$ months with $31$ days
The equivalent probability of 29th April is $(\frac {10} {10})(\frac 1 {12})(\frac 1 {30})$ and there are $4$ months with $30$ days
So that makes the probability of February given a 29th was chosen:
$$\dfrac{(\frac 3 {10})(\frac 1 {12})(\frac 1 {29}) }{1(\frac 3 {10})(\frac 1 {12})(\frac 1 {29}) +7(\frac {10} {10})(\frac 1 {12})(\frac 1 {31})+4(\frac {10} {10})(\frac 1 {12})(\frac 1 {30}) }= \dfrac{\frac 3 {29} }{\frac 3 {29} +\frac{70}{31}+\frac {40} {30}} =\dfrac{279}{9965}$$ which is the suggested solution.
It is about $0.028$, slightly more than your $\frac1{40}=0.025$. The reason is that because leap-year Februaries are shorter than Marches or Aprils, choosing a 29th is slightly more likely in such Februaries, so the reverse conditionality takes this into account.
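The computation can be verified with exact rational arithmetic (the month counts are as in the solution above):

```python
from fractions import Fraction

# months in the decade by length: 3 leap Februaries (29 days),
# 40 thirty-day months, 70 thirty-one-day months, 7 ordinary Februaries
month_types = [(3, 29), (40, 30), (70, 31), (7, 28)]
total_months = 120

# posterior P(February | the chosen day is a 29th), via Bayes;
# months without a 29th (days < 29) contribute nothing
num = Fraction(3, total_months) * Fraction(1, 29)
den = sum(Fraction(cnt, total_months) * Fraction(1, days)
          for cnt, days in month_types if days >= 29)
assert num / den == Fraction(279, 9965)
print(float(num / den))  # ≈ 0.028
```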
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4056177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove or give a counterexample: Over C,$\,$ $0$ is the only eigenvalue $\iff$ $T$ is nilpotent. Suppose $V$ is a complex vector space and $T$ is a linear map on $V$.
Prove or give a counterexample: $\ $ $0$ is the only eigenvalue of $T$ $\iff$ $T$ is nilpotent.
Two things already known :
(1) $\ $ If $T$ is nilpotent, then $0$ is the only eigenvalue of
$T$.
(2) $\ $ On finite-dimensional complex vector spaces, if $0$
is the only eigenvalue of $T$, then $T$ is nilpotent.
So we only need to consider one direction. The question is equivalent to:
On infinite-dimensional complex vector spaces, $\,$ $0$ is the only eigenvalue $\,$ $\Longrightarrow$ $\,$$T$ is nilpotent $\ ?$
Any insights are much appreciated.
| Consider the operator $T:(x_i)_{i\in\mathbb{N}}\mapsto(0,0,x_2,x_3,\ldots)$, that is $T\mathbf{x}=R\mathbf{x}-x_1\mathbf{e}_2$, where $R$ is the right-shift operator.
Then $0$ is an eigenvalue since $T\mathbf{e}_1=\mathbf{0}$.
But there are no other eigenvalues (as in the proof that $R$ has no eigenvalues). Moreover, $T$ is not nilpotent since $T^n\mathbf{e}_2=\mathbf{e}_{2+n}\ne\mathbf{0}$
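The operator can be modeled concretely on finitely supported sequences (a sketch; Python lists stand in for sequences, with trailing zeros implicit):

```python
# T(x1, x2, x3, ...) = (0, 0, x2, x3, ...) on finitely supported sequences
def T(x):
    return [0, 0] + list(x[1:])

e1 = [1]
e2 = [0, 1]

assert all(c == 0 for c in T(e1))   # T e1 = 0, so 0 is an eigenvalue
v = e2
for n in range(1, 6):
    v = T(v)
    # T^n e2 = e_{2+n}: a single 1 in coordinate 2 + n
    assert v[n + 1] == 1 and sum(map(abs, v)) == 1
```

No power of T kills e2, which is exactly the failure of nilpotency.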
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4056336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find covariance of estimator and derivative of the log-likelihood function Problem:
Given an estimator $\hat k$. The estimation method is either max likelihood or other method. We know that it's unbiased.
Let $L$ be the likelihood function and $\ell = ln L$.
Find $\Bbb Cov( \frac{d \ell}{d k},\hat k)$
My attempt:
$$\Bbb Cov( \frac{d \ell}{d k},\hat k) = E( \frac{d \ell}{d k} \cdot \hat k) - E( \frac{d \ell}{d k}) \cdot E( \hat k)$$
Since $\hat k$ is unbiased, $E( \hat k) = k$. Also, $\frac{d \ell}{d k}$ is the score function, and we know that $E(s(k)) = 0$.
Hence, we have
$$E( \frac{d \ell}{d k} \cdot \hat k) - E( \frac{d \ell}{d k}) \cdot E( \hat k) =E( \frac{d \ell}{d k} \cdot \hat k) $$
I decided to take $\hat k$ into the differential.
$$=E( \frac{d \ell \cdot \hat k}{d k})$$
Since the expectation is linear, I can take the derivative out of it.
$$= \frac{d E(\ell \cdot \hat k)}{d k}$$
However, I am still stuck. Am I doing something wrong?
Note: a friend of mine did manage to solve this, so the task has enough information to be solved.
| As all the commenters noted, the likelihood function makes little sense if your estimator is obtained by, say, the method of moments. I would assume that, in this case, they don't correlate, so the covariance is zero.
Let $\hat k$ be MLE.
Since $\hat k$ is unbiased, replace the true parameter $k$ in the log-likelihood function with $E[\hat k]$. Then, if you swap the derivative and the expectation, you will have something like $\ell (\hat k)$. The MLE maximizes the log-likelihood function; consequently, the derivative is zero. The further steps are trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4056593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is $\kappa(A^{-1})= \kappa(A)$? In my syllabus we have the alternative definition of the condition of a matrix:
$$\kappa(A)= \frac{\text{max}_{\| \vec{y} \| =1}\| A \vec{y} \|}{\text{min}_{\| \vec{y} \| =1}\| A \vec{y} \|}$$
It also says that $\kappa(A^{-1})= \kappa(A)$ follows from this definition of the condition of a matrix, but gives no explanation. Therefore, my question is: why is $\kappa(A^{-1})= \kappa(A)$?
| First you must accept that if $A^{-1}$ doesn't exist then by convention $\kappa(A)=\infty$.
Once you've postulated that $A^{-1}$ exists, you want to show
$$\max_{\| x \|=1} \| A^{-1} x \|=\left ( \min_{ \| x \|=1 } \| Ax \| \right )^{-1}.$$
To see that, take a minimizer $x^*$ on the right side. Let $b=\frac{Ax^*}{\| Ax^* \|}$; then look at $A^{-1} b=\frac{x^*}{\| Ax^* \|}$. You picked $x^*$ so that among unit vectors, $\| Ax^* \|$ is as small as possible, so $\| A^{-1} b \|=\frac{1}{\| Ax^* \|}$ is as large as possible among unit vectors $b$.
Intuitively, if $A$ maps $x^*$ to some much smaller vector $b$, then $A^{-1}$ maps $b$ back to $x^*$, which is much bigger than $b$. Juggling the normalizations obfuscates this intuition a little bit, I think.
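If the norm is the Euclidean one, the definition above is $\sigma_{\max}/\sigma_{\min}$, which is what numpy's `cond` computes by default, so the identity is easy to check on random matrices (sizes and seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((5, 5))
    # 2-norm condition number: sigma_max / sigma_min; it is
    # invariant under matrix inversion
    assert np.isclose(np.linalg.cond(A),
                      np.linalg.cond(np.linalg.inv(A)), rtol=1e-6)
```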
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4056723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$f (x) = 1$ if the digit $9$ appears in the decimal representation of $x$ and $f (x) = 0$ otherwise — show that $f$ is continuous
The function $f : [0, 1] \to \Bbb R$, where $f (x) = 1$ if the digit $9$ appears in the decimal representation of $x$ and $f (x) = 0$ otherwise. We use a decimal representation that does not end in repeating $9$s. Show that $f$ is continuous at $x$ if and only if $f (x) = 1$. (Use only the $\varepsilon$, $\delta$ definition of continuity. You are not allowed to use limits.)
Intuitively I know that if $x$ is a number whose decimal representation contains a $9$, then we can find numbers with the same property in any neighbourhood of $x$; and if it does not, we can still find numbers containing a $9$ in any neighbourhood. But I couldn't write this formally, and after several days I gave up. I understand the basic idea, but I couldn't write anything formally. Please help me.
| The set of numbers with a $9$ in the decimal representation is dense in $[0, 1]$ (just replace the $n$-th digit by $9,$ for some high $n.$) This shows that the function is not continuous if it is $0.$ On the other hand, the set of numbers which do not have a nine is not dense (a number which has a nine in the $k$-th position will have it if perturbed by something smaller than $10^{-k-2}.$)
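Both halves of the argument can be illustrated numerically (a sketch; the sample numbers and digit cutoffs are my own choices, and floating point limits us to finitely many digits):

```python
# a number with a 9 at decimal position k keeps that 9 under any
# perturbation smaller than 10^(-k-2), so f = 1 on a whole neighbourhood
def has_nine(x, digits=15):
    s = f"{x:.{digits}f}"           # fixed-point decimal expansion
    return '9' in s.split('.')[1]

x = 0.12945                          # has a 9 in position 3
for eps in (1e-6, -1e-6, 3e-7):      # |eps| < 10^(-3-2)
    assert has_nine(x + eps)

# but near any 9-free number there are 9-containing numbers arbitrarily close
y = 0.125
assert not has_nine(y, digits=8)
assert has_nine(y + 9e-7, digits=8)  # 0.12500090
```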
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4056938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Calculate the distance that $h(t)=1+e^{-t}\cos(t)$ drops. A weight is hanging from a spring and oscillates with height modeled by the function $h(t)=1+e^{-t}\cos(t)$ at time $t\geq 0.$ Does it travel an infinite distance (in other words, does it converge or diverge) as $t\to\infty$, and also, what is the total distance that it drops?
Note: the distance dropped indicates only the downward motion of the function, not when the function increases.
I have tried to plot this and it seems from the graph that it only drops to $\frac{3\pi}{4}$ and increases from that point on. I got a final answer of $\frac{1}{1-e^{-\pi}}.$ For the other question of whether the weight travels infinitely, the answer is a simple no.
| We have $\cos(t)=\frac{1}{2}\cdot(e^{it}+e^{-it}),$ so rewrite to
$$1+e^{-t}\cdot\frac{1}{2}\cdot(e^{it}+e^{-it})=1+\frac{1}{2}\cdot(e^{t(i-1)}+e^{t(-i-1)})$$
Now integrate over $t$ to get $t+\frac{1}{2}\left(\frac{e^{t(i-1)}}{i-1}-\frac{e^{t(-i-1)}}{i+1}\right)$; evaluating from $0$ to infinity gives infinity, and thus it travels an infinite distance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Dimension of $L^p$ space Let $X = \{a,b,c\}$, $\mu$ is a measure on $2^X$ with $\mu(\{a\})=0, \mu(\{b\})=1, \mu(\{c\})=\infty$.
What is the dimension of $L^1(\mu)$, $L^2(\mu)$ and $L^\infty(\mu)$?
Consider a function $f$ on $2^X$, then we have
$||f||_{L^1} = |f(a)|\mu(\{a\})+|f(b)|\mu(\{b\})+|f(c)|\mu(\{c\}) = |f(b)|+|f(c)|\cdot\infty$.
Similarly,
$||f||_{L^2} = (|f(b)|^2+|f(c)|^2\cdot\infty)^{\frac{1}{2}}$, and $||f||_{L^\infty}=\text{esssup}\{|f(a)|,|f(b)|,|f(c)|\}=\sup\{|f(b)|,|f(c)|\}$.
For $f\in L^1(\mu)$ or $f\in L^2(\mu)$, $|f(b)|<\infty$, $f(c)=0$, and $f(a)$ can be finite or infinite (two choices), so $\dim L^1(\mu)=\dim L^2(\mu)=3$.
For $f\in L^\infty(\mu)$, $|f(b)|<\infty$, $|f(c)|<\infty$, so $\dim L^\infty(\mu)=4$.
Is this correct?
| Note that any function space on $X$ is a subset (up to equivalence classes) of $\mathbb R^X$. The collection $\{\mathbb 1_{\{a\}},\mathbb 1_{\{b\}},\mathbb 1_{\{c\}}\}$ forms a basis for this space, so the dimension of any $L^p$ space is at most $3$. Also, your functions technically shouldn't be allowed to take infinite values if you want to talk about the standard dimension of a vector space, because the extended real line is not a field.
Now let $f\in L^1(\mu)$. Then as you say $|f(a)|\mu(\{a\})+|f(b)|\mu(\{b\})+|f(c)|\mu(\{c\})<\infty$. This is only possible if $f(c)=0$. Thus we can write $f=f(a)\mathbb 1_{\{a\}}+f(b)\mathbb 1_{\{b\}}$. However remember that $L^1(\mu)$ is actually the quotient space $\mathcal L(\mu)/\ker(\|\cdot\|_1)=\mathcal L(\mu)/\operatorname{span}(\mathbb 1_{\{a\}})$, so the dimension of $L^1(\mu)$ must be $1$. I leave the rest to you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the number of solution of the equation $\ x^3 - \lfloor x\rfloor = 3$? What is the number of solution of the equation $\ x^3 - \lfloor x\rfloor = 3$ ? (where $\lfloor x \rfloor\ $ is the greatest integer $\le x$)
I tried plotting the graphs of these equations on Desmos graph calculator and that they intersect each other in between x = 1 and 2 but I couldn't figure out a way to get to this conclusion on my own.
Is there any way by which I can determine where these functions intersect?
| Notice that $x-1 < \lfloor x \rfloor \leq x$, so $x^3-x+1 > x^3- \lfloor x \rfloor \geq x^3-x$, and hence $x^3-x+1 > 3 \geq x^3-x$. The second part of the inequality ($3 \geq x^3-x$) shows that $x<2$, since $2^3-2=6>3$; the first part ($x^3-x>2$) shows that $x > 1$, since $x^3-x\leq 2$ for all $x\leq 1$. This means that $\lfloor x \rfloor=1$, so $x^3-1=3$ and $x=\sqrt[3]4$.
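To illustrate the argument numerically (my own check, not part of the answer): on each interval $[k,k+1)$ the equation reads $t^3=k+3$, and one can scan which $k$ admit a real cube root lying in that interval.

```python
import math

# Illustration: on [k, k+1) the equation x**3 - floor(x) = 3 becomes
# t**3 = k + 3, with real root cbrt(k + 3); a solution exists only when
# that root actually lies in [k, k+1).
def real_cbrt(v):
    return math.copysign(abs(v) ** (1 / 3), v)

hits = [k for k in range(-10, 10) if k <= real_cbrt(k + 3) < k + 1]
assert hits == [1]                                # only floor(x) = 1 works
assert math.isclose(real_cbrt(4) ** 3 - 1, 3.0)   # x = 4**(1/3) solves it
```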
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
When did we move from $\mathbb{Z}\left[\sqrt{d}\right]$ to the ring of integers $\mathcal{O}_{\mathbb{Q}\left[\sqrt{d}\right]}$ and why? Gauss made great progress in number theory in $\mathbb{Z}$ by working in $\mathbb{Z}[i]$ (or equivalently $\mathbb{Z}\left[\sqrt{-1}\right]$), so much so that we call $\mathbb{Z}[i]$ the Gaussian integers now. And it was even known to the old mathematicians that solutions to Pell's equation $x^2 - dy^2 = 1$ could be better analysed by working in $\mathbb{Z}\left[\sqrt{d}\right]$.
But now in modern number theory we study much more the ring of integers $\mathcal{O}_{\mathbb{Q}\left[\sqrt{d}\right]}$. I find this confusing, as if we want to study Pell's equation with $d = 5$, we have that $\mathcal{O}_{\mathbb{Q}\left[\sqrt{5}\right]} = \mathbb{Z}\left[\frac{1 + \sqrt{5}}{2}\right]$ instead of $\mathbb{Z}\left[\sqrt{5}\right]$, which is not what we need. I was under the assumption that modern number theory usually tries to generalise its techniques but I don't see how this is a sensible generalisation and I don't see why the ring of integers is any more useful than just plain old $\mathbb{Z}\left[\sqrt{d}\right]$. So my question is:
Why is the ring of integers defined the way it is?
| We want rings of integers of number fields to be Dedekind domains, which are roughly speaking rings where prime factorization holds at least on the level of ideals. Integral closure is necessary for that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Expected value of the sum of two dice. Came across this question:
We roll two dice. Let $X$ be the sum of the two numbers appearing on the dice.
*
*Find the expected value of $X$.
*Find the variance of $X$.
I'm not sure how to do either, but this was my thinking for part 1:
$$E(X) = 2((1/6)^2) + \\
3(2(1/6)^2) + \\
4(2(1/6)^2 + (1/6)^2) + \\
5(2(1/6)^2 + 2(1/6)^2) + \\
6(2(1/6)^2 + 2(1/6)^2 + (1/6)^2) + \\
7(2(1/6)^2 + 2(1/6)^2 + 2(1/6)^2) + \\
8(2(1/6)^2 + 2(1/6)^2 + (1/6)^2) + \\
9(2(1/6)^2 + 2(1/6)^2) + \\
10(2(1/6)^2 + (1/6)^2) + \\
11(2(1/6)^2) + \\
12((1/6)^2)$$
The reason I multiplied some by 2 is because it could possibly switch up or permute. So, for example, for 4, the two sums that could give us 4 are (3,1) and (2,2), so I multiplied one of the probabilities by 2 because (3,1) could come as either (3,1) or (1,3) whereas (2,2) can only come in one form.
| What you have is correct so far.
However, a simpler approach is to write $X=X_1+X_2$, where $X_1,X_2$ are the values on the individual dice, and use the facts that:
*
*$\mathbb{E}(X_1+X_2)=\mathbb{E}(X_1)+\mathbb{E}(X_2)$ for any two variables $X_1,X_2$;
*$\mathrm{Var}(X_1+X_2)=\mathrm{Var}(X_1)+\mathrm{Var}(X_2)$ provided $X_1,X_2$ are independent.
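Both routes can be confirmed by exact enumeration of the $36$ equally likely outcomes (my own illustration, assuming fair dice):

```python
from fractions import Fraction
from itertools import product

# Exact expectation and variance of the sum of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)
E   = sum(p * (d1 + d2) for d1, d2 in outcomes)
E2  = sum(p * (d1 + d2) ** 2 for d1, d2 in outcomes)
Var = E2 - E ** 2

assert E == 7                    # = E(X1) + E(X2) = 7/2 + 7/2
assert Var == Fraction(35, 6)    # = Var(X1) + Var(X2) = 35/12 + 35/12
```

This matches the simpler approach: each die has mean $7/2$ and variance $35/12$, and the dice are independent.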
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Latin word that Euler used for "illustrative example" I'm getting old and forgetful.
Everyone here knows the difference between theorem, corollary, lemma, proposition, conjecture, axiom, and postulate.
I once heard another one, a Latin term, that IIRC meant something like "illustrative example." The only example I ever found of it being used was from Euler.
It may be rare, but I like having a complete list. Can someone help me out and remind me what this mathematical term is?
| Here is one, which I like:
scholium NOUN (pl. scholia)
Pronunciation /ˈskəʊlɪəm/
historical
A marginal note or explanatory comment made by a scholiast.
‘They fall into two categories: the first, a group of ten plays which have been transmitted to us in our medieval manuscripts complete with the accumulation of ancient notes and comments that we call scholia.’
Origin
Mid 16th century modern Latin, from Greek skholion, from skholē ‘learned discussion’.
Lexico
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\lambda(E_1 \cap ... \cap E_n) > 0$ We use the Lebesgue measure $\lambda$ on $\mathbb{R}$. Let $E_t \subset (0,1)$, $t=1, ..., n$ be measurable subsets such that $\lambda(E_1)+...+\lambda(E_n) > n-1$.
How do I show that $\lambda(E_1 \cap ... \cap E_n) > 0$?
My idea is to let $S_t=(0,1)\setminus E_t$. Then we also have an inequality for $\lambda(S_t)$. But I don't know how to express $\bigcap_t E_t$ in terms of the $S_t$.
| Hint If $A, B \subseteq (0,1)$ then
$$1 \geq \lambda(A \cup B)=\lambda(A)+ \lambda(B)-\lambda(A \cap B)$$
and hence
$$
\lambda(A \cap B) \geq \lambda(A)+ \lambda(B) -1
$$
Prove now by induction that if $A_1,.., A_k \subseteq (0,1)$ then
$$
\lambda (A_1 \cap A_2 \cap ... \cap A_k)\geq \lambda (A_1) +\lambda( A_2)+... +\lambda(A_k)-k+1
$$
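A quick sanity check of the hinted inequality, using intervals in $(0,1)$ (my own illustration; the intervals are arbitrary choices):

```python
from fractions import Fraction as F

# For intervals A_i = (l_i, r_i) in (0,1): lambda(A_i) = r_i - l_i and the
# intersection is (max l_i, min r_i), so everything can be checked exactly.
intervals = [(F(0), F(9, 10)), (F(1, 20), F(1)), (F(0), F(19, 20))]
measures = [r - l for l, r in intervals]
lo, hi = max(l for l, r in intervals), min(r for l, r in intervals)
inter = max(hi - lo, 0)

k = len(intervals)
assert sum(measures) > k - 1            # hypothesis: sum > n - 1
assert inter >= sum(measures) - k + 1   # the inequality from the hint
assert inter > 0                        # hence positive measure
```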
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4057994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Books and resources on internet for coloring and domination in graphs I am currently undergraduate student and I want to explore about coloring and domination of graphs. There are lot of materials on web related to this but asking advice like which resource or book to follow might help me in progressing.
I know the concept of coloring and I do not have any idea regarding domination in graphs. I will be thankful if someone provides links of resources or books which will be easy for me to follow as a beginner in the field.
| The problem with studying any specific "sub-subdiscipline" in math, is that each one relates to many other topics within the broader topic. Coloring for example, relates closely to combinatorics or computational algorithms.
That being said, I used A First Course in Graph Theory by Chartrand and Zhang (2012) in my undergraduate days to learn about domination numbers and coloring. These have two good sections on these topics. Beyond that, finding easy-to-read texts specific to coloring and domination numbers gets tricky. Graph Theory by Diestel gives an up-to-date graduate level overview of graph theory in general, with a small section on coloring, and casual mentions of the topic in other chapters. However, discussion of domination is missing, probably because it is a settled NP-complete problem, so Chartrand might be the end of the story, beyond individual papers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4058150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to determine the leading-order asymptotic behaviour of this integral? $\int^{1}_{0} \cos(xt^{3})\tan(t)\, dt$
as x → ∞
I am stuck on how to apply the stationary phase method when $\tan(t)$ vanishes at the stationary point. Should I expand tan in its Taylor series? Is the stationary point $0\,$? Thanks in advance
| Just to show something different from @joriki's answer to [this question], working first the antiderivative
$$I=\int\cos(xt^{3})\tan(t)\, dt$$ let $x t^3=u^3$ to make
$$I=\frac 1{x^{1/3}}\int\cos \left(u^3\right) \tan \left(\frac{u}{x^{1/3}}\right)\,du$$ Since $x\to \infty$, using $\tan(\epsilon)\sim \epsilon$, we have
$$I\sim \frac 1{x^{2/3}}\int u\,\cos \left(u^3\right)\,du$$ The integral can be computed using the gamma function or the exponential integral function to make
$$\int u\,\cos \left(u^3\right)\,du=-\frac{1}{6} u^2 \left(E_{\frac{1}{3}}\left(-i
u^3\right)+E_{\frac{1}{3}}\left(i u^3\right)\right)$$ Back to $t$
$$I \sim -\frac{1}{6} t^2 \left(E_{\frac{1}{3}}\left(-i t^3
x\right)+E_{\frac{1}{3}}\left(i t^3 x\right)\right)$$ Using the bounds
$$J=\int_0^1\cos(xt^{3})\tan(t)\, dt\sim\frac{1}{6} \left(\frac{\Gamma
\left(\frac{2}{3}\right)}{x^{2/3}}-E_{\frac{1}{3}}(-i x)-E_{\frac{1}{3}}(i x)\right)$$
Limited to the first order,
$$E_{\frac{1}{3}}(-i x)+E_{\frac{1}{3}}(i x)\sim -\frac {2\sin x} x=O\!\left(\frac 1x\right)$$
So the leading order of $J$ is
$$\frac{\Gamma \left(\frac{2}{3}\right)}{6\, x^{2/3}}$$
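As a sanity check of this leading order (my own numerics, not from the answer; the integration scheme and the sample point $x=500$ are arbitrary choices):

```python
import math

# Numerical check that J(x) = integral from 0 to 1 of cos(x t**3) tan(t) dt
# approaches Gamma(2/3) / (6 x**(2/3)); the correction is O(1/x).
def J(x, n=200_000):                    # composite Simpson's rule, n even
    h = 1.0 / n
    s = math.cos(x) * math.tan(1.0)     # endpoint t = 1 (t = 0 contributes 0)
    for i in range(1, n):
        t = i * h
        s += (4 if i % 2 else 2) * math.cos(x * t ** 3) * math.tan(t)
    return s * h / 3

x = 500.0
leading = math.gamma(2 / 3) / (6 * x ** (2 / 3))
assert abs(J(x) - leading) < 0.3 * leading
```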
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4058290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proving that if $a^n=b^n$, then $a=b$, for positive $a$ and $b$, and natural $n$ In high school algebra, natural number exponents are defined as
$$\begin{aligned}a^1 &= a\\
a^{(n + 1)} &= a^n a\end{aligned}$$
With these I used induction to prove the 3 exponent laws. I also proved some ancillary theorems such as the nth power of a positive is positive, distribution of powers over fractions, and so on. Then I got to this statement:
For positive $a$, $b$ and natural number $n$,
$$\text{if }a^n = b^n\text{ then } a = b\text{.}$$
I have a proof, but it is very different from any of the other proofs. It requires $\Sigma$ notation, which none of the others require. However, I can't figure out how to do it without $\Sigma$. Is it possible?
Here's my proof:
$$a^n = b^n\\
a^n - b^n = 0\\
(a - b) \left(\sum_{k=1}^n \frac{a^n b^k}{a^k b}\right) = 0\\
a = b \quad\text{or}\quad \sum_{k=1}^n \frac{a^n b^k}{a^k b} = 0$$
As the summation has at least $1$ term and every term is positive, the summation must be positive, which is not equal to $0$. This leaves $a = b$.
Q.E.D.
| You could write it explicitly. Suppose $a,b$ are non-negative real numbers and $n$ is a positive natural number. It is easy to show that
\begin{align}
a^n - b^n
= (a-b)( a^{n-1} + a^{n-2}b^{1} + a^{n-3}b^{2} + ... + a^{1}b^{n-2} + b^{n-1})
\end{align}
This expression should be familiar from geometric sums and can be seen by distributing the first factor; most of the terms cancel out. From this, the second factor is non-negative,
$$ a^{n-1} + a^{n-2}b^{1} + a^{n-3}b^{2} + ... + a^{1}b^{n-2} + b^{n-1} \geq 0 \; , $$
since it is a sum of positive terms. So, the same way you proved,
\begin{align}
0 &= a^n - b^n \, \\[5pt] \Rightarrow \; 0&=a-b \\[5pt]
\Rightarrow \; a & = b \; . \qquad \qquad \blacksquare
\end{align}
Note that you need $n$ to be nonzero and $a,b$ to be non-negative. For example, $(2)^2=(-2)^2$ but $2 \neq -2$, and $x^0 = 1$ for any $x$.
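The factorization and the positivity of the second factor can be checked in exact arithmetic (my own illustration; the sample values are arbitrary):

```python
from fractions import Fraction

# a**n - b**n = (a - b) * (a**(n-1) + a**(n-2)*b + ... + b**(n-1)),
# and for positive a, b the second factor is a sum of positive terms.
def factor_sum(a, b, n):
    return sum(a ** (n - 1 - k) * b ** k for k in range(n))

for a in [Fraction(3, 2), Fraction(7), Fraction(1, 5)]:
    for b in [Fraction(2, 3), Fraction(4)]:
        for n in range(1, 8):
            assert (a - b) * factor_sum(a, b, n) == a ** n - b ** n
            assert factor_sum(a, b, n) > 0
```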
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4058384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Can you solve a linearly independent ODE's homogeneous part without the particular solution. For example $y''-6y'+9y=xe^{3x},\hspace{5mm} y(0)=0,y'(0)=0 $
I was wondering if it is possible to compute the homogeneous solution even before working out the particular solution to this equation.
As an aside, I was wondering: if the initial conditions are $(0,0)$ for $y'$ and $y$, then is it not the case that I don't need the particular solution? In effect, I am then only looking at the homogeneous side of the equation.
I think my misunderstanding comes from the fact that in $y_{c}=y_{p}+y_{h}$ I see $y_{p}$ as a translation or rotation of the solution space away from the origin, and therefore if the initial conditions are centered at $(0,0)$ then I know the solution must exist in $y_{h}$.
| Solving the problem $$y''-6y'+9y=xe^{3x},\hspace{5mm} y(0)=0,y'(0)=0$$
as if it were a homogeneous problem and plugging in indeed results in the actual (IVP-solved) general solution to the inhomogeneous problem.
General solution:
$$y(x)=\frac{1}{6}e^{3x}x^3+c_1e^{3x}+c_2xe^{3x}$$
IVP-General solution:
$$y(x)_{IVP}=\frac{1}{6}e^{3x}x^3$$
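One can verify by finite differences (my own check, not part of the answer) that this IVP solution indeed satisfies the equation:

```python
import math

# Check that y(x) = x**3 * exp(3x) / 6 satisfies y'' - 6y' + 9y = x*exp(3x)
# with y(0) = 0, using central finite differences for y' and y''.
def y(x):
    return x ** 3 * math.exp(3 * x) / 6

h = 1e-5
for x in [0.0, 0.5, 1.0, 2.0]:
    yp  = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    residual = ypp - 6 * yp + 9 * y(x) - x * math.exp(3 * x)
    assert abs(residual) < 1e-3 * (1 + x * math.exp(3 * x))
assert y(0) == 0
```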
A counter-example would be:
$$y''-6y'+9y=e^{3x}+cos(x),\hspace{5mm} y(0)=0,y'(0)=0$$
Where the general solution is:
$$y(x)=\frac{2}{25}\cos(x)-\frac{3}{50}\sin(x)+e^{3x}\left(\frac{1}{2}x^2+\frac{1}{2}c_1x+\frac{1}{2}c_2\right)$$
The IVP-general solution on the other hand is:
$$y(x)_{IVP}=\frac{2}{25}\cos(x)-\frac{3}{50}\sin(x)+e^{3x}\left(\frac{1}{2}x^2+\frac{15}{50}x-\frac{4}{50}\right)$$
In which the constants $c_1$ and $c_2$ are clearly not zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4058552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Geometric interpretation of $\operatorname{SO}(3)/\operatorname{SO}(2)=S^2$ I understand (in a hand-waving sense) the argument that if an element of $\operatorname{SO}(3)$ is a general rotation in 3D characterized by a direction in 3D and a rotation angle, then "dividing out" the degree of freedom associated with the rotation ($\operatorname{SO}(2)$) leaves a direction in space, and the set of all possible directions can be mapped to the surface of a sphere.
However, can I understand this statement by choosing a particular set of rotations about the $z$-axis (which is $\operatorname{SO}(2)$) as my subgroup of $\operatorname{SO}(3),$ and then forming all cosets of it ? Each coset would be a specific rotation in 3D applied to the set of all rotations about the $z$-axis. I think that I should be able to see a geometrical way that each coset can be mapped to one (and only one) point on the surface of the sphere ($S^2$). However, I can't quite make a geometrical connection.
| The group $SO(3)$ naturally acts on $3$ dimensional space, and preserves the unit sphere within this. Furthermore, it acts transitively on this unit sphere, so picking a point $x$ on $S^2$, mapping $g\in SO(3)$ to $g.x$, we have a continuous surjective map $SO(3)\rightarrow S^2$. One may check that the fibre over a point is a copy of the circle group $SO(2)$, a conjugate of the stabiliser of $x$ (or a coset of the stabiliser, if you prefer), so in this sense, each point on $S^2$ gets a copy of $SO(2)$, and together these fill out all of $SO(3)$.
Morally, this is just the orbit stabiliser theorem of group theory, after noticing that the proof holds in the category of topological spaces, if your group action is nice.
We can also notice that our fibres seem to change and depend on our point $x$ chosen, and this reflects that although this map is locally a product, it is not globally so. That is, as topological spaces, we do not have $SO(3)\cong SO(2)\times S^2$, even though we can find isomorphisms when we restrict to the fibres over each hemisphere of $S^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4058702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Orthonormal Basis - Angle of Rotation with respect to Standard Orthonormal Basis I have an orthonormal basis ${\bf{b}}_1$ and ${\bf{b}}_2$ in $\mathbb{R}^2$. I want to find out the angle of rotation. I added a little picture here. I essentially want to find $\theta$
I know that I can compute the angle between two vectors but then there are $4$ combinations here
*
*$\theta_{11} = \arccos({\bf{b}}_1^\top{\bf{e}}_1)$
*$\theta_{12} = \arccos({\bf{b}}_1^\top{\bf{e}}_2)$
*$\theta_{21} = \arccos({\bf{b}}_2^\top{\bf{e}}_1)$
*$\theta_{22} = \arccos({\bf{b}}_2^\top{\bf{e}}_2)$
How would one know which angle is correct? Importantly, here I used the labels $b_1$, $b_2$ in the same order as $e_1$ and $e_2$ but that's not necessarily the same order geometically!
| The simplest way is to construct the orthogonal matrix with column vectors $B=[b_1 \ \ b_2 \ \ b_1 \times b_2]$ (embedding $b_1,b_2$ in $\mathbb{R}^3$ with third coordinate $0$).
(I assume here that $b_1$ and $b_2$ are normalized to unit length)
and to use the trace of this matrix, $\text{tr}(B)$.
Then we immediately have $\text{tr}(B)=1+2\cos(\theta)$.
See also https://en.wikipedia.org/wiki/Rotation_matrix#Determining_the_angle
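A sketch of this recipe (my own illustration; the test angle $\theta=2$ is an arbitrary choice), together with the direct 2D alternative via `atan2`:

```python
import math

# Embed the 2D basis as columns B = [b1  b2  b1 x b2]; then
# tr(B) = 1 + 2*cos(theta), and the sign of theta can be read off from
# the second component of b1.
theta = 2.0                              # rotation to be recovered
b1 = (math.cos(theta), math.sin(theta), 0.0)
b2 = (-math.sin(theta), math.cos(theta), 0.0)
cross = (b1[1] * b2[2] - b1[2] * b2[1],
         b1[2] * b2[0] - b1[0] * b2[2],
         b1[0] * b2[1] - b1[1] * b2[0])  # = (0, 0, 1) here

trace = b1[0] + b2[1] + cross[2]
recovered = math.acos((trace - 1) / 2)          # |theta| from the trace
recovered = math.copysign(recovered, b1[1])     # sign from b1's e2-component
assert math.isclose(recovered, theta)

# In 2D one can also get the signed angle directly:
assert math.isclose(math.atan2(b1[1], b1[0]), theta)
```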
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4058884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Show that $G$ has four inequivalent 1-dimensional complex representations I am looking at the exercise below in relation to representation theory
We define a group structure on $\lbrace \pm 1, \pm a, \pm b , \pm c \rbrace$ through the relations $a^2 =b^2 =c^2 =abc = -1 $ and denote the resulting group by G.
*
*Show that $G$ has four inequivalent 1-dimensional complex
representations, and find those (Hint: $\lbrace \pm 1 \rbrace $ is a
normal subgroup of G and $G / \lbrace \pm 1 \rbrace \simeq
\mathbb{Z} / 2 \mathbb{Z} \times \mathbb{Z} / 2 \mathbb{Z} $.)
*Define $\pi: G \to GL_2(\mathbb{C})$ by $\pi (\pm 1 )= \pm I_2 ,
\pi(\pm a)=\pm \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \pi
(\pm b)=\pm \begin{pmatrix} 0 & 1 \\
-1 & 0 \end{pmatrix}, \pi(\pm c)= \pm \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} $. Prove that $\pi$ is a well-defined
irreducible
representation of $G$.
*Use (1) and (2) to find the character table of G.
I have shown part 2. by using the inner product of the character, but I am having some difficulties in part 1. (and therefore also in part 3). In part 1, I am not quite sure how I am to use a normal subgroup in order to show the wanted. Additionally, I am having some difficulties in deciding what the representations explicitly are.
| The cyclic group $\mathbb{Z}/n\mathbb{Z}$ has 1D representations $x\mapsto\zeta^x$ where $\zeta=\exp(2\pi i/n)$ is a primitive $n$th root of unity. In general, a finite abelian group has only 1D irreducible representations. If $A$ is abelian and $K$ the kernel of a representation $\rho:A\to GL_1(\mathbb{C})$, then the image is a cyclic subgroup of $S^1$ (roots of unity), and by the first isomorphism theorem, $A/K$ must be cyclic.
Given a normal subgroup $N\trianglelefteq G$ and a representation $\rho:G/N\to GL(U)$ of its quotient, there is also a projection map $\pi:G\to G/N$, and the composition $(\rho\circ\pi):G\to GL(U)$ is a rep of $G$.
In particular, for $V=\mathbb{Z}_2\times\mathbb{Z}_2$, there are three (proper nontrivial) subgroups $W$, each isomorphic to $\mathbb{Z}_2$ generated by a nontrivial element of $V$. Each of the quotients $V/W$ has a unique nontrivial representation, taking the coset $W$ to $+1$ and the other coset of $W$ to $-1$. The corresponding representation of $V$ produces $\pm1$ depending on if an element is in $W$ or not (this only works because $W$ is index $2$).
Similarly, 1D reps of $Q_8/C_2=V$ become 1D reps of $Q_8$. The three aforementioned subgroups of $V$ become the three subgroups $H=\langle\mathbf{i}\rangle,\langle\mathbf{j}\rangle,\langle\mathbf{k}\rangle$ of $Q_8$, each index $2$. Then we also have the three 1D reps of $Q_8$ defined as producing $\pm1$ (within $GL_1(\mathbb{C})$) based on whether the element of $Q_8$ is in $H$ or not. (And of course the trivial rep is 1D.)
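The concrete pieces here can be verified directly with the matrices from part 2 of the question (my own check): the quaternion relations hold, and the character of $\pi$ has norm $1$, confirming irreducibility.

```python
# Verify a^2 = b^2 = c^2 = abc = -1 for the matrices of part 2, and that
# the character chi = tr(pi) satisfies <chi, chi> = 1.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def minus(X):
    return [[-x for x in row] for row in X]

I = [[1, 0], [0, 1]]
A = [[1j, 0], [0, -1j]]
B = [[0, 1], [-1, 0]]
C = [[0, 1j], [1j, 0]]

for X in (A, B, C):
    assert mul(X, X) == minus(I)
assert mul(mul(A, B), C) == minus(I)

# character values: chi(+-1) = +-2, chi(+-a) = chi(+-b) = chi(+-c) = 0
group = [I, minus(I), A, minus(A), B, minus(B), C, minus(C)]
chi = [X[0][0] + X[1][1] for X in group]
assert sum(abs(x) ** 2 for x in chi) / len(group) == 1   # <chi, chi> = 1
```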
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4059184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Zero-Divisors & Units in $\mathbb{Z}/n\mathbb{Z}$ Let $n\in\mathbb{Z}_{\geq2}$. Considering the ring $\mathbb{Z}/n\mathbb{Z}$, we denote by $(\mathbb{Z}/n\mathbb{Z})^*$ its group of units. Furthermore, the map $\mathbb{Z}\to\mathbb{Z}/n\mathbb{Z};\,a\mapsto\overline{a}$ is meant to be the canonical projection modulo $n$.
claim 1: $\overline{a}\in\mathbb{Z}/n\mathbb{Z}$ zero-divisor $\iff$ $1<\gcd(a,n)<n$.
I am wondering if claim 1 is true or not. I was able to prove $''\implies''$ using Bézout as follows:
$$\gcd(a,n)=1\stackrel{\text{Bézout}}\iff\exists x,y\in\mathbb{Z}: ax+ny=1\iff ax\equiv 1\mod n \iff \overline{a}\in(\mathbb{Z}/n\mathbb{Z})^*.$$
So in particular $\overline{a}$ is not a zero-divisor.
If claim 1 is true, would it then imply the following proposition?
claim 2: $\overline{a}\in\mathbb{Z}/n\mathbb{Z}$ zero-divisor $\iff \overline{a}\notin(\mathbb{Z}/n\mathbb{Z})^*$
| Let $R$ be a finite ring. Then every non-zero element of $R$ is either a zero-divisor or a unit, but not both.
Proof: suppose that $a$ is a zero-divisor. Then clearly, $a$ cannot be a unit. For if $ab = 1$, and if we have $c \neq 0$ such that $ca = 0$, then we would have $cab = c1 = c = 0$. This is a contradiction.
On the other hand, suppose $a$ is not a zero-divisor. Then the map $x \mapsto a \cdot x$ is injective. For suppose we have $ax = ay$; then $a(x - y) = 0$; then $x - y = 0$; then $x = y$. Since this map is injective and its domain and codomain are both $R$, which is a finite set, the map must be surjective. Then there must exist $x$ such that $a \cdot x = 1$.
Thus, for $\overline{a} \neq 0$, we have $\overline{a}$ zero-divisor iff $\overline{a} \notin (\mathbb{Z} / n \mathbb{Z})^*$.
Now note that $\overline{a} \in (\mathbb{Z} / n \mathbb{Z})^*$ iff $gcd(a, n) = 1$. Clearly, if we have $\overline{a} \overline{x} = 1$, then $ax = 1 + bn$ for some $b \in \mathbb{Z}$, and thus $ax - bn = 1$, and thus $gcd(a, n) = 1$. And conversely, if $gcd(a, n) = 1$, then we have some $x, b$ such that $ax + bn = 1$. Then $ax = 1 - bn$. Then $\overline{a} \overline{x} = 1$.
Then $\overline{a}$ is a zero-divisor iff ($\overline{a}$ is not a unit and $\overline{a}$ is not zero) iff ($gcd(a, n) \neq 1$ and $gcd(a, n) \neq n$) iff $1 < gcd(a, n) < n$.
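The dichotomy and the gcd criteria can be checked exhaustively for small $n$ (my own illustration):

```python
from math import gcd

# For each nonzero class a in Z/nZ: a is a unit iff gcd(a, n) == 1,
# a zero-divisor iff 1 < gcd(a, n) < n, and never both.
for n in range(2, 30):
    for a in range(1, n):
        unit = any(a * x % n == 1 for x in range(n))
        zero_divisor = any(a * x % n == 0 for x in range(1, n))
        assert unit == (gcd(a, n) == 1)
        assert zero_divisor == (1 < gcd(a, n) < n)
        assert unit != zero_divisor
```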
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4059363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rank inequality
Let $ A, B \in M_n (\mathbb{C})$ such that $(A-B)^2 = A -B$. Then $\mathrm{rank}(A^2 - B^2) \geq \mathrm{rank}( AB -BA)$.
I tried to apply the basic inequalities without results. How to start? Thank you.
| Hint: Let $P:=A-B$, then $A=P+B$, $P^2=P$ and
$$A^2-B^2= (P+B)^2-B^2=P^2+PB+BP=P+PB+BP\\
AB-BA=(P+B)B-B(P+B)=PB-BP$$
Now we can apply a basis transformation using an eigenbasis of $P$ so that the matrix of $P$ becomes diagonal with $k:={\rm rank}(P)$ ones and $n-k$ zeroes, so $PB$ gives the first $k$ columns of $B$ then $n-k$ zero columns, and similarly $BP$ gives the first $k$ rows of $B$ then $n-k$ zero rows.
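A concrete instance of the hint (my own illustration; $P$ and $B$ are arbitrary choices with $P^2=P$), using exact rational row reduction to compute the ranks:

```python
from fractions import Fraction

# With P idempotent and A = P + B: A^2 - B^2 = P + PB + BP and AB - BA = PB - BP.
def mat(X): return [[Fraction(v) for v in row] for row in X]
def mul(X, Y): return [[sum(X[i][k] * Y[k][j] for k in range(3))
                        for j in range(3)] for i in range(3)]
def add(*Ms): return [[sum(M[i][j] for M in Ms) for j in range(3)] for i in range(3)]
def sub(X, Y): return [[X[i][j] - Y[i][j] for j in range(3)] for i in range(3)]

def rank(M):                       # Gaussian elimination over the rationals
    M = [row[:] for row in M]
    r = 0
    for c in range(3):
        piv = next((i for i in range(r, 3) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(3):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [u - f * v for u, v in zip(M[i], M[r])]
        r += 1
    return r

P = mat([[1, 0, 0], [0, 1, 0], [0, 0, 0]])   # P^2 = P
B = mat([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
A = add(P, B)
assert mul(P, P) == P                          # (A - B)^2 = A - B
a2b2 = sub(mul(A, A), mul(B, B))
comm = sub(mul(A, B), mul(B, A))
assert a2b2 == add(P, mul(P, B), mul(B, P))    # the identity from the hint
assert rank(a2b2) >= rank(comm)
```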
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4059489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proof of whether or not a set P is an open set Let $P=\{(x_1, x_2,0) :x_1, x_2 \in \mathbb{R}\}$ be the $x_1$-$x_2$ plane in $\mathbb{R}^3$. Is $P$ an open subset of $\mathbb{R}^3$?
I know that $P$ is an open set if for each point $x$ in $P$, there exists a radius $r>0$ such that $B_r(x) \subseteq P$. I think that $P$ is an open set because it is just a plane in $\mathbb{R}^3$. This is assuming the $0$ doesn't change in the definition of open, considering it is not a variable. I am not sure how to prove the set is open from here.
| Hint: it isn't open.
Negate the definition of an open set. Choose the origin (edit: tbc, the origin in $\mathbb{R}^3$). Let $r>0$ be given. Show that there exists a point within distance $r$ of the origin, yet which isn't in $P$. Conclude that the origin is not an interior point of $P$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4059637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Connection between a.s. equality of conditional expectations and a.s. equality of conditional probabilities Let $P$ be some distribution and let $(X,Y,Z), (X',Y',Z')$ be two random draws from $P$. I would like to understand the connection between the following two conditions
(1) $\mathbb{E}[Y|X,Z] = \mathbb{E}[Y|Z]$ almost surely
(2) $\mathbb{P}_P(Y<Y'|X,Z,X',Z') = \mathbb{P}_P(Y<Y'|Z,Z')$ almost surely
Does one imply the other, or is there some case where one holds and the other doesn't? I know that both must hold if $Y \perp X|Z$, but I would like to be able to characterise the connection between the two. This is related to my unanswered question here
| Equality of expectations seems like a very weak constraint. Certainly (1) does not imply (2), per this example:
*
*$Z$ is independent of everything else, so I will drop it from consideration
*First do this coin-flip: $P(X = 1) = P(X = 2) = 1/2$
*Then $Y$ depends on $X$ this way:
*
*If $X=1$ then $P(Y = 1) = 2/3, P(Y = -2) = 1/3, E[Y|X=1] = 0$.
*If $X=2$ then $P(Y = -1) = 2/3, P(Y = 2) = 1/3, E[Y|X=2] = 0$.
Clearly $E[Y|X] = E[Y] = 0$ surely, so (1) is true. However, (2) is false:
*
*$P(Y<Y' \mid X=1, X'=2) = P(Y=-2 \cup (Y = 1, Y' = 2) \mid X=1,X'=2) = 1/3 + 2/9 = 5/9$
*$P(Y<Y' \mid X=1, X'=1) = P(Y=-2, Y'=1 \mid X=1, X'=1) = 2/9$
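These two conditional probabilities can be reproduced by exact enumeration (my own check):

```python
from fractions import Fraction as F

# Conditional distributions of Y given X from the example above.
dist = {1: {1: F(2, 3), -2: F(1, 3)},     # X = 1
        2: {-1: F(2, 3), 2: F(1, 3)}}     # X = 2

def p_less(x, xp):                        # P(Y < Y' | X = x, X' = xp)
    return sum(p * q for y, p in dist[x].items()
                     for yp, q in dist[xp].items() if y < yp)

assert sum(y * p for y, p in dist[1].items()) == 0   # E[Y | X = 1] = 0
assert sum(y * p for y, p in dist[2].items()) == 0   # E[Y | X = 2] = 0
assert p_less(1, 2) == F(5, 9)
assert p_less(1, 1) == F(2, 9)
```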
UPDATE: (2) also does not imply (1), per this other example:
*
*Again, $Z$ is independent of everything else.
*First draw $X \sim Uniform(0,1)$
*Then $Y$ depends on $X$ this way: $P(Y = 2X) = P(Y = -X) = 1/2$, so $E[Y\mid X] = X/2$, and (1) is false.
However, (2) is true (almost surely) because
$$\forall x \neq x': P(Y < Y' \mid X=x, X'=x') = 1/2 = P(Y < Y')$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4059756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Eigenvalues of vectors of 2*2 matrix A (2 x 2) matrix $\begin{bmatrix}a & c+id\\c-id & b\end{bmatrix}$ where a, b, c, d are real constants will have two different eigenvalues unless it is a multiple of the identity matrix.
I used
$|A − \lambda I| = \det(A − \lambda I) = 0$ to prove that it will have $2$ eigenvalues. How do I prove that it will only have $1$ when it is a multiple of the identity matrix?
| The characteristic polynomial is $(\lambda-a)(\lambda-b)-(c^2+d^2)$. Note that the first term alone has roots $a,b$, so its discriminant $(a-b)^2$ is already non-negative; since $c^2+d^2\ge0$, adding it only increases the discriminant further, to $(a-b)^2+4(c^2+d^2)$.
The only possibility for the matrix to have only one eigenvalue is therefore $c=d=0$ and $a=b$, i.e. the matrix is a multiple of the identity matrix.
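In formulas, the eigenvalues are $\lambda_\pm=\frac{a+b\pm\sqrt{(a-b)^2+4(c^2+d^2)}}{2}$, which the following sketch checks on an example (my own illustration; the sample entries are arbitrary):

```python
import math

# Eigenvalues of the Hermitian matrix [[a, c+di], [c-di, b]]:
# discriminant D = (a-b)**2 + 4*(c**2 + d**2) >= 0, zero iff a=b, c=d=0.
def eigenvalues(a, b, c, d):
    D = (a - b) ** 2 + 4 * (c ** 2 + d ** 2)
    return (a + b - math.sqrt(D)) / 2, (a + b + math.sqrt(D)) / 2

l1, l2 = eigenvalues(1.0, 2.0, 3.0, 4.0)
assert l1 != l2                             # distinct unless multiple of I
assert math.isclose(l1 + l2, 3.0)           # trace
assert math.isclose(l1 * l2, 1 * 2 - (3 ** 2 + 4 ** 2))   # determinant

assert eigenvalues(5.0, 5.0, 0.0, 0.0) == (5.0, 5.0)      # multiple of I
```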
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4059879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $\sum(-1)^n\sin \frac{1}{n^\alpha \ln n}$ is divergent for any $\alpha\leq 0$?
How to prove that $\sum(-1)^n\sin \frac{1}{n^\alpha \ln n}$ is divergent for any $\alpha\leq 0$?
It is well-known that $\lim \sin n$ does not exist. But that approach cannot easily be adapted to show that $\lim \sin \frac{1}{n^\alpha\ln n}$ does not exist. How should one then handle the problem above?
| Let $a_n=\frac{1}{n^\alpha \ln n}, n \ge 2$
For $\alpha=0$ we have that $a_n$ decreases to $0$ as $n \to \infty$ hence $\sin a_n$ does so even from $n=2$ on ($a_2=1/\ln 2 < \pi/2$) so the series converges by the alternating test.
If $0 < \beta=-\alpha \le 1$ we let $f_\beta(x)=f(x)=\frac{x^{\beta}}{\pi \ln x}, x>3/2$
Then $f'(x)=(\beta-\frac{1}{\ln x})\frac{f(x)}{x}$ so $f'(x)$ (eventually) decreases to zero as $x \to \infty$ (for monotonicity we use that the dominating leading term of $f''$ is negative, as is easily seen, so $f'$ eventually starts decreasing) and also $xf'(x) \to \infty, x \to \infty$ since $f(x) \to \infty, x \to \infty$.
Hence by Fejer's theorem, we get that $f(n)$ is equidistributed modulo $1$ so $\sin a_n=\sin \pi f(n)$ cannot have limit zero as $n \to \infty$, so the series doesn't converge.
If now $\beta>1$ and $k\ge 1$ the unique positive integer for which $0< \beta -k \le 1$, one applies the same ud modulo $1$ criterion as above but for the $(k+1)$-st derivative of $f$ - namely, $f^{(k+1)}(x)$ decreases eventually to zero at infinity and $xf^{(k+1)}(x) \to \infty$ which is an easy inductive exercise since the leading term of $f^{(k)}$ will be $cf_{\beta-k}$ and which is as in the previous case.
This shows that $f(n)$ is still ud modulo $1$ by the easy inductive generalization of Fejer's theorem (using Van der Corput difference criterion) in this case hence $\sin \pi f(n)$ cannot converge to zero.
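None of this machinery is needed to get a feel for the result; a quick numerical scan (my own addition, for the case $\alpha=-1$, where the summands are $(-1)^n\sin(n/\ln n)$) shows $|\sin(n/\ln n)|$ repeatedly returning near $1$, consistent with the terms not tending to $0$:

```python
import math

# For alpha = -1 the terms of the series are (-1)^n sin(n / ln n).  Over a
# window of consecutive n the argument n / ln n advances by roughly 1/ln n
# per step, so it sweeps through many periods and |sin| keeps coming back
# close to 1 -- the terms cannot converge to 0.
window = [abs(math.sin(n / math.log(n))) for n in range(10**6, 10**6 + 1000)]
print(max(window))
```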
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4060231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Limit of non-archimedean absolute values is eventually constant
Let $(x_n)$ be a Cauchy sequence in a field $K$ with respect to a non-archimedean absolute value $|\cdot|$ arising from a discrete valuation $v$ (meaning $|x|:=c^{v(x)}$ for some $c\in(0,1)$). Suppose $\lim|x_n|\neq 0$. I'd like to understand why the sequence $(v(x_n))$ becomes eventually constant.
I found this statement at the top of page three in these notes by Andrew Sutherland.
The only tool at hand seems to be the non-archimedean triangle inequality, but I'm not sure how to use it. What am I missing?
| Your questions on $p$-adic numbers read as if they were something completely abstract, but $p$-adic numbers are very concrete. If $x\ne 0$ and $x_n\to x$, then $v(x_n-x)>v(x)$ for $n$ large enough, so that $v(x_n)=v(x)$ and $|x_n|=|x|$. When $K$ is not complete and $(x_n)$ is Cauchy, $x=\lim x_n$ lies in the completion rather than in $K$, but this changes nothing.
The implication $v(x_n-x)>v(x)\implies v(x)=v(x_n)$ follows from the ultrametric inequality:
If $v(x_n) > v(x)$, then $v(x)=v(x_n+(x-x_n))\ge \min(v(x_n),v(x-x_n)) > v(x)$, a contradiction.
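To make the statement concrete (an editorial illustration using the $3$-adic valuation on $\mathbb{Q}$, not part of the original answer): for $x_n = 3^5 + 3^n$, which converges $3$-adically to $x = 3^5$, the valuations $v(x_n)$ climb and then freeze at $v(x) = 5$:

```python
from fractions import Fraction

def v_p(x, p):
    """p-adic valuation of a nonzero rational number."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# x_n = 3^5 + 3^n tends 3-adically to x = 3^5, and v_3(x_n) is eventually
# the constant v_3(x) = 5, exactly as the answer predicts.
print([v_p(3**5 + 3**n, 3) for n in range(1, 10)])  # [1, 2, 3, 4, 5, 5, 5, 5, 5]
```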
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4060375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to calculate $p$-value in a clinical trial
I'm trying to verify the $p$-value mentioned in this article:
$8\%$ ($n=\frac{1}{13}$) of mavrilimumab-treated patients progressed to mechanical ventilation by Day $28$, compared to $35\%$ ($n=\frac{9}{26}$) of control-group patients who progressed to mechanical ventilation or died ($\textbf{p=0.077}$).
How can I calculate the $p$-value for the data above?
I used $3$ different calculators and they all gave different results, none of them matches with the article.
Calculator 1: $p=0.12$
Calculator 2: $p = 0.06876$
Calculator 3: $p = 0.154$
| I can't explain the 'why'. However, looking at an abstract for the research, all of the $p$-values are different:
$\big[\frac{0}{13}, \frac{7}{26}\big]$ Globenewswire: $p=0.086$,
Scientific Abstracts: $\log$ rank $p=0.046$
$\big[\frac{13}{13}, \frac{17}{26}\big]$ Globenewswire: $p=0.0001$,
Scientific Abstracts: $p=0.018$
$\big[\frac{10}{11}, \frac{11}{18}\big]$ Globenewswire: $p=0.0093$,
Scientific Abstracts: $p=0.110$
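For reference (an editorial addition, not part of the original answer): one standard choice for a $2\times 2$ table like $\big[\frac{1}{13},\frac{9}{26}\big]$ is Fisher's exact test; since the article does not say which test was used, the number this produces is not guaranteed to match $p=0.077$. A self-contained two-sided version:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def prob(k):                     # P(top-left cell = k | fixed margins)
        return comb(c1, k) * comb(n - c1, r1 - k) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# 1 of 13 treated patients vs 9 of 26 controls reached the endpoint
print(fisher_exact_two_sided(1, 12, 9, 17))
```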
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4060510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Example of homeomorphisms with different supremum metric
I hope you can help me. I want to know if there are homeomorphisms $f,g:X\to X$, where $(X,d)$ is compact, such that
$$D(f,g)<D(f^{-1},g^{-1}),$$
where $D(a,b)=\sup_{x\in X}d(a(x),b(x))$.
Any suggestion is welcome :)
| Yes. Let $X = [0,4]$. We shall specify piecewise linear homeomorphisms $f, g$ by their graphs in the square $[0,4]^2$. The graph of $f$ consists of the two line segments $F_1$ connecting $(0,0)$ and $(1,2)$ and $F_2$ connecting $(1,2)$ and $(4,4)$. The graph of $g$ consists of the two line segments $G_1$ connecting $(0,0)$ and $(1,3)$ and $G_2$ connecting $(1,3)$ and $(4,4)$. Drawing a picture is helpful. You can easily verify that the distance $D(f,g) = 1$ which is attained at $x=1$. The graphs of $f^{-1},g^{-1}$ are obtained by reflecting the graphs of $f,g$ at the diagonal $y = x$ (draw again a picture). You will see that $\lvert f^{-1}(3) - g^{-1}(3) \rvert > 1$, thus $D(f^{-1},g^{-1}) > 1 = D(f,g)$.
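The construction is easy to verify numerically (an editorial sketch; the piecewise-linear breakpoints below are read off the graphs described in the answer):

```python
def pl(points):
    """Piecewise-linear function through the given (x, y) breakpoints."""
    def f(x):
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError(x)
    return f

f = pl([(0, 0), (1, 2), (4, 4)])
g = pl([(0, 0), (1, 3), (4, 4)])
f_inv = pl([(0, 0), (2, 1), (4, 4)])   # graph of f reflected in y = x
g_inv = pl([(0, 0), (3, 1), (4, 4)])

grid = [4 * i / 4000 for i in range(4001)]          # includes x = 1 and x = 3
D = lambda a, b: max(abs(a(x) - b(x)) for x in grid)
print(D(f, g), D(f_inv, g_inv))                      # 1.0 1.5
```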
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4060647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many solutions are there in the following equation over the natural numbers if $x_1+x_2+x_3>x_4+x_5+x_6+x_7$
How many solutions are there in the following equation over the natural numbers such that $x_1+x_2+x_3+x_4+x_5+x_6+x_7=30$ if $x_1+x_2+x_3>x_4+x_5+x_6+x_7$ ?
I made up this combinatorics question myself, but I got stuck. Here is what I tried:
First, I wanted to use a symmetry argument, but the inequality compares three variables against four, so the two sides are not interchangeable and I could not get anywhere.
Second, I thought about whether generating functions could be used, but I got stuck writing down the desired generating function.
So I hope to find nice approaches to my question.
Note: natural numbers start with zero.
| Using stars and bars you can write the number of solution as
$$
\sum_{i=0}^{14}\binom{i+3}3\binom {30-i+2}2,
$$
where the first factor counts the number of solutions of $x_4+x_5+x_6+x_7=i$ and the second one those of $x_1+x_2+x_3=30-i$; the index stops at $i=14$ because the constraint $x_1+x_2+x_3>x_4+x_5+x_6+x_7$ amounts to $30-i>i$, i.e. $i\le 14$.
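The closed form is easy to sanity-check with a brute-force count (an editorial sketch; the brute force uses a smaller total than $30$ so the enumeration stays fast):

```python
from math import comb
from itertools import product

def count_formula(total):
    # i = x4 + x5 + x6 + x7 must satisfy total - i > i, i.e. i <= (total-1)//2;
    # for total = 30 this is the range 0..14 used in the answer.
    return sum(comb(i + 3, 3) * comb(total - i + 2, 2)
               for i in range(0, (total - 1) // 2 + 1))

def count_brute(total):
    return sum(1 for xs in product(range(total + 1), repeat=7)
               if sum(xs) == total
               and xs[0] + xs[1] + xs[2] > xs[3] + xs[4] + xs[5] + xs[6])

assert count_formula(5) == count_brute(5)   # agreement on a small instance
print(count_formula(30))
```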
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4060760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
In definite integration, do I have to watch out for the behaviour of a function between the limits?
Calculate -
$$\int^{1}_{-1}\frac{e^\frac{-1}{x}dx}{x^2(1+e^\frac{-2}{x})}$$
This is my approach-
Substitute $t=e^\frac{-1}{x}$
$$\therefore \int^{\frac{1}{e}}_{e}\frac{dt}{1+t^2}$$
which gives us $\frac{\pi}{2}- 2\arctan{e}$, but the answer given is $\pi- 2\arctan{e}$. I'm guessing that my wrong answer has something to do with the exponential term.
| The answer to the question in the header is yes, and this is a really interesting example.
When you substitute a variable, the substitution applies to the whole interval of integration. If you say $t=e^{\frac{-1}{x}}$, you mean that all the values that $x$ takes on are related to the values that $t$ takes by that relation. Really you're applying the function $e^{\frac{-1}{x}}$ to the whole interval of the $x$'s, i.e. $[-1,1]$. Normally, when we do substitution the function you use maps an interval to an interval, so it suffices to just look at the endpoints. Not so with this function! What is the image of $[-1,1]$ under $e^{-\frac1{x}}$? It's $[0,\frac1e]\cup [e,\infty]$, which you can see in a number of ways, e.g. the image of $[-1,1]$ under $1/x$ is $[-\infty,-1]\cup [1,\infty]$, then map it through $e^{-x}$ to get the right interval. Thus the correct substitution ought to be \begin{eqnarray}
&&\int_{-1}^1\frac{e^{-\frac1x}}{x^2(1+e^{-\frac2x})}dx = \int_{[0,\frac1e]\cup [e,\infty]} \frac1{1+t^2} dt = \int_0^{\frac1e} \frac1{1+t^2}\,dt + \int_e^\infty \frac1{1+t^2}\,dt\\ &=& \arctan{\frac1e} - \arctan0 + \arctan{\infty} - \arctan{e} = \arctan\frac1e + \frac\pi2 - \arctan(e) \\&=& \pi-2\arctan(e)
\end{eqnarray}
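As a numerical sanity check (an editorial addition): multiplying numerator and denominator of the original integrand by $e^{1/x}$ gives $\frac{1}{2x^2\cosh(1/x)}$, and a plain midpoint rule reproduces $\pi - 2\arctan e \approx 0.7050$:

```python
import math

def integrand(x):
    # e^{-1/x} / (x^2 (1 + e^{-2/x})) == 1 / (2 x^2 cosh(1/x)); the value
    # tends to 0 as x -> 0 from either side, so patch it there (and guard
    # against overflow in cosh for tiny |x|).
    if x == 0 or abs(1.0 / x) > 700:
        return 0.0
    return 1.0 / (2.0 * x * x * math.cosh(1.0 / x))

n = 200_000
h = 2.0 / n                      # integrate over [-1, 1] by the midpoint rule
approx = sum(integrand(-1.0 + (i + 0.5) * h) for i in range(n)) * h
print(approx, math.pi - 2 * math.atan(math.e))   # both approximately 0.7050
```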
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4061071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |