| Q | A | meta |
|---|---|---|
Probability of passing a T/F exam? There are $n$ questions in an examination ($n \in \mathbb{N}$), and the answer to each question is True or False. You know that exactly $t$ of the answers are True ($0 \le t \le n$), so you randomly answer $t$ questions as True and the rest as False. What is your probability of getting at least a $50\%$ score in the exam (in terms of $n$ and $t$)?
My attempt:
WLOG assume you answer the first $t$ questions as True. Let there be $k$ questions out of the first $t$ of which the answer is True. Thus, out of the other $n-t$ questions where you replied False, $(n-t)-(t-k)=n-2t+k$ questions are really False. Thus, the fraction of correct answers for the whole examination is equal to $\frac{n-2t+2k}{n}$. If this is at least $1/2$, then $n+2k \ge 4t$. However, I can't calculate the probability of this happening.
| There is a typo: instead of $n+2k \ge 4t$, it should be $n+4k \ge 4t$. I do not know if you can get a closed form, but here is my work, which may help you understand the lower and upper bounds on $t$ in terms of $n$ beyond which one is certain to get at least a $50\%$ score.
a) Based on your work, we need $ \displaystyle k \geq \frac{4t - n}{4}$. Since only $n-t$ of the answers are actually FALSE, at most $n-t$ of your $t$ TRUE guesses can be wrong, so we always have $k \geq t-(n-t)=2t-n$; and $2t-n \geq \frac{4t-n}{4}$ exactly when $t \geq \dfrac{3n}{4}$. So if $t \geq \dfrac{3n}{4}$, we are bound to get at least a $50\%$ score by choosing $t$ answers as TRUE.
Similarly if $ ~\displaystyle t \leq \frac{n}{4}$, you are certain to get at least $50\%$ score by randomly choosing $(n-t)$ answers as FALSE.
b) Now for $~ \displaystyle \frac n4 \lt t \lt \frac {3n}4$,
we must have $ \displaystyle \lceil \frac{4t - n}{4} \rceil \leq k \leq t$ to score at least $50\%$
So the desired probability can be written as,
$ \displaystyle \sum \limits_{k = k_l}^t {t \choose k} {n - t \choose t - k} / {n \choose t}$
where $ \displaystyle k_l = \lceil \frac{4t - n}{4} \rceil$
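The sum above can be sanity-checked by brute force for small $n$: the sketch below (function names are mine) compares the formula against direct enumeration of all $\binom{n}{t}$ placements of the TRUE guesses.

```python
from itertools import combinations
from math import ceil, comb

def prob_formula(n, t):
    # The answer's sum: P(score >= 50%) with k_l = ceil((4t - n)/4).
    k_l = max(0, ceil((4 * t - n) / 4))
    return sum(comb(t, k) * comb(n - t, t - k) for k in range(k_l, t + 1)) / comb(n, t)

def prob_bruteforce(n, t):
    # Enumerate every placement of the t TRUE guesses; WLOG the true
    # answers are TRUE on the first t questions.
    truth = set(range(t))
    good = 0
    for guess in combinations(range(n), t):
        k = len(truth & set(guess))       # correct TRUE guesses
        correct = k + (n - t) - (t - k)   # plus correct FALSE guesses
        if 2 * correct >= n:              # score >= 50%
            good += 1
    return good / comb(n, t)
```

For instance, $n=8$, $t=6 \geq 3n/4$ gives probability $1$, matching part (a).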
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4322906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Why are these two functions so close in value? [Edit: rewritten 4/8/22 to increase specificity of question]
Recently, I was interested to estimate the following partial sum.
$$\sum_{j=1}^{n-1}\frac{1}{\sqrt{j(j+1)}}$$
I understand that the correct asymptotic formula is $\ln n$, as $\frac1{\sqrt{j^2+j}} \sim \frac 1j$ for large $j$, and therefore
$$\sum_{j=1}^{n-1}\frac{1}{\sqrt{j(j+1)}} \sim \sum_{j=1}^{n-1}\frac1j \sim \ln n.$$
However, the error between the first and last terms is curiously much lower than you might expect, based on the error of the two intermediary approximations. By taking the forward difference on each side, one discovers the asymptotic formula
$$\ln\Big(\frac{x+1}{x}\Big) \sim \frac 1{\sqrt{x(x+1)}}$$
which is indeed remarkably close for large $x$. A partial explanation is: the power series expansion of each of these functions at $x = \infty$ begins with $\frac 1x - \frac1{2x^2} + \mathcal O \big(\frac1{x^3}\big)$. Indeed,
$$\frac 1{\sqrt{x(x+1)}} - \ln\Big(\frac{x+1}{x}\Big) = \frac1{24x^3} + \mathcal O\Big(\frac1{x^4}\Big)$$
However, this is somewhat unsatisfying to me as an explanation, as it seems to explain the coincidence with another coincidence.
As the comments have noted, it is easy to find examples of functions which more closely approximate $\frac{1}{\sqrt{x(x+1)}}$ for a similar reason. However, the above approximation is exceptionally striking as $\ln\big(\frac{x+1}{x}\big)$ may be written as the forward difference of $\ln x$, leading back to the originally observed partial sum approximation. Other functions, which may be closer to $\frac{1}{\sqrt{x(x+1)}}$ in value, are not so neatly (partially) summable.
Is there a deeper or more intuitive way to understand the above partial sum asymptote, without relying on the 'worse' intermediate approximation of $\frac{1}{\sqrt{x(x+1)}} \sim \frac 1x$?
| Too long for a comment.
In the same spirit as @M. Wind's answer, we can build functions which are still closer.
Consider the three functions
$$f(x)=\log\Big(\frac{x+1}{x}\Big) \qquad \qquad g(x)=\frac 1{\sqrt{x(x+1)}}\qquad \qquad h(x)=\frac {12x}{12 x^2+6 x-1 }$$
$h(x)$ is a Padé approximant of $f(x)$.
As you wrote
$$f(x)- g(x)=-\frac{1}{24 x^3}+O\left(\frac{1}{x^4}\right) \qquad \text{but}\qquad f(x)- h(x)=-\frac{1}{24 x^4}+O\left(\frac{1}{x^5}\right)$$
Similarly
$$\Phi_1=\int_{10}^\infty \Big[f(x)-g(x)\Big]^2\,dx=2.73\times 10^{-9}$$
$$\Phi_2=\int_{10}^\infty \Big[f(x)-h(x)\Big]^2\,dx=1.88\times 10^{-11}$$
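These two integrals can be reproduced with a stdlib-only numerical sketch (my function names; composite Simpson's rule on $[10, 10^4]$ — truncating the tail there is negligible for integrands decaying like $x^{-6}$ and $x^{-8}$).

```python
import math

def f(x):
    # f(x) = log((x+1)/x)
    return math.log((x + 1.0) / x)

def g(x):
    # g(x) = 1/sqrt(x(x+1))
    return 1.0 / math.sqrt(x * (x + 1.0))

def h(x):
    # Pade approximant h(x) = 12x/(12x^2 + 6x - 1)
    return 12.0 * x / (12.0 * x * x + 6.0 * x - 1.0)

def simpson(err, a=10.0, b=1e4, n=200_000):
    # composite Simpson's rule (n even)
    step = (b - a) / n
    s = err(a) + err(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * err(a + i * step)
    return s * step / 3.0

phi1 = simpson(lambda x: (f(x) - g(x)) ** 2)
phi2 = simpson(lambda x: (f(x) - h(x)) ** 2)
```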
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4323037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Optimization using Lagrange multipliers: 55 gallon steel drum I am currently trying to figure out this particular problem without a great understanding of Lagrange multipliers. I understand the process, but am not sure how the values are generated. Could anyone help me solve this with an explanation of the steps?
Using the dimensions in the figure below, use the method of Lagrange multipliers to optimize the steel drum(cylinder). Find the smallest possible surface area while maintaining (constraint) the desired volume of $57.20$ gallons or $13,213\;in^3$.
Minimize: $\;\; SA(r,h)=2\pi rh+2\pi r^2$
Constraint: $\;\; V(r,h)= \pi hr^2$
diameter $\;\;= 22\;\frac12$ inch,
height $\;\; = 33\;\frac18$ inch.
| Here is an outline of how to solve this problem.
Generally, the method of Lagrange multipliers is not a recipe for a solution and requires some extra thinking. This problem follows that pattern.
The problem is $\min \{ S(r,h) | V(r,h) = K \}$, where $K$ is some positive
constant, $S(r,h) = r^2+rh$ and $V(r,h) = h r^2$ (I stripped out irrelevant constants to simplify).
Step 1: The first, and typically overlooked, step is to convince yourself that there actually is a solution.
This problem can be solved directly without Lagrange multipliers. Note that in this case, $V(r,h) = K$ implies that $h>0$, and so we see that $h = {K \over r^2}$ and $S(r,{K \over r^2}) = r^2 + {K \over r}$. Plotting this over all $r \neq 0$ shows that $S$ is unbounded below (as $r \to 0^-$), so technically it has no minimum.
Since a negative $r$ is unphysical, there is an implied constraint that $r \ge 0$.
We (well, I) still need to convince ourselves that there is a minimum. Note that if we pick any feasible $r,h$, for example, $(r,h) = (1,K)$, then any solution must have cost no larger than this, so the problem is equivalent to $\min \{ S(r,h) | V(r,h) = K , S(r,h) \le S(1,K), r \ge 0\}$. It is straightforward to see that the feasible set is compact, hence a solution exists. Hence we can drop the artificial $S(r,h) \le S(1,K)$ constraint.
To say a solution exists means that there is some point $(r^*,h^*)$ that is feasible and $\epsilon>0$ such that for any feasible $(r,h) \in B((r^*,h^*), \epsilon)$ that $S(r^*,h^*) \le S(r,h)$.
Furthermore, the $V$ constraint shows that at a solution we must have $r>0$
(and hence is not binding at the solution).
The whole point of this diatribe is that $(r^*,h^*)$ is a local solution to the original problem and so the Lagrange multiplier conditions hold at
this point.
Step 2: Actually use the multipliers.
We have the equations $h^*+2r^*+2 \lambda r^*h^* = 0$, $r^* + \lambda (r^*)^2 = 0$. We know that $r^* >0$ so the last equation gives $r^* = - {1 \over \lambda}$ and the
other equation gives $h^*=-{2 \over \lambda}$. Substituting into the
$V$ constraint gives $\lambda = -\sqrt[3]{2 \over K}$.
Hence $r^*= \sqrt[3]{K \over 2}$, $h^* = 2\sqrt[3]{K \over 2}$ with optimal surface area $S(r^*,h^*) = 3 \sqrt[3]{K^2 \over 4}$.
The numbers in the problem are a bit inconsistent: the volume constraint is $13{,}213$ cu. in., but the diameter and height given result in a volume of around $13{,}171$ cu. in.
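A quick numeric check of the closed form, with the physical constants restored ($V=\pi r^2 h = K$ makes $r^*=(K/2\pi)^{1/3}$, i.e. height equals diameter); the drum dimensions below are the ones quoted in the question.

```python
import math

K = 13213.0  # required volume in cubic inches

def surface(r, h):
    # full surface area 2*pi*r*h + 2*pi*r^2 (constants restored)
    return 2 * math.pi * r * h + 2 * math.pi * r * r

# closed-form optimum from the Lagrange conditions: h* = 2 r*,
# i.e. the optimal height equals the diameter
r_star = (K / (2 * math.pi)) ** (1 / 3)
h_star = 2 * r_star

# the drum's quoted dimensions, for comparison
r_drum, h_drum = 22.5 / 2, 33.125
v_drum = math.pi * h_drum * r_drum ** 2
```

Perturbing $r$ while holding the volume fixed only increases the area, confirming the minimum.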
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4323209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Skewness of a random variable that is Poisson distributed Let $X$ be a discrete random variable with $\sum_{x\in \Omega(X)}|x|^3P[X=x]<\infty$ and $Std(X)>0$.
Then the skewness of $X$ is defined by $$\eta(X)=E\left [\left (\frac{X-E[X]}{Std(X)}\right )^3\right ]$$
For $a,b\in \mathbb{R}$ with $a\neq 0$ we have that $$\eta(aX+b)=\begin{cases}\eta(X), & a>0\\ -\eta(X), & a<0\end{cases}$$
We have that $$\eta(X)=\frac{E[X^3]-3E[X]E[X^2]+2(E[X])^3}{(Std(X))^3} \ \ \ \ \ (\star)$$
Calculate the skewness of a random variable that is Poisson distributed with the parameter $\lambda> 0$.
From $(\star)$ we have that $$\eta(X)=\frac{E[X^3]-3E[X]E[X^2]+2(E[X])^3}{(Std(X))^3}$$
By the Expectation of Poisson Distribution we have that $E(X)=\lambda$.
By the Variance of Poisson Distribution: $Var(X)=\lambda \Rightarrow E[X^2]-(E[X])^2=\lambda\Rightarrow E[X^2]-\lambda^2=\lambda\Rightarrow E[X^2]=\lambda+\lambda^2$.
Then $Std(X)=\sqrt{Var(X)}=\sqrt{\lambda}$.
So far we have $$\eta(X)=\frac{E[X^3]-3E[X]E[X^2]+2(E[X])^3}{(Std(X))^3}=\frac{E[X^3]-3\cdot \lambda \cdot \left (\lambda+\lambda^2\right )+2\lambda^3}{\sqrt{\lambda}^3}=\frac{E[X^3]-3\lambda^2-3\lambda^3+2\lambda^3}{\sqrt{\lambda}^3}=\frac{E[X^3]-3\lambda^2-\lambda^3}{\sqrt{\lambda}^3}$$ How can we calculate $E[X^3]$ ?
| Use properties of the moment generating function:
$$\begin{align}
M_X(t) &= \operatorname{E}[e^{tX}] \\
&= \sum_{x=0}^\infty e^{tx} e^{-\lambda} \frac{\lambda^x}{x!} \\
&= \sum_{x=0}^\infty e^{-\lambda} \frac{(\lambda e^t)^x}{x!} \\
&= e^{\lambda (e^t - 1)} \sum_{x=0}^\infty e^{-\lambda e^t} \frac{(\lambda e^t)^x}{x!} \\
&= e^{\lambda (e^t - 1)}.
\end{align}$$
Now since $$\operatorname{E}[X^k] = \left[\frac{d^k M_X}{dt^k}\right]_{t=0}$$
we obtain the first three moments through differentiation:
$$M_X'(t) = M_X(t) \frac{d}{dt}[\lambda (e^t - 1)] = \lambda e^t M_X(t),$$ using the fact that $\frac{d}{dt}[e^{f(t)}] = f'(t) e^{f(t)}$. Then by the product rule,
$$M_X''(t) = \lambda \left(\frac{d}{dt}[e^t] M_X(t) + e^t M_X'(t)\right) = \lambda \left(e^t + \lambda e^{2t}\right) M_X(t),$$ where we have substituted the result for the first derivative. Next,
$$M_X'''(t) = \lambda \left(\frac{d}{dt}[e^t + \lambda e^{2t}] M_X(t) + (e^t + \lambda e^{2t}) M_X'(t)\right) \\ = \lambda (e^t + 2\lambda e^{2t} + \lambda (e^{2t} + \lambda e^{3t})) M_X(t) \\
= \lambda(e^t + 3\lambda e^{2t} + \lambda^2 e^{3t}) M_X(t).$$
Now evaluating each of these at $t = 0$ yields the desired moments:
$$\operatorname{E}[X] = M_X'(0) = \lambda \\
\operatorname{E}[X^2] = M_X''(0) = \lambda(1+\lambda) \\
\operatorname{E}[X^3] = M_X'''(0) = \lambda(1 + 3\lambda + \lambda^2).
$$
The rest is straightforward.
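Carrying the computation to the end gives $\eta(X)=\lambda/\lambda^{3/2}=1/\sqrt{\lambda}$. A quick numerical sketch (my function names) that sums the Poisson pmf directly and confirms the three moments and the resulting skewness:

```python
import math

def poisson_moment(lam, k, terms=200):
    # E[X^k] = sum_x x^k P[X = x], building P[X = x] iteratively
    # to avoid huge factorials
    p = math.exp(-lam)        # P[X = 0]
    total = 0.0
    for x in range(terms):
        total += (x ** k) * p
        p *= lam / (x + 1)
    return total

def skewness(lam):
    m1 = poisson_moment(lam, 1)
    m2 = poisson_moment(lam, 2)
    m3 = poisson_moment(lam, 3)
    std = math.sqrt(m2 - m1 * m1)
    return (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / std ** 3
```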
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4323393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\int_{0}^{1} \frac{\ln(x)}{x-1} dx = \sum_{k=1}^{\infty} \frac{1}{k^2}$ Notice: I've seen questions like this on MSE; however, from what I've found, none of them presents a solution like mine. That's why I hope some of you can confirm whether my method produces a correct solution. Thanks.
The problem is stated as:
Prove that $\int_{0}^{1} \frac{\ln(x)}{x-1} dx = \sum_{k=1}^{\infty} \frac{1}{k^2}$
My attempt:
First of all, we use the fact that $\ln(x) = \ln(1+(x-1))$, which can then be written as $\sum_{k=1}^{\infty} \frac{(-1)^{k-1}(x-1)^k}{k}$. Substituting this into the integrand, we have:
$$\int_{0}^{1} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}(x-1)^{k-1}}{k}dx$$
We would like to interchange the summation and the integral, since that would simplify the problem a lot. In order to do this, one has to show that the series converges uniformly on the domain of integration. Since we have a power series with radius of convergence $R = 1$, it converges uniformly on compact subsets of its region of convergence.
In other words, we have convergence for $|x-1| < 1 \Leftrightarrow x \in (0,2)$, and in particular uniform convergence on the interval $x \in [\beta, 1]$ for $\beta > 0$. Our original expression can therefore be rewritten as:
$$ \lim_{\beta \rightarrow 0^+} \int_{\beta}^{1} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}(x-1)^{k-1}}{k}dx$$
Now, we can interchange the summation with the integral, since we are integrating within our radius of convergence for the power series, for which we have uniform convergence, as previously stated. After some algebraic simplifications, we have:
$$ \lim_{\beta \rightarrow 0^+} \sum_{k=1}^{\infty} - \frac{(-1)^{k-1}(\beta-1)^{k}}{k^2 }$$
Once again, we can move the limit inside the summation, since the series converges uniformly in $\beta$ on $[0,1]$, which can easily be verified by the Weierstrass M-test.
Finally, we have that:
$$ \sum_{k=1}^{\infty} \lim_{\beta \rightarrow 0^+} - \frac{(-1)^{k-1}(\beta-1)^{k}}{k^2} =\sum_{k=1}^{\infty} - \frac{(-1)^{k-1}(-1)^{k}}{k^2} = \sum_{k=1}^{\infty} \frac{1}{k^2} $$
QED.
Feel free to add any comments on my solution. I'm mostly unsure of the step where I add the limit for $\beta$.
Thanks.
| It's a nice approach. The only thing I would clean up a bit:
When applying the Weierstrass M-test, use the fact that
$ \sum_{k=1}^{\infty} \left|-\frac{(-1)^{k-1}(\beta-1)^{k}}{k^2 }\right|\leq \sum_{k=1}^{\infty} \frac{1}{k^2 }<\infty$
so that the sum is dominated by a sum that's independent of $\beta$. Then the limit as $\beta \to 0^+$ may proceed.
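Both sides equal $\pi^2/6$, which a crude numerical sketch (stdlib only, my function names) reproduces; the midpoint rule avoids evaluating at the integrable singularity $x=0$ and the removable one at $x=1$.

```python
import math

def integral_midpoint(n=200_000):
    # midpoint rule for \int_0^1 ln(x)/(x-1) dx: no sample point ever
    # lands on x = 0 or x = 1
    step = 1.0 / n
    return sum(math.log(step * (i + 0.5)) / (step * (i + 0.5) - 1.0)
               for i in range(n)) * step

def partial_zeta2(terms=2_000_000):
    # partial sum of sum 1/k^2; the tail is of size about 1/terms
    return sum(1.0 / (k * k) for k in range(1, terms + 1))
```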
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4323771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
How can there exist a quotient map to a space of higher dimension than the domain? I don't need a completely formal explanation, just some intuition. My professor stated
identifying antipodal points on $S^1$, yields the projective plane $P^2$. That means there exists a quotient map (and thus a surjective map) $q: S^1\to P^2$.
Intuitively, this confuses me. How can identifying points together raise the dimension of the resulting space? Lowering or keeping the dimension both make sense to me, but I can't wrap my mind around it raising the dimension. Like, I knew that surjective linear maps $T: X\to Y$ require that $\dim Y \leq \dim X $, but this clearly doesn't hold for all continuous maps, just linear ones.
Could someone offer some intuition behind this, or how they think about it? I can wrap my head around the codomain being "larger" than the domain (like a continuous bijection $f: [0, 1]\to\mathbb{R}$, even though since there's a bijection $\mathbb{R}$ technically isn't larger) but not the codomain having higher dimension.
| It’s just an error, by you or by your professor. The projective plane results from identifying antipodal points on $S^2.$
However, there do exist counterintuitive quotient maps that increase dimension. A space-filling curve is a continuous surjection $[0,1]\to [0,1]^2$; since the domain is compact and the codomain is Hausdorff, this is also a closed map, thus in particular, a quotient map. (This cannot happen with differentiable functions, or with injective continuous functions, happily.)
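The finite stages of such a curve are easy to generate; as an illustration (not the specific construction the answer has in mind), here is the standard index-to-coordinate conversion for the Hilbert curve. Each stage visits every cell of a $2^{\text{order}} \times 2^{\text{order}}$ grid while consecutive points stay adjacent, and the uniform limit of these stages is a continuous surjection $[0,1]\to[0,1]^2$.

```python
def hilbert_d2xy(order, d):
    # convert curve index d in [0, 4**order) to (x, y) coordinates
    # on the 2**order x 2**order grid
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```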
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4324096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Existence and Uniqueness of the Cauchy Problem for General Competition Systems? I have the original system
\begin{equation}
\begin{array}{lcl}
\dot u_{1} & = & A_{1}u_{1}(1 - u_{1} - a_{12}u_{2} - \dots - a_{1n}u_{n}) \\
\dot u_{2} & = & A_{2}u_{2}(1 - a_{21}u_{1} - u_{2} - \dots - a_{2n}u_{n}) \\
& \vdots & \\
\dot u_{n} & = & A_{n}u_{n}(1 - a_{n1}u_{1} - \dots - a_{n(n-1)}u_{n-1} - u_{n}),
\end{array}
\end{equation}
which is a general n-dimensional competition system. We use the fact that $A_{i} = a_{ii}$ to write this system as
\begin{equation}
\dot u_{i} = \Phi_{i}(t; u_{1}, u_{2}, \dots, u_{n}),
\end{equation}
where
\begin{equation*}
\begin{array}{lcl}
\Phi_{1} & = & u_{1}(a_{11} - a_{11}u_{1} - a_{11}a_{12}u_{2} - \dots - a_{11}a_{1n}u_{n}) \\
\Phi_{2} & = & u_{2}(a_{22} - a_{22}a_{21}u_{1} - a_{22}u_{2} - \dots - a_{22}a_{2n}u_{n}) \\
& \vdots & \\
\Phi_{n} & = & u_{n}(a_{nn} - a_{nn}a_{n1}u_{1} - \dots - a_{nn}a_{n(n-1)}u_{n-1} - a_{nn}u_{n}).
\end{array}
\end{equation*}
Here, the $a_{ij}$'s are smooth, positive 1-periodic functions of t defined over $\mathbb{R}$. Before I properly consider the periodic case, I need to consider properties of the time-averaged system (which, in this case, will be autonomous). We define this as
\begin{equation}
\dot w_{i} = \phi_{i}(w_{1}, w_{2}, \dots, w_{n}),
\end{equation}
where
\begin{equation*}
\phi_{i}(w_{1}, w_{2}, \dots, w_{n}) = \int_{0}^{1} \Phi_{i}(w_{1}, w_{2}, \dots, w_{n}) \ dt.
\end{equation*}
It follows that the averaged system can be written as
\begin{equation}
\dot w_{i} = w_{i}\bigg(\bar{a_{ii}}(1 - w_{i}) - \sum_{k \neq i} \bar{a_{ik}}w_{k}\bigg), \ \ \ i = 1, 2, \dots, n,
\end{equation}
where
\begin{equation*}
\bar{a_{ij}} = \int_{0}^{1} a_{ij}(t) \ dt.
\end{equation*}
Now, my goal is to prove that the Cauchy problem for this system $\dot w_{i}$ has a unique global solution whenever the initial data $w_{0} = (w_{0_{1}}, w_{0_{2}}, \dots, w_{0_{n}})$ satisfies $w_{0_{i}} \in \mathbb{R}_{+0} = \mathbb{R}_{+} \cup \{0\}$ for all $i \in \{1,\dots,n\}$.
Unfortunately, I have not been able to find much information on conditions for the existence and uniqueness of nonlinear systems of ODEs, or at least for general n-dimensional competition systems like this one.
How should I approach a proof for this? I'm a bit stuck.
| I think the hardest part here is to prove that the solution exists for all positive times. Let me show this. Since $a_{ij}(t) > 0$, we have $\overline{a}_{ij} > 0$. That allows us to say that $\dot{w}_i \leqslant 0$ as long as $w_i \geqslant 1$ and the other variables are non-negative. So, for any initial condition $(w_1^{(0)}, w_2^{(0)}, \dots, w_n^{(0)})$ this leads to the following statement: the trajectory doesn't leave the hyper-parallelepiped $\bigcap\limits_{i=1}^{n} \lbrace 0 \leqslant w_i \leqslant w_i^{(0)} \rbrace $. This is due to the Bony–Brezis theorem: when $w_i = 0$, then $\dot{w}_i = 0$, and $\dot{w}_i < 0$ when $w_i = w_i^{(0)}$, which means that the vector field is pointing inward at the boundary of this hyper-parallelepiped (when $w_i = w_i^{(0)}$) or is tangent when $w_i = 0$. If a trajectory does not leave a compact subset of phase space for $t > 0$, it exists for all positive times (see, for example, theorem here, p. 19). For each starting point outside of $\bigcap\limits_{i=1}^{n} \lbrace 0 \leqslant w_i \leqslant 1 \rbrace $ we have shown that the trajectory enters the compact set $\bigcap\limits_{i=1}^{n} \lbrace 0 \leqslant w_i \leqslant w_i^{(0)} \rbrace $ and does not leave it. The hyper-parallelepiped $\bigcap\limits_{i=1}^{n} \lbrace 0 \leqslant w_i \leqslant 1\rbrace $ is also invariant, so we have covered the other points as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4324269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Existence of $k$-form with nonzero integral Say that $N$ is an oriented, compact, connected manifold without boundary. If $\operatorname{dim}(N) = k$, does there always exist some $k$-form $\omega$ such that $\int_N \omega \neq 0$?
I know how to proceed in some particular cases (e.g., $S^k$), but I have no idea how to prove the general case, if it is even true. In the literature I've checked (mostly Lee's Introduction to Smooth Manifolds, 2nd edition, and Guillemin/Pollack Differential Topology) there is nothing, also - but I may have missed something.
| Let $(U, \varphi)$ be an oriented local chart of $M$. On $\varphi (U) \subset \mathbb R^k$ you have a $k$-form $\omega_0 = dx^1 \wedge \cdots \wedge dx^k$ and a bump function $b : \varphi(U) \to \mathbb R$ (i.e. a smooth function with compact support in $\varphi(U)$), then
$$\varphi^*(b \omega_0)$$
is a smooth $k$-form on $M$ (by extending to zero outside of $U$). By definition of the integration,
$$ \int_M \varphi^*(b \omega_0) = \int_{\varphi (U)} b\omega_0 = \int_{\varphi (U)} b(x)\, dx^1\cdots dx^k.$$
One can choose this to be non-zero (e.g. choose $b\ge 0$ with $b>0$ on an open subset).
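In coordinates the recipe is concrete; a one-dimensional sketch with the standard bump $b(x)=e^{-1/(1-x^2)}$ supported in $(-1,1)$, whose integral is visibly positive (function names are mine).

```python
import math

def bump(x):
    # smooth, compactly supported in (-1, 1); 0 outside
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def integral(f, a, b, n=100_000):
    # simple midpoint rule
    step = (b - a) / n
    return sum(f(a + (i + 0.5) * step) for i in range(n)) * step
```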
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4324491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find inverse of element in a binary field The question states:
Let us consider the field $GF(2^4)$ with multiplication modulo $x^4+ x^3+1$
Find all y such that $1010(y + 0011) = 1111$, in other words find y that satisfies $(x^3+x)(y +x+1) = x^3+x^2+x+1$
I tried to find the inverse for element $1010$ to multiply both sides of the equation and continue from there, I tried by solving the equation:
$(x^3+x)(ax^3+bx^2+cx+d) = 1$
But the solution I get is that $a=0,b=1,c=1,d=0$ which doesn't produce the inverse element.
What is the inverse element of $(1010)$ in this field and how can I arrive at that solution?
Any help is greatly appreciated.
| We know $$x^4+x^3+1=0 .\tag1$$
It will be useful to know
$$
x^4 = x^3+1\tag2$$
$$
x^5 = x^4+x=(x^3+1)+x = x^3+x+1\tag3$$
$$
x^6 = x^5+x^2 = (x^3+x+1)+x^2 = x^3+x^2+x+1\tag4
$$
These will be useful for doing a calculation like
$(x^3+x)(ax^3+bx^2+cx+d) = 1$ that the OP proposes.
Compute
\begin{align}
1 &= (x^3+x)(ax^3+bx^2+cx+d)
\\
&=(a)x^6+(b)x^5+(c+a)x^4+(d+b)x^3+(c)x^2+(d)x+0
\\
&=(a+b+d+c+a+b)x^3+(a+c)x^2+(a+b+d)x+(a+b+c+a)
\\
&=(d+c)x^3+(a+c)x^2+(a+b+d)x+(b+c)
\end{align}
Thus
\begin{align}
d+c&=0\\a+c&=0\\a+b+d&=0\\b+c&=1
\end{align}
So $d=c$, then $a=c$, then $b=a+d=c+c=0$, then $c=d=a=1$. So
$$
(x^3+x)^{-1} = x^3+x+1
$$
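The arithmetic can be double-checked mechanically; a sketch using bitmasks (e.g. $x^3+x \mapsto$ `0b1010`) with the reduction polynomial $x^4+x^3+1 \mapsto$ `0b11001` (function names are mine).

```python
MOD = 0b11001  # x^4 + x^3 + 1

def gf_mul(a, b, mod=MOD):
    # carry-less multiplication, then reduce modulo the field polynomial
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    for shift in range(p.bit_length() - 1, 3, -1):
        if p & (1 << shift):
            p ^= mod << (shift - 4)
    return p

def gf_inv(a):
    # brute force is fine in a 16-element field
    return next(y for y in range(1, 16) if gf_mul(a, y) == 1)
```

With these, $y = 1010^{-1}\cdot 1111 + 0011$ solves the original equation.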
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4324627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Show that $S=\{\mathbf x \in \mathbb R^n:\mathbf A \mathbf x=\mathbf b,\mathbf x \ge \mathbf 0\} \ne \emptyset$ has at least one extreme point Show that the standard polyhedron defined by $S=\{\mathbf x \in \mathbb R^n:\mathbf A \mathbf x=\mathbf b,\mathbf x \ge \mathbf 0\} \ne \emptyset $ has at least one extreme point and the set of its extreme points is a finite set.
I know that we need to find $\mathbf x \in S$ such that $$\forall \mathbf u,\mathbf v \in S, \forall \lambda \in (0,1):\mathbf x= \lambda \mathbf u+(1-\lambda)\mathbf v \implies \mathbf u=\mathbf v$$ (writing $\mathbf u,\mathbf v$ for points of $S$, to avoid a clash with the matrix $\mathbf A$).
And that shows that the set has at least one extreme point. I can find such an $\mathbf x$, but unfortunately it depends on the chosen points, and so it does not work for arbitrary $\mathbf u,\mathbf v \in S$.
I've seen the theorem in many sources, but could not find any proof of that.
| I think that I can prove the existence of at least one extreme point if $A$ is an $n \times n$ matrix.
Proof:
I will do the proof by induction.
Case $n=1$:
If $A$ is not singular and $S \neq \emptyset$ then $S=\{a\}$ (a consequence of the Rouché–Frobenius theorem) for some $a \in \mathbb{R}$, and then $S$ has one extreme point.
If $A$ is singular (and $S \neq \emptyset$, which forces $b=0$) then $S=\mathbb{R}^{+} \cup \{0\}$, and $0$ is the extreme point.
General case $n=k$:
If $A$ is not singular and $S \neq \emptyset$ then $S=\{a\}$ (a consequence of the Rouché–Frobenius theorem) for some $a \in \mathbb{R}^{n}$, and then $S$ has one extreme point.
If $A$ is singular and $S \neq \emptyset$, then we have that $S=(a+V) \cap \{x \geq 0\}$ for some $a \in \mathbb{R}^{n}$ and a subspace $V \subseteq \mathbb{R}^{n}$; we may take $a \in S$, so that $a \geq 0$.
Since $A$ is singular, there exists $v=(v_{1},\dots,v_{n}) \in V$ with $v \neq 0$, so there exists $j \in \{1,\dots,n\}$ with $v_{j} \neq 0$; replacing $v$ by $-v$ if necessary, we may assume that some coordinate of $v$ is negative.
Rearranging coordinates if necessary, we may further assume that $v_{n} < 0$ and that the index $n$ minimizes the ratio $\frac{a_{j}}{|v_{j}|}$ over the indices $j$ with $v_{j} < 0$, i.e. $\frac{a_{n}}{|v_{n}|} \leq \frac{a_{j}}{|v_{j}|}$ whenever $v_{j} < 0$.
Setting $t=\frac{a_{n}}{|v_{n}|}\geq 0$, we then have $a+tv=\left(a_{1}+tv_{1},\dots,a_{n-1}+tv_{n-1},0\right)$ with $a_{i}+tv_{i} \geq 0$ for $i \in \{1,\dots,n-1\}$ (automatic when $v_{i}\geq 0$, and by the minimality of $\frac{a_{n}}{|v_{n}|}$ when $v_{i}<0$), so $a+tv \in S$.
But $S \subseteq \{x \geq 0\}$, so if $a+tv=\lambda c + (1-\lambda) d$ with $\lambda \in (0,1)$ and $c,d \in S$, then $c_{n}=d_{n}=0$.
So $a+tv$, $c$ and $d$ are solutions of the system:
$$
\begin{pmatrix}
a_{11}& \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{nn} \\
\end{pmatrix}
\begin{pmatrix}
x_{1} \\
\vdots \\
x_{n-1}\\
0
\end{pmatrix}
=
\begin{pmatrix}
b_{1} \\
\vdots \\
b_{n}
\end{pmatrix}
$$
and since $x_{n}=0$ we can delete the last column of our matrix, and our solutions are solutions of the system:
$$
\begin{pmatrix}
a_{11}& \cdots & a_{1,n-1} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{n,n-1} \\
\end{pmatrix}
\begin{pmatrix}
x_{1} \\
\vdots \\
x_{n-1}
\end{pmatrix}
=
\begin{pmatrix}
b_{1} \\
\vdots \\
b_{n}
\end{pmatrix}
$$
we denote
$$A'=
\begin{pmatrix}
a_{11}& \cdots & a_{1,n-1} \\
\vdots & \ddots & \vdots \\
a_{n1} & \cdots & a_{n,n-1} \\
\end{pmatrix},
X'=
\begin{pmatrix}
x_{1}\\
\vdots\\
x_{n-1}
\end{pmatrix},
B=
\begin{pmatrix}
b_{1}\\
\vdots\\
b_{n}
\end{pmatrix}
$$
then we have that $rg(A') \leq n-1$, and since the point constructed above is a solution of $A'X'=B$, we have that $rg(A')=rg(A'|B) \leq n-1$, so we can delete one row of $A'$ and $B$. We can suppose that we delete the $n$-th row.
So our system is equivalent to the system:
$$
\begin{pmatrix}
a_{11}& \cdots & a_{1,n-1} \\
\vdots & \ddots & \vdots \\
a_{n-1,1} & \cdots & a_{n-1,n-1} \\
\end{pmatrix}
\begin{pmatrix}
x_{1} \\
\vdots \\
x_{n-1}
\end{pmatrix}
=
\begin{pmatrix}
b_{1} \\
\vdots \\
b_{n-1}
\end{pmatrix}
$$
and by the induction hypothesis this last system, restricted to $\{x' \geq 0\}$, has at least one extreme point.
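Although it does not replace the proof, the statement can be made concrete for small instances: the extreme points of the standard-form polyhedron are exactly its basic feasible solutions, of which there are at most $\binom{n}{m}$ (one per choice of basis columns), so the set of extreme points is finite. A brute-force enumeration sketch (all names mine):

```python
from itertools import combinations

def solve_square(M, c):
    # Gauss-Jordan with partial pivoting; returns None if M is singular
    n = len(M)
    A = [row[:] + [c[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < 1e-12:
            return None
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                fac = A[r][col] / A[col][col]
                for j in range(col, n + 1):
                    A[r][j] -= fac * A[col][j]
    return [A[i][n] / A[i][i] for i in range(n)]

def basic_feasible_solutions(A, b):
    # extreme points of {x : Ax = b, x >= 0} = basic feasible solutions:
    # pick m independent columns, solve the square system, keep x >= 0
    m, n = len(A), len(A[0])
    points = set()
    for cols in combinations(range(n), m):
        B = [[A[i][j] for j in cols] for i in range(m)]
        xB = solve_square(B, b)
        if xB is None or any(v < -1e-9 for v in xB):
            continue
        x = [0.0] * n
        for j, v in zip(cols, xB):
            x[j] = max(v, 0.0)
        points.add(tuple(round(v, 9) for v in x))
    return sorted(points)
```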
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4324758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Show that vector v is equal to the zero vector Let $H = \operatorname{Span}\{\vec v_{1}, \vec v_{2}\}$ where $\{\vec v_{1}, \vec v_{2}\}$ is an orthogonal set of nonzero vectors. Let $\vec v \in H$ be such that
$\vec v \cdot \vec v_{1} = 0$ and $\vec v \cdot \vec v_{2} = 0$. Show that $\vec v = \vec 0$. Hint: Compute $||\vec v||^2 = \vec v \cdot \vec v$ by writing $\vec v$ as a linear combination of $\vec v_{1}, \vec v_{2}$ and taking the dot product with $\vec v$.
Since I know that $||\vec v|| = 0$ if and only if $\vec v = \vec 0$,
I found the weights like this: $$c_{1} = \frac{\vec v \cdot \vec v_{1}}{\vec v_{1}\cdot\vec v_{1}} =\frac{0}{\vec v_{1}\cdot\vec v_{1}} = 0$$
$$c_{2} = \frac{\vec v \cdot \vec v_{2}}{\vec v_{2}\cdot\vec v_{2}} = \frac{0}{\vec v_{2}\cdot\vec v_{2}} = 0$$
Using that to create a linear combination of $\vec v_{1},\vec v_{2}$
$$\vec v = 0\vec v_{1} + 0\vec v_{2}$$
$$\vec v = 0$$
Is this even correct? Or am I just overthinking the question?
| Following the given hint we have
$$v=av_1+bv_2 \implies v\cdot v=av\cdot v_1+bv\cdot v_2=0+0=0$$
and then we also have
$$v=av_1+bv_2 \implies v\cdot v= a^2|v_1|^2+b^2|v_2|^2=0$$
and the latter holds $\iff a=b=0$.
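The identities in the hint are easy to check on a concrete pair; a small sketch with a hypothetical orthogonal pair $v_1, v_2 \in \mathbb{R}^3$ of my own choosing.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

v1 = (1.0, 2.0, 2.0)
v2 = (2.0, 1.0, -2.0)   # dot(v1, v2) = 2 + 2 - 4 = 0

def combo(a, b):
    # a generic element v = a*v1 + b*v2 of H = Span{v1, v2}
    return tuple(a * x + b * y for x, y in zip(v1, v2))
```

Since $v\cdot v_1 = a|v_1|^2$ and $v\cdot v_2 = b|v_2|^2$, both dot products vanish only when $a=b=0$, i.e. $v=\vec 0$.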
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4324877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Ways of arranging $k$ cards in a deck of $N$ cards I have a deck of $N$ cards numbered $1, \ldots, N$. I want to count the number of ways that $k$ cards may appear in order. For example, if $k = 3$, I want to count the number of decks in which card $1$ appears before card $2$, which appears before card $3$. The remaining $N - 3$ cards can be arranged arbitrarily. Note: card $1$ must appear before card $2$, but there can be $\geq 0$ cards in between them.
I am struggling to start on this question - I thought perhaps I could apply stars and bars, but I wasn't sure how to guarantee the ordering.
| (Answer from comments)
There is $1$ way to arrange $k$ cards in any specific order (ascending, in this case).
There are $\binom{N}{k}$ ways to position those $k$ cards within a deck of $N$ cards.
Finally, there are $(N-k)!$ ways to arrange the remaining cards.
The total is therefore:
$$\binom N k \cdot (N -k)!=\frac{N!}{k!}$$
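The count $\binom{N}{k}(N-k)! = N!/k!$ can be confirmed by brute force for small decks (a sketch; names are mine).

```python
from itertools import permutations
from math import factorial

def count_ordered(N, k):
    # decks (permutations of 1..N) in which cards 1..k occur in
    # increasing order of position
    count = 0
    for deck in permutations(range(1, N + 1)):
        pos = [deck.index(c) for c in range(1, k + 1)]
        if pos == sorted(pos):
            count += 1
    return count
```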
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4325072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculate the sum of the series $\sum_{n=1}^{\infty} \frac{n+12}{n^3+5n^2+6n}$ They tell me to find the sum of the series
$$\sum a_n :=\sum_{n=1}^{\infty}\frac{n+12}{n^3+5n^2+6n}$$
Since $\sum a_n$ is absolutely convergent, we can manipulate it the same way we would a finite sum. I've tried splitting the general term and I get
$$\frac{n+12}{n(n+2)(n+3)}=\frac{A}{n}+\frac{B}{n+2}+\frac{C}{n+3}=\frac{2}{n}+\frac{-5}{n+2}+\frac{3}{n+3}$$
and so
$$\sum a_n =\sum \frac{2}{n}-\frac{5}{n+2}+\frac{3}{n+3}$$
Now, if I was to split the series in the sum of three different series I would get three different divergent series and so, obviously $\sum a_n$ wouldn't converge.
I also suspect it is a telescoping series, although the numerators of each fraction make it difficult to find the cancelling terms. I also know that I can rearrange the terms in my series, although I cannot see how this would solve the problem.
If anyone could give me a hint I would really appreciate it.
| If you know how to play with harmonic numbers, it is pretty simple
$$\sum_{n=1}^p \frac 1 {n}=H_p$$
$$\sum_{n=1}^p \frac 1 {n+2}=H_{p+2}-\frac{3}{2}$$
$$\sum_{n=1}^p \frac 1 {n+3}=H_{p+3}-\frac{11}{6}$$
Now, use three times the asymptotics
$$H_q=\log (q)+\gamma +\frac{1}{2 q}-\frac{1}{12 q^2}+O\left(\frac{1}{q^4}\right)$$ and continue with Taylor or long division and you will have
$$\sum_{n=1}^{p}\frac{n+12}{n^3+5n^2+6n}=2-\frac{1}{p}-\frac{3}{p^2}+O\left(\frac{1}{p^3}\right)$$
Use it for $p=10$; the approximation gives exactly $1.87$ while the exact value is $\frac{1615}{858}=1.88228$ so a relative error of $0.65$%.
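Both the exact value at $p=10$ and the asymptotics can be checked (the full sum converges to $2$); a sketch using exact rational arithmetic for small $p$ and floats for large $p$ (function names are mine).

```python
from fractions import Fraction

def partial_sum_exact(p):
    # exact partial sum via Fraction, practical only for small p
    return sum(Fraction(n + 12, n**3 + 5 * n**2 + 6 * n) for n in range(1, p + 1))

def partial_sum_float(p):
    return sum((n + 12) / (n**3 + 5 * n**2 + 6 * n) for n in range(1, p + 1))

def approx(p):
    # the asymptotic expansion above, truncated after the 1/p^2 term
    return 2 - 1 / p - 3 / p**2
```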
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4325327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Translating a diagram chase into an element-free proof One part of the four lemma says that:
Consider the following diagram with exact rows in an abelian category $\mathsf{A}$:
If $m$ and $p$ are monomorphisms and $l$ is an epimorphism, then $n$ is a monomorphism.
We can very simply prove this by diagram chasing. I'll copy a proof below:
1. Let $c \in C$ be such that $n(c) = 0$.
2. $t(n(c))$ is then $0$.
3. By commutativity, $p(h(c)) = 0$.
4. Since $p$ is injective, $h(c) = 0$.
5. By exactness, there is an element $b$ of $B$ such that $g(b) = c$.
6. By commutativity, $s(m(b)) = n(g(b)) = n(c) = 0$.
7. By exactness, there is then an element $a'$ of $A'$ such that $r(a') = m(b)$.
8. Since $l$ is surjective, there is $a$ in $A$ such that $l(a) = a'$.
9. By commutativity, $m(f(a)) = r(l(a)) = m(b)$.
10. Since $m$ is injective, $f(a) = b$.
11. So $c = g(f(a))$.
12. Since the composition of $g$ and $f$ is trivial, $c = 0$.
I want to "translate" this proof into an element-free proof using universal properties.
The beginning is simple. The composition $\ker n\to C\to C'\to D'$ is zero. By commutativity of the diagram, so is $\ker n\to C\to D\to D'$. But $p:D\to D'$ is monic, so we conclude that $\ker n\to C\to D$ is zero. This seems to be the first 4 steps in the proof above.
However I don't know how to continue. (If an element-free proof of this cannot simply be a translation of the diagram chase, I would still want to learn it.)
| Thanks to the answer and the comment by Jackozee Hakkiuz, I did the following proof. (The notations are not exactly the same, but it should be clear. Also the proposition 1.3.3 is simply the fact that the pullback of an epimorphism is epic.)
I hope that this may be useful to someone :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4325492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Find a lower bound of $-\frac{1}{4}\sqrt{(1-2a+x)^2-8(-3-4a-3x-2ax)}-\frac{1}{4}(1-2a+x)$ I have to find a lower bound (that does not depend on $x$) of the following quantity:
$$-\frac{1}{4}\sqrt{(1-2a+x)^2-8(-3-4a-3x-2ax)}-\frac{1}{4}(1-2a+x)$$
where $x\geq 0,\, a>0$.
Really I have tried to observe that:
$-\frac{1}{4}\sqrt{(1-2a+x)^2-8(-3-4a-3x-2ax)}-\frac{1}{4}(1-2a+x)>-\frac{1}{2}\sqrt{(1-2a+x)^2-8(-3-4a-3x-2ax)}=A$
but then I can't find a lower bound of A...can you help me?
| Let $a$ be a fixed positive real. Let $x \geqslant 0$ be allowed to vary.
Let $A=2,\quad B=1-2a+x,\quad C=-3-4a-3x-2ax.$
Consider the quadratic $Ay^2+By+C=0.$
Then your expression is the left-hand root of this quadratic:
$$-\frac{1}{4}\sqrt{(1-2a+x)^2-4A(-3-4a-3x-2ax)}-\frac{1}{4}(1-2a+x).$$
(For we see that $A>0,\; C<-3,\; B^2-4AC>0$ for any $x$, and thus the parabola always has two roots.)
So I think you cannot have such a bound. For we can, by increasing $x$ more and more, move the axis of symmetry of the parabola further and further to the left. This must in turn push the left-hand root towards $-\infty$.
The axis of symmetry: $\;y= \frac{-B}{2A}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4325672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Independence of $A$ and $B$ implies the independence of $\neg A$ and $B$ Does the following hold? $$P(A\mid B)=P(A)\implies P(\neg A\mid B)=P(\neg A)$$
My rough answer is that if $A$ is the event that it is rainy and $B$ is the event of a toothache, then both $P(A\mid B)=P(A)$ and $P(\neg A\mid B)=P(\neg A)$ should hold. But can we prove this mathematically?
| Firstly, $$p(A|B)=p(A)\implies \frac{p(A\cap B)}{p(B)}=p(A)\implies p(A\cap B)=p(A)p(B)$$
Then, $$p(A'|B)=\frac{p(A'\cap B)}{p(B)}$$
$$=\frac{p(B)-p(A\cap B)}{p(B)}$$
$$=1-\frac{p(A)p(B)}{p(B)}$$
$$=1-p(A)=p(A')$$
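The algebra can be sanity-checked on a concrete finite probability space — say two fair coin flips, a hypothetical example not taken from the post:

```python
from fractions import Fraction
from itertools import product

# Uniform probability space: all outcomes of two fair coin flips.
omega = list(product([0, 1], repeat=2))

def P(event):
    """Probability of an event (a subset of omega) under the uniform measure."""
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == 1}       # "first flip is heads"
B = {w for w in omega if w[1] == 1}       # "second flip is heads"
not_A = set(omega) - A

# A and B are independent, and so is the pair (A', B), as the algebra predicts.
assert P(A & B) == P(A) * P(B)
assert P(not_A & B) == P(not_A) * P(B)
print(P(A), P(not_A), P(B))
```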
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4325837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Solve $a^x = bx$ without graphing?
Solve $a^x = bx$ without graphing?
First of all, I am only in 8th grade, and while I do study topics from a much higher level, I do not yet understand calculus. I also have searched various internet resources, but the only solution to this problem seems to be graphing.
I would like to know if a problem in the format of $a^x = bx$ (where $a$ and $b$ are constants) can be solved with a moderate accuracy (such as the first 3 decimal digits) without graphing OR trial and error. So far, I could make this problem a logarithmic one: $\log_a (bx) = x$, but I am unsure of how to progress from here.
If you need an example, here is one:
$10^x = 7x.$
Furthermore, if the previous is solvable, will it be simple to solve this problem:
$100(1.05)^x = 10x + 50.$
Thanks for your effort and time.
| As said in comments, the only explicit solutions of the zero's of function
$$f(x)=a^x-bx$$ are given in terms of Lambert function (which is not elementary) (just have a look here)
$$x=-\frac{1}{\log (a)}W\left(-\frac{\log (a)}{b}\right)$$ and, in the linked page, you will find expansions allowing the evaluation of $W(t)$.
If, for the time being, you cannot use Lambert function, without graphing, you can do good work. For the next, I shall assume that $a$ and $b$ are real and positive.
We have
$$f'(x)=a^x \log (a)-b \qquad \text{and} \qquad f''(x)=a^x \log ^2(a)$$
The first derivative cancels at
$$x_*=\frac{1}{\log (a)}\log \left(\frac{b}{\log (a)}\right)$$
Since $f(0)=1$, if
$$f(x_*)=\frac{b}{\log (a)}\left(1-\log \left(\frac{b}{\log (a)}\right)\right)$$ is negative, then you can see that there are two roots, one on each side of $x_*$. They correspond to the two branches of the Lambert function. If this is not the case, there is no real solution.
For illustration, considering your case $a=10$ and $b=7$, we have $x_*=0.482882$ and $f(x_*)=-0.340115$; then two roots.
If, as in this case, $a$ and $b$ are "similar", you can approximate the solutions expanding $f(x)$ as a Taylor series around $x_*$ and have
$$x_\pm=x_* \pm \sqrt{-2 \frac{f(x_*)} {f''(x_*)} }$$
For the working case, this would give (as estimates)
$$x_-=0.277449 \qquad \text{and} \qquad x_+=0.688316$$ while the exact solutions are respectively $x_-=0.259894$ and $x_+=0.673319$, which is not too bad (this was done with my phone). Now, if you want to polish the roots, you need a numerical method such as Newton's. I give you below the iterates for each root
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 0.277449 \\
1 & 0.259315 \\
2 & 0.259893 \\
3 & 0.259894
\end{array}
\right)$$
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 0.688316 \\
1 & 0.673998 \\
2 & 0.673320 \\
3 & 0.673319
\end{array}
\right)$$
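The Newton iterations quoted above are easy to reproduce; a self-contained sketch (helper names are mine), using the same Taylor-series estimates as starting points:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration; returns the converged root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

a, b = 10.0, 7.0
f = lambda x: a ** x - b * x
fp = lambda x: a ** x * math.log(a) - b

# starting points: the Taylor estimates around x_* from the answer
x_minus = newton(f, fp, 0.277449)
x_plus = newton(f, fp, 0.688316)
print(x_minus, x_plus)   # ≈ 0.259894 and 0.673319
```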
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4326046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using Borel-Cantelli to find limsup
Suppose $Y_1, Y_2, \dots$ is any sequence of iid real valued random variables
with $E(Y_1)=\infty$ . Show that, almost surely, $\limsup_n
(|Y_n|/n)=\infty$ and $\limsup_n (|Y_1+...+Y_n|/n)=\infty$.
I have solved the first part by considering non-negative integer iid r.vs $X_n=\text{floor}(|Y_n|)$ and using $E(X)=\sum_0^\infty P(X\ge n)$ then doing some clever tricks so I can apply the (2nd) Borel-Cantelli lemma, but I'm not really sure how I can use the same approach to solve the second part seeing as it is tempting to set $S_n=\text{floor}(|Y_1+...+Y_n|)$ but then the $S_i$ are not iid. I'm pretty sure its gonna be Borel-Cantelli again (since limsup) so I need to come up with the right events. Please can someone nudge me in the right direction.
Hints only please
EDIT:
Suppose $\limsup_n |a_1+...+a_n|/n\lt \infty$. Then set $S_n=\sum^n_1 a_k$ $$\frac{|a_n|}{n}=\frac{|S_n - S_{n-1}|}{n} \le\frac{|S_n|}{n}+\frac{|S_{n-1}|}{n-1},$$ which is bounded.
| Hint: If $(a_n)$ is a sequence of real numbers with $\lim \sup {|a_1+a_2+\cdots+a_n|}/n <\infty$ then $\lim \sup \frac {|a_n|} n <\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4326233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Some hypergeometric transformation In Concrete Mathematics, the identity
$$\sum_{k\geq 0}\binom{m+r}{m-n-k}\binom{n+k}{n}x^{m-n-k}y^k\\=\sum_{k\geq 0}\binom{-r}{m-n-k}\binom{n+k}{n}(-x)^{m-n-k}(x+y)^{k}$$
is proven, where $n\geq 0$ and $m$ are integers, and $x,y,r$ are complex numbers.
From this identity, the hypergeometric equality
$$F(a,-n;c;z)=\frac{(a-c)^{\underline n}}{(-c)^{\underline n}}F(a,-n;1-n+a-c;1-z)$$
can then "clearly" be derived, where $n\geq 0$ is an integer, and $a,b,c$ are complex numbers.
I don't understand how this hypergeometric equality can be proven using the first identity. Any suggestions?
| We represent the binomial identity
\begin{align*}
\sum_{k\geq 0}&\binom{m+r}{m-n-k}\binom{n+k}{n}x^{m-n-k}y^k\\
&=\sum_{k\geq 0}\binom{-r}{m-n-k}\binom{n+k}{n}(-x)^{m-n-k}(x+y)^k\tag{1}
\end{align*}
using hypergeometric series by following the presentation in Concrete Mathematics.
LHS
We start with the LHS of (1) and calculate the quotient $\frac{t_{k+1}}{t_{k}}$ of consecutive terms of the sum. We obtain
\begin{align*}
t_k&=\binom{m+r}{m-n-k}\binom{n+k}{n}x^{m-n-k}y^k\\
&=\frac{(m+r)!(n+k)!}{(m-n-k)!(r+n+k)!n!k!}x^{m-n-k}y^k\\
\\
\frac{t_{k+1}}{t_k}&=\frac{(n+k+1)(m-n-k)}{(r+n+k+1)(k+1)}\,\frac{y}{x}\\
&=\frac{(k+\color{blue}{n+1})(k+\color{blue}{n-m})}{(k+\color{blue}{n+r+1})(k+1)}\left(\color{blue}{-\frac{y}{x}}\right)
\end{align*}
The first summand $k=0$ of the LHS of (1) is $\binom{m+r}{m-n}x^{m-n}$ and we conclude
\begin{align*}
\sum_{k\geq 0}&\binom{m+r}{m-n-k}\binom{n+k}{n}x^{m-n-k}y^k\\
&=\binom{m+r}{m-n}x^{m-n}\color{blue}{F\left(n+1,n-m;n+r+1;-\frac{y}{x}\right)}\tag{2.1}
\end{align*}
RHS
We proceed in the same way with the RHS and obtain
\begin{align*}
u_k&=\binom{-r}{m-n-k}\binom{n+k}{n}(-x)^{m-n-k}(x+y)^k\\
&=\frac{(-r)!(n+k)!}{(m-n-k)!(-r-m+n+k)!n!k!}\left(-x\right)^{m-n-k}(x+y)^k\\
\\
\frac{u_{k+1}}{u_k}&=\frac{(n+k+1)(m-n-k)(-1)}{(-r-m+n+k+1)(k+1)}\,\frac{x+y}{x}\\
&=\frac{(k+\color{blue}{n+1})(k+\color{blue}{n-m})}{(k+\color{blue}{n-m-r+1})(k+1)}\,\color{blue}{\frac{x+y}{x}}
\end{align*}
The first summand $k=0$ of the RHS of (1) is $\binom{-r}{m-n}(-x)^{m-n}$ and we conclude
\begin{align*}
\sum_{k\geq 0}&\binom{-r}{m-n-k}\binom{n+k}{n}(-x)^{m-n-k}(x+y)^{k}\\
&=\binom{-r}{m-n}(-x)^{m-n}\color{blue}{F\left(n+1,n-m;n-m-r+1;\frac{x+y}{x}\right)}\tag{2.2}
\end{align*}
Replacing $n$ with $q$ in (2.1) and (2.2) we obtain from (1)
\begin{align*}
&\color{blue}{F\left(q+1,q-m;q+r+1;-\frac{y}{x}\right)=\binom{-r}{m-q}\binom{m+r}{m-q}^{-1}(-1)^{m-q}}\\
&\qquad \color{blue}{\cdot F\left(q+1,q-m;q-m-r+1;\frac{x+y}{x}\right)}\tag{2.3}
\end{align*}
Substitutions
Finally we show the identity (2.3) can be written as
\begin{align*}
\color{blue}{F(a,-n;c;z)=\frac{(a-c)^{\underline n}}{(-c)^{\underline n}}F(a,-n;1-n+a-c;1-z)}\tag{3.1}
\end{align*}
Comparing the LHS of (2.3) and (3.1) we use the substitutions
\begin{align*}
a&:=q+1\\
n&:=-q+m\\
c&:=q+r+1\tag{3.2}\\
z&:=-\frac{y}{x}
\end{align*}
With these substitutions the LHS of (2.3) and (3.1) coincide. The upper parameters $q+1$ and $q-m$ are the same at both sides of (2.3). We now use the substitutions to derive the argument, the lower parameter and the factor in front of the hypergeometric series.
*
*From $z=-\frac{y}{x}$ we see
\begin{align*}
1-z=1+\frac{y}{x}=\frac{x+y}{x}
\end{align*}
and the arguments coincide.
*Looking at the lower parameter $n-m-r+1$ in (2.3) we obtain with (3.2)
\begin{align*}
q-m-r+1&=(-n)-r+1\\
&=(-n)-(c-q-1)+1\\
&=(-n)-(c-a)+1\\
&=1-n+a-c
\end{align*}
in accordance with the lower parameter of the RHS in (3.1).
*We obtain
\begin{align*}
\binom{-r}{m-q}&=\binom{a-c}{n}=(a-c)^{\underline{n}}\\
\binom{m+r}{m-q}(-1)^{m-q}&=\binom{-q-r-1}{m-q}(-1)^{m-q}=\binom{-c}{n}\\
&=(-c)^{\underline{n}}
\end{align*}
and the claim (3.1) follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4326420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the area of the shaded region in the triangle below? For reference: In figure $G$ is the centroid of the triangle $ABC$; if the area of the $FGC$ triangle is $9m^2$, the area of the FGB triangle is $16m^2$
Calculate the area of the shaded region. (Answer:$7m^2$)
If possible by geometry
My progress:
$S_{FGC} = \frac{b\cdot h_1}{2} = \frac{FG\cdot h_1}{2}\implies FG = \frac{18}{h_1}\\
S_{FGB}=\frac{b\cdot h_2}{2} = \frac{FG\cdot h_2}{2} \implies FG = \frac{32}{h_2}\\
\therefore \frac{18}{h_1} = \frac{32}{h_2}\implies \frac{h_1}{h_2} = \frac{18}{32}=\frac{9}{16}\\
S_{ABG} = S_{BCG} = S_{ACG}$
...??? I'm not able to develop this
| With the usual conventions:
\begin{aligned}
\overrightarrow{GF}\times\overrightarrow{GB} &= (0,0,32)\\
\overrightarrow{GF}\times\overrightarrow{GC} &= (0,0,-18)\\
\overrightarrow{GA}+\overrightarrow{GB}+\overrightarrow{GC} &= \overrightarrow{0}
\end{aligned}
The area you search:
\begin{multline*}
\frac{1}{2}\big\lvert\overrightarrow{GF}\times\overrightarrow{GA}\big\rvert=\frac{1}{2}\Big\lvert\overrightarrow{GF}\times\big(-\overrightarrow{GB}-\overrightarrow{GC}\big)\Big\rvert
=\frac{1}{2}\big\lvert-\overrightarrow{GF}\times\overrightarrow{GB}-\overrightarrow{GF}\times\overrightarrow{GC}\big\rvert=\frac{1}{2}\big\lvert(0,0,-14)\big\rvert = 7
\end{multline*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4326621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
how to evaluate a limit similar to Euler identity? I have a limit as $x$ approaches infinity of $(1+4/(7x))^x$. Solutions I saw included transforming it into $e$ to the power of natural $\log$ etc. but I've seen a way simpler transformation but can't remember it. Any help appreciated.
| If you're familiar with L'Hospital:
$$\lim_{x\to \infty}\ln\left(\left(1+\frac{4}{7x}\right)^x\right)=\lim_{x\to \infty}x\ln\left(1+\frac{4}{7x}\right)=\lim_{x\to \infty}\frac{\ln\left(1+\frac{4}{7x}\right)}{1/x}=\lim_{x\to \infty}\frac{-4/(x(7x+4))}{-1/x^2}=\lim_{x\to \infty}\frac{4x}{7x+4}=\frac{4}{7} $$
This implies: $$\lim_{x\to \infty}\left(1+\frac{4}{7x}\right)^x=e^{\lim_{x\to \infty}\ln\left(\left(1+\frac{4}{7x}\right)^x\right)}=e^{4/7}$$
This doesn't use the property that $\lim_{x\to \infty}(1+t/x)^x=e^t$.
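A quick numerical check of the limit (illustrative only, with function names of my choosing):

```python
import math

def seq(x):
    """The expression (1 + 4/(7x))^x."""
    return (1 + 4 / (7 * x)) ** x

# the values approach e^{4/7} as x grows
target = math.exp(4 / 7)
for x in (1e2, 1e4, 1e6):
    print(x, seq(x), abs(seq(x) - target))
```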
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4326820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Do you know the website of the journal Ars Combinatoria?
*
*Do you know the website of the journal Ars Combinatoria? Is this journal still publishing articles? I have a paper accepted in 2018 by this journal, but now I cannot get any news and message from this journal and its editors. This journal and its editorial board disappeared for a long time. Several colleagues have been experiencing the same situation as me.
*Do you agree that I should resubmit this manuscript elsewhere for publication?
Related links
*
*https://www.researchgate.net/publication/328891537
*https://www.researchgate.net/publication/281146616
*https://www.researchgate.net/post/Do_you_know_the_web_sites_of_the_journals_Ars_Combinatoria_and_Utilitas_Mathematica_Are_these_two_journals_ceased
*https://www.linkedin.com/feed/update/urn:li:activity:6873570054856945664
*https://qifeng618.wordpress.com/2018/11/13/some-papers-published-by-feng-qi-in-ars-combinatoria/
| I also submitted a paper to this journal in 2018. It got published in 2020 (volume 151) and I never got any notification of that. I learned of it from my Google Scholar account last year.
This is the website of the journal that I found
http://www.combinatoire.ca/ArsCombinatoria/index.html
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4327006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove $5^n > n^5 $ for $n≥6$ How can I prove
$5^n > n^5 $ for $n≥ 6 $
using mathematical induction ?
So my solution is
Step 1: For $n = 6$
$5^6 = 15625$, $6^5 = 7776$
then $5^n > n^5 $ is true for $n = 6 $
Step 2: Assume it's true for $n = k$, $5^k > k^5 $
Step 3: Prove it's true for $n = k+1$
$5^{k+1} > (k+1)^5 $
Here is where things get messy, and I don't know how to prove it for $n = k+1$.
| $$P(n): 5^n > n^5 \text{ : } \forall n \ge6$$
*
*Base Case: $5^6 >6^5$ is True
*Inductive Case:$P(k)\implies P(k+1)$
The induction step, proves that if the statement holds for any given case $n = k$, then it must also hold for the next case $n = k + 1$. These two steps establish that the statement holds for every natural number $n$. The base case does not necessarily begin with $n = 0$, but often with $n = 1$, and possibly with any fixed natural number $n = N$(here, $N = 6$), establishing the truth of the statement for all natural numbers $n\ge N.$
$$\color{blue}{P(k): 5^k > k^5 \implies 5^{k+1} = 5\cdot 5^k > 5k^5} $$
So to prove $P(k+1)$ it suffices to show that $(k+1)^5 < 5k^5$ for $k\ge 6$: then $(k+1)^5 < 5k^5 < 5\cdot 5^k = 5^{k+1}$.
By the binomial theorem,
$$(k+1)^5 = k^5 + 5k^4 + 10k^3 + 10k^2 + 5k + 1$$
For $k\ge 6$ each lower-order term is small compared with $k^5$:
$$5k^4 \le \frac{5}{6}k^5,\qquad 10k^3 \le \frac{10}{36}k^5,\qquad 10k^2 \le \frac{10}{216}k^5,\qquad 5k+1 < k^5$$
Adding these bounds,
$$\begin{align*}
(k+1)^5
& = k^5 + 5k^4 + 10k^3 + 10k^2 + 5k + 1\\
& < k^5 + \left(\frac{5}{6}+\frac{10}{36}+\frac{10}{216}+1\right)k^5 < 5k^5 < 5^{k+1},
\text{ thus } P(k + 1) \text{ is True.}
\end{align*}$$
Finally, $P(6)$ is true and $P(k)\implies P(k+1)$ for every $k\ge 6$, so by induction $P(n)$ holds for all $n\ge 6$.
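As a sanity check, both the statement and the key inequality behind the inductive step can be verified by brute force over a range of $n$; a small sketch:

```python
def holds(n):
    """The statement P(n): 5^n > n^5."""
    return 5**n > n**5

# P(n) fails (or is only an equality) below the base case ...
assert not holds(5)                     # 5^5 = 5^5, not strictly greater
# ... and holds from n = 6 onwards, as far as we care to check
assert all(holds(n) for n in range(6, 200))

# the inductive estimate (k+1)^5 < 5k^5, valid for k >= 6
assert all((k + 1)**5 < 5 * k**5 for k in range(6, 200))
print("all checks passed")
```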
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4327161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving $End_A(V)$ is a local ring whose Jacobson radical is a nil ideal Consider the following question asked in my quiz on Algebraic Geometry.
Let $A$ be a ring and $V$ an indecomposable $A$-module which is artinian as well as noetherian. Then $End_A V$ is a local ring whose Jacobson radical is a nil ideal.
Attempt: Let $f \in End_A V$. Then there exist $m, m' \in \mathbb{N}$ such that $Ker f^n = Ker f^m$ for all $n \geq m$ and $Img f^n = Img f^{m'}$ for all $n \geq m'$. I can write $V = Ker f^m \oplus Img f^{m'}$ (Fitting's lemma). Since $V$ is indecomposable, either $Ker f^m = 0$ (1) or $Img f^{m'} = 0$ (2).
If (1) holds then $f^m$ (and hence $f$) is injective. If (2) holds then $f$ is nilpotent.
But still I am not sure how should I proceed to prove what is asked.
Can you please help with outlines?
| An injective endomorphism on an Artinian module is an isomorphism. So you have shown that all elements are nilpotent or units.
If $x$ is nilpotent, then $1-x$ is a unit
By a well-known characterization of local rings this would mean that the ring is local. Needless to say its maximal ideal is the set of all nilpotent elements, and hence it is nil.
(Actually I found another old post which covers the two points above too: here).
Alternatively, there is an even more sledgehammer-style approach, using an interesting theorem that endomorphism rings of modules of finite length are semiprimary. For noncommutative rings in general the ring may not be local, because the indecomposability hypothesis has not been used yet. To show it is local, we have to explain why the semisimple ring $R/J(R)$ has no nontrivial idempotents, and therefore is a division ring.
If $R/J(R)$ had a nontrivial idempotent, it would lift to a nontrivial idempotent of $R$. But a nontrivial idempotent of $R=End(V_A)$ by definition decomposes $V$ into two nonzero halves. Since this has been ruled out, $R/J(R)$ is a division ring, hence $R$ is a local semiprimary ring.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4327383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find a point $P_2$ on an ellipse, whose chord with $P_1$ is a max distance $d$ from its nearest side I'm not sure if this solution is available in closed form, but after drawing it out I do think there will be two unique solutions always. I unfortunately have no clue where to start.
Given:
*
*An ellipse with x radius $a$, y radius $b$ and centre point $(0, 0)$
*A point on that ellipse $P_1$
*A distance $d$
Find all points $P_{2j}$ (i.e. find their coordinates) on the ellipse which satisfy the following condition (where $C$ is the chord joining $P_1$ and $P_{2j}$):
*
*The distance between $C$ and a line which is tangent to the ellipse and parallel to $C$ (on the side of the smaller segment formed by $C$ i.e. the "nearer" side) is equal to $d$
In other words, the maximum distance between $C$, and the nearer side of the ellipse from $C$, must equal $d$.
The diagram I've drawn below is an example of this. $P_{21}$ and $P_{22}$ are the desired points, whereas $P_{23}$ is an example of an invalid point. (The distances are not fully accurate)
The reason I believe the solution is closed is that, as you sweep the point $P_{2j}$ around the ellipse, the distance (required to be $d$) increases, reaches a turning point, and then decreases.
| The parametric equation of the ellipse $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1 $ is
$ r = (x, y) = ( a \cos t , b \sin t ) \hspace{20pt} t \in \mathbb{R} $
Now $P_1 = (x_1, y_1) = (a \cos t_1, b \sin t_1 ) $
Suppose $Q = (a \cos s, b \sin s)$ is the point where the tangent is to be drawn.
The unit normal vector at $Q$ is
$n = \dfrac{(b \cos s , a \sin s )}{\sqrt{b^2 \cos^2 s + a^2 \sin^2 s}} $
The distance between the tangent line and the line $P_1 P_2 $ that is parallel to it is
$(P_1 Q) \cdot n = \dfrac{ ab( (\cos s - \cos t_1 ) \cos s + (\sin s - \sin t_1 ) \sin s ) }{\sqrt{b^2 \cos^2 s + a^2 \sin^2 s}} = d $
Simplifying,
$(P_1 Q) \cdot n = \dfrac{ ab( 1 - ( \cos t_1 \cos s + \sin t_1 \sin s )) }{\sqrt{b^2 \cos^2 s + a^2 \sin^2 s}} = d $
Therefore,
$ d {\sqrt{b^2 \cos^2 s + a^2 \sin^2 s}} = a b( 1- ( \cos t_1 \cos s + \sin t_1 \sin s ) ) $
Squaring,
$ d^2 (b^2 \cos^2 s + a^2 \sin^2 s ) = a^2 b^2 ( 1 + \cos^2 t_1 \cos^2 s + \sin^2 t_1 \sin^2 s + \dfrac{1}{2} \sin 2 t_1 \sin 2 s - 2 ( \cos t_1 \cos s + \sin t_1 \sin s ) ) $
After using the identities $\cos^2 s = \frac{1}{2} (1 + \cos 2 s ) $ and $\sin^2 s = \frac{1}{2} (1 - \cos 2 s)$, the last equation becomes of the form
$A \cos s + B \sin s + C \cos 2 s + D \sin 2 s + E = 0 $
where
$A = 2 a^2 b^2 \cos t_1 $
$B = 2 a^2 b^2 \sin t_1$
$C = -\dfrac{1}{2} \left((a b) ^ 2 \cos(2 t_1) - d ^ 2 (b ^ 2 - a ^ 2) \right) $
$D = -\dfrac{1}{2} a^2 b^2 \sin(2 t_1) $
$E = \dfrac{1}{2} d ^ 2 (a ^ 2 + b ^ 2) - \dfrac{3}{2} a ^ 2 b ^ 2$
Which can be solved for $s$, using the substitution $z = \tan \dfrac{s}{2} $ that results in a quartic polynomial equation in $z$.
Once $s$ is found (there will be two solutions), the equation of the line $P_1 P_2 $ is
$ n \cdot ((x, y) - P_1) = 0 $
where $n = ( b \cos s, a \sin s ) $
This line we need to intersect with the ellipse to find $P_2$. This can be done as follows. Since $(x, y)$ is on the ellipse then $P_2 =(x_2, y_2) = (a \cos t_2 , b \sin t_2)$. Substitute this into the equation of the line, you get,
$ (b \cos s , a \sin s ) \cdot ( a (\cos t_2 - \cos t_1) , b (\sin t_2 - \sin t_1) ) = 0 $
Dividing through by $ ab $,
$ \cos s (\cos t_2 - \cos t_1) + \sin s (\sin t_2 - \sin t_1 ) = 0 $
In which $t_1$ and $s$ are known, and the unknown is $t_2$. This equation is of the form
$ \cos(t_2 - s) = \cos(t_1 - s) $
Since $t_2 \ne t_1$ then
$ t_2 - s = - (t_1 - s) $
from which,
$ t_2 = 2 s - t_1 $
and $P_2 = (a \cos t_2, b \sin t_2 ) $
As a numerical example, I've taken the ellipse with $a = 10, b = 5, t_1 = \dfrac{\pi}{3}$ and the distance $d = 3 $. The figure below shows the two solutions.
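The numerical example can be reproduced without forming the quartic explicitly, by locating the roots of the distance equation with a scan-and-bisect search. This sketch uses only the distance formula derived above, with the stated data ($a=10$, $b=5$, $t_1=\pi/3$, $d=3$); all helper names are mine:

```python
import math

# Data from the numerical example: ellipse x^2/a^2 + y^2/b^2 = 1,
# P1 at parameter t1, required chord-to-tangent distance d.
a, b, t1, d = 10.0, 5.0, math.pi / 3, 3.0

def dist(s):
    """Distance between the tangent at parameter s and the parallel chord
    through P1, from the formula derived above."""
    num = a * b * (1 - math.cos(t1 - s))
    den = math.sqrt((b * math.cos(s)) ** 2 + (a * math.sin(s)) ** 2)
    return num / den

def g(s):
    return dist(s) - d

# Scan one full period for sign changes of g, then refine each by bisection.
roots = []
N = 4000
lo = t1 + 1e-9
for i in range(N):
    s0 = lo + 2 * math.pi * i / N
    s1 = lo + 2 * math.pi * (i + 1) / N
    if g(s0) * g(s1) < 0:
        for _ in range(80):
            mid = 0.5 * (s0 + s1)
            if g(s0) * g(mid) <= 0:
                s1 = mid
            else:
                s0 = mid
        roots.append(0.5 * (s0 + s1))

# t2 = 2s - t1 gives the second chord endpoint for each tangency parameter s.
points = [(a * math.cos(2 * s - t1), b * math.sin(2 * s - t1)) for s in roots]
print(len(roots), points)
```

As expected from the figure, exactly two tangency parameters (and hence two points $P_2$) are found.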
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4327580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Let $x+y+z = 5$ where $x,y,z \in \mathbb{R}$, prove that $x^2+y^2+z^2\ge \frac{5}{3}$ My thinking:
Since $x+y+z = 5$, we can say that $x+y+z \ge 5$.
By basic fact: $x^2,y^2,z^2\ge 0$
If $x+y+z \ge 5$, then $\frac{\left(x+y+z\right)}{3}\ge \frac{5}{3}$
If $\frac{\left(x+y+z\right)}{3}\ge \frac{5}{3}$ and $x^2,y^2,z^2\ge 0$, hence $\frac{x^2+y^2+z^2}{3}\ge \frac{5}{3}$...???
I'm not sure about this, can someone help?
| $(x-1)^2+(y-1)^2+(z-1)^2\ge 0$. Hence $x^2+y^2+z^2-2(x+y+z)+3\ge 0$, so $x^2+y^2+z^2\ge 10-3=7>5/3$. Moreover $(x^2+y^2+z^2)/3\ge 7/3>5/3$
Your proof is not justified: the last "hence" statement does not follow from the previous statements.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4327738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Surjection between two finite sets of different sizes Suppose I have two sets $A$ and $B$. $A = \{1, 2\}$ and $B = \{2, 4, 6, 8\}$. I am trying to prove that there is no surjection from $A$ to $B$ by finding at least one $y$ value from $B$ such that for every element $x\in A$, $f(x)\ne y$. I am not entirely sure how to prove this and kind of stuck.
| A fully rigorous argument here can be extremely hard. To give a mostly rigorous argument: Suppose for contradiction that $f:A\to B$ is a surjection. Then $f(A)⊆ B$ and $f(A)=\{f(1),f(2)\}$ so that there are only two elements in $f(A)$. Since there are four elements in $B$ then there must be some element in $B\smallsetminus f(A)$ (the "relative complement"). If we call this element $y$ then there is no $x$ such that $f(x)=y$.
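Since $A$ and $B$ are tiny, one can also confirm the conclusion by exhaustively listing all functions from $A$ to $B$ — an illustrative check, not part of the proof:

```python
from itertools import product

A = [1, 2]
B = [2, 4, 6, 8]

# A function f : A -> B is just a choice of an image for each element of A,
# so there are |B|^|A| = 4^2 = 16 of them in total.
functions = list(product(B, repeat=len(A)))
assert len(functions) == len(B) ** len(A)

# The image of any such f has at most |A| = 2 elements, so it can never
# equal the 4-element set B.
surjections = [f for f in functions if set(f) == set(B)]
print(len(surjections))   # → 0
```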
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4327899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the requirement for a list of vectors to be linearly independent
Suppose there is a list $((r,1,1),(0,s,1),(1,1,t))$ with $r,s,t\in \mathbb Q$, and find the requirement needed for this list to be linearly independent in $\mathbb R^3$.
I find that this question is not very approachable to me. I know the requirement for a list being linearly independent should be: $a(r,1,1)+b(0,s,1)+c(1,1,t)=0 \iff a=b=c=0$
Then
$$\begin{cases}
ar+c=0 \\
a+bs+c=0 \\
a+b+ct=0
\end{cases}$$
Then the aim becomes if these three equations are satisfied, then it must be a=b=c=0. Then I think all I need to do is to find the proper value of r,s, and t such that they will force a=b=c=0.
However,(if my idea is correct) I don't know how to find such r,s, and t. The way I tried is to use the equivalent negation, i.e. if $a\neq0$ or $b\neq0$ or $c\neq0$. Then one of these equations can't be zero if r,s,t are rational numbers, but I failed. Any suggestions?
You have the right idea looking for negation, just have some details off - we have the system of equations associated with the scaled sum of the vectors equaling zero, given by $$ \begin{matrix}ar + c = 0 \\ a + bs + c = 0 \\ a + b + ct = 0. \end{matrix}$$ From here, notice that $$ ar + c = 0 \Rightarrow c = -ar$$ which tells us something important - if $a = 0$, then $c = 0$ - however, note that this would imply that $b$ would have to be zero and as such the vectors would be linearly independent! As such, continuing from here we can assume that $a \not= 0$. Now, let's handle the cases where either $r = 0$ or $r \not= 0$ separately :
First, the case where $r = 0$. Then $c = 0$, so we get from the third equation $$ a + b = 0 \Rightarrow a = -b $$ and plugging that into the second equation gives us $$ a + bs = 0 \Rightarrow -b + bs = 0 \Rightarrow b(s - 1) = 0 $$ which means that a nontrivial solution with $b \neq 0$ forces $s = 1$ (and if $b = 0$ then $a = -b = 0$ and $c = 0$, so the solution is trivial). Hence if $r = 0$ the set of vectors given is linearly dependent if and only if $s = 1$ — equivalently, linearly independent if and only if $s \neq 1$.
Now, we handle the case where $r \not= 0$. In this case, note that we have $$c = -ar \Rightarrow a = -c(\frac{1}{r})$$ which we weren't able to derive before, since $\frac{1}{r}$ is undefined for $r = 0$. Now, plugging this into the third equation gives us $$ a + b + ct = 0 \Rightarrow -c(\frac{1}{r}) + b + ct = 0 \Rightarrow b + c(t - \frac{1}{r}) = 0 \Rightarrow b = c(\frac{1}{r} - t) $$ and a substitution of both the identity for $a$ and $b$ into the second equation gives $$ a + bs + c = 0 \Rightarrow c(-\frac{1}{r}) + c(\frac{s}{r} - st) + c = 0 \Rightarrow c(1 + \frac{s - 1}{r} - st) = 0 \Rightarrow c(r + s - 1 - str) = 0$$ with the final implication being particularly useful in this case since $r \not= 0$. Now, since we know that if $c = 0$ then $a = 0$ (because $r \not= 0$), the existence of a nontrivial solution is equivalent to the statement that $$ r + s - 1 - str = 0. $$ From here, notice something interesting - if $r = 0$ then $$r + s - 1 - str = 0 \Rightarrow s - 1 = 0 \Rightarrow s = 1$$ which means that we can summarize the entire discussion up to this point (including the case where $r = 0$) by the following: the vectors $\lbrace (r, 1, 1), (0, s, 1), (1, 1, t) \rbrace$ are linearly independent if and only if $r + s - 1 - str \not= 0$.
While this post (and problem) was very computation heavy, this problem comes with a lesson for other problems like it - from a setup, you will be able to extract givens from other assumptions (in this case, it was that $c = -ar$ under the assumption that the sums of the scaled vectors equal $0$). From these givens, think extremely deeply about what such a given would imply (in this case it was that if $a = 0$ then $a = b = c = 0$), and see how those findings interact with other assumptions.
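The final condition agrees with the familiar determinant test: three vectors in $\mathbb R^3$ are independent iff the determinant of the matrix having them as columns is nonzero, and for these vectors that determinant works out to exactly $-(r+s-1-str)$. A quick numeric check (all names are mine):

```python
import random

def det3(u, v, w):
    """Determinant of the 3x3 matrix whose columns are u, v, w."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - v[0] * (u[1] * w[2] - u[2] * w[1])
            + w[0] * (u[1] * v[2] - u[2] * v[1]))

random.seed(0)
for _ in range(1000):
    r, s, t = (random.uniform(-5, 5) for _ in range(3))
    det = det3((r, 1, 1), (0, s, 1), (1, 1, t))
    # the determinant equals -(r + s - 1 - s*t*r), so it vanishes
    # exactly when the independence condition fails
    assert abs(det + (r + s - 1 - s * t * r)) < 1e-9

# a degenerate instance: r = 0, s = 1 makes the first two vectors equal
assert det3((0, 1, 1), (0, 1, 1), (1, 1, 7.0)) == 0
print("determinant identity verified")
```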
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4328108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
what does "heuristically" means? What does it mean when an author illustrates a result or proof of some result "heuristically" ? And why is that useful ?
| Here is a very common example: We define the Riemann integral as sums of rectangles approximating the area under the curve, and then take the limit as the "width" of the rectangles go towards zero. This, along with the picture, is a more or less heuristic definition of the Riemann integral, in that the formalism is not in focus, but we concentrate on visualization and understanding.
Picture is from https://isquared.digital/blog/2020-05-27-riemann-integration/
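The heuristic picture translates directly into a few lines of code; a sketch (illustrative, not from the post) approximating $\int_0^1 x^2\,dx = \frac13$ with left-endpoint rectangles of shrinking width:

```python
def riemann(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] with n equal-width rectangles."""
    width = (b - a) / n
    return sum(f(a + i * width) for i in range(n)) * width

# Approximating the area under y = x^2 on [0, 1]; the exact value is 1/3.
for n in (10, 100, 1000):
    print(n, riemann(lambda x: x * x, 0.0, 1.0, n))
```

As the rectangle width goes to zero, the sums converge to the exact area — the formal limit that the heuristic picture is gesturing at.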
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4328293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
if $G$ only has one subgroup of order $p$ and one subgroup of order $q$, then $G$ is a cyclic group I'm having some trouble proving the following:
Let $G$ be a group with $|G|=pq$ where $p$ and $q$ are two distinct prime numbers. Show that if $G$ only has one subgroup of order $p$ and one subgroup of order $q$, then $G$ is a cyclic group.
To prove so use the following:
If $G$ is a group and if $H_1,H_2 \unlhd G$ such that $H_1 \cap H_2=\{1\}$ and $H_1H_2=G$, then:$$G\simeq H_1 \times H_2$$
If $P\leq G$ and $Q\leq G$ are groups such that $|P|=p$ and $|Q|=q$ then $$G\simeq P\times Q$$
proves that $G$ is cyclic. To prove this we just need to show that:
*
*$P,Q \unlhd G$
*$P \cap Q = \{1\}$
*$PQ=G$
But I'm not being able to prove any of these. How can this be done?
| Since conjugation by an element of $G$ is an automorphism of $G$, we have $\lvert gPg^{-1}\rvert=\lvert P\rvert$ and $\lvert gQg^{-1}\rvert=\lvert Q\rvert$ for all $g\in G$, so that $gPg^{-1}=P$ and $gQg^{-1}=Q$ for all $g\in G$ by uniqueness of orders. Hence $P,Q\unlhd G$.
It is well-known that $P\cap Q\le P$ and $P\cap Q\le Q$, so, by Lagrange's Theorem, the coprimality of $p$ and $q$ implies that $|P\cap Q|=1$; therefore, $P\cap Q=\{1\}$.
Since $p$ and $q$ are coprime, it is clear that $\lvert PQ\rvert=pq=|G|$; thus $PQ=G$ (as $PQ\subseteq G$).
To show $G$ is cyclic, observe that $P\cong\Bbb Z_p$ and $Q\cong\Bbb Z_q$. Then it is routine to show that
$$\begin{align}
\Bbb Z_{pq}&\cong \Bbb Z_p\times \Bbb Z_q\\
&\cong P\times Q\\
&\cong G
\end{align}$$
by the Chinese remainder theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4328529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Construct focii of ellipse given center and four tangent lines We are given 4 distinct lines $a,b,c$ and $d$ which are said to be tangents to an ellipse. Let's consider that the 4 meetings of these lines form an convex quadrilateral $ABCD.$
There is a theorem which guarantees that the center $E$ of an ellipse inscribed in $ABCD$ must lie on the line joining the midpoints of $AC$ and $BD$ (the diagonals of $ABCD$).
I've been wondering how can we construct such an ellipse (namely the focii or the axis of the ellipse) with the four lines and its center $E.$
EDIT: in case people are wondering: yes, any convex quadrilateral admits at least one (and in general infinitely many) inscribed ellipse
EDIT2: I forgot to mention that I'm looking for an straightedge and compass construction. Sorry.
| The equation of an ellipse in the plane is given by
$ (r - E)^T Q (r - E) = 1 \hspace{20pt} (1)$
where $E$ is the center, and $Q$ is a $2 \times 2$ symmetric matrix.
At any point on the ellipse the gradient vector which is the normal (perpendicular) vector to the ellipse curve is given by
$ g = 2 Q (r - E) \hspace{20pt} (2) $
If the tangency point on $AB$ is $r_1$ then the gradient vector at $r_1$ will be parallel to the normal to the line segment $AB$, that is,
$ Q (r_1 - E) = \alpha n_1 \hspace{20pt} (3)$
where $n_1$ is known (it is the normal vector to $AB$). From this it follows that
$ r_1 - E = \alpha Q^{-1} n_1 \hspace{20pt} (4)$
Since $r_1$ is on the ellipse we can plug this expression into the equation of the ellipse, to deduce that
$\alpha = \dfrac{1}{\sqrt{n_1^T Q^{-1} n_1 } } \hspace{20pt} (5)$
On the other hand, premultiplying $(4)$ by $n_1^T$ and using $(5)$
$n_1^T (r_1 - E) = \sqrt{n_1^T Q^{-1} n_1 } $
The equation of the line segment $AB$ is $n_1^T (r - A) = 0 $, and we have
$n_1^T (r_1 - E) = n_1^T (r_1 - A + A - E) = n_1^T (A - E )$
because $n_1^T (r_1 - A) = 0$. Thus we have
$n_1^T (A - E) = \sqrt{ n_1^T Q^{-1} n_1 } \hspace{20pt} (6.1)$
Similar equations can be written for the other three line segments $BC, CD, DA$,
whose normals are $n_2, n_3, n_4$:
$n_2^T (B - E) = \sqrt{ n_2^T Q^{-1} n_2 } \hspace{20pt}(6.2) $
$n_3^T (C - E) = \sqrt{n_3^T Q^{-1} n_3 } \hspace{20pt} (6.3) $
$n_4^T (D - E) = \sqrt{n_4 ^T Q^{-1} n_4 } \hspace{20pt} (6.4)$
Only $3$ equations out of $4$ are needed to find the matrix $Q^{-1}$. Using the theorem mentioned in the question, a matrix $Q$ satisfying any three of the equations $(6.1) - (6.4)$ will satisfy the fourth equation if the center $E$ is on the line segment between the midpoint of $AC$ and the midpoint of $BD$. ( I don't have a proof of that but I verified it numerically).
Solving any three of equations $(6.1) - (6.4)$ can be done quite easily. Let matrix
$M = Q^{-1} = \begin{bmatrix} M_{11} && M_{12} \\ M_{12} && M_{22} \end{bmatrix} $
Then from $(6.1)$,
$n_1^T Q^{-1} n_1 = \left( n_1^T (A - E) \right)^2 \hspace{20pt} (7) $
The right hand side of $(7)$ is known, while the left hand side is linear in the entries of $Q^{-1} $, namely,
$n_1^T Q^{-1} n_1 = M_{11} n_{1x}^2 + M_{22} n_{1y}^2 + 2 M_{12} n_{1x} n_{1y} \hspace{20pt} (8) $
Two more equations can be written, creating a $3 \times 3$ linear system in the unknowns $M_{11}, M_{22}, M_{12} $, which can then be solved.
Once $Q^{-1}$ is found, the ellipse specification is complete. Finding all the parameters of the ellipse follows directly by diagonalizing matrix $Q$ as follows:
$Q = R D R^T \hspace{20pt} (9)$
It can be assumed that $D_{11} \le D_{22}$. The semi-major axis $a = \dfrac{1}{\sqrt{D_{11}}}$ and the semi-minor axis $b = \dfrac{1}{\sqrt{D_{22}}} $
The focii are along the major axis, and are given by
$F_1 = E + a e R_1 $ and $F_2 = E - a e R_1 \hspace{20pt} (10) $
where $R_1$ is the first column vector of the matrix $R$ specified in $(9)$, and $e$ is the eccentricity,
$e = \sqrt{ 1 - \left( \dfrac{b}{a} \right)^2 }$
The figure below shows an example with the above method.
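To illustrate, here is a short numerical sketch of the method (a sketch only, with assumed test data: the tangent lines of the known ellipse $x^2/4+y^2=1$, so the recovered axes and foci can be checked). Since $n^T(r-E)$ is constant along a line, any point on each tangent line can play the role of the vertex in equations $(6.1)-(6.4)$:

```python
import numpy as np

E = np.zeros(2)                                  # known center of the ellipse
rows, rhs = [], []
for t in np.radians([10.0, 100.0, 200.0]):       # three mutually non-parallel tangents
    n = np.array([np.cos(t) / 2, np.sin(t)])     # normal of the tangent line
    n = n / np.linalg.norm(n)
    A = np.array([2 * np.cos(t), np.sin(t)])     # a point on that tangent line
    rows.append([n[0]**2, n[1]**2, 2 * n[0] * n[1]])   # left side, eq. (8)
    rhs.append(float(n @ (A - E))**2)                  # right side, eq. (7)

M11, M22, M12 = np.linalg.solve(np.array(rows), np.array(rhs))
M = np.array([[M11, M12], [M12, M22]])           # M = Q^{-1}

evals, evecs = np.linalg.eigh(M)                 # eigenvalues in ascending order
b_semi, a_semi = np.sqrt(evals)                  # semi-minor, semi-major axes
e = np.sqrt(1 - (b_semi / a_semi)**2)            # eccentricity
F1 = E + a_semi * e * evecs[:, 1]                # foci, eq. (10)
F2 = E - a_semi * e * evecs[:, 1]
print(a_semi, b_semi, F1, F2)                    # recovers a=2, b=1, foci (±sqrt(3), 0)
```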
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4328760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Proving that a sequence of reals, $(x_n)$, converges to $x$ iff $\lim_{n \to \infty} d(x_n, x) = 0$ In this situation, $d(x, y) = |x - y|$, the standard metric.
I think this is a very basic proof, but I want to make sure I am being thorough enough.
Forward direction: $(x_n)$ converging to x means, by definition, that for any $\epsilon > 0$ there exists $N > 0$ such that whenever $n \ge N$, $|x_n - x| \lt \epsilon$. This result can also be written as $d(x_n, x) \lt \epsilon$, or $d(x_n, x) - 0 \lt \epsilon$. Therefore $\lim_{n \to \infty} d(x_n, x) = 0$.
Backward direction: $\lim_{n \to \infty} d(x_n, x) = 0$ means that for any $\epsilon > 0$ there exists $N > 0$ such that $n \ge N$ implies $d(x_n, x) - 0 = d(x_n, x) \lt \epsilon$. So, $|x_n - x| \lt \epsilon$. So, $(x_n)$ converges to $x$.
Is it this simple? Or am I missing any steps? Thank you so much for any help. I am still relatively new to mathematical proof writing.
Note - this is Lemma 1.1.1 (and exercise 1.1.1) in Tao's Analysis 2, and I just want to make sure I am being precise enough in the answer.
| Your proof is fine. However, when the limit $L$ is not zero, make sure to keep the absolute value, as in
$$|d(x_n,x)- L| < \epsilon.$$
When $L=0$ (as in your case), the absolute value does not matter, since $d(x_n,x)\geq 0$.
Remark:
Your theorem is a clear fact when the metric is the usual one, namely
$$d(x, y) = |x-y|.$$
But when $d(x,y)$ is a general metric, it is an important fact: whenever we have a convergent sequence in a metric space (for instance, in the space of continuous functions on a closed interval with the sup metric, $(C[a,b],\ d(f,g)=\| f-g\|_\infty)$), the theorem says that the distance between the sequence and its limit according to $d$, namely $d(x_n,x)$, converges to zero. This is used in proving many results; see, for example, the proof of completeness of the Euclidean spaces $\mathbb{R}^n$ and $\mathbb{C}^n$.
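A small numeric illustration (an assumed example, not from the post): in $(C[0,0.9],\ d(f,g)=\|f-g\|_\infty)$, the sequence $f_n(x)=x^n$ converges to the zero function precisely because $d(f_n,0)=0.9^n\to 0$:

```python
# sup-distance from f_n(x) = x^n to the zero function on [0, 0.9]
xs = [i * 0.9 / 999 for i in range(1000)]          # grid on [0, 0.9]
dists = [max(x**n for x in xs) for n in (1, 5, 20, 50)]
print(dists)   # approximately 0.9, 0.9^5, 0.9^20, 0.9^50 -- shrinking to 0
```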
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4328934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An application of Baire Category Theorem I am trying to prove a proposition that $BV[a,b]\cap C[a,b]$ equipped with the $||\cdot||_\infty$ is a set of the first Baire category, which will tell us that $E=\{f:V(f)=\infty, f\in C[a,b]\}$ is a dense set of the second Baire category in $C[a,b]$.
My attempt: I define $F_n=\{f: V(f)\leq n, f\in C[a,b]\}$, then we know that $\cup_{n=1}^{\infty}F_n=BV[a,b]\cap C[a,b]$. I am trying to show that this is a set of the first Baire category, then we are done. In order to show that, we just need to prove the following:
1. $F_n$ is closed.
2. $F_n$ has no interior point for every n.
I have figured out the second claim by using sawtooth functions, but I have some problems when I try to prove the first claim. We suppose $f_n\rightarrow f$ uniformly, then by the definition and some easy calculation, we know that for every $\epsilon>0$, there exists an $m_0$ such that $V(f)\leq V(f_{m_0})+2n\epsilon$, where $n$ is the number of partition points (for a partition $a=x_0\leq x_1\leq \cdots\leq x_{n}=b$). So as $n$ grows larger and larger, we can't give an estimate for $V(f)$; this is why I get confused.
My questions: Is $F_n$ closed or not? If so, how do we prove it? If not, how do we prove the proposition in the first place? Any help will be truly appreciated.
| For the first question, I will prove that $F_1$ is closed. So the proof will work for any $F_n$.
Let $(f_n)_{n\in \mathbb{N}}$ be a convergent sequence in $F_1$, say $f_n\rightarrow f_0$ for some $f_0\in C[a,b]$. We wish to show that for any partition $P$ we have $V(f_0,P)\leq 1$. So let $P=(a=t_1,t_2,…,t_k=b)$ be a partition of $[a,b]$. Let $\epsilon _n =\dfrac{1}{n}$ for all $n$. Then for any natural number $n$, there exists a natural number $N_n$ such that $\sup_{x\in [a,b]} |f_{N_n}(x)-f_0(x) | < \dfrac{\epsilon _n}{2k}$. For all $n$, we have $V(f_0,P)=\displaystyle \sum _{i=1}^{k-1} |f_0(t_{i+1})-f_0(t_i)| \leq \displaystyle \sum _ {i=1}^{k-1} |f_0(t _{i+1})-f_{N_n}(t _{i+1} )|+|f_{N_n}(t_{i+1})-f_{N_n}(t_i)|+| f_{N_n}(t_i)-f_0(t_i)|$
$\leq 1+ 2k \dfrac{\epsilon _n}{2k}=1+\epsilon _n $. If we take the limit as $n\rightarrow \infty$, we get $V(f_0,P)\leq 1$. Hence, $f_0\in F_1$.
I assumed completeness of $C([a,b])$ is known.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4329155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Theorem 8.5 (The chain rule) from "An introduction to manifolds" by Tu - How should I understand the notation $(...)f$ where $f$ is a smooth function? Theorem 8.5
If $F : N \rightarrow M$ and $G : M \rightarrow P$ are smooth maps of manifolds and $p \in N$, then
$$(G \circ F)_{*, p} = G_{*, F(p)} \circ F_{*, p}$$
Proof
Let $X_p \in T_p N$ and $f$ be a smooth function at $G(F(p))$ in $P$. Then:
$$((G \circ F)_* X_p)f = X_p (f \circ G \circ F)$$
$$((G_* \circ F_*) X_p)f = (G_* (F_* X_p))f = (F_* X_p)(f \circ G) = X_p (f \circ G \circ F)$$
I just don't understand this:
As $X_p$ is a tangent vector, why do we write it before the functions (and not after)? What I mean is, should it rather be:
$$(f \circ G \circ F) X_p$$ for example? The same for our function $f$, shouldn't it be
$$f((G \circ F)_* X_p)$$ for example?
It seems like a simple proof, but I just can't understand the notation here, as it's strange to see the argument first and then the function.
| One way to define tangent vectors is to say that $T_p M$ is the set of all derivations at $p$ i.e. all linear maps $d: C^{\infty}(M) \longrightarrow \mathbb{R} $ that satisfy the Leibniz rule: $d(fg) = f(p)d(g) + g(p)d(f)$. This is also the definition which Tu is using in his book. So in this context $X_p f$ just means the real number you get when applying the derivation $X_p$ to the function $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4329238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Appropriate test for convergence of a series when numerator and denominator have same degree? Say we have a series like this:
$$
\sum^\infty_{n=1} \frac{16n^5+2n^2+4}{6n^5+10n+1}
$$
Somehow this confused me when I saw the same power in the numerator and denominator.
My thinking: for large enough $n$, we can assume that the series behaves like $\sum \frac{16n^5}{6n^5}=\sum \frac{8}{3}$, which is divergent. So I used the Limit Comparison Test with
$$
a_n = \frac{16n^5+2n^2+4}{6n^5+10n+1}
$$
And
$$
b_n = \frac{8}{3}
$$
To find
$$
\frac{a_n}{b_n}= \frac{16n^5+2n^2+4}{6n^5+10n+1} \times \frac{3}{8} = \frac{48n^5+6n^2+12}{48n^5+80n+8} = \frac{48+6 \frac{1}{n^3} + 12 \frac{1}{n^5}}{48+ 80 \frac{1}{n^4}+8 \frac{1}{n^5}} \to 1
$$, as $n \to \infty$.
So this means the initial series is divergent.
Is that the right approach in this case when numerator and denominator have the same degree or is there something more obvious/easier that I overlooked?
| We know that if a series is convergent, then its terms must converge to zero. So when the terms do not converge to zero, the series is divergent (this is the divergence test). Here, our terms obviously converge to $\frac{16}{6}=\frac{8}{3}\neq 0$. Hence, the series is divergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4329418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A special pair of sequences in $c_0$ This could be a rather trivial question but I've spent some time on it and I can't come up with anything:
Do there exist sequences of positive numbers, say $(\alpha_n),(\beta_n)$ in $c_0$, i.e. $\alpha_n\to0$ and $\beta_n\to0$ such that the following two conditions are satisfied:
*
*$\sum_{n=1}^\infty n\beta_n<\infty$
*there exists $\delta>0$ and a subsequence of indices $n_1<n_2<n_3<\dots$ such that $\beta_{n_k}\cdot\sum_{j=1}^{n_k}\alpha_j\ge\delta$, i.e. the sequence $\{\beta_n\sum_{j=1}^n\alpha_j\}_{n=1}^\infty$ does not converge to $0$.
All I have tried is to play around with some standard sequences, but once I get the one condition satisfied, the other one breaks down. I would appreciate any help!
A comment: in order for condition 2 to be satisfied, one needs to take $\alpha_n$ to be a sequence that is not in $\ell^1$. On the other hand, for condition (1) to be satisfied $(\beta_n)$ not only has to be in $\ell^1$, but it has to converge "fast enough".
| No, there do not exist such sequences. Since $\alpha_n=o(1)$, we must have $$\sum_{n=1}^N{\alpha_n}=\sum_{n=1}^N{o(1)}=o(N)$$
Thus $$\sum_{N=1}^{\infty}{\left(\beta_N\sum_{n=1}^N{\alpha_n}\right)}=\sum_{N=1}^{\infty}{o(N)\beta_N}\lesssim\sum_{N=1}^{\infty}{N\beta_N}<\infty$$ where ${\lesssim}$ indicates dropping a constant prefactor. Since $\sum_{N=1}^{\infty}{\left(\beta_N\sum_{n=1}^N{\alpha_n}\right)}$ is summable, its terms must tend to $0$, whence the claim.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4329571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What was my mistake solving $y''-2y'=-2x$? I somehow missed the negative numbers on the last two terms of this problem, and we were required to use variation of parameters.
Here is the work I did:
$y''-2y'=-2x$
First I solved the characteristic equation:
$\begin{align*}
r^2-2r&=0 \\
r(r-2)&=0 \\
r&=\{0,2\}
\end{align*}$
Therefore the complementary function $y_c$ is
$y_c=c_1+c_2e^{2x}$ for some arbitrary constants $c_1$ and $c_2$.
Then, to find out the particular function $y_p$, I solved the following system of equations for $v_1$ and $v_2$ given that $y_c=c_1y_1+c_2y_2$ and $ay''+by'+cy=G(x)$:
$\Bigg\{\begin{array}{c}
v_1'y_1+v_2'y_2=0 \\
v_1'y_1'+v_2'y_2'=\displaystyle\frac{G(x)}{a}
\end{array}$
$\Rightarrow\Bigg\{\begin{array}{c}
v_1'+v_2'e^{2x}=0 \\
2v_2'e^{2x}=-2x
\end{array}$
Using substitution to solve the system and subsequently integrating $v_1'$ and $v_2'$:
$v_2'=-xe^{-2x}$
$\begin{align*}
v_1'+(-xe^{-2x})e^{2x}&=0\\
v_1'-x&=0\\
v_1'&=x
\end{align*}$
$v_1=\int v_1' \,dx=\int x\,dx=\frac{1}{2}x^2$
$v_2=\int v_2'\,dx=\int -xe^{-2x}\,dx$
$\begin{array}{cc}
u=-x & dv = e^{-2x}\,dx\\
du=-dx & v=-\frac{1}{2}e^{-2x}
\end{array}$
$\begin{align*}
v_2&=\frac{1}{2}xe^{-2x}-\int\frac{1}{2}e^{-2x}\,dx\\
&=\frac{1}{2}xe^{-2x}+\frac{1}{4}e^{-2x}
\end{align*}$
Since $y_p=v_1y_1+v_2y_2$:
$\begin{align*}
y_p&=\frac{1}{2}x^2(1)+\left(\frac{1}{2}xe^{-2x}+\frac{1}{4}e^{-2x}\right)e^{2x}\\
&=\frac{1}{2}x^2+\frac{1}{2}x+\frac{1}{4}
\end{align*}$
The solution to the differential equation is the sum of $y_c$ and $y_p$: $\boxed{y=c_1+c_2e^{2x}+\frac{1}{2}x^2+\frac{1}{2}x+\frac{1}{4}}$
However, the correct answer that was given in the multiple choice answer choices was:
$y=c_1+c_2e^{2x}+\frac{1}{2}x^2+\frac{1}{4}(-2x-1).$
I'm not sure how I missed the negatives on the $x$ term and the final constant, since otherwise my answer was the same. What I'm equally as confused about is why the $-\frac{1}{4}$ doesn't "absorb" into $c_1$.
Thank you for your help!
| Note that the DE is:
$$(y'e^{-2x})'=-2xe^{-2x}$$
$$y'e^{-2x}=xe^{-2x}+\dfrac 12e^{-2x}+C$$
$$y'=x+\dfrac 12+Ce^{2x}$$
$$y(x)=\dfrac {x^2}2+\dfrac x2+C_1e^{2x}+C_2$$
The constant $\dfrac 14$ is absorbed by $C_2$.
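As a numerical spot check (finite differences; the constants $C_1, C_2$ below are arbitrary sample values), one can confirm that $y=C_1+C_2e^{2x}+\frac{x^2}{2}+\frac{x}{2}$ satisfies $y''-2y'=-2x$, which is why the extra $\frac14$ can simply be merged into the constant:

```python
import math

C1, C2, h = 3.0, -1.5, 1e-4           # arbitrary constants for the check
y = lambda x: C1 + C2 * math.exp(2 * x) + x * x / 2 + x / 2

ok = True
for x in (0.0, 0.7, 1.3):
    yp = (y(x + h) - y(x - h)) / (2 * h)            # central difference for y'
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # central difference for y''
    ok = ok and abs(ypp - 2 * yp + 2 * x) < 1e-3    # y'' - 2y' - (-2x) ≈ 0
print(ok)  # True
```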
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4329775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Second Order Differential Equations $ay''+by'+cy=0$, without complex numbers How to solve the following equation without using Second Order Differential Equations formulas or Power Series:
$$af^{''}(x)+bf^{'}(x)+cf(x)=0$$
where $b^2-4ac<0$.
I know that the solution is something like this:
$$f(x)=A\sin (qx)+B\cos (qx)$$
In any event, I would like to know how to solve this equation without that well-known formula that we find in any Differential Equations Course. I would like an elementary solution to this equation and without complex numbers.
For instance, if $b^2-4ac>0$ then we can solve it by using something like this: $$(e^{kx}g(x))^{'}=0.$$ Can't we do something similar as well?
The second question I want to ask is if I want to solve the equation with Power Series is it correct the following approach:
I prove by induction that $f$ is infinitely differentiable and after that I assume:
$$f(x)=\sum_{n=0}^\infty a_n x^n$$
I know that not every infinitely differentiable function can be written in this form, for example:
$e^{-\frac{1}{x^2}}$ (extended by $0$ at $x=0$).
So it should be wrong. Nonetheless, there are several people who provide solutions to this problem by using this fact, which to me seems blatantly erroneous.
| @Vercassivelaunos has explained how to reduce the problem to looking at the problem $g''+m^2 g=0$; by replacing $x$ by $mx$ we need only look at the problem $g''+g=0$.
Let us show that there is a unique solution, given that $g(0)=A, g'(0)=B$. Clearly $A\cos x+ B\sin x$ is a solution with these initial conditions, so look at $h(x):=g(x)-A\cos x- B\sin x$; this satisfies $h''+h=0$, $h(0)=0,h'(0)=0$. We want to show that $h(x)=0$ for all $x$.
Multiplying $h''+h=0$ by $h'$ we see that $h'' h'+h'h=0$ for all $x$. Note that $h''h'+h'h=(\frac12 (h')^2+\frac12 h^2)'$. Therefore, by the Mean Value Theorem,
$$
\frac12 h'(x)^2+\frac12 h(x)^2 \;=\; (x-0)\left.\left(\tfrac12 (h')^2+\tfrac12 h^2\right)'\right|_{\theta x} \;=\; 0
$$
for some $\theta\in(0,1)$ (the first equality uses $h(0)=h'(0)=0$).
Now a real sum of squares can only be zero if each summand is zero. So we have, as required, $h(x)=0$ for all $x$.
[For those of an applied bent, we are just doing the classical thing of conserving the total energy.]
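A numeric sanity check of the uniqueness statement (illustration only, with arbitrary sample initial data): integrating $g''+g=0$ by a simple symplectic Euler scheme and comparing against $A\cos x+B\sin x$:

```python
import math

A, B, h = 1.2, -0.7, 1e-4             # arbitrary initial data g(0)=A, g'(0)=B
g, v = A, B
steps = int(2 * math.pi / h)          # integrate over one full period
for _ in range(steps):
    v -= h * g                        # v' = -g   (from g'' + g = 0)
    g += h * v                        # g' = v
x = steps * h
err = abs(g - (A * math.cos(x) + B * math.sin(x)))
print(err)                            # small: the numerics track A cos x + B sin x
```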
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4329937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Sequence $\ln(n)/n$ to $0$ I'm taking a real analysis course for my second year and I'm still new when it comes to convergence of a sequence using the M-epsilon definition.
I've simplified my sequence up to
$$\frac{\ln(n)}{n}<\epsilon,$$
but where do I go from here? How do I find an upper bound for $\ln(n)/n$? I have tried using $\epsilon/n$ and $(n-1)/n$, but they don't satisfy the inequality for large $n$.
| To find the maximum of a sequence consider it as a function, namely
$$f(x) = \frac{\ln x}{ x}.$$
Assuming $x>0$ we find the extremum of this function
$$f'(x) = \frac{1-\ln x}{x^2}=0 \Rightarrow \ln x =1 \Rightarrow x=e.$$
Then
$$\sup_{x \in [1,\infty)} \frac{\ln x}{x} \leq \frac{1}{e} \Rightarrow | \frac{\ln n}{n}| \leq \frac{1}{e} \quad \forall n > 0 .$$
The second sequence is clear since we have
$$|\frac{1-n}{n}| = |1- \frac{1}{n}| < 1 \quad \forall n > 0$$
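A quick numeric check of the bound (not a proof; sampling $n$ up to an arbitrary cutoff): over the integers, the largest value of $\ln(n)/n$ is $\ln(3)/3\approx 0.366$, safely below $1/e\approx 0.368$:

```python
import math

vals = [math.log(n) / n for n in range(1, 10**6)]
print(max(vals), 1 / math.e)   # max is ln(3)/3, attained at n = 3, below 1/e
```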
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4330094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Find the Volume of a Solid Revolution around the y axis Having trouble with this question from my OpenStax Calculus Volume 1 Homework, It is question 89 of Chapter 6 about Solid Revolution. I put my math below:
y=4-x, y=x, x=0 Find the volume when the region is rotated around the y-axis.
$$y= 4-x \Rightarrow x = 4 - y \implies R(y) = 4 - y$$
This is rotated around the y-axis should thus follow the general formula (c and d are position on the y axis):
$$V=\pi\int_{c}^{d} R^2(y) \, dy $$
Using this I inserted the relevant values and variables:
$$
V= \pi\int_{0}^{4} (4-y-y)^2 \,dy
= \pi\int_{0}^{4} (4-2y)^2 \,dy
= \pi\int_{0}^{4} (16 - 16y +4y^2) \,dy
= \pi\Big[16y - \frac{16y^2}{2} + \frac{4y^3}{3}\Big] \Bigg|_{0}^{4}
= \pi \ \Big[16(4) - \frac{16(4^2)}{2}+ \frac{4(4^3)}{3} \Big] -\pi\big[0 \big]
= \pi \Big[64 - 128 + \frac{256}{3} \Big]
= \pi \Big[\frac{64}{3} \Big]
= \frac{64\pi}{3}
$$
The Official Answer given in the textbook is $\frac{16\pi}{3} $ but I am not sure how to get there. Some help would be greatly appreciated.
| The region is as stated in the diagram and is rotated around y-axis.
If you are using disk method, it should be two integrals:
$ \displaystyle \int_0^2 \pi y^2 ~ dy ~ + \int_2^4 \pi (4-y)^2 ~ dy = \frac{8 \pi}{3} + \frac{8 \pi}{3} = \frac{16 \pi}{3}$
You can alternatively use the shell method which is a single integral in this case. The volume of revolution using shell method is,
$ \displaystyle \int_a^b 2 \pi h(x) \cdot x ~ dx ~, ~$ where $h(x)$ is the height of shell as a function of $x$, and $a$ and $b$ are bounds of $x$.
$h(x) = 4-x-x = 4 - 2 x$
So the integral is $ \displaystyle \int_0^2 2 \pi x \cdot (4-2x) ~ dx = \frac{16 \pi}{3}$
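A numeric cross-check (midpoint rule; sketch only) that the shell integral and the two disk integrals all give $\frac{16\pi}{3}\approx 16.755$:

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule for a definite integral
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

shell = midpoint(lambda x: 2 * math.pi * x * (4 - 2 * x), 0, 2)
disk = (midpoint(lambda y: math.pi * y**2, 0, 2)
        + midpoint(lambda y: math.pi * (4 - y)**2, 2, 4))
print(shell, disk, 16 * math.pi / 3)   # all three agree
```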
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4330576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Coupon Collector Score Suppose you have a non-uniform coupon collector problem. But, rather than quotas, each coupon gives you points. There are $n$ different groups of coupons, and the probability of receiving a coupon of each group is $p_i$. There are $N_i$ distinct coupons in group $i$ with each coupon within a group equiprobable. The number of points you receive for getting a new coupon in group $i$ is:
$$\dfrac{N_i}{p_i(N_i-k_i)}$$
where $k_i$ is the number of coupons you have already gathered from group $i$. So, obviously, the points for the first coupon of each group is just $\dfrac{1}{p_i}$. But, once you get more coupons from each group, each coupon is worth more.
Question: What is the expected number of points you will have after collecting $m$ distinct coupons?
I attempted to try to brute force the expected value of the score, but I was not having much luck. And when I tried to calculate it by monte carlo simulation, I wound up with a fairly large variance.
I can easily calculate the expected value for points after drawing one coupon. That is just the number of groups. And from there, I think for 2 coupons among $n$ groups, I think the expected number of points will be:
$$\sum_{i = 1}^n \sum_{j = 1}^n \dfrac{p_ip_j\left(\begin{cases}1 & i\neq j \\ \tfrac{N_i-1}{N_i} & i = j\end{cases}\right) \cdot \left(\dfrac{1}{p_i} + \dfrac{1}{p_j}\begin{cases}1 & i \neq j \\ \tfrac{N_i}{N_i-1} & i = j\end{cases}\right)}{\sum_{k=1}^n p_k\left(\begin{cases}1 & k \neq i \\ \tfrac{N_i-1}{N_i} & k = i\end{cases}\right)}$$
This just looks like a nightmare to calculate even for two coupons. Any advice on how to make this a bit easier to calculate? I am looking at similar coupon collector problems, such as the one here, but the addition of points for coupons is really throwing a wrench into my attempt at creating a generating function. I was hoping to expand my attempt at calculating the expected points from two coupons into a generating function, if I could see a pattern. I did not.
| I initially misread the post; this answer computes the expected number of points after $m$ draws, not after getting $m$ distinct draws.
Call a group dead if all of its coupons have been chosen previously, and alive otherwise.
Lemma If there are currently $a$ alive groups, the expected number of points from your next coupon is $a$.
Proof: Suppose you have already $k_i$ coupons from the $i^{th}$ group, for each $i\in \{1,\dots,n\}$. Let $X$ be the number of points from your next coupon. For each $i\in \{1,\dots,n\}$, let $G_i$ be the event that the next coupon is from group $i$.
$$
E[X]=\sum_{i=1}^n P(G_i)E[X\mid G_i]=\sum_{i=1}^n p_i\cdot
\left(\begin{cases}
\displaystyle\frac{N_i-k_i}{N_i}\times \frac{N_i}{p_i(N_i-k_i)} & k_i < N_i
\\
0 & k_i=N_i
\end{cases}\right)
=\text{#}\{i\mid k_i<N_i\}
$$
Now, let $X_t$ be the number of points you get on draw number $t$, for $t\in \{1,2,3,\dots\}$. Then
\begin{align}
E[X_t]
&= E[\text{# alive groups before turn $t$}]
\\&= \sum_{i=1}^n P(\text{group # $i$ alive before turn $t$})
\\&= \sum_{i=1}^n P\left(\bigcup_{j=1}^{N_i}\{\text{$j^\text{th}$ coupon in group $i$ not yet chosen}\}\right)
\\&= \sum_{i=1}^n \sum_{j=1}^{N_i} (-1)^{j-1}\binom{N_i}{j} \left(1-p_i\frac{j}{N_i}\right)^{t-1}
\end{align}
The last equation is a routine application of the principle of inclusion-exclusion. Therefore, if you draw $m$ coupons total, the expected number of points is
$$
\begin{align}
\sum_{t=1}^mE\left[ X_t\right]
&= \sum_{i=1}^n \sum_{j=1}^{N_i} (-1)^{j-1}\binom{N_i}{j} \sum_{t=1}^m\left(1-p_i\frac{j}{N_i}\right)^{t-1}
\\&= \sum_{i=1}^n \sum_{j=1}^{N_i} (-1)^{j-1}\binom{N_i}{j} \frac{1-(1-p_ij/N_i)^m}{p_ij/N_i}
\\&= \boxed{\sum_{i=1}^n \frac{N_i}{p_i}\sum_{j=1}^{N_i} (-1)^{j-1}\binom{N_i}{j}\frac1j\cdot \left(1-\left(1-p_i\frac{j}{N_i}\right)^m\right)}
\end{align}
$$
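As a sanity check of the boxed formula (a Monte Carlo sketch with small assumed parameters $p=(0.7,0.3)$, $N=(2,2)$, $m=6$; duplicates score zero, matching the "new coupon" rule, and $m$ counts draws rather than distinct coupons, as noted above), simulation and the closed form agree closely, and the $m=1$ case reduces to the number of groups as the Lemma predicts:

```python
import random
from math import comb

def expected_points(p, N, m):
    # the boxed closed form
    total = 0.0
    for pi, Ni in zip(p, N):
        s = sum((-1) ** (j - 1) * comb(Ni, j) / j * (1 - (1 - pi * j / Ni) ** m)
                for j in range(1, Ni + 1))
        total += Ni / pi * s
    return total

def simulate(p, N, m, trials=100_000, seed=0):
    # draw m coupons; a *new* coupon from group i scores N_i / (p_i (N_i - k_i))
    rng = random.Random(seed)
    groups = range(len(p))
    acc = 0.0
    for _ in range(trials):
        seen = [set() for _ in N]
        for _ in range(m):
            i = rng.choices(groups, weights=p)[0]
            c = rng.randrange(N[i])
            if c not in seen[i]:
                acc += N[i] / (p[i] * (N[i] - len(seen[i])))
                seen[i].add(c)
    return acc / trials

p, N, m = [0.7, 0.3], [2, 2], 6
print(expected_points(p, N, m), simulate(p, N, m))  # the two numbers agree closely
```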
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4330709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove the linear independence of $\{v_1,v_1+v_2\}\,$?
Prove that if $\{v_1,v_2\}$ are linearly independent (LI), then also $\{v_1,v_1+v_2\}$ are linearly independent.
Is the converse true?
I have proved the first part, but now I am stuck on whether the converse is true,
that is, if $\{ v_1,v_1+v_2 \}$ are LI, so are $\{v_1,v_2\}$.
How can I prove this?
Thanks in advance.
| The reverse can be proved with the usual definition of linear independence.
Let $c_1,c_2\in\mathbb{R}$ be such that $c_1 v_1+c_2v_2=0$. Then
$$0=c_1v_1+c_2v_2=c_1v_1+c_2v_2+c_2v_1-c_2v_1=(c_1-c_2)v_1+c_2(v_1+v_2)$$
Because $v_1,v_1+v_2$ are linearly independent, we must conclude $c_1-c_2=c_2=0$, which leads to $c_1=c_2=0$ and proves $v_1,v_2$ are linearly independent.
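A concrete way to see both directions at once (illustration with assumed sample vectors in $\mathbb R^2$): the transition matrix from $\{v_1,v_2\}$ to $\{v_1,v_1+v_2\}$ is $\begin{pmatrix}1&1\\0&1\end{pmatrix}$, with determinant $1$, so the two pairs have the same (nonzero) determinant:

```python
def det2(u, v):
    # determinant of the 2x2 matrix with columns u, v
    return u[0] * v[1] - u[1] * v[0]

v1, v2 = (1.0, 2.0), (3.0, -1.0)          # any independent pair works
w2 = (v1[0] + v2[0], v1[1] + v2[1])       # v1 + v2
print(det2(v1, v2), det2(v1, w2))         # equal and nonzero
```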
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4331035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Find $p$ such that the integral is finite Let $X \subset \mathbb{R}^{n}$. For which $p \in[1, \infty)$ it holds that $f \in L^{p}(X)$ when $f(x)=|x|^{-1}$ and
*
*$X=B(0,1)$
*$X=\mathbb{R}^{n} \backslash B(0,1)$
*$X=\mathbb{R}^{n}$.
My attempt:
Suppose $0\leq a<b\leq \infty$ and we consider the annulus $E_{a,b}:=\{x\in\Bbb{R}^n\,:\, a<|x|<b\}$. Then, for any $p\in\Bbb{R}, $ we have $\int_{E_{a,b}}\frac{1}{|x|^{p}}\,dx=\int_a^b\frac{1}{r^{p}}A_{n-1}r^{n-1}\,dr=A_{n-1}\int_a^b\frac{1}{r^{p+1-n}}\,dr$, where $A_{n-1}$ is the surface area of the unit sphere $S^{n-1}\subset\Bbb{R}^n$
$A_{n-1} \frac{-(p+1-n)}{r^{p+2-n}}\bigr\vert_{a}^{b}$
In the first case, $a = 0$ and $b = 1$, in the second case $a = 1$ and $b = \infty$, and in the third case $a = 0$ and $b = \infty$. In the first case the integral is finite when $p +2 ≤ n$, in the second case the integral is finite when $p + 2 ≥ n$, and in the third case $p + 2 = n$.
Is my attempt correct?
| For any $\lambda\in \Bbb{R}$,
*
*$\int_0^1\frac{1}{x^{\lambda}}\,dx$ is finite if and only if $\lambda<1$.
*$\int_1^{\infty}\frac{1}{x^{\lambda}}\,dx$ is finite if and only if $\lambda>1$.
*clearly, $\int_0^{\infty}\frac{1}{x^{\lambda}}\,dx$ is finite if and only if $\int_0^1\frac{1}{x^{\lambda}}\,dx$ and $\int_1^{\infty}\frac{1}{x^{\lambda}}\,dx$ are both finite. This happens if and only if $\lambda<1$ and $\lambda>1$, i.e., this integral is never finite.
Now, take $\lambda=p+1-n$ and conclude.
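Spelling out the conclusion with $\lambda=p+1-n$ (this just substitutes into the three bullet points above):
$$f\in L^p(B(0,1))\iff p+1-n<1\iff p<n,\qquad f\in L^p(\mathbb{R}^n\setminus B(0,1))\iff p+1-n>1\iff p>n,$$
and $f\notin L^p(\mathbb{R}^n)$ for every $p\in[1,\infty)$.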
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4331243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is it true $P(\sup_{k \in \mathbb{N}}X_k \geq \epsilon +x)=\dfrac{x}{\epsilon+x}$? Let $(X_k)_k$ be a non-negative martingale such that $\lim_{k \to \infty}X_k=0$ a.s. and $X_0=x\in ]0,\infty[.$
Prove or disprove the following: $$\forall\epsilon>0,P(\sup_{k \in \mathbb{N}}X_k \geq \epsilon +x)=\frac{x}{\epsilon+x} \ \ \ \ \ \ \ \ \ (1)$$
I deduced, from a maximal inequality, that $P(\sup_{k \in \mathbb{N}}X_k \geq \epsilon +x) \leq \dfrac{E[X_0]}{x+\epsilon}=\dfrac{x}{\epsilon+x}.$
Any suggestions how to prove $(1)?$
| In general, (1) may not hold: take $X_k=x\prod_{i=1}^k\left(\mathbf{1}_{A_i}/p_i\right)$ where $(A_i)$ is an independent sequence of events, $p_i=\mathbb P(A_i)>0$ and $\mathbb P\left(\bigcap_{i\geqslant 1}A_i\right)=0$.
Then the sequence $(X_k)$ can take only the values $x$, $0$, $x/p_1$, $x/(p_1p_2)$, $\dots$, $x\prod_{i=1}^k 1/p_i$, $k\geqslant 1$, and so does $\sup_{k\in\mathbb N}X_k$, as $\prod_{i=1}^k 1/p_i\to \infty$. In particular, $\sup_{k\in\mathbb N}X_k$ takes only these countably many prescribed values, so $P(\sup_{k \in \mathbb{N}}X_k \geq \epsilon +x)$ cannot equal $\frac{x}{\epsilon+x}$ for every $\epsilon>0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4331414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solve $\sqrt{x-5}-\sqrt{9-x}\gt1,x\in\mathbb Z$
Solve $\sqrt{x-5}-\sqrt{9-x}\gt1,x\in\mathbb Z$
The statement tells us that $x\in[5,9]$. Also,
$$\sqrt{x-5}\gt1+\sqrt{9-x}$$
Since both sides are positive, we can square
$$x-5>1+9-x+2\sqrt{9-x}\\2x-15\gt2\sqrt{9-x}$$
$\implies 2x-15\gt0\implies x\gt7.5$
Since $x\in\mathbb Z\implies x=8,9$
But on back substitution, $x=8$ doesn't satisfy. Is there a way we could get the final answer without back substitution?
| Starting from where you left off, since both sides of the inequality $$2x - 15 > 2\sqrt{9 - x}$$ are positive, the direction of the inequality is preserved if we square both sides, which yields
\begin{align*}
4x^2 - 60x + 225 & > 4(9 - x)\\
4x^2 - 60x + 225 & > 36 - 4x\\
4x^2 - 56x & > -189
\end{align*}
Since $(2a + b)^2 = 4a^2 + 4ab + b^2$, we can complete the square with $a = x$ and $b = -14$ to obtain
\begin{align*}
4x^2 - 56x + 196 & > 7\\
4(x^2 - 14x + 49) & > 7\\
(x - 7)^2 & > \frac{7}{4}
\end{align*}
Since $(8 - 7)^2 = 1 < \dfrac{7}{4}$, this eliminates $8$. Thus, the only integer solution is $x = 9$.
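Brute-force confirmation over the admissible integers (illustration; the domain $[5,9]$ is forced by the square roots):

```python
import math

sols = [x for x in range(5, 10) if math.sqrt(x - 5) - math.sqrt(9 - x) > 1]
print(sols)  # [9]
```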
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4331630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Show that for any integrable function... Show that for any integrable function $f$ on $\mathbb{R}$ we have $$\lim_{n\to\infty}\int_{n}^{n+1}f(x) dx = 0$$
What is the way to solve it? Can I use the Lebesgue dominated convergence theorem? Or do I have to use the fact that the step functions are dense in $L^{1}$?
| Note that $\int_{0}^{\infty}|f(x)|dx<\infty$ and, by the monotone convergence theorem, $\int_{0}^{n}|f(x)|dx\uparrow\int_{0}^{\infty}|f(x)|dx$. Now we use the simple Cauchy criterion:
\begin{align*}
\left|\int_{0}^{m}|f(x)|dx-\int_{0}^{n}|f(x)|dx\right|<\varepsilon
\end{align*}
for $m,n\geq N$. Now pick $m=n+1$ and we have
\begin{align*}
\left|\int_{n}^{n+1}f(x)dx\right|\leq\int_{n}^{n+1}|f(x)|dx<\varepsilon,\quad n\geq N.
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4331922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove that the polynomial $P=2X+1 \in \Bbb Z_4[X]$ has an inverse element.
Prove that the polynomial $P=2X+1 \in \Bbb Z_4[X]$ has an inverse element. What happens if we consider $P$ as an element of $\Bbb Q [X]$?
If $P \in \Bbb Z_4[X]$, then any $Q=kX+1$ where $k \equiv 2 \pmod{4}$ will work as an inverse right? I suppose that the result will not change in $\Bbb Q [X]$ as the coefficients have a multiplicative inverses except $0$?
| You are correct regarding $Q=kX+1$ being an inverse whenever $k\equiv2\mod 4$.
However, the result changes drastically if we move to $\mathbb Q[X]$, as this is the polynomial ring of an integral domain. You should convince yourself that for $R$ an integral domain and $f,g\in R[X]$ we have
$$
\deg fg=\deg f+\deg g
$$
making invertibility for any non-constant polynomial impossible.
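A quick check by hand or machine: in $\Bbb Z_4[X]$ we have $(2X+1)^2=4X^2+4X+1\equiv 1$, so $P$ is its own inverse (the $k=2$ case of the questioner's family). A minimal pure-Python multiplication mod $4$ (helper name is ours, not from the post):

```python
def polymul_mod(a, b, m=4):
    # coefficient lists in ascending order: [c0, c1, ...], arithmetic mod m
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % m
    return out

P = [1, 2]                    # 1 + 2X
print(polymul_mod(P, P))      # [1, 0, 0], i.e. (2X+1)^2 = 1 in Z_4[X]
```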
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4332102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
find the value of the product of roots of this quadratic equation It is given that one of the roots of the quadratic equation: $x^2 + (p + 3)x - p^2 = 0$,
where $p$ is a constant, is the negative of the other. The question is: find the value of the product of the roots.
| Without applying the factorization of a "difference of two squares" or Viete's relations, we can still use the information stated in the problem. If we call the two roots of the quadratic equation $ \ r \ $ and $ \ -r \ \ , $ then we have
$$ r^2 \ + \ (p + 3)·r \ - \ p^2 \ \ = \ \ 0 $$
and $$ [-r]^2 \ + \ (p + 3)·[-r] \ - \ p^2 \ \ = \ \ r^2 \ - \ (p + 3)·r \ - \ p^2 \ \ = \ \ 0 \ \ . $$
This means that $ \ r^2 \ = \ -(p + 3)·r \ + \ p^2 \ = \ (p + 3)·r \ + \ p^2 \ \Rightarrow \ 2·(p + 3)·r \ = \ 0 \ \ . $ So either $ \ r \ = \ 0 \ $ or $ \ p \ = \ -3 \ \ . $
But if $ \ r \ = \ 0 \ = \ -r \ \ , \ $ then $ \ 0^2 \ + \ (p + 3)·0 \ - \ p^2 \ \ = \ \ 0 \ \ $ would require $ \ p \ = \ 0 \ \ , $ which would then make the quadratic equation $ \ x^2 \ + \ 3·x \ = \ 0 \ \ . $ But that polynomial factors as $ \ x · (x + 3) \ = \ 0 \ \ , $ so we couldn't have both roots equal to zero.
Instead, it must be that $ \ p \ = \ -3 \ \ , $ making the equation $ \ x^2 \ + \ 0·x \ - \ (-3)^2 \ = \ x^2 \ - \ 9 \ = \ 0 \ \ , $ for which the roots are given by $ \ r^2 \ = \ 9 \ \Rightarrow \ r \ = \ +3 \ , \ -3 \ \ ; \ $ the product of the roots is thus $ \ -9 \ \ . $
Another way to arrive at this conclusion is that $ \ y \ = \ x^2 \ + \ (p + 3)·x \ - \ p^2 \ \ $ is the equation of an "upward-opening" parabola, for which we want the $ \ x-$intercepts to be $ \ x \ = \ -r \ $ and $ \ x \ = \ r \ \ . $ Its axis of symmetry is located midway between these intercepts, so we have $ \ h \ = \ 0 \ $ in the "vertex form" of the parabola's equation, $ \ y \ = \ (x - 0)^2 \ - \ p^2 \ \ . $ (The vertex is definitely "below" the $ \ x-$axis at $ \ (0 \ , \ -p^2) \ \ , $ so we know these $ \ x-$intercepts exist.) The equation of the parabola is therefore $ \ y \ = \ x^2 \ - \ p^2 \ \ , \ $ making $ \ p + 3 \ = \ 0 \ \ $ and the rest of the argument above follows.
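A quick numeric check of this conclusion (my own sketch, not part of the original answer): with $p=-3$ the quadratic becomes $x^2-9=0$, whose roots are negatives of each other with product $-9$.

```python
# Check: with p = -3 the equation x^2 + (p+3)x - p^2 = 0 becomes x^2 - 9 = 0,
# whose roots are +3 and -3 (negatives of each other) with product -9.
p = -3.0
a, b, c = 1.0, p + 3.0, -p**2            # coefficients of x^2 + (p+3)x - p^2
disc = b**2 - 4*a*c                      # discriminant
r1 = (-b + disc**0.5) / (2*a)
r2 = (-b - disc**0.5) / (2*a)
product = r1 * r2                        # Viete: product of roots = c/a = -p^2
```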
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4332357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Sum of sample expectations of function of 2 variables (Understanding MMD) I'm trying to understand the proofs to bound Maximum Mean Discrepancy (MMD) in the paper "A Kernel Two-Sample Test" by Gretton et al. (2012). These are given in the appendix. Specifically, I am stuck with A.3 (Bound when $p=q$ and $m=n$, p. 758) on how to bound the expected value of $MMD_b$.
The step I am having trouble with is the following, with samples $X, X'$ following the same distribution and both having the same number of samples $m$. The individual samples are $x_i, x_i'$, respectively.
\begin{align}
&\frac{1}{m} \mathbb E_{X,X'}\left[\sum_{i=1}^m\sum_{j=1}^m\left(k(x_i, x_j) + k(x_i', x_j') -k(x_i,x_j') - k(x_i',x_j)\right)\right]^\frac{1}{2} \\
\leq & \frac{1}{m}\left[2m\mathbb E_xk(x,x) + 2m(m-1)\mathbb E_{x,x'}k(x,x')-2m^2\mathbb E_{x,x'}k(x,x')\right]^\frac{1}{2}
\end{align}
I am assuming that the inequality follows from Jensen's inequality and the concavity of the square root. Also, I am guessing that we are using the fact that for a function $g$ it holds that $\mathbb E [g] = \mathbb E_{X'}[\hat{\mathbb E}_{X'}(g)] = \mathbb E_{X'}[\frac{1}{m}\sum_{i=1}^mg(x_i')]$, where the $x_i'$ are from $X'$. Still, I am not seeing how to formally derive the second line from the first one.
Thanks in advance!
Related question on stats.SE.
| The trick is to handle the diagonal elements ($i=j$) separately, which gives the $2mE_xk(x,x)$ part. The rest follows from this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4332514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Constructing an LR test for a concrete example and find its critical areas and power function I am working on the following exercise:
We are given an observation of a discrete RV $X$ with PMF $f(x \mid \theta)$ and $\theta \in \{0,1,2\}$ as in the table below. Find the LR test for the hypothesis $H_0: \theta = 0$ and list all possible critical areas for such an LR test. From there take a level $\alpha$-test for $\alpha = 0.15$ and find its power function $\beta(\theta)$.
$$\begin{pmatrix}
x &f(x \mid 0) &f(x \mid 1) &f(x \mid 2) \\
1 &3/4 &1/4 &1/3 \\
2 &1/8 &1/8 &1/3 \\
3 &1/8 &1/2 &1/6 \\
4 &0 &1/8 &1/6 \\
\end{pmatrix}$$
I am new to LR tests and can not quite see through its definition yet. Here is what I got so far: We defined the test statistic for the LR test as
$$\lambda(X) := \frac{L(\hat{\theta_0} \mid X)}{L(\hat{\theta} \ \mid X)},$$
where $L$ is the likelihood function. If I am not mistaken we should have:
$$\lambda(X) = \begin{cases}
\frac{3/4}{3/4} = 1, & \text{for } X = 1 \\
\frac{1/8}{1/3} = 3/8, & \text{for } X = 2\\
\frac{1/8}{1/2} = 1/4, & \text{for } X = 3\\
\frac{0}{1/6} = 0, & \text{for } X = 4.
\end{cases}
$$
For the LR tests we defined the critical area $K$ as $K := \{x \in \mathcal{X} \ \mid \lambda(x) < k\}$, where $\mathcal{X}$ is the sample space. In the lecture we then said that we need to find some $k$ such that $\sup_{\theta \in \Theta_0} P_{\theta}(\lambda < k) \le \alpha$, however, I do not see how I should do this here. Could you please help me?
| Assuming a two-sided alternative hypothesis, your work so far is correct.
Here is a hint on how to proceed further:
A critical region of the form $\lambda(x)<k$ essentially means that you reject $H_0$ for small values of $\lambda$.
Observe that
$$\lambda(4)<\lambda(3)<\lambda(2)<\lambda(1) \tag{$\star$}$$
Your critical region $R$ (say) would consist of sample points taken according to $(\star)$. So possible critical regions would be $\{4\}$ or $\{4,3\}$ etc. depending on the level restriction on the test. For a level $\alpha=0.15$ test, $4$ is the first sample point to enter $R$, followed by $3$, and so on until the size of the test exceeds $0.15$.
As you can see,
$$P_{H_0}(X=4)=0<0.15$$
And
$$P_{H_0}(X=4)+P_{H_0}(X=3)=\frac18=0.125<0.15$$
But $$P_{H_0}(X=4)+P_{H_0}(X=3)+P_{H_0}(X=2)=\frac28=0.25>0.15$$
So the tests with critical regions $R=\{4\}$ and $R=\{4,3\}$ are both valid level $0.15$ likelihood ratio tests. Of course, the more points you add to $R$, the higher the power of the test. On the other hand, the test with $R=\{4,3,2\}$ is not a level $0.15$ likelihood ratio test.
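To make these size and power computations concrete, here is a small sketch (my own illustration, not part of the original answer) that tabulates $\beta(\theta)=P_\theta(X\in R)$ directly from the pmf table in the question:

```python
from fractions import Fraction as F

# pmf table from the question: f[x][theta] for theta = 0, 1, 2
f = {1: [F(3, 4), F(1, 4), F(1, 3)],
     2: [F(1, 8), F(1, 8), F(1, 3)],
     3: [F(1, 8), F(1, 2), F(1, 6)],
     4: [F(0),    F(1, 8), F(1, 6)]}

def power(R, theta):
    """beta(theta) = P_theta(X in R) for a critical region R."""
    return sum(f[x][theta] for x in R)

size_R1 = power({4}, 0)         # size of R = {4}:       0
size_R2 = power({4, 3}, 0)      # size of R = {4, 3}:    1/8 <= 0.15
size_R3 = power({4, 3, 2}, 0)   # size of R = {4, 3, 2}: 1/4 >  0.15
```

For the level-$0.15$ test with $R=\{4,3\}$ this gives the power function $\beta(0)=1/8$, $\beta(1)=5/8$, $\beta(2)=1/3$.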
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4332692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Path complex integral of $\dfrac{1}{z}$ along a ball not centered at $0$ I have some troubles to solve the integral $$I=\int_{\gamma} \dfrac{1}{z}dz,$$ with the parametrisation of a circumference with radius $r$ and centre $i$: $$\gamma(t)=i+re^{it}.$$
I know if $\gamma$ would centered at $0$, then $I=2\pi i$. But in this particular case i have some troubles with the algebra since i don’t know how to deal with the denominator: $$I=\int_{\gamma} \dfrac{1}{z}dz=\int_{0}^{2\pi}\dfrac{ire^{it}}{i+re^{it}}dt.$$
| If $r\lt1,$ then the integral is $0$ by Cauchy's integral theorem. If $r=1,$ then the integral does not exist. If $r\gt1,$ then the integral is $2\pi{i}$ by Cauchy's residue theorem. There is no need to actually concern yourself with the parametrization of the contour and convert the integral into a Riemann integral.
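As a sanity check (not needed for the argument above), one can still evaluate the parametrized Riemann integral numerically for a radius on each side of $1$:

```python
import math, cmath

def contour_integral(r, n=4000):
    """Riemann sum for the integral of dz/z over gamma(t) = i + r*e^{it}."""
    total = 0j
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        z = 1j + r * cmath.exp(1j * t)        # gamma(t)
        dz = 1j * r * cmath.exp(1j * t) * dt  # gamma'(t) dt
        total += dz / z
    return total

inside = contour_integral(0.5)   # origin not enclosed -> approximately 0
outside = contour_integral(2.0)  # origin enclosed     -> approximately 2*pi*i
```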
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4332843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $A$ is closed. Let $f:X\to Y$ be continuous, open and onto. Then $Y$ is Hausdorff if and only if the set $\{(x, y): f (x) = f (y) \}$ is closed. I have already shown that if $Y$ is Hausdorff then the set is closed, but I don't know how to prove the converse; can someone help me?
Let $A=\{(x, y): f (x) = f (y) \}$. What I will prove is that the complement $A'$ is open. Let $(x, y) \in A'$. Then $ f (x) \neq f (y) $, and since $ Y $ is $ T_2 $, we have two open sets $ U_ {f (x)} $ and $ V_ {f (y)} $ such that $ U_ {f (x)} \cap V_ {f (y)} = \varnothing. $ Now, $$f^{-1}(U_ {f (x)}) \cap f^{-1}(V_ {f (y)})=f^{-1}(U_ {f (x)} \cap V_ {f (y)} )=f^{-1}(\varnothing)=\varnothing $$
From this we obtain that $ A' $ is open, so $ A $ is closed. The other implication is the one that is missing.
| Suppose $A$ is closed and let $y_1 \neq y_2$ in $Y$. By ontoness there are $x_1, x_2 \in X$ so that $f(x_1)=y_1, f(x_2)= y_2$. By definition $(x_1,x_2) \in X^2\setminus A$ and as $A$ is closed, there is a basic open subset $U \times V$ of $X^2$ so that $$(x_1, x_2) \in U \times V \subseteq X^2\setminus A\tag{1}$$
Now if $z \in f[U] \cap f[V]$ there would be $x_3 \in U, x_4 \in V$ so that $z=f(x_3)=f(x_4)$ but then $(x_3,x_4) \in (U \times V)\cap A$ would contradict the inclusion in $(1)$. So $f[U] \cap f[V]=\emptyset$.
By openness of $f$, $f[U]$ and $f[V]$ are the required disjoint open neighbourhoods of $y_1, y_2$ resp. and $Y$ is $T_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4333022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Checking if this transformation $L: R^3 \to P_4$ exists? I've been given three vectors: $\begin{bmatrix}1&2&3\end{bmatrix}$, $\begin{bmatrix}0&1&1\end{bmatrix}$, and $\begin{bmatrix}1&1&1\end{bmatrix}$ and asked if it is possible for there to be a linear transformation $L:\mathbb{R}^3\rightarrow P_4$ such that $\begin{bmatrix}1&2&3\end{bmatrix}\mapsto x^2$, $\begin{bmatrix}0&1&1\end{bmatrix}\mapsto x^2-1$, and $\begin{bmatrix}1&1&1\end{bmatrix} \mapsto 1$.
My professor goes about saying that this must be true since the vectors are linearly independent, but I'm having trouble figuring out why linear independence means this transformation must be possible. Can anyone help clarify why this is the case?
| Let $x_1=[1,2,3]^T,$ $x_2=[0,1,1]^T,$ $x_3=[1,1,1]^T.$ The set $\{1,x,x^2\}$ is linearly independent in $P_4.$ So let $y_1=x^2=[0,0,1]^T,$ $y_2=x^2-1=[-1,0,1]^T,$ $y_3=1=[1,0,0]^T.$
You want to find a linear transformation $A,$ such that $Ax_i=y_i,$ $i=1,2,3.$ Write it into $$A[x_1,x_2,x_3]=[y_1,y_2,y_3]\Rightarrow AX=Y$$
Here $X=[x_1,x_2,x_3]$ is invertible since $\{x_1,x_2,x_3\}$ is linearly independent. So $A=YX^{-1}$ is the unique solution.
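A numerical sketch of this recipe (my own illustration; coordinates in $P_4$ are taken with respect to the basis $1, x, x^2$):

```python
import numpy as np

# columns are x_1, x_2, x_3 and y_1, y_2, y_3 from the answer
X = np.array([[1, 0, 1],
              [2, 1, 1],
              [3, 1, 1]], dtype=float)
Y = np.array([[0, -1, 1],
              [0,  0, 0],
              [1,  1, 0]], dtype=float)

A = Y @ np.linalg.inv(X)   # the unique linear map with A x_i = y_i
```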
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4333163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to measure changing dimensions of right scalene triangle as hypotenuse rotates? This is a high school level geometry question.
As shown in the top half of the image below, suppose we start with a right scalene triangle with hypotenuse = 5 and sides b and c = to 3 and 4, respectively.
Suppose we begin rotating the hypotenuse outward (rotated hypotenuse shown as dashed line "d" in bottom half of image). We rotate hypotenuse line "d" out by a unit of 1 from angle "B", denoted as dashed line "e" in the image. How do we measure lines f and g, formed in the new triangle as line d rotates outward?
It's obvious to me that the new angle D formed as line "d" rotates will differ from angle B. The little black square in the new triangle formed is my recollection of a 90 degree angle designation.
| Another solution is to solve for angle C when you know, for that new dashed-line triangle (the larger one), that hypotenuse ("a") = 5 and one side ("e") = 1. Then use those new angles that are formed from the shift of line "a", and the fact that "e" = 1, to solve for "f" and "g". In any event I end up with the same solution for "f" and "g" when I solve for the equations recommended by Intelligenti Pauca, using the quadratic formula.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4333295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$a$ sum of $2$ squares and $b$ sum of $3$ squares imply $a^2b$ is sum of $3$ squares. I have been given the following problem to solve and am having a hard time finding a solution. I feel that one way, perhaps the only way, is to use a polynomial identity but I can't determine it.
Let $a,b$ two positive integers. If $a$ is sum of $2$ squares and $b$ is sum of $3$ squares then $a^2b$ is sum of three squares.
NOTE: The sums are meant to be "primitive" (i.e. with coprime solutions).
| Any positive integer $n$ that is not divisible by 4 and satisfies $n \not\equiv 7 \pmod 8$ can be written primitively as the sum of three squares. The number of such primitive representations is a certain multiple of $h(-4n).$ This number is the count of primitive binary quadratic forms of discriminant $-4n.$ We always have $h(-4n) \geq 1$ because there is always the form $x^2 + n y^2.$
Permutations of a triple are counted separately, as are sign changes of one or more variables: a triple $(x,y,z)$ with $ x > y > z > 0$ therefore counts as $48$ representations. Anyway,
For $n > 1$ and $ n \equiv 1 \pmod 8,$ $\; \; R_{0}(n) = 12 h(-4n).$
For $ n \equiv 3 \pmod 8,$ $ \; \; R_{0}(n) = 8 h(-4n).$
For $ n \equiv 5 \pmod 8,$ $ \; \; R_{0}(n) = 12 h(-4n).$
For $ n \equiv 2 \pmod 8,$ $ \; \; R_{0}(n) = 12 h(-4n).$
For $ n \equiv 6 \pmod 8,$ $ \; \;R_{0}(n) = 12 h(-4n).$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4333519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Compute explicitly $\sum_{x=1}^\infty\sum_{y=1}^\infty \frac{x^2+y^2}{x+y}(\frac{1}{2})^{x+y}$ It is possible to compute explicitly the following series?
$$\sum_{x=1}^\infty\sum_{y=1}^\infty \frac{x^2+y^2}{x+y}\Big(\frac{1}{2}\Big)^{x+y}$$
@Edit I tried to sum and subtract $2xy$ in the numerator.
In this way I get the following
\begin{align}
\sum_{x=1}^\infty\sum_{y=1}^\infty (x+y)\Big(\frac{1}{2}\Big)^{x+y}-2\sum_x x\Big(\frac{1}{2}\Big)^x\sum_y\frac{y}{x+y}\Big(\frac{1}{2}\Big)^y.
\end{align}
The first series should be easy to be computed. It remain the second one, in particular the following
\begin{align}
\sum_y\frac{y}{x+y}\Big(\frac{1}{2}\Big)^y=\sum_y\Big(\frac{1}{2}\Big)^y-x\sum_y\frac{1}{x+y}\Big(\frac{1}{2}\Big)^y.
\end{align}
Therefore the problem is reduced to computing:
\begin{align}
\sum_{y=1}^\infty \frac{1}{x+y}\Big(\frac{1}{2}\Big)^y.
\end{align}
But I don't know well how to do it.
| $$ \sum_{y=1}^\infty \frac{1}{x+y} \left(\frac{1}{2}\right)^y = \Phi(1/2,1,x) - 1/x$$
where $\Phi$ is the Lerch Phi function.
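The identity amounts to the index shift $n=y$ in the defining series $\Phi(z,s,a)=\sum_{n\ge 0} z^n/(n+a)^s$. A quick numeric check (my own sketch, computing $\Phi$ from that series rather than from a library routine):

```python
def lerch_phi_half(x, terms=200):
    """Phi(1/2, 1, x) from its defining series (converges geometrically)."""
    return sum(0.5**n / (n + x) for n in range(terms))

def lhs(x, terms=200):
    """sum_{y>=1} (1/2)^y / (x + y), truncated."""
    return sum(0.5**y / (x + y) for y in range(1, terms))

x = 3.0
difference = lhs(x) - (lerch_phi_half(x) - 1 / x)   # should be ~0
```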
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4333629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Solve this system of equations for real numbers Solve this system of equations for real numbers:
$$x^2+xy=3, $$
$$4y^2+3xy=22. $$
I think it's pretty obvious what to do to start, I think we should sum the two equations and get that:
$$x^2+4xy+4y^2=25$$
$$(x+2y)^2=5^2$$
$$\Rightarrow x+2y=5,-5$$
I'm not sure where to go from here, I've tried factorizing the equations and substituting what we've got but I've not gotten anywhere. Hints are appreciated.
| $$x^2+xy=3$$
$$4y^2+3xy=22$$
Summing these two terms we get
$$x^2+4xy+4y^2=25$$
$$(x+2y)^2=25$$
$$=>x+2y=5,-5$$
First option is $x+2y=5$
$$x+2y=5$$
$$x=5-2y$$
If we put this into the first equation we get
$$(5-2y)^2+(5-2y)\cdot y=3$$
$$25-15y+2y^2=3$$
$$-2y^2+15y-22=0$$
$$-2y^2+4y+11y-22=0$$
$$(-2y+11)(y-2)=0$$
Since their product is zero one of the factors must be zero, so we have two options
$$-2y+11=0$$
$$y=\frac{11}{2}$$
or
$$y-2=0$$
$$y=2$$
We know that $x=5-2y$, so
$$y=\frac{11}{2}$$
$$x=-6$$
or
$$y=2$$
$$x=1$$
These same steps work for $x+2y=-5$ and we end up with the answer being
$$(x,y)\in\left\{\left(-6,\tfrac{11}{2}\right),\,(1,2),\,(-1,-2),\,\left(6,-\tfrac{11}{2}\right)\right\}$$
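A quick check (not part of the original answer) that every candidate pair satisfies both original equations:

```python
solutions = [(-6, 11/2), (1, 2), (-1, -2), (6, -11/2)]
# residuals of x^2 + xy - 3 and 4y^2 + 3xy - 22; all should be ~0
residuals = [(x*x + x*y - 3, 4*y*y + 3*x*y - 22) for x, y in solutions]
```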
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4333811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solving a nonhomogenous system of eqns with one eigenvalue I have the system:
$\left[\begin{array}{@{}c@{}}
x' \\
y'
\end{array} \right]= \left[\begin{array}{@{}c@{}}
3&2 \\
-2 & -1
\end{array} \right]\left[\begin{array}{@{}c@{}}
x \\
y
\end{array} \right]+\left[\begin{array}{@{}c@{}}
2e^{-t} \\
e^{-t}
\end{array} \right]$
Which I should solve using the fundamental matrix.
So I start with obtaining the homogenous solution:
I find the eigenvalues;
\begin{pmatrix}
3-\lambda&2 \\
-2 & -1-\lambda
\end{pmatrix}
which gives the determinant: $\lambda^2-2\lambda+1=0$. Thus $\lambda_1=1$. Plugging that into the matrix in the original equation, I get that $y=-x$. So a solution to the homogeneous system would be: $y_h=e^{t}\left[\begin{array}{@{}c@{}}
1 \\
-1
\end{array} \right]$
Since there is no second solution to the determinant, I would ideally form the fundamental matrix:
\begin{pmatrix}
e^{t} & e^0 \\
e^{t} & e^0
\end{pmatrix}
but this is to no avail. So how do I find the solution of this nonhomogenous system using the fundamental matrix with one eigenvalue?
Thanks
UPDATE:
I set up the generalized eigenvector formula
\begin{equation}
(A-\lambda I)v_2=
\begin{pmatrix}
3-\lambda&2 \\
-2 & -1-\lambda
\end{pmatrix}v_2=v_1
\end{equation}
which, written as an augmented matrix, is
\begin{equation}
\left[\begin{array}{cc|c}
3-\lambda&2 & 1 \\
-2 & -1-\lambda & -1
\end{array}\right]
\end{equation}
I now get as given by Moo, with Gaussian elimination, the matrix:
\begin{equation}
\left[\begin{array}{cc|c}
1 & 1 & 1/2 \\
0 & 0 & 0
\end{array}\right]
\end{equation}
and have the second (generalized) eigenvector: $e_2=\left[\begin{array}{@{}c@{}}
\frac{1}{2} \\
0
\end{array} \right]$
.
So the homogeneous solution is:
\begin{equation}
y_h=e^{\lambda_1 t}e_1+e^{\lambda_2t}e_2=e^{t}\left[\begin{array}{@{}c@{}}
1 \\
-1
\end{array} \right]+e^{t}\left[\begin{array}{@{}c@{}}
\frac{1}{2} \\
0
\end{array} \right]
\end{equation}
At this stage, it remains to find the particular solution. We know that it must be in the form of:
\begin{equation}
y_p=Ce^{-t}
\end{equation}
and thus the general solution is:
\begin{equation}
y_p=y_h+Ce^{-t}=e^{t}\left[\begin{array}{@{}c@{}}
1 \\
-1
\end{array} \right]+e^{t}\left[\begin{array}{@{}c@{}}
\frac{1}{2} \\
0
\end{array} \right]+Ce^{-t}
\end{equation}
But can this be said?
| You have what is called a deficient matrix, so you need to find a generalized eigenvector.
We have the system
$$\left[\begin{array}{@{}c@{}}
x' \\
y'
\end{array} \right]= \left[\begin{array}{@{}c@{}}
3&2 \\
-2 & -1
\end{array} \right]\left[\begin{array}{@{}c@{}}
x \\
y
\end{array} \right]+\left[\begin{array}{@{}c@{}}
2e^{-t} \\
e^{-t}
\end{array} \right]$$
We find a repeated eigenvalue of $\lambda_{1,2} = 1$ and we can find a single eigenvector of
$$v_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$$
Finding generalized eigenvectors is not a simple topic and requires work to learn the ins and outs, but in this case, we will use this example.
Solve for $v_2$ using the row-reduced-echelon-form (RREF) of $[A-\lambda I]v_2 = [A -I]v_2 = v_1 $
We get the augmented matrix
$$
\left[\begin{array}{rr|r}
1 & 1 & -\dfrac{1}{2} \\ 0 & 0 & 0
\end{array}\right]
$$
We can choose
$$v_2 = \begin{bmatrix} -\dfrac{1}{2} \\ 0 \end{bmatrix}$$
Update: For the eigenvalues, we find
$$|A - \lambda I| = \begin{vmatrix}
-\lambda +3 & 2 \\
-2 & -\lambda -1 \\
\end{vmatrix} = (-\lambda+3)(-\lambda - 1) -2(-2) = \lambda ^2-2 \lambda +1 = 0$$
This results in
$$\lambda_{1, 2} = 1$$
To find the generalized eigenvector, we solve (you are actually using the eigenvalue $\lambda = 1$ below)
$$[A - \lambda I]v_2 = [A - 1 I]v_2 = \begin{bmatrix}
2 & 2 \\
-2 & -2 \\
\end{bmatrix}v_2 = v_1 = \begin{bmatrix}
-1 \\
1 \\
\end{bmatrix}$$
That is
$$\begin{bmatrix}
2 & 2 \\
-2 & -2 \\
\end{bmatrix}v_2 = \begin{bmatrix}
-1 \\
1 \\
\end{bmatrix}$$
As an augmented matrix, this is
$$
\left[\begin{array}{rr|r}
2 & 2 & -1 \\ -2 & -2 & 1
\end{array}\right]
$$
The RREF (Gaussian Elimination) is
$$
\left[\begin{array}{rr|r}
1 & 1 & -\dfrac{1}{2} \\ 0 & 0 & 0
\end{array}\right]
$$
From this, we can choose
$$v_2 = \begin{bmatrix} -\dfrac{1}{2} \\ 0 \end{bmatrix}$$
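The defining relations of the eigenvector chain can be checked numerically (my own sketch): $(A-I)v_2=v_1$ and $(A-I)v_1=0$.

```python
import numpy as np

A = np.array([[3.0, 2.0], [-2.0, -1.0]])
I2 = np.eye(2)
v1 = np.array([-1.0, 1.0])    # eigenvector
v2 = np.array([-0.5, 0.0])    # generalized eigenvector

chain1 = (A - I2) @ v2        # should equal v1
chain2 = (A - I2) @ v1        # should be the zero vector
```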
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4335192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to prove that if $\mu$ is finite, then it is sum-finite I'm having some trouble with how to prove that if $\mu$ is finite then it is sum-finite.
I know that $\mu$ is finite on a measurable space $(E,A)$ if $\mu(E)<\infty$, and that $\mu$ is sum-finite on a measurable space $(E,A)$ if $\mu=\sum_{n=1}^{\infty} \mu_n$, where each $\mu_n$, $n\in \mathbb{N}$, is a finite measure.
But I don't know how to tie these definitions together.
| Hint: The trivial measure $\zeta$ that assigns 0 measure to all sets is finite.
Solution:
$$ \mu = \mu + \sum_{n=2}^\infty\zeta $$ is an expression for $\mu$ as a countable sum of finite measures.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4335302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Residue of Rankin Selberg L-function Let $f$ be a normalized holomorphic cusp form with weight $k$, level $N$. The Fourier expansion of $f$ can be written as
\begin{align*}
f(z)=\sum_{n=1}^{\infty} \lambda_f(n)n^{(k-1)/2} e^{2\pi inz}
\end{align*}
The Rankin-Selberg convolution is defined as $L(s, f\times f) = \sum_{n=1}^{\infty}\frac{\lambda_f(n)^2}{n^s}$ for $\Re s>1$. How to calculate the residue at $s=1$, i.e. $\mathrm{Res}_{s=1}L(s, f\times f)$?
Thanks in advance!
| The integral representation of $L(s,f\times f)$ (with suitable normalization) was shown (by Rankin and by Selberg, in 1939) to be obtained by integrating $|f|^2$ against the Eisenstein series $E_s$. If we have the normalizations set up appropriately, then the residue at the first pole is the residue of that integral, which is the integral of $|f|^2$ against the residue of $E_s$ at $s=1$ (a constant, etc.) So, up to normalization (which is not so hard to nail down), that first residue is the integral of $|f|^2$...
In addition to the two papers from 1939, by now most introductions to modular forms include this computation. I treat the simplest case in a small essay http://www.math.umn.edu/~garrett/m/v/basic_rankin_selberg.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4335434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Binary programming problem. Any closed solution and/or lower bound for this particular case? Consider the following problem:
$$\begin{array}{ll} \underset{{\bf x} \in \{0, 1\}^N}{\text{minimize}} & {\bf x}^\top {\bf A} \, {\bf x}\\ \text{subject to} & {\bf B} {\bf x} = {\bf c}\end{array}$$
where ${\bf A} \in \{0, 1\}^{N \times N}$ is symmetric, ${\bf B} \in \{0, 1\}^{M \times N}$ and ${\bf c} \in \{0, 1\}^{M}$. Additionally, $N = kM$.
I'm wondering if this kind of problem has been studied. Does this problem have a name? Does this problem have a closed form solution?
Alternatively, is there some results on the lower bound of this problem? I.e., does there exist a number $\phi$ such that the following holds?
$$\min_{{\bf x} \in \{0, 1\}^N} {\bf x}^\top {\bf A} \, {\bf x} \geq \phi$$
I'm not looking for an algorithm for the solution of a specific instance. Rather, I'm looking for theoretical results on this kind of problem.
| First, there is an obvious reduction of the problem. Suppose that $c_i=0$, and let $S=\lbrace j\in \lbrace 1,\dots,n\rbrace : B_{ij}=1\rbrace.$ If $S=\emptyset,$ the $i$-th constraint reduces to $0 = 0$ and can be dropped. Otherwise, a necessary condition for feasibility is that $x_i=0\ \forall i\in S,$ which lets you reduce the number of variables (and drop the constraint). After applying this for each 0 component in $c$, you are left either with an infeasible problem (because a constraint now reads $0=1$) or a version of the problem where all right hand side values are 1.
Assuming the problem did not implode, you now have a variant of the exact cover problem. The variation is that you have a quadratic rather than linear objective function. The exact cover problem (also known in optimization contexts as the equality-constrained set covering problem, the exact hitting set problem and probably six other things) with a linear objective has been studied. See, for instance, this paper by Balas and Padberg. I don't know whether the quadratic case has been studied.
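For small instances one can at least brute-force the problem (an illustrative sketch only; like set covering, the problem is NP-hard in general, so this does not scale — the toy instance below is my own):

```python
from itertools import product

def solve_small(A, B, c):
    """Enumerate x in {0,1}^N, keep those with Bx = c, minimize x^T A x."""
    N = len(A)
    best, best_x = None, None
    for x in product((0, 1), repeat=N):
        if all(sum(B[i][j] * x[j] for j in range(N)) == c[i]
               for i in range(len(B))):
            val = sum(x[i] * A[i][j] * x[j]
                      for i in range(N) for j in range(N))
            if best is None or val < best:
                best, best_x = val, x
    return best, best_x

# toy instance: choose exactly one of x1,x2 and exactly one of x3,x4,
# with A the adjacency matrix of the path 1-2-3-4
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
B = [[1, 1, 0, 0], [0, 0, 1, 1]]
c = [1, 1]
best, best_x = solve_small(A, B, c)
```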
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4335539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does conjugation imply equivariantly conjugation? Let $G$ be a finite group acting on a finite dimensional vector space $V$. Let $A$, $B\in End_G(V)$ be two equivariant endomorphisms, i.e.
$$
gAg^{-1}=A, ~gBg^{-1}=B, \forall g \in G.
$$
Now suppose that $A$ and $B$ are conjugate, i.e. there exists a $J\in Gl(V)$ such that
$$
JAJ^{-1}=B.
$$
Then is it true that $A$ and $B$ are $G$-equivariantly conjugate? More precisely, can we find a $\tilde{J}\in Gl(V)$ such that $\tilde{J}$ is also $G$-equivariant and
$$
\tilde{J}A\tilde{J}^{-1}=B?
$$
A natural candidate of $\tilde{J}$ is $\frac{\sum_{g\in G}gJg^{-1}}{|G|}$. But I cannot prove it is invertible.
If the statement is not true, is there any counter-example?
| Consider $G = S_3$ and let $V = 1 \oplus \text{sgn}$ be the direct sum of the trivial and sign representations.
Let $A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ and $B = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$, then these are both $G$-equivariant endomorphisms of $V$ conjugate by any $J = \begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix}$ where $a,b \in \mathbb{C}^*$. These are the only possible change of basis matrices.
Now, for any $\sigma \in S_3$ such that sgn($\sigma) = -1$ we have that $\sigma$ acts as $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ on $V$, by abuse of notation label this matrix $\sigma$, then
$$\sigma J \sigma^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 & a \\ b & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & -a \\ -b & 0 \end{pmatrix} = -J.$$ So $J$ is not $S_3$-equivariant for any $a, b$.
In fact, by Schur's Lemma, $\dim\operatorname{Hom}_G(V,V) = \dim\operatorname{Hom}_G(1 \oplus \text{sgn},1 \oplus \text{sgn}) = 2$, so every $G$-equivariant endomorphism is a linear combination of $A$ and $B$ above; in particular it is diagonal, so the candidate $\tilde{J}$ never stood a chance.
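The two computations above are easy to confirm numerically (my own sketch, with arbitrary nonzero $a,b$):

```python
import numpy as np

A = np.diag([1.0, 2.0])
B = np.diag([2.0, 1.0])
sigma = np.diag([1.0, -1.0])          # action of an odd permutation on V
a, b = 2.0, 5.0                       # any nonzero entries
J = np.array([[0.0, a], [b, 0.0]])    # antidiagonal change of basis

conjugates = J @ A @ np.linalg.inv(J)       # should equal B
twisted = sigma @ J @ np.linalg.inv(sigma)  # should equal -J
```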
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4335866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Formulas of basic modal logic involving only $\top$, $\bot$, propositional connectives and modalities I am given a Kripke model $\mathcal{M}=(W, R, L)$ where $W=\{w_1, w_2, w_3, w_4\}$ $R=\{(w_1, w_2), (w_2, w_3), (w_4, w_1), (w_4, w_3)\}$, and for all $w \in W$, $L(w)=\varnothing$.
For each $w \in W$, I then want to find a basic modal formula that is only true at $w$. The formulas may only involve $\top$, $\bot$, and propositional connectives and modalities. The modalities that I have are necessity $\square$ and possibility $\Diamond$.
I think that the formula $\square \bot$ is only true at $w_3$ since $w_3$ has no accessible worlds. But I am stuck finding formulas that are only true at the other worlds.
| Hint: Continuing your approach, at what world(s) is $\square\square\bot$ true? What about $\square\square\square\bot$?
Now can you take Boolean combinations of these formulas and the one you found to isolate the individual worlds?
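Spelling the hint out mechanically (my own sketch, using the frame from the question): $\square$ can be evaluated as a set operation on worlds, and Boolean combinations of the resulting sets then isolate each world.

```python
W = {1, 2, 3, 4}
R = {(1, 2), (2, 3), (4, 1), (4, 3)}

def succ(w):
    return {v for (u, v) in R if u == w}

def box(S):
    """Worlds where box(phi) holds, given the set S of worlds where phi holds."""
    return {w for w in W if succ(w) <= S}

bot = set()        # bottom holds at no world
b1 = box(bot)      # box bot
b2 = box(b1)       # box box bot
b3 = box(b2)       # box box box bot
# b1, b2 \ b1, b3 \ b2 and W \ b3 isolate w3, w2, w1, w4 respectively
```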
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4335958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A question about the standard euclidean group $\mathbb{SE}(3)$ I have a little perplexity about the following fact:
If we consider the standard euclidean group $\mathbb{SE}(3)$, an element $g$ can be represented by a matrix
$$\mathbb{SE}(3) \ni g=\begin{pmatrix}R & v \\ 0 & 1\end{pmatrix} \, \, , \, \, R \in \mathbb{SO}(3) \, , \, v \in \mathbb{R}^3.$$
So $g$ can be thought of as a mapping $g : \mathbb{R}^4 \rightarrow \mathbb{R}^4$ parametrized by $R,v$.
What I don't understand is, isn't the $\mathbb{SE}(3)$ group originally used to describe a rotation + translation in a $3$-dimensional space? Or should we use $\mathbb{SE}(2)$ to describe such transformations in $\mathbb{R}^3$?
| The additional last row is only to write it conveniently as a matrix group. Then we can also take the vector $w\in \Bbb R^3$ with one component more and have the action on $\Bbb R^4$ as
$$
\begin{pmatrix} R & v \cr
0 & 1 \end{pmatrix} \begin{pmatrix} w \cr 1 \end{pmatrix}=
\begin{pmatrix} Rw+v \cr 1\end{pmatrix}.
$$
But of course we can still view this just as the action on $\Bbb R^3$.
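In coordinates, the $4\times 4$ block matrix acting on $(w,1)$ really does return $(Rw+v,1)$; here is a small sketch with an arbitrary rotation about the $z$-axis:

```python
import numpy as np

theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])   # element of SO(3)
v = np.array([1.0, 2.0, 3.0])

g = np.eye(4)
g[:3, :3] = Rz   # rotation block
g[:3, 3] = v     # translation block

w = np.array([0.5, -1.0, 2.0])
out = g @ np.append(w, 1.0)   # equals (Rz @ w + v, 1)
```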
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4336194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$|S_{2N_m}(f)(0)| \ge c\alpha_m\log N_m + \mathcal O(1)$ (Stein & Shakarchi's Fourier Analysis) This question is related to the construction of a continuous function with a divergent Fourier series (at $0$), as done in Stein and Shakarchi's Fourier Analysis. The construction uses Lemma $2.4$, which I have posted here. For completeness, I shall provide all the necessary background.
For each $N\ge 1$, we define the following two functions on $[-\pi,\pi]$;
$$f_N(\theta) = \sum_{1\le |n|\le N} \frac{e^{in\theta}}{n} \quad \quad \widetilde{f_N}(\theta) = \sum_{-N \le n \le -1} \frac{e^{in\theta}}{n}$$
which are trigonometric polynomials of degree $N$. It is shown that $|\widetilde{f_N}(0)| \ge c\log N$, and $f_N$ is uniformly bounded in $N$ and $\theta$. From these, we construct $P_N$ and $\widetilde{P_N}$ by defining $$P_N(\theta) = e^{i(2N)\theta} f_N(\theta)\quad\quad \widetilde{P_N}(\theta) = e^{i(2N)\theta} \widetilde{f_N}(\theta)$$
Furthermore, for a $2\pi$-periodic integrable function $f$ on the circle, we define the $N$th partial sum of its Fourier series by $S_N(f)= \sum_{|n|\le N} \hat f(n) e^{in\theta}$.
Lemma $2.4$: $$S_M(P_N) = \begin{cases}P_N & M\ge 3N\\ \widetilde{P_N} & M = 2N\\ 0 & M < N\end{cases}$$
Finally, we take a convergent series of positive terms $\sum \alpha_k$ and a sequence of integers $\{N_k\}$ satisfying $N_{k+1} > 3N_k$ for all $k$, and $\alpha_k\log N_k \xrightarrow{k\to\infty} \infty$. For example, $\alpha_k = \frac{1}{k^2}$ and $N_k = 3^{2^k}$ would suffice. The required continuous function $f$, with divergent Fourier series at $0$, is defined as $$f(\theta) = \sum_{k=1}^\infty \alpha_k P_{N_k}(\theta)$$
Using the Weierstrass M-test, the convergence of $\sum_{k=1}^\infty \alpha_k P_{N_k}(\theta)$ due to convergence of $\alpha_k$ and uniform boundedness of $P_N$ (note that $|P_N| = |f_N|$), and the continuity of $\{P_{N_k}\}$, we conclude that the series which defines $f$, converges uniformly (and absolutely) to a continuous periodic function.
Then, the author goes on to say that by the aforementioned lemma, we have $$|S_{2N_m}(f)(0)| \ge c\alpha_m\log N_m + \mathcal O(1) \xrightarrow{m\to\infty} \infty$$
Question: How does one conclude $$|S_{2N_m}(f)(0)| \ge c\alpha_m\log N_m + \mathcal O(1)$$
My thoughts: $$\begin{align*}
S_{2N_m}(f)(0) = \sum_{|n| \le 2N_m} \hat f(n) &= \sum_{|n| \le 2N_m} \sum_{k=1}^\infty \alpha_k \hat{P_{N_k}}(n)\\ &= \sum_{k=1}^\infty \sum_{|n| \le 2N_m} \alpha_k \hat{P_{N_k}}(n)\\ &= \sum_{k=1}^\infty\alpha_k S_{2N_m} (P_{N_k})(0)
\end{align*}$$
since limits and finite sums always commute, and
$$\hat f(n) = \frac{1}{2\pi}\int_{-\pi}^\pi \sum_{k=1}^\infty \alpha_k P_{N_k}(\theta) e^{-in\theta}\, d\theta = \sum_{k=1}^\infty \frac{1}{2\pi}\int_{-\pi}^\pi \alpha_k P_{N_k}(\theta) e^{-in\theta}\, d\theta = \sum_{k=1}^\infty \alpha_k \hat{P_{N_k}}(n)$$
where the exchange of the integral and infinite sum is justified by the uniform convergence of $\sum_{k=1}^\infty \alpha_k P_{N_k}(\theta)$, which follows from the Weierstrass M-test.
| I figured it out on my own, so I am posting an answer for completeness's sake.
$$\begin{align*}
S_{2N_m}(f)(0) = \sum_{|n| \le 2N_m} \hat f(n) &= \sum_{|n| \le 2N_m} \sum_{k=1}^\infty \alpha_k \hat{P_{N_k}}(n)\\ &= \sum_{k=1}^\infty \sum_{|n| \le 2N_m} \alpha_k \hat{P_{N_k}}(n)\\ &= \sum_{k=1}^\infty\alpha_k S_{2N_m} (P_{N_k})(0)\\
&= \sum_{k=1}^{m} \alpha_k S_{2N_m} (P_{N_k})(0) \\ &= \alpha_m S_{2N_m} (P_{N_m})(0) + \sum_{k=1}^{m-1} \alpha_k S_{2N_m} (P_{N_k})(0)\\ &= \alpha_m \widetilde{P_{N_m}}(0) + \sum_{k=1}^{m-1} \alpha_k P_{N_k}(0)
\end{align*}$$
We have $$\left| \alpha_m \widetilde{P_{N_m}}(0) \right| = \alpha_m |\widetilde{f_{N_m}}(0)| \ge c \alpha_m \log N_m$$
and $$\left|\sum_{k=1}^{m-1} \alpha_k P_{N_k}(0) \right| \le \sup_{n\ge 1} |f_n(0)| \sum_{k=1}^\infty \alpha_k < \infty$$
so $$\begin{align*}|S_{2N_m}(f)(0)| &= \left|\alpha_m \widetilde{P_{N_m}}(0) + \sum_{k=1}^{m-1} \alpha_k P_{N_k}(0) \right| \\ &\ge \left|\alpha_m \widetilde{P_{N_m}}(0) \right| - \left|\sum_{k=1}^{m-1} \alpha_k P_{N_k}(0)\right|\\ &\ge c\alpha_m \log N_m - \sup_{n\ge 1} |f_n(0)| \sum_{k=1}^\infty \alpha_k \end{align*}$$
giving $$|S_{2N_m}(f)(0)| \ge c\alpha_m\log N_m + \mathcal O(1)$$
Note: To get $$\sum_{k=1}^\infty\alpha_k S_{2N_m} (P_{N_k})(0) = \sum_{k=1}^{m} \alpha_k S_{2N_m} (P_{N_k})(0)$$ we have used $N_{k+1} > 3N_k$ for all $k$, and Lemma $2.4$ as stated above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4336301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
double sum $\sum _{j=0}^{\infty } \sum _{k=0}^{\infty } \sqrt{j^2+k^2} z^{j+k}$ I am interested in the real zeros of the function:
$$\sum _{j=0}^{\infty } \sum _{k=0}^{\infty } \sqrt{j^2+k^2} z^{j+k}$$
I am looking for where the sum changes sign, but it is hard to compute. Using Mathematica,
$$\text{NSum}\left[(-1)^{j+k} \sqrt{j^2+k^2},\{j,1,\infty \},\{k,1,\infty \},\text{WorkingPrecision}\to 20\right]$$ gives ComplexInfinity, but I think this is not correct, because the value is approximately $0.14540$, computed with Maple using a Levin transform; this method can handle $z$ from $-3$ to $-1$, and using a double transform I have been calculating the following values,
and for $z =1$ a condensation of the alternating series is needed, which gives the value $0.0137496275139485217544804966782611$. There must be real zeros between $-4$ and $-3$ and between $1$ and $2$.
it is possible to get the same result with other methods??
| Using the formula
$$\sum _{j=1}^{\infty } \sum _{k=1}^{\infty } \frac{(-1)^{j+k}}{\left(j^2+k^2\right)^s}=2^{-2 s} \left(2^{2 s}-2\right) \zeta (2 s)-2^{-3 s} \left(2^s-2\right) \zeta (s) \left(\zeta \left(s,\frac{1}{4}\right)-\zeta \left(s,\frac{3}{4}\right)\right)$$
from Wolfram MathWorld and taking $s=-1/2$, the sum equals
$$\frac{1}{4} - 2\sqrt{2}\left(-2+\frac{1}{\sqrt{2}}\right)\zeta\left(-\tfrac{1}{2}\right)\left(\zeta\left(-\tfrac{1}{2},\tfrac{1}{4}\right)-\zeta\left(-\tfrac{1}{2},\tfrac{3}{4}\right)\right)\approx 0.145403,$$
which agrees with the result above. The rest is impossible to calculate at the moment.
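The closed form can also be checked numerically. The sketch below (assuming mpmath's convention that the two-argument `zeta(s, a)` is the Hurwitz zeta $\zeta(s,a)$) evaluates the right-hand side at $s=-1/2$; note that the first term is exactly $2\left(\tfrac12-2\right)\zeta(-1)=\tfrac14$:

```python
from mpmath import mp

mp.dps = 30  # working precision (decimal digits)
s = mp.mpf(-0.5)

# First term: 2^(-2s) (2^(2s) - 2) zeta(2s); at s = -1/2 this is exactly 1/4
term1 = 2**(-2*s) * (2**(2*s) - 2) * mp.zeta(2*s)

# Second term: -2^(-3s) (2^s - 2) zeta(s) (zeta(s,1/4) - zeta(s,3/4)),
# with mp.zeta(s, a) the Hurwitz zeta function
term2 = -2**(-3*s) * (2**s - 2) * mp.zeta(s) * \
        (mp.zeta(s, mp.mpf(1)/4) - mp.zeta(s, mp.mpf(3)/4))

value = term1 + term2
print(value)  # approximately 0.145403, per the value quoted above
```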
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4336429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Least prime divisor of $n!-1$ forms divergent series.
If we have a sequence $\left\{\alpha_{n}\right\}_{n=3}^{\infty}$
such that $\alpha_{n}$ is the least prime divisors of $n !-1$
To Show: $$\sum_{n=3}^{\infty} \frac{1}{\alpha_{n}} \rightarrow \infty$$
I need help in completing my proof :
$\Rightarrow$ Claim : The least prime divisor of $n!-1$ is greater than $n$.
If on the contrary we have prime $p$ s.t. $p \leq n$
then clearly $p \mid n !$
and if we assume $p$ also divides $n !-1$ then $p$ divides $n !-(n !-1)=1$
Hence contradiction.
So the least prime divisor of $n !-1$ is greater than $n$.
Now how to prove that $\sum_{n=3}^{\infty} \frac{1}{\alpha_{n}} \rightarrow \infty$ once I have shown $\alpha_n \in \{n+1, \ldots, n !-1 \}$.
| Wilson's theorem helps to prove that the series diverges: for all integers $n>1$ for which $n+2$ is prime, we have $$(n+2)\mid n!-1$$
This follows from $$(n+1)!\equiv -1\mod (n+2)$$ and $$n+1\equiv -1\mod (n+2)$$
In this case, the smallest prime factor is obviously $n+2$
Hence the sum contains all the reciprocals of the primes $p\ge 5$. Since the sum of the reciprocals of the primes diverges, the claim follows.
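Both facts are easy to check by machine. The sketch below verifies, for small $n$, that the least prime factor of $n!-1$ exceeds $n$, and that $n!\equiv 1 \pmod{n+2}$ whenever $n+2$ is prime (the Wilson-theorem step above):

```python
import math

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def least_prime_factor(m):
    # the smallest divisor >= 2 of m is automatically prime
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m itself is prime

# alpha_n > n for small n (trial division stays feasible up to about n = 14)
for n in range(3, 15):
    assert least_prime_factor(math.factorial(n) - 1) > n

# Wilson step: if p = n + 2 is prime, then n! ≡ 1 (mod p), i.e. p | n! - 1
for n in range(2, 300):
    p = n + 2
    if is_prime(p):
        fact = 1
        for i in range(2, n + 1):
            fact = fact * i % p
        assert fact == 1
print("checks passed")
```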
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4336595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Proof of Cantor-Bernstein-Schröder theorem using the Knaster-Tarski Theorem I'm currently reading Hrbacek's Introduction To Set Theory and exercise 4.1.11 goes like this:
Where Lemma 1.7 is $$\text{If } A_1\subseteq{B}\subseteq{A} \text{ and } |A_1| = |A| , \text{ then }|B| = |A|, $$
$f$ is a bijection from $A$ to $A_1$ such that $A_1\subseteq{B}\subseteq{A}$ and $g$ is defined as:$$g(x)= \begin{cases}
f(x) & \text{if $x\in{C}$}\\
x & \text{if $x\in{D}$}\\
\end{cases} $$
I did the exercise, but it seems to me that the proof still works if we simply let $F(X) = f[X]$.
Is this true or am I missing something?
| If you define $F(X)=f[X]$, then the fixed-point of $F$ will fail to contain $A-B$ (for example, $\varnothing$ is a fixed-point!). Then $g$ will fail to map the elements of $A-B$ into $B$.
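A concrete instance may help (the specific sets are my own choice, for illustration): take $A=\mathbb N=\{0,1,2,\dots\}$, $B=\{1,2,3,\dots\}$, $A_1=\{2,3,4,\dots\}$, and $f(n)=n+2$. Iterating $F(X)=(A\setminus B)\cup f[X]$ from $\varnothing$ reaches the least fixed point $C=\{0,2,4,\dots\}$, and the resulting $g$ maps into $B$; with $F(X)=f[X]$ instead, the least fixed point is $\varnothing$, so $g$ becomes the identity and misses $0\in A\setminus B$. A sketch, truncated to a finite window:

```python
LIMIT = 50  # work inside the finite window {0, ..., LIMIT}

def f_img(X):
    # f(n) = n + 2, truncated to the window
    return {n + 2 for n in X if n + 2 <= LIMIT}

def least_fixed_point(F):
    # Kleene iteration from the bottom element (the empty set)
    X = set()
    while F(X) != X:
        X = F(X)
    return X

A = set(range(LIMIT + 1))
B = A - {0}  # so A \ B = {0}

C = least_fixed_point(lambda X: {0} | f_img(X))  # F(X) = (A\B) ∪ f[X]
assert C == {n for n in A if n % 2 == 0}         # the even numbers

g = {x: (x + 2 if x in C else x) for x in range(LIMIT - 1)}
assert all(v in B for v in g.values())            # g lands in B
assert len(set(g.values())) == len(g)             # g is injective here

assert least_fixed_point(f_img) == set()          # with F(X) = f[X]: empty!
print("least fixed point starts:", sorted(C)[:5])
```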
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4336709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Calculating $\int_{-\infty}^{\infty}\frac{\cos x}{x^2+x+1}dx$ using the Residue Theorem I am supposed to evaluate $$\int_{-\infty}^{\infty}\frac{\cos x}{x^2+x+1}dx$$ using
$$f(z)=\frac{e^{iz}}{z^2+z+1}$$ My idea was to define a contour $$\gamma = [-R,R] \cup \alpha$$ where $$\alpha$$ is the upper semicircumference of radius $R$. But I am stuck here.
| Some hints:
The polynomial $z^2+z+1$ has roots at $r_1=\frac{-1 + \sqrt{3}i}{2}$ and $r_2=\overline{r}_1$. If you are using the upper semicircle, then you need the residue at $r_1$, the root in the upper half-plane. Write
$$
f(z) = \frac{e^{iz}}{(z-r_1)(z-r_2)}.
$$
The pole at $r_1$ is simple, so the residue is found by cancelling the $(z-r_1)$ factor and evaluating the rest at $z=r_1$:
$$
\operatorname{Res}_{z=r_1} f = \frac{e^{ir_1}}{r_1-r_2} = \frac{e^{ir_1}}{\sqrt{3}\,i}.
$$
It remains to check that the integral over the arc $\alpha$ vanishes as $R\to\infty$ (the standard estimate works, since $|e^{iz}|\le 1$ in the closed upper half-plane and the denominator grows like $R^2$), and then take the real part of $2\pi i$ times the residue.
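As a numerical sanity check (a sketch using mpmath, and the simple-pole formula $\operatorname{Res}_{z=r_1}f = e^{ir_1}/(r_1-r_2)$), one can integrate $f$ around a small circle centred at $r_1$ and compare:

```python
from mpmath import mp, mpc, mpf, quad, exp, pi, sqrt

mp.dps = 25
I = mpc(0, 1)
r1 = mpc(mpf(-1)/2, sqrt(3)/2)   # root of z^2 + z + 1 in the upper half-plane
r2 = r1.conjugate()

def f(z):
    return exp(I*z) / ((z - r1)*(z - r2))

# residue via a small circle around r1: (1/2πi) ∮ f(z) dz
eps = mpf(1)/10
def integrand(t):
    z = r1 + eps*exp(I*t)
    return f(z) * I*eps*exp(I*t)

res_numeric = quad(integrand, [0, 2*pi]) / (2*pi*I)
res_analytic = exp(I*r1) / (r1 - r2)     # simple-pole formula

print(abs(res_numeric - res_analytic))   # should be tiny
```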
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4336822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If $A$ is an infinite set, prove that $A$ has a proper infinite subset I considered the following approach by contradiction: Let $A_1\in A$. Assume that the proper subset $A\smallsetminus\{A_1\}$ is finite, so that there exists a bijection $f:A\smallsetminus\{A_1\} \to C_k$ with $C_k=\{1,2,3,\ldots,k\}$ for some $k \in \mathbb N$.
What I wanted to do next is somehow prove that since $A\smallsetminus\{A_1\}$ is finite, then $A$ is finite, contradicting the given and proving my assumption false. However I have no idea how to proceed from here, or even if what I've done so far is plausible. Any help please?
| I must assume that you mean that there exists a subset $B\subsetneq A$ such that it is bijective with $A$. Because otherwise it would only be enough to remove a point or two and you would have what you want: let $x\in A$, then $B=A\setminus\{x\}\subsetneq A$. End.
Since $A$ is infinite, there exists an injective $f:\mathbb N\to A$. We take $X=f(\mathbb N)\subset A$. Then $X=\{a_1,a_2,\dots\}$ with $a_i\neq a_j$ if $i\neq j$. Let $B=(A\setminus X)\cup \{a_2,a_4,\dots,a_{2n},\dots\}\subset A$. Now let's define $g:A\to B$ by $$g(a)=\left\{\begin{array}{ccl}a&,&a\in A\setminus X\\
a_{2n}&,&a=a_{n}\end{array}\right.$$
It is clearly surjective, let's see that $g$ is injective. We suppose that $g(y)=g(z)$.
If $y,z\in A\setminus X$, then $y=g(y)=g(z)=z$.
If $y,z\in X$, then $y=a_n$ and $z=a_m$ for some $n,m\in\mathbb N$. So $g(y)=g(z)\Rightarrow a_{2n}=a_{2m}\Rightarrow 2n=2m\Rightarrow n=m \Rightarrow a_n=a_m\Rightarrow y=z$.
If $y\in A\setminus X$ and $z\in X$, then $y=g(y)=g(z)=a_n$ for some $n$, so $y=a_n\in X$. Contradiction.
Hence $g$ is a bijection between $A$ and $B\subsetneq A$.
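As a concrete instance of this construction (my choice of $A=\mathbb Z$ and $a_n=n$ is only for illustration): take $A=\mathbb Z$, let $X=\{1,2,3,\dots\}$ with $a_n=n$, so that $B=(A\setminus X)\cup\{2,4,6,\dots\}$, and $g$ fixes the non-positive integers while sending $a_n\mapsto a_{2n}$. A quick finite-window check:

```python
def g(a):
    # a in A \ X (non-positive integers) is fixed; a = a_n = n maps to a_{2n} = 2n
    return a if a <= 0 else 2 * a

def in_B(a):
    # B = (A \ X) ∪ {even positive integers}
    return a <= 0 or a % 2 == 0

sample = range(-100, 101)
images = [g(a) for a in sample]

assert all(in_B(v) for v in images)      # g maps into B
assert len(set(images)) == len(images)   # g is injective on the sample
assert not in_B(3)                       # B is proper: 3 lies in A \ B
print("finite-window check passed")
```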
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4337007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Proving single solution of initial value problem is increasing Given the initial value problem
$$
y'(x)=y(x)-\sin{y(x)}, y(0)=1
$$
I need to prove that there is a solution defined on $\mathbb{R}$ and that the solution, $u(x)$, is an increasing function where $\lim_{x\to -\infty}{u(x)=0}$
The first part is quite easy, let $f(x, y) = y - \sin{y}$ $$f_y(x, y)=1-\cos{y}\Longrightarrow \left|f_y(x,y)\right| \le 2$$
hence $f(x, y)$ is Lipschitz continuous in $y$ for every $(x, y)\in (-\infty, \infty)\times (-\infty, \infty)$, and therefore the initial value problem has a unique solution on $\mathbb{R}$
I know that the solution, $u(x)$ has the form $$u(x)=y_0+\int_{x_0}^x{f(t, u(t))dt}=1+\int_0^x{(u(t)-\sin{u(t))}dt}$$
and also $$u(x)=\lim_{n\to\infty}u_n(x)$$ where $$u_n(x)=y_0+\int_{x_0}^x{f(t, u_{n-1}(t))}dt=1+\int_0^x{(u_{n-1}-\sin{(u_{n-1})})}dt,\space\space\space u_0(x)=y_0=1$$
But I wasn't able to find a way to prove that the solution is increasing and has the desired limit.
EDIT:
I got a hint to look at $y\equiv 0$
EDIT2:
Let's assume that the solution $u(x)$ is decreasing at $(x_1, u(x_1))$. Because $u'(x)$ is continuous, there must be a point $(x_2, u(x_2))$ where $u'(x_2)=0=u(x_2)-\sin{u(x_2)}\Longrightarrow u(x_2)=0$
now if I look at the initial value problem
$$y'(x)=y(x)-\sin{y(x)}, y(x_2)=0$$
I can prove that it has a single solution in $\mathbb{R}$ like I already did, and $u(x)$ is my solution, but $u_1(x)\equiv 0$ is also a solution to this problem, contradiction, hence $u'(x)>0$ for $x\in\mathbb{R}$, i.e $u(x)$ is increasing.
But I still don't know how I can show the limit at $-\infty$
| Let us introduce the new variable $z$ defined by $z = -x$.
Then
$$
y'(x)
= \frac{dy}{dx}
= \frac{dy}{dz}\frac{dz}{dx}
= -\frac{dy}{dz}
$$
Therefore, we have the differential equation with the reversed variable direction
$$
\frac{dy}{dz}
=
-y + \sin y
$$
This is in fact the dynamics of a simple damped pendulum. Let $V(y) = y^2/2$ be the Lyapunov function of the new ODE. Then,
$$
\frac{dV}{dz}
=
y\frac{dy}{dz}
=
-y^2 + y\sin y
$$
and $dV/dz < 0$ for all $ y \in \mathbb{R}\setminus \{0\} $, while $dV/dz = 0$ at $y = 0$. Hence every sublevel set of $V$ is positively invariant with respect to the ODE, so solutions stay bounded and every solution starting at $z = z_0$ is defined for all $z \ge z_0$. Let $v$ be the solution of the ODE with $v(0) = 1$. Since $V$ is a strict Lyapunov function for the equilibrium $y=0$ (and is radially unbounded), we have that $\lim_{z \to \infty}v(z) = 0$.
Finally, we conclude that $\lim_{x \to -\infty}u(x) = 0$.
We can also show that $v(z)$ cannot change its sign and monotonically decreases whenever $v(z)>0$.
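Both conclusions can be corroborated numerically by integrating $y'=y-\sin y$, $y(0)=1$ backwards in $x$ (a minimal fixed-step RK4 sketch of my own; near $0$ the right-hand side behaves like $y^3/6$, so the decay as $x\to-\infty$ is only algebraic, roughly $\sqrt{3/|x|}$):

```python
import math

def rhs(y):
    return y - math.sin(y)

def rk4_step(y, h):
    # one classical Runge-Kutta step of size h
    k1 = rhs(y)
    k2 = rhs(y + h*k1/2)
    k3 = rhs(y + h*k2/2)
    k4 = rhs(y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

# integrate from x = 0 down to x = -1000 with step h = -0.01
y = 1.0
h = -0.01
values = [y]
for _ in range(100000):
    y = rk4_step(y, h)
    values.append(y)

# the solution stays positive and decreases as x decreases ...
assert all(v > 0 for v in values)
assert all(b < a for a, b in zip(values, values[1:]))
# ... and approaches 0 (algebraically, like sqrt(3/1000) ≈ 0.0548 at x = -1000)
assert values[-1] < 0.06
print(values[-1])
```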
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4337129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Minimize the expected value of the product of 2 normally distributed variables So, there are 2 variables, $X$ and $Y$, both normally distributed. We are given that $E(X)=E(Y)=0$ and $Var(X)=2$, while $Var(Y)=8$. Additionally, $Corr(X,Y)=-\frac{1}{2}$. The question is to find the smallest value of $E(X^5Y^3)$.
My first instinct was to somehow use the definition of covariance:
$$Cov(X^5,Y^3)=E(X^5Y^3)-E(X^5)E(Y^3)$$
$$E(X^5Y^3)=Cov(X^5,Y^3)+E(X^5)E(Y^3)$$
I knew that I could find the $E(Y^3)$ using the moment generating function. Since we are given that $Y\sim N(0,8)$, the moment generating function is
$$M_Y(t)=e^{4t^2}$$
So, $$E(Y^3)=\frac{d^3}{dt^3}M_Y(0)=0\implies E(X^5Y^3)=Cov(X^5,Y^3)$$ The same goes for $E(X^5)=\frac{d^5}{dt^5}M_X(0)=0$. Another idea is to now play with the definition:
$$Cov(X,X^4Y^3)=E(X^5Y^3)-E(X)E(X^4Y^3)=E(X^5Y^3)$$
So that $$Cov(X,X^4Y^3)=Cov(X^5,Y^3)$$
Because we are given the correlation for a reason, I got $Corr(X,Y)=\frac{Cov(X,Y)}{\sqrt{Var(X)Var(Y)}}=-\frac{1}{2}=\frac{Cov(X,Y)}{4} \implies Cov(X,Y)=-2$.
However, I do not see the connection with my previous result. Perhaps I need to use some different method and not covariance. Can anyone point me in the right direction?
$\pmb{Edit:}$ I have used a suspicious formula from the paper by Kan, URL: https://www-2.rotman.utoronto.ca/~kan/papers/moment.pdf on page 5 and got the following result for my particular case:
$$E(X^5Y^3)=\frac{45}{128}\sum_{j=0}^{1}{\frac{-1}{(2-j)!(1-j)!(2j+1)!}}=\frac{45}{128}\left(-\frac{1}{2}-\frac{1}{6}\right)=-\frac{15}{64}=-0.234375$$
But this result requires verification. Can anyone tell me if the formula is legit and if yes, then did I use it correctly?$\pmb{Edit\space №2:}$ The question about the formula is resolved. I have also found that $E(X^5Y^3)=5760\sqrt{7}Corr(X^5,Y^3)$ from the definition of correlation coefficient, so the question is: is it possible to minimize the correlation coefficient now?
| The formula is correct, but your application of the formula is not. The correct expectation using this formula is
$$\begin{align}
\operatorname{E}[X^5 Y^3]
&= \sigma_1^5 \sigma_2^3 \frac{5! 3!}{2^{(5+3)/2}} \sum_{j=0}^{\lfloor \min(5,3)/2 \rfloor} \frac{(2\rho)^{2j+1}}{(\frac{5-1}{2} - j)! (\frac{3-1}{2} - j)! (2j+1)!} \\
&= (\sqrt{2})^5 (\sqrt{8})^3 (45) \sum_{j=0}^1 \frac{(-1)^{2j+1}}{(2-j)!(1-j)!(2j+1)!} \\
&= 5760 \left( \frac{-1}{2! 1! 1!} + \frac{-1}{1! 0! 3!}\right) \\
&= -5760 \left( \frac{1}{2} + \frac{1}{6} \right) \\
&= -3840.
\end{align}$$
That said, there is no guarantee that $(X,Y)$ is bivariate normal. As such, the formula is not applicable because you are interested in the minimum value of this expectation under all distributions that satisfy the given criteria, and it is not assured that the minimum is attained for the case when $(X,Y)$ is in fact bivariate normal.
We can use Cauchy-Schwarz in the form $$|\operatorname{E}[X_1 X_2]|^2 \le \operatorname{E}[X_1^2]\operatorname{E}[X_2^2]$$ to show that $\operatorname{E}[X^5 Y^3] \ge - 5760\sqrt{7}$, but this lower bound is not necessarily attainable.
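For the bivariate-normal case, the value $-3840$ can be double-checked directly from Isserlis' theorem (Wick's formula): the eighth-order moment is the sum, over all $105$ perfect pairings of the eight factors $X,X,X,X,X,Y,Y,Y$, of the products of the pairwise covariances. A short sketch:

```python
def pairings(elems):
    """Yield all perfect matchings of a list of (distinguishable) elements."""
    if not elems:
        yield []
        return
    first = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i+1:]
        for rest_pairs in pairings(rest):
            yield [(first, elems[i])] + rest_pairs

# Var X = 2, Var Y = 8, Corr(X, Y) = -1/2  =>  Cov(X, Y) = -2
cov = {('X', 'X'): 2, ('Y', 'Y'): 8, ('X', 'Y'): -2, ('Y', 'X'): -2}

factors = ['X']*5 + ['Y']*3
moment = 0
count = 0
for matching in pairings(factors):
    count += 1
    prod = 1
    for a, b in matching:
        prod *= cov[(a, b)]
    moment += prod

print(count, moment)  # 105 pairings, E[X^5 Y^3] = -3840
```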
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4337485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to get started on proving that the limit of a set A exists in its Accumulation Set Let $X$ and $Y$ be metric spaces. If $Y$ is complete, $A\subset X$, $f:A\rightarrow Y$ is a uniformly continuous function. Prove
(1) If $x_0\in \bar{A}\backslash A$, $\{x_n\}_{n=1}^{+\infty}\subset A$, $\lim_{n\rightarrow+\infty} x_n=x_0$. Then the limit
\begin{equation*}
\lim_{n\rightarrow\infty}f(x_n)
\end{equation*}
exists.
Since $A \subset \bar{A}$, the condition $x_0\in\bar{A}\backslash A$ says that $x_0$ lies in the closure of $A$ but not in $A$ itself. From the definitions, that means $x_0 \in A'$, the set of accumulation points of $A$. But I don't know where to go from here.
| $f(x_1), f(x_2) ,\ldots$ is a sequence in $Y$ and you want to show it converges. You are given that $Y$ is complete. This is a big hint that you need to show that the sequence $f(x_1), f(x_2), \ldots$ is Cauchy.
You know that $x_1, x_2, \ldots$ is Cauchy. See if you can use an important property of $f$ to establish that $f(x_1), f(x_2), \ldots$ is Cauchy.
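A concrete illustration of the hint (my choice of $f(x)=\sqrt{x}$ on $A=(0,1]$ is only an example): $x_n=1/n$ is Cauchy with limit $0\notin A$, $f$ is uniformly continuous, and the image sequence $f(x_n)$ is again Cauchy, hence convergent in the complete space $\mathbb R$:

```python
import math

def f(x):
    return math.sqrt(x)  # uniformly continuous on (0, 1]

xs = [1.0/n for n in range(1, 2001)]
fs = [f(x) for x in xs]

# Cauchy check on the tail: for n, m >= N the terms f(x_n) cluster together
N = 1000
tail = fs[N-1:]
spread = max(tail) - min(tail)
assert spread < 0.04             # terms cluster: 1/sqrt(1000) ≈ 0.0316
assert abs(fs[-1] - 0.0) < 0.03  # the limit extends f to x_0 = 0 by f(0) := 0
print(spread)
```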
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4337679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
On cardinality of a set of continuous functions What is the cardinality of the set of real valued continuous functions $f$ on $[0,1]$ such that $f(x)$ is rational whenever $x$ is rational?
I know that there are at least countably infinitely many, because for any rational $q$, the constant function $f(x)=q$ on $[0,1]$ is in the set in question. Also, other functions like $f(x)=x^n$, for any non-negative integer $n$, are in this set.
But what throws me at this point is my inability to infer whether there are other functions that make the set uncountable. If there aren't, how do I prove that there are only countably many?
| The cardinality of this set of functions $\mathcal{F}$ is $\mathfrak{c}$, the continuum.
The cardinality of $\mathcal{F}$ is at least $\mathfrak{c}$ because given a real irrational number $\alpha \in (0,1)$ and any real number $\beta$, we can construct a function with the given properties and having $f(\alpha) = \beta$. In particular, if we fix $\alpha = 1/\sqrt{2}$ or your favorite irrational in the interval, this construction will describe as many different functions as there are real values for $\beta$, and $|\mathbb{R}| = \mathfrak{c}$.
Let $0 = a_1 < a_2 < a_3 < \cdots$ be any strictly increasing sequence of rational numbers converging to $\alpha$, and let $1 = b_1 > b_2 > b_3 > \cdots$ be any strictly decreasing sequence of rational numbers converging to $\alpha$. The function $f$ is:
$$ f(x) = \begin{cases}
\frac{\lfloor k \beta \rfloor}{k} + \left(\frac{\lfloor(k+1)\beta\rfloor}{k+1} - \frac{\lfloor k \beta \rfloor}{k}\right) \frac{x-a_k}{a_{k+1}-a_k} & a_k \leq x < a_{k+1} \\
\beta & x = \alpha \\
\frac{\lceil k \beta \rceil}{k} + \left(\frac{\lceil (k+1) \beta \rceil}{k+1} - \frac{\lceil k \beta \rceil}{k}\right) \frac{b_k-x}{b_k-b_{k+1}} & b_{k+1} < x \leq b_k
\end{cases} $$
$f(x)$ takes a rational value at every rational $x$, since all floor ($\lfloor \cdot \rfloor$) and ceiling ($\lceil \cdot \rceil$) function values are integers, $k$ is a positive integer, and all $a_k$ and $b_k$ are rational. $f$ is linear on the intervals $(a_k,a_{k+1})$ and $(b_{k+1},b_k)$, and continuity at the corner points $a_k$ and $b_k$ is easy to show. For continuity at $\alpha$, note that if $a_k \leq x < a_{k+1}$, since $f$ is linear on the interval,
$$ |f(x)-\beta| \leq \max(|f(a_k)-\beta|, |f(a_{k+1})-\beta|) $$
$$ |f(a_k)-\beta| = \left| \frac{\lfloor k \beta \rfloor}{k} - \beta \right| = \frac{k \beta - \lfloor k \beta \rfloor}{k} < \frac{1}{k} $$
$$ |f(x)-\beta| \leq \max \left(\frac{1}{k}, \frac{1}{k+1}\right) = \frac{1}{k} $$
Similarly, $b_{k+1} < x < b_k$ also implies $|f(x)-\beta| \leq \frac{1}{k}$. Since $j > k$ implies $1/j < 1/k$ and $f(\alpha)-\beta = 0$, it's also true that $|f(x)-\beta| \leq \frac{1}{k}$ for all $x \in [a_k,b_k]$. So for any real $\epsilon > 0$, choose positive integer $K > 1/\epsilon$, and $|x-\alpha| < \min(\alpha-a_K, b_K-\alpha)$ implies $|f(x)-f(\alpha)| \leq 1/K < \epsilon$; $f$ is continuous at $\alpha$.
The cardinality of $\mathcal{F}$ is at most $\mathfrak{c}$ because we can define an injection from the set of functions to the set of sequences of rational numbers, which has cardinality $\mathfrak{c}$. This mapping is
$$ \Phi : f \mapsto \left(f(0), f(1), f\!\left(\frac{1}{2}\right), f\!\left(\frac{1}{4}\right), f\!\left(\frac{3}{4}\right), f\!\left(\frac{1}{8}\right), \ldots\right) $$
To show this $\Phi$ is injective, suppose $f_1$ and $f_2$ are continuous functions mapping to the same sequence. That is, $f_1\left(\frac{m}{2^n}\right) = f_2\left(\frac{m}{2^n}\right)$ for numbers of the form $\frac{m}{2^n}$ (with $m,n$ non-negative integers) in the domain. Suppose by way of contradiction that $f_1(x_0) \neq f_2(x_0)$ for some real $x_0 \in (0,1)$. Since each function is continuous, there exist $\delta_1>0$ and $\delta_2>0$ such that
$$|x-x_0| < \delta_1 \implies |f_1(x)-f_1(x_0)| < \frac{1}{2}|f_1(x_0)-f_2(x_0)|$$
$$|x-x_0| < \delta_2 \implies |f_2(x)-f_2(x_0)| < \frac{1}{2}|f_1(x_0)-f_2(x_0)|$$
Let $\delta = \min(\delta_1, \delta_2)$. Since numbers of the form $\frac{m}{2^n}$ are dense, there is such a number $x_1 = \frac{m}{2^n}$ in the interval $(x_0-\delta, x_0+\delta)$. Then $|x_1-x_0| < \delta_1$ and $|x_1-x_0| < \delta_2$ and $f_1(x_1) = f_2(x_1)$, so combining the continuity statements with the triangle inequality gives
$$ \begin{align*} |f_1(x_0)-f_2(x_0)| &= \big|[f_1(x_0)-f_1(x_1)] - [f_2(x_0)-f_2(x_1)]\big| \\
&\leq |f_1(x_1)-f_1(x_0)| + |f_2(x_1)-f_2(x_0)| \\
&< \frac{1}{2}|f_1(x_0) - f_2(x_0)| + \frac{1}{2}|f_1(x_0)-f_2(x_0)| \\
&= |f_1(x_0)-f_2(x_0)|
\end{align*} $$
Contradiction: it is not possible that $f_1(x_0) \neq f_2(x_0)$. So if continuous functions have the same sequence $\Phi(f_1) = \Phi(f_2)$, the functions are identical; $\Phi$ is injective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4338045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
A net converges to a point iff every subnet accumulates in that point. While working on a takehome for my functional analysis course I stumbled upon this small lemma
A net $(x_i)_{i\in I}$ in a topological space $X$ converges to a point $x\in X$ if and only if every subnet has an accumulation point at $x$.
This is a slightly stronger formulation of the following well known result in topology.
A net $(x_i)_{i\in I}$ in a topological space $X$ converges to a point $x\in X$ if and only if every subnet converges to $x$.
I managed to come up with the following proof, but I doubt my judgement because it seems a little unbelievable for me to come up with a stronger version of an existing mathematical result. Can you check my proof?
The implication from left to right is trivial: if $(x_i)_{i\in I}$ converges to $x$ then so does any subnet.
Convergence to $x$ implies that the subnet has an accumulation point at $x$ as well, because this is a weaker statement.
Now if $(x_i)_{i \in I}$ does not converge to $x$, it has a subnet which does not converge to $x$, $(x_{\sigma(j)})_{j\in J}$.
This means there is an open neighbourhood $U$ of $x$ such that for any $j \in J$ there exists a $j' \geq j$ such that $x_{\sigma(j')} \not\in U$.
Using the map
$$J \to J : j \to j'$$
we find the subnet $(x_{\sigma(j')})_{j \in J}$, which has no accumulation point at $x$.
| The question is a duplicate so I will close it. Thank you for your help Henno.
Every subnet of $(x_d)_{d\in D}$ has a subnet which converges to $a$. Does $(x_d)_{d\in D}$ converge to $a$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4338272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
When does $x + x^{-1}$ divide $x^n +x^{-n}$? I have been attempting to solve the following problem and am not sure if I have solved it correctly and how to prove a certain case at the end,
Let $x$ be a real number such that $t = x + x^{-1}$ is an integer greater
than $2$. Prove that $t_n = x^n + x^{-n}$ is an integer for all positive
integers $n$. Determine the values of $n$ for which $t$ divides $t_{n}$.
For the first part I noted that $t_{n+1} = t\cdot t_n-t_{n-1}$, which is simple to prove just by multiplying the $t$'s together. Then I proved that $t_{2}$ is an integer, employing a similar method to the formula, and argued that $t_{n}$ must then be an integer by induction.
For the second part of the question I first considered $x+x^{-1} \equiv 0 \pmod{x+x^{-1}}$, which implies $x\equiv -x^{-1}$. With this fact, consider $x^n + x^{-n}$ where $n$ is odd. I get $x^{2k+1} + x^{-(2k+1)}\equiv x^{2k+1} + (x^{-1})^{2k+1}\equiv x^{2k+1} - x^{2k+1} \equiv 0 \pmod{x+x^{-1}}$.
My question is then: how do I prove that $n$ cannot be even if $t$ divides $t_n$? Also, is it okay to use modular arithmetic for this question, since $x$ is not actually an integer?
For the even case I was thinking I could maybe do something with infinite descent, since $t_{n+2}\equiv -t_n \pmod{t}$, so if $t$ divides $t_n$ then it must divide every $t_m$ whose index has the same parity. So if $t$ doesn't divide $t_{2}$ there is a contradiction; I'm just not sure how to show this.
| Define a sequence of polynomials $\{T_n\}_{n\ge 0}$ in $\mathbb{Z}[X]$ by $T_0=1$, $T_1=X$ and
$$
T_{n+1} = 2XT_n-T_{n-1}.
$$
These are called the Chebyshev Polynomials of the first kind, and they have many fascinating properties. For example,
Claim: For all $n$, we have $T_n\left(\frac{x^{-1}+x}{2}\right)=\frac{x^n+x^{-n}}{2}\in\mathbb{Q}[x,x^{-1}]$.
Proof: By induction on $n$. The cases $n=0,1$ are clear. Now, for the induction step,
$$
\begin{align*}
T_{n+1}\left(\frac{x+x^{-1}}{2}\right)&=2\left(\frac{x+x^{-1}}{2}\right)\cdot\frac{x^{n}+x^{-n}}{2}-\frac{x^{n-1}+x^{1-n}}{2}\\
&= \frac12\left[(x+x^{-1})\left(x^{n}+x^{-n}\right)-x^{n-1}-x^{1-n}\right]\\
&=\frac{x^{n+1}+x^{-n-1}}{2}
\end{align*}
$$
as desired. $\square$
Corollary: Putting $x=e^{i\theta}$, we find that $T_n(\cos \theta)=\cos(n\theta)$ for all $\theta\in\mathbb{R}$. In fact, $T_n$ is the unique polynomial with this property.
To answer your question, we consider a slightly altered version of the Chebyshev polynomials. Define a sequence $\{f_n\}_{n\ge 0}$ by $f_n=2T_n(\frac X2)$. Note that $f_0=2$ and $f_1=X$. Furthermore, for all $n\ge 1$, we have
$$
f_{n+1} = 2T_{n+1}\left(\tfrac{X}{2}\right) = 2\left(2\cdot \tfrac X2\,T_n(X/2)-T_{n-1}(X/2)\right)=X\cdot 2T_{n}(X/2) - 2T_{n-1}(X/2) = Xf_n-f_{n-1},
$$
so the $f_n$ still have integer coefficients. Moreover, the property of $T_n$ turns into
$$
f_n(x+x^{-1}) = x^n+x^{-n}.
$$
Now, suppose that we have some $a\in\mathbb{C}$ such that $k:=a+a^{-1}$ is an integer. Then $a^{n}+a^{-n}=f_n(a+a^{-1})=f_n(k)$ must also be an integer, since we are evaluating a polynomial with integer coefficients at an integer!
Next, we can write $f_n=Xg+f_n(0)$, where $g$ is some polynomial, and $f_n(0)$ is the constant coefficient of $f_n$. Suppose that $a+a^{-1}=k$ is an integer, then
$$
a^{n}+a^{-n} = f_n(a+a^{-1}) = k\cdot g(k)+f_n(0).
$$
Because $g(k)$ is a polynomial with integer coefficients evaluated in an integer, it must be an integer. Therefore, $a^{n}+a^{-n}\equiv f_n(0)\pmod k$, so $a+a^{-1}$ divides $a^n+a^{-n}$ if and only if it divides $f_n(0)$.
Evaluating the recursion in $0$, we find that $f_0(0)=2$ and $f_1(0)=0$ and
$$
f_{n+1}(0) = -f_{n-1}(0).
$$
We find that (this can very easily be proven by induction)
$$
f_n(0) = \begin{cases}0\quad&\text{if $2\nmid n$}\\(-1)^{n/2}2&\text{if $2\mid n$}\end{cases}
$$
So if $n$ is odd, we have $a+a^{-1}\mid 0 = f_n(0)$ and $a+a^{-1}\mid a^{n}+a^{-n}$. However, if $n$ is even, then $a^n+a^{-n}\equiv \pm 2 \pmod{a+a^{-1}}$. Because it was given that $a+a^{-1}>2$, we do not have divisibility in this case.
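The whole analysis can be sanity-checked with the integer recursion $t_0=2$, $t_1=t$, $t_{n+1}=t\,t_n-t_{n-1}$ from the question: $t_n$ is always an integer, and modulo $t$ it follows the pattern $0$ for odd $n$ and $(-1)^{n/2}\cdot 2$ for even $n$, so for $t>2$ divisibility holds exactly for odd $n$. A short sketch:

```python
def t_seq(t, length):
    """t_n = x^n + x^{-n} where x + 1/x = t, via t_{n+1} = t*t_n - t_{n-1}."""
    seq = [2, t]
    while len(seq) < length:
        seq.append(t*seq[-1] - seq[-2])
    return seq

for t in range(3, 8):
    seq = t_seq(t, 13)
    for n, tn in enumerate(seq):
        if n == 0:
            continue
        # divisibility holds exactly for odd n (t > 2), matching f_n(0)
        assert (tn % t == 0) == (n % 2 == 1)
        if n % 2 == 0:
            sign = (-1)**(n//2)
            assert tn % t == (2*sign) % t   # t_n ≡ (-1)^{n/2}·2 (mod t)
print("pattern verified for t = 3..7, n = 1..12")
```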
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4338641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Shouldn't the definition of a prime number be changed to account for negative factors? At the moment, from what I can gather the current definition of a Prime Number is; "a number that is divisible only by itself and $1$ (e.g. $2, 3, 5, 7, 11$)". However such a prime number like $7$ can also be made by multiplying $-1$ and $-7$. Hence shouldn't the definition be changed to "it can only be divisible by itself and one as well as $-1$ and its negative counterpart"?
| Wikipedia says that a prime is not a product of other natural numbers:
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4338766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Are there such things as 3-dimensional (and higher) analogues of matrices, and if so, do they have any applications? A matrix is a group of numbers arranged in a rectangle. I wonder, has anyone studied 3-dimensional and higher analogues of matrices? For example, there could be such a thing as a 2 by 2 by 2 3d matrix, whose entries are all equal to 1. Has anyone else defined these entities, and more importantly, are they used in mathematics?
| I will just elaborate a little on a famous example alluded to in previous comments/answers:
Suppose you start at the North pole and walk forwards straight down to the equator, then sidestep one quarter of the way round the equator, before walking backwards to the North pole. Then all your walking was along straight lines (geodesics) and you never turned - yet at the end you are back where you started, but facing $\pi/2$ radians to the direction you started. In effect you have been turned by the curvature of the surface of the Earth.
If you divide the amount you got turned, $\pi/2$, by the area you have traversed around, $(4\pi R^2)/8$, then you get the curvature of the Earth: $\frac1{R^2}$. We are assuming the Earth is a perfect sphere here.
More generally, given a point on a surface, you can take a small square and divide the amount you get turned going around the square, by its area. Taking the limit as the area of the square shrinks to $0$ gives you the curvature of the surface at that point.
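The quarter-turn in the walk above can be reproduced numerically: parallel transport along a great-circle arc is the restriction of the rotation of $\mathbb{R}^3$ that carries you along that arc, so the round trip composes three $90^\circ$ rotations (a minimal sketch; the coordinates are my own choice, with the North pole at $(0,0,1)$):

```python
import math

def rot_x(t):  # rotation about the x-axis by angle t
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):  # rotation about the y-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):  # rotation about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, v):
    return [sum(m[i][j]*v[j] for j in range(3)) for i in range(3)]

q = math.pi/2
v = [1.0, 0.0, 0.0]       # tangent vector at the North pole (0,0,1)
v = apply(rot_y(q), v)    # walk down the meridian to (1,0,0)
v = apply(rot_z(q), v)    # sidestep a quarter of the equator to (0,1,0)
v = apply(rot_x(q), v)    # walk back up the meridian to the North pole

# we started with (1,0,0); we return with (0,1,0): turned by pi/2
assert all(abs(a - b) < 1e-12 for a, b in zip(v, [0.0, 1.0, 0.0]))
print(v)
```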
Now suppose we have a point in an $n$-dimensional space. We must pick two co-ordinate axes to draw our little square parallel to. Then we must pick two more co-ordinate axes so we can see how much a vector pointing along the first gets turned into the second. Then along these four co-ordinate axes we can take the limit as before, of angle turned divided by area.
Doing this for all combinations of co-ordinate axes, we get an $n\times n\times n\times n$ grid of numbers called the Riemann curvature tensor. Einstein's field equations from general relativity relate these numbers for the curvature of spacetime to the stress-energy tensor, intuitively describing how matter curves the space around it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4338933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Study the injectivity and surjectivity of the function f Let $f:\mathbb{R}\to\mathbb{R}$, where
$
f(x) =
\begin{cases}
2x+1, & \text{if $x$ is rational} \\
\sqrt2 x+3, & \text{if $x$ is irrational}
\end{cases}
$
The injectivity:
I worked out the injectivity of the function by finding a rational number $x_1$ and an irrational number $x_2$ such that $f(x_1)=f(x_2)$ but $x_1\neq x_2$, namely $x_1=2$, $f(2)=2\cdot 2+1=5$, and $x_2=\sqrt{2}$, $f(\sqrt{2})=\sqrt{2}\cdot \sqrt{2}+3=5$. So $f(2)=f(\sqrt{2})$ with $2\neq\sqrt{2}$, which means that $f$ is not injective.
The surjectivity:
If $x$ is rational $\implies y=2x+1\implies x=\frac{y-1}{2}$, and if $x$ is irrational $x=\frac{y-3}{\sqrt{2}}$;
I don't really know how to show whether the function is surjective, but I know that $y=2x+1$ (the first equation) will reach every rational number. I'm not sure about the second equation: if there is some $y$ it doesn't reach, then the function is not surjective...
I was wondering if there is a general way to study the injectivity and surjectivity of these types of functions. Thanks for the help.
| Your proof that $f$ is not injective is correct.
The given function $f$ is not surjective. Consider the irrational number $\sqrt{2}+3$. Since $2x+1$ is rational for any rational number $x$, we should find an irrational number $x$ such that
$$\sqrt2 x+3=\sqrt{2}+3$$
which holds only if $x=1$ which is rational. Contradiction.
More generally, in the same way, we show that $f(\mathbb{R})$ does not include all irrational numbers of the form $\sqrt{2}q+3$ where $q$ is any rational number different from zero. As you already noted $f(\mathbb{R})\supset \mathbb{Q}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4339099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Showing that $ \sup_{x \in A}(|f(x)|-|g(x)|)=\sup_{x\in A} |f(x)|-\inf_{x\in A}|g(x)|$ Let $f,g$ be real, bounded functions.
I'm not sure whether this equation is true or whether there exists a counterexample.
For proving that $$\sup_{x \in A}\big(|f(x)|-|g(x)|\big) \,=\, \sup_{x\in A} |f(x)|-\inf_{x\in A}|g(x)|,$$
I use the definitions of $\sup$ and $\inf$, but I cannot get the full result. I can only prove that $$\sup_{x \in A}\big(|f(x)|-|g(x)|\big)\,\leq\,\sup_{x\in A} |f(x)|-\inf_{x\in A}|g(x)|.$$
| \begin{align} \sup_{x \in A}(|f(x)|-|g(x)|)&=\sup_{x\in A} |f(x)|-\inf_{x\in A}|g(x)|\end{align}
Let $h= |f|=|g|$ (that is, take $f=g$) and let $A=I\subset \Bbb{R}$ be an interval.
Then, \begin{align} \sup_{x \in I}(h(x)-h(x))&=\sup_{x\in I} h(x)-\inf_{x\in I} h(x)\end{align}
$0=\sup_{x\in I} h(x)-\inf_{x\in I} h(x)$
$0=\omega_h(I)$
Now, choose a function $h:I\to \Bbb{R}$ such that $\omega_h(I) \neq 0$
Does this provide a counterexample?
$h: [0, 1]\to \Bbb{R}$ defined by
$$h(x)=\begin{cases} 1 &\text{ if } x\in [0,\frac{1}{2}] \\ 2 &\text{ if } x\in (\frac{1}{2},1]\end{cases}$$
Then, \begin{align} \omega_h(I) &=\sup_{x\in I} h(x)-\inf_{x\in I} h(x)\\&=2-1\\&=1\end{align}
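A small grid computation illustrating this counterexample (my addition, sampling the step function $h$):

```python
import numpy as np

# Step function h on [0, 1]: value 1 on [0, 1/2], value 2 on (1/2, 1].
xs = np.linspace(0.0, 1.0, 10001)
h = np.where(xs <= 0.5, 1.0, 2.0)

# With f = g = h, the left-hand side sup(|f| - |g|) is identically 0,
# while the right-hand side is the oscillation sup h - inf h = 1.
lhs = np.max(h - h)
rhs = np.max(h) - np.min(h)
```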
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4339238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Lang's proof that there exists $x>0$ such that $\cos x=0$ In Undergraduate Analysis on p. 90, Lang assumes the existence of two functions $f$ (sine) and $g$ (cosine) satisfying the conditions $f(0)=0$, $g(0)=1$, $f'=g$ and $g'=-f$. He then goes on to show that there exists $x>0$ such that $\cos x=0$.
With your help, I would like to make some of the steps in his proof more explicit.
Suppose that no such number exists. Since $\cos$ is continuous, we
conclude that $\cos x$ cannot be negative for any value $x>0$ (by
intermediate value theorem). Hence $\sin$ is strictly increasing for
all $x>0$, and $\cos x$ is strictly decreasing for all $x>0$...
It's clear why $\sin$ is strictly increasing on the interval $(0,\infty)$. Why is $\cos$ strictly decreasing on the interval $(0,\infty)$? This would require $\sin x>0$ for all $x\in(0,\infty)$. But how can I show this?
... Let $a>0$. Then $0<\cos 2a=\cos^2a-\sin^2a<\cos^2a$. By induction,
we see that $\cos(2^n a)<(\cos a)^{2^n}$ for all positive integers
$n$. Hence $\cos(2^na)$ approaches $0$ as $n$ becomes large, because
$0<\cos a<1$. Since $\cos$ is strictly decreasing for $x>0$, it
follows that $\cos x$ approaches $0$ as $x$ becomes large, and hence
$\sin x$ approaches $1$. In particular, there exists a number $b>0$
such that $$\cos b<\frac{1}{4}\text{ and }\sin b>\frac{1}{2}.$$
If $\lim_{n\rightarrow\infty}\cos(2^na)=0$ for all $a>0$, how can I conclude that $\lim_{x\rightarrow\infty}\cos x=0$? Let $\epsilon>0$. Then I would have to show that there exists $s\in\mathbb{R}$ such that
$$(\forall x)(x\in(s,\infty)\implies|\cos x|<\epsilon).$$
It's not immediately clear how to find such an $s$. I would have to use the fact that $\lim_{n\rightarrow\infty}\cos(2^na)=0$ for all $a>0$, but I don't know how.
| Try this, which I think has some common elements with @egreg’s answer:
Our functions $f$ and $g$ are differentiable, with $f'=g$, $g'=-f$, so that $f^2+g^2=1$ identically, and $|f|\le1$, $|g|\le1$.
For $g$ to have no zero, it must have a limit, $\lim_{x\to\infty}g(x)\ge0$. There are two cases, positive limit or zero; let me treat the latter case first.
If $\>0=\lim_{x\to\infty}g(x)$, then $\lim_{x\to\infty}g'(x)=0$ as well, so $\lim_{x\to\infty}f(x)=0$, contradicting $f^2+g^2=1$.
In case $\lim_{x\to\infty}g(x)=a>0$, then $\forall x, f'(x)\ge a$, and $\forall x\ge0, f(x)\ge ax$ as well, contradicting boundedness of the function $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4339382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Two orthogonal basis of polynomials with respect to a same inner product have the same roots Let $V_n$ be the vector space generated by the set $\{1, x, \ldots, x^n\}$.
We say that two polynomials $f(x)$ and $g(x)$ are orthogonal (with respect to a inner product) if
$$\langle f(x), g(x) \rangle = 0.$$
Consider the inner product
$$\langle f(x), g(x) \rangle = \int_{a}^{b}f(x)g(x)dx.$$
Let $\{p_0(x), \ldots, p_n(x)\}$ and $\{q_0(x), \ldots, q_n(x)\}$ two orthogonal basis of $V_n$ with respect to the inner product above such that the $p_j(x)$ and $q_j(x)$ have degree $j$, $j \in \{0, \ldots, n\}$. It's not hard to prove that $p_n(x)$ is orthogonal to all polynomials with degree less than $n$.
The following proposition is what I mention in the comments (and I know how to prove it).
Proposition 1. The polynomial $p_n(x)$ has $n$ distinct roots in $(a,b)$ for any $n \geq 1$.
The other proposition (that I'm trying to prove) is the following:
Proposition 2. The polynomials $p_n$ and $q_n$ have the same roots. That is, if the roots of $p_n(x)$ are $x_1, \ldots, x_n$, then the roots of $q_n(x)$ are also $x_1, \ldots, x_n$.
My idea was have some contradiction with the sign of the inner product, but I don't know how to progress or if It's not the right idea:
Exists real coefficients $c_0, c_1, \ldots, c_n$ such that $q_n(x) = c_np_n(x) + \ldots + c_1p_1(x) + c_0p_0(x)$. We know that $p_n(x)$ is orthogonal to all polynomials with degree less than $n$, so
\begin{align*}
\langle p_n(x), q_n(x) \rangle &= \int_{a}^{b} p_n(x)(c_n p_n(x) + \ldots + c_0p_0(x))dx \\
&= c_n\int_{a}^{b} p_n(x)p_n(x)dx \\
&= c_n\langle p_n(x),p_n(x)\rangle
\end{align*}
| With little loss we can assume the $p_k$ have been arranged so that they have unit norm, namely $\langle p_k, p_k \rangle = 1$.
You have defined $c_n$ such that $q_n = \sum_k c_k p_k$. Then for $k < n$, using orthogonality of the $p_k$,
$$\langle q_n, p_k\rangle = c_k.$$
But the left side is also zero since $q_n$ is orthogonal to every polynomial of lower degree. Thus $c_0 = c_1 = \cdots = c_{n-1} = 0$, so $q_n$ is a multiple of $p_n$ and both must have the same roots.
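Proposition 2 can also be checked numerically (a sketch I'm adding, taking $[a,b]=[-1,1]$ and Gauss-Legendre quadrature for the inner product): Gram-Schmidt applied to two differently scaled monomial bases produces degree-2 orthogonal polynomials with the same roots $\pm 1/\sqrt{3}$.

```python
import numpy as np

# Inner product <f, g> = \int_{-1}^{1} f(x) g(x) dx via Gauss-Legendre
# quadrature (20 nodes: exact for polynomials of degree <= 39).
nodes, weights = np.polynomial.legendre.leggauss(20)

def inner(c1, c2):
    # c1, c2: coefficient arrays, lowest degree first
    f = np.polynomial.polynomial.polyval(nodes, c1)
    g = np.polynomial.polynomial.polyval(nodes, c2)
    return float(np.sum(weights * f * g))

def gram_schmidt(monomials):
    basis = []
    for m in monomials:
        c = np.array(m, dtype=float)
        for b in basis:
            coef = inner(c, b) / inner(b, b)
            bb = np.zeros_like(c)
            bb[: len(b)] = b          # pad b to the length of c
            c = c - coef * bb
        basis.append(c)
    return basis

# Two orthogonal bases built from differently scaled monomials.
p = gram_schmidt([[1], [0, 1], [0, 0, 1]])
q = gram_schmidt([[3], [0, -2], [0, 0, 5]])

roots_p = np.sort(np.polynomial.polynomial.polyroots(p[2]))
roots_q = np.sort(np.polynomial.polynomial.polyroots(q[2]))
```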
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4339519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Transition matrix exercise I found this exercise on the internet ( I translated it from French so sorry if it's scuffed. ) I have no idea how to start it, any hint would be appreciated.
Let $(X_n)$ be a Markov chain with $Q$ being its transition matrix.
Let $E$ be the set of values the process take and let $x,y\in E$.
Prove that $\sum_{n=0}^{\infty}Q^n(y,x)\le \sum_{n=0}^{\infty}Q^n(x,x)$
EDIT based on @Michh's answer :
So I kinda get the general idea now but I didn't get these two transitions in his proof :
1/ Which conditional expectation property did we use to get from line 1 to line 2 here? :
$$\begin{align}
\mathbb{E}_y[V_x] &= \mathbb{E}_y[V_x \mathbf{1}_{\{T_x<\infty\}}]\\
&= \mathbb{E}_y[ \mathbf{1}_{\{T_x<\infty\}}\mathbb{E}_y[V_x|\mathcal{F}_{T_x}]].
\end{align}$$
2/ Also I don't get how we switched from the subscript $y$ to the subscript $x$ here: $\mathbb{E}_y[V_x|\mathcal{F}_{T_x}]=\mathbb{E}_x[V_x]$. I thought the strong Markov property only gives us independence, so I see why we can drop $\mathcal{F}_{T_x}$, but not why the subscript changes.
| Denote by $V_x = \sum_{n=0}^\infty \mathbf{1}_{\{X_n = x\}}$ the number of visits of state $x$. Then what you want to prove is
$$ \mathbb{E}_y [V_x] \leq \mathbb{E}_x[V_x].$$
To do this the idea is to apply the strong Markov property at the hitting time of $x$: $$T_x = \inf\{n\geq 1\colon \, X_n = x\}.$$
First notice that on $\{T_x = \infty\}$, we have $V_x = 0$ $\mathbb{P}_y$-a.s. Then
$$\begin{align}
\mathbb{E}_y[V_x] &= \mathbb{E}_y[V_x \mathbf{1}_{\{T_x<\infty\}}]\\
&= \mathbb{E}_y[ \mathbf{1}_{\{T_x<\infty\}}\mathbb{E}_y[V_x|\mathcal{F}_{T_x}]].
\end{align}$$
By the strong Markov property, $\mathbb{E}_y[V_x|\mathcal{F}_{T_x}] = \mathbb{E}_x[V_x].$ This gives
$$\mathbb{E}_y[V_x] = \mathbb{P}_y(T_x<\infty)\mathbb{E}_x[V_x]\leq \mathbb{E}_x[V_x].$$
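For a chain where $\sum_n Q^n$ genuinely converges (a strictly substochastic $Q$, so the chain is transient), the inequality can be checked directly with the matrix identity $\sum_{n\ge 0} Q^n = (I-Q)^{-1}$ (a numerical sketch I'm adding):

```python
import numpy as np

# A strictly substochastic kernel Q (each row sums to < 1): the chain is
# transient, so N = sum_{n>=0} Q^n = (I - Q)^{-1} is finite entrywise and
# N[y, x] = E_y[number of visits to x] (counting time 0).
Q = np.array([[0.2, 0.3, 0.1],
              [0.4, 0.1, 0.2],
              [0.1, 0.2, 0.3]])
N = np.linalg.inv(np.eye(3) - Q)

# Claim: sum_n Q^n(y, x) <= sum_n Q^n(x, x), i.e. every column of N is
# maximized on the diagonal.
diag = np.diag(N)
```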
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4339726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The Stiefel-Whitney classes of a tangent bundle as a manifold For any smooth manifold $M$, the tangent bundle $TM$ as a manifold is always orientable. In other words, the first Stiefel-Whitney class $w_1$ of the manifold $TM$ always vanishes.
Question: Does the manifold $TM$ also have vanishing higher Stiefel-Whitney classes $w_{i>1}$? If not, how can we compute them provided we know the Stiefel-Whitney classes of $M$?
p.s. I'm concerned about $w_2$ in particular.
| The characteristic classes of a tangent bundle $TM$, treated as a manifold itself, are defined using the double tangent bundle $TTM$. Given the projection $\pi: TM \to M$, the double tangent bundle splits as $\pi^* TM \oplus \pi^* TM$, and we have $$w_1(TTM) = 2 w_1(\pi^* TM) \equiv 0,$$ so $TM$ is orientable as you've asserted.
Similarly, $$w_2(TTM) = 2w_2(\pi^* TM) + w_1(\pi^* TM)^2 \equiv \pi_* w_1(TM)^2,$$ so whether it vanishes depends on $w_1(TM)^2$ in the cohomology ring of $M$.
You can work out what the other Stiefel-Whitney classes $w_i(TTM)$ are, using the formula $w(TTM) = \pi^* w(TM)^2$, where $w = 1 + w_1 + w_2 + \cdots$ is the total Stiefel-Whitney class.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4339861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Markov strong property exercise Let $(X_n)$ be a Markov chain with Q being its transition matrix.
Let $T=\inf\{n \ge 0:X_n \in A\}$ and Let $u(x)=P_x(T<+\infty)$.
Prove that $u$ verifies the system :
$$
\begin{cases}{}
u(x)=1 &\text{if } x\in A \\
u(x)=(Qu)(x) &\text{if } x\notin A
\end{cases}
.
$$
My attempt and understanding :
My understanding is that $T$ represents "the first time the chain enters the subset $A$" and $u(x)$ represents "the probability of ever hitting $A$ starting from $X_0=x$" (because $P_x(T=+\infty)$ should represent the probability of never entering $A$).
That being said, the first part of the system makes sense because if $x \in A$ then $T=0$ (the smallest possible $n$): we are already in $A$, so the probability of reaching $A$ should equal $1$.
The problem is the second part: how am I going to use the strong Markov property to prove that?
| The starting point is that if $x \not \in A$ then $P(T<\infty \mid X_0=x) =\sum_y P(T<\infty \mid X_0=x,X_1=y) P(X_1=y \mid X_0=x)$. This follows from using the total probability formula via conditioning on the outcome of the first step. Then you need to simplify that using the other properties of the situation.
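As a concrete illustration (my addition, using a symmetric gambler's-ruin walk and writing $Q$ for the transition matrix), the system can be solved as a linear system and compared with the classical hitting-probability formula $u(x)=(N-x)/N$:

```python
import numpy as np

# Symmetric walk on {0, ..., N}: states 0 and N absorbing, target set A = {0}.
# Started at N the chain never reaches 0, so u(N) = 0; u(0) = 1; and at
# interior states u = Qu.
N = 10
Q = np.zeros((N + 1, N + 1))
Q[0, 0] = 1.0
Q[N, N] = 1.0
for x in range(1, N):
    Q[x, x - 1] = Q[x, x + 1] = 0.5

M = np.eye(N + 1) - Q            # interior rows encode u - Qu = 0
M[0, :] = 0.0; M[0, 0] = 1.0     # boundary condition u(0) = 1
M[N, :] = 0.0; M[N, N] = 1.0     # boundary condition u(N) = 0
b = np.zeros(N + 1); b[0] = 1.0
u = np.linalg.solve(M, b)

expected = np.array([(N - x) / N for x in range(N + 1)])
```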
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4340025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Volume bounded between sphere and three planes I found a question in my homework that I have been trying to solve for days with minimal progress. We're given a sphere of form $x^2+y^2+z^2=9$ and three planes, $x=1,y=1,z=1$
The sphere in question:
We're asked to find the volume bounded above the planes and below the sphere using an integral. I tried using Cartesian coordinates, only to get a nonintegrable expression with way too many roots. I tried polar coordinates: still too many roots. Finally I used spherical coordinates. I split the question into two parts. Since the wedge is symmetric about the plane $x=y$, I solved for the volume bounded by $y=1$, $z=1$, $x=y$, and the sphere. This way the boundary would exclude the $x=1$ plane. I defined the integral as follows $$\int_{\arccos(\sqrt{14}/4)}^{\pi/4}\int_{\arccos(1/3)}^{\arccos(\sqrt{7}/3)}\int_{\frac{1}{\cos\theta \sin\phi}}^{3}\rho^2\sin\phi \,d\rho\, d\phi\, d\theta$$
The arccos values come from computing $\phi$ and $\theta$ at the points of intersection between the sphere and planes. $\pi/4$ is the upper bound for $\theta$ because I cut the question in half to avoid the x=1 plane from messing with my boundaries. That way I can just double my answer.
I don't think this answer is right. It's impossible to solve without a computer or converting the arccos values into lower decimal places so they're computable by hand. My answer was in the ballpark of my estimate using a triangular prism but was clearly too high. If anyone knows a more reasonable way to compute the boundaries I'd appreciate any help I can get.
| Set up is easier in cylindrical coordinates so I will start with that and then go to spherical coordinates.
At the intersection of the plane $z = 1$ and the sphere, $x^2 + y^2 = r^2 = 8$
At the intersection of $y = 1, z = 1$ and the sphere, $x = \sqrt{7}$
$ \displaystyle \tan\theta = \frac{y}{x} = \frac{1}{\sqrt7}$
For $ \displaystyle \theta \leq \frac{\pi}{4}$, the lower bound of $r$ is defined by plane $y = 1 \implies r = \csc\theta$
So the integral is,
$ \displaystyle \int_{\arctan(1/\sqrt7)}^{\pi/4} \int_{\csc\theta}^{2\sqrt2} r (\sqrt{9-r^2} - 1) ~ dr ~ d\theta$
Using symmetry you can double the volume or write the other integral as,
$ \displaystyle \int_{\pi/4}^{\arctan(\sqrt7)} \int_{\sec\theta}^{2\sqrt2} r (\sqrt{9-r^2} - 1) ~ dr ~ d\theta$
Setting it up in spherical coordinates, I will go in the order $d\rho, d\phi, d\theta$.
I will set it up for $ \theta \leq \pi/4$ and we can multiply the volume by $2$.
$\displaystyle \int_{\arctan(1 / \sqrt7)}^{\pi/4} \int_{\arcsin((\csc\theta)/3)}^{\arctan(\csc\theta)} \int_{\csc\theta \csc\phi}^3 \rho^2 \sin\phi ~ d\rho ~d\phi ~d\theta ~ + $
$ \displaystyle \int_{\arctan(1 / \sqrt7)}^{\pi/4} \int_{\arctan(\csc\theta)}^{\arccos(1/3)} \int_{\sec\phi}^3 \rho^2 \sin\phi ~ d\rho~ d\phi ~d\theta $
A bit of explanation for the above set up -
We know the bounds of $\theta$ from the work in cylindrical coordinates. For bounds of $\phi$, note that when $\rho$ is bound below by $y = 1$, lower bound of $\phi$ comes from intersection of $y = 1$ and the sphere, which is $y = 1 = 3 \sin\theta \sin\phi$. The upper bound of $\phi$ comes from intersection of plane $ y = 1$ and $z = 1$. At $z = 1, \rho = \sec\phi$ so $y = 1 = \sec\phi \sin\phi \sin\theta \implies \phi = \arctan(\csc\theta)$.
Now when $\rho$ is bound below by $z = 1$, lower bound of $\phi$ comes from intersection of plane $ y = 1$ and $z = 1$. So, $\phi = \arctan(\csc\theta)$ as we obtained earlier. The upper bound of $\phi$ comes from intersection of plane $z = 1$ and the sphere. So, $z = 1 = 3 \cos\phi \implies \phi = \arccos(1/3)$
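As a sanity check (my addition), the cylindrical-coordinate setup can be evaluated with a midpoint rule and compared against a seeded Monte Carlo estimate of the region $\{x\ge1,\ y\ge1,\ z\ge1,\ x^2+y^2+z^2\le9\}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the region volume over the bounding box [1, 3]^3.
n = 2_000_000
pts = rng.uniform(1.0, 3.0, size=(n, 3))
vol_mc = 8.0 * np.mean((pts ** 2).sum(axis=1) <= 9.0)

# Cylindrical integral from the answer, doubled by symmetry:
# 2 * int_{arctan(1/sqrt 7)}^{pi/4} int_{csc(theta)}^{2 sqrt 2}
#     r (sqrt(9 - r^2) - 1) dr dtheta, via a midpoint rule.
nt, nr = 400, 400
t0, t1 = np.arctan(1.0 / np.sqrt(7.0)), np.pi / 4.0
thetas = t0 + (np.arange(nt) + 0.5) * (t1 - t0) / nt
vol_quad = 0.0
for th in thetas:
    r0, r1 = 1.0 / np.sin(th), 2.0 * np.sqrt(2.0)
    rs = r0 + (np.arange(nr) + 0.5) * (r1 - r0) / nr
    inner = np.sum(rs * (np.sqrt(9.0 - rs * rs) - 1.0)) * (r1 - r0) / nr
    vol_quad += inner * (t1 - t0) / nt
vol_quad *= 2.0
```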
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4340447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
lissajous curve ellipse derivation It is known that if $x$ and $y$ are oscillating with the same frequency and a difference in phase, then the curve they trace is generally an ellipse.
I tried to show this but failed. Can you help me with this? Or please tell me where I can find the derivations.
General formula:
$$
\begin{cases}
x=A\cos(\theta+\theta_0)\\
y=\sin(\theta)
\end{cases}
$$
| You can write $\cos(\theta+\theta_0)=\cos(\theta_0)\cos(\theta)-\sin(\theta_0)\sin(\theta)$ so the parametric equations are of the form
$$
\begin{cases}
x=a\cos(\theta)+b\sin(\theta)\\
y=c\cos(\theta)+d\sin(\theta)
\end{cases}
$$
which can be written as a matrix equation:
$$
\begin{bmatrix}
x \\ y
\end{bmatrix}
=
\begin{bmatrix}
a & b\\ c & d
\end{bmatrix}
\begin{bmatrix}
\cos(\theta) \\ \sin(\theta)
\end{bmatrix}
$$
or $X=MY$ with $X=\begin{bmatrix}
x \\ y
\end{bmatrix}
$, $M=\begin{bmatrix}
a & b \\ c & d
\end{bmatrix}
$ and $Y=\begin{bmatrix}
\cos(\theta) \\ \sin(\theta)
\end{bmatrix}$.
If $M$ is invertible (this is the case in the OP, as shown by a quick computation), we can write $Y=M^{-1}X$. We can then transform the identity $\cos^2(\theta)+\sin^2(\theta)=1$ into a quadratic equation in $x$ and $y$. Using the discriminant (for example), you can show this is the equation of an ellipse.
Detail of computations:
More specifically, $M^{-1}=\frac{1}{\delta}\begin{bmatrix}
d & -b \\ -c & a
\end{bmatrix}$ with $\delta=ad-bc$. Therefore,
$$
\cos(\theta)=\frac{1}{\delta}(dx-by),\hskip 5mm \sin(\theta)=\frac{1}{\delta}(-cx+ay)
$$
so
$$
(dx-by)^2+(ay-cx)^2=\delta^2
\iff
(c^2+d^2)x^2-2(bd+ac)xy+(a^2+b^2)y^2-\delta^2 = 0
$$
Therefore, the graph is indeed a conic section. It is not degenerate since it is the image of a circle by an invertible linear function (hence cannot be empty, or a point, or one or two lines).
The discriminant is $\Delta=4(bd+ac)^2-4(c^2+d^2)(a^2+b^2)=-4(ad-bc)^2=-4\delta^2$, which is negative since $M$ is invertible, so it is an ellipse.
Remark: it cannot be a hyperbola or a parabola for topological reasons too.
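These claims are easy to verify numerically (a sketch I'm adding): sample the parametric curve, plug the points into the conic obtained by expanding $(dx-by)^2+(ay-cx)^2=\delta^2$ directly, and check the sign of the discriminant.

```python
import numpy as np

# x = cos(t + 0.7) expanded as a cos t + b sin t, and y = sin t.
a, b, c, d = np.cos(0.7), -np.sin(0.7), 0.0, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 500)
x = a * np.cos(t) + b * np.sin(t)
y = c * np.cos(t) + d * np.sin(t)

# Every sampled point satisfies (dx - by)^2 + (ay - cx)^2 = (ad - bc)^2.
delta = a * d - b * c
residual = np.max(np.abs((d * x - b * y) ** 2 + (a * y - c * x) ** 2 - delta ** 2))

# Expanding gives A x^2 + B xy + C y^2 = delta^2 with:
A = c ** 2 + d ** 2
B = -2.0 * (b * d + a * c)
C = a ** 2 + b ** 2
discriminant = B ** 2 - 4.0 * A * C   # equals -4 delta^2 < 0: an ellipse
```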
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4340898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Assume $H\le S_n$ contains an element of order $n$ and a transposition. Given that $n$ is prime, does $H=S_n$? Let $H\leq S_n$ be a subgroup such that $H$ contains an element of order $n$ and a transposition. Assume that $n$ is a prime and prove that $H=S_n$.
Thoughts:
Since $n$ is prime, the element of order $n$ in $H$ must be a cycle of order $n$.
So we have a cycle of order $n$ and we have a transposition, say $(x_1 \space x_2)$.
What I want is to prove that we can get any simple transposition by composition of enough times of the cycle on our transposition. Not sure how to do it yet.
Any help would be appreciated. Thanks in advance.
| Hint: Say the transposition transposes $1$ and $k+1$, and let $\sigma$ be the cycle $(123\cdots n)$. Then $\sigma^k$ is a cycle (this part uses $n$ prime), so you can relabel the elements so that the transposition is $(12)$. Can you finish from here?
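For a concrete instance (my addition), a brute-force closure computation for $n=5$ with the $5$-cycle and the non-adjacent transposition $(0\;2)$ confirms that the generated subgroup is all of $S_5$:

```python
# n = 5 is prime; generators: the n-cycle (0 1 2 3 4) and the transposition (0 2).
n = 5
cycle = tuple((i + 1) % n for i in range(n))
transposition = tuple(2 if i == 0 else (0 if i == 2 else i) for i in range(n))

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

# Close the generating set under composition (BFS from the identity);
# a finite set of permutations closed under products is a group.
identity = tuple(range(n))
group = {identity}
frontier = [identity]
while frontier:
    new = []
    for g in frontier:
        for s in (cycle, transposition):
            h = compose(s, g)
            if h not in group:
                group.add(h)
                new.append(h)
    frontier = new

order = len(group)   # |S_5| = 120 if the generators generate everything
```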
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4341042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the following relation reflexive, symmetric and transitive? Consider the following relation in $\mathbb{R}$: $x \sim y \Leftrightarrow y^2 \leq 9-x^2$. Is this relation reflexive, symmetric and transitive?
I know it's not reflexive because, for example, for $x=4$, $x \nsim x$.
I think it's symmetric, but I'm not sure how to prove that. And I'm unable to identify whether this is transitive or not.
| The relation is symmetric, because if $y^2 \le 9-x^2$ then we can write
$$y^2 \le 9-x^2 \Leftrightarrow y^2+x^2 \le 9 \Leftrightarrow x^2 \le 9-y^2$$
For the transitivity, we can use again that $x \sim y \Leftrightarrow x^2+y^2 \le 9$ to construct a counterexample: $x=3, y=0, z=3$. We have $x \sim y$ and $y \sim z$ but $x \not \sim z$.
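A brute-force check of these claims (my addition):

```python
# x ~ y  iff  y^2 <= 9 - x^2, i.e. x^2 + y^2 <= 9.
def related(x, y):
    return x * x + y * y <= 9

# Not reflexive: 4 ~ 4 fails since 16 + 16 > 9.
not_reflexive = not related(4, 4)

# Symmetric (checked on an integer grid; it holds by the symmetric rewriting).
symmetric = all(related(x, y) == related(y, x)
                for x in range(-5, 6) for y in range(-5, 6))

# Not transitive: 3 ~ 0 and 0 ~ 3, but 3 is not related to 3.
transitivity_fails = related(3, 0) and related(0, 3) and not related(3, 3)
```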
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4341209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integral of $L^2(\mathbb R^+)$ function is $o(\sqrt x)$ I stumbled upon an exercise which goes as follows :
Let $f \in L^2(\mathbb R^+)$, show that $\int_0^x f(t)\text{d}t = o\left(\sqrt x\right)$.
*
*Cauchy-Schwarz inequality gives the fact that $\int_0^x f = O\left(\sqrt x\right)$
*For decreasing functions, the results holds since $xf(x)^2 \leq \int_x^{2x} f^2 = o(1)$.
*Simple limit examples such as $f(t) = \dfrac 1 {\sqrt t\ln(t)}$ or $f(t) = \displaystyle\sum_{k} \dfrac{1_{[k,k+1]}}{\sqrt k}$ advocate for the result.
However I cannot produce a proof of that result nor find any counterexample, I don't even know what to believe. Given the source of the problem, a proof, if there is, should be elementary.
Happy Holidays
| To elaborate on my comment. Fix an arbitrary $f\in L^2$ and let $g\in L^2 \cap L^p$ for some $1<p<2$. A simple triangle inequality followed by Hölder's inequality gives
$$
\lvert \int_0^x f(t)dt \rvert \leq \lvert \lvert f-g\rvert \rvert_{L^2}\sqrt{x} + \lvert \lvert g \rvert \rvert_{L^p} x^{1-1/p}.
$$
This implies that
$$
\limsup_{x \to \infty} \frac{1}{\sqrt{x}} \lvert \int_0^x f(t) dt \rvert \leq \lvert \lvert f-g\rvert \rvert_{L^2} \qquad \forall g \in L^2 \cap L^p.
$$
Using the fact that $ L^2 \cap L^p$ is dense in $L^2$, we can take the infimum of all such $g$'s, thus proving the claim.
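A numerical illustration with one of the question's examples, $f(t)=\frac{1}{\sqrt{t}\,\ln t}$ on $[2,\infty)$ (my addition): the ratio $\frac{1}{\sqrt{x}}\int_2^x f$ visibly decays, roughly like $2/\ln x$.

```python
import math

# Trapezoid-rule estimate of F(x)/sqrt(x), where F(x) = \int_2^x dt/(sqrt(t) ln t).
def ratio(x, steps=200000):
    h = (x - 2.0) / steps
    total = 0.0
    for i in range(steps + 1):
        t = 2.0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w / (math.sqrt(t) * math.log(t))
    return total * h / math.sqrt(x)

r1, r2, r3 = ratio(1e2), ratio(1e4), ratio(1e6)
```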
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4341482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Solve the ODE $y'=1-\frac{y}{x}$ I substituted $u=\frac{y}{x}$ then tried to solve the ODE $$\frac{u'}{2u-1}= -\frac{1}{x}$$ and I came this far $$\frac{1}{2}\ln |{2u-1}|=- \ln |{x}| + c_1$$ but then in the solution there was the step $$c_1=\ln c_2 \in \mathbb{R}, c_2 > 0$$ to get $$\ln |2u-1|=\ln{(\frac{c_2}{x})^2}$$ but why do we have this additional step? Couldn't we just calculate the solution without this step?
| Yes, the step before is already a general solution. Writing $c_1=\ln c_2$ is allowed because any real constant can be expressed as the logarithm of a positive one; the extra step just collects the right-hand side into a single logarithm so that both sides can be exponentiated cleanly.
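Carrying the integration to the end (my own completion of the thread's computation): exponentiating gives $2u-1=\pm c_2^2/x^2$, i.e. $u=\frac12+\frac{C}{x^2}$ with $C\in\mathbb{R}$, hence $y=\frac{x}{2}+\frac{C}{x}$. A quick check that this satisfies $y'=1-\frac{y}{x}$:

```python
# y(x) = x/2 + C/x  =>  y'(x) = 1/2 - C/x^2, while 1 - y/x = 1/2 - C/x^2.
def residual(C, x):
    y = x / 2.0 + C / x
    dy = 0.5 - C / (x * x)          # exact derivative of y(x)
    return abs(dy - (1.0 - y / x))

worst = max(residual(C, x) for C in (-3.0, 0.0, 2.5) for x in (0.5, 1.0, 7.0))
```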
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4341648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $\deg(u)+\deg(v) \ge n-1$ for all non-adjacent vertices $u$ and $v$, then $G$ has a Hamiltonian path A Hamiltonian path is a path that contains all of the vertices of the graph. I know that if $\deg(u)+\deg(v) \ge n$ for every two non-adjacent vertices $u$ and $v$, then the graph has a Hamiltonian cycle, and every Hamiltonian cycle contains a Hamiltonian path.
So I thought I just need to prove that if in a graph $\deg(u)+\deg(v) \ge n-1$ for every two non-adjacent vertices $u$ and $v$, then the graph has a Hamiltonian path. But I have no idea how to continue! Any help?
| Hint: Let $G$ be a graph such that for all nonadjacent $u,v$, we have $d(u) + d(v) \geq n-1$. Create a new graph $G+w$ of order $n+1$ by adding a vertex $w$, and making $w$ adjacent to everything in $G$. Can you show that $G+w$ has a Hamiltonian cycle? And if so, can you use the Hamiltonian cycle in $G+w$ to find a Hamiltonian Path in $G$?
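The full statement can be verified exhaustively for small $n$ (a brute-force sketch I'm adding, $n=5$): every graph satisfying the degree condition has a Hamiltonian path.

```python
from itertools import combinations, permutations

# Check for n = 5: if every nonadjacent pair u, v has d(u) + d(v) >= n - 1,
# then the graph has a Hamiltonian path.  All 2^10 graphs are enumerated.
n = 5
pairs = list(combinations(range(n), 2))

def has_ham_path(adj):
    return any(all(adj[p[i]][p[i + 1]] for i in range(n - 1))
               for p in permutations(range(n)))

ok = True
for mask in range(1 << len(pairs)):
    adj = [[False] * n for _ in range(n)]
    for k, (u, v) in enumerate(pairs):
        if mask >> k & 1:
            adj[u][v] = adj[v][u] = True
    deg = [sum(adj[u]) for u in range(n)]
    cond = all(adj[u][v] or deg[u] + deg[v] >= n - 1 for u, v in pairs)
    if cond and not has_ham_path(adj):
        ok = False
        break
```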
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4341828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The differential of inclusion from S^{2} to R^{3} I've found this question on Loring Tu's "Introduction to Manifolds", numbered 11.4 and titled "Differential of the inclusion map"
On the upper hemisphere of the unit sphere $S^{2}$ we have the coordinate map $\phi = (u,v)$, where $u(a,b,c) = a, \ v(a,b,c) = b$
Let $i: S^{2} \to \mathbb{R^3}$ be the inclusion, and $i_*$ it's differential.
Calculate:
$$
i_*(\frac{\partial}{\partial u}), \ \
i_*(\frac{\partial}{\partial v})
$$
in terms of $ \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}$, where x,y,z are the standard coordinates
I've fiddled with it a bit, and the closest I've came is something like:
$$i_*(\frac{\partial}{\partial u})(x) = (\frac{\partial}{\partial u})(x \circ i) = (\frac{\partial}{\partial u}) (u) = 1\\
i_*(\frac{\partial}{\partial u})(y) = (\frac{\partial}{\partial u})(y \circ i) = 0 \\
i_*(\frac{\partial}{\partial u})(z) = (\frac{\partial}{\partial u}) (z \circ i)
$$
I can't really wrap my head around
calculating $(\frac{\partial}{\partial u}) (z \circ i)$.
Some guidance would be very much appreciated.
| Here $z\circ i \circ \phi^{-1}=\sqrt{1-u^2-v^2}$ and so
$\frac{\partial}{\partial u} \sqrt{1-u^2-v^2}=\frac{-u}{\sqrt{1-u^2-v^2}}=-\frac{x}{z}$
so that
$i_*\left(\frac{\partial}{\partial u}\right)=\frac{\partial}{\partial x}-\frac{x}{z}\frac{\partial}{\partial z}$
Similarly
$i_*\left(\frac{\partial}{\partial v}\right)=\frac{\partial}{\partial y}-\frac{y}{z}\frac{\partial}{\partial z}$
Merry Christmas
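A finite-difference check of these formulas (my addition): differentiate the chart $(u,v)\mapsto(u,v,\sqrt{1-u^2-v^2})$ numerically and compare with the claimed pushforwards $(1,0,-x/z)$ and $(0,1,-y/z)$.

```python
import numpy as np

# Embedding of the upper hemisphere through the chart phi = (u, v).
def emb(u, v):
    return np.array([u, v, np.sqrt(1.0 - u * u - v * v)])

u0, v0 = 0.3, 0.2
x0, y0, z0 = emb(u0, v0)

# Central finite differences approximate the Jacobian columns.
h = 1e-6
du = (emb(u0 + h, v0) - emb(u0 - h, v0)) / (2.0 * h)
dv = (emb(u0, v0 + h) - emb(u0, v0 - h)) / (2.0 * h)

expected_du = np.array([1.0, 0.0, -x0 / z0])
expected_dv = np.array([0.0, 1.0, -y0 / z0])
```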
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4341919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Maximum negative correlation for $n$ identically distributed normal variables Suppose you have $n$ identically distributed normal random variables with a common pairwise correlation. What is the most negative common correlation coefficient possible? I believe the answer is $\frac{-1}{n-1}$, but I cannot prove it.
| $\newcommand{Cov}{\operatorname{Cov}}$
$\newcommand{Var}{\operatorname{Var}}$
$\newcommand{corr}{\operatorname{corr}}$
Assume WLOG $\Var(X_i)=1$ for all $i$, so $\Cov(X_i,X_j)=\corr(X_i,X_j)=\rho$ for $i\neq j$. Then
$$0\leq\Var\left(\sum X_i\right)=\sum\Var(X_i)+\sum_{i\neq j}\Cov(X_i,X_j)
=n+n(n-1)\rho,$$
and so $\rho\geq-\frac{1}{n-1}$ as desired.
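The bound is attained: centering i.i.d. normals, $X_i = Z_i - \bar Z$, gives jointly normal, identically distributed variables with common correlation exactly $-\frac{1}{n-1}$ (a simulation sketch I'm adding):

```python
import numpy as np

rng = np.random.default_rng(1)

# X_i = Z_i - mean(Z): Var(X_i) = (1 - 1/n) and Cov(X_i, X_j) = -1/n,
# so corr(X_i, X_j) = -1/(n-1).  Note sum_i X_i = 0, the extreme case
# where Var(sum X_i) = 0 in the proof above.
n = 5
Z = rng.standard_normal((200000, n))
X = Z - Z.mean(axis=1, keepdims=True)
corr = np.corrcoef(X, rowvar=False)
off_diag = corr[~np.eye(n, dtype=bool)]
```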
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Moment Generating Function of a Truncated Gaussian The Moment Generating Function of a standard Gaussian distribution is $\exp(t^2/2)$. Let $X$ be a Gaussian r.v., $a>0$ and define the (outer-)truncated variable $Y=X\mathbb{I}(|X|\ge a)$. What is the MGF $E[\exp(tY)]$ of $Y$? It should be upper-bounded by $\exp(t^2/2)$, but should not be the same, since the moments of the two distributions are strictly lower. Even an upper bound on the function that is smaller than the gaussian case would be interesting.
As a variant, I would be interested in comparing the MGF of the folded normal $|X|$ against that of $|Y|$.
| We can write the MGF
$$ \phi(t) = \Big\langle \exp\big[t X I(|X|>a) \big]\Big\rangle_X. $$
Because $I(|X|>a)$ is either $1$ or $0$ we have
$$ \phi(t) = \Big\langle 1- \big(1-e^{t X}\big)I(|X|>a)\Big\rangle_X. $$
or
$$ \phi(t) = 1 - \Big\langle I(|X|>a) \Big\rangle + \Big\langle e^{tX}I(|X|>a) \Big\rangle.$$
Writing out the integrals,
$$ \phi(t) = 1 -\int_{-\infty}^{-a} \frac{e^{-x^2/2}}{\sqrt{2\pi}}dx - \int_a^\infty \frac{e^{-x^2/2}}{\sqrt{2\pi}}dx+ \int_{-\infty}^{-a} \frac{e^{tx-x^2/2}}{\sqrt{2\pi}}dx +\int_a^\infty \frac{e^{tx-x^2/2}}{\sqrt{2\pi}}dx.$$
Evaluating these gives the desired result:
$$ \phi(t) = 1+\frac{1}{2} \left(e^{\frac{t^2}{2}} \text{erfc}\left(\frac{a-t}{\sqrt{2}}\right)-\text{erfc}\left(\frac{a}{\sqrt{2}}\right)\right)+\frac{1}{2} \left(e^{\frac{t^2}{2}} \text{erfc}\left(\frac{a+t}{\sqrt{2}}\right)-\text{erfc}\left(\frac{a}{\sqrt{2}}\right)\right). $$
This reduces to the standard result as $a \rightarrow 0$:
$$ \phi(t) = e^{t^2/2}.$$
You can compare with the folded normal MGF from wikipedia: https://en.wikipedia.org/wiki/Folded_normal_distribution
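The closed form can be cross-checked against direct numerical integration of $E[\exp(tX\,\mathbb{I}(|X|\ge a))]$ (a sketch I'm adding, writing the answer's erfc expression with `math.erfc`):

```python
import math

# Closed form phi(t) from the answer above.
def phi_closed(t, a):
    s2 = math.sqrt(2.0)
    return (1.0
            + 0.5 * (math.exp(t * t / 2.0) * math.erfc((a - t) / s2)
                     - math.erfc(a / s2))
            + 0.5 * (math.exp(t * t / 2.0) * math.erfc((a + t) / s2)
                     - math.erfc(a / s2)))

# Trapezoid-rule estimate of E[exp(t * Y)] with Y = X * 1{|X| >= a}.
def phi_numeric(t, a, lo=-12.0, hi=12.0, steps=200000):
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        y = x if abs(x) >= a else 0.0
        total += w * math.exp(t * y) * math.exp(-x * x / 2.0)
    return total * h / math.sqrt(2.0 * math.pi)

t, a = 0.7, 1.2
closed = phi_closed(t, a)
numeric = phi_numeric(t, a)
```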
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Characterization of the isomorphic semidirect products Let $A$ and $G$ be two finite abelian groups and let $\alpha$, $\beta:G\rightarrow{\rm Aut}(A)$. Suppose that $\alpha (G)$ and $\beta (G)$ are conjugate subgroups of ${\rm Aut}(A)$. Are the semidirect products $A\rtimes _{\alpha }G$ and $A\rtimes _{\beta }G$ isomorphic?
I know that this is true for a finite cyclic group $G$, but I don't know what to do if $G$ is a finite non-cyclic abelian group. I think the answer is usually no, so I will be thankful if someone provides me a counterexample.
Thank you in advance.
| If we assume only that the images of $\alpha$ and $\beta$ are conjugate in $\mathrm{Aut}(A)$, then the answer is no. Let $A = C_3$ and $G = C_4 \times C_2$, and consider two surjective homomorphisms $\alpha, \beta: G \to \mathrm{Aut}(A) \cong C_2$ with kernels isomorphic to $C_4$ and $C_2 \times C_2$ respectively. Then each of the two resulting semidirect products has a unique Sylow 3-subgroup $A$, whose centralizer is $A \times \ker \alpha$ or $A \times \ker \beta$ respectively. These subgroups are non-isomorphic, so the semidirect products are as well. Concretely:
$$A \rtimes_{\alpha} G \cong C_4 \times (C_3 \rtimes C_2) \cong C_4 \times S_3, \text{ while} $$
$$A \rtimes_{\beta} G \cong C_2 \times (C_3 \rtimes C_4) \cong C_2 \times \mathrm{Dic}_{12},$$
where both semidirect products above are given by the unique nontrivial actions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
The set $\mathcal{J} \subset \mathbb{R}^2$ defined by $\mathcal{J} = \{(x,y) \in \mathbb{R}^2\ |\ x=y\}$ has no interior points So far, I've though of it this way:
Assume it does, so there exists $(a,b) \in int(\mathcal{J})$ st. $B_{\delta}((a,b)) \subseteq \mathcal{J}$. Since $(a,b) = \alpha(1,1)$, then given $(j,k) \in B_{\delta}((a,b))$, $\sqrt{(j-\alpha)^2+(k-\alpha)^2}<\delta$, for $(j,k) \in \mathcal{J}$, then $(j,k) = \beta(1,1)$, so $\sqrt{2}(\beta-\alpha)<\delta$. I'm lost here. Is it possible to do that? How can I prove this?
Thanks
| This is not the way to go to get a contradiction since it is not the case that an open ball around a point of $\mathcal J$ does not contain any other point of $\mathcal J$. In fact, any open ball $B_\delta(k,k)$ contains infinitely many points of $\mathcal J$.
The goal is to show that given $(k,k)\in\mathcal J$, there is some point in $B_\delta(k,k)$ which is not in $\mathcal J$, showing that $B_\delta(k,k)\not\subseteq\mathcal J$
To do this, just view it geometrically. A ball around a point on a straight line does have points that are on that line, but also others that are not on that line. The simplest way to do it algebraically is to perturb one component a bit (as in @Nightflight's answer)
A point $(x,y)\in B_\delta(k,k)$ satisfies $(x-k)^2+(y-k)^2\lt\delta^2$. Our goal is to write a point $(x',y')$ that is in this ball but does not have $x'=y'$.
For $\delta\gt 0$, choose for instance $\delta_1=\dfrac{\delta}{2}$ and $\delta_2=\dfrac{\delta}{4}$
By construction, $\delta_1\ne \delta_2$ and $\delta_1^2+\delta_2^2=\dfrac{5\delta^2}{16}\lt\delta^2$, and note that $(k+\delta_1,k+\delta_2)\in B_\delta(k,k)$ but $\notin\mathcal J$ since $k+\delta_1\ne k+\delta_2$
This is one of many ways to construct such a point. The crux is to just choose two $\delta_1,\delta_2\gt 0$ such that $\delta_1\ne\delta_2$ and $\delta_1^2+\delta_2^2\lt \delta^2$ for any given $\delta\gt 0$ and note that $(k+\delta_1,k+\delta_2)$ is a point that is in the $\delta$-ball around $(k,k)$ but not in $\mathcal J$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Solve the equation $(2x^2-3x+1)(2x^2+5x+1)=9x^2$ Solve the equation $$(2x^2-3x+1)(2x^2+5x+1)=9x^2$$
The given equation is equivalent to $$4x^4+4x^3-11x^2+2x+1=9x^2\\4x^4+4x^3-20x^2+2x+1=0$$ which, unfortunately, has no rational roots. What else can I try?
| Divide your last equation by $x^2$ and substitute $t=2x+\frac{1}{x}$: since $t^2=4x^2+4+\frac{1}{x^2}$, the equation becomes $t^2+2t-24=0$, so $t=4$ or $t=-6$. This yields the factorization into two polynomials of degree two, $$(2x^2-4x+1)(2x^2+6x+1)=0$$ It is then easy to find the roots of both quadratic factors.
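A quick numerical confirmation of the factorization and the resulting roots (my addition):

```python
import math

# Expand (2x^2 - 4x + 1)(2x^2 + 6x + 1) by multiplying coefficient lists
# and compare with the quartic 4x^4 + 4x^3 - 20x^2 + 2x + 1.
p = [2, -4, 1]          # highest degree first
q = [2, 6, 1]
prod = [0] * 5
for i, a in enumerate(p):
    for j, b in enumerate(q):
        prod[i + j] += a * b

# Roots of each quadratic factor via the quadratic formula.
def quad_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return [(-b - d) / (2 * a), (-b + d) / (2 * a)]

roots = quad_roots(2, -4, 1) + quad_roots(2, 6, 1)
resid = max(abs(4 * x ** 4 + 4 * x ** 3 - 20 * x * x + 2 * x + 1) for x in roots)
```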
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Compute $\lim _{n \rightarrow \infty}(\sqrt[n]{n}-1)^{\frac{1}{n}}$.
I need to solve the following problem: $$ \lim _{n \to \infty}(\sqrt[n]{n}-1)^{\frac{1}{n}}. $$
My attempt:
$$\lim _{n \to \infty}\ln(\sqrt[n]{n}-1)^{\frac{1}{n}}=\lim _{n \to \infty}\frac{\ln(\sqrt[n]{n}-1)}{n}.$$
Now I want to prove that
$$\lim _{n \to \infty}\frac{\ln(\sqrt[n]{n}-1)}{n}=0.$$
But I'm stuck on proving the limit without the Heine theorem (equivalence relation between limit of a sequence and limit of a function). Any help would be greatly appreciated.
| Let $n$ be a positive integer.
Using Landau notation, $\sqrt[n]{n} -1= \exp\left(\frac{1}{n}\ln(n)\right) -1= \frac{\ln(n)}{n}+ o\left(\frac{\ln(n)}{n}\right)$ as $n\to+\infty$.
Then $\frac{1}{n}\ln(\sqrt[n]{n}-1) = \frac{1}{n}\ln\left(\frac{\ln(n)}{n}+o\left(\frac{\ln(n)}{n}\right)\right) \to 0$ as $n\to+\infty$.
Finally, $(\sqrt[n]{n}-1)^\frac{1}{n}=\exp\left(\frac{1}{n}\ln(\sqrt[n]{n}-1)\right) \to \exp(0)=1$ as $n\to+\infty$.
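A numeric illustration in plain Python: the terms do approach $1$, though slowly, since the exponent $\frac{1}{n}\ln(\sqrt[n]{n}-1)$ behaves like $-\frac{\ln n}{n}$ for large $n$.

```python
def a(n):
    """The sequence (n**(1/n) - 1)**(1/n)."""
    return (n ** (1.0 / n) - 1.0) ** (1.0 / n)

# The exponent (1/n)*ln(n**(1/n)-1) ~ -(ln n)/n -> 0, so a(n) -> exp(0) = 1.
for n in [10, 100, 10_000, 1_000_000]:
    print(n, a(n))

assert abs(a(10**6) - 1.0) < 0.01
```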
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that for $a\in\left(0,1\right)$, $x^{-a}$ is not self-concordant on $x>0$ I want to show that for $a\in\left(0,1\right)$, $x^{-a}$ is not self-concordant for $x > 0$. By definition, a function is self-concordant if $\left|\left(x^{-a}\right)^{\prime\prime\prime}\right|\leq2\cdot\left(\left(x^{-a}\right)^{\prime\prime}\right)^{\frac{3}{2}}$.
So what I need to prove is that $$\left|-ax^{-a-3}\left(-a-1\right)\left(-a-2\right)\right|\leq2\cdot\left(-ax^{-a-2}\left(-a-1\right)\right)^{\frac{3}{2}}$$
fails for some $x>0$.
I got to $-x^{-a-3}\left(-a-1\right)\leq2\cdot\left(ax^{-a-2}\right)\sqrt{\left(-ax^{-a-2}\left(-a-1\right)\right)}$, but I don't know how to continue from here.
| We have $f(x) = x^{-\alpha}, \alpha \in (0,1), x>0$. Therefore
$$\begin{align}
& f'(x) = -\alpha x^{-\alpha-1}\\
& f''(x) = \alpha(\alpha + 1) x^{-\alpha-2} \\
& f'''(x) = -\alpha(\alpha + 1)(\alpha + 2)x^{-\alpha-3}
\end{align}$$
Based on the definition of self-concordance according to the wiki (essentially the "Convex Optimization" book by Boyd and Vandenberghe, chapter 9), we must have
$$|f'''(x)|\le 2|f''(x)|^{\frac{3}{2}}$$
We form the quotient $Q(x)=\frac{|f'''(x)|}{ 2|f''(x)|^{\frac{3}{2}}}$ as follows:
$$Q(x) = \frac{\frac{\alpha(\alpha + 1)(\alpha + 2)}{x^{\alpha+3}}}{2{\left(\frac{\alpha(\alpha + 1)}{x^{\alpha+2}}\right)^{\frac{3}{2}}}} = \frac{(\alpha + 2)x^{\frac{1}{2}\alpha}}{2\sqrt{\alpha(\alpha + 1)}}
$$
Since $\alpha\gt 0$, the exponent $\frac{1}{2}\alpha$ is positive, so $Q(x)$ grows without bound as $x\to\infty$. In particular $Q(x) \gt 1$ for all sufficiently large $x$, i.e. $|f'''(x)| \gt 2|f''(x)|^{\frac{3}{2}}$ there, so the required inequality fails and $f(x)\; \color{red}{\text{is not self-concordant!}}$
However, a standard log-barrier construction (cf. the exercises in chapter 9 of Boyd and Vandenberghe) does yield a self-concordant function: $h(x) = -\ln(t-f(x)) -\ln(x)$, defined on $\{x \gt 0 : f(x) \lt t\}$, is self-concordant. (Note that $-\ln(-f(x))$ would be undefined here, since $f(x)=x^{-\alpha}\gt 0$.)
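A quick numeric illustration for $\alpha=1/2$ (plain Python, using the closed-form derivatives computed above): the ratio $Q(x)$ exceeds $1$ and keeps growing, so the self-concordance inequality $|f'''(x)|\leq 2|f''(x)|^{3/2}$ fails for large $x$.

```python
a = 0.5  # alpha in (0, 1)

def f2(x):
    """Second derivative of f(x) = x**(-a): a*(a+1)*x**(-a-2)."""
    return a * (a + 1) * x ** (-a - 2)

def f3(x):
    """Third derivative of f(x) = x**(-a): -a*(a+1)*(a+2)*x**(-a-3)."""
    return -a * (a + 1) * (a + 2) * x ** (-a - 3)

def Q(x):
    """Self-concordance ratio |f'''| / (2*|f''|**1.5); Q(x) <= 1 would be required."""
    return abs(f3(x)) / (2 * abs(f2(x)) ** 1.5)

# Q(x) = (a+2) * x**(a/2) / (2*sqrt(a*(a+1))) grows like x**(a/2),
# so the self-concordance inequality fails for all large x.
assert Q(1000.0) > 1.0
assert Q(10.0) < Q(100.0) < Q(1000.0)  # Q is increasing in x
```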
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4342955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |