Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
$D^2$ is not homeomorphic to $D^n/S^{n-1}$ The following example was left as an exercise in my topology class. I think I would need help on how to prove the assertion asked.
Let $D^n$ denote the unit ball in $n$-dimensional Euclidean space and let $S^{n-1}$ be the unit sphere.
Show that $D^2 $ is not homeomorphic to $D^n /S^{n-1}$.
These types of questions are proved by assuming that a homeomorphism exists and then finding a topological property (preserved under homeomorphism) which is satisfied by one of the spaces but not by the other.
But I am not able to think of such a property in the current question.
| $S^n$ is not homeomorphic to $D^2$ for any $n$. That's because $D^2\setminus\{0\}$ is not simply connected (it deformation retracts onto the boundary circle) but $S^n$ minus a point is... That uses the fact that $S^1$ isn't simply connected, which follows from perhaps one of the best-known results of algebraic topology: $$\pi_1(S^1)\cong\Bbb Z$$And $S^n\setminus\{p\}$ is homeomorphic to $\Bbb R^n$ for any $p$, so it must be simply connected if $n\ge1$. If $n=0$, $S^0=\{-1,1\}$ (or $D^0/\emptyset$) is obviously not homeomorphic to $D^2$.
As for calculating $(n\ge1)$:
$$D^n/S^{n-1}\cong S^n$$Consider the map: $$D^n\to S^n$$ which sends: $$x=(x_1,x_2,\cdots,x_n)\mapsto\left(2\sqrt{1-\|x\|^2}\,x,\;2\|x\|^2-1\right)$$ giving the left hand side coordinates of $\Bbb R^n$ and the right hand side coordinates of $\Bbb R^n\times\Bbb R$. (One checks $4(1-\|x\|^2)\|x\|^2+(2\|x\|^2-1)^2=1$, so the image really lies on $S^n$.) This is a quotient map for compactness reasons, and the relation it induces is precisely the relation that kills $S^{n-1}=\partial D^n$: every boundary point is sent to the pole $(0,\dots,0,1)$.
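A numeric sanity check of an explicit map of this type (the normalization below, $x\mapsto(2\sqrt{1-\|x\|^2}\,x,\ 2\|x\|^2-1)$, is one convenient choice, and the check is my own, not part of the answer):

```python
import math, random

def to_sphere(x):
    """One explicit D^n -> S^n map that collapses the boundary sphere:
    x |-> (2*sqrt(1 - |x|^2) * x, 2*|x|^2 - 1)."""
    r2 = sum(c * c for c in x)                     # ||x||^2
    scale = 2.0 * math.sqrt(max(0.0, 1.0 - r2))
    return [scale * c for c in x] + [2.0 * r2 - 1.0]

random.seed(0)
for _ in range(500):
    raw = [random.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in raw))
    v = [random.random() * c / norm for c in raw]  # random point of the open ball
    # the image lies on the unit sphere of R^4
    assert abs(sum(c * c for c in to_sphere(v)) - 1.0) < 1e-9

# every boundary point is sent to the same pole (0, ..., 0, 1):
for v in ([1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.6, 0.8, 0.0]):
    assert all(abs(a - b) < 1e-9 for a, b in zip(to_sphere(v), [0, 0, 0, 1]))
```

The collapse of the whole boundary to a single point is exactly the quotient relation $D^n/S^{n-1}$.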
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4621037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The Weil conjecture involving Betti numbers. I know that one of the Weil conjectures says something loosely like the following.
If $X$ is a projective variety obtained from a complex projective variety $X_\mathbf{C}$ by "reduction mod $p$", then the $j$-th Betti number of $X_\mathbf{C}$ is equal to the degree of the polynomial $F_j \in \mathbf{Z}[x]$, where the $F_j$ are defined by
\begin{align}
\prod_{j = 0}^{2n}F_j(q^{-s})^{(-1)^{j + 1}} = \zeta_X(s) = \exp\left(\sum_{d \geq 1}\frac{N_d}{d}q^{-ds}\right).
\end{align}
I have two questions, but they are both related to this statement.
*
*In this context, is the $j$-th Betti number just the rank of the singular homology group $H_j(X_{\mathbf{C}})$ of $X_\mathbf{C}$ as a topological space?
*We can consider $X_\mathbf{C}$ as a scheme over $\textrm{Spec} \, \mathbf{Z}$ via a unique morphism $\varphi : X_\mathbf{C} \to \textrm{Spec} \, \mathbf{Z}$. Taking the fibre of $\varphi$ over the point $(p)$, one obtains a scheme $X_\mathbf{C} \times_{\textrm{Spec} \, \mathbf{Z}} \mathbf{F}_p$ over $\mathbf{F}_p$, à la Hartshorne II.3 page 89. Do we say that $X$ is obtained from $X_\mathbf{C}$ by reduction mod $p$ because $X \cong X_\mathbf{C} \times_{\textrm{Spec} \, \mathbf{Z}} \mathbf{F}_p$?
| *
*You want the rank of $H_j(X(\Bbb C))$, the singular homology of the $\Bbb C$-points of $X$ endowed with the analytic topology. Singular homology does not do what you want on the underlying topological space of schemes over $\Bbb C$ (because for instance every irreducible topological space is contractible), but it does what you want on their $\Bbb C$-points.
*This is not quite right. If the map $X\to\operatorname{Spec} \Bbb Z$ factors through $\operatorname{Spec} \Bbb C$, then the fiber over $(p)$ is empty since the image of $\operatorname{Spec} \Bbb C \to \operatorname{Spec} \Bbb Z$ is the generic point. What you want to do is to have $X_{\Bbb C}\to \operatorname{Spec} \Bbb C$ be the base extension of some projective variety $X\to\operatorname{Spec} R$ where $R$ is the ring of integers of some number field (or some slight generalization; please correct me in the comments if this is not quite precise enough). Then $X(\Bbb C)\cong X_{\Bbb C}(\Bbb C)$ and the fiber of $X\to \operatorname{Spec} \Bbb Z$ is nonempty over each point and you can actually form the fiber product you speak of.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4621161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Power functions question and solving the equation. I'm translating this from my mother tongue so bear with me -
I have a question in relation to power functions but I think the problem I am dealing with is my seemingly lack of other mathematical rules that I need!
The equation I want to solve is:
$$7\cdot 5^x = 9-6\cdot 5^x$$
Now I know I can't just simplify it by subtracting $6$ from $9$, because of the product rule $-6\cdot 5^x$. Then, I tried to put $-6\cdot 5^x$ to the left side:
$$\frac{7\cdot 5^x}{-6\cdot 5^x} = 9$$
I think that's right, since the opposite of a product is division so I don't need to switch up the minus sign. But then I don't know what follows.. could someone please break it down for me
| Let $5^x$ be $y$.
Now the question becomes $$7y = 9-6y$$
This is a linear equation; collect all the terms containing $y$ on one side.
$$7y+6y=9$$
You should be able to obtain
$$y=\frac{9}{13}.$$
Hence $$5^x=\frac9{13}$$
Now use logarithm to solve your question.
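A quick numeric check of the final step (my own, not part of the answer above): the substitution gives $5^x=9/13$, so $x=\log_5(9/13)$.

```python
import math

# Following the substitution above: 7*5^x = 9 - 6*5^x  =>  5^x = 9/13.
y = 9 / 13
x = math.log(y) / math.log(5)        # x = log_5(9/13)

# plug back into the original equation:
assert abs(7 * 5**x - (9 - 6 * 5**x)) < 1e-12
print(x)                             # a negative number, since 9/13 < 1
```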
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4621298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $7 \mid 11^n - 4^n$ with mathematical induction I want to prove that $7 \mid 11^n - 4^n$ with mathematical induction. This is what I wrote:
*
*For $n = 1$, we have $7 \mid 11^1 - 4^1 \Rightarrow 7 \mid 7$ which is obviously true. $\checkmark$
*Assume that the statement is true for $n = k$. So: $7 \mid 11^k - 4^k$ and therefore there is an integer $m$ such that: $11^k - 4^k = 7m$. Thus: $4^k = 11^k - 7m$.
*Now we prove its truth for $n = k + 1$. So we want to prove that $7 \mid 11^{k + 1} - 4^{k + 1}$. We can write this like this: $7 \mid \big(11\cdot11^k\big) - \big(4\cdot4^k\big)$ and since $4^k = 11^k - 7m$, so in fact we have to prove: $7 \mid \big(11 \cdot11^k\big) - \big(4\cdot(11^k - 7m)\big) \Longrightarrow 7 \mid 11\cdot11^k - 4\cdot11^k + 7(4m) \Longrightarrow 7 \mid 7\cdot11^k + 7(4m) \Longrightarrow 7 \mid 7(11^k + 4m)$.
But from here you can clearly see that $7(11^k + 4m)$ is divisible by 7. So according to the principle of mathematical induction, the statement is proved. $\blacksquare$
Now my question is, is this proof correct? And is there another way to prove this statement by mathematical induction?
| A more general way would be to prove that $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\cdots+ab^{n-2}+b^{n-1})$.
Another way would be to prove the binomial theorem $(a+b)^n=\sum_{n\ge i\ge 0}\binom{n}{i}a^ib^{n-i}$ and then expand $11^n=(7+4)^n$.
A third and alternate (unnecessary) way is to notice that $11^n-4^n$ is the number of ways of giving one of $11$ candies $\lbrace c_1,\cdots,c_{11}\rbrace$ to $n$ children such that not all of them get one of $c_1,c_2,c_3$ or $c_4$. No matter the distribution of candies, there is at least one kid who has only $7$ options for candy. So, $11^n-4^n$ is always divisible by $7$.
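A brute-force check of the divisibility claim (a quick script of mine, not part of the answer); note that $11\equiv 4\pmod 7$ already gives $11^n\equiv 4^n\pmod 7$:

```python
# 11 ≡ 4 (mod 7), so 11^n ≡ 4^n (mod 7); verify directly for many n:
for n in range(1, 200):
    assert (11**n - 4**n) % 7 == 0
```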
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4621513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How fast does $a_n:=\int_{1/\sqrt{2}}^1 \frac{dx}{\left(\frac{1}{2}+x^2\right)^{n+1/2}}$ decay as $n \to \infty$? Let
\begin{equation}
a_n:=\int_{1/\sqrt{2}}^1 \frac{dx}{\left(\frac{1}{2}+x^2\right)^{n+1/2}}
\end{equation}
for $n \in \mathbb{N}$.
Then, $a_n$ is clearly a monotone decreasing sequence of positive numbers and by the Dominated Convergence Theorem, $a_n \to 0^+$ as $n \to \infty$.
However, I have some difficulty estimating how fast $a_n$ decays. For example, is $\frac{a_n}{f(n)}=O(1)$ as $n \to \infty$ for some polynomial $f$?
Could anyone please provide insight into the decay rate of $a_n$?
| Here is a more elementary answer to close the debate. By the change of variable $t = n\,(x^2-1/2)$ (which gives $x^2 + 1/2 = 1+t/n$) one gets
$$
\int_{\frac{1}{\sqrt 2}}^1 \frac{\mathrm d x}{\left(\tfrac{1}{2}+x^2\right)^{n+1/2}} = \frac{1}{2n}\int_0^\frac{n}{2} \frac{\mathrm d t}{\left(1+\frac{t}{n}\right)^{n+1/2}\left(\frac{1}{2}+\frac{t}{n}\right)^{1/2}} = \frac{I_n}{2n}
$$
where $I_n = \int_0^\infty f_n$ with
$$
f_n(t) = \frac{\mathbb{1}_{[0,n]}(t)}{\left(1+\frac{t}{n}\right)^n\left(1+\frac{t}{n}\right)^{1/2}\left(\frac{1}{2}+\frac{t}{n}\right)^{1/2}} \underset{n\to\infty}\longrightarrow \frac{1}{e^t \left(\tfrac{1}{2}\right)^{1/2}}
$$
since $(1+t/n)^n \to e^t$. Hence, by dominated convergence,
$$
I_n = \int_0^\infty f_n \underset{n\to\infty}\longrightarrow\int_0^\infty \sqrt{2} \,e^{-t}\,\mathrm d t = \sqrt 2
$$
that is $I_n = \sqrt{2} + o(1)$. Thus,
$$
\int_{1/\sqrt 2}^1 \frac{\mathrm d x}{(\tfrac{1}{2}+x^2)^{n+1/2}} = \frac{1}{\sqrt 2}\frac{1}{n} + o\left(\frac{1}{n}\right).
$$
In particular, this agrees with the coefficient given by Svyatoslav.
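A numeric check of the $\frac{1}{\sqrt 2\, n}$ asymptotic (my own sketch, using a simple midpoint rule; the step count is an arbitrary choice):

```python
import math

def a(n, steps=200_000):
    """Midpoint-rule approximation of a_n = Int_{1/sqrt(2)}^1 (1/2 + x^2)^{-(n+1/2)} dx."""
    lo, hi = 1 / math.sqrt(2), 1.0
    h = (hi - lo) / steps
    return h * sum((0.5 + (lo + (i + 0.5) * h) ** 2) ** -(n + 0.5)
                   for i in range(steps))

n = 2000
val = n * a(n)
print(val, 1 / math.sqrt(2))   # both close to 0.707
```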
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4621703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Does this sequence of complex numbers converge? I need to determine if the following recursive sequence of complex numbers converges.
$$z_0 = 0, \quad z_n = \frac{1}{2i + z_{n-1}}, \quad n \geq 1.$$
I started by expanding the sequence:
$$z_1 = \frac{1}{2i}, \quad z_2 = \frac{1}{2i + \frac{1}{2i}}, \quad z_3 = \frac{1}{2i + \frac{1}{2i + \frac{1}{2i}}}, \dots, $$
so this is a continued fraction. What strategies can I use to determine if a sequence of complex numbers converges or not?
| It helps to actually compute the first few terms. Using $i^{-1}=-i$, we get that
$$\begin{cases}z_0 = 0 \\ z_1 = -\frac{1}{2}i \\ z_2 = -\frac{2}{3}i \\ z_3 = -\frac{3}{4}i \\ \cdots\end{cases}$$
so it is relatively straightforward to conjecture (and prove via induction, an exercise for OP) that
$$z_n = -\frac{n}{n+1}i$$
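A quick check of the conjectured closed form and the limit (my own sketch, not part of the answer). Note the limit is $-i$: a fixed point $w=1/(2i+w)$ satisfies $w^2+2iw-1=0$, i.e. $(w+i)^2=0$.

```python
# Iterate z_n = 1/(2i + z_{n-1}) and compare with the conjectured
# closed form z_n = -n/(n+1) * i.
z = 0j
for n in range(1, 50):
    z = 1 / (2j + z)
    assert abs(z - (-n / (n + 1) * 1j)) < 1e-12

# after 49 steps we are within 1/50 of the limit -i:
assert abs(z - (-1j)) < 0.05
```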
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Annihilator method for Difference Equation. Suppose we have some difference equation $f(y(n))$ which is yet to be defined. Now assume it is equal to some function of $n$. When using the annihilator method we first rewrite our difference equation $f(y(n))$ to some characteristic polynomial. Then we must find a difference equation for the RHS, suppose our RHS is equal to $n2^n - 1$. How does one construct a difference equation of the RHS?
| To summarize and reformulate your problem: we have an inhomogeneous recurrence relation $F(a_n,\ldots,a_0) = g(n)$, which can be rewritten as the more general homogeneous problem $\Phi(a_n,\ldots,a_0) = 0$, with $\Phi = G \circ F$, where $G$ is the annihilator of $g$. In practice, this means that the sequence $b_n = g(n) := n2^n-1$ satisfies $G(b_n,\ldots,b_0) = 0$.
It is to be recalled that solutions of the form $(\alpha_0+\alpha_1n+\alpha_2n^2+\cdots+\alpha_{m-1}n^{m-1})\lambda^n$ come from the characteristic factor $(r-\lambda)^m$, which itself comes from the operator $(S-\lambda)^m$, where $S$ is the shift operator, defined by $Sb_n = b_{n+1}$.
In the present case, we have two roots, namely $\lambda = 1$ and $\lambda = 2$, with a linear prefactor for the second root, hence the characteristic polynomial $(r-1)(r-2)^2$ and the annihilator $G = (S-1)(S-2)^2 = S^3 - 5S^2 + 8S - 4$.
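A quick check (my own) that this $G$ really annihilates $b_n = n2^n - 1$:

```python
# The sequence b_n = n*2^n - 1 should be annihilated by
# G = (S - 1)(S - 2)^2 = S^3 - 5S^2 + 8S - 4.
b = lambda n: n * 2**n - 1

for n in range(50):
    assert b(n + 3) - 5 * b(n + 2) + 8 * b(n + 1) - 4 * b(n) == 0
```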
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What's the groupification of the cardinal numbers (without using AC)? The set-theoretic cardinal numbers form an abelian monoid under addition (ignoring size problems). Under the axiom of choice, given two cardinal numbers $c$ and $d$, there exists a cardinal number $e$ such that $c + e = d + e = e$. We now determine its groupification (or "Grothendieck group"): Considering formal differences, $c - d = (c + e) - (d + e) = e - e = 0$. So the groupification of the cardinal numbers is the trivial group.
What happens if we don't assume the Axiom of Choice? Things don't seem as easy because the cardinal numbers may only form a partially ordered set (thanks to the Cantor–Schröder–Bernstein theorem). I don't know if they satisfy other properties in general.
| Even without the axiom of choice, $x+x\cdot\aleph_0=x\cdot\aleph_0$. In fact, $x+x=x$ if and only if $x\cdot\aleph_0=x$.
So, given any two sets $C$ and $D$, take $E=(C\cup D)\times\omega$. Then, denoting the cardinals by the corresponding lowercase letters, we get that $c+e=d+e=e$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
The sequence $\{x_{n}\}$ by $x_{n}=\frac{1}{2\pi}\int_{0}^{\pi/2}\tan^{\frac{1}{n}}t \ dt$ is such that $\{x_{n}\}$ converges to $1/4$ For $n\geq 2$, define the sequence $\{x_{n}\}$ by $$x_{n}=\frac{1}{2\pi}\int_{0}^{\pi/2}\tan^{\frac{1}{n}}t \ dt$$ Then prove that the sequence $\{x_{n}\}$ converges to $1/4$.
My Attempt: I think it is an application of the Dominated Convergence Theorem, but I am not sure; please give some hints.
| Divide the integral into two parts, from $0 < t < {\pi \over 4}$ and ${\pi \over 4} \leq t < {\pi \over 2}$. On the first interval one has $\tan t < 1$ and on the second one has $\tan t \geq 1$. Hence the functions $\tan^{1 \over n} t$ are increasing in $n$ on the left interval and decreasing in $n$ on the right interval. This should make it easier to find the right convergence theorem(s) to use on a given interval.
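For a sanity check one can integrate numerically; incidentally, for $|s|<1$ the classical formula $\int_0^{\pi/2}\tan^s t\,\mathrm dt=\frac{\pi}{2\cos(\pi s/2)}$ gives the exact value $x_n=\frac14\sec\frac{\pi}{2n}\to\frac14$. The sketch below (my own, not part of the hint) uses a midpoint rule, which handles the mild integrable singularity at $\pi/2$:

```python
import math

def x_seq(n, steps=200_000):
    """Midpoint rule for x_n = (1/(2*pi)) * Int_0^{pi/2} tan(t)^(1/n) dt."""
    h = (math.pi / 2) / steps
    total = sum(math.tan((i + 0.5) * h) ** (1 / n) for i in range(steps))
    return total * h / (2 * math.pi)

for n in (2, 10, 100):
    exact = 1 / (4 * math.cos(math.pi / (2 * n)))   # (1/4) * sec(pi/(2n))
    assert abs(x_seq(n) - exact) < 1e-3

assert abs(x_seq(200) - 0.25) < 1e-3                # the limit 1/4
```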
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Can a $k$-planar graph have quadratically many maximal cliques? Can a $k$-planar graph have quadratically many maximal cliques?
I know that any planar or $1$-planar graph $G=(V,E)$ has only $O(|V|)$ maximal cliques, but is there anything known about this question for $k>1$?
| As far as I know, this study has only just begun.
The following statement came from Literature [1].
For $k ≥ 3$, not even the maximum number of edges is known for
$n$-vertex $k$-planar graphs, so counting larger cliques is likely to
be an extremely difficult problem.
[1] Gollin J P, Hendrey K, Methuku A, et al. Counting cliques in 1-planar graphs[J]. European Journal of Combinatorics, 2023, 109: 103654.
In fact, Bekos et al. [2] proved that a $3$-planar graph with $n$-vertices has at most $5.5n-10.5$ edges. But we do not know if the bound is tight.
[2] Bekos M A, Kaufmann M, Raftopoulou C N. On the density of non-simple 3-planar graphs[C]//International Symposium on Graph Drawing and Network Visualization. Springer, Cham, 2016: 344-356.
Gollin et al. also give this conjecture:
Conjecture. For $n ≥ 7$, the maximum number of triangles in an $n$-vertex 2-planar graph is at most $(17n − 49)/2$.
The bound in the conjecture is achieved by the 2-planar graphs formed by stitching copies of $K_7$ together. This stitching is possible since $K_7$ has a $2$-drawing with two facial triangles.
An $n$-vertex 1-planar graph with $4n − 8$ edges is called an optimal 1-planar graph. An open problem for optimal 1-planar graphs that I love is as follows:
Problem. Let $n$ and $t$ be positive integers with $t ∈ \{3, 4, 5\}$ and $n ≥ 10$. What is the maximum number of subgraphs isomorphic to $K_t$
in an $n$-vertex 1-planar graph with $4n − 8$ edges?
To address this problem, I recently worked on the properties of optimal 1-plane graphs, although some are not directly relevant. (Some other progress is being organized)
*
*L.C. Zhang, Y.Q. Huang, The reducibility of optimal 1-planar graphs with respect to the lexicographic product [J]. arXiv preprint arXiv:2211.14733, 2022.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of an (Elementary) Inequality Involving Elementary Symmetric Polynomials Let $x_1,\dots, x_n$ be $n$ variables.
For brevity I'll use the following notation: for $I \subset \{1,\dots , n\}$ define
$$
x_I := \prod_{i \in I} x_i.
$$
Let $e_k(x_1,\dots, x_n)$ be an elementary symmetric polynomial. That is
$$
e_k(x_1,\dots, x_n) = \sum_{\substack{I \in \binom{[n]}{k}}} x_I.
$$
Suppose that we have the following conditions on $x_1,\dots, x_n$:
$$
\sum_{i=1}^n x_i = 1 \qquad \text{and} \qquad x_i \geq 0
$$
I have been trying to prove the following inequality without Lagrange multipliers:
$$
e_k(x_1,\dots x_n) \leq \frac{\binom{n}{k}}{n^{k}}
$$
I have had two difficulties:
(1) I have made an (albeit feeble) attempt using AM-GM. Any hints on how one might proceed without Lagrange Multipliers would be very much appreciated.
(2) Additionally, when one does use Lagrange Multipliers I have been having trouble arguing that the maximum does indeed occur when
$$
x_1 = \cdots = x_n = \frac{1}{n}.
$$
Thanks in advance for your help.
| Your inequality follows immediately from Maclaurin's inequality.
Maclaurin's inequality can in turn be proved using Rolle's theorem.
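A brute-force check of the inequality and its equality case (my own sketch; it samples the simplex rather than proving anything):

```python
import itertools, math, random

def e_k(xs, k):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    return sum(math.prod(c) for c in itertools.combinations(xs, k))

random.seed(1)
n, k = 6, 3
bound = math.comb(n, k) / n**k

# Equality at the uniform point x_i = 1/n ...
assert abs(e_k([1 / n] * n, k) - bound) < 1e-12

# ... and random points of the simplex stay below the bound.
for _ in range(2000):
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    assert e_k([r / s for r in raw], k) <= bound + 1e-12
```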
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Use the approximation of $\,f(x)=\ln(1+x)\,$ at $\,x_0=0\,$ to estimate $\,\ln(15)$ I've been trying to solve an exercise from my textbook which goes as following.
For the function $f(x) = \ln(1+x)$
a) compute the Taylor's approximation at $x_0 = 0$
b) use this approximation to estimate $\ln 15$
The confusing part of the exercise for me is the point b)
How do I exactly use the approximation at $x_0 = 0$ to approximate $\ln (15)$?
$$f(x)= \ln(1+x)$$
$$f(x)\approx x-\frac{1}{2}x^2+\frac{1}{3}x^3-\frac{1}{4}x^4$$ This is the approximation at $x_0=0$ To my understanding to approximate $\ln 15$ we need to choose a point which is close to $x=15$ as our $x_0$ in order to approximate the value of $\ln15$. So my question is how do I exactly use this formula of $f(x)=\ln(1+x)$ at $x_0=0$ for $\ln(15)$?
| The Taylor polynomial $T(x)$ that you’re using to approximate $f(x)$ around zero cannot be used to approximate $f$ around $14$. Only $x$ values close to zero would work.
Please check the picture showing the two functions around zero: in red is $f(x)$ and in blue is $T(x)$.
$$\ln \left ( \frac{15}{e^3}\right )=\underbrace{\ln 15-3}_{\approx -0.2919}=-\ln \left ( \frac{e^3}{15}\right )=-\ln \left ( 1+\frac{e^3-15}{15}\right )\approx -T \left ( \frac{e^3-15}{15}\right )\approx -0.2912$$
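The estimate can be replayed numerically (a sketch of mine, using the degree-4 polynomial $T$ from the question):

```python
import math

T = lambda x: x - x**2 / 2 + x**3 / 3 - x**4 / 4   # degree-4 Taylor polynomial of ln(1+x)

# ln 15 = 3 + ln(15/e^3) = 3 - ln(1 + (e^3 - 15)/15), and the argument
# (e^3 - 15)/15 ≈ 0.34 is small enough for the polynomial to be useful.
u = (math.exp(3) - 15) / 15
estimate = 3 - T(u)

assert abs(u) < 0.35                       # the argument is indeed close to 0
assert abs(estimate - math.log(15)) < 1e-3
print(estimate, math.log(15))
```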
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Partial derivatives of $g(x,y,z)=f(x-y,y-z,z-x)$. Let $f$ be a twice differentiable function of variables $u,v,w$ and define $g(x,y,z)=f(x-y,y-z,z-x)$. Determine $\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}+\frac{\partial g}{\partial z}$ and express $\frac{\partial^2 g}{\partial x^2}$ in partial derivatives of $f$.
This is from calc 3. I don't know how to solve this at all but I suspect there is some easy rule to do this. Help would be appreciated.
| Hint: Notice that we can use a tree diagram. Then with a change of variables $g(x,y,z)=f(u,v,w)$ and finally Chain rule.
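A finite-difference check of the first identity (my own sketch, not part of the hint): the chain rule gives $g_x+g_y+g_z=(f_u-f_w)+(f_v-f_u)+(f_w-f_v)=0$ for any smooth $f$.

```python
import math

# any smooth test function f(u, v, w) will do:
f = lambda u, v, w: math.sin(u) + u * v**2 + math.exp(w)
g = lambda x, y, z: f(x - y, y - z, z - x)

def partial(fn, args, i, h=1e-6):
    """Central finite-difference approximation of the i-th partial derivative."""
    a_plus = list(args); a_plus[i] += h
    a_minus = list(args); a_minus[i] -= h
    return (fn(*a_plus) - fn(*a_minus)) / (2 * h)

p = (0.3, -1.2, 0.7)
total = sum(partial(g, p, i) for i in range(3))
assert abs(total) < 1e-6          # g_x + g_y + g_z vanishes identically
```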
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4623097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How to calculate the distance of an object I have two screenshots (1920x1080) of a game, one with a 348-pixel-tall object that is 1 meter distant from the camera, and the other with a 138-pixel-tall version of the same thing. Given that the camera's field of vision is 90 degrees in the second screenshot, how can I precisely measure the object's distance from the camera?
I tried using a formula to determine the object's distance from the camera based on the object's height and camera distance, but the results were inaccurate.
| Precise answer: Without knowing the projection model, this is not possible.
An ad-hoc engineer's answer: One may assume the projection model is a pinhole camera model. This is probably a good assumption, because it is the model with least strange distortions and with geometrically beneficial properties. For a virtual game camera with horizontal* 90° opening angle this assumption seems to be reasonable.
Now you can translate your question into simple equations based on intercept and trigonometric theorems:
(I) w_1 / f = s / d_1
(II) w_2 / f = s / d_2
(III) w_45 / f = tan(45°)
where
f = focal length (unknown)
s = object width (unknown)
d_1 = object distance in image 1 = 1 m
d_2 = object distance in image 2 (unknown)
w_1 = object width in image coordinates in image 1 = 348 px
w_2 = object width in image coordinates in image 2 = 138 px
w_45 = 1920 px / 2
What remains is to solve this equation system:
(I) => f * s = w_1 * d_1
(II) => f * s = w_2 * d_2
___
=> w_1 * d_1 = w_2 * d_2
=> d_2 = w_1 * d_1 / w_2 = 348 px / (138 px) * 1 m = 2.52... m
So you do not even need equation (III). You would need it in case you are interested in focal length f = 960 px or object width s = 0.3625 m.
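The computation can be condensed into a few lines (a sketch assuming the pinhole model above):

```python
import math

W = 1920                      # image width in pixels
fov_deg = 90                  # horizontal field of view
w1, d1 = 348, 1.0             # object size (px) and distance (m) in image 1
w2 = 138                      # object size (px) in image 2

# Pinhole model: w * d = f * s is constant for the same object, hence
d2 = w1 * d1 / w2
print(d2)                     # ≈ 2.52 m

# Equation (III) additionally gives the focal length and the object size:
f = (W / 2) / math.tan(math.radians(fov_deg / 2))   # = 960 px
s = w1 * d1 / f                                     # = 0.3625 m
assert abs(f - 960) < 1e-9 and abs(s - 0.3625) < 1e-9
```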
*I assume it is the horizontal opening angle from what is common and from what you drew in your question.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4623209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove complex integral equality Suppose $\triangle$ is the open unit disk and $\overline{\triangle}$ is its closure (the closed unit disk). Let $f$ be holomorphic in an open set containing the set $D = \mathbb{C} - \overline{\triangle}$ such that $\lim_{z \to \infty} f(z) = 5$. Show for $z \in D$,
$$\frac{1}{2 \pi i} \int_{\partial \triangle} \frac{f(\zeta)}{\zeta -z} d\zeta = 5 - f(z) $$
I tried to use a proof similar to that of Cauchy’s formula, using the keyhole contour with the smaller circle giving the integral above and the radius of the bigger circle going to infinity. However, I don’t know how to show that the contour integral over the bigger circle is $ 2 \pi i ( z -5)$.
| Hint: $f$ has a Laurent expansion $f(\zeta) = 5 + a_1/\zeta+ a_2/\zeta^2 + \ldots$ which converges absolutely for $|\zeta| = 1$. What do you get for your integral with $f(\zeta)$ replaced by each of these terms?
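A numeric illustration of the hint with a sample $f$ of the stated form (the function, the point, and the quadrature are my own choices):

```python
import cmath, math

# f(z) = 5 + 1/z + 2/z^2: holomorphic outside the closed disk, -> 5 at infinity.
f = lambda z: 5 + 1 / z + 2 / z**2
z0 = 2 + 0.5j                                   # a point with |z0| > 1

# Trapezoidal rule on the unit circle (spectrally accurate for periodic integrands).
N = 4096
integral = 0j
for k in range(N):
    zeta = cmath.exp(2j * math.pi * k / N)
    dzeta = 2j * math.pi * zeta / N             # d(zeta) along the circle
    integral += f(zeta) / (zeta - z0) * dzeta
integral /= 2j * math.pi

assert abs(integral - (5 - f(z0))) < 1e-10      # matches 5 - f(z0)
```

Here $5-f(z_0)=-1/z_0-2/z_0^2$, which is exactly the sum of the residues of the Laurent terms at $\zeta=0$, the only pole inside the contour.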
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4623334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Power series representation of the sum of $\sum_{k=0}^\infty(k^3+1) x^k/k!$ So, recently I was searching for, in what seemed like a fairly simple question, what function this sum represents:
$\displaystyle f(x)=\sum_{k=0}^\infty\left(k^3+1\right)\frac{x^k}{k!}$
So the natural thing would be to split the sum into two parts:
$\displaystyle f(x)=\sum_{k=0}^\infty k^3\frac{x^k}{k!}+e^x$
So, with further identification $f(x)$ seemed like a simple answer:
$f(x)=T_3(x)e^x+e^x$
where $T_n$ is the $\,n^{\mathrm{th}}$ Touchard polynomial.
$f(x)=\left(x+3x^2+x^3+1\right)e^x$
What bugged me is that I neither understand Touchard polynomials nor does the question fit the level of knowing them; this is intended as a second-semester undergraduate Calculus II question. So my question is simple: is there a method of finding the sum without having to deal with Touchard polynomials? Is there a way of arriving at the same (or a different, maybe I'm wrong) answer without dealing with them?
| $$
\begin{align}
\sum_{k=0}^\infty k^3 \frac{x^k}{k!} &= x\sum_{k=1}^\infty k^3 \frac{x^{k-1}}{k!} \\
&=x \sum_{k=0}^\infty (k+1)^2 \frac{x^k}{k!} \\
&=x\left(\sum_{k=0}^\infty (k+1)\frac{x^{k+1}}{k!} \right)^\prime \\
&=x\left(\sum_{k=1}^\infty k \frac{x^{k+1}}{k!} +xe^x\right)^\prime \\
&=x\left(x^2e^x + xe^x\right)^\prime \\
&=x(x^2 + 3x +1)e^x
\end{align}
$$
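The derivation can be spot-checked numerically against partial sums of the series (my own check):

```python
import math

def f_series(x, terms=60):
    """Partial sum of sum_k (k^3 + 1) x^k / k!."""
    return sum((k**3 + 1) * x**k / math.factorial(k) for k in range(terms))

# closed form obtained above: x(x^2 + 3x + 1)e^x plus the e^x from the split
f_closed = lambda x: (x**3 + 3 * x**2 + x + 1) * math.exp(x)

for x in (-2.0, -0.5, 0.0, 0.7, 3.0):
    assert abs(f_series(x) - f_closed(x)) < 1e-9
```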
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4623469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
What does the $z-z_0=m(y-y_0)$ equation do in this article? And why is it written into the sphere equation? And how are these 2 equations derived? I'm reading the topic Analytic Treatment of the Perspective View of a Circle in this article. I don't understand the subject, so I decided to analyze it piece by piece.
*
*$z-z_0 = m(y-y_0)$ What is the equation used for?
*And why do they put it inside the sphere equation?
$(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2$
-->
$(x - x_0)^2 + (1 + m^2)(y - y_0)^2 = r^2$
Edit: Finally got it, thanks to your contributions. I just couldn't figure out how the 2 equations at the end were derived; I would appreciate it if you could explain.
$y-y_0=\frac{z_0-vy_0}{v-m}$
$x-x_0=uy-x_0=\frac{u(z_0-my_0)-vx_0+mx_0}{v-m}$
| $z-z_0=m(y-y_0)$ with $x$ arbitrary describes a plane containing the directions $(1,0,0)$ and $(0,1,m)$ (its normal is $(0,-m,1)$); note it passes through the sphere's center. Substituting it into the sphere equation eliminates $z$ and describes the vertical projection onto the $xy$-plane of the circle in which this plane cuts the sphere, namely the ellipse $$(x-x_0)^2+(1+m^2)(y-y_0)^2=r^2.$$
As to the wider context of the question: The equations given describe a scene in 3D space, specifically a circle defined as intersection of a sphere and a plane.
A camera is situated with its lens or hole (for a pinhole camera) at $(0,0,0)$. The physically correct image plane would be behind the lens, but mathematically it makes no difference (except a point-reflection) to use a plane at the same distance in front of the lens.
The forward direction of the camera is here the y-direction, the image plane is positioned orthogonal to that at $y=1$. The optical ray for a point in the image plane is thus $(x,y,z)=(su,s,sv)=(yu,y,yv)$.
Together this results in an over-determined system connecting points in the image plane to the object in the scene
\begin{align}
x&=uy\\
z&=vy\\
z-z_0&=m(y-y_0)\\
r^2&=(x-x_0)^2+(y-y_0)^2+(z-z_0)^2
\end{align}
Use these equations to eliminate $x,y,z$ starting with $x,z$
\begin{align}
vy_0-z_0&=(m-v)(y-y_0)\\
r^2&=(u(y-y_0)+(uy_0-x_0))^2+(y-y_0)^2+(v(y-y_0)+(vy_0-z_0))^2
\end{align}
and then $y$, to get
\begin{align}
r^2(m-v)^2&=(u(vy_0-z_0)+(m-v)(uy_0-x_0))^2+(vy_0-z_0)^2+m^2(vy_0-z_0)^2
\\
&=((vx_0-uz_0)+m(uy_0-x_0))^2+(1+m^2)(vy_0-z_0)^2
\end{align}
for the equation of the projection in the image plane. Now one would need to expand and sort the terms to get the normal form for the resulting quadric.
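The final identity can be spot-checked numerically by sampling points on the intersection circle (the configuration below is my own, not from the article):

```python
import math

# concrete configuration: sphere center, radius, and plane slope
x0, y0, z0, r, m = 0.4, 5.0, 1.3, 2.0, 0.8

def residual(s):
    """lhs - rhs of the projected-quadric identity, for the point of the
    sphere/plane intersection circle at angle s."""
    # the plane z - z0 = m(y - y0) passes through the center, so the
    # intersection is a great circle; an in-plane orthonormal frame is
    # e1 = (1,0,0), e2 = (0,1,m)/sqrt(1+m^2).
    x = x0 + r * math.cos(s)
    y = y0 + r * math.sin(s) / math.sqrt(1 + m * m)
    z = z0 + m * (y - y0)
    assert abs((x - x0)**2 + (y - y0)**2 + (z - z0)**2 - r * r) < 1e-9  # on sphere
    u, v = x / y, z / y                      # image-plane coordinates
    lhs = r * r * (m - v)**2
    rhs = ((v * x0 - u * z0) + m * (u * y0 - x0))**2 + (1 + m * m) * (v * y0 - z0)**2
    return lhs - rhs

for s in (0.0, 0.3, 1.7, 4.0, 5.9):
    assert abs(residual(s)) < 1e-9
```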
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4623614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exercise 13, Section 6.4 of Hoffman’s Linear Algebra
Let $V$ be the space of $n\times n$ matrices over $F$. Let $A$ be a fixed $n \times n$ matrix over $F$. Let $T$ and $U$ be the linear operators on V defined by $T(B)=AB$ and $U(B)=AB-BA$.
(a) True or false? If $A$ is diagonalizable (over $F$), then $T$ is diagonalizable.
(b) True or false? If $A$ is diagonalizable, then $U$ is diagonalizable.
My attempt: (a) Suppose $A$ is diagonalizable. By theorem 6 section 6.4, $A$ is diagonalizable$\iff$$m_A=(x-c_1)\cdots (x-c_k)$, where $c_i\in F$ and $c_i\neq c_j$, if $i\neq j$. By exercise 10 section 6.3, $m_A=m_T$. By theorem 6 section 6.4, $T$ is diagonalizable.
(b) Suppose $A$ is diagonalizable. Let $R:V\to V$ such that $R(B)=BA$. Then $U=T-R$. By (a), $T$ and $R$ are diagonalizable. How to show $T-R$ is diagonalizable?
| Note that $TR=RT$, as for any matrix $B$, $$TR(B)=T(BA)=ABA=R(AB)=RT(B).$$
Therefore, $T$ and $R$ are commuting diagonalizable operators, so they can be simultaneously diagonalized in some basis, and in that basis $T-R$ is diagonal.
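For a diagonal $A$ the conclusion is transparent: the matrix units $E_{ij}$ are eigenvectors of $U(B)=AB-BA$ with eigenvalues $a_i-a_j$. A direct check of this (my own sketch, pure Python; the general diagonalizable case follows by a change of basis):

```python
# With A = diag(a_1, ..., a_n), the operator U(B) = AB - BA satisfies
# U(E_ij) = (a_i - a_j) E_ij, so the E_ij form an eigenbasis of U.
a = [2, -1, 5]
n = len(a)

def commutator(B):
    """AB - BA for the diagonal A above: entry (r, c) is (a_r - a_c) * B[r][c]."""
    return [[(a[r] - a[c]) * B[r][c] for c in range(n)] for r in range(n)]

for i in range(n):
    for j in range(n):
        E = [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]
        expected = [[(a[i] - a[j]) * E[r][c] for c in range(n)] for r in range(n)]
        assert commutator(E) == expected
```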
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4623798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove function $f:\mathbb{R} \rightarrow \mathbb{R}$, $f(x)=e^{x^2}$ is not uniformly continuous. I try this proof by contradiction that
Assuming $f:\mathbb{R} \rightarrow \mathbb{R}$, $f(x)=e^{x^2}$ is uniformly continuous.
By definition we have $\forall \varepsilon>0$ $\exists \delta>0$ $|x-y|<\delta \Rightarrow |f(x)-f(y)|<\varepsilon$.
Choosing $\varepsilon=1$,we have $|x-y|<\delta \Rightarrow |f(x)-f(y)|<1$.
If $x<y$, than $|f(x)-f(y)|=|e^{y^2}-e^{x^2}|<|e^{(x+\delta)^2}-e^{x^2}|=|e^{x^2}(e^{\delta^2+2x\delta}-1)|<1$.
But for $x \rightarrow +\infty$, we have $|e^{x^2}(e^{\delta^2+2x\delta}-1)| \rightarrow +\infty$, which is a contradiction.
For $x>y$ it is the same.
May I ask if this is reasoning correct and sufficient?
Many thanks!
| I am just a student so I hope my answer is correct. But maybe you can try this.
The definition of a no uniformly continuous function is: $\exists \epsilon_0>0, \forall \delta>0 \; s.t. \; \exists |x-y|<\delta \Rightarrow |f(x)-f(y)|\geq \epsilon_0$
So if we find two sequences $x_n,y_n$ taking values in the domain of $f$ with $\lim_{n\rightarrow \infty } |x_n-y_n| =0$ and $\lim_{n\rightarrow \infty }|f(x_n)-f(y_n)|=\infty$, we are done: for any $\delta>0$ you can choose, the pair $(x_n,y_n)$ eventually belongs to the set $S_{\delta}=\left \{ (x,y) : x,y \in \operatorname{dom} f,\ |x-y|<\delta \right \}$, while on the other side $|f(x_n)-f(y_n)|\geq \epsilon_0$ for any $\epsilon_0>0$ you can choose, from a specific $N$ onward (definition of divergence to $+\infty$).
Following this logic, let us choose $x_n=n+1/n$ and $y_n=n$. Then $\lim_{n\rightarrow \infty } |x_n-y_n| = \lim_{n\rightarrow \infty } 1/n=0$, but on the other side $\lim_{n\rightarrow \infty }|e^{(n+1/n)^2}-e^{n^2}|=\infty$.
Indeed: $|e^{(n+\frac{1}{n})^2}-e^{n^2}|=|e^{n^2+\frac{1}{n^2}+2}-e^{n^2}|=e^{n^2}\left(e^{\frac{1}{n^2}+2}-1\right)\underset{n \to \infty }{\longrightarrow} \infty$, since $e^{\frac{1}{n^2}+2}-1\to e^2-1>0$ while $e^{n^2}\to\infty$.
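A numeric illustration of the two sequences (my own; small $n$ only, to avoid floating-point overflow of $e^{x^2}$):

```python
import math

# x_n = n + 1/n and y_n = n get arbitrarily close, yet f(x_n) - f(y_n) blows up:
f = lambda x: math.exp(x**2)

gaps, jumps = [], []
for n in range(1, 6):
    x_n, y_n = n + 1 / n, n
    gaps.append(abs(x_n - y_n))           # = 1/n -> 0
    jumps.append(f(x_n) - f(y_n))         # -> infinity

assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))     # gaps shrink
assert all(j2 > j1 for j1, j2 in zip(jumps, jumps[1:]))   # jumps grow
```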
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4624368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A Set of $n$ Numbers Where Any $k$ Taken Together Sum to a Prime Not too long ago, I saw an interesting question on this site that was asking about the maximum value of $n$ if there exist $n$ numbers such that any $3$ of them taken together sum to a prime. However, before I could answer it, the question was deleted. I am interested in a generalisation of this question. Hence, my question is:
Let $k$ be an arbitrary natural number. What is largest natural number $n\ge k$ such that there are $n$ natural numbers from which any arbitrary selection of $k$ numbers add up to a prime?
Small values of $k:$
$k=1:$
Here, $n$ can be any natural number. We just have to take a set of $n$ primes.
$k=2:$ This is where things start to get interesting. Since if two numbers sum up to an odd prime, they must be of different parity, we get that the maximum value of $n$ is $2.$
$k=3:$ Taking modulo $3$, we get that in this set of $n$ numbers, we can have remainders $0,1,2.$ Since no $3$ of them taken together must be divisible by $3,$ we get that $n≤6.$ If $n$ were greater than $6$, then we could find $3$ numbers with the same remainder modulo $3.$ But, we can do better. Suppose $S_3$ is an ordered $6$-tuple of $6$ numbers taken modulo $3$. Let $S_3=(0,1,2,0,1,2).$ Note that if $S_3$ were anything else, then too, we could find $3$ numbers such that they have the same remainder modulo $3.$ Hence, if we prove that this set isn't possible, we will be done with the $n=6$ case. In $S_3,$ we can pick the first three numbers. They have remainders $0,1,2.$ Since their sum is $0$ modulo $3,$ this sum is either a multiple of $3$ greater than $3$, or $3$ itself. But, this sum cannot be $3$ as there is only one way to partition $3$ into $3$ natural numbers: $1+1+1.$ Since this sum is then a multiple of $3$ greater than $3$, and hence cannot be prime, we are done. So, $n≤5.$ With a bit of casework, it's not too tough to show that $n=5$ isn't possible either. So, $n≤4.$
$k≥4:$ This is where I need your help. Following the trend, I took modulo $k.$ I have a sort of upper bound for $n.$ Again, let $S_k$ be an ordered $n$-tuple of $n$ numbers taken modulo $k.$ Now, $S_k$ cannot have the block $(0,1,2,\ldots,k-1)$ repeating $k$ times. If it did have this block $k$ times, we would again be able to find a number that's divisible by $k.$ In fact, by the same argument, we get that this block can be present at most $k-1$ times. Hence, $n≤k(k-1).$ But, like we saw before, we may be able to do better than $n≤k(k-1).$ Note that I am using the block $(0,1,2,\ldots,k-1)$ to find the upper bound, as if the block were anything else, we would find $k$ numbers with the same remainder modulo $k$ quicker.
| Here's a partial answer.
The maximum possible value of $n$ is $k$ if $k$ is even. If $k$ is odd, then $n\leq 2k-2$.
We first treat the case where $k$ is even. In this case, it is clearly possible to get $n=k$, since there exists a prime number exceeding $k(k+1)/2$, which is thus the sum of $k$ distinct natural numbers. So, it suffices to show that, given any set $S$ of $k+1$ natural numbers, some subset of $S$ with size $k$ has composite sum. Let $s$ be the sum of the elements of $S$, so that the subsets of $S$ with size $k$ have sums $s-a$ for every $a\in S$. Since any such sum is the sum of $k$ distinct positive integers, it is strictly greater than $k$, implying that $s-a$ is odd for all $a\in S$. However, if $S=\{a_1,\dots,a_{k+1}\}$, then
$$s=a_1+\cdots+a_{k+1}\equiv (s-1)+\cdots+(s-1)=(k+1)(s-1)\equiv s-1\pmod 2,$$
a contradiction. (We have used the fact that $k$ is even in the last step).
In the case where $k$ is odd, we have that any set of $2k-1$ integers contains a subset of size $k$ whose sum is divisible by $k$ (this is the Erdős–Ginzburg–Ziv theorem). So, $n$ must be at most $2k-2$. If $n=2k-2$, there will be no "modulo $k$ obstruction," as we can choose our set to consist of $k-1$ numbers which are divisible by $k$ and $k-1$ numbers which are $1\pmod k$, but there may be other obstructions.
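As a sanity check on the $k=3$ discussion, a short brute-force script can confirm that $n=4$ is attainable: the set $\{1,3,7,9\}$ (my own example, not from the post) has all four $3$-element sums prime. A sketch in Python:

```python
from itertools import combinations

def is_prime(m):
    # trial division, fine for small sums
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def all_k_sums_prime(nums, k):
    # True iff every k-element subset of nums has a prime sum
    return all(is_prime(sum(c)) for c in combinations(nums, k))

example = [1, 3, 7, 9]   # sums of any 3: 11, 13, 17, 19, all prime
ok = all_k_sums_prime(example, 3)
```

Of course, a search like this can only verify candidate sets; the upper bounds still need the modular arguments above.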
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4624593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Showing that the Fourier transform of a finite Borel measure $\mu$ with compact support is Lipschitz continuous function in $\mathbb{R}^n$ Let $\mu$ be a finite Borel measure on $\mathbb{R}^n$ with compact support, that is
$$\mathrm{supp}(\mu) = \{x\in X\mid \forall N_x\in \tau:(x\in N_x\implies \mu(N_x) > 0)\}$$
Here is a related article on the support.
Define the Fourier transform of $\mu$ as $\hat{\mu}(\xi) = \int_{\mathbb{R}^n}e^{-2\pi i \xi \cdot x}\,d\mu(x), \xi \in \mathbb{R}^n$. It is clear to me that $||\hat{\mu}||_\infty \leq \mu(\mathbb{R}^n)$, but I am having trouble proving that $\hat{\mu}$ is Lipschitz continuous. Namely, suppose that $\mathrm{supp}(\mu)\subset B(0, R)$. My textbook says that then
$$|\hat{\mu}(x) - \hat{\mu}(y)| \leq R\mu(\mathbb{R}^n)|x - y|$$
for $x,y\in\mathbb{R}^n$. What is not clear to me is how you can massage $|x - y|$ out of $e^{-2\pi i x\cdot \xi} - e^{-2\pi i y\cdot \xi}$ for a fixed $\xi\in\mathbb{R}^n$. In the single-variable case, one way to obtain a Lipschitz upper bound would be to use the triangle inequality and apply the mean value theorem to the cosine and sine terms. But I don't think we can apply the same strategy directly in our case, since an approximation in one of the iterated integrals such as
$$|\cos(2\pi\left(x_k\xi_k + \text{rest}\right)) - \cos(2\pi\left(y_k\xi_k + \text{rest}\right))| \leq |\xi_k||x_k - y_k|$$
will not produce the desired inequality.
| $|e^{ia}-e^{ib}|\leq |a-b|$ for all real numbers $a$ and $b$.
$|(-2\pi x\cdot\xi_1)-(-2\pi x\cdot\xi_2)|\leq 2\pi |x\cdot\xi_1-x\cdot\xi_2|\leq 2\pi \|x\|\|\xi_1-\xi_2\|$ and there is a bound for $\|x\|$ on the support.
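The first inequality, $|e^{ia}-e^{ib}|\le|a-b|$, can be spot-checked numerically (this only illustrates the bound, it is not a proof; all names are mine):

```python
import math
import random

random.seed(0)

def gap(a, b):
    # |e^{ia} - e^{ib}| via real and imaginary parts
    za = complex(math.cos(a), math.sin(a))
    zb = complex(math.cos(b), math.sin(b))
    return abs(za - zb)

pairs = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(10000)]
ok = all(gap(a, b) <= abs(a - b) + 1e-12 for a, b in pairs)
```

Geometrically this says a chord of the unit circle is no longer than the arc it subtends.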
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4624745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
All natural number solutions for the equation $a^2+b^2=2c^2$ $a$, $b$ and $c$ of all Pythagorean triplets can be written in the form
$$
\begin{split}
a &= 2mn\\
b &= m^2-n^2 \\
c &= m^2+n^2
\end{split}
$$
where $m$ and $n$ are natural numbers. For any natural number $m$ and $n$, this set of equations will give a Pythagorean triplet. And all Pythagorean triplets satisfy this set of equations.
Can $a$, $b$ and $c$ of all triplets satisfying the equation $$a^2+b^2=2c^2$$ where $a$, $b$ and $c$ are natural numbers, be written as a set of equations as for the Pythagorean triplets?
So, I need a set of equations that generates triplets that satisfy the equation $a^2+b^2=2c^2$ for any natural numbers I plug into the set of equations. Also, every natural number triplets satisfying the equation $a^2+b^2=2c^2$ must satisfy the set of equations.
I tried to derive the set of equations myself, no attempts have been successful yet.
I would like to have the proof of the set of equations, (otherwise I won't know if every triple will satisfy the set of equations)
Any comments that helps to give an insight into solving the problem are really appreciated.
| Hint: For $a^2+b^2=2c^2$, observe that $a, b$ have the same parity. Therefore there exist integers $u, v$ such that $a = u+v$ and $b = u-v$. Expand...
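Following the hint: with $a=u+v$, $b=u-v$ the equation becomes $u^2+v^2=c^2$, so every Pythagorean triple $(u,v,c)$ produces a solution. A quick generator using the parameterization from the question (variable names mine):

```python
def two_square_triples(max_m):
    """Generate (a, b, c) with a^2 + b^2 = 2*c^2 from Pythagorean triples."""
    out = []
    for m in range(2, max_m + 1):
        for n in range(1, m):
            u, v, c = m*m - n*n, 2*m*n, m*m + n*n   # u^2 + v^2 = c^2
            a, b = u + v, abs(u - v)                # then a^2 + b^2 = 2*c^2
            out.append((a, b, c))
    return out

sols = two_square_triples(5)   # e.g. (3,4,5) yields (7,1,5): 49 + 1 = 2*25
```

Whether this reaches every solution depends on completing the hint's argument (the substitution is available exactly because $a$ and $b$ must have the same parity).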
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4624896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
$\lim\limits_{x\to \infty}[f(x)-f(x-1)]\overset{?}{=}e$ Let :
$$f\left(x\right)=\int_{0}^{\lfloor x\rfloor}\prod_{n=1}^{\lfloor x\rfloor}\frac{\left(y+2n\right)\ln\left(y+2n-1\right)}{\left(y+2n-1\right)\ln\left(y+2n\right)}dy$$
Conjecture:
$$\lim_{x\to \infty}\left[f(x)-f(x-1)\right]=e$$
We have for $x=150000$:
$$f(x)-f(x-1)\approx 2.714446$$
I cannot proceed further because Desmos is a bit unfriendly.
How I came up with this conjecture:
Have a look to Andersson's inequality https://www.sciencedirect.com/science/article/pii/S0893965905003666 :
$$\int_{0}^{1}f_{1}\left(x\right)f_{2}\left(x\right)...f_{n}\left(x\right)dx\ge\frac{2n}{n+1}\left(\int_{0}^{1}f_{1}\left(x\right)dx\right)\left(\int_{0}^{1}f_{2}\left(x\right)dx\right)...\left(\int_{0}^{1}f_{n}\left(x\right)dx\right)$$
here :
$$f_{n}\left(x\right)=\frac{\left(x+2n\right)\ln\left(x+2n-1\right)}{\left(x+2n-1\right)\ln\left(x+2n\right)}$$
It doesn't fulfill the constraint $f_n(0)=0$, nor the convexity requirement for small $n$ on $x\in[0,1]$.
On the other hand, the use of the floor function is just a practical device for graphing on Desmos (free software), highlighting the probable existence of an asymptote.
Does it converge? If yes, is it $e$?
A counter-example is also welcome!
Ps : Something weird should be :$\lim_{x\to\infty}f(x)-f(x-1)=\frac{egg}{gg}$
| Here is a weaker result which is still enough to show that the limit is not $e$:
Claim. We have
$$ \lim_{n\to\infty} \frac{f(n)}{n} = \sqrt{3} + 2\log(1+\sqrt{3}) - \log 2 \approx 3.04901 $$
Proof. Let $n$ be a positive integer. Then
$$ f(n) = \int_{0}^{n} \prod_{j=1}^{n} \frac{\log(y+2j-1)}{\log(y+2j)} \frac{y+2j}{y+2j-1} \, \mathrm{d}y. $$
Substituting $y = nt$, the integral is recast as
\begin{align*}
f(n)
&= n \int_{0}^{1} g_n(t) \, \mathrm{d}t,
\qquad g_n(t) := \prod_{j=1}^{n} \frac{\log(nt+2j-1)}{\log(nt+2j)} \frac{nt+2j}{nt+2j-1}. \tag{*}
\end{align*}
Using the inequality $\frac{1}{1-x} \leq \exp(x + x^2) $ for $x \in [0, \frac{1}{2}]$, we find that
\begin{align*}
g_n(t)
&= \prod_{j=1}^{n} \frac{\log(nt+2j-1)}{\log(nt+2j)} \frac{1}{1 - \frac{1}{nt+2j}} \\
&\leq \exp\left[ \sum_{j=1}^{n} \left( \frac{1}{nt+2j} + \frac{1}{(nt+2j)^2} \right) \right] \\
&\leq \exp\left[ \int_{0}^{n} \frac{1}{nt+2s} \, \mathrm{d}s + \sum_{j=1}^{\infty} \frac{1}{(2j)^2} \right] \\
&= \exp\left[ \frac{1}{2} \log \left(\frac{t+2}{t}\right) + \frac{\zeta(2)}{4} \right].
\end{align*}
This proves that $g_n(t) \leq Ct^{-1/2}$ uniformly in $n$, and so, we can apply the dominated convergence theorem provided $g_n(t)$ converges pointwise as $n \to \infty$. However, for each fixed $t \in (0, 1]$,
\begin{align*}
g_n(t)
&= \prod_{j=1}^{n} \biggl( 1 + \frac{\log(1 - \frac{1}{nt + 2j})}{\log(nt+2j)} \biggr) \frac{1}{1 - \frac{1}{nt+2j}} \\
&= \exp \left[ \sum_{j=1}^{n} \biggl( \frac{1}{nt + 2j} + \mathcal{O}\biggl( \frac{1}{n \log n} \biggr) \biggr) \right] \\
&= \exp \left[ \sum_{j=1}^{n} \frac{1}{t + 2(j/n)} \frac{1}{n} + \mathcal{O}\left( \frac{1}{\log n} \right) \right] \\
&\to \exp\left( \int_{0}^{1} \frac{\mathrm{d}s}{t + 2s} \right)
= \sqrt{\frac{t + 2}{t}}.
\end{align*}
Therefore, by the dominated convergence theorem,
$$ \lim_{n\to\infty} \frac{f(n)}{n}
= \int_{0}^{1} \sqrt{\frac{t + 2}{t}} \, \mathrm{d}t
= \boxed{\sqrt{3} + 2\log(1+\sqrt{3}) - \log 2}. $$
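The closed-form constant can be checked numerically. Substituting $t=s^2$ removes the singularity at $t=0$: $\int_0^1\sqrt{(t+2)/t}\,dt = 2\int_0^1\sqrt{s^2+2}\,ds$. A sketch (Simpson's rule; names mine):

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# after t = s^2:  integral_0^1 sqrt((t+2)/t) dt = 2 * integral_0^1 sqrt(s^2+2) ds
numeric = 2 * simpson(lambda s: math.sqrt(s * s + 2), 0.0, 1.0)
closed = math.sqrt(3) + 2 * math.log(1 + math.sqrt(3)) - math.log(2)
```

Note also that if $f(n)-f(n-1)$ converged, its limit would have to equal $\lim_n f(n)/n \approx 3.049$, not $e\approx 2.718$.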
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4625047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit of the sequence $\lim_{n \to \infty}{\frac{\log(n^3-7)}{\sqrt{n}+2/\sqrt{n}}}$ Is there a way to prove this limit mathematically without using l'Hôpital's rule? I think I can't use it on sequences. I know I can say that the log function goes to infinity more slowly than the denominator, but is that the only way I can prove this?
$$\lim_{n \to \infty}{\frac{\log(n^3-7)}{\sqrt{n}+2/\sqrt{n}}=0}$$
| Let $a_n = \frac{\log(n^3-7)}{\sqrt{n}+2/\sqrt{n}}$. Then
$$0 \leq a_n \leq \frac{\log(n^3)}{\sqrt{n}} = 3\frac{\log(n)}{\sqrt{n}}.$$
So we may instead show that $\lim_{n\to \infty} b_n=0$ where
$$b_n = \frac{\log(n)}{\sqrt{n}}.$$ To show that $\lim_{n\to \infty} b_n=0$ note that for any $0 < p < 1$ we can bound
$$\log(n) = \int_1^n \frac{1}{x} \, dx \leq \int_1^n \frac{1}{x^p}\, dx = \frac{1}{1-p} (n^{1-p} - 1) \leq \frac{1}{1-p}n^{1-p}.$$
Setting, for example, $p=\frac34$, we have
$$\log(n) \leq 4 \sqrt[4]{n},$$
and so
$$0\leq b_n = \frac{\log(n)}{\sqrt{n}} \leq \frac{4}{\sqrt[4]{n}},$$
from which it follows that $\lim_{n\to \infty} b_n=0$.
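Both the bound $\log(n)\le 4\sqrt[4]{n}$ (the $p=\frac34$ choice above) and the resulting decay can be spot-checked numerically (illustration only; names mine):

```python
import math

# check log(n) <= 4 * n^(1/4) over a range of n
bound_ok = all(math.log(n) <= 4 * n ** 0.25 for n in range(1, 100001))

# b_n = log(n)/sqrt(n) at a few increasing n; should decrease toward 0
b = [math.log(n) / math.sqrt(n) for n in (10, 100, 10**4, 10**8)]
```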
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4625211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Solve the recurrence relation for $T(n) = 2T(n/2) + n − 1, n > 1, T(1) = 0, n = 2^k$ Even though it's pretty similar to other questions, I'm confusing myself with the answer because I always end up with:
$$
T(2^k)=\log_2n-\log_2n
$$
which doesn't seem right at all. Plus, I'm not sure whether the final big theta notation would be $\Theta(\log_2n)$ or $\Theta(1)$.
I don't know what else to do. Any help would be appreciated.
EDIT: Here's the obvious extremely wrong attempt:
\begin{align*}
A(2^k )&=2A(2^{k-1} )+2^k-1\\
&=(2A(2^{k-2} )+2^k-1)+2^k-1
&&=2A(2^{k-2} )+2(2^k)-2\\
&=(2A(2^{k-3} )+2^k-1)+2(2^k)-2
&&=2A(2^{k-3} )+3(2^k)-3
\end{align*}
$$
A(2^k)=2A(2^{k-i} )+i(2^k)-i
$$
sub in $i = k$:
\begin{align*}
A(2^k)&=2A(2^{k-k} )+k(2^k)-k \\
&=2A(2^0 )+k(2^k)-k && \\
&=\log_2n-\log_2n\\
\end{align*}
| You wrote
$$\color{red}{2A(2^{k-1} )}+2^k-1=\color{red}{(2A(2^{k-2} )+2^k-1)}+2^k-1.$$
But you seem to assume $2A(2^{k-1}) = 2A(2^{k-2}) + 2^k - 1$, which is incorrect.
It should be
\begin{align*}
2 \cdot A(2^{k-1}) &= 2 \cdot (2A(2^{k-2}) + 2^{k-1} - 1) \\
&= 4A(2^{k-2}) + 2^k - 2.
\end{align*}
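Carrying the corrected unrolling to the end gives (my derivation, worth double-checking) $T(2^k)=k\cdot 2^k-2^k+1$, i.e. $T(n)=n\log_2 n-n+1$ for $n=2^k$, so the answer is $\Theta(n\log n)$, not $\Theta(\log n)$ or $\Theta(1)$. A quick check:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 0, T(n) = 2*T(n/2) + n - 1 for n = 2^k
    return 0 if n == 1 else 2 * T(n // 2) + n - 1

# closed form for n = 2^k: T(n) = k*2^k - 2^k + 1
checks = [(T(2**k), k * 2**k - 2**k + 1) for k in range(15)]
```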
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4625310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many different sets of positive integers $(a, b, c, d)$ are there such that $a \lt b \lt c \lt d$ and $a + b + c + d = 41$? Is there a general formula which I can use to calculate this and if it's with proof or reasoning would be great as well. Even if you could please solve this $4$-variable ordered set of positive integers case would be a great help indeed. Sorry I am weak in this topic.
Thanks Robert for your pointers in modifying it to
$4x_1 + 3x_2 + 2x_3 + x_4 = 41$, where there are no additional constraints on the positive integers $x_i$
To solve this must do it manually by cases or is there a short way of calculating it using "Bars and Stars" (of which I am aware of the method)? The answer given is $321$.
But when I calculate it like this it doesn't seem to match the answer:
imagine the given equation as containing $(4+3+2+1) = 10$ $x_i$'s individually, so it is equivalent to finding the number of positive integer solutions of $b_1 + b_2 + ... + b_{10} = 41$, which is $C(41 - 1, 10 - 1) = C(40, 9)$. Now since we have $4$ $x_1$'s that are identical, we need to divide this by $4!$ (for its permutations of "overcounting"), and similarly divide next by $3!$ (for the $3$ $x_2$'s that are equal), and $2!$ for the $2$ $x_3$'s that are equal. How is my approach wrong here? Anybody, please help me? Thanks again.
As per Robert's transformation with Anil's suggestion of the generating functions method, here's my work on it:
I actually used an online calculator called "Sage Math" to do this via this website:
https://www.whitman.edu/mathematics/cgt_online/book/section03.01.html
With these modifications
$f(x)=(1-x^4)^{-1}\times(1-x^3)^{-1}\times(1-x^2)^{-1}\times(1-x)^{-1}$
`show(taylor(f, x, 0, 33))`
And have gotten the verification that $321$ is indeed the answer after some tedious algebra. (Please note that since we are looking for the coefficient of $x^{41}$ in the expansion Anil wrote, which has $x^{4+3+2+1} = x^{10}$ factored outside, in the remaining Newton's binomial expansions with negative exponent we only need to determine the coefficient of $x^{41-10}= x^{31}$, which is "$321$" as the answer suggested.)
| Thanks to some info from OEIS, and some help from Mathworld, we have the following:
Let the number of partitions of $n$ into $k$ distinct parts be $Q(n, k)$. Then $Q(n,k)$ has the recurrence:
$$Q(n,k) = Q(n-k, k) + Q(n-k, k-1)$$
where $Q(n,1) = 1, Q(n,0) = 0$.
If you prefer, we have a (possibly easier?) method: $Q(n, k) = P(n- \binom{k}{2}, k)$, where $P(n,k)$ has the recurrence:
$$P(n,k) = P(n-1, k-1) + P(n-k, k)$$
where $P(n,1) = P(n,n) = 1$. We'll want $P(35,4)$ of course. Either way, the OEIS link gives $Q(41,4) = P(35,4) = 321$.
If you must have a closed-form function, Mathworld has you... "covered," in the loosest sense of the term, and only for $P(n,k)$ up to $k=4$. The function involved is
$$P(n,4) = \frac1{864} \left\{ 3(n+1)\left[2n(n+2)-13+9(-1)^n \right] - 96 \cos \frac{2 \pi n}{3} + 108\,(-1)^{n/2}\operatorname{mod}(n+1,2) + 32 \sqrt{3} \sin \frac{2 \pi n}{3}\right\}$$
And yes, for $n=35$ it does in fact evaluate to $321$.
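The recurrence is easy to run mechanically. A sketch implementing $Q(n,k)$ with the base cases spelled out in full (my base-case choices, $Q(0,0)=1$ and $Q(n,k)=0$ for $n\le 0$ unless $n=k=0$, reproduce the values above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Q(n, k):
    # number of partitions of n into k distinct positive parts
    if k == 0:
        return 1 if n == 0 else 0
    if n <= 0:
        return 0
    return Q(n - k, k) + Q(n - k, k - 1)

ans = Q(41, 4)   # the count asked about in the question
```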
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4625992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Generating the whole sigma algebra with two independent sigma algebras Let $(\Omega,\mathcal A,\mathsf P)$ be a probability space. Let $\mathcal B\subset\mathcal A$ be a sub-$\sigma$-algebra of $\mathcal A$. Does there always exist a sub-$\sigma$-algebra $\mathcal C$ of $\mathcal A$ such that $\mathcal B$ is independent of $\mathcal C$ and such that $\sigma(\mathcal B\cup\mathcal C)=\mathcal A$ ?
This question is motivated by the Definition of causal graphs, where a random variable $X$ is said to "cause" another random variable $Y$ if there exists a measurable function $f$ and a random variable $E$ independent of $X$ such that $Y = f(X, E)$. I was wondering if any two random variables always "cause" each other.
| The answer is no: As a simple counter-example, consider
$$(\Omega,\mathcal A,\mathsf P)=(\{1,2,3\}, \mathfrak P(\{1,2,3\}), \text{Uniform}(\{1,2,3\})),$$ where $\mathfrak P(\{1,2,3\})$ denotes the power set of $\{1,2,3\}$ and $ \text{Uniform}(\{1,2,3\})$ denotes the uniform measure over $\{1,2,3\}$.
Choose now $\mathcal B=\{\emptyset,\{1\},\{2,3\},\Omega\}$. Then any sigma-algebra $\mathcal C$ such that $\sigma(\mathcal B\cup\mathcal C)=\mathcal A$ must contain either $\{2\}$ or $\{1,2\}$ (or, symmetrically, $\{3\}$ or $\{1,3\}$) so that $\{2\}\in\sigma(\mathcal B\cup\mathcal C)$. But such a $\mathcal C$ cannot be independent of $\mathcal B$, since we then have either $$\mathsf P(\{1\}\cap\{2\})=0\neq \mathsf P(\{1\})\mathsf P(\{2\})$$ or $$\mathsf P(\{1\}\cap\{1,2\})=\frac 13\neq \frac 29 = \mathsf P(\{1\})\mathsf P(\{1,2\}).$$
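Since everything here is finite, the counterexample can also be verified exhaustively by machine: enumerate every $\sigma$-algebra on $\{1,2,3\}$ and check that none is both independent of $\mathcal B$ and generates, together with $\mathcal B$, the full power set. A sketch (all names mine):

```python
from itertools import chain, combinations
from fractions import Fraction

OMEGA = frozenset({1, 2, 3})
SUBSETS = [frozenset(s) for s in
           chain.from_iterable(combinations([1, 2, 3], r) for r in range(4))]
ALL = frozenset(SUBSETS)          # the full power set

def P(A):
    return Fraction(len(A), 3)    # uniform measure

def is_sigma_algebra(F):
    return (frozenset() in F and OMEGA in F
            and all(OMEGA - A in F for A in F)
            and all(A | B in F for A in F for B in F))

def generated(sets):
    # close under complement and (finite) union
    G = set(sets) | {frozenset(), OMEGA}
    while True:
        new = {OMEGA - A for A in G} | {A | B for A in G for B in G}
        if new <= G:
            return frozenset(G)
        G |= new

def independent(F, G):
    return all(P(A & C) == P(A) * P(C) for A in F for C in G)

B = frozenset({frozenset(), frozenset({1}), frozenset({2, 3}), OMEGA})

sigma_algebras = []
for r in range(len(SUBSETS) + 1):
    for c in combinations(SUBSETS, r):
        F = frozenset(c)
        if is_sigma_algebra(F):
            sigma_algebras.append(F)

bad = [C for C in sigma_algebras
       if generated(B | C) == ALL and independent(B, C)]
```

The five $\sigma$-algebras found correspond to the five partitions of $\{1,2,3\}$, and `bad` being empty confirms that no choice of $\mathcal C$ works.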
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4626169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving $\frac{1}{x^2}$ is continuous. I wanted to prove that the function $f:\mathbb{R}\rightarrow\mathbb{R}$ given by $f(x)=\frac{1}{x^2}$ is continuous for all $x\in\mathbb{R}$ excluding x=0 of course. The proof goes as follows.
Let $\epsilon>0$ be given arbitrarily and choose $\delta=\min(1,\frac{c}{2},\frac{c^4\epsilon}{4(1+2|c|)})$. Assume that $x,c\in\mathbb{R}$ satisfy $0<|x-c|<\delta$. It follows:
$$|f(x)-f(c)|=\left|\frac{1}{x^2}-\frac{1}{c^2}\right|=\left|\frac{x^2-c^2}{x^2c^2}\right|=\frac{|x-c||x+c|}{x^2c^2}=\frac{|x-c||(x-c)+2c|}{x^2c^2}\leq\frac{|x-c|(|x-c|+2|c|)}{x^2c^2}$$ $$ \leq\frac{|x-c|(1+2|c|)}{x^2c^2}<\frac{4|x-c|(1+2|c|)}{c^2c^2}<4\delta\frac{1+2|c|}{c^4}\leq\epsilon$$
Is this proof valid and in particular is the delta I chose fine and if not provide some tips please.
| Let $\epsilon>0$ be arbitrary. We will consider $c>0$ (for $c<0$ we can follow similar logic).
$$|f(x)-f(c)|=\left|\frac{1}{x^2}-\frac{1}{c^2}\right|=\left|\frac{x^2-c^2}{x^2c^2}\right|=|x-c|\frac{|x+c|}{x^2c^2}$$ From here, we need to find bounds for $x$ and $|x+c|$ that do not depend on $x$. Let $\delta_1=0.5c$. Then $|x-c|<\delta_1 \implies 0.5c < x < 1.5c$ and $1.5c < x+c < 2.5c$. We have for $|x-c|<\delta_1$:
$$|f(x)-f(c)|=|x-c|\frac{|x+c|}{x^2c^2} < \delta_1 \frac{2.5c}{0.25c^4}=\frac{10\delta_1}{c^3}$$
Next, pick $\delta=\min\{\delta_1, \frac{c^3\epsilon}{10}\}$; then for $|x-c| < \delta$ we have $|f(x)-f(c)| < \epsilon$.
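Neither choice of $\delta$ is easy to eyeball, so here is a numerical spot-check of the answer's $\delta=\min\{c/2,\,c^3\epsilon/10\}$ for a few positive $c$ (illustration only, not a proof; names mine):

```python
def delta(c, eps):
    # the answer's choice, valid for c > 0
    return min(0.5 * c, c**3 * eps / 10)

def worst_gap(c, eps, samples=10001):
    d = delta(c, eps)
    f = lambda t: 1.0 / (t * t)
    worst = 0.0
    for i in range(1, samples):
        x = (c - d) + 2 * d * i / samples      # x in the open interval (c-d, c+d)
        if x != c:
            worst = max(worst, abs(f(x) - f(c)))
    return worst

checks = [worst_gap(c, eps) < eps
          for c in (0.5, 1.0, 3.0) for eps in (0.1, 0.01)]
```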
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4626498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Converting angles to the range $ -180$ deg through $180$ deg Is there a direct formula to transform a set of angles to $-180$ through $180$ deg? For instance, $181$ deg and $541$ deg should be translated to $-179$, while $-182$ deg and $-542$ deg should be translated to $178$. I know that for an angle $X$ which is not on any axis ($0, 90, 270, 360, ...$), when $\text{mod}(a,b)$ is the remainder of division of $a$ by $b$, we can write use $k = \lfloor(\frac{\text{mod}(X,360)}{90})\rfloor$ to determine the quadrant where angle $X$ resides since $k = 0, 1, 2, 3$ corresponds to quadrants $1, 2, 3, 4$ respectively. So for angle $X$:
$k = 0$ or $1$: $\text{mod}(X,360)$ gives the $0$ through $180$ range.
$k = 2$ or $3$: $\text{mod}(X,360)-360$ gives $-180$ through $0$ range.
Is there a way to make this simpler?
| In your notation, $\operatorname{mod}(x,360)$ would put it in the range $0$ to $360$, and therefore, we can get what you need by $\operatorname{mod}(x+180,360)-180$, sending $x$ to the range $-180$ to $180$.
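In code the one-liner looks as follows (note Python's `%` already returns a value in $[0,360)$ for a positive modulus; in languages like C or JavaScript, where `%` can be negative, an extra adjustment is needed). Names mine:

```python
def wrap180(x):
    # map an angle in degrees to the half-open range [-180, 180)
    return (x + 180) % 360 - 180

examples = {x: wrap180(x) for x in (181, 541, -182, -542)}
```

One caveat: exactly $\pm 180$ both map to $-180$, since the output range is half-open.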
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4626716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Show that the sequence $a_n=1-\frac13+\frac{1}{3^2}-...+(-1)^n\frac{1}{3^n}$ is bounded. Show that the sequence ${a_n}$ where $$a_n=1-\dfrac13+\dfrac{1}{3^2}-...+(-1)^n\dfrac{1}{3^n}$$ is bounded.
The first thing that came to my mind was to see if the sequence is monotone. If I am right, the $(n+1)th$ term should look like this: $$a_{n+1}=1-\dfrac13+\dfrac{1}{3^2}-...+(-1)^{n+1}\dfrac{1}{3^{n+1}}$$ and then the difference $a_{n+1}-a_n$ is $$a_{n+1}-a_n=\dfrac{(-1)^{n+1}}{3^{n+1}},$$ the sign of which depends on $n$. If we write the first terms $$a_1=1;a_2=\dfrac{2}{3}=\dfrac{6}{9},a_3=\dfrac{7}{9},$$ we can see that the sequence isn't monotone.
Another thing that I noted is that we actually have the sum of a geometric sequence with first term $b_1=1$ and common ratio $-\dfrac13$. That is, for the general term, the sum is
| You have shown that
$$
a_n = \frac{\left(-\frac13\right)^n-1}{-\frac43}
$$
(Note the $a_n$ here, not $S_n$; the $a_n$ themselves are the partial sums of a geometric series.) It is not difficult to see that $\left|\left(-\frac13\right)^n\right|\leq 1$, and together with the triangle inequality we get
$$
|a_n| = \left|\frac{\left(-\frac13\right)^n-1}{-\frac43}\right| \leq \frac{\left|\left(-\frac13\right)^n\right| + |-1|}{\left|-\frac43\right|}\leq \frac{1 + 1}{\frac43} = \frac32
$$
which clearly means it's bounded.
Alternately, we can use the monotonicity of every other term. Let $b_n = a_{2n}$ and $c_n = a_{2n+1}$ be the two resulting subsequences. Then
$$
b_{n+1} - b_n = a_{2n + 2} - a_{2n} = (-1)^{2n+2}\frac{1}{3^{2n+2}} + (-1)^{2n+1}\frac1{3^{2n+1}}\\
= \frac{1}{3^{2n+1}}\left(\frac13 - 1\right) < 0
$$
so $b_n$ is monotonically decreasing. Similarly we get that $c_n$ is monotonically increasing.
Finally, note that for any $n$ we have $c_n < b_n$. Which by the above monotonicity means that we must have
$$
c_0 \leq c_n < b_n \leq b_0
$$
Since every $a_m$ is either a $c_n$ or a $b_n$ for some $n$, we have $c_0 < a_m < b_0$, meaning the original sequence is bounded.
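A quick numerical illustration of both the closed form and the bound $|a_n|\le\frac32$ (names mine):

```python
partial_sums = []
acc = 0.0
for k in range(60):
    acc += (-1.0 / 3.0) ** k
    partial_sums.append(acc)   # partial_sums[k] = a_{k+1} in the post's indexing

closed = [((-1.0 / 3.0) ** n - 1) / (-4.0 / 3.0) for n in range(1, 61)]
bounded = all(abs(a) <= 1.5 for a in partial_sums)
```

The partial sums in fact converge to $\frac{1}{1+1/3}=\frac34$, comfortably inside the bound.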
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4626929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Proof that $\int_0^1\frac{\sin\ln x}{\ln x}dx=0$ is wrong So I proved that $$\int_0^1\frac{\sin\ln x}{\ln x}dx=0$$Which is wrong according to Desmos. What is not right about my proof?
Let $$I(a)=\int_0^1\frac{\sin\ln x}{a\ln x}dx$$Then$$I'(a)=-\int_0^1\frac{\sin\ln x}{a^2\ln x}dx\implies-aI'(a)=I(a)$$Integrate both sides: $$-\int aI'(a)\,da\overset{\text{integration by parts}}{=}-aI(a)+\int I(a)\,da=\int I(a)\,da\implies-aI(a)=0$$
I looked over it and couldn't find anything wrong. Maybe I messed up in the integration by parts?
| Considering the integral
$$
I(a)=\int_0^1 \frac{\sin (a \ln x)}{\ln x} d x
$$
Letting $y=-\ln x$ and then differentiating $I(a)$ w.r.t. $a$ gives
$$
\begin{aligned}
I^{\prime}(a) & =\int_{\infty}^0 \cos (-a y)\left(-e^{-y}\, d y\right) \\
& =\int_0^{\infty} e^{-y} \cos (a y) d y \\
& =\frac{1}{a^2+1}
\end{aligned}
$$
Integrating $I^{\prime}(a)$ from $0$ to $1$ gives
$$
I(1)-I(0)=\int_0^1 \frac{1}{a^2+1} d a=\frac{\pi}{4}
$$
Hence $$\boxed{\int_0^1 \frac{\sin (\ln x)}{\ln x} d x=I(1)= \frac{\pi}{4}}$$
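The value is easy to confirm numerically. With the same substitution $y=-\ln x$, the integral becomes $\int_0^\infty e^{-y}\frac{\sin y}{y}\,dy$, whose integrand extends continuously to $1$ at $y=0$. A sketch (names mine):

```python
import math

def integrand(y):
    # after y = -ln(x): sin(ln x)/ln x dx  ->  e^{-y} * sin(y)/y dy on (0, inf)
    if y == 0.0:
        return 1.0
    return math.exp(-y) * math.sin(y) / y

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(integrand, 0.0, 50.0)   # tail beyond 50 is ~e^{-50}, negligible
```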
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that the absolute of the shifted sine and the scaled sine coincide twice Let $\alpha\in(0,\pi)$ and $c>0$. How do I show that there exist exactly two $x\in[0,\pi]$ with $|\sin(x-\alpha)|=c\sin(x)$?
Let $f(x)=|\sin(x-\alpha)|$ and $g(x)=c\sin(x)$. Let $\alpha\le\pi/2$. We know that $f(0)>0=g(0)$, that $f$ is strictly decreasing on $[0,\alpha]$ while $g$ is strictly increasing, so there exists exactly one $x\in(0,\alpha)$ such that $f(x)=g(x)$. On $[\alpha,\pi]$ it's tricky because the derivatives $f'(x)=\cos(x-\alpha)$ and $g'(x)=c\cos(x)$ initially and finally have the same signs, and the derivatives are not simpler compared to the original problem. The case $\alpha\in[\pi/2,\pi)$ is then immediate by looking at $f(\pi-x)$ and $g(\pi-x)=g(x)$.
While this question is related, the answer is too specific to apply in this case.
Update: I want to stress that I in particular ask for a proof that there is no more than one intersection point on $(\alpha,\pi)$.
| By using the intermediate value theorem (i.e. the existence theorem for zero points), we can prove there exists at least one intersection point on $(\alpha,\pi)$.
Before starting, I'd define $W=f(x)-g(x)$ for $x\in[\alpha,\pi+\alpha]$. Notice that
\begin{align*}
W&=\sin(x-\alpha)-c\sin(x)\Rightarrow W=\sin(x)\cos(\alpha)-\cos(x)\sin(\alpha)-c\sin(x)\\
&=(\cos(\alpha)-c)\sin(x)-\cos(x)\sin(\alpha).
\end{align*}
① If $f(\frac{\pi}{2})>g(\frac{\pi}{2})$, the intersection point is on $(\alpha,\frac{\pi}{2})$; here $c<\sin(\frac{\pi}{2}-\alpha)$, that is, $\cos(\alpha)>c$.
On $[\alpha,\frac{\pi}{2}]$, the range of $f(x)$ includes the range of $g(x)$, and both functions are monotonically increasing. There is one and only one intersection point on $[\alpha,\frac{\pi}{2}]$.
Consider $(\frac{\pi}{2},\pi)$. Since $\cos(\alpha)>c$ and $\cos(x)<0$, $W>0$ holds on $(\frac{\pi}{2},\pi)$. Therefore, the two functions do not have any intersection point on $(\frac{\pi}{2},\pi)$.
② If $f(\frac{\pi}{2})<g(\frac{\pi}{2})$, the intersection point is on $(\frac{\pi}{2},\pi)$.
(i) $f(\alpha+\frac{\pi}{2})<g(\alpha+\frac{\pi}{2})$. According to the monotonicity, there exists only one intersection point on $(\alpha+\frac{\pi}{2},\pi)$. We can find $c\sin(\alpha+\frac{\pi}{2})>1 \Rightarrow \cos\alpha>\frac{1}{c}$.
Let $x=\alpha+t$, $t\in(0,\frac{\pi}{2})$. Then
$$W=\sin(x-\alpha)-c\sin x=\sin t-c\sin(\alpha+t)<\sin t-\frac{\sin(\alpha+t)}{\cos\alpha}=\frac{\sin t\cos\alpha-\sin(\alpha+t)}{\cos\alpha}=\frac{-\sin\alpha\cos t}{\cos\alpha}<0.$$
Therefore, the two functions do not have any intersection point on $(\alpha,\alpha+\frac{\pi}{2})$.
(ii) $f(\alpha+\frac{\pi}{2})>g(\alpha+\frac{\pi}{2})$.
According to the monotonicity, there exists only one intersection point on $(\frac{\pi}{2},\alpha+\frac{\pi}{2})$.
We can also find $c<\frac{1}{\cos\alpha}$.
(1) When $x\in(\alpha+\frac{\pi}{2},\pi)$:
Since $c<\frac{1}{\cos\alpha}$, we have $W>\sin(x-\alpha)-\frac{\sin x}{\cos\alpha}$.
Let $x=\alpha+t$, $t\in(\frac{\pi}{2},\pi-\alpha)$. Then
$$W>\sin t-\frac{\sin(\alpha+t)}{\cos\alpha}=\frac{-\sin\alpha\cos t}{\cos\alpha}.$$
Since $\cos\alpha>0$ and $\cos t<0$, $W>0$ holds on $(\alpha+\frac{\pi}{2},\pi)$.
Therefore, the two functions do not have any intersection point on $(\alpha+\frac{\pi}{2},\pi)$.
(2) When $x\in(\alpha,\frac{\pi}{2})$:
According to $f(\frac{\pi}{2})<g(\frac{\pi}{2})$, it is clear that $\cos\alpha<c$. Then
$$W=(\cos\alpha-c)\sin x-\cos x\sin\alpha<-\cos x\sin\alpha<0.$$
Therefore, the two functions do not have any intersection point on $(\alpha,\frac{\pi}{2})$.
The solution is not brief or beautiful at all. Waiting for better solutions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing that the limits are not interchangeable in $\lim\limits_{n \to \infty} \int_{0}^{1} f_{n}(x)$ The problem : Let $f_{n}:[0,1] \to \mathbb{R}$ be defined for $n \ge 1$ by
$$
f_{n}(x):=
\begin{cases}
2n^{2} x & \text{for} & 0 \le x \le \frac{1}{2n},\\
-2n^{2} (x - \frac{1}{n}) & \text{for} & \frac{1}{2n} \le x \le \frac{1}{n},\\
0 & \text{for} & \frac{1}{n} \le x \le 1
\end{cases}
$$
[Q] : It is required to show that
$
\lim\limits_{n \to \infty} \int_{0}^{1} f_{n}(x)\,dx \ne \int_{0}^{1} \lim\limits_{n \to \infty} f_{n}(x)\,dx$
My attempt :
(i) $
\lim\limits_{n \to \infty} \int_{0}^{1} f_{n}(x)\,dx
= \lim\limits_{n \to \infty} (\frac{1}{2})
= \frac{1}{2}
$
(ii) Here I need to show that $f_{n} \to 0$ i.e., $\lim\limits_{n \to \infty}f_{n}(x) = f(x)=0 \;,\forall x\in [0,1]$
I have an intuition that as $n \to \infty$, we have $\frac{1}{n} \to 0$, so eventually $x \notin \left]0,\frac{1}{n}\right]$; i.e., as $n$ approaches infinity, more and more values of $x$ get knocked out of $[0,\frac{1}{n}]$ and the function gets narrower. But I can't rigorously formalize this.
I would appreciate your insight regarding this part.
| The key is to fix $x\in [0,1]$. If $x=0$ then clearly $f_n(0)=0$ and the limit is $0$.
Now fix $x$ at $0<x_0\le 1$. Then for any $n>1/x_0$, we see that $f_n(x_0)=0$, so the limit is $0$.
And that is all we need to show.
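Both halves of the claim can be illustrated numerically: the integral of $f_n$ stays at $\frac12$ for every $n$, while at any fixed $x>0$ the values $f_n(x)$ are eventually $0$ (midpoint-rule approximation; names mine):

```python
def f(n, x):
    # the triangle function from the problem statement
    if 0 <= x <= 1 / (2 * n):
        return 2 * n * n * x
    if x <= 1 / n:
        return -2 * n * n * (x - 1 / n)
    return 0.0

def integral(n, steps=200000):
    h = 1.0 / steps
    return sum(f(n, (i + 0.5) * h) for i in range(steps)) * h

integrals = [integral(n) for n in (1, 2, 5, 10)]      # each should be ~1/2
pointwise = [f(n, 0.3) for n in (1, 2, 5, 10, 100)]   # 0 once 1/n < 0.3
```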
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Moments of a shifted Binomial random variable Does someone know how to calculate the following expectation?
Assume $B$ is binomially distributed with parameters $N\in \mathbb{N}$ and $p\in [0,1]$. Further, let $x\in \mathbb{R}$ and define $Y:= B-x$.
$\mathbb{E}[(Y)^n]$ for $n\in \mathbb{N}$
I'm actually interested in the case $n=4$.
| Note that the MGF of $B-x$ is given by
$$M_{B-x}(\xi)=E[e^{\xi (B-x)}]=e^{-\xi x}E[e^{\xi B}]=e^{-\xi x}M_B(\xi)$$
From this you can calculate any moment.
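For a finite binomial, the moments can also be computed directly from the pmf, which gives a handy cross-check of whatever one derives from the MGF. A sketch with sanity checks against the known mean, variance, and fourth central moment $3\sigma^4+\sigma^2(1-6p(1-p))$ (names mine):

```python
from math import comb

def moment(N, p, x, n):
    # E[(B - x)^n] summed directly over the binomial pmf
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) * (k - x)**n
               for k in range(N + 1))

N, p = 10, 0.3
mean = N * p
var = N * p * (1 - p)
m4_central = moment(N, p, mean, 4)   # E[(B - Np)^4]
```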
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Let $\sim$ be some equivalence relation on $X$. Is there some function with domain $X$ such that $f(x)=f(y)$ exactly when $x\sim y$?
1. Suppose that $X$ is a set.
a) Let $f$ be some function with domain $X$ (and codomain anything you like), and say that $x \sim y$ means $f(x)=f(y)$. Is $\sim$ an equivalence relation? If it is, then prove it; if not then give a counterexample.
b) Let $\sim$ be some equivalence relation on $X$. Is there some function with domain $X$ such that $f(x)=f(y)$ exactly when $x\sim y$? If there is, then prove it; if not then give a counterexample.
I proved part a, but I don't really understand part b. Wouldn't this be true for any function that exists? Why would someone ask me this question? I think I missed something, because it seems this is always true. If $x$ is related to $y$, then we immediately know $f(x) = f(y)$, so of course this would be true for any function. If $x$ is not related to $y$, then we know for sure $f(x) \neq f(y)$, since if that were the case, then $x$ would be related to $y$. What am I missing?
| I think you’re confusing the relation $\sim$ in the first part with the relation $\sim$ in the second part. In the first part, you have defined the relation such that $x\sim y$ if and only if $f(x) = f(y)$. Then you prove that this is an equivalence relation.
In the second part, you are given a new relation $\sim$, which is an equivalence relation on $X$. Then you have to prove that there exists or doesn’t exist a function $f$ such that $x\sim y\iff f(x) =f(y)$.
Not every function satisfies the condition in the second part. I will leave it to you to find if there does exist a function that does satisfy the conditions. The following is an example of a function that doesn’t satisfy the condition.
Let $X$ be the set of integers $\mathbb{Z}$ and $\sim$ denote the modular arithmetic relation on $\mathbb{Z}$ with the modulus being $2$. That is, $x\sim y\iff x\equiv y\ (\text{mod } 2)$
Define $f : \mathbb{Z} \to \mathbb{Z}$ by $$f(x) = x$$ For this function, $f(x) = f(y)$ only when $x=y$. Thus, this does not satisfy the condition in the second part as there exist integers $x$ and $y$ such that $x\sim y$, but $x\ne y$ and therefore $f(x)\ne f(y)$. For a specific example, note that $2\sim 4$ as $2\equiv 4\ (\text{mod } 2)$, but $f(2) \ne f(4)$.
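For part (b) itself, the standard construction (not spelled out in the answer above) is to send each element to its equivalence class: $f(x)=[x]$. Then $f(x)=f(y)$ exactly when $x\sim y$. A small illustration over a finite set (names mine):

```python
def class_map(X, related):
    # send each x to its equivalence class under `related`
    return {x: frozenset(y for y in X if related(x, y)) for x in X}

X = range(-5, 6)
f = class_map(X, lambda x, y: (x - y) % 2 == 0)   # congruence mod 2

matches = all((f[x] == f[y]) == ((x - y) % 2 == 0) for x in X for y in X)
```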
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Discontinuous function in Smooth infinitesimal analysis I read that there are no discontinuous functions in Smooth infinitesimal analysis. But I tried to define a discontinuous function ($\varepsilon$ is infinitesimal):
$f(x) =
\begin{cases}
1, & \text{if $x$ = $\varepsilon$} \\
0, & \text{if $x$ $\neq$ $\varepsilon$}
\end{cases}$
What is the mistake in my definition?
Thanks.
| tl;dr It is not the case that all $x \in R$ satisfy either $x = \varepsilon$ or $x \neq \varepsilon$, so your function is not well-defined.
Let $R$ denote the real line of Smooth Infinitesimal Analysis. What you read is true: in Smooth Infinitesimal Analysis all functions from $R$ to itself are continuous and in fact differentiable everywhere. This is an easy consequence of the Kock-Lawvere axiom.
Now, the function $f$ you defined is a perfectly well-defined discontinuous function -- from the set $S = \{ x \in R \:|\: x = \varepsilon \vee x \neq \varepsilon \}$ to $R$.
However, $(\forall x \in R. x = \varepsilon \vee x \neq \varepsilon)$ is not a theorem of Smooth Infinitesimal Analysis for any $\varepsilon$: in fact, its negation is a theorem! This is fine, since Smooth Infinitesimal Analysis uses intuitionistic logic, where not all instances of the law of excluded middle ($A \vee \neg A$) are taken as axioms.
Since the negation of $(\forall x \in R. x = \varepsilon \vee x \neq \varepsilon)$ holds, we have $S \neq R$. So your function has domain $S$, and not domain $R$, and consequently its existence does not contradict the fact that all functions from $R$ are continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4628130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to evaluate this integral with a Dirac delta function? $$P(s)=\int_{0}^{1}\int_{0}^{1}xy \, \delta{(s-(x+y))}\, dxdy$$
$P(s)$ is the probability density of random variable $s=x+y$, who is a function of the two original random variables $x,y$. The peak occurs inside the limit of integration.
$P(x,y)=xy$ is the probability density of random variable $x,y$.
I tried to apply the property of Dirac delta function $$\int_{a}^{b}f(x)\, \delta{(x-x_{0})}\, dx=f(x_{0}).$$
However, I am having trouble identifying $x_{0}$.
Edit: screenshot below
This is the example the textbook use. For the exercise, $P(x,y)=xy$ instead of $1/36$, and I am kinda lost here.
| Here is a pedestrian approach:
$$\begin{align}P(s)~=~&\int_{[0,1]}\!dy~y \int_{[0,1]}\!dx~x~\delta{(s-(x+y))} \cr
~=~&\int_{[0,1]}\!dy~y \int_{\mathbb{R}}dx ~x~1_{[0,1]}(x) ~ \delta{(s-y-x)}\cr
~=~&\int_{[0,1]}\!dy~y(s-y)~1_{[0,1]}(s-y) \cr
~=~&\int_{\mathbb{R}}dy ~y(s-y)~1_{[0,1]}(y)~1_{[s-1,s]}(y) \cr
~=~&\int_{\mathbb{R}}dy ~y(s-y)~1_{[\max(0,s-1),\min(s,1)]}(y) \cr
~=~&\int_{\mathbb{R}}dy ~y(s-y)~\left(1_{[0,1]}(s) 1_{[0,s]}(y) + 1_{[1,2]}(s) 1_{[s-1,1]}(y) \right) \cr
~=~&1_{[0,1]}(s) \left[ ~y^2\left(\frac{s}{2}-\frac{y}{3}\right)\right]^s_0 + 1_{[1,2]}(s) \left[ ~y^2\left(\frac{s}{2}-\frac{y}{3}\right)\right]^1_{s-1} \cr
~=~&1_{[0,1]}(s)~\frac{s^3}{6} + 1_{[1,2]}(s) \left( -\frac{s^3}{6}+s-\frac{2}{3}\right),
\end{align}$$
where we have used indicator/characteristic functions.
From the 3rd line one can see that the map $s\mapsto P(s)$ is continuous. The last expression is continuous if the value of the indicator function is $1/2$ at the interval endpoints.
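As a sanity check added here (not part of the original answer): for fixed $s$ the delta collapses the double integral to $\int y(s-y)\,dy$ over $[\max(0,s-1),\min(s,1)]$, which a midpoint Riemann sum approximates; it agrees with the closed form above.

```python
def P_formula(s):
    # Closed form derived above: s^3/6 on [0,1], -s^3/6 + s - 2/3 on (1,2].
    if 0.0 <= s <= 1.0:
        return s**3 / 6.0
    if 1.0 < s <= 2.0:
        return -(s**3) / 6.0 + s - 2.0 / 3.0
    return 0.0

def P_numeric(s, n=50000):
    # Midpoint Riemann sum of y*(s - y) over y in [max(0, s-1), min(s, 1)].
    lo, hi = max(0.0, s - 1.0), min(s, 1.0)
    if hi <= lo:
        return 0.0
    h = (hi - lo) / n
    return h * sum((lo + (i + 0.5) * h) * (s - (lo + (i + 0.5) * h)) for i in range(n))

for s in (0.3, 0.8, 1.0, 1.4, 1.9):
    assert abs(P_formula(s) - P_numeric(s)) < 1e-6
```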
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4628717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I show the convergence of this integral? Does $I$ converge if $f(x,y)$ is bounded and continuous on $\mathbb{R}^2$?
\begin{align}
I=\iint_{\mathbb{R}^2} f(x,y)e^{-(x^2+y^2)}dxdy
\end{align}
I tried to evaluate this integral using the boundedness of $f$ and to show convergence via the squeeze theorem, but it did not work. How can I make it work?
| Suppose $f(x, y)$ is bounded by $M$ i.e. $|f(x, y)| < M$.
$$
\begin{align}
\left|\iint_{\mathbb{R}^2} f(x, y) e^{-(x^2 + y^2)} dx dy\right| &\le \iint_{\mathbb{R}^2}\left| f(x, y) \right| e^{-(x^2 + y^2)} dx dy \\
&\le \iint_{\mathbb{R}^2} M e^{-(x^2 + y^2)} dx dy \\
&= M\int_\mathbb{R} e^{-x^2}dx \int_{\mathbb{R}} e^{-y^2} dy \\
&= M \pi
\end{align}
$$
which implies that the integral converges.
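The Gaussian factor used in the last step, $\int_{\mathbb{R}} e^{-x^2}dx=\sqrt{\pi}$, is easy to confirm numerically (an added sketch; truncating at $|x|=10$ is harmless since the tail is below $e^{-100}$):

```python
import math

# Midpoint Riemann sum for the one-dimensional Gaussian integral.
a, b, n = -10.0, 10.0, 200000
h = (b - a) / n
integral = h * sum(math.exp(-((a + (i + 0.5) * h) ** 2)) for i in range(n))

assert abs(integral - math.sqrt(math.pi)) < 1e-6
# The bound in the last step is then M * integral**2, i.e. M * pi (up to error).
```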
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4628868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How come I don't get a contradiction by Rolle's theorem here? We have $f(x)=x^4 -2x^3 -12x^2 -6x +11$, $x\in\mathbb{R}$
I need to prove that $f$ has exactly three extrema $x_1$,$x_2$,$x_3$ with: $-2<x_1<-1<x_2<0<x_3$
Since $f$ is differentiable on $\mathbb{R}$, its possible extrema are the points where $f'(x)=0$.
The derivatives of this function are: $f'(x)=2(2x^3-3x^2-12x-3)$
$f''(x)=12(x^2-x-2)$
$f'''(x)=12(2x-1)$
I calculated $f'(-2)=-14$, $f'(-1)=8$, $f'(0)=-6$ and $\lim_{x\to\infty}f'(x)=\infty$ so $\exists \kappa>0$ s.t $f'(\kappa)=0$
By applying $\textit{Bolzano's Theorem}$ (the Intermediate Value Theorem) on $[-2,-1]$, $[-1,0]$, $[0,\kappa]$
We get $f'(x_1)=f'(x_2)=f'(x_3)=0$
I now realize that proving there are exactly 3 solutions can be done simply by noting that $f'$ is a 3rd degree polynomial, but I'm still curious as to why my other method didn't work.
Let $f'$ have 4 roots $r_1$,$r_2$,$r_3$,$r_4\in \mathbb{R}$ with $r_1<r_2<r_3<r_4$
By applying $\textit{Rolle's Theorem} $ for $f'$ on $[r_1,r_2]$, $[r_3,r_4]$
$\exists \xi_1 \in(r_1,r_2):f''(\xi_1)=0$ and $\exists \xi_2 \in(r_3,r_4):f''(\xi_2)=0$
By applying $\textit{Rolle's Theorem}$ for $f''$on $[\xi_1,\xi_2]$ then $\exists \xi \in (\xi_1,\xi_2):f'''(\xi)=0 \Leftrightarrow 12(2\xi-1)=0 \Leftrightarrow \xi=\frac{1}{2} $
Where is the contradiction? Why didn't it come up?
| You didn't apply $\textit{Rolle's Theorem}$ on $[r_2,r_3]$. Doing so gives a third zero of $f''$, and continuing as before you eventually obtain some $x_0$ with $f^{(4)}(x_0)=0$, which gives the contradiction $24=0$.
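The sign pattern of $f'$ used in the question, and hence the three roots forced by the Intermediate Value Theorem, can be checked directly (an added sketch; $x=6$ is just one convenient point where $f'$ is already positive):

```python
def fp(x):
    # f'(x) for f(x) = x^4 - 2x^3 - 12x^2 - 6x + 11
    return 4 * x**3 - 6 * x**2 - 24 * x - 6

# Sign changes forcing three roots of f', matching the values in the question;
# f''''(x) = 24 everywhere, so f' cannot have four roots.
print(fp(-2), fp(-1), fp(0), fp(6))  # → -14 8 -6 498
```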
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4629205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Possibility of arranging $1$ and $-1$ in a grid such that the sum of the products is $0$ Consider an $11$ x $11$ grid, where in each square, the number $1$ or $-1$ is written. One multiplies the numbers in each row and column, and then sums up these $22$ products. Is it possible for this sum to be equal to $0$?
My approach was to realize that the product in every row or column is either $1$ or $-1$. This means, since we have $22$ products, that $11$ of them must be equal to $1$ and the other $11$ to $-1$. If there is an odd number of $-1$:s in a row or column, the product will be $-1$, while if there is an even number of $-1$:s, the product will be $1$. Then, I considered coloring each square in the following way: black if the cell contains a $-1$, uncolored if the cell contains a $1$. Thus, the question becomes: can one color grid cells in an $11$ x $11$ grid in such a way that an odd number of cells is colored in $11$ rows/columns, and an even number of cells is colored in the remaining $11$ rows/columns? However, from here, I couldn't make much progress. Does anyone have any ideas?
| Start with a grid $11\times11$, with $1$ in each square. Total is $22$.
Change $1$ to $-1$ in one square, new total is $18$.
Start with a grid $11\times11$ with arbitrary values. Changing one value changes the total of its row by $\pm2$ and the total of its column by $\pm2$, so the total of rows+columns changes by $-4, 0$ or $+4$.
The total is therefore always equal to $2\pmod 4$. It cannot be $0$, nor $4$ or $8$ or $12$ ...
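This invariant can be confirmed by simulation over random $\pm1$ grids (an added sketch; `products_total` is an illustrative helper name):

```python
import random
from math import prod

def products_total(g):
    # Sum of the 11 row products and 11 column products of a +/-1 grid.
    n = len(g)
    return sum(prod(row) for row in g) + sum(prod(g[i][j] for i in range(n)) for j in range(n))

random.seed(0)
for _ in range(200):
    g = [[random.choice((1, -1)) for _ in range(11)] for _ in range(11)]
    assert products_total(g) % 4 == 2  # the invariant: total ≡ 2 (mod 4)
```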
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4629411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Does dividing the unit square into small enough polygonal regions always yield a region surrounded by at least six others? Divide* the unit square $I^{2}=[0,1]\times[0,1]$ into polygonal regions, each having the property that the distance between any two of its points is less than $\frac{1}{30}$.
Question: Must there be a polygon $P$ within $I^{2}$ surrounded by at least six adjacent polygons, that is, polygons touching $P$ in at least a point? If yes, how to prove it?
*The polygons in the division may not be convex and the division may have "gaps" in the sense that the boundary between two adjacent polygons may be a disconnected polygonal line. The picture below illustrates gaps (white polygons) between the two grey polygons $P$ and $P'$.
For example, the division pictured below has gaps.
I think the answer may be yes from some drawings. And I do not think it matters if there are gaps. Indeed, these gaps will be produced by polygons in the original division. We can merge them with adjacent polygons in order to produce larger polygons and yield a new division of $I^{2}$ such that any two adjacent polygonal regions have either a point or a connected polygonal line as boundary between them. Since this process only decreases the number of adjacent polygons a polygon has then if the answer is affirmative for this new division, then it must also be affirmative for the original division. But that is as far as I have been able to go towards a solution.
| Yes! That is, assuming no gaps.
Form a connected path along the boundaries of regions from one side of the square to the opposite side. Now consider the boundaries of all the regions touching the path. You get two bands of regions. Repeat to get three adjacent bands. If every region had at most 5 adjacent regions, each region in the middle band must either touch only one region in the lower band or only one region in the upper band. Moreover, each region in the lower and upper bands can touch at most two regions in the middle band. So we get a leapfrog pattern like the one shown. Then you have regions in the upper and lower bands touching 3 regions in the middle band: two in their own band and at least one more outside any of the three bands.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4629574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
A claim in cellular homology
These pages are from Allen Hatcher's book.
In the proof of (a), I did not get how they are using Propositions 2.22 and 2.25.
And in (b), how they conclude that if $X$ is finite dimensional then $H_k(X)=0$ for all $k>\dim X$.
In b: I looked at the long exact sequence of $(X^n,X^{n-1})$:
$...\to H_{k+1}(X^n,X^{n-1}) \to H_k(X^{n-1})\to H_k(X^n)\to H_k(X^n,X^{n-1})\to....$.
And get that $H_k(X^{n-1}) \simeq H_k(X^n)$ for all $k>n$. So, for $k>n$:
$H_k(X^n)\simeq H_k(X^{n-1})\simeq....\simeq H_k(X^0)=0$ (since k>0).
I know also that $X$ is finite dimensional, so there is $N$ big enough such that $X=X^N$.
And I would be grateful if you can explain the concept of (c) when $X$ is infinite dim (the direct approach).
Highly appreciate every clarification.
| Let's go through your concerns.
1. In part (a), we want to show something about $H_{k}(X^{n}, X^{n-1}).$ The pair $(X^{n}, X^{n-1})$ is a CW pair, hence is a good pair, so we can apply Proposition 2.22 and conclude that
$$H_{k}(X^{n}, X^{n-1}) \cong \tilde{H}_{k}(X^{n}/X^{n-1}).$$ What is $X^{n}/X^{n-1}?$ Well, $X^{n}$ is a bunch of $n$-cells attached by their boundaries to $X^{n-1}$, so when we quotient out by $X^{n-1}$ all of the boundaries of these $n$-cells get crushed to a point. So, $X^{n}/X^{n-1}$ is a wedge of $n$-spheres, one for each $n$-cell of $X$.
We then have
$$\tilde{H}_{k}(X^{n}/X^{n-1}) = \tilde{H}_{k}(\bigvee_{n\text{-cells of } X}S^{n}).$$ By Corollary 2.25 we have
$$\tilde{H}_{k}(\bigvee_{n\text{-cells of } X}S^{n}) \cong \bigoplus_{n\text{-cells of } X} \tilde{H}_{k}(S^{n}).$$ From here the result should be clear.
2. From what you wrote in the question it seems like you've got part (b), so I won't write about it.
3. For the infinite dimensional case in part (c), the direct approach goes as follows: take a singular $k$-chain in $X$. This is a finite linear combination of maps $\Delta^{k} \to X$, each of which has compact image because $\Delta^{k}$ is compact. So, the image of the singular $k$-chain is compact as well, hence by Proposition A.1 only meets finitely many cells of $X$. Then, there exists some finite $m$ such that $X^{m}$ contains all these finitely many cells, and so the singular $k$-chain is also a singular $k$-chain in $X^{m}$. So, if we take a $k$-cycle in $X$, there is some finite $m$ such that it is also a $k$-cycle in $X^{m}$.
Fix $n \geq k$. Without loss of generality we can take $m \geq n$. Let $Y = X^{m}$. Then $Y$ is a finite dimensional CW complex, so by the finite dimensional case our $k$-cycle is homologous to a $k$-cycle in $Y^{n}$. But the $n$-skeleton of $Y$ is just the $n$-skeleton of $X$ (because $Y = X^{m}$ with $m \geq n$), so it follows that our $k$-cycle is homologous to a $k$-cycle in $X^{n}$. This proves surjectivity. The argument for injectivity is similar, so I leave it to you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4629812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Given an equation with multiple variables, all integers, how would I minimize an extra variable (also an integer)? Consider:
$$2x + 4y + 8z + w = 75$$
Or more accurately for what I want to deal with:
$$nx + oy + pz + w = q$$
Where $n$, $o$, $p$, and $q$ are all positive integers and constant, and $x$, $y$, and $z$ are the variables being adjusted, also all positive integers. $w$ should also be a positive integer.
How would I find, whether it's an algorithm or something purely mathematical, a way to minimize $w$? That is, find the combination of values of $x$, $y$, and $z$ that results in the smallest possible $w$?
Example:
$$14x + 60y + 17z + w = 723$$
This has 20 answers, as an example one of them is $x=9, y=4, z=21$ where $w = 0$.
I think it could also be thought of like this:
$$f(x,y,z) = -nx - oy - pz + q$$
Find values of $x$, $y$, and $z$ at a minimum of $f(x,y,z)$.
| Find the greatest common divisor $d$ of $n$, $o$ and $p$.
See how often $d$ goes into $q$. That is, write
$$
q = Ad + R
$$
with quotient $A$ and remainder $R$.
Then $Ad$ can be written as an integer combination of the integers $n$, $o$ and $p$ and is the largest such number, so $R$ is the $w$ you seek. (In your example it's $0$ since 14, 60 and 17 have no common factor so $d=1$.)
If $q$ is large enough then $x$, $y$ and $z$ can be chosen to be positive. If $q$ is not big enough you will have to do a little more work.
There are algorithms for these calculations. Look up the extended Euclidean algorithm on Wikipedia and scroll down to the section "The case of more than two numbers."
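For moderate $q$, the answer above can be cross-checked by brute force (an added sketch, not the gcd method itself; `min_w` is an illustrative name, and $w$ is allowed to be $0$ as in the question's example):

```python
def min_w(n, o, p, q):
    # Brute force: smallest w = q - (n*x + o*y + p*z) >= 0 over positive x, y, z.
    # For fixed x and y, the best choice is z = rem // p, leaving w = rem % p.
    best = None
    for x in range(1, (q - o - p) // n + 1):
        for y in range(1, (q - n * x - p) // o + 1):
            rem = q - n * x - o * y  # guaranteed >= p by the loop bound
            w = rem % p
            if best is None or w < best:
                best = w
    return best

print(min_w(14, 60, 17, 723))  # → 0  (attained e.g. at x=9, y=4, z=21)
print(min_w(2, 4, 8, 75))      # → 1
```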
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4629967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Complex binomial series $$\sum_{r=0}^n \left[\frac{r+1}{r+2} (^n C_r) (x^r)\right] $$
Can someone help me evaluate this summation? I could solve it to a certain extent but then got stuck; any help would be appreciated! I started by differentiating $x(x+1)^n$, which produces the factor $(r+1)$. Then I integrated the function obtained above to get the factor $(r+2)$ in the denominator, but it was getting a bit lengthy and I even got an incorrect expression at the end. I just want to know whether there is a fallacy in my method, or whether there is a more elegant method.
The summation can also be written as
$$(1+x)^n - \sum_{r=0}^n \frac{^nC_r}{r+2}\,x^r.$$
Is there a way to evaluate the second summation?
| Using $$\frac{r+1}{r+2} \binom{n}{r} = \frac{1}{n+1} \cdot\frac{(r+1)^2}{r+2} \binom{n+1}{r+1} = \frac{1}{(n+1)(n+2)}\cdot (r+1)^2\binom{n+2}{r+2}$$ and re-indexing, your sum is $$\frac{1}{(n+1)(n+2)}\sum_2^{n+2} (r-1)^2 \binom{n+2}{r} x^{r-2} \\=\frac{1}{(n+1)(n+2)}\sum_2^{n+2} r(r-1)\binom{n+2}{r} x^{r-2} - \frac{1}{(n+1)(n+2)}\sum_2^{n+2}(r-1)\binom{n+2}{r}x^{r-2} \\ = \frac{1}{(n+1)(n+2)}\frac{d^2}{dx^2}(1+x)^{n+2} - \frac{1}{(n+1)(n+2)} \frac{d}{dx}\frac{(1+x)^{n+2}-1-(n+2)x}{x} \\ \vdots$$
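The rewriting step above can be verified exactly with rational arithmetic (an added sketch; `lhs`/`rhs` are illustrative names for the original sum and the re-indexed form):

```python
from fractions import Fraction
from math import comb

def lhs(n, x):
    # The original sum: sum_r (r+1)/(r+2) * C(n, r) * x^r
    return sum(Fraction(r + 1, r + 2) * comb(n, r) * x**r for r in range(n + 1))

def rhs(n, x):
    # Re-indexed form: (1/((n+1)(n+2))) * sum_{r=2}^{n+2} (r-1)^2 * C(n+2, r) * x^(r-2)
    s = sum((r - 1) ** 2 * comb(n + 2, r) * x ** (r - 2) for r in range(2, n + 3))
    return s / ((n + 1) * (n + 2))

for n in range(1, 8):
    for x in (Fraction(1), Fraction(1, 2), Fraction(-3, 4), Fraction(2)):
        assert lhs(n, x) == rhs(n, x)
```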
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4630329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What's the relationship between angle $x+y$ and $w+v$ in this picture given two triangles with one side parallel formed above and below a common line? Is
I. $x+y> w+v$
II. $x+y< w+v$
III. $x+y= w+v$
IV. cannot be determined?
Why is $x$ equal to $v$?
My idea was that the x+y is not necessarily w+v, because if you move the two triangles closer so the third vertex of the triangle with vertices A and B shares a vertex with the triangle with vertices C and D, the path from B to C is not necessarily a straight line.
| If neither of the first two inequalities can hold, then the third equality automatically holds, because $x+y-w-v$ cannot be positive and negative at the same time, so it must vanish.
You have not stated whether length and curvature can change.
If I understood correctly: because only the angle sum is constrained, each individual term need not equal the corresponding angle, and there are many other possibilities. For example, we can let the segment $AB$ change length as it moves to position $ab$, and likewise let $CD$ move to position $cd$; they can also rotate arbitrarily.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4630464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Generalised Mayer-Vietoris long exact sequence In chapter 8 of Bott/Tu, the authors generalise the standard Mayer-Vietoris sequence to the setting of a countable open cover of $X$. Let's fix a countable cover $\{ U_i\}$ of $X$. According to Prop 8.5 of Bott/Tu, the sequence
$$ 0 \rightarrow \Omega^*(X) \rightarrow \bigoplus_{i} \Omega^*(U_i) \rightarrow \bigoplus_{i<j} \Omega^*(U_{ij}) \rightarrow \bigoplus_{i<j<k} \Omega^*(U_{ijk}) \rightarrow \cdots $$
is exact. Later in this chapter they use this sequence to prove some essential properties of the de Rham-Cech bicomplex, and then move into a study of spectral sequences of bicomplexes.
I'm trying to understand this sequence for the simplest case of three covering sets, say $U_1, U_2$ and $U_3$. In this case, the above sequence terminates quickly:
$$ 0 \rightarrow \Omega^*(X) \rightarrow \bigoplus_{i} \Omega^*(U_i) \rightarrow \bigoplus_{i<j} \Omega^*(U_{ij}) \rightarrow \Omega^*(U_{123}) \rightarrow 0 $$
My question is this: is there a long exact sequence for 3 sets that generalises the usual binary Mayer-Vietoris sequence? A naive guess would be something like:
$$ \cdots \rightarrow H^q(U_{123})\rightarrow H^{q+1}(X)\rightarrow \bigoplus_i H^{q+1}(U_i) \rightarrow \bigoplus_{i<j} H^{q+1}(U_{ij}) \rightarrow H^{q+1}(U_{123}) \rightarrow \cdots $$
Almost all of these maps can be obtained by descending the associated maps that come from the exact sequence of differential forms. I would guess that the connecting homomorphism can be somehow obtained from the associated spectral sequence, but I am not too sure about this.
| If you cover the circle by three intervals that pairwise intersect but not mutually, then $U_{123}$ is empty, $X = S^1$ and $U_i$ are contractible. So you cannot have an exact sequence of the form you guess, as at $H^1(X)$ you'd get
$$0 \to \mathbb{R} \to 0$$
which cannot be exact.
As @Mariano describes in the comments, the Mayer-Vietoris spectral sequence is the right generalization: it reduces to the usual exact sequence if the cover has only $2$ terms.
Alternatively you could also use an iterated exact sequence by first writing down exact sequences for $U_1 \cup U_2$ and $(U_1 \cup U_2) \cap U_3 = U_{13} \cup U_{23}$ and then for $X = (U_1 \cup U_2) \cup U_3$. This is equivalent to the spectral sequence approach, but breaks symmetry between the indices.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4630644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Contrapositive, Converse and Inverse of statements with multiple quantifiers My textbook only touched on negation of statements with multiple quantifiers, and I would like to know:
1. For a statement like
$\forall M>0, \exists \delta > 0$ such that if $0 < |x-a| < \delta$ then $|f(x)| > M,$
is its contrapositive
$\forall M>0, \exists \delta > 0$ such that if ~$(|f(x)| > M)$ then ~$(0 < |x-a| < \delta)\quad?$
2. Do the converse and inverse similarly just affect the if-else?
| Yes, Contrapositive, Converse and Inverse all refer and apply to conditional formulae and affect only their antecedents and consequents and not any surrounding quantifier or connective.
This is not to say that they never alter quantifiers: the contrapositive of $$\forall x\big(Px\to\exists y Qxy\big)$$ is $$∀x(∀y¬Qxy→¬Px).$$ (Strictly speaking, we are referring to the contrapositive of the string within the parentheses.)
How can I know when to negate quantifiers when taking the contrapositive of a statement?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4630823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Generate 3D surface parametrization starting from 2D parametrization Consider the following parametrisation of a flower-like domain, where $R=0.4$ and $r=0.2$:
$\theta \mapsto [(R+r\cos(6\pi\theta))\cos(2\pi \theta), (R+r\cos(6\pi \theta))\sin(2\pi \theta)]$
which gives the following shape:
I'd like to get the three dimensional version of this: imagine the surface formed by rotating this figure and obtaining something which looks like a sphere but with many
| Suppose we take the curve $\mathcal C$, parameterized by
$$\begin{cases}
x(\theta) = (R+r\cos(16\pi\theta)) \cos(2\pi\theta) \\
y(\theta) = (R+r\cos(16\pi\theta)) \sin(2\pi\theta)
\end{cases}$$
with $R=0.4$, $r=0.2$, and $\theta\in[0,1]$, and revolve it about the $x$-axis. Consider a cross section of the resulting surface taken perpendicular to the $x$-axis at $x=x_0\in\left[-\frac35,\frac35\right]$. It's easy to show that $\sqrt{x^2+y^2}\le \frac35$.
In the plane of the cross section, solve $x(\theta_0)=x_0$ for $\theta_0$ and plug the solution (there are multiple depending on your choice of $x_0$) into the parametric equations for $\mathcal C$. $x(\theta_0)$ and $y(\theta_0)$ define a circle in the cross sectional plane centered at $(x_0,0,0)$ and with radius $y(\theta_0)$. We can parameterize this circle by the 3D parametric curve,
$$\begin{cases}
x'(\phi) = x(\theta_0) \\
y'(\phi) = y(\theta_0) \cos(2\pi\phi) \\
z'(\phi) = y(\theta_0) \sin(2\pi\phi)
\end{cases}$$
with $\phi\in[0,1]$.
It follows that the complete surface is parameterized by
$$\begin{cases}
x(\theta,\phi) = x(\theta) \\
y(\theta,\phi) = y(\theta) \cos(\pi \phi) \\
z(\theta,\phi) = y(\theta) \sin(\pi \phi)
\end{cases}$$
with $(\theta,\phi)\in[0,1]^2$.
The figure shows $\mathcal C$ (red) plotted in the plane $z=0$, the surface of revolution, and one of the circles (green) traced out by revolving a point on $\mathcal C$ about the $x$-axis (rendered with $\theta_0\approx0.159$, i.e. in the plane $x=0.2$, which happens to correspond to $3$ distinct circles at $6$ distinct values of $\theta_0$).
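A minimal numeric sketch of the final parameterization (using this answer's constants $R=0.4$, $r=0.2$ and its $16\pi$ frequency) confirms that every surface point stays within distance $3/5$ of the origin and that each point lies on the circle swept out by its curve point:

```python
import math

R, r = 0.4, 0.2  # constants from the answer

def curve(theta):
    # The planar curve C, with the 16*pi frequency used in this answer.
    rad = R + r * math.cos(16 * math.pi * theta)
    return rad * math.cos(2 * math.pi * theta), rad * math.sin(2 * math.pi * theta)

def surface(theta, phi):
    # Final parameterization: revolve C about the x-axis.
    x, y = curve(theta)
    return x, y * math.cos(math.pi * phi), y * math.sin(math.pi * phi)

for i in range(50):
    for j in range(10):
        X, Y, Z = surface(i / 50, j / 10)
        assert X * X + Y * Y + Z * Z <= 0.6**2 + 1e-12  # within radius 3/5
        _, y = curve(i / 50)
        assert abs(Y * Y + Z * Z - y * y) < 1e-12       # on the revolved circle
```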
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $ \sum_{i=0}^n {n \choose i} \cdot (-1)^i \cdot i^n =(-1)^n n!$ and $ \sum_{i=0}^n {n \choose i} \cdot (-1)^i \cdot i^k = 0$ for $k<n$ For any natural number $n$, prove that $\sum_{i=0}^n{n \choose i}\cdot (-1)^i \cdot i^n =(-1)^n n!$ and for whole number $k<n,\,\sum_{i=0}^n {n \choose i} \cdot (-1)^i \cdot i^k =0$.
I know that $\sum_{i=0}^n{n \choose i}\cdot (-1)^i \cdot i^k = 0.$
For $k=0,1,2$, as it is the expansion of $(x+1)^n,\,n(x+1)^{n-1},\,n(n-1)(x+1)^{n-2}$ at $x=-1$. But I am not able to prove it for $k>2$. Can someone help me out?
| One way to see this would be to take $f(x)=\sum_{i=0}^n {n \choose i} (-1)^i \cdot e^{xi}$ so that the $k^{th}$ derivative of $f(x)$ is $f^{(k)}(x)=\sum_{i=0}^n {n \choose i} (-1)^i \cdot i^k e^{xi}$, and $f^{(k)}(0)=\sum_{i=0}^n {n \choose i} (-1)^i \cdot i^k$. But $f(x)$ can also be written as $(1-e^x)^n$, and all the derivatives of this before the $n^{th}$ will have a factor of $(1-e^x)$, so evaluate to $0$ at $x=0$. The $n^{th}$ derivative will be $n!(1-e^x)^0\cdot(-e^x)^n$ $+$ terms with factors of $(1-e^x)$, so the sum with $k=n$ is $(-1)^n n!$.
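Both cases of the identity are easy to confirm computationally for small $n$ (an added sketch; Python's convention $0^0=1$ covers the $i=0$, $k=0$ term):

```python
from math import comb, factorial

def alt_sum(n, k):
    # sum_i C(n, i) * (-1)^i * i^k, with Python's 0**0 == 1 for i = k = 0.
    return sum(comb(n, i) * (-1) ** i * i**k for i in range(n + 1))

for n in range(1, 9):
    assert alt_sum(n, n) == (-1) ** n * factorial(n)
    for k in range(n):
        assert alt_sum(n, k) == 0
```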
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 2
} |
Calculating dimension of a tensor product of algebras
Problem:
Let $\phi:(R,m)\to (S,n)$ be a local homomorphism of Noetherian local rings with $S$ formally equidimensional and so that $\dim S=\dim R+\dim S/mS$. Let $q\in \text{Spec} (R)$ and suppose that $Q\in \text{Spec}(S)$ contracts to $q$. We claim that: ($\ast$) $\dim S_Q/qS_Q\geq \text{ht} Q-\text{ht}(q)$.
I have:
($\ast \ast$) $\text{ht}(Q)\leq \text{ht}(q)+\dim S\bigotimes_R \kappa(q)$, and hence ($\ast$) holds if $\dim S_Q/qS_Q\geq\dim S\bigotimes_R \kappa(q)$ (where $\kappa(q)$ is the field $R_q/q_q$). Moreover, $S\bigotimes_R \kappa(q)\cong S\bigotimes_R (R/q)_q$ as $R-$algebras (since $\kappa(q)$ and $(R/q)_q$ are isomorphic $R-$algebras); and certainly $S_Q/qS_Q\cong S_Q\bigotimes_R (R/q)$ as $R-$algebras, which is almost what I want, but not quite (note that this tensor product is over $R$ instead of $S$).
Edit: I might have a proof but it is still fuzzy. The idea is that I think I can get a ring iso $S_Q/qS_Q\to (S/\phi(q)S)/(Q/\phi(q)S)$, and I think it's not difficult to get that the RHS has dimension $\text{ht} Q-\text{ht} (\phi(q)S)$ (note that $S$ is catenary), and maybe it is somehow true that $\text{ht}(\phi(q)S) = \text{ht}(q)$. If all of that works, then I win; I will put an update or answer if I get that to work.
| By Theorem 15.1 in Matsumura,
$\dim (S_Q/qS_Q) \geq \text{ht}(Q)-\text{ht}(q)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the limit $\lim_{x\to2}(x^3-2x-4)\tan\frac{\pi x}{4}$ Find the limit $$\lim_{x\to2}(x^3-2x-4)\tan\dfrac{\pi x}{4}$$
I can't even determine what type of indeterminate form we have. It's $0\times\text{undefined}$. We can see that $2$ is a root of the polynomial $x^3-2x-4$ and it factors as $(x-2)(x^2+2x+2)$. Additionally, when $x\to2$, $\tan\dfrac{\pi x}{4}\to\tan\dfrac{\pi}{2},$ which is not defined. How do we approach the problem and what's the intuition? I suppose somehow the limit $\lim_{f(x)\to0}\dfrac{\sin f(x)}{f(x)}=1$ may come in handy.
| Let $x=2+h.$ As $h\to0,$ using the asymptotic notation $\sim$ and the fact that $\lim_{t\to0}\frac{\tan t-\tan0}{t-0}=\tan'(0)=1:$
$$x^3-2x-4=10h+6h^2+h^3\sim 10h$$and
$$\tan\frac{\pi x}4=-\frac1{\tan\frac{\pi h}4}\sim-\frac4{\pi h}.$$
The limit of the product is therefore $-\frac{40}{\pi}.$
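A numeric check added here (a sketch): evaluating the product at points approaching $x=2$ from both sides gives values near $-40/\pi \approx -12.732$.

```python
import math

def g(x):
    return (x**3 - 2 * x - 4) * math.tan(math.pi * x / 4)

target = -40 / math.pi  # about -12.732
for h in (1e-3, 1e-5, -1e-3, -1e-5):
    assert abs(g(2 + h) - target) < 2e-2
```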
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For a finite abelian $G$, $f: G\to G$ defined by $f(g)=g^2$ is an isomorphism iff $|G|$ is odd Let $G$ be an abelian group of finite order, and define $f: G\to G$ by $f(g)=g^2$. I would like to prove that $f$ is an isomporphism if and only if $G$ has an odd order.
I am able to prove that $f$ (defined above) is a homomorphism if and only if $G$ is abelian fairly simply. If $G$ is abelian, then for all $a, b\in G$,
$$f(ab)=(ab)^2=abab=a^2b^2=f(a)f(b)$$
So $f$ is indeed a homomorphism. And if $f$ is a homomorphism, then for all $a, b\in G$,
$$f(ab)=f(a)f(b) \implies (ab)^2=a^2b^2 \implies a^{-1}ababb^{-1}=a^{-1}aabbb^{-1}\implies ba=ab$$
Meaning $G$ is abelian.
This proof holds for any (not necessarily finite) group.
Applying this to the original problem, we are now left with proving that $f$ is bijective if and only if $G$'s order is odd. Since $f$ is from a finite $G$ to itself, it will suffice to show that:
$$|G| \space \text{is odd} \iff f \space \text{is injective (or surjective)}$$
But here I am stuck. I believe there is a simple way to finish off the proof, but I can't think of it, and I couldn't find this question asked here before. I appreciate any help.
| Suppose $|G|$ is odd. Let $g\in{ker(f)}$, meaning that $g^2=1$. Thus $g=1$ by Lagrange, otherwise $g$ is an element of order 2 in a group of odd order.
Now, suppose $|G|$ is even. Then, $G$ admits an element of order 2, let's say $h\in{G}$. Thus $ker(f)$ isn't trivial, because $h\in{ker(f)}$, meaning that $f$ is not an isomorphism.
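The same dichotomy can be observed concretely in the cyclic groups $\mathbb{Z}/n\mathbb{Z}$, written additively, where $f$ becomes $g\mapsto 2g$ (an added sketch; `doubling_image` is an illustrative name):

```python
def doubling_image(n):
    # Image of g -> 2g in Z/nZ; the map is bijective iff the image is all of Z/nZ.
    return {(2 * g) % n for g in range(n)}

for n in range(1, 30):
    # Bijective exactly when n is odd, matching the theorem.
    assert (len(doubling_image(n)) == n) == (n % 2 == 1)
```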
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How do I find individual values of $\sin(\varphi)$ and $\cos(\varphi)$ from $x = a\sin^{2}(\varphi) + b\cos^{2}(\varphi)$? If $x = a\sin^2\phi+ b\cos^2\phi$, express $\sin$ and $\cos$ in terms of $x$ ( $a$ and $b$ are real constants)
I know how to find values of $T$ ratios in equations like $x = a\sin^2t$ or $x = b\cos^2t$ but how do I find the values of $\sin(t)$ and $\cos(t)$
in expressions like $x = \sin^2t + 5\cos^2t$ or $x = a\sin t + b\cos t$?
| HINT
I would recommend you to proceed as follows:
\begin{align*}
x & = a\sin^{2}(\varphi) + b\cos^{2}(\varphi)\\\\
& = (a\sin^{2}(\varphi) + a\cos^{2}(\varphi)) + (b - a)\cos^{2}(\varphi)\\\\
& = a + (b - a)\cos^{2}(\varphi)
\end{align*}
Can you take it from here?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Convergence a.e. of this characteristic function let $f \geq 0$ be measurable on $E \subset \mathbb{R^n}$.
Now for any $\ n \geq 1$ there is a subset $E_n$ of $E$ with $\ m(E - E_n) < \frac{1}{n}$
I want to show that if I define: $f_n = f \chi_{_{E_n}}$
then $\lim\limits_{n \rightarrow \infty}f \chi_{_{E_n}} = f \chi_{_{E}} = f \ a.e.$
Is this fact obvious by construction and from the givens, since the measure of $E \setminus E_n$ gets smaller and converges to $0$?
| No, this claim is incorrect. Let $f\equiv1$, $E=[0,1]$, and $F_n$ a sequence of intervals in $E$ with $|F_n|=1/(2n)$ whose union covers $E$ infinitely many times. (This is possible because the harmonic series diverges.) Then let $E_n=E\setminus F_n$. Every point of $E$ lies in infinitely many $F_n$ (where $f\chi_{E_n}=0$) and in infinitely many $E_n$ (where $f\chi_{E_n}=1$), so $f\chi_{E_n}$ converges nowhere on $E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Ham sandwich theorem. Recently, I have been reading the Borsuk-Ulam theorem from Hatcher's algebraic topology book. In this book, there is an exercise as follows.
Let $A_1, A_2, A_3$ be compact sets in$\mathbb{R}^3$. Use the Borsuk–Ulam theorem to show that there is one plane $P \subset \mathbb{R}^3$ that simultaneously divides each $A_i$ into two pieces of equal measure.
I have some doubts regarding the above statement; I did not properly understand the part about dividing each $A_i$ into two pieces of equal measure. $\mathbb{R}^3$ has its own Lebesgue measure, say $\mu$. If I take one of the $A_i$, say $A_1$, to be the unit circle $S^1$, which is compact, then $\mu(S^1)=0$, so there is no need to divide it into two pieces of equal measure. Is the statement saying that when one of the sets has lower dimension, we must take the Lebesgue measure corresponding to that dimension?
Can anyone suggest books or good references where I can find the statement of the ham sandwich theorem and its proofs?
| He does not say anything about the uniqueness of the plane. It can be the case that there are infinitely many such planes: take $A_1=A_2=A_3=B_1(0)$, where $B_1(0)$ denotes the unit ball in $\mathbb{R}^3$. Each plane $P$ with $0\in P$ will suffice in this example. You outlined another example where infinitely many possible planes divide the sets into pieces of equal measure. This happens in your example because the sets have zero measure.
Hint for the exercise (Edit): Let $(\mathbb{R}^3)^*$ be the dual vector space of $\mathbb{R}^3$ and investigate the properties of the maps $f_i:(\mathbb{R}^3)^* \to \mathbb{R}$ given by $\omega \mapsto \mu(\{x\in A_i: \omega(x)>r_\omega\})-\mu(\{x\in A_i: \omega(x)\le r_\omega\}).$ Choose $r_\omega\in \mathbb{R}$ in a continuous way so that $\mu(\{x\in A_1: \omega(x)>r\})-\mu(\{x\in A_1: \omega(x)\le r\})=0$ for all $\omega$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4632113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Calculate $\frac{2}{\alpha^2}-\frac{1}{(\alpha +1)^2}$ if $\alpha$ is a root of $x^2+(1-\sqrt3)x+1-\sqrt3=0$ If $\alpha$ is a root of $x^2+(1-\sqrt3)x+1-\sqrt3=0$, calculate
$$\frac{2}{\alpha^2}-\frac{1}{(\alpha +1)^2}$$
What I have done:
$$\begin{aligned}
\alpha^2+\alpha+1&=\sqrt3(\alpha+1)\\
&\implies\alpha^4+2\alpha^3+3\alpha^2+2\alpha+1=3\alpha^2+6\alpha+3\\
&\implies\alpha^2(\alpha+1)^2=\alpha^2+4\alpha+2\\
&\implies\frac{2}{\alpha^2}-\frac{1}{(\alpha +1)^2}=1
\end{aligned} $$
Is there any easy way to calculate?
| Another way (not necessarily easier than the OP's).
$\sqrt3(\alpha+1)=\alpha(\alpha+1)+1$
$\dfrac{\sqrt3}{\alpha}=1+\dfrac1{\alpha(\alpha+1)}$
$\left(\dfrac{\sqrt3}{\alpha}\right)^2=\left(1+\dfrac1\alpha-\dfrac1{\alpha+1}\right)^2$
$\dfrac3{\alpha^2}=1+\dfrac1{\alpha^2}+\dfrac1{(\alpha+1)^2}\;\;$ (the doubled cross products cancel)
$\dfrac2{\alpha^2}-\dfrac1{(\alpha+1)^2}=1$
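A quick numerical sanity check of the identity (an editorial sketch, not part of the original answer): the discriminant $(1-\sqrt3)^2-4(1-\sqrt3)=2\sqrt3>0$, so both roots are real and the quadratic formula applies directly.

```python
import math

# Roots of x^2 + (1 - sqrt(3)) x + (1 - sqrt(3)) = 0 via the quadratic formula.
b = 1 - math.sqrt(3)
c = 1 - math.sqrt(3)
disc = b * b - 4 * c  # = 2*sqrt(3) > 0, so both roots are real

for sign in (+1, -1):
    alpha = (-b + sign * math.sqrt(disc)) / 2
    value = 2 / alpha**2 - 1 / (alpha + 1) ** 2
    assert abs(value - 1) < 1e-9, value
```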
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4632434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does $x_1$ belong to $\overline{B}(y, \gamma_2)$? Let $(X, d)$ be a metric space and let $0<\gamma_1<\gamma_2$. Let $x_1, x_2\in X$ and $y\in X$ be fixed. Assume that
$$ x_2\in \overline{B}(y, \gamma_2)\quad\text{ and }\quad d(x_1, x_2)\le \gamma_1.$$
Is it true that $x_1\in\overline{B}(y, \gamma_2)$?
My attempt: In my opinion the answer is yes: since $x_2\in \overline{B}(y, \gamma_2)$ and its distance from $x_1$ is at most $\gamma_1 < \gamma_2$, it seems trivial that $x_1\in\overline{B}(y, \gamma_2)$ as well.
On the contrary, during my calculus class another student said that the answer is no. This is his tentative proof:
$$d(x_1, y)\le d(x_1, x_2)+d(x_2, y)\le \gamma_1 +\gamma_2\le 2\gamma_2. $$
$\bf{EDIT:}$ If not, which further assumptions are needed on $\gamma_1, \gamma_2$ to make $x_1\in\overline{B}(y, \gamma_2)$ true?
Could someone please help me to understand who is doing it right?
Thank you.
| It's probably easier to picture it in $\mathbb{R}$: take $y := 0$, $\gamma_2 := 2$, $x_2 := 1$, $\gamma_1 := 1.5$ and $x_1 := 2.5$.
Can you see the issue there?
EDIT: There is no assumption you can make on $\gamma_1, \gamma_2$ to make the claim true.
Let $y \in \mathbb{R}$ and $\gamma_2 > 0$ be fixed. Pick $x_2 := \gamma_2$. Then for all $\gamma_1 > 0$, there exists $x_1 \in \overline{B}(x_2,\gamma_1) \setminus \overline{B}(y,\gamma_2)$, just by taking $x_1 := x_2 + \gamma_1 = \gamma_2 + \gamma_1$.
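The counterexample from the first paragraph can be checked mechanically (numbers taken straight from the answer, metric $d(a,b)=|a-b|$ on the real line):

```python
# y = 0, gamma2 = 2, x2 = 1, gamma1 = 1.5, x1 = 2.5 on the real line.
y, gamma2, x2, gamma1, x1 = 0.0, 2.0, 1.0, 1.5, 2.5

assert abs(x2 - y) <= gamma2   # x2 lies in the closed ball B(y, gamma2)
assert abs(x1 - x2) <= gamma1  # d(x1, x2) <= gamma1
assert abs(x1 - y) > gamma2    # yet x1 does NOT lie in B(y, gamma2)
```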
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4632775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there any way to find the number of real roots of a polynomial computationally? I'm creating a program to find the real roots of any given polynomial. It takes a user given polynomial as input and tries to find its roots. As an aid, mainly to check if I've missed any after the process, I'd like to know how many roots the polynomial has beforehand.
I am completely oblivious to the given polynomial. The only thing I can do with it is calculate the value for a given x. I do not know the degree of the polynomial.
The only lead I've found in this matter is Sturm's theorem, but I don't see how it can be applied in this program.
I understand that given this restraint it may sound impossible but since there are so many mathematical algorithms that work solely based on this functionality, I thought it would be possible for there to be one that could solve this that I would be unaware of.
| There are hard limitations on what can be done in practice when the roots are ill-conditioned and/or the degree is huge. However, let us not forget that there is a subroutine that evaluates the polynomial in a reasonable amount of time and with an accuracy that is perceived as satisfactory!
Now let $p$ denote your polynomial. We recover the (unknown) degree of $p$ by observing the behavior of the function $f$ given by $$f(x) = \frac{p(2x)}{p(x)}.$$ It is straightforward to verify that $$f(x) \rightarrow 2^n, \quad x \rightarrow \infty, \quad x \in \mathbb{R}$$ where $n$ is the degree of $p$. Moreover, the convergence will eventually be monotonic. Example: If $p(x) = x^2 + 1$, then $$f(x) = \frac{4x^2 + 1}{x^2+1} = 4 \frac{1 + \frac{1}{4}x^{-2}}{1 + x^{-2}} \rightarrow 4 = 2^2, \quad x \rightarrow \infty, \quad x \in \mathbb{R}.$$
Once $n$ is known, you recover $p$ using $n+1$ function evaluations and polynomial interpolation. Once $p$ is known, you proceed using either the Sturm sequence or methods based on Descartes' rule of signs.
Can this procedure be defeated? Certainly. The simplest thing is to trigger an overflow in the evaluation of $p$. Is there a defence? Yes. Augment the implementation of $p$ and scale to protect against overflow in every arithmetic operation. This procedure is standard in eigenvector computations and is used in LAPACK (sequential) and StarNEig (parallel). The procedure is equivalent to extending the exponent field of the floating point numbers. Can Lagrange interpolation fail? Certainly. The conditioning of the relevant linear system can be arbitrarily bad, but it can be estimated. Is there a defense? Yes. Use extra arithmetic operations to emulate a sufficiently small unit roundoff.
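The degree-recovery trick can be sketched in a few lines of Python, treating the polynomial purely as a black-box evaluation routine; the probe point $x_0=10^6$ is an arbitrary choice (large enough that the leading term dominates, small enough to avoid overflow for these examples):

```python
import math

def degree_via_ratio(p, x0=1e6):
    """Estimate deg(p) from the black-box ratio p(2*x0)/p(x0) ~ 2^n."""
    return round(math.log2(p(2 * x0) / p(x0)))

# Example black boxes: the caller never sees the coefficients.
assert degree_via_ratio(lambda x: 3 * x**4 - x + 7) == 4
assert degree_via_ratio(lambda x: x**2 + 1) == 2
assert degree_via_ratio(lambda x: 5.0) == 0
```

In a robust implementation the probe point would be increased adaptively until the estimate stabilizes, with the overflow protections discussed above.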
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4633221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 3
} |
Proving that ${(1-\frac{2}{x^2})}^x < \frac{x-1}{x+1}$ for any $x > 2$. Prove that ${\left(1-\dfrac{2}{x^2}\right)}^x\!< \dfrac{x-1}{x+1}$ for any $x > 2$.
any ideas?
| Let $f(x)=\ln\frac{x-1}{x+1}-x\ln(1-2/x^2)$. It suffices to show that $f(x)>0$ for $x>2$, as $\ln$ is an increasing function. Using the series for $\ln(1+a)=a-a^2/2+a^3/3-a^4/4+\dots$, we get the asymptotic expansion
$$
\ln \frac{x-1}{x+1}=\ln \left(1-\frac1x\right)-\ln \left(1+\frac1x\right)=-\frac{2}{x}-\frac{2}{3x^3}-\frac{2}{5x^5}-\frac{2}{7x^7}-\dots
$$
and
$$
-x\ln\left(1-\frac2{x^2}\right)=\frac2x+\frac{2^2}2 \frac{1}{x^3}+\frac{2^3}3 \frac{1}{x^5}+\frac{2^4}4 \frac{1}{x^7}+\dots
$$
hence
$$
f(x)=\sum_{k=1}^\infty \frac 1{x^{2k+1}}\left(\frac{2^{k + 1}}{k + 1} - \frac2{2k+1}\right)>0
$$
since
$$
\frac{2^{k + 1}}{k + 1} - \frac2{2k+1}=2\frac{2^k(2k+1)-(k+1)}{(k+1)(2k+1)}>0.
$$
(Note that the inequality $f(x)>0$ holds for all $x>\sqrt 2$.)
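A quick numerical spot-check of the inequality at a few sample points, including points in $(\sqrt2, 2]$ to illustrate the closing remark (an editorial sketch, not a proof):

```python
import math

# (1 - 2/x^2)^x < (x - 1)/(x + 1) for x > sqrt(2); spot-check a few points.
for x in [1.5, 2.0, 3.0, 10.0, 100.0]:
    lhs = (1 - 2 / x**2) ** x
    rhs = (x - 1) / (x + 1)
    assert lhs < rhs, (x, lhs, rhs)
```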
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4633390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Prove that $GOP'PD$ cyclic.
Let $ABC$ be a triangle with $\angle A=60.$ Define $G$ as the centroid and $O$ as the circumcenter. Define $P=(ABC)\cap AG$, $D$ as the midpoint of minor arc $BC$ and $P'$ as the reflection of $P$ over $BC$. Prove that $GOP'PD$ is cyclic.
Since $\angle BOC=2\angle A=120, \angle BDC=120\implies O$ is the reflection of $D$ over $BC$. Hence $P'POD$ is an isosceles trapezoid and hence cyclic. Now, we simply need to show that $G\in (OP'PD)$.
I also observed that $GOP'D$ is harmonic but I couldn't prove it ( proving $GOP'D$ is harmonic finishes).
Also, define $H$ as the orthocenter, we have $$\angle BOC=\angle BHC=\angle BP'C\implies BHOP'C\text{ is cyclic}.$$
| There is a very straightforward solution to this problem, not using the added points and lines:
Let the intersection of $OD$ and $BC$ be $M$, and let's assume $R$ is the circumradius of $\triangle ABC.$ Then, we have these identities:
$$OM= \frac{R}{2}, \\ MD=BD \sin \angle CAD=\frac{BD}{2}=\frac{2R\sin \angle DAB}{2}=\frac{R}{2}, \\ \frac{AM}{\sin \angle B}=\frac{\frac{BC}{2}}{\sin \angle BAM} \implies GM= \frac{1}{6} \times BC \times \frac{\sin \angle B}{\sin \angle BAM}, \\\frac{MP}{\sin \angle CAP}= \frac{\frac{BC}{2}}{\sin \angle C} \implies MP= \frac{1}{2} \times BC \times \frac{\sin \angle CAP}{\sin \angle C};$$
Therefore:
$$OM \times MD=\frac{R^2}{4};$$
and:
$$GM\times MP=\frac{BC^2}{12}.$$
However, $BC=2R\sin \angle A =R \sqrt 3$. So, $OM \times MD=GM\times MP$, and $GOPD$ is cyclic.
We are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4633561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find intersection points of $x^2 - 3xy+ 2y^2 - x + 1 = 0$ and $y = \alpha x + \beta$ This question comes from exercise $1.3$ in Rational Points on Elliptic Curves (Silverman & Tate). I am self studying and trying to work some of the exercises. This one is giving me some trouble.
Let $C$ be the conic given by the equation
$$x^2 - 3xy+ 2y^2 - x + 1 = 0.$$
Let $L$ be the line $y = \alpha x + \beta$. Suppose that the intersection $L \cap C$ contains the point $\left(x_0, y_0\right)$. Assuming that the intersection consists of two distinct points, find the second point of $L \cap C$ in terms of $\alpha, \beta, x_0, y_0$.
We know the line intersects the conic at the point $P = (x_0, y_0)$, so by the group law it also intersects it at a second point $P^2$. I think my understanding of this is algebraically correct, but I don't know how to translate it to my analytic understanding and write $P^2$ in terms of $\alpha, \beta, x_0,$ and $y_0$.
Substituting $y$ yields
\begin{align*}
x^2 -3x(\alpha x +\beta) + 2(\alpha x +\beta)^2 -x + 1 &= 0\\
x^2 -3\alpha x^2 -3x\beta + 2\alpha^2x^2 + 4\alpha x\beta +2\beta^2 -x + 1 &= 0\\
\end{align*}
which doesn't seem to lead anywhere.
| Let's write your quadratic as
$$
(1 -3\alpha+2\alpha^2) x^2 -(1+3\beta - 4\alpha \beta)x +2\beta^2 + 1 = 0
$$
Now, by Vieta's formulas, the sum of the roots of the quadratic $ax^2 + b x + c$ is $-b/a$. So for this quadratic we have
$$
x_0 + x_1 = \frac{1+3\beta - 4\alpha \beta }{1 -3\alpha+2\alpha^2}
$$
Next we use the line equation to get $y_0 + y_1$:
$$
y_0 + y_1=\alpha (x_1 + x_0) +2\beta = \alpha \frac{1+3\beta - 4\alpha \beta}{1 -3\alpha+2\alpha^2}+2\beta = \frac{\alpha + 2\beta -3\alpha\beta}{1 -3\alpha+2\alpha^2}
$$
Solving then gives
$$
x_1 = \frac{1+3\beta - 4\alpha \beta }{1 -3\alpha+2\alpha^2} - x_0\\
y_1 = \frac{\alpha + 2\beta -3\alpha\beta}{1 -3\alpha+2\alpha^2} - y_0.
$$
Note that the denominator here is the coefficients of the quadratic form in the conic. I'm not sure of an intuitive way to get the coefficients in the numerator, though.
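A numeric check of these formulas, with arbitrarily chosen $\alpha=0.3$, $\beta=0.2$ (any values with $1-3\alpha+2\alpha^2\neq0$ would do; this is an editorial sketch, not part of the original answer):

```python
import cmath

a, b = 0.3, 0.2  # alpha, beta; here 1 - 3a + 2a^2 = 0.28 != 0

A = 1 - 3 * a + 2 * a**2          # x^2 coefficient
B = -(1 + 3 * b - 4 * a * b)      # x coefficient
C = 2 * b**2 + 1                  # constant term
x0, x1 = [(-B + s * cmath.sqrt(B * B - 4 * A * C)) / (2 * A) for s in (1, -1)]
y0, y1 = a * x0 + b, a * x1 + b

# Both points lie on the conic, and the stated sum formulas hold.
for x, y in [(x0, y0), (x1, y1)]:
    assert abs(x**2 - 3 * x * y + 2 * y**2 - x + 1) < 1e-9
assert abs((x0 + x1) - (1 + 3 * b - 4 * a * b) / A) < 1e-12
assert abs((y0 + y1) - (a + 2 * b - 3 * a * b) / A) < 1e-12
```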
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4633708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find the minimum-maximum of $ N( 0, 1) $ Given that two random variables $J,P∼N(0,1)$ are independent. Show how to compute the following and provide the answer:
(i) $E[\min(J,P)]$ and $E[\min(J^2,P^2)]$ and
(ii) $E[\max(J,P)]$ and $E[\max(P^2,J^2)].$
A similar question was posted about 6 years ago, click here to see it. I followed the steps over there to try to solve this question. I understand there may be several ways of solving it, and I welcome any alternative methods. However, here’s my understanding so far, which may not be entirely correct:
I will let $D = \min(J,P)$ and $N=\max(J,P).$
It is a fact that $D+N = J+P.$ Then we are able to write the following:
$$ E(D)+E(N) = E(J) + E(P) = 0; \quad E(D)= -E(N)$$
Note that
$$ 2E(D) = E(D)-E(N) = E(D-N)=-E(|J - P|), \ \text{(negative expectation of a half-normal random variable)}$$
$$2E(D) = -\frac{2}{\sqrt{\pi}}; \quad E(D) = -\frac{1}{\sqrt{\pi}}, \quad E(N) = \frac{1}{\sqrt{\pi}}.$$
$W = \sum_{i=1}^k Z_i^2$ has a $\chi^2$ distribution, where $Z_i \sim N(0,1)$. Let $A = \min(J^2,P^2)$ and $B= \max(J^2,P^2) .$
$$ E(A) + E(B) =E(J^2) + E(P^2) =2$$
$$E(A) = 2-E(B)$$
$$E(A)=E(B)=1.$$
| For the last part, you already know that $\mathbb{E}(A+B)=2$, and all you have to do is find the value of $\mathbb{E}(B-A)$, which is not zero this time.
Indeed, $\mathbb{E}(B-A) = \mathbb{E}(|J^2 - P^2|)$. Given $\theta$ uniformly distributed on $[0, 2\pi[$ independent from $R$ with density $f(r) = r e^{-\frac{1}{2}r^2}$, $r \ge 0$, use polar coordinates to write $(J,P) \sim (R\cos\theta, R\sin\theta)$, and then: $$\mathbb{E}(B-A) = \mathbb{E}(R^2)\,\mathbb{E}\big(|\cos(\theta)^2-\sin(\theta)^2|\big) = 2 \cdot \mathbb{E}\big(|\cos(2\theta)|\big) = \frac{4}{\pi}$$
Hence $\mathbb{E}(A) = 1 - \frac{2}{\pi}$ and $\mathbb{E}(B) = 1 + \frac{2}{\pi}$.
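A rough seeded Monte Carlo check of all four expectations, with generous tolerances (an editorial sanity check only, not a proof; for the variables themselves the expectations come out as $\mp\frac{1}{\sqrt\pi}$):

```python
import math
import random

random.seed(0)
N = 200_000
s_min = s_max = sq_min = sq_max = 0.0
for _ in range(N):
    j, p = random.gauss(0, 1), random.gauss(0, 1)
    s_min += min(j, p)
    s_max += max(j, p)
    sq_min += min(j * j, p * p)
    sq_max += max(j * j, p * p)

assert abs(s_min / N - (-1 / math.sqrt(math.pi))) < 0.02
assert abs(s_max / N - (1 / math.sqrt(math.pi))) < 0.02
assert abs(sq_min / N - (1 - 2 / math.pi)) < 0.03
assert abs(sq_max / N - (1 + 2 / math.pi)) < 0.03
```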
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4633853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why am I getting $\pi(n)=\operatorname{li}(n)+O\left(\log^2 n\right)$ I was trying to see if I could prove the Prime Number Theorem using Legendre's formula. I did it the following way:
We hae
$$\log n!=n(\log n-1)+O(\log n)=\sum_{p\leq n}\frac{n-s_p(n)}{p-1}\log p=(n-O(\log n))\sum_{p\leq n}\frac{\log p}{p-1}$$
Coming from Stirling's approximation and Legendre's formula.
Then,
\begin{align}
\sum_{p\leq n}\frac{\log p}{p-1}&=\frac{n}{n-O(\log n)}(\log n-1)+O\left(\frac{\log n}n\right) \\
&= \left(1+O\left(\frac{\log n}n\right)\right)(\log n-1)+O\left(\frac{\log n}n\right) \\
&=\log n-1+O\left(\frac{\log^2 n}{n}\right)=f(n)
\end{align}
Which we can then differentiate:
$$\frac{df}{dn}=\frac{1}{n}+O\left(\frac{\log^2 n}{n^2}\right)=\frac{d\pi}{dn}\frac{\log n}{n-1}$$
Because there is a $\frac{d\pi}{dn}$ chance that $n$ is prime and that $\frac{\log n}{n-1}$ will be added to the sum.
Then,
\begin{align}
\frac{d\pi}{dn}&=\frac{n-1}{\log n}\left(\frac{1}{n}+O\left(\frac{\log^2 n}{n^2}\right)\right)\\
&=\frac{n-1}{n}\frac{1}{\log n}+O\left(\frac{\log n}{n}\right)\\
&=\frac{1}{\log n}+O\left(\frac{\log n}{n}\right)\\
\end{align}
And finally with integration, we get
$$\pi(n)=\operatorname{li}(n)+O\left(\log^2 n\right)$$
Which is a very very very strong error bound on $\pi(n)-\operatorname{li}(n)$, considering the best known asymptotic (even after assuming RH) is
$$\pi(n)=\operatorname{li}(n)+O(\sqrt n\log n)$$
So where have I gone wrong, and what would this method give me in the end? I suspect it's in the imprecise definition of differentiation but I'm not sure.
| It is not true that
$$\log n!=(n-O(\log n))\sum_{p\leq n}\frac{\log p}{p-1}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4634274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why $\bigg\lfloor \frac{n^m}{\binom{n}{m}}\bigg\rfloor=m!$ for $n>(m+1)^{m+2}$? Let $n,m\in \mathbb{N}_0$. I need to prove that if $n>(m+1)^{m+2}$ then
$$ \Bigg\lfloor \frac{n^m}{\binom{n}{m}}\Bigg\rfloor=m!.$$
Since
$$ \frac{n^m}{\binom{n}{m}}=\frac{n^m(n-m)!}{n!}m!=m!+\bigg(\frac{n^m(n-m)!}{n!}-1\bigg)m!,$$
I have tried to see when
$$\frac{n^m(n-m)!}{n!}-1<\frac{1}{m!}.$$
Can anyone help me or give me a hint? Thanks in advance!
| I think your approach is fine, so I've continued with it below.
We can rewrite the fraction $\frac{(n-m)!}{n!}$ as follows:
$$\frac{(n-m)!}{n!} = \frac{1}{n(n-1)(n-2)...(n-m+1)} = \prod_{k = 1}^m{\frac{1}{n-k+1}}$$
And $n^m$ can be written as
$$\prod_{k = 1}^m{n}$$
Then combining the products gives
$$\frac{n^m(n-m)!}{n!} = \prod_{k = 1}^m{\frac{n}{n-k+1}} = \prod_{k = 1}^m{\left(1 + \frac{k - 1}{n-k+1}\right)}$$
which can be expanded out as
$$1 \cdot \left(1 + \frac{1}{n-1}\right) \cdot \left(1 + \frac{2}{n-2}\right) \cdot ... \left(1 + \frac{m-1}{n-m+1}\right)$$
Subtracting the 1 as you've done leaves us with an expansion of $2^{m-1}$ terms (since each factor multiplies the number of terms by 2).
By inspection, we see that the largest possible size of a single term is $\frac{m-1}{n-m+1}$
Therefore, the remainder $\left(\frac{n^m(n-m)!}{n!} - 1\right)$ can be bounded by
$$2^m \cdot \frac{m}{n-m}$$
We want this to be less than $\frac{1}{m!}$, thus
$$2^m \cdot \frac{m}{n-m} < \frac{1}{m!}$$
From here, it's quite easy to rearrange for $n$. Then taking logs of both sides and applying Stirling's formula gives you the bound
$$n > m^m$$
which is slightly better than the one in your question.
I hope this works for you, and was easy enough to follow :)
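A direct check of the claim with exact integer arithmetic, taking $n$ just above the stated bound $(m+1)^{m+2}$ for small $m$ (an editorial sketch):

```python
import math

# floor(n^m / C(n, m)) == m! once n > (m + 1)^(m + 2)
for m in [1, 2, 3]:
    n = (m + 1) ** (m + 2) + 1
    assert n**m // math.comb(n, m) == math.factorial(m), (m, n)
```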
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4634517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Confused about dimension of a range of a linear map Suppose you have a linear map $T:Mat_{2\times3}(R)\rightarrow Mat_{2\times2}(R)$
Why is the range(T) 4 at most? I thought that this is a 2x2 matrix and that the maximum range is 2?
|
Why is the [dimension of] range(T) 4 at most? I thought that this is a 2x2 matrix and that the maximum range is 2?
$\text{range}(T)$ is not a $2\times 2$ matrix, but a SET of $2\times 2$ matrices. Precisely, it is a subspace of $W=M_{2\times 2}(\mathbb R)$. Its dimension has no connection with the maximum rank of the elements of $W$.
The dimension of the range of a linear map $f: V\to W$ is at most the minimum of the dimensions of $V$ and $W$. Since $\dim(M_{n\times m}(\mathbb R))=nm$, the dimension of $\text{range}(T)$ is at most $2\times 2=4$.
If you choose a basis of $V=M_{2\times 3}(\mathbb R)$ and a basis of $W=M_{2\times 2}(\mathbb R)$, then you can represent $T$ by a $4\times 6$ matrix (whose rank will be the dimension of $\text{range}(T)$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4634707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Zariski topology and morphisms that coincide on a dense set In Miles Reid's Undergraduate Algebraic Geometry book it is stated informally, about the Zariski topology:
(1) two morphisms which coincide on a dense open set coincide everywhere
I am surprised that he requires a dense open set, since morphisms are continuous maps for the Zariski topology, and I thought that for a general topological space we had the following statement
(2) two continuous maps which coincide on a dense set coincide everywhere
My question is: is (2) a valid statement of general topology? If not, what specificities of the Zariski topology make it invalid? (compared to, for instance, a metric space — which I'm more familiar with)
EDIT: Also, a link to a proof of the correct statement, with the precise conditions, would be much appreciated
| Actually, I don't think either of these two claims is true, at least in a general-enough setting.
Consider a topological space $X$ obtained by gluing two copies of $\mathbb{R}$ along $\mathbb{R}\setminus\{0\}.$ It looks like a line with a double origin. It comes with two natural embeddings of $\mathbb{R}$ different in the choice of 'origin' to map the point 0 to. Nevertheless, these two maps agree on a dense set. This construction can be emulated in algebraic geometry once you define a sufficiently general type of spaces, such as schemes.
The statement becomes true once you require the target space of your pair of morphisms to be Hausdorff. In the world of algebraic geometry this translates to requiring the source to be reduced and the target to be separated. I suspect that Reid only covers projective and affine varieties over an algebraically closed field, where this is automatic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4634945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Probability of even number of dice result Suppose we roll a die (with 6 sides) 21 times.
What are the odds of getting either 1 or 2 an even amount of times?
I tried to calculate it by representing the number of times we get 1 or 2 as a binomial variable $B(n=21, p=\frac{1}{3})$ and summing the probabilities of getting $0, 2,\ldots, 20$; I get that the probability is $0.5$, which, according to a different method of calculation I saw, is wrong.
Where did my method go wrong?
| Here's a potential hint:
For any given number to appear an even amount of times, there are
$$
{21\choose2}(5^{21-2})+{21\choose4}(5^{21-4})+{21\choose6}(5^{21-6})+\cdots+{21\choose20}(5^{21-20})=\sum_{n=1}^{10}{21\choose2n}(5^{21-2n})
$$
possibilities.
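For what it's worth, the OP's binomial sum can also be evaluated exactly (an editorial check with $p=\tfrac13$, counting $0$ occurrences as even — an interpretation choice); it matches the closed form $\frac{1+(1-2p)^n}{2}$ and sits just above $\tfrac12$, so $0.5$ is very close but not exact:

```python
from fractions import Fraction
from math import comb

p, n = Fraction(1, 3), 21
total = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(0, n + 1, 2))

assert total == (1 + (1 - 2 * p) ** n) / 2  # closed form
assert total > Fraction(1, 2)               # strictly above 1/2
```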
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4635125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there a math function that takes in a number n and returns a number with the digits of n sorted by value? The idea crossed my mind of a function that would take in an integer n and return the integer formed by the digits of n sorted in descending order.
e.g:
g(184729) = 987421
g(1212121212) = 2222211111
g(1234567890) = 9876543210
I looked it up and saw a similar question, but the answer did not satisfy me as I was looking for one using only basic arithmetic operations, summations, logarithme and modulo.
| So the answer I came up with might not be the best, but I am satisfied with it.
I split the following answer into 4 functions (for obvious reasons), but it can be made into one single function.
First, i(x,n) returns the nth digit of x
$i\left(x,n\right)=\left\lfloor\operatorname{mod}\left(\frac{x}{10^{n}},10\right)\right\rfloor$
c(x)+1 returns the length of x
$c\left(x\right)=\left\lfloor\log\left(x\right)\right\rfloor$
f(x,n) counts how many time the digit n is in x
$f\left(x,n\right)=\sum_{p=0}^{c\left(x\right)}0^{\left|i\left(x,p\right)-\operatorname{mod}\left(n,10\right)\right|}$
Finally, g(x) uses f(x,n) to count how many times each digit appears in x, builds a block of repeated digits from $\frac{1}{9}\cdot10^{f(x,n)}$ for each digit n, shifts the blocks so that the summed digits don't overlap, and multiplies by $10^{f(x,0)}$ one last time to append the zeros
$g\left(x\right)=\left(\sum_{n=1}^{9}\left\lfloor\left(\frac{1}{9}\right)\cdot10^{f\left(x,n\right)}\right\rfloor\cdot n\cdot10^{\sum_{k=1}^{n-1}f\left(x,k\right)}\right)\cdot10^{f\left(x,0\right)}$
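The four formulas translate directly into Python (an editorial sketch; `i`, `c`, `f`, `g` mirror the definitions above, with $\lfloor\frac19\cdot10^{f}\rfloor$ computed as an exact integer repunit):

```python
from math import floor, log10

def i(x, n):  # n-th digit of x, counting from the right
    return (x // 10**n) % 10

def c(x):     # floor(log10(x)); c(x) + 1 is the number of digits
    return floor(log10(x))

def f(x, n):  # how many times the digit n occurs in x
    return sum(1 for p in range(c(x) + 1) if i(x, p) == n % 10)

def g(x):     # digits of x sorted in descending order
    total, shift = 0, 0
    for n in range(1, 10):
        repunit = 10 ** f(x, n) // 9   # 11...1 with f(x, n) ones
        total += repunit * n * 10**shift
        shift += f(x, n)
    return total * 10 ** f(x, 0)       # append the zeros last

assert g(184729) == 987421
assert g(1212121212) == 2222211111
assert g(1234567890) == 9876543210
```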
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4635278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\sum_{n = 1}^{\infty}\frac{x_{n}}{a_{n}}$ converges. Show that $\frac{1}{a_{n}}\sum_{k = 1}^{n}x_{k}\rightarrow 0$ as $n\rightarrow \infty$. Let $\{x_{n}\}_{n\geq 1}$ be a sequence in $\mathbf{R}$ and $\{a_{n}\}_{n\geq 1}$ be a sequence of positive real numbers satisfying $a_{n} \uparrow \infty$ as $n \rightarrow \infty$ . Further, suppose that $\sum_{n = 1}^{\infty}\frac{x_{n}}{a_{n}}$ converges. Then show that $\frac{1}{a_{n}}\sum_{k = 1}^{n}x_{k}\rightarrow 0$ as $n\rightarrow \infty$.
My Attempt: If $(x_{n})$ is a sequence of positive real numbers, then by the comparison test $\frac{1}{a_{n}}\sum_{k = 1}^{n}x_{k}$ converges to zero. But how can I approach it for an arbitrary sequence $(x_{n})$?
| Let $y_j$ be a sequence of $n$ real numbers.
If $|y_1+\ldots+y_n|\geq1$ and if $b_1\geq b_2\geq\ldots\geq b_n=1$,
then there exist $f$ and $g$ such that $\displaystyle\bigg|\sum_{j=f}^g b_j y_j\bigg|\geq1$.
$\\$
This is, I think, the key fact. If you want me to include the proof, let me know.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4635459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Tracial and finite von Neumann algebras A tracial von Neumann algebra $(M,\tau)$ is a von Neumann algebra with a faithful normal tracial state $\tau$ on $M$. That is, $\tau$ is a function from $M \to \mathbb{C}$ such that it is a faithful normal state and $\tau(xy)=\tau(yx)$. I'm confused about tracial versus finite von Neumann algebras. I have seen references saying that a finite von Neumann algebra $M$ has a unique centre-valued trace, taking values in $Z(M)$. But this need not be scalar valued, no? My definition of a trace is a positive linear functional $\tau$ satisfying $\tau(xy)=\tau(yx)$. Does a finite von Neumann algebra have a faithful tracial state, that is, a scalar-valued one? Are they unique? I know a finite factor has a unique one.
| If you have a faithful tracial state, then the algebra is finite (if $v^*v=1$ then $0\leq \tau(1-vv^*)=\tau(1-v^*v)=0$, so $1$ is finite).
But the converse is not true. An algebra with a faithful state has to be "countably decomposable", it can only admit countably many pairwise orthogonal projections. But there are finite von Neumann algebras that have uncountably many pairwise orthogonal projections. For instance $\ell^\infty[0,1]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4635658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Volume of the spherical cap: $K_h := \left \{(x, y, z) ∈ \mathbb{R}^3 | x^2 + y^2 + z^2 ≤ R^2, z > h\right \}$ I want to check if my solution to this problem is right.
I have to calculate for $R > 0$ and $h\in [0, R)$ the volume of the spherical cap: $K_h := \left \{(x, y, z) ∈ \mathbb{R}^3 | x^2 + y^2 + z^2 ≤ R^2, z > h\right \}$
So what I have done is
$$V(K_h)= \int_{R}^{h}\left(\int_{x^2+y^2 \leq R^2-z^2} 1d(x,y)\right)dz\\=\int_{R}^{h}(R^2-z^2)dz\\= \left[R^2z-\dfrac{z^3}{3}\right]_{R}^{h}=\dfrac{3hR^2-h^3}{3}-\dfrac{2R^3}{3}$$
I think I have made a mess somewhere, but I don't know where the error is. Can someone help me?
| Your integral over $dxdy$ is wrong. That region is a disk, with area $\pi(R^2-z^2)$.
$$\iint_{x^2+y^2\le R^2-z^2}dx\,dy=\int_0^{2\pi} d\phi\int_0^{\sqrt{R^2-z^2}} \rho\, d\rho\\=2\pi\frac{\left(\sqrt{R^2-z^2}\right)^2}{2}\\=\pi(R^2-z^2)$$
Also, the limits of integration for $z$ are from $h$ to $R$, not the other way around.
$$\int_h^R\pi(R^2-z^2)dz=\pi R^3-\pi R^2h-\pi\frac{R^3}3+\pi\frac{h^3}3$$
To check your solution, if you plug in $h=0$ you should get half the volume of a sphere, or for $h=-R$ you get the full volume.
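A numeric cross-check of the corrected result against the textbook cap-volume formula $V=\frac{\pi a^2(3R-a)}{3}$ with cap height $a=R-h$ (the closed form is quoted here as a known identity, not part of the original answer), plus the two sanity checks suggested above:

```python
import math

def cap_volume_derived(R, h):
    return math.pi * R**3 - math.pi * R**2 * h - math.pi * R**3 / 3 + math.pi * h**3 / 3

def cap_volume_standard(R, h):
    a = R - h  # height of the spherical cap
    return math.pi * a**2 * (3 * R - a) / 3

for R in [1.0, 2.5]:
    for h in [0.0, 0.3 * R, 0.9 * R]:
        assert math.isclose(cap_volume_derived(R, h), cap_volume_standard(R, h))
    assert math.isclose(cap_volume_derived(R, 0), 2 * math.pi * R**3 / 3)   # half ball
    assert math.isclose(cap_volume_derived(R, -R), 4 * math.pi * R**3 / 3)  # full ball
```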
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4635820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
ODE: a specific question Determine the equation of the curve for which the $y$ intercept of the normal drawn to a point on the curve is equal to the distance of that point from the origin.
My attempt:
Consider an arbitrary point $(x,y)$ on the curve, where the slope of the tangent is $\frac{dy}{dx}$.
Thus the slope of the normal is $\frac{-dx}{dy}$.
Using the $y$ intercept form of a line
$$y= \frac{-dx}{dy}x+ \sqrt{x^2 +y^2}$$
Now I have tried solving this using substitutions,($y=ux$, and $x=uy$), but that didn't work.
To use the method of the integrating factor, unless I'm mistaken, I'll need a substitution, but I can't seem to find any appropriate substitutions.
Thanks for the help.
This problem is problem 54 in chapter 1 of the second volume of N. Piskunov's Differential and Integral Calculus.
There is a solution given, but it does absolutely nothing to explain how to solve this.
The solution they have given
$y+\frac{x}{y'}= \sqrt{x^2 +y^2}$
Whence
$x^2=C(2y+C)$
Which is what I'm unable to understand
TL;DR
How did
$y+\frac{x}{y'}= \sqrt{x^2 +y^2}$
Result in
$x^2=C(2y+C)$
| $$y= \frac{-dx}{dy}x+ \sqrt{x^2 +y^2}$$
$$ \frac{dx}{dy}x=-y+ \sqrt{x^2 +y^2}$$
$$ \frac{dx}{dy}=-\dfrac yx+ \sqrt{1 +\dfrac {y^2}{x^2}}$$
This looks like a homogeneous DE; substitute $x=ty$:
$$t'y+t=-\dfrac 1t+ \sqrt{1 +\dfrac {1}{t^2}}$$
$$t'y=\dfrac {-1-t^2+ \sqrt{1 +t^2}}{t}$$
$$\dfrac 12 (t^2+1)'y= {-1-t^2+ \sqrt{1 +t^2}}$$
The DE is separable. You can integrate after substituting $u^2=t^2+1$ and $u\,du=t\,dt$.
Another way:
You can also rewrite the original DE as:
$$y= \frac{-dx}{dy}x+ \sqrt{x^2 +y^2}$$
$$1= -\frac{du}{dv}+ \sqrt{1+\dfrac uv}$$
Where $u=x^2$ and $y^2=v$.
$$(u+v)'= \dfrac { \sqrt{ u+v}}{\sqrt v}$$
Separate and integrate.
$$\sqrt {u+v}= \sqrt v+C$$
$$x^2+y^2=(y+C)^2$$
$$\boxed{x^2=C(C+2y)}$$
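A numeric check that the boxed family $x^2=C(C+2y)$ really has the stated property — the $y$-intercept of the normal, $y+\frac{x}{y'}$, equals the distance of the point from the origin (an editorial sketch with an arbitrarily chosen $C$):

```python
import math

C = 2.0
for x in [0.5, 1.0, 3.0, 7.0]:
    y = (x**2 - C**2) / (2 * C)   # solve x^2 = C(C + 2y) for y
    dydx = x / C                  # differentiate x^2 = C^2 + 2Cy
    intercept = y + x / dydx      # y-intercept of the normal line
    assert math.isclose(intercept, math.hypot(x, y)), (x, intercept)
```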
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How is it that the Kullback-Leibler divergence is always non-negative but differential entropy can be positive or negative? According to Wikipedia, we have $$
D_{KL}(f ||g ) \geq 0
$$
always, but if $f$ is the pdf of a random variable $X$ and $g$ is the density of the un-normalized Lebesgue measure i.e. the constant function $1$, then
$$
D_{KL}(f ||g ) = \int f \log(f/g) = \int f\log f = -h(X),
$$
the negative of the differential entropy. Many distributions however have positive differential entropy (as shown here). So this means the left-hand side can take negative values. What's going on here?
| It is not necessarily true that $KL[f || g] \geq 0$ when $f, g$ are not probability measures. In your case $g$ does not integrate to one and hence the result fails, as your counterexample(s) demonstrate.
For some more intuition on what breaks down, let's look at a standard proof of the non-negativity of the KL for probability distributions $f, g$. This proof uses Jensen's inequality:
$$KL[f || g] = - \int \log (g/f) f \; dx \geq -\log \int g dx = 0.$$
Note that in the last step we rely on the assumption that $\int g dx = 1$. For your scenario (with $g$ being the Lebesgue measure, presumably on all of $\mathbb{R}$), this integral is infinite.
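A concrete instance of the failure (an editorial sketch): take $f$ uniform on $[0,10]$ and $g\equiv1$, so $\int g\,dx=10\neq1$ over the support; the "KL" integral then evaluates in closed form to $-\log 10<0$.

```python
import math

L = 10.0
f = 1.0 / L                       # density of Uniform[0, L]
kl = L * f * math.log(f / 1.0)    # integral of f*log(f/g) over [0, L], with g = 1

assert math.isclose(kl, -math.log(L))
assert kl < 0                     # "KL" goes negative once g is not normalized
```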
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Diagonalization of the matrix $\begin{pmatrix}1&-3&3\\3&-5&3\\6&-6&4\end{pmatrix}$ We need to find whether the matrix $$\begin{pmatrix}1&-3&3\\3&-5&3\\6&-6&4\end{pmatrix}$$
is diagonalizable. If so, we have to find the diagonal matrix and also the matrix that will diagonalize it.
I have found the eigenvalues: $-2,-2,4.$
And using the eigenvalues, the supposed diagonal matrix is$$\begin{pmatrix}-2&0&0\\0&-2&0\\0&0&4\end{pmatrix}.$$
But I need to prove that $D=P^{-1}AP.$ That's where the problem arises:
* I have a problem in finding the eigenvectors. How can I find the eigenvector when the eigenvalue is $-2?$ I am getting only one equation!
* Also, how can I find the "$P$" so that I can do $P^{-1}AP?$ (If I had 3 different eigenvalues and for them we had got 3 different eigenvectors, finding $P$ was pretty straightforward. But here we have 2 repetitions of the same eigenvalue and I can't even find the eigenvectors.)
| You were nearly done, since you already found two independent eigenvectors for $-2,$ namely $(1,1,0)^T$ and $(-1,0,1)^T.$
An eigenvector for $4$ is $(1,1,2)^T.$
These 3 columns will do for $P.$
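The eigenpairs are easy to verify mechanically (an editorial check): verifying $Av=\lambda v$ column-by-column is the same as verifying $AP=PD$, which gives $D=P^{-1}AP$ since the three eigenvectors are independent.

```python
A = [[1, -3, 3],
     [3, -5, 3],
     [6, -6, 4]]
pairs = [(-2, [1, 1, 0]), (-2, [-1, 0, 1]), (4, [1, 1, 2])]  # (eigenvalue, eigenvector)

for lam, v in pairs:
    Av = [sum(A[r][k] * v[k] for k in range(3)) for r in range(3)]
    assert Av == [lam * vk for vk in v], (lam, v, Av)
```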
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Suppose $f:\mathbb{R} \to \mathbb{R}$ is uniformly continuous. Show that $f(x+1)-f(x)$ is bounded Suppose $f:\mathbb{R} \to \mathbb{R}$ is uniformly continuous. Show that $f(x+1)-f(x)$ is bounded.
I have considered the question, and my current approach is to show that there exists $\delta > 1$ such that there is some fixed $\epsilon > 0$ for which $|x-p| < \delta \implies |f(x)- f(p)| < \epsilon$, thus bounding $f(x+1)-f(x)$ by this $\epsilon$. Is my current approach correct, and are there any hints or alternative approaches to this question?
Thanks in advance.
| By uniform continuity (applied with $\epsilon=1$), there exists a single $N\in\mathbb{N}$, independent of $x$, such that $y\in (x-N^{-1},x+N^{-1})$ implies
$$|f(y)-f(x)|<1$$
This same logic applies to $x_k=x+\frac{k}{2N}$:
$$|f(y)-f(x_k)|<1$$
Then
$$ |f(x+1)-f(x)|=\left|f\left(x+\frac{2N}{2N}\right)-f(x)\right|$$
$$=\left|\sum_{k=1}^{2N}\left[f\left(x+\frac{k}{2N}\right)-f\left(x+\frac{k-1}{2N}\right)\right]\right|$$
$$\leq \sum_{k=1}^{2N}\left|f\left(x+\frac{k}{2N}\right)-f\left(x+\frac{k-1}{2N}\right)\right|$$
Since
$$\frac{k}{2N}-\frac{k-1}{2N}=\frac{1}{2N}<\frac{1}{N},$$
each of the $2N$ terms is less than $1$, so $|f(x+1)-f(x)|<2N$.
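As an illustration (not part of the proof), assuming Python: for the uniformly continuous function $f(x)=\sqrt{x}$ the differences $f(x+1)-f(x)$ stay bounded, while for $g(x)=x^2$, which is not uniformly continuous on $\mathbb R$, they grow without bound.

```python
import math

xs = [10 ** k for k in range(7)]  # sample points 1, 10, ..., 10^6

# f(x) = sqrt(x) is uniformly continuous on [0, inf): differences stay bounded
sqrt_diffs = [math.sqrt(x + 1) - math.sqrt(x) for x in xs]

# g(x) = x^2 is not uniformly continuous: differences equal 2x + 1 and blow up
square_diffs = [(x + 1) ** 2 - x ** 2 for x in xs]

print(max(sqrt_diffs))    # stays below 1
print(max(square_diffs))  # grows like 2x
```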
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Iterating Hamilton's construction $\Bbb R\to \Bbb C$ yields what type of algebraic structure on $\Bbb C^2$ (e.g. occurs in nested complex datatypes). The C++ language provides a general template-based implementation of complex numbers. Normally, these are using to manipulate numbers of the form $a + b i$, where $a, b$ are chosen as either 32 or 64 bit floating point numbers. These are respectively denoted by std::complex<float> and std::complex<double>. The language implements the standard multiplication, division, addition etc. operators on complex numbers.
Because the code provided to manipulate complex numbers is general, we can embed complex numbers within complex numbers, e.g. using std::complex<std::complex<double>>, nesting arbitrarily deeply. The doubly nested data type is uniquely defined by four floating point numbers. It is thus representing numbers in a form $(a + b i) + (c + d i) j$, where $i^2 = -1$, $j^2 = -1$, and $a, b, c, d \in \mathbb{R}$.
I initially thought these would be related to quaternions, but the structure encountered here is not anti-commutative in the $i, j$ basis units which is apparently a property of quaternions. On top of that, these seem to not be equivalent to bicomplex numbers, because there is no third analog of the imaginary unit present, named $k$ in this article. I get the suspicion that std::complex<std::complex<double>> is isomorphic to the complex numbers after reading the Frobenius theorem about real division algebras.
Is this the correct conclusion? If so, why? If not, what algebraic structure is std::complex<std::complex<double>> isomorphic to?
| In the structure $\Bbb C\otimes_{\Bbb R}\Bbb C$ you get $i^2+1=j^2+1=0$ and thus also $i^2-j^2=0$, identifying $i\otimes 1$ with $i$ and $1\otimes j$ with $j$. This means that $$(i+j)(i-j)=0,$$ so you get zero-divisors. In a further step, as also $(ij)^2=1$, $$P_\pm=\frac12(1\pm ij)$$ are projectors on sub-spaces of $\Bbb C\otimes_{\Bbb R}\Bbb C$ where $i$ acts like $\pm j$, so the space decomposes into the vector-space sum of two "cross-diagonal" copies of $\Bbb C$.
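One can check the zero-divisor identity $(i+j)(i-j)=0$ directly. The sketch below (assuming Python; a pair `(z, w)` stands for $z+wj$ with $z,w\in\Bbb C$ and $j^2=-1$, mirroring the nested `std::complex<std::complex<double>>` type) multiplies by the rule $(z_1+w_1j)(z_2+w_2j)=(z_1z_2-w_1w_2)+(z_1w_2+w_1z_2)j$:

```python
def mul(p, q):
    """Multiply (z1 + w1*j) by (z2 + w2*j), where j*j = -1 and z, w are complex."""
    z1, w1 = p
    z2, w2 = q
    return (z1 * z2 - w1 * w2, z1 * w2 + w1 * z2)

i_plus_j  = (1j,  1)   # i + j
i_minus_j = (1j, -1)   # i - j

print(mul(i_plus_j, i_minus_j))  # (0j, 0j): a zero divisor, so not a division algebra

# The projector P+ = (1 + ij)/2 is idempotent: P+ * P+ == P+
P_plus = (0.5, 0.5j)
print(mul(P_plus, P_plus) == P_plus)  # True
```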
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Distance from the operator to the set of operators Find the distance from the operator $A \in B(L_2[0,1])$, $(Ax)(t)=\alpha(t)x(t)$ with $\alpha \in L_\infty[0,1]$, to the set of non-invertible operators.
I think I should take advantage of the fact that $A+B$ is invertible whenever $A$ is invertible and $\|B\| < \frac{1}{\|A^{-1}\|}$.
| The distance $D$ will be zero if $A$ is not invertible, so let us assume that $A$ is invertible. The spectrum of $A$ is $\sigma(A)=\overline{\alpha([0,1])}$, and we are assuming that $0\not\in\sigma(A)$, so
$$
0<\delta=\min\{|\lambda|:\ \lambda\in\sigma(A)\}=\operatorname{essinf}\{|\alpha(t)|:\ t\in[0,1]\}.
$$
For each $t$ we have that $A-\alpha(t)\,I$ is not invertible, hence
$$
D\leq\|A-(A-\alpha(t)\,I)\|=|\alpha(t)|,
$$
so $D\leq\delta$. Now suppose that $T$ is an operator with $\|T-A\|<\delta$. We have that $A^{-1}$ is the multiplication operator by $1/\alpha$, and so
$$
\|A^{-1}\|=\operatorname{esssup}\Big\{\frac1{|\alpha(t)|}:\ t\in[0,1]\Big\}=\frac1{\operatorname{essinf}\{|\alpha(t)|:\ t\in[0,1]\}}=\frac1\delta.
$$
That is $\|T-A\|<\|A^{-1}\|^{-1}$ and so $T$ is invertible. Thus $D\geq\delta$. In summary, since the equality below also works when $A$ is not invertible, the distance from $A$ to the non-invertible operators is
$$
D=\operatorname{essinf}\{|\alpha(t)|:\ t\in[0,1]\}.
$$
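A discretized illustration (a sketch assuming Python, not part of the proof): approximate the multiplication operator by a diagonal matrix with entries $\alpha(t_i)$ on a grid. Subtracting $\alpha(t^*)I$, where $|\alpha(t^*)|$ is minimal, yields a singular diagonal matrix at operator distance exactly $\min_i|\alpha(t_i)|$, matching the essinf formula. The grid size and the choice $\alpha(t)=2+\sin(2\pi t)$ are arbitrary.

```python
import math

N = 400
alpha = [2 + math.sin(2 * math.pi * k / N) for k in range(N)]  # diagonal of A

D = min(abs(a) for a in alpha)                        # essinf |alpha| on the grid
k_star = min(range(N), key=lambda k: abs(alpha[k]))   # where the minimum is attained

# T = A - alpha(t*) I is diagonal with a zero entry, hence non-invertible,
# and ||A - T|| = |alpha(t*)| = D (the norm of a diagonal operator is max |entry|)
T = [a - alpha[k_star] for a in alpha]
print(D, T[k_star])  # D is approximately 1 (attained near t = 3/4); T has a zero entry
```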
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question on Indefinite Integrals (Calculus II) hope you are doing well. I am working on a problem solving the following indefinite integral :
$$\int 6x \sqrt{2x^3+8}\, dx$$
I have gotten up to the point where :
$$\int \sqrt{u}\frac{du}{x}$$
But am stuck as I am not sure where to place the $x$, not sure if I have done an incorrect method or am missing something. Thank you for your assistance in advance :)
| What you did is fine. Let $u=2x^3+8$, $\frac{du}{dx}=6x^2$.
Hence $$\int 6x\sqrt{2x^3+8}\, dx = \int \frac{\sqrt{u}}{x}\, du$$
However, I do not expect this integral to have a nice closed form: Wolfram Alpha expresses the antiderivative in terms of a hypergeometric function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4636944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unable to understand usage of the Beta function in a previous question In this question, OP says they used the definition of the Beta function on, $$S := \sum_{n=0}^{\infty}(-1)^{n}\frac{\Gamma(n+\frac{3}{4})}{\Gamma(n+\frac{5}{4})}=\frac{\sqrt{\pi}}{2}$$ to get, $$\sqrt{\pi}S=\intop_0^1\frac{x^{-\frac{1}{4}}}{(1+x)\sqrt{1-x}}dx$$ I can't figure out how they ended up with this equation. I tried to think of it as choosing values for $x$ and $y$ to end up on this integral, but I can't figure out which ones they used. I do know the integral definition of the Beta function which is $$\beta(x,y) = \int_{0}^{1}t^{x-1}(1-t)^{y-1}dt$$ Any hints would be helpful.
| I'm the OP. This is how I did it:
$\sum_{n=0}^\infty (-1)^n\frac{\Gamma({n+\frac{3}{4}})}{\Gamma({n+\frac{5}{4}})}=\sum_{n=0}^\infty (-1)^n\frac{\Gamma({n+\frac{3}{4}})}{\Gamma({n+\frac{3}{4}+\frac{1}{2}})}$.
Now multiplying by $\Gamma(\frac{1}{2})=\sqrt{\pi}$:
$$\sum_{n=0}^\infty (-1)^n\Gamma(\frac{1}{2})\frac{\Gamma({n+\frac{3}{4}})}{\Gamma({n+\frac{3}{4}+\frac{1}{2}})}$$
The Gamma terms are just $B(\frac{1}{2},n+\frac{3}{4})$
So we get:
$$\sum_{n=0}^\infty (-1)^nB(\frac{1}{2},n+\frac{3}{4})$$
Using the integral form of the Beta function:
$$\sum_{n=0}^\infty (-1)^n \intop_0^1 x^{n+\frac{3}{4}-1}(1-x)^{\frac{1}{2}-1}dx$$
Now, exchanging the sum and the integral (the sum definitely converges, and the integral does too):
$$ \intop_0^1 \sum_{n=0}^\infty (-1)^n x^n\cdot x^{-\frac{1}{4}}(1-x)^{-\frac{1}{2}}dx$$
And finally using the formula for a geometric series:
$$ \intop_0^1 \frac{x^{-\frac{1}{4}}}{(1+x)\sqrt{1-x}}dx$$
And since the only change we made to the original sum was to multiply by $\sqrt{\pi}$:
$$\sqrt{\pi}\sum_{n=0}^\infty (-1)^n\frac{\Gamma({n+\frac{3}{4}})}{\Gamma({n+\frac{5}{4}})}=\intop_0^1 \frac{x^{-\frac{1}{4}}}{(1+x)\sqrt{1-x}}dx$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4637117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Deciding whether $f$ is Lebesgue integrable Background: I'm learning Lebesgue's theory of integration. We have set up the outer measure, then defined the Lebesgue measure using Carthedory's criterion, then showed operations under which the set of Lebesgue measurable sets is stable (e.g. complements and countable union). We have defined measurable functions and now set up Lebesgue's integral using simple functions. We have just proved MCT.
Question: I'm having trouble deciding when a function $f$ is Lebesgue integrable over an interval. The definition of $f$ being integrable we follow is that (i) $f$ is measurable and (ii) $\int f^+, \int f^- <\infty$, where $f^+:=\max(f,0)$ and $f^-:=\max(-f,0)$.
How would I say whether this function is integrable? Let $f$ be defined on $\mathbb{R}$ as $x$ if $x$ is rational, and $0$ if $x$ is irrational.
So far, I've rewritten $f$ as $f(x)=x\chi_{\mathbb{Q}}(x)$. Then $f$ is measurable since $g(x)=x$ is measurable and $\mathbb{Q}$ is a measurable set, so its characteristic function is measurable, and then note that $f$ is a product of measurable functions, hence measurable.
I'm not sure how to show $\int f^+, \int f^- <\infty$ because I don't know how to integrate over $\mathbb{Q}$. Any ideas?
| For any nonnegative measurable function $f$ and measurable set $A$ we have $$\int_A f(x)\lambda(dx)\leq \lambda(A)\cdot\sup_{x\in A}f(x)$$ Now note that $$\int x\cdot\chi_\mathbb Q(x)\lambda(dx)=\int_\mathbb Qx\lambda(dx)$$
First, compute for $K>0 $ the integral $$\int_{[-K,K]}x\cdot\chi_\mathbb Q(x)\lambda(dx)$$ and think about what happens when $K\to\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4637257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Has any other mathematician defined mathematical sentences in this particular way? I have come up with a particular way to represent mathematical sentences, such that even distinct sentences with the same truth value are indeed distinct entities. I came up with this way while thinking of how to differentiate the sentences $4=4$ and $2+2=4$. I would define the first sentence as the ordered triple $(4,=,4)$, where the second component is the equality relation on the natural numbers. And, I would define the second sentence as the ordered triple $((2,+,2),=,4)$, where the first component is itself the ordered triple $(2,+,2)$, whose second component is the binary operation of addition on the naturals. And also, for example the sentence $2+(2 \times 1) = 4$, would be represented in my scheme as the ordered triple $((2, +, (2, \times, 1)), =, 4)$. I hope it is clear now how to go about doing this. Anyway, my real question is, has any other mathematician done this or some very similar thing? I would love to see some references for this.
| The idea of defining formulas of finitary logics as tuples is quite old. See for instance R. Smullyan: First-Order Logic. New York, 1968, p.7, where formulas of classical propositional logic are defined as 1-, 2- or 3-tuples.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4637419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Linear transformation of random vector has bounded moments? Suppose that the random $p$-vector $\mathbf{y}=(Y_1, Y_2,\ldots, Y_p)'$
with $p\to\infty$ satisfies:
*
*$\mathrm{E}Y_i = 0$, $\mathrm{E}Y_i^2=1$ for any $1\leqslant i \leqslant p$;
*$\mathrm{Cov}(Y_i,Y_j)=0$ for $i\neq j$
*for any $m\geqslant 1$ and $1\leqslant i \leqslant p$, $\sup\limits_{p\geqslant 1}\mathrm{E}Y_{i}^m < \infty$.
The $p\times p$ matrix $\mathbf{A}_p$ has bounded spectral norm, that is, $\sup\limits_{p\geqslant 1}\|\mathbf{A}_p\|_2 = \sup\limits_{p\geqslant 1}\left(\max\limits_{|\mathbf{x}|_2\neq 0}\frac{|\mathbf{A}_p\mathbf{x}|_2}{|\mathbf{x}|_2}\right)<\infty$.
Denote $\widetilde{\mathbf{y}}=\mathbf{A}_p\mathbf{y}=(\widetilde{Y}_1, \widetilde{Y}_2, \dots, \widetilde{Y}_p)$.
Question: Does $\sup\limits_{p\geqslant 1}\mathrm{E}\widetilde{Y}_i^m<\infty$ for any $m\geqslant 1$ and any $1\leqslant i \leqslant p$?
It is obvious that the expectation $\mathrm{E}(\widetilde{\mathbf{y}})=\boldsymbol{0}$, but I do not know how to deal with higher-order moments.
| This is false without the assumption of bounded spectral norms. Consider $(Y_i)_{i\in\mathbb N}$ i.i.d. $N(0,1)$-distributed. Define the matrices $$A_p=\begin{pmatrix}
1 & 0 & \dots &\dots & 0 & 0 \\
0 & 2 &0& \dots &0&0 \\
\dots \\ 0 & 0 &\dots &\dots &0 & p
\end{pmatrix}$$ Then clearly $\mathbb E |\widetilde Y_p|^m \to \infty$ for $m>1$.
If we furthermore assume $\sup_{p\geq 1}\Vert A_p\Vert_2=\Lambda<\infty$, then in the Gaussian case $(A_pY)_i\sim N(0,\sigma_i^2)$ with $\sigma_i\leq\Lambda$ (the $i$-th row of $A_p$ has Euclidean norm at most $\Lambda$), so $$\mathbb E|\widetilde Y_i|^m = \mathbb E| (A_pY)_i|^m\leq \Lambda^m \mathbb E|Y_i|^m\leq \Lambda ^m \sup_{j\geq 1}\mathbb E|Y_j|^m<\infty$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4637576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to solve $\int_{0}^{1}e^{x}\ln(1+e^{x})dx$? I am trying to solve this integral:
$\int_{0}^{1}e^{x}\ln(1+e^{x})dx$
Here is my attempt:
$[e^{x}\ln(1+e^{x})]_{0}^{1} - \int e^{x} \frac{e^{x}}{1+e^{x}}dx$
$[e^{x}\ln(1+e^{x})]_{0}^{1} - ( \ln(1+e^{x})e^{x} - \int \ln(1+e^{x})e^{x}dx )$
Then I got the original integral back, my attempt was to then use this method:
$I = \int_{0}^{1}e^{x}\ln(1+e^{x})dx$
$I = [e^{x}\ln(1+e^{x})]_{0}^{1} - \ln(1+e^{x})e^{x} + I$
$0 = [e^{x}\ln(1+e^{x})]_{0}^{1} - \ln(1+e^{x})e^{x}$
And now I don't know what to do because the $I$ on the left and right cancel each other. Any help is greatly appreciated.
| Consider the substitution
$$u= 1+e^x.$$
In this case you have $e^x\, dx = du$ (equivalently $dx = e^{-x}\, du$), $u=2$ when $x= 0$, $u= 1+e$ when $x= 1$, and your integral turns into
$$ \int_2^{1+e} \log(u) du,$$
which can be easily solved using integration by parts.
To be more clear, integration by parts is:
$$\int_2^{1+e} f(u) g^{\prime}(u)\ du = f(u) g(u)\Big|_{2}^{1+e}-\int_2^{1+e} f^{\prime}(u) g(u) du.$$
Here you can take $f(u) = \log(u)$ and $g^{\prime}(u) =1.$
Then it follows that
$$ \int_2^{1+e} \log(u) du = u \log(u)\big|_2^{1+e} -\int_2^{1+e} 1 du =(u \log(u) - u)\Big|_2^{1+e}.$$
Going back with the substitution you have
$$\int_0^1 e^x \log(1+e^x) dx = \left[(1+e^x) \log(1+e^x) - (1+e^x)\right]\Big|_0^1 \\= (e+1)\log(e+1)-2\log(2) -e+1.$$
$\bf{EDIT:}$ I changed the limits of integration, but in the end I preferred to switch back to substitute the limits in the original variable.
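A numerical cross-check of the closed form (a sketch assuming Python, using composite Simpson's rule on the smooth integrand; the number of subintervals is an arbitrary choice):

```python
import math

def f(x):
    return math.exp(x) * math.log(1 + math.exp(x))

# Composite Simpson's rule on [0, 1]
n = 1000  # even number of subintervals
h = 1.0 / n
simpson = f(0) + f(1) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
numeric = simpson * h / 3

e = math.e
closed_form = (e + 1) * math.log(e + 1) - 2 * math.log(2) - e + 1

print(numeric, closed_form)  # the two values agree
```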
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4637724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
In set theory, is the axiom of pairing circular? The axiom of pairing uses "objects" $a$ and $b$ to form the set $\{a,b\}$, and the objects in this case can be either individuals or sets. But it seems to me that the entire point of the axioms is to establish what a set is, so how can we use the concept of a set in its definition?
| (A) The Axiom of Pairing is: $\forall a\,\forall b\,\exists c\,\forall x\,(x\in c \leftrightarrow x=a \lor x=b)$.
(B) That Axiom can then give the Interpretation that we have a Set with 2 elements, hence that Axiom is building up the Set theory.
(C) Consider https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory : The 1st is the Axiom of extensionality , containing words like "two sets are equal ...." , the 2nd is the Axiom of regularity containing words like "Every non-empty set ...." , the 8th is Axiom of Power Set where the title itself is containing the word Set.
The Axiom of Pairing is listed 4th & there is nothing special about it in terms of Circularity. All Axioms contain the word Set.
(D) We can have no way to make the Axioms without using the word Set. That term is left undefined. The Circularity is inherent ....
We must have some Intuition about what a Set is & what a Set is not. Then the Axioms formalize that Intuition. With that formalization , we can move on to various Definitions & theorems & other higher-level theories like Polynomials , groups , rings , vectors , Etc.
(E) We might think that we can avoid using Set in the Axioms by using a new term , something else like "Collection" or "Class" or "Category" , but that is only pushing the Circularity or undefinedness to that new term , "Collection" or "Class" or "Category".
In Set theory , the term Set is left undefined , with all Axioms using that term freely.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4637839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proof by induction of summation inequality: $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\dots+\frac1{3^n}\ge 1+\frac{2n}3$ Proof by induction of summation inequality: $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\dots+\frac1{3^n}\ge 1+\frac{2n}3$
We start with $$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac1{3^n}\ge 1+\frac{2n}3$$ for all positive integers.
I have finished the basic step, the hypothesis is easy too, but I do not know what to do for n+1:
$$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac1{3^n}+\frac1{3^{n+1}} \ge 1+\frac{2n}3 +\frac{1}{3^{n+1}}$$
Can you help me from here?
| The sum on the LHS is
$$S_n=\sum_{k=1}^{3^n} \frac1k$$
The hypothesis is that $S_n\ge 1+\frac{2n}{3}$
Base case $n=0$. As $S_0=1\ge1+0=1$, the case $n=0$ is true.
Assume true for $n$, prove
$$S_n+\sum_{k=3^n+1}^{3^{n+1}}\frac1k\ge 1+\frac{2n}{3}+\frac{2}{3}$$
As
$$\sum_{k=3^n+1}^{3^{n+1}}\frac1k\ge\sum_{k=3^n+1}^{3^{n+1}}\frac1{3^{n+1}}=\frac{2\cdot3^n}{3^{n+1}}=\frac23$$
holds for every $n$, the induction step follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4638233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Generalized Pépin-Test (Problem understanding a paper) I am reading this paper (but you don't need to, I will write down what is needed for the question), and I have difficulty understanding a certain conclusion.
$(1.1)\ \ $ $n = 2^k+1$ is prime $\Leftrightarrow$ $3^{\frac{n-1}{2}}\equiv -1 \pmod n.$
I totally understand this, the Pépin-Test.
The paper raises the problem
Problem $(2.3)$. Given an odd integer $h > 1$. Determine a finite set $\mathfrak{D}$ and for every positive integer $k \geq 2$ an integer $D\in \mathfrak{D}$ such that $\left(\frac{D}{h\cdot 2^k+1}\right) \neq 1$ and $D\not\equiv 0 \pmod{h\cdot 2^k+1}$.
where$\left(\frac{\cdot}{\cdot}\right)$ is the Jacobi-Symbol. I do understand this problem.
Later, the author claims
$(3.1)\quad$ Let $n=h\cdot 2^k+1$ with $h\not\equiv 0\pmod 3$, $k\geq 2$.
Then $\mathfrak{D}=\{3\}$ and $D_k = 3$ for $k\geq 2$ solves $(2.3)$. In particular, If $2^k>h$, then $n$ is prime $\Leftrightarrow 3^{\frac{n-1}{2}}\equiv -1 \pmod n$.
I could prove this, also.
Now comes my question. The author says
The first observation we make is that a solution to Problem $(2.3)$ for one
particular $h$ will in general lead to a solution for every $h'$ in the same residue
class modulo $\prod\limits_{D\in\mathfrak{D}}D$. In that light, $(3.1)$ is in fact a consequence of $(1.1)$ and the special case $h = 5$ and $\mathfrak{D} = \{3\}$.
I don't know how to handle this sentence. Unfortunately, I don't understand the special case.
So, $\prod\limits_{D\in\mathfrak{D}}D = 3$, still. But the residue classes $5\pmod 3$ are $-4,-1,2,5,8...$ and so on. But $\prod\limits_{D\in\mathfrak{D}}D = 3$ also solves $h \equiv 1,4,7...$.
I know that in my case $\left(\frac{D}{h\cdot 2^k + 1}\right)$ is the same as $\left(\frac{-D}{h\cdot 2^k + 1}\right)$, so I could come up with that. But I think this is not what the sentence wants to express.
I do not understand this claim.
| Let us introduce Problem $(2.3')$.
Problem $(2.3')$. Given an odd integer $h > 1$. Determine a finite set $\mathfrak{D}$, an integer $\ell\ge2$ and for every positive integer $k \geq \ell$ an integer $D\in \mathfrak{D}$ such that $\left(\frac{D}{h\cdot 2^k+1}\right) \neq 1$ and $D\not\equiv 0 \pmod{h\cdot 2^k+1}$.
Claim: If we have solved problem $(2.3')$ for $h$, then we can solve problem $(2.3)$ for $h$ as well.
Proof: Suppose we have $h, \mathfrak D, \ell$ and $k\to D$ as prescribed in $(2.3')$. There are only finitely many integers $k$ such that $2\le k<\ell$. For each such $k$, choose $$a_{h,k}=\begin{cases}
\text{a positive quadratic non-residue modulo }h\cdot2^k+1&\text{ if }h\cdot 2^k+1\text{ is prime}\\
\text{a prime factor of } h\cdot 2^k+1 &\text{ if }h\cdot 2^k+1\text{ is not prime}\\
\end{cases}$$
Then $h, \mathfrak D\cup\{a_{h,k}\mid 2\le k<\ell\}$, $k\to\begin{cases}a_{h,k}&\text{ if } 2\le k< \ell\\D&\text{ if } k\ge\ell\end{cases}$ can be used for problem $(2.3)$.
The first observation we make is that a solution to Problem $(2.3)$ for one particular $h$ will in general lead to a solution for every $h'$ in the same residue class modulo $\prod\limits_{D\in\mathfrak{D}}D$.
Suppose for a particular odd integer $h > 1$, we have found a solution to Problem $(2.3)$, i.e., a finite set $\mathfrak{D}_h$ and for every positive integer $k \geq 2$ an integer $D_{h,k}\in \mathfrak{D}_h$ such that $\left(\frac{D_{h,k}}{h\cdot 2^k+1}\right) \neq 1$ and $D_{h,k}\not\equiv 0 \mod{h\cdot 2^k+1}$.
Suppose we have another odd integer $h'>1$ such that $h'\equiv h \bmod(\prod\limits_{d\in\mathfrak D_h}d)$. I use letter $d$ instead of $D$ as the dummy index variable in the product so as to avoid the abuse of letter $D$. Of course, $\prod\limits_{d\in\mathfrak D_h}d$ and $\prod\limits_{D\in\mathfrak D_h}D$ mean the same number.
In order to show that we can find a solution of $(2.3)$ for $h'$ as well, it is enough to show that we can find a solution of $(2.3')$ for $h'$, thanks to the claim above.
Let $\mathfrak{D}_{h'}=\mathfrak D_h$, $\ell=\max\left(2,\left\lceil\log_2\left(\frac{\max_{d\in\mathfrak D_{h}}d}{h'}\right)\right\rceil\right)$
and for every positive integer $k\ge\ell$, $D_{h',k}=D_{h,k}$.
*
*$D_{h',k}\in \mathfrak{D}_{h'}$
*We have $$\begin{align}\left(\frac{D_{h',k}}{h'\cdot 2^k+1}\right) &= \left(\frac{h'\cdot 2^k+1}{D_{h',k}}\right)\tag{1a}\\
&= \left(\frac{h'\cdot 2^k+1}{D_{h,k}}\right)\tag{1b}\\
&= \left(\frac{h\cdot 2^k+1}{D_{h,k}}\right)\tag{1c}\\
&= \left(\frac{D_{h,k}}{h\cdot 2^k+1}\right)\tag{1d}\\
&\neq 1\tag{1e}\end{align}$$
Explanations:
$(1a)$: Since $k\ge2$, $h'\cdot2^k+1\equiv1\mod4$. Apply the law of quadratic reciprocity for Jacobi symbol.
$(1b)$: $D_{h',k}=D_{h,k}$.
$(1c)$: Since $h'\equiv h \bmod\prod\limits_{d\in\mathfrak D_h}d$, we have $h'\cdot 2^k+1\equiv h\cdot 2^k+1\bmod\prod\limits_{d\in\mathfrak D_h}d$. Since $D_{h,k}\in \mathfrak D_h$, $h'\cdot 2^k+1\equiv h\cdot 2^k+1\bmod D_{h,k}$
$(1d)$: Similar to $(1a)$.
$(1e)$: Assumption.
*$0<D_{h',k}\le\displaystyle{\max_{d\in\mathfrak D_{h}}d} \le h'\cdot 2^\ell< h'\cdot 2^k+1.$
Hence $D_{h',k}\not\equiv 0 \mod{h'\cdot 2^k+1}$.
... $(3.1)$ is in fact a consequence of $(1.1)$ and the special case $h = 5$ and $\mathfrak{D} = \{3\}$
$(1.1)$ is the special case of $(3.1)$ with $h=1$.
"the special case $h = 5$ and $\mathfrak{D} = \{3\}$" refers to the special case of $(3.1)$ with $h=5$.
In other words, $(3.1)$ is the consequence of two special cases of itself. A clearer statement could have been
... $(3.1)$ is in fact a consequence of two special cases of itself, the case $h=1$, which is $(1.1)$ and the case $h = 5$.
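Claim $(3.1)$ is easy to test empirically for small parameters. A sketch assuming Python (trial division suffices at this size; the ranges of $h$ and $k$ are arbitrary choices): for odd $h\not\equiv0\pmod 3$ with $2^k>h$, primality of $n=h\cdot 2^k+1$ should coincide with $3^{(n-1)/2}\equiv-1\pmod n$.

```python
def is_prime(n):
    # Naive trial division, fine for n up to about 10^6
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for h in (5, 7, 11, 13):          # odd, not divisible by 3
    for k in range(2, 17):
        if 2 ** k <= h:
            continue               # (3.1) requires 2^k > h
        n = h * 2 ** k + 1
        test_says_prime = pow(3, (n - 1) // 2, n) == n - 1
        assert test_says_prime == is_prime(n), (h, k)
print("criterion of (3.1) matches primality for all tested (h, k)")
```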
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4638400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is this almost affine code not equivalent to a linear code? I am a bachelor student who just started studying coding theory and I came across the following example in "Generalized Hamming weights for almost affine codes" by Johnsen and Verdure, page 4 (https://arxiv.org/abs/1601.01504):
We will use a running example throughout this paper. It is the almost affine code $C'$ in [14, Example 5]. It is a code of length 3 and dimension 2 on the alphabet $F = \{0, 1, 2, 3\}$. Its set of codewords is
$$
\begin{matrix}
000 & 011 & 022 & 033 \\
101 & 112 & 123 & 130 \\
202 & 213 & 220 & 231 \\
303 & 310 & 321 & 332 \\
\end{matrix}
$$
[...] This is an example of an almost affine code which is not
equivalent to a linear code, and not even to a multilinear code.
I understand why the code is almost affine. However, to me, it seems like this is a linear code, since all the codewords can be expressed as linear combinations of 101 and 011. What mistake am I making? Also, how would I check that it is not equivalent to a multilinear code? Thank you in advance.
| When people say a code $C$ is linear, they mean that $C$ is a linear subspace of a vector space. It is true that $C'$ is a subgroup of the group $(\mathbb Z/4\mathbb Z)^3$, generated by the two group elements $(1,0,1)$ and $(0,1,1)$, but since it is not a subspace of a vector space, we cannot call $C'$ linear. Note that $\mathbb Z/4\mathbb Z$ is not a field.
I do not know about how to prove it is not equivalent to a (multi)linear code, perhaps someone else can answer about that.
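Indeed, $C'$ is exactly the subgroup of $(\mathbb Z/4\mathbb Z)^3$ generated by $(1,0,1)$ and $(0,1,1)$, which one can confirm directly (a quick check, assuming Python):

```python
listed = {
    "000", "011", "022", "033",
    "101", "112", "123", "130",
    "202", "213", "220", "231",
    "303", "310", "321", "332",
}

# Subgroup of (Z/4Z)^3 generated by (1,0,1) and (0,1,1):
# every element is a*(1,0,1) + b*(0,1,1) = (a, b, a+b) mod 4
generated = {f"{a}{b}{(a + b) % 4}" for a in range(4) for b in range(4)}

print(generated == listed)  # True
```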
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4638583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How is $\pi^{-1}\left(U_i\right)$ different from $U_i \times F$ in fiber bundles? On defining a fibre bundle, it is argued that the projection map $\pi$ requires to be satisfied the condition that there is a homeomorphism $h$ such that the first coordinate coincide.
Definition (Fibre bundle). A fibre bundle structure on a space $E$ is a pair of spaces $F$ and $X$ equipped with a projection map $\pi: E \rightarrow X$ such that there is an open cover $\left\{U_i\right\}$ of $X$ and homeomorphisms $h_i: \pi^{-1}\left(U_i\right) \rightarrow U_i \times F$ coinciding with $\pi$ in the first coordinate. We write:
$$
F \longleftrightarrow E \stackrel{\pi}{\longrightarrow} X .
$$
In particular, this means that each fibre $F_x=\pi^{-1}(x)$ maps homeomorphically to $\{x\} \times F$. We call $E$ the total space, $X$ the base space, $F$ the fibre and $h_i$ the (local) trivialisations.
But I don't understand how this two spaces $\pi^{-1}\left(U_i\right)$ and $U_i \times F$ are really different?
|
But I don't understand how this two spaces $\pi^{-1}(U_i)$ and $U_i\times F$ are really different?
They are not "really" different in the sense that they are homeomorphic (diffeomorphic in the smooth category), meaning "essentially the same".
However, they are not "the same" in the sense that $\pi^{-1}(U_i)$ is a subset of $E$, whereas $U_i\times F$ is a product space that is not itself a subset of $E$.
So the statement that $E$ has a fibre bundle structure means that locally the space $E$ looks like $U_i\times F$, i.e. there are open subsets $\pi^{-1}(U_i)$ that are homeomorphic to $U_i\times F$.
However, $E$ might not have the global structure $E\cong X\times F$ (not necessarily true). For example, let $E$ be a Moebius strip and let $X$ be a circle on this Moebius strip (going around once). Now for each $x\in X$ draw a line segment on the strip such that the whole strip is nicely divided up into line segments (As in this wikipedia image). Then globally the space $E$ does not look like "a circle"$\times$"a line segment". That would be a cylinder, not a Moebius strip.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4638819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculating $\mathbb{E}[2\sin (\pi Z)|\cos (\pi Z)]$ when $Z$ is uniform on $[0,2]$ I am trying to calculate the following conditional expectation.
Let Z be a uniformly distributed random variable on the closed interval $[0, 2]$.
Define $X = \cos(\pi Z)$ and $Y = 2\sin(\pi Z)$.
Calculate $\mathbb{E}[Y|X] = \mathbb{E}[Y|\sigma(X)]$.
I have tried multiple approaches, but don't know how to proceed since I am only familiar with the standard techniques for calculating the conditional expectation, i.e. when Y is independent of X, Y is measurable with respect to $\sigma(X)$ or if a joint density $f(x,y)$ exists. The first two parts don't apply here and since X and Y aren't independent I don't know how to calculate the joint density.
Any help is greatly appreciated. Thank you very much!
| This is a simplified case from:
https://stats.stackexchange.com/questions/603110/how-to-calculate-px-in-a-sinx-y
Here: $P[Z\in A|X=x]=\frac{1}{2}\left(I_A(\arccos(x)/\pi)+I_A((2\pi-\arccos(x))/\pi)\right)$, i.e. given $X=x$, the two possible values of $Z$ are equally likely. Since $\sin(\arccos(x))=-\sin(2\pi-\arccos(x))$, the two contributions to $E[Y|X=x]$ cancel.
Therefore: $E[X|Y]=E[Y|X]=0$ (the same symmetry argument works with the roles of $X$ and $Y$ exchanged).
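A Monte Carlo sanity check (a sketch, assuming Python's standard library; the sample size and number of bins are arbitrary): conditioning on $X$ by binning, the conditional average of $Y$ should vanish in every bin.

```python
import math, random

random.seed(0)
bins = [[0.0, 0] for _ in range(10)]     # [sum of Y, count] per bin of X in [-1, 1]

for _ in range(200_000):
    z = random.uniform(0.0, 2.0)
    x = math.cos(math.pi * z)
    y = 2 * math.sin(math.pi * z)
    idx = min(int((x + 1) / 2 * 10), 9)  # which of 10 equal bins x falls in
    bins[idx][0] += y
    bins[idx][1] += 1

bin_means = [s / c for s, c in bins]
print(max(abs(m) for m in bin_means))    # small in every bin: E[Y | X] = 0
```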
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4639027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Introducing undergraduate students to dynamical systems In my department a course on dynamical systems is offered this semester. It is a course offered to third (out of four) year undergraduate students and it involves basic dynamics of real maps, logistics map, Sharkovsky's theorem and basic dynamics of complex polynomials.
I have been asked to contribute to this course, and I was thinking about giving tasks to the students. That is, reading about topics that will not be covered in the course, but are fairly close to it and present to the whole class what they have learned and enjoyed about the topic. In this, I hope to promote cooperation and motivate them to engage in the pursuit of results beyond the course.
I know of some articles that should be suitable for undergraduates to read. Of course, under supervision and guidance. For example, this article.
So, I would like to ask, if anybody knows of any articles that offer a gentle and elementary description of dynamical phenomena of polynomials, with minimal to no prerequisites of complex analysis, much like the aforementioned one. Such articles covering other topics I mentioned above are welcome as well.
Thank you!
| I'm a fan of Chaos, Fractals and Dynamics: Computer Experiments in Mathematics by Bob Devaney. He's got a bunch of other interesting looking elementary books on dynamics too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4639226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to prove that $a \vee b \equiv (a / b) / (a \wedge b)$?
If the symmetric difference is defined as $a / b \equiv (a \wedge \neg b) \vee (\neg a \wedge b)$, then prove that $$a \vee b \equiv (a / b) / (a \wedge b)$$
I tried to expand the right-hand side of the equivalence:
$$(a/ b)/(a\wedge b)\equiv (¬(a/b)\wedge(a\wedge b))\vee((a/b)\wedge¬(a\wedge b))$$
and then break the expansion of the disjunction:
*
*$¬(a/b)\wedge(a\wedge b)$
*$(a/b)\wedge¬(a\wedge b)$
For the first part:
$$¬(a/b)\wedge(a\wedge b)\equiv ¬((a\wedge¬b)\vee(¬a\wedge b))\wedge(a\wedge b)$$
$$\equiv (¬(a\wedge¬b)\wedge¬(¬a\wedge b))\wedge(a\wedge b)\equiv (¬a\vee b)\wedge(a\vee¬b)\wedge a\wedge b $$
$$\equiv ((¬a\vee b)\wedge a)\wedge((a\vee¬b)\wedge b)\equiv ((¬a\wedge a)\vee(b\wedge a))\wedge((a\wedge b)\vee(¬b\wedge b))$$
$$\equiv(0\vee(a\wedge b))\wedge((a\wedge b)\vee 0)\equiv (a\wedge b)\wedge(a\wedge b)\equiv a\wedge b $$
And for the second part:
$$(a/b)\wedge¬(a\wedge b)\equiv ((a\wedge¬b)\vee(¬a\wedge b))\wedge¬(a\wedge b) $$
$$\equiv (a\wedge¬b)\vee(¬a\wedge b)\vee¬a\vee¬b\equiv ((a\wedge¬b)\vee¬a)\vee((¬a\wedge b)\vee¬b)$$
$$\equiv ((¬a\vee a)\wedge(¬b\vee¬a))\vee((¬a\vee¬b)\wedge(b\vee¬b))\equiv(1\wedge¬(b\wedge a))\vee(¬(a\wedge b)\wedge 1) $$
$$¬(a\wedge b)\vee¬(a\wedge b)\equiv ¬(a\wedge b) $$
So I have that $(a/ b)/(a\wedge b)\equiv (a\wedge b)\vee ¬(a\wedge b)$. Expanding again gives$\dots$
$$(a\wedge b)\vee ¬(a\wedge b)\equiv (a\wedge b)\vee¬a\vee¬b\equiv(a\vee¬a\vee¬b)\wedge(b\vee¬a\vee¬b)$$
$$\equiv ¬b\wedge¬a\equiv ¬(a\vee b) $$
So given the algebraic manipulation I get $(a/ b)/(a\wedge b)\equiv ¬(a\vee b)$, which is the opposite of what I was looking for. How can I prove this?
| I find the 'engineering' notation a little easier to manipulate: $a \oplus b = a \overline{b}+\overline{a}b$. Note also the coincidence (XNOR) identity
$\overline{a \oplus b} = ab + \overline{a} \overline{b}$.
Then
\begin{eqnarray}
(a \oplus b) \oplus ab &=& (a \overline{b}+\overline{a}b)( \overline{a}+\overline{b}) + (ab + \overline{a} \overline{b}) ab\\
&=& a \overline{b}+\overline{a}b + ab \\
&=& a + b
\end{eqnarray}
Alternatively consider the four exhaustive cases $(a,b) \in \{(F,F), (F,T), (T,F), (T,T) \}$ and show equality for each assignment.
Addendum: Engineers working with digital logic use alternative notation:
$\overline{a}$ means $\lnot a$
$a+b $ means $a \lor b$
$ab$ or $a \cdot b$ means $a \land b$.
Exclusive or $\oplus$ and coincidence $\odot$ are defined above.
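The exhaustive four-case check suggested above can be done mechanically; a small Python sketch of it (not part of the original answer):

```python
from itertools import product

def xor(a, b):
    # a / b in the question's notation, a XOR b in the answer's
    return (a and not b) or (not a and b)

# a OR b  ==  (a / b) / (a AND b)  over all four assignments:
for a, b in product((False, True), repeat=2):
    assert (a or b) == xor(xor(a, b), a and b)
```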
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4639391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the roots of the equation $(1+\tan^2x)\sin x-\tan^2x+1=0$ which satisfy the inequality $\tan x<0$ Find the roots of the equation $$(1+\tan^2x)\sin x-\tan^2x+1=0$$ which satisfy the inequality $$\tan x<0$$
Should I solve the equation first and then try to find which of the roots satisfy the inequality? Should I use $\tan x$ in the solution itself and obtain only the needed roots?
I wasn't able to see how the inequality can be used beforehand, so I solved the equation. For $x\ne \dfrac{\pi}{2}(2k+1),k\in\mathbb{Z}$, the equation is $$\dfrac{\sin^2x+\cos^2x}{\cos^2x}\sin x-\left(\dfrac{\sin^2x}{\cos^2x}-\dfrac{\cos^2x}{\cos^2x}\right)=0$$ which is equivalent to $$\sin x+\cos2x=0\\\sin x+1-2\sin^2x=0\\2\sin^2x-\sin x-1=0$$ which gives for the sine $-\dfrac12$ or $1$. Then the solutions are $$\begin{cases}x=-\dfrac{\pi}{6}+2k\pi\\x=\dfrac{7\pi}{6}+2k\pi\end{cases}\cup x=\dfrac{\pi}{2}+2k\pi$$ How do I use $\tan x<0$?
| It seems like you forgot a minus sign in front of $\frac{5\pi}{6}$ according to @AnneBauval, so your third solution doesn't work (it gives $\sqrt{3}$). Your second solution isn't in the domain since as you said at the beginning that $x\ne\frac{\pi}{2}(2k+1)=\pi k+\frac{\pi}{2}$ and substituting $k\mapsto2k$ tells us that the second solution isn't allowed. So the first solution is the only one that is usable.
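A quick numerical check of which candidate roots survive the constraint $\tan x<0$, using the solutions listed in the question (a sanity sketch, not part of the answer):

```python
import math

def eq(x):
    # Left-hand side of (1 + tan^2 x) sin x - tan^2 x + 1
    t = math.tan(x)
    return (1 + t**2) * math.sin(x) - t**2 + 1

# x = -pi/6 solves the equation and has tan x < 0 (accepted);
# x = 7pi/6 also solves it, but tan x > 0 there (rejected).
for x, sign_ok in [(-math.pi / 6, True), (7 * math.pi / 6, False)]:
    assert abs(eq(x)) < 1e-12
    assert (math.tan(x) < 0) == sign_ok
```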
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4639727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Function equal to infinite series $\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n+3)(2n+1)!}$ I'd like to know if there is a simple function equivalent of $$\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n+3)(2n+1)!}$$
I recognize that it looks similar to $\frac{\sin{x}}{x}$, but with an extra $(2n+3)$ factor in the denominator that I don't know how to account for. I've graphed the sum from $n=0$ to $100$ on an online graphing calculator and tried to find a function that matches it, but I've had no luck. I noticed that $\frac{\sin{x}}{3x}$ looks similar, but it is a very naive guess and I don't have a better one. It may even be the case that such a function does not exist.
| If you had a $2n+2$ in the denominator, it would look like $$g(x) = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n+3)!}. \tag{1}$$ If you multiplied $g(x)$ by $x^2$ and took the derivative,
$$\begin{align}
\frac{d}{dx}[x^2 g(x)] &= \sum_{n=0}^\infty (-1)^n \frac{d}{dx}\left[ \frac{x^{2n+2}}{(2n+3)!} \right] \\
&= \sum_{n=0}^\infty (-1)^n \frac{(2n+2) x^{2n+1}}{(2n+3)(2n+2)(2n+1)!} \\
&= \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+3)(2n+1)!}. \tag{2}\end{align}$$
This gets you very close to what you want; all you need to do is factor out one power of $x$:
$$f(x) = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n+3)(2n+1)!} = \frac{1}{x} \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+3)(2n+1)!} = \frac{1}{x} \frac{d}{dx}\left[ x^2 g(x) \right] \tag{3}$$ where $g$ is defined as in $(1)$. So what is $g$? We know that $$\sin x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}. \tag{4}$$ So $$\sin x - x = \sum_{n=1}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!} = \sum_{n=0}^\infty (-1)^{n+1} \frac{x^{2n+3}}{(2n+3)!}. \tag{5}$$It follows that $$g(x) = \frac{x - \sin x}{x^3}, \tag{6}$$ from which we use $(3)$ to compute $f(x)$.
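Carrying out the differentiation in $(3)$ with $(6)$ simplifies to the explicit closed form $f(x) = (\sin x - x\cos x)/x^3$ (my own simplification, easy to re-derive); a quick numerical cross-check against the partial sums:

```python
import math

def f_series(x, terms=40):
    # Partial sum of sum_{n>=0} (-1)^n x^(2n) / ((2n+3)(2n+1)!)
    return sum((-1)**n * x**(2 * n) / ((2 * n + 3) * math.factorial(2 * n + 1))
               for n in range(terms))

def f_closed(x):
    # (3) with g(x) = (x - sin x)/x^3 from (6) simplifies to
    # f(x) = (sin x - x cos x) / x^3
    return (math.sin(x) - x * math.cos(x)) / x**3

for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(f_series(x) - f_closed(x)) < 1e-12
```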
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4639925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What am I doing wrong for this induction problem? $$\sum_{i=1}^n \frac{2i-1}{2^i} = 3 - \frac{2n+3}{2^n}$$
This equation has been giving me a lot of trouble, and I don't know what I am doing wrong for this question. I'm trying to get this: $$ 3 - \frac{2(k+1)+3}{2^{k+1}}$$
Here is my work:
$$\sum_{i=1}^k \frac{2i-1}{2^i}+\frac{2(k+1)-1}{2^{k+1}}$$
$$3 - \frac{2k+3}{2^k} + \frac{2(k+1)-1}{2^{k+1}}$$
$$ 3 - \frac{(2k+3)(2^{k+1})+(2(k+1)-1)(2^k)}{(2^k)(2^{k+1})}$$
$$ 3 - \frac{(2k+3)(2^{k})(2)+(2(k+1)-1)(2^k)}{(2^k)(2^{k})(2)}$$
Of course, I've tried other methods, but this is the farthest I've gotten. The base case checks out, and I know that this works out, but I just don't know where I've gone wrong. Can anyone tell me what I'm not doing? (This is for all values of $n$ greater than or equal to $1$; try not to reveal the answer, please.)
| When you combined the fractions, you missed the minus sign in front of the first fraction; the correct step, starting from your second row, is
$$3-\frac{2k+3}{2^k}+\frac{2(k+1)-1}{2^{k+1}}=3-\frac{(2k+3)(2^{k+1})-(2(k+1)-1)(2^{k})}{2^{k+1}2^{k}}.$$
To see more easily why this is so, just note that
$$-a+b=-a-(-b)=-(a-b).$$
PS. When dealing with fractions with similar denominators, it is worth combining them using identities like
$$\frac{2k+3}{2^{k}}=\frac{2(2k+3)}{2^{k+1}}$$
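Without spoiling the induction step, the target identity itself can be sanity-checked with exact rational arithmetic (a sketch, not part of the proof):

```python
from fractions import Fraction

def lhs(n):
    # Partial sum: sum_{i=1}^{n} (2i - 1) / 2^i, computed exactly
    return sum(Fraction(2 * i - 1, 2**i) for i in range(1, n + 1))

def rhs(n):
    # Claimed closed form: 3 - (2n + 3) / 2^n
    return 3 - Fraction(2 * n + 3, 2**n)

# The identity holds exactly for every n checked:
assert all(lhs(n) == rhs(n) for n in range(1, 30))
```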
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4640032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Generatingfunctionology Chapter 1 Exercise 9 Regarding the exercises of the Generatingfunctionology book available at (https://www2.math.upenn.edu/~wilf/DownldGF.html).
In particular Chapter 1, Exercise 9 (page 25).
First part: Here a function $f$ is defined for $n\geq 1$ as (a) $f(1)=1$, (b) $f(2n) = f(n)$, (c) $f(2n+1) = f(n) + f(n+1)$. And the generating function of the sequence is defined as $$F(x) = \sum_{n\geq 1} f(n) x^{n-1}.$$
I must show that $$F(x) = (1+x+x^2)F(x^2).$$
* First $F(x^2) = \sum_{n\geq 1} f(n) x^{2n-2}$. Then, I started by applying the techniques of "The Method" (page 8) to (c) and got:
$$f(2n+1) = f(n) + f(n+1) \Rightarrow \sum_{n\geq 1} f(2n+1) x^{2n+1} = \sum_{n\geq 1} f(n) x^{2n+1} + \sum_{n\geq 1} f(n+1) x^{2n+1} $$
From here, by changing variables $m=2n+1$ and $l = n+1$, I got:
$$\sum_{m\geq 3} f(m) x^m = \sum_{n\geq 1} f(n) x^{2n+1} + \sum_{l\geq 2} f(l)x^{2l-1}$$
$$x \sum_{m\geq 1} f(m) x^{m-1} - f(1) x - f(2) x^2 = x^3 \sum_{n\geq 1} f(n) x^{2n-2} + x\sum_{l\geq 1} f(l)x^{2l-2} - f(1)x$$
which substituting by $F(x), F(x^2)$ and since $f(1)=1$ and $f(2n)=f(n), f(2)=f(1)$,
$$xF(x) - x - x^2 = x^3F(x^2)+xF(x^2) - x $$
$$F(x) = (x^2 + 1) F(x^2) + x. $$
* Second, applying "The Method" to (b):
$$f(n) = f(2n) \Rightarrow \sum_{n\geq 1} f(n) x^{2n} = \sum_{n\geq 1} f(2n) x^{2n}$$
Again, changing variables $m=2n$,
$$\sum_{n\geq 1} f(n) x^{2n} = \sum_{m\geq 2} f(m) x^{m}$$
$$x^2\sum_{n\geq 1} f(n) x^{2n-2} = x\sum_{m\geq 1} f(m) x^{m-1} - f(1)x$$
$$ x^2 F(x^2) = xF(x) - x$$
$$ x F(x^2) = F(x) - 1$$
* Combining both, I got
$$ 2F(x) -x -1 = (x^2 + x + 1)F(x^2).$$
Clearly, something I did is incorrect, but I cannot figure out what. I appreciate any help.
Second part: I must show the general formula $F(x) = \prod_{j= 0}^{\infty} \left(1 + x^{2^{j}} + x^{2^{j+1}}\right)$. By substituting, it is clear that the formula is true. However, how would I start to prove this formally? The solution says to "consider the product as a formal beast which obviously satisfies the functional equation for $F$", but I don't know what this means, nor can I find it online. Could you clarify what "consider the product as a formal beast" means?
Again, I appreciate any help.
| I believe that, with the help of the people from the comments to my question (namely @ancientmathematician, @JBL, @Alexander Burstein), I can post an answer for completeness and to mark the question as answered.
First part: As pointed out, it was a simple mistake of changing variables of the summation without taking into account that I was adding some terms by doing this, which account for the wrong factors appearing. Indeed, this exercise is easiest solved by taking
$$F(x) = \sum_{n\geq 1} f(n) x^{n-1} = \sum_{n\ even}f(n) x^{n-1} + \sum_{n\ odd} f(n) x^{n-1} = \sum_{n\geq 1}f(2n) x^{2n-1} + \sum_{n\geq 0} f(2n+1) x^{2n}$$
and using the given identities to compute the correct result.
Second part: This part was much harder, as I was not aware of the results the author used in their solution, but I found everything I needed on this wikipedia page.
Here, the function $F(x)$ converges to $F(x) = \prod_{j=0}^{\infty} (1+x^{2^j}+x^{2^{j+1}})$ if this product converges to an analytic function (the function $F(x)$ can be checked by repeatedly replacing $x$ by $x^2$ in the provided identity $F(x) = (1+x+x^2)F(x^2)$).
Now, taking the logarithm of the product, the convergence problem is mapped to a sum of logarithms, so one converges iff the other does too:
$$\log \prod_{j= 0}^\infty (1+x^{2^j}+x^{2^{j+1}}) = \sum_{j=0}^\infty \log(1+x^{2^j}+x^{2^{j+1}}).$$
And by taking $|x|<1$, the limit converges to $\log(1)=0$ so the sum converges (everything is positive as we are dealing with power of 2 exponents).
Nevertheless, one can also use the limit comparison test (again see the same wikipedia page), and consider that $\sum_{j=0}^\infty \log(1+p_j)$ converges iff $\sum_{j=0}^\infty p_j$ converges. And here, $p_j = x^{2^j} + x^{2^{j+1}}$, which converges for $|x| < 1$.
Edit: Argument using formal power series:
With the help of kind people in the comments (and Wikipedia), I now believe I understand the formal power series argument for the convergence of the product:
For a "large enough" $n$, when multiplying the current polynomial ($\prod_{j=0}^n (1+x^{2^j}+x^{2^{j+1}})$) with the $n+1$ term, the coefficients of the lower degrees terms are not affected (the degrees of the coefficients that change grow exponentially with $j$). I.e., they stabilize, and therefore the series converges. Thanks so much to the people that helped!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4640442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Given $a_1$, find an increasing sequence so that $a_1+\dots+a_k $ divides $ a_1^2+\dots+a_k^2$ for all $k$ Prove that for every natural number $a_1>1$ there's an infinite series $a_1<a_2<a_3<...$ such that for every natural number $k,$ $a_1+a_2+...+a_k \vert a_1^2+a_2^2+...+a_k^2$.
At first glance, I thought this problem is an induction problem and could be solved by forming the sequence inductively. I tried that and didn't get anywhere even though I'm pretty sure there must be an inductive solution.
I went in a different direction, experimenting with series and trying the series
$$a_2=3\cdot a_1$$
$$a_3=3\cdot a_2=9\cdot a_1$$
$$a_4=3\cdot a_3=9\cdot a_2=27\cdot a_1$$
and so on... (essentially every number is three times the number before it in the series)
This way, if we assume the condition stands for $k-2$ and try to prove it for $k-1$ we get
$$a_1\cdot (\frac{3^k-1}{2}) \vert a_1^2\cdot (\frac{9^k-1}{8})$$
From here, we know $a_1 \vert a_1^2$ so we need $\frac{3^k-1}{2} \vert \frac{9^k-1}{8}$, but since $\frac{9^k-1}{8}=\frac{3^{2k}-1}{8}=\frac{(3^k-1)(3^k+1)}{8}=\frac{(3^k-1)}{2}\cdot \frac{(3^k+1)}{4}$, so actually we get that $\frac{3^k-1}{2} \vert \frac{9^k-1}{8}$ if $\frac{(3^k+1)}{4}$ is a whole number. This only works for $k\equiv 1$ (mod 2). I'm not sure where to go from here, I've tried proving this, but for even numbers and haven't gotten anywhere. I'm worried I might be going deep down the series solution rabbithole even though it might not lead anywhere. Any help is appreciated, thanks!
| We're trying to find a strictly increasing sequence of integers $a_i$, with $a_1 \ge 2$, so for all $k \ge 1$
$$\sum_{i=1}^{k}a_i \, \mid \, \sum_{i=1}^{k}a_i^2 \tag{1}\label{eq1A}$$
Unfortunately, I don't know of any way to finish what you've tried. Instead, as you surmised, there's an inductive solution. For $k = 1$, \eqref{eq1A} is true since $a_1 \mid a_1^2$. Assume that, for some $m \ge 1$, \eqref{eq1A} is true for $k = m$. Set
$$j = \sum_{i=1}^{m}a_i \tag{2}\label{eq2A}$$
Thus, by \eqref{eq1A}, since $a_1 \ge 2$, we have
$$\sum_{i=1}^{m}a_i^2 = jn, \; \; n \ge 2 \tag{3}\label{eq3A}$$
Let
$$a_{m+1} = j(n + j - 1) \tag{4}\label{eq4A}$$
Since $n \ge 2$ and $j \ge 2$, then \eqref{eq4A} and \eqref{eq2A} give that $a_{m+1} \gt j \; \; \to \; \; a_{m+1} \gt a_{m}$. Using $k = m + 1$ in \eqref{eq1A}, the LHS becomes
$$\sum_{i=1}^{m+1}a_i = j + j(n + j - 1) = j(n + j) \tag{5}\label{eq5A}$$
The RHS of \eqref{eq1A} is then
$$\begin{equation}\begin{aligned}
\sum_{i=1}^{m+1}a_i^2 & = jn + [j(n + j - 1)]^2 \\
& = j(n + j\,[(n+j) - 1]^2) \\
& = j(n + j\,[(n+j)^2 - 2(n+j) + 1]) \\
& = j(n + j\,[n+j][n+j-2] + j) \\
& = j(n+j)[1 + j(n+j-2)]
\end{aligned}\end{equation}\tag{6}\label{eq6A}$$
From \eqref{eq5A}, the LHS divides the RHS of \eqref{eq1A}, so it's true also for $k = m + 1$. Thus, by induction, we have \eqref{eq1A} is true for all $k \ge 1$.
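The construction above is effective, so it can be exercised directly; a quick check that the resulting sequences are strictly increasing and satisfy \eqref{eq1A} (a sanity sketch, not part of the proof):

```python
def extend(seq):
    # One step of the construction: with j = a_1 + ... + a_m and the sum of
    # squares equal to j*n, append a_{m+1} = j*(n + j - 1) as in (4)
    j = sum(seq)
    sq = sum(x * x for x in seq)
    assert sq % j == 0
    return seq + [j * (sq // j + j - 1)]

def build(a1, length):
    seq = [a1]
    while len(seq) < length:
        seq = extend(seq)
    return seq

for a1 in range(2, 6):
    s = build(a1, 8)
    assert all(s[i] < s[i + 1] for i in range(len(s) - 1))      # strictly increasing
    assert all(sum(x * x for x in s[:k]) % sum(s[:k]) == 0
               for k in range(1, len(s) + 1))                   # divisibility (1) holds
```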
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4640587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Mistake in raising power Find roots of:
$$x^{6}\ -\ \left(x-1\right)^{6}=0 \tag {1}$$
I know this equation has $4$ complex roots and exactly one real roots of value $0.5$.
However, my first instinct was to do this:
$$x^{6}\ =\ \left(x-1\right)^{6} \tag{2}$$
"raise both sides to 6-th power" to get:
$$x=x-1\tag{3}$$
Which has no real solution. I see that this wrong. How to avoid this error? Thanks.
Inspired by watching this youtube video
Edit:
I am not asking how to solve the problem. I want to know
what I did wrong from an algebraic standpoint. Maybe the step from $(2)$ to $(3)$? What is wrong with that?
| Thanks for all the posted comments above. At the time of writing, no one had posted an answer, but I understood the following, which combined may provide an answer.
$$|x|=|x-1|$$
can't always be written as $x=x-1$. I need to learn how to solve such an equation.
Also,
$$a^n=b^n$$
Does not always imply that $a=b$. The result is affected by the domain of $a$ and $b$ and whether the power is even or odd or integer or not, maybe among other factors.
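One way to see concretely what the step from $(2)$ to $(3)$ loses: expanding $(1)$ with the binomial theorem, the $x^6$ terms cancel, so the equation is really a degree-5 polynomial with $x=\tfrac12$ as its only real root. A quick stdlib check (a sketch, not a proof):

```python
from math import comb

def p(x):
    # The left-hand side of (1): x^6 - (x-1)^6
    return x**6 - (x - 1)**6

# Expanding (x-1)^6 with the binomial theorem, the x^6 terms cancel,
# so p is actually degree 5: 6x^5 - 15x^4 + 20x^3 - 15x^2 + 6x - 1.
coeffs = [-(-1)**k * comb(6, k) for k in range(1, 7)]   # degree 5 down to 0
assert coeffs == [6, -15, 20, -15, 6, -1]

# A degree-5 real polynomial has at most 5 roots (here: 1 real, 4 complex).
# Over the reals, x^6 = (x-1)^6 forces |x| = |x-1|, whose only solution is x = 1/2:
assert p(0.5) == 0.0
assert abs(sum(c * 0.5**(5 - i) for i, c in enumerate(coeffs))) < 1e-12
```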
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4640732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What's the probability that in three children 1 is a girl. It was asked: what is the probability that exactly one is a girl in a group of 3 children?
My sample space:-
B B B
B B G
B G G
G G G
What my friend says:-
B B B
B B G
B G B
G B B
B G G
G G B
G B G
G G G
He says it will be like when 3 coins are tossed. But I say that BBG and BGB are the same, so there will be only 4 total outcomes.
Am I correct? Because, logically, the probability of one girl is 1/4, since one girl is one girl whether she is at the left, right, or middle.
And it is different from the 3 coins that we are tossing.
| You're both correct on the sample space (in a sense); however, in your case, the outcomes are not equally likely.
Think about the order in which the children are born; there are $3$ ways to get two boys and one girl:
* First a boy is born, then another boy, then a girl
* First a boy, then a girl, then a boy
* First a girl, then a boy, then a boy
On the other hand, there is only one order in which three girls arrive, as another example. That is to say, "$2$ boys, $1$ girl" and "$3$ girls" are not equally likely. And this should make sense: think about your real life, how often you've seen groups of three siblings, and how rarely an entire gender was excluded from them.
Having the sample space written in terms of equally likely events makes it significantly easier to calculate the relevant probabilities. Hence, even if each ordering of events results in the "same" outcome, in some sense, it's usually best to account for order when reasonable anyways.
This comes from a principle which says, if every event involved is equally likely,
$$\text{the probability $X$ happens} = \frac{\text{the number of ways $X$ can happen}}{\text{the number of possible events overall}}$$
Then one easily sees, for instance, the probability of "$2$ boys, $1$ girl" is $3/8$.
Since the events in your sample space are not equally likely, one has to use a different means of calculation.
An example taking your fallacy to the extreme is winning the lottery.
In your eyes, the sample space of events for winning a lottery has two events: "winning" and "losing". But if you try to calculate the probability in the erroneous method you have, you would conclude your probability of winning is $50\%$ (before we've even considered what the lottery is or how many people are in it, too!), which is obviously silly.
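Enumerating the ordered outcomes makes the calculation mechanical (a small sketch):

```python
from fractions import Fraction
from itertools import product

orderings = list(product("BG", repeat=3))   # 8 equally likely birth orders
p_one_girl = Fraction(sum(o.count("G") == 1 for o in orderings), len(orderings))
p_three_girls = Fraction(sum(o.count("G") == 3 for o in orderings), len(orderings))

assert len(orderings) == 8
assert p_one_girl == Fraction(3, 8)       # not 1/4
assert p_three_girls == Fraction(1, 8)    # "2 boys, 1 girl" is 3x as likely as "3 girls"
```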
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4640927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\forall z \in \mathbb C, |z|>1$, calculate $\sum_{(p,q)\in\Delta} (z^{p+q}-1)^{-1}$ Let $\Delta = \{(p,q)\in(\mathbb N^*)^2|p\land q =1\}$ and $z\in\mathbb C$ with $|z|>1$,
Calculate : $$S(z)=\sum_{(p,q)\in\Delta}\frac{1}{z^{p+q}-1}$$
My try :
Let $n\in \mathbb N^*\backslash\{1\}$,
we know that : $\forall k \in [1,n-1], k \land (n-k) = k \land n$
So : $$\Delta = \bigcup_{n\ge2}\{(p,q)\in[1,n-1]^2|p\land q = 1 \text{ and } p+q = n\}
=\bigcup_{n\ge2}\{(k,n-k)|k\land n = 1 \text{ and } k\in[1,n-1]\}$$
Let $A_n= \{(k,n-k)|k\land n = 1 \text{ and } k\in[1,n-1]\}$ (Also $A_n$ are obviously disjoint),
We get : $$S(z) = \sum_{n\ge 2}\sum_{(p,q)\in A_n}\frac{1}{z^{n}-1}$$
However : $\text{Card}(A_n) = \varphi(n)$ (the euler totient function).
We have :
$$S(z) = \sum_{n\ge 2} \frac{\varphi(n)}{z^{n}-1} = \sum_{n\ge 2} \frac{\varphi(n)}{z^{n}}\frac{1}{1-z^{-n}} = \sum_{n\ge 2} \sum_{k\ge 0}\frac{\varphi(n)}{z^{n(k+1)}} $$
I don't know what I could do next.
Thanks for your help!
| Thanks to @reun, we have our answer!
Using: $$S(z) = \sum_{n\ge 2} \sum_{k\ge 0}\frac{\varphi(n)}{z^{n(k+1)}} = \sum_{n\ge 2} \sum_{k\ge 1}\frac{\varphi(n)}{z^{nk}}$$
We have: $$\sum_{n\ge 2} \sum_{k\ge 1}\frac{\varphi(n)}{z^{nk}} = \sum_{m\ge 2} \sum_{\substack{d\mid m\\ d\ge 2}}\frac{\varphi(d)}{z^{m}}=\sum_{m\ge 2}(m-1)z^{-m}=\frac{1}{(z-1)^2}$$ where the middle step groups terms by $m=nk$ and uses the classical identity $\sum_{d\mid m}\varphi(d)=m$, so the sum over divisors $d\ge 2$ is $m-1$.
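A numerical check of the closed form on the real axis, using a brute-force totient (partial sums converge quickly for $z>1$; a sanity sketch):

```python
from math import gcd

def phi(n):
    # Brute-force Euler totient, fine for small n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def S(z, N=200):
    # Partial sum of sum_{n>=2} phi(n)/(z^n - 1); terms decay like n/z^n
    return sum(phi(n) / (z**n - 1) for n in range(2, N + 1))

for z in (1.5, 2.0, 3.0):
    assert abs(S(z) - 1 / (z - 1)**2) < 1e-9
```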
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4641113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Circular motion at constant speed I have a question. I have an algorithm that computes the trajectory of a body moving in a circle. The program repeats the following at short intervals to update the body's X and Z positions:
$$x = \mathrm{X_c} + R\cos\left(\frac{2\pi}{T}t\right)$$
$$z = \mathrm{Z_c} + R\sin\left(\frac{2\pi}{T}t\right)$$
$X_c$ and $Z_c$ are the x and z positions of the rotation axis, respectively. The $t$ value is increased by $1$ with each update. In turn, $T$ means the time it takes the body to complete one circle. The problem is that changing the radius $R$ value changes the speed of the body. I want a constant speed, independent of the radius. How can I get rid of the $T$? Ideally, the speed should be predefined. This means that regardless of the radius of the circle, the body should always move at a fixed speed. Is there a way? I will be grateful for your help.
| For an object undergoing circular motion (wlog - around the origin) with uniform angular frequency $\omega$ and radius $R$, a 'time' dependent parameterization via $t$ would be:
$$ (x,y) = (R\cos(\omega t), R\sin(\omega t))$$
The linear (tangential) speed is $v=\omega R$, which can be verified by taking the derivative w.r.t $t$. So if you want to preserve $v$ while changing $R$ you need to compensate for that change by an inverse change in $\omega$.
Example: Suppose we begin from initial angular frequency $\omega_1$ and initial radius $R_1$. We now change the radius to $R_2$ but want to keep the previous linear speed $v_1=\omega_1 R_1$. If you set the new angular frequency in the following way: $\omega_2 = \omega_1\frac{R_1}{R_2}$ then you can see that it will compensate for the change in the radius appropriately:
$$ v_2 = \omega_2 R_2 = \omega_1\frac{R_1}{R_2}R_2=\omega_1R_1 = v_1$$
So the linear speed did not change, as required.
Note also that since the above gives $\omega_1 = \frac{v_1}{R_1}$ and
$\omega_2 = \frac{v_2}{R_2}$ we can read off a general expression for a "$v$ preserving" $\omega$ as a function of a variable radius $R$ that you may want to use:
$$ \omega(R) = \frac{v}{R}$$
In your case the angular frequency is currently fixed at $\omega = \frac{2\pi}{T}$ which means that a complete revolution always takes $T$ time units. But since you care about the speed $\omega R$ being constant, you can't keep the $T$ fixed as well. So you will have to adjust your code in some way to use an $\omega$ that is variable rather than fixed in order to achieve that. You may still initialize it to the same value $\frac{2\pi}{T}$ but it will have to change when the radius changes, as explained earlier.
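A minimal sketch of the adjusted update, with $\omega$ derived from a predefined speed $v$ (function and variable names are illustrative, not from the original program):

```python
import math

def position(t, xc, zc, R, v):
    # Instead of a fixed omega = 2*pi/T, derive omega from the desired
    # constant linear speed v:  omega = v / R  (so v = omega * R is preserved)
    omega = v / R
    return (xc + R * math.cos(omega * t),
            zc + R * math.sin(omega * t))

# The distance covered per (small) time step is ~v*dt for any radius:
v, dt = 2.0, 1e-5
for R in (1.0, 5.0, 20.0):
    x0, z0 = position(0.0, 0.0, 0.0, R, v)
    x1, z1 = position(dt, 0.0, 0.0, R, v)
    assert abs(math.hypot(x1 - x0, z1 - z0) / dt - v) < 1e-4
```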
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4641275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |