Why is the graph of $f(x) = \frac{x^3}{x^2 - 1}$ approaching negative infinity below $x = -1$? I'm studying calculus at an elementary level. While drawing a graph, I ran into something confusing. The function is $f(x) = \dfrac{x^3}{x^2-1}$ ($x$ a real number); I plotted it in Wolfram Alpha, and I wonder why the graph just below $x= -1$ goes to minus infinity. I thought it should go to plus infinity: if $x$ is near $-1$, then the denominator is negative, and since the numerator is $x^3$ it is also negative, so a minus in the denominator and a minus in the numerator give a plus. This is what I was thinking, and I don't see why it's wrong. Could you explain it?
I see you already got it, but for the benefit of others who may come here with the same question, and just to be thorough, the reason is that since $f(x) = \dfrac{x^3}{x^2-1}$, then we can do polynomial long division (divide $x^3$ by $x^2-1$) to see that $y=x$ is the oblique asymptote of $f(x)$. Since asymptotes describe "end behavior" then $f(x)$ is going to "behave like" the line $y=x$ as $|x|$ gets really large, i.e., as $x$ gets really far away from $0$ in either direction, i.e., as $x \to \pm\infty$.
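For completeness, the long division mentioned above gives $$\frac{x^3}{x^2-1} = x + \frac{x}{x^2-1},$$ and the remainder term also explains the behavior near $x=-1$: for $x$ slightly below $-1$, the numerator $x$ is close to $-1$ while $x^2-1$ is small and positive, so $\frac{x}{x^2-1}\to-\infty$ as $x\to-1^-$.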
{ "language": "en", "url": "https://math.stackexchange.com/questions/1744484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dual spaces and weak solutions. I have two questions: (1) The space $H^{-1}$ is defined as the dual space of $H_{0}^{1}$; is the dual space of $H^{1}$ also $H^{-1}$? Or is it correct to apply a $T\in H^{-1}$ to a function $u\in H^{1}$? (2) Consider the Poisson problem $-\Delta u=f$ with boundary condition $u=g$, for proper conditions on the data and domain. If there exists $w\in H^{1}$ with $w=g$ on the boundary such that $\int \nabla w \cdot \nabla v=\int fv$ holds for all $v\in H_{0}^{1}$, then what can we say about this $w$? Can we conclude that $w$ is the weak solution of the Poisson problem?
(1) $H^1_0$ is contained in $H^1$ and has the same norm, so $H^{-1}$ must be contained in the dual of $H^1$. It turns out that it is a proper subset of the actual dual of $H^1$. (2) Sure, this is usually the definition of the weak solution (though we have to interpret "$w=g$ on the boundary" using the trace operator). In particular you are correct to consider only $v \in H^1_0$ rather than $v \in H^1$. I find this easier to understand in the context of the corresponding variational problem: you should be able to add a variation $v$ to a solution candidate $u$ and get another solution candidate. For Dirichlet conditions this means the variations must vanish on the boundary. Then, playing with it a bit, you find that variations in the variational context are the same as test functions in the distributional context.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1744570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
M. Ross problem 12 chapter 5 - Exponential distribution I have a question regarding problem 12(b) and (c) of chapter 5 of M. Ross, "Introduction to probability models". The question is as follows: If $X_1, X_2, X_3$ are independent exponential random variables with rates $\lambda_i$, $i = 1,2,3$, find (b) $P(X_1 < X_2 \mid \max(X_1,X_2,X_3) = X_3)$; (c) $E[\max X_i \mid X_1 < X_2 < X_3]$. For (b), my first thought was that $P(X_1 < X_2 \mid \max(X_1,X_2,X_3) = X_3) = P(X_1 < X_2)$, since from my point of view $\max(X_1,X_2,X_3) = X_3$ should not influence $X_1 < X_2$. According to the answer book of Ross: $P(X_1 < X_2 \mid \max(X_1,X_2,X_3) = X_3) = \frac{P(X_1 < X_2 < X_3)}{P(X_1 < X_2 < X_3) + P(X_2 < X_1 < X_3)}$. Can anyone explain to me what mistake I made in my initial reasoning? For (c) I have no clue how to reason to get the correct answer.
(b) The following might help to understand your mistake. Let $U_{1},U_{2},U_{3}$ be independent random variables where $P\left(U_{1}=1\right)=P\left(U_{1}=3\right)=\frac{1}{2}$ and $P\left(U_{2}=2\right)=1=P\left(U_{3}=2\right)$. Then it is evident that: $$P\left(U_{1}<U_{2}\mid\max\left(U_{1},U_{2},U_{3}\right)=U_{3}\right)=1\neq\frac{1}{2}=P\left(U_{1}<U_{2}\right)$$ (Almost) degenerate random variables can be very helpful to examine questions like: "is my intuition correct here?" (c) Under the condition $X_{1}<X_{2}<X_{3}$ you are dealing with the original PDF divided by the probability $P\left(X_{1}<X_{2}<X_{3}\right)$. To be worked out is the integral:$$\frac{\lambda_{1}\lambda_{2}\lambda_{3}\int_{0}^{\infty}\int_{x}^{\infty}\int_{y}^{\infty}ze^{-\lambda_{1}x-\lambda_{2}y-\lambda_{3}z}dzdydx}{\lambda_{1}\lambda_{2}\lambda_{3}\int_{0}^{\infty}\int_{x}^{\infty}\int_{y}^{\infty}e^{-\lambda_{1}x-\lambda_{2}y-\lambda_{3}z}dzdydx}$$ Note that the denominator equals $P(X_1<X_2<X_3)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1744699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the Matrix $A^{482}$ in terms of $A$ Given $$A=\begin{bmatrix} -4 & 3\\ -7 & 5 \end{bmatrix}$$ find $A^{482}$ in terms of $A$. I tried using the characteristic equation of $A$, which is $$|\lambda I-A|=0,$$ which gives $$A^2=A-I,$$ so $$A^4=A^2A^2=(A-I)^2=A^2-2A+I=-A,$$ so $$A^4=-A.$$ But $482$ is neither a multiple of $4$ nor a power of $2$; how can I proceed?
If $A^4=-A$, then $$A^{482}=A^2A^{480}=A^2(A^4)^{120}=A^2(-A)^{120}=A^2(A^4)^{30}=A^2(-A)^{30}=A^{32}=(A^4)^8=(-A)^8=(A^4)^2=(-A)^2=A^2$$
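For readers who want a quick numerical sanity check, here is a sketch using NumPy (integer arithmetic stays exact here because $A^2=A-I$ gives $A^3=-I$, so the powers of $A$ cycle with period $6$):

```python
import numpy as np

A = np.array([[-4, 3],
              [-7, 5]])

# A^2 = A - I implies A^3 = -I, so powers of A cycle with period 6
lhs = np.linalg.matrix_power(A, 482)
rhs = np.linalg.matrix_power(A, 2)
print(np.array_equal(lhs, rhs))  # True, since 482 = 6*80 + 2
```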
{ "language": "en", "url": "https://math.stackexchange.com/questions/1744832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Maximum and minimum distance of two points Consider six distinct points in a plane. Let $m$ and $M$ denote respectively the minimum and the maximum distance between any pair of points. Show that $M/m \geqslant \sqrt{3}$.
Assume otherwise. Pick two points $A,B$ at distance $M$ (the small dots in the following image). Then all other points are in the closed lens-shaped area bounded by the circles of radius $M$ around $A$ and $B$ (the big circles in the image above). In particular, they must be in the ten blue triangles shown. As the blue triangles have side length $\frac M{\sqrt 3}$, each of them can contain at most one of the points (including on the boundary). After taking away those already occupied by $A$ and $B$, there are only four blue triangles left. We conclude that each of the top two and each of the bottom two blue triangles contains one of the four remaining points. Next consider the red lines: The shape bounded from above by the top red line and from below by the two small circles has diameter $<\frac M{\sqrt 3}$, hence can contain at most one point. We conclude that one of the points must be above the top red line. By the same argument, one of the points must be below the bottom red line. But then these two points are more than $M$ apart - contradiction!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1744983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Intersection of a line and line at infinity in projective space I understand that parallel lines in Euclidean space intersect on the line at infinity in terms of projective space. My question is about a single line. A single line, if extended to infinity, must intersect the line at infinity at some point (correct me if this is wrong). The thing that I find hard to interpret is: how could it not intersect the line at infinity in two points, located in the two opposite directions of the line? I have checked this existing question line at infinity and it didn't help.
Each line has just one point at infinity, which is approached by going in either direction along the line. Two lines share the same point at infinity if and only if they are parallel to each other. Two lines not parallel to each other have different points at infinity. When one adds to the affine line a point at infinity that is approached by going in either direction, the line becomes topologically a circle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1745225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the Laplacian of $1/r$ a Dirac delta? How does one show that $\nabla^2 (1/r)$ (in spherical coordinates) is the Dirac delta function? Intuitively, it would seem that the function is undefined at the origin, and I'm not able to construct a limiting argument that avoids this problem.
The delta function is zero everywhere except at the origin, and its integral over all space is one; correspondingly, $\nabla^2 \frac{1}{r}$ is zero everywhere except at the origin, and its integral over space is $-4\pi$. The first property is easy to prove with vector identities. For the second property: $$ I = \iiint_V \nabla^2 \frac{1}{r} d V = \iiint_V \nabla \cdot \nabla \frac{1}{r} dV $$ With the divergence theorem: $$ I = \oint_S \nabla \frac{1}{r} \cdot d\vec{S} = \oint_S - \frac{1}{r^2} \hat{n} \cdot d\vec{S} = \int_{\Omega} - \frac{1}{r^2} r^2 {d \Omega} = -4 \pi = -4 \pi \iiint_V \delta(\vec r) dV $$ That's all; your question missed the factor $-4 \pi$. As to the test function (in response to the comment below), the spherical surface in the second integral has to be infinitely small around the origin. In this way $f(r)$ in an integration like $$ \iiint_V f \nabla^2 \frac{1}{r} dV $$ can be replaced by the constant $f(0)$ (the volume $V$ can be any size, but only the infinitesimal region around the origin matters, since outside it the Laplacian of $1/r$ is zero). Of course $f$ must be continuous and finite around the origin.
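As a sanity check of the first property (vanishing away from the origin), here is a sketch using SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Laplacian of 1/r in Cartesian coordinates, valid wherever r != 0
lap = sum(sp.diff(1 / r, v, 2) for v in (x, y, z))
print(sp.simplify(lap))  # prints 0
```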
{ "language": "en", "url": "https://math.stackexchange.com/questions/1745570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\mathbb{Q}(\sqrt{2}+\sqrt[3]{5})=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ Show that $\mathbb{Q}(\sqrt{2}+\sqrt[3]{5})=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ and find all $w\in \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ such that $\mathbb{Q}(w)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. It is clear that $\mathbb{Q}(\sqrt{2}+\sqrt[3]{5}) \subseteq \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. But given any $x\in \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$, how do I show that $x\in \mathbb{Q}(\sqrt{2}+\sqrt[3]{5})$? For the second question, is $w$ the associates of $\sqrt{2}+\sqrt[3]{5}$? How do I show that? Thanks!
IMO, Galois theory is the way to go here, but I want to give you an idea of how to do this in an elementary way. The key idea below is that squaring should "eliminate" some of the $\sqrt{2}$, but won't really get rid of any of the $\sqrt[3]{5}$ (I will use $\alpha=\sqrt[3]{5}$ for the rest of this answer). If we square $\sqrt{2}+\alpha$, and throw away any integers, we are left with $2\sqrt{2}\alpha+\alpha^2$. Squaring this yields $$ 8\alpha^2+20\sqrt{2}+5\alpha$$ and we can subtract from that $20(\sqrt{2}+\alpha)$ to be left with $$ 8\alpha^2-15\alpha $$ Squaring one more time yields (after removing the rational middle term) $$ 320\alpha + 225\alpha^2$$ and a linear combination of the previous two expressions easily yields (after a possible rational division) $\alpha$. Thus we have shown that $\sqrt[3]{5}=\alpha\in\mathbb{Q}(\sqrt{2}+\alpha)$, and so of course so does $\sqrt{2} = (\sqrt{2}+\alpha)-\alpha$. In other words, $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})\subset\mathbb{Q}(\sqrt{2}+\sqrt[3]{5})$.
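The conclusion can be corroborated computationally; here is a sketch with SymPy that computes the minimal polynomial of $\sqrt{2}+\sqrt[3]{5}$ over $\mathbb{Q}$ and checks that its degree is $6=[\mathbb{Q}(\sqrt{2},\sqrt[3]{5}):\mathbb{Q}]$, which forces the two fields to coincide:

```python
import sympy as sp

x = sp.Symbol('x')
beta = sp.sqrt(2) + sp.root(5, 3)

p = sp.minimal_polynomial(beta, x)  # minimal polynomial of beta over Q
print(p)
print(sp.degree(p, x))  # 6
```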
{ "language": "en", "url": "https://math.stackexchange.com/questions/1745628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Prove the intersection of events to be 1 Let $B_n$, $n$ from $1$ to infinity, be a countably infinite sequence of events, each of probability $1$. How do I formally prove that the probability of the intersection of the $B_n$ from $n = 1$ to infinity is also $1$? Intuitively I see it, because $P[B_i] = 1$ means each event is, up to a null set, the whole sample space, and so is their intersection. But how do I prove it formally?
Hint: The probability of at least one of the events not happening is at most the sum of the probabilities of each event not happening, which is zero because there are only countably many of them each with probability zero of not happening.
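Spelled out, the hint amounts to $$P\Big(\bigcap_{n=1}^\infty B_n\Big) = 1 - P\Big(\bigcup_{n=1}^\infty B_n^c\Big) \ge 1 - \sum_{n=1}^\infty P(B_n^c) = 1 - 0 = 1,$$ using countable subadditivity and $P(B_n^c)=0$ for every $n$.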
{ "language": "en", "url": "https://math.stackexchange.com/questions/1745841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Limit of $f(x)=|\log x|$ My textbook solved this problem: Find $f'(1^{-})$ if $$f(x)=|\log x|$$ for the interval $x>0$. The textbook solved it by the method described below: $$f'(1^{-})=\lim\limits_{x\to 1^{-}} \frac{f(x)-f(1)}{x-1}$$ which becomes $$f'(1^{-})=\lim\limits_{x\to 1^{-}} \frac{|\log x|-|\log 1|}{x-1}.$$ They substituted $x=1-h$: $$f'(1^{-})=\lim\limits_{h\to 0^{+}} \frac{|\log (1-h)|-|\log 1|}{-h}$$ Now they claimed the answer to this is $-1$. I really don't understand how they arrived at that answer. Could it be a typo on their part?
An alternative answer which does not require Taylor series or the knowledge of the derivative of $\text{log}$ is to use the limit definition for $\frac{1}{e}$: $$\frac{1}{e} = \lim_{n\rightarrow \infty} \left( 1- \frac{1}{n} \right)^n = \lim_{m\rightarrow 0} \left( 1- m \right)^{1/m} $$ So that: $$ -\lim_{h\rightarrow 0} \frac{\left| \text{log}(1-h) \right|}{h}=-\lim_{h\rightarrow 0} \left| \text{log}(1-h)^{1/h} \right| = - \left| \text{log} \left( \lim_{h\rightarrow 0} (1-h)^{(1/h)} \right) \right| $$ $$ = -\left| \text{log} \left(1/e \right) \right| = - \left| -1 \right|=-1 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1745959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Difference between Increasing and Monotone increasing function I have some confusion about the difference between a monotone increasing function and an increasing function. For example, $$f(x)=x^3$$ is monotone increasing, i.e., if $$x_2 \gt x_1$$ then $$f(x_2) \gt f(x_1),$$ and some books call such functions strictly increasing. But if $$f(x)= \begin{cases} x & x\leq 1 \\ 1 & 1\leq x\leq 2\\ x-1 & 2\leq x \end{cases} $$ is this function monotone increasing?
As I have always understood it (and various online references seem to go with this tradition), when one says a function is increasing or strictly increasing, they mean it is doing so over some proper subset of the domain of the function. To say a function is monotonic means it exhibits one behavior over the whole domain. That is, a monotonically increasing function is nondecreasing over its domain, and is also an increasing function since it is nondecreasing over any subset of the domain. Similarly, a strictly monotonically increasing function is a function that is strictly increasing over its whole domain, rather than simply increasing over a subset of the domain (as determined from the increasing/decreasing test in Calculus). One can say similar things about a monotonically decreasing function vs. a decreasing function. This largely echoes what was said by Thomas above, but, taking monotone as a term referring to the behavior of a function over the whole domain, one does not need to say "I'm used to." That said, one should always be clear on what definitions are being used, as consistency is not a human's strong point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 0 }
Prove that $\oint_{|z|=r} {dz \over P(z)} = 0$ I got stuck on this problem; I hope someone can give me some hints to go on solving this: $P$ is a polynomial of degree greater than $1$ and all the roots of $P$ in the complex plane lie inside the disk $B\colon |z| < r$. Prove that: $$\oint_{|z| = r} {{dz}\over{P(z)}} = 0$$ Here the integral is taken in the positive direction (actually, it can take whatever direction, because the value of the integral is $0$). What I tried so far: Applying the d'Alembert-Gauss theorem, we can write $P(z) = (z-z_1)^{p_1}(z-z_2)^{p_2}...(z-z_n)^{p_n}$, where the $z_i$ are complex numbers different from each other. We can choose for each $i = 1,...,n$ an $r_i > 0$ small enough such that the $B(z_i,r_i)$ are disjoint from each other and all belong to $B$. So using Cauchy's theorem for the compact Jordan region generated by $B$ and the $B(z_i, r_i)$, it's easy to see that: $$\oint_{|z| = r} {{dz}\over{P(z)}} = \sum_1^{n}{\oint_{|z-z_i|=r_i} {{dz}\over{P(z)}}} = \sum_1^{n}{\oint_{|z-z_i|=r_i} {{\prod_{j \neq i}{1 \over {(z-z_j)^{p_j}}}}\over{(z-z_i)^{p_i}}}}$$ Then I tried to apply Cauchy's integral formula to each ${\oint_{|z-z_i|=r_i} {{\prod_{j \neq i}{1 \over {(z-z_j)^{p_j}}}}\over{(z-z_i)^{p_i}}}}$: $$f^{(k)}(z) = {k! \over {2 \pi i}} \oint_{\partial{B}}{{f(t) \over (t-z)^{k+1}}dt}$$ where $\partial{B}$ denotes the boundary of the disk $B$. But I got stuck when trying to calculate the $(p_i - 1)$-th derivative of ${\prod_{j \neq i}{1 \over {(z-z_j)^{p_j}}}}$. I expect that each expression should be equal to $0$, but I can't prove it. Anyone have any ideas to move on? If any point is unclear, please don't hesitate to ask me. Thanks!
Using the ML inequality: $$\left|\oint_{|z|=R}\frac{dz}{p(z)}\right|\le2\pi R\cdot\max_{|z|=R}\frac1{|p(z)|}\le2\pi R\cdot\frac{2}{|a_n|R^n}\xrightarrow[R\to\infty]{}0$$ since $\;n\ge 2\;$. Why? Because of the triangle inequality: $$p(z)=\sum_{k=0}^na_kz^k=z^n\sum_{k=0}^na_kz^{k-n}\stackrel{\forall\,|z|=R}\implies\left|p(z)\right|\ge|z|^n\left(\left|a_n\right|-\left|\frac{a_{n-1}}z\right|-\ldots-\left|\frac{a_0}{z^n}\right|\right)\ge \frac{|a_n|}{2}R^n$$ the last inequality being true for $\;R\;$ big enough, since the expression within the parentheses tends to $\;|a_n|\;$. Finally, $1/p$ is holomorphic on $|z|\ge r$, so by Cauchy's theorem $\oint_{|z|=r}\frac{dz}{p(z)}=\oint_{|z|=R}\frac{dz}{p(z)}$ for every $R>r$, and the right-hand side tends to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Elliptic curves with trivial Mordell–Weil group over certain fields. I am looking for elliptic curves $E,E'$ defined over $\mathbb{F}_{3}$ and $\mathbb{F}_{4}$ respectively and given by a Weierstrass equation such that their Mordell-Weil group is trivial, i.e. such that $$ E(\mathbb{F}_{3})=\{\mathcal{O}\}\quad\text{and}\quad E'(\mathbb{F}_{4})=\{\mathcal{O}\}, $$ where $\mathcal{O}$ is the point at infinity. Is there any procedure to find them? Or is the only way just to change the coefficients until you find one satisfying the requirements?
By Hasse's bound we know that $1\le |E(\mathbb{F}_3)|\le 7$; and indeed there is an elliptic curve with $E(\mathbb{F}_{3})=\{\mathcal{O}\}$, given by $$ y^2=x^3-x-1. $$ Actually, since we know that all such curves are given by the long Weierstrass equation $y^2=x^3+ax^2+bx+c$ with nonzero discriminant, we can just try all possibilities for $a,b,c$. There are not many curves to test. Taking all possibilities we obtain that $E(\mathbb{F}_{3})$ can be one of the following possibilities: $1, C_2, C_3,C_4, C_5,C_6,C_7, C_2\times C_2$.
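Here is a brute-force sketch in Python of the search suggested above: it runs over all $y^2 = x^3 + ax^2 + bx + c$ over $\mathbb{F}_3$, discards singular cubics via the discriminant (we are in odd characteristic, so smoothness is equivalent to the cubic being squarefree), and prints the curves whose only point is $\mathcal{O}$:

```python
p = 3  # the field F_3

for a in range(p):
    for b in range(p):
        for c in range(p):
            # discriminant of x^3 + a*x^2 + b*x + c, reduced mod p
            disc = (18*a*b*c - 4*a**3*c + a*a*b*b - 4*b**3 - 27*c*c) % p
            if disc == 0:
                continue  # singular cubic: not an elliptic curve
            affine = sum(1 for x in range(p) for y in range(p)
                         if (y*y - x**3 - a*x*x - b*x - c) % p == 0)
            if affine == 0:  # only the point at infinity remains
                print(f"y^2 = x^3 + {a}x^2 + {b}x + {c}")
```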
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the power series of a complex function So I have the function $$\frac{z^2}{(z+i)(z-i)^2}.$$ I want to determine the power series around $z=0$ of this function. I know that the power series is $\sum_{n=0}^\infty a_n(z-a)^n$, where $a_n=\frac{f^{(n)}(a)}{n!}$. But this gives me coefficients, how can I find a series for this? Edit: maybe we can use partial fractions?
It's also convenient to perform a partial fraction decomposition followed by a binomial series expansion. We obtain by partial fraction decomposition \begin{align*} \frac{z^2}{(z+i)(z-i)^2}&=\frac{1}{4(z+i)}+\frac{3}{4(z-i)}+\frac{i}{2(z-i)^2}\\ &=-\frac{i}{4}\cdot\frac{1}{1-iz}+\frac{3i}{4}\cdot\frac{1}{1+iz}-\frac{i}{2}\cdot\frac{1}{(1+iz)^2}\tag{1}\\ &=-\frac{i}{4}\sum_{n\geq 0}\left(iz\right)^n+\frac{3i}{4}\sum_{n\geq 0}\left(-iz\right)^n -\frac{i}{2}\sum_{n\geq 0}\binom{-2}{n}\left(iz\right)^n\tag{2}\\ &=\sum_{n\geq 0}\left(-\frac{1}{4}+\frac{3}{4}(-1)^n-\frac{1}{2}(n+1)(-1)^n\right)i^{n+1}z^n\tag{3}\\ &=\sum_{n\geq 0}\left(\frac{2n-1}{4}(-1)^{n+1}-\frac{1}{4}\right)i^{n+1}z^n \end{align*} Comment: in (1) we rewrote each fraction in terms of $1\mp iz$; in (2) we applied the geometric and binomial series expansions; in (3) we used the identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$ with $p=2$ and collected the powers of $i$.
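A sketch with SymPy that corroborates both the series and the closed form of the coefficients:

```python
import sympy as sp

z = sp.symbols('z')
f = z**2 / ((z + sp.I) * (z - sp.I)**2)
print(sp.series(f, z, 0, 6))

# coefficients from the closed form in the last line above
for n in range(6):
    c = (sp.Rational(2*n - 1, 4) * (-1)**(n + 1) - sp.Rational(1, 4)) * sp.I**(n + 1)
    print(n, sp.simplify(c))
```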
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to test the convergence of $\sum^{\infty}_{n=2}\frac{1}{\ln{n}}$ and $\sum^{\infty}_{n=2}\frac{\ln{n}}{n^{1.1}}$? For the first one, I use the basic comparison test and compare it to $\frac{1}{n}$: since $\frac{1}{n}\lt \frac{1}{\ln{n}}$ and $\sum\frac{1}{n}$ diverges, $\sum^{\infty}_{n=2}\frac{1}{\ln{n}}$ diverges. For the second one, I have no idea how to start. Any suggestions?
Use the $n^{\alpha}$-test with $\alpha = 1.05$. The $n^{\alpha}$-test says that if $\alpha > 1$, $a_n \ge0$ eventually, and $$\lim n^{\alpha}a_n = 0,$$ then $\sum a_n$ converges. Here $n^{1.05}\cdot\frac{\ln n}{n^{1.1}}=\frac{\ln n}{n^{0.05}}\to 0$, so the second series converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability and expectation of three ordered random variables I am really stuck on the following question during my exam preparation. Let $X_1$, $X_2$ and $X_3$ be independent exponential random variables with respective rates $\mu_1$, $\mu_2$ and $\mu_3$. Find: a) $P(X_1 < X_2 < X_3)$ b) $E(X_2 | X_1 < X_2 < X_3) $ Question a For question a) I've tried two approaches but they do not give me the same answer. Why the discrepancy, and is one of the two even correct? First approach $P(X_1 < X_2 < X_3) =$ [using the law of total probability] $ = P(X_1 < X_2 < X_3 \mid \min X_i = X_1) P(\min X_i = X_1) + P(X_1 < X_2 < X_3 \mid \min X_i = X_2) P(\min X_i = X_2) + P(X_1 < X_2 < X_3 \mid \min X_i = X_3) P(\min X_i = X_3)$ [The second and third term lead to contradictions and therefore are 0] $= P(X_1 < X_2 < X_3 \mid \min X_i = X_1) P(\min X_i = X_1)$ $= P(X_1 < X_2 < X_3, \min X_i = X_1) = P(X_1 < X_2 < X_3, X_1 < X_2, X_1 < X_3)$ $= P(X_2 < X_3, X_1 < X_2, X_1 < X_3)$ [first entry contains redundant information] $= P(X_2 < X_3)P(X_1 < X_2)P(X_1 < X_3)$ [They are independent] $= \mu_2/(\mu_2+\mu_3) \cdot \mu_1/(\mu_1+\mu_2) \cdot \mu_1/(\mu_1+\mu_3)$ Second approach The joint distribution is given by $f(x_1,x_2,x_3) = f_1(x_1)f_2(x_2)f_3(x_3)$ due to independence. $P(X_1 < X_2 < X_3) = \int_{x_2}^{\infty}\int_0^{\infty}\int_0^{x_2}f_1(x_1)f_2(x_2)f_3(x_3)dx_1 dx_2 dx_3 $ $= \int_0^{\infty}\int_0^{x_2}f_1(x_1)dx_1 \int_{x_2}^{\infty}f_3(x_3)dx_3 f_2(x_2)dx_2$ $= \mu_2/(\mu_2+\mu_3) - \mu_2/(\mu_1 + \mu_2 + \mu_3) $ Question b For question b) I want to do the following: $\int_{x_2}^{\infty}\int_0^{\infty}\int_0^{x_2}x_2\, f_1(x_1)f_2(x_2)f_3(x_3)dx_1 dx_2 dx_3 $ — am I on the right track?
Your first approach has a flaw, because: $$\mathsf P(X_2 <X_3 ,X_1 <X_2 ,X_1 <X_3 )\neq \mathsf P(X_2 <X_3 )~\mathsf P(X_1 <X_2 )~\mathsf P(X_1 <X_3 )$$ The random variables are independent, but the events of their pairwise ordering are not.   When given that $X_3$ is somewhat larger than $X_2$ (whatever that is) then you would be more likely to anticipate that it is also larger than $X_1$ than without that information. $$\mathsf P(X_1<X_3\mid X_2<X_3)\gt \mathsf P(X_1<X_3)$$ Your second approach holds up: $$\begin{align} \mathsf P(X_1<X_2<X_3) =&~ \int_0^\infty F_{X_1}(x) f_{X_2}(x)(1-F_{X_3}(x))\operatorname d x \\[1ex] =&~ \int_0^\infty (1-e^{-\mu_1 x})\mu_2 e^{-\mu_2 x} e^{-\mu_3x}\operatorname d x \\[1ex] =&~ \mu_2\int_0^\infty e^{-(\mu_2+\mu_3) x}-e^{-(\mu_1+\mu_2+\mu_3) x} \operatorname d x \\[1ex] =&~ \dfrac{\mu_2}{\mu_2+\mu_3}-\dfrac{\mu_2}{\mu_1+\mu_2+\mu_3} \end{align}$$ For question (b) you are on the right track, but you need to normalise: $$\begin{align}\mathsf E(X_2\mid X_1<X_2<X_3) =&~ \dfrac{\int_0^\infty x f_{X_2}(x) F_{X_1}(x) (1-F_{X_3}(x))\operatorname d x}{\int_0^\infty f_{X_2}(x) F_{X_1}(x) (1-F_{X_3}(x))\operatorname d x} \end{align}$$
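A quick Monte Carlo check of part (a), as a sketch (the rates $\mu=(1,2,3)$ are just an assumed example):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0, 3.0])  # assumed example rates

x = rng.exponential(1.0 / mu, size=(10**6, 3))  # columns are X1, X2, X3
emp = np.mean((x[:, 0] < x[:, 1]) & (x[:, 1] < x[:, 2]))
theory = mu[1] / (mu[1] + mu[2]) - mu[1] / mu.sum()
print(emp, theory)  # both should be close to 1/15
```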
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Improper convergence of $ \cos(x)/{x^{1/2}} $ I have to evaluate the convergence of the improper integral $ \int_1^\infty \frac {\cos(x)}{x^{1/2}}dx $. As the function is continuous on every $ [1, M] $, $M > 1$, it is Riemann integrable on every such interval. So all I have to do is evaluate the limit $$ \lim_{b\to \infty}\int_1^b \frac {\cos(x)}{x^{1/2}}dx. $$ The problem is, I don't know how to evaluate this integral. I've tried integrating by parts, but it doesn't work as the power of $x$ isn't an integer. Should I use the comparison theorem? Or should I integrate this? Thank you for your help.
We prove convergence. Integrate by parts, letting $u=x^{-1/2}$ and $dv=\cos x\,dx$. Then $du=(-1/2)x^{-3/2}\,dx$ and we can take $v=\sin x$. So our integral from $1$ to $M$ is $$\left. x^{-1/2}\sin x\Large\right|_1^M-\int_1^M (-1/2)x^{-3/2}\sin x\,dx.$$ Both parts behave nicely as $M\to\infty$, because $|\sin x|\le 1$. Remark: By looking at our function from $\pi$ to $2\pi$, and $2\pi$ to $3\pi$, and so on, one can show that $\int_1^\infty x^{-1/2}|\cos x|\,dx$ does not converge.
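As a numerical sanity check, a sketch using mpmath's quadrature for oscillatory integrands:

```python
import mpmath as mp

# the integrand oscillates with period 2*pi; quadosc handles the tail
f = lambda x: mp.cos(x) / mp.sqrt(x)
print(mp.quadosc(f, [1, mp.inf], period=2 * mp.pi))  # a finite value
```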
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Continuous strictly increasing function with derivative infinity at a measure 0 set Let $E\subset [0,1]$ with $\mu(E)=0$. Does there exist a continuous, strictly increasing function $f$ on $[0,1]$ such that $f'(x)=\infty$ for all $x\in E$ (in the Lebesgue sense)? I think there exists such a function, but I don't know how to construct one.
I couldn't figure out how the general case works, but if $E$ is countable, consider $E=\{q_n\;:\;n\in\mathbb{N}\}$ and take $f$ a function which is $0$ on $(-\infty,-1/2]$, $1$ on $[1/2,\infty)$, monotonic, and has a derivative $f':\mathbb{R} \to [0,\infty]$ with $f'(0)=\infty$; additionally take a sequence $(a_n)_{n\in\mathbb{N}}\in \ell^1$, $a_n>0$. Then the function $$ g(x) = \sum_{n\in\mathbb{N}} a_nf(x-q_n) $$ is continuous and monotone. Further, since each $a_nf'(x-q_n)$ is a non-negative function, $$ g(x) = \int_{-1}^x \sum_n a_n f'(s-q_n)\mathrm{d}s $$ and hence at every $q_m$ $$ g'(q_m) = a_mf'(0) + \sum_{n\not= m} a_nf'(q_m-q_n) =\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is this question using the Chinese remainder theorem? I think it will use the Chinese remainder theorem, but I don't know how to set it up... well, by CRT there exists $x\equiv k \pmod{m_1 m_2 m_3}$ with $x\equiv a_1^3 \pmod{m_1}$, $x\equiv a_2^3 \pmod{m_2}$ and $x\equiv a_3^3 \pmod{m_3}$, but must $k$ be of the form $a^3$?
Yes, one uses the Chinese Remainder Theorem, but not quite in the way partly described in the post. Consider the system of congruences $y\equiv a_i\pmod{m_i}$ ($i=1,2,3$). By the Chinese Remainder Theorem, this system of congruences has a solution. Call it $a$. Then $a^3\equiv a_i^3\equiv x\pmod{m_i}$ for all $i$, and therefore $a^3\equiv x\pmod{m_1m_2m_3}$. Remark: The above answers the question as given. However, the wording of the question seems a little peculiar. It says "Suppose that there exists $x$ satisfying $\dots$." However, given $a_1,a_2,a_3$, there always exists such an $x$ (Chinese Remainder Theorem), so it is odd to suppose that there is an $x$.
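A small worked instance, as a sketch (the moduli and residues below are assumed example values):

```python
from sympy.ntheory.modular import crt

m = [3, 5, 7]   # pairwise coprime moduli m_i
r = [1, 2, 3]   # the a_i
a, _ = crt(m, r)           # a == a_i (mod m_i) for each i
x = pow(a, 3, 3 * 5 * 7)   # hence a^3 == a_i^3 (mod m_i) for each i
print(a, x, all(x % mi == pow(ri, 3, mi) for mi, ri in zip(m, r)))
```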
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
why is the geometric mean less than the logarithmic mean? Can someone explain why the geometric mean is less than the logarithmic mean? $$\sqrt{ab} \leq \frac{b-a}{\log b-\log a} $$
There might be a geometric interpretation that you are looking for, but I still prefer an algebraic approach. So let's suppose $0 < a < b$, and put $b = ta, t > 1$. Comparing reciprocals, the claim is equivalent to $\dfrac{\log b - \log a}{b-a} \le \dfrac{1}{\sqrt{ab}}$. Now $\dfrac{1}{\sqrt{ab}} = \dfrac{1}{\sqrt{ta^2}} = \dfrac{1}{a\sqrt{t}}$, and $\dfrac{\log(at) - \log a}{at- a}= \dfrac{\log t}{a(t-1)}$. Thus you prove: $\dfrac{\log t}{t-1} < \dfrac{1}{\sqrt{t}}\iff f(t) =\log t - \sqrt{t} + \dfrac{1}{\sqrt{t}} < 0$. Taking the first derivative: $f'(t) = \dfrac{1}{t} - \dfrac{1}{2\sqrt{t}} - \dfrac{1}{2t\sqrt{t}} = -\dfrac{(\sqrt{t}-1)^2}{2t\sqrt{t}} < 0, t > 1\Rightarrow f(t) < f(1) = 0$, and the inequality follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1746976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Numerical solver for maxwell equations? Just curious if someone has come across a package where I can simply solve the basic Maxwell equations (just the curl equations). I'm interested in solving them on a 2-d plate. Has anyone come across such a package in their travels? Thanks
You mean "numerically solve", right? In this case I would suggest you to give FiPy an earnest shot. It's reasonably simple and fairly well documented. It may be the answer to your problem. PS: FiPy is based on the Finite Volume Method (FVM).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1747226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Frobenius Norm and Relation to Eigenvalues I've been working on this problem, and I think that I almost have the solution, but I'm not quite there. Suppose that $A \in M_n(\mathbb C)$ has $n$ distinct eigenvalues $\lambda_1... \lambda_n$. Show that $$\sqrt{\sum _{j=1}^{n} \left | {\lambda_j} \right |^2 } \leq \left \| A \right \|_F\,.$$ I tried using the Schur decomposition of $A$ and got that $\left \| A \right \|_F = \sqrt{\operatorname{tr}(TT^*)}$, where $A=QTQ^*$ with $Q$ unitary and $T$ triangular, but I'm not sure how to relate this back to eigenvalues and where the inequality comes from.
You are on the right track. The corresponding Schur decomposition is $A = Q U Q^*$, where $Q$ is unitary and $U$ is an upper triangular matrix whose diagonal consists of the eigenvalues of $A$ (because $A$ and $U$ are similar). Now, because the Frobenius norm is invariant under multiplication by a unitary matrix: $$||QA||_F = \sqrt{\text{tr}((QA)^*(QA))} = \sqrt{\text{tr}(A^*Q^* QA)} = \sqrt{\text{tr}(A^*A)} = ||A||_F$$ (and the same holds for multiplication by $Q$ on the right), we can write: $$||A||_F = ||Q U Q^*||_F = ||U||_F \rightarrow \sqrt{\sum_{j=1}^n |\lambda_j|^2} \leq ||A||_F$$ which directly proves your statement. Note: The inequality comes from the definition of the Frobenius norm as the sum of the squares of the absolute values of all entries of the matrix. Since $U$ contains the eigenvalues on its diagonal, the term on the left has to be less than or equal to the sum over all entries, because $U$ may have nonzero entries above its diagonal.
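A quick numerical illustration, as a sketch (a random complex matrix has distinct eigenvalues almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

lam = np.linalg.eigvals(A)
print(np.sqrt(np.sum(np.abs(lam)**2)))  # sqrt of sum |lambda_j|^2
print(np.linalg.norm(A, 'fro'))         # Frobenius norm, always >= the above
```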
{ "language": "en", "url": "https://math.stackexchange.com/questions/1747359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Question about $\aleph$-fixed point I am working through a proof on cardinals I found and can't reason some of the steps. The proposition is that there is an $\aleph$-fixed point, i.e. there is an ordinal $\alpha$ (which is necessarily a cardinal) such that $\aleph_{\alpha} = \alpha$. The proof goes as follows: Let $\alpha_{0} = \aleph_{0}$ (or any other cardinal), $\alpha_{n + 1} = \aleph_{\alpha_{n}}$, and $\alpha = \sup \{ \alpha_{n} \mid n \in \omega \}$. Now if $\alpha = \alpha_{n}$ for some $n$, then $\alpha = \alpha_{n+1} = \aleph_{\alpha_{n}} = \aleph_{\alpha}$. Otherwise $\alpha$ is a limit ordinal and we have that $\aleph_{\alpha} = \sup \{ \aleph_{\xi} \mid \xi < \alpha\} = \sup\{ \aleph_{\alpha_{n}} \mid n \in \omega \} = \sup \{\alpha_{n + 1} \mid n \in \omega \} = \alpha$. Now the limit case makes sense to me, but why on earth can we state that if $\alpha = \alpha_{n}$, then $$ \alpha = \alpha_{n+1} = \aleph_{\alpha_{n}} = \aleph_{\alpha}. $$ I suppose the main issue I am having is why $\alpha = \alpha_{n}$ implies that $\alpha = \alpha_{n+1}$. After that, it is really just a matter of applying definitions.
I think the key piece is that the $\alpha_n$s are increasing: $\alpha_n\le\alpha_{n+1}$. More broadly, the following is true: $\aleph_\beta\ge\beta$, for all $\beta$. So we know $\alpha_n\le\alpha_{n+1}\le\alpha$ (the latter inequality since $\alpha$ is the sup of the $\alpha_n$s), so if $\alpha_n=\alpha$ then $\alpha_n=\alpha_{n+1}$ - and indeed $\alpha_n=\alpha_k$ for all $k\ge n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1747465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
What is the correct way of adjoining an "infinite" point to a totally-ordered field, such that the end result is still totally-ordered? Let $Q$ denote a totally-ordered field. Now adjoin an infinite element $\infty$ to $Q$, by equipping the field of rational functions $Q(\infty)$ in the indeterminate $\infty$ with the least preorder $\lesssim$ such that the obvious axioms hold: * *If $x \in Q$, then $x \lesssim \infty$. *If $x,y \in Q$ and $x \leq y$, then $x \lesssim y$. *If $x \lesssim y$, then $x+z \lesssim y+z$. *If $0 \lesssim x$ and $0 \lesssim y$, then $0 \lesssim xy$. Question. Is $\lesssim$ necessarily a total ordering of $Q(\infty)$? (i.e. antisymmetric and linear). And if not, what is the correct way of adjoining an infinite point to a totally-ordered field, such that the end result is still totally-ordered?
For a more general construction: let $\Gamma$ be a totally ordered abelian group, and $\chi: \Gamma\to \{ \pm 1\}$ be a group morphism. Then we can form the Hahn series field $Q((\Gamma))$ as the set of formal series $\sum_{\gamma\in \Gamma} a_\gamma X^\gamma$ with $a_\gamma\in Q$ such that the support (i.e. $\{\gamma\in \Gamma\,|\, a_\gamma\neq 0\}$) is well-ordered. The product is then given by the usual convolution, which is well-defined precisely thanks to the well-ordering condition on the support (this guarantees that the sums involved in the convolution product will be finite). For instance, $\Gamma = \mathbb{Z}$ gives $Q((\Gamma)) = Q((X))$, the usual Laurent series field (the well-ordering condition just says that the series have bounded-below degree terms). Then for $f = \sum_{\gamma\in \Gamma} a_\gamma X^\gamma\in Q((\Gamma))$, we define $f\geqslant 0$ iff $\chi(\gamma)a_\gamma\geqslant 0$ in $Q$, where $\gamma$ is the least element of the support of $f$. This gives a total ordering on $Q((\Gamma))$, and if you specialize to $\Gamma = \mathbb{Z}$ and $\chi:\mathbb{Z}\to \{\pm 1\}$ the reduction mod $2$ morphism, this gives an ordering on $Q((X))$ which restricts to the ordering you want on $Q(X)$. (I took this construction from Efrat's book "Valuations, Orderings, and Milnor K-Theory".)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1747603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Consecutive integers sum with different steps First of all: beginner here, sorry if this is trivial. We know that $ 1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2 $ . My question is: what if instead of moving by 1, we moved by an arbitrary number, say 3 or 11? $ 11+22+33+44+\ldots+11n = $ ? The way I've understood the usual formula is that the first number plus the last equals the second number plus second to last, and so on. In this case, this is also true but I can't seem to find a way to generalize it.
Assume you have a sequence $$x, x+a, x+2a, x+3a, ..., x+na.$$ Let us note $$S = \sum_{i=0}^n (x+ia).$$ Then by the trick you mentioned, we see that $$S+S = (2x+na)+(2x+na)+...+(2x+na) = (n+1)(2x+na).$$ Hence $$S = \frac{(n+1)(2x+na)}{2}.$$
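For the example in the question, take $x=11$, $a=11$ and $n-1$ in place of $n$ ($n$ terms in total), or simply factor out the step: $$11+22+\cdots+11n = 11(1+2+\cdots+n) = \frac{11\,n(n+1)}{2}.$$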
{ "language": "en", "url": "https://math.stackexchange.com/questions/1747696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 3 }
Question regarding projective coordinate transformation While reading Kunz's commutative algebra book, I came across a statement I can't understand. First, let me define the notations. Let $L/K$ be extension of fields, and let $\mathbb{P}^n (L)$ denote the projective n-space over $L$. * *A Projective coordinate transformation is a mapping $\mathbb{P}^n (L) \rightarrow \mathbb{P}^n (L)$ given by a matrix $A\in GL(n+1,L)$ through the equation $(Y_0,...,Y_n) = (X_0,...,X_n).A$ $\space$ $\space$ $\space$ (1) *A Subset $V\subset \mathbb{P}^n (L)$ is called a projective $K$-variety if there are homogeneous polynomials $F_1,...,F_m\in K[X_0,...,X_n]$ such that $V$ is the set of all common zeros of the $F_i$ in $\mathbb{P}^n (L)$. The author next says that "This concept is invariant under coordinate transformations (1) as long as A has coefficients in K". I have no idea what he means by this. I have tried to apply the transformation both on the n-tuples and on the variables, but in either case $V$ seems to be changing (If I'm not wrong). Can somebody explain what these mean? Thanks in advance.
$V$ is certainly changing in the sense that after applying a coordinate transformation $A$ to $\mathbb{P}^n$ you will find that $A(V)$ is not (in general) the zero locus of the same homogeneous polynomials $F_1,\ldots,F_m$ that $V$ was, but the author is saying that there still exist other homogeneous polynomials $G_1,\ldots,G_m$ that cut out $A(V)$ as their zero locus. So $A(V)$ is still a projective variety - i.e. the concept of being a projective variety is invariant under coordinate transformations. Indeed, you can check that letting $G_i:=F_i\circ A^{−1}$ suffices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1747846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Localization Preserves Euclidean Domains I'm wanting to prove that given a ring $A$ (by "ring" I mean a commutative ring with identity) and a multiplicative subset $S \subset A$: if $A$ is an Euclidean Domain, and $0 \notin S$ then $S^{-1}A$ (localization of A at S) is also an Euclidean Domain. I'm trying to produce an Euclidean Function in $S^{-1}A$ using the Euclidean Function $N:A \rightarrow \mathbb{N}$, that I already have from $A$ but I'm having trouble trying to define it in a way that works and verifies the properties an Euclidean Function must verify. Does any one mind giving me hints? I don't really want a solution.. I would like to work it myself. Thanks in advance. :)
In wikipedia's language, we may assume that $N$ satisfies $N(a)\le N(ab)$ for $a,b\in A$. Let us denote the candidate function for the localization by $N_S\colon (S^{-1}A)\setminus\{0\}\to\mathbb N$. We will also replace $S$ by its saturation, i.e. by $S_{\mathrm{sat}}:=\{ a\in A \mid \exists b\in A: ab\in S\}$. Notice that $S_{\mathrm{sat}}^{-1}A=S^{-1}A$ because for any $a\in S_{\mathrm{sat}}$, we have $a^{-1}=\frac{b}{s}\in S^{-1}A$ where $b\in A$ and $s\in S$ are such that $s= ab$. Hence, assume henceforth that $S$ is saturated in the sense that for any $a\in A$, if there exists some $b\in A$ with $ab\in S$, then we have $a\in S$. Hint: First, note that you may assume $N_S(s)=1$ for all $s\in S$. Indeed, for any $a\in S^{-1}A$, you have $N_S(s)\le N_S(\frac as\cdot s)=N_S(a)$. Hence, $N_S(s)$ must be minimal. Argue similarly that $N_S(\frac 1s)=1$ for $s\in S$. Now use the fact that $A$ is a unique factorization domain. Full spoiler, hover for reveal: We first note that an Element $s\in S$ can not have any prime factor in $A\setminus S$. Indeed, let $s=s_1\cdots s_n$ be the prime factors of $s$. Then, $s_1\in S$ and $s_2\cdots s_n\in S$ because $S$ is saturated. Proceed by induction. For $\frac{ta}{s}\in S^{-1}A$, with $t,s\in S$ and $a$ not divisible by any element of $S$, let $N_S\left(\frac{ta}{s}\right):=N(a)$. This is well-defined because if $\frac{t_1a_1}{s_1}=\frac{t_2a_2}{s_2}$, then $s_1t_2a_2=s_2t_1a_1$. Since $a_1$ is not divisible by any element of $S$, it contains no prime factor in $S$. Since $s_2t_1\in S$, it contains no prime factor in $A\setminus S$. This argument symmetrically works for $s_1t_2\cdot a_1$ and it follows that $s_1t_2=s_2t_1$ and (more importantly) $a_1=a_2$. Now we prove that $N_S$ yields a degree function turning $S^{-1}A$ into a Euclidean ring. Given $\frac{t_1a_1}{s_1},\frac{t_2a_2}{s_2}\in S^{-1}A$, we perform division with remainder $s_2t_1a_1 = qa_2 + r$ such that either $r=0$ or $N(r)<N(a_2)$. Hence, $$\frac{t_1a_1}{s_1} = \frac{q}{s_1t_2}\cdot \frac{t_2a_2}{s_2} + \frac{r}{s_1s_2}$$ and we have $N_{S}\left(\frac r{s_1s_2}\right)=N(r)<N(a_2)=N\left(\frac{t_2a_2}{s_2}\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why does $\tan\left(\frac{\pi}{2} - \beta\right) = \frac{t}{s}$ imply that $\tan\beta = \frac{s}{t}$? I have been preparing for the SAT on KhanAcademy. For one of the trigonometry problems, the following conversion is made: $$\tan\left(\frac{\pi}{2} − \beta\right)= \frac{t}{s}$$ $$\tan(\beta) = \frac{s}{t}$$ There is no explanation for how this change works. Can anyone explain?
Observe that $$\tan(\pi/2-\beta)=\frac{\sin(\pi/2-\beta)}{\cos(\pi/2-\beta)}=\frac{\cos\beta}{\sin\beta}=\frac{1}{\tan\beta}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can every partially ordered set (POSET) take the form of a directed acyclic graph (DAG)? A POSET (partially ordered set) is a set on the elements of which we have established a partial order relation ($\leq$), i.e. a relation which is: reflexive: $x\leq x$ for every $x$ in $S$; anti-symmetric: $x \leq y \wedge y \leq x \Rightarrow x=y$; transitive: $x\leq y, y\leq z \Rightarrow x\leq z$. My question is if every POSET can take the form of a DAG (Directed Acyclic Graph) if we view its elements as the nodes and the relation itself as the edge set.
YES. Every POSET can take the form of a DAG. However, in order to obtain a less cluttered graph, you’d better avoid drawing every single edge. (see figure) You can omit the edges that can be inferred from the reflexive and transitive properties. Moreover you can arrange nodes in order to orient every edge upward and omit vector tips. In this way you obtain a Hasse diagram: it is just a stripped down DAG.
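As a small illustration, here is a sketch in Python that builds the full DAG of the divisibility poset on $\{1,\dots,12\}$ and strips it down to its Hasse diagram (the transitive reduction):

```python
n = 12
elems = range(1, n + 1)

# full DAG: an edge a -> b whenever a strictly divides b
edges = {(a, b) for a in elems for b in elems if a != b and b % a == 0}

# Hasse diagram: drop every edge implied by transitivity
hasse = {(a, b) for (a, b) in edges
         if not any((a, c) in edges and (c, b) in edges for c in elems)}
print(len(edges), len(hasse))  # the Hasse diagram is much sparser
```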
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Actuarial Mathematics proof error? Is there something wrong with this proof or is it just me? Did they forget the negative sign, or does it cancel somehow? (From the second-to-last line to the last line.)
Remember that the survival function (and consequently the deterministic number of lives) is a nonincreasing function with $s(\infty) = l_\infty = 0$. Consequently, by the fundamental theorem, $$\int_{y=x}^\infty l_y \mu(y) \, dy = g(\infty) - g(x),$$ where $g$ is an antiderivative of the integrand, i.e., $g$ satisfies $g'(y) = l_y \mu(y)$. But since $g(\infty) = 0$, upon differentiation, we obtain $$\frac{dl_x}{dx} = -g'(x) = -l_x \mu(x),$$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find all the parameters $a$ such that the line $y = ax + \frac{1}{2}a - 2$ intersects the hyperbola $xy = 1$ at right angles in at least one point. Problem: Find all parameters $a$ such that the line $y = ax + \frac{1}{2}a - 2$ intersects the hyperbola $xy = 1$ at right angles in at least one point. My work: We try to find a tangent to the hyperbola such that the tangent line intersects the given line at a right angle. We know that two lines $y_1=ax+b$ and $y_2=cx+d$ intersect at a right angle iff $a=-\frac{1}{c}$. So we need to find the tangent to the hyperbola. Let $t...y=kx+l$ be our tangent line. To find it we will need $y'$, which we find by implicit differentiation. $$xy=1$$ $$y+xy'=0 \iff y'=\frac{-y}{x}$$ $$k=-\frac{y}{x}$$ But I have no idea how to finish my work.
It would be better to use the functional equation $$y=\frac 1x$$ The derivative is $$y'=-\frac 1{x^2}$$ So at $x_0=r$ we get $y_0=\dfrac 1r$, $f'(x_0)=-\dfrac 1{r^2}$. For the perpendicular line, we want the slope to be $m=-\dfrac 1{f'(x_0)}=r^2$. Putting that into the point-slope equation of a line, the perpendicular line is $$\begin{align} y&=m(x-x_0)+y_0 \\ &=r^2(x-r)+\frac 1r \\ y&= (r^2)x+\left(\frac 1r-r^3\right) \end{align}$$ Your question wants that to be $y=ax+\frac 12a-2$ so we see that $a=r^2$ and $\dfrac 12a-2=\dfrac 1r-r^3$. Substituting, $$\frac 12r^2-2=\frac 1r-r^3$$ That can be simplified to the equation $$2r^4+r^3-4r-2=0$$ Factoring the left hand side, $$(2r+1)(r^3-2)=0$$ This has the solution $r=-\dfrac 12$ or $r=\sqrt[3]2$. Since $a=r^2$, this gives the final answer $$a=\frac 14 \quad\text{or}\quad a=\sqrt[3]4$$ We try that in a grapher, and we see it checks. The two magenta points are the points of intersection where the lines and graph are perpendicular. By "coincidence" one of those points is also a second point of intersection with another line: that can be ignored. Of course that is not really a coincidence. Your line equation can be rewritten as $$y=a\left(x--\frac 12\right)+-2$$ which means the question was asking for the lines through the point $\left(-\dfrac 12,-2\right)$, which is on the graphed hyperbola, that are perpendicular to the hyperbola. We get two lines, one for each branch of the hyperbola, and of course both lines go through that point and one is perpendicular to the hyperbola at that point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Rolling a die until two rolls sum to seven Here's the question: You have a standard six-sided die and you roll it repeatedly, writing down the numbers that come up, and you win when two of your rolled numbers add up to $7$. (You will almost surely win.) Necessarily, one of the winning summands is the number rolled on the winning turn. A typical game could go like this: $1, 1, 4, 5, 3$; you win on the 5th turn because $3 + 4 = 7$. How many turns do you expect to play? Here's what I've tried: We seek $E(N)$ where $N$ is a random variable counting the number of turns it takes to win. Then $N \ge 2$, and $$E(N) = \sum_{n=2}^\infty n P(N=n) = \sum_{n=1}^\infty P(N > n).$$ I want to find either $P(N=n)$, the probability that I win on the $n$th turn, or $P(N > n)$, the probability that after $n$ turns I still haven't won. Note that $P(N = 1) = 0$. Let $X_k$ be the number rolled on the $k$th turn. Then $$P(N = 2) = P(X_1 + X_2 = 7) = \sum_{x=1}^6 P(X_1 = x)P(X_2 = 7-x) = 6\cdot \frac{1}{6}\cdot\frac{1}{6} = \frac{1}{6}.$$ So far so good. To compute $P(N > 3)$ I let $A_{i, j} = \{\omega \in \{1, \dotsc, 6\}^3 : w_i + w_j = 7\}$ and used the inclusion-exclusion principle and symmetry to find $$|A_{1,2} \cup A_{2,3} \cup A_{1,3}| = 3|A_{1,2}| - 3|A_{1,2}\cap A_{1,3}| = 90$$ so $P(N > 3) = \frac{126}{216} = \frac{7}{12}$. This is the probability that no two of three dice sum to seven. Similarly, I found $P(N > 4)$ to be $\frac{77}{216}$. I don't see how to generalize the above. I also thought that $$P(N > n) = P(X_i + X_j \ne 7 \text{ for all }1 \le i\ne j \le n) = (1 - P(X_i + X_j = 7))^{\binom{n}{2}} = \left(\frac{5}{6}\right)^{n(n-1)/2}$$ but that's false because the events are not independent. I also tried $$P(N = n) = P(X_n = 7 - X_k \text{ for some } 1 \le k < n \text{ and }N \ne n - 1)$$ where that last clause is shorthand for "and the previous rolls did not secure your victory". This yields the recursion $p_n = (1-(5/6)^{n-1})(1-p_{n-1})$, $p_1 = 0$, which didn't agree with my previously computed probabilities. (Perhaps I made an error.)
While not exactly stated, it seems that you win only if the sum of consecutive numbers sums to 7. Obviously you can't win on the first roll. For every roll thereafter, you have a $1/6$ chance of winning on the next roll, (and a $5$ in $6$ chance of the game continuing). Now, $P(N=2) = \frac{1}{6}$, $P(N=3) = \frac{5}{6}\times\frac{1}{6}$, $P(N=4) = \left(\frac{5}{6}\right)^2\times\frac{1}{6}$, so $P(N=n) = \left(\frac{5}{6}\right)^{n-2}\times\frac{1}{6}$. Thus $$E(N) = \sum P(N=n)n = 2\times\frac{1}{6} + 3\times\frac{5}{6}\times\frac{1}{6} + 4\times\left(\frac{5}{6}\right)^2\times\frac{1}{6} + \dotsb = \frac{1}{6} \sum_{n=0}^{\infty}(n+2)\left(\frac{5}{6}\right)^n.$$ I don't know if you need to derive this, but, $$\sum_{n=1}^{\infty} n x^{n-1} = \sum_{n=0}^{\infty}(n+1) x^n = \frac{1}{1-x}\sum_{n=0}^\infty x^n = \frac{1}{(1-x)^2}$$ so $$\frac{1}{6}\sum_{n=0}^{\infty}(n+2)\left(\frac{5}{6}\right)^n = \frac{1}{6}\sum_{n=0}^\infty(n+1)\left(\frac{5}{6}\right)^n + \frac{1}{6}\sum_{n=0}^\infty \left(\frac{5}{6}\right)^n$$ which equals $$\frac{1}{6}\times\frac{1}{\left(1-\frac{5}{6}\right)^2} + \frac{1}{6}\times\frac{1}{1-\frac{5}{6}} = 7.$$
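A simulation of the interpretation used in this answer (winning when two consecutive rolls sum to $7$), as a sketch:

```python
import random

def game(rng):
    # roll until two consecutive rolls sum to 7; return the number of rolls
    prev, turns = rng.randint(1, 6), 1
    while True:
        roll = rng.randint(1, 6)
        turns += 1
        if prev + roll == 7:
            return turns
        prev = roll

rng = random.Random(0)
trials = 10**5
print(sum(game(rng) for _ in range(trials)) / trials)  # close to 7
```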
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 2 }
Cover $(0, +\infty )$ by open sets Cover $(0, +\infty)$ by open sets $U_\alpha$ such that for any $\epsilon > 0$ there are points $x, y \in (0, +\infty)$ with $|x-y|<\epsilon$, not both belonging to the same $U_\alpha$. The distance between $x$ and $y$ is less than $\epsilon$, so would it work if we took our open sets to be $U_\alpha=B_{\frac{\epsilon}{2}}(\alpha)=\{x \in \mathbb{R}| d(x, \alpha) <\frac{\epsilon}{2} \}$? So $|x-y|<\epsilon$, but we can take our open ball as having radius $\frac{\epsilon}{2}$. I feel like I may have missed something - is my answer correct?
Your definition of $U_{\alpha}$ is not clear: is $\alpha$ a point or is it an index? Besides that, you want to be able to find these two points for every $\epsilon$. If I understand correctly, what you need is a family of sets such that you can always find two points very close to each other that correspond to different $U_{\alpha}$'s. You can go overkill and try $U_{x,\epsilon}:=B_{\epsilon}(x)\cap (0,+\infty)=\{y\in (0,+\infty)\mid d(x,y)<\epsilon\}$, for every point $x\in (0,+\infty)$ and every $\epsilon >0$. So now let $\epsilon >0 $, take any $x\in (0,+\infty)$ and the neighborhood $U_{x,\frac{\epsilon}{4}}$, and $y:=x+\frac{\epsilon}{2}$ and the neighborhood $U_{y,\frac{\epsilon}{4}}$; you know that $d(x,y)=\frac{\epsilon}{2} < \epsilon $ and that $U_{x,\frac{\epsilon}{4}}\cap U_{y,\frac{\epsilon}{4}}=\emptyset$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Where is a good source for serious math (wall-size) posters? Where is a good source for math wall posters that give glimpses of serious and beautiful mathematics? I'm a faculty member looking to find some wall posters (e.g. 2 ft x 3 ft) to hang in a handful of display cases around our department. I'd like the posters to be eye-catching and cool (maybe even inspiring!), and also deal with serious mathematics - not be trivial, vague, or snide. Any ideas?
One source for posters that may fit your description (though I don't know about the size) could be http://www.ams.org/samplings/mathmoments/mathmoments
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
For every nonzero vector $v$ there exists a linear functional $f$, such that $f(v) \neq 0$. I want to prove that for all $v \in V$ with $v \neq 0 \implies \exists f \in V^{*} : f(v) \neq 0$. I know that if $V$ is finite-dimensional we can choose a basis $\{e_i\}$ of $V$ and construct the corresponding dual basis $\{e^{*}_i\}$. If $v \neq 0$ then necessarily at least one component of $v$ with respect to $\{e_i\}$ must be nonzero. WLOG let $v^{j} \neq 0$; then $e^{*}_j(v) = v^{j} \neq 0$, proving the theorem. However, I think this theorem should be true even in the infinite-dimensional case and I would like to see a basis-free proof (i.e. a proof that doesn't require a choice of basis). Apparently there's a proof in this question, but I'm pretty sure that the constructed functional $f$ is not linear. For the sake of argument let me reproduce the answer here: Let $v \neq 0$ and let $H$ be a subspace of $V$, such that $V = \operatorname{span}(v)\oplus H$. Define $f: V \to \mathbb{F}$ by $f(v) = 1$ and $f(h) = 0$ for all $h \in H$. Now, let's check whether $f$ is homogeneous. Let $w \in \operatorname{span}(v)$, such that $w = \alpha v$ for some nonzero $\alpha \in \mathbb{F}$. We need to show that $f(w) = f(\alpha v)$ equals $\alpha f(v) = \alpha$. However, by definition $f$ is nonzero only for the vector $v$ and we get $f(w) = 0 \neq \alpha = \alpha f(v)$. Even if we interpret the definition of $f$ to mean that $f(s) = 1$ for all $s \in \operatorname{span}(v)$ it doesn't work out. How is this supposed to work out?
$f$ is actually nonzero on $\operatorname{span}(v)$ (except at $0$, of course). What they did not say explicitly is that you extend $f$ to $\operatorname{span}(v)$ by linearity, i.e. $f(\alpha v) \equiv \alpha f(v)$ for all $\alpha\in\mathbb{F}$. So the proof is more or less a definition, in effect.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Convergence of $\int_0^{\infty} \frac{x^n}{(1+x^2)^m}dx$ Suppose we have an integral of the form $$I(n,m):= \int_0^{\infty} \frac{x^n}{(1+x^2)^m}dx$$ Most of my test cases computed seem to indicate if $n\geq 2m-1$, then this integral diverges. I have a non-rigorous justification for this: just expand the bottom and look at leading terms. If we ignore lower order terms as we move towards $\infty$, the integrand will be of order $x^{n-2m}$. If $n= 2m-1$, then this will be $x^{-1}$, whose integral diverges here, and similarly for $n>2m-1$, whereas for $n<2m-1$, the integrand behave like $x^{-2}$ or greater negative powers, which will converge. This is all very nice intuitively, but I'm wondering if anyone has a more rigorous, formal approach to this? Here's a related question, of which this is a generalization.
If $n<2m-1$, we can just directly bound the integrand: $$\frac{x^n}{(1+x^2)^m}\leq \frac{x^n}{x^{2m}+1},$$ since $(1+x^2)^m \ge 1+x^{2m}$ by keeping only the first and last terms of the binomial expansion.
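To spell out why the bound suffices when $0\le n<2m-1$: $$\int_0^\infty \frac{x^n}{x^{2m}+1}\,dx \le \int_0^1 x^n\,dx + \int_1^\infty x^{\,n-2m}\,dx < \infty,$$ since $n>-1$ makes the first integral finite and $n-2m<-1$ makes the second finite.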
{ "language": "en", "url": "https://math.stackexchange.com/questions/1748991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Show that a periodic, completely multiplicative arithmetic function is a Dirichlet character to some module $q$ Show that if $f$ is a periodic, completely multiplicative arithmetic function, then $f$ is a Dirichlet character to some modulus $q$. A Dirichlet character modulo $q$ is an arithmetic function $\chi$ which is periodic module $q$(i.e., $\chi(n+q)=\chi(n)$), completely multiplicative (in particular $\chi(1)=1$) and $\chi(n)\neq 0$ iff $\gcd(n,q)=1$. Suppose that $q$ is the minimal period of $f$. Since $f$ is periodic module $q$, we can think $f$ as a function on $\mathbf{Z}/q\mathbf{Z}$. Then how to show that $\chi(n)\neq 0$ iff $\gcd(n,q)=1$?
Note that if $f\equiv1$ then $f$ is obviously the principal character mod $1$. So assume that $f\not\equiv1$ and $f\not\equiv0$, and let $\left(n,q\right)=1$. Since $f$ is completely multiplicative and $q$-periodic (with $q$ the minimal period), Euler's theorem gives $n^{\phi(q)}\equiv 1 \pmod q$, hence $$f(n)^{\phi\left(q\right)}= f\left(n^{\phi\left(q\right)}\right)= f\left(1\right)=1.$$ So if we assume for contradiction that $f\left(n\right)=0 $, then $f\left(1\right)=0 $, which is absurd; therefore $f\left(n\right)\neq0 $. Now assume there exists an $n $ with $\left(n,q\right)=d>1 $ and $f\left(n\right)\neq0 $. Then there are $a,b\in\mathbb{N} $ with $q=da $ and $n=db $. Since $$f\left(n\right)=f\left(d\right)f\left(b\right) $$ we have $f\left(d\right)\neq0 $. Now for all $m$ we have $$f\left(m\right)f\left(d\right)=f\left(dm\right)=f\left(dm+q\right)=f\left(d\right)f\left(m+\frac{q}{d}\right) $$ and so $$f\left(m\right)=f\left(m+\frac{q}{d}\right), $$ contradicting the minimality of the period $q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1749075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving $n - \frac{\binom{n}{2}}{2} + \frac{\binom{n}{3}}{3} - \cdots = 1 + \frac{1}{2} +\cdots+ \frac{1}{n}$ Prove that $n - \frac{\binom{n}{2}}{2} + \frac{\binom{n}{3}}{3} - \cdots + (-1)^{n+1}\frac{\binom{n}{n}}{n} = 1 + \frac{1}{2} + \frac{1}{3} +\cdots+ \frac{1}{n}$. I am not able to prove this. Please help!
Hint. One may observe that $$ \frac1k=\int_0^1t^{k-1}dt,\quad k\geq1, $$ giving $$ \sum_{k=1}^n\frac1k=\int_0^1\sum_{k=1}^nt^{k-1}dt=\int_0^1\frac{1-t^{n}}{1-t}dt=\int_0^1\frac{1-(1-u)^{n}}udu $$ then use the binomial theorem in the latter integrand to conclude.
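If you first want to convince yourself of the identity numerically, here is a small SymPy check (a throwaway sketch; the cutoff $n\le 10$ is arbitrary):

```python
import sympy as sp

def lhs(n):
    # alternating sum: n - C(n,2)/2 + C(n,3)/3 - ... +- C(n,n)/n
    return sum((-1)**(k + 1) * sp.binomial(n, k) / sp.Integer(k)
               for k in range(1, n + 1))

def H(n):
    # harmonic number 1 + 1/2 + ... + 1/n
    return sum(sp.Rational(1, k) for k in range(1, n + 1))

assert all(lhs(n) == H(n) for n in range(1, 11))
print("identity verified for n = 1..10")
```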
{ "language": "en", "url": "https://math.stackexchange.com/questions/1749182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Roots of a Quartic (Vieta's Formulas) Question: The quartic polynomial $x^4 −8x^3 + 19x^2 +kx+ 2$ has four distinct real roots denoted $a, b, c,d$ in order from smallest to largest. If $a + d = b + c$ then (a) Show that $a + d = b + c = 4$. (b) Show that $abcd = 2$ and $ad + bc = 3$. (c) Find $ad$ and $bc.$ (d) Find $a, b, c, d$ and $k$. My attempt: $$ x^4 −8x^3 + 19x^2 +kx+ 2 $$ With Vieta's formulas; $$ a+b+c+d = 8 $$ $$ab+ac+ad+bc+bd+cd = 19$$ $$abc + abd + acd + bcd = -k$$ $$ abcd = 2 $$ (a) Show that $a + d = b + c = 4$ As $$ a+b+c+d = 8 $$ but $b+c = a+d $ $$ 2a+2d = 8 $$ $$ a+d = 4 $$ Hence $$ a+d = b+c = 4 $$ (b) Show that $abcd = 2$ and $ad + bc = 3$ As $$ abcd = 2 $$ and $$ab+ac+ad+bc+bd+cd = 19$$ $$ ad+bc + a(b+c) + d(b+c) = 19 $$ $$ ad+bc + (a+d)(b+c) = 19 $$ $$ ad+bc + (4)(4) = 19 $$ $$ ad+bc + 16 = 19 $$ $$ ad +bc = 3 $$ (c) Find $ad$ and $bc.$ Given $ad +bc = 3$ and $ abcd = 2 $ Hence $$ bc = \frac {2}{ad}$$ $$ad +bc = 3$$ $$ ad + \frac {2}{ad} = 3 $$ Let $ad = z $ $$ z^2 - 3z + 2 = 0 $$ $$ ad = 2 $$ $$ bc = 1 $$ (d) Find $a, b, c, d$ and $k$. Given $$abc + abd + acd + bcd = -k$$ $$ ad(b+c)+ bc(a+d) = - k $$ But $b+c=a+d=4$ $$ 4ad+ 4bc = - k $$ $$ 4(ad+bc) = -k $$ $$ 4(3) = -k $$ $$ k = -12 $$ Now this is the part which I am stuck on.. How do I find $a,b,c,d$?
Hint If you know that $ad = 2$ and $a + d = 4$, then $$(x - a)(x - d) = x^2 - (a + d) x + ad = x^2 - 4 x + 2 ,$$ so finding $a, d$ is just finding the roots of that quadratic. Of course, finding $b, c$ is analogous.
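A quick NumPy check of the final numbers (a sketch, not part of the argument). Sorting the roots also shows how the two quadratics interleave: with the roots in increasing order, the outermost pair has product $1$ and the inner pair has product $2$:

```python
import numpy as np

k = -12
r = np.sort(np.roots([1, -8, 19, k, 2]).real)   # all four roots are real here
print(r)                          # ~ [0.268, 0.586, 3.414, 3.732]
print(r[0] + r[3], r[1] + r[2])   # both 4, as required
print(r[0] * r[3], r[1] * r[2])   # 1 and 2 -- the two roots of z^2 - 3z + 2
```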
{ "language": "en", "url": "https://math.stackexchange.com/questions/1749383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integrability of a function in $L^2$ We consider $V$, a polynomial in $\mathbb{R}[x_1,x_2,\ldots,x_n]$, such that $e^{-V(X)}\in L^2(\mathbb{R}^n)$. I want to prove that this implies that $e^{-V(X)}$ vanishes as $|X|$ goes to $+\infty$. For that, we suppose that the function does not vanish, and we prove that this implies its value would be greater than some $\varepsilon>0$ on a set of infinite measure. But I don't know how to prove that. Can someone help me, please?
If you are taking the Lebesgue integral of $f$, then what $f$ does on a set of measure $0$ is irrelevant. So if $S$ is an unbounded null set, let $f(x)=0$ for $x\not \in S,$ and let $f(x)$ be anything at all for $x\in S.$ Then $\int f=0.$ Example: Let $S=\mathbb{Q}^n$ and let $f(x_1,...,x_n)=\sum_{j=1}^n|x_j|$ for $(x_1,...,x_n)\in S$ and $f(x)=0$ for $x\not \in S.$ Then $\int_{\mathbb{R}^n}f=0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1749611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$(\mathbb{Z}/77 \mathbb{Z})^{\times} \cong \mathbb{Z}/10 \mathbb{Z} \times \mathbb{Z}/6 \mathbb{Z}$ - Group Let $A$ be a ring with unit element $1 \ne 0$ and let $A^{\times}=\{a \in A: a$ invertible$\}$. Show that $(\mathbb{Z}/77 \mathbb{Z})^{\times} \cong \mathbb{Z}/10 \mathbb{Z} \times \mathbb{Z}/6 \mathbb{Z}$ as a group. I have been stuck on this problem for a while. Could anyone help me at this point, or point out a theorem that would allow me to conclude?
Hint: When $p$ is a prime, we have that $(\mathbb{Z}/p\mathbb{Z})^\times \cong (\mathbb{Z}/(p-1)\mathbb{Z})$. Notice that $(\mathbb{Z}/77\mathbb{Z}) \cong (\mathbb{Z}/7\mathbb{Z})\times (\mathbb{Z}/11\mathbb{Z})$. Can you show that $$(\mathbb{Z}/7\mathbb{Z}\times \mathbb{Z}/11\mathbb{Z})^\times \cong (\mathbb{Z}/7\mathbb{Z})^\times\times (\mathbb{Z}/11\mathbb{Z})^\times \cong (\mathbb{Z}/6\mathbb{Z}) \times (\mathbb{Z}/10 \mathbb{Z})$$
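If you want to see the pieces of the hint concretely, here is a brute-force Python check (an illustrative sketch; the primitive roots $3 \bmod 7$ and $2 \bmod 11$ were found by trial — any primitive roots would do):

```python
from math import gcd

units77 = [a for a in range(77) if gcd(a, 77) == 1]
assert len(units77) == 60                 # phi(77) = 6 * 10

# CRT: a -> (a mod 7, a mod 11) is injective on units, hence bijective
pairs = {(a % 7, a % 11) for a in units77}
assert len(pairs) == 60

def order(g, m):
    # multiplicative order of g modulo m (assumes gcd(g, m) = 1)
    k, x = 1, g % m
    while x != 1:
        x = (x * g) % m
        k += 1
    return k

assert order(3, 7) == 6 and order(2, 11) == 10  # cyclic factors Z/6 and Z/10
print("(Z/77Z)^x = Z/6 x Z/10 confirmed")
```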
{ "language": "en", "url": "https://math.stackexchange.com/questions/1749715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Single variable $C^1$ locally invertible function is globally invertible? I am wondering if and why a single-variable $C^1$ locally invertible function on the entire real line is globally invertible. I've been told that for a single-variable function, if the derivative is always nonzero and continuous, then the inverse can be defined on the entire range of the function. I know quite well that local invertibility does not imply global invertibility in general.
That's correct for $f:\mathbb{R}\rightarrow \mathbb{R}$. If $f^\prime$ is everywhere nonzero, then $f'$ has constant sign (a derivative has the intermediate value property by Darboux's theorem; here the continuity of $f'$ also suffices), so $f$ is strictly increasing or strictly decreasing. It's easy to see that this implies that $f$ is one-to-one (injective), hence a unique inverse $f(\mathbb{R}) \rightarrow \mathbb{R}$ exists (this is just set theory). That this inverse is continuously differentiable is only a local question. (Of course you cannot conclude that $f$ is onto, which can be seen from simple examples.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1749981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difficult Inverse Laplace Transform I had this question in my exam, which most of my batchmates couldn't solve. The question, by the way, is the inverse Laplace transform of $$\frac{\ln s}{(s+1)^2}$$ A hint was also given, which involves the Laplace transform of $\ln t$.
Start from $$f(s,a) = L(t^a) = \frac{\Gamma(a+1)}{s^{a+1}}.$$ Differentiating with respect to $a$, we get $$L(t^a \ln t) = \frac{\Gamma'(a+1) - \Gamma(a+1)\,\ln s}{s^{a+1}};$$ now set $a = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Can't seem to solve a radical equation? Question is : $\sqrt{x+19} + \sqrt{x-2} = 7$ So there is this equation that I've been trying to solve but keep having trouble with. The unit is about solving Radical equations and the question says Solve: $$\sqrt{x+19} + \sqrt{x-2} = 7$$ I don't want the answer blurted, I want to know how it's done, including steps please. Thank you!
Multiply both sides by $\sqrt{x+19} -\sqrt{x-2} $ to get $$x+19 -(x-2) = 7 (\sqrt{x+19} - \sqrt{x-2}) \\ 21 = 7 (\sqrt{x+19} - \sqrt{x-2}) \\ 3 =\sqrt{x+19} - \sqrt{x-2}$$ Adding this to the original equation you get $$2\sqrt{x+19}=10 \Rightarrow x+19=25 \Rightarrow x=6$$ P.S. You can find the same method employed in my answer to this similar question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
What equation produces this curve? I'm working on an engineering project, and I'd like to be able to input an equation into my CAD software, rather than drawing a spline. The spline is pretty simple - a gentle curve which begins and ends horizontal. Is there a simple equation for this curve? Or perhaps two equations, one for each half? I can also work with parametric equations, if necessary.
Assuming you mean $$\begin{align} y(0) &= 40 \\ y(120) &= 0 \\ \dot{y}(0) &= 0 \\ \dot{y}(120) &= 0 \end{align}$$ then a simple cubic will do: $$y(x) = \frac{x^3}{21600} - \frac{x^2}{120} + 40$$ On the range $x=0\dots120$ this is a gentle S-shaped curve from $(0,40)$ to $(120,0)$ with horizontal tangents at both ends (plot omitted). You can find these coefficients very easily. In general, a cubic curve is $$y(x) = C_3 x^3 + C_2 x^2 + C_1 x + C_0$$ and its derivative is $$\dot{y} = \frac{d y(x)}{d x} = 3 C_3 x^2 + 2 C_2 x + C_1$$ Fix four values; the four conditions together determine the four constants through a linear system, and that's it.
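For completeness, here is a small NumPy sketch that solves that linear system for the four coefficients (the endpoint values $y(0)=40$, $y(120)=0$ are the ones used above — substitute your own):

```python
import numpy as np

x0, y0 = 0.0, 40.0      # start point, horizontal tangent
x1, y1 = 120.0, 0.0     # end point, horizontal tangent

# rows encode y(x0), y(x1), y'(x0), y'(x1) for y = C3 x^3 + C2 x^2 + C1 x + C0
A = np.array([
    [x0**3,    x0**2, x0,  1.0],
    [x1**3,    x1**2, x1,  1.0],
    [3*x0**2,  2*x0,  1.0, 0.0],
    [3*x1**2,  2*x1,  1.0, 0.0],
])
b = np.array([y0, y1, 0.0, 0.0])
C3, C2, C1, C0 = np.linalg.solve(A, b)
print(C3, C2, C1, C0)   # 1/21600 = 4.6296e-05, -1/120 = -0.008333..., 0, 40
```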
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 11, "answer_id": 9 }
Determine the largest open set to which $f(z)=\sum_{n=1}^{\infty}(-1)^n(2n+1)z^{n}$ can be analytically continued Let $U=B_1(0)$ and $$f:U \rightarrow \mathbb{C},\qquad f(z)=\sum_{n=1}^{\infty}(-1)^n(2n+1)z^{n}.$$ Determine the largest open set to which $f$ can be analytically continued. Remark: I was given the following suggestion: consider $f(w^2)$. I do not know how to use the suggestion; I would appreciate any hints on this exercise and, if possible, the solution.
The series converges for $|z|<1$, for example by the $n$-th root test. Now, for any $z$ with $|z|=1$, write $z=e^{it}$, $t\in\Bbb R$; we have that $$\lim_{n\to\infty}|(-1)^n(2n+1)e^{nit}|=\lim_{n\to\infty}(2n+1)=\infty\neq0$$ and thus the series doesn't converge at any point of the unit circle, so the maximal open set where $f$ can exist as an analytic function is the open unit disk $|z|<1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $c^{1/n} \rightarrow 1$ for $c > 0$. I can show a straightforward $\epsilon-\delta$ proof that when $c > 1$, $c^{1/n} \rightarrow 1$. But not when $c < 1$? WTS For all $\epsilon$ there exists $N$ st. for $n \geq N$, $|c^{1/n} - 1| < \epsilon$ When $c < 1$, the absolute value can be removed and the inner expression changed to: $1 - c^{1/n}$. We can temporarily set this $< \epsilon$, and see if a useful expression for $n$ in terms of $\epsilon$ comes out: $1 - c^{1/n} < \epsilon$ $1 - \epsilon < c^{1/n}$ $\log_{c}(1 - \epsilon) < \frac{1}{n}$ $\log_{1 - \epsilon}(c) > n$. The final expression is not in terms of $\text{"something"} < n$, which is usually expected. What might be wrong with what I showed?
(very unoriginal) By Bernoulli's inequality, $(1+a/n)^n \ge 1+a $ so $(1+a)^{1/n} \le 1+a/n $. If $c > 1$, let $c = 1+a$. Then $c^{1/n} = (1+a)^{1/n} \le 1+a/n \to 1 $ as $n \to \infty$. If $0 < c < 1$, let $c^{1/n} = \dfrac1{1+b} $. Then $c = \dfrac1{(1+b)^n} \le \dfrac1{1+nb} \lt \dfrac1{nb} $ so $b < \dfrac1{nc} $ and $c^{1/n} = \dfrac1{1+b} \gt \dfrac1{1+\dfrac1{nc}} = \dfrac{nc}{nc+1} = 1-\dfrac{1}{nc+1} \to 1 $ as $n \to \infty$. From "What is Mathematics" by Courant and Robbins. Read this book!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is $\mathbb R \times \mathbb R_{sorg}$ normal? I know that $\mathbb R \times \mathbb R$ is normal and $\mathbb R_{sorg} \times \mathbb R_{sorg}$ is not. But what about $\mathbb R \times \mathbb R_{sorg}$ ? $\mathbb R_{sorg}$ is the Sorgenfrey line.
Yes, it is. By Dowker's theorem we know that $\mathbb{R}_l \times [0,1]$ is normal ($\mathbb{R}_l$ denotes the lower limit topology, another name for the Sorgenfrey line), because $\mathbb{R}_l$ is generalised ordered, hence normal and countably paracompact. Then a theorem by Morita (paper) shows that $\mathbb{R}_l \times \mathbb{R}$ is also normal, as the reals can be covered by countably many subspaces that have a normal product with $\mathbb{R}_l$. [ADDED] The above was my first idea. The Sorgenfrey line and the reals are so nice that more can be said: the Sorgenfrey line is hereditarily Lindelöf (see this blog post), so in particular paracompact and normal, and it's also perfectly normal (items A, B, H in the above post). Then result 2 in this post says that a product of a paracompact and a sigma-compact space is paracompact, and it is proved in that blog as well. As $\mathbb{R}_l$ is paracompact and $\mathbb{R}$ is sigma-compact, we know the product is paracompact. But result 4 in that same post gives even more: a hereditarily Lindelöf space (the Sorgenfrey line) times a separable metrisable space (the reals) is again hereditarily Lindelöf, which implies here that the product is hereditarily normal, and even perfectly normal. The proofs here are simpler, because we can use more of the spaces than in the Morita and Dowker results.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that: $\vdash \forall x(\forall y\alpha)\to \forall y(\forall x \alpha)$ How does one prove that $$\vdash \forall x(\forall y\alpha)\to \forall y(\forall x \alpha)$$ in first order logic? I have tried using the specialization and generalization rules on various wffs but they don't lead me to anything concrete.
If we have a formula like $\forall x \forall y\ \alpha$, where $\alpha$ is another arbitrary formula, then we can exchange the order of the quantifiers without changing its semantics. Note, however, that this would NOT be possible if the quantifiers were different, e.g. $\forall x \exists y$. Now why can we do this? Well, if we have a look at how the semantics of classical first-order logic are defined, then this becomes pretty obvious. Let's suppose we wish to evaluate the truth value of the formula for some signature $\Sigma$ over the domain $\mathcal{U}$ and a variable assignment $\gamma$. Furthermore, let's call the truth evaluation function $I$. With this we have $$I_{\Sigma, \gamma} ( \forall x \forall y\ \alpha ) = \mathbf{T}$$ $$\iff I_{\Sigma, \gamma \cup \{ x \gets c \}} ( \forall y\ \alpha ) = \mathbf{T} \text{ for each $c \in \mathcal{U}$}$$ $$\iff I_{\Sigma, \gamma \cup \{ x \gets c, y \gets d \}} ( \alpha ) = \mathbf{T} \text{ for each $c , d \in \mathcal{U}$.}$$ Obviously, it makes no difference in which order these two quantifiers occur. Therefore we observe that $$ \forall x \forall y\ \alpha \to \forall y \forall x\ \alpha \equiv \forall x \forall y\ \alpha \to \forall x \forall y\ \alpha \equiv $$ $$ \equiv \neg \forall x \forall y\ \alpha \lor \forall x \forall y\ \alpha \equiv \top \text{,} $$ thus the formula is valid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about Vitali Covering (from a Lemma in Royden and Fitzpatrick's book) Definition. For a real valued function $f$ and an interior point $x$ of its domain, the upper derivative of $f$ at $x$ denoted by $\overline{D}f(x)$ is defined as follows: $$\overline{D}f(x)=\lim_{h\rightarrow0}\left[ \sup \left \{\frac{f(x+t)-f(x)}{t}: 0<|t|\leq h \right \} \right]$$ A Lemma in Royden and Fitzpatrick's Real Analysis book says: Lemma. Let $f$ be an increasing function on the closed, bounded interval $[a,b]$. Then for each $\alpha>0$, $$m^*\{x\in (a,b) : \overline{D}f(x) \geq \alpha \} \leq \frac{1}{\alpha}[f(b)-f(a)].$$ The book proceeds to prove this by: Let $\alpha>0$. Define $E_{\alpha}:=\{x\in (a,b): \overline{D}f(x)\geq\alpha \}$. Choose $\alpha' \in (0,\alpha)$. Let $\mathscr{F}$ be the collection of closed, bounded intervals $[c,d]$ contained in $(a,b)$ for which $f(d)-f(c)\geq \alpha ' (d-c)$. Since $\overline{D}f\geq \alpha$ on $E_{\alpha}$, $\mathscr{F}$ is a Vitali covering for $E_{\alpha}$. I can follow the rest of the proof, but why is the statement above true? In particular, why is $\mathscr{F}\neq \emptyset$, and why is it a Vitali covering for $E_{\alpha}$? I have an inkling that it might be due to the fact that $f$ is increasing in $(a,b)$ and thus it can only have a countable number of discontinuity on it, but I can't quite get a solid grasp of it.
Take any $x \in E_\alpha$. Since $\overline{D}f(x)\geq\alpha > \alpha'$, for every $h>0$ the supremum of the quotients $\frac{f(x+t)-f(x)}{t}$ over $0<|t|\leq h$ is at least $\alpha$, so there exist $t$ with $|t|$ arbitrarily small and $\frac{f(x+t)-f(x)}{t}\geq\alpha'$. If $t>0$, the interval $[c,d]=[x,x+t]$ satisfies $f(d)-f(c)\geq \alpha'(d-c)$; if $t<0$, the interval $[c,d]=[x+t,x]$ does. Either way, $x$ lies in an interval of $\mathscr{F}$ of arbitrarily small length, for arbitrary $x$ in $E_\alpha$. This makes $\mathscr{F}$ a Vitali covering for $E_\alpha$ (and in particular $\mathscr{F}\neq\emptyset$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1750975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving a differential equation through substitution In a book, an example is given: Solve $\frac{dy}{dx} = (x+y-4)^2$ by first making an appropriate substitution. In the solution a step is given which I don't understand: We let $u = x+y-4$ and thus $\frac{dy}{dx} = u^2$. We need to calculate $\frac{du}{dx}$. For this example, taking the derivative with respect to $x$ gives $$\frac{du}{dx} = 1+\frac{dy}{dx}.$$ The last step I cannot follow, where does the summand $\frac{dy}{dx}$ come from? I tried $\frac{dy}{dx} = u^2 = \frac{dy}{du}\cdot 1 = \frac{dy}{du}\cdot \frac{du}{dx}$ yielding the solution $y = \frac{1}{3}(x+y-4)^3+C$ which is definitely wrong.
Given that $u=x+y-4$, differentiate both sides with respect to $x$ term by term: $\frac{d}{dx}(x)=1$, $\frac{d}{dx}(y)=\frac{dy}{dx}$ (since $y$ is a function of $x$), and $\frac{d}{dx}(-4)=0$. This gives $$\frac{du}{dx}=1+\frac{dy}{dx}.$$ Substituting $\frac{dy}{dx}=u^2$ then turns the equation into the separable ODE $\frac{du}{dx}=1+u^2$.
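Carrying this through, $\int \frac{du}{1+u^2} = \int dx$ gives $u = \tan(x+C)$, i.e. $y = 4 - x + \tan(x+C)$. A quick SymPy verification of that candidate (a sketch, not part of the book's solution):

```python
import sympy as sp

x, C = sp.symbols('x C')
y = 4 - x + sp.tan(x + C)
# check that dy/dx - (x + y - 4)^2 vanishes identically
assert sp.simplify(sp.diff(y, x) - (x + y - 4)**2) == 0
print("y = 4 - x + tan(x + C) satisfies dy/dx = (x + y - 4)^2")
```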
{ "language": "en", "url": "https://math.stackexchange.com/questions/1751135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence and limit of $\sum_{j=1}^n\sin\left(\frac{j-1}n\right)\left\{\cos\left(\frac jn\right)-\cos\left(\frac{j-1}n\right)\right\}$ The title says it all - I'm trying to find a way of proving the convergence and evaluating the limit of $a_n=\sum_{j=1}^n\sin\left(\frac{j-1}n\right)\left\{\cos\left(\frac jn\right)-\cos\left(\frac{j-1}n\right)\right\}$. This exercise is from the field of integration theory, so probably Riemann sums are to be used on this one but I can't find any way that makes sense. I'd appreciate any hints.
We have: $$ \lim_{n\to +\infty} \sum_{j=1}^{n}\sin\left(\frac{j-1}{n}\right)\left[\cos\left(\frac{j}{n}\right)-\cos\left(\frac{j-1}{n}\right)\right]=\color{red}{\frac{\sin(2)-2}{4}}.$$ Sketch of proof: we have: $$ \cos\left(\frac{j}{n}\right)-\cos\left(\frac{j-1}{n}\right) = \int_{\frac{j-1}{n}}^{\frac{j}{n}}(-\sin x)\,dx=-\frac{1}{n}\,\sin\left(\frac{j-1}{n}\right)+O\left(\frac{1}{n^2}\right), $$ hence our limit is the same as: $$ -\lim_{n\to +\infty}\sum_{j=1}^{n}\frac{1}{n}\sin^2\left(\frac{j-1}{n}\right) = -\int_{0}^{1}\sin^2(x)\,dx. $$
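A quick numerical sanity check of the closed form $(\sin 2-2)/4 \approx -0.2727$ (a throwaway sketch; $n=10^5$ is an arbitrary large cutoff):

```python
import math

def a(n):
    # the partial sum a_n from the question
    return sum(math.sin((j - 1) / n) *
               (math.cos(j / n) - math.cos((j - 1) / n))
               for j in range(1, n + 1))

print(a(10**5))                 # ~ -0.27268
print((math.sin(2) - 2) / 4)    # ~ -0.27268
```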
{ "language": "en", "url": "https://math.stackexchange.com/questions/1751207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How to square both the sides of an equation? Question: $x^2 \sqrt{(x + 3)} = (x + 3)^{3/2}$ My solution: $x^4 (x + 3) = (x + 3)^3$ $=> (x + 3)^2 = x^4$ $=> (x + 3) = x^2$ $=> x^2 -x - 3 = 0$ $=> x = (1 \pm \sqrt{1 + 12})/2$ I understand that you can't really square on both the sides like I did in the first step, however, if this is not the way to do it, then how can you really solve an equation like this one (in which there's a square root on the LHS) without substitution?
You can square it like that, and the equality will still hold - remember these expressions are equal, so squaring them means they are still equal. This can, however, produce spurious solutions - if you do this you should check that the values you get do indeed solve the given equation. Note, however, that $\sqrt{x+3} = (x+3)^{1/2}$, and have another look at the equation. Don't forget that if you divide by anything, you have to make sure it isn't $0$... Edit: the other answers do more with the different cases arising from different values of $x$ and are quite clear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1751410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Three nested summations I'm not sure of how to solve three nested summations and I came up with the following. Is it wrong? $$\sum\limits_{i=1}^n\sum\limits_{j=1}^n\sum\limits_{k=1}^{2i+j} {1}=\sum\limits_{i=1}^n\sum\limits_{j=1}^n(2i+j)=\sum\limits_{i=1}^n(2in+\frac{n(n+1)}{2})=3\frac{n^2(n+1)}{2}$$
Since $$\sum_{i=1}^n\sum_{j=1}^n i=\sum_{i=1}^n\sum_{j=1}^n j=\frac{n^2(n+1)}2$$ we can do this: $$\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^{2i+j}1=\sum_{i=1}^n\sum_{j=1}^n(2i+j)=3\sum_{i=1}^n\sum_{j=1}^ni=\frac{3n^2(n+1)}2\quad\blacksquare$$
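If you want an independent confirmation, SymPy evaluates the nested sums symbolically (a sketch):

```python
import sympy as sp

i, j, k, n = sp.symbols('i j k n', positive=True, integer=True)
s = sp.summation(1, (k, 1, 2*i + j))                  # innermost sum = 2i + j
s = sp.summation(sp.summation(s, (j, 1, n)), (i, 1, n))
print(sp.factor(s))                                   # 3*n**2*(n + 1)/2
```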
{ "language": "en", "url": "https://math.stackexchange.com/questions/1751523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof of $[0,1]~\text{disconnected}\implies(0,1)~\text{disconnected}$ I want to prove the following implication $$[0,1]~\text{disconnected}\implies(0,1)~\text{disconnected}.$$ My try: Suppose $[0,1]=U\cup V$ with $U,V$ open, disjoint and nonempty. Using the subspace topology of $\mathbb{R}$ we also have $U=U'\cap[0,1]$ and $V=V'\cap[0,1]$ where $U',V'$ are open in $\mathbb{R}$. We have $(0,1)=(0,1)\cap[0,1]=(U'\cap(0,1))\cup(V'\cap(0,1))$. This is a union of open sets since $(0,1)$ is an open interval. How can I prove that this is also a union of disjoint sets? Will this $U\cap V=(U'\cap[0,1])\cap(V'\cap[0,1])=U'\cap V'\cap[0,1]=\emptyset$ be sufficient for showing the disjoint-requirement? Also, I do not know how to start showing that $(0,1)$ is a union of nonempty sets. I would appreciate any help.
You just take off the points $0,1$. $(0,1)=(U\setminus\{0,1\})\cup(V\setminus\{0,1\})$. Prove $U\setminus\{0,1\}$ and $V\setminus\{0,1\}$ are open in $(0,1)$ (follows almost trivially), non-empty (trivial) and disjoint (more trivial).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1751678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$E_{k} \subset [0,1]$ such that $\lim_{k \to \infty} m(E_{k}) = 1$ but $\bigcup_{n=1}^{\infty}\bigcap_{k=n}^{\infty}E_{k} = \emptyset$ I am trying to find an example of a collection $\left \{ E_{k} \right \}_{k=1}^{\infty}$ such that each $E_{k} \subset [0,1]$, satisfying $\lim_{k \to \infty} m(E_{k}) = 1$ but $\bigcup_{n=1}^{\infty}\bigcap_{k=n}^{\infty}E_{k} = \emptyset$. What I tried was to construct a collection with monotonically increasing measure tending to $1$. But such collections do not possess the property $\bigcap_{k=n}^{\infty}E_{k} = \emptyset$ for each positive integer $n$. How should I approach constructing such an example?
There's a classic example. Let $E_k=[0,1]\setminus F_k$, where $F_k=\left[\sum_{n=1}^k\frac{1}{n},\sum_{n=1}^{k+1}\frac{1}{n}\right] \operatorname{mod}1$. Mod $1$ means to translate the interval by a whole number until it lies in $[0,1]$. Clearly $m(E_k)=1-\frac{1}{k+1}\to1$. Meanwhile, because the harmonic series diverges, the intervals $F_k$ keep sweeping across $[0,1]$ forever without skipping any points, so every $x\in[0,1]$ lies in $F_k$ for infinitely many $k$. Hence no $x$ can belong to all $E_k$ from some index on, i.e. $\bigcup_{n}\bigcap_{k\geq n}E_k=\emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1751806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Using the Intermediate Value Theorem and derivatives to check for intersections. I have the following question: Prove that the line $y_1=9x+17$ is tangent to the graph of the function $y_2=x^3-3x+1$. Find the point of tangency. So, what I did was: Let's construct a function $h$, such that $$h(x)=x^3-3x+1-(9x+17)=x^3-12x-16.$$ This function is continuous everywhere, particularly in the interval $[0,5]$. Now, $$h(0)=(0)^3-12(0)-16=-16$$ and $$h(5)=(5)^3-12(5)-16=49.$$ So, by Bolzano's theorem, there must exist a point $c \in (0,5)$ such that $h(c)=0$. This proves that the line $y_1$ and the function $y_2$ are tangent to each other at least at one point. Now, we can think of the point of intersection as the point in which $h'(x)=0$. So, $$h'(x)=0 \iff 3x^2-12=0 \iff x \in \{-2,2\}.$$ Now, if either one of those two points is the point of intersection, then it must be a root of $h(x)$, so plugging them in we have $$h(2)= (2)^3-12(2)-16 = -32 \neq 0$$ and $$h(-2)=(-2)^3-12(-2)-16=24-24=0.$$ Therefore, $x=-2$ is the point of intersection of $y_1$ with $y_2$. Now, my issue here is that I don't know exactly how to justify that what I did was actually right. In particular, I don't know why taking the difference of both original functions (constructing $h$) is a valid move and why is it that finding the points in which $h'(x)=0$ equates to finding the point of intersection between $y_1$ and $y_2$. So, my question is: if what I did was actually correct, why is it so? And if it's not, why am I wrong?. Thanks in advance.
I have a comment, and I know this has been looked at: The statement involving Bolzano's Theorem, where you find that there is $c\in (0,5)$ where $h(c)=0$, implies that the graphs of the two functions intersect. It doesn't imply tangency, and it's actually a distraction, in that it gives you a point in $(0,5)$, which is quite far from the point of tangency you eventually find. (In fact, if you try using Bolzano's Theorem near the point of tangency, you must look at $h'(x)$, and not $h(x)$, since $x=-2$ gives rise to a local maximum of $h(x)$, where the function $h(x)$ is tangent to the $x$-axis.) If you want to use Bolzano's Theorem, here's one (overly complicated) approach. First, notice that $$h'(-3)=15,\ \ h'(0)=-12,\ \ h'(3)=15$$ Then by Bolzano's Theorem, there exists $c\in (-3,0)$ so that $h'(c) = 0$, and there also exists $c'\in (0,3)$ so that $h'(c') = 0$. Since $h'(x)$ is a quadratic, you can find the two solutions to the equation $h'(x)=0$ by hand, namely at $c'=2$, $c=-2$ (with notation consistent with our conclusion from Bolzano's Theorem). Since it's a quadratic, we know that we have accounted for all of them. (As the other answer states, this application of Bolzano's Theorem actually serves as an unnecessary extra step, since we can find the zeroes of $h'(x)$ by hand.) You still need the two curves to intersect at the point to have a point of tangency, which is when we use the zeroes of $h'(x)$ (the critical numbers) and check which of these are zeroes of $h(x)$. In checking, you find that $h(2) = -32$, while $h(-2) = 0$. Now plug $x=-2$ back into your starting functions $y_1$ (or $y_2$) in order to conclude that you have a single point of tangency at $(-2,-1)$. Of course, the less complicated alternative is to check that $h(x)$ has a double root at $x=-2$, which it does, since $h(x) = (x+2)^2(x-4)$, which guarantees a point of tangency there. (This method is exactly that of the other answer, where you find roots of $h(x)$, and then check to see which of those is a root of $h'(x)$.)
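As a final sanity check, SymPy confirms the double root, which is exactly the algebraic meaning of tangency (a verification sketch, not part of the argument above):

```python
import sympy as sp

x = sp.symbols('x')
h = x**3 - 3*x + 1 - (9*x + 17)          # difference of the two functions
print(sp.factor(h))                      # (x - 4)*(x + 2)**2
assert h.subs(x, -2) == 0 and sp.diff(h, x).subs(x, -2) == 0
print("point of tangency:", (-2, 9*(-2) + 17))   # (-2, -1)
```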
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
De Morgan's Laws proof My only problem with this proof is that they seem to be assuming that $(A \cup B)'$ is nonempty (similar idea for $A' \cap B'$). Is that because this holds trivially if $(A \cup B)'$ is empty? In other words, $A \cup B = U$ where $U$ is the universal set.
You are correct that this proof is assuming that $(A\cup B)'$ is nonempty. Luckily, we can quite easily check that the claim holds in that case as well: $(A\cup B)'=\emptyset$ means that there are no elements outside both $A$ and $B$, and so $A'\cap B'=\emptyset$. Likewise for the other direction: if $A'\cap B'=\emptyset$ then there are no elements that are in neither $A$ nor $B$, so $(A\cup B)'=\emptyset$. The second part of the theorem is likewise easy to show.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Express $1/(x-1)$ in the form $ax^2+bx+c$ Let $x$ be a root of $f=t^3-t^2+t+2 \in \mathbb{Q}[t]$ and $K=\mathbb{Q}(x)$. Express $\frac{1}{x-1}$ in the form $ax^2+bx+c$, where $a,b,c\in \mathbb{Q}$. I have proved that $f$ is the minimal polynomial of $x$ over $\mathbb{Q}$ but I am stuck showing the above claim. I tried writing $\frac{1}{x-1}=ax^2+bx+c$ and solving for $a,b,c$, but it didn't seem to work. Any idea?
Using the Extended Euclidean Algorithm as implemented in this answer, we get $$ \begin{array}{r} &&x^2&1&-(x+2)/3\\\hline 1&0&1&-1&(1-x)/3\\ 0&1&-x^2&x^2+1&(x^3-x^2+x+2)/3\\ x^3-x^2+x+2&x-1&x+2&-3&0\\ \end{array} $$ which means that $$ \left(\vphantom{x^2}x-1\right)\left(x^2+1\right)+\left(x^3-x^2+x+2\right)\cdot\left(-1\vphantom{x^2}\right)=-3 $$ Therefore, $$ -\frac{x^2+1}3\equiv\frac1{x-1}\pmod{x^3-x^2+x+2} $$
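The same computation can be reproduced with SymPy's extended Euclidean algorithm over $\mathbb{Q}[x]$ (a sketch; `gcdex` returns $s,t,h$ with $sf+tg=h$, and $h=1$ here since the modulus is irreducible):

```python
import sympy as sp

x = sp.symbols('x')
s, t, h = sp.gcdex(x - 1, x**3 - x**2 + x + 2, x)
print(h)              # 1
print(sp.factor(s))   # -(x**2 + 1)/3, the inverse of x - 1 mod the cubic
```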
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is the absolute Galois Group of $\Bbb Q$ countable? Is $\text{Gal} (\overline{\Bbb Q}/\Bbb Q)$ countable or uncountable? It seems like it should be countable (because the algebraic closure of $\Bbb Q$ is countable and there are countably many permutations of the irrational algebraic numbers, and a countable union of countable sets is countable. However, I've seen references that seem to imply it is uncountable. What is the answer, and why/how?
Let $I\subseteq \Bbb N$ be any subset and let $K_I=\Bbb Q(\{\sqrt{p_i}\}_{i\in I})$ where $p_i$ is the $i^{th}$ prime. Then there are precisely $2^{\aleph_0}$ such fields; in fact the Galois group of the compositum of these extensions is isomorphic to $$\prod_{i\in\Bbb N}\Bbb Z/2\Bbb Z$$ and this of course indicates there are uncountably many elements in $\text{Gal}(\overline{\Bbb Q}/\Bbb Q)$. You can also do a simple cardinality argument using inverse limits, but that technology is a bit stronger.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
The Area of Trapezium is given by $A=\frac{1}{2}(4-x^2)(2x+4)$ The area of the trapezium is given by: $$A=\frac{1}{2}(4-x^2)(2x+4)$$ Find the maximum area of the trapezium. Hi, can anyone help me with this question? I know we differentiate the equation, but I don't know what to do next. Thanks.
Hint. One may observe that $$A(x)=\frac{1}{2}(4-x^2)(2x+4)$$ gives $$A'(x)=-(3x-2)(x+2)$$ If there exists a maximum of $A$, it has to be found among the values $A(x_0)$ for which $A'(x_0)=0$. Can you take it from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Let $(a_n)_{n \geq 0}$ be a strictly decreasing sequence of positive real numbers , and let $z \in \mathbb C$ , $|z| < 1$. Let $(a_n)_{n \geq 0}$ be a strictly decreasing sequence of positive real numbers , and let $z \in \mathbb C$ , $|z| < 1$. Prove that the sum $a_0 + a_1z + a_2z^2 + \cdots + a_nz^n +\cdots $ is never equal to zero.
Simply note that $(1-z)\sum_{k=0}^\infty a_kz^k=a_0+\sum_{k=1}^\infty (a_k-a_{k-1})z^k$. If $\sum_{k=0}^\infty a_kz^k=0$ for some $|z|<1$, $$a_0=\sum_{k=1}^\infty (a_{k-1}-a_k)z^k\implies$$ $$ \begin{align} a_0\leq \left| \sum_{k=1}^\infty (a_{k-1}-a_k)z^k\right| &\leq \sum_{k=1}^\infty (a_{k-1}-a_k)|z|^k \\ &< \sum_{k=1}^\infty (a_{k-1}-a_k)\\ &= a_0 - \lim_n a_n \end{align}$$ Hence $\lim a_n<0$ which is a contradiction. This has some interesting consequences, like $\frac{1}{1-z}+e^z$ has no zeroes if $|z|<1$...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Hermitian Matrix and nondecreasing eigenvalues I am studying for finals and looking at old exams. I found this question and am not sure how to proceed. Let $A$ be an $n\times n$ Hermitian matrix with eigenvalues $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$. Prove that if $a_{ii}=\lambda_1$ for some $i$, then every other entry of row and column $i$ is zero. Any help is greatly appreciated!
Let $ \langle \cdot, \cdot \rangle $ be the standard inner product on $ \mathbb{C}^n $. For the Hermitian matrix $ A $, we will show that $ \inf_{||v||=1} \langle Av,v \rangle = \lambda_1 $ for $ v \in \mathbb{C}^n $ having unit norm. First note that $ \langle Av,v \rangle $ is real, since $ \langle Av,v \rangle = \langle v,A^*v \rangle = \langle v,Av \rangle = \overline{\langle Av,v \rangle} $. $ A $ is Hermitian, hence normal, so by the Spectral Theorem, $ \mathbb{C}^n $ has an orthonormal basis $ \{ v_i \}_{i=1}^{n} $ of characteristic vectors of $ A $ whose eigenvalues respectively are $ \{ \lambda_i \}_{i=1}^{n} $. If $ v = \sum_{i=1}^{n} \alpha_iv_i $ with norm $ 1 $ (i.e., $ \sum_{i=1}^{n} |\alpha_i|^2 = 1 $), then we have, $$ \langle Av,v \rangle = \Big\langle \sum_{i=1}^{n} \alpha_i\lambda_iv_i, \sum_{j=1}^{n}\alpha_jv_j \Big\rangle = \sum_{i=1}^{n} |\alpha_i|^2\lambda_i \ge \Big(\sum_{i=1}^{n} |\alpha_i|^2\Big) \lambda_1 = \lambda_1 $$ and $ \langle Av_1, v_1 \rangle = \lambda_1 $, which proves the assertion. Since we also have $ \langle Ae_i, e_i \rangle = \lambda_1 $ by hypothesis, we have equality in the above computation. If $ e_i = \sum_{i=1}^{n} \alpha_iv_i $, then we have $ \sum_{i=2}^{n} |\alpha_i|^2 (\lambda_i - \lambda_1 ) = 0 $ and hence all terms in the sum are zero. If $ k $ is the largest index for which $ \alpha_k \neq 0 $ (such $ k $ exists), then $ \lambda_k = \lambda_1 $, and hence $ \lambda_1 = \lambda_2 = \cdots = \lambda_k $. Thus, $$ Ae_i = A\Big(\sum_{i=1}^{k} \alpha_iv_i\Big) = \lambda_1\Big(\sum_{i=1}^{k} \alpha_iv_i\Big) = \lambda_1e_i $$ which means the $ i $-th column of $ A $ is actually $ \lambda_1e_i $; thus, except for $ a_{ii} $, all other entries in column $ i $ of $ A $ are zero (and hence the same follows for row $ i $, since $ A $ is Hermitian).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a term in a sequence A strictly increasing sequence of positive integers $a_1, a_2, a_3,...$ has the property that for every positive integer $k$, the subsequence $a_{2k-1}, a_{2k}, a_{2k+1}$ is geometric and the subsequence $a_{2k}, a_{2k+1}, a_{2k+2}$ arithmetic. If $a_{13}=539$. Then how can I find $a_5.$ Any help will be appreciated. Thank you.
Hint: Let $a_1=a,a_2=ka$. Then it is not hard to show that $a_{2n+1}=a(nk-n+1)^2$. We have $539=11\cdot7^2$, so evidently we take $k=2,a=11$. That gives $a_5=11(2\cdot 2-1)^2=99$.
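To double-check, one can rebuild the sequence directly from the two interleaved rules with $a=11$, $k=2$ (a sketch):

```python
a, k = 11, 2
seq = [a, k * a]                            # a1, a2
while len(seq) < 13:
    # geometric triple a_{2m-1}, a_{2m}, a_{2m+1}:
    seq.append(seq[-1]**2 // seq[-2])       # division is exact here
    if len(seq) < 13:
        # arithmetic triple a_{2m}, a_{2m+1}, a_{2m+2}:
        seq.append(2 * seq[-1] - seq[-2])
print(seq)              # 11, 22, 44, 66, 99, ..., 539
print(seq[12], seq[4])  # a13 = 539, a5 = 99
```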
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Limit ordinal that cannot be written as $\alpha \cdot \omega$? I learned from this thread that every limit ordinal can be written as $\omega\cdot\alpha$ for some $\alpha$ but is this also true for $\alpha\cdot\omega$ even though ordinal multiplication is not commutative? If it does not hold there must be a limit ordinal that cannot be written as $\alpha\cdot\omega$. I wouldn't even know how to write $\omega+\omega$ as $\alpha\cdot\omega$ but how can I prove this is genuinely impossible?
Think about the following: if $n<\omega$, what is $n\cdot\omega$? On the other hand, if $\alpha\geq\omega$, it is certainly the case that $\alpha\cdot\omega\geq\omega\cdot\omega$. So where does $\omega+\omega$ fit in? (Hint: It doesn't.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1752940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute the probability density function of a bivariate function without sampling Suppose $X_1 \sim f_{X_1}(x_1)$, $X_2 \sim f_{X_2}(x_2)$ are random variables with known probability density functions. Is there any way to compute the probability density function of a bivariate function $g(x_1,x_2)$, assuming specific finite supports $x_1 \in [a_1,b_1]$ and $x_2 \in [a_2,b_2]$, without Monte Carlo sampling or any other sampling method? $X_1$ and $X_2$ are independent, and $g$ is "smooth". I'm looking for a way to possibly implement the computation numerically for an arbitrary $g$.
HINT Let's rename your variables $X,Y$ for easier notation. So you are defining $$ Z = g(X,Y) $$ and asking what is the probability density function of $Z$. For simplicity, let's assume a particular (very simple) $g$, let's say $Z = X+Y$. $Z=z$ means $X+Y=z$, so $X=z-Y$; let's condition on $Y=y$, then you end up with $$ f_Z(z) = \int_\mathbb{R} f_X(z-y) f_Y(y) dy. $$ So if $g(x,y)$ has a nice form allowing to solve $z = g(x,y)$ for one of the variables in terms of the other, you can use this technique to get the pdf...
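Here is a minimal numerical version of this convolution for $Z=X+Y$ on a grid (a sketch: the uniform densities, supports, and step size below are made-up placeholders — plug in your own $f_X$, $f_Y$):

```python
import numpy as np

# placeholder example: X ~ Uniform(0, 1), Y ~ Uniform(0, 2)
a1, b1, a2, b2 = 0.0, 1.0, 0.0, 2.0
f_X = lambda x: np.where((x >= a1) & (x <= b1), 1.0, 0.0)
f_Y = lambda y: np.where((y >= a2) & (y <= b2), 0.5, 0.0)

dy = 1e-4
ys = np.arange(a2, b2, dy)

def f_Z(z):
    # f_Z(z) = integral of f_X(z - y) f_Y(y) dy, discretized as a Riemann sum
    return np.sum(f_X(z - ys) * f_Y(ys)) * dy

for z in (0.5, 1.0, 2.5):
    print(z, f_Z(z))    # ~0.25, ~0.5, ~0.25: the trapezoidal density on [0, 3]
```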
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can they both be perfect squares? Let $X$ and $Y$ be positive integers. Prove that at least one of $X^2+Y$ and $Y^2+X$ is not a perfect square.
If $X^2+Y=A^2$ then $A\geq X+1$, so $Y\geq 2X+1$ and thus $Y>X$. If $Y^2+X=B^2$ then, likewise, $X\geq 2Y+1$ and thus $X>Y$. And this is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Discuss the compactness of these sets My question is: how can I see whether, in $\mathcal H=\ell(\mathbb{N})$, the sets $B_1=\left\{ u \,\middle|\, \frac{|u_k|}{k^2}\leq1 \right \}$ and $B_2=\left\{ u \,\middle|\, \frac{|u_k|}{\log(1+k)}\leq1 \right \}$ are compact or not. I know that I should check whether every sequence in them admits a convergent subsequence. Any help?
A metric space with an infinite closed discrete subspace is not compact. Let $v_k=(x_{k,n})_{n\in \mathbb N}$ where $x_{k,n}$ is $1$ when $k=n$, and is $0$ when $k\ne n.$ Then $\|v_k-v_j\|\geq 1$ when $k\ne j.$ So $\{v_k: 2\leq k \in \mathbb N\}$ is an infinite closed discrete subspace of $B_1$ and of $B_2.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What if we define a function of moderate decrease as a function satisfying $|f|\le\frac{A}{x^2}$ In Stein's Fourier Analysis, he defines a continuous function $f$ as being of moderate decrease if there exists $A>0$ such that $$|f(x)| \le \frac{A}{1+x^2} \quad \forall x\in\mathbb{R}$$ I am wondering what happens if we just define it as $|f(x)|\le \frac{A}{x^2}$? I am asking this because it seems like we don't need that $1$ when showing that whenever $f$ belongs to $\mathcal{M}(\mathbb{R})$, we can define $\int_{-\infty}^{\infty} f(x)dx$. The proof basically argues that the sequence $I_N := \int_{-N}^N f(x) dx$ is Cauchy, and the derivation goes like $$|I_M - I_N | \le \Big| \int_{N\le |x| \le M} f(x) dx \Big|$$ $$ \le \int_{N\le |x| \le M} |f(x)| dx$$ $$\le \int_{N\le |x| \le M} \frac{A}{x^2} dx$$ $$ \le A \int_{N\le |x| \le M} \frac{dx}{x^2}$$ So I feel like the $1$ in the definition probably does not matter? But there should be a reason...
I am wondering what if we just define it as $|f(x)|\le \frac{A}{x^2}$? The function $ x \mapsto \frac1{x^2}$ is not in $L^1(\mathbb{R})$ whereas $ x \mapsto \frac1{1+x^2}$ is in $L^1(\mathbb{R})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Numerical method with convergence greater than 2 It is a well-known fact that, for solving algebraic equations, the bisection method has a linear rate of convergence, the secant method has a rate of convergence equal to 1.62 (approx.) and the Newton-Raphson method has a rate of convergence equal to 2. Is there any numerical method for solving algebraic equations with a rate of convergence greater than 2?
In fact, one can go further. Joseph Traub has an entire book devoted to the construction of a family of methods that are effectively high-order generalizations of the Newton-Raphson and secant methods. On a more modern front, Bahman Kalantari constructed a "basic iteration" family that is a generalization of the Newton-Raphson and Halley methods. He also managed to produce some cool artwork in the process.
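For a concrete taste of convergence beyond order 2, here is a small sketch of Halley's method, which converges cubically (the test function $f(x)=x^2-2$ and the starting point are arbitrary choices for the demo):

```python
def halley(f, df, d2f, x, steps=4):
    # Halley's iteration: x <- x - 2 f f' / (2 f'^2 - f f'')
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        x = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
        print(x)   # the number of correct digits roughly triples each step
    return x

# solve x^2 - 2 = 0, i.e. approximate sqrt(2) = 1.41421356...
halley(lambda x: x**2 - 2,
       lambda x: 2 * x,
       lambda x: 2.0,
       x=1.0)
```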
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How can one show that a Möbius transformation maps the unit circle to some axis? Suppose I have the Möbius transformation $f(z) = \frac{1+z}{1-z}$ and I want to show that it will map the unit circle (excluding the point $1$) to the imaginary axis. Would it help to express my unit circle in polar form? Here's what I did: $$f(e^{i\theta}) = \frac{1+e^{i\theta}}{1-e^{i\theta}}$$ But I'm not sure where to go from this step.
Point $z=1$ corresponds to $\theta=0$. $e^{i\theta}$ is nonzero, and so is $e^{i\theta/2}$. Dividing both numerator and denominator by $e^{i\theta/2}$ yields: $$ f(e^{i\theta})=\frac{e^{-i\theta/2}+e^{i\theta/2}}{e^{-i\theta/2}-e^{i\theta/2}}=\frac{\cos(\theta/2)-i\sin(\theta/2)+\cos(\theta/2)+i\sin(\theta/2)}{\cos(\theta/2)-i\sin(\theta/2)-\cos(\theta/2)-i\sin(\theta/2)}=\\ \frac{2\cos(\theta/2)}{-2i\sin(\theta/2)}=i\cot(\theta/2) $$ For the circle, $\theta$ will vary between $0$ and $2\pi$, so $\theta/2$ will vary between $0$ and $\pi$, and $\cot(\theta/2)$ will vary between $+\infty$ and $-\infty$. Therefore $f(z)$ will map to all points on the imaginary axis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is it that the interval of convergence is half open? I am given the following power series and asked to find the radius of convergence and determine the exact interval of convergence $$\sum\biggr(\frac{3^n}{n\cdot 4^{n}}\bigg)x^n \Leftrightarrow \sum\bigg(\frac{3}{n^{1/n}\cdot 4}\bigg)^{n}x^n$$ If $a_n=(\frac{3}{n^{1/n}\cdot 4})^{n}$, then $\lim\sup|a_n|^{1/n}=\frac{3}{4}$, hence, the radius of convergence is $R=\frac{4}{3}$. I'm then given that the interval of convergence is $I=[-\frac{4}{3},\frac{4}{3})$. I am confused how exactly the author determined that the I.O.C. is half open, and how I can determine if the I.O.C. will be open, closed or half open?
Series convergence tests, e.g., Hadamard's formula for the radius of convergence, are often inconclusive on the boundary; you must check the boundary separately, by plugging in the boundary values, thus obtaining a new series (with no $x$'s involved anymore) and again using some series convergence test. Here, at $x=\frac43$ the series becomes $\sum \frac{1}{n}$, the divergent harmonic series, while at $x=-\frac43$ it becomes $\sum \frac{(-1)^n}{n}$, which converges by the alternating series test. That is why the interval is closed at $-\frac43$ and open at $\frac43$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
f(x) is a function such that $\lim_{x\to0} f(x)/x=1$ $f(x)$ is a function such that $$\lim_{x\to0} \frac{f(x)}{x}=1$$ if $$\lim_{x \to 0} \frac{x(1+a\cos(x))-b\sin(x)}{f(x)^3}=1$$ Find $a$ and $b$ Can I assume $f(x)$ to be $\sin(x)$ since $\sin$ satisfies the given condition?
Hint. You may use the standard Taylor series expansions, as $x \to 0$, $$ \begin{align} \cos x&=1-\frac{x^2}2+O(x^4)\\ \sin x&=x-\frac{x^3}6+O(x^4) \end{align} $$ giving $$ \begin{align} \frac{x(1+a\cos x)-b\sin x}{(f(x))^3}&=\frac{(1+a-b) x+\frac16 (-3 a+b) x^3+O(x^5)}{(f(x))^3} \\\\&=\frac{(1+a-b) x+\frac16 (-3 a+b) x^3+O(x^5)}{x^3(1+\epsilon(x))^3} \end{align} $$ where, as $x \to 0$, we have used $f(x)=x(1+\epsilon(x))$ with $\epsilon(x) \to 0$. Can you take it from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1753874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Find all points of finite order on the elliptic curve $y^2+7xy=x^3+16x$. I am studying Rational Points on Elliptic Curves by Silverman and Tate. This is Problem 2.12 (h). Determine all of the points of finite order on the elliptic curve $y^2+7xy=x^3+16x$. Also determine the structure of the group formed by these points. I have done the other ones that are in Weierstrass standard form with integer coefficients, for example $y^2=x^3-43x+166$. The methods are: * *Use the strong Nagell-Lutz theorem. Find $y$ such that $y^2|D$. Try those $y$ values and find the corresponding $x$ values. If they are integers and satisfy the equation of the curve, check if they have finite orders. *Use the Reduction Modulo $p$ theorem. Find a prime $p$ such that $p\nmid 2D$. Reduce the equation to modulo $p$. The finite order points on the original curve is a subgroup of the finite order points on the new curve. These methods work well for the curves with standard form. But for $y^2+7xy=x^3+16x$, I am not sure what to do. I can make change of variable, of course, to transform it into $Y^2=x^3+\frac{49}{4}x^2+16x$. But this introduce a fraction coefficient, also the change of variable involves a fraction $Y=y+\frac{7}{2}x$. With this the reduction modulo $p$ would not work. Even the N-L theorem is about integer division. My question: How can I get rid of the fractions? Or is there another way to find the finite order points? Thank you for any help!
Your curve $y^2 + 7xy = x^3 + 16x$ is isomorphic to $y^2 = x^3 - 44091x + 3304854$. Maybe you can find the transformation between these two models yourself?
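One explicit change of variables realizing this is $X = 36x+147$, $Y = 216y+756x$ (a standard scaling obtained by completing the square with $\eta = 2y+7x$; other choices are possible). A SymPy sketch checking that the two curve equations match:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = 36*x + 147        # 147 = 3*b2 with b2 = a1^2 = 49
Y = 216*y + 756*x     # Y = 108*(2y + 7x)

lhs = Y**2 - (X**3 - 44091*X + 3304854)
rhs = 46656 * (y**2 + 7*x*y - x**3 - 16*x)
assert sp.expand(lhs - rhs) == 0
print("the substitution maps one curve equation onto the other")
```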
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A team squad combination and probability problem A team of 11 is chosen randomly from a squad of 18. Two of the squad are goal keepers and one of them must be chosen. If neither of the goalkeepers is captain or vice captain, what now is the probability that both the captains and vice captains are selected? The working I came up with was: $$\frac{^{14}C_{8}}{^{16}C_{10}}=\frac{3}{8}$$ The answer is correct as I checked the solutions. What I am wondering was why $^{14}C_{8}$ was used in the working to give the correct answer. (Why use the ways to select 8 from 14?)
Apparently we’re supposed to understand that exactly one of the goalkeepers is chosen. Once that choice has been made, we must choose the other $10$ members of the team from the $16$ members of the squad who are not goalkeepers; this can be done in $\binom{16}{10}$ different ways. How many of these $\binom{16}{10}$ possible teams include both the captain and the vice captain? If we’ve selected both of them for the team, there are still $8$ players to be picked, and there are $14$ players left from whom they can be chosen, namely, the $14$ who are neither the captain, the vice captain, nor either of the goalkeepers. Thus, the team can be completed in $\binom{14}8$ ways. In short, once we’ve chosen our goalkeeper, there are $\binom{14}8$ teams that include that goalkeeper, the captain, and the vice captain out of a total of $\binom{16}{10}$ teams that include that goalkeeper. Assuming that all $\binom{16}{10}$ of those teams are equally likely to be picked, the probability of getting one of the $\binom{14}8$ with the captain and vice captain is $$\frac{\binom{14}8}{\binom{16}{10}}=\frac{10\cdot9}{16\cdot15}=\frac38\;.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Finding the monthly payment for fixed-rate mortgage, but with first month interest free. I'm trying to calculate the monthly payment of a fixed-rate (annuity) loan, but with the twist that the first month is interest free. I.e., I have a principal $P_0$ - the total sum that I've loaned - and I want to pay it off completely in $N$ months. The monthly interest rate is $r$, except for the first month, where it is zero. I want to find the annuity $c$. The formula I've found helps me calculate this in the situation without the first month exception: $$c = \frac{r}{1 - (1+r)^{-N}}P_0$$ How can I modify it?
Let's write the Loan $P_0$ as the present value of the $N$ payments \begin{align} P_0&=c+cv^2+cv^3+\cdots+cv^N=c+c\left(\sum_{k=2}^N v^k\right)=c+c\left(a_{\overline{N}|r}-v\right)=c\left(1-v+a_{\overline{N}|r}\right)\\ &=c+cv\left(v+v^2+\cdots+v^{N-1}\right)=c+cv\left(\sum_{k=1}^{N-1} v^k\right)=c+cva_{\overline{N-1}|r}=c\left(1+va_{\overline{N-1}|r}\right) \end{align} So we have $$ c=\frac{P_0}{1-v+a_{\overline{N}|r}}=\frac{P_0}{1+v\,a_{\overline{N-1}|r}} $$ where $ a_{\overline{n}|r}=\frac{1-v^n}{r} $ and $v=\frac{1}{1+r}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Are we allowed to compare infinities? I'm in middle school and had a question (my dad is helping me with formatting). We're learning about infinity in math class and there are a lot of problems like how it's not a number and how if you add one to infinity it doesn't change value. But can you have one infinity be more than another? There are an infinite amount of odd numbers and an infinite amount of even numbers, so are there the same number of odd and even numbers? I think so, because for every odd number $n$ there is an even number $n+1$. So the odd numbers are $1,3,5,7,\ldots$ while the even numbers are $2,4,6,8,\ldots$, and as long as you stop counting at an even number the two lists will have the same number of numbers. But there are also an infinite amount of multiples of $2$ and an infinite amount of multiples of $3$, but I don't think there are the same amount of both. The multiples of $2$ are $2,4,6,8,\ldots$ while the multiples of $3$ are $3,6,9,12,\ldots$ So, no matter which number you stop at, the multiples of $2$ will have more numbers. (Side question (this is dad speaking, now): is there an easy way to explain why we need to put dollar signs around mathematical expressions to make them look prettier? My daughter doesn't know what $\LaTeX$ is, but I want to give her an explanation that isn't horribly hand-wavy.)
There have been many answers already, but I'd thought maybe I could try to provide an easier explanation of how cardinality works. Consider the three infinite sets $A = \{1,2,3,...\}$, and $B = \{1,2,3,...\}$ and $C = \{1,2,3,...\}$, that is to say, each of them are just all natural numbers. Surely you agree that they have the same size, regardless of how we calculate it. We now change $B$ a bit: We add 5 to every number in it. So now $B = \{6,7,8,...\}$. Adding 5 to every number surely does not change the number of elements in $B$. So we can say that the size of $A$ and $B$ are still the same. Now let's do something different: Let's multiply all numbers in $A$ by 2 and all numbers in $C$ by 3. This surely does not change the number of elements in them - we only shuffled around the values a bit. But now we also see that $A = \{2,4,6,...\}$ and $C = \{3,6,9,...\}$ - the sets we initially looked for! This means that, strangely enough, they have the same number of elements in them. That's why we say both sets are of the same size. So why did we initially think that $\{2,4,6,...\}$ has more elements than {3,6,9,...}? That's because we implicitly used the natural numbers as a scale to measure how far we went in the sequence - we compared the elements by their numeric value, not by their index in the sequence. And that's absolutely not wrong in itself, but we must remind ourselves that this only shows us how quickly the values in the sets grow compared to the natural numbers. That is just something different from the number of elements they contain (in relation to each other). In math speak, our initial idea refers not to the cardinality, but the density, and the answer of @Henning Makholm covered it very nicely. Another example It may also be helpful if you consider this example: You know fractions, right? They are made of a numerator and a denominator like $\frac{a}{b}$. Let's look at the two sequences $\frac{1}{1}, \frac{1}{2}, \frac{1}{3}, ...$ and $\frac{1}{1}, \frac{2}{1}, \frac{3}{1}, ...$. From their construction, it's intuitive to say they have the same number of elements - we always added 1 to either the numerator or denominator in each element. But at the same time, the former sequence only occupies the space between 0 and 1, while the latter is equal to all natural numbers (which looks a lot larger). So the absolute values involved in a sequence are no indicator of how many elements it contains (again: different concepts!).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "148", "answer_count": 11, "answer_id": 0 }
Solve for $\alpha$: $P = \frac{1}{\sigma}\int_{0}^{\alpha} \exp (\frac{ -2 x^{\beta}}{\sigma} ) dx$ I need to solve: $$P = \frac{1}{\sigma}\int_{0}^{\alpha} \exp ( \frac{ -2 x^{\beta}}{\sigma} ) \;dx $$ This simplifies to: $$P = \frac{1}{\sigma} \int_{0}^{\alpha} \exp (- B x^{\beta}) \;dx $$ But if we let: $$t^{2} = Bx^{\beta}$$ And try to make it an erf, then: $$2t dt = \beta B x^{\beta-1} dx$$ This could continue ad-infinitum. Any ideas? Or is a numerical solution the only thing we can do? Update: So for $\beta = 2$, an analytical solution has been found. What about the case for $0 < \beta < 1$?
As Robert Israel answered, the result is given in terms of the incomplete gamma function: the substitution $u = 2x^{\beta}/\sigma$ gives $$P = \frac{1}{\sigma}\displaystyle\int_{0}^{\alpha} \exp \Big(\frac{ -2 x^{\beta}}{\sigma}\Big ) dx=\frac{1}{\beta\sigma}\left(\frac{\sigma}{2}\right)^{1/\beta}\gamma\!\left(\frac{1}{\beta},\,\frac{2 \alpha ^{\beta }}{\sigma }\right)$$ (provided $\Re(\beta )>0$), where $\gamma(s,z)=\int_0^z t^{s-1}e^{-t}\,dt$ is the lower incomplete gamma function. A CAS will typically express an antiderivative instead as $-\frac{x}{\beta \sigma }\,E_{\frac{\beta -1}{\beta }}\left(\frac{2 x^{\beta }}{\sigma }\right)$, where $E_\nu$ is the (generalized) exponential integral function; note that this antiderivative does not vanish at $x=0$, so it must be evaluated at both limits. Either way, this is a very complicated function of $\alpha$ and, as already answered, only numerical methods can solve the problem.
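A quick numerical check of the incomplete-gamma form against direct quadrature (a sketch; the values of $\alpha$, $\beta$, $\sigma$ are arbitrary test inputs):

```python
import mpmath as mp

alpha, beta, sigma = mp.mpf('1.3'), mp.mpf('0.6'), mp.mpf('2.0')

# direct numerical integration
P_quad = mp.quad(lambda x: mp.exp(-2 * x**beta / sigma), [0, alpha]) / sigma

# closed form: (1/(beta*sigma)) * (sigma/2)^(1/beta) * gamma_lower(1/beta, 2 alpha^beta / sigma)
P_gamma = ((sigma / 2)**(1 / beta)
           * mp.gammainc(1 / beta, 0, 2 * alpha**beta / sigma)
           / (beta * sigma))

print(P_quad)
print(P_gamma)   # the two printed values agree
```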
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Is there any well-ordered uncountable set of real numbers under the original ordering? I know that the usual ordering of $\mathbb R$ is not a well-ordering but is there an uncountable $S\subset \mathbb R$ such that S is well-ordered by $<_\mathbb R$? Intuitively I'd say there is no such set but intuitively I'd also say there is no well-ordered uncountable set at all, which is obviously wrong. I still struggle to grasp the idea of an uncountable, well-ordered set.
There can't be. If $S\subseteq \Bbb R$ is well-ordered by the usual ordering, for every element $s_{\alpha}\in S$ that has an immediate successor $s_{\alpha+1}\in S$ (every element of $S$ except the greatest element if there is one), the set of rationals $Q_{\alpha}$ between the element and its successor is nonempty: $(s_{\alpha}, s_{\alpha+1}) \cap \Bbb Q \ne \emptyset$, and the $Q_{\alpha}$ are disjoint. If $S$ were uncountable, then $\bigcup_{\alpha < length(S)} Q_{\alpha}$ would also be uncountable — impossible, as it's a subset of $\Bbb Q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
On solution to simple ODE Consider the ODE $$\frac{dx}{dt} = ax + b$$ where $a$ and $b$ are two parameters. The way to solve this is to divide both sides by $ax+b$ and integrate: $$\int \frac{\dot x}{ax+b}dt = t+C \\ \frac{\log|ax+b|}{a} = t+C \\ x(t) = Ke^{at}-\frac ba$$ Easy enough. But I'm not sure why we're not excluding some possible solutions in the first step of this approach. Doesn't dividing by $ax+b$ immediately rule out any solution where $x(t)=-\frac ba$ anywhere in the interval over which the function is defined? That seems like we might be losing a lot of potential solutions. So why is the above solution the general solution?
You're right, and this is a good issue to point out. In this case, it's straightforward to show uniqueness, though: Suppose that $x(t)$ is a solution and notice that if $y(t) = e^{-at} x(t)$, we have \begin{align*} y'(t) &= -ae^{-at} x(t) + e^{-at} x'(t) \\ &= -ae^{-at} x(t) + e^{-at} \big(ax(t) + b\big) \\ &= be^{-at} \end{align*} Now integrating shows what $y$ must be, and hence $x$. No division by zero at all. Sometimes, one can solve an equation in a somewhat ad hoc manner, as this separation of variables does, and then simply check that the solution is valid by substitution and unique by an easy argument like this one.
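For completeness, the final integration carried out (assuming $a \neq 0$, which the separation approach also needs): $$y(t) = \int b e^{-at}\,dt = -\frac{b}{a}e^{-at} + K \quad\Longrightarrow\quad x(t) = e^{at}y(t) = Ke^{at} - \frac{b}{a}.$$ Note that $K=0$ recovers exactly the constant solution $x \equiv -\frac{b}{a}$ that the division step appeared to discard, so no solutions are lost.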
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Orthogonal matrix norm If $H$ is an orthogonal matrix, then $||H||=1$ and $||HA||=||A||$ for every matrix $A$ (such that the product $H \cdot A$ is defined). Which norm is this about?
The operator norm $$ \|A\|=\max\{\|Ax\|_2:\ \|x\|_2=1\}, $$ where $\|\cdot\|_2$ is the Euclidean norm, also satisfies those two equalities. They follow easily from the fact that $\|y\|_2^2=y^Ty$, so $$\|Hx\|_2^2=(Hx)^THx=x^TH^THx=x^Tx=\|x\|_2^2.$$
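A small numerical illustration of these identities (my own sketch; $H$ is a random orthogonal matrix obtained from a QR factorization, and the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
H, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # H is orthogonal: H.T @ H = I
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

print(np.isclose(np.linalg.norm(H @ x), np.linalg.norm(x)))        # ||Hx||_2 == ||x||_2
print(np.isclose(np.linalg.norm(H @ A, 2), np.linalg.norm(A, 2)))  # operator 2-norm preserved
print(np.isclose(np.linalg.norm(H, 2), 1.0))                       # ||H|| == 1
```

(The Frobenius norm also satisfies $\|HA\|_F=\|A\|_F$ by the same $H^TH=I$ computation, though $\|H\|_F=\sqrt{n}$ rather than $1$.)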
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 0 }
Open cover; why is the following open? I am asked to look at the following: For $n=0,1,2,\dots$ define $U_n \subseteq [0,1]$ by $U_0=[0,\frac{1}{2})$ and $U_n=(2^{-n},1]$ for $n \geq 1$. So, along the unit interval, we have a cover from zero to one half and other sets $U_n$ stretching from $1/2, 1/4, \dots$ to $1$. Here's the claim I cannot justify: Then $(U_n)_{n \geq 0}$ is an open cover of $[0,1]$. Why is this open? No idea. Rather, aren't they half-open? Clearly, each $U_n$ has one "open" end, "(" or ")", but the other end is always "closed", "[" or "]". Thus isn't each $U_n$ half-open, so neither open nor closed? So I am forced to challenge the statement above; they aren't open. I checked the definition of open in this context, and indeed $(U_i)_{i \in I}$ is an open cover if $U_i$ is open for each $i \in I$. Well, immediately, for $i = 1$, $U_1=(\frac{1}{2},1]$ is not open (nor closed), so I've found a counterexample. Please tell me why I am wrong!
They are open in the subspace topology of $[0,1]$. For example, $U_1 = (\dfrac{1}{2}, 1]$ is open, because it is the intersection of an open set in $\mathbb{R}$, say $(\dfrac{1}{2}, \dfrac{3}{2})$, with $[0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to verify this relationship between area under the graph and the preimage? Define $h : \mathbb{R} \to [0, \infty)$, and let $H = \{(x,y)\mid 0 \leq y \leq h(x)\}$ be the region under the graph (including the boundary). I wish to show the following is true: $$H = \bigcup_{c>0} h^{-1}[c,\infty) \times [0, c]$$ Attempt: ($\subseteq$) Let $(x,y) \in H$, then $0 \leq y \leq h(x)$, then $h^{-1}[y,\infty) \times [0, y] \subseteq h^{-1}[c,\infty) \times [0, c] \quad \forall y,c \in \mathbb{R}$, so $H \subseteq \bigcup_{c>0} h^{-1}[c,\infty) \times [0, c]$ ($\supseteq$) Not sure...
If $h = 0$ (the constant function), then $h^{-1}[[c,\infty)] = \emptyset$ for all $c > 0$, as no $x$ has $h(x) \ge c > 0$. And then the product set is empty too, and we'd get $H = \emptyset$, which is false, as $H = \mathbb{R} \times \{0\}$. So we should have a union over all $c \ge 0$, not just $c > 0$. The rest of the post assumes this change. Now the proof from left to right should be written a bit differently. Pick $(x,y) \in H$, so $0 \le y \le h(x)$, where $x,y$ are fixed. Pick $c = y$, which is now OK, as $y \in [0,\infty)$, so $c \ge 0$. Then $x \in h^{-1}[[c,\infty)]$, as $h(x) \ge y = c$, and $y \in [0,c]$, as $c = y$. So $(x,y)$ is in the union, as we have a specific (!) $c$ such that $(x,y)$ is in the right hand side for that $c$. Right to left is quite similar. If $(x,y)$ is in the union, there is a specific fixed $c$ (for that pair) such that $(x,y) \in h^{-1}[[c,\infty)] \times [0,c]$. The latter means that $x \in h^{-1}[[c,\infty)]$ and $y \in [0,c]$. So we have $h(x) \in [c,\infty)$ (so $h(x) \ge c$) and $0 \le y \le c$, and combining we get $0 \le y \le c \le h(x)$, which means that $(x,y) \in H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are some applications of large cardinals? Most mathematicians don't often encounter cardinalities larger than that of the continuum, it seems? What are some results outside of pure set theory/logic that rely on the properties of larger cardinals? In a similar vein, can anyone give examples of types of mathematical objects (groups, fields, etc.) that have cardinalities larger than the continuum?
Some objects whose cardinal is at least $2^c,$ where $c$ is the cardinal of the reals: (1). The set of all real functions. (2). The set of all filters on an infinite set. (3). The dual space $l_{\infty}^*$ of the Banach space $l_{\infty}.$ (4). The maximal compactifications of $N$, of $Q$, and of $R$. (5). The free group on any set $S$ such that $|S|>c.$ One result that comes to mind is that if $X$ is a separable Tychonoff space and $X$ has a closed discrete subspace $Y$ with $|Y|\geq c$, then $X$ is not a normal space. This relies on the facts that the power set of $Y$ has cardinal at least $2^c$, that the set of continuous $g:X\to R$ has cardinal at most $c$, and that $c<2^c$. Examples: (1). The Niemytzki plane is not normal. (2). $bN$ has a non-normal subspace, where $bN$ is the maximal compactification of $N.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1754985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof related to pigeon hole principle to be done with induction Since the question is about a positive integer $m$, it's clear that mathematical induction is needed, but to prove the fact for $n = k+1$ we have to use the pigeonhole principle. I am confused about using both at once; could someone help me with this proof?
Hint: The base case is easy, so let's look at the inductive step. Assuming $P(m)$ is true, let's consider any set $S$ of $2m+3$ distinct integers from among $[-2m-1, 2m+1]$. If fewer than $3$ of those are from among $R = \{\pm(2m+1), \pm2m\}$, the induction hypothesis takes care of it. So that leaves the cases where $|S \cap R| \in \{3, 4\}$. Here use pigeonhole for each distinct case separately. There are two distinct cases to consider, accounting for symmetry in signs: either $\{2m+1, -2m-1\} \subset S$, or $\{2m+1, 2m, -2m\} \subset S$ but $-2m-1 \not \in S$. Case 1: $\pm (2m+1) \in S$. If $0 \in S$ it is trivial. Else take as holes the pairs $(1, 2m), (2, 2m-1), \dots, (m, m+1)$ and their negatives, and you will have $2m$ holes and $2m+1$ pigeons to roost. Case 2: $\{2m+1, 2m, -2m\} \subset S$ but $-2m-1 \not \in S$. Again, if $0, -1 \in S$ it is trivial. Else we take the pairs $(1, 2m-1), (2, 2m-2), \dots, (m-1, m+1)$ and the pairs $(-2, -2m+1), (-3, -2m+3), \dots, (-m, -m-1)$ to get $2m-2$ pairs for $2m-1$ pigeons (excluding $m$ if it's there).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Another characterization of the cofinality? Is it true that $cf(\kappa)=\min \{\lambda:\ \kappa^{\lambda}>\kappa\}$? $cf(\kappa)$ is certainly $\geq$ that minimum, since $\kappa^{cf(\kappa)}>\kappa$, but I don't know how to tackle the reverse inequality. What's worse, I'm not even sure if my hypothesis is true :) Is it? (I'm actually fighting to prove that this $\min$ is a regular cardinal, and my idea was that it could be equal to the cofinality).
For an infinite cardinal $k$, it is also consistent with ZFC that the equality holds, thanks to König's Theorem (aka König's Lemma), which gives $k^{cf(k)}>k$, together with the fact that GCH implies $k^l=k$ for $0<l<cf(k).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How many 2-dimensional subspaces is a 1-dimensional subspace contained in? $V$ is a 3-dimensional vector space over a field $K$ of order 2. There are seven 2-dimensional subspaces and seven 1-dimensional subspaces, using ${n\choose k}_q = \frac{(q^n-1)(q^n-q)...(q^n-q^{k-1})}{(q^k-1)(q^k-q)...(q^k-q^{k-1})}$. I can show that each 2-dimensional subspace contains three 1-dimensional subspaces (using the equation above). How do I show that each 1-dimensional subspace is contained in three 2-dimensional subspaces? Thanks in advance.
Hint 1: You know there are $7 \times 3$ pairs $(X,Y)$ where $X$ is a two-dimensional subspace containing the one-dimensional subspace $Y$. You know how many one-dimensional subspaces there are. Count those pairs another way. Possible hint 2: There's a duality that switches one- and two-dimensional subspaces. You might be able to use that to prove what you need. PS You could change "some field of order 2" to "the field of order 2".
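If seeing the numbers helps, here is a brute-force verification sketch over $K^3$ for $|K|=2$ (my own addition, separate from the hints): every $2$-dimensional subspace is the kernel of the pairing $x \mapsto a\cdot x$ for exactly one of the $7$ nonzero vectors $a$, and every $1$-dimensional subspace is the span of one of the $7$ nonzero vectors.

```python
from itertools import product

# the 7 nonzero vectors of F_2^3; each spans one 1-dim subspace,
# and each defines one 2-dim subspace as the kernel of x -> a.x
vectors = [v for v in product((0, 1), repeat=3) if any(v)]

def dot(a, v):
    return sum(s * t for s, t in zip(a, v)) % 2

for v in vectors:
    planes_through_v = [a for a in vectors if dot(a, v) == 0]
    assert len(planes_through_v) == 3

print("each of the 7 lines lies in exactly 3 of the 7 planes")
```

This is exactly the double count of Hint 1: $7$ planes times $3$ lines each gives $21$ incident pairs, shared among $7$ lines, hence $3$ planes per line.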
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For what maximum positive $k$ is $2n \sin^{2} \frac{\pi}{n} > \tan \frac{k\pi}{n}$ true? I am trying to find the maximum value of $k$ such that the inequality $$2n \sin^{2} \frac{\pi}{n} > \tan \frac{k\pi}{n}$$ is satisfied. I impose restrictions that $n \in \mathbb{Z}$ with $n \geq 5$, and $k \in \mathbb{Z}$ with $k \leq \lfloor \frac{n}{2}\rfloor$. If $n$ is very large, I can expand the inequality in Taylor series to obtain that $2\pi >k$, so $k \leq 6$. How could I find the greatest upper bound for $k$ for small and intermediate $n$? I suspect that I still have $k \leq 6$ but cannot prove or disprove it. I should add that I only need an upper bound for $k$ rather than an exact value. Thanks...
Let $$ f(x)=\frac{x}{\pi}\,\arctan\Bigl(2\,x\sin^2\frac{\pi}{x}\Bigr),\quad x>0. $$ Then $k=\lfloor\, f(n)\rfloor$. Using that $\arctan x<x$ and $\sin x<x$ if $x>0$, we see that $$ f(x)<\frac{x}{\pi}\,\arctan\Bigl(\frac{2\,\pi^2}{x}\Bigr)<2\,\pi<7. $$ Moreover, $\lim_{x\to\infty}f(x)=2\,\pi$. Thus $k\le6$ for all $n$, and $k=6$ for all $n$ large enough. It can be shown that $f$ is increasing, and direct calculation shows that $k=6$ for all $n\ge53$.
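A quick numerical spot-check of these claims (just tabulating $\lfloor f(n)\rfloor$; this illustrates rather than proves the monotonicity):

```python
import math

def f(n):
    return n / math.pi * math.atan(2 * n * math.sin(math.pi / n) ** 2)

ks = {n: math.floor(f(n)) for n in range(5, 500)}
print(ks[52], ks[53])    # 5 6 -- the jump to k = 6 happens at n = 53
print(max(ks.values()))  # 6   -- consistent with the bound k <= 6
```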
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is there a factor of 1.7159 with the tanh function used in neural network activation? I was reading about neural networks when I came across the line: Recommended $f(x) = 1.7159 \tanh(\frac{2}{3} x)$. How do we arrive at these values (we can fix one once the other is obtained, using the condition $f(1) = 1$)? Pg 10 at Efficient Backprop
If you read further, at the top of page 14 it states that the required conditions for the sigmoid are:

* $f(\pm1)=\pm1$
* the second derivative is a maximum at $x=1$
* the effective gain is close to 1

Once you've decided that a $\tanh$ curve is a useful curve to fit for your sigmoid, it is a matter of choosing the parameters to satisfy these conditions.
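Concretely, once the shape $f(x) = a\tanh(\tfrac{2}{3}x)$ is fixed, the condition $f(\pm1)=\pm1$ alone pins down the constant as $a = 1/\tanh(2/3)$, which is where $1.7159$ comes from:

```python
import math

a = 1 / math.tanh(2 / 3)
print(a)  # 1.7159076...
```

As a side observation (mine, not from the paper): for $f(x)=a\tanh(bx)$, the magnitude of the second derivative peaks where $\tanh^2(bx)=\tfrac13$, i.e. at $x=\operatorname{artanh}(1/\sqrt3)/b\approx 0.659/b$, so the choice $b=2/3$ places that peak near $x=1$, matching the second condition.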
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Eigenvalues of a tridiagonal block matrix When a tridiagonal matrix is also Toeplitz, there is a simple closed-form solution for its eigenvalues: $$\lambda_k= a + 2 \sqrt{bc} \, \cos\Bigl(\frac{k \pi}{n+1}\Bigr), \quad k=1,\dots,n.$$ Now my question: is there a formula for the eigenvalues of a tridiagonal block matrix as well? For example, I have $$ A=\left[ \begin{matrix} B & I & 0 \\ I & B & I \\ 0 & I & B \\ \end{matrix} \right] $$ where $$ B=\left[ \begin{matrix} -4 & 1 & 0 \\ 1 &-4 & 1 \\ 0 & 1 & -4 \\ \end{matrix} \right] $$ and $I$, $0$ are $3 \times 3$ matrices. Can the eigenvalues be calculated from a similar formula?
Write $$C:=\left[ \begin{matrix} 0 & 1 & 0 \\ 1 &0 & 1 \\ 0 & 1 & 0\\ \end{matrix} \right],$$ so that $$A=B\otimes I + I\otimes C.$$ Now $B$ and $C$ are symmetric and diagonalisable by orthogonal $\Omega$, $\Delta$ and you tell us you have a formula for the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ of $B$ and $\mu_1, \mu_2, \mu_3$ of $C$. Now $\Omega\otimes \Delta$ will simultaneously diagonalise $B\otimes I$ and $I\otimes C$, and hence $A$. Moreover the eigenvalues of $A$ are now patently the nine $\lambda_i+\mu_j$. Or have I done something very silly?
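A quick numerical sketch of this construction for the matrices in the question (my own verification; `np.kron(B, I)` realizes $B\otimes I$):

```python
import numpy as np

B = np.array([[-4., 1., 0.], [1., -4., 1.], [0., 1., -4.]])
C = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
I = np.eye(3)

A = np.kron(B, I) + np.kron(I, C)  # reproduces the block-tridiagonal A above

lam = np.linalg.eigvalsh(B)        # eigenvalues of B
mu = np.linalg.eigvalsh(C)         # eigenvalues of C
sums = np.sort([l + m for l in lam for m in mu])

print(np.allclose(np.sort(np.linalg.eigvalsh(A)), sums))  # True: eig(A) = {lambda_i + mu_j}
```

Combined with the tridiagonal Toeplitz formula from the question, this gives the closed form $\lambda_{ij} = -4 + 2\cos\frac{i\pi}{4} + 2\cos\frac{j\pi}{4}$ for $i,j=1,2,3$.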
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$xy^{-1}$ In order to prove the following: $$x<y \iff x^{-1}>y^{-1}$$ for $x>0$ and $y>0$, I tried this: $$x<y\implies y-x>0$$ I have to prove that this implies that $\frac{1}{x}>\frac{1}{y}$, that is, $\frac{1}{x}-\frac{1}{y}>0$. Well, we know that $$\frac{1}{x}-\frac{1}{y} = \frac{y-x}{xy}$$ The numerator $y-x>0$ by assumption; the denominator $xy$ can be $>0$ or $<0$. Since $x$ and $y$ are positive, $xy>0$, and it is proved that the quotient is positive and thus $>0$. Now, for the converse, suppose $x^{-1}>y^{-1}$, then: $$\frac{1}{x}-\frac{1}{y} = \frac{y-x}{xy}>0 \implies $$ Well, if the quotient is $>0$, then either $y-x$ and $xy$ are both positive, or both negative. If they're positive, then $y>x\implies x<y$, which is what we wanted to prove. Now, if both are negative, then $y-x<0 \implies y<x$. What? This can't happen. UPDATE: totally forgot that $x>0$ and $y>0$... So $xy$ can't be negative. Also, in order to prove $x>0 \iff x^{-1}>0$ I tried: $x>0 \implies \frac{1}{x}>0$ but... what? this is too obvious...
Some of the answers here threw me off; I think it's a lot simpler than they make it. Simply use the properties of multiplication. Another answer here showed that the statement is true in one step, but you can step through the intermediary calculations to see what is going on. If $x < y$ then $1 < x^{-1}y$. Here, I have simply left-multiplied by $x^{-1}$ on both sides. Similarly, if $1 < x^{-1}y$ then $y^{-1} < x^{-1}$, which was to be shown. You have to be careful not to include elements for which no inverse exists, but you have given that $x, y >0$. The proof in the other direction follows the exact same pattern.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1755945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Show a given analytic function is constant Suppose that $f$ is analytic on some region $R\subseteq\mathbb{C}$. If Im$(f)$ = $k\cdot$Re$(f)$ for some nonzero constant $k\in\mathbb{C}$, then show that $f$ is constant on $R$. I know that if $f'(z)=0$ for all $z$ in some region $R$, then $f(z)$ is constant. However, I'm not sure how to apply this to the question at hand. I also think the Cauchy-Riemann equations could be helpful, but again, I'm not sure how to apply them to the question. Any guidance would be greatly appreciated. Thank you!
If $f$ is analytic, so is the real-valued function $$g(z) = \frac{f(z)}{1 + ik}$$ Now there's a straightforward application of the Cauchy-Riemann equations to conclude that a real-valued analytic function is constant. Notice that there is one value of $k$ for which this doesn't work.
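Spelling out the Cauchy-Riemann step (the standard argument; recall a region is connected, which is what lets a vanishing derivative force constancy): write $g = u + iv$. Since $g$ is real-valued, $v \equiv 0$, so the Cauchy-Riemann equations give $$u_x = v_y = 0, \qquad u_y = -v_x = 0,$$ hence $g' = u_x + iv_x = 0$ throughout $R$, so $g$ is constant, and therefore $f = (1+ik)g$ is constant as well.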
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $f:SO(n)\rightarrow S^{n-1}$, $f(A)=(A^n_i)_i$ a submersion? Let $f:SO(n)\rightarrow S^{n-1}$, $f(A)=(A^n_i)_i$, that is, $f(A)$ is the last row of $A$. Show that $f$ is a submersion. I'm not sure how to calculate $df$, because I only know how to compute the differential using local charts, and I don't know how to parametrize $SO(n)$, so this is my attempt: Let $F:M_n(\mathbb{R})\rightarrow\mathbb{R}^n$, $F(M)=(M^n_i)_i$. Then $F|SO(n)=f$. Since $F$ is linear, if $p\in SO(n)\subset M_n(\mathbb{R})$, $dF_p(v)=(v^n_i)_i$, and we conclude that $df_p(v)=(v^n_i)_i.$ So, is this right? And how can I show that $df_p$ is surjective?
In fact, if $e_n$ is the $n$-th basis vector, $f(A)= Ae_n$ (strictly, $f(A)$ is the last row of $A$, i.e. the last column of $A^{T}$; since transposition is a diffeomorphism of $SO(n)$, we may as well work with the last column). Let us extend this formula to the set of all matrices: $F(A)=Ae_n$. So $F$ is linear and is its own differential, $F'(A)B= Be_n$. The derivative of $f$ at the point $A$ is the restriction of this map to the tangent space ${\cal A}(n)\cdot A$ of $O(n)$ at $A$, where ${\cal A}(n)$ is the set of antisymmetric matrices. So we are reduced to proving that for every vector $y$ perpendicular to $Ae_n = u$ we can find an antisymmetric matrix $B$ such that $y=B (Ae_n)=Bu$. But precisely ${\cal A}(n)\cdot u=u^{\perp}$, as one can check in an orthonormal basis where $u$ is the first vector; explicitly, $B = yu^{T}-uy^{T}$ works, since $|u|=1$ and $y\perp u$ give $Bu = y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find all possible real solutions. Find all possible real solutions of $a, b, c, d$ and $e$ if: $3a= (b+c+d)^3$ $3b= (c+d+e)^3$ $3c= (d+e+a)^3$ $3d= (e+a+b)^3$ $3e= (a+b+c)^3$ Well I believe the solutions are possible only if $a=b=c=d=e$. In that case the solutions possible are $0, \frac{1}3$ and $-\frac{1}3$, but I am unable to prove that no other solution exists. Probably using $A.M.\geq G.M.$ might work (where equality is possible iff the terms are equal). But again I am not able to get that to help.
HINT: If $x, y$ are real then $x^3 < y^3 \Leftrightarrow x<y$. I consider the equalities to be numbered. Now suppose $a>b$. Then from equalities (1) and (2) we get $b>e$. Because $a>e$, from (1) and (5) we get $d>a$. So $d > a > b > e$. Now, because $d>e$, from (4) and (5) we get $e > c$, so $d > a > b > e > c$. Because $b > c$, from (2) and (3) we get $c>a$, which is false. Therefore $a>b$ is impossible; the case $a<b$ is ruled out by the same argument with all inequalities reversed, so $a=b$, and repeating the reasoning around the cycle gives $a=b=c=d=e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $f(x)=\sum_{n=1}^{\infty} (\frac{x}{n}-\log(1+\frac{x}{n}))$ is continuous and can be differentiated ad infinitum We have $f:(0,\infty) \rightarrow \mathbb{R}$ defined by the infinite series $f(x)=\sum_{n=1}^{\infty} (\frac{x}{n}-\log(1+\frac{x}{n}))$. Prove that $f$ is continuous and can be differentiated infinitely often. I have a problem right at the beginning: I do not know how to prove that the above series is uniformly convergent. I was trying to use the series expansion of the natural logarithm, but I get $f(x)=\sum_{n=1}^{\infty}\sum_{k=2}^{\infty} \frac{(-1)^kx^k}{kn^k}$ and I don't know how to proceed.
By "$\log$" you mean the natural logarithm. From the Wikipedia article on the natural logarithm, we have the following inequality for all $x\in(0,\infty)$: $$\frac{x}{x+1}\le\log(1+x)\le x.$$ Then we see that each term $u_n(x):=\frac{x}{n}-\log(1+\frac{x}{n})$ satisfies $$0\le u_n(x)\le\frac{x}{n}-\frac{\frac{x}{n}}{\frac{x}{n}+1}=\frac{x^2}{nx+n^2}.$$ Now for any $M>0$ and all $x\in(0,M]$, we have $$0\le u_n(x)\le\frac{M^2}{n^2},$$ and then $$0\le f(x)=\sum_{n=1}^\infty u_n(x)\le M^2\bigg(\sum_{n=1}^\infty\frac{1}{n^2}\bigg).$$ Hence by the Weierstrass criterion, $f$ converges uniformly on $(0,M]$ for all $M>0$, and thereby converges (though perhaps not uniformly) on $(0,\infty)$ by the arbitrariness of $M$. Therefore, the continuity of $f$ follows from the continuity of each term $u_n$ and the uniform convergence of the series on $(0,M]$ for all $M>0$. Similarly, for any $M>0$ and all $x\in(0,M]$, $$0\le u_n'(x)=\frac{1}{n}-\frac{1}{n+x}=\frac{x}{n(n+x)}\le\frac{M}{n^2},$$ and for $k\ge2$, $$|u_n^{(k)}(x)|=\bigg|\frac{(-1)^k(k-1)!}{(n+x)^k}\bigg|\le\frac{(k-1)!}{n^k},$$ thus the infinite differentiability of $f$ follows from the infinite differentiability of each term $u_n$ and the uniform convergence of the termwise derivatives $u_n^{(k)}$ on $(0,M]$ for all $M>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
ODE $y(1+2xy)dx+x(1-xy)dy=0$ $$y(1+2xy)dx+x(1-xy)dy=0$$ I have tried to isolate $\frac{dy}{dx}$ and got the following: $$\frac{dy}{dx}=-\frac{y(1+2xy)}{x(1-xy)}$$ but as I understand it, for the standard substitution the terms would all have to be of the same degree (a homogeneous equation), and that is not the case here. What should I do?
Set $u=xy$. Then $\frac{du}{dx}=x\frac{dy}{dx}+y$, and your equation becomes $$\frac{x}{u}\frac{du}{dx}-1 = \frac{1+2u}{u-1}.$$ This can be rewritten as $$x\frac{du}{dx} = \frac{3u^2}{u-1},$$ that is, $$\frac{u-1}{3u^2}\,du=\frac{1}{x}\,dx.$$ Now you can integrate both sides.
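Carrying out the remaining integration (just completing the computation above): $$\int\frac{u-1}{3u^2}\,du = \frac{1}{3}\ln|u| + \frac{1}{3u} + C_1,$$ so with $u=xy$ the general solution is given implicitly by $$\frac{1}{3}\ln|xy| + \frac{1}{3xy} = \ln|x| + C.$$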
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Evaluating $\lim\limits_{x \to -\infty} \sqrt{x^2 + 3x} - \sqrt{x^2 + x}$. Is Wolfram wrong or is it me? What am I doing wrong? My attempt $$\begin{align} \lim_{x \to -\infty} \sqrt{x^2 + 3x} - \sqrt{x^2 + x} &= \lim_{x \to -\infty} \sqrt{x^2 + 3x} - \sqrt{x^2 + x} \cdot \frac{\sqrt{x^2 + 3x} + \sqrt{x^2 + x}}{\sqrt{x^2 + 3x} + \sqrt{x^2 + x}} =\\ &= \lim_{x \to -\infty} \frac{2x}{\sqrt{x^2 + 3x} + \sqrt{x^2 + x}} =\\ &= \lim_{x \to -\infty} \frac{\frac1x \cdot 2x}{\sqrt{\frac{x^2}{x^2} + \frac{3x}{x^2}} + \sqrt{\frac{x^2}{x^2} + \frac{x}{x^2}}} = \\ &= \lim_{x \to -\infty} \frac{2}{\sqrt{1 + \frac3x} + \sqrt{1 + \frac1x}} = 1 \end{align}$$ Wolfram result
On your way to the last line, you're tacitly assuming that, for example, $\frac1x\sqrt{x^2+x} = \sqrt{\frac{x^2}{x^2}+\frac{x}{x^2}}$. But that is only true when $x$ is positive! When $x$ is negative, $\frac1x\sqrt{\cdots\vphantom{x}}$ is negative, and thus it cannot equal a (nonnegative) square root. For $x<0$ the correct simplification is $\frac1x\sqrt{x^2+x} = -\sqrt{1+\frac1x}$, so the last line becomes $$\lim_{x \to -\infty} \frac{2}{-\sqrt{1 + \frac3x} - \sqrt{1 + \frac1x}} = -1,$$ in agreement with Wolfram.
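A quick numerical check makes the sign concrete (a throwaway sketch):

```python
import math

for x in (-1e2, -1e4, -1e6):
    print(x, math.sqrt(x * x + 3 * x) - math.sqrt(x * x + x))
# prints values approaching -1, e.g. -1.0101... at x = -100
```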
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to check if $n!$ is $ O(2^n)$ How can I check if $n! \in O(2^n)$? The definition of $f$ being $O(g)$ is $f(n) \le c g (n)$ for some constant $c>0$. So it would mean $n! \le c 2^n$. What is the clearest way to settle this? (I am solving past papers for my next exam.) Thank you!
As you said, you need to check whether there is a constant $C>0$ such that $n! \le C 2^n$ for all $n$ (or at least all sufficiently large $n$, but it does not matter much). Note it is crucial that $C$ does not depend on $n$. In this case it turns out there is no such $C$. There are various ways to see this. One of them could go like this. Assume there is such a $C$. Then $n! \le C 2^n$ for all sufficiently large $n$. Then $\prod_{i=1}^n \frac{i}{2} \le C$ for all sufficiently large $n$. Yet for $i \neq 1$ we have $\frac{i}{2}\ge 1$, so that $\prod_{i=1}^n \frac{i}{2} \ge \frac{1}{2} \frac{n}{2}$ (for $n \ge 2$). Thus we get $\frac{n}{4} \le C$ for all sufficiently large $n$, which is absurd.
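The growth of $n!/2^n$ is also easy to see numerically (a quick illustrative sketch):

```python
import math

for n in range(1, 11):
    print(n, math.factorial(n) / 2**n)
# 0.5, 0.5, 0.75, 1.5, 3.75, 11.25, ... : the ratio grows without bound,
# so no single constant C can satisfy n! <= C * 2^n for all large n
```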
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How to rewrite $\frac{\partial \rho u_i u_j}{\partial x_j}$ in vector notation I want to rewrite this index-notation expression in vector (symbolic) notation. $$\frac{\partial \rho u_i u_j}{\partial x_j}=\frac{\partial \rho }{\partial x_j}u_i u_j+\rho\frac{\partial u_i}{\partial x_j} u_j+\rho u_i\frac{\partial u_j}{\partial x_j} $$ This is what I have tried so far: Expression 1: $$=u_i\vec{u}\cdot\left(\nabla\rho\right)+\rho\vec{u}\cdot\left(\nabla\vec{u}\right)+\rho u_i\left(\nabla \cdot \vec{u}\right)$$ $$=u_i\left[\nabla \cdot\left(\rho\vec{u}\right)\right]+\rho\vec{u}\cdot\left(\nabla\vec{u}\right)$$ Expression 2: $$=\nabla\cdot\left(\rho u_i \vec{u}\right)$$
Given that I prefer index notation, which presents fewer ambiguities, I would write expression 1 as: $$ \mathbf{u}(\mathbf{u}\cdot\nabla)\rho+\rho(\mathbf{u}\cdot\nabla)\mathbf{u}+\rho\mathbf{u}(\nabla\cdot\mathbf{u}) $$ and expression 2 as $$ \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u}) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
linear ODE problem A substance evaporates at a rate proportional to the exposed surface. If a spherical mothball of radius $\frac{1}{2}$ cm has radius $0.4$ cm after $6$ months, how long will it take:

* for the radius to be $\frac{1}{4}$ cm?
* for the volume of the mothball to be half of what it was originally?

So I understand the radius of the sphere gets smaller as time goes by. $$\frac{dV}{dt}=4\pi (r-t)^2$$ This is what I came up with, but it does not seem right.
$$\frac{d}{dt}(\frac43\pi r^3) = -4k\pi r^2, $$ $$3r^2\frac43\pi \frac{dr}{dt} = -4k\pi r^2,$$ $$\frac{dr}{dt} = -k $$
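Completing the computation from $\frac{dr}{dt}=-k$: the radius shrinks linearly, $r(t)=\frac12-kt$, and $r(6)=0.4$ gives $k=\frac{1}{60}$ cm per month. The two requested times then follow by arithmetic (units assumed to be centimeters and months):

```python
k = (0.5 - 0.4) / 6                             # cm per month, from r(6) = 0.4
t_quarter = (0.5 - 0.25) / k                    # time until r = 1/4 cm
t_half_volume = (0.5 - 0.5 / 2 ** (1 / 3)) / k  # half volume  <=>  r = r0 / 2^(1/3)
print(t_quarter, t_half_volume)                 # 15.0 and about 6.19 months
```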
{ "language": "en", "url": "https://math.stackexchange.com/questions/1756932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }