Fourier sine transform of $ \frac{1}{e^{ax } - e^{-ax}} $ Consider the function $ \frac{1}{e^{ax} - e^{-ax}}.$ We are trying to find its Fourier sine transform. After putting in the values we get $$\int_{-\infty}^{\infty}\frac{1}{e^{ax}-e^{-ax}}\sin px\, dx$$ I have tried to solve it by different techniques but got stuck every time. For example, I converted $(e^{ax}-e^{-ax})$ into $2\sinh(ax)$, but when I move it to the numerator it becomes $\operatorname{csch}(ax)$, and I am again stuck finding the integral of $\sin(px)\operatorname{csch}(ax)$. Similarly I tried integration by parts, but I could not solve it. Can I get some help?
We can find it in Gradshteyn, I. S.; Ryzhik, I. M., Table of Integrals, Series, and Products, 8th ed. (eds. D. Zwillinger and V. Moll), Elsevier/Academic Press, 2015. Entry 3.981.1 is $$ \int_0^\infty\frac{\sin \alpha x}{\sinh \beta x}\, dx = \frac{\pi}{2\beta}\,\tanh\frac{\alpha\pi}{2\beta}, \qquad \operatorname{Re} \beta > 0,\ \alpha > 0. $$ Going from $\int_0^\infty$ to $\int_{-\infty}^\infty$ doubles the result (the integrand is even), but going from $e^{ax}-e^{-ax}$ to $\sinh(ax)$ halves the result. So your answer is $$ \frac{\pi}{2 a}\,\tanh\frac{p\pi}{2 a} $$
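(A numerical check I added, not part of the original answer; the values a = 2, p = 3 are arbitrary test choices.) The G&R entry is easy to verify with scipy:

from math import pi, tanh, sin, sinh
from scipy.integrate import quad

a, p = 2.0, 3.0  # arbitrary test values for beta and alpha
integrand = lambda x: sin(p*x)/sinh(a*x) if x > 0 else p/a  # removable singularity at 0
numeric, _ = quad(integrand, 0, float('inf'))
print(numeric)                        # ~0.7714
print(pi/(2*a) * tanh(p*pi/(2*a)))    # same value from the closed form

By the doubling/halving remark above, this is also the value of the original integral over $(-\infty,\infty)$ with denominator $e^{ax}-e^{-ax}$.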
{ "language": "en", "url": "https://math.stackexchange.com/questions/4343093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solve the equation $x^4-8x^3+23x^2-30x+15=0$ Solve the equation $$x^4-8x^3+23x^2-30x+15=0$$ As $x=0$ is obviously not a solution, we can consider $x\ne0$, so I have tried to divide both sides by $x^2$ to get $$x^2-8x+23-\dfrac{30}{x}+\dfrac{15}{x^2}=0$$ Clearly this does not help. I have also tried to find rational roots using Horner's method; it seems that the polynomial on the left-hand side has no rational roots.
Note that, if $p(x)$ is your polynomial, then $p(x+2)=x^4-x^2-2x-1=x^4-(x+1)^2$. Can you take it from here?
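(My addition.) The shift is easy to machine-check with sympy, and the factorization hands you the roots:

import sympy as sp

x = sp.symbols('x')
p = x**4 - 8*x**3 + 23*x**2 - 30*x + 15
print(sp.expand(p.subs(x, x + 2)))  # x**4 - x**2 - 2*x - 1  =  x**4 - (x + 1)**2
print(sp.solve(p, x))               # the four roots of the original quartic

From $x^4-(x+1)^2=(x^2-x-1)(x^2+x+1)$, the real roots of the original equation are $2+\frac{1\pm\sqrt5}{2}$.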
{ "language": "en", "url": "https://math.stackexchange.com/questions/4343240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Closed-form for $\sum_{n=1}^{\infty} \left(m n \, \text{arccoth} \, (m n) - 1\right)$ I'm looking for a closed-form for the following sum: $$\sum_{n=1}^{\infty} \left(m n \, \text{arccoth} \, (m n) - 1\right)$$ for $|m|>1.$ In a previous question of mine, the following similar sum was determined: $$\sum_{n=1}^{\infty} (-1)^n \left(m n \, \text{arccoth} \, (m n) - 1\right)=\frac{1}{2}+\frac{1}{2} \log \left|\cot\left(\frac{\pi}{2m}\right)\right|+\frac{m}{2\pi} \left(\frac{1}{2}\text{Cl}_2\left(\frac{2\pi}{m}\right)-2\text{Cl}_2\left(\frac{\pi}{m}\right)\right)$$ for $|m|>1,$ where $\text{Cl}_2$ is the Clausen function of order 2. I determined the following closed-forms for example as mentioned in that question: $$\sum_{n=1}^{\infty} \left( 4n \, \text{arccoth} \, (4n)-1\right) = \frac{1}{2} - \frac{G}{\pi}- \frac{1}{4} \ln (2)$$ $$\sum_{n=1}^{\infty} \left(6n \, \text{arccoth} \, (6n) - 1\right) = \frac{1}{2} - \frac{3}{2\pi} \, \text{Cl}_2 \left( \frac{\pi}{3}\right)$$ $$\sum_{n=1}^{\infty} \left( 8n \, \text{arccoth} \, (8n) - 1\right) = \frac{1}{2} - \frac{2}{\pi} \, \text{Cl}_2 \, \left(\frac{\pi}{4}\right) - \frac{1}{4} \ln (2-\sqrt{2})$$ I suspect a similar method to that used by @skbmoore involving the Barnes G function might be applicable. Any help would be much appreciated. Here is my attempt: $$S(m) := \sum_{n=1}^{\infty} \left(m n \, \text{arccoth} \, (m n) - 1\right) = \frac{m}{2} \log \left( \prod_{n=1}^{\infty} \frac{1}{e} \left(\frac{1+1/(m n)}{1-1/(m n)}\right)^{n}\right)\\ = \frac{1}{2} + \frac{m}{2} \zeta' \left(-1,1-\frac{1}{m}\right)-\frac{m}{2}\zeta' \left(-1,1+\frac{1}{m}\right)+\frac{1}{2} \zeta' \left(0,1-\frac{1}{m}\right)+\frac{1}{2} \zeta' \left(0,1+\frac{1}{m}\right) \\ =\frac{1}{2} + \frac{m}{2} \left(\zeta' \left(-1,1-\frac{1}{m}\right)-\zeta' \left(-1,1+\frac{1}{m}\right)\right) \\+ \frac{1}{2} \ln\left( \Gamma \left(1-\frac{1}{m}\right)\Gamma \left(1+\frac{1}{m}\right)\right) - \frac{1}{2}\ln (2\pi) \\=\frac{1}{2} + \frac{m}{2} \left(\zeta' \left(-1,1-\frac{1}{m}\right)-\zeta' \left(-1,1+\frac{1}{m}\right)\right) + \frac{1}{2} \ln\left( \frac{\pi}{m} \csc \left(\frac{\pi}{m}\right)\right) - \frac{1}{2}\ln (2\pi)$$ However, I would like to write the solution in terms of the Clausen function if possible, like in the alternating sum, but I cannot see how to do that.
To get your result in terms of the Clausen function of order $2$, we can use the identity $$ \begin{align} \zeta'(-1,x) &= -\log G(x+1) + x \log \Gamma (x) + \zeta'(-1) \\ &= - \log \Gamma(x) - \log G(x) + x \log \Gamma(x) + \zeta'(-1) \\ &= - \log G(x) + (x-1) \log \Gamma(x) + \zeta'(-1), \end{align}$$ which holds for $x>0$, and the Barnes G reflection formula $$\log \left( \frac{G(1-x)}{G(1+x)} \right)= x\log \left(\frac{\sin \pi x}{\pi} \right) + \frac{\operatorname{Cl}_{2}(2 \pi x)}{2 \pi}, \quad 0 < x< 1, $$ which was used in skbmoore's answer to your previous question. Then $$ \begin{align} \small\zeta' \left(-1, 1- \frac{1}{m}\right)- \zeta' \left(-1, 1+ \frac{1}{m}\right) &= \small- \log \left(\frac{G \left(1-\frac{1}{m}\right)}{G \left(1+ \frac{1}{m}\right)} \right) - \frac{1}{m} \log \left(\Gamma \left(1+ \frac{1}{m} \right) \Gamma \left(1- \frac{1}{m} \right) \right) \\ &= \small - \log \left(\frac{G \left(1-\frac{1}{m}\right)}{G \left(1+ \frac{1}{m}\right)} \right) - \frac{1}{m} \log \left(\frac{1}{m}\Gamma \left(\frac{1}{m}\right) \Gamma \left(1- \frac{1}{m} \right) \right) \\ &= \small- \frac{1}{m}\log \left(\frac{\sin \left(\frac{\pi}{m} \right)}{\pi} \right) - \frac{\operatorname{Cl}_{2} \left(\frac{2 \pi}{m} \right)}{2 \pi}- \frac{\log\left(\frac{\pi}{m} \csc\left(\frac{\pi}{m}\right)\right)}{m}. \end{align}$$
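(A numerical cross-check I added; it assumes mpmath's clsin(2, x) for $\operatorname{Cl}_2(x)$.) Taking $m=6$ as a test case:

from mpmath import mp, nsum, acoth, clsin, pi, inf

mp.dps = 25
m = 6
lhs = nsum(lambda n: m*n*acoth(m*n) - 1, [1, inf])
rhs = mp.mpf(1)/2 - 3/(2*pi) * clsin(2, pi/3)
print(lhs, rhs)  # both ~0.01541, matching the closed form quoted in the question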
{ "language": "en", "url": "https://math.stackexchange.com/questions/4343353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Why is coin tossing combination *without* repetition? As per the title, for a traditional statistics question like: Find the probability of landing 7 heads in 10 throws of a fair coin, P(head) = 0.5. No problems with this being a combination (as opposed to a permutation), but I cannot convince myself why it's the WITHOUT repetition subcategory of combinations, thus allowing one to use the binomial distribution via $\binom{n}{r}$, as opposed to using the $\binom{n+r-1}{r}$ count for WITH repetition. What exactly does "without repetition" physically mean in this real-life example?
Perhaps more generally than in my Comments, consider an urn with ten marbles: five red and five blue. What is the practical difference between sampling with replacement and sampling without replacement? Suppose you choose five marbles out of ten, with replacement. That is, you take out a marble, note its color, and put the marble back in the urn. So, on each draw the urn has $5/10 = 0.5$ red marbles and $5/10 = 0.5$ blue marbles. Following the same logic as for tossing a fair coin five times, we see that the probability of drawing a red marble in exactly two of the five draws is ${5\choose 2}(1/2)^5 = 10/32 = 0.3125.$ [As before, the 'binomial coefficient' ${5\choose 2} = 10$ counts the arrangements of two red and three blue marbles in sequence.] By contrast, if I do not replace each of the five marbles as I draw them, then the contents of the urn are different on each draw. So, we have to say that there are ${10\choose 5}$ ways to draw five marbles from among ten. The number of ways to choose exactly two red marbles out of five is ${5\choose 2} = 10$ and the number of ways to choose exactly three blue marbles out of five is ${5\choose 3} = 10.$ Then the probability of choosing exactly 2 red marbles in five draws without replacement from the urn is the 'hypergeometric' probability $$\frac{{5\choose 2}{5\choose 3}}{{10\choose 5}} = \frac{10(10)}{252} = 0.3968254.$$ The bottom line is that the difference between (binomial) sampling with replacement and (hypergeometric) sampling without replacement gives different probabilities for getting exactly two red marbles in five draws from the urn---binomial $0.3125$ and hypergeometric $0.3968254.$ In R:

choose(10, 5)
[1] 252
choose(5,2)
[1] 10
dhyper(2, 5,5, 5) # without replacement
[1] 0.3968254
dbinom(2, 5, .5) # with replacement
[1] 0.3125

The plot below shows the binomial and hypergeometric PDFs for values $x = 0,1,2,3,4,5.$ Hypergeometric probabilities for $x=0$ and $x=5$ are quite small (almost too small to show on the graph). The hypergeometric distribution has smaller variance because options decrease as the marbles in the urn disappear. R code for the figure:

k = 0:5
pdf.b = dbinom(k, 5, .5)
pdf.h = dhyper(k, 5,5, 5)
hdr = "PDFs of Binomial (blue) and Hypergeometric Distributions"
plot(k-.04, pdf.b, type="h", ylim=c(0,.4), lwd=2, ylab="PDF", xlab="x", col="blue", main=hdr)
lines(k+.04, pdf.h, type="h", lwd=2, col="brown")
abline(h=0, col="green2")
{ "language": "en", "url": "https://math.stackexchange.com/questions/4343453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Over a left V-ring, why is every module containing a simple essential submodule simple? I've tried to prove the statement: $R$ is a $V$-ring if and only if every module containing a simple essential submodule is simple. I know that a ring $R$ is called a $V$-ring if every simple $R$-module is injective. However, I cannot prove the above statement by just using this definition of $V$-ring. Can anyone give me a hint for this, or at least some extra information about injectivity or $V$-rings that I can use for the proof of that statement?
"$R$ is a V-ring if and only if every module containing a simple submodule is simple." is a false theorem. For example, the ring $R=F_2\times F_2$ is a semisimple ring, and so all of its modules contain simple modules. Semisimple rings are also V-rings, but obviously not all of its modules can be simple. To address the new version of the question "$R$ is a V-ring if and only if every module containing an essential simple submodule is simple." This is obvious since injective modules are summands of their supermodules. No summand can be essential unless it is already the entire module. So if a module contains an essential injective submodule, that submodule is already the entire module.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4343634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Average number of days to see all possible cards My father and I go to the restaurant every day, and each one of us needs to grab a card, which has a number from 1 to 600. I thought about registering every new card we see in a list, and a question arose: "How many days, on average, would we need to see every possible card?" * *Each card has an equal probability of being chosen *The two cards my father and I get are different So far, I reasoned that the minimum number is 300, if it were the case that every day we got different cards. In addition, it gets ever harder to reach the final number of cards: when the list of seen cards is almost complete, it is more likely that we are going to draw a card that has already been seen before, and not a new one. I coded 1000 simulations in Python and plotted the expected number of days to reach $n$ cards on average, which is compatible with my reasoning, and results in an average of 2088 days for us to see each one of the 600 cards. But I would like to see a non-brute-force way to derive such a value.
If you draw with replacement instead, then this is the coupon collector's problem, which has a simple solution and is still a good approximation of your situation (since drawing the same card twice on a given day is unlikely). The expected number of draws to get all coupons from a set of $n$ is $n\sum_{k=1}^n\frac1k\approx n \log n + \gamma n + \frac12$, where $\gamma$ is Euler's constant. For $n=600$, this is about $4185$ draws, or $2092.5$ days at two draws/day. We can adjust this to simulate drawing without replacement on each day. Drawing two cards without replacement is equivalent to drawing with replacement until we've seen two different cards. If we do this, each draw after the first one has an $\frac{n-1}n$ chance of being different from the first, so the expected number of draws per day is $1+\frac n{n-1}=2+\frac1{n-1}$. At this rate, it takes about $4185/(2+\frac1{599})\approx2090.75$ days to see all the cards.
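(A Monte Carlo sanity check I added, in the spirit of the asker's Python simulations; the restaurant is modeled as two distinct uniform cards per day.)

import random

def days_to_collect(n=600, trials=200):
    total = 0
    for _ in range(trials):
        seen, days = set(), 0
        while len(seen) < n:
            seen.update(random.sample(range(n), 2))  # two distinct cards per day
            days += 1
        total += days
    return total / trials

print(days_to_collect())  # fluctuates around the predicted ~2090.75 days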
{ "language": "en", "url": "https://math.stackexchange.com/questions/4343891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
A clarification of this model theory question regarding when two theories are the same I was told in an answer to a previous question that for all non-zero real numbers $r$ and $r'$, the theories of the structures $(\mathbb{R};+,r)$ and $(\mathbb{R};+,r')$ are the same. But how can this be? For example, the theory of $(\mathbb{R};+,2)$ is different from the theory of $(\mathbb{R};+,3)$, because, for example, the first contains the statement $2=2$ while the second one doesn't, because it doesn't have a $2$ symbol in the first place. So, are those theories really the same, or merely equivalent, and if so, what is the definition of two theories being equivalent? I would like a clarification of this issue.
Neither theory contains the statement $2 = 2$ because $2 = 2$ is not a statement in the vocabulary. You have discussed structures, but every structure is a structure over some language. The language here consists of one binary operator (which we'll call $+$) and one constant symbol (which we'll call $c$). The informal statement $2 = 2$ corresponds to the formal proposition $c = c$, which is clearly satisfied by both structures. Because there is an isomorphism between the structures $(\mathbb{R}; +, r)$ and $(\mathbb{R}; +, r')$, they have the same theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it true that an open set and its closure have same boundaries? I think the proposition is true. Here is what I thought. Let $\Omega$ be an open set in a metric space $X$. $\partial\Omega=\overline{\Omega}\setminus\Omega^\circ=\overline{\Omega}\setminus\Omega$ since $\Omega$ is open. On the other hand, $\partial\overline{\Omega}=\overline{\overline{\Omega}}\setminus{\overline{\Omega}}^\circ=\overline{\Omega}\setminus{\overline{\Omega}}^\circ$ by the well-known theorem that the closure is closed. Now we prove ${\overline{\Omega}}^\circ=\Omega$. Since $\overline{\Omega}=\partial\Omega\cup\Omega$, our purpose is to prove no interior point of $\overline{\Omega}$ lies in $\partial\Omega$. If so, there is a point $x\in\overline{\Omega}$ and its neighborhood $U(x,\delta)\subset\partial\Omega$, which implies $U(x,\delta)\cap\Omega=\emptyset$. That contradicts the fact that $x$ is a limit point of $\Omega$. ${\overline{\Omega}}^\circ=\Omega$ is true. In conclusion, $\partial\Omega=\overline{\Omega}\setminus\Omega=\overline{\Omega}\setminus{\overline{\Omega}}^\circ=\partial\overline{\Omega}$. The proof is complete. Please help me to check whether the proof above is correct or not? Thank you all very much.
${\overline{\Omega}}^\circ=\Omega$ is false. In your argument you only have $U(x,\delta) \subset \overline {\Omega}$, not $U(x,\delta) \subset \partial {\Omega}$. For a counterexample: $\mathbb R \setminus \{0\}$ is an open set whose boundary is $\{0\}$, but its closure is $\mathbb R$, whose boundary is empty.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many ways to deal with the integral $\int \frac{d x}{\sqrt{1+x}-\sqrt{1-x}}$? I tackle the integral by rationalization on the integrand first. $$ \frac{1}{\sqrt{1+x}-\sqrt{1-x}}=\frac{\sqrt{1+x}+\sqrt{1-x}}{2 x} $$ Then splitting into two simpler integrals yields $$ \int \frac{d x}{\sqrt{1+x}-\sqrt{1-x}}=\frac{1}{2}\left [\underbrace{\int\frac{\sqrt{1+x}}{x} d x}_{J} +\underbrace{\int\frac{\sqrt{1-x}}{x} d x}_{K}\right] $$ To deal with $J$, we use rationalization instead of substitution. $$ \begin{aligned} J &=\int \frac{\sqrt{1+x}}{x} d x \\ &=\int \frac{1+x}{x \sqrt{1+x}} d x \\ &=2 \int\left(\frac{1}{x}+1\right) d(\sqrt{1+x}) \\ &=2 \int \frac{d(\sqrt{1+x})}{x}+2 \sqrt{1+x} \\ &=2 \int \frac{d(\sqrt{1+x})}{(\sqrt{1+x})^{2}-1}+2 \sqrt{1+x} \\ &=\ln \left|\frac{\sqrt{1+x}-1}{\sqrt{1+x}+1} \right| +2 \sqrt{1+x}+C_{1} \end{aligned} $$ $\text {Replacing } x \text { by } -x \text { yields }$ $$ \begin{array}{l} \\ \displaystyle K=\int \frac{\sqrt{1-x}}{-x}(-d x)=\ln \left|\frac{\sqrt{1-x}-1}{\sqrt{1-x}+1}\right|+2 \sqrt{1-x}+C_{2} \end{array} $$ Now we can conclude that $$ I=\sqrt{1+x}+\sqrt{1-x}+\frac{1}{2}\left(\ln \left|\frac{\sqrt{1+x}-1}{\sqrt{1+x}+1}\right|+\ln \left|\frac{\sqrt{1-x}-1}{\sqrt{1-x}+1}\right|\right)+C $$ My question is whether there are any simpler methods, such as integration by parts, trigonometric substitution, etc. Please share them if you have any. Thank you for your attention.
Here is another approach: $$\int\dfrac{\sqrt{1+x}+\sqrt{1-x}}{2x}dx=\int\dfrac{\sqrt{2+2\sqrt{1-x^2}}}{2x}dx=\frac1{\sqrt2}\int\dfrac{\sqrt{1+\sqrt{1-x^2}}}xdx$$ Let $x=\sin2\theta$: $$\int\dfrac{\cos\theta}{\sin2\theta}\times2\cos2\theta \;d\theta=\int\dfrac{\cos2\theta}{\sin\theta}d\theta=\int\csc\theta-2\sin\theta \;d\theta$$$$=\ln|\csc\theta-\cot\theta|+2\cos\theta+C$$ Now we need to express the result in terms of $x$: $\cos2\theta=\sqrt{1-x^2}=2\cos^2\theta-1\;\Rightarrow\;2\cos\theta=\sqrt{2+2\sqrt{1-x^2}}$ $\csc\theta-\cot\theta=\dfrac{1-\cos\theta}{\sin\theta}=\dfrac{2\cos\theta-\cos2\theta-1}{\sin2\theta}=\dfrac{\sqrt{2+2\sqrt{1-x^2}}-\sqrt{1-x^2}-1}{x}$ Hence, $$\int\dfrac{\sqrt{1+x}+\sqrt{1-x}}{2x}dx=\sqrt{2+2\sqrt{1-x^2}}+\ln\left|\dfrac{\sqrt{2+2\sqrt{1-x^2}}-\sqrt{1-x^2}-1}{x}\right|+C$$
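(A verification I added, not part of the original answer.) Differentiating this antiderivative numerically recovers the integrand:

from mpmath import mp, diff, sqrt, log

mp.dps = 30
G = lambda x: sqrt(2 + 2*sqrt(1 - x**2)) + log(abs((sqrt(2 + 2*sqrt(1 - x**2)) - sqrt(1 - x**2) - 1) / x))
f = lambda x: 1 / (sqrt(1 + x) - sqrt(1 - x))
for x0 in (0.3, 0.5, 0.8):
    print(diff(G, x0) - f(x0))  # each difference is ~0 to working precision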
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
What is the characteristic equation of $a_{n+2}+2a_{n}=0$? So if we let $a_{n}=Cr^n$ then we have $Cr^{n+2}+2Cr^n=Cr^n(r^2+2)$. So I got that the characteristic equation is $r^2+2=0$, but apparently it should be $r^2+2r=0$. How is that?
The characteristic equation of $$a_{n + 2} + \alpha a_{n + 1} + \beta a_n = 0$$ is $$r^2 + \alpha r + \beta = 0$$ In your case $\alpha = 0$ and $\beta =2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\mathbb E[ |X| ] < \infty \iff \forall \epsilon : \mathbb E[ |X / \epsilon | ] < \infty$ $X$ is a random variable. $$ \mathbb E ( | X | ) < \infty \implies \forall \epsilon > 0, \sum_n \mathbb P ( | X | \geq n \epsilon ) < \infty. $$ Any help to prove this? (This amounts to proving that $\mathbb E[ |X| ] < \infty \iff \forall \epsilon : \mathbb E[ |X / \epsilon | ] < \infty$.) ---------------------------------------------------- original post Since I know that $$X \in L^1 \iff \sum \mathbb P ( | X | \geq n) < \infty$$ I was given the hint to try to prove the statement hereinafter: Let us say that $X$ is a random variable. $$\forall \epsilon > 0, \exists K>0, \forall n \in \mathbb N: \mathbb P( X \geq n \epsilon ) \leq K \mathbb P ( X \geq n), $$ is a useful fact that I need in some proof from my probability lecture; however, I am quite unsure about how to prove such a statement. Any idea? ---------------------------------------------------- comment I guess that the original post is completely wrong because one cannot bound each term (cf. Ian's comment); however, bounding the whole sum is doable somehow.
Using the monotone convergence theorem, for any $\epsilon>0$, $$ \mathsf{E}|X|=\lim_{M\to\infty}\mathsf{E}[|X|\wedge M]=\epsilon\lim_{M\to\infty}\mathsf{E}[|X/\epsilon|\wedge M/\epsilon]=\epsilon\,\mathsf{E}|X/\epsilon|. $$ Thus, $$ \mathsf{E}|X|<\infty \Leftrightarrow \mathsf{E}|X/\epsilon|<\infty \Leftrightarrow \sum_{n}\mathsf{P}(|X/\epsilon|\ge n)<\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find the spectrum of $(Ff)(x)=\int_{-\pi}^{\pi} (1 + \cos(x-y))f(y)dy$ As the title suggests, we have an operator $T: L^2 [-\pi,\pi] \to L^2[-\pi,\pi]$ that sends $f \mapsto \int_{-\pi}^{\pi} (1 + \cos(x-y))f(y)dy$. I have read a couple of related posts, but unfortunately I am not sure how to proceed. Most of what I read either worked with Fourier transforms (I see that if $K(x)=1+\cos(x)$, then the expression we have is $K$ convolved with $f$) or directly by finding the point spectrum and invoking some property whereby doing this sufficed to compute the whole spectrum. I did not succeed with the first one, and after that went on to try to compute the point spectrum. What I get is that $\pi$ and $2\pi$ form the (point) spectrum, and any function of the form $ b_1 \cos(t) + b_2 \sin(t)$ and $b_0$ (constant) are their corresponding eigenfunctions, respectively. To do this I just worked out the expression, using the fact that we can rewrite $\cos(x-y)$ as $\cos(x)\cos(y)+\sin(x)\sin(y)$. From here I got that the image of our map can be written as $a_1 + a_2\cos(t) + a_3 \sin(t)$, from which I deduced that these are the only eigenvalues after some computations. What approach do you suggest? Does my operator have any special properties (like compactness or self-adjointness) from which to conclude this is it? Is the Fourier transform easier here? Thanks a lot!
@cmk is right. Look at that post. But if you want a more specific answer for your specific convolution kernel, here goes. (But please then look at the link that cmk suggested afterwards.) Take $\psi_k(x) = e^{i k x}$. Then $F \psi_k = \lambda_k \psi_k$ where $\lambda_0=2\pi$, $\lambda_1=\lambda_{-1}=\pi$ and $\lambda_k=0$ for $k \not\in \{-1,0,1\}$. That is a complete orthonormal basis. So the spectrum is pure-point. It is a rank-3 self-adjoint operator. Just integrate to get these results: multiplying by $(2\pi)^{-1}$ you get $$ \frac{1}{2\pi} \int_{-\pi}^{\pi} \left(1+\frac{1}{2} e^{i x} e^{-iy} + \frac{1}{2} e^{-ix} e^{iy}\right) e^{i k y}\, dy\, , $$ which is equal to the expression utilizing $L^2$-inner products $$ \langle \psi_0\, ,\ \psi_k\rangle\, \psi_0(x) + \frac{1}{2} \langle \psi_1\, ,\ \psi_k\rangle\, \psi_1(x) + \frac{1}{2} \langle \psi_{-1}\, ,\ \psi_k\rangle\, \psi_{-1}(x)\, , $$ using the mathematicians' convention for sesquilinearity.
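(A discretized check I added.) Approximating the operator by a quadrature matrix on a uniform grid reproduces the spectrum $\{2\pi,\pi,\pi,0,\dots\}$:

import numpy as np

N = 400
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
h = 2 * np.pi / N
M = (1 + np.cos(x[:, None] - x[None, :])) * h  # Nystrom discretization of F
ev = np.sort(np.linalg.eigvalsh(M))[::-1]
print(ev[:4])  # ~ [6.2832, 3.1416, 3.1416, 0.0], i.e. 2*pi, pi, pi, 0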
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A proof of $|\mathbb{N}| = |\mathbb{N} \times \mathbb{N}|$ $\newcommand{\N}{\mathbb{N}}$ There are various versions of the proof of 'there is a bijection from $\N$ to $\N \times \N$.' But only one of them suits my taste: Make a list $$ \begin{aligned} &(1,1),\\ &(1,2), (2,1)\\ &(1,3), (2,2),... \end{aligned} $$ and index them by $\N = \{1,2,\cdots\}$. The sum of the elements of each pair equals the index of the corresponding layer plus $1$. It provides the key idea, but I feel I need to explicitly show that it really works. Please check if the following is valid. First, let us restrict the domain of discourse to natural numbers. I defined a function $f: \N \to \N^2$ with $$ \forall m :\forall n \le m:f\left(\frac{(m-1)m}{2} + n\right) = (n,m-n+1) $$ where $m$ is the index for each layer of the list and $n$ is the column index. $(m-1)m/2$ can be thought of as the accumulated number of elements from the first layer to the layer $m-1$. Now, we have to show $f$ is bijective. * *(surjective) I am going to show that $\forall a, b: \exists k:f(k) = (a,b)$. Let $n = a$, and $m-n+1 = b$. Then, $$ m = n + b - 1 \ge n $$ Letting $ k = (a + b - 2)(a + b - 1)/2 + a$ shows $f$ is surjective. *(uniqueness of $m$ and $n$ for each argument $k$ of $f$) First, we show that $$ \forall k: \exists!m,n: \left(n \le m \land k = \frac{(m-1)m}{2} + n\right) $$ It is easy to show that $$ \left\{\left.\left(\frac{(m-1)m}{2}\right.,\left.\frac{m(m+1)}{2}\right]\cap\N ~\right|~m\in\N \right\} $$ partitions $\N$. Therefore, for arbitrary $k$, there is a unique $m$ such that $$ \frac{(m-1)m}{2} < k \le \frac{m(m+1)}{2} \text{ or equivalently, } 0 < k - \frac{(m-1)m}{2} \le m $$ Using this result, let $n = k - {(m-1)m}/{2} $. *(injective) It follows from 2) that for each $a, b \in \N$ there is a unique $k$ such that $f(k) = (a,b)$. The proof is quite verbose, but I can only be content with this level of detail.
All the details are in the Wikipedia page for this Cantor's pairing. It even derives an inverse. But IMO $f(n,m)=2^{n-1}\cdot (2m-1)$ is a simpler bijection from $\Bbb N \times \Bbb N$ to $\Bbb N$ ($\Bbb N=\{1,2,3,\ldots\}$) (from my high school days). It's based on the simple observation that every number is a power of $2$ (possibly $2^0=1$) times an odd number in a unique way.
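(An illustration I added.) The odd-times-power-of-two bijection and its inverse are two short functions, and the claim is easy to test exhaustively on an initial segment:

def f(n, m):
    return 2**(n - 1) * (2*m - 1)  # power of two times an odd number

def f_inverse(k):
    n = 1
    while k % 2 == 0:  # strip factors of 2
        k //= 2
        n += 1
    return n, (k + 1) // 2

assert all(f_inverse(f(n, m)) == (n, m) for n in range(1, 30) for m in range(1, 30))
assert all(f(*f_inverse(k)) == k for k in range(1, 10**5 + 1))
print("f and f_inverse are mutually inverse on the tested range")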
{ "language": "en", "url": "https://math.stackexchange.com/questions/4344878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is the space of differential forms $\bigoplus_{p=0}^n \Lambda_x^p$? In Wald's book "General Relativity", the space $\Lambda_x$ of differential forms at a point $x$ is worked out in the following manner: Let $M$ be an $n$-dimensional manifold. The vector space of all $p$-forms at a point $x \in M$ is given by $\Lambda_x^p$. The vector space $\Lambda_x$ of all differential forms (not limited to a specified degree $p$) is given by the direct sum: $$ \Lambda_x =\bigoplus_{p=0}^n \Lambda_x^p $$ However, from my understanding, the direct sum $A \oplus B$ of two spaces $A$ and $B$ consists of all ordered pairs $(a,b)$ where $a \in A$ and $b \in B$, with the additional structure: $$(a_1, b_1) + (a_2, b_2) = (a_1 + a_2, b_1 + b_2)$$ Wouldn't this mean that an element $\omega \in \Lambda_x$ would take the form: $$ \omega = (\omega_0, \omega_1, \dots , \omega_n) $$ where $\omega_0$ is a 0-form, $\omega_1$ a 1-form, and so on? This to me doesn't seem to be the space of all differential forms at $x$, unless all $\omega_i$ are 0 except one for each $\omega$. In fact, it seems to be a much larger space, since it can contain multiple differential forms of different degrees. My first guess was that a $p$-form and a $q$-form would be combined via the map $ \bigwedge: \Lambda^p_x \times \Lambda^q_x \mapsto \Lambda^{p+q}_x $, but then we are no longer limited to the degree $n$. Why is this written as a direct sum rather than, for example, a union? $$ \Lambda_x = \bigcup_{p=0}^n \Lambda_x^p $$
I would consider it as the space of formal sums of differential forms. So an element in the space might be $3dx_1 \wedge dx_2 \wedge dx_3 - 19dx_1 \wedge dx_2 + 4$. The operations like $\wedge$ and addition still make sense if you extend them by linearity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4345065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Conditional expectation of this stochastic process? I'm just beginning to learn about stochastic processes and encountered this very elementary problem that confused me a bit: We toss a coin that lands on Head with probability $p$ and Tail with $q=1-p$. This probability never changes. We define $W_n$ to be $-1$ when the $n$-th toss is Tail and $1$ if Head. The problem was to determine whether or not $(W_n)_n$ is a martingale. While solving, the lecturer writes this: $$\mathbb{E}(W_{n+1}\vert\mathcal{F}_n)=\mathbb{E}(W_{n+1})=p-q$$ Here $\mathcal{F}_n=\sigma(W_1,\dots,W_n)$. He argues that since the $n$-th toss and the $n+1$-th toss are independent, $W_{n+1}$ and $\mathcal{F}_n$ are independent, which is why the above holds. I get this. But I also think that the $W_n$'s are defined on the space $(\{H,T\},\{\emptyset,\{H\},\{T\},\{H,T\}\},\mathbb{P})$ where $\mathbb{P}(H)=p$ and $\mathbb{P}(T)=q$. So $\sigma(W_1,\dots,W_n)$ is the entire $\sigma$-algebra $\{\emptyset,\{H\},\{T\},\{H,T\}\}$. But then $$\mathbb{E}(W_{n+1}\vert\mathcal{F}_n)=W_{n+1}$$ Could someone please explain to me what I'm doing wrong? This will really help me learn about the topic. P.S.: I also posted this on Cross Validated. I hope I am not violating any community guidelines by doing this.
When you define the $W_n$ stochastic process you're not just defining it on the space $\{H,T\}$, because that space would only contain information on one coin toss. You define it on the space of all possible outcomes of all coin tosses. Your space is $\Omega = \{\omega = (\omega_1, \omega_2, \dots) | \;\; \omega_i \in \{H, T\}\}$. Therefore $\sigma(W_1, \dots, W_n)$ is not the entire $\sigma$-algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4345205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Gradient of $\lVert A(X) - b\rVert^2$ with $A$ a linear operator Let $X\in\mathbb{R}^{p\times m}$, $b\in\mathbb{R}^{k}$ and let $A:\mathbb{R}^{p\times m}\to\mathbb{R}^k$ be a linear operator. I need to compute the gradient with respect to $X$ of $f(X) = \lVert A(X) - b\rVert ^2$ and my claim is that $\nabla f(X) = 2\cdot A^T(A(X) - b)$, where $A^T$ is the transpose (adjoint) of $A$. However, I cannot prove it in a simple way that does not involve Fréchet derivatives. Is there a simple way to prove it (if my claim is correct)?
I am posting the answer I got thanks to @Fei Cao's hint. Let us consider $f(X) = \frac{1}{2}\lVert A(X)-b\rVert^2_2$. Then \begin{equation} \nabla f(X) = A^T(A(X) - b) \end{equation} Indeed for $H\in\mathbb{R}^{p\times m}$ it holds: \begin{align*} &f(X+H) - f(X) - \langle A^T(A(X) - b), H\rangle = \frac{1}{2}\lVert A(X) - b + A(H)\rVert^2_2 - \frac{1}{2}\lVert A(X)-b\rVert^2_2 - \langle A(X) - b, A(H)\rangle = \\ &= \frac{1}{2}\left[\lVert A(X) - b\rVert_2^2 + \lVert A(H)\rVert_2^2 + 2\langle A(X) - b, A(H)\rangle\right] - \frac{1}{2}\lVert A(X)-b\rVert^2_2 - \langle A(X) - b, A(H)\rangle = \\ & = \frac{1}{2}\lVert A(H)\rVert_2^2 = o(\lVert H\rVert) \qquad \text{for $H\to 0$} \end{align*}
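(A finite-difference check I added; representing $A$ by an arbitrary matrix acting on the vectorized $X$ is an assumption made only for testing.)

import numpy as np

rng = np.random.default_rng(0)
p, m, k = 4, 3, 5
A = rng.normal(size=(k, p*m))   # a generic linear operator on vec(X)
b = rng.normal(size=k)
X = rng.normal(size=p*m)

f = lambda Z: 0.5 * np.sum((A @ Z - b)**2)
grad = A.T @ (A @ X - b)        # the gradient derived above

eps = 1e-6
num = np.array([(f(X + eps*e) - f(X - eps*e)) / (2*eps) for e in np.eye(p*m)])
print(np.max(np.abs(num - grad)))  # ~1e-9, confirming the formula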
{ "language": "en", "url": "https://math.stackexchange.com/questions/4345371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Proof of Cauchy-Schwarz inequality using a particular result from orthonormal sets Let $\{e_1, ... , e_k\}$ be an orthonormal set in a finite dimensional real inner product space $V$. If $ v \in V$, I have shown the following result: $\sum_{i=1}^{k} |\langle v,e_i\rangle|^{2} \leq ||v||^{2} $ I am trying to prove the Cauchy-Schwarz inequality based on this property. There exists an orthonormal basis $\{e_1,...,e_n\}$ of $V$, and for $v,w \in V$ we therefore have that there exist $a_1,...,a_n,b_1,...,b_n \in \mathbb{R} $ such that $ \sum_{i=1}^{n} a_ie_i = v$ and $ \sum_{i=1}^{n} b_ie_i = w $. Then $$|\langle v,w\rangle| = \left|\left\langle v,\sum_{i=1}^{n} b_ie_i\right\rangle\right| = \left|\sum_{i=1}^{n} b_i\langle v,e_i\rangle\right|.$$ I get stuck from here though.
From your last line, apply the Cauchy-Schwarz inequality for sums of real numbers: $$|\langle v,w\rangle|^2\le \sum_{i=1}^n b_i^2\,\sum_{i=1}^n \langle v,e_i\rangle^2 \le \sum_{i=1}^n b_i^2\,\|v\|^2 = \sum_{i=1}^n \langle w,e_i\rangle^2\,\|v\|^2 \le \|w\|^2\|v\|^2 \implies |\langle v,w\rangle|\le \|v\|\,\|w\|.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4345545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
find the number of irrational roots of $\frac{4x}{x^2 + x + 3} + \frac{5x}{x^2 - 5x + 3} = -\frac{3}{2}$ Find the number of irrational roots of the equation $$\dfrac{4x}{x^2 + x + 3} + \dfrac{5x}{x^2 - 5x + 3} = -\dfrac{3}{2}.$$ I got a solution: divide both denominator and numerator by $x$. Let $x+\dfrac{3}{x} = y$. Then the equation becomes $$\dfrac{4}{y + 1} + \dfrac{5}{y - 5} = -\dfrac{3}{2}.$$ Now simplifying this we get $y = -5, 3$. Finally, $x+\dfrac{3}{x} = -5$ has 2 irrational roots, and $x+\dfrac{3}{x} = 3$ has 2 imaginary roots. But my question is, since this question is for a competitive exam, is there any other quick approach to solve this question?
Here is another way to solve this problem. It is up to you and @dxiv to decide whether this method is longer or shorter than yours. We start by shifting the $2^{\text{nd}}$ term on the LHS of the given identity to its RHS. $$\dfrac{4x}{x^2 + x + 3} = -\dfrac{3}{2}-\dfrac{5x}{x^2 - 5x + 3}$$ When the RHS is simplified, we have $$\dfrac{4x}{x^2 + x + 3} = -\dfrac{3x^2 - 5x + 9}{2x^2 - 10x + 6}.$$ Now, we add the denominators to the respective numerators to get $$\dfrac{x^2 + 5x + 3}{x^2 + x + 3} = \dfrac{-x^2 - 5x - 3}{2x^2 - 10x + 6}.$$ Since the numerators are negatives of each other, cross-multiplying gives $(x^2+5x+3)\left[(x^2+x+3)+(2x^2-10x+6)\right]=0$, i.e. $$ x^2 + 5x + 3 = 0\qquad\text{or}\qquad x^2 -3 x + 3=0.$$ The first quadratic equation gives us two irrational roots, while the second gives two imaginary roots.
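(A CAS confirmation I added.) Either derivation is quickly double-checked:

import sympy as sp

x = sp.symbols('x')
eq = sp.Eq(4*x/(x**2 + x + 3) + 5*x/(x**2 - 5*x + 3), sp.Rational(-3, 2))
print(sp.solve(eq, x))
# two real irrational roots (-5 +- sqrt(13))/2 and two imaginary roots (3 +- sqrt(3)*I)/2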
{ "language": "en", "url": "https://math.stackexchange.com/questions/4345694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find an optimal allocation of groups in an array Say we have an $n\times m$ array with $n$ and $m$ odd, and a list of positive integers $(a_1,\dots, a_k)$. Each $a_i$ represents a number of elements which are to be allocated together in a row of the array. Each slot of the array has a value $1+(\text{rows from the middle row})+(\text{columns from the middle column})$, so that the central slot has value $1$ and it increases by $1$ every time we move one row or column farther from it. I would like to find the allocation that minimizes the sum of values of slots in which there is an allocated element, with the restrictions that each group must be allocated together in the same row, and that two groups in the same row must be separated by an empty slot. For instance, given a $3\times 3$ array and the integers $(1,2,3)$ an optimal allocation would look like $$ \begin{array}{|c|c|c|} \hline & 1 & \\ \hline 3 & 3 & 3\\ \hline 2 & 2 & \\ \hline \end{array} $$ where the number corresponds to the integer in the list. In this case, the total value is computed as $$ \begin{array}{|c|c|c|} \hline & 2+ & \\ \hline 2+ & 1+ & 2+\\ \hline 3+ & 2+ & \\ \hline \end{array}=12 $$ A case in which I need to put two groups in the same row would be for instance $$ \begin{array}{|c|c|c|c|} \hline 4& 4 &4& 4& \\ \hline 5 & 5 & 5&5&5\\ \hline & 2 & 2 & &1 \\ \hline \end{array} $$ My thoughts I think this problem can be modelled as an integer programming problem. The main difficulty that I find is that the allocation occurs in groups (the elements corresponding to an integer in the list must be allocated together) but the values are individual, so I am unsure what variables to use. For instance, I could define $x_{ij}$ to be the value of the $(i,j)$ slot of the array and $y_{ij}=1$ if there is an element in $(i,j)$ ($0$ otherwise). But then I would have to somehow take into account the groups. I could introduce some variable $z_{ijk}$ telling whether the group $i$ (corresponding to $a_i$) is allocated in the row $j$ starting at column $k$. However, I haven't been able to work out the constraints. Another strategy that I have thought of is to find a heuristic to obtain some optimal allocation. I think the biggest groups should be allocated centered in the middle rows, and then if there is space left for some smaller groups on the sides, then we can allocate them there. However, I am not completely sure that this strategy works. Can someone give me any ideas on how to approach this problem?
The following is an integer linear programming formulation of the problem. For any $i \in \{1, 2, \dots, k\}$, we define the variables $r_i$ and $c_i$ such that the slots for group $i$ are at row $r_i$, starting from column $c_i$ to column $c_i + a_i - 1$. It is straightforward to define the objective function of the ILP using the variables $r_1, c_1, r_2, c_2, \dots, r_k, c_k$. As for the constraints, we can also easily find the constraints to limit the group within the $n \times m$ array: \begin{align*} 1 \leq r_i &\leq n,\\ 1 \leq c_i &\leq m + 1 - a_i. \end{align*} We also have the "separation" restriction: if group $i$ and group $j$ are in the same row, then they must be separated by a slot (and cannot overlap). The relevant constraints would be \begin{align*} c_i - c_j + 2m|r_i - r_j| &\geq a_j + 1,\\ c_j - c_i + 2m|r_i - r_j| &\geq a_i + 1. \end{align*} To see why these constraints work, consider the following cases: * *Suppose the two groups are in different rows. Then $|r_i - r_j| \geq 1$. In this case, the first constraint always holds for any $c_i$ and $c_j$, because $$c_i - c_j + 2m|r_i - r_j| \geq 1 - m + 2m\cdot 1 = m + 1 \geq a_j + 1.$$ (The first inequality also uses the fact $c_i, c_j \in [1, m]$, while the last inequality assumes that $a_j \leq m$, i.e., a group cannot take more slots than the number of columns in the array.) The same conclusion also applies for the second constraint. *Suppose the two groups are in the same row. Then $|r_i - r_j| = 0$. So, the constraints can be simplified to \begin{align*} c_i - c_j &\geq a_j + 1,\\ c_j - c_i &\geq a_i + 1. \end{align*} These two inequalities cannot both hold (adding them would give $0 \geq a_i + a_j + 2$); the "separation" restriction is satisfied iff at least one of them holds. So this pair must be imposed as a disjunction: introduce a binary variable per pair of groups and big-$M$ terms so that at least one of the two inequalities is enforced. The absolute values $|r_i - r_j|$ in the constraints can be removed with the same standard big-$M$/binary-variable linearization techniques.
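(A brute-force validator I added; it is a sketch for small instances only, but useful for testing any ILP model against the problem definition.)

from itertools import product

def slot_value(i, j, n, m):
    # 1 + rows from the middle row + columns from the middle column
    return 1 + abs(i - n // 2) + abs(j - m // 2)

def min_cost(n, m, sizes):
    best = None
    choices = [[(r, c) for r in range(n) for c in range(m - a + 1)] for a in sizes]
    for placement in product(*choices):
        rows = {}
        for (r, c), a in zip(placement, sizes):
            rows.setdefault(r, []).append((c, a))
        if any(c2 < c1 + a1 + 1  # same-row groups need an empty slot between them
               for ivs in rows.values()
               for (c1, a1), (c2, a2) in zip(sorted(ivs), sorted(ivs)[1:])):
            continue
        cost = sum(slot_value(r, j, n, m)
                   for (r, c), a in zip(placement, sizes)
                   for j in range(c, c + a))
        best = cost if best is None else min(best, cost)
    return best

print(min_cost(3, 3, [1, 2, 3]))  # 12, matching the worked example in the question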
{ "language": "en", "url": "https://math.stackexchange.com/questions/4345872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Methodology to solve trigonometric equations such as $4x+\tan(x)=\pi$ I have to solve the following trigonometric equation: $$ 4\cdot \arctan(\omega_{180}) \ + \ \omega_{180} \ = \ \pi $$ which by setting $\arctan(\omega_{180}) = x$ becomes: $$ 4x \ + \ \tan(x) \ - \ \pi \ = \ 0 $$ In order to solve the latter equation in terms of $x$, I tried to replace the term $\tan(x)$ by its Taylor series approximation via $\sin(x)$ and $\cos(x)$ using only the first two terms, but the derived results were not correct. Anyway, I used MATLAB to solve it and it did solve it. The correct result is: $$ x = 0.61048 \Rightarrow \omega_{180} = 0.7 \text{ rad/sec} $$ What I would like to ask is: in which ways can I solve this equation by hand? I don't want a complete solution as an answer but rather the methodologies I could study and then apply to such trigonometric equations.
Since it is transcendental, there is no explicit expression for the zero of function $$f(x)=4x+\tan(x)-\pi$$ and numerical methods (or approximations) are required. Being very lazy, expand $f(x)$ as a series around $x=0$ and obtain $$f(x)=-\pi +5 x+\frac{x^3}{3}+\frac{2 x^5}{15}+\frac{17 x^7}{315}+\frac{62 x^9}{2835}+\frac{1382 x^{11}}{155925}+O\left(x^{13}\right)$$ Now, using series reversion, $$x=t-\frac{t^3}{15}-\frac{t^5}{75}-\frac{t^7}{7875}+\frac{67 t^9}{70875}+\frac{5017 t^{11}}{19490625}+O\left(t^{13}\right)$$ where $t=\frac{f(x)+\pi}{5}$. Since we desire $f(x)=0$ then $t=\frac \pi 5$. Replacing, the estimate is then $$x \sim \color{red}{0.610487}235$$ while the exact solution given by Newton method is $0.610487119$ Just for the fun of doing it almost by hand, all of the above was done using my $50$ years old non-programmable calculator.
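(A one-line numerical alternative I added for completeness.) If only the numerical root is needed, a bracketing solver does it directly:

from math import pi, tan
from scipy.optimize import brentq

x = brentq(lambda t: 4*t + tan(t) - pi, 0.0, 1.5)  # tan is finite on [0, 1.5]
print(x, tan(x))  # 0.6104871..., and omega_180 = tan(x) ~ 0.700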
{ "language": "en", "url": "https://math.stackexchange.com/questions/4346011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving: if Q then (if P then Q) without dependencies (How Logic Works Exercise 3.2.3) I am working through Hans Halvorson's How Logic Works: A User's Guide and I am struggling to solve Exercise 3.2.2, which reads: Prove the following sequent: ⊢ Q → (P → Q) My proposed solution using the rules Halvorson has thus far presented (namely: conjunction elimination/introduction, disjunction intro, modus ponens, modus tollens, double negation, and conditional proof):

1    (1) Q → (P → Q)                     a
2    (2) Q                               a
1,2  (3) P → Q                           1,2 MP
1    (4) Q → (P → Q)                     2,3 CP
     (5) (Q → (P → Q)) → (Q → (P → Q))   1,4 CP

The use of a conditional proof (CP) on line 5 seems off to me, and obviously proving (Q → (P → Q)) → (Q → (P → Q)) is not the same as proving ⊢ Q → (P → Q). Any guidance on how to use CP more appropriately would be appreciated.
If $P = T, Q = T$, then $Q \to (P \to Q)=T$. If $P = T, Q = F$, then $Q\to (P\to Q)= F \to F = T$. If $P = F, Q = T$, then $Q\to (P\to Q)=T\to T = T$. Finally, if $P = F, Q = F$, then $Q\to (P\to Q)= F\to T = T$. Thus $Q\to (P\to Q) = T$ for any values of $P, Q$. Hence it is a tautology, proving the claim.
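(A scripted version of the same case check, which I added; note it verifies the tautology semantically and is not the natural-deduction derivation the exercise asks for.)

from itertools import product

implies = lambda a, b: (not a) or b
print(all(implies(Q, implies(P, Q)) for P, Q in product((True, False), repeat=2)))  # True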
{ "language": "en", "url": "https://math.stackexchange.com/questions/4346143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does partial differentiation give for a second degree equation which doesn't represent a conic? Question says: Find the real solutions to the equation $$3x^{2}+3y^{2}-4xy+10x-10y+10=0.$$ My first thought was to treat it as a general conic ($ax^2 + 2hxy + by^2 + 2gx + 2fy + c=0$), but when I do so, its discriminant is less than $0$. On plotting it on GeoGebra, it also shows only a point, and this is the same point we get after doing partial differentiation and solving for $x,y$. Generally for a conic, this $(x,y)$ represents the center of the conic. But for equations like these which don't represent a conic, how does doing partial differentiation and then solving give us the integer solution to it? If it is a center, then there should be a conic too. My friends said the conic lies in the complex plane, but shouldn't the center then also be in the complex plane? Sorry, I don't know much about conics which exist in both planes, so maybe what I wrote in the last 2 lines is completely wrong. Pardon me for that.
Multiplying by $\,3\,$ then "completing the square" for the quadratic in $\,x\,$: $$ \begin{align} & 9x^2+9y^2-12xy+30x-30y+30 \\ = \;\; &9x^2 - 6\, (2y-5)\,x \color{red}{+(2y-5)^2-(2y-5)^2} +9y^2-30y+30 \\ = \;\;&\big(3x-(2y-5)\big)^2 + 5y^2-10y+5 \\ = \;\;&(3 x - 2 y + 5)^2 + 5 (y-1)^2 \end{align} $$ It follows that the only real solution is $\,y=1, x=-1\,$. [ EDIT ] $\;$ The final equation can be written as $\,x'^{\,2}+ y'^{\,2}=0\,$ with $\,x' = 3x-2y+5\,$ and $\,y' = \sqrt{5} y-\sqrt{5}\,$, which is a degenerate conic, specifically two intersecting complex lines $\,x' \pm i\, y' = 0\,$ with the single common real point at $\,x' = y' = 0\,$. This is consistent with OP's findings of discriminant $\,\le 0\,$ and center of the conic at the unique real point.
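(A symbolic confirmation I added.)

import sympy as sp

x, y = sp.symbols('x y')
lhs = 3 * (3*x**2 + 3*y**2 - 4*x*y + 10*x - 10*y + 10)
rhs = (3*x - 2*y + 5)**2 + 5*(y - 1)**2
print(sp.expand(lhs - rhs))  # 0, so the completed-square identity holds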
{ "language": "en", "url": "https://math.stackexchange.com/questions/4346289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find all continuous functions from $(\mathbb{R},d_{2})$ to $(\mathbb{R},d_{disc})$ and all continuous functions the other way around The set of real numbers is equipped with two metrics: the standard metric $d_{2}(x,y)=|x-y|$ and the discrete metric $d_{disc}(x,y)=1$ if $x\neq y$ and $d_{disc}(x,y)=0$ if $x= y$. I have to find all continuous functions from $(\mathbb{R},d_{2})$ to $(\mathbb{R},d_{disc})$ and also all continuous functions from $(\mathbb{R},d_{disc})$ to $(\mathbb{R},d_{2})$. I do not know how to even start solving this. Any help?
Let $f:(\mathbb{R}, d_{disc}) \rightarrow (\mathbb{R}, d_2)$. Then you can show $f$ is continuous. (If the domain is endowed with the discrete metric, then any function defined on that domain is continuous, irrespective of the target metric space.) Proof: Choose any open set $U\in \tau(d_2)$; then $$f^{-1}(U)\subset \Bbb{R}.$$ And $\tau(d_{disc}) ={\scr{P}}(\Bbb{R})$; in other words, every subset of $\Bbb{R}$ in the discrete metric is open (also closed). Hence, $$f^{-1}(U)\in \tau(d_{disc}).$$ So, any function $f:(\mathbb{R}, d_{disc}) \rightarrow (\mathbb{R}, d_2)$ is continuous. Now, in the other direction: if $f:(\mathbb{R}, d_2) \rightarrow (\mathbb{R}, d_{disc})$ is continuous, then $f(\Bbb{R})$, being the continuous image of a connected set, must be connected in the space $(\mathbb{R}, d_{disc})$. And we know the only connected subsets of $(\mathbb{R}, d_{disc})$ are the one-point sets. (Check: if more than one point is included, then you can find a non-trivial proper clopen (both open and closed) subset.) Hence, $f(\Bbb{R}) =\{c\}$ for some $c\in \Bbb{R}$; that is, $f$ is a constant function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4346588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Dimension of maximal totally isotropic space over finite field with standard bilinear form Let $p$ be a prime and consider the vector space $\mathbb{F}_p^n$, equipped with the standard bilinear form $\langle x, y \rangle = \sum_{i=1}^n x_iy_i$. A subspace $U$ is said to be totally isotropic if $\langle u, v \rangle = 0$ for every two (not necessarily distinct) $u,v \in U$. What can we say about the dimension of a maximal totally isotropic subspace? A very well known upper bound is $\dim U \leq \lfloor \frac{n}{2} \rfloor$ and it follows from $\dim U + \dim U^{\perp} = n$ and $U \subseteq U^{\perp}$. But is this bound really attainable for $p>2$? For $p=2$ there is an easy example - take the subspace with basis $(1,1,0,\ldots,0)$, $(0,0,1,1,0,\ldots,0)$, $(0,0,0,0,1,1,0,\ldots,0)$, etc. If you mimic this for other $p$ (by taking chunks of $p$ consecutive $1$-s) the result would be of dimension $\lfloor \frac{n}{p} \rfloor$. So is the truth closer to $\lfloor \frac{n}{p} \rfloor$ or to $\lfloor \frac{n}{2} \rfloor$? If it is too hard to answer for all $p$, perhaps focusing on $p=3$ only could give good insight. Any help appreciated!
Supplementing reuns's excellent answer with the following. Result: If $n\equiv2\pmod4$ and $-1$ is not a square in the field $K=\Bbb{F}_q$, then the space $K^n$, equipped with the standard bilinear form, does not have an $n/2$-dimensional totally isotropic subspace. Proof. Write $k=n/2$, so $k$ is odd. Assume contrariwise that $V$ is a totally isotropic subspace. Let $g_1,g_2,\ldots,g_k$ be a basis of $V$. Let $A$ be the $k\times 2k$ matrix with rows $g_1,\ldots,g_k$. Without loss of generality we can assume that $A$ is in a reduced row echelon form. By shuffling the columns, if necessary, we can further assume that $A$ has the block form $$ A=\left(\,I_k\,\vert\, B\,\right) $$ where $B$ is some $k\times k$ matrix. The assumption about total isotropicity is equivalent to the requirement $AA^T=0_{k\times k}$. By expanding the matrix product in block form we see that this is equivalent to the requirement $$ BB^T=-I_k. \qquad(*) $$ Let's look at the determinants. On the left hand side of $(*)$ we have $(\det B)^2$. On the right hand side we have $(-1)^k$. As $-1$ was not assumed to be a square, and $k$ is odd, this is a contradiction.
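(An illustration I added for the smallest case covered by the result, $q=3$, $n=2$: brute force shows there is no nonzero isotropic vector at all.)

from itertools import product

p, n = 3, 2
isotropic = [v for v in product(range(p), repeat=n)
             if sum(c * c for c in v) % p == 0 and any(v)]
print(isotropic)  # [] -- so F_3^2 has no 1-dimensional totally isotropic subspace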
{ "language": "en", "url": "https://math.stackexchange.com/questions/4346736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How do I determine a polynomial $P$ of degree at most $3$ such that $1$ is a root of multiplicity $2$ and the remainder is $6X+2$ upon division by $X^2+1$? I tried writing it as $$P(X)=(X-1)^2(aX+b) + R(X)$$ and $$P(X)=(X^2+1)(cX+d) + 6X+2$$ and couldn't find a solution.
Since $1$ is a root of multiplicity $2$, in your first expression the remainder is $R(X)=0$. (Note also that the second expression uses $cX+d$, not $aX+b$.) Then equate the two expressions for $P(X)$: $$(X-1)^2(aX+b) =(X^2+1)(cX+d) + 6X+2$$ and compare coefficients.
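(A sketch I added that carries out the comparison of coefficients with sympy.)

import sympy as sp

X, a, b, c, d = sp.symbols('X a b c d')
eq = sp.expand((X - 1)**2 * (a*X + b) - ((X**2 + 1)*(c*X + d) + 6*X + 2))
sol = sp.solve([eq.coeff(X, k) for k in range(4)], [a, b, c, d])
print(sol)                                          # {a: 1, b: -3, c: 1, d: -5}
print(sp.expand((X - 1)**2 * (sol[a]*X + sol[b])))  # X**3 - 5*X**2 + 7*X - 3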
{ "language": "en", "url": "https://math.stackexchange.com/questions/4346862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that isometries of the hyperbolic plane map horocycles to horocycles Working with the half-plane Poincaré model, I need to prove that isometries of the hyperbolic plane map horocycles to horocycles. The definition I've been given of a horocycle is that of a curve that's perpendicular to every geodesic in an "ultraparallel family". An "ultraparallel family" is the set of all the geodesics that have a point $(x,0)$ as one of their endpoints. I've tried using the knowledge of the form of the isometries in this model, which is $$ \varphi(z) = \frac{az+b}{cz+d}, $$ but I haven't reached anything interesting. Up until now I had worked just with the disk model, so I'm at a loss. Thanks in advance for any help.
Having a more explicit description of horocycles helps here. In the disk model, horocycles are exactly the circles on the Riemann sphere contained in $\mathbb{D} = \{ z \mid |z| < 1\}$ and which are tangent to the circle $S = \{ z \mid |z| = 1\}$. The isometries of $\mathbb{D}$ in this model are the Möbius transformations that send $\mathbb{D}$ and $S$ to themselves. It is a standard lemma in complex analysis is that Möbius transformations map circles on the Riemann sphere to circles on the Riemann sphere. So under a hyperbolic isometry, any horocycle is mapped to a circle on the Riemann sphere contained in $\mathbb{D}$ and tangent to $S$, hence another horocycle. An addendum: with this definition of horocycles, a proof is to observe (a) ultraparallel families of geodesics are invariant under hyperbolic isometries, and (b) the property of being everywhere perpendicular to an ultraparallel family of geodesics is invariant under isometries. The first point follows from the description of isometries as Möbius transformations, the second from angle-preservation of isometries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The limit of a sequence of functions Let $f_n(x)=2n$ for $0 < x ≤ 1/n$, and $ f_n(x) = 1$ otherwise. Find the pointwise limit function $f$ on $[0, 1]$. I tried observing $f_n(x)$ for $n= 1,2,\ldots$. We see that the value of $f_n$ is $2n$ on $(0,1/n]$, and $1$ elsewhere. My question is, as the interval on which the function takes the value $2n$ is decreasing in size, and its size tends to $0$ as $n\rightarrow \infty$, will the pointwise limit be $1$? Edit: If $N$ is any natural number, for all $n$ greater than $N$, $1/n \leq 1/N$. So, $f_n(x)=2n$ for all $n\geq N$. Does this imply that there is no pointwise limit?
Hint: Let $x\in[0,1]$ be fixed. If $ x=0 $, then $ f_n(0)=1=f(0)$. If $ x>0 $, then for $ n $ large enough, $$0<\frac 1n <x \implies f_n(x)=1=f(x)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Let $X$ be a random variable with Cauchy distribution, compute the density function of $Y=\frac{1}{1+X^2}$ Let $X$ be a random variable with the Cauchy distribution. If $$Y=\frac{1}{1+X^2},$$ compute the density function of $Y$. I have tried to use some random variable transformation theorems with the functions $\frac{1}{1+X}$ and $X^2$, but I have not been able to get very far; any suggestion or help would be greatly appreciated.
$$P(Y \leq y)=P(X^{2} \geq \frac 1y -1)$$ $$=2P(X \geq \sqrt {\frac 1 y -1})$$ $$=2\int_{\sqrt {\frac 1 y -1}}^{\infty} \frac 1 {\pi} \frac 1 {1+t^{2}} dt.$$ Now just differentiate w.r.t. $y$.
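(Carrying out that differentiation with sympy -- my addition; it uses the standard Cauchy tail $P(X\ge t)=\frac12-\frac{\arctan t}{\pi}$.)

import sympy as sp

y = sp.symbols('y', positive=True)
t = sp.sqrt(1/y - 1)
F = 1 - (2/sp.pi) * sp.atan(t)  # = 2*P(X >= t), valid for 0 < y <= 1
print(sp.simplify(sp.diff(F, y)))
# equivalent to 1/(pi*sqrt(y)*sqrt(1 - y)): the arcsine density on (0, 1)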
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Atiyah-Macdonald Proposition 1.2 Proposition 1.2. Let $A$ be a ring $\neq 0$. Then the following are equivalent: i) $A$ is a field ii) the only ideals in $A$ are $0$ and $(1)$ iii) every homomorphism of $A$ into a non-zero ring $B$ is injective. I am confused by how ii) implies iii). Given a homomorphism of $A$ into a non-zero ring $B$, the kernel of the homomorphism is an ideal. So by ii), it has to be $0$ or $(1)$. If it is $0$, the homomorphism is injective. But why can't it be $(1)$?
Say we have that $\phi:A\rightarrow B$ is a homomorphism and $B$ is a non-zero ring. Then $\ker\phi$ is an ideal of $A$, so $\ker\phi=0$ or $A$. $\ker\phi\neq A$, since $\ker\phi=A$ would imply that $\phi(1_A)=0_B$; but $\phi(1_A)=1_B$, so this would give us that $1_B=0_B$, and the only ring where $1$ and $0$ coincide is the zero ring (and we assumed that $B$ is a non-zero ring). Thus, we conclude that $\ker\phi=0$, so $\phi$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is $f \mapsto (x\mapsto \int_0^x f\,\mathrm d\lambda)$ weak-strong continuous from $L^1([0,1])$ to $C([0,1])$? A mapping $T : L^1([0,1]) \to C([0,1])$ is called weak-strong continuous at a point $f\in L^1([0,1])$, if for every sequence $(f_n)_n$ of functions in $L^1([0,1])$ that converges weakly to $f$ $\left(\text{i.e.}~\forall g\in L^\infty([0,1]): \lim_{n\to\infty} \left| \int_0^1 f_n\cdot g\,\mathrm d\lambda - \int_0^1 f\cdot g\,\mathrm d\lambda \right| = 0\right)$, we have $$ \lim_{n\to\infty}\left\lVert T(f) - T(f_n) \right\rVert_\infty := \lim_{n\to\infty} \max_{x\in[0,1]} \left| T(f)(x) - T(f_n)(x) \right| = 0 .$$ I am interested in the Volterra operator $T:L^1([0,1]) \to C([0,1]), f\mapsto (x\mapsto \int_0^x f\,\mathrm d\lambda)$. Is it weak-strong continuous in the sense above? Edit: I have tried the standard "template" for a counterexample: $f_n = n \cdot \mathbf 1_{[0,1/n]}$ and $f \equiv 1$, but $f_n$ does not converge weakly to $f$ as the witness $g= \mathbf 1_{[0,1/2]}$ demonstrates.
To complete MaoWao's answer, here's how you can do without the Dunford-Pettis theorem: suppose that $f_n$ converges weakly to $f$ in $L^1$. Let $\epsilon >0$, then there exists $\delta >0$ such that $$\tag{1}\left|\int_A f(z)\, dz\right|\leq\int_A |f(z)|\, dz < \epsilon $$ whenever $|A|< \delta$. Now weak convergence implies that there exists $N$ such that $$\left|\int_x^y f_n(z)\, dz \right| < 2\epsilon$$ for every $n >N$ and every $x,y\in [0,1]$ with $|x-y|<\delta$. It remains to control the first $N$ terms and for this we can repeat the argument used for $(1)$: for every $n\leq N$, there exists $\delta_n>0$ such that $$\left|\int_x^y f_n(z)\, dz \right| <2\epsilon$$ for every $x,y \in [0,1]$ such that $|x-y| < \delta_n$. Now take $\delta' = \min(\delta, \min_{n\leq N} \delta_n)$. This implies that $$|Tf_n(x) - Tf_n(y)| =\left|\int_x^y f_n(z)\, dz \right|< 2\epsilon$$ for every $n$ whenever $|x-y|< \delta'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$|a_n-\alpha|<2\epsilon$ valid as a definition of convergence? My question concerns the proof below, taken from Ivan Wilde's notes on real analysis, which are freely available on the web. Is coming within $k\epsilon$ (for a fixed positive integer $k$) of a point as effective a proof of convergence as coming within $\epsilon$ of it? For $k\epsilon$ can be made arbitrarily small. Against - (and specifically regarding the question below) - since the reasoning always comes out at $2\epsilon$ it cannot ever meet the definition of convergence * *but then you could just substitute and redefine the variable at the last step. *is there a meaningful analogy between this and one of the better known proofs of the irrationality of $\surd 2$ whereby the numerator of $\frac{p}{q}$ is always even, and it is that which provides the contradiction. THEOREM Suppose that $(a_n)$, $n ∈ \mathbb N$, is a sequence such that $a_n → α$ and also $a_n → β$ as $n → ∞$. Then $α = β$; that is, a convergent sequence has a unique limit. PROOF Let $ε > 0$ be given. Since we know that $a_n → α$, we are assured that eventually $(a_n)$ is within $ε$ of $α$. Thus, $\exists\;N_1 ∈ \mathbb N: n > N_1\implies |a_n - α|<ε$. Similarly, we know that $a_n → β$ as $n → ∞$ and so eventually $(a_n)$ is within $ε$ of $β$. Thus, $\exists\; N_2 ∈ \mathbb N: n > N_2\implies |a_n - β|<ε$. So far so good. What next? To get both of these happening simultaneously, we let $N = \max\{ N_1 , N_2 \}$. Then $n > N$ means that both $n > N_1$ and also $n > N_2$. Hence we can say that if $n > N$ then both $| a_n − α | < ε$ and $| a_n − β | < ε$. Now what? We expand out these sets of inequalities. Pick and fix any $n > N$ (for example $n = N + 1$ would do). Then $α − ε <a_n < α + ε$ and $β − ε < a_n < β + ε$. The left hand side of the first pair together with the right hand side of the second pair of inequalities gives $α − ε < a_n < β + ε$ and so $α − ε < β + ε$. Similarly, the left hand side of the second pair together with the right hand side of the first pair of inequalities gives $β − ε < a_n < α + ε$ and so $β − ε < α + ε$. Combining these we see that $−2ε < α − β < 2ε$, which is to say that $| α − β | < 2ε$. This happens for any given $ε > 0$ and so the non-negative number $| α − β |$ must actually be zero. But this means that $α = β$ and the proof is complete.
It is totally correct. Alternatively, you can fix $ε>0$ at the start and then, when applying the definition of convergence in the subsequent steps, use $ε'=ε/2>0$; at the end you then have $ε/2+ε/2=ε$ exactly. More generally, you can easily prove the following: let $f(ε)$ be a function of $ε$ such that $f(ε)\to 0$ as $ε\to 0$. If for every $ε>0$ you have $(∃N∈\mathbb N)\ n≥N⟹|a_n−α|<f(ε)$, then $(∀ε>0)(∃N'∈\mathbb N)\ n≥N'⟹|a_n−α|<ε$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Exercise Problem 43, Chapter 4, Intro to Probability, Blitzstein and Hwang I am self-learning basic undergrad calculus-based probability. I would like someone to verify if my solution to the below exercise problem on the mathematical expectation of a random variable is correct. [BH 4.43] A building has $n$ floors labelled $1,2,\ldots,n$. At the first floor, $k$ people enter the elevator, which is going up and is empty before they enter. Independently, each decides which of the floors $2,3,\ldots,n$ to go to and press that button (unless someone has already pressed it.) a) Assume for this part alone that the probabilities for floors $2,3,\ldots,n$ are equal. Find the expected number of steps the elevator makes on floors $2,3,\ldots,n$. b) Generalize (a) to the case that floors $2,3,\ldots,n$ have probabilities $p_2,\ldots,p_n$ respectively, you can leave your answer as a finite sum. Solution. (My attempt) a) As each person presses buttons from $\{2,3,\ldots,n\}$, we are essentially sampling from this population with replacement $k$ times. Let $A_j$ be the event that the floor $j$ is included in the floor list and let $I_{A_j}$ be its indicator function. \begin{align*} P(A_j) = \frac{(n-1)^{k-1}}{(n-1)^k} = \frac{1}{n - 1} \end{align*} Let $X$ be the total number of floors that the elevator stops at. Then, \begin{align*} X &= \sum_{j=2}^{n} I_{A_j}\\ E(X) &= \sum_{j=2}^{n-1}E(I_{A_j}) = 1 \end{align*} b) In this case, $P(A_j) = p_j$, so $E(X) = \sum_{j=2}^{n-1}p_j$.
You write $\begin{align*} P(A_j) = \frac{(n-1)^{k-1}}{(n-1)^k} = \frac{1}{n - 1} \end{align*}$. But that is not correct: any one or more of the $k$ people may choose floor $j$; in other words, you cannot fix a specific person to choose floor $j$. So the right approach is to first find the probability that nobody chooses floor $j$. That is given by $\displaystyle \left(\frac{n-2}{n-1}\right)^k$. So, $ \displaystyle P(A_j) = 1 - \left(\frac{n-2}{n-1}\right)^k$ and $E[X] = (n-1) \left[1 - \left(\frac{n-2}{n-1}\right)^k\right]$. You can follow the same approach for $(b)$, but the probability for each floor is different.
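As a quick sanity check of part (a), here is a minimal Monte Carlo sketch in Python; the function name, the values of $n$ and $k$, and the trial count are my own illustrative choices, not from the book:

```python
# Estimate E[X] by simulation and compare with (n-1)(1 - ((n-2)/(n-1))^k).
import random

def simulate_stops(n, k, trials=200_000):
    total = 0
    for _ in range(trials):
        pressed = {random.randint(2, n) for _ in range(k)}  # k independent choices
        total += len(pressed)  # number of distinct floors pressed
    return total / trials

n, k = 10, 6
print(simulate_stops(n, k))
print((n - 1) * (1 - ((n - 2) / (n - 1)) ** k))  # should agree to ~2 decimals
```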
{ "language": "en", "url": "https://math.stackexchange.com/questions/4347968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving series converges using Fourier series I need to prove that for every $0 \le x \le \pi$ $$\sum^{\infty}_{n=-\infty}\frac{e^{2inx}}{1-4n^2}=\frac{\pi}{2}\sin{x}$$ using Fourier series. Let $f(x)$ be a function such that its Fourier series is $$f(x)\sim\sum^{\infty}_{n=-\infty}\frac{e^{2inx}}{1-4n^2}=\sum^{\infty}_{n=-\infty}\frac{1}{1-4n^2}e^{2inx}=\sum^{\infty}_{n=-\infty}\frac{1}{1-4n^2}e^{in\pi x/(\pi/2)}$$ So I know $L=\pi/2$, $b_n=-2Im(\frac{1}{1-4n^2})=0$, and $a_n=2Re(\frac{1}{1-4n^2})=\frac{2}{1-4n^2}$ ($a_n$ and $b_n$ are the Fourier coefficients) hence the the series is of the form $$f(x)\sim \sum^{\infty}_{n=0}\frac{2}{1-4n^2}\cos{2nx}$$ I assume that in order to prove the required convergence I need to find $f(x)$ such that its Fourier series is as described above, but this is a series of an even function(it has only cosine) and $\frac{\pi}{2}\sin{x}$ is clearly an odd function, thus I don't understand how can I find such $f(x)$
For $0\le x\le\pi$, \begin{align} \sin(x)=|\sin(x)|&=\frac2{\pi}-\frac4{\pi}\sum_{n=1}^\infty\frac{\cos(2nx)}{4n^2-1}\\ &=\frac2{\pi}-\frac2{\pi}\sum_{n=1}^\infty\frac{\cos(2nx)+\cos(-2nx)+i\sin(2nx)+i\sin(-2nx)}{4n^2-1}\\ &=\frac2{\pi}-\frac2{\pi}\left(1+\sum_{n=-\infty}^\infty\frac{\cos(2nx)+i\sin(2nx)}{4n^2-1}\right)\\ &=\frac2{\pi}\sum_{n=-\infty}^\infty\frac{\cos(2nx)+i\sin(2nx)}{1-4n^2}\\ \end{align}
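A quick numerical check of the identity is possible by truncating the symmetric sum; the cutoff $N$ and the test points below are arbitrary choices (a Python sketch):

```python
# Partial sums of sum_{n=-N}^{N} e^{2inx}/(1-4n^2) versus (pi/2) sin(x).
import cmath, math

def partial_sum(x, N=2000):
    return sum(cmath.exp(2j * n * x) / (1 - 4 * n * n)
               for n in range(-N, N + 1)).real

for x in [0.0, 0.7, 1.5, math.pi]:
    print(x, partial_sum(x), math.pi / 2 * math.sin(x))
```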
{ "language": "en", "url": "https://math.stackexchange.com/questions/4348093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Collatz proven for primes = proven for all integers? I recently saw a video stating that if the Collatz conjecture was proven for prime numbers, it was proven for all numbers. That's the first time I have seen this, and I can't find any references with the classical JFGI. It says something like this ($a$, $b$, $n$ are odd integers): $\sigma_p(n)$ is the Collatz trajectory ($p$ steps applied to $n$) and for a fixed function $f(b)$ not provided you have: $$\sigma_p(an+b)=a\cdot\sigma_p(n)+f^p(b)$$ and especially for odd prime factorization: $$\sigma_p(an)=a\cdot\sigma_p(n)+f^p(0)$$ Apparently, it can be shown that if $\sigma(n)<n$ then $\sigma(an)<an$, and it suffices to show this for all prime $n$, since a finite stopping time for primes would imply a finite stopping time for all of their composites. I tried this on some numbers (e.g., $85=5\cdot 17$) but couldn't figure it out. Is there a paper on this subject? The video seems to talk about a $15$ page proof.
To prove this, you’d likely show "for any $n \ge 2$, the Collatz sequence leads to an integer $< n$, or to a prime." This is most likely true, but very hard to prove, because the sequence is quite chaotic. That’s the same as the original problem, which is most likely true, but very hard to prove, because the sequence is quite chaotic. If someone had a proof of this conjecture, that could be very insightful; I’d be surprised.
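For what it's worth, the claim is easy to check empirically; here is a small Python sketch (the tested range and helper name are arbitrary, and of course this verifies nothing beyond that range):

```python
# Check that each n >= 2 reaches either an integer below n or a prime.
from sympy import isprime

def reaches_smaller_or_prime(n):
    m = n
    while True:
        m = m // 2 if m % 2 == 0 else 3 * m + 1
        if m < n or isprime(m):
            return True  # would loop forever only if the claim failed at n

print(all(reaches_smaller_or_prime(n) for n in range(2, 10_000)))  # True
```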
{ "language": "en", "url": "https://math.stackexchange.com/questions/4348356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Two consecutive zeros in Poisson random variables, and consecutive wins in a binomial setting We have $X_1, ..., X_{100}$ i.i.d.r.v. having a Poisson distribution of parameter $\lambda$. I'd like to compute the expected value $E(N)$ for the number of times two consecutive $X_i$ are zero: $$N = \sum_{i=1}^{99} \mathbf 1_{\{X_i = 0\ \text{and}\ X_{i+1} = 0\}}$$ I'm not sure if this is equivalent or not to calculating the expected value of the number of two consecutive wins in a binomial setting with $p=e^{-\lambda}$ and $n=100$. Is there a law for the $k$ consecutive wins of a binomial setting?
As requested in comments: It would be "the same" as each case is independent. Precisely what this means slightly depends on how you count. If $100$ consecutive $0$s count as $99$ cases of two consecutive $0$s, then by linearity of expectation the expected number would be $$99p^2=99e^{-2\lambda}$$ and more generally $(n+1-k)\ p^k$
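A Monte Carlo sketch makes this concrete (numpy; the value of $\lambda$, the seed, and the trial count are arbitrary choices):

```python
# Estimate E[N] for 100 i.i.d. Poisson(lam) variables and compare with
# the linearity-of-expectation answer 99 * exp(-2*lam).
import numpy as np

lam, n, trials = 0.5, 100, 20_000
rng = np.random.default_rng(0)
total = 0
for _ in range(trials):
    z = rng.poisson(lam, size=n) == 0
    total += int(np.sum(z[:-1] & z[1:]))  # pairs (i, i+1) that are both zero
print(total / trials, 99 * np.exp(-2 * lam))
```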
{ "language": "en", "url": "https://math.stackexchange.com/questions/4348533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For what $\alpha$ is the series uniformly convergent on $[0,\infty)$ The problem is stated as: Prove that $\sum_{n=1}^{\infty} \frac{x^\alpha}{\sqrt{n} (n^2+x^3)}$ is uniformly convergent on $[0, \infty)$ for $\alpha = 2$. I thought I might challenge myself a bit, so from hereon I'll try to show for what $\alpha$ in general this series converges uniformly on $[0,\infty)$. My attempt Let $u_n(x) := \frac{x^\alpha}{\sqrt{n} (n^2+x^3)}$. We want to use the Weierstrass M-test; in order to do this, we want to find $\sup_{x\in[0,\infty)} u_n(x)$. Observe that we need $\alpha \geq 0$ in order to have continuity at $x=0$, so let's work with $\alpha \geq 0$ from hereon. Taking the derivative, we get: $u'_n(x) = \frac{x^{\alpha - 1}\sqrt{n}[\alpha n^2+(\alpha-3)x^3]}{\sqrt{n}(n^2+x^3)^2}$. Observe that $u_n(x)$ becomes a decreasing function after a certain $x$-value, let's call it $\hat{x}$, if we let $\alpha < 3$. For $\alpha \geq 3$ we have that the $u_n(x)$ are monotonically increasing as functions. In that case, we can't find a supremum of our $u_n(x)$'s, so we wouldn't be able to apply the Weierstrass M-test. Solving for $x$ in the case that $\alpha < 3$, we get that we have a maximum at $\hat{x} = (\frac{\alpha}{3-\alpha}n^2)^{1/3}$. Hence, we have that: $M_n := \sup_{x\in[0,\infty)} |u_n(x)| = |u_n(\hat{x})| = \left|\frac{((\frac{\alpha}{3-\alpha}n^2)^{1/3})^\alpha}{\sqrt{n} (n^2+\frac{\alpha}{3-\alpha}n^2)}\right|$, which asymptotically behaves as $n^{2\alpha/3-5/2}$, and for which we set $2\alpha/3-5/2<-1$ in order to apply the Weierstrass M-test. Thus, we get that $\alpha < 9/4$, and with what we found above, we deduce that we have uniform convergence for $\alpha \in [0, 9/4)$. I'd be glad if you could share any tips if you find anything wrong in the solution. Thanks!
I think the OP's approach is good, but here is a somewhat more complete analysis. Note that the sum always converges, since each summand can be bounded by $c\, n^{-2}$ for some $c$ (depending on $x$). One way to tackle the problem of uniform convergence is to consider the largest value per summand. First, we can easily discard $\alpha \geq 3.$ Indeed, uniform convergence signifies that the sequence $\gamma_p = \sup\limits_{x \in [0, \infty)} \sum\limits_{n \geq p} \dfrac{x^\alpha}{\sqrt{n}(n^2 + x^3)}$ obeys $\gamma_p \to 0.$ Clearly $\gamma_p \geq \sup\limits_{x \in [0, \infty)} \dfrac{x^\alpha}{\sqrt{p} (p^2 + x^3)}.$ If $\alpha > 3$, the right hand side is unbounded; if $\alpha = 3$, this single-term bound only gives $1/\sqrt{p}$, but letting $x \to \infty$ in the whole tail (the terms increase in $x$) yields $\gamma_p \geq \sum_{n \geq p} n^{-1/2} = \infty$. Therefore, $\alpha < 3$. Consider $u = u_\beta = \dfrac{x^\alpha}{\beta + x^3}.$ Then $u$ is maximised at $x = \sqrt[3]{\dfrac{\alpha \beta}{3 - \alpha}}.$ Treating everything but $\beta$ as a constant, we see that $u \leq \hat{u} := u\left( \sqrt[3]{\dfrac{\alpha \beta}{3 - \alpha}}\right) \asymp \beta^{-(1 - \frac{\alpha}{3})}$ where $X \asymp Y$ means that $aX \leq Y \leq b X$ for some "universal constants" $a$ and $b$ (the constants depend on $\alpha$ but not on $\beta$). Then, $$ \sum\limits_{n = 1}^\infty \dfrac{x^\alpha}{\sqrt{n}(n^2 + x^3)} \leq \sum_{n = 1}^\infty \dfrac{\hat{u}_{n^2}}{\sqrt{n}} \asymp \sum_{n = 1}^\infty \dfrac{n^{-(2 - \frac{2\alpha}{3})}}{\sqrt{n}} = \sum_{n = 1}^\infty n^{-\zeta} $$ where $\zeta = 2 + \dfrac{1}{2} - \dfrac{2\alpha}{3}.$ The right hand side converges when $\zeta > 1$, which entails $\alpha < \dfrac{9}{4}.$ Therefore, there is uniform convergence for $0 \leq \alpha < \dfrac{9}{4}.$ The question remains open for $\dfrac{9}{4} \leq \alpha < 3.$ However, I suspect there will not be uniform convergence. Note that the proof gives that the maximum of each summand is at $c n^{\frac{2}{3}}$ for some $c$ depending solely on $\alpha.$ This says that the tail $\gamma_p$ will always take into account the worst-case scenario for an infinite number of summands, and the function $x \mapsto \dfrac{x^\alpha}{\beta + x^3}$ is quite flat for $\alpha < 3$ but close to $3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4348791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that the equation $\tan x = −x$ has a single root on the interval $(\frac{-\pi}{2}, \frac{\pi}{2})$. Show that the equation $\tan x = −x$ has a single root on the interval $(\frac{-\pi}{2}, \frac{\pi}{2})$. We have $0 \in (\frac{-\pi}{2}, \frac{\pi}{2})$ and $\tan(0)=-0$. But it's unique because $f(x)=\tan(x)+x$ is differentiable in $(\frac{-\pi}{2}, \frac{\pi}{2})$, and if $f'(x)= 1+\frac{1}{\cos^2(x)}=0$ then we would have $-1=\cos^{2}(x)$, which is a contradiction. Am I right?
Alternative approach: Assuming that $-\pi/2 < x < \pi/2$ and that $x \neq 0$: * *$x$ is positive if and only if $\tan(x)$ is positive. *$x$ is negative if and only if $\tan(x)$ is negative. The above two bullet points follow, since the $\cos(x)$ is positive throughout the interval, the $\sin(x)$ is positive for $x$ positive and the $\sin(x)$ is negative for $x$ negative. Therefore, $x = 0$ is the only possible solution to the original problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4348902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Finding quotient of additive abelian group in Sage I have $G=\mathbb{Z}^3$, $H=\langle(3,2,4),(6,1,7),(2,3,6)\rangle \leq G$. My task was to find the quotient $G/H$ as the direct product of infinite cyclic groups and prime-power-order finite groups $(\mathbb{Z}^r \times \mathbb{Z}/n_1\mathbb{Z} \times ... \times \mathbb{Z}/n_k\mathbb{Z})$. Doing it by hand I obtained that $G/H$ is isomorphic to $\mathbb{Z}/25\mathbb{Z}$. However I wanted to check this result with Sage Math, and created these: G = AdditiveAbelianGroup([0,0,0]) H = G.submodule([(3,2,4),(6,1,7),(2,3,6)]) However doing G.quotient(H) returned that it was not an implemented function yet. sage: G.quotient(H) --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-64-ebaeef77a3b4> in <module> ----> 1 G.quotient(H) /opt/sagemath-9.2/local/lib/python3.7/site-packages/sage/groups/old.pyx in sage.groups.old.Group.quotient (build/cythonized/sage/groups/old.c:2910)() 204 NotImplementedError 205 """ --> 206 raise NotImplementedError 207 208 cdef class AbelianGroup(Group): NotImplementedError: sage: The fact that I can only find documentation for quotients of multiplicative abelian groups is discouraging. If this is not it, what can be the way to find and thus check quotients like these in Sage?
Welcome to MSE! If you look at the documentation for additive abelian groups, it says Additive abelian groups are just modules over $\mathbb{Z}$. Hence the classes in this module derive from those in the module sage.modules.fg_pid. The only major differences are in the way elements are printed. This is a clue that, if things aren't working, we should check out the fg_pid module instead. Going to the documentation there, we see an example which is easily adapted to your problem: # make a new ZZ-module with the obvious basis vectors G = span([[1,0,0],[0,1,0],[0,0,1]], ZZ) # make a submodule of G H = G.span([[3,2,4],[6,1,7],[2,3,6]]) # take the quotient G / H When you run G/H above, you should get the print out Finitely generated module V/W over Integer Ring with invariants (25) which tells you that $G/H \cong \mathbb{Z} / (25)$, agreeing with your computation. I hope this helps ^_^
{ "language": "en", "url": "https://math.stackexchange.com/questions/4349058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\sum\limits_{n\in \mathbb N} \frac{2^{\omega(n)}}{n^s}=\frac{\zeta^2(s)}{\zeta(2s)}$. I am a graduate student of Mathematics. I have started reading number theory. I encountered a problem of analytic number theory. Show that $\sum\limits_{n\in \mathbb N} \frac{2^{\omega(n)}}{n^s}=\frac{\zeta^2(s)}{\zeta(2s)}$, where $\omega(n)$ is the number of distinct prime divisors of $n$. I have started by showing that $\omega(mn)=\omega(m)+\omega(n)$ for $(m,n)=1$. Which implies that $f(n)=2^{\omega(n)}$ is multiplicative. I don't know what to do next. Anyone has a clue?
By definition, we know $$ 2^{\omega(n)}=\prod_{p|n}(1+1)=\sum_{d|n}\mu^2(d) $$ Moreover, it can be verified that $$ \mu^2(n)=\sum_{d^2|n}\mu(d) $$ This suggests that \begin{aligned} \sum_{n\ge1}{\mu^2(n)\over n^s} &=\sum_{n\ge1}{1\over n^s}\sum_{d^2|n}\mu(d)=\sum_{d\ge1}\mu(d)\sum_{\substack{n\ge1\\d^2|n}}{1\over n^s} \\ &=\sum_{d\ge1}\mu(d)\sum_{k\ge1}{1\over(d^2k)^s}=\zeta(s)\sum_{d\ge1}{\mu(d)\over d^{2s}} \end{aligned} By the properties of Möbius inversion, it is evident that $\sum_{n\ge1}\mu(n)n^{-s}=1/\zeta(s)$, so using the properties of Dirichlet convolution we have $$ \sum_{m\ge1}{2^{\omega(m)}\over m^s}=\zeta(s)\sum_{n\ge1}{\mu^2(n)\over n^s}={\zeta^2(s)\over\zeta(2s)} $$
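A quick numerical check at, say, $s=2$ (where $\zeta(2)^2/\zeta(4)=5/2$) is possible with a Python/sympy sketch; the cutoff $M$ is an arbitrary choice, so agreement is only approximate:

```python
# Partial sum of 2^omega(n)/n^s versus zeta(s)^2/zeta(2s) at s = 2.
from sympy import factorint, zeta

s, M = 2, 20_000
partial = sum(2 ** len(factorint(n)) / n ** s for n in range(1, M + 1))
print(partial)                            # ≈ 2.4999...
print(float(zeta(s) ** 2 / zeta(2 * s)))  # 2.5 exactly
```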
{ "language": "en", "url": "https://math.stackexchange.com/questions/4349331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $S\subset \mathbb{N}$ generates the group $\mathbb{Z}$ iff $n_1s_1 + \dots + n_ks_k =1$ Show that a subset $S$ of $\mathbb{N}$ generates the group $\mathbb{Z}$ of all integers iff there exist $s_1, ..., s_k$ in $S$ and $n_1, ...,n_k$ in $\mathbb{Z}$ such that $n_1s_1 + \dots + n_ks_k =1$. It can be proved that if $S$ is a nonempty subset of a group $G$ then $\langle S\rangle=\{s_1^{n_1}s_2^{n_2}\cdots s_k^{n_k}\mid s_i\in S\text{ for }1\leq i\leq k,n_1,\cdots,n_k\in \mathbb{Z}\}$, where $\langle S\rangle=\bigcap\{H \mid H\leq G,\ S\subseteq H\}$ is the smallest subgroup of $G$ containing $S$. i.e., If $S$ is a subset of a group $G$ then $\langle S\rangle$ is the subgroup generated by $S$ (every element of $\langle S\rangle$ can be expressed as a combination under the group operation of finitely many elements of the subset $S$). If the group binary operation is addition then any element is of the form $n_1s_1+\cdots+n_ks_k$ My Attempt A subset $S$ of $\mathbb{N}$ generating the group $\mathbb{Z}$ implies that for any $m\in \mathbb{Z}$ we can write $m=m_1s_1 + ... + m_ks_k$ where $s_i\in S$ and $m_i\in \mathbb{Z}$. But where does the condition $n_1s_1 + ... + n_ks_k =1$ come in?
Once $1\in \langle S\rangle$, we can be sure that $S$ generates $\Bbb Z$. Can you see why? Also, since $1\in\Bbb Z$, if $S$ generates $\Bbb Z$, then there must be a linear combination of elements in $S$ that is equal to $1$. Your confusion might be because you're viewing $\langle S\rangle$ multiplicatively, not additively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4349456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that a function (involving the Cantor discontinuum ) is surjective. Exercise. Let $\mathcal{D} = \{2\sum_{i=1}^{\infty} x_i3^{-i} : \forall i \in \Bbb N, x_i \in \{0,1\}\}$ be the Cantor discontinuum. Now, consider the function \begin{equation*} g : \mathcal{D} \rightarrow [0,1] \quad \text{ s.t. } \quad g\left(2\sum_{i=1}^{\infty} x_i3^{-i}\right) = \sum_{i=1}^{\infty} x_i2^{-i}. \end{equation*} Prove that $g$ is a surjective function. My attempt. We want to prove that $g$ is surjective, i.e., that \begin{equation*} \forall x \in [0,1], \exists y \in \mathcal{D} \hspace{.15cm} | \hspace{.15cm} g(y) = x \end{equation*} Let $y \in \mathcal{D}$. Then, we might assume that $y = 2\sum_{i=1}^{\infty}x_i3^{-i}$, where $x_i \in \{0,1\}$. Besides this, we also see the following: \begin{equation*} g(y) = \sum_{i=1}^{\infty} x_i2^{-i} = \frac{1}{2}\left(2\sum_{i=1}^{\infty}x_i3^{-i}\left(\frac{2}{3}\right)^{-i}\right) = \frac{1}{2}y\sum_{i=1}^{\infty}\left(\frac{2}{3}\right)^{-i} \end{equation*} and thus, \begin{equation*} g(y) = x \Leftrightarrow \frac{y}{2}\sum_{i=1}^{\infty}\left(\frac{2}{3}\right)^{-i} = x \Leftrightarrow y = 2\sum_{i=1}^{\infty} (x2^i)3^{-i}. \end{equation*} And I believe this should be enough to prove the surjectivity. BUT $y$ doesn't seem to be a member of $\mathcal{D}$ since we can't say that $x_i = x2^{i} \in \{0,1\}$ (If $i$ is any integer and $x$ is any number on the unitary interval, then this obviously isn't true). This leads me to conclude that my resolution might not be right at all, and this is why I am posting this. Thanks for any help in advance.
I think your approach is too complicated. You should know that each sequence $(x_i)$ with $x_i \in \{0,1\}$ determines a unique point $\phi((x_i)) = 2\sum_{i=1}^\infty x_i3^{-i} \in \mathcal{D}$. And you should also know that each $x \in [0,1]$ has a dyadic representation $x = \sum_{i=1}^\infty x_i2^{-i}$ with $x_i \in \{0,1\}$. This is not unique, e.g. we have $1/2 = \sum_{i=1}^\infty x_i2^{-i}$ with $x_1 = 1$ and $x_i = 0$ for $i > 1$, and also $1/2 = \sum_{i=1}^\infty x_i2^{-i}$ with $x_1 = 0$ and $x_i = 1$ for $i > 1$. But despite the non-uniqueness, this shows that your function $g$ is surjective.
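The argument is constructive, and a small Python sketch makes it concrete (truncating at $N$ binary digits, so everything holds only up to truncation error; the names and the test value are my own):

```python
# From a dyadic expansion of x, build y in the Cantor set with g(y) = x.
def dyadic_digits(x, N=40):
    digits = []
    for _ in range(N):
        x *= 2
        d = int(x)
        digits.append(d)
        x -= d
    return digits

x = 0.7316
ds = dyadic_digits(x)
y = 2 * sum(d * 3.0 ** -(i + 1) for i, d in enumerate(ds))  # a point of D
gx = sum(d * 2.0 ** -(i + 1) for i, d in enumerate(ds))     # g applied to y
print(y, gx, x)  # gx reproduces x up to 2^-40
```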
{ "language": "en", "url": "https://math.stackexchange.com/questions/4349567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to solve coupled second order differential equations I have the following coupled differential equations: $$ 2y''- 3y' + 2z' + 3y + z = e^{2x}$$ $$y''- 3y' + z' + 2y - z = 0 $$ I'm not sure how to solve them as if I try $y = Ae^{\lambda x} $ and $z = Be^{\lambda x}$, I only get one value for $\lambda$ ($\lambda = 1$) so am not sure how to form a complementary function. Any suggestions? Thanks.
It is visible after some contemplation that the highest derivatives of $z$ and $y$ occur in the same linear combination in both equations, marking the system as a DAE system of index at least one. To better handle the structure, define $u=y'+z$ for this combination and eliminate $z$ from the system. \begin{align} 2u' - 4y' + 3y + u &= e^{2x}\\ u'- 2y' + 2y - u &= 0 \end{align} Again the terms in the highest derivatives $y',u'$ occur in the same linear combination. So set $v=u-2y=y'+z-2y$ and eliminate $u$ against $v$ to get an explicit index-1 DAE system. \begin{align} 2v' + 5y + v &= e^{2x}\\ v' - v &= 0 \end{align} This now can be isolated for the highest order derivatives (or non-derivatives) $(y,v')$ to get \begin{align} 5y &= e^{2x}-3v\\ v' &= v \end{align} This system, which resulted just from easily reversible variable substitutions, now indeed has only one integration constant. So, inserting backwards, $$ v=Ce^x,\\ y=\frac15(e^{2x}-3Ce^x),\\ u=v+2y=\frac15(2e^{2x}-Ce^x)\\ z=u-y'=\frac15(2e^{2x}-Ce^x)-\frac15(2e^{2x}-3Ce^x)=\frac25Ce^x $$
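One can confirm the final formulas symbolically; a sympy sketch, with $C$ the single integration constant found above:

```python
# Verify that y = (e^{2x} - 3C e^x)/5 and z = (2C/5) e^x solve both equations.
from sympy import symbols, exp, diff, simplify

x, C = symbols('x C')
y = (exp(2*x) - 3*C*exp(x)) / 5
z = 2*C*exp(x) / 5
eq1 = 2*diff(y, x, 2) - 3*diff(y, x) + 2*diff(z, x) + 3*y + z - exp(2*x)
eq2 = diff(y, x, 2) - 3*diff(y, x) + diff(z, x) + 2*y - z
print(simplify(eq1), simplify(eq2))  # both print 0
```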
{ "language": "en", "url": "https://math.stackexchange.com/questions/4349716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Find the minimum value of $1000a+\frac{1}{1000b(a-b)}$ Given that $a>b>0$ , find the least value of:- $$1000a+\frac{1}{1000b(a-b)}$$ Can anyone please help me out with this one? I tried but couldn't think of anything. Tried using A.M.G.M inequality but that gives an expression containing both $a$ and $b$ in the expression so it can't be used to find the least value i think. Please! Help! Thanks a lot!
Hint: For any fixed $a > 0$, the quantity $1000a+\dfrac{1}{1000b(a-b)}$ is minimized by making $b(a-b)$ as large as possible. What value of $b$ (in terms of $a$) does this? After figuring that part out, you'll be left with a one variable problem, which can be done with AM-GM or basic calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4349999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there only one inner product? I've read a book about linear algebra, which does not rush into introducing vectors as a list of numbers, i.e. looking at axioms only. Looking at it this way, it seemed that the dot product $\sum x_iy_i$ is not a definition of the inner product, but rather can be derived from the inner product axioms and is the only inner product (in a sense). Is this true? If you just take the vector axioms and the inner product axioms, then vectors are just objects and there is no mention of lists of numbers. It is only when you decide on a representation with a basis, that lists of numbers appear. To me it seems, that you can always choose an orthonormal basis which implies $\sum x_iy_i$. Therefore this dot product can be derived from the axioms, it is not an arbitrary definition and it is the only one. Or in other words, if you make up a different equation for the inner product, then your basis is probably not orthonormal and you should have chosen a different basis instead. Therefore, is $\sum x_iy_i$ the only inner product given that you choose an appropriate basis? (What changes if you use a different signature for the multiplication, i.e. vectors square to -1 or 0? Is there something additional to consider if you use a complex vector space?)
Assuming that a vector space $V$ is finite-dimensional, then, yes, if $\langle\cdot,\cdot\rangle$ is an inner product defined on $V$, then there is some basis $\{v_1,v_2,\ldots,v_n\}$ of $V$ (where $n=\dim V$) such that if $v=\sum_{k=1}^n\alpha_kv_k$ and $w=\sum_{k=1}^n\beta_kv_k$, then$$\langle v,w\rangle=\sum_{k=1}^n\alpha_k\beta_k.$$Simply take any basis $\{e_1,e_2,\ldots,e_n\}$ of $V$, apply the Gram-Schmidt orthogonalization to it, and then you will get the basis $\{v_1,v_2,\ldots,v_n\}$ that you're after.
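To see this concretely, here is a small numpy sketch; the inner-product matrix $M$ and the starting basis are arbitrary choices. After Gram-Schmidt the Gram matrix of the new basis is the identity, so in those coordinates the inner product is exactly $\sum x_iy_i$:

```python
# Gram-Schmidt for the inner product <u, v> = u^T M v; afterwards the
# Gram matrix is the identity, i.e. the inner product becomes sum x_i y_i.
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 3.0]])   # a hypothetical inner product on R^2
inner = lambda u, v: u @ M @ v

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ortho = []
for e in basis:
    v = e - sum(inner(e, w) * w for w in ortho)
    ortho.append(v / np.sqrt(inner(v, v)))

print(np.array([[inner(u, w) for w in ortho] for u in ortho]))  # identity
```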
{ "language": "en", "url": "https://math.stackexchange.com/questions/4350181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does putting linearly dependent vectors in a matrix and row reducing yield their coordinates? Say we have the vectors $a = (1,2,3,4),$ $b = (2,-1,1,1),$ $c = (-1,8,7,10)$ and $d = (-5, 5, 0, 1).$ They are linearly dependent and span a plane. We can prove this by putting them as columns in a matrix $$(a \:\:\: b \:\:\: c \:\:\: d) = \begin{pmatrix}1&2&-1&-5\\2&-1&8&5\\3&1&7&0\\4&1&10&1 \end{pmatrix},$$ and row reducing to find its rank. Row reducing yields $$ \begin{pmatrix} 1 & 0 & 3 & 1\\ 0 & 1 & -2& -3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix},$$ we see the rank is two and that $a$ and $b$ span the plane. My question is the following: if we let $a$ and $b$ be a basis for the plane that these four vectors span, we see that the first two rows tell us the coordinates with respect to this (ordered) basis, for example $c$ with respect to the basis $a$ and $b$, has coordinates $(3,-2)$ since $c = 3a -2b$. Why is it the case that this shows up in this row reduced matrix? I've been thinking about it for a while and I can't seem to prove it or even explain it with intuition.
Starting with the matrix $A$ as follows: $A = [ v_1 , v_2, ..., v_m ] $. Reduce it to reduced row echelon form $\widetilde{A}$; this is equivalent to premultiplying $A$ with an invertible matrix $E$: $ \widetilde{A} = E A = [E v_1, Ev_2, ..., E v_m] $, where each $E v_k$ is either an elementary unit vector $e_i$ (for the pivot columns) or a linear combination of those $e_i$'s. Since $E$ is invertible, linear dependencies are preserved: if $ v_k = \displaystyle \sum_{i=1, i \ne k}^m \alpha_i v_i $, then $ E v_k = \displaystyle \sum_{i=1, i \ne k}^m \alpha_i E v_i $. Thus we can identify the vectors that are linearly independent by identifying those columns that reduce to distinct $e_i$'s, while the others are linearly dependent on them. And the coefficients of this dependency are the entries in the reduced row echelon form for that column.
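Reproducing the observation with a sympy sketch (the pivot columns are $a,b$, and the nonpivot columns of the rref carry the coordinates of $c,d$ in that basis):

```python
# rref of the matrix from the question; nonpivot columns give coordinates.
from sympy import Matrix

A = Matrix([[1, 2, -1, -5],
            [2, -1, 8, 5],
            [3, 1, 7, 0],
            [4, 1, 10, 1]])
R, pivots = A.rref()
print(R)        # rows (1,0,3,1) and (0,1,-2,-3), then zero rows
print(pivots)   # (0, 1): columns a and b are the pivot columns
a, b, c, d = (A.col(j) for j in range(4))
print(c == 3*a - 2*b, d == a - 3*b)  # True True
```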
{ "language": "en", "url": "https://math.stackexchange.com/questions/4350365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Equivalent characterizations of Henselian Rings (Theorem 4.2 in James Milne's "Étale Cohomology") I am stuck on a step in the proof of Theorem 4.2 in Chapter I of James Milne's "Étale Cohomology". The particular implication is (c) $\Rightarrow$ (d). Let $X=\text{Spec} (A)$, where $A$ is a local ring with the maximal ideal $m$, and let $x$ be the unique closed point of $X$. (c) If $f:Y\to X$ is a quasi-finite separated morphism, then $Y=Y_0\coprod Y_1\coprod\ldots\coprod Y_n$, where $f(Y_0)$ does not contain $x$ and $Y_i$ are spectrums of local rings and are finite over $X$ for $i\geqslant 1.$ (d) For any étale morphism $g:Y\to X$ such that there is a point $y\in Y$ such that $f(y)=x,\ k(y)=k(x)$ there is a section $s: X\to Y.$ The proof states: Using (c), we may reduce the question to the case of a finite étale local homomorphism $A\to B$ such that $k(m)=k(n)$, where $n$ is the maximal ideal of $B$. According to (2.9b), $B$ is a free $A$-module, and since $k(n)=B\otimes_A k(m)=k(m)$ it must have rank 1, that is, $A\approx B.$ I think everything makes sense if we add the separated hypothesis to (d). As I understand the proof in this case, we are applying (c), restricting to the $Y_i$, and then we find at least one of them such that there is a $y\in Y_i$ with $f(y)=x,\ k(y)=k(x)$, and finally use flatness to argue that some $Y_i$ is isomorphic to $X$. But without this additional hypothesis, I don't see how (c) can be applied.
Yes, the morphism in (d) should be assumed separated. This appears to be an oversight on the author's part.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4350469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is it that functions with nonisolated singularities at a point do not have Laurent series at that point? Learning complex analysis, I've been taught that a function like csc(1/z) cannot have a Laurent series at 0, because there is a nonisolated singularity there. If I recall correctly, one needs to not have a nonisolated singularity such that the Cauchy-Goursat theorem can be used to prove the function to be holomorphic on the annulus. But this does not show that one cannot possibly find a Laurent series for some such function, does it? I suppose, one could argue by contradiction, that any closed integral around this point would have infinite converging residues, but even this does not seem immediately contradictory, as there are many infinite series which converge to 0. Can someone shed light as to how proving a nonisolated singularity necessarily proves there exists no Laurent series?
It's not true. Any function that is analytic on an annulus has a Laurent series about the centre of the annulus that converges on the annulus. Thus $\csc(1/z)$ has a Laurent series about $0$ on each of the annuli $1/((n+1)\pi) < |z| < 1/(n \pi)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4350627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help me understand easy (not for me) concepts in volume integral Keep looking at the page for an hour. Still not sure how I can get the sloping surface of $x+y+z=1$ and integration ranges for $x, y, z$. and why their range is different too. The book keeps throwing things at me without much explanation. Help me.
This is an attempt to capture how I tend to visualize a region when setting up a volume integral. Consider the following plots: On the left, the forest of arrows join the $y,z$-plane (i.e. the plane $x=0$) and the given plane $x+y+z=1$. They represent the range that the variable $x$ can take on, $0 \le x \le 1 - y - z$. On the right, there's a similar array of arrows that join the $z$-axis (i.e. the line $x=y=0$) to the line where the plane $x+y+z=1$ intersects $x=0$. This line has equation $y + z = 1$, or $y = 1 - z$. These arrows represent the range that $y$ can take on, $0 \le y \le 1 - z$. It's not pictured, but the last step is recognizing that $z$ must range between $0$ and $1$. Then the volume of the region is given by the triple integral $$\int_0^1 \int_0^{1-z} \int_0^{1-y-z} dx \, dy \, dz$$
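For completeness, the iterated integral set up this way evaluates to the volume of the tetrahedron, $1/6$; a one-line sympy check:

```python
# Volume of the region bounded by x + y + z = 1 in the first octant.
from sympy import symbols, integrate

x, y, z = symbols('x y z')
print(integrate(1, (x, 0, 1 - y - z), (y, 0, 1 - z), (z, 0, 1)))  # 1/6
```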
{ "language": "en", "url": "https://math.stackexchange.com/questions/4350774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Calculate $\lim_{x\to \infty}(\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}})^x$ Calculate $\lim_{x\to \infty}(\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}})^x$. First off, it's easy to see that $\lim_{x\to \infty}\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}}$ = 1. Therefore, I tried the following: $$\lim_{x\to \infty}(\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}})^x= \lim_{x\to \infty}(1 +(\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}} -1))^x = e^{\lim_{x\to \infty}x(\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}} -1)}.$$ Now I find myself stuck at finding $\lim_{x\to \infty}x(\frac{\sqrt[x]{2} + \sqrt[x]{3}} {\sqrt[x]{4} + \sqrt[x]{5}} -1)$. Keep in mind that I am not allowed to use l'Hopital. Any hint would be appreciated. Thanks.
In fact $\lim_{x\to\infty}\left(\frac{\sqrt[x]{a}+\sqrt[x]{b}}{2}\right)^x=\sqrt{ab}$ for $a,\,b>0$. This is a comparison (in the $2$-variable unweighted case) of power means to the geometric mean. Now just take the ratio of two cases.
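Numerically, the ratio of the two power-mean limits is $\sqrt{6}/\sqrt{20}=\sqrt{3/10}\approx 0.5477$, which a quick check confirms (the sample points are arbitrary):

```python
# The expression approaches sqrt(3/10) as x grows.
import math

def f(x):
    return ((2**(1/x) + 3**(1/x)) / (4**(1/x) + 5**(1/x))) ** x

for x in [10, 100, 10_000]:
    print(f(x))
print(math.sqrt(3 / 10))  # ≈ 0.547722...
```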
{ "language": "en", "url": "https://math.stackexchange.com/questions/4350948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Help with computing the $n$-th derivative of $f(x)=e^{-cx}\sum_{k=1}^{\infty}\frac{c^k}{(k+m)(k-1)!}(x-x_u)^k$ I've been dealing with a problem for sometime and after several attempts, I'm still not able to solve it so here it goes: Define function \begin{align} f(x)=e^{-cx}\sum_{k=1}^{\infty}\frac{c^k}{(k+m)(k-1)!}(x-x_u)^k. \end{align} $f$ is defined for all $0\leq x \leq x_u$. From data and observations, $c>0$ and $m$ is a positive integer. We are interested in having a closed-form of the $n$-th derivative of $f(x)$. We tried different things but perhaps a good approach would be to write \begin{align} \partial_x^{(n)}f(x)=\sum_{k=1}^{\infty}\frac{c^k}{(k+m)(k-1)!}\frac{d^n}{dx^n}\left[(x-x_u)^ke^{-cx}\right]. \end{align} Then I tried to use the general Leibniz rule combined with what I learned from here and here to deal with the last part but I cannot find a closed-from expression for $\partial_x^{(n)}f(x)$. Any suggestions and/or answers would be much appreciated.
Consider this answer, and write your function not as you did but as $$\partial^{n}f(x) = \sum_{k = 1}^{+\infty} \frac{1}{(k+m)(k-1)!}\frac{\text{d}^n}{\text{d}x^n}\color{red}{\left[e^{-cx}(c(x - x_u))^k\right]}$$ The red term is nothing but $$(-1)^k\color{blue}{e^{-cx}(c(x_u-x))^k}$$ where the blue term is the one analysed in the linked question above. Follow the same procedure, and then you will get the $n$th derivative expression. Further Details Copying from the linked answer: $$\begin{align} & \text{when}\quad n\le k: \frac{d^nf_k(x)}{dx^n} = (-1)^n\sum_{i=0}^{n} \binom n i (c)^{n-i}[k]_if_{k-i}(x) \\ \end{align}$$ This gives you the closed expression for the blue term, which completes the derivation. You can then write down, glueing all the pieces together: $$\partial^nf(x) = \sum_{k=1}^{+\infty}\frac{1}{(k+m)(k-1)!}(-1)^{k+n}\sum_{i=0}^{n} \binom n i (c)^{n-i}[k]_if_{k-i}(x)$$ where we define the falling factorial $[k]_m = k(k-1)\cdots(k-m+1)$ for convenience in notation (so $[k]_0 = 1$ and $[k]_1 = k$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4351255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving orthonormal basis for linear operators given \begin{equation*} E_{ij} \mathbf{e}_k = \langle \mathbf{e}_j,\mathbf{e}_k \rangle \mathbf{e}_i, \quad 1 \leq k \leq n. \end{equation*} prove that for each $1 \leq i,j \leq n$ let $E_{ij} \in \mathrm{End}\mathbf{V}$ be the linear operator defined by \begin{equation*} E_{ij} E_{kl} = \langle \mathbf{e}_j,\mathbf{e}_k \rangle E_{il} \end{equation*} Deduce from this that $\mathcal{E} = \{ E_{ij} \colon 1 \leq i , j \leq n\}$ is an orthonormal basis of $\mathrm{End} \mathbf{V}$, where by definition the scalar product of two operators $A,B \in \mathrm{End}\mathbf{V}$ is $\langle A,B \rangle = \operatorname{Tr} A^*B$, with $\operatorname{Tr}$ the trace and $A^*$ the adjoint (aka transpose) of $A$. What is $\dim \mathrm{End}\mathbf{V}$? I know how to prove the first part but I'm confused on how to use the trace to prove that $\mathcal{E}$ is an orthonormal basis and obtain $\dim \mathrm{End}\mathbf{V}$
It is clear that $E_{ij}$ defined in this way can be described as the matrix whose entries are all $0$ except for a "one" in row $i$ and column $j$. The orthonormality property you have to establish amounts to proving that: $$Tr(E_{ij}^TE_{kl})=\begin{cases}1 & \text{if} \ i=k \ \text{and} \ j=l \\ 0 & \text{otherwise} \end{cases}\tag{1}$$ It will be established in the following way. Using the relationship you have established: $$E_{ij} E_{kl} = \langle \mathbf{e}_j,\mathbf{e}_k \rangle E_{il},$$ the LHS of (1) can be written: $$Tr(E_{ji}E_{kl})=Tr(\langle \mathbf{e}_i,\mathbf{e}_k \rangle E_{jl})=\underbrace{\langle \mathbf{e}_i,\mathbf{e}_k \rangle}_a \underbrace{Tr(E_{jl})}_b$$ This product will be $0$ in general, unless its components $a$ and $b$ are both non zero, which happens iff * *on one hand: $a=1 \iff e_i=e_k \iff $ $ \ \color{red}{i=k}$ and *on the other hand: $Tr(E_{jl})=1$, which happens iff the single entry equal to "one" in $E_{jl}$ belongs to its diagonal, which is possible iff $\color{red}{j=l}$. We have therefore obtained an equivalence with the RHS of (1). Finally, since the orthonormal set $\mathcal{E}$ has $n^2$ elements and spans $\mathrm{End}\mathbf{V}$, it is a basis, and $\dim \mathrm{End}\mathbf{V}=n^2$.
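A direct numerical illustration of (1) with numpy, for a hypothetical $n=3$:

```python
# Tr(E_ij^T E_kl) = 1 iff (i,j) == (k,l), else 0, for the matrix units E_ij.
import numpy as np

n = 3
def E(i, j):
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

ok = all(
    np.trace(E(i, j).T @ E(k, l)) == (1.0 if (i, j) == (k, l) else 0.0)
    for i in range(n) for j in range(n)
    for k in range(n) for l in range(n)
)
print(ok, "and dim End(V) =", n * n)
```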
{ "language": "en", "url": "https://math.stackexchange.com/questions/4351371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intersection of perpendicular bisector and circle in a triangle. In $\triangle ABC$, $\measuredangle C=2\measuredangle B$. $P$ is a point in the interior of $\triangle ABC$, that lies on the perpendicular bisector of $BC$ and the circle centred at $A$ that passes through $C$. Show that $\angle ABP=30^{\circ}$. Using the definition of point $P$, $$PB=PC ~~\text{and}~~ AP=AC.$$ Let $\angle CBP=\alpha$ and $\angle ABP=\beta$. Simple angle chasing leads to $\angle CAP=180^{\circ}-2\alpha-4\beta\;$ and $\;\angle BAP=\beta-\alpha.$ I am trying to prove, $~2\angle BAP=\angle CAP~$ since this would imply $\beta=30^{\circ}$. There is probably a smart construction that I am unable to find. Also, dropping perpendiculars from $A$ to $CP$ and $P$ to $AB$ looks like a good attempt to reach the desired result.
Please note that $DE$ is perp bisector of $BC$ and therefore $CE$ is angle bisector of $\angle C$. Extend $CE$ to $F$. Now $\angle PAF = 2 \angle PCF = 2 \beta$. So it follows that $\angle EAF = \alpha + \beta$ and as $\angle AEF = 180^\circ - 2 (\alpha + \beta), \angle AFP = \alpha + \beta = \angle EAF$. As $AF \parallel BC$, $DE$ is also the perp bisector of $AF$. So we have $AP = PF$ and we conclude that $\triangle APF$ is equilateral, leading to $~2 \beta = 60^\circ$ i.e $~\beta = 30^\circ$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4351550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
If $f,g$ are measurable and $\Phi$ is continuous, then $\Phi(f(x),g(x))$ is measurable. Let $E\subseteq\mathbb{R}^d$ be a (Lebesgue) measurable set and let $f,g$ be two measurable functions defined on E. I would like to show that if $\Phi$ is a continuous function on $\mathbb{R}^2$, then the function $h:x\mapsto\Phi(f(x),g(x))$ is measurable. The proof remains unknown to me, but I can address the problem if it is only one-dimensional. Specifically, if $\Phi$ is a continuous function on $\mathbb{R}$, then I can show that $\Phi\circ f$ is measurable. Indeed, since $\{\Phi<a\}$ is an open set $G$, we can conclude that $\{\Phi\circ f<a\}=f^{-1}(G)$ is measurable. How about the two-dimensional problem? Does anyone have an idea? Thank you.
Sketch: you need to show that $\{x:h(x)>a\}$ is measurable for each $a\in \mathbb R.$ So, set $U=\{(u,v):\Phi(u,v)>a\}.\ U$ is open because $\Phi$ is continuous, so it is a countable union of rectangles $U=\bigcup_n (a_n,b_n)\times (c_n,d_n).$ Now, for each integer $n,\ \{x:(f(x),g(x))\in (a_n,b_n)\times (c_n,d_n)\}$ is measurable because it is equal to $\{x:a_n<f(x)<b_n\}\cap \{x:c_n<g(x)<d_n\}$ and $f$ and $g$ are measurable. But, $\{x:h(x)>a\}=$ $\{x:(f(x),g(x))\in U\}=\bigcup_n\{x:(f(x),g(x))\in (a_n,b_n)\times (c_n,d_n)\}$ from which the claim follows because countable unions of measurable sets are measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4351662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding CDF from given piecewise PDF Cheers, I have the following PDF $f_X(x) = \frac{2|x|}{5}, \quad -1 \leq x \leq 2,$ and I am asked to find the distribution function $F_X(x)$. I know that to find it I must solve the following integral: $F_X(x) = \int_{-\infty}^x f(t)dt = \int_{-1}^x\frac{2|t|dt}{5}$. However, I know that if $x > 0$ then the integral will change completely, and I can't quite understand what I should do at this point. My thought was to try and split the CDF approprietly: $\bullet$ If $-1< x < 0$ then: $\int_{-1}^x\frac{-2t}{5}dt = \frac{-2}{5}\begin{bmatrix}\frac{t^2}{2}\end{bmatrix}^x_{-1} = \frac{-1}{5}(x^2-1)$ $\bullet$ If $0 < x < 2$ then $\int_{-1}^0\frac{-2t}{5}dt + \int_{0}^x\frac{2t}{5}dt = \frac{1}{5} + \frac{1}{5}x^2$. Would something like that be correct? I have not seen this before so I wouldn't know for sure, but It seems like this is going somewhere right, as the condition $\frac{dF_X(x)}{dx} = f_X(x)$ holds. Also, let's suppose we have a random variable $Y, Y = X^2$, and I want to find its CDF, then I would say: $F_Y(y) = \mathbb{P}(Y\leq y) = \mathbb{P}(X^2 \leq y) = \mathbb{P}(-\sqrt y \leq X \leq \sqrt y) = F_X(\sqrt y) - F_X(-\sqrt y)$. Then I thought of saying that $ \sqrt y \geq 0$ and $- \sqrt y \leq 0$ and then choose the appropriate piece of the function, for example: as $ \sqrt y \geq 0, F_X( \sqrt y ) = \frac{1}{5} + \frac{1}{5}|y|$ and the other way for $ -\sqrt y $. Am I going the right way here? Thanks =)
Yes your work is correct and for $Y = X^2$, $Y$ is always positive. So, For $0 \lt y \lt 1$, $ \displaystyle F_Y(y) = P(- \sqrt y \lt X \lt \sqrt y) =\frac{1 + y}{5} - \frac{1-y}{5} = \frac {2y}{5}$ For $1 \leq y \lt 4$, $ \displaystyle F_Y(y) = P(X \lt \sqrt y) = \frac{1+y}{5}$
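As a sanity check, one can sample $X$ by inverting $F_X$, square it, and compare the empirical CDF of $Y$ with the formulas above; a numpy sketch (the seed, sample size, and test points are arbitrary):

```python
# Empirical CDF of Y = X^2 versus the piecewise answer.
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(500_000)
# Inverse of F_X: x = -sqrt(1 - 5u) for u < 1/5, x = sqrt(5u - 1) otherwise.
X = np.where(u < 0.2,
             -np.sqrt(np.clip(1 - 5*u, 0, None)),
             np.sqrt(np.clip(5*u - 1, 0, None)))
Y = X**2
for y in [0.3, 0.8, 1.5, 3.0]:
    exact = 2*y/5 if y < 1 else (1 + y)/5
    print(y, np.mean(Y <= y), exact)
```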
{ "language": "en", "url": "https://math.stackexchange.com/questions/4351807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simple question about finitely generated algebras I have two finitely generated algebras $A$ and $B$ over a field $\mathbb{K}$ such that $B\subseteq A$. Is it true that $A=B[a_1,\ldots,a_n]$ for some $a_1,\ldots,a_n\in A$? Motivation: I am trying to prove that for finitely generated algebras the notions of integral extension and finite extension coincide. I managed to prove that extension $B\subseteq A$ is finite iff $A=B[a_1,\ldots,a_n]$ for some elements $a_1,\ldots,a_n\in A$ integral over $B$. Thus, if the answer to my question is positive, everything is proved.
If $A$ is finitely generated over $K$ then we have $A=K[a_1,...,a_n]$ for some elements $a_1,...,a_n\in A$. Then: $A=K[a_1,...,a_n]\subseteq B[a_1,...,a_n]\subseteq A$, so all the inclusions are equalities and $A=B[a_1,\ldots,a_n]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4351939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"Sequence-builder" notation I am somewhat familiar with set-builder notation, but I'm wonder what the equivalent notational conventions for sequences are. I know that one can denote a sequence as the following: $$\{a_n\}_x^y \ \ \text{and} \ \ (a_n)_x^y$$ But what if you want to state something about how that sequence works as well? Consider the sequence of all even numbers from $0$ to $24$. If it was a set, I believe one would write: $$\{x \in \Bbb N : x = 2n, \ n \in \Bbb N_0^{12}\}$$ Or equivalently: $$\{x \in \Bbb N : n \in \Bbb N, \ 0 \le x = 2n \le 24 \}$$ How would one write this as a sequence? I'll be dealing with a lot more complex sequences (and sets) than this, so I want to be as fluent in the notation as possible. Are there any free and up-to-date resources that discuss the matter in detail?
$ \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} $ A sequence can be seen as a function whose domain is a subset of natural numbers (or integers). This is also how an $n$-tuple is treated. $$ (x_1, \cdots, x_n) = (x_i)_{i=1}^{n} $$ Consider the function $f:\{0, 1,\cdots, 12\} \to \Z$ defined by $\forall n \in \{0,1,\dots,12\}: f(n) = 2n$. Then, the sequence can be denoted by $(f_i)_{i=0}^{12}$, or simply $(2i)_{i=0}^{12}$. Therefore, it is natural to define a function and use it as a sequence by denoting $f(i) = f_i$. Comment: I believe that your set-builder notation is not standard because the elements are not clear; $x$ or $n$? I would write $$ \{x \in \Z ~|~ \exists n \in \{0,1,\cdots,12\}: x = 2n\} $$ or simply $$ \{2(n-1) ~|~n\in \N \land n \le 13\} $$ Some authors distinguish $(x_i)$ from $\{x_i\}$; the latter denotes the image (set) of the sequence $(x_i)$, which is exactly what I wrote above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4352122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Q regarding finding sum of first 2002 terms Q : A sequence of integers $a_{1},a_{2},\ldots$ satisfies $a_{n+2}=a_{n+1}-a_{n}$ for $n \geq 1$. Suppose the sum of the first 999 terms is 1003 and the sum of the first 1003 terms is $-999$. Find the sum of the first 2002 terms. My questions regarding this problem are: * *What will be the 1st term? Since they say $n \geq 1$, do we say the 1st term of the sequence has $n = 2$, i.e. $a_{2+2} = a_{2+1} - a_2$? *It will be great if you could please share different ways you can solve it. Also, a very important point: what are you thinking at every step while solving this Q? This really helps me a lot, because then I can also know the way you're trying to find the solution. Like, if it's a method you're using, did you already know about it? Or are you thinking of ways to find the solution, i.e. how are you thinking to find ways to solve? 3) What I have tried: * *Sum of the 1000th to 1003rd terms: 1003 - (-999) = 2002. * *What we need to find is the sum of the next 999 terms. I'm not able to solve further than that. *Also, please share what other methods we can use. Thank you.
Method 1: Telescoping. I can see we can't do anything after this, so I use the telescoping method with the help of user dxiv. From $a_{n+2}=a_{n+1}-a_{n}$ we get $$a_{n+3} = a_{n+2}-a_{n+1} = (a_{n+1}-a_{n})-a_{n+1} = -a_{n}.$$ Also $a_{n+4}=a_{n+3}-a_{n+2}$ and $a_{n+5}=a_{n+4}-a_{n+3}$; adding these two equations gives $a_{n+4}+a_{n+5}=a_{n+4}-a_{n+2}$, so $$a_{n+5} = -a_{n+2}.$$ Hence any six consecutive terms sum to zero: $$a_{n}+a_{n+1}+a_{n+2}+a_{n+3}+a_{n+4}+a_{n+5} = a_{n}+a_{n+1}+a_{n+2}-a_{n}+(a_{n+3}-a_{n+2})-a_{n+2} = a_{n+1}+a_{n+3}-a_{n+2} = 0,$$ using $a_{n+3}=-a_{n}$, $a_{n+4}=a_{n+3}-a_{n+2}$, $a_{n+5}=-a_{n+2}$, and finally $a_{n+3}=a_{n+2}-a_{n+1}$. Let $S_{n}$ denote the sum of the first $n$ terms. Because every six consecutive terms have sum zero, $$S_{999} = S_{6 \times 166+3} = S_{3}, \qquad S_{1003} = S_{6 \times 167+1} = S_{1}, \qquad S_{2002} = S_{6 \times 333+4} = S_{4}.$$ Therefore $$S_{2002} = S_{4} = a_{1}+a_{2}+a_{3}+a_{4} = S_{3}+a_{4} = S_{3}-a_{1} \qquad (\because a_{4}=-a_{1} \text{ since } a_{n+3}=-a_{n})$$ $$= S_{999}-S_{1003} = 1003-(-999) = 2002.$$
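The structural identities used here are easy to confirm for arbitrary starting values; a short Python sketch (the starting integers are arbitrary):

```python
# Confirm the period-6 structure and S_2002 = S_3 - a_1 for arbitrary starts.
a1, a2 = 7, -4                  # any integers work
a = [a1, a2]
for _ in range(2100):
    a.append(a[-1] - a[-2])

S = lambda m: sum(a[:m])
print(a[:6] == a[6:12])                  # the sequence has period 6
print(S(999) == S(3), S(1003) == S(1))   # partial sums depend on m mod 6 only
print(S(2002) == S(3) - a[0])            # S_2002 = S_3 - a_1
```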
{ "language": "en", "url": "https://math.stackexchange.com/questions/4352316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $a, b, c, d$ are natural numbers, such that, $ab = cd$, prove that $a^2 + b^2 + c^2 + d^2$ is a composite number. If $a, b, c, d$ are natural numbers, such that, $ab = cd$, prove that $a^2 + b^2 + c^2 + d^2$ is a composite number. Came across this question while solving an exercise on prime numbers. Now I found a pretty simple solution to this problem here. (It is not the exact same problem but the method described there in most of the answers can be easily used on this one too.) But when I was trying to solve this on my own, I took another path. I noticed that $a^2 + b^2 + c^2 + d^2$ is even (and therefore composite) if: (1) All of $a, b, c, d$ have the same parity (i.e. all are odd or all are even). (2) Two of them are even and two are odd (say $a, c$ are even and $b, d$ are odd). So the only case left to worry about is when $a, b, c$ are even and $d$ is odd (wlog). Example: $26\times 28 = 56\times 13$ ...and... $13^2 + 26^2 + 28^2 + 56^2 = 4765$. And this is where I'm stuck. Any ideas? EDIT: As pointed out in some if the answers, my initial assumption of the given expression always being divisible by 5 was wrong (I apologise for that). But I'd still like to see if it can proved, in the manner I started out, that the given expression is composite.
It's not always divisible by $5$: $6 \times 6 = 4 \times 9$ but $6^2+6^2+4^2+9^2 = 169 = 13^2$, which is composite but not a multiple of $5$.
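A brute-force search over a small range supports the compositeness claim itself (a sketch; the bound $N$ is an arbitrary choice):

```python
# Whenever ab = cd, check that a^2 + b^2 + c^2 + d^2 is composite.
from sympy import isprime

N = 30
for a in range(1, N):
    for b in range(1, N):
        for c in range(1, N):
            if (a * b) % c == 0:
                d = a * b // c
                s = a*a + b*b + c*c + d*d
                assert not isprime(s), (a, b, c, d, s)
print("no prime sums found")
```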
{ "language": "en", "url": "https://math.stackexchange.com/questions/4352435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Let $X_n,Y_n$ be two sequences of random variables, where $0<X_n<Y_n$. Does $Y_n=O_p(1)$ imply $X_n=O_p(1)?$ My proof Since $Y_n=O_p(1)$, $$ (\forall \epsilon>0)(\exists k)\Big(P(|Y_n|>k)\leq \epsilon\Big)\ \ $$ Next, by the law of total probability $$ P(|X_n|>k)=\underbrace{P(|Y_n|>k)}_{\leq \epsilon}\underbrace{P(|X_n|>k|\ |Y_n|>k)}_{\leq 1}+\underbrace{P(|Y_n|\leq k)}_{\geq 1-\epsilon }\underbrace{P(|X_n|>k|\ |Y_n|\leq k)}_{0}\leq \epsilon $$ So the same $k$ works. Is my proof correct? I will also accept any alternative proof (or counter proof) as answer.
What you did is correct. I think that you already saw the fact that if $A\subset B$, then $\mathbb P(A)\leqslant \mathbb P(B)$ (which is also a consequence of the law of total probability), hence applying this to $A=\left\{\lvert X_n\rvert>k\right\}$ and $B=\left\{\lvert Y_n\rvert>k\right\}$ would directly give that $\mathbb P\left(\left\{\lvert X_n\rvert>k\right\}\right)\leqslant\mathbb P\left(\left\{\lvert Y_n\rvert>k\right\}\right)\leqslant\varepsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4352606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are there any other "involutive" (a la the orthocenter) points on the Euler line? Throughout, given a triangle $T$ let $G(T)$ and $H(T)$ be the centroid and orthocenter of $T$, respectively. For $r\in\mathbb{R}$ and $p\in\mathbb{R}^2$, let $h_{p:r}$ be the homothety with focus $p$ and factor $r$ (with the understanding that $h_{p:0}(q)=p$ for all $q\in\mathbb{R}^2$). So, for example, the Euler line of a triangle $T$ is $$\{h_{G(T): r}(H(T)): r\in\mathbb{R}\}.$$ I've asked before about "generalized triangle centers" satisfying the same involutive property as the orthocenter. This question is a much more local version of that one: roughly speaking, are there any such generalized triangle centers (besides the orthocenter itself) which always lie on the Euler line? Precisely, say that an Eulerian triangle center function is a continuous function $f:D\rightarrow\mathbb{R}$ with the following properties: * *$D$ is a dense open connected subset of $(\mathbb{R}^2)^3$. *Both $D$ and $f$ are $S_3$-invariant: for each permutation $\sigma\in S_3$ and $(p_1,p_2,p_3)\in D$ then $(p_{\sigma(1)},p_{\sigma(2)},p_{\sigma(3)})\in D$ and $f(p_1,p_2,p_3)=f(p_{\sigma(1)},p_{\sigma(2)},p_{\sigma(3)})$. *Both $D$ and $f$ are similarity-invariant: if $\triangle pqr$ is similar to $\triangle p'q'r'$ and $(p,q,r)\in D$, then $(p',q',r')\in D$ and $f(p,q,r)=f(p',q',r')$. If $f$ is an Eulerian triangle center function, let $$\hat{f}:D\rightarrow \mathbb{R}^2: (p,q,r)\mapsto h_{G(\triangle pqr): f(p,q,r)}(H(\triangle pqr))$$ be the corresponding appropriately-scaled point on the Euler line. So, for example, the constant map ${\bf 1}:(p,q,r)\mapsto 1$ recovers the orthocenter $\hat{\bf 1}: (p,q,r)\mapsto H(\triangle p,q,r)$ (modulo issues re: degenerate triangles). My question is: Is there a non-constantly-$1$ Eulerian triangle center $f:D\rightarrow\mathbb{R}$ such that, for comeager-many triples of points $(p,q,r)\in(\mathbb{R}^2)^3$, we have $$\hat{f}(p, q, \hat{f}(p,q,r))=r$$ (so basically, it's not the orthocenter but it satisfies the above-mentioned involutive property)? If the answer is positive, my next question is how many there are - up to the obvious equivalence relation of "agree on a dense subset of the intersection of their domains." I tentatively suspect the answer is negative, however.
A point $P$ on the Euler line has barycentric coordinates we can parameterize as $$(a^2 + b^2 - c^2) (a^2 - b^2 + c^2)-p\,(2 a^4 - a^2 b^2 - b^4 - a^2 c^2 + 2 b^2 c^2 - c^4)\;:\;\cdots\;:\;\cdots$$ (with the second and third coordinates derived cyclically from the first); here, $p$ is the dilation factor of the circumcenter with respect to the orthocenter. Straightforward symbol-crunching shows that (barring degeneracies in the triangle) the corresponding center of $\triangle PBC$ is $A$ if and only if $p=0$, which makes $P$ the orthocenter. For the specific symbol-crunching, let the vertices of the triangle have Cartesian coordinates $$A=(0,0) \qquad B = (c,0) \qquad C = (b\cos A, b\sin A)$$ Then $P$ is given by $$\begin{align} x &=\frac{-a^2 + b^2 + c^2 + (a^2-b^2)p}{ 2 c} \\[10pt] y &= \frac{a b ((-a^2+b^2+c^2)(a^2-b^2+c^2) + p(a^4 - 2 a^2 b^2 + b^4 + a^2 c^2 + b^2 c^2 - 2 c^4))}{ 2(-a+b+c)(a+b-c)(a-b+c)(a+b+c)r} \end{align}$$ where $r$ is the circumradius. Then, the corresponding point of $\triangle PBC$ is more of a mess to calculate, since $P$ is already much more complicated than $A$, and since we have to replace $b^2\to|PC|^2$ and $c^2\to|PB|^2$ in the barycentric formulas; luckily, Mathematica doesn't see this as too much of a burden, and dutifully churns out the coordinates of the new point. $$\begin{align} x &= p\;\frac{ \left(\begin{array}{l} \phantom{+} 2 (a^4 b^2 - 2 a^2 b^4 + b^6 + 2 a^2 b^2 c^2 - 2 a^2 c^4 -3 b^2 c^4 + 2 c^6) \\ -p(a^6 + 2 a^4 b^2 - 7 a^2 b^4 + 4 b^6 - 3 a^4 c^2 + 12 a^2 b^2 c^2 - b^4 c^2 - 5 a^2 c^4 - 10 b^2 c^4 + 7 c^6) \\ + p^2(a^6 - 3 a^2 b^4 + 2 b^6 - 2 a^4 c^2 + 6 a^2 b^2 c^2 - b^4 c^2 - 2 a^2 c^4 - 4 b^2 c^4 + 3 c^6) \end{array}\right)}{2 c (-(a^2 + b^2 - c^2) (a^2 - b^2 + c^2) + p(2 a^4 - a^2 b^2 - b^4 - a^2 c^2 + 2 b^2 c^2 - c^4))} \\[1em] y &= p\cdot ab\;\frac{\begin{array}{l} \phantom{\cdot}\left( -a^4 + 2 a^2 b^2 - b^4 + 2 b^2 c^2 - c^4 + p(a^2 - b^2 - a c + c^2) (a^2 - b^2 + a c + c^2) \right) \\ \cdot\left( -2 b^2 (a^2 - b^2 + c^2) + p(a^4 + a^2 b^2 - 2 b^4 - 2 a^2 c^2 + b^2 c^2 + c^4) \right) \end{array}}{ 2 (-a + b + c) (a + b - c) (a - b + c) (a + b + c) (\cdots)} \end{align}$$ Barring degeneracies in the triangle, these coordinates simultaneously vanish (so that this secondary point is the required $A$) when and only when $p$ itself vanishes; that is, when $P$ is the orthocenter. $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4352943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do we not name $-a+bi$ in relation to $a+bi$? Given a complex number $a+bi$, it has a complex conjugate $a-bi$. The product of this complex number with its complex conjugate gives $(a+bi)(a-bi)=a^2+b^2$. One might imagine flipping the sign of the real part instead of the imaginary part to get a sort of "anticonjugate", resulting in a similar product of $(a+bi)(-a+bi)=-(a^2+b^2)$. Clearly this "anticonjugate" is the negative of the conjugate. I suspect I've never seen this concept before because of either (1) no finds this anticonjugate useful or (2) the negative of the complex conjugate is considered without giving it a special name. Is there a different reason why we don't seem to use or consider "anticonjugates"? Or does the above account for this perception?
Complex conjugates are important because $i$ and $-i$ are fundamentally indistinguishable by definition; $i$ is defined to be a number satisfying the equation $i^2 = -1$, but of course $-i$ must satisfy the same equation. So "any fact" which can be stated about complex numbers must remain true if we swap all occurrences of $i$ with $-i$ (though one must take care with "hidden" occurrences). Complex conjugation is therefore a mapping of complex numbers which preserves many algebraic properties. In contrast, complex anticonjugation as defined in your question does not preserve any useful properties, because $1$ and $-1$ are not fundamentally indistinguishable; $-1$ is not a successor of the number $0$, it is not a multiplicative identity such that $1x \equiv x$, nor does it satisfy any other reasonable definition of the number $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4353066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Multiplicity of root $1$ of $nX^{n+2}-(n+2)X^{n+1}+(n+2)X-n$? I'm solving this homework problem: Show that $1$ is a root of the polynomial $nX^{n+2}-(n+2)X^{n+1}+(n+2)X-n$. Determine its multiplicity. I believe the answer is $n+2,$ and currently working on proving this. My intuition stems from the fact that when $n=1, P(x)=(x-1)^3, so$ 1 is a polynomial of multiplicity $3.$ I attempted this: calling the polynomial $P(x),$ we see that $P(1)=0,$ hence $1$ is a root for sure. To determine its multiplicity, I'm thinking about: Taking successive derivatives $P'(1), P''(1) \dots P^{(n+3)}(1) $ of the polynomial and showing that all but the last one $P^{(n+3)}(1)$ vanishes. But I'm a bit skeptical because the antiderivative of a polynomial doesn't always have multiplicity one more than the original polynomial, which forms the basis/idea of the method I'm thinking of. For example $Q(x):=2x$ has the root $0$ of multiplicity $1,$ but the antiderivative $x^2+1$ doesn't have any real root. So my question is: in order to answer the question in the image, what is the exact theorem should we use? Is it something like this? Proposed theorem: If $Q(x)$ has the real root $a$ of order $k,$ then its antiderivative with constant of integration $0$ has the real root $a$ of order $k+1.$ The above seems to be true, and if yes, can we use this to show that the successive derivatives $P'(1), P''(1) \dots P^{(n+2)}(1) $ of the polynomial vanish but $P^{(n+2)}(1) $ doesn't vanish, and this'll show that $1$ is a root of multiplicity $n+2.$ Is my idea correct? ADDENDUM/EDIT: Can we arrive at the proof that $1$ is a root of multiplicity $n+2$ by using induction? At $n=1,$ the multiplicity is $1+2=3,$ so the induction can start, and now we just need to show the induction step. Will this work? P.S. As per Dietrich's answer, the multiplicity seems to be $3$ irrespective of $n.$ So can we just prove this using the derivatives above or by induction?
We may also look at what the coefficients tell us: the polynomial $$ \ p(x) \ \ = \ \ n·x^{n+2} \ - \ (n+2)·x^{n+1} \ + \ 0·x^n \ + \ \ldots \ + \ 0·x^2 \ + \ (n+2)·x \ - \ n \ \ $$ is anti-palindromic, which is to say that the coefficients "read left-to-right" are the negative of the sequence "read right-to-left". Consequently, $ \ x \ = \ 1 \ $ must be a zero of such a polynomial, as the terms in $ \ p(1) \ $ then "cancel in pairs". We also see from the Rule of Signs that $ \ p(x) \ $ has three "sign-changes", indicating that it has either three or one positive real zeroes, regardless of the parity of $ \ n \ \ . $ Since for $ \ n \ $ even, $$ \ p(-x) \ \ = \ \ n·x^{n+2} \ + \ (n+2)·x^{n+1} \ + \ 0·x^n \ + \ \ldots \ + \ 0·x^2 \ - \ (n+2)·x \ - \ n \ \ $$ has one "sign-change", there is also one negative real root; for $ \ n \ $ odd, $$ \ p(-x) \ \ = \ \ -n·x^{n+2} \ - \ (n+2)·x^{n+1} \ + \ 0·x^n \ + \ \ldots \ + \ 0·x^2 \ - \ (n+2)·x \ - \ n \ \ $$ has no "sign-changes", so in this case, $ \ p(x) \ $ has no negative real roots. Upon employing polynomial/synthetic division for $ \ x \ = \ 1 \ \ , $ we obtain the "reduced" polynomial $$ p_1(x) \ \ = \ \ n·x^{n+1} \ \underbrace{- \ 2·x^n \ - \ 2·x^{n-1} \ + \ \ldots \ - \ 2·x^2 \ - \ 2·x}_{n \ \text{terms}} \ + \ n \ \ , $$ which is palindromic, thus having the property that if $ \ r \ $ is a zero, then $ \ \frac{1}{r} \ $ is also. We observe that $ \ x \ = \ 1 \ $ is a zero of this polynomial as well, so it also has a zero $ \ x \ = \ \frac11 \ = \ 1 \ \ , \ $ which establishes that the three positive real zeroes of $ \ p(x) \ $ are $ \ 1 \ $ with multiplicity $ \ 3 \ \ . $ With $ \ n \ $ odd then, there are these three real zeroes with the remaining $ \ (n - 1) \ $ [even] zeroes forming complex-conjugate pairs. For $ \ n \ $ even, the anti-palindromic character of $ \ p(x) \ $ implies that $ \ x \ = \ -1 \ $ is also a zero, giving us the expected three positive and one negative real zeroes and $ \ (n - 2) \ $ [even] complex-conjugate zeroes. It might be mentioned, by Viete's relations, that as the product of all $ \ (n + 2) \ $ zeroes of $ \ p(x) \ $ is $ \ +1 \ $ for $ \ n \ $ odd and $ \ -1 \ $ for $ \ n \ $ even, the product of the complex zeroes alone is always $ \ +1 \ \ . $ This means that the complex conjugate zeroes are in pairs $ \ r \ , \ \overline{r} \ = \ \frac{1}{r} \ \ , $ so all of the complex zeroes have unit modulus (so in fact all of the zeroes do). We can also demonstrate the triple multiplicity by polynomial division. Dividing $ \ p_1(x) \ $ by $ \ (x - 1) \ $ produces $$ p_2(x) \ \ = \ \ n·x^n \ + \ \underbrace{(n - 2)·x^{n-1} \ + \ (n - 4)·x^{n-2} \ + \ \ldots \ + \ (n - [2n - 2] )·x \ + \ (n - 2n)}_{n \ \text{terms}} $$ $$ = \ \ n·x^n \ + \ (n - 2)·x^{n-1} \ + \ (n - 4)·x^{n-2} \ + \ \ldots \ - \ (n - 4)·x^2 \ - \ (n -2)·x \ - \ n \ \ , $$ which is anti-palindromic, and so has $ \ x \ = \ 1 \ $ as a zero. One further division by $ \ (x - 1) \ $ yields $$ p_3(x) \ \ = \ \ n·x^{n-1} \ + \ \underbrace{2·(n - 1)·x^{n-2} \ + \ 3·(n - 2)·x^{n-3} \ + \ \ldots \ + \ (n - 1)·(n - [n - 2] )·x \ + \ n·(n - [n - 1])}_{(n - 1) \ \text{terms}} $$ $$ = \ \ n·x^{n - 1} \ + \ 2·(n - 1)·x^{n-2} \ + \ 3·(n - 2)·x^{n-3} \ + \ \ldots $$ $$ + \ 3·(n - 2)·x^2 \ + \ 2·(n - 1)·x \ + \ n \ \ . $$ Since the exponents in the terms are non-negative integers, $ \ p_3(x) \ $ has only positive coefficients, so $ \ x \ = \ 1 \ $ cannot be one of its zeroes. Hence, $ \ p(x) \ $ has a factor limited to $ \ (x - 1)^3 \ \ . $
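As a quick sanity check on the multiplicity claim, here is a small SymPy sketch (my own illustration, not part of the original argument) that counts how many successive derivatives of $p(x)$, starting with $p$ itself, vanish at $x=1$ for several values of $n$:

```python
import sympy as sp

x = sp.symbols('x')
for n in [1, 2, 3, 5, 8]:
    p = n*x**(n + 2) - (n + 2)*x**(n + 1) + (n + 2)*x - n
    mult, q = 0, p
    # the multiplicity of the root 1 equals the number of successive
    # derivatives (starting with p itself) that vanish at x = 1
    while sp.expand(q).subs(x, 1) == 0:
        q = sp.diff(q, x)
        mult += 1
    print(n, mult)   # prints multiplicity 3 for every n
```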
{ "language": "en", "url": "https://math.stackexchange.com/questions/4353182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
If the axiom of replacement implies the axiom of specification, why are both mentioned In Tao's Book on Analysis I, we are asked to prove that the axiom of replacement implies the axiom of specification. This implication seems to be true even outside of the environment Tao sets up in his book, as I've seen it mentioned elsewhere. During my own quest into the foundations-of-mathematics jungle, I've come across the more standard lists for the Zermelo-Fraenkel Axioms of set theory, and they all seem to mention both of these axioms. My question is simply, if one implies the other, and the goal of an axiomatization is to find the fewest number of comprehensible axioms, then why are both of them listed? Is it solely to show that specification is also possible? Or is it because this implication is based on some other tacitly assumed truth. None of the above?
Two reasons, one historical and one technical. The historical reason is that when Zermelo first formulated the axioms of set theory (Zermelo set theory), Replacement was not an axiom. That was Fraenkel's contribution later on, giving us the more familiar ZF axioms. (Well technically Skolem/von Neumann also contributed the axiom of regularity). The technical reason is that in set theory one often considers structures that satisfy some partial collection of the ZFC axioms. For example, the limit stages of the von Neumann hierarchy satisfy all the ZFC axioms except Replacement. So we can say pretty succinctly that they satisfy ZFC minus Replacement, instead of the more mouthful "they satisfy ZFC minus Replacement plus Specification".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4353559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Showing infimum of distance between sets is positive I am trying to understand the proof of Lemma 45.1 in Bartle, The Elements of Real Analysis. The lemma is used to prove that diffeomorphisms map sets with zero content into sets with zero content. Bartle leaves to reader (with hint that is $\bar{A}$ is compact): If $\bar{A} \subset \Omega \subset \mathbb{R}^p$ where $\Omega$ is an open set and $\bar{A}$ is closed and bounded, then $\inf \{ \|a-x\|: a\in \bar{A}, x \notin \Omega\}> 0$. My attempt: I know $\{ \|a-x\|: a\in \bar{A}, x \notin \Omega\} = \{ \|a-x\|: (a,x) \in \bar{A} \times \Omega^c\}$ where $\Omega^c = \mathbb{R}^p - \Omega$, and the Euclidean norm $\|a-x\|$ is a continuous function of its arguments. If I knew that $\bar{A} \times \Omega^c$ was compact then I would have the infimum equal to $\|a'-x'\|$ for some point $(a',x')$. But since $\bar{A} \cap \Omega^c = \emptyset$ we must have $\|a'-x'\| > 0$. But I am stuck when $\Omega^c$ is not compact.
The key here is to work with the function $a \mapsto d(a, \Omega^c) := \inf\{\|a-x\|:x \in \Omega^c\}$, where it is easy to prove both that $$\tag{1}\inf\{\|a-x\|: a \in \bar{A}, x \in \Omega^c\}= \inf\{d(a, \Omega^c): a \in \bar{A}\},$$ $$\tag{2} d(\cdot,\Omega^c) \in C(\bar{A})$$ Since $\Omega$ is an open set, for any $a \in \bar{A} \subset \Omega$ there exists $\delta_a > 0$ such that the open ball $B(a;\delta_a)$ is contained in $\Omega$. If $x \in \Omega^c$, then $x \notin B(a; \delta_a)$ and $\|a-x\| \geqslant \delta_a > 0$. This implies that $d(a, \Omega^c) \geqslant \delta_a > 0$. Since $\bar{A}$ is closed and bounded and, hence, compact and the distance function is continuous, there exists $a^* \in \bar{A}$ such that $$\inf\{\|a-x\|: a \in \bar{A}, x \in \Omega^c\}=\inf\{d(a, \Omega^c): a \in \bar{A}\}= d(a^*,\Omega^c) \geqslant \delta_{a^*} > 0$$ Proof of (2). To prove that $a \mapsto d(a,\Omega^c)$ is continuous note that for $a_1,a_2 \in \bar{A}$ and any $x \in \Omega^c$, we have, by the reverse triangle inequality $$d(a_1,\Omega^c)- \|a_1-a_2\| \leqslant \|a_1-x\| - \|a_1-a_2\| \leqslant\|a_2-x\|$$ Thus, $d(a_1,\Omega^c)- \|a_1-a_2\| \leqslant \inf \{\|a_2 - x\|: x \in \Omega^c\} = d(a_2,\Omega^c)$ and after rearranging, $$d(a_1,\Omega^c) - d(a_2, \Omega^c) \leqslant \|a_1 - a_2\|$$ Whence, switching $a_1$ and $a_2$ leads to $$|d(a_1,\Omega^c) - d(a_2, \Omega^c)| \leqslant \|a_1 - a_2\|,$$ and it follows that $a \mapsto d(a, \Omega^c)$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4353715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
if $f(f(x+y))=f(x^2)+f(y^2)$ then $f(x)=?$ if $f(f(x+y))=f(x^2)+f(y^2)$ then $f(x)=?$ for all integers ($f: \mathbb Z \rightarrow \mathbb Z$) I know how to solve the following problem though: if $f(f(x+y))=f(2x)+2f(y)$ then $f(x)=?$ We can easily analyze that $f(x)$ here (in the second problem) is a linear function. And hence solve it by using the linear equation. But, As for the main problem (the one I mentioned first) I don't know how to proceed. Should I use the quadratic equation? (my calculation says its a quadratic function) Would be greatful if anyone could help me with this... Thank you
Substituting $y=-x$ grants $f(f(0))=2f(x^2)$. Thus $f(x^2)$ is constant for all $x$, i.e. $f$ is constant for nonnegative numbers, the range of values for $x^2$. (If your $f$ has domain $\mathbb R$ or something else, you'll need to specify to proceed further.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4353811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Expected value of sin of sum of n random angles Consider the following problem. Let $θ_1,θ_2,...θ_n ∈[0, \frac{π}{2}]$ be independent and uniformly distributed variables. Find $E[sin(θ_1 + ... + θ_n)].$ I was able to solve for $n=1$ (of course), but I'm not very sure how to go from there. I was thinking of using the linearity of expectation and doing induction on n (basically split up the $sin$ into two $sin*cos$ parts), but I'm not so sure if that works with deterministic functions of variables. Any help is welcome, thank you!
This is made easy when one uses complex numbers (in particular the formula for the sine), which converts the summation to a product and makes the process of separating the independent parts easier. Note that $\sin x = \frac{e^{ix} - e^{-ix}}{2i}$ for all $x$ real. Therefore, we get $$ \sin(\theta_1+\ldots+\theta_n) = \frac{e^{i(\theta_1+\ldots+\theta_n)} - e^{-i(\theta_1+\ldots+\theta_n)}}{2i}= \frac{\prod_{j=1}^ne^{i\theta_j} - \prod_{j=1}^n e^{-i\theta_j}}{2i} $$ In particular, extending the definition of the expectation to complex-valued random variables using the obvious $\mathbb E[U+iV] = \mathbb E[U] + i\mathbb E[V]$ and noting that properties of the real-valued expectation carry over, $$ \mathbb E[\sin(\theta_1+\ldots+ \theta_n)] =\frac 1{2i} \left(\mathbb E\left[\prod_{j=1}^n e^{i\theta_j}\right] - \mathbb E\left[\prod_{j=1}^n e^{-i\theta_j}\right] \right) $$ Now, if $\theta_j$ are independent, then so are the collections $\{e^{i \theta_j}\}$ and $\{e^{-i \theta_j}\}$ (this is easily seen using an argument analogous to the real number situation). In particular, writing $\theta$ for a random variable with the common distribution of the $\theta_j$, we get $$ \mathbb E\left[\prod_{j=1}^n e^{i\theta_j}\right] = \prod_{j=1}^n \mathbb E[e^{i \theta_j}] = (\mathbb E[\cos \theta] + i \mathbb E[\sin \theta])^n $$ Likewise $$ \mathbb E\left[\prod_{j=1}^n e^{-i\theta_j}\right] = \prod_{j=1}^n \mathbb E[e^{-i \theta_j}] = (\mathbb E[\cos \theta] - i \mathbb E[\sin \theta])^n $$ which upon substitution give the answer. Alternately, if you wish to stay in the realm of real numbers for a longer time (an eventual formula will involve complex numbers in some form), define the sequences $a_n, b_n$ for $n \geq 1$ by $$ a_n = \mathbb E[\sin(\theta_1+\ldots+\theta_n)] \\ b_n = \mathbb E[\cos(\theta_1+\ldots+\theta_n)] $$ Then, the formulas $\sin(A+B) = \sin A \cos B + \cos A \sin B$ and $\cos(A+B) = \cos A \cos B - \sin A \sin B$, along with the natural split $$ \theta_1+\ldots+\theta_n = \underbrace{\theta_n}_{A} + \underbrace{\theta_1+\ldots+\theta_{n-1}}_{B} $$ and independence, lead instantly to the two recursion formulas $$ a_{n} = b_{n-1}a_1+b_1a_{n-1} \quad ; \quad b_n = b_{n-1}b_1 - a_{n-1}a_1 $$ That can be written in matrix form as $$ \begin{pmatrix} a_n \\ b_n \end{pmatrix} = \begin{pmatrix} b_1 & a_1 \\ -a_1 & b_1 \end{pmatrix} \begin{pmatrix} a_{n-1} \\b_{n-1}\end{pmatrix} $$ which has the solution $$\begin{pmatrix} a_n \\ b_n \end{pmatrix} = \begin{pmatrix} b_1 & a_1 \\ -a_1 & b_1 \end{pmatrix}^{n-1} \begin{pmatrix} a_{1} \\b_{1}\end{pmatrix} $$ $a_1,b_1$ can be obtained straightforwardly, while $a_n,b_n$ can be obtained by diagonalizing the above matrix to find an explicit formula for the $(n-1)$th power. This will involve complex numbers; for that formula, see here. Substitution provides an answer.
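Here is a small numerical cross-check of the recursion (my own sketch; it assumes, as in the question, that the $\theta_j$ are i.i.d. uniform on $[0,\pi/2]$, so that $a_1=b_1=2/\pi$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Monte Carlo estimate of E[sin(theta_1 + ... + theta_n)]
theta = rng.uniform(0, np.pi/2, size=(1_000_000, n))
mc = np.sin(theta.sum(axis=1)).mean()

# matrix-power form of the recursion, with a_1 = b_1 = 2/pi
a1 = b1 = 2/np.pi
M = np.array([[b1, a1], [-a1, b1]])
a_n, b_n = np.linalg.matrix_power(M, n - 1) @ np.array([a1, b1])
print(mc, a_n)   # the two values agree to a few decimal places
```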
{ "language": "en", "url": "https://math.stackexchange.com/questions/4353932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
No. of arrangements of letters of SUCCESS such that first C precedes first S In how many ways can we arrange the letters of the word SUCCESS such that the first C precedes the first S My attempt there are four strings that satisfy the above condition, namely $$CSSSC, CSSCS, CSCSS, CCSSS$$ Now, we need to arrange $U, E$. In each string, there are six gaps where we can place our $U,E$. Since, each gap can contain both $U$ and $E$ , either one of them or neither. Thus, number of ways of arranging $U$ and $E$ is 36. Thus, number of arrangements should be $36 \cdot 4=144$. Answer is given $168$. Can anyone point out where am I going wrong
Take for example, $CSSSC$. Place a $U$ in one of the six gaps like this $CSSUSC$. Now you have six letters and seven available gaps to place an $E$. Hence $4\times 6 \times 7=168$.
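The count is also small enough to verify by brute force; here is a throwaway Python check (my addition, not part of the original answer):

```python
from itertools import permutations

words = {"".join(p) for p in permutations("SUCCESS")}   # 420 distinct words
count = sum(1 for w in words if w.index("C") < w.index("S"))
print(len(words), count)   # 420 168
```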
{ "language": "en", "url": "https://math.stackexchange.com/questions/4354073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
convergence in measure implies integral goes to zero Prove if $f_n$ converges to $0$ in measure and $\mu$ is a finite measure, then $$\lim_{n \rightarrow \infty} \int_X \frac{f_n}{1+\vert f_n \vert} d \mu = 0$$ So I know that if $f_n$ converges to $0$ in measure, then $$\mu(\{x \in X: \vert f_n(x) \vert > \epsilon\})=0$$ Any hints would be greatly appreciated. I wanna say I can maybe bound the integral using triangle inequality? But unsure if that would work.
Hint: Let $E_n = \{x \in X: |f_n(x)| > \epsilon\}$ and note that $$x \in E_n \implies\frac{|f_n|}{1+|f_n|} > \frac{\epsilon}{1 + \epsilon}, \quad x \in X \setminus E_n \implies\frac{|f_n|}{1+|f_n|} \leqslant \frac{\epsilon}{1 + \epsilon}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4354205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Derivative of piecewise function at a point (function is given) $f(x)= \begin{cases} x^2 & \text{if } -2 ≤ x ≤ 2, \\ 2x & \text{if } x > 2. \end{cases}$ I am finding the derivative at $x = 2$. I am saying that the derivative is $4$ because $x =2$ is on $f(x)$ $=$ $x^2$, and the derivative of $x^2$ is $2x$ and $2(2) = 4$, and thus $f'(2) = 4$, but my classmates say that the function is non-differentiable. I don't think the second part of the piecewise function affects $x = 2$, but they're all saying I'm incorrect, so I'm looking for reassurance.
The problem with your argument is that the derivative depends not just on the value of $f(x)$ at $x=2$, but on its behaviour around this point. This behaviour differs depending on whether you are approaching from the left or the right. A nice way to see this is to use the limit definition of the derivative: $$ f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h} $$ For your function, at $x=2$ this limit does not exist. To see this, we calculate the left and right limits and note that they are not equal. The right limit is: $$ \lim_{h\to 0^{+}}\frac{f(2+h)-f(2)}{h} = \lim_{h\to 0^{+}}\frac{2h}{h} = 2 $$ while the left limit gives: $$ \lim_{h\to 0^{-}}\frac{f(2+h)-f(2)}{h} = \lim_{h\to 0^{-}}\frac{4h+h^{2}}{h} = 4 $$ Then since these two limits differ, we say that the (two-sided) limit which defines $f'(2)$ does not exist, and so $f(x)$ is not differentiable at $x=2$.
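If you want to see the two one-sided limits without computing them by hand, here is a quick SymPy sketch (my own illustration):

```python
import sympy as sp

h = sp.symbols('h')
left = sp.limit(((2 + h)**2 - 4)/h, h, 0, '-')   # approach via the x**2 branch
right = sp.limit((2*(2 + h) - 4)/h, h, 0, '+')   # approach via the 2*x branch
print(left, right)   # 4 2 -- unequal, so f is not differentiable at x = 2
```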
{ "language": "en", "url": "https://math.stackexchange.com/questions/4354488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the order of an element in $GL(2,\mathbb F_p)$. Let $\mathbb F_p=\mathbb{Z}/p\mathbb{Z}$. I am looking for a general method of finding the order of an element in $GL(2,\mathbb F_p)$. Suppose $p=7$ and I am given the element $\begin{pmatrix} 5 &1 \\ 1& 1\\\end{pmatrix}$, and I am asked to find the order, then how should I proceed. One way to proceed is to calculate $A^k$ for each $k$ and see when it takes the value $I_2$ for the first time. But this becomes a tedious task when the order is large. I think there is some other way to figure out the order by some group representation technique, but I do not know that topic very well. Can someone help me?
The eigenvalues are $\lambda_{1,2}=3\pm\sqrt5$, so your matrix is similar to $$ D=\left(\begin{array}{cc}3+\sqrt5&0\\ 0&3-\sqrt5\end{array}\right).$$ As $5$ is not a quadratic residue modulo $7$, these reside in the extension field $\Bbb{F}_{49}$. Because $\lambda^2=3^2\pm6\sqrt5+5=\mp\sqrt5$, it follows that $\lambda_{1,2}^4=5$. As $5$ is a primitive root modulo $7$ (it has order six), we get $\lambda_{1,2}^{24}=5^6=1$, while $\lambda_{1,2}^{8}=5^2=4\ne1$ and $\lambda_{1,2}^{12}=5^3=6\ne1$; hence the eigenvalues, and therefore also the matrix $D$, have order exactly $24$. Similar matrices share the same order, so we are done.
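When in doubt, the order is also cheap to confirm by direct computation, which is feasible here because the matrix is small; a brute-force sketch of my own:

```python
import numpy as np

p = 7
A = np.array([[5, 1], [1, 1]])
M, k = A % p, 1
# multiply by A modulo p until we return to the identity
while not np.array_equal(M, np.eye(2, dtype=int)):
    M = (M @ A) % p
    k += 1
print(k)   # 24
```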
{ "language": "en", "url": "https://math.stackexchange.com/questions/4354719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Inverting a huge sparse banded matrix I have a matrix of $9,200 \times 9,200$ elements. I have approximately $90$ of these matrices to invert. The reason for this is I am running a nonlinear regression on a problem with significant errors in the independent and dependent variables. Each run, I update the covariance matrix of the dependent variable, and therefore requires inversion to include in the weighted regression problem. The matrix has a sparse banded structure. The structure of the first $1,000$ values looks like this and the structure of the first $20$ values looks like this. That is, sets of $2 \times 2$ blocks running down the main diagonal, and then also off diagonals seperated by $368$ zeros. The matrix is also symmetric positive definite. I believe this is called a banded matrix. Given only 0.5% of the matrix is non-zero, I believe there is a faster way of computing this, but am struggling.
I actually believe I have answered my own question. The key to shortening this time is recognizing that the repeating structure of the matrix makes most row-column multiplications vanish. Note: the matrix consists of repeating units of $2\times2$ blocks. We can consider the symmetric matrix to be a set of $2\times9200$ block matrices arranged as columns. These will only be non-zero when multiplied by themselves, or by the columns whose index differs by a multiple of $368$ (there is a diagonal of $2\times2$ block matrices every 368 values). We can arrange these columns into 184 matrices of 25 stacked block matrices. We can then realise that each of these columns also has a bunch of bands of zeros that can be removed. We can therefore treat this covariance matrix as a set of 184 $50\times50$ matrices, whose inversion is far quicker. Currently on python I can get 21 seconds to invert the $9200\times9200$ matrix, and 0.75 seconds using the above method.
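In other words, after a suitable reordering of indices the matrix becomes block diagonal, and a block-diagonal matrix can be inverted block by block. A generic NumPy sketch of that idea, where `perm`, `k` and `m` are placeholders I introduced (for the matrix above one would build `perm` from the every-368 striding described in the answer, with `k = 184`, `m = 50`):

```python
import numpy as np

def block_diag_inverse(C, perm, k, m):
    """Invert C, assuming C[perm][:, perm] is block diagonal
    with k blocks of size m x m."""
    P = C[np.ix_(perm, perm)]             # reorder rows and columns
    Pinv = np.zeros_like(P)
    for i in range(k):
        sl = slice(i*m, (i + 1)*m)
        Pinv[sl, sl] = np.linalg.inv(P[sl, sl])
    inv_perm = np.argsort(perm)           # undo the permutation
    return Pinv[np.ix_(inv_perm, inv_perm)]
```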
{ "language": "en", "url": "https://math.stackexchange.com/questions/4354904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it practical to find actual very large prime numbers (such as those up to 100 digits)? I just asked How to find extremely large BigInt prime numbers exactly in JavaScript? and was explained that all I ever can hope for is a "probable" prime, not an exact/actual prime if they are that large (up to 100 digits). To me, the reason for that would be because it takes too long to compute the possible factors of the potential prime (which is what my JS code was saying to me). Is this true? Is there no way to find an exact/actual (i.e. non- "probable") prime number that is 70 to 100 digits long? If it is possible, how is it done (at least at a high, non-software/non-code level)? If it is possible, can we go to larger integers like up to 65536 digits? If not, why can't it be done/calculated with certainty, is there no possible way?
It is absolutely practical and can be done deterministically. The AKS (Agrawal–Kayal–Saxena) primality test is a deterministic test which proves primality. It is also unconditional (does not depend on any unproved but widely believed hypothesis). The AKS test's complexity has been improved over time and is by now of order $O(t^6)$ up to logarithmic factors, where $t$ is the bitsize of the prime candidate you are testing (so $t=100/\log_{10} 2 \approx 332$ for your 100-digit number). So, you can randomly choose an odd integer of 100 digits and run AKS on it. After repeating this roughly $\ln(10^{100})\approx 230$ times (the reciprocal of the density of primes of that size, by the Prime Number Theorem) you are likely to discover a provable prime of the form you like. The AKS paper from the Annals of Mathematics is available here. Use google for much more information on AKS.
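In practice one usually generates the candidate with a fast probabilistic test and only then, if a certificate is required, runs a deterministic proof such as AKS (or ECPP) on the single survivor. A sketch with SymPy (my own; note that SymPy's `isprime` uses a strong Baillie–PSW-style test for numbers this large, which is not a primality proof, though no counterexample is known):

```python
from sympy import randprime, isprime

p = randprime(10**99, 10**100)    # a random 100-digit (probable) prime
print(len(str(p)), isprime(p))    # 100 True
```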
{ "language": "en", "url": "https://math.stackexchange.com/questions/4355040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is sum of integer values of $m$ so that $\frac1x+\frac1{x+1}=\frac1m$ has answer? What is sum of integer values of $m$ so that $\frac1x+\frac1{x+1}=\frac1m$ has answer? $1)\text{zero}\qquad\qquad2)1\qquad\qquad3)7\qquad\qquad4)-10$ This is a high school level problem so I think by answer it means $x\in\mathbb{R}$. Here is my answer: We have $m\neq0$ and $x\neq0,-1$ $$\frac{2x+1}{x^2+x}=\frac1m\;\Rightarrow\; x^2+x=2mx+m\;\;\Rightarrow\;x^2+(1-2m)x-m=0$$ We have$\;\Delta=(1-2m)^2+4m=4m^2+1$. Which is always positive. Hence there are always two values for $x$ satisfies the original equation regardless of value of $m$ (if I plug in $x=0,-1$ in above equation I get $m=0$ which is a contradiction). And answer is "zero". Is my solution right?
The answer must be $0$ (if there are at most finitely many such $m$) by symmetry. If $(x, x+1)$ yields a solution $m$, then $(-(x+1), -x)$ also yields the solution $-m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4355349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
misconception about pointwise/uniform convergence Can anyone clear up the misconception in my argumentation. If a sequence of functions $f_k(x)$ is pointwise convergent towards some limit $f(x)$ for every $x$, then at every point $x$ we can choose an arbitrary distance $\varepsilon$ such that from a follow index $n_0$ the distance of $f_k(x)$ with $k\ge n_0$ and $f(x)$ is smaller than $\varepsilon$. So why can't we take the maximum $K$ of all the $n_0$ regarding to all the $x$ and say from this $K$ we have $|f_k(x) - f(x)| < \varepsilon$ for every $x$ which means that $\sup_x |f_k(x) - f(x)|<\varepsilon$ for $k>K$? Is the reason that since the set of all these indices $n_0$ regarding to all the $x$ is overcountable and so a maximum does not exist?
Consider the sequence of functions $f_n(x)=x^n$ defined on $(0,1)$, with limit function $f(x)=0$. Say we try to satisfy the definition of uniform convergence with $\epsilon=0.1$. We have $|f_n(x)-f(x)|=x^n$ and so we want $x^n<0.1$. For $x=0.9$ we can take $n=22$ since then $(0.9)^{22}<0.1$. For $x=0.99$ we can take $n=230$ since $(0.99)^{230}<0.1$. For $x=0.999$ we can take $n=2302$ since $(0.999)^{2302}<0.1$. Clearly there is no $N$ we can take which would work for all $x\in(0,1)$. That is, given $x$ we need $n(x)>\log_{x}0.1$ and as $x$ approaches $1$ the values of $n(x)$ approach infinity. Another way of putting this is that, as you say, we cannot take the maximum of the set of all $n$'s, simply because that set is infinite. A maximum is guaranteed to exist for finite sets, but not for infinite ones, and indeed the maximum does not exist in this example, as the above calculation shows.
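The blow-up of the required index is easy to tabulate; here is a short Python check of the numbers quoted above (my own addition):

```python
import math

for x in [0.9, 0.99, 0.999]:
    n = math.ceil(math.log(0.1) / math.log(x))   # smallest n with x**n < 0.1
    print(x, n)   # 22, 230, 2302 -- growing without bound as x -> 1
```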
{ "language": "en", "url": "https://math.stackexchange.com/questions/4355497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Left multiplications for each linear map Problem During studying linear algebra, I've stumbled upon linear maps. So obviously, for any $(m \times m)$-matrix $A$, the map $$ \lambda_A (B)=AB $$ is a linear map from $\mathfrak{M}_{m,n}$ (the set of all $(m \times n)$-matrices) to itself. Now we are looking for the converse: Let $L:\mathfrak M_{m,n}\to\mathfrak M_{m,n}$ be a linear map. Is $L=\lambda_A$ for some $A\in\mathfrak M_{m,m}$? My Attempt #1 First, I tried using the natural projections $\pi_j:V_1\times\cdots\times V_n\to V_j$ and embeddings $\iota_j:V_j\to V_1\times\cdots\times V_n$ defined as $$ \pi_j(v_1,\cdots,v_n)=v_j,\qquad\iota_j(v_j)=(0,\cdots,v_j,\cdots,0) $$ (only the $j$-th coordinate of $\iota_j(v_j)$ equals $v_j$, everything else is zero). So first we could identify $\mathfrak M_{m,n}$ as $\mathbb R^m\times\cdots\times\mathbb R^m$ ($n$ times). Then I came up with this small example: Let $L:\mathfrak M_{2,2}\to\mathfrak M_{2,2}$ be defined as $$ L\begin{pmatrix} a & b \\ c & d \end{pmatrix}=\begin{pmatrix} 3a + 2c & 3b + 2d \\ a & b\end{pmatrix}. $$ Then $$ (\pi_1 \circ L\circ\iota_1)\mathbf e_1=\begin{pmatrix}3 \\ 1\end{pmatrix}, \qquad (\pi_2 \circ L\circ\iota_2)\mathbf e_2=\begin{pmatrix}2 \\ 0\end{pmatrix} $$ (where $\{\mathbf e_1,\mathbf e_2\}$ is the standard basis for $\mathbb R^2$) and thus $L=\lambda_A$ where $$ A=\begin{pmatrix}3 & 2 \\ 1 & 0\end{pmatrix}. $$ Now seeing this example, I guessed that for a given linear map $L$, the matrix $A\in\mathfrak M_{m,m}$ defined as $$ A= \begin{pmatrix} (\pi_1 \circ L \circ \iota_1)\mathbf e_1 & \cdots & (\pi_m \circ L \circ \iota_m)\mathbf e_m \end{pmatrix} $$ would satisfy the equation $L=\lambda_A$, but I don't really get how to prove this. My Attempt #2 In the example listed above, every row of the resulting matrix is a linear combination of the rows of the original matrix. So I thought about finding the coefficients of the linear combination; for example, $$ \pi_i(L(B^\mathbf t))=\sum_{j=1}^m a_{ij}\pi_j(B^\mathbf t) \qquad(i=1,\cdots,m). $$ Then defining $A=(a_{ji})$ would do. But does the equation above always have a solution? Overall This post has became quite long for me just writing up everything I could think of. To sum up, I would like to know how to carry on my attempts, or come up with a completely new one. Also in my attempts, I didn't really use the fact that $L$ is linear, so I wonder how to utilize it in my answer.
$\mathfrak M_{m,n}$ is an $mn$-dimensional space. Therefore the dimension of the vector space of all linear operators on $\mathfrak M_{m,n}$ is $(mn)^2$. However, the dimension of the subspace consisting of all linear operators of the form $\lambda_A$ is only $m^2$ (because $A$ is $m\times m$). Therefore, unless $n=1$, there is always some linear operator on $\mathfrak M_{m,n}$ that is not in the form of $\lambda_A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4355884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
$X,Y \sim N(\mu,\sigma^2)$ Suppose $\sigma_X^2=\sigma_Y^2 \neq 0 \implies X+Y,X-Y$ are independent. $X,Y \sim N(\mu,\sigma^2)$ Suppose $\sigma_X^2=\sigma_Y^2 \neq 0 \implies X+Y,X-Y$ are independent. I know that $\sigma_{X+Y}^2=\sigma_X^2+\sigma_Y^2 , \sigma_{X-Y}^2=\sigma_X^2+\sigma_Y^2$. If I'll show that $\phi_{X+Y}(t)\cdot\phi_{X-Y}(t)=\phi_{(X+Y)-(X-Y})(t)$ , it's enough ? $\phi:=$characteristic function There is another way ? Thanks !
Without additional assumptions on the joint distribution of $(X,Y)$ (similar questions in MSE assume for example that $X$ and $Y$ are i.i.d) the claim in the OP may not hold. * *Consider for example the following situation: $X\sim N(0,1)$, $\epsilon\sim Be(\pm1,1/2)$, $\epsilon$ and $X$ independent, and $Y=\epsilon X$. Since $$E[e^{itY}]=\frac12\big(E[e^{itX}]+E[e^{-itX}]\Big)=e^{-t^2/2}$$ we have that $Y\sim N(0,1)$. On the other hand, \begin{align} E[e^{it(X+Y)}]&=\frac12E[e^{i2tX}]+\frac12=\frac12e^{-2t^2}+\frac12\\ E[e^{it(X-Y)}]&=\frac12E[e^{i2tX}]+\frac12=\frac12e^{-2t^2}+\frac12 \end{align} but \begin{align} E[e^{it(X+Y)+is(X-Y)}]&=\frac12(E[e^{2itX}]+E[e^{2isX}])\\ &=\frac12(e^{-2t^2} + e^{-2s^2})\\ &\neq E[e^{it(X+Y)}]E[e^{is(X-Y)}] \end{align} *If the joint distribution of $X$ and $Y$ is a binormal, then any linear combination $aX+bY$ is normally distributed; furthermore, for any $a,b,c,d$, $(aX+bY,cX+dY)$ is binormal (possibly degenerated). If in addition, $E[X]=\mu=E[Y]$, $\sigma_X=\sigma=\sigma_Y$, and $\operatorname{cov}(X,Y)=0$, then $X+Y\sim N(2\mu,2\sigma^2)$, $X-Y\sim N(0,2\sigma^2)$ and so, \begin{align} E[e^{it(X+Y)+is(X-Y)}]&=E[e^{i(t+s)X}e^{i(t-s)Y}]\\ &= e^{i\mu (t+s) -\sigma^2(t+s)^2/2}e^{i\mu (t-s) -\sigma^2(t-s)^2/2}\\ &=e^{2i\mu t -\sigma^2(t^2+s^2)}\\ &=E[e^{it(X+Y)}]E[e^{is(X-Y)}] \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/4356015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are the solutions of a system of real polynomial equations continuous in the coefficients? Let $f_1(x,c_1),\ldots,f_n(x,c_n)$ be $n$ real polynomials in $n$ variables $x=(x_1,\ldots,x_n)$ of degree at most $d$ with coefficients $c=(c_1,\ldots,c_n)$. Thus, for each $i=1,\ldots,n$, we have $c_i\in\mathbb{R}^{{{n+d+1}\choose{d}}} $. Let $$ \Gamma(c)=\{x\in\mathbb{R}^n\mid f_1(x,c_1)=0,\ldots,f_n(x,c_n)=0\} $$ denote the set of real solutions of the given system of polynomial equations with coefficients $c=(c_1,\ldots,c_n)$. Moreover, define $$ C=\left\{c\in\mathbb{R}^{n\times{{n+d+1}\choose{d}}} \ \middle|\ \Gamma(c)\neq \emptyset\right\} $$ to be the set of coefficients for which there exists a real solution to the associated system of equations. Let us understand the solution set $\Gamma(c)$ as a set-valued function $\Gamma: C \to 2^{\mathbb{R}^n}$ over $C$. A (single-valued) continuous function $\gamma:C \to\mathbb{R^n}$ is said to be a continuous selection of the set-valued function $\Gamma: C \to 2^{\mathbb{R}^n}$ if $\gamma(c)\in\Gamma(c)$ for all $c\in C$. Question. Suppose $C$ is ``nice'', say open and connected (if more topological structure is needed, please feel free to assume so). Does $\Gamma$ admit a continuous selection? (A reference is much appreciated.) Thoughts: Of course, if there is the same number of real solutions over $C$, then this follows from an implicit function theorem. However, if solution paths intersect or the polynomials are not in general position, then I am not sure how to formally proceed. Related questions are here and here, but they only concern a single polynomial and the question is not quite the same.
Kato's theorem is for a continuous choice of complex solutions. This is different from what you are asking. Also, you need more than a constant number of solutions to apply the implicit function theorem. Here is a counterexample. A continuous selection will remain continuous when restricted to, say, a segment in $C$. Take $n=1$, $d=4$, $f(x, c(t))= ((x-1)^2-t)((x+1)^2+t)$. For $t=0$ we have 2 roots, $1$ and $-1$. For small $|t|$, if $t>0$ the roots are near $1$, and if $t<0$ they are near $-1$. There is no continuous selection $\gamma$. (You can also plot the solutions as function of $t$ to see that.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4356173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
is it true that $\zeta(\frac{1}{2} + bi) = 0 \implies \zeta(a + bi) \neq 0$ for $0 < a < 1$, $a\neq \frac{1}{2}$? I was wondering if a zero on the critical line implies no zero for the zeta function anywhere else in the critical strip for the same ordinate and vice-versa? I don't know if there is a proof for this. That is does, $\zeta(\frac{1}{2} + bi) = 0 \implies \zeta(a + bi) \neq 0$ for $0 < a <\frac{1}{2}$ and $\frac{1}{2} < a < 1$ Thank you
No, it is not. I'll try contraposition. Your statement is: if $\zeta(\frac12+bi)=0$, then for all $\frac12\ne a\in \Bbb R$, $\zeta(a+bi)\ne0$. So, its contrapositive is: if there exists $\frac12\ne a\in \Bbb R$ such that $\zeta(a+bi)=0$, then $\zeta(\frac12+bi)\ne0$. The Riemann zeta function satisfies the functional equation $$ \zeta(x)=2^x\pi^{x-1}\sin\frac{\pi x}2\Gamma(1-x)\zeta(1-x) $$ Substituting $x=-2n$ $(0<n\in\Bbb Z)$, the factor $\sin\frac{\pi x}2$ vanishes, so $\zeta(-2n)=\zeta(-2n+0i)=0$. But $\zeta(\frac12+0i)=-1.46035\cdots\ne0$, so the contrapositive fails, and hence the statement is false.
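The two numerical facts used here are easy to check with mpmath (a quick sketch of my own):

```python
from mpmath import zeta

print(zeta(-2), zeta(-4), zeta(-6))   # trivial zeros: all print 0.0
print(zeta(0.5))                      # about -1.46035, so not a zero
```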
{ "language": "en", "url": "https://math.stackexchange.com/questions/4356343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Exercise 4, Section 18 of Munkres Topology Given $x_0 \in X$ and $y_0 \in Y$, show that the maps $f:X\to X\times Y$ and $g:Y\to X\times Y$ defined by $f(x)=x\times y_0$ and $g(y)=x_0\times y$ are imbeddings. My attempt: let $x,x’\in X$. Suppose $f(x)=f(x’)$. That means $x\times y_0 =x’\times y_0$. Thus $x=x’$. $f$ is injective. $f:X\to X\times Y$ defined by $f(x)=(f_1(x), f_2(x))= x\times y_0$. So $\pi_1 \circ f=f_1: X\to X$ and $\pi_2 \circ f=f_2: X\to \{y_0\}$. $f_1$ is the identity map and since domain and codomain are equipped with same topology, $f_1$ is homeomorphism, in particular $f_1$ is continuous. $f_2$ is continuous, since it’s a constant function, theorem 18.2(a). By theorem 18.4, $f$ is continuous. Now we work with the map $f’:X\to X\times \{y_0\}$. It is clear that image set $f(X)=X\times \{y_0\}$. Set $X\times \{y_0\}$ equipped with $\mathcal{T}_{S}$ topology, where $\mathcal{T}_{S}$ is the subspace topology of $\mathcal{T}_{X\times Y}$. It is easy to check $f’$ is bijective. By theorem 18.2(e), $f’$ is continuous. There are two ways to show $(f’)^{-1}$ is continuous. (1) The map $(f’)^{-1}:X\times \{y_0\} \to X$ defined by $(f’)^{-1}(x\times y_0)=x$ is precisely $\pi_1$ map with domain restricted from $X\times Y$ to $X\times \{y_0\}$. By theorem 18.2(d), $(f’)^{-1}$ is continuous. (2) let $V\in \mathcal{T}_{X}$. $(f’)^{-1}(V)=V\times \{y_0\}$, by basic set theory. Note $(X\times \{y_0\})\cap (V\times Y)= V\times \{y_0\}\in \mathcal{T}_{S}$, since $(V\times Y)\in \mathcal{T}_{X\times Y}$. Hence $f’$ is homeomorphism. Thus $f$ is imbedding. Similar proof show $g$ is also imbedding. This post is superposition of Exercise 4 from Munkres §18 and A careful solution to Problem 18.4 on page 111 of Munkres's Topology post. I think, this proof is more explicit and refined version of above two link(proofs). The notable difference is, I showed $f$ is continuous.
$f$ and $g$ are clearly injective. They are also continuous, since each of their component functions is continuous; hence the maps $f^{\prime}: X \rightarrow X \times\left\{y_{0}\right\}$ and $g^{\prime}: Y \rightarrow\left\{x_{0}\right\} \times Y$ obtained by restricting the range of $f$ and $g$ respectively are continuous. It remains to prove that $f^{\prime}$ and $g^{\prime}$ are open maps. If $U$ is open in $X$, then $f^{\prime}(U)=U \times\left\{y_{0}\right\}=\left(X \times\left\{y_{0}\right\}\right) \cap(U \times Y)$ is open in $X \times\left\{y_{0}\right\}$, so $f^{\prime}$ is an open map. Similarly $g^{\prime}$ is an open map. It follows that $f$ and $g$ are imbeddings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4356529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Transition matrix of Fibonacci sequence I'm stuck on this problem and would appreciate some help. The set of all sequences on $\mathbb{R}$ constructs a vector space with the operations $(a_n)_{n\in \mathbb{N}} +(b_n)_{n\in \mathbb{N}}=(a_n+b_n)_{n\in \mathbb{N}}$ and $\lambda (a_n)_{n \in \mathbb{N}}= (\lambda a_n)_{n \in \mathbb{N}}$. Also real sequences that satisfy the equation $a_{n+2}=a_{n+1}+a_n$ construct a subspace $F$ of the aforementioned vector space of sequences. a) Show that two sequences $(a_n)_{n \in \mathbb{N}}, (b_n)_{n\in \mathbb{N}}$ with $a_1=1,a_2=0$ and $b_1=0,b_2=1$ is a basis of $F$. b) Show that two sequences $\left(\frac{1 +\sqrt5}{2}\right)^{n-1}$ and $\left(\frac{1 -\sqrt5}{2}\right)^{n-1}$ are also a basis of $F$. c) Determine the Transition matrix between the bases in a) and the bases in b). I think I have a acceptable proof for a) but I haven't been able to make any progress between b) and c). For part a) I start by noticing that subspace $F$ is just the set of all Fibonacci sequences and therefore $(a_n)_{n \in \mathbb{N}}, (b_n)_{n\in \mathbb{N}}$ is entirely dependent on its initial conditions in other words $(a_1,a_2),(b_1,b_2).$ As such there is an obvious correspondence between this and the standard basis of $\mathbb{R^2}$. Which means that $x_{n+2}= \begin{pmatrix} x_{n+1} \\ x_n \end{pmatrix}$ then with $A= \begin{pmatrix} 1 &0 \\ 0&1 \end{pmatrix}$ it follows that $A \begin{pmatrix} x_{n+1} \\ x_n \end{pmatrix}=x_{n+2}$.
For b) you just have to verify that the two sequences are solutions of the recurrence and also linearly independent, as you should have done in a). That gives you a basis, provided you have already shown that the vector space has dimension 2. For c) you could write out the general terms of the sequences $(a_n)$ and $(b_n)$ in the basis given in b), using the initial conditions (see the sketch below). Also be careful, because $A\left(\begin{array}{c}x_{n+1} \\ x_{n}\end{array}\right)=x_{n+2}$ is not consistent: you have a vector on the left and a scalar on the right.
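Carrying out the hint for c): writing $a_n = c_1\varphi^{n-1} + c_2\psi^{n-1}$ with $\varphi=\frac{1+\sqrt5}{2}$, $\psi=\frac{1-\sqrt5}{2}$ (and similarly for $b_n$) and matching the initial conditions is a small linear solve, e.g. in SymPy (a sketch of my own):

```python
import sympy as sp

phi = (1 + sp.sqrt(5))/2
psi = (1 - sp.sqrt(5))/2
c1, c2 = sp.symbols('c1 c2')

# a_n = c1*phi**(n-1) + c2*psi**(n-1); impose a_1 = 1, a_2 = 0
sol_a = sp.solve([c1 + c2 - 1, c1*phi + c2*psi], [c1, c2])
# b_n likewise, with b_1 = 0, b_2 = 1
sol_b = sp.solve([c1 + c2, c1*phi + c2*psi - 1], [c1, c2])
print(sp.simplify(sol_a[c1]), sp.simplify(sol_a[c2]))
print(sp.simplify(sol_b[c1]), sp.simplify(sol_b[c2]))
# the four printed coefficients are the entries of the transition matrix
```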
{ "language": "en", "url": "https://math.stackexchange.com/questions/4356686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the line connecting the identity point and another point vertical in an elliptic curve? I'm reading through Elliptic Tales. Addition of 2 points on an elliptic curve is described as follows: $L$ is the line between $P$ and $Q$ and $R$ $L'$ is the line between $O$ and $P + Q$ and $R$ The book describes the algebraic process of adding together 2 points on an elliptic curve. First: It describes adding together $P$ and $Q$ to get $R$. It then says we need to connect $O$ and $R$ with a line, and where that line intersects $E$ will be the point $P + Q$. So far so good. It then says the line connecting $O$ and $R$ is vertical and is easy to describe in projective coordinates as $x = x_3z$ where $R$ is $(x_3, y_3)$. The line connecting $O$ and $R$ is $L'$ doesn't seem to be vertical. Clearly, in the picture it's slanted downwards. Does anyone know what's going on?
The construction and diagrams you're quoting are in Section 8.1 of the book. In Section 8.2, the author changes the definition of $\mathcal O$ to $(0:1:0)$, i.e. the point at infinity in the vertical direction. In this new context, $L'$ becomes a vertical line. This group theoretic construction in 8.1 works for nonsingular cubic curves in general where $\mathcal O$ can be any point on the curve. Elliptic curves are a special case of these curves, and always pass through (0:1:0). This simplifies the construction somewhat, in that the final stage is a simple reflection across the x-axis, rather than the intersection of the curve and a line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4356938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many solutions does the ODE have? The question : Given ODE : $$ \begin{cases} y'-a^2(y')^3-\frac{\sin(x)}{x+y}=0 \\ y(0)=1 \end{cases} $$ Write how many solutions does the system have for $a=0$ and $a \ne 0$. My try : for $a=0$ we have a unique solution by Picard's theorem. In the case where $a \ne 0$ I'm stuck and can't understand...
If $a\ne 0$, your first equation is a cubic equation in $y'$. So you have three roots: $$y'_{1,2,3}=f_{1,2,3}(x,y(x))$$ For each root you have a unique solution (Picard–Lindelöf theorem). All you need to show is that your roots are different. Start from the cubic roots formula.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4357081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $1/2$ in $\mathbb{Z}[2^{1/2},2^{1/3},2^{1/4},...]$? I was wondering if $\frac{1}{2}$ is in the ring $R=\mathbb{Z}[2^{1/2},2^{1/3},2^{1/4},...]$. I don't think it is, and I've been trying to prove by contradiction. So far I've shown that if this is true, then $R=\mathbb{Z}[...,2^{-1/3},2^{-1/2},2^{1/2},2^{1/3},...]$, which seems unusual. Do I need more techniques from field theory to make headway? Any hints are appreciated!
Going off of Kenta's and Bill Dubuque's comments, we note that $\mathbb{Z}[2^{1/2},2^{1/3},...]$ is an integral extension of $\mathbb{Z}$, since every root of $2$ is integral over $\mathbb{Z}$. If $1/2\in \mathbb{Z}[2^{1/2},2^{1/3},...]$, then $\mathbb{Z}[1/2]$ must be an extension of $\mathbb{Z}$ contained in $\mathbb{Z}[2^{1/2},2^{1/3},...]$, and thus an integral extension. However, $1/2$ is not integral over $\mathbb{Z}$, and we arrive at a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4357239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solving Ax=b by FFT I read in Wiki that it is possible to solve Ax=b via Fast Fourier Transform given that A is a circulant matrix. For example, I have $\begin{bmatrix} 1 & 0 & 0 & -1 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 \end{bmatrix}x=$$ \begin{bmatrix} 1 \\ 0 \\ 0 \\ -1 \end{bmatrix} $ where A is a circulant matrix. How do I solve this system using FFT? I had a hard time understanding the concept. Any help is greatly appreciated. Thank you for your understanding.
We can use something called a similarity transform: $A = F^{-1}DF$. For circulant matrices we can be sure that $F$ can be chosen to be a DFT matrix, and $D$ will be a diagonal matrix holding the Fourier coefficients of one row (or column). This $F$ matrix is dense, so so far we have not saved much by doing this transformation. The FFT in matrix language is nothing else but a factorization of this $F$ matrix. We can write $F = F_1F_2\cdots F_N$, a product of matrices. Now the neat part is that we can get these $F_k$ matrices to be sparse: only two non-zero values in every row, no matter the size of the matrix. We can show that $N$ here will be $\log_2(n)$, where $n$ is the side of $A$. Quite few matrices compared to the size. So all in all we will get away with something like $n+4n\log(n)$ multiplications instead of the $n^2$ multiplications needed for a dense matrix-vector multiplication.
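Concretely, for the system in the question: the eigenvalues of a circulant $A$ are the FFT of its first column, so solving $Ax=b$ reduces to two FFTs and a componentwise division. A NumPy sketch of my own (note that this particular $A$ is singular — its columns sum to zero — so one eigenvalue is $0$; we zero out that mode, which yields the minimum-norm solution of the consistent system):

```python
import numpy as np

c = np.array([1., -1., 0., 0.])   # first column of the circulant A
b = np.array([1., 0., 0., -1.])

lam = np.fft.fft(c)               # eigenvalues of A
bh = np.fft.fft(b)
xh = np.zeros_like(bh)
mask = np.abs(lam) > 1e-12        # skip the zero eigenvalue
xh[mask] = bh[mask] / lam[mask]
x = np.fft.ifft(xh).real
print(x)                          # [ 0.25  0.25  0.25 -0.75]

A = np.array([[1, 0, 0, -1],
              [-1, 1, 0, 0],
              [0, -1, 1, 0],
              [0, 0, -1, 1]], dtype=float)
print(np.allclose(A @ x, b))      # True
```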
{ "language": "en", "url": "https://math.stackexchange.com/questions/4357399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Showing that if $\lambda \in \mathbb{R}$ and $f$ is a measurable real valued function on $(S, \mathcal{S})$, then so is $\lambda f$ I have somehow confused myself with this fairly straightforward proof. We need to show that $\lambda f$ is a measurable function on $(S, \mathcal{S})$, i.e. that for any $c \in \mathbb{R}: \{s \in S: f(s) \leq c\} \subset S$. If $\lambda \neq 0$, then the claim follows immediately from the measurability of $f$, namely as $c/\lambda \in \mathbb{R}$ it follows that $\{s \in S: f(s) \leq c/\lambda \} = \{s \in S: \lambda f(s) \leq c \}$ But then, if $\lambda = 0, \lambda f = 0$ and I am not sure how to convince myself that $\lambda f$ is measurable.
If $\lambda=0$, then your function $\lambda f=0$ is identically $0$. The inverse image of any measurable set containing $0$ is $S$ and of any set not containing $0$ is the empty set. Both of these are measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4357611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Doob-Meyer Decomposition of the range of a standard Brownian Motion I am curious about whether there exists a Doob-Meyer decomposition for the range process $R_t$, or the squared range $R_t^2$ of a standard Brownian motion $B_t$, defined as: $$ R_t = M_t-m_t, $$ where $ M_t := \sup_{0\leq s\leq t} B_s$ and $m_t := \inf_{0\leq s\leq t} B_s$. Clearly, $R_t$ and hence $R_t^2$ is monotonically increasing and continuous in $t$, thus they are by design submartingales which should possess unique Doob-Meyer decompositions. I know that by the Tanaka's equation, we have: $$ |B_t| = \int_0^t \mathrm{sgn}(B_s)dB_s + L_t,$$ where $L_t$ is the Brownian local time at zero. Consequently, this provides the Doob-Meyer decomposition for $|B_t|$. Also, for $B_t^2$ this is even simpler: $$B_t^2 = 2\int_0^t B_s dB_s + t,$$ directly from Ito's lemma. Therefore, I was thinking whether these results can be easily extended to describe $R_t$ or $R_t^2$ in light of the well-known relation that $M_t\overset{d}{=} |B_t| \overset{d}{=} M_t-B_t \overset{d}{=}-m_t \overset{d}{=} -B_t-m_t$? I spent hours trying to find relevant results in the literature but could not find anything relevant. Any suggestions or hints are highly appreciated.
Since $m_t=-\max_{0\le s\le t}(-B_s)$ ($-m_t$ is a disguised max) it is enough to find the Doob-Meyer decomposition of $M_t=\max_{0\le s\le t}B_s\,.$ But that's trivial. Since $M_t$ is increasing, its decomposition is $M_t=0+A_t$ where the increasing process $A_t$ is $M_t$ itself and the martingale part is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4357839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examining the injectivity and surjectivity of $f(x) = x/(x^2+1)$ I'm beginning with my self study in math from the $11$th grade, so bear with the simplicity. I've got to test the objectivity of $x/(x^2+1)$, function is from $\Bbb R$ to $\Bbb R$. First issue is, I think it is injective, because my algebra after equating the output of two inputs leads to both inputs coming equal to each other. The book answer says I'm wrong (without explanation). With surjectivity, I can't find a simple expression for the inverse of the function, so I thought of this: Let $g(x) = x, t(x) = x^2+1$ Then $f(x) = [g(x)/t(x)]$. Now $g(x)$ is surjective and $t(x)$ isn't. Can I argue that the ratio however, will be surjective because the numerator is surjective, and thus even if it is being divided only by numbers greater than or equal to $1$, the numerator's surjectivity will make the whole function surjective? If I can, is there a rigorous way of arguing this? If I can't, a counterexample or counter-proof would help.
In this case, I think there is a simpler way to check injectivity/surjectivity. Let $f(x)=c$. Surjective means that for any $c$ there is a solution. Injective means that whenever a solution exists, it is unique. $$\frac x{x^2+1}=c\\cx^2-x+c=0$$ This is a simple quadratic equation if $c\ne 0$. Then you have a real solution if the discriminant is nonnegative. $$\Delta=1-4c^2$$ If the discriminant is negative you have no solution; in this case that means $|c|>\frac12$. So the function is not surjective. Also, if $|c|<\frac12$ and $c\ne 0$, the quadratic has two distinct solutions, so the function is not injective either.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4357987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Is it legal to interchange rows during finding inverse matrix? I'm calculating an inverse matrix using Gauss-Jordan algorithm. I'd like to use pivoting (with swapping rows, just like in standard Gauss algorithm) in order to avoid division by zero in some cases. But I also noticed that interchanging rows affects inverse matrix and I don't fully understand how could I avoid division by zero on when running Gauss-Jordan algorithm backwards (when I transform reduced row echelon form of matrix A to identity matrix).
The answer to the asked question is "Yes". It is most definitely legal to swap rows when finding an inverse matrix. Perhaps the above statement should be qualified with "by Gauss–Jordan elimination", but I suspect not. Row swapping is sometimes REQUIRED in the procedure for Gauss–Jordan elimination as described by Gilbert Strang in his lectures and textbooks. My suspicion is that certain matrices REQUIRE row swapping in order to execute Gauss–Jordan elimination no matter what procedure. The Gilbert Strang references detail the procedure for finding the reduced row echelon form, which naturally includes finding the inverse. Row swapping is a completely legal step in Gilbert Strang's references on Gauss–Jordan elimination. Note that swapping rows most definitely changes the inverse of a matrix. Observe the following results in Matlab:

A = [1 2 3; 4 5 6; 7 8 10]
Ainv = inv(A)
B = [4 5 6; 1 2 3; 7 8 10]
Binv = inv(B)

But the fact that the inverse of a matrix is not invariant to row swapping is not relevant to the answer of this question.
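The change in the inverse is completely systematic, which is why the swap is harmless: if $B = PA$ for a permutation matrix $P$, then $B^{-1} = A^{-1}P^{-1}$, i.e. swapping rows of $A$ swaps the corresponding columns of the inverse. A quick NumPy check of the same example (my own illustration):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]], dtype=float)
P = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # swap rows 1, 2
B = P @ A
# inv(B) = inv(A) @ inv(P): row swaps in A become column swaps in the inverse
print(np.allclose(np.linalg.inv(B), np.linalg.inv(A) @ np.linalg.inv(P)))
```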
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do I expand $(1+x)^{1/x}$ for small $x$? The binomial expansion $$(1+x)^{n} = 1 + nx + \frac{n(n-1)}{2}x^{2}+...$$ didn't work because of the $n$ term being undefined at $x=0$. Taylor expansion doesn't work either since it too would depends on an undefined $1/x$ term. How does one do it?
Let's consider the function $$ f(x)=\begin{cases} \dfrac{\log(1+x)}{x} & x>-1, x\ne0 \\[6px] 1 & x=0 \end{cases} $$ Then $f$ is everywhere differentiable and its Taylor expansion at 0 is $$ f(x)=1-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}+\dotsb $$ Now we have $(1+x)^{1/x}=e^{f(x)}$ (with continuous extension at $x=0$), so we can apply the series for $e^x$. Say we want to find the Taylor expansion up to degree $3$, for simplicity, so we need $$ \exp\Bigl(1-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}+o(x^3)\Bigr) $$ and we get $$ e\Bigl(1-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}+\frac{1}{2}\Bigl(-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}\Bigr)^2+\frac{1}{6}\Bigl(-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}\Bigr)^3+o(x^3)\Bigr) $$ and so $$ e\Bigl(1-\frac{x}{2}+\frac{x^2}{3}-\frac{x^3}{4}+\frac{x^2}{8}-\frac{x^3}{6}-\frac{x^3}{48}+o(x^3)\Bigr) $$ and, eventually, $$ (1+x)^{1/x}=e-\frac{ex}{2}+\frac{11ex^2}{24}-\frac{7ex^3}{16}+o(x^3) $$
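The expansion is easy to double-check with a CAS; for instance in SymPy (a sketch of my own):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series((1 + x)**(1/x), x, 0, 4))
# should print: E - E*x/2 + 11*E*x**2/24 - 7*E*x**3/16 + O(x**4)
```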
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Real functions with the property: $\ f(x_1)f(x_2) = f\left( \frac{x_1+x_2}{2} \right)^2 $ for all $\ x_1,\ x_2\in\mathbb{R}.\ $ Suppose $\ f:\mathbb{R}\to\mathbb{R}\ $ has the property:$\ f(x_1)f(x_2) = f\left( \frac{x_1+x_2}{2} \right)^2\ $ for all $\ x_1,\ x_2\in\mathbb{R}$. I made some educated guesses and stumbled upon the fact that if $\ A,\alpha\in\mathbb{R},\ $ then $\ f(x) = A e^{\alpha x}\ $ satisfies this property. I also realise that $\ f\ $ must be convex if $\ f>0\ $ and concave if $\ f<0$. So now I'm wondering if any other functions satisfy the property, and if not, how to prove uniqueness of $\ f(x) = A e^{\alpha x}\ $ in satisfying the property. Edit: I want something stronger: to classify all the solutions to this functional equation.
As stochasticboy commented while I was drafting this: Let $\lambda(x)$ be any discontinuous solution of the additive form of Cauchy's functional equation. One checks trivially that $\lambda(2u)=2\lambda(u)$ for all $u$, equivalently $\lambda(u/2)=\lambda(u)/2$. Now let $f(x)=\exp(\lambda(x))$, so $$ f(x)f(y)=e^{\lambda(x)+\lambda(y)}=e^{\lambda(x+y)} =e^{2\lambda((x+y)/2)}=\left(f\left(\frac{x+y}2\right)\right)^2,$$ taking $u=x+y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I derive the rules for the natural logarithm from the rules for the exponential functions? I would like to use the calculation rules for exponential function $$1)\quad e^{x+y}=e^{x}\cdot e^{y}\quad\text{ and }\quad2)\quad(e^a)^b=e^{a\cdot b}$$ to derive the following calculation rules for the natural logarithm: $$\ln(x\cdot y)=\ln(x)+\ln(y)\quad\text{ and }\quad \ln(a^b)=b\cdot\ln(a)$$ My solution so far looks like this: 1)\begin{align*} e^{x+y}&=e^{x}\cdot e^{y}& \ln()\text{ on both sides}\\ \ln(e^{x+y})&=\ln(e^x\cdot e^y)\\ x+y&=\ln(e^x\cdot e^y) \end{align*} From here, I don't know any further. 2)\begin{align*} (e^a)^b&=e^{a\cdot b}& \ln()\text{ on both sides}\\ \ln(e^a)^b&=\ln(e^{a\cdot b})\\ \ln(e^a)^b&=a\cdot b \end{align*} I can't get any further here either. I really appreciate your help and tips. Thanks in advance!
If $x=e^a$ and $y=e^b$, then\begin{align}\ln(xy)&=\ln\left(e^ae^b\right)\\&=\ln\left(e^{a+b}\right)\\&=a+b\\&=\ln(x)+\ln(y).\end{align}And\begin{align}\ln\left(a^b\right)&=\ln\left(e^{b\ln a}\right)\\&=b\ln(a).\end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
if $T$ is normal and a projection in a finite dimensional vector space $V$, then $T$ is an orthogonal projection Let $T$ be a normal operator ($TT^* = T^*T$) defined in a finite dimensional vector space $V$ with inner product ($<v, w>$). Then, if $T$ is a projection ($T^2 = T$) then $T$ is an orthogonal projection ($N(T)^\bot = R(T)$ and $R(T)^\bot = N(T)$). where $N(T)$ is the Kernel of $T$ and $R(T)$ is the Range or Image of $T$. Attempt: by taking $y \in R(T)$ and $w\in N(T)$ I'm trying to prove that $N(T)^\bot = R(T)$ . I must prove $\langle y,w\rangle = 0$, which means that $N(T) = R(T)^\bot$ $\langle y,w\rangle = \langle T(x),w\rangle = \langle x, T^*(w)\rangle $ Then I dont know what to do. Also $\langle y,w\rangle = \langle T(x),w\rangle = \langle T^2(x), w\rangle $ Then I don't know what to do. Also, I know that $T$ is an orthogonal projection if and only if $T$ is autoadjunct ($T^*=T$) Therefore, I could try to prove that $T$ is autoadjunct. But I don't know if this is even true. Any hint?
As $V$ is finite-dimensional, it is sufficient to prove $\mathcal N(T) = \mathcal R(T)^\bot$ (taking orthogonal complements then gives $\mathcal N(T)^\bot = \mathcal R(T)$).

Let $x \in \mathcal R(T)^\bot$. Since $T(T^*(x)) \in \mathcal R(T)$ and $T$ is normal, $$0 = \langle x,\ T(T^*(x))\rangle = \langle x,\ T^*(T(x))\rangle = \langle T(x),\ T(x)\rangle,$$ i.e. $\langle T(x),\ T(x)\rangle = 0 \iff T(x) = 0 \iff x \in \mathcal N(T)$. Therefore $\mathcal R(T)^\bot \subseteq \mathcal N(T)$.

For the reverse inclusion, let $x \in \mathcal N(T)$. By normality again, $$\langle T^*(x),\ T^*(x)\rangle = \langle x,\ T(T^*(x))\rangle = \langle x,\ T^*(T(x))\rangle = \langle x,\ T^*(0)\rangle = 0,$$ so $T^*(x) = 0$. Hence for every $y \in V$ we have $\langle x,\ T(y)\rangle = \langle T^*(x),\ y\rangle = 0$, and $T(y)$ ranges over all of $\mathcal R(T)$, so $x \in \mathcal R(T)^\bot$. Therefore $\mathcal N(T) \subseteq \mathcal R(T)^\bot$.

Then $\mathcal N(T) = \mathcal R(T)^\bot$, and thus $T$ is an orthogonal projection.
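A small numerical illustration (my addition; it demonstrates the statement on one concrete matrix and is not a proof): build an idempotent, normal matrix as $P = QQ^T$ for orthonormal columns $Q$ and check that its kernel is orthogonal to its range.

```python
# Sketch: a normal, idempotent matrix P = Q Q^T and the orthogonality
# of ker(P) and range(P); the dimensions chosen here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
Q, _ = np.linalg.qr(A)            # orthonormal basis of a 2-dim subspace
P = Q @ Q.T                       # projection onto range(Q)

assert np.allclose(P @ P, P)              # idempotent: P^2 = P
assert np.allclose(P @ P.T, P.T @ P)      # normal
x, y = rng.standard_normal(5), rng.standard_normal(5)
kernel_vec = (np.eye(5) - P) @ x          # an element of ker(P)
range_vec = P @ y                         # an element of range(P)
assert abs(kernel_vec @ range_vec) < 1e-10
print("ker(P) is orthogonal to range(P) on this example")
```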
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A question about simplification of an expression involving log: $16777216^\frac{\log(64n)}{3\log(4)}$ I have a practice question regarding simplification of the following expression: $16777216^\frac{\log(64n)}{3\log(4)}$ So I have tried to do this: $(64^4)^{\frac{\log(64n)}{\log(64)}}$ and now I'm stuck. Maybe there is something more I could do to the exponent using a change of base. Say I simplify it further to: $(64^4)^{\log_{64}{(64n)}}$ This is as far as I can go, but I think there might be more simplification that could be done. I'm not sure how to proceed further. Could someone help a bit?
Just switch the exponents around as $$(a^b)^c = a^{bc} = (a^c)^b \implies \left( 64^4 \right)^{\log_{64}(64n)} = \left( 64^{\log_{64}(64n)} \right)^4=(64n)^4 = 16,777,216 \cdot n^4$$
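A quick numerical check of the simplification (my addition; it assumes the natural log, though the base cancels anyway since the exponent is a ratio of logs):

```python
# Check: 16777216**(log(64 n) / (3 log 4)) == 16777216 * n**4.
import math

for n in (1, 2, 3, 10):
    lhs = 16777216 ** (math.log(64 * n) / (3 * math.log(4)))
    rhs = 16777216 * n ** 4
    assert math.isclose(lhs, rhs, rel_tol=1e-9), (n, lhs, rhs)
print("simplification verified for sample n")
```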
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is there a more efficient way to calculate the determinant of this matrix? I used Gaussian elimination to calculate $$\det\begin{pmatrix}1&4&9&16&25&36\\4&9&16&25&36&49\\9&16&25&36&49&64\\16&25&36&49&64&81\\25&36&49&64&81&100\\36&49&64&81&100&121\end{pmatrix}$$ and found the answer to be $0$, but it took a lot of time. I think there might be a more efficient way to calculate the determinant of this kind of matrix.
Clearly every row is part of the space of vectors $(a_1,\ldots,a_6)$ for which $a_i$ can be given by a polynomial expression in$~i$ of degree${}<3$ (i.e., with $a_i=p+qi+ri^2$ for some scalars $p,q,r$ and $0<i\leq 6$). That subspace of $\Bbb Q^6$ being of dimension$~3$, any $4$ or more rows are linearly dependent, so the determinant of the matrix must be$~0$ (and the rank of the matrix at most$~3$).
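If you want to confirm this with a computation rather than by hand, a short NumPy check (my addition) reports the rank and determinant directly:

```python
# The matrix has entries (i + j + 1)^2 for 0 <= i, j <= 5.
import numpy as np

M = np.array([[(i + j + 1) ** 2 for j in range(6)] for i in range(6)])
print(np.linalg.matrix_rank(M))   # 3, as the argument predicts
print(np.linalg.det(M))           # 0 up to floating-point noise
```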
{ "language": "en", "url": "https://math.stackexchange.com/questions/4358982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The operator norm of the canonical projection $T: E \to E/M$ is $1$ I'm doing this exercise Let $(E, |\cdot|)$ be a normed linear space and $M$ a closed subspace of $E$. Consider the quotient map $$T: E \to E/M, x \mapsto \hat x := x+M.$$ We endow $E/M$ with the quotient norm $\| \cdot \|$ defined by $$\| \hat x\| := d(x, M), \quad \forall x\in E.$$ Then $T$ is linear surjective. Prove that $\|T\| = 1$. My attempt: We need the following Riesz's lemma. Let $(E, |\cdot|)$ be a normed linear space and $M$ a closed proper subspace of $E$. Let $0 < \alpha < 1$. Then there exists an $x \in E$ with $|x|=1$ such that $|x-y| \ge \alpha$ for all $y \in M$. [A proof is given here] First, we have $$\|Tx\| = \inf_{y\in M} |x-y| \le |x-0| = |x|.$$ It follows that $\|T\| \le 1$. By Riesz's lemma, for each $0<\varepsilon <1$, there is $x_\varepsilon \in E$ such that $|x_\varepsilon|=1$ and $\inf_{y\in M} |x_\varepsilon-y| \ge \varepsilon$. It follows that $$\|T x_\varepsilon\| \ge \varepsilon |x_\varepsilon|, \quad \forall \varepsilon \in (0, 1).$$ Take the limit $\varepsilon \to 1$, we get $\|T\| \ge 1$. This completes the proof.
Your proof is correct except for one minor thing (I'm sure you know this though): the limit $\lim_{\epsilon \to 1} \|T x_\epsilon\|$ doesn't need to exist, so when you say "Take the limit $\epsilon \to 1$..." you should instead apply this to the inequality $\|T\|\ge \epsilon$ (which follows from your estimate) and let $\epsilon \to 1$.
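A concrete finite-dimensional example (my addition) where both halves of the argument can be seen explicitly: take $E = \mathbb{R}^2$ with the Euclidean norm and $M = \operatorname{span}\{(1,0)\}$. Then $$\|(x_1,x_2)+M\| = \inf_{t\in\mathbb{R}}\sqrt{(x_1-t)^2+x_2^2} = |x_2| \le |(x_1,x_2)|,$$ so $\|T\|\le 1$, and $x=(0,1)$ gives $\|Tx\| = 1 = |x|$, hence $\|T\|=1$. Here the supremum is even attained; in infinite dimensions it need not be, which is exactly why Riesz's lemma (with $\alpha<1$ and the limit $\epsilon\to1$) enters the proof.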
{ "language": "en", "url": "https://math.stackexchange.com/questions/4359349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Unbounded Bloch function I'm studying the Bloch space $\mathcal{B}$; this is the set of all analytic functions $f$ defined on the open unit disc $\mathbb{D}$ such that $$\sup_{|z|<1}(1-|z|^2)|f'(z)|<\infty$$ My question is more one of curiosity: can you give an example of an unbounded Bloch function?
$f(z) = \mathrm{argtanh}\,z$ is analytic with $|f(z)| \to +\infty$ as $z\to 1$. Since $f'(z) = \frac{1}{1-z^2}$, it satisfies $(1-|z|^2)|f'(z)| \leqslant 1$.
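A numerical illustration of both claims (my addition; NumPy spells $\mathrm{argtanh}$ as arctanh):

```python
# f(z) = argtanh(z): unbounded on the disc, yet (1 - |z|^2)|f'(z)| <= 1.
import numpy as np

rng = np.random.default_rng(1)
r = np.sqrt(rng.uniform(0, 1, 10000))     # uniform sampling of the disc
theta = rng.uniform(0, 2 * np.pi, 10000)
z = r * np.exp(1j * theta)

bloch = (1 - np.abs(z) ** 2) * np.abs(1 / (1 - z ** 2))
print(bloch.max())              # <= 1 everywhere on the disc
print(np.arctanh(0.999999))     # ~7.25: f blows up as z -> 1
```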
{ "language": "en", "url": "https://math.stackexchange.com/questions/4359518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
2015 Cambridge Entrance Examination Q6

1. Show: $\sec^2\left(\frac{\pi}{4}-\frac{1}{2}x\right)=\frac{2}{1+\sin(x)}$. Then evaluate $\int\frac{dx}{1+\sin(x)}$.
2. Show: $\int_{0}^{\pi}x \, f(\sin(x)) \, dx=\frac{\pi}{2}\int_{0}^{\pi}f(\sin(x)) \, dx$. Then evaluate $\int_{0}^{\pi}\frac{x}{1+\sin(x)}dx$.
3. Evaluate: $\int_{0}^{\pi}\frac{2x^3-3\pi x^2}{(1+\sin(x))^2}dx$.

The first trigonometric identity is quite easy to prove: $$\sec^2\left(\frac{\pi}{4}-\frac{1}{2}x\right)=\frac{1}{\cos^2\left(\frac{1}{2}\left(\frac{\pi}{2}-x\right)\right)}=\frac{1}{\frac{1}{2}\left(1+\cos\left(\frac{\pi}{2}-x\right)\right)}=\frac{2}{1+\sin(x)}$$ And the integral just involves rewriting the integrand using the secant identity, which allows us to take advantage of the fact that the derivative of the tangent is secant squared. The second part was tricky at first, but the substitution $y=\pi-x$ did the trick. However, evaluating the integral in the third part... $$\int_{0}^{\pi}\frac{2x^3-3\pi x^2}{(1+\sin(x))^2}dx$$ ...wasn't as simple as I thought. I'm really not sure where to begin or how to utilise the integral properties proved in the previous parts. Thanks
The beauty of this problem is that it exploits the symmetry $\sin(\pi - x) = \sin x$: the substitution $x \mapsto \pi - x$ leaves $f(\sin x)$ unchanged, which lets you reduce the powers of $x$ in $\int_0^\pi x^n f(\sin x)\,dx$.

Given: $\int_0^\pi x f(\sin x)\, dx = \frac \pi2\int_0^\pi f(\sin x)\,dx$.

To find $\int_0^\pi \frac {2x^3-3\pi x^2}{(1+\sin x)^2}dx = \int_0^\pi (2x^3-3\pi x^2) f(\sin x)\,dx$ with $f(\sin x) = \frac{1}{(1+\sin x)^2}$, take $$\int_0^\pi x^3 f(\sin x)\,dx = \int_0^\pi(\pi-x)^3f(\sin x)\,dx$$ (substitute $x \mapsto \pi - x$). Expanding $(\pi - x)^3$ and using the given property, $$\int_0^\pi x^3f(\sin x)\,dx = \int_0^\pi \left(\frac {\pi^3}{2}-\frac{3\pi^3}4 + \frac {3\pi x^2}{2} \right)f(\sin x)\,dx = \int_0^\pi \left(-\frac {\pi^3}{4} + \frac {3\pi x^2}{2} \right)f(\sin x)\,dx.$$

Coming back to our original problem: $$I = \int_0^\pi \frac {2\left(-\frac {\pi^3}{4} + \color{blue}{\frac {3\pi x^2}{2}}\right) - \color{blue}{3\pi x^2}}{(1+\sin x)^2}\,dx = -\frac{\pi^3}{2}\int_0^\pi \frac{dx}{(1+\sin x)^2}.$$

Now, I believe you can solve $\int_0^\pi \frac {dx}{(1+\sin x)^2}$.
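To finish it off: the reduction above gives $I = -\frac{\pi^3}{2}\int_0^\pi \frac{dx}{(1+\sin x)^2}$, and the remaining integral evaluates to $\frac43$ (e.g. via the Weierstrass substitution $t = \tan\frac{x}{2}$), so $I = -\frac{2\pi^3}{3}$; this last value is my completion, not part of the answer above, so do re-derive it yourself. A quick numerical check (assuming SciPy is available):

```python
# Verify I = -(pi^3/2) * J and J = 4/3, hence I = -2*pi^3/3.
import math
from scipy.integrate import quad

integrand = lambda x: (2 * x**3 - 3 * math.pi * x**2) / (1 + math.sin(x))**2
J = quad(lambda x: 1 / (1 + math.sin(x))**2, 0, math.pi)[0]
I = quad(integrand, 0, math.pi)[0]

print(J, 4 / 3)                      # agree
print(I, -(math.pi**3 / 2) * J)      # agree
print(I, -2 * math.pi**3 / 3)        # agree
```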
{ "language": "en", "url": "https://math.stackexchange.com/questions/4359666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the supremum of $\frac{\sum_{i=1}^\infty |x_i|2^{-i}}{\|x\|_p}$? I have been trying to find the supremum of $$\frac{\sum_{i=1}^\infty |x_i|2^{-i}}{\|x\|_p}$$ over all nonzero $x \in l^p$, i.e. $\sum_{i=1}^\infty |x_i|^p < \infty$, with $1<p<\infty$. Some previous results: Given $f(x)= \sum_{n=1}^\infty x_n2^{-n}$, for $x \in l^\infty$ we have $\|f\|_{(l^\infty)^*}=\sup_{x \in l^\infty \setminus \{0\}}\sum_{n=1}^\infty\frac{ x_n2^{-n}}{\|x\|}$. Each factor satisfies $|x_n|/\|x\|_\infty \le 1$, so to maximise this we can take $x =(1,1,\dots)$, the constant sequence, and the norm of the functional is $0.5/(1-0.5)=1$ (geometric series). For $f \in (l^1)^*$, we just need to observe $$\|f\|_{(l^1)^*}\leq \sum_{n=1}^\infty \frac{|x_n|2^{-n}}{\sum_{k=1}^\infty|x_k|} \leq \sum_{n=1}^\infty \frac{|x_n|2^{-1}}{\sum_{k=1}^\infty|x_k|}=2^{-1}.$$ And as $x=(1,0,0,\dots)$ satisfies $\frac{f(x)}{\|x\|_{l^1}}=\frac{1}{2}$, we are done. My belief is that the supremum is $\left(\frac{1}{2}\right)^{\frac{1}{p}}$ or $1$. Please can someone point me in the right direction.
Hint: apply Hölder's inequality in order to get $$ \frac{\sum_{i=1}^\infty |x_i|2^{-i}}{\|x\|_p}\leqslant \left(\sum_{i=1}^\infty 2^{-iq}\right)^{1/q}, $$ where $q$ is the conjugate exponent of $p$, that is, $1/p+1/q=1$. The equality is also reached by a good choice of $x_i$.
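Carrying the hint to its conclusion (my addition): the geometric series evaluates to $$\left(\sum_{i=1}^\infty 2^{-iq}\right)^{1/q} = \left(\frac{2^{-q}}{1-2^{-q}}\right)^{1/q} = \frac{1}{\left(2^{q}-1\right)^{1/q}},$$ and Hölder's inequality is an equality when $|x_i|^p$ is proportional to $\left(2^{-i}\right)^q$, e.g. $x_i = 2^{-iq/p}$, so this is indeed the supremum. As sanity checks: $q=1$ (i.e. $p=\infty$) gives $1$, and $q\to\infty$ (i.e. $p\to1$) gives $\tfrac12$, matching the two cases worked out in the question.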
{ "language": "en", "url": "https://math.stackexchange.com/questions/4359852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solving a differential equation with initial conditions only on the function I have the boundary value problem $\left\{\begin{gather}E_nf_n(x)+f_n''(x) = 0\\ f_n(-a)=f_n(a)=0 \end{gather}\right.$ Solving it using the Laplace transform I get $$f_n(x) = f_0\cos(\sqrt{E_n}x)+\frac{f_0'}{\sqrt{E_n}}\sin(\sqrt{E_n}x)$$ In my attempt to solve for $f_0$ and $f_0'$ I did the following: $\left\{\begin{gathered}f_n(a) = f_0\cos(\sqrt{E_n}a)+\frac{f_0'}{\sqrt{E_n}}\sin(\sqrt{E_n}a)=0\\ f_n(-a) = f_0\cos(\sqrt{E_n}a)-\frac{f_0'}{\sqrt{E_n}}\sin(\sqrt{E_n}a)=0\end{gathered}\right.$ Adding and subtracting both equations, assuming $f_0 \neq 0$ and $f'_0 \neq 0$, I get $\left\{\begin{gathered}\cos(\sqrt{E_n}a)=0\\ \sin(\sqrt{E_n}a)=0\end{gathered}\right.$ Here I'm a bit lost and don't know how to proceed further. If I list all the possible solutions of $\cos(\sqrt{E_n}a)=0$ and of $\sin(\sqrt{E_n}a)=0$, I can't find any $E_n$ satisfying both at once: the intersection of the two solution sets is empty. How could I proceed to find a solution of the problem? I tried to solve it using Mathematica, and it gives that $\cos(\sqrt{E_n}a)=0$ or $\sin(\sqrt{E_n}a)=0$, in which case I get a set of values for $E_n$, namely $E_n = \frac{\pi^2n^2}{4a^2}$. How could I get to this solution with the equations I showed above? Also, how could I find $f_0$ and $f_0'$, or a relation between $f_0$ and $f_0'$?
This is an eigenvalue problem, so we are looking for $E_n$ such that the differential equation has nontrivial solutions. Your choice that both $f_0\neq0$ and $f_0'\neq0$ (at the same time!) is too restrictive. You get solutions by either

* $f_0=0$ and $\sin (\sqrt{E_n^s}a)=0 \Rightarrow f_n^s=c\sin(\sqrt{E_n^s}x)$ with $E_n^s=(\tfrac{n\pi}{a})^2$ or
* $f_0'=0$ and $\cos (\sqrt{E_n^c}a)=0 \Rightarrow f_n^c=c\cos(\sqrt{E_n^c}x)$ with $E_n^c=\left(\tfrac{(n+1/2)\pi}{a}\right)^2$.

This concludes the problem. However, we can merge the two families of solutions into a single family if we rewrite the cosine functions as shifted sine functions: $$ \cos(\sqrt{E_n^c}x)=\cos(\tfrac{(n+1/2)\pi}{a}x)=(-1)^{n}\sin(\tfrac{(n+1/2)\pi}{a}x+(n+1/2)\pi)=(-1)^{n}\sin(\tfrac{(2n+1)\pi(x+a)}{2a}) $$ If we now define $$E_k=\begin{cases}E_{k/2}^s&k \mbox{ even}\\E_{(k-1)/2}^c&k \mbox{ odd}\end{cases}$$ (so that $E_k = \left(\tfrac{k\pi}{2a}\right)^2$ in both cases) and similarly with the eigenfunctions, we get the result from Mathematica.

A potentially easier approach is to consider the shifted problem to begin with: $g(x+a):=f(x)$, which yields the problem $$ E_ng_n(x)+g''_n(x)=0\\ g_n(0)=g_n(2a)=0 $$ and this yields only sine functions as nontrivial solutions directly.
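A quick symbolic check of the merged family (my addition; the indexing matches the piecewise definition above):

```python
# f_k(x) = sin(k*pi*(x+a)/(2a)) with E_k = (k*pi/(2a))^2 solves
# E_k f + f'' = 0 and vanishes at x = -a and x = a.
import sympy as sp

x, a = sp.symbols('x a', positive=True)
k = sp.symbols('k', positive=True, integer=True)
E = (k * sp.pi / (2 * a)) ** 2
f = sp.sin(k * sp.pi * (x + a) / (2 * a))

print(sp.simplify(E * f + sp.diff(f, x, 2)))  # 0
print(f.subs(x, -a), f.subs(x, a))            # 0 0
```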
{ "language": "en", "url": "https://math.stackexchange.com/questions/4359988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the definition of the optional sigma algebra require left-hand limits, or just right-continuous adapted processes? While reading the following post: https://almostsuremath.com/2009/11/08/filtrations-and-adapted-processes/ I have come across a question on the definition of optional and predictable processes. In most books I have read before, the optional sigma algebra is defined as the sigma algebra generated by the càdlàg adapted processes, and the predictable sigma algebra is defined as the sigma algebra generated by the càglàd adapted processes, or in some books just the left-continuous adapted processes. In George's post he defines both via just right-continuous or left-continuous adapted processes. In the predictable case, I can see that the two definitions agree, since the sigma algebra is generated by the continuous adapted processes. However, in the case of the optional sigma algebra I am not sure if they are the same. Does the omission of left limits here matter?
The books quoted here were the ultimate bibles of stochastic processes when I was a student:

* The optional sigma algebra ${\cal O}(\boldsymbol F)$ is generated by the càdlàg adapted processes.
* The predictable sigma algebra ${\cal P}(\boldsymbol F)$ is generated by the adapted processes that are continuous from the left (not necessarily with limits from the right).

Interestingly, ${\cal P}(\boldsymbol F)\subset {\cal O}(\boldsymbol F)\,.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4360118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }