Given that $\gcd(a,b) =1$, show that $\gcd(a+2b,b)=1$ without using the prime factorization theorem If $\gcd(a,b) =1$, show $\gcd(a+2b,b)=1$. I need help figuring out how to show this from just the fact that $\gcd(a,b) =1$. Does it have to do with the Euclidean algorithm and the fact that $\gcd(a,b) = am + bn$ for some $m,n$? Thanks.
If $d|a+2b$ and $d|b$ then $d|a+2b-2(b)=a$ so $\gcd(a+2b,b)|a$ and $\gcd(a+2b,b)|b$. So that gcd divides $\gcd(a,b)=1$.
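As a quick empirical sanity check of the claim (my addition, not part of the original answer; plain Python):

```python
from math import gcd

# Check gcd(a + 2b, b) == 1 for every coprime pair in a small range.
assert all(gcd(a + 2*b, b) == 1
           for a in range(1, 200)
           for b in range(1, 200)
           if gcd(a, b) == 1)
print("verified for all coprime pairs with a, b < 200")
```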
{ "language": "en", "url": "https://math.stackexchange.com/questions/1491967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Questions about the quotient ring $(\mathbb{Z}/2\mathbb{Z})[x]/\langle x^2+x+1\rangle$ I'm just starting to learn about quotient rings. I was able to think about what type of elements are generated by some $a$ when $\langle a \rangle$ is simply an integer, and also with simple quotient rings, but I just don't get this example. My questions about the given quotient ring, $(\mathbb{Z}/2\mathbb{Z})[x]/\langle x^2+x+1\rangle$, are: * *What kind of elements are in $\langle x^2+x+1 \rangle$? Applying the definition, I get something like $\{(x^2+x+1)q(x) \colon q(x) \in (\mathbb{Z}/2\mathbb{Z})[x]\}$. I cannot readily list out the elements though. What is an intuitive way of thinking about the principal ideal generated by a polynomial? *What type of elements are in $(\mathbb{Z}/2\mathbb{Z})[x]/\langle x^2+x+1\rangle$? *The author claims $x^2 = x+1$. I don't see why. I am self learning this, so this should help illuminate how I may think about quotient rings as I proceed. Thanks in advance.
* *The set you've written is correct. In any commutative ring $R$, the ideal generated by a single element $a$ is just $\langle a\rangle=\{ra \mid r\in R\}$. *Ideals and quotient rings are analogous to normal subgroups and quotient groups: the elements of $R/I$ (here your $R$ is $(\mathbb{Z}/2\mathbb{Z})[x]$ and $I$ is $\langle x^2+x+1\rangle$) are the cosets $I+r$ with $r\in R$, and $I+r_1=I+r_2 \iff r_1-r_2\in I$. *It's a shorthand notation for $\langle x^2+x+1\rangle+x^2=\langle x^2+x+1\rangle+x+1$, which is equivalent to $x^2-x-1\in \langle x^2+x+1\rangle$. In fact $x^2-x-1=x^2+x+1$ because the coefficients are in $\mathbb{Z}/2\mathbb{Z}$.
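To make the four cosets concrete, here is a minimal sketch (my own illustration, not from the answer; it assumes a bit-mask encoding of GF(2) polynomials, and the helper names are hypothetical):

```python
# GF(2) polynomials encoded as bit masks: bit i is the coefficient of x^i.
MOD = 0b111  # x^2 + x + 1

def mul(p, q):
    """Carry-less multiplication of two GF(2)[x] polynomials."""
    r = 0
    while q:
        if q & 1:
            r ^= p
        p <<= 1
        q >>= 1
    return r

def mod_ideal(p):
    """Reduce p modulo x^2 + x + 1, i.e. pick the coset representative."""
    while p.bit_length() >= 3:
        p ^= MOD << (p.bit_length() - 3)
    return p

# The quotient has exactly four cosets, represented by 0, 1, x, x + 1:
print([bin(e) for e in (0b00, 0b01, 0b10, 0b11)])
# x * x reduces to x + 1, confirming x^2 = x + 1 in the quotient:
print(mod_ideal(mul(0b10, 0b10)) == 0b11)  # True
```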
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to see this matrix equality When doing some formulas for regression I encountered this which I think is true by trying some examples: $$X^T (X X^T + \lambda I)^{- 1} = (X^T X + \lambda I)^{- 1} X^T$$ Here $\lambda > 0$. I'm stuck on how to prove this. Help/counterexamples appreciated.
By multiplying from the left with $(X^T X + \lambda I)$ and from the right with $(X X^T + \lambda I)$ (both invertible for $\lambda > 0$, since they are symmetric positive definite), we obtain $$ (X^TX + \lambda I) X^T = X^T (X X^T + \lambda I), $$ and from here it's obvious: both sides equal $X^T X X^T + \lambda X^T$.
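A quick numerical sanity check of the identity (my addition, not part of the answer; assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))  # deliberately non-square
lam = 0.5

lhs = X.T @ np.linalg.inv(X @ X.T + lam * np.eye(3))
rhs = np.linalg.inv(X.T @ X + lam * np.eye(5)) @ X.T
print(np.allclose(lhs, rhs))  # True
```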
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does anyone have any advice as to what measure-theoretic Probability Theory books there are with lots of worked examples? I know that measure-theoretic probability book reference requests have been mentioned quite a few times on this site. However, I was wondering if anyone knew of any good books out there with lots of worked examples that go in a step-by-step fashion. One example I am talking about would be the Schaum's worked example series. I would like to solve as many problems as I can and so any reference with lots of worked examples would be a great resource. Thank you in advance and any advice would be greatly appreciated!
The closest books I know with lots of worked exercises on measure-theoretic probability theory are: "Problems and Solutions in Mathematical Finance: Volume 1 - Stochastic Calculus" by Chin, Nel and Olafsson. As you can see from the title, it contains more than just probability theory, but I think chapter I covers exactly the basics you need, and chapter 2 covers some part of martingale theory. The solutions are very detailed. Another relevant book (which starts at a little lower level) is "Probability Through Problems" by Capinski and Zastawniak, but it looks like it contains all the basics covered in a first graduate probability class. I've also found the book "Problems in Probability" by Shiryaev and Lyasoff, but it doesn't seem to contain step-by-step solutions, just hints. Also, the book "One Thousand Exercises in Probability" by Grimmett and Stirzaker seems to be too low level, as it is not using measure theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Arranging cards so that no two consecutive values remain consecutive Let us say we have 52 cards with values ranging from 1-13 (4 sets of cards from 1-13). Assume that you wanted no two consecutive values to be next to each other in the pile of cards. For example, a 3 cannot be next to a 2 or a 4. How many ways can I arrange these cards that there are no consecutive values next to each other? Can someone suggest a permutation that fulfills these requirements or suggest a computer program to solve the problem?
The total number of arrangements is $52!$. Now suppose two cards of consecutive value sit next to each other: treat them as a single block, which can be placed in $104$ ways, and the two cards can be ordered within the block in $2!$ ways. Within one set of $13$ values this gives $2!\cdot 11!$ such arrangements for each of the $104$ placements; there are four different sets, and these four sets can be arranged among themselves in $4!$ ways. Subtracting, the total number of arrangements with no such pair together is $52!-(2!\cdot 11!\cdot 4!\cdot 104\cdot 4)$.
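Since the question also asks for a computer program: here is a rough Monte Carlo sketch (my own, not part of the answer; the function name is hypothetical) that estimates the fraction of shuffles with no two adjacent consecutive values, useful as a sanity check on any closed-form count:

```python
import random

def no_consecutive_neighbors(deck):
    return all(abs(a - b) != 1 for a, b in zip(deck, deck[1:]))

deck = [v for v in range(1, 14) for _ in range(4)]  # 4 suits of 1..13
trials = 10**5
hits = sum(no_consecutive_neighbors(random.sample(deck, len(deck)))
           for _ in range(trials))
print(f"estimated fraction of valid shuffles: {hits / trials:.2e}")
```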
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A collection of subsets of $\mathbb{R}$ that is not a topology I want to show that the collection of sets of the form: $$\{(-\infty, x] : x \in \mathbb{R}\}$$ together with the empty set and $\mathbb{R}$, is not a topology for $\mathbb{R}$. But if I take the infinite union of sets of the form $\{(-\infty, x] : x \in \mathbb{R}\}$, this is not itself a set of the form $\{(-\infty, y] : y \in \mathbb{R}\}$, where $y$ is the maximum of all $x$? And if I take the finite intersection of this sets, this is not again, a set of the form $\{(-\infty, x] : x \in \mathbb{R}\}$, where $x= \text{min}(x_1,...,x_n)$?
Take the union $$\bigcup_{x<y} (-\infty,x],$$ this is $(-\infty,y)$ which is not in your collection of open sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Use fourier transform to solve second-order differential equation -- an "easy" integral? I have scoured the internet for a fully-explained solution to this problem but have found none: The problem asks to solve this differential equation for $y(t)$ using Fourier Transforms, and then consider cases where $b > w_0$, $b < w_0$, and $b = w_0$ $$ \frac{d^2y(t)}{dt^2} + 2b\frac{dy(t)}{dt} + w_0^2y(t) = \delta(t) $$ So far, by the differential properties of Fourier Transforms, I have converted the function of $t$ to a function of $w$: $$ -w^2\hat{y}(w) + 2biw\hat{y}(w) + w_0^2\hat{y}(w) = \frac{1}{\sqrt{2\pi}} $$ So algebraically, $$ \hat{y}(w) = \frac{\frac{1}{\sqrt{2\pi}}}{-w^2 + 2biw + w_0^2} $$ And the Inverse Fourier Transform of $\hat{y}(w)$ can yield $y(t)$ as $$ y(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}{\frac{e^{-iwt} \frac{1}{\sqrt{2\pi}}}{-w^2 + 2biw + w_0^2}dw} $$ My professor says that this is an "easy integral; just do it". I have found no way of doing it thus far, even for the simplest(?) case of $b = w_0$. I don't even know if this is of the right form (i.e. using the "easiest" definition of FT to solve the problem). He looked at it for like 2 seconds. All help is appreciated!
Integrals of this kind are usually evaluated via residues. Completing the square in the denominator, $$ y(t) = -\frac{1}{2\pi}\int_{-\infty}^{\infty}{\frac{e^{-iwt} }{(w-ib)^2 - (w_0^2-b^2)}dw}. $$ The function $\frac{e^{-iwt} }{(w-ib)^2 - (w_0^2-b^2)}$ is holomorphic except for two simple poles at $w_{\pm}=i b \pm\sqrt{w_0^2-b^2}$, both of which lie in the upper half-plane. You should consider the two cases $t>0$ and $t<0$ separately. In the first case, $t>0$, you should close the path in the lower half-plane, and the integral equals zero because all the poles are located in the upper half-plane. In the second case, $t<0$, one can close the path in the upper half-plane, and using the residue theorem we get $$ y(t) = -i\,{\frac{e^{-iw_+t}-e^{-iw_-t} }{w_+-w_-}}. $$ Thus $$ y(t) = -i\,{\frac{e^{-iw_+t}-e^{-iw_-t} }{w_+-w_-}}\,\theta(-t),$$ where $\theta(x)$ is the Heaviside step function. (The support on $t<0$ traces back to the sign convention used for the inverse transform in the question; with the opposite convention one obtains the causal solution supported on $t>0$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Square root of complex number. The complex number $z$ is defined by $z=\frac{9\sqrt3+9i}{\sqrt{3}-i}$. Find the two square roots of $z$, giving your answers in the form $re^{i\theta}$, where $r>0$ and $-\pi <\theta\leq\pi$ I got the $z=9e^{\frac{\pi}{3}i}$. So I square root it, it becomes $3e^{\frac{\pi}{6}i}$. But the given answer is $3e^{-\frac{5}{6}\pi i}$. Why?
Notice, $$z=\frac{9\sqrt 3+9i}{\sqrt 3-i}$$ $$=\frac{9(\sqrt 3+i)(\sqrt 3+i)}{(\sqrt 3-i)(\sqrt 3+i)}$$ $$=\frac{9(\sqrt 3+i)^2}{3-i^2}=\frac{9(2+2i\sqrt 3)}{3+1}$$$$=9\left(\frac{1}{2}+i\frac{\sqrt 3}{2}\right)=9\left(\cos\frac{\pi}{3}+i\sin \frac{\pi}{3}\right)=9e^{i\pi/3}$$ hence, the square roots of $z$ are found as follows $$z^{1/2}=\sqrt{9\left(\cos\frac{\pi}{3}+i\sin \frac{\pi}{3}\right)}$$ $$=3\left(\cos\left(2k\pi+\frac{\pi}{3}\right)+i\sin \left(2k\pi+\frac{\pi}{3}\right)\right)^{1/2}$$$$=3\left(\cos\left(\frac{6k\pi+\pi}{6}\right)+i\sin \left(\frac{6k\pi+\pi}{6}\right)\right)$$ where, $k=0, 1$ Setting $k=0$, we get the first square root $$z^{1/2}=3\left(\cos\left(\frac{6(0)\pi+\pi}{6}\right)+i\sin \left(\frac{6(0)\pi+\pi}{6}\right)\right)=3\left(\cos\left(\frac{\pi}{6}\right)+i\sin \left(\frac{\pi}{6}\right)\right)=\color{red}{3e^{\frac{\pi}{6}i}}$$ Now, setting $k=1$, we get the second square root $$z^{1/2}=3\left(\cos\left(\frac{6(1)\pi+\pi}{6}\right)+i\sin \left(\frac{6(1)\pi+\pi}{6}\right)\right)$$ $$=3\left(-\cos\left(\frac{\pi}{6}\right)-i\sin \left(\frac{\pi}{6}\right)\right)$$ $$=3\left(\cos\left(-\frac{5\pi}{6}\right)+i\sin \left(-\frac{5\pi}{6}\right)\right)=\color{red}{3e^{\frac{-5\pi}{6}i}}$$
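A quick numerical check of both roots (my addition, not part of the answer; uses Python's cmath):

```python
import cmath

z = (9 * 3**0.5 + 9j) / (3**0.5 - 1j)
r1 = 3 * cmath.exp(1j * cmath.pi / 6)
r2 = 3 * cmath.exp(-5j * cmath.pi / 6)
print(cmath.isclose(r1**2, z), cmath.isclose(r2**2, z))  # True True
```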
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that if $\{s_n\}$ is bounded and monotonic, then $t_n =(s_1 + \cdots+ s_n)/n$ converges to the same limit as $\{s_n\}$ I have already shown that $t_n$ is convergent using the monotone convergence theorem. Let's say $\{s_n\}$ converges to $L_1$ and $\{t_n\}$ converges to $L_2$. How can I show that $L_1 = L_2$?
$$s_n-t_n=s_n-\frac1n\sum_{i=1}^n s_i=\frac{ns_n-\sum_{i=1}^n s_i}n =\frac1n\sum_{i=1}^n (s_n - s_i) $$ $$|s_n-t_n|\le \frac1n\sum_{i=1}^n |s_n - s_i| = \frac1n\sum_{i=1}^N |s_n - s_i| + \frac1n\sum_{i=N+1}^n |s_n - s_i| $$ For any $\epsilon>0$ there is $N>0$, such that for every $n>N$ we have $|s_n - s_i|<\epsilon$ for each $N<i\le n$ (why?). The first sum has a fixed number of uniformly bounded terms, so $\frac1n\sum_{i=1}^N |s_n - s_i|\to 0$. Therefore $$0\le\limsup_{n\to\infty} |s_n-t_n|\le \lim_{n\to\infty}\frac1n\sum_{i=1}^N |s_n - s_i| + \limsup_{n\to\infty}\frac{n-N}n \epsilon=\epsilon $$ Take $\epsilon\to0$ to finish the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
heading angle calculation using atan2 I am trying to find the heading angle for my three-wheeled robot; my robot setup is as below [figure omitted]. I know all co-ordinate values: $(x_1, y_1)$ and $(x_2, y_2)$ are the two back wheels, $(x_3, y_3)$ is the front wheel co-ordinate, $(x_m, y_m)$ is the midpoint of $(x_1, y_1)$ and $(x_2, y_2)$, and $(x_t, y_t)$ is the target point. I am trying to find the angle between $(x_3, y_3)$ and $(x_t, y_t)$. For the first case the angle range must be 0 to +180, and for the second case the angle range must be 0 to -180, to make the necessary turnings. How can I use the atan2 method for this? Is there any other better method to find the angle in the necessary range?
You have a line from $(x_m,y_m)$ to $(x_3,y_3)$ and another line from $(x_m,y_m)$ to $(x_t,y_t)$. Now all you are interested in is the angle between the two lines. The answer is here. That question is officially unanswered, but the correct answer is there: pick the one with the highest votes (Rory Daulton) and you're good. Just use $\mathrm{atan2}$ instead of $\tan^{-1}$.
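In code, the signed angle comes out of a single atan2 call on the cross and dot products. Here is a sketch (my own addition, not part of the answer; the helper name and argument names are hypothetical, chosen to match the question):

```python
from math import atan2, degrees

def signed_angle(xm, ym, x3, y3, xt, yt):
    """Signed angle in (-180, 180] from the heading vector (toward the
    front wheel) to the target vector, counter-clockwise positive."""
    ux, uy = x3 - xm, y3 - ym   # heading direction
    vx, vy = xt - xm, yt - ym   # direction to target
    return degrees(atan2(ux * vy - uy * vx, ux * vx + uy * vy))

print(signed_angle(0, 0, 1, 0, 0, 1))   #  90.0: turn left
print(signed_angle(0, 0, 1, 0, 0, -1))  # -90.0: turn right
```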
{ "language": "en", "url": "https://math.stackexchange.com/questions/1492950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Possibility that two students answer the same on a 25 question multiple choice test I am trying to get some approximation of the possibility that two students would answer $25$ questions in a row exactly the same. There are 4 possible responses for each question. I calculate the random chance as one in $4^{25}=1.13\times10^{15}$. Since, however, I assume some amount of familiarity with the material, I divided that by $4$. I imagine, however, that this is not a precise method. If I assume an average score of $60\%$, is there a more precise model to determine the probability?
While you can come up with a mathematical answer, it would only show how unlikely it is for the two students to get the same answers assuming they were to randomly guess. In reality, it is not as unlikely as you would think. For example, some questions might have a really deceptive choice, and both students pick that choice. There was a trig test in my Algebra 2/Trig class where two students both failed the test with a score of 43 and had the exact same answers as each other. Does this prove they cheated? No: they both had their calculators in radians instead of degrees and had otherwise answered every question correctly. So while mathematically you can calculate the probability of two students getting the same answers with an average score of 60, I would advise against it. Just because they have the same answers doesn't mean they cheated... if that's what you are trying to prove.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Let $f(x)= \sin^{-1}(2e^x)$. What is $f'(\ln(3/10))$? So I tried switching $y= \sin^{-1}(2e^x)$ with $x= \sin(2e^y)$ and then finding the derivative. The answer does not seem to match up. Thanks for the help
It should be $x=\log\left(\frac{1}{2}\sin y\right)$. $$\sin y = 2e^x\\ \frac{1}{2}\sin y = e^x\\ \log\left(\frac{1}{2}\sin y\right) = x$$
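Completing the computation the question actually asks for (my addition, not part of the answer; assumes SymPy): differentiating $f(x)=\arcsin(2e^x)$ directly gives $f'(x)=\frac{2e^x}{\sqrt{1-4e^{2x}}}$, so

```python
import sympy as sp

x = sp.symbols('x')
f = sp.asin(2 * sp.exp(x))
fp = sp.diff(f, x)  # 2*exp(x)/sqrt(1 - 4*exp(2*x))
print(sp.simplify(fp.subs(x, sp.log(sp.Rational(3, 10)))))  # 3/4
```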
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a particular solution of a nonhomogenous equation Find a particular solution to $y''-2y' + y= {e^{px}}$ where p is any real constant. My attempt/idea is as follows: Since $Y_p$ is ${e^{px}}$ and it appears in the complementary solution, $Y_c$ is $c_1{e^x} + c_2x{e^x}$ , we will have to multiply $Y_p$ by $x^2$ so that $Y_p$ becomes $A{x^2}e^x$ Then, find $Y_p'$ and $Y_p''$ to get: $2Ae^x = e^{px}$ $2A = 1$ $A= \frac{1}{2}$ Then, $Y_p = \frac{1}{2}x^2e^{px}$ Is this right?
Working the differential equation in the most general manner, the particular solution is $$y_p=\frac{e^{p x}}{(p-1)^2}$$ which shows that the case of $p=1$ is totally different from the other possible cases, just as Dylan pointed out.
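A symbolic check that this $y_p$ satisfies the equation whenever $p \ne 1$ (my addition, not part of the answer; assumes SymPy):

```python
import sympy as sp

x, p = sp.symbols('x p')
y = sp.exp(p * x) / (p - 1)**2          # singular at p = 1, the resonant case
residual = sp.diff(y, x, 2) - 2 * sp.diff(y, x) + y - sp.exp(p * x)
print(sp.simplify(residual))  # 0
```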
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What exactly did Hermann Weyl mean? "The introduction of numbers as coordinates is an act of violence." - Hermann Weyl. A lot of people like this quote, apparently. They also seem to associate it to the manifold context in the obvious way: they interpret the quote as saying that focusing on coordinate charts is insightless, or at least dry. Although I aggree with this, I feel like the way the quote is phrased is not inclined to that interpretation. If not by the fact that Weyl is connected to geometry (and that this quote is in Bredon), I would interpret this phrase as saying that "Representing numbers by a given basis is insightless and can lead to mistakes, like thinking that divisibility is dependent of the number basis you are using." I think the words numbers and coordinates are strangely placed. Therefore, my question is: What was the precise context on which this quote arose and, if possible, can we pinpoint exactly what did Weyl mean?
From a Google search, it appears the quote is from Hermann Weyl's Philosophy of Mathematics and Natural Science. I found a copy online here; the relevant passage is on page 90 (search "act of violence"): The introduction of numbers as coordinates by reference to the particular division scheme of the open one dimensional continuum is an act of violence whose only practical vindication is the special calculatory manageability of the ordinary number continuum with its four basic operations. The topological skeleton determines the connectivity of the manifold in the large. His meaning appears to turn on the idea of a "division scheme." In a preceding paragraph, he wrote: In general a coordinate assignment covers only part of a given continuous manifold. The 'coordinate' $(x_1, \ldots, x_n)$ is a symbol consisting of real numbers. The continuum of real numbers can be thought of as created by iterated bipartition. In order to account for the nature of a manifold as a whole, topology had to develop combinatorial schemes of a more general nature. To paraphrase, I believe he's saying that the topological structure of both the real number line and manifolds more generally is determined by how it breaks into smaller pieces, and how those pieces fit together (in modern terminology, we might look at a space's locale of open sets). To add a coordinate system rooted in arithmetic operations, either to the real line or to a manifold, in his view, is to add too much structure (although it does help with calculations).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 1, "answer_id": 0 }
Find the nullity and rank of a 3x5 matrix C where three columns = 0? Tricky problem. So I have a 3x5 matrix C, where {s,t,u,v,w} (I'm assuming those are its columns) is a linearly independent set of vectors in R^5, and that Cu=0, Cv=0, Cw=0. What is the rank and nullity of C? I'm guessing with the latter part of the question that Cu, Cv, and Cw are linearly independent in R^3, meaning the other two (Cs and Ct) are free columns, making the nullity equal to 2 and thus making the rank equal to 3. This is a wild guess, so it's probably wrong.
* *The elements $\{s,t,u,v,w\}$ are not the columns of $C$, they are just linearly independent vectors in $\mathbb{R}^5$. *The nullity of $C$ is the dimension of its nullspace, which is the subspace of $\mathbb{R}^5$ consisting of vectors $x$ satisfying $Cx=0$. You already have three linearly independent vectors in the nullspace of $C$, so the nullity is at least $3$. *I am not sure you can say any more. If we let your five vectors be the standard basis vectors for $\mathbb{R}^5$, then both the matrices $$C=\begin{bmatrix}0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{bmatrix}$$ and $$C=\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&0&0&0\end{bmatrix}$$ send $(0,0,1,0,0)^\top$, $(0,0,0,1,0)^\top$, and $(0,0,0,0,1)^\top$ to zero, but they have different ranks and nullities.
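A small numerical illustration of the two extremes (my addition, not part of the answer; assumes NumPy):

```python
import numpy as np

C1 = np.zeros((3, 5))
C2 = np.array([[1., 0, 0, 0, 0],
               [0, 1., 0, 0, 0],
               [0, 0, 0, 0, 0]])
for C in (C1, C2):
    rank = np.linalg.matrix_rank(C)
    print("rank:", rank, "nullity:", 5 - rank)  # (0, 5) and (2, 3)
```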
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to evaluate $\int_0^A \frac{\tanh x}{x}dx$? How to evaluate $$\int_0^A \frac{\tanh x}{x}dx$$ Where $A$ is a large positive number. The answer is: $$\ln (4e^\gamma A/\pi)$$, where $\gamma$ is Euler constant. I have no idea how to get this result. Here is a numerical result, blue is the original integral.
That formula is indeed an asymptotic expansion. Indeed, integration by parts yields $$ \int_{0}^{A} \frac{\tanh x}{x} \, dx = \tanh A \log A - \int_{0}^{A} \frac{\log x}{\cosh^2 x} \, dx $$ for $A > 0$, and we easily see $$ \int_{0}^{A} \frac{\log x}{\cosh^2 x} \, dx = \int_{0}^{\infty} \frac{\log x}{\cosh^2 x} \, dx + \mathcal{O}(e^{-2A}\log A). $$ Finally, it is not impossible to compute the last integral, and the result is $$ \int_{0}^{\infty} \frac{\log x}{\cosh^2 x} \, dx = \log(\pi/4) - \gamma. \tag{*} $$ (I will skip this part, but if you want I will add a proof of this.) Putting it all together, $$ \int_{0}^{A} \frac{\tanh x}{x} \, dx = \log A + \gamma - \log(\pi/4) + \mathcal{O}(e^{-2A}\log A). $$ Addendum. (Proof of $\text{(*)}$) Notice that $$ I(s) := \int_{0}^{\infty} \frac{x^{s-1}}{\cosh^2 x} \, dx $$ defines a holomorphic function for $\Re(s) > 0$ and that the integral in $\text{(*)}$ is $I'(1)$. Our goal is to identify $I(s)$. This can be done by the following standard technique: \begin{align*} I(s) &= \int_{0}^{\infty} 4x^{s-1} \cdot \frac{e^{-2x}}{(1 + e^{-2x})^2} \, dx \\ &= \int_{0}^{\infty} 4x^{s-1} \sum_{k=1}^{\infty} (-1)^{k-1} k e^{-2kx} \, dx \\ &= 2^{2-s} \Gamma(s) \sum_{k=1}^{\infty} (-1)^{k-1} k^{1-s} \\ &= 2^{2-2s}(2^s - 4) \Gamma(s) \zeta(s-1). \end{align*} This calculation works only when $\Re(s) > 2$, but the result remains valid for all of $\Re(s) > 0$ by the principle of analytic continuation. Then logarithmic differentiation gives $$ I'(1) = I(1)\left( -3\log 2 + \frac{\Gamma'(1)}{\Gamma(1)} + \frac{\zeta'(0)}{\zeta(0)} \right). $$ Now the conclusion follows from known facts $$ \zeta(0) = -\frac{1}{2}, \quad \zeta'(0) = -\frac{1}{2}\log(2\pi), \quad \Gamma'(1) = -\gamma. $$
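A quick numerical check of the resulting asymptotic formula (my addition, not part of the answer; assumes NumPy and SciPy):

```python
import numpy as np
from scipy.integrate import quad

A = 50.0
val, _ = quad(lambda x: np.tanh(x) / x, 0, A)
print(val, np.log(4 * np.exp(np.euler_gamma) * A / np.pi))  # agree closely
```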
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
integrate over a cube given some differential form What is the process of integrating a differential form over a given cube (a hyperdimensional object)? I have read a lot of qualitative discussion of this, but rarely find examples of how to compute such integrals step by step. For example, if I have a 2-cube $$[0,1]^2\to\mathbb{R}^3$$ with $c$ defined to be $$c(t_1,t_2)=(t_1^2,t_1t_2,t_2^2)$$ and I want to integrate $d\alpha$ on this, where $\alpha$ is: $\alpha=x_1dx_2+x_1dx_3+x_2dx_3$, how should I approach this generally? In a neater way, what is $\int_{c}d\alpha$?
Pull back the form to $[0,1]^2$ and integrate it there. The computation looks like this: we have $(x_1,x_2,x_3) = (t_1^2, t_1 t_2, t_2^2)$ and thus $(dx_1,dx_2,dx_3) = (2t_1 dt_1, t_1 dt_2 + t_2 dt_1, 2t_2 dt_2)$. Differentiating $\alpha$ we have $$d\alpha = dx_1 \wedge dx_2 + dx_1 \wedge dx_3 + dx_2 \wedge dx_3.$$ Substituting the expressions for $dx_i$ in we get $$c^* d\alpha = 2t_1^2 dt_1 \wedge dt_2 + 4t_1 t_2 dt_1 \wedge dt_2 +2t_2^2 dt_1 \wedge dt_2.$$ Thus $$\int_c d\alpha = \int_{[0,1]^2} c^* d\alpha = \int_{[0,1]^2}2(t_1 + t_2)^2dt_1 \wedge dt_2 = \int_0^1 \int_0^1 2(t_1 + t_2)^2 dt_1 dt_2.$$
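Evaluating the final iterated integral (my addition, not part of the answer; assumes SymPy) gives $7/3$:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
print(sp.integrate(2 * (t1 + t2)**2, (t1, 0, 1), (t2, 0, 1)))  # 7/3
```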
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question about the derivative definition The derivative at a point $x$ is defined as: $\lim\limits_{h\to0} \frac{f(x+h) - f(x)}h$ But if $h\to0$, wouldn't that mean: $\frac{f(x+0) - f(x)}0 = \frac0{0}$ which is undefined?
Actually, $ \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} $ is the value which $\frac{f(x+h) - f(x)}{h}$ approaches as you keep shrinking $h$; it is never evaluated at $h=0$ itself. Here is how my teacher explained the idea to me: Think of drawing a tangent. It's a line touching a curve at only one point. But how could that be possible? Two points determine a line. The tangent is the limiting case of a secant where the two points close in on each other. Link to Animation Like he said, and as you correctly pointed out, the expression is meaningless if $h=0$. Here is another paradox, one that Zeno used: Suppose an arrow is shot which travels from A to B. Consider any instant. Since no time elapses during an instant, the arrow does not move during that instant. But the entire time of flight consists of instants alone. Hence, the arrow must not have moved. The key here is to introduce the idea of speed, which is, once again, a limit: the distance travelled per duration as the duration approaches an instant (tends to zero).
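The secant-slope picture is easy to see numerically (my addition, not part of the answer): the difference quotient approaches the derivative as $h$ shrinks, without ever being evaluated at $h=0$.

```python
f = lambda x: x**2
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(h, (f(2 + h) - f(2)) / h)  # tends to f'(2) = 4 as h shrinks
```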
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 1 }
Finding the reflection that reflects in an arbitrary line y=mx+b How can I find the reflection that reflects in an arbitrary line, $y=mx+b$? I've seen examples where it's $y=mx$, without taking the factor of $b$ into account, but I want to know how you can take $b$ into account. After searching around for some results, I came to this matrix, which I think can solve my problem, but it doesn't seem to work. $$ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1-m^2}{1 + m^2} & \frac{-2m}{1 + m^2} & \frac{-2mb}{1 + m^2} \\ \frac{-2m}{1 + m^2} & \frac{m^2-1}{1 + m^2} & \frac{2b}{1 + m^2} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}. $$ The example I tried using this matrix is the point $(0,8)$ reflected in $y=-\frac{1}{2}x+2$. The result I get from that matrix is $[6.4,-0.6,0]$. The actual answer should be $[-4.8, -1.6]$, according to GeoGebra.
One way to do this is as a composition of three transformations: * *Translate by $(0,-b)$ so that the line $y=mx+b$ maps to $y=mx$. *Reflect through the line $y=mx$ using the known formula. *Translate by $(0,b)$ to undo the earlier translation. The translation matrices are, respectively, $$ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -b \\ 0 & 0 & 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} $$ and the matrix of the reflection about $y=mx$ is $$ \frac{1}{1 + m^2} \begin{pmatrix} 1-m^2 & 2m & 0 \\ 2m & m^2-1 & 0 \\ 0 & 0 & 1 + m^2 \end{pmatrix}. $$ Applying these in the correct sequence, the transformation is $$ \frac{1}{1 + m^2} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1-m^2 & 2m & 0 \\ 2m & m^2-1 & 0 \\ 0 & 0 & 1 + m^2 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -b \\ 0 & 0 & 1 \end{pmatrix} $$ $$ = \frac{1}{1 + m^2} \begin{pmatrix} 1-m^2 & 2m & -2mb \\ 2m & m^2-1 & 2b \\ 0 & 0 & 1 + m^2 \end{pmatrix}. $$ This is much like the matrix you found, but the entries that you set to $\frac{-2m}{1+m^2}$ are instead $\frac{2m}{1+m^2}$. Setting $m=-\frac12$, $b=2$, the matrix is $$ \frac45 \begin{pmatrix} \frac34 & -1 & 2 \\ -1 & -\frac34 & 4 \\ 0 & 0 & \frac54 \end{pmatrix} = \begin{pmatrix} 0.6 & -0.8 & 1.6 \\ -0.8 & -0.6 & 3.2 \\ 0 & 0 & 1 \end{pmatrix} $$ and applying this to the point $(0,8)$ we have $$ \begin{pmatrix} 0.6 & -0.8 & 1.6 \\ -0.8 & -0.6 & 3.2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 8 \\ 1 \end{pmatrix} = \begin{pmatrix} -4.8 \\ -1.6 \\ 1 \end{pmatrix}, $$ that is, the reflection of $(0,8)$ is $(-4.8, -1.6)$, so the matrix multiplication has the desired effect.
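A compact check of the whole construction (my addition, not part of the answer; assumes NumPy, and the helper name is hypothetical):

```python
import numpy as np

def reflection_matrix(m, b):
    """Homogeneous 3x3 matrix reflecting points across y = m*x + b."""
    d = 1 + m**2
    return np.array([[(1 - m**2) / d,  2 * m / d,      -2 * m * b / d],
                     [ 2 * m / d,     (m**2 - 1) / d,   2 * b / d],
                     [ 0,              0,               1]])

M = reflection_matrix(-0.5, 2)
print(M @ np.array([0, 8, 1]))  # [-4.8 -1.6  1. ]
```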
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Finding the coordinate C. A triangle $A$, $B$, $C$ has the coordinates: $A = (-1, 3)$ $B = (3, 1)$ $C = (x, y)$ $BC$ is perpendicular to $AB$. Find the coordinates of $C$ My attempt: Grad of $AB$ = $$\frac{3-1}{-1-3} = -0.5$$ Grad of $BC = 2$ ($-0.5 \times 2 = -1$ because AB and BC are perpendicular). Equation of $BC$ $(y-1) = 2(x-3)$ $y = 2x - 5$ Equation of $AC$ $(y-3) = m(x--1)$ $y = mx+m+3$ I do not know how to proceed further. Please help me out.
The equation of $AC$ is $x-3y=-10$: the slope of a line $ax+by=c$ is $-\frac{a}{b}$, where $a$ is the $x$-coefficient and $b$ is the $y$-coefficient, so here the slope is $-\frac{1}{-3}=\frac13$. $C$ is the point where $AC$ and $BC$ meet, so we have the two simultaneous equations $2x-y=5$ and $x-3y=-10$; solving them, you get $x=5$ and $y=5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Radius of convergence of this power series We're given the power series $$ \sum_1^\infty \frac{j!}{j^j}z^j$$ and are asked to find the radius of convergence R. I know the formula $R=1/\limsup(a_n ^{1/n})$, which leads me to compute $\lim \frac{(j!)^{1/j}}{j}$, and then I'm stuck. The solution manual calculates R by $1/\lim|\frac{a_{j+1}}{a_j}|$, but I can't figure out the motivation for that formula.
Use Stirling's formula $$n! \sim \sqrt{2\pi n}\; n^n e^{-n}.$$ It gives $$\frac{(j!)^{1/j}}{j} \sim \frac{(2\pi j)^{1/(2j)}}{e} \to \frac1e,$$ so $R = e$.
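Numerically, $(j!)^{1/j}/j$ indeed approaches $1/e \approx 0.3679$ (my addition, not part of the answer; uses `lgamma` to avoid overflow for large $j$):

```python
from math import exp, lgamma

# lgamma(j + 1) = log(j!), so exp(lgamma(j + 1)/j)/j = (j!)^(1/j) / j.
for j in (10, 100, 1000, 10000):
    print(j, exp(lgamma(j + 1) / j) / j)
```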
{ "language": "en", "url": "https://math.stackexchange.com/questions/1493965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A and B play a series of games. Find the probability that a total of 4 games are played. A and B play a series of games. Each game is independently won by A with probability $p$ and by B with probability $1 - p$. They stop when the total number of wins of one of the players is two greater than that of the other player. The player with the greater number of total wins is declared the winner of the series. (a) Find the probability that a total of 4 games are played. (b) Find the probability that A is the winner of the series. I thought I could do this using conditional probability and independent trials/Bernoulli trials, but I am really confused. Let's say I want to find the probability that A wins three games and B wins 1 game, and add that to the probability that B wins 3 games and A wins 1 game, but I am not sure how to do that because B has to win at least 1 game before A wins 2. If I set $n$ = number of games played, would the sample space equal $p^n \cdot (1-p)^n$? How do I solve this? Is there an easier way?
I would consider two games at a time. Two games can result in AA (team A wins, probability $p^2$) or BB (team B wins, probability $(1-p)^2$) or AB, BA considered together (match is back to starting state, with probability $2p(1-p)$). Then the probability that the match goes for four games (two pairs) is $(2p(1-p))(p^2+(1-p)^2)$, occurring when the first pair is split and the second pair is swept. The probability that team A wins the match is $\frac{p^2}{p^2+(1-p)^2}$ since that is the ratio of team A winning to team B winning in any given round of two games.
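A simulation agreeing with both formulas (my addition, not part of the answer; plain Python, function name hypothetical):

```python
import random

def simulate(p, trials=200_000):
    four_games = a_wins = 0
    for _ in range(trials):
        diff = games = 0               # diff = (A's wins) - (B's wins)
        while abs(diff) < 2:
            diff += 1 if random.random() < p else -1
            games += 1
        four_games += (games == 4)
        a_wins += (diff == 2)
    return four_games / trials, a_wins / trials

p = 0.6
print(simulate(p))
print(2*p*(1 - p) * (p**2 + (1 - p)**2), p**2 / (p**2 + (1 - p)**2))
```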
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Is it possible to find the absolute value of an integer using only elementary arithmetic? Using only addition, subtraction, multiplication, division, and "remainder" (modulo), can the absolute value of any integer be calculated? To be explicit, I am hoping to find a method that does not involve a piecewise function (i.e. branching, if, if you will.)
I think you need this: https://stackoverflow.com/questions/9772348/get-absolute-value-without-using-abs-function-nor-if-statement. Please note that remainder is modulo, absolute is modulus, so please correct the question, because I can't suggest edit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Evaluate $\int\frac {\sin 4x }{\sin x}\ dx$ Evaluate $$\int \frac{\sin 4x}{\sin x} dx$$ Attempt: I've tried to use the double angle formulas and get it all into one identity, which came out as: $$ \int \left( 8\cos^3x - 4\cos x \right) dx $$ But I'm not sure if this is the right way to go about it. Any help would be much appreciated!
We can put the complex identity $$\sin \alpha := \frac{\exp(i \alpha) - \exp(-i \alpha)}{2i}$$ to efficient use here. Taking $\alpha = 4 x$ gives $$\sin 4x = \frac{\exp(4 i x) - \exp(-4 i x)}{2i}.$$ We can use a difference-of-squares factorization (twice) to write the numerator as \begin{align*} \exp(4 i x) - \exp(-4 i x) &= [\exp(2 i x) + \exp(-2 i x)][\exp(2 i x) - \exp(-2 i x)] \\ &= [\exp(2 i x) + \exp(-2 i x)][\exp (i x) + \exp(- i x)][\exp (i x) - \exp(- i x)] \end{align*} Now, substituting using the complex identity for $\alpha = x$ gives \begin{align*} \sin 4x = [\exp(2 i x) + \exp(-2 i x)][\exp (i x) + \exp(- i x)] \cdot \frac{\exp (i x) - \exp(- i x)}{2 i} = [\exp(2 i x) + \exp(-2 i x)][\exp (i x) + \exp(- i x)] \sin x. \end{align*} So, expanding and using the corresponding cosine identity $$\cos \alpha = \frac{\exp(i \alpha) + \exp(-i \alpha)}{2}$$ gives \begin{align*} \frac{\sin 4x}{\sin x} &= [\exp(2 i x) + \exp(-2 i x)][\exp (i x) + \exp(- i x)]\\ &= \exp(3 i x) + \exp(-3 i x) + \exp(i x) + \exp(-i x) \\ &= 2 (\cos 3x + \cos x) . \end{align*} In this form, we can compute the antiderivative quickly: $$\int \frac{\sin 4x}{\sin x}\,dx = \frac23 \sin 3x + 2\sin x + C.$$
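A symbolic confirmation of the identity and the antiderivative (my addition, not part of the answer; assumes SymPy):

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.expand_trig(sp.sin(4*x)) / sp.sin(x)
print(sp.simplify(lhs - 2*(sp.cos(3*x) + sp.cos(x))))  # 0
print(sp.integrate(2*(sp.cos(3*x) + sp.cos(x)), x))    # 2*sin(3*x)/3 + 2*sin(x)
```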
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Show that a matrix $A=\pmatrix{a&b\\c&d}$ satisfies $A^2-(a+d)A+(ad-bc)I=O$ Let $A= \begin{bmatrix} a & b \\ c & d \end{bmatrix} ,a,b,c,d\in\mathbb{R}$ . Prove that every matrix $A$ satisfies the condition $$A^2-(a+d)A+(ad-bc)I=O .$$ Find $$ \begin{bmatrix} a & b \\ c & -a \end{bmatrix}^n .$$ For $a=1,b=2,c=3,d=4$ equality $A^2-(a+d)A+(ad-bc)I=O$ holds. How to prove this equality for every $A= \begin{bmatrix} a & b \\ c & d \end{bmatrix}$?
One can of course prove this directly by substituting and computing the entries of the $2 \times 2$ matrix on the left-hand side, but here's an outline for a solution that reduces the amount of actual computation one needs to do. Hint * *Prove the claim for diagonal matrices. *Prove the claim for matrices similar to diagonal matrices. (Recall that the trace and determinant of a matrix, which up to sign are the linear and constant terms of the polynomial in $A$, are invariant under similarity.) *Use the fact that the set of diagonalizable matrices (those similar to diagonal matrices) is dense in the set of all $2 \times 2$ matrices. Remark This is anyway a special case of a more general fact, the Cayley-Hamilton theorem: any matrix $A$ satisfies $p_A(A) = O$, where $p_A$ is the characteristic polynomial $p_A(t) := \det(t I - A)$ of $A$.
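For the skeptical, the direct computation is a one-liner in a CAS (my addition, not part of the answer; assumes SymPy):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
print(sp.simplify(A**2 - (a + d)*A + (a*d - b*c)*sp.eye(2)))  # zero matrix
```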
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Regular categories and epis stable under pullbacks The definition of a regular epi is an arrow which coequalizes some parallel pair. Isn't this just another name for a coequalizer? One of the (usual) axioms for a regular category says each arrow has a kernel pair. Another says regular epis are stable under pullbacks. But, some books warn this is not equivalent to stability of coequalizers under pullbacks. I don't understand this. Generally, if some arrow is a coequalizer and has a kernel pair, it is the coequalizer of its kernel pair. Hence, in a regular category an arrow is a coequalizer iff it coequalizes its kernel pair, but what does this change? A counterexample sometimes given in $\mathsf{Ab}$ is to pull back the coequalizer of the canonical injections $\iota _1,\iota _2:A\rightarrow A\oplus A$ along the zero morphism. But I don't see what this goes to show. Can someone explain?
I think your confusion is ultimately a confusion about what it means for the various notions to be "stable under pullbacks". First, "regular epis are stable under pullbacks" means that if $$\require{AMScd} \begin{CD} A @>{}>> B\\ @V{f'}VV @V{f}VV \\ C @>{}>> D \end{CD}$$ is a pullback square and $f$ is regular epi, then $f'$ is also regular epi. Second, "coequalizers are stable under pullbacks" means the following: Suppose you have a diagram $$\begin{CD} && Z\\ & @V{a,b}VV \\ && B\\ & @V{f}VV \\ C @>{g}>> D \end{CD}$$ where $f$ is the coequalizer of a pair of parallel arrows $a$ and $b$. (The arrow labelled $a,b$ should really be a double arrow, but as far as I know MathJax does not support double arrows in commutative diagrams.) Form the pullback $C\leftarrow A\to B$ of $f$ and $g$ and the pullback $C\leftarrow Y \to Z$ of $fa=fb$ and $g$. Let $a':Y\to A$ be induced by the given map $Y\to C$ and the composition $Y\to Z\stackrel{a}\to B$, and similarly let $b':Y\to A$ be induced by $Y\to C$ and $Y\to Z\stackrel{b}\to B$. We now have the following diagram: $$\begin{CD} Y @>{}>> Z\\ @V{a',b'}VV @V{a,b}VV \\ A @>{}>> B\\ @V{f'}VV @V{f}VV \\ C @>{g}>> D \end{CD}$$ (Again, the vertical arrows on top should be double arrows.) To say that "coequalizers are stable under pullbacks" means that in this diagram, $f'$ is the coequalizer of $a'$ and $b'$. In particular, even if regular epis are stable under pullback, there is no particular reason to believe that $f'$ is the coequalizer of $a'$ and $b'$: all you know is that there must exist some maps $c,d:X\to A$ which $f'$ is the coequalizer of. It is a worthwhile exercise to work out what's going on in this diagram in $\mathsf{Ab}$ when $B=Z\oplus Z$, $a,b:Z\to B$ are the two inclusions, $f:B\to D=Z$ is the coequalizer (i.e., the map $(x,y)\mapsto x+y$), and $C=0$. You should get that $A\cong Z$ and $Y=0$, so unless $Z=0$, $f'$ is not the coequalizer of $a'$ and $b'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find $\lim_{x\to0}{\frac{\arctan (2x)}{3x}}$ without using '0/0=1' Find limit as $$ \lim_{x \to 0} \frac{\arctan (2x)}{3x} $$ without using $\frac{0}{0} = 1$. I wanted to use $$ \frac{2}{3} \cdot \frac{0}{0} = \frac{2}{3} \cdot 1, $$ but our teacher considers $\frac{0}{0}$ a "dangerous case" and we are not allowed to use this method. We have not studied integrals yet, so I cannot use integral formulas. We have only studied a bit of derivative taking... I tried many substitutions, but failed :( Any suggestion?
You may use L'Hospital's rule: $$ \lim_{x\to0}{\frac{\arctan (2x)}{3x}}=\lim_{x\to0}{\frac{\frac{2}{1+4x^2}}{3}}=\frac23. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Other ways to compute this integral? The following (improper) integral comes up in exercise 2.27 in Folland (see this other question): $$I = \int_0^\infty \frac{a}{e^{ax}-1} - \frac{b}{e^{bx}-1}\,dx.$$ I computed it as follows. An antiderivative for $a(e^{ax}-1)^{-1}$ is $\log(1-e^{-ax})$, found by substituting $u = e^{ax}-1$ and noting that $1/u(u+1) = 1/u - 1/(u+1)$. Therefore, $$\int_{\varepsilon}^R \frac{a}{e^{ax}-1} - \frac{b}{e^{bx}-1}\,dx = \log\left(\frac{1-e^{-aR}}{1-e^{-bR}}\right) + \log\left(\frac{1-e^{-b\varepsilon}}{1-e^{-a\varepsilon}}\right).$$ The first term goes to $\log(1) = 0$ as $R\to\infty$. For the second term, we have $$\lim_{\varepsilon\to 0^+} \log\left(\frac{1-e^{-b\varepsilon}}{1-e^{-a\varepsilon}}\right) = \log \lim_{\varepsilon\to 0^+} \frac{1-e^{-b\varepsilon}}{1-e^{-a\varepsilon}},$$ which looks like $0/0$. Applying l'Hospital's rule, we get $$\lim_{\varepsilon\to 0^+} \frac{1-e^{-b\varepsilon}}{1-e^{-a\varepsilon}} = \lim_{\varepsilon\to 0^+} \frac{be^{-b\varepsilon}}{ae^{-a\varepsilon}} = \frac{b}{a}$$ so $I = \log(b/a)$. What are some other ways to compute this integral? Perhaps there is a method incorporating Frullani's theorem?
Here's a nice solution. If we let $$f(x) = \frac{x}{e^x-1} = \frac{1}{1+\frac{x}{2}+\frac{x^2}{6}+\dotsb}$$ then $f(0) = 1$ and $f(\infty) = 0$ and $$\frac{f(ax) - f(bx)}{x} = \frac{a}{e^{ax}-1} - \frac{b}{e^{bx}-1},$$ and now apply Frullani's theorem.
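A numerical check of the Frullani evaluation (my addition, not part of the answer; assumes SciPy) for, say, $a=2$, $b=5$:

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 5.0
val, _ = quad(lambda x: a/np.expm1(a*x) - b/np.expm1(b*x), 0, np.inf)
print(val, np.log(b / a))  # agree
```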
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is $x^2-1$ applied n times For the function $F(x)=x^2-1$. How do I write $F^n(x)$ ($F$ applied $n$ times) in terms of $x$?
A somewhat trivial way would be $F^n(x) = x^2-1$ if $n=1$ and $F^n(x)=(F^{n-1}(x))^2-1$ otherwise. Presumably you want a closed form, though, and it is probably pretty obvious that in the closed form the leading coefficient is 1, the degree is $2^n$, and the constant term is $-1$ if $n$ is odd and 0 if $n$ is even. Perhaps if you write out large examples you'll see a pattern for intermediate terms, perhaps expressible by some kind of manipulation of the choose function.
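Writing out examples is easy with a CAS (my addition, not part of the answer; assumes SymPy); the degrees $2, 4, 8, \dots$ confirm the $2^n$ pattern:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - 1
g = f
for n in range(1, 6):
    print(n, sp.degree(g), sp.expand(g))
    g = g.subs(x, f)  # compose once more: F^(n+1)(x) = F^n(F(x))
```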
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Isn't every set a G set? It is clear to me that every G-set $X$ can be described via an action on $X$ (a subgroup of $S_X$). The trouble I am having is why specify a set X as a G-set without specifying that there must exist an embedding $F:G\to S_X$. It seems not very useful to just say X is a G-set.
A $G$-set isn't just a set $X$ such that there exists a homomorphism $G\to S_X$; it is a set $X$ equipped with a specific homomorphism $G\to S_X$. That is, a $G$-set is more properly speaking a pair $(X,\rho)$ where $X$ is a set and $\rho:G\to S_X$ is a homomorphism, and only by abuse of terminology do we say "$X$ is a $G$-set". As JHance commented, this is just like how we say "$G$ is a group", when really it is the ordered pair $(G,\cdot)$ which is a group (where $\cdot:G\times G\to G$ is the group operation). You are correct in observing that every set can be made into a $G$-set by just choosing the trivial homomorphism $G\to S_X$. But there might be many other different, more interesting homomorphisms $G\to S_X$, and each one of them defines a different $G$-set which just happens to have the same underlying set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove $\frac xy + \frac yx \ge 2$ I am practicing some homework and I'm stumped. The question asks you to prove that for $x \in Z^+, y \in Z^+$, $\frac xy + \frac yx \ge 2$. So I started by proving that this is true when x and y have the same parity, but I'm not sure how to proceed when x and y have opposite parity. This is my proof so far for opposite parity: $x,y \in Z^+ $ $|$ $x \gt 0,$ $y \gt 0$. Let x be even $(x=2a, $ $ a \in Z^+)$ and y be odd $(y=2b+1, $ $b \in Z^+)$. Then, $\frac xy + \frac yx \ge 2$ $\frac {2a}{2b+1} + \frac {2b+1}{2a} \ge 2$ $\frac {4a^2 + (2b+1)^2}{2a(2b+1)} \ge 2$ $4a^2 + 4b^2 + 4b + 1 \ge 4a(2b+1)$ $4a^2 + 4b^2 + 4b + 1 \ge 8ab + 4a$ $4a^2 - 4a + (2b + 1)^2 \ge 8ab$ $(2a-1)^2 - 1 + (2b+1)^2 \ge 8ab$ I feel like this is not the correct way to go about proving it, but I can't think of a better way to do it. Does anyone have any suggestions? Just hints please, not a solution.
Here is "another" simple method(this is essentially same as completing the square.) of proving $\frac{x}{y}+\frac{y}{x}\geq 2$ by the help of AM-GM Inequality : Consider the set $\{\frac{x}{y},\frac{y}{x}\}.$ Applying AM-GM Inequality on these two values , we have : $$\frac{\frac{x}{y}+\frac{y}{x}}{2} \geq \sqrt {\frac {x}{y} \times \frac{y}{x}}$$ $$\implies \frac{\frac{x}{y}+\frac{y}{x}}{2} \geq \sqrt {1}$$ $$\implies \frac{x}{y}+\frac{y}{x} \geq 1 \times 2$$ $$\implies \frac{x}{y}+\frac{y}{x} \geq 2$$ Hople this helps. :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1494887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 10, "answer_id": 7 }
Envelope of Projectile Trajectories For a given launch velocity $v$ and launch angle $\theta$, the trajectory of a projectile may be described by the standard formula $$y=x\tan\theta-\frac {gx^2}{2v^2}\sec^2\theta$$ For different values of $\theta$ what is the envelope of the different trajectories? Is it a parabola itself? The standard solution to this "envelope of safety" problem is to state the formula as a quadratic in $\tan\theta$ and set the discriminant to zero. The resulting relationship between $x,y$ is the envelope. This question is posted to see if there are other approaches to the solution. Edit 1 Thanks for the nice solutions from Jack and Blue, received so far. From the solution of the envelope it can be worked out that the envelope itself corresponds to the right half of the trajectory of a projectile launched at $(-\frac{v^2}{g},0)$ at a launch angle $\alpha=\frac{\pi}4$ and a launch velocity $V=v\sqrt2$. This means that both vertical and horizontal components of the launch velocity are equal to $v$. It would be interesting to see if these conclusions can be inferred from the problem itself by inspection and without first solving it. If so, then this would form another solution. See also this other question posted subsequently.
Although there are several very nice solutions provided above, here is another approach. No calculus is needed. Consider a projectile launched with angle $\theta$ with relation to an inclined plane with tilt angle $\varphi$. The initial speed is $v_0$. We want to know at what distance from the launch point the projectile will hit the inclined plane. We use a Cartesian coordinate system rotated by the angle $\varphi$ (see figure 1). The trajectory is given by \begin{aligned} x'(t) &=v_{0x'}t+\frac{g_{x'}t^2}2,\\ y'(t) &= v_{0y'}t+\frac{g_{y'}t^2}2, \end{aligned} where $x'$ and $y'$ indicate the coordinates of the projectile in the rotated coordinate frame. Furthermore, we have $v_{0x'}=v_0\cos\theta$, $v_{0y'}=v_0\sin\theta$, $g_{x'}= -g\sin\varphi$ and $g_{y'}=-g\cos\varphi$. At the point where the projectile hits the inclined plane $y'=0$. Hence, we obtain that the time span for the collision is $t_{hit}=-2v_{0y'}/g_{y'}$. Replacing this time, we find \begin{aligned} x'_{hit} &=-2\frac{v_{0x'}v_{0y'}}{g_{y'}}+2\frac{g_{x'}v^2_{0y'}}{g^2_{y'}}\\ &=\frac{v_0^2}{g\cos^2\varphi}\left[\cos\varphi\sin(2\theta)-\sin\varphi(1-\cos(2\theta))\right]\\ &=\frac{v_0^2}{g\cos^2\varphi}\left[\sin(2\theta+\varphi)-\sin\varphi\right] \leq\frac{v_0^2}{g\cos^2\varphi}\left(1-\sin\varphi\right)=\frac{v_0^2}{g(1+\sin\varphi)}. \end{aligned} If we now turn to polar coordinates, the envelope of the projectile trajectories is given by $$ r(\varphi)=\frac{v_0^2}{g(1+\sin\varphi)},\qquad(1) $$ with $0\leq\varphi\leq\pi$. See in figure 2 that this curve is the envelope. [Figure 1: projectile trajectory hitting an inclined plane.] [Figure 2: the envelope given by equation (1).]
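A numerical cross-check (my addition, not part of the answer; assumes NumPy): the polar curve (1) coincides with the pointwise maximum of the trajectories over all launch angles. (In Cartesian form, (1) rearranges to the familiar envelope $y = \frac{v_0^2}{2g} - \frac{g x^2}{2 v_0^2}$.)

```python
import numpy as np

g, v0 = 9.81, 20.0
phi = np.linspace(0, np.pi/2 - 0.01, 400)         # polar angle along the envelope
r = v0**2 / (g * (1 + np.sin(phi)))               # equation (1)
x, y = r * np.cos(phi), r * np.sin(phi)

thetas = np.linspace(0.01, np.pi/2 - 0.01, 4000)  # many launch angles
heights = [x*np.tan(t) - g*x**2 / (2*v0**2*np.cos(t)**2) for t in thetas]
print(np.max(np.abs(np.max(heights, axis=0) - y)))  # ~0: the curves coincide
```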
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 7, "answer_id": 3 }
Using CRT to reason about powers of $2$ where the power is two less than an odd prime Let $p$ be any odd prime. Using the Chinese Remainder Theorem and Fermat's Little Theorem, we know that: $$\frac{2p}{2} + \left(\frac{2p}{p}\right)\left(\frac{2p}{p}\right)^{p-2} = p + 2^{p-1} \equiv 1 \pmod {2p}$$ There exists $c$ where $c < p$ and $2^{p-2} \equiv c \pmod p$ and $p + 2c \equiv 1 \pmod {2p}$ Since $c < p$, it follows that $p < p + 2c < 3p$ so that $p + 2c = 2p+1$ This gives us that $c = \frac{p+1}{2}$ So, then it follows in all cases that: $$2^{p-2} \equiv \frac{p+1}{2} \pmod p$$ Is my reasoning correct?
In the first line, you have a term $\frac{2p}{2}$ mod $2p$. That is a dangerous thing to write, because we cannot divide by $2$ mod $2p$. This is because both $2\cdot 0 = 2p$ mod $2p$ and $2\cdot p = 2p$ mod $2p$, so both 0 and $p$ can be said to be equal to $2p/2$ mod $2p$. In general, it is a bad idea to divide by zero divisors. However, in this case it works out, because in the end you are taking the result mod $p$, and then this ambiguity disappears. Note also that your argument can be done more easily mod $p$ instead of mod $2p$: setting as you do $c$ positive such that $c < p$ and $c = 2^{p - 2}$ mod $p$, we have by Fermat's little theorem $2c = 1 \mod p$, and we have $1 < 2c < 2p$, which again leads to $2c= p +1$.
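A quick empirical check of the congruence (my addition, not part of the answer; plain Python, trial-division primality test for brevity):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in range(3, 500, 2):
    if is_prime(p):
        assert pow(2, p - 2, p) == (p + 1) // 2
print("2^(p-2) = (p+1)/2 (mod p) for every odd prime below 500")
```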
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is a harmonic conjugate unique up to adding a constant? If $v$ and $v_0$ are harmonic conjugates of $u$, then $u + iv$ and $u + iv_0$ are analytic functions. Then $i(v - v_0)$ is analytic, but how does this imply $v - v_0$ is a constant function?
It does not follow from $v-v_0$ being (real) analytic, if that is what you are asking for. $u+iv$ and $u+iv_0$ are complex analytic, i.e. holomorphic, so they satisfy the Cauchy-Riemann differential equations, $u_x = v_y$ and $u_y = -v_x$; the same holds for $v_0$ and $u$. So, consequently, $(v-v_0)_x = 0 = (v-v_0)_y$. I think you should be able to finish the reasoning from here, at least on open connected sets (the statement is true for connected sets only). (Edit: of course, if you assume that $i(v-v_0)$ is a complex analytic function, then obviously the same reasoning applies, so any holomorphic function whose real part vanishes identically is locally constant.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Laplace's equation after change of variables Show that if $u(r, \theta)$ is dependent on $r$ alone, Laplace's equation becomes $$u_{rr} + \frac{1}{r}u_r=0.$$ My first reaction is to replace $r=x$ and $\theta=y$, but obviously it does not work. Then I recall $x=r\cos \theta$ and $y=r\sin \theta$. Then I obtain the following: $$v(r, \theta) = u(r\cos \theta, r\sin \theta).$$ Then I start to differentiate it, but what is $u_r$ and $u_{rr}$? Can anyone give me some hints to move on?
You have $$ \frac{\partial f}{\partial r} = \frac{\partial x}{\partial r} \frac{\partial f}{\partial x} + \frac{\partial y}{\partial r} \frac{\partial f}{\partial y} = f_x \cos{\theta} + f_y \sin{\theta}. $$ Then differentiating again (and using that $\partial \theta/\partial r=0$), $$ f_{rr} = \cos{\theta} (\cos{\theta} \partial_x+\sin{\theta} \partial_y)f_x + \sin{\theta} (\cos{\theta} \partial_x+\sin{\theta} \partial_y)f_y = f_{xx} \cos^2{\theta} + f_{yy}\sin^2{\theta} + 2f_{xy} \sin{\theta}\cos{\theta}. $$ Similarly, we can show that $$ \partial_{\theta} = -r\sin{\theta} \partial_x + r\cos{\theta}\partial_y, $$ so $$ f_{\theta\theta} = \partial_{\theta}(-r\sin{\theta} f_x + r\cos{\theta}f_y) = -r\left(\cos{\theta} f_x +\sin{\theta} f_y \right) + r^2\left[\sin^2{\theta} f_{xx}+\cos^2{\theta}f_{yy} -2\sin{\theta}\cos{\theta} f_{xy}\right]. $$ Thus $$ f_{rr} + \frac{1}{r^2}f_{\theta\theta} + \frac{1}{r}\left(\cos{\theta} f_x + \sin{\theta} f_y\right) = f_{xx}+f_{yy}. $$ But the last term on the left is just $r^{-1}\partial_r f$, so $$ f_{xx} + f_{yy} = f_{rr}+\frac{1}{r}f_r + \frac{1}{r^2}f_{\theta\theta}. $$ That's a bit of a mess, though. Instead, we can use $u(r,\theta)=f(r)$, and $r=\sqrt{x^2+y^2}$, and compute $$ \Delta f = \frac{\partial^2}{\partial x^2} f(\sqrt{x^2+y^2})+\frac{\partial^2}{\partial y^2} f(\sqrt{x^2+y^2}) \\ = \frac{\partial}{\partial x} \left( \frac{x}{\sqrt{x^2+y^2}} f'(\sqrt{x^2+y^2}) \right) + \frac{\partial}{\partial y} \left( \frac{y}{\sqrt{x^2+y^2}} f'(\sqrt{x^2+y^2}) \right) \\ = \frac{y^2}{(x^2+y^2)^{3/2}}f'(\sqrt{x^2+y^2}) + \left(\frac{x}{\sqrt{x^2+y^2}}\right)^2 f''(\sqrt{x^2+y^2}) + \frac{x^2}{(x^2+y^2)^{3/2}}f'(\sqrt{x^2+y^2}) + \left(\frac{y}{\sqrt{x^2+y^2}}\right)^2 f''(\sqrt{x^2+y^2}) \\ = f''(\sqrt{x^2+y^2})+\frac{1}{\sqrt{x^2+y^2}}f'(\sqrt{x^2+y^2}) = u_{rr}+\frac{1}{r}u_r. $$
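A spot-check of the polar form with a concrete radial profile, $f(r)=r^3$ (my addition, not part of the answer; assumes SymPy):

```python
import sympy as sp

x, y, r, t = sp.symbols('x y r theta', positive=True)
u = (x**2 + y**2)**sp.Rational(3, 2)              # f(r) = r^3 in Cartesian form
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)         # Cartesian Laplacian
lap_polar = sp.simplify(lap.subs({x: r*sp.cos(t), y: r*sp.sin(t)}))
f = r**3
print(sp.simplify(lap_polar - (sp.diff(f, r, 2) + sp.diff(f, r)/r)))  # 0
```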
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$A^+ \subset B^+$ if $A \hookrightarrow B$ is inclusion of $C^*$ algebras. Is it true that $A^+ \subset B^+$ if $A \hookrightarrow B$ is inclusion of $C^*$ algebras, where $A^+$ denotes the positive elements in $A$. I read in Murphy 2.1.11 that this is true if $B$ is unital and $A$ contains the unit of $B$. Does it make sense to look at unitizations of $A$ and $B$ if they are non-unital ?
You don't need to look at the spectra. You can characterize positive elements as those of the form $z^*z$. So if $a\in A^+$, then $a=z^*z$ for some $z\in A\subset B$, so $a$ is positive in $B$ too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding real numbers such that $(a-ib)^2 = 4i$ Prove that $(a^2 - b^2) = 0$ I sometimes find myself overcomplicating my life... overthinking simple concepts. Here I don't use what's given, i.e., $$(a − ib)^2 = 4i$$ So I might say let $a = 1$ and $b = 1$ then $a = b$ and $a^2 = b^2$ thus $a^2 - b^2 = 0$ Now that seems fine but I'm given the complex number $(a - ib)^2 = 4i$ Now I know $i^2 = -1$ So here's my attempt $(a - ib)(a - ib) = 4i$ $a^2 - abi - abi + (bi)^2 - 4i = 0$ $a^2 - 2abi - b^2 - 4i = 0$ $ a^2 - b^2 -(ab + 2)2i = 0$ I've obviously not grasped this correctly as this certainly is not what I have been asked to prove. Please could I have some guidance on how to solve this simple equation. I noticed I left out what I am intending to prove which has been included in the title now namely: Prove that $a^2 - b^2 = 0$ Thanks!
You are on the right track, but you are overthinking it. The basic idea behind these sorts of questions is the fact that if $z = x + iy$ and $w = q + ip$ where $z,w \in \mathbb{C}$ and $x,y,q,p \in \mathbb{R}$, then $z = w$ iff $x = q$ and $y = p$. Equipped with this we may deal with your problem. Indeed first we compute the left side $$ (a - ib)^2 = (a^2 - b^2) + i (-2ab) = x + iy$$ and $$ 4i = 0 + 4i = q + ip$$ Now we must have $$x = q \iff a^2 - b^2 = 0, \ y = p \iff -2ab = 4$$ Using these two equations can you now solve for $a$ and $b$?
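Finishing the computation (my addition, not part of the answer; assumes SymPy):

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
sols = sp.solve([sp.Eq(a**2 - b**2, 0), sp.Eq(-2*a*b, 4)], [a, b])
print(sols)  # [(-sqrt(2), sqrt(2)), (sqrt(2), -sqrt(2))]
for s in sols:
    print(sp.expand((s[0] - sp.I * s[1])**2))  # 4*I both times
```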
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
Is there a way to parametrise general quadrics? A general quadric is a surface of the form: $$ Ax^2 + By^2 + Cz^2 + 2Dxy + 2Eyz + 2Fxz + 2Gx + 2Hy + 2Iz + J = 0$$ It can be written as a matrix expression $$ [x, y, z, 1]\begin{bmatrix} A && D && F && G \\ D && B && E && H \\ F && E && C && I \\ G && H && I && J \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \mathbf{p}^\intercal \mathbf{Q} \mathbf{p} = 0 $$ Is it possible to represent this quadric as a parametric surface $\mathbf{p}(u, v): \mathbb{R}^2 \to \mathbb{R}^3$? $$ \forall u, v, \mathbf{p}(u, v)^\intercal \mathbf{Q}\mathbf{p}(u, v) = 0 $$
Given a point $p$ on the quadratic surface $Q$, every line $L$ through $p$ is either tangent to $Q$, or it intersects $Q$ in another point $p_L$. In this way the lines not tangent to $Q$ parametrize $Q-\{p\}$. These lines are in turn parametrized by $\Bbb{R}^2$, so this is possible if you omit one point from the parametrization.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Solving a quadratic equation in $\Bbb{Z}/19\Bbb{Z}$. I am working on a problem and I have written down my thoughts but I am having trouble convincing myself that it is right. It very well may be that it is also completely wrong, which is why I thought I'd post it and my attempt and try to get some feedback. I want to find all the values $c$ such that the system $xy=c$ and $x+y=1$ has a solution in the field $$\mathbb{Z} / 19 \mathbb{Z}$$ What I know: I know that for a solution to a quadratic in this sense to exist, its discriminant must be a square in the field we are working in. Rearranging terms I get that $x^2-x+c=0\bmod(19)$, so I concluded that a solution will exist if and only if $1-4c$ is a square mod $19$. I then calculated the squares mod 19; they are $$0 , 1, 4, 9 ,16 ,6 ,17, 11, 7, 5,$$ and then repeating backwards, i.e. 5, 7, ... So then I just set $1-4c$ to each of these and solved for $c$. I also would be interested in knowing if there is a $c$ that gives a unique solution. My initial thoughts are no, since all the roots repeat. Again, I am not sure if this is correct. Does anyone have any insight?
It seems to me that you are doing too much work here. You have $y = 1-x$, so $x(1-x) = c$. So now you just have to list all values of $x(1-x)$ mod $19$ for $0 \le x < 19$, and you are done. This is hardly more difficult than listing all the squares mod $19$, which you did as the first step of your solution.
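For instance, a one-line brute force (a minimal Python sketch) reproduces the full list of admissible $c$:

```python
# all values x(1-x) mod 19 as x ranges over the field
cs = sorted({x * (1 - x) % 19 for x in range(19)})
print(cs)  # [0, 1, 4, 5, 7, 8, 13, 15, 17, 18]: the admissible values of c
```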
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Can this function be integrated? Can't seem to figure out this integral! I'm trying to integrate this but I think the function can't be integrated? Just wanted to check, and see if anyone is able to find the answer (I used integration by parts but it doesn't work). Thanks in advance; the function I need to integrate is $$\int\frac{x}{x^5+2}dx$$
Probably you'd want to use the calculus of residues to do this. But below I do it using first-year calculus methods. The cumbersome part may be the algebra, and that's what I concentrate on here. \begin{align} x^5 + 2 & = \left( x+\sqrt[5]{2} \right) \underbrace{\left( x - \sqrt[5]{2} e^{i\pi/5}\right)\left( x - \sqrt[5]{2} e^{-i\pi/5}\right)}_\text{conjugates}\ \underbrace{\left( x - \sqrt[5]{2} e^{i3\pi/5}\right)\left( x - \sqrt[5]{2} e^{-i3\pi/5}\right)}_\text{conjugates} \\[10pt] & = \left( x+\sqrt[5]{2} \right) \left( x^2 - 2 \sqrt[5]{2}\, x\cos\frac{\pi} 5 + \sqrt[5]{2}^2 \right)\left( x^2 - 2\sqrt[5]{2}\, x\cos\frac{3\pi}5 + \sqrt[5]{2}^2 \right) \end{align} Then use partial fractions. That the polynomials $x^2 - 2\sqrt[5]{2}\,x\cos\frac{\pi} 5 + \sqrt[5]{2}^2$ and $x^2 - 2\sqrt[5]{2}\,x\cos\frac{3\pi} 5 + \sqrt[5]{2}^2$ cannot be factored using real numbers can be seen from the way we factored them above. So you'll have $$ \cdots + \frac{Bx+C}{x^2 - 2\sqrt[5]{2}\,x\cos\frac{\pi} 5 + \sqrt[5]{2}^2} + \cdots $$ etc. and you'll need to find $B$ and $C$. Let $u=x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2$; then $du = \left(2x - 2\sqrt[5]{2}\cos\frac{\pi}5\right)\,dx$ and so $$ \frac{Bx + C}{x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2}\,dx = \underbrace{\frac{\frac B 2 \left( 2x-2\sqrt[5]{2}\cos\frac{\pi}5 \right)}{x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2}\,dx}_\text{Use the substitution for this part.} + \underbrace{\frac{C + B\sqrt[5]{2}\cos\frac{\pi} 5}{x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2}\,dx}_\text{See below for this part.} $$ To integrate $\frac{C + B\sqrt[5]{2}\cos\frac{\pi} 5}{x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2}\,dx$, complete the square: \begin{align} & x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2 = \left( x^2 - 2x\sqrt[5]{2}\cos\frac{\pi} 5 + \sqrt[5]{2}^2\cos^2\frac{\pi}5 \right) + \sqrt[5]{2}^2 \sin^2\frac{\pi} 5 \\[10pt] = {} & \left( x - \sqrt[5]{2}\cos\frac{\pi} 5 \right)^2 + \sqrt[5]{2}^2\sin^2 \frac{\pi} 5 \\[10pt] = {} & \left(\sqrt[5]{2}^2 \sin^2 \frac{\pi} 5 \right) \left( \left( \frac{x-\sqrt[5]{2}\cos\frac{\pi} 5}{\sqrt[5]{2}\sin\frac{\pi} 5} \right)^2 + 1 \right) = (\text{constant})\cdot(w^2 + 1), \\[10pt] & \text{where } w=\frac{x-\sqrt[5]{2}\cos\frac{\pi} 5}{\sqrt[5]{2}\sin\frac{\pi} 5} \text{ and } dw = \frac{dx}{\sqrt[5]{2}\sin\frac{\pi}5}. \end{align} So you get an arctangent from this term.
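If you want to double-check the real factorization numerically before grinding through the partial fractions, here is a minimal NumPy sketch (my own verification, not part of the method itself):

```python
import numpy as np

r, c1, c3 = 2 ** 0.2, np.cos(np.pi / 5), np.cos(3 * np.pi / 5)
lin = np.array([1.0, r])                    # x + 2^(1/5)
q1 = np.array([1.0, -2 * r * c1, r * r])    # x^2 - 2*2^(1/5)*cos(pi/5)*x + 2^(2/5)
q3 = np.array([1.0, -2 * r * c3, r * r])    # x^2 - 2*2^(1/5)*cos(3pi/5)*x + 2^(2/5)
print(np.polymul(np.polymul(lin, q1), q3))  # ~ [1, 0, 0, 0, 0, 2], i.e. x^5 + 2
```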
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can one do anything useful with a functional equation like $g(x^2) = \frac{4x^2-1}{2x^2+1}g(x)$? I got $$g(x^2) = \frac{4x^2-1}{2x^2+1}g(x)$$ as a functional equation for a generating function. Is there a way to get a closed form or some asymptotic information about the Taylor coefficients from such an equation? Here g(0) = 1, g'(0) = 0, and g''(0) = 4. Edit: Thanks, everyone; as has been pointed out , I've just made a mistake in obtaining the recurrence relation. I'll leave this as-is instead of editing because I think some of the comments will be helpful to others and if I change it now they won't make sense.
I get that the only solution is $g(x) = 0$ if we can write $g(x) =\sum_{n=0}^{\infty} a_n x^n $. Here is my proof: We have $g(x^2) = \frac{4x^2-1}{2x^2+1}g(x) $, or $(2x^2+1)g(x^2) = (4x^2-1)g(x) $. From this, as copper.hat pointed out, $g(0) = 0$. If $g(x) =\sum_{n=1}^{\infty} a_n x^n $ (since $g(0) = 0$), the left side is $\begin{align*} \sum_{n=1}^{\infty} (2x^2+1)a_nx^{2n} &=\sum_{n=1}^{\infty} 2a_nx^{2n+2}+\sum_{n=1}^{\infty} a_nx^{2n}\\ &=\sum_{n=2}^{\infty} 2a_{n-1}x^{2n}+\sum_{n=1}^{\infty} a_nx^{2n}\\ &=a_1x^2+\sum_{n=2}^{\infty} (2a_{n-1}+a_n)x^{2n}\\ \end{align*} $ and the right side is $\begin{align*} \sum_{n=1}^{\infty} (4x^2-1)a_nx^{n} &=\sum_{n=1}^{\infty} 4a_nx^{n+2}-\sum_{n=1}^{\infty} a_nx^{n}\\ &=\sum_{n=3}^{\infty} 4a_{n-2}x^{n}-\sum_{n=1}^{\infty} a_nx^{n}\\ &=-a_1x-a_2x^2+\sum_{n=3}^{\infty} (4a_{n-2}-a_n)x^{n}\\ \end{align*} $ Equating coefficients, $-a_1 = 0$, $-a_2 = a_1$, and, comparing the coefficients of $x^{2n}$ ($n\ge2$) and of $x^{2n+1}$ ($n\ge1$), $2a_{n-1}+a_n =4a_{2n-2}-a_{2n} $ and $0 =4a_{2n-1}-a_{2n+1} $. Therefore $a_1 = a_2 = 0$, and $a_{2n} =4a_{2n-2}-2a_{n-1}-a_n $ and $a_{2n+1} =4a_{2n-1} $. From this second recurrence, since $a_1 = 0$, $a_{2n+1} = 0$ for all $n$. From $a_{2n} =4a_{2n-2}-2a_{n-1}-a_n $: for $ n=2$, $a_4 =4a_2-2a_1-a_2 =0 $; for $ n=3$, $a_6 =4a_4-2a_2-a_3 =0 $. By strong induction, all the $a_{2n} = 0$, so the only solution is $g(x) = 0$.
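For anyone who wants to double-check the bookkeeping, here is a small SymPy sketch (my own; it truncates $g$ at degree $7$, and the compared coefficients are exact up to that order):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a0:8')                         # coefficients a_0, ..., a_7
g = sum(a[n] * x**n for n in range(8))
expr = sp.expand((2 * x**2 + 1) * g.subs(x, x**2) - (4 * x**2 - 1) * g)
eqs = [sp.Eq(expr.coeff(x, k), 0) for k in range(8)]  # exact for k < 8
print(sp.solve(eqs, a))  # every coefficient is forced to 0
```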
{ "language": "en", "url": "https://math.stackexchange.com/questions/1495946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to determine image of the fundamental group of a covering space of $S^1 \vee S^1$ Consider the covering space of $S^1 \vee S^1$ in $(1)$. Then distinct loops in $(1)$ are represented by $\langle a, b^2, bab^{-1} \rangle$. Thus elements of the fundamental group are words generated by these distinct loops. This fundamental group will map to a subgroup $H$ of $\pi_1( S^1 \vee S^1) \cong \mathbb{Z} * \mathbb{Z}$ under the covering map. With this covering map, I think, $a \mapsto a$, $bab^{-1} \mapsto a$ and $b^2 \mapsto b^2$. I can't quite put this information together to determine $H$. My guess would be $\langle a, b^2 \rangle$ but I am not quite sure if this is correct. Any help is appreciated!
The covering space in $(1)$ is $S^1\vee S^1\vee S^1$; its fundamental group is $\mathbb{Z}*\mathbb{Z}*\mathbb{Z}$, and its image under the covering map is the subgroup of $\pi_1(S^1\vee S^1)$ generated by $a$, $b^2$, and $bab^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to estimate the coefficients of a logistic model Consider the model $logit(p)=a+bx$. I would like to get an analytic formula for $a$ and $b$, as in linear regression. In linear regression we can get a formula for the estimates of $a$ and $b$. I tried using MLE to estimate it, but it is too complicated for me.
To estimate the values of $a$ and $b$ in your model $$logit(p)=a+bx^{(i)}$$ note first that, unlike linear regression, this model has no closed-form solution, so the coefficients are found iteratively. To simplify, you can consider that $a$ multiplies a constant feature $x^{(0)}$ with value $1$, and use matrix notation: $$ Z = \theta\cdot x$$ In logistic regression you can use the sigmoid function as below: $$ h_\theta (x) = \dfrac{1}{1 + e^{-\theta^T x}} $$ Now we need to define a cost function $J(\theta)$; the MSE (mean squared error) is a function often used for this: $$J(\theta) = \dfrac {1}{2m} \Big[\displaystyle (h_\theta (x) - y)^T (h_\theta (x) - y) \Big]$$ The update of the values of $\theta$ using gradient descent is defined by: $$ \theta = \theta - \gamma \dfrac{dJ(\theta)}{d\theta} $$ To calculate the gradient $\dfrac{dJ(\theta)}{d\theta}$ we use the chain rule: $$ \dfrac{dJ(\theta)}{d\theta} = \dfrac{dJ(\theta)}{dh_\theta (x)}\dfrac{dh_\theta (x)}{dZ}\dfrac{dZ}{d\theta} $$ The derivative of the MSE is $$\dfrac{dJ(\theta)}{dh_\theta (x)} = \dfrac{1}{m}(h_\theta (x) - y)$$ Considering that the derivative of the sigmoid function is $$ \dfrac{dh_\theta (x)}{dZ} = h_\theta (x)\odot(1-h_\theta (x)) $$ and that $\dfrac{dZ}{d\theta} = x$, the final update rule is: $$ \theta = \theta - \dfrac{\gamma}{m} x^T \cdot \Big[(h_\theta (x) - y) \odot h_\theta (x)\odot(1-h_\theta (x))\Big] $$
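A minimal NumPy sketch of this update rule (my own illustration; the toy data, learning rate $\gamma$, and iteration count are arbitrary choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, gamma=0.5, iters=20000):
    # X has a leading column of ones, so theta = (a, b)
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = sigmoid(X @ theta)
        theta -= (gamma / m) * (X.T @ ((h - y) * h * (1 - h)))
    return theta

# toy data generated from p = sigmoid(-1 + 2x)
rng = np.random.default_rng(0)
xs = rng.normal(size=500)
y = (rng.random(500) < sigmoid(-1.0 + 2.0 * xs)).astype(float)
X = np.column_stack([np.ones_like(xs), xs])
print(fit(X, y))  # estimates of (a, b); should land roughly near (-1, 2)
```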
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Extensions of $\mathbb{Q}((T))$ and $\mathbb{F}_p((T))$ Okay, I'm having some trouble finding good references for this, so here goes: Is every finite extension of $\mathbb{Q}((T))$ isomorphic to $K((T^{1/e}))$ where $K$ is finite over $\mathbb{Q}$, and $e$ is an integer? Is every finite extension of $\mathbb{F}_p((T))$ isomorphic to $\mathbb{F}_q((T^{1/e}))$, $q = p^r$? Examples, proofs, and references would be appreciated. EDIT: Julian pointed out that obviously the answer is no. So my question is now: Is it true that for every finite extension $L$ of $\mathbb{Q}((T))$ or $\mathbb{F}_p((T))$, there exists an extension of $L$ which is of the form I described above?
Answer to edited question: Yes, and it works for an arbitrary field $k$, either when $k$ has characteristic $0$ or when the ramification index is not divisible by the characteristic of $k$, assuming that your notation $k((T))$ refers to the field of formal Laurent series with coefficients in $k$. This is Theorem 6 of Chapter 4, Section 1 (page 264) in Borevich-Shafarevich's Number Theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If a function of two variables has a unique critical point, which is a local maximum, is it a global maximum? $f(x,y)$ has partial derivatives in all $\mathbb R^2$ and a unique critical point at $(x_0,y_0)$ (local maximum). Is it a global maximum? I know that in compact sets, it isn't enough to say that if a point is the only maximum inside the set, then it's a global maximum, because in the frontier of the set it could happen that the function has a maximum greater than the inside one. But for the entire $\mathbb R^2$ there is no frontier, therefore can I admit that this unique point is a point of global maximum?
Loose description of the geometry: imagine a flat plane and then you put a lone hill with a peak on it. Now tilt the plane a little. Now you still have one peak, but hopefully you can also see that you have introduced a saddle point. Imagine sliding the saddle point location off to infinity to make it effectively no longer there. Basic description of the construction I saw somewhere once: you can have something whose cross sections in one direction are cubic, but whose cross-sections in the other direction is a sum of exponentials in just the right way for one of the cubic cross-sectional peaks to meet the peak of one of the exponential sum cross-sections. This can happen with no other critical points, and of course the cubic component alone is unbounded in either direction (so there is no global max or min). The direction that the exponentials taper off in aligns with the idea of the saddle point being slid away to infinity. Consider $f(x,y)=x^3+axe^{my}+be^{ny}$ for some constants $a,b,m,n$. We have: $$\begin{align} \frac{\partial}{\partial x}f(x,y)&=3x^2+ae^{my}&\frac{\partial}{\partial y}f(x,y)&=amxe^{my}+bne^{ny} \end{align}$$ To find a critical point, the $\frac{\partial}{\partial x}$ shows us $a$ must be negative. We are just looking for one example, so we pick some numbers for the constants that make the algebra that follows simpler. With $a=-3$, $b=1$, $m=1$, $n=3$: $$\begin{align} \frac{\partial}{\partial x}f(x,y)&=3x^2-3e^{y}&\frac{\partial}{\partial y}f(x,y)&=-3xe^{y}+3e^{3y}\\ &=3(x^2-e^y)&&=3e^y(e^{2y}-x) \end{align}$$ So at a critical point, $x=e^{2y}$ according to the latter relation. Then using the $\frac{\partial}{\partial x}$ relation: $$e^{4y}-e^y=0\implies e^{3y}=1\implies y=0$$ which in turn says that $x=1$. So the only critical point is at $(1,0)$. Is it a local min, max, or neither? $$\begin{align} f_{xx}(1,0)&=6&f_{xy}(1,0)&=-3&f_{yy}=6 \end{align}$$ So the Hessian determinant is $6^2-(-3)^2=27>0$, so this critical point is either a local max or a local min. OK, it turns out to be a local min, since for example $f_{xx}(1,0)$ is positive. But $f$ has no global min, since you could fix $y$ and let $x\to-\infty$ and get unbounded large negative output. Take this $f$ and negate it to get the direct counterexample. Here is an image of $-f$ from GeoGebra. Note how there would be a saddle point, except it has been slid off the map (to infinity) to the left.
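A quick symbolic double-check of the critical point and the Hessian (a minimal SymPy sketch of mine):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 - 3 * x * sp.exp(y) + sp.exp(3 * y)
grad = [sp.diff(f, v) for v in (x, y)]
print([g.subs({x: 1, y: 0}) for g in grad])      # [0, 0]: (1, 0) is critical
print(sp.hessian(f, (x, y)).subs({x: 1, y: 0}))  # Matrix([[6, -3], [-3, 6]])
```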
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Proving an intuitively true statement. Let $X \subseteq Y$ and $X\neq Y$ Also let $f: Y → X$ define a bijection. Prove that $Y$ is infinite. Here's what I have as a proof, but I'm not really sure if it's enough. Let $X \subseteq Y$ and $X\neq Y$, this must mean that $|X| < |Y|$ which also implies there's an injection. there's a bijection from $Y$ to $X$ implies |X| = |Y|. Suppose that Y is infinite. Then it can either be countable infinite, or uncountable infinite. Let Y be countable infinite. Y = {$y_1$, $y_2$,...,$y_n$} and let X = {$x_1$,$x_2$,...,$x_m$} if $ m \neq n$ then there cannot be a surjection, which contradicts the fact that f defines a bijection. Now suppose $m = n$ and Y is countable infinite. This fact holds as long as X is countable infinite. Now suppose that Y is uncountable infinite. Then there can be a bijection as long as X is uncountable infinite. Another strategy? Could this proof be as simple as showing a contradiction? Assumptions I'll use: $f: [m] \rightarrow [n]$ defines a bijection then $m=n$ Suppose that Y is finite. if $f: Y \rightarrow X$ is a bijection, we know that |X| = |Y|, but |X| < |Y|, which implies that $m \neq n$ which is a contradiction.
Using fundamental theorems of cardinality, if $Y$ happens to be finite then so is $X$. Then, from the inclusion, and using the fact that $Z_1\subseteq Z_2$ and $|Z_1|=|Z_2|$ imply $Z_1=Z_2$ if $Z_1$ and $Z_2$ are finite, we see that $|X|<|Y|$; on the other hand, using the bijection $f$ we get $|X|=|Y|$, which is a contradiction. I would like to highlight another proof which is better, IMHO, because we are not reasoning ad absurdum. Clearly $f(Y)\subseteq Y$ and $f(Y)\neq Y$. Now both statements are necessarily preserved by applying $f$ (since $f$ is a bijection), so that: $$f^2(Y)\subseteq f(Y)\text{ and } f^2(Y)\neq f(Y) $$ and so on... so that for all $n\geq 0$: $$f^{n+1}(Y)\subseteq f^n(Y)\text{ and } f^{n+1}(Y)\neq f^n(Y) $$ This means that you can find $x_n\in f^{n}(Y)-f^{n+1}(Y)$ for each $n$. Clearly if $n<m$ then $x_m\in f^m(Y)\subseteq f^{n+1}(Y)$ while $x_n\notin f^{n+1}(Y)$, so $x_n\neq x_m$. Finally we have constructed an infinite sequence $(x_n)$ whose terms are pairwise distinct. Of course I use here the countable axiom of choice; if you don't want to use this, just construct $(x_1,...,x_N)$ for each $N>0$ (you just make a finite number of choices) and conclude that $N\leq |Y|$ for each $N>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} $ never zero? Here the $n_i$ are integers, and not all of them are zero. It is natural to conjecture that a similar statement holds for even more prime numbers. Namely, $$ n_1 \sqrt{2} +n_2 \sqrt{3} + n_3 \sqrt{5} + n_4 \sqrt{7} + n_5 \sqrt{11} +n_6 \sqrt{13} $$ is never zero either. I am asking because this is used in some numerical algorithm in physics.
If $p_1,\dots,p_n$ are prime numbers with $p_i\ne p_j$ for $i\ne j$, then the field extension $\mathbb Q\subset\mathbb Q(\sqrt{p_1},\dots,\sqrt{p_n})$ has degree $2^n$ and a basis is given by the set $$\{1,\sqrt{p_{i_1}\cdots p_{i_k}}:1\le i_1<\cdots<i_k\le n, 1\le k\le n\}.$$ See Proving that $\left(\mathbb Q[\sqrt p_1,\dots,\sqrt p_n]:\mathbb Q\right)=2^n$ for distinct primes $p_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Three lines are concurrent (or parallel) $\iff$ the determinant of their coordinates vanishes. I'm trying to prove the concurrency condition for three lines lying on a plane. This condition says that: Let \begin{cases} ax + by + cz=0 \\ a'x - b'y + c'z=0 \\ a''x + b''y + c''z=0 \end{cases} be three lines (barycentric coordinates), with the coordinates of the lines not all equal. Prove that they are concurrent or parallel iff $$ \left| \begin{array}{ccc} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{array} \right| =0. $$ My try: $$\left| \begin{array}{ccc} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{array} \right| =0 \iff \exists P=(x,y,z)\neq(0,0,0),$$ and we've found a common $P$ in those three lines, different from $(0,0,0)$ and with $(x,y)\neq (0,0)$, because otherwise it would follow that $P=(0,0,0)$. So the lines are concurrent. Is it correct? How do I see that they're parallel?
I have also tackled this problem recently. It looks like an easy, basic problem, and yet I haven't been able to convince myself of this proposition. What I did: if the determinant $|A|$ is $0$, then the rank of the coefficient matrix is $\leq 2$. Thus, the null space of the given system of equations $Ax = 0$ has dimension at least $1$. So, if the dimension of the null space is $1$, any solution to the system looks like $$ x'(t) = (tA, B, -1)^{\mathsf T}, \quad t \in \mathbb{R}, $$ for some fixed $A, B \in \mathbb{R}$. Inserting $x'$ into $Ax = 0$, we have $$\begin{align} taA + Bb - c & = 0, \\ ta'A + Bb' - c' & = 0, \\ ta''A + Bb'' - c'' & = 0. \end{align}$$ Solving for $B$ in each relation, we have $$\begin{align} B & = - taA/b + c/b, \\ B & = - ta'A/b' + c'/b', \\ B & = - ta''A/b'' + c''/b'' \end{align}$$ for all real $t$, and this is possible only if the coefficients are equal to one another; that is, each relation is of the form $$ y = a x + c_{i}, \quad i = 1, 2, 3. $$ If solutions to $Ax = 0$ look like $$ x'(t) = (tA, tB, -1)^{\mathsf T}, \quad t \in \mathbb{R}, $$ we can still equate $$ - taA/b + c/b = - ta'A/b' + c'/b' $$ and use the same argument concerning coefficients of linear equations. But I do not think this is correct. Just a suggestion, so that anyone else might correct me.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Show that maximal abelian normal subgroup of $p$-group contains the commutator subgroup Let $P$ be a $p$-group and $A$ a maximal abelian normal subgroup of $P$. If $|A : C_A(x)| \le p$ for all $x \in P$, then $P' \le A$. As far as I know I have no idea how to bring the condition about the index of $C_A(x)$ in $A$ into an argument showing that $P' \le A$, or equivalently that $P/A$ is abelian. The only "idea" what might be helpful: As $A \unlhd P$ the group $P$ acts on $A$ by conjugation, hence it acts on the cosets of $C_A(x)$ in $A$. Any hints?
This seems quite challenging! For $x,y \in P$, it is enough to prove that $[x,y]$ centralizes $A$, or equivalently that the automorphisms of $A$ induced by conjugation by $xy$ and $yx$ are the same, because then the fact that $A$ is maximal abelian implies $C_P(A)=A$, so $[x,y] \in A$. This is easy if $C_A(x)=C_A(y)$, or if either $x$ or $y$ centralizes $A$. So assume not and let $N = C_A(x) \cap C_A(y)$. Then $A/N$ is elementary abelian of order $p^2$. Let $C_A(x) = \langle a,N \rangle$ and $C_A(y) = \langle b,N \rangle$. If both $x$ and $y$ centralize $A/N$, then again it is easy, so assume not. Since $x$ and $y$ generate a $p$-group, and their centralizers project onto different subgroups of $A/N$, they cannot both induce nontrivial actions on $A/N$. So let's assume that $x$ acts nontrivially and $y$ acts trivially on $A/N$. Then we may assume that $a$ is chosen such that $b^x = ab$. Now $y$ does not centralize $a$, so $a^y=ac$ for some $1 \ne c \in N$. But now we find that $xy$ does not centralize any element of $A \setminus N$, contradicting the assumption that $|A:C_A(xy)| \le p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1496848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving using mean value theorem Let $f$ be a function continuous on $[0, 1]$ and twice differentiable on $(0, 1)$. a) Suppose that $f(0) = f(1) = 0$ and $f(c) > 0$ for some $c \in (0,1)$. Prove that there exists $x_0 \in (0,1)$ such that $f''(x_0) < 0$. b) Suppose that $$\int_{0}^{1}f(x)\,\mathrm dx=f(0) = f(1) = 0.$$ Prove that there exists a number $x_0 \in (0,1)$ such that $f''(x_0) = 0$. How do I solve the above two questions? I have tried using the Mean Value Theorem, but it gives me zero when I differentiate for the first time. I'm not sure how I can get the second derivative to be less than zero. Any help is much appreciated! Thanks!
We solve the first problem only. The function $f$ reaches a (positive) maximum at some point $p$ strictly between $0$ and $1$. At any such $p$, we have $f'(p)=0$. Since $f(1)=0$, by the Mean Value Theorem there is a $q$ strictly between $p$ and $1$ such that $f'(q)\lt 0$. Thus by the Mean Value Theorem there is a point $r$ strictly between $p$ and $q$ such that $f''(r)\lt 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Mutually exclusive countable subsets of a countable set This is part of a bigger problem I'm trying to prove, but my argument relies of the validity of the following idea. Note that when I say countable, I don't mean finite -- I mean countable infinity. Consider the set of natural numbers, $\mathbb{N}$. Now take a countable subset of them, call it $\mathsf{S}_1$. Then take a countable subset of $\mathbb{N}\setminus{\mathsf{S}_1}$ and call it $\mathsf{S}_2$. Then take a countable subset of $\mathbb{N}\setminus\{{\mathsf{S}_1}\cup{\mathsf{S}_2}\}$ and call it $\mathsf{S}_3$. Continue in this manner to construct $\mathsf{S}_i$'s such that ${\mathsf{S}_i}\cap{\mathsf{S}_j}=\emptyset$ for $i≠j$. My question is this: is it necessarily the case that eventually there will be finitely many $\mathsf{S}_i$'s with $\mathbb{N}\setminus\bigcup_{i}{\mathsf{S}_i}={P}$ where $P$ is either empty or finite? My guess is yes, but I'm not how to prove it or how to find a counterexample.
Edit: @ErickWong is right (see his comment below). This doesn't answer the question. I did cover myself with the caveat "If I understand ...". I may leave this nonanswer up for a while since others may learn from it. If I understand your question correctly here's another way to show you can have an infinite set left over. Just carry out your construction on the countably infinite set of even numbers. All the odds will remain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 0 }
Prove $x^{4}-x+1=0$ has no solution I would like to prove that the following equation has no solution in $\mathbb{R}$: $$x^{4}-x+1=0$$ My question: could we use the Intermediate Value Theorem to prove it? Otherwise, I'm interested in more ways of proving that it has no solution in $\mathbb{R}$, other than the following, which I already know: $x=x^4+1\geq 1$ and $x^4-x+1=x^2(x^2-1)+x^2-x+1>0$.
If $x = x^{4}+1$, then certainly $x \geq 1$ as $x^{4} \geq 0$ for all real $x$. But then $x^{4} = x^{3}x \geq x$ and $x^{4}+1 \geq x+1 > x$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Determining if $973$ is prime Without a calculator, determine if $973$ is prime or not. I was given this question to solve. I know $973$ is not prime. I was told a strategy for deciding whether a number is prime or not is to test all the numbers up to the square root of $973$. So I would have to test up to $32$, and I find that $1,7,139$ and $973$ are factors of this number. Basically, what I want to find out is: are there any other strategies for this question, so that I wouldn't have to check whether any of the numbers from $1$ to $32$ is a factor?
Another way: $973 = 910+63$. This gives $973 = 7\cdot(130+9) = 7 \cdot 139$.
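Mechanizing the square-root strategy from the question (a minimal sketch, plain trial division):

```python
def smallest_factor(n):
    d = 2
    while d * d <= n:      # only divisors up to sqrt(n) need testing
        if n % d == 0:
            return d
        d += 1
    return n               # no divisor found: n is prime

print(smallest_factor(973))  # 7, so 973 = 7 * 139 is composite
```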
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
Proving convergence by the comparison test I need to prove the convergence of the following series: $$\sum_{n=1}^{\infty} \frac{1}{(3n-2)(3n+1)}$$ I suppose I need to find two series such that $$\sum_{n=1}^{\infty}(k_1a_n + k_2b_n) = k_1\sum_{n=1}^{\infty}a_n + k_2\sum_{n=1}^{\infty}b_n$$ I thought that the series $a_n$ and $b_n$ could come from the partial fractions, so $$\sum_{n=1}^{\infty} \frac{1}{(3n-2)(3n+1)} = \frac{1}{3}\sum_{n=1}^{\infty} \frac{1}{(3n-2)} - \frac{1}{3}\sum_{n=1}^{\infty} \frac{1}{(3n+1)}$$ I know, by the comparison test, that if $a_n$ and $b_n$ converge, the main series will too. But how can I demonstrate that the series $a_n$ and $b_n$ converge?
One may just observe a telescoping sum here $$ \sum_{n=1}^N \frac{1}{(3n-2)(3n+1)}=\frac13\sum_{n=1}^N \left(\frac{1}{3(n-1)+1}-\frac{1}{3n+1}\right)=\frac13-\frac{1}{3(3N+1)} $$ giving the convergence and the sum of your initial series.
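A quick numeric sanity check of the partial-sum formula (a minimal Python sketch):

```python
N = 100000
s = sum(1 / ((3 * n - 2) * (3 * n + 1)) for n in range(1, N + 1))
print(s, 1 / 3 - 1 / (3 * (3 * N + 1)))  # the two values agree
```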
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $v$ is an eigenvector of $e^{t A}$ for all $t \geq 0$, is it also an eigenvector of $A$? I'm currently writing a proof and I want to use that, if $v$ is an eigenvector of $e^{tA}$ for all $t \geq 0$, where $A$ is some generating matrix, then $v$ is an eigenvector of $A$ itself. However, I found neither a proof nor a counterexample in available sources. The case of diagonalisable matrices is self-evident, as a popular computation approach utilises this fact. However, it is still unclear to me whether the same is true for non-diagonalisable matrices, at least in the case of basic (non-generalised) eigenvectors. Recalling the Jordan structure, I think that all kinds of eigenvectors are preserved. But is there a rigorous proof of this conjecture to make me totally confident? Can this fact be assumed as obvious?
If $Av = \lambda v$, it's easy to see $\exp(tA)v = \exp(t\lambda) v$. Conversely, if $v$ is an eigenvector of $\exp(tA)$ for all non-zero $t$, then $v$ is an eigenvector of $\frac{1}{t}(\exp(tA) - I) = A + O(t)$ for all $t \neq 0$, and by continuity $v$ is an eigenvector of $A$. Erratum: Though the question stipulates "...for all $t \geq 0$" (so the preceding argument suits the OP's needs), my initial wording, "if $v$ is an eigenvector of $\exp(tA)$ for some (hence all) non-zero $t$...", is generally incorrect. For example, if $$ A = \left[\begin{array}{@{}rr@{}}0 & -1 \\ 1 & 0 \\ \end{array}\right], $$ then $\exp(2\pi A) = I$, while $A$ has no real eigenvectors. (As noted in the question, the "some (hence all)" assertion does hold under additional hypotheses, such as if all eigenvalues of $A$ are real.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Understanding modulos and polynomials when dividing two polynomials I understand the idea of modulo with integers but I am having an incredibly hard time wrapping my head around the idea of modulus with polynomials. I am taking a course in which I need to divide polynomials in terms of a specific modulus. I am able to divide the polynomials but don't understand how the mod applies. So for example, $f(x)$=$$4x^4+2x^3+6x^2+4x+5$$ and $g(x)$=$$3x^2+2$$ and I am asked to divide f(x) by g(x) in mod 7.
All you need is the inverse of the leading coefficient of $g(x)$ modulo $7$: since $3\cdot 5\equiv 1\pmod 7$, the operations are the same as in $\mathbf R$, except that you reduce every coefficient modulo $7$ at each step. Here is an example, using the polynomials from the question: long division of $f(x)=4x^4+2x^3+6x^2+4x+5$ by $g(x)=3x^2+2$ over $\mathbb{Z}/7\mathbb{Z}$ gives $$4x^4+2x^3+6x^2+4x+5 \equiv (3x^2+2)(6x^2+3x+5)+(5x+2) \pmod 7,$$ so the quotient is $6x^2+3x+5$ and the remainder is $5x+2$.
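The same division can be reproduced with SymPy (a sketch of mine; note that `modulus=7` prints coefficients in the symmetric range $-3,\ldots,3$, so $-1$ stands for $6$ and $-2$ for $5$):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Poly(4 * x**4 + 2 * x**3 + 6 * x**2 + 4 * x + 5, x, modulus=7)
g = sp.Poly(3 * x**2 + 2, x, modulus=7)
q, r = sp.div(f, g)
print(q)  # -x**2 + 3*x - 2 mod 7, i.e. 6x^2 + 3x + 5
print(r)  # -2*x + 2 mod 7, i.e. 5x + 2
```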
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $p=4k+1$ then $p$ divides $n^2+1$ I am stuck in one step in the proof that if $p$ is congruent to $1 \bmod 4$, then $p\mid (n^2+1)$ for some $n$. The proof uses Wilson's theorem, $(4k)!\equiv -1 \pmod p$. The part I am stuck is where it is claimed that $(4k)!\equiv (2k)!^2 \bmod p$. Why is this so?
Group the factors of $(4k)!$ into the pairs $\{1,p-1\},\{2,p-2\},\dots,\{2k,2k+1\}$. Since $p-j\equiv -j\pmod p$, the product of the first entries is $(2k)!$, while the product of the second entries is congruent to $(-1)^{2k}(2k)!=(2k)!$; the signs cancel because there is an even number of pairs. Hence $(4k)!\equiv\big((2k)!\big)^2\pmod p$.
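An empirical spot check of $(4k)!\equiv\big((2k)!\big)^2\pmod p$ for a few primes $p=4k+1$ (my own sketch):

```python
from math import factorial

for p in [5, 13, 17, 29]:  # primes of the form 4k + 1
    k = (p - 1) // 4
    print(p, factorial(4 * k) % p, factorial(2 * k) ** 2 % p)  # last two agree
```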
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Help me find my mistake when finding the exact value of the infinite sum $\sum_{n=0}^{\infty}\frac{e^{n-2}}{5^{n-1}}$ Finding the exact value of the infinite sum: $$\sum_{n=0}^{\infty}\frac{e^{n-2}}{5^{n-1}}$$ My Approach: First term (a): $$\frac{e^{-2}}{5^{-1}}=\frac{5}{e^2}$$ Second term: $$\frac{e^{-1}}{5^{0}}=\frac{1}{e}$$ The common ratio (r): $$r=\frac{e}{5}$$ Applying the geometric sum formula: $$\frac{\frac{5}{e^2}}{1-\frac{e}{5}}=\frac{\frac{5}{e^2}}{\frac{5-e}{5}}$$ $$=\frac{25}{e^2(5-e)}$$ Unfortunately this is wrong; where have I gone wrong in this?
Your sum is correct. One may recall a standard result concerning geometric series $$ \sum_{n=0}^\infty r^n=\frac{1}{1-r},\qquad |r|<1. $$ Applying it with $r=\dfrac{e}5\,\,\left(\left|\dfrac{e}5\right|<1\right)$, gives $$ \sum_{n=0}^{\infty}\frac{e^{n-2}}{5^{n-1}}=\frac{e^{-2}}{5^{-1}}\sum_{n=0}^{\infty}\frac{e^{n}}{5^{n}}=\frac{e^{-2}}{5^{-1}}\frac1{1-e/5}=\frac{25}{e^2(5-e)} $$ as you have found.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1497953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Guy with $7$ friends Some guy has $7$ friends $(A,B,\ldots,G)$. He's making dinner for $3$ of them every day for one week. In how many ways can he invite $3$ of them, with the condition that no couple won't be more than once on dinner? (When we take $A,B,C$, we cannot later take $A$ with $B$ or $C$, or $B$ with $C$.) My idea: We have $7$ subsets with $3$ elements each, so $7!$; but the subset $\{A,B,C\}=\{B,A,C\}$ etc., so we have to divide by $3!$. I don't think that's correct thinking, though. Should I also divide by couples? ($2!^3$) Any hints?
Either I am missing something or the book answer is wrong. I take it that when you write "no couple won't be more than once on dinner", you actually mean "no couple will ....." as has been amply clarified in your examples. Now there can only be $\dbinom72 = 21$ distinct couples, and each day $3$ such couples get eliminated, e.g. if $ABC$ are invited, $AB, BC$ and $AC$ can't be invited again. Thus there are only $7$ trios to be invited, and if the order in which they are invited is to be considered, there are $7!$ ways of inviting them. Note: there is no question of dividing by $3!$, because we considered combos of $3$; and, for the life of me, I can't understand the multiplication by $2$ and division by $2^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Bounded linear functional is necessarily continuous proof verification I want to prove that a bounded linear functional $f$ must be continuous. I have defined: $f$ is bounded means that $\exists c> 0, |f(x)|\leq c\|x\|, \quad \forall x\in X$, and continuous means that $x_n\to x \implies f(x_n)\to f(x)$. Proof: Let $f$ be bounded. Then $\exists c, |f(x)| \leq c\|x\|$. Then \begin{align} &\lim_{n\to\infty} |f(x_n-x)|\leq \lim_{n\to\infty} c\|x_n-x\|\\ \implies& \lim_{n\to\infty} |f(x_n)-f(x)|\leq c\|x-x\|=0\\ \implies& \lim_{n\to\infty} f(x_n)=f(x) \end{align} Is that all there is to it? I used linearity in the second line, but what if we removed the condition that $f$ is linear? Is a non-linear bounded functional $f$ necessarily continuous?
(1) Yes, that's all. (2) Without linearity, the statement becomes wrong, even in the $X = \mathbf R$ case. Consider for example, $f \colon \def\R{\mathbf R}\R\to \R$ given by $$ f(x) = \begin{cases} x & x \in [-1,1] \\ 0 & x \not\in [-1,1] \end{cases} $$ Then $f$ is bounded, as $|f(x)| \le |x|$ for all $x \in \R$, but not continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Formula for Nested Radicals I know that: $$\sqrt{2+\sqrt{2+\sqrt{2+...\sqrt{2}\; (upto\; n\; times)}}}=2\cos(2^{-n-1}\:\pi)$$ I was wondering whether such a formula exists for $$\sqrt{3+\sqrt{3+\sqrt{3+...\sqrt{3}\; (upto\; n\; times)}}}$$ or in general for, $$\sqrt{k+\sqrt{k+\sqrt{k+...\sqrt{k}\; (upto\; n\; times)}}}$$ I've tried scaling the formula for 2 to get an approximate result. For example for the 6 case: $$\sqrt{6+\sqrt{6+\sqrt{6+...\sqrt{6}\; (upto\; n\; times)}}}\approx\left(\frac{2\cos(2^{-n-1})-\sqrt2}{2-\sqrt2}\right)(3-\sqrt6)+\sqrt6$$
Let $$f(n,k)=\underbrace{\sqrt{k+\sqrt{k+\sqrt{\ldots+\sqrt k}}}}_n $$ We know that $\cos\frac x2=\pm\frac1{\sqrt2}\sqrt{1+\cos x}$. Therefore if $f(n,k)=a\cos b$ then $$f(n+1,k)=\sqrt{k+f(n,k)}=\sqrt k\cdot\sqrt{1+\frac ak\cos b}. $$ This works out nicely with our half-angle formula only if $a=k$, and then it becomes $$ f(n+1,k)=\sqrt k\cdot\sqrt{1+\cos b}=\sqrt{2k}\,\cos\frac b2.$$ We'd like to have $\sqrt{2k}=a=k$ again, but that works only if $k=2$. In other words, there won't be formulas of the form $f(n,k)=a_k\cdot \cos 2^{-n}\alpha_k$ except for $k=2$.
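A numerical illustration of the dichotomy (my own sketch): for $k=2$ the truncated radical tracks $2\cos(2^{-n-1}\pi)$, while for $k=3$ it merely converges to the fixed point $\frac{1+\sqrt{13}}{2}$ of $x=\sqrt{3+x}$:

```python
import math

def nested(k, n):
    # sqrt(k + sqrt(k + ... + sqrt(k))), with n nested radicals
    v = 0.0
    for _ in range(n):
        v = math.sqrt(k + v)
    return v

print(nested(2, 10), 2 * math.cos(math.pi * 2.0**-11))  # agree to machine precision
print(nested(3, 10), (1 + math.sqrt(13)) / 2)           # close: fast convergence
```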
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Coin change problem Given a set of coins $S = \{w^0, w^1, w^2, ....., w^n\}$, for a given $w$, how to test whether an amount $X$ can be changed for i.e. find such subsets $S1, S2$ such that $$\sum_{c \in S1} c + X = \sum_{c \in S2} c$$ $S1, S2 \subseteq S$ and $S1 \cap S2 = \emptyset$ i.e. each coin can be used at most once. Both $n$ and $w$ can be as large as $1$ billion. I can only think of a brute force solution. For each coin, there are three cases either it is not used or it is on the left side of equation else it is on the right side of equation. Rather, I need an insightful mathematical concept to apply here. Here are a couple of points I want myself clarified about(so please, try to enlighten me on all these points :D): * *The solution from the site I obtained this problem uses representing $X$ in base $w$ and after that I don't know what is done. (How) *If $X \le w^k$, where $w^k$ is the smallest such coin, then all coins $w^{k + 2}, w^{k + 3}, ...w^{n}$ need not be considered. (How to prove this?). *Often I make assertions while solving problems that sound logical, but when I try to prove it mathematically, I get stuck. So, what should I do in such situation(I am in such situation with the proof of point 2)? Right now what I am thinking about point 2 is this "To prove assertion 2, I need to prove that $w^{k+2}$ is not used in all of the valid solutions which means, $w^{k+2}$ doesn't occur either on the left or right side of the equation shown at the top. Now, if I could somehow prove that using $w^{k+2}$ in left side or right side, I cannot arrive at a solution I would be complete with my proof. I will first put $w^{k+2}$ in the left (along with $X$) and see. I have $X + w^{k+2}$ on the left, I also know that $X \le w^k$. I can't work any further. Thank you.!!
The amounts of change that can be made are the numbers of the form $$\sum_{k=0}^n\epsilon_kw^k\;,\tag{1}$$ where each $\epsilon_k\in\{-1,0,1\}$, and if $\ell=\max\{k:\epsilon_k\ne 0\}$, then $\epsilon_\ell=1$. Equivalently they are the non-negative integers that can be expressed with at most $n+1$ digits, each of which is $-1,0$, or $1$, in a modified base $w$ notation that uses the digits $-1,0,1,2,\ldots,w-2$ instead of the usual $0,1,2,\ldots,w-1$. The usual algorithm for changing base, involving repeated division by $w$, still works, provided that one handles remainders of $w-1$ correctly. Perhaps the easiest way to explain is by an example. Let $w=4$. If I want to convert $27$ to base $w$ in the usual way, I use the following algorithm: Divide $27$ by $4$ to get a quotient of $6$ and a remainder of $3$. Replace $27$ by the quotient $6$ and repeat; you get a quotient of $1$ and a remainder of $2$. Replace the $6$ by the new quotient of $1$ and repeat; you get a quotient of $0$ and a remainder of $1$. Read off the remainders in reverse order to get $123$; this is the ordinary base four representation of $27$. As a check, $1\cdot4^2+2\cdot4+3=16+8+3=27$. To convert to the modified base four notation, you have to replace remainders of $3$ by remainders of $-1$, which of course requires increasing the quotient by $1$. This time the steps are: $$\begin{align*} 27&=4\cdot 6+3=4\cdot 7-1\\ 7&=4\cdot 1+3=4\cdot 2-1\\ 2&=4\cdot 0+2\;, \end{align*}$$ so $27=2\cdot 4^2-1\cdot 4-1$. Using $\bar1$ to represent $-1$, we can write this $2\bar1\bar1$. Since this modified base four representation of $27$ uses the digit $2$, $27$ is not an amount of change that can be made when $w=4$. $61$, however, can be made, provided that $n\ge 3$: $$\begin{align*} 61&=4\cdot 15+1\\ 15&=4\cdot 3+3=4\cdot 4-1\\ 4&=4\cdot 1+0\\ 1&=4\cdot 0+1\;, \end{align*}$$ so $61$ is $10\bar11$, a representation that uses only $\bar 1,0$, and $1$. Indeed, $61=(4^3+4^0)-4^1$. Added: Note that the smallest amount $n$ that uses the coin $w^{k+2}$ is the number with the modified base $w$ representation $$1\underbrace{\bar1\bar1\ldots\bar1\bar1}_{k+2}\;,$$ so $$n=w^{k+2}-\sum_{i=0}^{k+1}w^i=w^{k+2}-\frac{w^{k+2}-1}{w-1}=\frac{w^{k+3}-2w^{k+2}+1}{w-1}\;.$$ If $w\ge 3$, then $w^{k+3}-2w^{k+2}+1>w^{k+2}(w-2)\ge w^{k+2}>w^{k+1}>w^k(w-1)$, and $n>w^k$. Thus, if $n\le w^k$, we won't need the $w^{k+2}$ coin (or of course any larger coin). In fact, the calculation shows that we won't even need the $w^{k+1}$ coin. If $w=2$ none of the foregoing analysis is necessary, since every integer from $0$ through $2^{n+1}-1$ has a binary representation of length at most $n+1$ and can be written as a sum of powers of $2$, the largest of which is at most $2^n$. In that case it's still true that if $n\le 2^k$, the $2^k$ coin is the largest that we might need: the binary representation of $n$ will certainly not use any power of $2$ greater than $2^k$, and it will use $2^k$ only if $n$ is actually equal to $2^k$.
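Here is a small implementation of the modified-base conversion described above (a sketch of mine; digits are returned most significant first):

```python
def balanced_digits(x, w):
    # digits of x >= 1 in base w over the digit set {-1, 0, 1, ..., w-2}
    digits = []
    while x:
        x, r = divmod(x, w)
        if r == w - 1:     # trade a remainder of w-1 for -1, bumping the quotient
            r = -1
            x += 1
        digits.append(r)
    return digits[::-1]

def changeable(x, w):
    # True iff x is a sum/difference of distinct powers of w
    return all(d in (-1, 0, 1) for d in balanced_digits(x, w))

print(balanced_digits(27, 4), changeable(27, 4))  # [2, -1, -1] False
print(balanced_digits(61, 4), changeable(61, 4))  # [1, 0, -1, 1] True
```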
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that if $ab=e$ then $ba=e$ Suppose that instead of the property $ab=ba=e$ a group G has the condition that for every element $a$ there exists an element $b$, such that $ab=e$. Prove that $ba=e$. Is the following a valid proof? Since $ab=e$ then under the condition of the group there exists an element $k$ such that $bk=e$ for some $k$ in the group. Now $bk=e$ so $abk=ae$ therefore $(ab)k=a$ and finally $ek=a$ and $k=a$. Is this a valid proof?
I assume that you mean: if for SOME $a,b$ (not every) $ab = e$, then $ba = e$. Your proof is valid, but you could write it much more easily without playing with $k$. $$ab = e \Rightarrow bab = b.$$ If you already know the cancellation law, then we are done. Otherwise you may continue by writing $baba = ba$, so that $(ba)^2 =ba$. Just note that the only idempotent in a group is $e$ (and this can be seen using only the assumed right inverses: if $x^2 = x$ and $xy = e$, then $x = x(xy) = x^2 y = xy = e$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that if $AB = 0$, then rank(A) + rank(B) ≤ p Let $A$ be an $m \times$ n matrix and $B$ be an $n \times p$ matrix. I understand that since $AB=0$, the column space of $B$ is contained within the nullspace of $A$. Does this mean that $\operatorname{rank}(B) \leq \operatorname{nullity}(A)$? How do I proceed to show that $\operatorname{rank}(A) + \operatorname{rank}(B) \leq p$ ?
Yes, you may indeed deduce that the rank of $B$ is less than or equal to the nullity of $A$. From there, simply apply the rank-nullity theorem (AKA dimension theorem). Counterexample to question as stated: $$ A = \pmatrix{0&1&0\\0&0&1\\0&0&0} ,\quad B = \pmatrix{1\\0\\0} $$ $B$ is $3 \times 1$ and $AB = 0$, but $\operatorname{rank}(A) + \operatorname{rank}(B) = 3 > 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $a \equiv b \mod{p^j} \iff |a-b|_{p}\leq p^{-j}$ Prove that $a \equiv b \mod{p^j} \iff |a-b|_{p}\leq p^{-j}$ Typically, when dealing with a congruence I go to the divisibility statement, i.e. $$a\equiv b\mod{p^j}\Rightarrow p^j\mid a-b \;\;\;(\star)$$ Moreover, I know that \begin{align} |a-b|_{p}&=p^{-v_p(a-b)}\\ &\leq\max{(p^{-v_p(a)},p^{-v_p(b)})}\;\;\; (\star\star) \end{align} However, I am having trouble with the key part that will connect $(\star)$ and $(\star\star)$. I am hoping someone can help me fill in the missing parts. Thank you.
Instead of thinking of $a-b$ as a difference of two numbers, think of it as a single $p$-adic number. Then $a-b \equiv 0 \pmod {p^j}$ is exactly the statement that $p^j \mid (a-b)$, which is exactly the statement that the $p$-adic valuation of $a-b$ is at least $j$, i.e. that $|a-b|_p \leq p^{-j}$ (and the valuation is more than $j$ if and only if additional factors of $p$ divide $a-b$, i.e. $a -b \equiv 0 \pmod{p^{j + \ell}}$ for some $\ell > 0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does $f(0)=0$ and $\left|f^\prime(x)\right|\leq\left|f(x)\right|$ imply $f(x)=0$? Let $f:\mathbb{R}\to\mathbb{R}$ be a differentiable function such that $f(0)=0$ and, for all real numbers $x$, $\left|f^\prime(x)\right|\leq\left|f(x)\right|$. Can $f$ be a function other than the constant zero function? I couldn't find any other function satisfying the property. The bound on $f^\prime(x)$ may mean that $f(x)$ cannot change too much, but does it mean that $f$ is constant? I thought for a while and found that $f^\prime(0)=0$, and by using the mean value theorem, if $x\neq0$ then there's a real number $y$ between $0$ and $x$ such that $\left|f(x)\right|=\left|xf^\prime(y)\right|\leq\left|xf(y)\right|$. Anything further?
Continuing the idea that I mentioned after proposing the question: Let's define $y_1:=y$. By the mean value theorem there's a real number $y_2$ between $0$ and $y_1$ such that $\left|f(y_1)\right|\leq\left|y_1f^\prime(y_2)\right|\leq\left|y_1f(y_2)\right|$, so $\left|f(x)\right|\leq\left|xy_1f(y_2)\right|\leq\left|x^2f(y_2)\right|$. Continuing this way, we inductively conclude that for every positive integer $n$, there's a real number $y_n$ between $0$ and $x$ such that $\left|f(x)\right|\leq\left|x^nf(y_n)\right|$. So if $0<x<1$, using the fact that $f$ is bounded on a bounded interval, we take the limit of the right-hand side of the last inequality as $n$ tends to infinity and conclude that $f(x)=0$. Since $f$ is continuous, we have $f(x)=0$ for $0\leq x\leq1$. Now, if $m$ is a positive integer, the function $g(x)=f(x+m)$ has the property $\left|g^\prime(x)\right|\leq\left|g(x)\right|$. This lets us prove that $f(x)=0$ for $m\leq x\leq m+1$ inductively. The function $h(x)=f(-m+1-x)$ can be treated in the same manner, and that allows us to prove $f(x)=0$ for every real number $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1498898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 2 }
Finding region for Change of Variables and Double integral problem I'm running into some trouble on a problem in Vector Calc by Marsden and Tromba. I don't think I am correctly finding the region for my change of variables, and the book doesn't have a similar example. Question: Let $D$ be the region $0 \leq y \leq x$ and $0 \leq x \leq 1$. Evaluate $$\iint_D (x + y) \,dx\,dy$$ by making the change of variables $x = u + v$, $y = u - v$. Attempts: $T(u,v) = A\left( \begin{array}{c} u \\ v \\\end{array} \right) = \left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \\\end{array} \right)\left( \begin{array}{c} u \\ v \\\end{array} \right)$ Therefore $\det A = -2$. From this we can find that $\left( \begin{array}{cc} 1/2 & 1/2 \\ 1/2 & -1/2 \\\end{array} \right)\left( \begin{array}{c} x \\ y \\\end{array} \right) = \left( \begin{array}{c} u \\ v \\\end{array} \right) $. From here I am unsure how to proceed. By just solving the double integral in terms of $x,y$ I know that I should be getting $1/2$, but none of the regions $D^{*}$ that I've found have gotten me this. (I run into the same issue on the next problem in the book.)
If $x=u+v$ and $y=u-v$, you get the Jacobian $$J=\begin{vmatrix}\dfrac{\partial x}{\partial u}&\dfrac{\partial y}{\partial u}\\[1ex]\dfrac{\partial x}{\partial v}&\dfrac{\partial y}{\partial v}\end{vmatrix}=\begin{vmatrix}1&1\\1&-1\end{vmatrix}=-2~~\implies~~|J|=2$$ So, $$\begin{align*}\iint_D(x+y)\,\mathrm{d}x\,\mathrm{d}y&=2\iint_{D^*}(u+v+u-v)\,\mathrm{d}u\,\mathrm{d}v\\[1ex] &=4\int_0^{1/2}\int_v^{1-v}u\,\mathrm{d}u\,\mathrm{d}v\\[1ex] &=4\int_0^{1/2}\frac{(1-v)^2-v^2}{2}\,\mathrm{d}v=2\int_0^{1/2}(1-2v)\,\mathrm{d}v=\frac12,\end{align*}$$ which is the expected value. It seems to me that you're having trouble coming up with the right limits for integrating over $D^*$. What I basically did was transform each side of the triangular region $D$ via the given change of variable to make a corresponding plot of the region in the $u$-$v$ plane. $D$ is bounded by three lines: $y=x$, $y=0$, and $x=1$. Since $x=u+v$ and $y=u-v$, this means the first line translates to $u-v=u+v$, i.e. $v=0$. Similarly, $y=0$ gives $u-v=0$, or $u=v$; and $x=1$ gives $u+v=1$. If you plot each of $v=0$, $v=u$, and $v=1-u$ in the $u$-$v$ plane, you'll make another triangle with its hypotenuse along the $u$ axis. (Essentially, the transformation rotates the triangle $45^\circ$ clock- or counter-clockwise, depending on the orientation of the $u$-$v$ plane's axes.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How should I read and interpret $A = \{\,n^2 + 2 \mid n \in \mathbb{Z} \text{ is an odd integer}\,\}$ As my question states, I need to interpret: $A = \{\,n^2 + 2 \mid n \in \mathbb{Z} \text{ is an odd integer}\,\}$ Does this mean that any odd integer $n$ will work, or does this mean that the output of $n^2 + 2$ must be an odd integer? For example, would these numbers be in the set: $\{1,3,5,7,\ldots\}$ or would $5$ not be in the set because $2^2 + 2 = 6$?
The condition on the right of the "$|$" is not about what is on the left of it. So for each $n\in \Bbb Z$ that is an odd integer (that is, for $n=\ldots,-5,-3,-1,1,3,5,\ldots$) we form the expression $n^2+2$ (that is, $\ldots, 27,11,3,3,11,27,\ldots$) and collect the results in the set $A$. In other words, $$A=\{3,11,27,51,83,\ldots\}.$$ (Incidentally, in the given situation $n^2+2$ is odd if and only if $n$ is odd; the distinction to be made might be clearer if we considered $\{\,n^2+1\mid n\in\Bbb Z\text{ is an odd integer}\,\}$) Some remarks on this notation: Admittedly, in a strict sense when introducing set theory axiomatically, one usually defines the following two set-builder notations (used in the Axiom Schema of Comprehension and the Axiom Schema of Replacement, respectively): $$\tag1 \{\,x\in S\mid \Phi(x)\,\}$$and$$\tag2\{\,f(x)\mid x\in S\,\}$$ (where $S$ is a set, $\Phi$ is a predicate, and $f$ is a function). These are determined by $$ a\in \{\,x\in S\mid \Phi(x)\,\}\iff a\in S\land \Phi(a)$$ and $$ a\in\{\,f(x)\mid x\in S\,\}\iff \exists x\in S\colon f(x)=a.$$ What you have is a mix of these, i.e., is of the form $$\tag3\{\,f(x)\mid x\in S\land \Phi(x)\,\}$$ and thereby possibly confusing. To follow the notational convention strictly, we need $x\in\text{(some set)}$ on the right if we want to use a function on the left, so should write something like $\{\,f(x)\mid x\in\{\,y\in S\mid \Phi(y)\,\}\,\}$, but in my opinion that would be less legible (and thereby more confusing): $$ A=\bigl\{\,n^2+2\bigm|n\in\{\,k\in\Bbb Z\mid k\text{ odd}\,\}\,\bigr\}$$ At the same time this way to rewrite $(3)$ in terms of $(1)$ and $(2)$ shows that introducing $(3)$ as a shorthand(?) can be justified. Also, since it is clear in your example that the function values are themselves in an already well-known set (namely $\Bbb Z$), we could get by with a notation that uses only comprehension, not replacement: $$ A=\{\,m\in\Bbb Z\mid\exists n\in\Bbb Z\colon(n\text{ odd}\land m=n^2+2)\,\}.$$ Again, this probably does not lead to more enlightenment about $A$ than the given notation, so is not necessarily really a notational improvement. Since oddness can be easily expressed ($n=2k+1$ with $k\in\Bbb Z$), I'd prefer $$ A=\{\,(2k+1)^2+2\mid k\in\Bbb Z\,\}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Union of set and interval I'm working on finding the boundaries of sets, I feel like I understand this. However, one problem asks for the boundary of $\{1,2,3\}\cup(2,4)$ and I'm unsure as to how to take the union of an interval and a set. Here's my thoughts: The union will include points $1$ and $2$ and then the interval from $2$ to $4$ but not including $4$. I'm not sure how I'd write that though. Possibly $\{1,[2,4)\}$?
Let's think formally about what a boundary is. If you have a set $A$, with closure $\bar{A}$ and interior $\mathring{A}$, then the boundary of $A$ is $\partial{A} = \bar{A} \setminus \mathring{A}$. Let $A = \{1,2,3\} \cup (2,4)$. What is the closure of this set? The easy way is to find the points whose neighborhoods always contain some points in $A$ (the closure is the set of these points, by definition). In this case, the closure is $\{1\} \cup [2,4]$. The interior is, by definition, the set of points who have at least one neighborhood contained totally in the set. So, the interior of this set is $(2,4)$. The boundary of the set is therefore $\bar{A} \setminus \mathring{A} = \{1,2,4\}$, just three points! You can always display a set in $\mathbb{R}$ as a union of points (singletons) with intervals. So, the original set would be $\{1\} \cup [2,4)$ as you essentially suggested.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Linear Algebra - Reflection in a hyperplane We have a matrix $A$: $$ A = \dfrac{1}{7} \cdot \begin{pmatrix} 5 & -4 & -2 & 2 \\ 4 & -1 & -4 & 4 \\ -2 & -4 & 5 & -2 \\ 2 & 4 & 2 & 5 \end{pmatrix} $$ The map $f_A : \mathbb{R}^4 \to \mathbb{R}^4 $ is a reflection in the hyperplane $H \subset \mathbb{R}^4 $. Determine $H$. I don't quite know how to find the hyperplane $H$. I think the root of my problem is that I don't really understand what is meant mathematically by "reflection in a hyperplane". Can anyone clear up what is asked and suggest a strategy to find the hyperplane?
A reflection about the hyperplane $H$ will fix vectors in $H$ and reflect other vectors of $\mathbb{R}^4$ across $H$. Since vectors in $H$ are fixed by $f_A$, they are eigenvectors of $f_A$ with eigenvalue 1. To find $H$, you must find the eigenspace of $f_A$ associated to the eigenvalue 1. An example in a smaller dimension is the matrix \begin{equation*} A = \left(\begin{matrix}-1 & 0 \\ 0 & 1\end{matrix}\right), \end{equation*} which represents reflection in $\mathbb{R}^2$ about the $y$ axis. The eigenvectors of this $A$ with eigenvalue 1 are vectors which lie on the $y$ axis.
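Concretely, $H$ is the null space of $A-I$; a minimal numeric sketch (assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[ 5, -4, -2,  2],
              [ 4, -1, -4,  4],
              [-2, -4,  5, -2],
              [ 2,  4,  2,  5]]) / 7.0
basis = null_space(A - np.eye(4))  # columns span the fixed space (eigenvalue 1)
print(basis)  # for a genuine hyperplane reflection there would be three columns
```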
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Some questions in measure theory. I have two questions. Suppose we have the statement: if a property $Q$ holds a.e., then $P$ holds a.e. Would the contrapositive be: if $\neg P$ a.e., then $\neg Q$ a.e.? Or would we remove the a.e. and say it holds everywhere? Also, can someone check if this argument is valid (I've seen the proof elsewhere, but this is the one I wrote)? If $f,g$ are continuous functions on $[a,b]$ with $f = g$ a.e., then in fact $f = g$. I basically said: let $x \in \{ x : f \neq g \}$ and, WLOG, let $f - g > 0$. Setting $h = f - g$, for small $\delta$ we get $h(x + \delta) - h(x) > 0$. Passing to the limit gives $0 > 0$ (by continuity), a contradiction.
You should insert quantifiers: "$P$ holds a.e." can be written as $$(\exists N \in \mathcal{A}) \: \mu(N)=0 \wedge \left [ (\forall x \in X \setminus N) \: P(x) \right ].$$ So the negation of that can be written as $$(\forall N \in \mathcal{A}) \: \mu(N) \neq 0 \vee \left [ (\exists x \in X \setminus N) \: \neg P(x) \right ].$$ A slightly nicer form: $$(\forall N \in \mathcal{A}) \: \mu(N) = 0 \Rightarrow \left [ (\exists x \in X \setminus N) \: \neg P(x) \right ].$$ The intuitive meaning is "the property fails on a set of positive measure".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solution of $\tan(nx)=k\tan(x)$ Could you kindly suggest ways to find a closed-form solution of the following type of equation: $$\tan(nx)=k\tan(x),$$ where $n$ and $k$ are some positive real numbers (excluding zero)? I can solve it using a series expansion, but that gives me only an approximate solution for $x$. Thanks in advance. PS: I am in an engineering stream, but I usually don't need to solve problems that seem very tough to me, mostly just a couple of additions/subtractions etc.
I think I have a trivial solution. If $n=k=1$ and $x=45^\circ$, then we have $\tan(1\cdot45^\circ)=1\cdot\tan(45^\circ)$, so $1=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Second order differential equation: Is this correct? Is the solution in this picture correct or not? I want to solve this equation, and that is my understanding.
Continuing from my comment: if there is no typo and the equation is really as you have written it, then the right approach would be to substitute $z=y'$. Then we can write $y''-6y'+13=0$ as $z'-6z+13=0$. Therefore, we solve the above equation as follows: $$z'= 6z-13$$ $$6z'= 6(6z-13)$$ $$\frac{d(6z-13)}{(6z-13)}=6 \,\ dt$$ $$\int \frac{d(6z-13)}{(6z-13)}=6 \int dt$$ $$\ln |6z-13| =6t+k$$ $$6z-13=e^{6t+k}=c_1e^{6t}$$ $$6\frac{dy}{dt}=c_1e^{6t}+13$$ $$6 \int dy=c_1e^{6t} \int dt+13 \int dt$$ $$y(t)=\frac{c_1e^{6t}}{36}+\frac{13t}{6}+c_2$$ This is the required solution. And if the equation you have written has a typo, i.e. there is a term of $y$ with coefficient $13$ and no constant term, then your solution is right.
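The result can be cross-checked symbolically (a SymPy sketch of mine; the integration constants are named differently, but the solution family is the same):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) - 6 * y(t).diff(t) + 13, 0)
print(sp.dsolve(ode, y(t)))  # of the form y(t) = C1 + C2*exp(6*t) + 13*t/6
```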
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Encrypt/Compress a 17 digit number to a smaller 9 (or fewer) digit number. I have an unsigned long integer (8 bytes) which is guaranteed to have 17 digits, and I want to store it in an int (4 bytes), which holds at most 9 digits. Basically I want to encrypt or compress the number so that I can retrieve it without any loss of information.
There are $9 \times 10^{16}$ different decimal integers with $17$ decimal digits. There are $2^{4\times 8} \lt 4.3 \times 10^9$ possible values of four bytes, a much smaller cardinality. So you cannot find a $1-1$ injection from the former set to the latter.
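The counting can be checked directly (a small Python sketch):

```python
seventeen_digit_values = 9 * 10**16   # integers from 10**16 to 10**17 - 1
four_byte_values = 2**(4 * 8)         # distinct 32-bit values

print(four_byte_values)                           # 4294967296, about 4.3e9
print(seventeen_digit_values > four_byte_values)  # True: pigeonhole forbids a lossless map
```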
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Write on my own my first mathematical induction proof. I am trying to understand how to write mathematical induction proofs. This is my first attempt. Prove that the sum of the cubes of the first $n$ positive integers is equal to the formula $$\frac{n^2 (n+1)^2}{4}.$$ I think this means that the sum of these cubes is equal to an odd number. However, let's go on proving... 1) I start by proving the base case $n=1$, and I show that the formula holds. 2) I assume that the formula holds for some number $k$ other than $1$, belonging to $\mathbb{N}$, and I write the same formula but with $k$ replacing $n$. 3) For the induction step, I show that the formula also holds for $n = k+1$. So, the left side of the equation should be: $$\sum^{k+1}_{i=1} i^3 = 1^3 + 2^3 + 3^3 + ... + (k+1)^3$$ I am wondering which one of these 2 (equivalent, I think) forms the right side should have: this one, with $k+1$ in place of the $n$ of the original formula: $\frac{(k+1)^2[(k+1)+1]^2}{4}$, or this one: $\frac{k^2(k+1)^2 }{4} + (k+1)^3$? I think that, in order for the proof to be convincing, we should write a statement equivalent to the original form of the formula, namely $$\sum^{n}_{i=1} i^3= \frac{n^2(n+1)^2}{4},$$ and perhaps we do it by showing that, after some algebraic steps, $\frac{k^2(k+1)^2 }{4} + (k+1)^3$ is equal to $\frac{(k+1)^2[(k+1)+1]^2}{4}$? Sorry for my soliloquy, but it helps me understand, and I would appreciate confirmation from you!
Your inductive assumption is such that the formula marked $\color{red}{\mathrm{red}}$ (several lines below) holds for $i=k$: $$\sum^{i=k}_{i=1} i^3=\frac{k^2 (k+1)^2}{4}$$ You need to prove that for $i=k+1$: $$\sum^{i=k+1}_{i=1} i^3=\color{blue}{\frac{(k+1)^2 (k+2)^2}{4}}$$ To do this you cannot use: $$\sum^{i=n}_{i=1} i^3=\color{red}{\frac{n^2 (n+1)^2}{4}}$$ as this is what you are trying to prove. So what you do instead is notice that: $$\sum^{i=k+1}_{i=1} i^3= \underbrace{\frac{k^2 (k+1)^2}{4}}_{\text{sum of k terms}} + \underbrace{(k+1)^3}_{\text{(k+1)th term}}$$ $$\sum^{i=k+1}_{i=1} i^3= (k+1)^2\left(\frac{1}{4}k^2+(k+1)\right)$$ $$\sum^{i=k+1}_{i=1} i^3= (k+1)^2\left(\frac{k^2+4k+4}{4}\right)$$ $$\sum^{i=k+1}_{i=1} i^3= (k+1)^2\left(\frac{(k+2)^2}{4}\right)=\color{blue}{\frac{(k+1)^2 (k+2)^2}{4}}$$ Which is the relation we set out to prove. So the method is to substitute $i=k+1$ into the formula you are trying to prove and then use the inductive assumption to recover the $\color{blue}{\mathrm{blue}}$ equation at the end.
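For extra confidence, here is a quick brute-force check of the identity (a Python sketch; the integer division by $4$ is exact because $n(n+1)$ is always even):

```python
# Verify sum_{i=1}^{n} i^3 == n^2 (n+1)^2 / 4 for small n
for n in range(1, 50):
    assert sum(i**3 for i in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
print("identity verified for n = 1..49")
```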
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Behaviour of $(\overline{a}_n)_{n=1}^{\infty}$ and $(\underline{a}_n)_{n=1}^{\infty}$ I have been trying to understand the following definition and just needed some clarification. For each bounded sequence $(a_n)_{n=1}^{\infty}$ we define the sequences $(\overline{a}_n)_{n=1}^{\infty}$ and $(\underline{a}_n)_{n=1}^{\infty}$ in the following way: \begin{eqnarray*} \overline{a}_n&=& \sup\left\{a_n,a_{n+1},\dots \right\},\\ \underline{a}_n&=&\inf\left\{a_n,a_{n+1},\dots\right\}. \end{eqnarray*} How does this definition imply that $\overline{a}_n$ is decreasing and $\underline{a}_n$ is increasing?
Note that $$\bar a_2 = \sup\{ a_2, a_3, \cdots \} \le \sup\{ a_1, a_2, a_3, \cdots \} = \bar a_1$$ as the set $\{ a_2, a_3, \cdots \}$ is contained in $\{ a_1, a_2, a_3, \cdots \}$. Similarly we have $\bar a_{n+1} \le \bar a_n$ for all $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1499973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Three point numerical differentiation Is there any generalized way to calculate numerical differentiation using a certain number of points? I have found 2-point and 5-point methods, but could not find information about using any other number of points. I am interested in doing 3-point, but am not sure if this would be practical or possible.
The general method is as follows. * *Decide which points you want to use: maybe $x-2h$, $x+h$ and $x+3h$ for some reason. Here $x$ refers to the point at which I want to compute the derivative. *Write down Taylor expansions for those points, centered at $x$. Use as many terms as you have points: $$f(x-2h) = f(x) - 2h f'(x) + 2h^2 f''(x)+O(h^3) $$ $$f(x+h) = f(x) + h f'(x) + 0.5 h^2 f''(x)+O(h^3) $$ $$f(x+3h) = f(x) + 3h f'(x) + 4.5 h^2 f''(x)+O(h^3) $$ *Find a linear combination of these lines that eliminates all derivatives except the one you want, and makes the coefficient of that derivative $1$. This means solving a linear system of $3$ equations with $3$ unknowns, in the above case. If $f'(x)$ is desired, the combination is $$ -\frac{4}{15h}f(x-2h) + \frac1{6h} f(x+h) + \frac{1}{10h} f(x+3h) = f'(x) +O(h^2) $$
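To see the stencil at work, here is a small Python sketch (an illustration, not part of the derivation): applied to $f=\sin$ at $x=1$, halving $h$ should cut the error by roughly a factor of $4$, consistent with the $O(h^2)$ remainder.

```python
import math

def deriv(f, x, h):
    # -4/(15h) f(x-2h) + 1/(6h) f(x+h) + 1/(10h) f(x+3h) = f'(x) + O(h^2)
    return (-4*f(x - 2*h)/15 + f(x + h)/6 + f(x + 3*h)/10) / h

x = 1.0
for h in (0.1, 0.05, 0.025):
    err = abs(deriv(math.sin, x, h) - math.cos(x))
    print(f"h = {h:<6}  error = {err:.3e}")
```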
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find an Ideal of $\mathbb{Z}+x \mathbb{Q}[ x ]$ that is NOT principal The ring $\mathbb{Z}+x \mathbb{Q}[ x ]$ cannot be a principal ideal domain since it is not a unique factorization domain. Find an ideal of $\mathbb{Z}+x \mathbb{Q}[ x ]$ that is not principal. My book gives no examples of how to show an ideal is not principal. I'm pretty sure if I let $I=(2,1/2 x)$ then I can show it's not principal. I'm pretty confident that the best route to go is to do a proof by contradiction. But how do I start? What is my initial assumption? Thanks!
Call $R= \Bbb{Z}+ x \Bbb{Q}[x]$. I highly suspect that $R$ is a Bezout domain, (i.e. every finitely generated ideal is principal), so I give you a non finitely generated ideal. Consider the ideal $$I=(x, x/2 , x/4 , x/8 , \dots) = \bigcup_{k \ge 1} \left( \frac{1}{2^k}x \right)$$ Clearly, for all $k \ge 1$ we have $$\left( \frac{1}{2^k}x \right) \subsetneq \left( \frac{1}{2^{k-1}}x \right)$$ because $$\frac{1}{2^{k-1}}x = 2 \frac{1}{2^k}x$$ and $2$ is not a unit of $R$ : this shows that $R$ is not Noetherian, and that the union of this chain of ideals is not finitely generated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Study the monotonicity of this function. The function is $$y=x^2-5x+6.$$ I have computed $$\frac{f(x_2)-f(x_1)}{x_2-x_1},$$ which results in $$x_1+x_2-5.$$ What should I do next?
Now write $x_2 = x_1 + h$ and let $h \to 0$. For the function to be increasing, we need $$\lim_{h \to 0 } (x_1+x_1+h -5)\geq 0.$$ Solving for $x_1$ gives $x_1 \geq \frac{5}{2}$. Therefore for $x \geq \frac{5}{2}$ the function is increasing, and for $x < \frac{5}{2}$ it is strictly decreasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\binom{n}{k} \frac{1}{n^k}\leqslant \frac{1}{k!}$ holds true for $n\in \mathbb{N}$ and $k=0,1,2, \ldots, n$ $$\binom{n}{k} \frac{1}{n^k}\leqslant \frac{1}{k!}$$ How would I prove this? I tried with induction, with $n$ as a variable and $k$ changing, but then I can't prove for $k+1$, can I? Is there a better method than using induction (if induction even works)?
The simplest way I see is the following: $$\binom{n}{k}\frac{1}{n^k} = \frac{\overbrace{n(n-1)\dots(n-k+1)}^{\text{$k$ positive terms, each no larger than $n$} }}{k!} \frac{1}{n^k} \le \frac{1^k}{k!}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Proof that a given equation (quartic) doesn't have real roots $$ (x^2-9)(x-2)(x+4)+(x^2-36)(x-4)(x+8)+153=0 $$ I need to prove that the above equation doesn't have a real solution. I tried breaking it up into an $(\alpha)(\beta)\cdots=0$ expression, but no luck. Wolfram Alpha tells me that the equation doesn't have real roots, but I'm sure there's a simpler way to show this than working through the quartic this gives.
Your polynomial is $$P(x) = ({x^2} - 9)(x - 2)(x + 4) + ({x^2} - 36)(x - 4)(x + 8) + 153\tag{1}$$ Now consider these: $$\eqalign{ & f(x) = ({x^2} - 9)(x - 2)(x + 4) \cr & f({x \over 2}) = \left( {{{\left( {{x \over 2}} \right)}^2} - 9} \right)\left( {\left( {{x \over 2}} \right) - 2} \right)\left( {\left( {{x \over 2}} \right) + 4} \right) \cr & \,\,\,\,\,\,\,\,\,\,\,\,\, = {1 \over 4}\left( {{x^2} - 36} \right){1 \over 2}\left( {x - 4} \right){1 \over 2}\left( {x + 8} \right) \cr & \,\,\,\,\,\,\,\,\,\,\,\,\, = {1 \over {16}}\left( {{x^2} - 36} \right)\left( {x - 4} \right)\left( {x + 8} \right) \cr}\tag{2}$$ Combining $(1)$ and $(2)$ gives $$P(x) = f(x) + 16f({x \over 2}) + 153\tag{3}$$ Next, we try to find the range of $f(x)$. For this purpose, consider this $$\eqalign{ & f(x) = ({x^2} - 9)(x - 2)(x + 4) \cr & \,\,\,\,\,\,\,\,\,\,\,\, = {x^4} + 2{x^3} - 17{x^2} - 18x + 72 \cr & \,\,\,\,\,\,\,\,\,\,\,\, = {\left( {{x^2} + x - 9} \right)^2} - 9 \cr}\tag{4}$$ Now, by $(4)$ you can conclude that $$\left\{ \matrix{ f(x) \ge - 9 \hfill \cr f({x \over 2}) > - 9 \hfill \cr} \right.\,\,\,\,\,\,\,\,\,\,or\,\,\,\,\,\,\,\left\{ \matrix{ f(x) > - 9 \hfill \cr f({x \over 2}) \ge - 9 \hfill \cr} \right.\tag{5}$$ Notice the equality signs! Can you figure out why this happens? Then using $(5)$ you can conclude that $$\left\{ \matrix{ f(x) \ge - 9 \hfill \cr 16f({x \over 2}) > - 144 \hfill \cr} \right.\,\,\,\,\,\,\,\,\,\,or\,\,\,\,\,\,\,\left\{ \matrix{ f(x) > - 9 \hfill \cr 16f({x \over 2}) \ge - 144 \hfill \cr} \right.\tag{6}$$ and then summing up either of the relations $(6)$ will lead to $$f(x) + 16f({x \over 2}) > - 153\,\,\,\,\,\,\,\,\,\,\, \to \,\,\,\,\,\,\,\,\,\,\,f(x) + 16f({x \over 2}) + 153 > 0\,\,\,\,\,\, \to \,\,\,\,\,\,\,P(x) > 0\tag{7}$$ I think we are done now! :)
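To double-check the conclusion numerically (a SymPy sketch, independent of the argument above):

```python
import sympy as sp

x = sp.symbols('x')
P = sp.expand((x**2 - 9)*(x - 2)*(x + 4) + (x**2 - 36)*(x - 4)*(x + 8) + 153)
print(sp.real_roots(P))  # [] -- the quartic has no real roots
print(P.subs(x, 0))      # 1377 > 0, so P(x) > 0 everywhere
```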
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Definition of the ordered triple (a, b, c) according to Kuratowski's Set Theory. Can someone give Kuratowski's definition of the ordered triple $(a,b,c)$ assuming $A \times B \times C$ is rewritten as $(A \times B) \times C$, please? I noticed there is already an answered question for the ordered $n$-tuple, but (as I'm very new to Maths) I didn't understand it, and I only need the definition for the ordered triple.
Ordered triples are defined recursively, so that $(x,y)=\{\{x\},\{x,y\}\}$ and $(x,y,z)=((x,y),z)$. Observe that $((x,y),z)$ only has two elements, $(x,y)$ and $z$, so we can just apply the definition. To make our lives easier, let $q=(x,y)=\{\{x\},\{x,y\}\}$. Then the substitution is simple: $$\begin{align} (x,y,z) &= ((x,y),z) \\ &=(q,z) \\ &= \big\{\{q\},\{q,z\}\big\} \\ &= \big\{\{\{\{x\},\{x,y\}\}\},\{\{\{x\},\{x,y\}\},z\}\big\} \\ \end{align}$$ Likewise, $(x,y,z,w)=((x,y,z),w)$, so letting $q=(x,y,z)$, we can write $(x,y,z,w)=(q,w)= \big\{\{q\},\{q,w\}\big\}$, which we can expand as shown above. Substitutions are your friend.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Questions about infinite intersections of sets I am making some examples to make sure I understand these ideas correctly, but some of them im unsure of, I posted what I think the intersection are. Are they correct? * *$\displaystyle\bigcap^{\infty}_{k=1}\left[1,1+\frac{1}{k}\right]=\{1\}.$ *$\displaystyle\bigcap^{\infty}_{k=1}\left(1,1+\frac{1} {k}\right)=\emptyset.$ *$\displaystyle\bigcap^{\infty}_{k=1}\left(1,1+\frac{1}{k}\right]=\emptyset.$ *$\displaystyle\bigcap^{\infty}_{k=1}\left[1,1+\frac{1}{k}\right)=\{1\}.$
All four are correct. They are not justified, but they are correct. An example of a justification (for 3.): Let $\displaystyle A = \bigcap_{k=1}^\infty(1, 1+\frac1k)$, and let $x\in\mathbb R$. If $x\leq 1$, then obviously, $x\notin A$. If $x>1$, then $x=1+\epsilon$ for some $\epsilon>0$. There then exists some $k$ for which $\frac 1k < \epsilon$, meaning that $x > 1+\frac1k$ and so $x\notin (1, 1+\frac1k)$. This also means that $x\notin A\subseteq (1, 1+\frac1k)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Real Analysis, Folland Proposition 2.7 If $\{f_j\}$ is a sequence of $\overline{\mathbb{R}}$-valued measurable functions on $(X,M)$, then the functions $$\begin{aligned} g_1(x) = \sup_{j}f_j(x), \ \ \ \ g_3(x) = \lim_{j\rightarrow \infty}\sup f_j(x) \end{aligned}$$ $$\begin{aligned} g_2(x) = \inf_{j}f_j(x), \ \ \ \ g_4(x) = \lim_{j\rightarrow \infty}\inf f_j(x) \end{aligned}$$ are all measurable functions. If $f(x) = \lim_{j\rightarrow \infty}f_j(x)$ exists for every $x\in X$, then $f$ is measurable. Proof: $$\begin{aligned} g_1^{-1}((a,\infty)) = \bigcup_{1}^{\infty}f_j^{-1}((a,\infty)), \ \ \ \ \ g_2^{-1}((-\infty,a)) = \bigcup_{1}^{\infty}f_j^{-1}((-\infty,a)) \end{aligned}$$ so $g_1$ and $g_2$ are measurable by proposition 2.3. Now we can define $$\begin{aligned} g_3(x) = \lim_{j\rightarrow \infty}\sup f_j(x) = \inf_{k\geq 1}\left( \sup_{j \geq k} f_j(x)\right) \ \ \ \ \ g_4(x) = \lim_{j\rightarrow \infty}\inf f_j(x) = \sup_{k\geq 1}\left(\inf_{j \geq k} f_j(x)\right) \end{aligned}$$ so $g_3$ and $g_4$ are measurable. I am not sure if this is right; any suggestions are greatly appreciated.
One has \begin{align*} \limsup f_j(x) \geq a &\iff \inf_{k} \sup_{j \geq k} f_j(x) \geq a\\ &\iff \forall k: \quad \sup_{j \geq k} f_j(x) \geq a \\ &\iff \forall k, \forall \epsilon > 0, \exists j \geq k: \quad f_j(x) \geq a - \epsilon\\ &\iff \forall k, \forall n > 0, \exists j \geq k: \quad f_j(x) \geq a - \frac{1}{n}\\ &\iff x \in \bigcap_{k=1}^{\infty} \bigcap_{n = 1}^{\infty} \bigcup_{j=k}^{\infty} f_j^{-1}\left(\left[a - \frac{1}{n}, \infty\right)\right) \end{align*} This shows that $$g_3^{-1}([a,\infty)) = \bigcap_{k=1}^{\infty} \bigcap_{n = 1}^{\infty} \bigcup_{j=k}^{\infty} f_j^{-1}\left(\left[a - \frac{1}{n}, \infty\right)\right)$$ Now by assumption, each $f_j^{-1}([a - \frac{1}{n}, \infty))$ is measurable. So are their countable intersections and unions. Reflecting every inequality or working with $-f_j$ will give you the measurability of $g_4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1500998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given the function $ f(x) = \sin^2(\pi x) $, show that $ f \in BV[0, 1] $ I'm learning about functions of bounded variation and need to verify my work on this problem, since my textbook does not provide any solution: Given the function $ f(x) = \sin^2(\pi x) $, show that $ f $ is of bounded variation on $ [0, 1] $. Here's my attempt: $ f'(x) = 2(\sin(\pi x)) \frac{d}{dx}[\sin(\pi x)] = 2\pi \sin(\pi x) \cos(\pi x) = \pi \sin(2 \pi x)$. The function $ f $ is differentiable on $ [0, 1] $ and $ \forall x \in [0, 1] $ we have: $$ \lvert f'(x) \rvert = \lvert \pi\sin(2 \pi x) \rvert \le \pi $$ Since the derivative of $ f $ is bounded on $ [0,1] $, this implies that $ f \in BV[0, 1] $. Is my work correct?
For a proof from scratch, suppose $f:[0,1]\rightarrow \mathbb R$ is differentiable with $|f'|\le M$ on $[0,1]$ (so $M$ is a Lipschitz constant for $f$). Let $\mathcal P=\left \{ 0,x_{1},\cdots ,x_{n-2},1 \right \}$ be a partition of $[0,1]$. Then, using the MVT and this bound, we have, with $x_i<x_i^{*}<x_{i+1}$, $\sum_{i=0}^{n-1}\left | f(x_{i+1})-f(x_i) \right |\leq \sum_{0}^{n-1}\vert f'(x^{*}_i)(x_{i+1}-x_i)\vert \leq \sum_{0}^{n-1}\vert M(x_{i+1}-x_i)\vert\leq M$ and the result follows as soon as we set $f(x)=\sin ^2(\pi x)$ and note that $|f'(x)|=\pi|\sin(2\pi x)|\le\pi$, so we may take $M=\pi $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1501208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Orders of Elements in GL(2,R) Let A = $$\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix}$$ and B = $$\begin{pmatrix} 0&-1\\ 1&-1\\ \end{pmatrix}$$ be elements in $GL(2, R)$. Show that $A$ and $B$ have finite orders but AB does not. I know that $AB$ = $$\begin{pmatrix} 1&-1\\ 0&1\\ \end{pmatrix}$$ and that $GL(2,R)$ is a group of 2x2 invertible matrices over $R$ with the matrix multiplication operation. Firstly, I am confused as to how an element of this group can have an order. Is it that A and B are the products of invertible matrices in this group? Given that this is over $R$ the orders should be infinite. My book did not define the general linear group very well. Secondly, I would like some guidance on how to proceed following this problem. Any help is much appreciated.
To say that $A$ has order $n$ means that $n$ is the smallest positive integer such that $A^n = I$. In this case, $I$ is of course the identity matrix. $A^2$ = $\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix}\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix} = \begin{pmatrix} -1&0\\ 0&-1\\ \end{pmatrix} = -I$ and so $A^4 = I$; the order of $A$ is $4$. $B^3$= $\begin{pmatrix} 0&-1\\ 1&-1\\ \end{pmatrix}^3=I$, so the order of $B$ is $3$. Given $AB$ = $\begin{pmatrix} 1&-1\\ 0&1\\ \end{pmatrix}$, I'll let you check that $(AB)^n \neq I$ for any $n$, thus proving $AB$ does not have finite order. Now, to rigorously prove that $(AB)^n \neq I$ for any $n$, one way is to show that $(AB)^n$ = $\begin{pmatrix} 1&-n\\ 0&1\\ \end{pmatrix}$ by induction.
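These computations are easy to confirm with NumPy (a sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]])
B = np.array([[0, -1], [1, -1]])
I = np.eye(2, dtype=int)

print(np.array_equal(np.linalg.matrix_power(A, 4), I))  # True: A has order 4
print(np.array_equal(np.linalg.matrix_power(B, 3), I))  # True: B has order 3
print(np.linalg.matrix_power(A @ B, 10))                # [[1 -10] [0 1]], never I
```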
{ "language": "en", "url": "https://math.stackexchange.com/questions/1501288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
For what values of $p$, the series $\sum_{n=1}^{\infty}|\frac{\sin(n)}{n}|^p$ is convergent? Let $p>1$ , $p\in\mathbb{R}$. For what values of $p$, the series $\sum_{n=1}^{\infty}|\frac{\sin(n)}{n}|^p$ is convergent? When $p=1$, I know the series is divergent but how about other cases? Thanks!
If $p > 1$ we can write: $0 < \sum\limits_{n=1}^{\infty}\left|\frac{\sin(n)}{n}\right|^p \le \sum\limits_{n=1}^{\infty}\left|\frac{1}{n}\right|^p$, and the right-hand side converges. So the middle part also converges, since $\sum\limits_{n=1}^{k}\left|\frac{\sin(n)}{n}\right|^p$ is an increasing function of $k$ and it is bounded above by $\sum\limits_{n=1}^{\infty}\left|\frac{1}{n}\right|^p$. So for every $p > 1$ it converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1501389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expanding $(x-2)^3$ I was trying to expand $(x-2)^3$. This is what I did: * *Expanded the term, so $(x-2)(x-2)(x-2)$ *Multiplied the first term and second term systematically through each case. The answer I got did not match the one at the back of the book; can someone show me how to do this, please?
Make use of the identity: $$(x-y)^3=x^3-3x^2y+3xy^2-y^3.$$ Letting $y=2$, we obtain: $$(x-2)^3=x^3-6x^2+12x-8.$$
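For checking expansions like this, a computer algebra system helps (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand((x - 2)**3))  # x**3 - 6*x**2 + 12*x - 8
```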
{ "language": "en", "url": "https://math.stackexchange.com/questions/1501496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to prove that the set $A = \{\ q \in \mathbb{Q}\ |\ q = n + \frac{1}{2n} \mathrm{\ for\ }n\in\mathbb{N}\ \}$ is closed in $\mathbb{R}$? I am working in the metric space $\mathbb{R}$ equipped with the distance function $d(x,y)=|x-y|$. Let $A = \{\ q \in \mathbb{Q}\ |\ q = n + \frac{1}{2n} \ \}$. How do I formally prove that $A$ is closed? In order to prove that $A$ is closed in $\mathbb{R}$ I need to show that $A=\mathrm{cl}(A)$. It is elementary that $A\subseteq\mathrm{cl}(A)$. So I just need to prove that $A\supseteq\mathrm{cl}(A)$ I have two definitions of $\mathrm{cl}(A)$: DEFINITION 1:$\ $ $\mathrm{cl}(A)$ = { $b \in X$ | there exists a sequence $\{a_{n}\}_{n=1}^{\infty}\subset A$ such that $\lim\limits_{n \to \infty}=b$ } DEFINITION 2:$\ $ $\mathrm{cl}(A)$ = { $b \in X$ | $B(b;r)\cap A \neq \emptyset$ for all $r>0$ } I am confused as to how to prove $\mathrm{cl}(A) \subseteq A$ formally. My ATTEMPT: I think the second definition will be easier to use. Let $b \in \mathrm{cl}(A)$. Then for any $r>0$ we have $B(b;r)\cap A \neq \emptyset$. Fix any $r>0$. Since $B(b;r)\cap A$ is non-empty, take an element $p \in B(b;r)\cap A$. Since $p \in A$, there exists an $n\in\mathbb{N}$ such that $p = n + \frac{1}{2n}$. Since $p \in B(b;r)$, we have $|p-b|<r$, or equivalently, $|n+\frac{1}{2n}-b|<r$. How do I show that this implies $b=m+\frac{1}{2m}$ for some $m\in \mathbb{N}$? This would show that $b\in A$, and hence $\mathrm{cl}(A) \subseteq A$. Can someone help me out?
$A$ is the set of solutions of the equation $$\sin\big(\pi \dfrac{x\pm\sqrt{x^2-2}}{2}\big)=0$$and so is the union of the zero sets of two continuous functions, hence closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1501727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Can polar coordinates always be used to calculate a limit in a multivariable function? Are polar coordinates always a viable way to calculate the limit of a multivariable function? In lecture, it appeared as if converting a function into polar coordinates and then checking the limit as r approaches 0 would be a foolproof way to determine a limit. However, after doing some online reading it appears as if it is not a viable method when the function is not "independently bound of theta". Could someone please explain this to me? I am having difficulty understanding this concept.
If your function is $\frac{x}{x^2+y^2}$, then the polar substitution makes the denominator $r^2(\cos^2\theta+\sin^2\theta)=r^2$. If your function is $\frac{x}{x^2+2y^2}$, then the polar substitution makes the denominator $r^2(\cos^2\theta+2\sin^2\theta)$, from which you can't just eliminate $\theta$ the same way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1501823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that for any prime $p$: if $p\mid x^4 - x^2 + 1$ with $x \in \mathbb{Z}$, then $p \equiv 1 \pmod{12}$. Show that for any prime $p$: if $p\mid x^4 - x^2 + 1$, then $p \equiv 1 \pmod{12}$. I suppose that if $p$ divides this polynomial we can see that: $x^4 - x^2 + 1 = kp$ for some $k \in \mathbb{N}$. But then $x^4 - x^2 + 1 \equiv 0 \pmod{p}$. This means that $(2x^2 - 1)^2 \equiv 4(x^4 - x^2 +1) - 3 \equiv - 3 \pmod {p}$. So $-3$ must be a quadratic residue for this to have a solution, but $\left(\frac{-3}{p}\right) = 1$ if $p \equiv \pm 1\pmod{3}$. Is there something I am doing wrong here?
If $p\mid x$, then $p\mid x^4-x^2+1\implies p\mid 1$, contradiction. Therefore $p\nmid x$. $(2x^2-1)^2\equiv -3\pmod{p}$ and $\left(\left(x^2-1\right)x^{-1}\right)^2\equiv -1\pmod{p}$. We can't have $p=3$, because $2x^2-1\equiv 0\pmod{3}$ has no solutions. We also can't have $p=2$, because $x^4-x^2+1$ is always odd, so $2$ can't divide it. By Quadratic Reciprocity we get $p\equiv 1\pmod{3}$ and $p\equiv 1\pmod{4}$, respectively, so $p\equiv 1\pmod{12}$. Alternatively, $\left(\left(x^2+1\right)x^{-1}\right)^2\equiv 3\pmod{p}$ and $\left(\left(x^2-1\right)x^{-1}\right)^2\equiv -1\pmod{p}$ and again we can't have $p=2$ or $p=3$, because $\left(x^2+1\right)x^{-1}\equiv 0\pmod{3}$ has no solutions and $x^4-x^2+1$ is always odd. By Quadratic Reciprocity we get $p\equiv \pm1\pmod{12}$ and $p\equiv 1\pmod{4}$, respectively, so $p\equiv 1\pmod{12}$.
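An empirical sanity check (a SymPy sketch): gather the prime divisors of $x^4-x^2+1$ for small $x$ and confirm each is $\equiv 1 \pmod{12}$.

```python
from sympy import primefactors

seen = set()
for x in range(2, 300):
    seen.update(primefactors(x**4 - x**2 + 1))

print(all(p % 12 == 1 for p in seen))  # True
print(min(seen))                       # 13, e.g. 2**4 - 2**2 + 1 = 13
```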
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find the determinant of this $5 \times 5$ matrix? How can I find the determinant of this matrix? I know that for a $3 \times 3$ matrix I can expand along the first row, $$\det A= 1(5\cdot 9-8\cdot 6)-2 (4\cdot 9-7\cdot 6)+3(4\cdot 8-7\cdot 5), $$ but how do I work with a $5\times 5$ matrix?
Multiplying the 1st row by $3$ and then adding it to the 4th row, and then multiplying the 3rd row of the resulting matrix by $-\frac 1 4$ and adding it to the 5th row, we obtain $$\det \begin{bmatrix} 1 & 2 & 3 & 4 & 1\\ 0 & -1 & 2 & 4 & 2\\ 0 & 0 & 4 & 0 & 0\\ -3 & -6 & -9 & -12 & 4\\ 0 & 0 & 1 & 1 & 1\end{bmatrix} = \det \begin{bmatrix} 1 & 2 & 3 & 4 & 1\\ 0 & -1 & 2 & 4 & 2\\ 0 & 0 & 4 & 0 & 0\\ 0 & 0 & 0 & 0 & 7\\ 0 & 0 & 1 & 1 & 1\end{bmatrix} = \det \begin{bmatrix} 1 & 2 & 3 & 4 & 1\\ 0 & -1 & 2 & 4 & 2\\ 0 & 0 & 4 & 0 & 0\\ 0 & 0 & 0 & 0 & 7\\ 0 & 0 & 0 & 1 & 1\end{bmatrix}$$ We have obtained a block upper triangular matrix. Hence, $$\det \begin{bmatrix} 1 & 2 & 3 & 4 & 1\\ 0 & -1 & 2 & 4 & 2\\ 0 & 0 & 4 & 0 & 0\\ 0 & 0 & 0 & 0 & 7\\ 0 & 0 & 0 & 1 & 1\end{bmatrix} = \det \begin{bmatrix} 1 & 2 & 3\\ 0 & -1 & 2\\ 0 & 0 & 4\end{bmatrix} \cdot \det \begin{bmatrix} 0 & 7\\ 1 & 1\end{bmatrix} = (-4) \cdot (-7) = 28$$
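A one-line NumPy verification (a sketch; `np.linalg.det` works in floating point, hence the rounding):

```python
import numpy as np

M = np.array([[ 1,  2,  3,   4, 1],
              [ 0, -1,  2,   4, 2],
              [ 0,  0,  4,   0, 0],
              [-3, -6, -9, -12, 4],
              [ 0,  0,  1,   1, 1]])
print(round(np.linalg.det(M)))  # 28
```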
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Calculating two specific limits with Euler's number. I got stuck when I was proving that $$\lim_{n \to \infty} \frac {\sqrt[2]{(n^2+5)}-n}{\sqrt[2]{(n^2+2)}-n} = \frac {5}{2}$$ $$\lim_{n \to \infty}n(\sqrt[3]{(n^3+n)}-n) = \frac {1}{3}$$ The first one I tried to solve like this: $$\lim_{n \to \infty} \frac {\sqrt[2]{(n^2+5)}-n}{\sqrt[2]{(n^2+2)}-n} = \lim_{n \to \infty} \frac {n\sqrt[2]{(1+\frac {5}{n^2})}-n}{n\sqrt[2]{(1+\frac{2}{n^2})}-n}= \lim_{n \to \infty} \frac {\sqrt[2]{(1+\frac {5}{n^2})}-1}{\sqrt[2]{(1+\frac{2}{n^2})}-1}$$ and now I think that this one should go like $$\lim_{n \to \infty} \frac{\frac{5}{n^2}}{\frac{2}{n^2}}=\frac{5}{2},$$ but I have no idea how to prove this. In the second one I made $$\lim_{n \to \infty}n(\sqrt[3]{(n^3+n)}-n) = \lim_{n \to \infty}n\left(n\sqrt[3]{1+\frac{1}{n^2}}-n\right)= \lim_{n \to \infty}n^2\left(\sqrt[3]{1+\frac{1}{n^2}}-1\right)= \lim_{n \to \infty}n^2(e^{\frac{1}{3n^2}}-1) $$ And now I do not know what to do next... I would be really grateful for any help or prompt on how to solve these (or information on where the mistake is).
$$(a)\;\;\lim_{n\rightarrow \infty}\frac{\sqrt{n^2+5}-n}{\sqrt{n^2+2}-n} =\lim_{n\rightarrow \infty}\frac{\sqrt{n^2+5}-n}{\sqrt{n^2+2}-n}\times \frac{\sqrt{n^2+5}+n}{\sqrt{n^2+5}+n}\times \frac{\sqrt{n^2+2}+n}{\sqrt{n^2+2}+n} $$ So we get $$=\lim_{n\rightarrow \infty}\frac{5}{2}\times \frac{\sqrt{n^2+2}+n}{\sqrt{n^2+5}+n} = \frac{5}{2}\lim_{n\rightarrow \infty}\frac{n(\sqrt{1+\frac{2}{n^2}}+1)}{n(\sqrt{1+\frac{5}{n^2}}+1)}=\frac{5}{2}$$ ..................................................................................................................................................................... $$(b) \lim_{n\rightarrow \infty}n\left[\sqrt[3]{n^3+n}-n\right]$$ Now Put $\displaystyle n=\frac{1}{y}\;,$ When $n\rightarrow \infty\;,$ Then $y\rightarrow 0$ So we get $$\lim_{y\rightarrow 0}\frac{(1+y^2)^{\frac{1}{3}}-1}{y^2}$$ Now Let $(1+y^2)^{\frac{1}{3}}=A$ and $1=B\;,$ Then $A^3-B^3 = 1+y^2-1=y^2$ So we get $$\lim_{y\rightarrow 0}\frac{A-B}{A^3-B^3} = \lim_{y\rightarrow 0}\frac{A-B}{(A-B)(A^2+B^2+AB)} = \lim_{y\rightarrow 0}\frac{1}{A^2+B^2+AB}$$ So we get $$\lim_{y\rightarrow 0}\frac{1}{(1+y^2)^{\frac{2}{3}}+1^2+(1+y^2)^{\frac{1}{3}}} = \frac{1}{3}$$
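Both limits can also be confirmed symbolically (a SymPy sketch):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
print(sp.limit((sp.sqrt(n**2 + 5) - n) / (sp.sqrt(n**2 + 2) - n), n, sp.oo))  # 5/2
print(sp.limit(n * ((n**3 + n)**sp.Rational(1, 3) - n), n, sp.oo))            # 1/3
```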
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Understanding telescoping series? The initial notation is: $$\sum_{n=5}^\infty \frac{8}{n^2 -1}$$ I get to about here, then I get confused: $$\left(1-\frac{2}{3}\right)+\left(\frac{4}{5}-\frac{4}{7}\right)+...+\left(\frac{4}{n-3}-\frac{4}{n-1}\right)+...$$ How do you figure out how to get the $\frac{4}{n-3}-\frac{4}{n-1}$ and so on? Like, where does the $n-3$ come from, or the $n-1$?
Looking a little closer at the question, he is asking about partial fraction decomposition, as opposed to the value of the sum itself. For this particular example, it's fairly straight forward. When given a fraction which contains a polynomial denominator, you can factor this fraction and break it into a sum of other fractions with denominators of a lower polynomial order. For example, we begin be identifying that: $$\frac{8}{n^2 - 1} = \frac{8}{(n+1)(n-1)}$$ We are then interested in finding factors $A,\ B$ such that $$\frac{8}{n^2 - 1} = \frac A{n+1} + \frac B{n-1}$$ All we need to do now is combine the right side, and solve for the necessary $A,\ B$, to equate to the right side. $$\frac A{n+1} + \frac B{n-1} = \frac{A(n-1) + B(n+1)}{(n+1)(n-1)} = \frac{(A+B)n + B - A}{(n+1)(n-1)}$$ Since your original fraction has $8$ in the numerator, we make the following comparison: $$(A+B)n + B - A = 8$$ To conclude that $$A+B = 0;\ \ \ B - A = 8\implies -A = B = 4$$ Hopefully this gets you past where you are stuck, and the supplementary answers can take you the rest of the way! EDIT: The $(n-1)^{-1}$ and $(n-3)^{-1}$ terms arise as part of the series. When considering this series as it pushes on to infinity, the observation of $(n-1)^{-1}$ and $(n-3)^{-1}$ is no different from looking at $(n+1)^{-1}$ and $(n-1)^{-1}$. What is important is the difference between them ($2$ indices).
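SymPy's `apart` performs exactly this decomposition (a sketch):

```python
import sympy as sp

n = sp.symbols('n')
print(sp.apart(8 / (n**2 - 1), n))  # 4/(n - 1) - 4/(n + 1)
```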
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
What's the summation formula of the series $2*2^0 + 3*2^1 + 4*2^2 + 5*2^3\dots$? I faced this question where I was asked to find a summation formula for $n$ terms of $2*2^0 + 3*2^1 + 4*2^2 + 5*2^3\dots$ I did try generalizing it with $$a_n = (n + 1)2^{n - 1} \quad\text{or}\quad a_n = n\,2^{n - 2},$$ but to no avail. Then I tried subtracting $2\left(\sum 2^n\right)$, an increasing geometric series, from the above series ($2*2^0 + 3*2^1 + 4*2^2 + 5*2^3\dots$), and I actually got something like a general term of $n2^n$ for the series obtained from this subtraction, but then it led me nowhere, and I also think there is no scope for telescoping this kind of series. I think the problem here is that now I am devoid of any idea of how to approach this problem.
I can provide another idea: $$ \begin{array}{cccccccc} a= & 2\times2^{0}+ & 3\times2^{1}+ & 4\times2^{2}+ & 5\times2^{3}+ & \cdots & \left(M+1\right)\times2^{M-1}+ & \cdots\\ 2a= & & 2\times2^{1}+ & 3\times2^{2}+ & 4\times2^{3}+ & \cdots & \left(M\right)\times2^{M-1}+ & \cdots \end{array} $$ Then, subtracting the second equation from the first, you get $$ -a=2\times2^{0}+2+2^{2}+2^{3}+2^{4}+\cdots+2^{M-1}+\cdots $$ If it is the finite sum, i.e., $a_M=\sum_{n=1}^{M}(n+1)2^{n-1}$, then the above equation acquires a last, negative term: $$ -a_{M}=2\times2^{0}+2+2^{2}+2^{3}+2^{4}+\cdots+2^{M-1}-\left(M+1\right)\times2^{M} $$ I think you can get the result from here, which is $a_M = M\times2^M$.
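A quick check of the closed form against brute force (a Python sketch):

```python
# a_M = sum_{n=1}^{M} (n+1) * 2**(n-1)  should equal  M * 2**M
for M in range(1, 20):
    assert sum((n + 1) * 2**(n - 1) for n in range(1, M + 1)) == M * 2**M
print("closed form M * 2**M verified for M = 1..19")
```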
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Where, if ever, does the decimal representation of $\pi$ repeat its initial segment? I was wondering at which decimal place $\pi$ first repeats itself exactly once. So if $\pi$ went $3.143141592...$, it would be the thousandths place, where the second $3$ is. To clarify, this notion of repetition means a pattern like abcdabcdefgh...
This is unknown, but conjectured to be false; see e.g. Brian Tung's answer to PI as an infinite set of integers. An interesting point here is the different kinds of "patternless-ness" numbers can exhibit. On the one hand, there is randomness: where the idea is that the digits of a number are distributed stochastically. Random numbers "probably" don't have such moments of repetition, and in particular no random real will have infinitely many such moments of repetition. Randomness is connected with measure: measure-one many reals are random. On the other hand, there is genericity: where the idea is that 'every behavior that can happen, does.' Having specified finitely many digits $a_1. . . a_n$ of a number, it is possible for the next digits to be $a_1. . . a_n$ again; so generic numbers do have such moments of repetition (in fact, they have infinitely many). Genericity is connected with category: comeager-many reals are generic. It seems to be a deep fact of mathematics that naturally-occurring real numbers which are not rational tend to be random as opposed to generic. In particular, $\pi$ is conjectured to be absolutely normal, and absolute normality is on the randomness side of the divide. These notions of patternlessness, as well as others, are studied in the set theory of the reals as notions of forcing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
$\bigcup\limits_{i=1}^n A_i$ has finite diameter for each finite $A_i$ Let $(X,d)$ be a metric space. The diameter of a set $A\subset X$ is defined to be $\operatorname{diam}(A)= \sup\{d(x,y):x,y\in A\}$. Suppose $A_1, \dots, A_n$ is a finite collection of subsets of $X$, each with finite diameter. Prove that $\bigcup\limits_{i=1}^n A_i$ has finite diameter. My approach: I am trying to show that $\operatorname{diam}(\bigcup\limits_{i=1}^n A_i)\leq \operatorname{diam}(A_1)+\operatorname{diam}(A_2)+\cdots+\operatorname{diam}(A_n)$. If this holds, then since each $A_i$ has finite diameter, $\bigcup\limits_{i=1}^n A_i$ must also have finite diameter. But I stumbled on how to show that inequality; should we use the triangle inequality somewhere? I am not very sure how to write it. Thanks very much!
It is not true that $\operatorname{diam}\left(\bigcup\limits_{i=1}^n A_i\right)\leq \operatorname{diam}(A_1)+\operatorname{diam}(A_2)+\cdots+\operatorname{diam}(A_n)$. For example, suppose the diameter of each of ten sets is two inches, but one of those ten sets is in Constantinople and another is in Adelaide. Pick a point $a_i$ in each of the sets $A_i$. Find the two indices $k,\ell$ such that $d(a_k,a_\ell) = \max \{ d(a_i,a_j) : i,j \in \{ 1,\ldots,n \} \}$. Given points $x\in A_k$, $y\in A_\ell$, we have $$ d(x,y) \le d(x,a_k)+d(a_k,a_\ell) + d(a_\ell, y) \le \underbrace{\operatorname{diam}(A_k) + d(a_k,a_\ell) + \operatorname{diam}(A_\ell)}. $$ Then show that the quantity over the $\underbrace{\text{underbrace}}$ is an upper bound on the diameter of the union.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Problem related to $L^p$ space Problem Let $k: \mathbb R^{d\times d} \to \mathbb R^d$ be a measurable function such that there is $c>0$ with $$\sup_{x \in \mathbb R^d}\int |k(x,y)|dy \leq c, \space \sup_{y \in \mathbb R^d}\int |k(x,y)|dx \leq c$$ Show that for $1<p<\infty$, the function $K:L^p(\mathbb R^d) \to L^p(\mathbb R^d)$ given by $$K(f)(x)=\int k(x,y)f(y)dy$$ is well defined and uniformly continuous. I got stuck trying to show both properties. First I would like to prove that $K(f) \in L^p(\mathbb R^d)$ for each $f \in L^p(\mathbb R^d)$, so take a function $f$ from that space and $$\int |K(f)(x)|^pdx=\int |\int (k(x,y)fy)dy|^pdx$$ I thought of applying Hölder's inequality to get $$\int |\int (k(x,y)fy)dy|^pdx \leq \int(\int |k(x,y)|^qdy)^{\frac{p}{q}})(\int|f(y)|^pdy)dx$$$$=\int|f(y)|^pdy(\int(\int |k(x,y)|^qdy)^{\frac{p}{q}}dx)$$ I don't know where to go from there. As for uniform continuity I am lost. I would appreciate hints and suggestions to prove the two properties of the exercise. Thanks in advance.
Use (Riesz-Thorin) interpolation. Show that $\|K\|_{L^1(\Bbb R^d) \to L^1(\Bbb R^d)} \le c$ using $\sup_y \int |k(x,y)|\, dx \le c$, and show that $\|K\|_{L^\infty(\Bbb R^d)\to L^\infty(\Bbb R^d)} \le c$ using $\sup_x \int |k(x,y)|\, dy \le c$. These imply $\|K\|_{L^p(\Bbb R^d) \to L^p(\Bbb R^d)} \le c$ for all $1 < p < \infty$ by interpolation. This also gives well-definedness. For all $f,g\in L^p(\Bbb R^d)$, $$\|K(f) - K(g)\|_{L^p(\Bbb R^d)} = \|K(f-g)\|_{L^p(\Bbb R^d)} \le c\|f - g\|_{L^p(\Bbb R^d)}$$ So $K$ is uniformly continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Elementary Infinite Limit Question Suppose $f:\mathbb{R}\to\mathbb{R}$ is a continuous odd function. Does the following statement hold (assuming the limits exist): $$ \lim_{x\to-\infty}f(x)=-\lim_{x\to\infty}f(x)$$ This certainly seems to be true to me, because we can make this statement for any arbitrarily large, finite $x$. However, I just wanted to ensure that this statement holds in the infinite case.
Clearly $$\lim_{x \to -\infty}f(x) = \lim_{y \to \infty}f(-y) = \lim_{y \to \infty}(-f(y)) = -\lim_{x \to \infty}f(x)$$ The above holds without any regard to continuity of $f$ and also without any regard to the existence of the limit. However, when the limit does not exist we need to interpret the equality in a different manner. Thus we have the following options: * *If $\lim_{x \to -\infty}f(x)$ exists then $\lim_{x \to \infty}f(x)$ also exists and $\lim_{x \to -\infty}f(x) = -\lim_{x \to \infty}f(x)$. *If $f(x) \to \infty$ as $x \to -\infty$ then $f(x) \to -\infty$ as $x \to \infty$. *If $f(x) \to -\infty$ as $x \to -\infty$ then $f(x) \to \infty$ as $x \to \infty$. *If $f(x)$ oscillates finitely as $x \to -\infty$ then $f(x)$ also oscillates finitely as $x \to \infty$. *If $f(x)$ oscillates infinitely as $x \to -\infty$ then $f(x)$ also oscillates infinitely as $x \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1502996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the limit of $x^{e^{x}}$ if $x$ tends to $-\infty$? I was looking at the function $f(x)=x^{e^{x}}$ and was curious as to why its domain is restricted to $x\geq0$, when it looks like there are no problems with negative values of $x$. Also, when plugging in negative values for $x$, I noticed that it looks like $f(x)$ is approaching $-1$, but when computing the actual limit, this is the result: $$\lim_{x\to-\infty} f(x) = 1$$ Could someone give me an explanation as to why these are the results? I feel really lost. I want to say that it has to do with trying to take $\ln({f(x)})$, but I am not really sure if that has anything to do with it. Any help/insight is greatly appreciated. Thanks.
First of all: The function at stake is not defined on the negative reals. However, if one considers complex numbers, then a negative real $(x<0)$ as a complex number can be written as $$z=-|x|+i0,\,\,\text{ or } \,\, z=|x|e^{i\pi}.$$ With this in mind $$z^{e^z}=e^{\ln\left(z^{e^z}\right)}=e^{e^z\ln(z)}=e^{e^z\ln\left(|x|e^{i\pi}\right)}=e^{e^z\ln(|x|)+e^zi\pi}=e^{e^{-|x|}\ln(|x|)}e^{e^{-|x|}i\pi}. \tag 1$$ Since $$\lim_{x\to -\infty}e^{-|x|}\ln(|x|)=0$$ and $$\lim_{x\to -\infty}e^{-|x|}i\pi=0,$$ the original limit is, indeed, $1=1+i0$. However, $e^{e^{-|x|}i\pi}=(-1)^{e^{-|x|}}$, the second factor in $(1)$, goes through the path shown below, while $|x|$ goes through the interval $[0,\infty)$. The other factor in $(1)$ is real. So, for the time being I don't understand how to get negative numbers by substituting negative reals in $x^{e^x}$ -- let alone that substituting negative reals here is not comme il faut. BTW, here is how Wolfram $\alpha$ plots our function: (Here the blue line represents the real part and the orange line represents the imaginary part.) For negative $x$'s the result is never real. The limit, of course, is $z=1+i0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1503101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }