Factoring inequalities on Double Summation (Donald Knuth's Concrete Mathematics) If you have the Concrete mathematics book please refer to page 40 and 41. So how come this given sum $$ \sum_{1 \le j < k + j \le n} \frac{1}{k} $$ becomes $$ \sum_{1\le k \le n}\sum_{1\le j \le n-k} \frac{1}{k} $$ ? I do not understand how this is proven since to my understanding i am following from the book's method of factoring inequalities, $$ [1\le j < k+j \le n] = [1 \le j \le n][j < k+j\le n] \text{ or } [1\le k+j \le n][1 \le j < k+j] $$ And it doesn't look like the double summation above. I Tried simplifying the two above and it leads me to nowhere near the answer.
If $1\le j<k+j\le n$, then $1\le j$ and $k+j\le n$; the latter inequality is equivalent to $j\le n-k$, so $1\le j<k+j\le n$ implies that $1\le j\le n-k$, the inequality governing the inner summation. Thus, we want to find some expression such that $$[1\le j<k+j\le n]=[\text{expression}][1\le j\le n-k]\;.$$ What part(s) of $1\le j<k+j\le n$ can we not infer from $1\le j\le n-k$? The inequality $1\le j\le n-k$ says everything about $j$ that can be inferred from $1\le j<k+j\le n$, and it also implies that $k\le n-1$, but it imposes no lower bound on $k$; the missing factor will have to take care of that. The original inequality $1\le j<k+j\le n$ implies that $1\le k$, since $j<k+j$, and that $k<n$, since $k+1\le k+j\le n$; thus, it implies that $1\le k\le n-1$. The inequality $1\le j\le n-k$ gives us half of that, but it doesn’t say that $1\le k$. Thus, we should have $$[1\le j<k+j\le n]=[1\le k][1\le j\le n-k]\;,$$ and you can fairly easily check that this is indeed the case. If we use this decomposition, we get $$\sum_{1\le j<k+j\le n}\frac1k=\sum_{1\le k}\,\sum_{1\le j\le n-k}\frac1k\;.$$ This is technically correct, but it’s harder than necessary to work with. The inner summation is non-zero only for $k\le n-1$, so we might as well add this condition to the outer sum as well: $$[1\le j<k+j\le n]=[1\le k\le n-1][1\le j\le n-k]\;,$$ and $$\sum_{1\le j<k+j\le n}\frac1k=\sum_{1\le k\le n-1}\,\sum_{1\le j\le n-k}\frac1k\;.$$ This is a perfectly reasonable way to rewrite the original summation as a double summation, but as the marginal note in Concrete Mathematics points out, it’s a little messier than necessary. There is no harm in simplifying the outer condition to $1\le k\le n$ to get $$[1\le j<k+j\le n]=[1\le k\le n][1\le j\le n-k]$$ and $$\sum_{1\le j<k+j\le n}\frac1k=\sum_{1\le k\le n}\,\sum_{1\le j\le n-k}\frac1k\;:$$ when $k=n$, the inner summation is $0$ anyway.
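As a quick numerical sanity check (my own addition, not part of the answer), the rewritten double sum agrees exactly with the original sum; the inner sum is empty when $k=n$, matching the remark above:

    from fractions import Fraction

    def lhs(n):
        # sum of 1/k over all pairs (j, k) with 1 <= j < k + j <= n
        return sum(Fraction(1, k)
                   for j in range(1, n + 1)
                   for k in range(1, n + 1)
                   if 1 <= j < k + j <= n)

    def rhs(n):
        # sum of 1/k over 1 <= k <= n and 1 <= j <= n - k
        return sum(Fraction(1, k)
                   for k in range(1, n + 1)
                   for j in range(1, n - k + 1))

    for n in range(1, 13):
        assert lhs(n) == rhs(n)
    print("identity checked for n = 1..12")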
{ "language": "en", "url": "https://math.stackexchange.com/questions/1698826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to prove that the sum of the areas of triangles $ABR$ and $ CDR$ triangle is equal to the $ADR$? In the convex quadrilateral $ABCD$, which is not a parallelogram, the line passing through the centers of the diagonals $AC$ and $BD$ intersects the segment $BC$ at $R$. How to prove that the sum of the areas of triangles $ABR$ and $CDR$ is equal to the area of triangle $ADR$? I have no idea how to do this. Can this be proved with simple geometry?
This is tricky. We shall use an easy fact that if we are given fixed points $Y,Z$, and a variable point $X$ that changes linearly, then $[XYZ]$ changes linearly, where $[\mathcal{F}]$ denotes the oriented area of $\mathcal{F}$. Let $M,N$ be midpoints of $AC, BD$. Using the fact we know that the function $MN \ni X \mapsto [ABX]+[CDX]$ is linear. However $$[ABN]+[CDN]=\frac 12 [ABD] + \frac 12 [CDB] = \frac 12 [ABCD]$$ and $$[ABM]+[CDM]=\frac12 [ABC] + \frac 12 [CDA] = \frac 12 [ABCD]$$ so this function is actually constant. In particular $$[ABR]+[CDR]=\frac 12 [ABCD].$$ This implies that $$[DAR] = [ABCD]-([ABR]+[CDR])=[ABCD] - \frac 12 [ABCD]=\frac 12[ABCD] = [ABR]+[CDR].$$
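The key step — that $[ABX]+[CDX]$ stays equal to $\tfrac12[ABCD]$ as $X$ moves along the line $MN$ — can be spot-checked numerically with signed areas. This is my own sketch, using one arbitrarily chosen convex quadrilateral:

    import random

    def tri(p, q, r):
        # oriented (signed) area of triangle pqr
        return ((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])) / 2

    def quad(a, b, c, d):
        # oriented area of quadrilateral abcd (shoelace, split along diagonal ac)
        return tri(a, b, c) + tri(a, c, d)

    A, B, C, D = (0, 0), (8, 1), (9, 6), (2, 7)   # convex, not a parallelogram
    M = ((A[0]+C[0])/2, (A[1]+C[1])/2)            # midpoint of AC
    N = ((B[0]+D[0])/2, (B[1]+D[1])/2)            # midpoint of BD

    for _ in range(5):
        t = random.uniform(-3, 3)                 # arbitrary point X on line MN
        X = (M[0] + t*(N[0]-M[0]), M[1] + t*(N[1]-M[1]))
        total = tri(A, B, X) + tri(C, D, X)
        assert abs(total - quad(A, B, C, D)/2) < 1e-9
    print("[ABX] + [CDX] = [ABCD]/2 for every X on line MN")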
{ "language": "en", "url": "https://math.stackexchange.com/questions/1698945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Why is $r^2-9=0$ the characteristic equation for $y'-9y=0$ I'm trying to learn how to solve second order differential equations and I don't understand something here: http://tutorial.math.lamar.edu/Classes/DE/SecondOrderConcepts.aspx Question says find two solutions to $y'-9y=0$ It then says the characteristic equation is: $r^2-9=0 \Rightarrow +-3$ and I understand why $-3$ and $3$ are the solutions, but not the equation. So before I can understand the two solutions I just need help understanding the characteristic equation. I'm specifically confused at how $ar^2+br+c=0$ is related to $r^2-9=0$? I would think it would be $r-9=0$ since $y'$ is the first derivative. Sorry if this is a silly question!
At first, observe that $y(x) \equiv 0$ solves the equation. For $y \not \equiv 0$ the usual ansatz for such equations is $y(x) = Ae^{\lambda x}$ with $\Bbb C \ni A \neq 0$ and $\lambda \in \Bbb C$, since every derivative of this function is just a constant multiple of the function itself. Assuming that you mean the equation $y''-9y=0$ (since you said second order ODE), you have to insert the ansatz into the equation: $$A \lambda^2 e^{\lambda x} - 9Ae^{\lambda x} = 0$$ Since $Ae^{\lambda x} \neq 0$ for all $x$, we may divide by it and obtain $$\lambda^2 - 9 = 0,$$ which is the characteristic equation for this ODE.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Transform complex exponential integral to real Question: Transform $J_n (x)=\frac{1}{2 \pi} \int_{\theta=-\pi}^\pi e^{i(n\theta-x\sin\theta)}$ On answer sheet: $J_n (x)=\frac{1}{\pi} \int_{\theta=0}^\pi \cos(n\theta-x\sin\theta)$ But how? I am not very good at complex analysis, thanks for any help!
By Euler's formula, $$e^{i(n\theta-x\sin\theta)}=\cos(n\theta-x\sin\theta)+i\sin(n\theta-x\sin\theta).$$ The imaginary part, $\sin(n\theta-x\sin\theta)$, is an odd function of $\theta$, so its integral over $[-\pi,\pi]$ vanishes and only the real part survives: $$\frac1{2\pi}\int_{-\pi}^\pi e^{i(n\theta-x\sin\theta)}\,d\theta=\frac1{2\pi}\int_{-\pi}^\pi\cos(n\theta-x\sin\theta)\,d\theta.$$ Since the integrand on the right is an even function of $\theta$, we get $$\frac1{2\pi}\int_{-\pi}^\pi\cos(n\theta-x\sin\theta)\,d\theta=2\left(\frac1{2\pi}\int_0^\pi\cos(n\theta-x\sin\theta)\,d\theta\right)=\frac1{\pi}\int_0^\pi\cos(n\theta-x\sin\theta)\,d\theta$$
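As a rough numerical cross-check (my own addition; the step count and test values are arbitrary), the two forms can be compared with a crude midpoint Riemann sum:

    import cmath, math

    def complex_form(n, x, steps=20000):
        # (1/2π) ∫_{-π}^{π} e^{i(nθ - x sin θ)} dθ via a midpoint Riemann sum
        h = 2 * math.pi / steps
        total = 0j
        for k in range(steps):
            theta = -math.pi + (k + 0.5) * h
            total += cmath.exp(1j * (n * theta - x * math.sin(theta)))
        return total * h / (2 * math.pi)

    def real_form(n, x, steps=20000):
        # (1/π) ∫_0^{π} cos(nθ - x sin θ) dθ
        h = math.pi / steps
        total = 0.0
        for k in range(steps):
            theta = (k + 0.5) * h
            total += math.cos(n * theta - x * math.sin(theta))
        return total * h / math.pi

    for n, x in [(0, 1.0), (1, 2.5), (3, 0.7)]:
        print(n, x, complex_form(n, x), real_form(n, x))   # agree up to quadrature error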
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Hamilton paths/cycles in grid graphs Let G be a grid graph with m rows and n columns, i.e. m = 4, n = 7 is shown here: For what values of m and n does G have a Hamilton path, and for what values of m and n does G have a Hamilton cycle? So far I've figured out that a grid graph always has a Hamilton path, and has a Hamilton cycle when at least one of m or n is even. I'm struggling to provide justification as to why this is true...
HINT: You simply need to explain carefully how to produce the desired paths.

* There is always a Hamilton path that simply traverses the rows in alternate directions.
* If, say, $m$ is even, as in your example, you can generalize the following idea (which I’ve left slightly incomplete). The evenness of $m$ is what makes the idea work.

    *-->*-->*-->*-->*
                    |
                    V
    *   *<--*<--*<--*
        |
        V
    *   *-->*-->*-->*
                    |
                    V
    *<--*<--*<--*<--*
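For concreteness, here is a small Python sketch (my own illustration, not from the answer) that builds the row-snaking Hamilton path for any $m\times n$ grid and the cycle suggested by the picture for even $m$, and checks that consecutive cells are adjacent:

    def hamilton_path(m, n):
        # boustrophedon path: traverse the rows in alternate directions
        return [(r, c if r % 2 == 0 else n - 1 - c) for r in range(m) for c in range(n)]

    def hamilton_cycle(m, n):
        # m even: snake through columns 1..n-1 row by row, then return up column 0
        cells = []
        for r in range(m):
            cols = range(1, n) if r % 2 == 0 else range(n - 1, 0, -1)
            cells.extend((r, c) for c in cols)
        cells.extend((r, 0) for r in range(m - 1, -1, -1))   # back up the first column
        return cells

    def is_valid(cells, m, n, cycle=False):
        pairs = list(zip(cells, cells[1:] + ([cells[0]] if cycle else [])))
        adjacent = all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1 for a, b in pairs)
        return adjacent and len(set(cells)) == m * n == len(cells)

    print(is_valid(hamilton_path(4, 7), 4, 7))            # True: path exists for all m, n
    print(is_valid(hamilton_cycle(4, 7), 4, 7, cycle=True))  # True when m is even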
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Find the necessary and sufficient condition on $g$ for which $f$ is differentiable at $0$ Suppose two functions $f,g: \mathbb{R} \to \mathbb{R}$ satisfy $f(x) = xg(x)$ for all $x \in \mathbb{R}$. Find the necessary and sufficient condition on $g$ for which $f$ is differentiable at $0$. In such case find $f'(0)$. We have $f'(0) = xg'(0) +g(0)$ and thus $f'(0) = g(0)$ if $g'(0)$ exists. So in order for $f'(0)$ to exist $g$ must be differentiable at $0$?
Using the definition we have $\lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}h =\lim_{h\rightarrow 0}\frac{hg(h)-0\cdot g(0)}h =\lim_{h\rightarrow 0}g(h)$, so $f'(0)$ exists iff $\lim_{h\rightarrow 0}g(h)$ exists (in particular, this holds if $g$ is continuous at $0$), and in that case $f'(0)=\lim_{h\rightarrow 0}g(h)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
bad explanation in how to solve the cubic The question asked me to solve $t^3+pt+q=0$ where $p,q$ real and $27q^2+4p^3<0$ using the identity $\cos{3\theta}=4\cos^3{\theta}-3\cos{\theta}$. Answer goes like this and I have stopped where it just makes no sense at all to me. Since the discriminant $-(27q^2+4p^3)>0$ implies there are three real roots this is the case here. Let $t=v \cos{\theta}$. Then $$0=t^3+pt+q=v^3\cos^3{\theta}+pv\cos{\theta}+q \implies 4\cos^3{\theta}+\frac{4p}{v^2}\cos{\theta}+\frac{4q}{v^3}=0$$ as $p<0$ we may solve $v=\sqrt{-\frac{4p}{3}}$ for real $v$ and blah blah.. Whoa, where did that last statement randomly pop out from? What does it mean "as $p<0$"? Reverse engineering tells me that this only holds given that $1=-\frac{4p}{3v^2}$ is true. Where in heaven's name does that come from the equation above? Badly explained if you ask me, I cannot figure out how they Harry Potter-ed $v=\sqrt{-\frac{4p}{3}}$ right there. What is going on? Does someone see the bizarre magic going on behind it? Great if someone would explain the trick to me because right now it just appeared out of thin air to me
Since $27q^2+4p^3<0$, we have $p^3<-(27/4)q^2\le 0$, so $p<0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Use Cauchy Product to Find Series Representation of $[\ln(1 + x)]^2$ Problem: Let $f(x) = [\ln(1 + x)]^2$. Use the series for the logarithm to compute that \begin{align*} f(x) = [\ln(1 + x)]^2 = \sum_{n = 2}^{\infty}(-1)^n\Bigg(\sum_{k = 1}^{n - 1} \frac{1}{(n - k)k}\Bigg) x^n. \end{align*} Use this to evaluate the 5th derivative of $f$ evaluated at $0$. My attempt: Using $\ln(1 + x) = \sum_{k = 1}^{\infty} (-1)^{k - 1} \frac{x^k}{k}$, the Cauchy product of $f(x) = (\ln(1 + x))(\ln(1 + x))$ gives us \begin{align*} f(x) &= \sum_{n = 1}^{\infty} \Bigg(\sum_{k = 1}^{n} \frac{(-1)^{n - k - 1}}{(n - k)} \Bigg(\frac{(-1)^{k - 1}}{k}\Bigg)\Bigg)x^n\\ &= \sum_{n = 1}^{\infty} \Bigg(\sum_{k = 1}^{n} \frac{(-1)^{n - 2}}{(n - k)k} \Bigg)x^n\\ &= \sum_{n = 1}^{\infty} \Bigg(\sum_{k = 1}^{n} \frac{(-1)^{n - 2}}{(n - k)k} \Bigg)x^n\\ &= \sum_{n = 1}^{\infty} (-1)^{n}\Bigg(\sum_{k = 1}^{n} \frac{1}{(n - k)k} \Bigg)x^n.\\ \end{align*} This has the form of the solution but I'm having trouble shifting the indices. As for using it to find the 5th derivative, I just need to take the 5th derivative of the series and plug in $x = 0$? Thanks for your help!
Note that for $-1<x< 1$ we have $$\begin{align} \log^2(1+x)&=\sum_{k=1}^\infty\sum_{m=1}^\infty\frac{(-1)^{k+m}x^{k+m}}{k\,m}\tag 1\\\\ &=\sum_{k=1}^\infty\sum_{n=k+1}^\infty\frac{(-1)^{n}x^{n}}{k\,(n-k)} \tag 2\\\\ &=\sum_{n=2}^\infty\sum_{k=1}^{n-1}\frac{(-1)^{n}x^{n}}{k\,(n-k)} \tag 3\\\\ \end{align}$$ as was to be shown! NOTES: In going from $(1)$ to $(2)$, we introduced a new index $n=k+m$. Then, with $m=n-k$, the lower limit for $n$ begins at $k+1$. In going from $(2)$ to $(3)$, we interchanged the order of summation and note that $n\ge 2$ from $(2)$.
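As a cross-check (my own addition), the coefficient formula can be compared with the Cauchy product computed directly from the logarithm series, and the fifth derivative at $0$ read off as $5!$ times the $x^5$ coefficient:

    from fractions import Fraction
    from math import factorial

    def coeff(n):
        # coefficient of x^n in [ln(1+x)]^2 from the stated formula
        return (-1) ** n * sum(Fraction(1, (n - k) * k) for k in range(1, n))

    def coeff_direct(n):
        # Cauchy product of the series ln(1+x) = sum (-1)^(k-1) x^k / k with itself
        a = lambda k: Fraction((-1) ** (k - 1), k)
        return sum(a(k) * a(n - k) for k in range(1, n))

    for n in range(2, 10):
        assert coeff(n) == coeff_direct(n)

    print("f^(5)(0) =", factorial(5) * coeff(5))   # -> -100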
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $\int_0^\infty e^{-nx}x^{s-1}dx = \frac {\Gamma(s)}{n^s}$? Why is the following equation true?$$\int_0^\infty e^{-nx}x^{s-1}dx = \frac {\Gamma(s)}{n^s}$$ I know what the Gamma function is, but why does dividing by $n^s$ turn the $e^{-x}$ in the integrand into $e^{-nx}$? I tried writing out both sides in their integral forms but $n^{-s}$ and $e^{-x}$ don't mix into $e^{-nx}$. I tried using the function's property $\Gamma (s+1)=s\Gamma (s)$ but I still don't know how to turn it into the above equation. What properties do I need?
Note that the definition of the gamma function is: $$\Gamma(s)=\int_0^{\infty} x^{s-1}e^{-x}dx$$ So, taking $y=nx$, so that $dy=n\,dx$ and $x=\frac{y}{n}$, yields: $$\int_0^{\infty}e^{-nx}x^{s-1}dx=\int_0^{\infty}e^{-y}\left(\frac{y}{n}\right)^{s-1}\frac{dy}{n}=\frac{1}{n^s}\int_0^{\infty} y^{s-1}e^{-y}dy=\frac{\Gamma(s)}{n^s}$$
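A rough numerical check (my own sketch; the truncation point and step count are arbitrary, and $s>1$ is used so the integrand is bounded near $0$):

    import math

    def integral(n, s, upper=50.0, steps=100000):
        # midpoint Riemann sum for ∫_0^upper e^{-n x} x^(s-1) dx
        h = upper / steps
        return sum(math.exp(-n * (k + 0.5) * h) * ((k + 0.5) * h) ** (s - 1)
                   for k in range(steps)) * h

    for n, s in [(1, 2.0), (2, 3.5), (5, 1.7)]:
        print(n, s, integral(n, s), math.gamma(s) / n ** s)   # the two columns agree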
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Asymptotic relations at infinity I am attempting to show that If $f(x) - g(x) \ll 1,\, x \to \infty$, then $e^{f(x)}\sim e^{g(x)}, \,x\to \infty$ From the first line, I am able to show that $$ \lim_{x\to \infty} \frac{f(x) - g(x)}{1} = 0$$ from which it is clear that $$ \lim_{x\to \infty}\left[f(x) - g(x)\right] = 0$$ However, I am a little stuck at this point. Where do exponentials come in? Any help/hints much appreciated.
As you've shown $$f(x)-g(x)\to 0,\quad\text{as}\, x\to\infty$$ So $$\frac{e^{f(x)}}{e^{g(x)}}=e^{f(x)-g(x)}\to 1$$ All you need is $$e^v\to 1,\quad\text{as}\,v\to 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linear system 2 unknowns There are $x$ white and $y$ black pearls and their ratio is $z$. If I add six black and six white pearls, the ratio doubles. I did the following: $ \frac{x+6}{y+6} = \frac{2x}{y}$ and then I get $xy -6(2x-y)=0$. I can find solutions by guessing. Is there any other way? ADDED: Now I have to solve the problem for $ \frac{c(x+y)+6}{x+y+12} = 2c $ and once again I am lost in factoring out variables.
$\frac{x}{y} = z$ and $\frac{x+6}{y+6} = 2z$ $\implies x = zy$ and $\frac{zy+6}{y+6} = 2z$ $\implies 6 = z(y+12) \implies z = \frac{6}{y+12}$ $\implies x = zy = \frac{6y}{y+12}$ Since $x$ and $y$ are numbers of pearls, they must be positive integers. That is, $(y+12) \mid 6y$, and since $6y = 6(y+12) - 72$, this is equivalent to $(y+12) \mid 72$. The divisors of $72$ that exceed $12$ are $18, 24, 36, 72$, giving $y = 6, 12, 24, 60$. Thus all possible $(x,y)$ pairs are $(2,6), (3,12), (4,24), (5,60)$.
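A brute-force check (my own addition, using exact rational arithmetic) confirms that these four pairs are the only positive integer solutions with $y$ up to 1000:

    from fractions import Fraction

    solutions = []
    for y in range(1, 1001):                      # number of black pearls
        x = Fraction(6 * y, y + 12)               # forced by z = x/y and (x+6)/(y+6) = 2z
        if x.denominator == 1:
            z = Fraction(x.numerator, y)
            assert Fraction(x.numerator + 6, y + 6) == 2 * z
            solutions.append((x.numerator, y))
    print(solutions)                              # [(2, 6), (3, 12), (4, 24), (5, 60)]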
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In how many ways can the numbers $1, 2, 3, \ldots, 100$ be arranged around a circular table such that any two adjacent numbers differ by at most $2$? In how many ways can the numbers $1, 2, 3, \ldots, 100$ be arranged around a circular table such that any two adjacent numbers differ by at most $2$?
First place ‘1’ at some position. On either side of it you can place only two numbers (‘2’ and ‘3’). Let us place ‘2’ to the left of ‘1’ and ‘3’ to the right of ‘1’. Now to the left of ‘2’ you can place only ‘4’ (‘3’ has already been placed to the right of ‘1’). Proceeding in this way, all the even numbers end up next to each other, forming one half of the circle, and the odd numbers are together, forming the other half. So there is only one arrangement. If clockwise and anticlockwise arrangements are counted as different, then the answer is two possible arrangements.
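The forcing argument can be cross-checked by brute force for small $n$ (this sketch is my own; it fixes ‘1’ in one seat to quotient out rotations, so it reports 2 for the two directions):

    from itertools import permutations

    def count_arrangements(n):
        # circular arrangements of 1..n with adjacent entries differing by at most 2,
        # with '1' fixed in the first seat (rotations identified, reflections not)
        count = 0
        for perm in permutations(range(2, n + 1)):
            circle = (1,) + perm
            if all(abs(circle[i] - circle[(i + 1) % n]) <= 2 for i in range(n)):
                count += 1
        return count

    for n in (4, 5, 6, 7, 8, 9):
        print(n, count_arrangements(n))   # each prints 2: one circle and its mirror image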
{ "language": "en", "url": "https://math.stackexchange.com/questions/1699960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $p>3$ and $p+2$ are twin primes then $6\mid p+1$ I have to prove that if $p$ and $p+2$ are twin primes, $p>3$, then $6\ |\ (p+1)$. I figure that any prime number greater than 3 is odd, and therefore $p+1$ is definitely even, therefore $2\ |\ (p+1)$. And if I can somehow prove $3\ |\ (p+1)$, then I would be done. But I'm not sure how to do that. Help!
The easy way. Note that one of $p,p+1,p+2$ must be divisible by $3$, since they are three consecutive numbers, and since $p$ and $p+2$ are prime, that must be $p+1$. We can do the same to show that $p+1$ is divisible by $2$. Looking modulo $6$. We can look $\mod 6$. We see that \begin{align} 6k+0\equiv 0\mod 6&\Rightarrow 6|6k+0\\ 6k+1\equiv 1\mod 6&\Rightarrow \text{possibly prime}\\ 6k+2\equiv 2\mod 6&\Rightarrow 2|6k+2\\ 6k+3\equiv 3\mod 6&\Rightarrow 3|6k+3\\ 6k+4\equiv 4\mod 6&\Rightarrow 2|6k+4\\ 6k+5\equiv 5\mod 6&\Rightarrow \text{possibly prime}\\ \end{align} So for a number to be prime it must be either of the form $6k+1$ or $6k-1$ (equivalent to $6k+5$ since $6k-1=6(k-1)+5$). So if you have two primes, $p$ and $p+2$, then $p=6k-1$ and $p+2=6k+1$ for some $k$; thus, $p+1=6k$ is a multiple of $6$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Possibility of determining a certain length in a triangle. The figure is at the bottom of the question. Suppose I have a triangle $ABC$ with an additional point $M$ at the middle of $AB$. Suppose that I know the following three quantities, where $\ell(\cdot)$ represents the length of the line segment. * *$y:=\ell(AM)$ *$k:=\ell(AC) - \ell(CM)$ *$l:=\ell(BC) - \ell(CM)$ Is it possible to determine $x:=\ell(CM)$? And if so, is this value unique? Thank you in advance.
The 2nd and 3rd values determine two hyperbolas (with foci respectively $M,A$ and $M,B$). The point C is the intersection of these two hyperbolas. So yes, it's possible. We first draw $A,B,M$, this we can obviously do. Then we draw the two hyperbolas, we see where they intersect, and that determines $C$ (I do not say it's a unique $C$, maybe there are more than $1$ or maybe $0$ solutions for $C$, I didn't investigate that). See also: Hyperbola
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find closed formula for $a_{n+1}=(n+1)a_{n}+n!$ $a_{n+1}=(n+1)a_{n}+n!$ where a0=0 and n>=0. To get the closed form, I'm trying to find an exponential generating function for the above recurrence, but it doesn't seem to be very nice. Am I going about this the wrong way? If so, are there any helpful tricks I can use to solve this recurrence for a(n) explicitly? Thanks.
Dividing by $(n+1)!$ gives us that $$\frac{a_{n+1}}{(n+1)!}=\frac{a_{n}}{n!}+\frac{1}{n+1}$$ Now substituting $\frac{a_{n}}{n!}=b_{n}$ $$b_{n+1}=b_{n}+\frac{1}{n+1}$$ Since $b_0=a_0=0$, this telescopes to $b_{n}=H_n=\sum_{k=1}^{n}\frac1k$, the $n$-th harmonic number, and therefore $a_n=n!\,H_n$.
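A quick check of the resulting closed form against the recurrence (my own sketch, using exact arithmetic):

    from fractions import Fraction
    from math import factorial

    def harmonic(n):
        return sum(Fraction(1, k) for k in range(1, n + 1))

    a = Fraction(0)                              # a_0 = 0
    for n in range(0, 15):
        assert a == factorial(n) * harmonic(n)   # a_n = n! * H_n
        a = (n + 1) * a + factorial(n)           # a_{n+1} = (n+1) a_n + n!
    print("a_n = n! * H_n verified for n = 0..14")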
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Puzzle: Each entry in a number grid is the average of its neighbors I'm trying to solve the following puzzle: Each number should be the average of its four neighbors. For example, $x$ should be equal to $\frac{1}{4}(4+10+y+z)$. I don't know how to make a formula out of it. What's the trick? Can anyone give me a clue ?
This is a Laplace equation with Dirichlet boundary condition, you can construct a matrix and solve it using Matlab, or you can use Jacobi iteration to solve it manually as the number of unknowns is very small. You start with all unknowns set to zero. Then starting from the boundary, replace each unknown with the average of its four neighbors. Since this is a puzzle, the answer is likely to be integers so you can round the result safely. After two or three iterations you get the answer. Mine is: $$ 6, 5, 3, 2, 1 \\ 5, 4, 4, 2, 1 $$
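The puzzle's grid is only shown as an image in the question and is not reproduced above, so the boundary numbers in the following sketch are invented purely to illustrate the Jacobi iteration the answer describes; replace them with the puzzle's actual givens. Interior cells (the unknowns) start at 0 and are repeatedly replaced by the average of their four neighbours:

    rows, cols = 4, 5
    grid = [[0.0] * cols for _ in range(rows)]
    # made-up fixed boundary values (top, bottom, and the interior ends of the side columns)
    grid[0] = [7.0, 6.0, 4.0, 2.0, 1.0]
    grid[rows - 1] = [5.0, 4.0, 3.0, 2.0, 1.0]
    grid[1][0], grid[2][0] = 6.0, 5.0
    grid[1][cols - 1], grid[2][cols - 1] = 1.0, 1.0

    for _ in range(500):                      # Jacobi sweeps
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = (grid[i-1][j] + grid[i+1][j] + grid[i][j-1] + grid[i][j+1]) / 4
        grid = new

    for row in grid:
        print([round(v, 2) for v in row])     # each interior value ≈ average of its neighbours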
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is A open or closed? Is it a bounded set? Verify if the given set $A$ it's open or closed. Also verify if it's bounded. a) $A = \{ (x,y) \in \mathbb{R}^2 \mid x^2 + y^2 < 1\}$ I just want the method for verify this things in case of this example, because I have items a,b,c,d,e to do. The other sets are usually points of $\mathbb{R}^2$.
Some highlights: suppose $\;\{(x_n,y_n)\}\subset \Bbb R^2\setminus A\;$ is convergent, say $$\begin{cases}x_n\xrightarrow[n\to\infty]{}x_0\\{}\\y_n\xrightarrow[n\to\infty]{}y_0\end{cases}$$ Since $x_n^2+y_n^2\ge 1$ for every $n$ and limits preserve non-strict inequalities, we get $x_0^2+y_0^2\ge 1$, i.e. $(x_0,y_0)\in\Bbb R^2\setminus A$. Thus the complement of $\;A\;$ is closed, so $\;A\;$ is open. It also is trivially bounded, being contained in the closed unit disc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Irreducible and recurrent Markov chain - theorem notation question In [J. R. Norris] Markov Chains (Cambridge Series in Statistical and Probabilistic Mathematics) (2009), page 35, Theorem 1.7.5 says: In (ii), does it mean $\gamma^k$ is notation for $\gamma^k_i$ over all $i \in I$ or for some fixed arbitrary $i \in I$? I think it's the latter but I just want to make sure that I'm interpreting this nice and compact way of writing it correctly.
Your first interpretation is correct: for example if $I=\{1,2,3\}$, then $\gamma^k$ is short for $(\gamma_1^k,\gamma_2^k,\gamma_3^k)$. Another way this is notated is $(\gamma_i^k)_{i \in I}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\{9^n|n\in \mathbb Q\} = \{3^n|n\in \mathbb Q\}$ The statement is: $\{9^n|n\in \mathbb Q\} = \{3^n|n\in \mathbb Q\}$. I am more familiar with traditional proofs. Do I just split up $9^n$ into $3^{n}\times 3^{n}$?
Observe that for any $\;n\in\Bbb Q\;$ $$\begin{cases}9^n=3^{2n}\\{}\\3^{n}=9^{n/2}\end{cases}\;\;\;\implies\left\{\,9^n\;:\;n\in\Bbb Q\right\}=\left\{\,3^n\;:\;n\in\Bbb Q\right\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Winding number of composition of maps If $f,g:S^{1}\rightarrow S^{1}$ maps, show that $N(f\circ g)=N(f)N(g)$, where $N(f)$ is the winding number of $f$. We defined the winding number of $f$ to be $N(f)=\frac{1}{2\pi}(\tilde{f}(1)- \tilde{f}(0) )$ where $\tilde{f}$ is a lift of $f$. I tried to look for a lift of $f \circ g$ but couldn't manage to find anything that makes sense for this problem. Can anybody help me, please? Thank you in advance!
This may be a bit "cheating", but anyway... If $n = N(f)$ and $m = N(g)$, we know from the calculation of $\pi_1(S^1)$ that $f$ is homotopic to $z \mapsto z^n$ and $g$ is homotopic to $z \mapsto z^m$ (two maps with the same winding number are homotopic). But then $f \circ g$ is homotopic to $z \mapsto (z^m)^n = z^{mn}$, which has winding number $mn = N(f)N(g)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proof similarity transformations are bijective. I'm trying to prove that similarity transformations from $\mathbb{R}^2$ to $\mathbb{R}^2$ are bijective, I've already proven its injectivity by contradiction, so I'm really just stuck on the surjectivity. Let's define similarity transformations, we call a function $\psi : \mathbb{R}^2 \longrightarrow \mathbb{R}^2$ a similarity transformation if it is not constant and if there exists an $\alpha \in \mathbb{R}_{>0}$ such that for all $x, y \in \mathbb{R}^2$ so that $x \neq y$ we have $\alpha \cdot d(x,y) = d(\psi(x), \psi(y))$. I proved that a similarity transformation is injective by contradiction, a contradiction follows from the definition of non-injectivity and the fact that $\alpha$ is bigger than zero. Any help would be greatly appreciated!
Hint: Prove $\psi$ is an affine map, and the associated linear map is injective. Hence, as it is an endomorphism in finite dimension, this linear map is bijective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1700921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
For what odd prime p is -3 a quadratic residues? Non-residue? For what odd prime p is -3 a quadratic residues? Non-residue? * *Having a bit of trouble with this question, we are currently covering a section on quadratic reciprocity and didn't really see anything in my notes that helped me solve this. Any help is greatly appreciated.
We have $-3$ is a QR of the odd prime $p\ne 3$ in two situations: (i) $-1$ is a QR of $p$ and $3$ is a QR of $p$ or (ii) $-1$ is a NR of $p$ and $3$ is a NR of $p$. Case (i): $-1$ is a QR of $p$ if $p\equiv 1\pmod{4}$. Since $p\equiv 1\pmod{4}$, by Quadratic Reciprocity we have that the Legendre symbol $(3/p)$ is equal to $(p/3)$. Note that $(p/3)=1$ if $p\equiv 1\pmod{3}$, and $(p/3)=-1$ if $p\equiv -1\pmod{3}$. So Case (i) occurs if $p\equiv 1\pmod{4}$ and $p\equiv 1\pmod{3}$, or equivalently if $p\equiv 1\pmod{12}$. Case (ii): We must have $p\equiv 3\pmod{4}$. In this case, Reciprocity says that since $p$ and $3$ are both of the form $4k+3$, we have $(3/p)=-(p/3)$. Thus $(3/p)=-1$ if $p\equiv 1\pmod{3}$. So Case (ii) happens if $p\equiv 3\pmod{4}$ and $p\equiv 1\pmod{3}$, or equivalently if $p\equiv 7\pmod{12}$. We can put the two cases together, and say that $-3$ is a quadratic residue of the odd prime $p$ if $p\equiv 1\pmod{6}$. Thus $-3$ is a quadratic non-residue of the odd prime $p$ if $p\equiv 5\pmod{6}$.
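Not part of the original answer, but the conclusion ($-3$ is a QR of an odd prime $p>3$ exactly when $p\equiv 1\pmod 6$) is easy to spot-check with Euler's criterion in Python:

    def is_qr(a, p):
        # Euler's criterion: for an odd prime p not dividing a,
        # a is a quadratic residue mod p iff a^((p-1)/2) ≡ 1 (mod p)
        return pow(a % p, (p - 1) // 2, p) == 1

    def primes(limit):
        sieve = [True] * (limit + 1)
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, limit + 1, i):
                    sieve[j] = False
        return [i for i in range(2, limit + 1) if sieve[i]]

    for p in primes(1000):
        if p > 3:
            assert is_qr(-3, p) == (p % 6 == 1)
    print("checked all odd primes 3 < p < 1000")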
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Sequence of Measurable Sets Converge Pointwise to a Measurable Set I was working on the folllowing exercise from Tao's Measure Theory book: We say that a sequence $E_n$ of sets in $R^d$ converges pointwise to another set $E$ if the indicator functions $1_{E_n}$ converge pointwise to $1_E$. Show that if $E_n$ are all Lebesgue measurable and converge pointwise to E then E is measurable also. He gives the hint to use the identity $1_{E(x)} =$ lim $ inf_{n - > \infty} 1_{E_n(x)} $to write E in terms of countable unions and intersections of $E_n$. If instead we changed the assumptions such that the indicator functions converged uniformly to $1_E$ on the domain $\cup E_n \bigcup E$ then there exists an $n$ such that $E = \cup_{m > n} E_m$. However I am not quite sure how to work with the weaker assumption of pointwise convergence. Any help would be appreciated!
This is basically a baby version of the following result (you can skip to bottom). If $(f_n)$ are measurable and $f_n \to f$ pointwise, then $f$ is measurable. Why ? Because of pointwise convergence $$\{x:f(x)> a\}=\cup_{n=1}^{\infty} \{x: f_m(x) > \alpha \mbox{ for all } m \ge n\}.$$ Read the RHS as all $x$ such that $f_m(x)>\alpha$ for all $m$ (possibly depending on $x$) large enough. We can rewrite the RHS as $$ \cup_{n=1}^\infty \cap_{m\ge n} \{x:f_m(x)>\alpha\}.$$ The sets on the RHS are all measurable. The result follows. Note that enough to consider $\alpha=0$, in which case the set $\{x:f_m(x)>\alpha\}$ is $E_m$. In other words, $$E = \cup_{n=1}^\infty \cap_{m\ge n} E_m,$$ and this identity is all you need, and which you can derive directly (with the same argument -- just follows from the definition of pointwise convergence: a point is in $E$ if and only if for all $m$ large enough belongs to $E_m$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to determine if a probability problem has a binomial distribution or not? I'm studying for my Stats midterm and I am confused about the binomial distribution concept. Among these 2 problems, why does the second question have a binomial distribution and not the first question? 1) In a large 2 lb bag of candies, 15% of the candies are green. The chances of pulling out at least one green candy in three tries is... 2) An owner suspects that only 5% of the customers buy a magazine and thinks that he might be able to use the display space to sell something more profitable. What is the probability that exactly 2 of the first 10 customers buy magazines? We already know about SPIN: S(Success/Failure) P(Probability) I(Independent) N(Fixed number of trials) and I'm curious as to why SPIN doesn't apply to the first problem as well? Any help would be appreciated. Thank you.
From the wording of the first situation, it sounds like you are not replacing the candy after you pull it out. Therefore you are not repeating the same fixed experiment three times in a row; rather, you are changing the experiment (and the associated probability of success) with each pick.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve $ 6\sin^2(x)+\sin(x)\cos(x)-\cos^2(x)=5 $ Question: Solve $$ 6\sin^2(x)+\sin(x)\cos(x)-\cos^2(x)=5 $$ $$ 0 \le x \le 360^{\circ} $$ My attempt: $$ 6\sin^2(x)+\sin(x)\cos(x)-\cos^2(x)=5 $$ $$ 6(\frac{1}{2} - \frac{\cos(2x)}{2}) + \sin(x)\cos(x) -(\frac{1}{2} + \frac{\cos(2x)}{2}) = 5 $$ $$ 3 - 3\cos(2x)+ \sin(x)\cos(x) - \frac{1}{2} - \frac{\cos(2x)}{2} = 5$$ $$ \frac{7\cos(2x)}{2} - \sin(x)\cos(x) + \frac{5}{2} = 0 $$ $$ 7\cos(2x) - 2\sin(x)\cos(x) + 5 = 0 $$ $$ 7\cos(2x) - \sin(2x) + 5 = 0 $$ So at this point I am stuck what to do, I have attempted a Weierstrass sub of $\tan(\frac{x}{2}) = y$ and $\cos(x) = \frac{1-y^2}{1+y^2}$ and $\sin(x)=\frac{2y}{1+y^2} $ but I got a quartic and I was not able to solve it.
HINT Perhaps the other methods are easier but to continue where you left off, Realize that $$\sin 2x=7\cos 2x+5$$ Use $$\sin^2 2x+\cos^2 2x=1$$ To make your last equation into a quadratic for $\cos 2x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 2 }
Ansätze to solve 2-dimensional nonlinear integral curve I try to solve the initial value problem analytically $X(t_0) = X_0, X \in \mathbb{R}^2$ for the following integral curve: $x'(t) = -y(t)+x(t) (x(t)^2+y(t)^2-1) \\ y'(t) = x(t)+y(t) (x(t)^2+y(t)^2-1) $ According to the Picard–Lindelöf theorem there should be a solution for at least some period of time since the system is locally Lipschitz continuous. All differential equations I solved analytically so far were either one dimensional (seperation of variables, using $e^\lambda$) or linear (variation of parameters). So far I figured out, that trajectories will converge towards the unit circle. But I still can not figure out a method how to calcluate an analytical solution. Is there a known method to solve these systems? I would be gratefull just for the hint to a method I could use. Or is it completely hopeless to find a closed solution for this problem.
Try the following. We multiply the first equation by $y$ and the second equation by $x$. $$x'y=-y^2+xy(x^2+y^2-1)$$ $$y'x=x^2+xy(x^2+y^2-1)$$ Now subtract the first equation from the second: $$y'x-x'y=x^2+y^2$$ After dividing by $x^2$, we obtain: $$\frac{y'x-yx'}{x^2}=1+\frac{y^2}{x^2}$$ Notice that we have the derivative of $y/x$ on the left side: $$\frac{d}{dt}\left(\frac{y}{x}\right)=1+\frac{y^2}{x^2}$$ Substitute $u(t)=\frac{y}{x}$ to get $$\frac{d u}{dt}=1+u^2$$ This DE has the general solution $u(t)=\tan(t+C)$. Now you can substitute $y(t)=x(t)\tan(t+C)$ into the second equation, which will then involve $x$ and $t$ only.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Product of $W^{1,p}_0$ functions Let $p>n$, and let $f,g\in W^{1,p}_0(\mathbb{R}^n)$ be two Sobolev functions. Prove that $fg\in W^{1,p}_0(\mathbb{R}^n)$. I was able to prove the Leibniz formula for weak derivatives, but I still do not understand why the product should be in $L^p$. Probably it is Morrey's inequality or something like that, but I cannot figure it out.
Just a slight correction to @Diesirae92's answer, for posterity's sake: we can't really write $$||f(g_n - g)||_{W^{1,p}(\mathbb{R}^n)} \leq ||f||_{\infty} ||(g_n - g)||_{W^{1,p}(\mathbb{R}^n)}$$ as the $W^{1,p}$ norm also involves the derivative, which may not be bounded. Instead, what we should write is \begin{equation} ||f(g_n - g)||_{W^{1,p}(\mathbb{R}^n)} = ||f(g_n - g)||_{L^p} + \sum_i || \frac{\partial}{\partial x_i} f(g_n - g)||_{L^p} \end{equation} We can take care of the first term as done above, by taking out the infinity norm of $f$, but the second one needs more work: $$|| \frac{\partial}{\partial x_i} f(g_n - g)||_{L^p}\leq || \frac{\partial f}{\partial x_i} (g_n - g)||_{L^p} + ||f \frac{\partial}{\partial x_i}(g_n - g)||_{L^p} \leq ||g_n - g||_{L^\infty} ||\frac{\partial f}{\partial x_i}||_{L^p} + ||f||_{L^\infty} ||\frac{\partial}{\partial x_i}(g_n - g)||_{L^p}$$ and this goes to $0$, as $||g_n - g||_{L^\infty} \rightarrow 0$ (by continuity of the embedding $W^{1,p} \hookrightarrow L^\infty$) and $||\frac{\partial f}{\partial x_i}||_{L^p} < \infty$, and $||f||_{L^\infty} < \infty$ and $||\frac{\partial}{\partial x_i}(g_n - g)||_{L^p} \rightarrow 0$. Doing the same for the other term, $||(f_n - f)g_n||_{W^{1,p}}$, we get the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the probability of dealing all $52$ cards in a standard well shuffled deck getting a single quad? I would like to know if you take a well shuffled deck of $52$ cards and then deal out all of them one at a time without replacement, what is the probability that they will be dealt such that there is exactly one quad dealt in order such as $4$ consecutive kings? That run of $4$ has to be the only run of that length, there cannot be any others. For example, 5, ... ,K,K,K,K,7,J,J,J,J is a "loser". Other than computer simulation I am not sure how to solve this.
Joriki provides the elegant solution ... but also shows the value of checking theoretical work via simulation. The discussion got me playing with simulating it in Mathematica, and I thought I would post some code. A pack of cards can be represented by the set: cards = Table[Range[1, 13], 4] // Flatten Sampling without replacement, and dealing out a hand of 52 cards i.e. RandomSample[cards] ... and then counting all the cases where there are four identical cards in a row {x_,x_,x_,x_}, and doing it 10 million times: fourRow = ParallelTable[Count[Split[RandomSample[cards], SameQ], {x_,x_,x_,x_}], {10^7}]; All done. The empirical distribution is given by: Tally[fourRow] {{0, 9976557}, {1, 23415}, {2, 28}} In other words, out of 10 million drawings, there was: * *exactly 1 sequence of 4 identical cards 23,415 times (out of 10 million) *exactly 2 sequences of 4 identical cards just 28 times (out of 10 million) The above Monte Carlo simulation (10 million times) takes about 42 seconds on my Mac. I am sure there are faster ways of doing it, but that's another question :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Write $x_n=22..244...45$ as sum of $2$ squares I've recently came across this problem and, although I've spent time looking for a solution, I don't have any interesting ideas. Let the numbers $$x_1=25$$ $$x_2=2245$$ $$x_3=222445$$ and $x_n=22..244...45$ with $n$ digits $'2'$ and $n-1$ digits $'4'$ Prove that $x_n$ can be written as a sum of two perfect squares for any natural number $n\ge1$. The first impulse was to decompose the number: $$ x_n = 2 \times \left(10^{2n-1}+\dots+10^n\right) + 4 \times \left(10^{n-1}+\dots+10\right)+ 5$$ $$ x_n = 2 \times \frac{10^{2n}-10}{9} + 2 \times \frac{10^{n}-10}{9} + 5$$ $$ x_n = \frac{20}{9} \left(10^{2n-1} - 1 + 10^{n-1} - 1\right) + 5$$ Anyway, this doesn't seem to lead to the good track. A piece of advice or a hint would be apreciated. P.S. I guess that it's because of the Stack Exchange android app, but although I wrote something on the first line ("Hi there"), this doesn't appear in the post. This happened several times.
The simplification in the question gives $x_n = \frac{20}{9}\left(10^{2n-1}+10^{n-1}-2\right)+5=\frac{2\cdot 10^{2n}+2\cdot 10^{n}+5}{9}$, so look for a representation \begin{align*} x_{n} &= \left( \frac{10^{n}+a}{3} \right)^{2}+ \left( \frac{10^{n}+b}{3} \right)^{2} \\ &= \frac{2(10^{2n})+2(a+b)10^{n}+a^{2}+b^{2}}{9} \end{align*} Matching coefficients, it suffices to solve $$\left \{ \begin{array}{ccc} a+b &=& 1 \\ a^{2}+b^{2} &=& 5 \end{array} \right.$$ which gives $$(a,b)=(2,-1),(-1,2),$$ so $x_n=\left(\frac{10^n+2}{3}\right)^2+\left(\frac{10^n-1}{3}\right)^2$, and both fractions are integers since $10^n\equiv 1\pmod 3$.
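A quick numerical check of the resulting identity (my own addition):

    for n in range(1, 8):
        x_n = int("2" * n + "4" * (n - 1) + "5")      # n twos, n-1 fours, then a 5
        a, b = (10 ** n + 2) // 3, (10 ** n - 1) // 3
        assert a * a + b * b == x_n
        print(f"{x_n} = {a}^2 + {b}^2")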
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Evaluate $\binom{m}{i} - \binom{m}{1}\binom {m-1}{ i} + \binom{m}{2}\binom{m - 2}{i} - \ldots + (-1)^{m-i} \binom{m}{m-i}\binom{ i }{i} $ Evaluate the expression $$\binom{m}{i} - \binom{m}{1}\binom {m-1}{ i} + \binom{m}{2}\binom{m - 2}{i} - \ldots + (-1)^{m-i} \binom{m}{m-i}\binom{ i }{i} $$ I'm really stumped about trying to get anything meaningful out of this. For some context I am trying to find the determinant of a matrix consisting of binomial coefficients, and have been given the hint "it may be helpful to evaluate [above expression]".
Here’s a combinatorial version of Semiclassical’s computational argument. You have a box of white balls numbered $1$ through $m$. For some $r$ such that $0\le r\le m-i$ you pick $r$ of the balls and color them red, and then you pick $i$ of the remaining white balls and color them blue; for a given value of $r$ there are $\binom{m}r\binom{m-r}i$ possible outcomes. Your expression is $$\sum_{r=0}^{m-i}(-1)^r\binom{m}r\binom{m-r}i\;;\tag{1}$$ clearly this is the number of outcomes with an even number of red balls minus the number of outcomes with an odd number of red balls. Every outcome has exactly $i$ blue balls. Let $\mathscr{I}$ be the family of all sets of $i$ balls from the box. For each $I\in\mathscr{I}$ and $r$ such that $0\le r\le m-i$ there are $\binom{m-i}r$ outcomes having $r$ red balls and $I$ as their set of blue balls. The contribution of these outcomes to $(1)$ is $$\sum_{r=0}^{m-i}(-1)^r\binom{m-i}r\;,\tag{2}$$ and $|\mathscr{I}|=\binom{m}i$, so $$\sum_{r=0}^{m-i}(-1)^r\binom{m}r\binom{m-r}i=\binom{m}i\sum_{r=0}^{m-i}(-1)^r\binom{m-i}r\;.$$ Finally, $(2)$ is the number of even-sized subsets of $[m-i]$ minus the number of odd-sized subsets of $[m-i]$, and it’s well known that this is $0$ unless $m=i$, in which case it’s $1$. Thus, $$\sum_{r=0}^{m-i}(-1)^r\binom{m}r\binom{m-r}i=\begin{cases} 1,&\text{if }i=m\\ 0,&\text{otherwise}\;. \end{cases}$$
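The identity is easy to confirm numerically (my own check):

    from math import comb

    def alternating_sum(m, i):
        return sum((-1) ** r * comb(m, r) * comb(m - r, i) for r in range(m - i + 1))

    for m in range(10):
        for i in range(m + 1):
            assert alternating_sum(m, i) == (1 if i == m else 0)
    print("equals 1 when i = m and 0 otherwise, for all 0 <= i <= m <= 9")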
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Looking for websites to brush up on algebra skills needed for calculus I've enrolled in an 8 week online Calculus 1 class, we're currently in week 2 and while I understand the calculus concepts (average rate of change, limits) I'm having a hard time on my homework due to not having a strong background in algebra. Are there any websites that I could go check out that would help me brush up on algebra, I'm mostly struggling on factoring, simplifying, square roots, Pi)
Paul's Online Math Notes may be a good place to check out. You can either look it up on google or go to it here http://tutorial.math.lamar.edu/ It can help out with algebra as well as calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How does the formula for length of a graph relate to that of parametric description? So I know how to solve for the length of a regular graph and for a parametric description. Regular graph length formula: $$L = \int_{x=a}^{x=b}\sqrt[]{1 + (f'(x))^2} \,\,\,dx$$ Parametric description length formula: $$\int_{t = a}^{t=b} \sqrt[]{(x'(t))^2+(y'(t))^2} \,\,\,\,dt$$ The form of these two are so radically different that I can't see how one relates to the other. The parametric description uses the form for path velocity but I don't understand why.. If more elaboration is needed, please ask. I'll supply. Highly appreciated, -Bowser
They are radically similar ;) you forgot the integral in your first formula (edit: I see you fixed it now) Notice that you can parametrize the graph of $f$ by $$(t, f(t))$$ Now use the second formula, you will get the first!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1701989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How to show that $2 ^\binom{N}{2} \sim \exp(\frac{N^2}{2}\ln(2))$ How can we show that: $2 ^\binom{N}{2} \sim \exp(\frac{N^2}{2}\ln(2))$ for large N.
$\binom{N}{2}=N(N-1)/2\approx N^2/2$. So $2^{\binom{N}{2}}=\exp(\binom{N}{2}\ln(2))\approx \exp(\frac{N^2}{2}\ln(2))$. Note that $N^2$ is in the exponent, not outside as you have written.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Could we "invent" a number $h$ such that $h = {{1}\over{0}}$, similarly to the way we "invented" $i=\sqrt{-1}$? $\sqrt{-1}$ was completely undefined in the world before complex numbers. So we came up with $i$. $1\over0$ is completely undefined in today's world; is there a reason we haven't come up with a new unit to define it? Is it even possible, or would it create logical inconsistencies? What would be the effect on modern math if we did so?
There are the dual numbers, which is another two dimensional associative algebra over the reals like the complex numbers. The basis elements are 1 and h, where we define h as a nonzero number whose square is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Can an open ball be closed if the open ball contains infinite points? Consider a metric space $(X,d)$. Is the following statement true? An non-finite open ball in X with finite radius is never closed. Non-finite in this sense means that the open ball contains an infinite amount of points.
When $d$ is a metric then $e_1=d/(1+d)$ and $e_2=\min (d,1)$ are metrics equivalent to $d,$ that is, they generate the same topology that $d$ does. Since $\sup e_1\leq 1$ and $\sup e_2\leq 1,$ an open $e_1$-ball or $e_2$-ball of radius $2$ is the whole space, which is both open and closed, regardless of whether it is a finite or infinite space. Another common misconception is that $\{y:d(y,x)\leq r\}$ is always the closure of $\{y:d(y,x)<r\}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Let $G$ be a simple graph on $10$ vertices and $38$ edges. Prove that $G$ contains $K_4$ as an induced subgraph. Let $G$ be a simple graph on $10$ vertices and $38$ edges. How do I prove that $K_4$ is the induced subgraph of G?
Hint: There are only $7$ edges missing from $K_{10}$. Each missing edge can prevent up to $28$ different 4-vertex sets from inducing a $K_4$. How many 4-vertex sets are there?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
When a graded ring is Cohen-Macaulay? I am trying to solve exercise 19.10 from Eisenbud's Commutative Algebra. I want to show that if $R=k[x_0,...,x_n]/I$ is a graded ring, then $R$ is Cohen-Macaulay iff $R_{\mathfrak p}$ is Cohen-Macaulay, where $\mathfrak p=(x_0,...,x_n)$. The hint is to use the graded Auslander-Buchsbaum formula. I believe currently I am able to prove if $R_P$ is Cohen-Macaulay ring, then codim($p$)=depth($p$), but I do not know how to proceed. Is it enough to prove this homogeneous maximal ideal satisfies Cohen-Macaulay condition? If it is not, could someone give some hint? Thanks!
This is consequence of Bruns-Herzog, Exercise 2.1.27 (c): Let $R$ be a Noetherian graded ring and $M$ a finite graded R-module. Suppose in addition that $(R,m)$ is $^*$local. Then $M$ is Cohen-Macaulay if and only if $M_m$ is. Assuming that we know proofs of former parts, this part can be easily proved: Let $p$ be a graded prime ideal. Then $p\subseteq m.$ Hence $M_p=(M_m)_{pR_m}$ is Cohen-Macaulay. So $M$ is Cohen-Macaulay, by part (b):(ii)$\to$ (i).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Decreasing perpetuity problem A perpetuity pays 1000 immediately. The second payment is 97% of the first payment and is made at the end of the fourth year. Each subsequent payment is 97% of the previous payment and is paid four years after the previous payment. Calculate the present value of this annuity at an annual effective rate of 8%. My attempt: Let X denote the present value. $x/(1+i) + 0.97x/(1+i)^5 + 0.97^2x/(1+i)^9 + ...$ = 1000 where $i = 0.08$, however I am not sure how to solve this or if this is correct
This is a perpetuity due decreasing in geometric progression and payable less frequently than interest is convertible. The effective interest rate per period is $$ i=(1+0.08)^4-1=36.05\% $$ and the growing rate is $g=-3\%$ (decreasing). So the perpetuity due has the present value $$ PV=1000\frac{1+i}{i-g}=3,484.07 $$
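A quick numerical cross-check of the closed form against a brute-force sum (my own sketch):

    annual, growth, payment = 0.08, 0.97, 1000.0
    i = (1 + annual) ** 4 - 1                 # effective rate per 4-year period (≈ 36.05%)
    v = 1 / (1 + i)                           # 4-year discount factor

    # brute-force present value of payments 1000, 1000*0.97, ... at t = 0, 4, 8, ...
    pv = sum(payment * growth ** k * v ** k for k in range(2000))

    # closed form for a perpetuity-due decreasing geometrically (g = -3% per period)
    formula = payment * (1 + i) / (i - (-0.03))

    print(round(pv, 2), round(formula, 2))    # both ≈ 3484.07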
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If a, b, c are three natural numbers with $\gcd(a,b,c) = 1$ such that $\frac{1}{a}+\frac{1}{b}=\frac{1}{c}$ then show that $a+b$ is a square. If a, b, c are three natural numbers with $\gcd(a,b,c) = 1$ such that $$\frac{1}{a} + \frac{1}{b}= \frac{1}{c}$$ then show that $a+b$ is a perfect square. This can be simplified to: $$a+b = \frac{ab}{c}$$ Also, first few such examples of $(a,b,c)$ are $(12, 4, 3)$ and $(20, 5, 4)$. So, I have a feeling that $b$ and $c$ are consecutive. I don't think I have made much progress. Any help would be appreciated.
To show this, we note that $c(a+b)=ab$. Now let $g$ be the gcd of $a$ and $b$, which need not necessarily be $1$. Denote $a=a'g$ and $b=b'g$ so that we get $c(a'+b') = a'b'g$. Because $a' + b'$ is relatively prime to both $a'$ and $b'$, it follows that it divides $g$. But g also divides $c(a'+b')$. Further, note that $g$ is coprime to $c$, because it was the gcd of $a$ and $b$, so that gcd($g$,$c$)=gcd($a$,$b$,$c$)=1. It follows that $a'+b'=g$, and therefore that $a+b = (a'+b')g = g^2$. For example, $\frac{1}{6} + \frac{1}{3} = \frac{1}{2}$ and $(3,6)=3$ and $3+6=9=3^2$.
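A brute-force sanity check of the statement (my own addition):

    from math import gcd, isqrt

    for c in range(1, 60):
        for a in range(c + 1, 2000):
            if (a * c) % (a - c) == 0:        # 1/a + 1/b = 1/c  =>  b = ac/(a-c)
                b = a * c // (a - c)
                if gcd(gcd(a, b), c) == 1:
                    s = a + b
                    assert isqrt(s) ** 2 == s, (a, b, c)
    print("a + b was a perfect square in every case checked")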
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Probability that one soccer player will have his favourite t-shirt in the dressing room. In a soccer team there are $11$ players and $11$ t-shirts numbered from $1$ to $11$. The players go in the dressing room one at a time, in a random order. Each one, once he arrives, takes a random t-shirt, except Danilo who prefers the t-shirt numbered $8$ and, if it is still available, he will choose it. What's the probability that Danilo will be able to select his favourite t-shirt? (Correct answer is $\cfrac{6}{11}$) My effort I've approached this problem considering the probability that Danilo will be the $n^{th}$ player going in the dressing room, and for any $n$ this happens with probability $\cfrac{1}{11}$. Then for each case, I've considered the probability that the players before Danilo take the numbered $8$ t-shirt, but this is where I had some conceptual difficulty and looked at the proposed solution of the problem. For example (following the logic of the solution), if we take the case where Danilo is the $3^{rd}$ player going in the dressing room, we have then that the probability that the two players before him take his favourite t-shirt is $\cfrac{2}{11}$, so the probability that Danilo takes his t-shirt is $\cfrac{9}{11}$. So the overall probability of that case would be $P=\cfrac{1}{11}\cdot \cfrac{2}{11}$, and the probability that Danilo takes his t-shirt is $\cfrac{1}{11}\cdot \cfrac{9}{11}=\cfrac{9}{121}$. What doesn't click in my mind is why the probability for the previous two players is $\cfrac{2}{11}$. This doesn't seem to me to be correct, as they can't have the same probability: one of these two, which entered the dressing room first, will have already chosen his t-shirt, so the probability that the second player chooses Danilo's t-shirt must take care of the scenario where one t-shirt has already been taken. It seems to me that $\cfrac{2}{11}$ is only plausible if the players are choosing their t-shirts at the same time (though I also have my doubts about this). Question Can you guys help me make this clear?
$P(\text{Danilo selects his preferred shirt }8 \mid \text{it is still available when he enters})=1$ $P(\text{Danilo enters at any given position})=\frac{1}{11}$ How will Danilo get his preferred t-shirt? This can happen if Danilo goes first and picks his t-shirt, OR goes second and picks his t-shirt, keeping in mind that the person before him must select a t-shirt from the other $10$ available, OR Danilo goes in third and picks his t-shirt provided the two before him pick from the other $9$ available, ... and so on. $P(\text{Danilo goes first and picks his t-shirt})=\frac{1}{11}\cdot 1$ $P(\text{Danilo goes second and picks his t-shirt})=\frac{1}{11}\cdot\frac{10}{11}\cdot 1$ $P(\text{Danilo goes third and picks his t-shirt})=\frac{1}{11}\cdot\frac{10}{11}\cdot\frac{9}{10}$ Similarly you do it for the times he goes in 4th, 5th, ..., 11th. Since each of these mutually exclusive cases guarantees that he picks his t-shirt, you just add all these probabilities and get $$\frac{1}{11}+\frac{10}{11}\cdot\frac{1}{11}+\frac{10}{11}\cdot\frac{9}{10}\cdot\frac{1}{11}+\cdots$$ $$= \frac{1}{11}+\frac{10}{11^2}+\frac{9}{11^2}+\cdots+\frac{2}{11^2}+\frac{1}{11^2}$$ $$=\frac{1}{11}+\frac{10\cdot 11}{2\cdot 11^2}$$ $$=\frac{1}{11}+\frac{5}{11}$$ $$=\frac{6}{11}$$
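A short exact computation and Monte Carlo cross-check (my own addition; labels are arbitrary):

    import random
    from fractions import Fraction

    # exact value: Danilo is k-th with probability 1/11, and the k-1 players before
    # him all miss shirt 8 with probability (12-k)/11
    exact = sum(Fraction(1, 11) * Fraction(12 - k, 11) for k in range(1, 12))
    print(exact)                                 # 6/11

    trials, hits = 200_000, 0
    for _ in range(trials):
        position = random.randrange(11)          # number of players entering before Danilo
        shirts = random.sample(range(1, 12), 11) # random order in which shirts get taken
        if 8 not in shirts[:position]:           # shirt 8 still free when Danilo enters
            hits += 1
    print(hits / trials, float(exact))           # simulation ≈ 0.545 ≈ 6/11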
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Looking for an example of an infinite metric space $X$ such that there exist a continuous bijection $f: X \to X$ which is not a homeomorphism I am looking for an example of an infinite metric space $X$ such that there exists a continuous bijection $f: X \to X$ which is not a homeomorphism. Please help. Thanks in advance.
The Cantor set minus one point. Indeed, in general let $u:U\to V$ is a non-open continuous bijection between topological spaces. Define $X=U\times\mathbf{N}^c\sqcup V\times\mathbf{N}$ (I assume $0\in\mathbf{N}$ and I mean the complement in $\mathbf{Z}$). Define $f(x,n)=(x,n+1)$ for $n\neq 0$ and $f(x,-1)=(u(x),0)$. Then $f$ is a non-open continuous permutation of $X$. We can apply this to $V$ Cantor, $U$ Cantor minus one point. Indeed, fix a point $x\in U$ and let $U'=U\cup\{y\}$ be the 1-point compactification of $U$ and $V$ the quotient of $U'$ identifying $x$ and $y$. Then $V$ is a Cantor space, and the canonical map induced by inclusion $U\to V$ works. Then $X$ is also homeomorphic to $U$. Indeed, first both $U\times\mathbf{N}$ and $V\times\mathbf{N}$ are homeomorphic to $U$, and hence so is their disjoint union $X$. (I'm using that every metrizable, non-compact locally compact, non-empty, perfect and totally disconnected topological space $X$ is homeomorphic to $U$. Indeed, the 1-point compactification of $X$ then satisfies the same properties with non-compact replaced by compact, hence is a Cantor space, so $X$ is homeomorphic to a Cantor space minus a point, and this is unique, because the Cantor space is homogeneous under self-homeomorphisms, e.g., because it admits topological group structures.) Added: For every $(X,f)$, $X$ topological space and $f$ bijective continuous non-open self map of $X$, we can obtain a another connected example taking the cone. So all examples here can be used to provide examples to the same question for restricted to connected spaces (2011 MSE post).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1702979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
explanation for multiplication rule for independent events with|A| > 2 This is my assumption: if A is the set of events: $ A = \{A1, A2, A3\} $ And i want to find out if they are mutually independent, all i have to do is check that the two following conditions hold true: $ r1: P(A1 \cap A2) = P(A1)P(A2) $ $ \land $ $ r2: P(A1 \cap A2 \cap A3) = P(A1)P(A2)P(A3)$ Am i right? thank you very much
A finite set of events is mutually independent if and only if every event is independent of any intersection of the other events — that is, if and only if for every $n$-element subset $\{A_i\}$, $$ \mathrm{P}\left(\bigcap_{i=1}^n A_i\right)=\prod_{i=1}^n \mathrm{P}(A_i) $$ (see here). Hence, we also have to check that $P(A_1\cap A_3)=P(A_1)P(A_3)$ and $P(A_2\cap A_3)=P(A_2)P(A_3)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If the tangent at the point $P$ of an ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ meets the major axis and minor axis at $T$ and $t$ respectively If the tangent at the point $P$ of an ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ meets the major axis and minor axis at $T$ and $t$ respectively and $CY$ is perpendicular on the tangent from the center,then prove that $Tt.PY=a^2-b^2$ Let the point $P$ be $(a\cos\theta,b\sin\theta)$,then the equation of the tangent to the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ is $\frac{x\cos\theta}{a}+\frac{y\sin\theta}{b}=1$. It meets the major axis at $T(a\sec\theta,0)$ and the minor axis at $t(0,b\csc\theta)$ $Tt=\sqrt{a^2\sec^2\theta+b^2\csc^2\theta}$ Now I found $Y$,the foot of perpendicular from center $C$ of the ellipse to the tangent as $(\frac{\frac{\cos\theta}{a}}{\frac{\cos^2\theta}{a^2}+\frac{\sin^2\theta}{b^2}},\frac{\frac{\sin\theta}{b}}{\frac{\cos^2\theta}{a^2}+\frac{\sin^2\theta}{b^2}})$ $PY=\sqrt{(a\cos\theta-\frac{\frac{\cos\theta}{a}}{\frac{\cos^2\theta}{a^2}+\frac{\sin^2\theta}{b^2}})^2+(b\sin\theta-\frac{\frac{\sin\theta}{b}}{\frac{\cos^2\theta}{a^2}+\frac{\sin^2\theta}{b^2}})^2}$ I simplified this expression to get $PY=\frac{(a^2-b^2)\sin\theta\cos\theta}{(\frac{\cos^2\theta}{a^2}+\frac{\sin^2\theta}{b^2})a^2b^2}\sqrt{a^4\sin^2\theta+b^4\cos^2\theta}$ and $Tt=\sqrt{a^2\sec^2\theta+b^2\csc^2\theta}=\frac{\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}}{\sin\theta \cos\theta}$ But $PY.Tt$ is not $a^2-b^2$I do not understand where have i gone wrong.Maybe i found $Y$ wrong,i used the formula for the foot of the perpendicular $(x',y')$ from $(x_1,y_1)$ on the line $Ax+By+C=0$ as $\frac{x'-x_1}{A}=\frac{y'-y_1}{B}=\frac{-(Ax_1+By_1+C)}{A^2+B^2}$
Equation of $Tt$ is $bx\cos \theta + ay\sin \theta - ab = 0$. Hence $C{Y^2} = \frac{a^2b^2}{b^2\cos^2\theta + a^2\sin^2\theta}$ and $C{P^2} = a^2\cos^2\theta + b^2\sin^2\theta$. Since $CY$ is perpendicular to the tangent and $Y$ is the foot of that perpendicular, the triangle $CYP$ is right-angled at $Y$, so $$P{Y^2} = C{P^2} - C{Y^2} = \left(a^2\cos^2\theta + b^2\sin^2\theta\right) - \frac{a^2b^2}{b^2\cos^2\theta + a^2\sin^2\theta}$$ $$ = \frac{a^2b^2\left(\cos^4\theta + \sin^4\theta - 1\right) + \sin^2\theta\cos^2\theta\left(a^4 + b^4\right)}{b^2\cos^2\theta + a^2\sin^2\theta}$$ $$ = \frac{a^2b^2\left(\left(\cos^2\theta + \sin^2\theta\right)^2 - 2\sin^2\theta\cos^2\theta - 1\right) + \sin^2\theta\cos^2\theta\left(a^4 + b^4\right)}{b^2\cos^2\theta + a^2\sin^2\theta}$$ $$ = \frac{a^2b^2\left(1 - 2\sin^2\theta\cos^2\theta - 1\right) + \sin^2\theta\cos^2\theta\left(a^4 + b^4\right)}{b^2\cos^2\theta + a^2\sin^2\theta}$$ $$ = \frac{\sin^2\theta\cos^2\theta\left(a^2 - b^2\right)^2}{b^2\cos^2\theta + a^2\sin^2\theta}$$ $$\Rightarrow PY = \frac{\left(a^2 - b^2\right)\left|\sin\theta\cos\theta\right|}{\sqrt{b^2\cos^2\theta + a^2\sin^2\theta}}$$ Combining this with $Tt=\sqrt{a^2\sec^2\theta+b^2\csc^2\theta}=\frac{\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}}{\left|\sin\theta\cos\theta\right|}$ from the question, we get $Tt \cdot PY=a^2-b^2.$
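A numerical spot-check of $Tt\cdot PY=a^2-b^2$ (my own sketch, with arbitrary $a$, $b$, $\theta$):

    import math

    a, b = 5.0, 3.0
    for theta in (0.3, 1.0, 2.2):
        P = (a * math.cos(theta), b * math.sin(theta))
        T = (a / math.cos(theta), 0.0)              # tangent meets the major axis
        t = (0.0, b / math.sin(theta))              # tangent meets the minor axis
        Tt = math.hypot(T[0] - t[0], T[1] - t[1])
        # foot Y of the perpendicular from C=(0,0) to b·cosθ·x + a·sinθ·y - ab = 0
        A, B, Cc = b * math.cos(theta), a * math.sin(theta), -a * b
        s = Cc / (A * A + B * B)
        Y = (-A * s, -B * s)
        PY = math.hypot(P[0] - Y[0], P[1] - Y[1])
        print(round(Tt * PY, 6), a * a - b * b)     # both 16.0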
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Definite Integral - Question about notation For an arbitrary function $f(t)$ we define: \begin{equation} \bar{f}(t) =\int^{t}_{0} f(\tau) d\tau \end{equation} Is it true the following? For an arbitrary function $f(x,t)$ we define: \begin{equation} \bar{f}(x,t) =\int^{t}_{0} f(x) dx \end{equation} I get confused because it's a single integration but 2D function.
No, the right-hand-side does not depend on x, since you integrate over it. You might define a function like $$\bar{f}(x,t)=\int_0^t f(x,y)dy$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Morphisms in the category of rings We know that in the category of (unitary) rings, $\mathbb{Z}$ is the itinial object, i.e. it is the only ring such that for each ring $A,$ there exists a unique ring homomorphism $f:\mathbb{Z} \to A$. This means, in particular, that $\mathbb{R}$ does not satisfy this property, so for a certain ring $B$, we can construct $g:\mathbb{R} \to B$ and $h:\mathbb{R} \to B$ such that $g \ne h$. So far, I have proven that $B$ cannot be an ordered ring and cannot be $\mathbb{R}^n \ (n\in\mathbb{N})$. Can you help me finding such a ring $B$?
In fact, there is a universal ring with two distinct morphisms from $\mathbb{R}$, the ring $\mathbb{R}\otimes_\mathbb{Z}\mathbb{R}$. I am unsure of whether it has nicely-presented quotients with the same property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 7, "answer_id": 2 }
Fourier analysis — Proving an equality given $f, g \in L^1[0, 2\pi]$ and $g$ bounded We were given a challenge by our Real Analysis professor and I've been stuck on it for a while now. Here's the problem: Consider the $2\pi$-periodic functions $f, g \in L^1[0, 2\pi]$. If $g$ is bounded show that $$ \lim_{n \to +\infty}\frac{1}{2\pi}\int_{0}^{2\pi}f(t)g(nt)\,dt = \frac{1}{2\pi}\int_{0}^{2\pi}f(t)\,dt \cdot \frac{1}{2\pi}\int_{0}^{2\pi}g(t)\,dt. $$ I first thought of using the Riemann-Lebesgue theorem, i.e. if $f \in L^1[a, b]$ then $$\lim_{R \to +\infty}\int_{a}^{b}f(t)\cos{Rt} \, dt = 0 \;\; \text{and} \;\; \lim_{R \to +\infty}\int_{a}^{b}f(t)\sin{Rt} \, dt = 0$$ but it didn't get me far.
Let $\epsilon > 0$ be given. Let $M$ be a bound for $|g|$, which is assumed to exist. Define $f_{N}(t)=f(t)\chi_{\{ x : |f(x)| \le N\}}(t)$. Let $g_{\delta}$ be a standard mollification of $g$. Then $g_{\delta}\in\mathcal{C}^{\infty}(\mathbb{R})$ is $2\pi$-periodic, $|g_{\delta}| \le M$, and $$ \lim_{\delta\rightarrow 0}\int_{0}^{2\pi}|g(t)-g_{\delta}(t)|dt =0. $$ Let $S_{g_{\delta}}^{K}$ be the truncated Fourier series for $g_{\delta}$; the Fourier series converges uniformly to $g_{\delta}$ as $K\rightarrow\infty$. Then \begin{align} \int_{0}^{2\pi}f(t)g(nt)dt &= \int_{0}^{2\pi}(f(t)-f_{N}(t))g(nt)dt \tag{1}\\ & +\int_{0}^{2\pi}f_{N}(t)(g(nt)-g_{\delta}(nt))dt \tag{2}\\ & +\int_{0}^{2\pi}f_{N}(t)(g_{\delta}(nt)-S_{g_{\delta}}^{K}(nt))dt \tag{3}\\ & +\int_{0}^{2\pi}f_{N}(t)S_{g_{\delta}}^{K}(nt)dt.\tag{4} \end{align} The first term on the right is bounded by $$ \int_{0}^{2\pi}|f(t)-f_{N}(t)||g(nt)|dt \le M\int_{0}^{2\pi}|f(t)\chi_{\{x : |f(x)| > N\}}|dt, \tag{1} $$ which tends to $0$ as $N\rightarrow\infty$. Choose $N$ large enough that the above is strictly bounded by $\frac{\epsilon}{4}$. Then, for this fixed $N$, \begin{align} \left|\int_{0}^{2\pi}f_N(t)(g(nt)-g_{\delta}(nt))dt\right| & \le N\int_{0}^{2\pi}|g(nt)-g_{\delta}(nt)|dt \\ & = N\int_{0}^{2\pi/n}|g(nt)-g_{\delta}(nt)|d(nt) \\ & = N\int_{0}^{2\pi}|g(t)-g_{\delta}(t)|dt \rightarrow 0 \mbox{ as } \delta\rightarrow 0. \tag{2} \end{align} Choose $\delta > 0$ small enough that the above is strictly bounded by $\frac{\epsilon}{4}$. And $(3)$ is bounded by $$ 2\pi N\sup_{0\le t\le 2\pi}|g_{\delta}(t)-S_{g_{\delta}}^{K}(t)|, $$ which is strictly bounded by $\frac{\epsilon}{4}$ for $K$ large enough because $g_{\delta}\in\mathcal{C}^{\infty}(\mathbb{R})$ is periodic. Finally, $(4)$ can be bounded by $\frac{\epsilon}{4}$ by taking $n$ large enough, which follows from the Riemann-Lebesgue lemma. Hence, it follows that the following holds for all large enough $n$: $$ \left|\int_{0}^{2\pi}f(t)g(nt)dt\right| < \epsilon $$
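A numerical illustration of the statement itself (illustration only); the particular $f$ and $g$ below are arbitrary choices satisfying the hypotheses, with $f$ integrable and $g$ bounded and $2\pi$-periodic.

```python
import math

def f(t):
    return t * (2 * math.pi - t)       # integrable on [0, 2*pi]

def g(t):
    return 1 / (2 + math.sin(t))       # bounded, 2*pi-periodic

def mean(h, N=100000):                 # (1/2pi) * integral over [0, 2pi], midpoint rule
    dt = 2 * math.pi / N
    return sum(h((k + 0.5) * dt) for k in range(N)) * dt / (2 * math.pi)

target = mean(f) * mean(g)             # product of the two averages
for n in (1, 3, 10, 100):
    print(n, mean(lambda t: f(t) * g(n * t)), target)   # second column drifts toward target
```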
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is there a function for every possible line? I'm currently in Pre-Calculus (High School), and I have a relatively simple question that crossed my mind earlier today. If I were to take a graph and draw a random line of any finite length, in which no two points along this line had the same $x$ coordinate, would there be some function that could represent this line? If so, is there a way we can prove this?
The only straight lines in the $x$-$y$ plane that are not functions are those that are perfectly vertical. Those are of the form $x=c$, where $c$ is a constant. All other lines can be expressed in the form $y =f (x)= mx + b$ where $m $ is the slope of the line and $b $ is the $y$-intercept-- the $y$ value when $x$ is $0$. Given any two points on the line we can find this formula, and given its formula we can find any point on the line. Such a function is called a linear function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 0 }
Is $\sum_{n=0}^\infty (a \cdot r^n)$ equivalent to $\lim_{n \to \infty}\sum_{k=0}^n (a \cdot r^k)$? In other words, when writing down an infinite sum, are we always implying that it's actually the limit of that series as the number of terms approaches infinity, or is there some subtle difference?
The sum $$\sum_{n=0}^\infty$$ is defined to be $$\lim_{k\to\infty}\sum_{n=0}^k$$ so yes they are the same. Of course this is an abuse of notation, since $\infty$ is not a number. In the same way, it is not "proper" to write the interval $[0,\infty)$, but we all understand what it means.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve the differential equation $y'' + y' + a(x e^{-x} - 1 + e^{-x}) = 0$ I am a bit stuck on the particular solution for this system. According to Wolfram, the general solution is $$\frac{1}{2} a e^{-x} x^2 + 2 a x e^{-x} + \frac 12 x + \frac 12 c_1 e^{-x} + c_2$$ The last two terms are from the homogenous solution, but I do not know how to come up with the guess for the particular solution.
Observe that $xe^{-x}+e^{-x}-1$ is the sum of a "product of polynomial and exponential" and a polynomial. Intuitively, we can expect some particular solution to have the same shape. Let $y=p(x)e^{-x}+ \alpha x^2+\beta x+\gamma$, where $p(x)$ is a polynomial. Then \begin{align} y''+y'&=p''(x)e^{-x}-p'(x)e^{-x}-p'(x)e^{-x}+p(x)e^{-x}+p'(x)e^{-x}-p(x)e^{-x}+2\alpha x+2\alpha+\beta\\ &=(p''(x)-p'(x))e^{-x}+2\alpha x+2\alpha+\beta \end{align} and we can get $p''(x)-p'(x)=-a(x+1)$ and $\alpha=0,\beta=a$. After some calculation, $p(x)=\frac{1}{2}ax^2+2ax$ fits the differential equation. Therefore, $$ p(x)e^{-x}+ \alpha x^2+\beta x+\gamma =(\frac{1}{2}ax^2+2ax)e^{-x} +ax +\gamma $$ is a particular solution. Here $\gamma$ is "free".
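A quick symbolic check of this particular solution with sympy (a convenience sketch, not part of the derivation):

```python
import sympy as sp

x, a = sp.symbols('x a')
y_p = (a * x**2 / 2 + 2 * a * x) * sp.exp(-x) + a * x
residual = sp.diff(y_p, x, 2) + sp.diff(y_p, x) + a * (x * sp.exp(-x) - 1 + sp.exp(-x))
print(sp.simplify(residual))   # prints 0, so y_p solves y'' + y' + a(x e^{-x} - 1 + e^{-x}) = 0
```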
{ "language": "en", "url": "https://math.stackexchange.com/questions/1703981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How can I calculate this limit $\lim_{(x,y)\to(0,0)} \frac{xy(1-cos(x^2+y^2))}{(x^2+y^2)^{\frac{5}{2}}}$? How can I calculate this limit $$\lim_{(x,y)\to(0,0)} \dfrac{xy(1-cos(x^2+y^2))}{(x^2+y^2)^{\frac{5}{2}}}$$ at the origin? I tried to use the substitution $x ^2 + y^2=t$ but how can I evaluate the value of $xy$? I even tried to use polar coordinates but to avail.
Outline: Note that $(x-y)^2\ge 0$, so $|xy|\le \frac{1}{2}(x^2+y^2)$. One can also get this from polar coordinates, for $|xy|=r^2|\cos\theta\sin\theta|=\frac{1}{2}r^2|\sin(2\theta)|\le \frac{r^2}{2}$. Now you can comfortably let $t=x^2+y^2$. You will need to look at the behaviour of $1-\cos t$ near $0$. This can be taken care of mechanically by using L'Hospital's Rule. Or else one can use the beginning of the Maclaurin series for $\cos t$. Or else one can note that $1-\cos t=\frac{1-\cos^2 t}{1+\cos t}=\frac{\sin^2 t}{1+\cos t}$. The conclusion will be that the limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does the following sequence converge, if so find the limit, if not prove that no limit exists: $a_n=(-1)^nn^2$ $$a_n=(-1)^nn^2$$ I know that this sequence does not converge and hence does not have a limit. I have tried proving that this sequence does not have a limit by contradiction. I assumed that the limit 'a' existed and then performed the reverse triangle inequality. Thus: $$|(-1)^nn^2-a|<\epsilon$$ $$=|(-1)^nn^2|-|a|<\epsilon$$ From here I am stuck, I have been told that I should be considering two cases for the even and odd I believe, but am not too sure how to go forward and make these contradictions.
There are different ways to see that it does not converge. One of the simplest ways is to note that your sequence $a_n$ is unbounded: It tends to $\infty$ along all even numbers $2n$, and tends to $-\infty$ along all odd numbers $2n+1$. Since a convergent sequence is bounded, this gives the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Prove that there are infinitely many integers n such that $4n^2+1$ is divisible by both $13$ and $5$ Prove that there are infinitely many integers n such that $4n^2+1$ is divisible by both $13$ and $5$. This can be written as: $$65k = (2n)^2 + 1$$ It's clear that $k$ will always be odd. Now I am stuck. I wrote a program to find the first few $n$'s. They are $4, 9$. $n$ can only end with $4, 6, 8, 9$ if I'm correct in my deductions. I have made no further progress. Please help me find the solution. Thanks.
$$4n^2+1\equiv0\pmod{13}\iff4n^2\equiv-1\equiv64\pmod{13}$$ As $(4,13)=1$, $4n^2\equiv64\iff n^2\equiv16\iff n\equiv\pm4\pmod{13}$. Similarly, $n\equiv\pm4\pmod5$. Now use the Chinese Remainder Theorem.
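A small computational spot-check (illustration, not a proof): the values of $n$ produced by the CRT fall into the residue classes $4, 9, 56, 61 \pmod{65}$, and each class gives infinitely many valid $n$.

```python
hits = [n for n in range(1, 400) if (4 * n * n + 1) % 65 == 0]
print(hits[:8])
print(all((4 * n * n + 1) % 5 == 0 and (4 * n * n + 1) % 13 == 0 for n in hits))  # True
print(sorted({n % 65 for n in hits}))   # [4, 9, 56, 61]
```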
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 10, "answer_id": 7 }
Show $F_1$ is a continuous linear functional Let $(X,||\cdot ||)=(C[0,1],||\cdot ||_{\infty})$ and $F_1 :X \rightarrow \mathbb R$ be defined by $$F_1 (f)=\int _{1/2}^{3/4} f(t) dt$$ Show $F_1$ is a continuous linear functional. So we need to show it is a linear functional first and then show it is continuous. By our lecture notes, if we show it is continuous at $0$, then it is continuous everywhere. So then the question is done. Linear functional: Let $a \in \mathbb R$ and $f,g \in X$. Then $$F_1(af+g)=\int _{1/2}^{3/4} (af+g)(t) \, \, dt = \int _{1/2}^{3/4} af(t)+g(t) \, \, dt=\int _{1/2}^{3/4} af(t) dt + \int _{1/2}^{3/4} g(t) dt$$ $$ = a \int _{1/2}^{3/4} f(t) dt + \int _{1/2}^{3/4} g(t) dt = aF_1 (f) + F_1 (g)$$ So it is a linear functional. Prove that it is continuous at $f=0$: Need to prove $$\forall \varepsilon >0, \, \exists \delta >0: \, ||F_1(f)-F_1(0) ||_{X^{*}} < \varepsilon \, , \text{whenever} \, ||f-0||_{\infty}$$ So $$||F_1 (f) ||_{X^{*}} = \sup _{f \in X, \, \, ||f||_{\infty}<\delta} \bigg( | \int _{1/2}^{3/4} f(t) dt | : ||t ||_{\infty}\leq 1 \bigg) = \sup _{f \in X, \, \, ||f||_{\infty}<\delta} \bigg( \int _{1/2}^{3/4} |f(t)| dt : ||t ||_{\infty}\leq 1 \bigg) \leq \int _{1/2}^{3/4} \delta dt =\frac14 \delta < \delta = \varepsilon$$ We let $\delta = \varepsilon$. Is this correct? I have a feeling I made some notation errors somewhere.
For any $\;\epsilon>0\;$ choose $\;\delta=4\epsilon\;$, so that $$||f-0||_\infty<\delta\iff ||f||_\infty<\delta\implies\left|\int_{1/2}^{3/4}f(t)dt\right|\le\int_{1/2}^{3/4}|f(t)|dt\le\delta\int_{1/2}^{3/4}dt=\frac14\delta=\epsilon$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Contour integral of $\frac{x^{p-1}}{1+x}$ I am trying to find the integral $$\int_0^\infty\frac{x^{p-1}}{1+x}\;\mathbb{d}x$$ I know that this is easily expressible in terms of beta function. But i need to prove that it's value is $\dfrac{\pi}{\sin{p\pi}}$ using a good contour I guess. I tried of taking a counter like that of a keyhole having. Sorry for bad drawing. But I have no idea how to continue. P.S.: I am little weak at complex integrals. The contour has 1 semicircle of $\epsilon_1$ radius around $x=-1$ and two of those at $x=0$ where $R\to\infty$ and $\epsilon_1,\epsilon_2,\epsilon_3 \to 0$
There is no contour for the function as it is in the question, since $p-1$ is a real number in the interval $(-1,0).$ If $p-1$ is rational we have a multivalue function with as many values as the denominator of $p-1$ and this implies to create some brunch cuts on the contour. If $p-1$ is irrational then there are an infinte number of multivalues. Hence we want to change the integrand to somethig more friendly. This is the case of substitution by exponentials as shown here: It is interesting that \begin{eqnarray*} \Gamma(x) \Gamma(1-x) = \mathrm{B}(x, 1-x) = \int_0^{\infty} \frac{s^{x-1} ds}{s+1} \end{eqnarray*} This identity is key to show the Euler's reflection formula We show, by using contour integration, that \begin{eqnarray*} \mathrm{B}(x, 1-x) = \frac{\pi}{\sin \pi x}. \end{eqnarray*} Observe that the Beta Function with $y=1-x$ yields the equation above. Here is where we need to use contour integrals. We first make the substitution $s=e^t$, $ ds = e^t dt$, and $t \in (-\infty, \infty)$. So we need to compute We compute \begin{eqnarray*} \mathrm{B}(x, 1-x) = \int_{-\infty}^{\infty} \frac{\mathrm{e}^{t(x-1)} \mathrm{e}^t dt}{\mathrm{e}^t+1} = \mathrm{B}(x, 1-x) = \int_{-\infty}^{\infty} \frac{\mathrm{e}^{tx} dt}{\mathrm{e}^t+1} \quad , \quad 0 < x < 1. \end{eqnarray*} Let us consider the contour integral \begin{eqnarray} I = \int_C f(z) dz, \label{intC} \end{eqnarray} with \begin{eqnarray*} f(z) = \frac{\mathrm{e}^{zx}}{\mathrm{e}^z+1} \quad , \quad 0 < x < 1. \end{eqnarray*} and $C$ is the contour that we need to determine. In the complex plane, the poles of the integrand are the roots of $e^z+1$, that is $\mathrm{e}^z = -1 = \mathrm{e}^{(2k+1) \mathrm{i} \pi}$ so the roots are $z_k= (2k+1) \mathrm{i} \pi$, for $k=0, \pm 1, \pm 2, \cdots$. Then $f(z)$ as an infinite number of poles all lying on the imaginary axis. We will select a contour that has only one pole as shown in the figure below. The contour $C$ can be seen as the union of $C=C_1 \cup C_2 \cup C_3 \cup C_4$, where $C_1$ and $C_3$ are horizontal lines from $-R$ to $R$ with opposite orientation. We want to let $R$ grow to $\infty$. The paths $C_2$ and $C_4$ are vertical lines between $0$ and $2 \pi \mathrm{i}$ with opposite orientations showed in the figure below From the Residue Theorem we evaluate the integral over $C$. The residue corresponding to the pole $z_0= \pi \mathrm{i}$, is computed using the exprsession \begin{eqnarray*} \lim_{z \to z_0} (z-z_0) \, f(z) = \lim_{z \to z_0} \frac{ (z-z_0) \mathrm{e}^{z x}} {\mathrm{e}^z + 1} = \lim_{z \to z_0} \frac{\mathrm{e}^{zx} + (z-z_0) \mathrm{e}^{zx}}{e^z} = \mathrm{e}^{z_0 (x-1)}. \end{eqnarray*} where we use L'H\^{o}pital's rule. Hence $I = 2 \pi \mathrm{i} \; \mathrm{e}^{\pi i (x-1)}$ since the only residue inside the contour is at $z=\mathrm{i} \pi$ . That is, \begin{eqnarray*} 2 \pi \; \mathrm{i} \; \mathrm{e}^{ \mathrm{i} \pi (x-1)} =\int_{C_1} f(z) dz + \int_{C_2} f(z) dz + \int_{C_3} f(z) dz + \int_{C_4} f(z) dz . \end{eqnarray*} We want to find $I_1=\int_{C_1} f(z) dz$, as $R \to \infty$. Let us first find the integral along the vertical path $C_3$. 
\begin{eqnarray*} I_3 &=& \int_R^{-R} \frac{\mathrm{e}^{ (t + 2 \pi \mathrm{i} ) x}}{\mathrm{e}^{t+2 \pi \mathrm{i}} + 1 } d t \\ &=& \mathrm{e}^{2 \pi \mathrm{i} x} \int_R^{-R} \frac{\mathrm{e}^{ t x}}{\mathrm{e}^{t+2 \pi \mathrm{i}} + 1 } d t \\ &=& \mathrm{e}^{2 \pi \mathrm{i} x} \int_R^{-R} \frac{\mathrm{e}^{ t x}}{\mathrm{e}^{t} + 1 } d t \\ &=& -\mathrm{e}^{2 \pi \mathrm{i} x} I_1, \end{eqnarray*} where we reversed the sign since $I_1$ is computed from $-R$ to $R$ instead of going in the opposite direction. The integral $I_2$ along the path $C_2$ is evaluated as follows \begin{eqnarray*} I_2 = \int_0^{2 \pi} \frac{\mathrm{e}^{ (R + \mathrm{i} t ) x}} {\mathrm{e}^{R+\mathrm{i} t} + 1 } d t = \frac{\mathrm{e}^{R x} }{\mathrm{e}^R} \int_0^{2 \pi} \frac{\mathrm{e}^{ (\mathrm{i} t ) x}} {\mathrm{e}^{\mathrm{i} t} + 1/\mathrm{e}^{R} } d t = \mathrm{e}^{R(x-1)} \int_0^{2 \pi} \frac{\mathrm{e}^{ (\mathrm{i} t ) x}} {\mathrm{e}^{\mathrm{i} t} + 1/\mathrm{e}^{R} } d t. \end{eqnarray*} Now, since $0 < x < 1$ (so $x-1 < 0)$, and the last integral is bounded we have that $\lim_{R \to \infty} I_2 = 0$. The same argument applies for the integral $I_4$ along the path $C_4$. We then have that, from $I=I_1+I_2+I_3+I_4$, \begin{eqnarray*} 2 \pi \mathrm{i} \mathrm{e}^{\mathrm{i} \pi (x-1)} = (1 - \mathrm{e}^{2 \pi \mathrm{i} x}) I_1 \end{eqnarray*} and \begin{eqnarray*} I_1 = \int_{0}^{\infty} \frac{s^{x-1} ds}{s+1} = \frac{ 2 \pi \mathrm{i} \mathrm{e}^{ \mathrm{i} \pi (x-1)}}{1 - \mathrm{e}^{2 \pi \mathrm{i} x}} = \frac{ \pi}{ \frac{\mathrm{e}^{\mathrm{i} \pi x} - \mathrm{e}^{-\mathrm{i} \pi x}}{2 \mathrm{i}}} = \frac{\pi}{\sin \pi x}. \end{eqnarray*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove $x^2+y^4=1994$ Let $x$ and $y$ positive integers with $y>3$, and $$x^2+y^4=2(x-6)^2+2(y+1)^2$$ Prove that $x^2+y^4=1994$. I've tried finding an upper bound on the value of $x$ or $y$, but without sucess. Can anyone help me prove this problem? Note that $x^2+y^4=1994$ is the result we are trying to prove, not an assumption.
Hint replacing the $x^2,y^4$ with given condition we get $(x-6)^2+(y+1)^2=992$ so thats equal to the equation of a circle located at $h,k$ ie(6,-1) so it got only $4$ integer points which can be proved by using symmetry and at $x,y$ axis as its radius is approximately $31.5$ but out of those $4$ integer points only $37,5$ are the points which satisfy the original equation.
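A brute-force search over a finite window (an illustration, not a proof; the cut-offs 4000 and 60 are arbitrary) finds only $(x,y)=(37,5)$, which indeed gives $x^2+y^4=1994$:

```python
sols = [(x, y) for x in range(1, 4000) for y in range(4, 60)
        if x * x + y**4 == 2 * (x - 6)**2 + 2 * (y + 1)**2]
print(sols)                                         # [(37, 5)]
print(all(x * x + y**4 == 1994 for x, y in sols))   # True
```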
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How can addition be non-recursive? Tennenbaum's theorem says neither addition nor multiplication can be recursive in a non-standard model of arithmetic. I assume recursive means computable and computable means computable by a Turing machine (TM). The 3-state TM described below takes the unary representation of two natural numbers as input and outputs the unary representation of the sum of the two numbers. A unary representation is a string of 1's (possibly empty) followed by a 0. Given the input 110110 this TM will output 111100. A0:1RB A1:1RA B0:0LC B1:1RB C0:0Halt C1:0Halt This TM halts on any input tape with two 0's. It can add any two standard natural numbers in a finite number of steps. It must also work inside any non-standard model. Assume it didn't. Then we could define the set of numbers, $x$, such that this TM correctly adds $x+x$. Assuming this set has no largest element, it would be an inductive set since it includes all standard natural numbers. A non-standard model can't have an inductive proper subset proving this TM must work in any non-standard model. Another scenario is when we have a non-standard model in a meta-theory like ZFC. We must assume our meta-theory uses a standard model of arithmetic. The TM above still adds standard natural numbers, but Tennenbaum's theorem says there can't be an algorithm in the meta-theory that computes addition in this non-standard model. Why can't the TM I describe above compute addition in this non-standard model? Is it simply because the input tape would have to be infinitely long if one of the input numbers is an "infinite" number?
If you take a countable nonstandard model, you first enumerate that model in $\omega$. That is, you consider the domain of the model to be a set $\{ a_0, a_1, \ldots \}$. In your algorithm, the input $n$ represents the actual number $n$, and not the element $a_n$ of the model. But this is not what the question is asking, it's asking if there is an algorithm that, given input $\langle n, m \rangle$, outputs a number $k$ such that $a_n + a_m = a_k$. Tenenbaum's theorem says that there is no such algorithm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
tricky GCD Question I am trying to show that the $ gcd (\frac {n}{gcd(n,d)} , \frac {d}{gcd(n,d)} ) = 1 $. My steps let $gcd(n,d) = K = xn+yd$ then I need to find some linear combination of $\frac {n} {K} $ and $\frac {d} {K} $ that gives 1. Any hints would be appreciated.
We have $$ x \cdot \frac nK + y \cdot \frac dK = \frac{xn + yd}K = 1. $$ Hence $\gcd(\frac nK, \frac dK) = 1$.
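A randomized spot-check of the identity (not a proof, just a sanity check on random pairs):

```python
from math import gcd
from random import randint

ok = True
for _ in range(10000):
    n, d = randint(1, 10**6), randint(1, 10**6)
    K = gcd(n, d)
    ok = ok and gcd(n // K, d // K) == 1
print(ok)   # True
```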
{ "language": "en", "url": "https://math.stackexchange.com/questions/1704962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I determine if this function is continuous at x=1? Is 2/0 a discontinuity or infinity for a function? For the question: Given the function $ (x^2+1)/(x-1) $, is the function continuous at x=1? When I took the right hand and left limits, I got infinity in both cases and f(1) would be 2/0 which I was informed equals infinity, yet there is a vertical asymptote at x=1, which is an obvious indicator of discontinuity. In this case, will the function be continuous?
For a function $f(x)$ to be continuous at some point $c$ of its domain, it has to satisfy the following three conditions: * *$f$ has to be defined at $c$ *$\lim\limits_{x \to c} f(x)$ has to exist *the value of the limit must equal $f(c)$ In your case, the function $\frac{x^2+1}{x-1}$ is not defined at $x=1$, so the function is not continuous there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expansion and factorisation I have a little problems with a few questions here and I need help.. Thanks ... * *Factorise completely $$9x^4 - 4x^2 - 9x^2y^2 + 4y^2 $$ My workings .. $$ (3x^2+2x)(3x^2-2x) - y^2 (9x^2-4) = (3x^2 + 2x)(3x^2 -2x) - y^2 (3x+2)(3x-2) $$ *Factorise $3x^2 + 11x - 20$ and , hence Factorise completely $$11a - 11b - 20 + 3a^2 + 3b^2 - 6ab$$ My workings ... $$ 11(a-b) - 20 + (3a-3b)(a-b)$$ *Evaluate the following by algebraic expansion of factorisation (A) $78^2 + 312 + 4$ (B) $501^2 - 1002 + 1$ Thanks a lot !
* *$$9x^4 - 9x^2y^2 - 4x^2 + 4y^2$$ Group in pairs: $9x^2(x^2-y^2) - 4(x^2 - y^2) = (x^2 - y^2)(9x^2 - 4)$. Then using the difference of two squares we get: $$(x-y)(x+y)(3x-2)(3x+2)$$ 2. $3x^2 + 11x - 20$ factorises to $(x+5)(3x-4)$. As @mathlove pointed out, the second expression equals $11(a-b) - 20 + 3(?)^2$, where $ ? = a-b$, which has the same form as the previous quadratic. So utilising its result we get it factorised down to $(a-b+5)(3(a-b)-4)$ *$78^2 + 312 + 4 = 78^2 + 2(78)(2) + 2^2$; using the fact that $a^2 + 2ab + b^2=(a + b)^2$, this is $(78 + 2)^2 = 80^2 = 6400$ *Apply the same reasoning: $501^2 - 1002 + 1 = 501^2 - 2(501)(1) + 1^2 = (501-1)^2 = 500^2 = 250000$
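The factorisations can be double-checked symbolically, e.g. with sympy (a convenience check only):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
print(sp.expand((x - y) * (x + y) * (3 * x - 2) * (3 * x + 2)
                - (9 * x**4 - 4 * x**2 - 9 * x**2 * y**2 + 4 * y**2)))     # 0
print(sp.expand((x + 5) * (3 * x - 4) - (3 * x**2 + 11 * x - 20)))         # 0
print(sp.expand((a - b + 5) * (3 * (a - b) - 4)
                - (11 * a - 11 * b - 20 + 3 * a**2 + 3 * b**2 - 6 * a * b)))  # 0
print(78**2 + 312 + 4, (78 + 2)**2)       # 6400 6400
print(501**2 - 1002 + 1, (501 - 1)**2)    # 250000 250000
```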
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding modular inverse (wrong approach) I'm trying to find the modular inverse of $$30 \pmod{7} $$ I have tried using the Euclidean algorithm and it gave me the right answer, which is $x \equiv 6 \pmod{7} $. However, I tried using another approach that I thought would be simpler, but it resulted in a wrong answer. These were my steps: Suppose x is the modular inverse of 30 mod 7. $$30x \equiv 1 \pmod{7} $$ $$(7*4 + 2)x \equiv 1 \pmod{7} $$ $$2x \equiv 1 \pmod{7}$$ <- I have a feeling it's the previous line of simplification that's causing the problem.) So the inverse of 2 mod 7 is 4. Thus the resulting answer is $x \equiv 4 \pmod{7} $, which is wrong. Could anyone point out what is the problem here?
First, write $\;30=2\pmod 7\;$ , and now use the Euclidean algorithm with this, which is way easier. By the way, the answer indeed is $\;4\;$ , since $\;30\cdot4=120=1+17\cdot7\;$ , or simpler: $\;2\cdot4=1+7\;$
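For reference, two quick ways to compute the same inverse in Python (the three-argument `pow` with exponent $-1$ needs Python 3.8+; `inv_mod` is a hypothetical helper that assumes $\gcd(a,m)=1$):

```python
print(pow(30, -1, 7))     # 4
print((30 * 4) % 7)       # 1, confirming that 4 is the inverse

def inv_mod(a, m):
    # extended Euclidean algorithm; assumes gcd(a, m) == 1
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % m

print(inv_mod(30, 7))     # 4
```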
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using properties of limits, calculate $\lim_{n\to \infty}\left(\frac1{n^2}+\frac1{(n+1)^2}+\cdots+\frac1{(2n)^2}\right)$ Using the properties of limits, calculate the following limits, if they exist. If not, prove they do not exist: $$\lim_{n\to \infty}\left(\frac1{n^2}+\frac1{(n+1)^2}+\frac1{(n+2)^2}+\cdots+\frac1{(2n)^2}\right)$$ This is what I have done, I have expressed the limit in the form: $\lim_{n\to \infty}\frac1{(n+a)^2}$ where 'a' belongs to the reals. Then using the $\epsilon-N$ definition of limits, I assumed that: $$\lim_{n\to \infty}\frac1{(n+a)^2}=0$$ and carried forward with the proof. I would like to use the $\epsilon-N$ definition of limits since it is what we are covering right now, is this the right way of solving this problem?
Let $$F(n)=\frac{1}{n^2}+\frac{1}{(n+1)^2}+\cdots+\frac{1}{(2n)^2}.$$ Consider the two sums with the same number of terms ($n+1$ terms each): $$G(n)= \frac{1}{n^2}+\frac{1}{n^2}+\cdots +\frac{1}{n^2}=\frac{n+1}{n^2}$$ and $$H(n)=\frac{1}{(2n)^2}+\frac{1}{(2n)^2}+\cdots+\frac{1}{(2n)^2}=\frac{n+1}{4n^2}.$$ Notice that $$H(n)\le F(n)\le G(n).$$ Now, $$\lim_{n \to \infty} G(n) = \lim_{n \to \infty} H(n)=0.$$ Hence, by the sandwich theorem, the given limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Does this apply: $\sin^2(\pi t-\pi) + \cos^2(\pi t-\pi) = 1$ So I'm solving for the length of the following parametric description: $$ \Bigg[x(t)=3\cos(\pi t-\pi)\,\,\,\,\,\,\,y(t)=3\sin(\pi t - \pi)$$ I applied the formula for solving for length, namely: $$\int_{0}^{2} \sqrt{(-3\pi \sin(\pi t - \pi))^2+(3\pi \cos(\pi t - \pi))^2 }\,\,\, dt$$ $$\int_{0}^{2} \sqrt{9\pi^2\times1}\,\,\, dt$$ My question is: Is it correct that [ $(\sin(\pi t - \pi))^2+ (\cos(\pi t - \pi))^2 = 1$ ] because of the rule: $\cos^2(x) + \sin^2(x) = 1$ ? Can I always apply this convention as long as I have the same on the inside of the sin and cos? Highly appreciated, -Bowser
Hint: you can verify this using the formulae $$\sin(a-b)=\sin(a)\cos(b)-\cos(a)\sin(b)$$ and $$\cos(a-b)=\cos(a)\cos(b)+\sin(a)\sin(b)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Show that $R(k,l) = R(l,k)$ Let $R(k,l)$ denote the Ramsey number.We proved in class a theorem that says $$R(k,l) \leq {k+l-2\choose{k-1}} $$ And supposedly we can use this to show that $R(k,l) = R(l,k)$ for all $k,l \in \mathbb{N}$. However I am not seeing it. I feel like I use should induction, but I'm not sure how. Any suggestions?
$${k+l-2\choose{k-1}}=\frac{(k+l-2)!}{(k-1)!(k+l-2-(k-1))!}=\frac{(k+l-2)!}{(k+l-2-(l-1))!(l-1)!}={k+l-2\choose{l-1}}$$ Thus, $k$ and $l$ are interchangeable.
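A quick numerical confirmation of this symmetry of the binomial coefficient over a small range (illustration only):

```python
from math import comb
print(all(comb(k + l - 2, k - 1) == comb(k + l - 2, l - 1)
          for k in range(1, 30) for l in range(1, 30)))   # True
```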
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $f(x)\equiv 0$ if $ \int_0^1x^nf(x)\,dx=0$ Let $f \in C[0,1]$. If for each integer $n \ge 0$ we have $$ \int_0^1x^nf(x)\,dx=0$$ show that $f(x) \equiv 0$
The equality implies that $f$ is orthogonal to every polynomial in $L_2[0,1]$, so in particular it is orthogonal to the shifted Legendre polynomials (which form an orthogonal basis of $L_2[0,1]$). This implies that $f$ must be $0$ a.e., and since $f$ is continuous, we must conclude that $f \equiv0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving equation with Complex Numbers and two unknowns. I have the following track: If $z=a+bi$, with the imaginary number $i$, solve the following equation $$z^2+|z|^2-18=0$$ Well, first of all I replaced $z$ with $a+bi$ and then $|a+bi|^2$ with $(a^2+b^2)$. After, I got $a$ as a function of $b$, but in this case I have two unknowns. Any suggestion?
Assume that $z$ is a solution, then $$ z^2 = 18-|z|^2 $$ and hence $$ |z|^2 = |18 - |z|^2| $$ Put $t = |z|$. Thus $t^2 = 18-t^2$ or $t^2 = t^2 - 18$, but the second possibility clearly can't occur. Hence $2t^2 = 18$, so $t = \pm 3$, but since $t$ is positive, we must have $t=3$. Returning to the original equation, it now reads $$ z^2 - 9 = 0 $$ so $z = \pm 3$.
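A crude grid search (step $0.1$, illustration only) finds exactly the two solutions $z=\pm 3$:

```python
sols = []
for a10 in range(-60, 61):
    for b10 in range(-60, 61):
        z = complex(a10 / 10, b10 / 10)
        if abs(z * z + abs(z)**2 - 18) < 1e-9:
            sols.append(z)
print(sols)   # [(-3+0j), (3+0j)]
```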
{ "language": "en", "url": "https://math.stackexchange.com/questions/1705925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove that finding a cliques is NP-Complete? I presented my proof bellow, so is it complete (formal) proof. Proof : 1- We can verify the solution in Polynomial time. 2- 3-SAT (NP-Complete) can be polynomialy reduced to clique! (as the following): * *We assume a formula 3-sat (F) as an example : F= (X1 + X2 + X3).(^X1 + ^X2 + X3).(X4 + ^X3 + X2) note: ^X is the negation of X [] 3-SAT is satisfiable $<=>$ $G(V,E)$ has a clique of size k>0. 1 Clique =>3-SAT : if we have a clique then it is SAT "by construction" literal represented as Node. such that no edge between $X_i$,$X_j$ belongs $C_k$, and no edge between ^$X_i$ and $X_i$ in both $C_k, C_j$. let G has a clique of size k we can set each literal represented by a node v belongs the clique so 3-SAT is there. since it is enough to have only 1 literal = true in each clause. [2] 3-SAT => Clique: Let x be 3-SAT found in CNF, and it is satisfiable. => by the graph construction- we can connect all nodes that have true for each clause, and hence we have a clique # Is my proof correct ?
Your proof is correct and your steps are clear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\frac{\sec(x) - \csc(x)}{\tan(x) - \cot(x)}$ $=$ $\frac{\tan(x) + \cot(x)}{\sec(x) + \csc(x)}$ Question: Prove $\frac{\sec(x) - \csc(x)}{\tan(x) - \cot(x)}$ $=$ $\frac{\tan(x) + \cot(x)}{\sec(x) + \csc(x)}$ My attempt: $$\frac{\sec(x) - \csc(x)}{\tan(x) - \cot(x)}$$ $$ \frac{\frac {1}{\cos(x)} - \frac{1}{\sin(x)}}{\frac{\sin(x)}{\cos(x)} - \frac{\cos(x)}{\sin(x)}} $$ $$ \frac{\sin(x)-\cos(x)}{\sin^2(x)-\cos^2(x)}$$ $$ \frac{(\sin(x)-\cos(x))}{(\sin(x)-\cos(x))(\sin(x)+\cos(x))} $$ $$ \frac{1}{\sin(x)+\cos(x)} $$ Now this is where I am stuck , I thought of multiplying the numerator and denominator by $$ \frac{\frac{\sin(x)}{\cos(x)} + \frac{\cos(x)}{\sin(x)}}{\frac{\sin(x)}{\cos(x)} + \frac{\cos(x)}{\sin(x)}} $$ but that did not work out well..
Multiply and divide $ \frac{1}{\sin{x}+\cos{x}}$ by $ \frac{1}{\sin{x}\cos{x}}$, then in the numerator substitute $1$ by $\sin^2{x} + \cos^2{x}$.
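A numerical check of the identity at a few generic angles (chosen to avoid the zeros of $\sin x$, $\cos x$ and of $\tan x-\cot x$):

```python
import math

def lhs(x):
    return (1 / math.cos(x) - 1 / math.sin(x)) / (math.tan(x) - 1 / math.tan(x))

def rhs(x):
    return (math.tan(x) + 1 / math.tan(x)) / (1 / math.cos(x) + 1 / math.sin(x))

print(all(abs(lhs(x) - rhs(x)) < 1e-12 for x in (0.3, 0.7, 1.1, 2.5, 4.0)))   # True
```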
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 2 }
What is the mean value of $\max\limits_{j}X_j$? Let $X_j$ be a random variable that is $1$ with probability $x_j^*$, and $0$ with probability $1-x_j^*$. The random variables $X_j$ are independent and $j$ belongs to $\{1,\ldots,n\}$ for some positive integer $n$. I would like to calculate the mean value of $\max\limits_{j}X_j$. That is, $$\mathbb{E}\left[\max\limits_{j}X_j\right]$$ My try is: since $X_j$ is a binary random variable, than the maximum would be $1$. Therefore, $$\mathbb{E}\left[\max\limits_{j}X_j\right]=\Pr\left[X_j=1\right]=x_j^*,$$ but I do not know if this is the right argument.
The max is either $0$ or $1$. The probability that it's $0$ is the probability that all (independent) $X_j$ are $0$, that is $$P(\max_j X_j =0)=\prod_j P(X_j=0)=\prod_j (1-x^*_j)$$ And of course, $$P(\max_j X_j =1)=1-\prod_j (1-x^*_j)$$ Thus $E(\max_j X_j)=1-\prod_j (1-x^*_j)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to find the unknown in this log inequality?? Find all values of the parameter a $\in\Bbb R$ for which the following inequality is valid for all x $\in\Bbb R$. $$ 1+\log_5(x^2+1)\ge \log_5(ax^2+4x+a) $$ I'm lost when I got to this stage: $ 5x^2-4x+5\ge ax^2+a$ I did this but still don't know how to proceed: $ (5-a)x^2-4x+(5-a)\ge0 $ My reasoning is that the discriminant for $ (5-a)x^2-4x+(5-a)\ge0 $ must be $\ge0$ because x $\in\Bbb R$. And from that I get a <= 7 or a <= 3. Then because $log_5(ax^2+4x+a)$, then $(ax^2+4x+a)$ > 0. Then..? Here I use same reasoning as for finding the above a <= 3 or 7 too, that is x $\in\Bbb R$. Then I get a <= -2 and a <= 2 by using discriminant. Where does my reasoning go wrong? Can anyone explain to me how to solve this? Answer given is $(2,3]$.
Hint: To find where a quadratic function $f$ satisfies $f(x)\geq 0$, first find where $f(x)=0$. On each of the remaining intervals, the sign of $f$ must be constant. The same method is useful for any such inequality ($<,>,\leq,\geq$). This method works with any polynomial $f$, not just quadratics. It also extends easily to rational functions. Look for zeroes of the numerator and of the denominator. Be careful where the denominator vanishes -- such points are typically excluded from the domain, but they still partition the line into intervals along with the zeroes of the numerator. Addendum: There are two things to worry about: (1) both sides have to be defined for all real $x$; and (2) the inequality must be true for all real $x$. First part: The LHS is defined for all real $x$. For the RHS, we require that the quadratic expression $ax^2 + 4x + a$ be strictly positive so that the logarithm is defined. In terms of the corresponding graph, which is a parabola, this means the parabola must not meet the $x$-axis and it must open upwards. Denoting the discriminant by $\Delta$, we must have $\Delta<0$ (no real zeroes) and $a>0$ (opens upward): $$16-4a^2<0$$ $$(4-2a)(4+2a)<0$$$$(a-2)(a+2)>0$$This is satisfied when $a<-2$ or $a>2$. Since we also require $a>0$, we eliminate the case $a<-2$ and conclude that we must have $$\boxed{a>2}\tag{req. for RHS to be defined}$$ Second part: Exponentiating both sides of the original equation with base $5$ and rearranging, we arrive at $$(a-5)x^2 + 4x + (a-5)\leq 0$$ We now ask: for which values of $a$ is this true for all real $x$? Denote the LHS by $f(x)$. If $a-5=0$, the inequality is linear, and so is not satisfied by all real $x$ (some $x$ would produce positive values of $f(x)$). If $a-5>0$, the function is a quadratic with a positive leading coefficient; its graph is a parabola that opens upward and so $f(x)>0$ for sufficiently large $x$ and the original inequality fails. We are therefore left with the requirement that $a-5<0$, i.e., $a<5$, in order to have the parabola open downward. This is not enough to guarantee that $f(x)\leq 0$ for all $x$; for this, we require that the vertex of the parabola lie on or below the $x$-axis as well. The $x$-coordinate of the vertex is the midpoint between the zeroes, so it is at $$x_v=\frac{-4}{2(a-5)}$$ (To see this, recall the roots of the quadratic $Ax^2+Bx+C=0$ are $x=\frac{-B\pm\sqrt{\Delta}}{2A}$, so the midpoint is $x=\frac{-B}{2A}$.) It is easy enough to compute the $y$-coordinate of the vertex (I leave this to you -- tedious but not hard) as $$y_v=f(x_v)=f\left(\tfrac{-4}{2(a-5)}\right)=\frac{(a-7)(a-3)}{a-5}$$ Since $a<5$, for this to be nonpositive requires that $$\boxed{a\leq3}\tag{req. for inequ. always true}$$ Together: So, in the end, we require $$\boxed{2<a\leq 3}$$ as desired.
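A coarse numerical scan is consistent with the answer $2<a\leq 3$ (illustration only; the grids of test values for $x$ and $a$ are arbitrary choices):

```python
import math

def holds_for_all_x(a, xs):
    for x in xs:
        rhs_arg = a * x * x + 4 * x + a
        if rhs_arg <= 0:          # RHS undefined, so the inequality is not valid for all x
            return False
        if 1 + math.log(x * x + 1, 5) < math.log(rhs_arg, 5) - 1e-12:
            return False
    return True

xs = [k / 10 for k in range(-500, 501)]
print([a for a in (1.5, 2.0, 2.01, 2.5, 3.0, 3.01, 4.0) if holds_for_all_x(a, xs)])
# expected: [2.01, 2.5, 3.0]  (only the tested values with 2 < a <= 3 survive)
```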
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
For what simple piecewise smooth loops is this integral zero? I'm trying to solve the following problem: For what simple piecewise smooth loops $\gamma$ does the following equation hold: $$\int\limits_\gamma \frac{1}{z^2 + z + 1} \, \mathrm{d}z= 0$$ I'm allowed to appeal to Cauchy's integral theorem, and I have determined that the roots of the denominator are $\frac{-1}{2} \pm \sqrt{3}i$. But I am not exactly sure how to proceed, or even what the ideal solution would be. For example, I know that the integral is $0$ over any loop that is homotopic to a constant loop (at a point in the domain of the function), but I'm not sure of what else I should say.
The roots of $z^{3}-1$ are $e^{k(2\pi i/3)}$ for $k=0,1,2$ and $$ z^3-1 = (z-1)(z^2+z+1). $$ So $(z-e^{2\pi i/3})(z-e^{-2\pi i/3})=z^2+z+1$, and $$ \frac{1}{(z-e^{2\pi i/3})(z-e^{-2\pi i/3})}=\frac{A}{z-e^{2\pi i/3}}+\frac{B}{z-e^{-2\pi i/3}}, $$ where $A$, $B$ are easily seen to satisfy $A=-B$. Assuming $\gamma$ does not pass through the points $e^{\pm 2\pi i/3}$, you get zero iff the curve $\gamma$ has a winding number around $e^{-2\pi i/3}$ that is the same as the winding number around $e^{2\pi i/3}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that the set of all even permutations in $G$ forms a subgroup of $G$. Show that if $G$ is any group of permutations, then the set of all even permutations in $G$ forms a subgroup of $G$. I know that I need to show the closure, identity, and inverses properties hold. So I need to prove: 1) If $p$ and $q$ are even permutations, then so is $pq$. 2) Identity permutation is an even permutation 3) If $p$ is an even permutation, then so is $p^{-1}$ I am just unsure how to go about setting up each proof. Thanks in advance for the help.
Let $E$ be the set of even permutations in $G$ (which is presumably a group of permutations). Let $p$ and $q$ be elements of $E$. Check to see if $pq^{-1}$ is also an element of $E$. (Note: this checks all three conditions simultaneously). A permutation is called an even permutation if its expression as a product of disjoint cycles has an even number of even-length cycles. Alternatively, a permutation is called an even permutation if it can be written as a product of an even number of transpositions. These two definitions can be seen to be equivalent. The second seems a bit more useful here. So, let $p=p_1p_2\cdots p_{2k}$ and $q=q_1q_2\cdots q_{2j}$ be a representation of $p$ and $q$ as a product of transpositions. We have that $q^{-1}=q_{2j}q_{2j-1}\cdots q_2q_1$ since transpositions are self inverses. Thus, $pq^{-1} = p_1p_2\cdots p_{2k}q_{2j}\cdots q_2q_1$ is indeed a product of an even number of transpositions. Furthermore, $pq^{-1}$ is an element of $G$ since $p$ and $q$ (and thus $q^{-1}$) are elements of $G$ and $G$ is closed under products and inverses. Thus $pq^{-1}\in E$, implying that the identity is an element of $E$ (by taking $p=q$), that it is closed under inverse (by taking $p=id$), and that it is closed under products (by taking $q^{-1}$ instead of $q$) and $E$ is a subgroup of $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Volume of a curved cone Apparently, the volume of this cone is (1/16)π(r^2)h. My question is why this is the case, can someone please geometrically explain the reason behind the 1/16 bit. The radius is supposed to be proportional to the square of its height. Thanks.
\begin{equation} r=a \cdot h^2 \tag{01} \end{equation} \begin{equation} dV=\pi r^{2} dh = \pi a^{2}h^{4}dh \tag{02} \end{equation} \begin{equation} V=\int_{h=0}^{h=H}\pi a^{2}h^{4}dh=\dfrac{\pi a^{2}H^{5}}{5}=\dfrac{\pi R^{2}H}{5} \tag{03} \end{equation} If instead of eq.(01) your curve was \begin{equation} r=a \cdot \sqrt{h^{15}} \tag{04} \end{equation} then you would have 16 in the denominator.
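A crude numerical check of eq. (03), with arbitrary sample values $a=0.3$, $H=2$ (midpoint-rule slicing):

```python
import math

a, H = 0.3, 2.0
R = a * H**2
N = 100000
dh = H / N
V = sum(math.pi * (a * ((k + 0.5) * dh)**2)**2 * dh for k in range(N))
print(V, math.pi * R**2 * H / 5)   # the two values agree to several digits
```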
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The meaning of bounding curve of a surface I am reading Stokes Theorem. I am not able to understand the meaning of bounding curve of a surface. What is the definition of boundary curve of a surface $$z=f(x,y)$$ In particular I am trying to figure out the boundary of the portion of the surface $x^2 + y^2 + z^2 =25$ below $z=4$. Thanks!
There is no boundary for $z = f(x,y)$ unless you specify a domain for $(x,y)$. In your particular example, the boundary is the circle where the plane $z=4$ intersects the sphere $x^2 + y^2 + z^2 = 25$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
continuous and uniformly continuous I remember proving $x^3$ is not uniformly continuous on $\mathbb{R}$. Then I read the proof of a theorem: Suppose $D$ is compact. Function $f: D \rightarrow \mathbb{R}$ is continuous on $D$ if and only if f is uniformly continuous. It's obvious that $f(x)=x^3$ is continuous at $x$ for all $x\in \mathbb{R}$ So does it mean if I bounded the interval $f(x)=x^3$ to, let's say, $[a,b]$ with $a<b$, then $f(x)=x^3$ is uniformly continuous?
Yes. Your function is a polynomial and hence differentiable, and if the derivative exists and is bounded on an interval, then the function is Lipschitz there (by the mean value theorem) and hence uniformly continuous. For $f(x)=x^3$ we have $f'(x)=3x^2$, which is bounded on your interval: $|f'(x)|\le 3\max(a^2,b^2)$ for all $x$ in $[a,b]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
"Natural" example of cosets Do you know natural/concrete/appealing examples of right/left cosets in group theory ? This notion is a powerful tool but also a very abstract one for beginners so this is why I'm looking for friendly examples.
While finite group examples may be easier to first digest, cosets naturally come up in calculus as a way to say what indefinite integrals are: the indefinite integral of an integrable function $f$ is the coset $ \{ F + c : c \in \mathbb R \} = F + \mathbb R$, where $F$ is some antiderivative of $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1706973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57", "answer_count": 14, "answer_id": 12 }
How to simplify this surd: $\sqrt{1+\frac{\sqrt{3}}{2}}+\sqrt{1-\frac{\sqrt{3}}{2}}$ $$\sqrt{1+\frac{\sqrt{3}}{2}}+\sqrt{1-\frac{\sqrt{3}}{2}} = x$$ We have to find the value of $x$. Taking the terms to other side and squaring is increasing the power of $x$ rapidly, and it becomes unsolvable mess. I think the answer lies in simplification, but can't do it. Also I have tried taking $\sqrt{2}$ common, but it doesn't help.
Hint: square the surd and note the cross term may be simplified. Also note that $\sqrt{3}/2 = \cos{(\pi/6)}$ and you may use the double angle formula $1 +\cos{t} = 2 \cos^2{(t/2)}$, etc.
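Following the hint, squaring gives $x^2 = 2 + 2\sqrt{1-\tfrac34} = 3$, so $x=\sqrt3$; a one-line numerical confirmation:

```python
import math
x = math.sqrt(1 + math.sqrt(3) / 2) + math.sqrt(1 - math.sqrt(3) / 2)
print(x, math.sqrt(3))   # both approximately 1.7320508
```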
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
The concept of "almost everywhere" wrt to zero-dimensional measure $\mathcal H^0$ From my understanding, a function $u$ is defined $\mathcal H^0$-a.e. on an interval $I$ means that $u$ is only not "defined" on a set of points $A \subset I$ such that $\mathcal H^0(A)=0$ However, I think $\mathcal H^0(A)=0$ means that $A$ is empty, and hence implies that $u$ is defined everywhere in $I$. Because even if $A$ contains only a point, then $\mathcal H^0(A)=1>0$. My question: can somebody confirm for me that "defined $\mathcal H^0$ a.e. is equivalent to defined everywhere" is correct? or am I making some trivial mistake? PS: $\mathcal H^0$ means Housdroff measure in $0$ dimensions. Also, we may think it as a counting measure.
Yes, the zero-dimensional Hausdorff measure $\mathcal H^0$ is simply the counting measure, and the only null set for this measure is the empty set. Example of usage: if $A\subset \mathbb{R}^2$ is a Borel set of zero length (meaning $\mathcal H^1(A)=0$), then for almost every $x\in\mathbb{R}$ the slice $$A_x = \{y\in \mathbb{R}: (x,y)\in A\}$$ has zero $\mathcal{H}^0$ measure. (This is an instance of a more general result.) In practical terms, this means the intersection of $A$ with almost every vertical line is empty.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Necessity of Galerkin Approximation in Existence Proof for Parabolic PDEs I'm studying the proof of the existence of solutions to linear parabolic PDEs, e.g. the heat equation $u_t = \Delta u + f$. The standard technique seems to be to do a Galerkin approximation, see e.g. Section 6.5 in these notes. My question is: In the notes mentioned above, a proof of the existence of solutions in the finite dimensional case is given (see Proposition 6.5), but I do not see where this proof makes use of the fact that the system is finite dimensional. If this were not required, however, then what's the point in making a Galerkin approximation in the first place?
I think I've found the answer. The proof is based on showing that the map $$ \Phi(u) := u_0 + \int_0^t \Delta u + f \, dt $$ is a contraction and hence has a fixed point. In the infinite-dimensional problem, this definition of $\Phi$ doesn't make sense since $\Delta : H^1_0 \to H^{-1}$ and thus it is not clear what the domain and range of $\Phi$ should be. In the finite-dimensional system, on the other hand, we replace $\Delta$ by an operator $\Delta_n : \mathbb{R}^n \to \mathbb{R}^n$ and everything works out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Deck transformation of $p : Y \to X : z \mapsto z^3 - 3z$ Let $X = \mathbb{C} \setminus \{ \pm 2 \}$ and $Y = \mathbb{C} \setminus \{ \pm 1, \pm 2 \}$. The map $$ p : Y \to X : z \mapsto z^3 - 3z $$ is a 3-branched covering. Problem: Find $\operatorname{Deck}(Y/X)$, the group of Deck transformations of $Y$. My try: My only idea is that $\operatorname{Deck}(Y/X) = \pi_1(Y)$ when $Y$ is the universal covering, but I don't think it is.
Too long for a comment. This is an idea for a solution. It seems that the covering is not normal. Let $a$, $b$, $c$, $d$ be loops around $-2,-1,1,2$ which generate $\pi_1(Y)=F(a,b,c,d)$ and $u,v$ be loops around $-2,2$ which generate $\pi_1(X)=F(u,v)$; then $p_{*}(a)=u^3$, $p_*(b)=uv^2$, $p_{*}(c)=u^2v$, $p_{*}(d)=v^3$ according to the multiplicities of $-2,-1,1,2$. But this claim needs a proof. Let $f:Y\rightarrow Y$ be a deck transformation which is not the identity deck transformation. Then it must interchange $a$ and $c$, or $b$ and $d$. Assume that $f_*(a)=c$; then $u^3=p_*(a)=p_*f_*(a)=p_*(c)=u^2v$, a contradiction. Similarly $f_*(b)=d$ gives a contradiction. So $f_*(a)=a$, $f_*(b)=b$, $f_*(c)=c$ and $f_*(d)=d$, and hence we have a contradiction with the assumption that $f$ is not the identity deck transformation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
For which parameter values does the series $\sum 1/(n^\beta + (-1)^n n^\alpha)$ converge? Given that $0 < \beta < \alpha \le 1$, for which $\alpha,\beta$ does the series $\sum^\infty_{n=1}\frac {1}{n^\beta + (-1)^n n^\alpha}$ converge ?
Write $a_n$ for the general term of the series. Also, let $\delta = a-b$. Set $$b_n = \frac{(-1)^n}{ n^{a}} - a_n = \frac{(-1)^n}{n^a}-\frac{(-1)^n}{n^a (1+ (-1)^n n^{-\delta})}=\frac{(-1)^n}{n^a} \times \left (1- \frac{1}{1+ (-1)^n n^{-\delta}}\right)=(*).$$ Now $$(*)= \frac{(-1)^n}{n^a} \times \frac{(-1)^n n ^{-\delta}}{1+(-1)^n n^{-\delta}}=\frac{1}{n^{a+\delta}}\times \frac{1}{1+(-1)^n n^{-\delta}}.$$ Therefore $\sum b_n$ is a series with positive terms (for $n\geq 2$, where $1+(-1)^n n^{-\delta}>0$), and by comparison it converges if and only if $$a+\delta =2a-b >1.$$ Since $\sum (-1)^n /n^a$ converges for all $a>0$, and $a_n = (-1)^n/n^a - b_n$, it follows that $\sum a_n $ converges if and only if $\sum b_n$ converges, that is if and only if $2a-b>1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Unknown Inequality $$ \left( \sqrt{3}\sqrt{y(x+y+z)}+\sqrt{xz}\right)\left( \sqrt{3}\sqrt{x(x+y+z)}+\sqrt{yz}\right)\left( \sqrt{3}\sqrt{z(x+y+z)}+\sqrt{xy}\right) \leq 8(y+x)(x+z)(y+z)$$ I can prove this inequality, but i need know if this inequaliy is known...
We need to prove that $$\prod\limits_{cyc}(a\sqrt{3(a^2+b^2+c^2)}+bc)\leq8\prod\limits_{cyc}(a^2+b^2),$$ which is true even for all reals $a$, $b$ and $c$. Indeed, let $a+b+c=3u$, $ab+ac+bc=3v^2$, where $v^2$ can be negative, and $abc=w^3$. Hence, it's obvious that the last inequality is equivalent to $f(w^3)\leq0$, where $f$ is a convex function. Hence, $f$ attains its maximal value at an extremal value of $w^3$. We know that the equation $(x-a)(x-b)(x-c)=0$, or $x^3-3ux^2+3v^2x=w^3$, has three real roots $a$, $b$ and $c$. Thus, the line $y=w^3$ crosses the graph of $y=x^3-3ux^2+3v^2x$ in three points (maybe two of them coincide). Let $u$ and $v^2$ be constants. Hence, $w^3$ attains an extremal value in an equality case of two variables. Id est, it remains to prove our inequality for $b=c=1$, which gives $(x-1)^2(13x^2+34x+25)\geq0$, which is obvious. Done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to determine if the following series converge or not? $\Sigma_{n=1}^{\infty} a_n $ where: * *$ a_n = \frac{1}{\ln(n)^{\ln(n)}}$ *$a_n = \frac{1}{n }-\ln\left( 1+\frac{1}{n}\right)$ in the first case, I really have no idea in the second case, is it correct to say that for $ \frac{1}{n }-\ln\left( 1+\frac{1}{n}\right)$ is (by taylor expansion) $\frac{1}{2n^2}+O(\frac{1}{n^3})$ and therefore, by the limit comparison test converges?Is there any other way? Thanks in advance
For 2. $\begin{array}\\ a_n &= \frac{1}{n }-\ln\left( 1+\frac{1}{n}\right)\\ &= \frac{1}{n }-\int_1^{1+1/n} \frac{dx}{x}\\ &= \frac{1}{n }-\int_0^{1/n} \frac{dx}{1+x}\\ &= \int_0^{1/n} (1-\frac{1}{1+x})dx\\ &= \int_0^{1/n} (\frac{x}{1+x})dx\\ &< \int_0^{1/n} x\,dx\\ &= \frac{x^2}{2}|_0^{1/n}\\ &= \frac{1}{2n^2}\\ \end{array} $ For a lower bound, from the integral, $a_n > 0$. To be more precise, $\begin{array}\\ a_n &= \int_0^{1/n} (\frac{x}{1+x})dx\\ &\gt \frac1{1+1/n}\int_0^{1/n} x\,dx\\ &= \frac1{1+1/n}\frac{x^2}{2}|_0^{1/n}\\ &= \frac1{1+1/n}\frac{1}{2n^2}\\ &= \frac{1}{2n(n+1)}\\ \end{array} $
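A numerical check of the two bounds $\frac{1}{2n(n+1)} < a_n < \frac{1}{2n^2}$ over a range of $n$ (illustration only):

```python
import math
print(all(1 / (2 * n * (n + 1)) < 1 / n - math.log1p(1 / n) < 1 / (2 * n**2)
          for n in range(1, 2000)))   # True
```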
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Distance in Poincaré disk from origin to a point given Let $C$ circle $x^2+y^2=1$ find the distance (Poincaré disk) from $O=(0,0)$ to $(x,y)$ The distance in Poincaré is $d=ln(AB,PQ)$ where AB are a segment of the curve and P and Q are points in the limits of Poincaré disk. Then $A=(0,0)$ and $B=(x,y)$ but I dont know the values of P and Q. I try to use the circunference formula, but I have only two points (A and B) and I need three. Please give me clues, to solve this problem. Thank you.
See the metric entry for the Poincaré Disk Model: $$|u| = \sqrt{x^2 + y^2}$$ $$\delta(u, v) = \frac{2\,|u-v|^2}{(1 - |u|^2)(1 - |v|^2)}$$ In your case, $v = O$ and $|v| = 0$, so $$\delta(u, O) = \frac{2\,|u|^2}{1 - |u|^2}$$ and the distance is $$d(u,O)=\operatorname{arcosh}(1 + \delta(u, O)).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Some proofs about the infinite products; the series and the analyticity Definition A.1.4. The infinite product $\prod_{n=1}^{\infty}(1+a_{n}(x))$, where $x$ is a real or complex variable in a domain, is uniformly convergent if $p_{n}(x)=\prod_{m=k}^{n}(1+a_{n}(x))$ converges uniformly in that domain, for each $k$. Theorem A.1.5. If the series $\sum_{n=1}^{\infty}|a_{n}(x)|$ converges uniformly in some region, then the product $\prod_{n=1}^{\infty}(1+a_{n}(x))$ also converges uniformly in that region. Corollary A.1.6. If $a_{n}(x)$ is analytic in some region of the complex plane and $\prod(1+a_{n}(x))$ converges uniformly in that region, then the infinite product represents an analytic function in that region. They are taken from Special Functions page 596-597. I want to prove Theorem A.1.5 and Corollary A.1.6. Proof: There exists a convergent series $\sum_{n=1}^{\infty}M_{n}$ such that $|a_{n}(x)|\leq M_{n}$, for all $x\in A\subseteq \mathbb{C}$. Note that $|1+a_{m}(x)|\leq 1+|a_{m}(x)|<2M_{m}$, so $$|p_{n}(x)|<2\prod_{m=k}^{n}M_{m}\leq 2\prod_{m=1}^{n}M_{m}=:2C_{n}.$$ We know that $M_{m}$ is bounded for all $1\leq m\leq n$, so the partial products $C_{n}$ must be bounded too. Therefore $2\sum_{n=1}^{\infty}C_{n}<\infty$, and this shows that the infinite product is uniformly convergent on $A$. Is this correct? Is there another way to prove it? I am not sure how to provethe corollary. Any hints?
No, that's totally wrong. For the series to be convergent, you certainly need $M_n \to 0$. But $1 + |a_n| > 1$, which is certainly not bounded by $2 M_n$. Hint: What you need to do is use logarithms: $$\prod_{n=1}^N (1 + a_n(x)) = \exp\left(\sum_{n=1}^N \log(1+a_n(x))\right)$$ As for the corollary, that's just the fact that a uniform limit of analytic functions in a region is analytic there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1707914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can a bounded function always attain its least upper bound on a bounded rectangle in $R^n$? Suppose we have a rectangle $Q$, and $Q\subset R^n$. Then $Q$ is bounded by the definition of higher dimensional rectangles. Suppose $f$ is a bounded function defined on $Q$. Since $f$ is bounded, we can produce its infimum $\inf_Q f$ and supremum $\sup_Q f$. My question is that is it always true that $\exists x_1,x_2\in Q$ with the property that $f(x_1)=\inf_Q f$ and $f(x_2)=\sup_Q f$? If this does not always hold, please provide an example.
Consider the rectangle $Q$ given by $(0,1)^n\subset \mathbb R^n$ with coordinates $x_1\cdots x_n$. Then the function $f(x_1\cdots x_n)=x_1$ is bounded on $Q$ but attains neither its supremum on $Q$ (which is $1$) nor its infimum on $Q$ (which is $0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Counterexample Question Let $f:X\rightarrow Y$ be a morphism of varieties. If $f(X)$ is dense in $Y$, then $\tilde{f}:\Gamma(Y)\rightarrow \Gamma(X)$ is injective, where $\tilde{f}$ is the homomorphism induced by $f$. In fact, if $X$ and $Y$ are affine, then we have if and only if. Can we relax the prerequisites a bit and have $\tilde{f}$ injective $\Rightarrow$ $f(X)$ dense be true even if $Y$ is not affine? I'm inclined to say no, since I could only prove it using the fact that $Y$ is affine. But, this is not a proof that it's impossible. Are there any nice counterexamples out there?
Take $X$ to be an embedding of a closed point into $Y=\mathbb P^1$. Then $\Gamma(Y)\to\Gamma(X)$ is an isomorphism (both rings of global regular functions are just the constants), in particular injective, while the image of $X$ is a single point, which is certainly not dense in $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How do I write $F_1+F_3+F_5+\ldots+F_{2n-1}$ in summation notation? How do I write $F_1+F_3+F_5+\ldots+F_{2n-1}$ in summation notation? $F_i$ represents the Fibonacci sequence. I can't figure out how to write this in summation notation. Clear steps would be marvelous.
Since the indices are the odd numbers $\{1,3,5,7,\dots\}$, and $2n+1$ runs through the odd numbers for $n \geq 0$, the sum of the first $m$ odd-indexed Fibonacci numbers can be written as $$\sum_{n=0}^{m-1} F_{2n+1}.$$ (The upper limit is called $m$ here only because $n$ is being reused as the summation index; with the letters of the question it is $\sum_{i=1}^{n} F_{2i-1}$.)
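As a small illustration (my addition, not part of the original answer), the following sketch checks numerically that this notation reproduces $F_1+F_3+\cdots+F_{2m-1}$, and incidentally that the sum equals $F_{2m}$, a well-known identity; the helper `fib` is my own naming.

```python
def fib(k):
    # F_1 = F_2 = 1, F_3 = 2, ... (the indexing used in the question)
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

m = 6
odd_sum = sum(fib(2 * n + 1) for n in range(m))   # F_1 + F_3 + ... + F_{2m-1}
print(odd_sum)        # 144
print(fib(2 * m))     # 144 as well, i.e. F_{2m}
```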
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Functional Limits and Continuity so here is a problem I have been working on: Assume $g$ is defined and continuous on all of $\mathbb{R}$. If $g(x_0) > 0$ for a single point $x_0 \in \mathbb{R}$, then $g(x)$ is in fact strictly positive for uncountably many points. So far I've come up with the following: (I've already shown there exists a bijective function between any interval $(a, b)$, $a < b$, and $\mathbb{R}$, so I first take that for granted.) Since this is true, any interval $(a, b)$ contains uncountably many points. Thus, the interval $(0, g(x_0))$ contains uncountably many points. Now, here is my issue: how does one use continuity to show that $g(x)$ must contain this interval rigorously? Intuitively, this conclusion seems obvious.
A possible proof is this one: let $\epsilon = \frac{g(x_0)}{2}$. By continuity, there exists $\delta > 0$ such that $$\forall x\in \left]x_0-\delta, x_0+\delta\right[,\quad 0<g(x_0)-\epsilon \leq g(x) \leq g(x_0)+\epsilon.$$ So $g(x) > 0$ for all $x\in \left]x_0-\delta, x_0+\delta\right[$, and as there are uncountably many points in this interval, we have the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof By Induction With Integration Problem I am required to prove this formula by induction$$ \int x^k e^{\lambda x} = \frac{(-1)^{k+1}k!}{\lambda^{k+1}} + \sum_{i=0}^k \frac{(-1)^i k^\underline{i}}{\lambda^{i+1}}x^{k-i}e^{\lambda x}$$ where $k^{\underline i}$ is a falling factorial $k(k-1) \cdots (k-i+1)$ (assuming this to be equal to 1 when $i=0$) and the integral $\int f$ is defined as $\int_0^x f(\xi) d\xi$ My first problem results when trying for $k=0$ as an initial value, I get LHS: $\frac{e^{\lambda x}}{\lambda}$ and RHS: $\frac{e^{\lambda x}}{\lambda} - \frac{1}{\lambda}$ I assumed that as I have solved an integral the lambda on the RHS could be incorporated in the "+C" that the integral produced on the LHS but doesn't the $\int f$ definition mean that there is no +C? Help would also be appreciated to any steps towards solving for $k=n+1$
According to your definition of the operator $\int$, the LHS for $k=0$ becomes $$\int e^{\lambda x}=\int_0^x e^{\lambda\xi}\; d\xi = \left[\frac{e^{\lambda\xi}}{\lambda}\right]_{\xi = 0}^{\xi = x}=\frac{e^{\lambda x}}{\lambda}-\frac1\lambda$$ And working out the right-hand side for $k=0$ will give you the same result.
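Purely as a cross-check (my addition, not part of the original answer), the base case can also be verified symbolically; a sketch assuming SymPy is available:

```python
import sympy as sp

# Verify the k = 0 base case: the definite integral from 0 to x of e^{lambda*xi}
# should equal e^{lambda*x}/lambda - 1/lambda, matching both sides of the formula.
x, xi = sp.symbols('x xi', real=True)
lam = sp.symbols('lambda', positive=True)
lhs = sp.integrate(sp.exp(lam * xi), (xi, 0, x))
rhs = sp.exp(lam * x) / lam - 1 / lam
print(sp.simplify(lhs - rhs))   # 0
```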
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Contradictory equations Question: Find whether the solution set of $$\begin{cases}2x = 1\\ y + 5 = x\\ x = y + 3\end{cases}$$ is a singleton. My attempt: Rewriting the first equation will give us $x = \frac{1}{2}$. The other two equations can be written as $x - y = 5$ and $x - y = 3$. Now, the solution of these equations is $\emptyset$. So we can say that the solution set of $$\begin{cases}2x = 1\\ y + 5 = x\\ x = y + 3\end{cases}$$ is not a singleton. Kindly verify and share some insights.
Geometrically: the first equation is a vertical line, and the last two equations are distinct parallel lines. The vertical line cuts each of the parallel lines transversely, giving two separate points of pairwise intersection, but no point lies on all three lines at once, so the system has no solution: the solution set is $\emptyset$, which in particular is not a singleton. (If the three lines were pairwise non-parallel and non-concurrent, the pairwise intersections would instead be three points enclosing a triangle, again with no common solution.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Root test is stronger than ratio test? I am a little bit confused about the meaning of the phrase "the root test is stronger than the ratio test", and was hoping you will be able to help me figure it out. As far as I can see here: https://www.maa.org/sites/default/files/0025570x33450.di021200.02p0190s.pdf the limit from the ratio test is greater than or equal to the limit from the root test. So, my first question is: is there any example of a series $\Sigma a_n$ such that the limit from the ratio test is exactly 1 (i.e. inconclusive), but the limit from the root test is less than 1? (That is, convergence can be proved by using the root test but not by using the ratio test.) If not, is it correct that the meaning of "stronger" in this phrase is that the limit from the ratio test may fail to exist? (As in the classic example of a rearranged geometric series.) Hope you will be able to help. Thanks! Related posts: Show root test is stronger than ratio test; Inequality involving $\limsup$ and $\liminf$: $ \liminf(a_{n+1}/a_n) \le \liminf((a_n)^{(1/n)}) \le \limsup((a_n)^{(1/n)}) \le \limsup(a_{n+1}/a_n)$; Do the sequences from the ratio and root tests converge to the same limit?
Consider the series $$\sum 3^{-n-(-1)^n}.$$ The root test establishes convergence, but the ratio test fails. Another example: the series with $n$-th term $$a_n=\begin{cases}2^{-n}, & n \text{ odd},\\ 2^{-n+2}, & n \text{ even}.\end{cases}$$ For this second series $a_n^{1/n}\to\frac{1}{2}$ as $n\to\infty$ (whether $n$ is odd or even), hence the series converges by Cauchy's root test. But the ratio test gives $\frac{a_n}{a_{n+1}}=\frac{1}{2}$ when $n$ is odd and $\frac{a_n}{a_{n+1}}=8$ when $n$ is even, so the ratio has no limit and the ratio test fails.
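For concreteness (my addition, not part of the original answer), here is a small numerical illustration of the second series; the helper `a` is my own naming and simply encodes the convention $a_n = 2^{-n}$ for odd $n$ and $a_n = 2^{-n+2}$ for even $n$.

```python
def a(n):
    # a_n = 2^(-n) for odd n, 2^(-n+2) for even n
    return 2.0 ** (-n) if n % 2 == 1 else 2.0 ** (-n + 2)

for n in range(1, 13):
    nth_root = a(n) ** (1.0 / n)     # settles towards 1/2, so the root test applies
    ratio = a(n + 1) / a(n)          # keeps jumping between 2 and 1/8, no limit
    print(n, round(nth_root, 4), round(ratio, 4))
```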
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Use this sequence to prove that there are infinitely many prime numbers. Problem: By considering this sequence of numbers $$2^1 + 1,\:\: 2^2 + 1,\:\: 2^4 + 1,\:\: 2^8 +1,\:\: 2^{16} +1,\:\: 2^{32}+1,\ldots$$ prove that there are infinitely many prime numbers. I am thinking that if I can show that every pair of numbers in the sequence are relatively prime then since each has at least one prime factor this would prove the existence of infinitely many primes. But I am new to discrete mathematics and number theory so I am not sure on how to proceed.
If $2^{2^n}\equiv -1\pmod p$, then show that $2^{2^{m}}\not\equiv-1\pmod p$ for any $m<n$. This is because if $2^{2^m}\equiv -1\pmod p$, then $2^{2^{m+1}}=\left(2^{2^m}\right)^2\equiv(-1)^2=1\pmod p$, and so $$-1\equiv 2^{2^n}=\left(2^{2^{m+1}}\right)^{2^{n-m-1}}\equiv 1\pmod p.$$ That forces $p\mid 2$, i.e. $p=2$. But $p$ can't be $2$, since $p$ divides the odd number $2^{2^n}+1$. So $2^{2^n}+1$ and $2^{2^m}+1$ can't have any prime factors in common if $m\neq n$.
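As an empirical footnote (my addition, not claimed by the answer), one can check pairwise coprimality of the first several terms directly:

```python
from math import gcd

# The first few terms 2^(2^n) + 1 of the sequence; any two of them
# should be coprime, so each one contributes at least one new prime.
terms = [2 ** (2 ** n) + 1 for n in range(7)]
pairwise_coprime = all(
    gcd(terms[i], terms[j]) == 1
    for i in range(len(terms)) for j in range(i + 1, len(terms))
)
print(pairwise_coprime)   # True
```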
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 0 }
joint exponential distribution range problem So the question asks: Let $X, Y$ be random variables with the following joint probability density function: $$f_{X,Y}(x,y) = \begin{cases} kye^{-y} & \text{if } 0\le|x|\le y\\ 0 & \text{otherwise} \end{cases}$$ Find the value of the constant $k$. So far, I got: $$ 1 = \int_{-y}^y \int_{0}^{\infty} kye^{-y} \,dy \, dx, $$ but I have a problem calculating this: I ended up getting $1 = 2ky$, which leads to $k=0.5/y$, which is clearly wrong. So do I have a wrong range for the $dy\, dx$?
Your order of integration is incorrect, which is why your computation doesn't work. The outermost integral cannot be a function of any other variables of integration. If you write $$\int_{x=-y}^y \int_{y=0}^\infty f(x,y) \, dy \, dx,$$ then what you have essentially done is integrated a function $f$ with respect to $y$, but then the interval of integration with respect to $x$, itself being a function of $y$, makes no sense. Instead, your order of integration for the region under consideration should be $$\int_{y=0}^\infty \int_{x=-y}^y f(x,y) \, dx \, dy.$$ For a given value of $y$, the inner integral with respect to $x$ is evaluated on the interval $x \in [-y,y]$, which now makes sense: we are integrating $f$ along horizontal "strips" in the $xy$-plane.
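Just as a cross-check of the corrected order of integration (my addition, not part of the original answer), a short sketch assuming SymPy is available solves for the constant:

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)
k = sp.symbols('k', positive=True)

# Integrate k*y*exp(-y) over -y <= x <= y first, then over 0 <= y < oo.
total = sp.integrate(sp.integrate(k * y * sp.exp(-y), (x, -y, y)), (y, 0, sp.oo))
print(total)                          # 4*k
print(sp.solve(sp.Eq(total, 1), k))   # [1/4]
```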
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to show that $f(x; \alpha, \beta) = \frac{1}{\beta^{\alpha} \Gamma(\alpha)}x^{\alpha-1}e^{-x/\beta}$ is a pdf? I am having trouble proving the following: A continuous random variable $X$ is said to have a gamma distribution with parameters $\alpha > 0$ and $\beta > 0$ if it has a pdf given by $$f(x; \alpha, \beta) = \frac{1}{\beta^{\alpha} \Gamma(\alpha)}x^{\alpha-1}e^{-x/\beta}$$ for $x>0$, and $0$ otherwise. Given that apparently this is a pdf by definition, I do not know how to prove it is a pdf. My guess is to check that the integral of the density equals $1$. Is this correct? Is that enough?
$f(x)$ is a pdf if $f(x) \geq 0$ for all $x$ and $\int_{-\infty}^{\infty} f(x)\, dx = 1$. For the gamma density both conditions are straightforward: the integrand is nonnegative (since $\alpha,\beta>0$ and $x>0$), and the substitution $u = x/\beta$ gives $$\int_{0}^{\infty} \frac{x^{\alpha-1}e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\, dx = \frac{1}{\Gamma(\alpha)}\int_{0}^{\infty} u^{\alpha-1}e^{-u}\, du = 1$$ by the definition of the Gamma function.
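Purely as an empirical cross-check (my addition, not part of the original answer), one can also integrate the density numerically for a few parameter choices; a sketch assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gamma_pdf(x, alpha, beta):
    # The density from the question, for x > 0.
    return x ** (alpha - 1) * np.exp(-x / beta) / (beta ** alpha * gamma(alpha))

for alpha, beta in [(1.5, 2.0), (2.0, 3.0), (5.0, 0.5)]:
    total, _ = quad(gamma_pdf, 0, np.inf, args=(alpha, beta))
    print(alpha, beta, total)   # each total should be very close to 1.0
```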
{ "language": "en", "url": "https://math.stackexchange.com/questions/1708922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Duality of $L^p$ spaces Let $p,q\in(1,\infty)$ be such that $1/p+1/q=1$ and let $(\Omega, \mathcal A,\mu)$ be a $\sigma$-finite measure space. Claim: The map $$\phi:L^q(\Omega)\to \left(L^p(\Omega) \right)^*,\quad \phi(g)(f)=\int_\Omega fgd\mu$$ is an isometric isomorphism. Proving that $\phi$ is well-defined, linear and continuous was not too difficult. I also proved that $\|\phi(g)\|_{(L^p)^*}\leq \|g\|_{L^q}$ holds, but failed at showing the reverse inequality. This leads me to Question 1: What would be a function $f\in L^p(\Omega)$ with $$\int_{\Omega} fgd\mu=\|g\|_{L^q}\quad ?$$ To prove that $\phi$ is an isomorphism, it suffices to prove that it is bijective. I can prove injectivity but not surjectivity, hence Question 2: Why is $\phi$ surjective?
Question 1: choose $f=|g|^{q-1}\cdot \operatorname{sign}(g)$. Question 2: take an arbitrary functional $\Lambda \in \Big(L^p(\Omega) \Big)^{*}$ and define $$\nu (A)=\Lambda( \mathbb{1}_{A}).$$ Then $\nu$ is a (signed) measure, absolutely continuous with respect to $\mu$, so by the Radon-Nikodym theorem we get a $g$ that works when $f$ is an indicator. By linearity, $g$ works for simple functions. For general $f$ we have simple functions $f_n\to f$ with $|f_n|\leq|f|$ (the usual construction), and so by the Dominated Convergence Theorem $g$ works for all $f$; one still has to check that $g\in L^q$, using the boundedness of $\Lambda$. Q.E.D
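To spell out the Question 1 computation (my gloss, not in the original answer), assume $g\neq 0$ and take $f=|g|^{q-1}\operatorname{sign}(g)$. Since $(q-1)p=q$, $$\|f\|_p^p=\int_\Omega|g|^{(q-1)p}\,d\mu=\|g\|_q^q, \qquad \phi(g)(f)=\int_\Omega |g|^{q}\,d\mu=\|g\|_q^q=\|g\|_q\cdot\|g\|_q^{q/p}=\|g\|_q\,\|f\|_p,$$ so $\|\phi(g)\|_{(L^p)^*}\geq\|g\|_q$, which is the reverse inequality asked for in Question 1.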
{ "language": "en", "url": "https://math.stackexchange.com/questions/1709035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding number of roots using Rolle's Theorem, and depending on parameter I need to count the number of real solutions for $ f(x) = 0 $ but I have an $m$ in there. $$ f(x) = x^3+3x^2-mx+5 $$ I know I need to study $m$ to get the number of roots, but I don't know where to begin. Any suggestions?
You can use the discriminant of the cubic function to find the nature of the roots. The general cubic equation is $$ax^3+bx^2+cx+d=0$$ so in your case $$a=1,\ b=3,\ c=-m,\ d=5$$ In general, the discriminant is $$\Delta=18abcd-4b^3d+b^2c^2-4ac^3-27a^2d^2$$ You can substitute in your values and get a cubic expression in $m$. Then your equation has * *three distinct real roots, if $\Delta>0$. *one or two distinct real roots, if $\Delta=0$. *one real roots (and two complex roots), if $\Delta<0$. To really finish your answer, you would need to distinguish one or two real roots in the second case. You may also want so solve for $m$ in each (in)equality, to give simpler conditions on $m$. Since you asked for "suggestions" and "where to begin," I'll leave it at that. Note that this did not need Rolle's theorem. Do you really need to use it?
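If symbolic software is available, the discriminant computation can be delegated; a sketch (my addition, not part of the original answer) using SymPy:

```python
import sympy as sp

x, m = sp.symbols('x m', real=True)
cubic = x**3 + 3*x**2 - m*x + 5

disc = sp.discriminant(cubic, x)
print(sp.expand(disc))   # expected: 4*m**3 + 9*m**2 - 270*m - 1215
print(sp.factor(disc))   # expected: (m - 9)*(4*m**2 + 45*m + 135)
```

If the factorization comes out as printed, the quadratic factor is always positive (its discriminant $45^2-4\cdot4\cdot135=-135$ is negative and its leading coefficient is positive), so $\Delta$ has the sign of $m-9$: three distinct real roots for $m>9$, a repeated root when $m=9$, and a single real root for $m<9$.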
{ "language": "en", "url": "https://math.stackexchange.com/questions/1709129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
required knowledge for solving these equations Hi everyone, there is a kind of function question that confuses me. I have already studied the usual prerequisites on functions, but they were not enough for solving this new type of question; I studied IB diploma math as well. I will write down my questions, and I would appreciate help with the solutions. Please also tell me which book or sources would help me master this specific topic. * *Given that $f(x)+f(x+1)=2x+4$, find $f({1\over2})$. The official solution suggests assuming a linear function, but that does not work on other questions of the same type (like the ones I write below), and it is hard to memorize a specific question and its solution during a test: $$f(x)+f(x+1)=2x+4$$ $$2ax+a+2b=2x+4$$ $$a=1, \quad b={3\over2}$$ $$f(x)=x+{3\over2}$$ $$f({1\over2})=2$$ I tried to use the same technique for the following question: *Given that $f(x)+2f({1\over x})=2x+4$, find $f(2)$. $$ax+b+2\left({a\over x}+b\right)=2x+4$$ $$ax+{2a\over x}+3b=2x+4$$ In question 1 we set $2ax=2x$ so that $a=1$, but that did not work for this question. Also I could not solve the following questions: *Given that $f(x+y)=f(x)f(y)$ and $f(2)=3$, find $f(4)$. *Given that $f(xy)=f(x)+f(y)$ and $f(3)=2$, find $f(27)$. P.S. I revisited the first and second questions again; there is a solution which worked for the first one but not for the second, and the official source says they are both linear functions, which does not help me solve the questions. What I did: in $f(x)+f(x+1)=2x+4$ we have $f(x+1)$, which I took to mean $f(x)+1$ (shifting the graph of $f$ along the $x$ axis), so $2f(x)+1=2x+4$, and for $f({1\over 2})$: $$2f({1\over 2})+1=5,\qquad 2f({1\over 2})=4,\qquad f({1\over 2})=2.$$ But when I tried the same kind of reasoning on the second question, $f(x)+2f({1\over x})=2x+4$, I said $f({1\over x})$ is $x-({x^2-1\over x})$, so for $f(2)$: $$f(2)+2\left(f(2)-{3\over 2}\right)=4+4,\qquad 3f(2)-3=8,\qquad f(2)= \frac{11}{3}.$$ This answer is incorrect, because the true answer is $f(2) = \frac{2}{3}$, and I do not understand what the "linear function" rule means in this question.
The functional equation $f(x)+f(x+1)=2x+4$ has many solutions, unless you require additional properties. You can choose $f(\frac{1}{2})$ arbitrary! Choose, for example, $f(\frac{1}{2})=42$. Then the equation $f(\frac{1}{2})+f(\frac{3}{2})=2\times\frac{1}{2}+4$ implies $f(\frac{3}{2})=5-42=-37$. Further, you have $f(\frac{3}{2})+f(\frac{5}{2})=2\times \frac{3}{2}+4$ which uniquely determines $f(\frac{5}{2})=44$. In this way, you can compute $f(\frac{7}{2})$, $f(\frac{9}{2}), \ldots$, but also $f(-\frac{1}{2})$, $f(-\frac{3}{2})$ etc. For numbers $x$ that are not of the form $\pm\frac{1}{2}$, $\pm\frac{3}{2},\ldots$, you can still keep $f(x)$ to be the original linear function with no change. Then $f(x)+f(x+1)=2x+4$ is satisfied for every $x$ and $f(\frac{1}{2})=42$.
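To make the non-uniqueness tangible (my addition, not part of the original answer), here is a small script that re-assigns $f$ on the coset $\tfrac12+\mathbb Z$ exactly as in the answer, keeps the linear formula elsewhere, and checks the functional equation at a few sample points; the names `vals` and `f` are my own.

```python
from fractions import Fraction

# Redefine f on 1/2, 3/2, 5/2, ... starting from the arbitrary choice f(1/2) = 42,
# propagating with f(x+1) = 2x + 4 - f(x); keep f(x) = x + 3/2 everywhere else.
vals = {Fraction(1, 2): Fraction(42)}
for t in range(30):
    x = Fraction(1, 2) + t
    vals[x + 1] = 2 * x + 4 - vals[x]

def f(x):
    return vals.get(x, x + Fraction(3, 2))

for x in [Fraction(1, 2), Fraction(7, 2), Fraction(0), Fraction(13, 4)]:
    assert f(x) + f(x + 1) == 2 * x + 4
print("f(1/2) =", f(Fraction(1, 2)), "and the functional equation still holds")
```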
{ "language": "en", "url": "https://math.stackexchange.com/questions/1709233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Deciding whether a complex function has a primitive in any open subset of $\mathbb{C}$ I need to decide whether $g(z)=\bar{z}$ has a primitive in any open subset of $\mathbb{C}$. I'm struggling to decide where to start. I was thinking of using Cauchy's integral theorem i.e. if $g(z)$ is holomorphic then $$\int_C g(z) \: \text{d}z=0,$$ where $C$ is a closed path. But I'm really not sure. Some tips would be great!
$\overline{z}$ is not even holomorphic. Let $\gamma$ be the unit circle with the positive orientation; then $$ \int_\gamma \overline{z}\,dz = i \int_0^{2\pi} \overline{e^{i\phi}} e^{i\phi}\,d\phi = i\int_0^{2\pi} \,d\phi = 2\pi i \neq 0. $$ By Goursat's theorem, $\overline{z}$ cannot be holomorphic in the unit disk, and by translating and rescaling $\gamma$ (the integral over a circle of radius $r$ centred at $z_0$ works out to $2\pi i r^2$) the same argument shows $\overline{z}$ is not holomorphic in any disk. For the original question about primitives: every nonempty open set contains such a circle, and a primitive of $\overline{z}$ would force the integral over that closed path to vanish, so $\overline{z}$ has no primitive in any nonempty open subset of $\mathbb{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1709324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Stuck on basic limit problem: $\lim_{x \to 0} \frac{\sin(\tan x)}{\sin x}$ Consider $\lim_{x \to 0} \frac{\sin(\tan x)}{\sin x}$. The answer is $1$. This is clear intuitively since $\tan x ≈ x$ for small $x$. How do you show this rigorously? In general, it does not hold that $\lim_{x \to p} \frac{f(g(x))}{f(x)} = 1$ if $g(x) - x \to 0$ as $x \to p$. No advanced techniques like series or L'Hôpital. This is an exercise from a section of a textbook which only presumes basic limit laws and continuity of composite continuous functions. This should be a simple problem but I seem to be stuck. I've tried various methods, including $\epsilon-\delta$, but I'm not getting anywhere. The composition, it seems to me, precludes algebraic simplification.
Recall that $\tan x = \frac{\sin x}{\cos x}$ and that $\cos x = \sqrt{1 - \sin^2 x}$. Let $u = \sin x$ \begin{align} \lim_{x \to 0} \frac{\sin(\frac{\sin x}{\cos x})}{\sin x} &= \lim_{x \to 0} \frac{\sin(\frac{\sin x}{\sqrt{1 - \sin^2 x}})}{\sin x}\\ &= \lim_{u \to 0} \frac{\sin \frac{u}{\sqrt{1-u^2}}}{\frac{u}{\sqrt{1 - u^2}}} \frac{1}{\sqrt{1 - u^2}}\\ &= 1 \cdot \frac{1}{\sqrt{1 - 0}} = 1 \end{align}
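A quick numerical sanity check of the limit (my addition, not part of the original answer), assuming NumPy is available:

```python
import numpy as np

# The ratio sin(tan x)/sin x should approach 1 as x -> 0.
for x in [0.5, 0.1, 0.01, 0.001, 0.0001]:
    print(x, np.sin(np.tan(x)) / np.sin(x))
```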
{ "language": "en", "url": "https://math.stackexchange.com/questions/1709415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 2 }
Are $\mathbb{R}$ and $\mathbb{R}^n$ isomorphic for $n > 1$? My linear algebra class recently covered isomorphisms between vector spaces. Much emphasis was placed on the fact that $\mathbb{R}^m$ is isomorphic to particular subsets of $\mathbb{R}^n$ for positive $m < n$. For example, $\mathbb{R}^2$ is isomorphic to every plane in $\mathbb{R}^3$ passing through the origin. My question is whether $\mathbb{R}$ is isomorphic to $\mathbb{R}^n$ for all $n > 1$. It follows from the fact that $\lvert \mathbb{R} \rvert = \lvert \mathbb{R}^n \rvert$ that there exist bijections $\phi : \mathbb{R} \rightarrow \mathbb{R}^n$ for all $n > 1$, but showing that these sets are isomorphic as vector spaces would require showing that such $\phi$ are linear. The standard constructions don't appear to give linear maps, but that doesn't preclude the existence of some linear mapping. I'm rather surprised that my book didn't answer this question, because it seems like a natural question to ask. That, or the answer is obvious and I'm overlooking some important fact.
I have absolutely no doubt that your book covers this point. Two vector spaces are isomorphic if and only if they have the same dimension. So $\mathbb{R}$ (dimension $1$) cannot be isomorphic to any $\mathbb{R}^n$ (dimension $n$) if $n>1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1709480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }