Two examples of modules: quotienting and direct sum, need some clarification. * *Any ring $A$ has the natural structure of a left, resp. right, module over itself. A submodule $J \subset A$ is just the same thing as a left, resp. right, ideal of $A$. Therefore, the set $A/J$ also has the natural structure of a left, resp. right, $A$-module. *For any ring $A$ and an integer $n > 0$, the abelian group $A^n = A \oplus \ldots \oplus A$ of column, resp. row, vectors has the structure of a left, resp. right, $\text{M}_n(A)$-module. I don't quite follow this, as it's quite terse. Can anybody help clarify what is being said here? Specifically: * *Can anybody explain to me what the multiplication $R×M \to M$ is here, in both these cases? *How do we verify the module axioms here, in both cases? *What is the intuition behind working with the $A$-module $A/J$, and what is the intuition behind working with the $\text{M}_n(A)$-module $A^n$?
* *In $A/J$ the elements are of the form $a+J$ for some $a\in A$. Now what is the natural choice to multiply this by $a'\in A$? Well, the easiest thing to do is to take $a'a+J$, which does the job (and is well-defined precisely because $J$ is a left ideal). For the direct sum, remember that you can define $A^n$ as the set of functions from $\{1, \dots , n\}$ to $A$; here the natural choice for an action of $A$ on a function $f$ is given by $(a\cdot f)(i)=a\,f(i)$. *I assume that you know that both constructions in 1. have a group structure. Then you just have to check that this scalar multiplication is well-defined and that the axioms for scalar multiplication are satisfied. *Considering $A^n$ as an $\text{M}_n(A)$-module is just as in linear algebra: an $n\times n$ matrix with entries in $A$ gives a linear map from $A^n$ to $A^n$. For the other, you may want some geometric interpretation, like taking the ideal of (continuous, smooth, or whatever you want) functions which vanish at a point $x$; when you take the quotient, two functions become identified when they agree at $x$. This tells you that you can multiply those local functions by global functions. Another nice thing: when $A$ is a PID, any finitely generated module $M$ is of the form $A^n \oplus A/J_1 \oplus \dots \oplus A/J_m$. This is the elementary divisor theorem.
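As a concrete sanity check (not part of the original answer), here is a small Python sketch of both actions, taking $A=\mathbb Z$, $J=4\mathbb Z$ and $n=2$ — all of these choices and names are ours:

```python
# A concrete sanity check of both constructions, with A = Z, J = 4Z, n = 2.

def quot_mul(a, coset, J=4):
    """Left action of A = Z on A/J: a.(x + J) = a*x + J, cosets stored as x % J."""
    return (a * coset) % J

# Well-definedness: representatives differing by an element of J agree.
for a in range(-5, 6):
    for x in range(8):
        assert quot_mul(a, x) == quot_mul(a, x + 4)

# Module axioms (a+b).m = a.m + b.m and (ab).m = a.(b.m), checked on samples.
for a in range(-3, 4):
    for b in range(-3, 4):
        for x in range(4):
            assert quot_mul(a + b, x) == (quot_mul(a, x) + quot_mul(b, x)) % 4
            assert quot_mul(a * b, x) == quot_mul(a, quot_mul(b, x))

def mat_vec(M, v):
    """Left action of M_2(Z) on column vectors in Z^2."""
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

M, N, v = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [5, -7]
MN = [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
# (MN).v = M.(N.v): the matrix action is associative, as a module action must be.
assert mat_vec(MN, v) == mat_vec(M, mat_vec(N, v))
```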
{ "language": "en", "url": "https://math.stackexchange.com/questions/1925385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find $\int\limits^{\infty}_{0}\frac{1}{(x^8+5x^6+14x^4+5x^2+1)^4}dx$ I was asked to prove that $$\int\limits^{\infty}_{0}\frac{1}{(x^8+5x^6+14x^4+5x^2+1)^{4}}dx=\pi\frac{14325195794+(2815367209\sqrt{26})}{14623232(9+2\sqrt{26})^\frac{7}{2}}$$ I checked the result numerically and the first digits are correct, using WolframAlpha: $$\int\limits^{\infty}_{0}\frac{1}{(x^8+5x^6+14x^4+5x^2+1)^4}dx\approx 0.19874620328$$ I tried to start with a trig substitution, but the high power in the integral makes it more complicated. Is there any way to evaluate this integral?
Hint. A route. One may recall the following result, which goes back at least to G. Boole (1857). Proposition. Let $f \in L^1(\mathbb{R})$ and let $f$ be an even function. Then $$ \int_{-\infty}^{+\infty}x^{2n}f\left(x-\frac1x\right) dx=\sum_{k=0}^n \frac{(n+k)!}{(2k)!(n-k)!}\int_{-\infty}^{+\infty} x^{2k}f(x)\: dx. \tag1 $$ Then one may write $$ \begin{align} &\int_0^{+\infty}\frac{1}{(x^8+5x^6+14x^4+5x^2+1)^{4}}\:dx \\\\&=\int_0^{+\infty}\frac{x^{-16}}{\left(\left(x^4+\dfrac1{x^4}\right)+5\left(x^2+\dfrac1{x^2}\right)+14\right)^{4}}\:dx \\\\&=\int_0^{+\infty}\frac{x^{-16}}{\left(\left[\left(x-\dfrac1x\right)^2+2\right]^2+5\left(x-\dfrac1x\right)^2+22\right)^{4}}\:dx \\\\&=\int_0^{+\infty}\frac{x^{14}}{\left(\left[\left(x-\dfrac1x\right)^2+2\right]^2+5\left(x-\dfrac1x\right)^2+22\right)^{4}}\:dx \qquad \left(x \to \dfrac1x \right) \\\\&=\frac12\int\limits^{\infty}_{-\infty}\frac{x^{14}}{\left(\left[\left(x-\dfrac1x\right)^2+2\right]^2+5\left(x-\dfrac1x\right)^2+22\right)^{4}}\:dx \\\\&=\frac12\sum_{k=0}^7 \frac{(7+k)!}{(2k)!(7-k)!}\int_{-\infty}^{+\infty} \frac{x^{2k}}{\left(\left(x^2+2\right)^2+5x^2+22\right)^{4}}\:dx \qquad (\text{using}\,\,(1)) \\\\&=\sum_{k=0}^7 \frac{(7+k)!}{(2k)!(7-k)!}\int_0^{+\infty} \frac{x^{2k}}{\left(x^4+9x^2+26\right)^{4}}\:dx \\\\&=\pi\:\frac{14325195794+(2815367209\sqrt{26})}{14623232(9+2\sqrt{26})^\frac{7}{2}} \end{align} $$ where we have concluded by using Theorem $3.1$ (p.$6$) here, in G. Boros and V. Moll's paper. (Note $\left(x^2+2\right)^2+5x^2+22 = x^4+9x^2+26$.)
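A quick numerical confirmation of the stated closed form (not part of the answer's argument; this only checks the final identity, using mpmath for the quadrature):

```python
# Numerical sanity check of the closed form against the integral itself.
from mpmath import mp, mpf, sqrt, pi, quad

mp.dps = 30
f = lambda x: 1 / (x**8 + 5*x**6 + 14*x**4 + 5*x**2 + 1)**4
numeric = quad(f, [0, mp.inf])
closed = pi * (14325195794 + 2815367209 * sqrt(26)) / (14623232 * (9 + 2*sqrt(26))**(mpf(7)/2))
assert abs(numeric - closed) < mpf(10)**-15
print(numeric)  # ≈ 0.198746...
```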
{ "language": "en", "url": "https://math.stackexchange.com/questions/1925458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Proving $\angle QAP=45^\circ$ if $ABCD$ is a square with points $P$ in $BC$, $Q$ in $CD$ satisfying $\overline{BP}+\overline{DQ}=\overline{PQ}$ Here is the problem: Let $ABCD$ be a square with points $P$ in $BC$, $Q$ in $CD$ satisfying $\overline{BP}+\overline{DQ}=\overline{PQ}$. Prove that $\angle QAP=45^\circ$. So far I have been trying to show that $\overline{BP}=\overline{DQ}$ so that the sum of angles on both sides of $\angle QAP$ is $45^\circ$ Any hint or guidance will be great, Thanks in advance.
Using the cosine rule:$$|AP|^2+|AQ|^2-2|AP||AQ|\cos\alpha=|PQ|^2$$Replacing $|AP|,|AQ|$ and $|PQ|$, where $r$ is the side of the square and $a=|BP|, b=|DQ|$:$$|AP|=\sqrt{a^2+r^2}$$$$|AQ|=\sqrt{b^2+r^2}$$$$|PQ|=a+b$$ This gives:$$a^2+r^2+b^2+r^2-2\sqrt{a^2+r^2}\sqrt{b^2+r^2}\cos\alpha=(a+b)^2$$$$\implies2r^2-2\sqrt{a^2+r^2}\sqrt{b^2+r^2}\cos\alpha=2ab$$Note that $b =\frac{r^2-ar}{r+a}$, which follows from expanding the side condition $|PQ|^2=(r-a)^2+(r-b)^2=(a+b)^2$. Substituting $b$ and simplifying further:$$\cos\alpha=\frac{r^2(r+a)-a(r^2-ar)}{\sqrt{a^2+r^2}\sqrt{2r^2(a^2+r^2)}}\implies \cos\alpha=\frac{1}{\sqrt{2}}$$$$\implies \alpha=45°$$
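A numeric check of this computation (our own setup: unit square with $A=(0,0)$, $B=(1,0)$, $C=(1,1)$, $D=(0,1)$, $P$ on $BC$ with $BP=a$, $Q$ on $CD$ with $DQ=b$):

```python
# Verify numerically that BP + DQ = PQ forces the angle QAP to be 45 degrees.
import math

r = 1.0
for a in [0.1, 0.3, 0.5, 0.8]:
    b = (r*r - a*r) / (r + a)            # the value forced by BP + DQ = PQ
    P, Q = (1.0, a), (b, 1.0)
    # confirm the side condition BP + DQ = PQ
    assert abs((a + b) - math.dist(P, Q)) < 1e-12
    # angle QAP at the vertex A = (0, 0), via the dot product
    cos_alpha = (P[0]*Q[0] + P[1]*Q[1]) / (math.hypot(*P) * math.hypot(*Q))
    assert abs(math.degrees(math.acos(cos_alpha)) - 45.0) < 1e-9
```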
{ "language": "en", "url": "https://math.stackexchange.com/questions/1925585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Show that if a|c, b|c and gcd(a,b)=1, then ab|c Let $a,b,c \in \mathbb{Z}$ such that $a|c$, $b|c$, gcd$(a,b)=1$. Prove that $ab|c$. My thoughts so far: By the Unique Factorization Theorem, we can rewrite $a$ as $p_1^{\alpha_1}p_2^{\alpha_2}...p_k^{\alpha_k}$ where $p_i$'s are primes that make up $a$. Similarly, we can write $b=q_1^{\beta_1}q_2^{\beta_2}...q_k^{\beta_k}$, but with the restriction that $p_i\neq q_j$, $\forall i,j$. Then, by definition of "$|$," we can say that $a|c$ implies $\exists k \in \mathbb{Z}$ such that $k(p_1^{\alpha_1}p_2^{\alpha_2}...p_k^{\alpha_k})=c$. (Similarly for $b$). From here I am unsure where to go, but it feels like I am only a lemma away from my conclusion.
Any time you see $gcd(a,b)=1$, it can often be helpful to think of Bezout's identity. $$ gcd(a,b)=1\Rightarrow \exists \; x,y\in\mathbb{Z}\;s.t\; 1=ax+by $$ Then we can scale to get $$ c=cax+cby $$ And now using that $a\vert c,\;b\vert c \Rightarrow c=ak_1=bk_2$ our identity becomes $$ c=cax+cby=axbk_2+aybk_1\Rightarrow c=ab(k_2x+k_1y) $$
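The Bezout argument can be illustrated computationally. A small sketch (sample values $a=4$, $b=9$ are our own; `egcd` is the standard extended Euclidean algorithm, not something from the answer itself):

```python
# Illustrate the proof: get x, y with a*x + b*y = 1, then check c = ab(k2*x + k1*y).

def egcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

a, b = 4, 9                      # coprime pair
g, x, y = egcd(a, b)
assert g == 1 and a*x + b*y == 1

for c in [36, 72, 180]:          # common multiples of a and b
    assert c % a == 0 and c % b == 0
    k1, k2 = c // a, c // b      # c = a*k1 = b*k2
    # exactly the identity from the answer: c = ab(k2*x + k1*y), so ab | c
    assert c == a * b * (k2*x + k1*y)
```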
{ "language": "en", "url": "https://math.stackexchange.com/questions/1925690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Inverse of series $z+a_2z^2+...$ Let $p$ be a power series with integer coefficients of the special form $p(z)=z+a_2z^2+a_3z^3+..$. I wonder if the inverse (composition not $1/p$) series has again integer coefficients. I have calculated some of such series so I guess yes. What do you think?
Given $$ f(x)=x+a_2x^2+a_3x^3+\dots\tag{1} $$ and $$ g(x)=x+b_2x^2+b_3x^3+\dots\tag{2} $$ formally, for $n\ge2$, the coefficient of $x^n$ in $g\circ f$ is $$ 0=a_n+b_n+\left[x^n\right]\sum_{k=2}^{n-1}b_k\left(x+a_2x^2+a_3x^3+\dots+a_{n-1}x^{n-1}\right)^k\tag{3} $$ For $n=2$, $(3)$ says that $b_2=-a_2$. Then inductively, since $a_n\in\mathbb{Z}$, $(3)$ guarantees that $b_n\in\mathbb{Z}$. Even if we only know for $2\le k\le n$, that $a_k\in\mathbb{Z}$, $(3)$ guarantees that for $2\le k\le n$, we have $b_k\in\mathbb{Z}$. For example: $b_2=-a_2$ $b_3=2a_2^2-a_3$ $b_4=5a_2a_3-5a_2^3-a_4$ $b_5=14a_2^4-21a_2^2a_3+3a_3^2+6a_2a_4-a_5$
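Recursion $(3)$ is easy to run by machine. A sketch with sample integer coefficients $a_2,\dots,a_5 = 2,3,5,7$ (our choice, purely for illustration):

```python
# Solve 0 = a_n + b_n + [x^n] sum_{k=2}^{n-1} b_k f^k for n = 2..N, then
# verify g(f(x)) = x and that the b_n stay integers.

N = 5
a = [0, 1, 2, 3, 5, 7]          # f(x) = x + 2x^2 + 3x^3 + 5x^4 + 7x^5

def mul(p, q):
    """Multiply truncated power series (coefficient lists up to x^N)."""
    r = [0] * (N + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= N:
                r[i + j] += pi * qj
    return r

powers = {1: a[:]}
for k in range(2, N + 1):
    powers[k] = mul(powers[k - 1], a)

b = [0, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    s = sum(b[k] * powers[k][n] for k in range(2, n))
    b[n] = -a[n] - s            # recursion (3); stays in Z since all terms do

assert b[2] == -a[2]                                     # -2
assert b[3] == 2*a[2]**2 - a[3]                          #  5
assert b[4] == 5*a[2]*a[3] - 5*a[2]**3 - a[4]            # -15
assert b[5] == 14*a[2]**4 - 21*a[2]**2*a[3] + 3*a[3]**2 + 6*a[2]*a[4] - a[5]

# g(f(x)) = x up to order N:
comp = [0] * (N + 1)
for k in range(1, N + 1):
    comp = [c + b[k] * p for c, p in zip(comp, powers[k])]
assert comp == [0, 1, 0, 0, 0, 0]
```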
{ "language": "en", "url": "https://math.stackexchange.com/questions/1925781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove by induction that $(n!)^2\geq n^n$ How does one prove by induction that $(n!)^2\geq n^n$ for all $n \geq 1$? Hint: $(1+x)^r\geq 1+rx$, for $r\geq0$ and $x\geq-1$ Step 1 For $n=1$, the LHS=$1!^2=1$ and RHS=$1^1=1$. So LHS$\geq$ RHS. Step 2 Suppose the result is true for $n=k$, i.e., $(k!)^2 \geq k^k$ Step 3 For $n=k+1$ we want $((k+1)!)^2 \geq (k+1)^{k+1}$. We have $((k+1)!)^2=(k!⋅(k+1))^2=(k!)^2(k+1)^2\geq k^k(k+1)^2$ How can I continue? Thank you for your help.
Start by showing that it works for 3 (I'm starting with 3 because 1 and 2 are trivially true): Base Case: We have: $(3!)^2 = 6^2 = 36$ We also have: $(3)^3 = 27$. So, $36 \geq 27$. Done. Inductive Step: Assume it works for $n$; prove it works for $n+1$: We want: $\begin{align} ((n+1)!) ^2 \geq (n+1)^{n+1} \end{align}$. By the definition of factorial, we have $((n+1)!)^2 = ((n+1)n!)^2 = (n+1)^2(n!)^2$. So it suffices to show $(n+1)^2(n!)^2 \geq (n+1)^{n+1}$, and dividing both sides by $(n+1)^2$ (legitimate since $n+1>0$), this becomes $(n!)^2 \geq (n+1)^{n-1}$. So, all that's left to do is to show that $n^n \geq (n+1)^{n-1}$, because by the inductive hypothesis we know that $(n!)^2 \geq n^n$. Dividing both sides by $n^{n-1}$, this is equivalent to $n \geq \left(1+\frac{1}{n}\right)^{n-1}$, which holds because $\left(1+\frac{1}{n}\right)^{n-1} < \left(1+\frac{1}{n}\right)^{n} < e < 3 \leq n$, using the standard fact that $\left(1+\frac1n\right)^n$ increases to $e$. So, we finally get $((n+1)!)^2 = (n+1)^2(n!)^2 \geq (n+1)^2 n^n \geq (n+1)^2(n+1)^{n-1} = (n+1)^{n+1}$. And we're done.
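A quick brute-force check of the inequality for the first couple hundred values (just numerical reassurance, not part of the induction):

```python
# Check (n!)^2 >= n^n directly; Python's exact big integers make this safe.
from math import factorial

for n in range(1, 200):
    assert factorial(n)**2 >= n**n
# equality holds at n = 1 and n = 2; from n = 3 on the inequality is strict
assert factorial(2)**2 == 2**2
assert factorial(3)**2 > 3**3
```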
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to evaluate $\lim_{n\to\infty}n!/n^{k}$ I do not know how to go about finding the limit of this sequence. I know it diverges to infinity yet I can't find terms to use the squeeze lemma effectively. $a_n = \frac{n!}{n^{1000}}$ $\lim_{n\rightarrow\infty} \frac{n!}{n^{1000}}$ I know that when $n>1000$, the numerator will "run out" of denominators and the expression will grow, yet I don't know how to formally prove this divergence. In addition to this, since it is a sequence, no series laws can be applied. Anyone have an idea on how to approach this type of problem?
$$\log\frac{n!}{n^k} = \log n!-\log n^k = \sum_{i=2}^n \log i - k\log n$$ Since $\log$ is increasing, the sum is bounded below by the integral: $$\sum_{i=2}^n \log i \ge \int_2^{n} \log x\; dx = \Big[x\log x - x\Big]_{2}^{n} = n\log n - n - 2\log 2 + 2$$ So we get: $$\log\frac{n!}{n^k} \ge (n-k)\log n - n + 2 - 2\log 2$$ And $$\lim_{n\to \infty} \left((n-k)\log n - n + 2 - 2\log 2\right) = \infty$$ since $(n-k)\log n - n = n(\log n - 1) - k\log n \to \infty$. Hence $\log\dfrac{n!}{n^k}\to\infty$, which implies: $$\forall k:\quad \lim_{n\to \infty}\frac{n!}{n^k} = \infty$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
What is max($ab^3$) if $a+3b=4$? This question was asked by my professor. I think that over the set $\mathbb Z$ its answer is $1$, but he advised me to explore the problem further over $\mathbb R$. What I think (I may be wrong) is that over $\mathbb R$ its answer is not unique, but it should be unique. Any suggestions are heartily welcome.
If $a,b>0$, then by the AM–GM inequality applied to $a,b,b,b$: $$a+3b=4 \ge 4\sqrt[4]{ab^3}$$ Thus $ab^3 \le 1$, with equality when $a=b=1$. If $ab<0$, then $ab^3<0$, so the maximum is not attained there (and $ab^3=0$ if $a$ or $b$ is $0$). If $a,b<0$, then $a+3b<0\ne4$, a contradiction.
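A numeric confirmation of the conclusion (a plain grid scan, nothing from the answer itself):

```python
# With a = 4 - 3b, scan b and confirm the product a*b^3 peaks at 1 (at a = b = 1).
best_b, best_val = None, float("-inf")
for i in range(1, 400000):
    b = i / 100000.0             # scan b over (0, 4)
    val = (4 - 3*b) * b**3
    if val > best_val:
        best_b, best_val = b, val

assert abs(best_b - 1.0) < 1e-3
assert best_val <= 1.0 + 1e-12 and best_val > 0.999999
```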
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Spherical Cake and the egg slicer Recently I baked a spherical cake (3 cm radius) and invited over a few friends, 6 of them, for dinner. When done with the main course, I thought of serving this spherical cake, and to avoid disagreements over the size of the shares, I took my egg slicer with equally spaced wedges (designed to cut 6 slices at a go; my slicer has 5 wedges spaced 1 cm apart) and placed my spherical cake right in the exact middle of it before pressing uniformly upon the slicer. Was I successful in avoiding the feuds over the shares? If so, then what could be the mathematical explanation of it, and if not, then why not? My setup looked a bit similar to the one given in the image below:
I assume you mean something like this (although this is intended for five persons). If so, then that is obviously not a fair way of dividing the cake (the outer slices will be much smaller than the inner slices).
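The sizes are easy to quantify. Assuming the wires cut along parallel planes at $x=-2,-1,0,1,2$ through a ball of radius 3, each slab volume is $\pi\int_a^b (r^2-x^2)\,dx$:

```python
# Slab volumes of a radius-3 sphere cut by parallel planes 1 cm apart.
from math import pi

def slab(a, b, r=3.0):
    """Volume of the ball between the planes x=a and x=b (|a|, |b| <= r)."""
    F = lambda x: pi * (r*r*x - x**3 / 3.0)   # antiderivative of pi*(r^2 - x^2)
    return F(b) - F(a)

cuts = [-3, -2, -1, 0, 1, 2, 3]
vols = [slab(cuts[i], cuts[i + 1]) for i in range(6)]
assert abs(sum(vols) - 4/3 * pi * 27) < 1e-9   # slices add up to the whole ball
assert vols[0] < vols[1] < vols[2]             # outer slices are much smaller
print([round(v, 2) for v in vols])  # [8.38, 20.94, 27.23, 27.23, 20.94, 8.38]
```

So the two central slices hold over three times the volume of the two end caps — no fair division here.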
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Distributing 4 distinct balls between 3 people In how many ways can you distribute 4 distinct balls between 3 people such that none of them gets exactly 2 balls? This is what I did (by the inclusion–exclusion principle) and I'm not sure, would appreciate your feedback: $$3^4-\binom{3}{1}\binom{4}{2}\binom{2}{1}^2+\binom{3}{2}\binom{4}{2}\binom{2}{1}$$
One person gets $4$ balls: $3$ ways One person gets $3$ balls, one person gets $1$: $\dbinom41\times3!=24$ $27$ in total.
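The count is small enough to confirm by brute force (a direct enumeration, independent of the case analysis above):

```python
# Enumerate all 3^4 assignments of 4 distinct balls to 3 people and count
# those where nobody receives exactly 2 balls.
from itertools import product

count = 0
for assignment in product(range(3), repeat=4):   # ball i -> person assignment[i]
    counts = [assignment.count(p) for p in range(3)]
    if 2 not in counts:
        count += 1
assert count == 27
```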
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
$P:\mathbb{R}^2 \to \mathbb{R}, P(x,y) = x.y$ is continuous. I need to prove $P:\mathbb{R}^2 \to \mathbb{R}, P(x,y) = x.y$ is continuous. I just proved that the sum is continuous, but I'm lost at how to manipulate the inequalities in this case. My attempt: Let $\epsilon>0$. $d((x,y),(a,b)) = \sqrt{(x-a)^2 + (y-b)^2} $. So I need to find some $\delta$ such that $\sqrt {(x-a)^2 + (y-b)^2} < \delta \Rightarrow |xy-ab| < \epsilon $. I verified that $\sqrt{(x-a)^2 + (y-b)^2} \geq |x-a| $ and $\sqrt{(x-a)^2 + (y-b)^2} \geq |y-b|$, but can't see if that helps. I don't know how can I find $\delta$ in this case, can someone give me a hint? Thanks.
Since $$ \underbrace{(x_1^2+x_2^2)}_{\|x\|^2}\underbrace{(y_1^2+y_2^2)}_{\|y\|^2}=\underbrace{(x_1y_1+x_2y_2)^2}_{(x\cdot y)^2}+(x_1y_2-x_2y_1)^2 $$ We have $$ |x\cdot y|\le\|x\|\,\|y\| $$ Thus, $$ \begin{align} \left|\,x\cdot y-a\cdot b\,\right| &=\left|\,(x-a)\cdot b+x\cdot(y-b)\,\right|\\ &\le\|x-a\|\,\|b\|+\|y-b\|\,\|x\| \end{align} $$ If we have $\|x-a\|\le\min\left(\frac{\epsilon}{2\|b\|},\frac{\|a\|}2\right)$ and $\|y-b\|\le\frac{\epsilon}{3\|a\|}\le\frac{\epsilon}{2\|x\|}$, then $\left|\,x\cdot y-a\cdot b\,\right|\le\epsilon$. Therefore, if we let $\delta=\min\left(\frac{\epsilon}{2\|b\|},\frac{\|a\|}2,\frac{\epsilon}{3\|a\|}\right)$, and $\sqrt{\|x-a\|^2+\|y-b\|^2}\le\delta$, then $\left|\,x\cdot y-a\cdot b\,\right|\le\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is calculus considered to be difficult? I am going to take a calculus course soon and everyone tells me that it is very difficult. What is considered difficult about it and why do so many people fail in calculus courses?I am asking these questions so that I can prepare myself beforehand and do not face any difficulties which most people face.
One of the difficulties with calculus is the amount of time spent solving one problem. Most of that time is simplifying, evaluating, and the like, before applying the calculus-level concept at the end. If you can still do precalculus-level work fluently and without many mistakes, then the exercises won't take too long; it's when students get bogged down by loss of fluency that they get frustrated with how long it takes for one exercise (or exam question). One other piece of advice that I haven't seen in the answers posted before this one: go to your instructor's office hours, even if you have no specific questions to ask. Just watch as others ask for help (and receive it). Doing that was a huge help for me (though I got a median-level score on the midterm, I got the highest score on the final), so I recommend it for all college math courses, but especially for calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
What is needed to justify taking the derivative of a geometric series? Of course, if $|p| < 1$, $$\sum_{n=0}^{\infty}p^n=\dfrac{1}{1-p}\text{.}$$ One fact that is particularly useful in probability and financial mathematics is taking the derivative of both sides leads to $$\sum_{n=1}^{\infty}np^{n-1}=\dfrac{1}{(1-p)^2}\text{.}$$ What is needed to justify doing this? I was reading the Real Analysis text by Bartle yesterday, and my recollection is that if $(f_n) \to f$ converges at some $x \in S$, $S \subseteq \mathbb{R}$ bounded, and if $(f^{\prime}_n) \to g$ uniformly, then $(f_n) \to f$ uniformly and $f^{\prime} = g$ (I imagine this is true only over $S$). Define $f_n: (0, 1) \to \mathbb{R}$ by $f_n(p) = p^n$. Obviously, $(f_n) \to 0$... oh wait, I realized I'm not working with a sequence of functions, but a series (as I'm typing this). So my method is likely invalid. I'm not asking for a complete answer (go ahead and give one if you'd like), but a list of theorems or a sketch that would assist in proving this problem. This is not homework.
You can fix $k$ and consider the partial sums $$f_k(p)=\sum_{n=0}^kp^n=\frac{1-p^{k+1}}{1-p}$$ Then $f_k$ is differentiable and $$f_k'(p)=\sum_{n=1}^knp^{n-1}\tag{$*$}$$ The series $\sum_{n=1}^\infty np^{n-1}$ converges uniformly on sets of the form $\left\{p:|p|\leq r\right\}$, where $r<1$, by Weierstrass' test (take $M_n=nr^{n-1}$). By the result you stated, the limit function $f(p)=\sum_{n=0}^\infty p^n=\frac{1}{1-p}$ is differentiable on sets of the form $\left\{p:|p|\leq r\right\}$ for $r<1$, and so it is differentiable on $\left\{p:|p|<1\right\}$. From usual calculus and by comparing with $(*)$, $$\frac{1}{(p-1)^2}=f'(p)=\lim_k f_k'(p)=\lim_k\sum_{n=1}^knp^{n-1}=\sum_{n=1}^\infty np^{n-1}$$
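A quick numerical check of the resulting identity (partial sums with a large cutoff, our own choice of sample points):

```python
# Verify sum n*p^(n-1) = 1/(1-p)^2 numerically for several p inside (-1, 1).
for p in [-0.9, -0.5, 0.0, 0.3, 0.7, 0.9]:
    s = sum(n * p**(n - 1) for n in range(1, 2000))
    assert abs(s - 1.0 / (1.0 - p)**2) < 1e-9
```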
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A Neat Identity Involving Zeta Zeroes While playing around, I encountered the following very curious and cool identity. Consider the exponential integral $\text{Ei}(x)$ and the $n$th nontrivial zero of the Riemann Zeta function $p_n$. Now, look at the first few imaginary parts of the following function: $$f(x)=\sum_{n=1}^x \text{Ei}(p_n)$$ $$\Im \ \ f(1)=3.13732$$ $$\Im \ \ f(10)=31.3169$$ $$\Im \ \ f(100)=314.097$$ $$\Im \ \ f(1000)=3141.54$$ $$\Im \ \ f(10000)=31415.9$$ As you can see, it is each time adding a digit of pi. Question: Is this a known result that can be proved easily? Does this pattern even continue?
The result found by OP turns out to be quite generic; it holds for a wide range of sequences $\rho_n$ and not just the zeros of the $\zeta$-function. A precise formulation of what is observed is the following: $$\lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^N \text{Ei}(\rho_n) = \pi i \tag{1}$$ The sum above is the Cesaro mean of the sequence $a_n = \text{Ei}(\rho_n)$. If a sequence $a_n$ converges to $a$ then the Cesaro mean of $a_n$ also converges to $a$ (see e.g. this question) so $(1)$ would follow if we could show that $\lim_{n\to\infty}\text{Ei}(\rho_n) = \pi i$. This is indeed true and the reason for this is as follows: * *We can write $\text{Ei}(z)$ on the form $\text{Ei}(z) = -\Gamma(0,z) + \pi i$ where the incomplete $\Gamma$-function has the asymptotical expansion $\Gamma(0, z) = \frac{e^{-z}}{z} + \mathcal{O}(z^{-2})$. This implies that $\Gamma(0, z_n) \to 0$ for any sequence $z_n = x_n + i y_n$ where $y_n\to \infty$ and where $x_n$ is bounded. *The non-trivial zeros of the $\zeta$-function satisfy the conditions on the sequence above since they all lie in the strip $0<\Re[z]< 1$ in the complex plane and $\lim_{n\to\infty}\Im[\rho_n] = \infty$. This means that $\lim_{n\to\infty}\Gamma(0,\rho_n) = 0$ so $\lim_{n\to\infty} \text{Ei}(\rho_n) = \pi i$ and $(1)$ follows. Note that the conditions in point 1. above are far from being very restrictive so there are infinitely many sequences for which $(1)$ holds (random examples are $\rho_n = 7 + n^2i$ and $\rho_n = \frac{2n}{1+n} + n\log(n)i$).
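The convergence $\text{Ei}(\rho_n)\to\pi i$ can be observed directly (assuming mpmath's `ei` and `zetazero` functions; the tolerance $0.05$ is a loose bound consistent with the $|\Gamma(0,\rho_n)|\lesssim e^{-1/2}/|\rho_n|$ estimate above):

```python
# Empirical check: Ei at the nontrivial zeta zeros is already close to pi*i.
from mpmath import mp, ei, zetazero, pi as mppi

mp.dps = 20
for n in [1, 5, 10]:
    rho = zetazero(n)                     # n-th nontrivial zero, critical line
    assert abs(ei(rho).imag - float(mppi)) < 0.05
```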
{ "language": "en", "url": "https://math.stackexchange.com/questions/1926964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
Help in evaluating $\displaystyle\int\ \cos^2\Big(\arctan\big(\sin\left(\text{arccot}(x)\right)\big)\Big)\ \text{d}x$ Is there an easy way to prove this result? $$\int\ \cos^2\Big(\arctan\big(\sin\left( \text{arccot}(x)\right)\big)\Big)\ \text{d}x = x - \frac{1}{\sqrt{2}}\arctan\left(\frac{x}{\sqrt{2}}\right)$$ I tried some substitutions but I got nothing helpful, like: * *$x = \cot (z)$ I also tried the crazy one: * *$x = \cot(\arcsin(\tan(\arccos(z))))$ Any hint? Thank you!
As mentioned in comments, draw the triangles. You have $$ \frac x 1 = x = \cot\theta = \frac{\text{adjacent}}{\text{opposite}} $$ so if you have a triangle in which $\text{opposite}=1$ and $\text{adjacent} = x$ then you have $\text{hypotenuse} = \sqrt{x^2+1}$ and so $$ \sin\theta = \frac{\text{opposite}}{\text{hypotenuse}} = \frac 1 {\sqrt{x^2+1}} $$ so $$ \sin\operatorname{arccot} x = \frac 1 {\sqrt{x^2+1}}. $$ Then do a similar thing with the cosine of the arctangent.
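Carrying the triangle argument one step further gives $\cos^2\big(\arctan\big(\sin(\operatorname{arccot}x)\big)\big)=\frac{x^2+1}{x^2+2}$, from which the stated antiderivative follows. A numeric check (for $x>0$, where $\operatorname{arccot}x=\arctan(1/x)$):

```python
# Check the simplification and that the claimed antiderivative differentiates
# back to the integrand (central finite difference).
import math

for x in [0.1, 0.5, 1.0, 2.0, 10.0]:
    theta = math.atan(1.0 / x)              # arccot x for x > 0
    lhs = math.cos(math.atan(math.sin(theta)))**2
    rhs = (x*x + 1.0) / (x*x + 2.0)
    assert abs(lhs - rhs) < 1e-12

    F = lambda t: t - math.atan(t / math.sqrt(2)) / math.sqrt(2)
    h = 1e-6
    assert abs((F(x + h) - F(x - h)) / (2*h) - lhs) < 1e-6
```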
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Is $(x+1)/(x^2-1)$ defined for $x=-1$? It might sound like a silly question, but I can't come up with a clear answer. By looking at the expression, the answer should be "no", since $(-1)^2=1$ and we're in trouble. However, if I factorize: $(x^2-1) =(x+1)(x-1)$, $x=-1$ is still illegal. But now the terms cancel out and I am left with $1/(x-1)$, which is clearly defined for $x=-1$? What happened? Was some information lost in the manipulation or was the original expression an "illusion"? I hope my question makes sense.
This is what's called a removable singularity. In this case, we have that $$\lim_{x\to -1}\frac{x+1}{x^2-1}$$ exists and is equal to $\lim_{x\to -1}\frac{1}{x-1}=-\frac12$, so setting the value of that function as $-\frac12$ yields a new function that is equal to the old one wherever the old one is defined $(\mathbb{R}\setminus\{-1\})$ and also continuous in the new point of definition $(x=-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
Greatest common divisor relatively prime integers proof Let $gcd(c,m)=g.$ Show that if $kc+lm=g$, then $gcd(k,l)=1$ I can see that its true with this example Let $c=5, m=15$. We have $gcd(5,15)=5$, then let $k=-2$ and $l=1$ $gcd(-2,1)=1$ but I'm not sure how to generalize it.
We have $$ \left\{ \begin{gathered} \gcd (c,m) = g \hfill \\ kc + lm = g \hfill \\ \end{gathered} \right.\quad \Rightarrow \quad \left\{ \begin{gathered} \gcd (c',m') = 1 \hfill \\ kc' + lm' = 1 \hfill \\ \end{gathered} \right. $$ where $c=gc'$ and $m=gm'$. Suppose $\gcd(k,l)=q\ne 1$, say $k=qk'$ and $l=ql'$ with $k',l'\in\mathbb{Z}$. Then dividing $kc'+lm'=1$ by $q$ gives $$ k'c' + l'm' = \frac{1}{q}, $$ a contradiction: the left-hand side is an integer, while $\frac1q$ is not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Existence of $\xi$ and $\eta$ such that $f'(\xi)+f'(\eta)=\xi+\eta$ Let $f$ be continuous on $[0,1]$, differentiable in $(0,1)$. Assume further that $f(0)=0$, $f(1)=1/2$. Show that there exist $\xi,\eta\in (0,1)$ such that $f(\xi)+f'(\eta)=\xi+\eta$. I saw this problem in a draft. I do not know whether it is true. Up to now, I have not found a counterexample. My idea is as follows: $f(\xi)-\xi=\eta-f'(\eta)$. Let $F(x)=f(x)-x$, then $F(0)=0, F(1)=-1/2$, so $f(\xi)-\xi$ can be chosen to be an arbitrary $a\in (-1/2,0)$. The strategy is then to find $\eta$ such that $a=\eta-f'(\eta)$. Rolle's theorem may be applied. However, letting $G(x)=ax-x^2/2+f(x)$, then $G(0)=0$, $G(1)=a$, which does not satisfy the hypotheses of Rolle's theorem. So how can I get across the difficulties? If we change $f(\xi)+f'(\eta)=\xi+\eta$ to be $f'(\xi)+f'(\eta)=\xi+\eta$ for some $\xi,\eta\in (0,1), \xi\neq \eta$, can we prove it?
For $f(\xi)+f'(\eta)=\xi+\eta$ there is a counterexample given in Prove $\exists \xi, \eta \in (0,1)$, such that $f(\xi)+f'(\eta)=\xi+\eta$.? On the other hand the statement with $f'(\xi)+f'(\eta)=\xi+\eta$ is true. Let $F(x)=f(x)-x^2/2$. Now by the MVT there is $\xi\in (0,1/2)$ such that $$\frac{F(1/2)-F(0)}{1/2-0}=F'(\xi)=f'(\xi)-\xi.$$ Similarly there is $\eta\in (1/2,1)$ such that $$\frac{F(1)-F(1/2)}{1-1/2}=F'(\eta)=f'(\eta)-\eta.$$ Hence $0<\xi<1/2<\eta<1$ and $$\begin{align*} f'(\xi)+f'(\eta)-(\xi+\eta)&=F'(\xi)+F'(\eta)\\ &=\frac{F(1/2)-F(0)}{1/2-0}+\frac{F(1)-F(1/2)}{1-1/2}\\&=\frac{F(1/2)}{1/2}-\frac{F(1/2)}{1/2}=0. \end{align*}$$ where we used $F(0)=0-0=0$ and $F(1)=1/2-1/2=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Prove inequality $2e^x>x^3+x^2$ If $x \in \Bbb R$, show that $$2e^x>x^3+x^2$$ This inequality is right, see Own ideas: If $x\in \Bbb R$, $$f(x)=2e^x-x^3-x^2$$ $$f'(x)=2e^x-3x^2-2x$$ $$f''(x)=2e^x-6x-2$$ $$f'''(x)=2e^x-6$$ $$f''''(x)=2e^x>0$$ From the signs of these derivatives alone I cannot determine the sign of $f$. So how can we show that $$f(x)>0 \text{ for } x \in \Bbb R?$$
Consider that for all $x$ in $\mathbb R$, $e^{x} > 0$, so $2e^{x} > 0$ for all $\mathbb R$. First take $x \leq -1$: there $x^3 + x^2 = x^2(x+1) \leq 0 < 2e^x$, so the statement is true. For $-1 < x \leq 0$, the function $x^2(x+1)$ attains its maximum on this interval at $x=-\frac23$, where it equals $\frac{4}{27} \approx 0.148$, while $2e^x \geq 2e^{-1} \approx 0.736$, so the statement holds there as well. Now consider where $x>0$ in $\mathbb R$. The question is which side grows faster as $x\rightarrow +\infty$. Clearly both go to infinity in this case. So consider the following: $\frac{e^x} {\frac{1}{2} (x^3 + x^2)}$ Now use L'Hospital's rule: $\frac{e^x} {\frac{1}{2}(3x^2 + 2x)}$ Apply again: $\frac{e^x} {3x + 1}$ One last time: $\frac{e^x} {3}$ This tends to $+\infty$, so $2e^x > x^3+x^2$ certainly holds for all sufficiently large $x$; on the remaining bounded part of $(0,\infty)$ one checks directly that $2e^x - x^3 - x^2$ stays positive (its minimum is about $2.46$, attained near $x \approx 2.4$). That completes the proof. Whew!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Can completing the square apply to more higher degree? I have some trouble about completing the square lessons. As we know , minimum value of $\ \ 3x^{2} + 4x + 1 $ is $-\frac{1}{3}$ when $x=-\frac{2}{3}$ by completing the square. Then , how about minimum value of $\ \ x^5 +4x^4 + 3x^3 +2x^2 + x + 1$ when $(-2<x<2)$ ? According to Wolfram Alpha , minimum value is $12 - 5\sqrt{5}$ when $x=-\frac{3}{2} + \frac{\sqrt{5}}{2}$ I try to put 5th degree to form $(x-a)^5$ or any nth degree but it doesn't work. I'm just curious how completing the square can apply to higher deree equations to find their minimum or maximum value. Thank you for every help comments.
Yes, we have something similar. Look at the topic as follows: $3x^2+4x+1$ has a corresponding complete square $3x^2+4x+\frac43$, and the difference of the two, $-\frac13$ (original minus complete square), has degree at most two less than the original. For any polynomial in one variable we have a unique multiple of a power of $(x-a)$ such that the difference of the two has degree at most two less than their common degree. Before showing what I said, I give an algorithm to find that unique multiple of a complete power. It is easy: if the initial polynomial is of the form $ax^n+bx^{n-1}+\dots+c$, then the unique multiple of a complete power is $$a\left(x+\frac{b}{na}\right)^n,$$ which expands as $$a\left(x^n+n\left({ \frac{b}{na}}\right)x^{n-1}+\dots+\frac{b^n}{n^na^n}\right)=a\left(x^n+\frac b a x^{n-1}+\dots+\frac{b^n}{n^na^n}\right)=ax^n+bx^{n-1}+\dots+\frac{b^n}{n^na^{n-1}}.$$ So by this: the multiple of a complete power for $3x^2+4x+1$ is $3(x+\frac{4}{2\cdot 3})^2$, i.e. $3(x+\frac{2}{3})^2=3(x^2+\frac{2\cdot2}{3}x+\frac{4}{9})=3x^2+4x+\frac43$. And so for the quintic $5x^5+4x^4+3x^3+2x^2+x+1$, the unique multiple of a complete power would be $5(x+\frac{4}{5\cdot5})^5$, i.e. $5(x+\frac4{25})^5$, which has the expanded form: $5x^5+4x^4+\frac{32}{25}x^3+\frac{128}{625}x^2+\frac{256}{15625}x+\frac{1024}{1953125}$, and the difference (complete power minus original, of course of degree two less) is: $-\frac{43}{25}x^3 -\frac{1122}{625}x^2 -\frac{15369}{15625}x -\frac{1952101}{1953125}$. I hope you got something of what you were looking for!
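The construction $a\left(x+\frac{b}{na}\right)^n$ is easy to automate and to check with exact arithmetic (the function name and setup below are ours):

```python
# Build a*(x + b/(n*a))^n exactly and check it against the answer's examples.
from fractions import Fraction
from math import comb

def complete_power(coeffs):
    """coeffs: [a_n, a_{n-1}, ..., a_0], descending, with a_n != 0. Return the
    descending coefficient list of a*(x + b/(n*a))^n, which agrees with the
    input in its top two coefficients."""
    n = len(coeffs) - 1
    a, b = Fraction(coeffs[0]), Fraction(coeffs[1])
    c = b / (n * a)
    return [a * comb(n, k) * c**k for k in range(n + 1)]

# 3x^2 + 4x + 1 -> 3(x + 2/3)^2 = 3x^2 + 4x + 4/3; difference is constant.
q = complete_power([3, 4, 1])
diff = [p - orig for p, orig in zip(q, [3, 4, 1])]
assert diff == [0, 0, Fraction(1, 3)]      # complete minus original = 1/3

# 5x^5 + 4x^4 + ...: top two coefficients match; difference has degree <= n-2.
q5 = complete_power([5, 4, 3, 2, 1, 1])
diff5 = [p - orig for p, orig in zip(q5, [5, 4, 3, 2, 1, 1])]
assert diff5[0] == 0 and diff5[1] == 0
assert diff5[2] == Fraction(-43, 25)       # matches the -43/25 x^3 term above
```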
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
In gambling, how much should I bet to win a specific amount of money? I have a seemingly simple problem but I can't work it out. Say I start with a bank of £20 - I would like to work out how much I should bet take my bank to £25 (assuming a win). In this case we'll assume odds of 2.00. Since the amount I want to win is £5 I could bet £5/2.00 = £2.50 however, if the bet won my final bank would be £22.50 and not £25. Can anyone point out what I'm missing here? Thanks, Matt
Your mistake is dividing by the odds instead of the odds minus one: at decimal odds of $2.00$, a $£5$ stake returns $£10$, i.e. $£5$ of profit (not $£2.50$), so your bank would end at $£25.00$, as you wanted. If the odds are $d$ and you make a stake of $s$, then your profit $p$ is: $$p=s(d-1)$$ So for a profit of $p$, you should stake: $$s=\frac p{d-1}$$
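In code, the stake formula and the resulting bank are one line each (a trivial sketch of the arithmetic above):

```python
# Stake needed for a target profit at decimal odds d: s = p / (d - 1).
def stake_for_profit(profit, decimal_odds):
    return profit / (decimal_odds - 1.0)

bank = 20.0
s = stake_for_profit(5.0, 2.0)
assert s == 5.0
# if the bet wins: bank - stake + stake*odds = 20 - 5 + 10 = 25
assert bank - s + s * 2.0 == 25.0
```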
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$1$ heap of sand $+\ 1$ heap of sand $= 1$ heap of sand? My uncle, who barely passed elementary school math (which leads me to believe he read this in some kind of joke magazine), once told me this when I was very young. $$1 \text{ heap of sand } + 1\text{ heap of sand } = 1\text{ heap of sand} .$$ It does sound like a joke, but as I learned more and more math (still at basic college math), I still couldn't (and can't) disprove/prove it. So we all know this is true "linguistically," since if you add 1 heap of sand to another one, it will still be a heap of sand, but mathematically, 1 + 1 cannot equal 1. I expect there to be, and wouldn't be surprised if there weren't, similar questions to this one but I couldn't find it using the search feature. I guess I'm going out on a limb here because of the stupidity (and perhaps the obvious answer) of this question, but I seriously cannot disprove this based on what I know.
One thing you must notice here is that: $$1\, \text{heap of sand} + 1\, \text{heap of sand} = 1\, \text{heap of sand} \not \large\Rightarrow 1+1=1$$ The reason is that $1$ heap is not well-defined, quantitative or mathematical enough to be used in mathematical operations like addition and subtraction, or to form equations. If you had mentioned $1$ kg, or perhaps $1$ quintal, or even, in some special case, $1$ bag of sand, then you could perform addition, subtraction, etc. and equate both sides of the "$=$" sign. Otherwise this equation makes no sense. It is similarly wrong to say that $$1\, \text{straight line} + 1\, \text{straight line} = 1\, \text{straight line} \implies 1+1=1$$ Instead, as pointed out by McFry, you should try to use set theory to clarify your problem, since it is a clear case of composition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1927856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 0 }
The series $\sum\limits_ {k=1}^{\infty} \frac{1}{(1+kx)^2}$ converges for $x>0$ $$ \sum_ {k=1}^{\infty} \dfrac{1}{(1+kx)^2}$$ $$x\in (0,\infty) $$ This series converges on the given interval, but how exactly can I show this is true?
$$x>0\implies(1+kx)^2\ge k^2x^2\implies\frac1{(1+kx)^2}\le\frac1{k^2x^2}$$ and now use the comparison test.
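To see the comparison numerically (a quick sketch; for $x=1$ the dominating series is $\zeta(2)$):

```python
import math

x = 1.0
partial = sum(1 / (1 + k * x) ** 2 for k in range(1, 10**6))
bound = (math.pi ** 2 / 6) / x ** 2   # sum of 1/(k*x)^2 = zeta(2)/x^2
print(partial, bound)  # the partial sums stay below the finite bound
```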
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
My solution of this integral does not match. Where am I going wrong? I've been trying to calculate the following integral. But I always get the wrong result.... $$ S(a, b) = \int_0^{2\pi}\frac{d\theta}{\left(a + b\cos\theta\right)^2}, \quad\quad\quad\mbox{for}\quad a > b > 0 $$ Assume substitution: $z = e^{i\theta}$. Then: $ \displaystyle\cos\theta = \frac{1}{2}\left(e^{i\theta} + e^{-i\theta}\right) = \frac{1}{2}\left(z + \frac{1}{z}\right) $. Also, $dz = ie^{i\theta} d\theta = izd\theta$. This is what I do: $$ \int_0^{2\pi}\frac{d\theta}{\left(a + b\cos\theta\right)^2} = \oint_{|z|=1}\frac{dz}{iz\left[a + \frac{b}{2}\left(z + \frac{1}{z}\right)\right]^2} = \frac{1}{i}\oint_{|z|=1}\frac{4zdz}{\left[2az + b z^2 + b\right]^2} $$ Singularities: $$ 2az + bz^2 + b = 0 \\ z_{\pm} = \frac{-a}{b} \pm\frac{1}{b}\sqrt{a^2 - b^2} $$ Residues: $$ Res_{z=z_{+}}\left\{\frac{4z}{\left[2az + b z^2 + b\right]^2}\right\} = \frac{1}{2\pi i}\oint_{\gamma+}\frac{4zdz}{(z-z_+)^2(z-z_-)^2} = \frac{d}{dz}\left[\frac{4z}{(z-z_-)^2}\right]_{z = z_+} \\ Res_{z=z_{+}}\left\{\frac{4z}{\left[2az + b z^2 + b\right]^2}\right\} = 4\left[\frac{1}{(z_+-z_-)^2} - \frac{2z_+}{(z_+-z_-)^3}\right] \\ Res_{z=z_-}\left\{\frac{4z}{\left[2az + b z^2 + b\right]^2}\right\} = 4\left[\frac{1}{(z_--z_+)^2} - \frac{2z_-}{(z_--z_+)^3}\right] $$ Where $\gamma+$ is the contour centered at $z_+$ with small radius $\epsilon$. We know: $$ \int_0^{2\pi}\frac{d\theta}{\left(a + b\cos\theta\right)^2} = \frac{1}{i}\oint_{|z|=1}\frac{4zdz}{\left[2az + b z^2 + b\right]^2} = 2\pi \sum_n Res_{z = z_n}\left\{\frac{4z}{\left[2az + b z^2 + b\right]^2}\right\} $$ Where $z_n$ are all isolated singularities inside unit circle $|z|=1$. If $I(z)$ is an indicator function of the interior of the unit circle: $$ S(a, b) = 4I(z_+)\left[\frac{1}{(z_+-z_-)^2} - \frac{2z_+}{(z_+-z_-)^3}\right] + 4I(z_-)\left[\frac{1}{(z_--z_+)^2} - \frac{2z_-}{(z_--z_+)^3}\right] $$ When $b=1$ this seems to work for all values of $a$ that I've tested.
But, if $b\neq 1$ it does not work... I tried to find a mistake somewhere, but I see none. For instance, if $a=8$ and $b=2$, I find: $$ z_+ = 0.127017 \\ z_- = -7.87298 \\ Res_{z=z_+} = 0.068853 \\ S = 0.432616 $$ But the integral is $0.108154$ instead of $0.432616$. Where is the mistake?
I noted that while computing the residues you used $2az + b z^2 + b =(z-z_+)(z-z_-)$ instead of $2az + b z^2 + b =b(z-z_+)(z-z_-)$. Therefore I think that a factor $1/b^2$ is missing.
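The missing $1/b^2$ can be confirmed numerically: with it, the residue result matches both a direct numerical integration and the standard closed form $S(a,b)=\frac{2\pi a}{(a^2-b^2)^{3/2}}$. A sketch:

```python
import math

def S_numeric(a, b, n=200000):
    # Riemann sum; for a smooth periodic integrand this converges quickly
    h = 2 * math.pi / n
    return h * sum(1 / (a + b * math.cos(k * h)) ** 2 for k in range(n))

def S_closed(a, b):
    return 2 * math.pi * a / (a * a - b * b) ** 1.5

a, b = 8.0, 2.0
print(S_numeric(a, b), S_closed(a, b), 0.432616 / b**2)
# all three are ≈ 0.108154, i.e. the OP's 0.432616 divided by b^2 = 4
```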
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence/divergence of $\sum_{k=1}^\infty\frac{2\times 4\times 6\times\cdots\times(2k)}{1\times 3\times 5\times\cdots\times(2k-1)}$ A problem asks me to determine if the series $$\sum_{k=1}^\infty \frac{2 \times 4 \times 6 \times \cdots \times (2k)}{1 \times 3 \times 5 \times \cdots \times (2k-1)}$$ converges or diverges. (from the textbook Calculus by Laura Taalman and Peter Kohn (2014 edition); section 7.6, p. 639, problem 33) I am allowed to use the ratio test first and then any other convergence/divergence test if the former test does not work. In my original work, I attempted the ratio test and it was rendered inconclusive. $$ a_k = \frac{2 \times 4 \times 6 \times \cdots \times (2k)}{1 \times 3 \times 5 \times \cdots \times (2k-1)} $$ $$ a_{k + 1} = \frac{2 \times 4 \times 6 \times \cdots \times (2k) \times (2(k+1))}{1 \times 3 \times 5 \times \cdots \times (2k-1) \times (2(k+1)-1)} = \frac{2 \times 4 \times 6 \times \cdots \times (2k) \times (2k+2)}{1 \times 3 \times 5 \times \cdots \times (2k-1) \times (2k+1)} $$ $$ \frac{a_{k + 1}}{a_k} = \frac{\frac{2 \times 4 \times 6 \times \cdots \times (2k) \times (2k+2)}{1 \times 3 \times 5 \times \cdots \times (2k-1) \times (2k+1)}}{\frac{2 \times 4 \times 6 \times \cdots \times (2k)}{1 \times 3 \times 5 \times \cdots \times (2k-1)}} = \frac{2 \times 4 \times 6 \times \cdots \times (2k) \times (2k+2)}{1 \times 3 \times 5 \times \cdots \times (2k-1) \times (2k+1)} \times \frac{1 \times 3 \times 5 \times \cdots \times (2k-1)}{2 \times 4 \times 6 \times \cdots \times (2k)} = \frac{2k+2}{2k+1}$$ Evaluating $\rho = \lim_{x \to \infty} \frac{a_{k + 1}}{a_k}$ will determine if $\sum_{k=1}^\infty \frac{2 \times 4 \times 6 \times \cdots \times (2k)}{1 \times 3 \times 5 \times \cdots \times (2k-1)}$ converges or diverges. The conclusions for the ratio test are as follows: $\circ$ If $\rho < 1$, then $\sum_{k=1}^\infty a_k$ converges. $\circ$ If $\rho > 1$, then $\sum_{k=1}^\infty a_k$ diverges. 
$\circ$ If $\rho = 1$, then the test is inconclusive. $$ \rho = \lim_{k \to \infty} \frac{a_{k + 1}}{a_k} = \lim_{k \to \infty} \frac{2k+2}{2k+1} = 1$$ Since $\rho = 1$, the ratio test is rendered inconclusive, as I stated earlier. I will have to use other convergence/divergence tests to solve the problem. My issue is that I'm not sure which other convergence/divergence test to use. Any suggestions? Many thanks for the help.
$$\sum_{k=1}^\infty \frac{2 \times 4 \times 6 \times \cdots \times (2k)}{1 \times 3 \times 5 \times \cdots \times (2k-1)}=$$ $$\sum_{k=1}^\infty \frac{\prod_{j=1}^k(2j)}{\prod_{j=1}^k(2j-1)}=$$ $$\sum_{k=1}^\infty \prod_{j=1}^k\frac{(2j)}{(2j-1)}$$ The term $a_k = \prod_{j=1}^k\frac{(2j)}{(2j-1)}$ is $$\left(1+\dfrac11\right)\left(1+\dfrac13\right)\left(1+\dfrac15\right)\cdots \left(1+\dfrac1{2k-1}\right)$$ An infinite product $\prod_{j=1}^\infty \left(1+b_j\right)$ with $b_j \ge 0$ converges to a non-zero number if and only if $\sum_{j=1}^\infty b_j$ converges. Here $b_j = \frac{1}{2j-1}$ and $\sum_j \frac{1}{2j-1}$ diverges, so $a_k \to \infty$. Conclude what you want from this.
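The divergence is easy to see numerically: the terms $a_k$ themselves grow (roughly like $\sqrt{\pi k}$), so the series cannot converge. A quick sketch:

```python
def term(k):
    # a_k = prod_{j=1}^{k} (2j)/(2j - 1)
    a = 1.0
    for j in range(1, k + 1):
        a *= 2 * j / (2 * j - 1)
    return a

for k in (1, 10, 100, 1000):
    print(k, term(k))
# the terms increase without bound, so the series diverges
# by the divergence (n-th term) test
```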
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Homeomorphisms between circles and rectangles In our topology class we learn that in $\mathbb{R}^2,$ circles and rectangles are homeomorphic to each other. I can understand the underlying idea intuitively. But can we find an explicit homeomorphism between them? If so, how? Also our professor said that, "we can describe any point in the rectangle $[0,1]\times[0,1]$ using a single coordinate." I wonder how such a thing is possible. As I think, for this we need a bijection between $[0,1]\times[0,1]$ and some (closed?) interval in $\mathbb{R}.$ Can someone explain this phenomenon?
Consider $S^1$ parametrized by $(\cos(t),\sin(t))$. Extend this vector from the origin to its first intersection with the square. You can write this out explicitly in the first quadrant and then argue by symmetry. For $0 \leq t \leq \pi/4$, the ray meets the line segment $x=1$ (at $y=\tan(t)$), and for $\pi/4 \leq t \leq \pi/2$, it meets $y=1$ at $x=\cot(t)$. However, the real idea is given in the answer by Fimpellizieri.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Characteristic equation of a recurrence relation? I am trying to find the general term of the following recurrence relation: $$a_{n + 1} = \frac{1}{2}(a_{n} + \frac{1}{a_{n}})$$ where $a_1 = 3$. I'm failing to write the characteristic equation.
Here is what I came upon: If we have $a_{n+1} = \frac{1}{2}(a_{n} + \frac{\lambda^2}{a_{n}})$, then it holds that: $\frac{a_{n} - \lambda}{a_{n} + \lambda} = (\frac{a_{1} - \lambda}{a_{1} + \lambda})^{2^{n-1}}$. In the above example $\lambda^2 = 1$, so $\lambda = 1$ and $$\frac{a_{n} - 1}{a_{n} + 1} = \left(\frac{3 - 1}{3 + 1}\right)^{2^{n-1}} = \frac{1}{2^{2^{n-1}}}$$ Solving for $a_{n}$: $$a_n - 1 = \frac{a_n + 1}{2^{2^{n-1}}}$$ $$a_n\left(2^{2^{n-1}} - 1\right) = 2^{2^{n-1}} + 1$$ $$a_n = \frac{2^{2^{n-1}} + 1}{2^{2^{n-1}} - 1}$$
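A quick way to sanity-check any candidate closed form is to compare it against the iteration itself in exact rational arithmetic (a sketch). Note the fixed point of $a_{n+1}=\frac12(a_n+1/a_n)$ is $1$, so $\lambda=1$ here, which gives $a_n = \frac{2^{2^{n-1}}+1}{2^{2^{n-1}}-1}$:

```python
from fractions import Fraction

def iterate(n):
    # apply a_{k+1} = (a_k + 1/a_k)/2 starting from a_1 = 3
    a = Fraction(3)
    for _ in range(n - 1):
        a = (a + 1 / a) / 2
    return a

def closed_form(n):
    p = 2 ** (2 ** (n - 1))
    return Fraction(p + 1, p - 1)

for n in range(1, 7):
    assert iterate(n) == closed_form(n)
print([closed_form(n) for n in (1, 2, 3, 4)])
# [Fraction(3, 1), Fraction(5, 3), Fraction(17, 15), Fraction(257, 255)]
```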
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
proving a map is a covariant functor between modules I'm trying to show that $(\cdot)\otimes_A N$ is a covariant functor from $MOD_A\to MOD_A$. Can somebody ensure I'm not oversimplifying the task (which I feel to be the case). It is clear that this functor does indeed take objects in $MOD_A$ to objects in $MOD_A$, and the crux of this is ensuring the covariant functoriality when looking at how it behaves with morphisms. So if $f\in Mor(P,Q)$, then $f$ is characterized by $f(\sum_k a_kp_k)=\sum_ka_kf(p_k)$ for $a_k\in A$, $p_k\in P$ and $f(p_k)\in Q$. Suppose we define $F(\cdot):= (\cdot)\otimes_A N$, then it must be that $F(f):P\otimes_A N\to Q\otimes_A N$ and hence $F(f)(\sum_ka_kp_k\otimes n_k)=\sum_ka_kf(p_k)\otimes n_k$. Now if in addition we have $g\in Mor(Q,R)$, then $F(g\circ f)(\sum_ka_kp_k\otimes n_k)=F(g)(\sum_ka_kf(p_k)\otimes n_k)$ (so that the $f(p_k)$ lie in $g$'s domain) $=\sum_ka_kg(f(p_k))\otimes n_k= (F(g)\circ F(f))(\sum_ka_kp_k\otimes n_k) $, hence $F(g\circ f)=F(g)\circ F(f)$ as we wanted. Would this fully solve the claim? Thanks
Yeah, that's it. In principle, of course, you need to check that your formula actually defines a homomorphism, although that's hardly a problem here. Another approach just uses the universal property and the Yoneda lemma: if $f:P\to Q$, then for any $R$ and bilinear map $a:Q\times N\to R$ we get a bilinear map $af:P\times N\to R$. This is a natural transformation between the functors $\text{BiHom}(Q\times N,-)\cong \text{Hom}(Q\otimes N,-)$ and $\text{BiHom}(P\times N,-)\cong\text{Hom}(P\otimes N,-)$, which comes by the Yoneda lemma from a unique map $f\otimes N:P\otimes N\to Q\otimes N$. This is functorial since the natural transformation was; that and the naturality itself now just follow from trivial properties of composition of module morphisms. If you haven't seen any category theory yet, this proof probably doesn't look easier, but notice that I don't have to know anything about elements of the tensor product to write it down! This is a hint that it works in a much wider context.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of ratio of two Gamma functions with negative integer arguments When using the hypergeometric representation for a Legendre polynomial, I encounter, for integer n and l, the following ratio: $$\frac{\Gamma(n-l)}{\Gamma(-l)}$$ Where $n \leq l$ (the quantity is definitely zero for $n > l$, as it should be in the definition of a Legendre polynomial). I am unsure as to how to evaluate this ratio; as it stands, it is indeterminate. My original idea was to use: $$\Gamma(k+1) = (k+1)\Gamma(k)$$ Multiple times to reduce $\Gamma(n-l)$ to: $$(n-l)(n-l-1)(...)(-l+1) \ \Gamma(-l)$$ Then the $\Gamma(-l)$ terms would cancel and I'd be left only with some sensible terms. Unfortunately, I do not think that this approach is valid, as it does not yield the correct representation for the Legendre polynomial. Secondly, the above can be written as: $$(-1)^n\frac{(l-1)!}{(l-n-1)!} \ \Gamma(-l)$$ Which now no longer permits us to set $n=l$ as is necessary to obtain a polynomial of order $l$. I'm trying to remain brief on the references to Legendre polynomials as it is specifically the ratio of the Gamma functions listed at the start of this post that I am interested in evaluating.
We have for $k\in\mathbf{N}$ and $x\rightarrow 0$ $$\Gamma(-k+x) \sim \frac{(-1)^k}{k!\,x} + O(1)$$ and therefore $$\lim_{x\rightarrow 0}\frac{\Gamma(n-l+x)}{\Gamma(-l+x)} = \frac{(-1)^{l-n}}{(l-n)!}\Big/\frac{(-1)^l}{l!} = (-1)^n\frac{l!}{(l-n)!}$$ Of course this is only a very special limit. It is finite for $n=l$ but I do not know what value you expect. If the above does reproduce it, you will at least have a simple argument.
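Since the residues of $\Gamma$ at its poles alternate in sign (the residue at $-k$ is $(-1)^k/k!$), it is worth checking the ratio numerically by approaching the poles along the real axis: the magnitude is $l!/(l-n)!$ and the sign is $(-1)^n$. A sketch:

```python
import math

def gamma_ratio(n, l, eps=1e-7):
    # approach the poles: Gamma(n - l + eps) / Gamma(-l + eps)
    return math.gamma(n - l + eps) / math.gamma(-l + eps)

def closed_form(n, l):
    return (-1) ** n * math.factorial(l) // math.factorial(l - n)

for n, l in [(0, 4), (1, 4), (2, 5), (3, 3)]:
    print(n, l, round(gamma_ratio(n, l), 4), closed_form(n, l))
# the numeric ratios match (-1)^n * l! / (l-n)!
```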
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that the set $\{f \in \mathcal{F}(A,\mathbb{F}): f(a_0) = 0\}$ contains the zero vector. Suppose that $A$ is a nonempty set and $\mathbb{F}$ is a field. I would like to show that for any $a_0 \in A$, the set $\{f \in \mathcal{F}(A,\mathbb{F}): f(a_0) = 0\}$ is a subspace of $\mathcal{F}(A,\mathbb{F})$, where $\mathcal{F}(A,\mathbb{F})$ is defined to be the set of all functions from $A$ to $\mathbb{F}$. Furthermore, two functions $f$ and $g$ in $\mathcal{F}(A,\mathbb{F})$ are equal if $f(a) = g(a)$ for every $a \in A$. The closed under addition and multiplication part for showing it is a subspace is trivial for me. However, it is actually the zero vector condition that is confusing me. I understand that since we have that the set contains all functions where $f(a_0) = 0$, then it appears trivial that the condition is satisfied. However, I was thinking what if the set $A$ consisted of just one element and that the field were $\mathbb{R}$. In that case, there would be only one function, and so would it necessarily be the case that there is a function that maps to zero? In other words, how do I know that $\{f \in \mathcal{F}(A,\mathbb{F}): f(a_0) = 0\}$ is non-empty? Thanks!
The set $\{f \in \mathcal{F}(A,\mathbb{F}): f(a_0) = 0\}$ is non-empty whenever the set $A$ is non-empty, which is given. In that case the zero function $f\equiv0$ always exists, satisfies $f(a_0)=0$, and is therefore contained in the aforementioned subset. This holds even if $A$ consists of a single element: the function sending that element to $0$ is one of the functions from $A$ to $\mathbb{F}$. Thus the subset is (trivially) non-empty and contains the zero vector. Don't worry :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question regarding a PDE I can't quite see where to start on a problem about an $f(x,t)$ that solves the PDE $f_t+f\,f_x+f_{xxx}=0$, and showing that the equation will also hold for $f(x-ct,t)+c$ for any number $c$. If a function $f(x,t)$ solves the PDE $$f_t + f f_x + f_{xxx} = 0$$ I want to show that $f(x-ct,t)+c$, for any number $c$, also solves the PDE.
I have started with putting the whole expression in like this $\frac{\partial (f(x-ct,t)+c)}{\partial t}+f(x,t) \frac{\partial (f(x-ct,t)+c)}{\partial x}+\frac{\partial^3 (f(x-ct,t)+c)}{\partial x^3}$ and then get $(-c,0)+(f(x-ct)+c)+(1,0) +0$ but it doesn't seem right?
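For what it's worth, here is one way the computation can be organized (my own sketch, writing $u = x-ct$ and $g(x,t)=f(u,t)+c$; the chain rule gives $\partial_x u = 1$ and $\partial_t u = -c$):

```latex
\begin{aligned}
g_t &= f_t(u,t) - c\,f_u(u,t), \qquad g_x = f_u(u,t), \qquad g_{xxx} = f_{uuu}(u,t),\\
g_t + g\,g_x + g_{xxx} &= \bigl(f_t - c\,f_u\bigr) + (f + c)\,f_u + f_{uuu}
 = f_t + f\,f_u + f_{uuu} = 0.
\end{aligned}
```

The $-c\,f_u$ from the chain rule cancels the $+c\,f_u$ coming from the shifted amplitude, and what remains is the original PDE evaluated at $(u,t)$, which vanishes by assumption.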
{ "language": "en", "url": "https://math.stackexchange.com/questions/1928972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove $\dfrac{ax+b}{cx+d}=\dfrac{b}{d}$ if $ad=bc$. Well obviously if $\dfrac{ax+b}{cx+d}=\dfrac{b}{d}$ holds then $cbx+bd=bd+adx$ and it holds for any $x$ if $ad=bc$. However, my question is to 'algebraically' or 'directly' calculate the $\dfrac{ax+b}{cx+d}$ if $ad=bc$. By multiplying both denominator any numerator by any possible guess it doesn't reduce to a constant! Please help! Thank you.
Multiplying $$\frac{ax+b}{cx+d}$$ by $\frac{bd}{bd}\ (=1)$ gives $$\frac{b(adx+bd)}{d(bcx+bd)}=\frac{b(bcx+bd)}{d(bcx+bd)}\tag1$$ since $ad=bc$. Now $(1)$ equals $\frac bd$.
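A quick exact-arithmetic check of the identity on sample values (a sketch; the choice $(a,b)=3(c,d)$ makes $ad=bc$ hold):

```python
from fractions import Fraction

a, b, c, d = 6, 9, 2, 3          # ad = bc = 18
assert a * d == b * c
for x in range(1, 50):
    assert Fraction(a * x + b, c * x + d) == Fraction(b, d)
print("ok")
```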
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove the inequality $\sqrt{a^2+b^2}+\sqrt{c^2+b^2}+\sqrt{a^2+c^2}<1+\frac{\sqrt2}{2}$ Let $a,b,c$ be the sides of a triangle with $a+b+c=1$. Prove the inequality $$\sqrt{a^2+b^2}+\sqrt{c^2+b^2}+\sqrt{a^2+c^2}<1+\frac{\sqrt2}{2}$$ My work so far: 1) $a^2+b^2=c^2+2ab\cos \gamma \ge c^2-2ab$ 2) $$\sqrt{a^2+b^2}+\sqrt{c^2+b^2}+\sqrt{a^2+c^2}\le3\sqrt{\frac{2(a^2+b^2+c^2)}3}$$
Try to use Lagrange multipliers to solve this problem. max $[\sqrt{a^2+b^2}+\sqrt{c^2+b^2}+\sqrt{a^2+c^2}+\lambda(a+b+c-1)]$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Polar to cartesian form of r=cos(2θ) This is possible for $r=\sin(2θ)$: Polar to cartesian form of $ r = \sin(2\theta)$ Surely there is some trig identity that may substitute for $cos(2θ)$ and allow for a similar coordinates transfer. What is the cartesian form of $\cos(2\theta)$? I found something remotely similar: $$\cos(2θ) = \cos^2θ − \sin^2θ = 2 \cos^2θ − 1 = 1 − 2 \sin^2θ$$ (source: http://www.math.ups.edu/~martinj/courses/fall2005/m122/122factsheet.pdf) However they all use a squared form of sin or cos, which I am not certain how to convert into Cartesian coordinates.
All polar to Cartesian / Cartesian to polar transformations derive from these simple rules $r^2 = x^2 + y^2\\ x = r \cos \theta\\ y = r \sin \theta$ $r = \cos 2\theta$ is the 4-petaled rose. If it had an elegant form in Cartesian we would teach it. It will likely be a cubic or quartic equation. $r = \cos^2 \theta - \sin^2 \theta\\ r^3 = r^2 \cos^2 \theta - r^2 \sin^2 \theta\\ (x^2+y^2)^{\frac32} = x^2 - y^2\\ (x^2 + y^2)^3 = (x^2 - y^2)^2$ I suppose we could keep multiplying that out. But I think that looks pretty elegant.
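A numerical spot check that points of the rose satisfy the final quartic (a sketch; note the squaring step means the Cartesian curve also absorbs the negative-$r$ petals):

```python
import math

for k in range(1, 400):
    t = k * math.pi / 200          # sample angles in (0, 2*pi)
    r = math.cos(2 * t)
    x, y = r * math.cos(t), r * math.sin(t)
    assert abs((x * x + y * y) ** 3 - (x * x - y * y) ** 2) < 1e-12
print("ok")
```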
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
"The following are equivalent" What does it mean for several statements to be equivalent? And why does it suffice to prove a "cyclic" chain $$A_1\implies A_2\implies \cdots\implies A_n\implies A_1$$ in order to show that the conditions $A_1, \dots, A_n$ are equivalent?
If you have shown $$A_1\Longrightarrow A_2 \Longrightarrow\ldots\Longrightarrow A_n $$ To show equivalence of all of these statements it suffices to finally show $A_n \Longrightarrow A_1$, because the former implications all hold and so you get equivalence between any two statements. For instance, $A_2\iff A_5$ because $A_2\Longrightarrow A_3\Longrightarrow\ldots\Longrightarrow A_5$ means you have $A_2\Longrightarrow A_5$ but by completing the cycle you also have $A_5\Longrightarrow A_6\Longrightarrow\ldots\Longrightarrow A_n\Longrightarrow A_1\Longrightarrow A_2$ which is $A_5\Longrightarrow A_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 1 }
limit of $a_n=\sqrt{n^2+2} - \sqrt{n^2+1}$ as $n$→∞ $a_n=\sqrt{n^2+2} - \sqrt{n^2+1}$ as $n$→∞ Both limits tend to infinity, but +∞ −(+∞) doesn't make sense. How would I get around to solving this?
Multiply by the conjugate over the conjugate, then go from there. In your case, you should multiply your expression by $\frac{\sqrt{n^2+2}+\sqrt{n^2+1}}{\sqrt{n^2+2}+\sqrt{n^2+1}}$
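Carrying that through gives $a_n = \frac{1}{\sqrt{n^2+2}+\sqrt{n^2+1}}$, which visibly tends to $0$. A quick numerical sketch:

```python
import math

def a(n):
    return math.sqrt(n * n + 2) - math.sqrt(n * n + 1)

def a_conj(n):
    # the same sequence after multiplying by the conjugate
    return 1 / (math.sqrt(n * n + 2) + math.sqrt(n * n + 1))

for n in (1, 10, 100, 1000):
    print(n, a(n), a_conj(n))
# both forms agree and behave like 1/(2n), shrinking toward 0
```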
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Let $f(a) = \frac{13+a}{3a+7}$ where $a$ is restricted to positive integers. What is the maximum value of $f(a)$? Let $f(a) = \frac{13+a}{3a+7}$ where $a$ is restricted to positive integers. What is the maximum value of $f(a)$? I tried graphing but it didn't help. Could anyone answer? Thanks!
$f'(a)=-\frac{32}{(3a+7)^2}<0$ so $f$ is strictly decreasing for $a>-\frac73$. So $f(1)>f(2)>f(3)>\cdots$, and the maximum over the positive integers is $f(1)=\frac{13+1}{3\cdot 1+7}=\frac{14}{10}=\frac{7}{5}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Does $\sum_{n = 2}^{\infty} [\zeta(n) - 1]$ converge? Question in the title. Does $\sum_{n = 2}^{\infty} [\zeta(n) - 1]$ converge? If not, how about $\sum_{n = 1}^{\infty} [\zeta(2n) - 1]$?
Figured out the solution: $$\sum_{n=2}^{\infty} [\zeta(n) - 1] = \sum_{n=2}^{\infty}\sum_{k=2}^{\infty} k^{-n} = \sum_{k=2}^{\infty}\sum_{n=2}^{\infty} k^{-n} = \sum_{k=2}^{\infty}\left(\frac{1}{1-\frac{1}{k}} - 1 - \frac{1}{k}\right) = \sum_{k=2}^{\infty}\frac{1}{k(k-1)},$$ which converges by comparison with $\zeta(2)$. (Swapping the order of summation is justified because all terms are positive; in fact the last sum telescopes to $1$.)
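Both the double sum and the telescoped form can be checked numerically (a sketch; truncation keeps the values slightly below the exact total $1$):

```python
def partial_double(N, K):
    # truncation of sum_{n>=2} [zeta(n) - 1] = sum_n sum_{k>=2} k^(-n)
    return sum(sum(k ** (-n) for k in range(2, K)) for n in range(2, N))

def partial_telescope(N):
    # sum_{k=2}^{N-1} 1/(k(k-1)) = 1 - 1/(N-1)
    return sum(1.0 / (k * (k - 1)) for k in range(2, N))

print(partial_double(60, 5000), partial_telescope(10**6))
# both approach 1, the exact value of the telescoping sum
```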
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
How to compute that $\int_{-\infty}^{\infty}x\exp(-\vert x\vert) \sin(ax)\,dx$ $$ \int_{-\infty}^{\infty}x\exp(-\vert x\vert) \sin(ax)\,dx\quad\mbox{where}\ a\ \mbox{is a positive constant.} $$ My idea is to use integration by parts, but I have not been able to handle the three terms. Please help me solve it.
or note $$ 2\int_{0}^{\infty}x\exp(-x) \sin(ax)\,dx = 2\;\mathrm{Im}\int_{0}^{\infty}x\exp((-1+ia)x)\;dx $$
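Carrying the hint through, $\int_0^\infty x e^{(-1+ia)x}\,dx = \frac{1}{(1-ia)^2}$, so the original integral equals $2\,\mathrm{Im}\,\frac{1}{(1-ia)^2} = \frac{4a}{(1+a^2)^2}$. A numerical spot check (a sketch):

```python
import math

def integral_numeric(a, T=60.0, n=600000):
    # the integrand x*exp(-|x|)*sin(a x) is even, so integrate [0, T] and double
    h = T / n
    s = sum((k * h) * math.exp(-k * h) * math.sin(a * k * h) for k in range(1, n + 1))
    return 2 * h * s

def closed_form(a):
    return 4 * a / (1 + a * a) ** 2

for a in (0.5, 1.0, 3.0):
    print(a, integral_numeric(a), closed_form(a))
# the two columns agree to several decimal places
```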
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $|f(x)|^p\ln|f(x)|$ is a bounded function (with some conditions) I wish to prove that $g(x,p)=|f(x)|^p\ln|f(x)|$ is a bounded function of $(x,p)$, where $0<|f(x)|\leq M$ for all $x$, and $p\in[p_1,p_2]$, where $0<p_1<p_2<\infty$. $f$ is measurable but not necessary continuous. My attempt: Let $(x,p)$ be an arbitrary point. We wish to show $|g(x,p)|\leq K$ where $K$ is independent of $x, p$. Case 1) Suppose $|f(x)|\geq 1$. Then $0<|f(x)|^p\ln|f(x)|\leq M^{p_2}\ln M$. The bounds are independent of $(x,p)$ so we are done for this case. Case 2) Suppose $|f(x)|<1$. Then $0<|f(x)|^p\leq M^p\leq M^{p_2}$. The $\ln|f(x)|$ part is kind of tricky as it is unbounded. However since $t^p\ln t\to 0$ as $t\to 0^+$, $|f(x)|^p\ln|f(x)|$ ought to be bounded. I don't know how to write it rigorously though. Thanks for any help.
If $t<1$, then $$|t^p \ln t| \le |t|^{p_1} |\ln t|.$$ Since $|t|^{p_1} |\ln t| \to 0$ as $t\to 0$, $|t|^{p_1} |\ln t|$ can be extended to a continuous function defined on $[0,1]$ and thus is bounded on $[0,1]$. Thus $t^p \ln t$ is also bounded on $[0,1] \times [p_1, p_2]$. Together with case one you have that $t^p \ln t$ is bounded on $[0,M] \times [p_1,p_2]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1929970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the equation of two ... Find the single equation of two straight lines that pass through the point $(2,3)$ and are parallel to the pair of lines $x^2 - 6xy + 8y^2 = 0$. My Attempt: Let $a_1x+b_1y=0$ and $a_2x+b_2y=0$ be the two lines represented by $x^2-6xy+8y^2=0$. Then, $$(a_1x+b_1y)(a_2x+b_2y)=0$$ $$(a_1a_2)x^2+(a_1b_2+b_1a_2)xy+(b_1b_2)y^2=0$$ Comparing with $x^2-6xy+8y^2=0$: $a_1a_2=1, -(a_1b_2+b_1a_2)=6, b_1b_2=8$. I got stuck here. Please help me to continue.
The lines $$(x-2)^2 - 6(x-2)(y-3) + 8(y-3)^2 = 0$$ on transfer of origin to $(2,3)$, using the transformations $X = x-2, Y= y-3$, become $$X^2 - 6XY + 8Y^2 = 0$$ Hence the required equation is $$(x-2)^2 - 6(x-2)(y-3) + 8(y-3)^2 = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solve $\iint_D\sqrt{9-x^2-y^2}$ Where $D$ is the positive side of a circle of radius 3 Solve $\displaystyle\iint_D\sqrt{9-x^2-y^2}$ Where $D$ is the positive side of a circle of radius 3 ($x^2+y^2=9,x\ge0,y\ge0$) I tried to subsitute variables to $r$ & $\theta$: $$x = r\cos\theta$$ $$y = r\sin\theta$$ $$E = \{0\le r\le3,0\le\theta\le\pi\}$$ $$\displaystyle\iint_D\sqrt{9-x^2-y^2} = \displaystyle\iint_EJ\sqrt{9-(r\cos\theta)^2-(r\sin\theta)^2}=\displaystyle\iint_Er\sqrt{9-(r\cos\theta)^2-(r\sin\theta)^2}$$ But have no clue on how to solve this new integral.
The integrand is the upper hemisphere $z=\sqrt{9-x^2-y^2}$, so the integral is just a volume. Over the full disc it would be half the volume of the ball $x^2+y^2+z^2\le 9$, i.e. $\frac12\cdot\frac43\pi\cdot 27 = 18\pi$. Your region ($x\ge0, y\ge0$) is only a quarter of the disc, so $\theta$ should run over $[0,\pi/2]$, and the answer is $\frac{18\pi}{4} = \frac{9\pi}{2}$.
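Since the integrand does not depend on $\theta$, the quarter-disc integral reduces to $\frac{\pi}{2}\int_0^3 r\sqrt{9-r^2}\,dr$, which is $\frac{9\pi}{2}$. A quick numerical check (a sketch):

```python
import math

def quarter_disc_integral(n=100000):
    # midpoint rule for (pi/2) * integral_0^3 r*sqrt(9 - r^2) dr
    h = 3.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += r * math.sqrt(9 - r * r)
    return (math.pi / 2) * total * h

print(quarter_disc_integral(), 9 * math.pi / 2)  # both ≈ 14.1372
```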
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If $\ker f^k = \ker f^{k+1}$, then $\ker f^k = \ker f^{k+m}$ for all $m\in\mathbb N$ If $f: V\rightarrow V$ is a linear transformation, with $V$ being a finite-dimensional vector space over $\mathbb F$, such that $\ker f^k = \ker f^{k+1}$, then $\ker f^k = \ker f^{k+m}$ for all $m\in\mathbb N$. We prove by induction on $m$. If $m =1$, this is clearly true. Now, assume it is true for $m=n$ and we show that it is true for $m=n+1$. Thus we need to show that $\ker f^k = \ker f^{k+n+1}$. Is it correct to say that since $\ker f^k = \ker f^{k+1}$ for any $k\in\mathbb N$, then $\ker f^{k+n}$ = $\ker f^{k+n+1}$, hence $\ker f^k = \ker f^{k+n+1}$?
Here's the inductive step: Suppose $\ker f^{k+n}=\ker f^k$ for some $n>0$. $\ker f^k\subset \ker f^{k+n+1}$ is trivial. To prove the reverse inclusion, let $x\in\ker f^{k+n+1}$. So $f(x)\in\ker f^{k+n}$. By the inductive hypothesis, $f(x)\in\ker f^k$, i.e. $x\in\ker f^{k+1}=\ker f^k$ (initial step). The inductive step is proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation with Vieta Jumping: $(x+y+z)^2=nxyz$. Find all $n\in\mathbb{N}$ so that there exist $x,y,z\in \mathbb{N}$ that solve: $$(x+y+z)^2=nxyz$$ I tried to attack it finding solutions, but all the solutions doesn't seem to have something in common. For example: $$ (x,y,z,n)=(1,1,1,9)$$ $$ (x,y,z,n)=(1,2,3,6)$$ $$ (x,y,z,n)=(1,4,5,5)$$ $$ (x,y,z,n)=(2,2,4,4)$$ $$ (x,y,z,n)=(8,8,16,1)$$ I was trying to solve this problem, but someone told me that a hint to solve it is using Vieta Jumping. I don't know how to use Vieta Jumping, so can anyone help me to understand how to apply Vieta Jumping and solve the problem?
Besides Vieta jumping one can successfully use Pell's equation. For $n=1$ this is demonstrated here, giving infinite families of solutions. This may work for other $n$, too. The author, Titu Andreescu, has written several notes on such Diophantine equations. If you search his articles, in particular on quadratic Diophantine equations, there are more references. Remark: For Vieta jumping, for the different equation $x^2+y^2+z^2=nxyz$, see this question on MSE.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Continuity and limits at end point of interval I am bad at calculus and I have a question about continuity. If I have a polynomial, then the function is continuous on $\mathbb{R}$ because $\lim_{x\to a} f(x) = f(a)$ for all $a\in \mathbb{R}$. My question is whether $f(x) = \sqrt{x}$ is continuous at $0$. My textbook doesn't say if this is continuous at $0$. It does say that the function is continuous from the right. But I have also heard that continuity is a more general concept. My question is: under the real definition of continuity, is $f(x) = \sqrt{x}$ continuous at $0$? Is it correct to say that $\lim_{x\to 0} \sqrt{x} = 0$ without specifying that $x$ is approaching from the right?
We say a function $f$ is continuous at an interior point $a$ of its domain $\iff$ $$\forall \epsilon >0, \exists \ \delta>0, \ s.t. \ |x-a|<\delta\implies|f(x)-f(a)|<\epsilon$$ Or, more simply notated, $$\lim_{x\to a^+}f(x)=\lim_{x\to a^-}f(x)=f(a)$$ Considering one-sided limits, we say $\lim_{x\to a^+}f(x)$ exists and equals $L \iff$ $$\forall\epsilon > 0, \exists \ \delta > 0 \ s.t. \ 0<x-a<\delta \implies|f(x)-L|<\epsilon$$ Here $L$ is the limit from the right at $a$; note that nothing to the left of $a$ is involved. We may then specialize this to the case of endpoints and define the following: a function is continuous from the right at an endpoint $a$ if $\lim_{x\to a^+}f(x)=f(a)$, and continuous from the left if $\lim_{x\to a^-}f(x)=f(a)$. Thus we may say that $f(x)=\sqrt{x}$ is continuous from the right at the point $a=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
What is the amount of different four-digit numbers that can be created from the digits $0, 1, 2, 3, 4$ and $5$? What is the amount of different four-digit numbers that can be created from the digits $0, 1, 2, 3, 4$ and $5$? The solution is $431,$ but I have no idea how this solution was found. How can I solve this problem?
As stated in some of the comments, the problem is in the description of the question. We can distinguish the following cases: * *Using the digits to form a four-digit number. The first digit cannot equal $0,$ while all digits can be used for the second, third and fourth digit. As such, the number of possibilities equals: $$5 \cdot 6^3 = 1080$$ *Using the digits to form a number which contains four unique digits. The first digit cannot equal $0,$ while the second digit cannot equal the first, the third digit cannot equal the first or the second and the fourth digit cannot equal the first, the second or the third. As such, the number of possibilities equals: $$5 \cdot 5 \cdot 4 \cdot 3 = 300$$ *Using the digits to form a number which contains at most four unique digits. We can distinguish four cases for the number of digits, and the number of possibilities thus equals: $$5 \cdot 5 \cdot 4 \cdot 3 + 5 \cdot 5 \cdot 4 + 5 \cdot 5 + 6 = 431$$ The latter scenario is the one considered by the authors, although the original question was badly posed. It should in fact have read: What is the amount of numbers that can be created from the digits $0, 1, 2, 3, 4$ and $5,$ if the number cannot contain more than four digits and every digit can be used at most once?
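All three counts can be verified by brute-force enumeration (a sketch in Python):

```python
from itertools import permutations, product

digits = "012345"

# case 1: four-digit numbers, repetition allowed, no leading zero
case1 = sum(1 for p in product(digits, repeat=4) if p[0] != "0")

# case 2: four-digit numbers with all digits distinct, no leading zero
case2 = sum(1 for p in permutations(digits, 4) if p[0] != "0")

# case 3: numbers of at most four distinct digits, no leading zero
# (the single digit 0 is counted as a number, matching the book's 431)
case3 = sum(
    1
    for r in range(1, 5)
    for p in permutations(digits, r)
    if p[0] != "0" or r == 1
)

print(case1, case2, case3)  # 1080 300 431
```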
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that a function $f:P(X)\to P(X)$ preserving the subset relation has a fixed point We have a map $f:P(X)\to P(X)$, where $P(X)$ denotes the power set of $X$, and the function is monotone (with respect to inclusion "$\subseteq$"). So for all $A\subseteq B$ we have $f(A)\subseteq f(B)$. Show that this map has a fixed point. This claim is used in some proofs of the Cantor–Schröder–Bernstein theorem; for example, see proof 3 on ProofWiki (current revision).
HINT: Consider the set $\bigcup\{A\subseteq X:A\subseteq f(A)\}$. (Be sure to show that there is at least one $A\subseteq X$ such that $A\subseteq f(A)$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can the elasticity of a concave function be strictly increasing If a real valued function $f$ defined on the positive orthant is strictly concave, can its elasticity, defined as: $$E(f(x))=\frac{xf'(x)}{f(x)}$$ be strictly increasing in $x$?
Let $f(x) = \sqrt{x+1}$. Then $f$ is strictly concave on $(0,\infty)$. Furthermore, $$(Ef)(x) = \frac{xf'(x)}{f(x)} = \frac{x\frac{1}{2\sqrt{x+1}}}{\sqrt{x+1}} = \frac{x}{2(x+1)} = \frac{1}{2}\left(1-\frac{1}{x+1}\right)$$ is strictly increasing on $(0,\infty)$.
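A quick numerical sanity check of this example (a sketch, not a proof): sampling the elasticity of $f(x)=\sqrt{x+1}$ on a grid confirms that it matches the closed form $\frac{x}{2(x+1)}$ and is strictly increasing there:

```python
import math

def f(x):
    return math.sqrt(x + 1)

def elasticity(x):
    f_prime = 1 / (2 * math.sqrt(x + 1))   # derivative of sqrt(x + 1)
    return x * f_prime / f(x)

xs = [0.1 * k for k in range(1, 101)]      # grid of sample points in (0, 10]
vals = [elasticity(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))   # strictly increasing on the grid
assert all(abs(elasticity(x) - x / (2 * (x + 1))) < 1e-12 for x in xs)
```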
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Give an example of two distinct sets $A$ and $B$ such that $A \times B = B \times A$ This is a question from my textbook. The book gives the answer $A = \varnothing$ and $B = \{1\}$. The definition of $A \times B$ is $\{(a,b): a \in A \land b \in B\}$. But if $A = \varnothing$, what could be in it? Even if it's $(\varnothing, 1)$ & $(1, \varnothing)$, how do you compare $\varnothing$ and $1$?
You are on the right track: if one of the two sets is empty, so is the Cartesian product. Formally $$ \emptyset\in \{A,B\}\Longrightarrow A\times B=\emptyset $$ then you can choose $A=\emptyset,B=\{1\}$. One can, moreover, show that all the solutions are of the form $(A=\emptyset,B \not=\emptyset)$ or $(A\not=\emptyset,B=\emptyset)$. If you want to "feel" why it is so, you have two reasons. 1) Every Cartesian product comes with its two projections $pr_1,pr_2$ such that $$ pr_1((x,y))=x;\ pr_2((x,y))=y $$ so, if you have any element in $A\times B$, you must have $A\not=\emptyset$ and $B\not=\emptyset$. 2) As remarked by Henry Swanson in the comments, $|A \times B| = |A| \cdot |B|$, so, again, $$ A\times B=\emptyset \Longleftrightarrow A=\emptyset\mbox{ or } B=\emptyset $$ Hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1930953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Closed-form condition for concyclicity $A_i(x_i, y_i)$, where $i=0,1,2,3$, are four points such that no three of them are collinear. Is there a closed form on the condition that they are concyclic? Answers with $x_0 = y_0 = x_1 (\text { or } y_1) = 0$ are also welcome.
* *Construct the circumcircle of $A_0$, $A_1$ and $A_2$. This circle will be unique and well-defined, as it has been stipulated that no three points are collinear. This circle can be found in closed form in a few ways – by the method I described here, for example. Let the centre of this circle be $(R_x,R_y)$ and its radius $r$. *Now test whether $A_3$ lies on the circle just constructed: $(R_x-x_3)^2+(R_y-y_3)^2=r^2$. The four points are concyclic if and only if this last equation holds. Note that I have given this as an algorithm rather than a formula, since a full formulaic description would be too unwieldy. Nevertheless, since each step has a closed form, so does the overall algorithm.
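The two-step algorithm can be sketched directly in code. In this hedged example (function names are mine, not from any particular library), the circumcentre comes from solving the $2\times 2$ linear system $|C-A_0|^2=|C-A_1|^2=|C-A_2|^2$:

```python
def circumcircle(p0, p1, p2):
    """Centre and squared radius of the circle through three non-collinear points.

    Equating squared distances from the centre to p0, p1 and p2 gives a
    2x2 linear system, solved here by Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a, b = 2 * (x1 - x0), 2 * (y1 - y0)
    c, d = 2 * (x2 - x0), 2 * (y2 - y0)
    e = x1**2 - x0**2 + y1**2 - y0**2
    f = x2**2 - x0**2 + y2**2 - y0**2
    det = a * d - b * c          # nonzero because no three points are collinear
    rx = (e * d - b * f) / det
    ry = (a * f - e * c) / det
    return (rx, ry), (rx - x0)**2 + (ry - y0)**2

def concyclic(p0, p1, p2, p3, tol=1e-9):
    (rx, ry), r2 = circumcircle(p0, p1, p2)
    return abs((rx - p3[0])**2 + (ry - p3[1])**2 - r2) < tol

# Four points on the unit circle, versus one pulled off it:
assert concyclic((1, 0), (0, 1), (-1, 0), (0, -1))
assert not concyclic((1, 0), (0, 1), (-1, 0), (0, -0.5))
```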
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cross product (high school level) I know that the cross product has this property: $v \times u = u \times -v$ Yet I am still struggling to get the order right. I am also not sure if I fully understand the concept of normal vectors. For instance, this question in my maths book: Line $l$ passes through point $A (-1, 1, 4)$ and has direction vector $${\bf d} = \begin{pmatrix} 6 \\ 1 \\ 5 \\ \end{pmatrix} . $$ Point $B$ has coordinates $(3,3,1)$. Plane $\Pi$ has normal vector $\bf n$, and contains the line $l$ and the point $B$. a) Write down a vector equation for $l$ b) Explain why $\overrightarrow {AB}$ and $\bf d$ are both perpendicular to $\bf n$ c) Hence find one possible vector $\bf n$ The answer to a) is $l$: $${\bf r} = \begin{pmatrix} -1 \\ 1 \\ 4 \\ \end{pmatrix} + \lambda \begin{pmatrix} 6 \\ 1 \\ 5 \\ \end{pmatrix} $$ The answer to b) is less straightforward for me. I know that $$\overrightarrow {AB} = \begin{pmatrix} 4\\ 2\\ -3 \end{pmatrix} $$ but I'm not sure how to answer the question. Normally I would have calculated the scalar product to show that it equals zero if two vectors are perpendicular, but I don't have the normal vector and there is no Cartesian equation of the plane $\Pi$. For c) though, I understand that the normal vector is the cross product of $\overrightarrow {AB}$ and $\bf d$, however I am unsure of the order. I did $\overrightarrow {AB}\times {\bf d}$ but the answer indicated that it should have been ${\bf d} \times \overrightarrow {AB}$. Could someone please help me answer b) and c)?
For (b), the normal of a plane is perpendicular to all vectors that lie in that plane, by definition. Since $\Pi$ contains $l$, it contains $\mathbf d$; since it contains $A$ and $B$ it contains the vector $AB$. Then $AB$ and $\mathbf d$ are both perpendicular to $\mathbf n$. Both $AB\times\mathbf d$ and $\mathbf d\times AB$ are fine for the cross product in (c) – they just give opposite-facing vectors. In other words, $\mathbf u\times\mathbf v=-(\mathbf v\times\mathbf u)$. These two complementary vectors correspond to the two directions a given plane's normal vector can face. You're good for this – just remember the signs!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the gradient of a given function Given the function $\phi=B_0/r=B_0/\vert \mathbf{x}\vert$ in a spherical axisymmetric geometry, where $B_0$ is a constant. Find $\nabla\phi$. The given answer is $$\phi=\frac{B_0}{r}=\frac{B_0}{\vert \mathbf{x}\vert}\implies \nabla\phi = -\frac{B_0\mathbf{x}}{\vert \mathbf{x}\vert^3}=-\frac{B_0\mathbf{e}_r}{r^2}$$ However I'm confused on how this implication has been achieved. I have attempted: $$\nabla\phi = \frac{\partial \phi}{\partial x_i}\mathbf{x}_i = B_0\frac{\partial}{\partial x_i}\left(\frac{1}{\vert\mathbf{x}\vert}\right)\mathbf{x}_i=B_0\frac{\partial}{\partial x_i}\left((\sum_{i=1}^3 x_i^2)^{-1/2}\right)\mathbf{x}_i$$ But I have no idea how to find this derivative. Can anyone help?
1º Note that $r=\sqrt{x^2+y^2+z^2}$; then $$\nabla\phi=\left(\frac{\partial}{\partial x}\frac{B_0}{\sqrt{x^2+y^2+z^2}},\ \frac{\partial}{\partial y}\frac{B_0}{\sqrt{x^2+y^2+z^2}},\ \frac{\partial}{\partial z}\frac{B_0}{\sqrt{x^2+y^2+z^2}}\right).$$ 2º $$\frac{\partial}{\partial x}\frac{B_0}{\sqrt{x^2+y^2+z^2}}=-\frac{B_0}{2\left(\sqrt{x^2+y^2+z^2}\right)^3}\,2x=-\frac{B_0\,x}{\left(\sqrt{x^2+y^2+z^2}\right)^3},$$ and similarly for $y$ and $z$. 3º $$\nabla\phi=-\frac{B_0}{\left(\sqrt{x^2+y^2+z^2}\right)^3}\,(x,y,z)=-\frac{B_0}{r^3}\,(x,y,z)=-\frac{B_0\,\mathbf{e}_r}{r^2},$$ where $\mathbf{e}_r=(x,y,z)/r$ is the unit radial vector.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to find area under sines without calculus? In the section establishing that integrals and derivatives are inverse to each other, James Stewart's Calculus textbook says (pp325--pp326, Sec.4.3, 8Ed): When the French mathematician Gilles de Roberval first found the area under the sine and cosine curves in 1635, this was a very challenging problem that required a great deal of ingenuity. If we didn’t have the benefit of the Fundamental Theorem of Calculus, we would have to compute a difficult limit of sums using obscure trigonometric identities. It was even more difficult for Roberval because the apparatus of limits had not been invented in 1635. I wonder how Gilles de Roberval did it. Wikipedia and MacTutor do not contain much info on that. How to apply the method of quadrature is exactly the real challenge I suppose. This is mainly a history question, but I'm also curious as to how one would approach this in modern days. Thank you.
Let us show that $$ I(a,b)=\int_{a}^{b}\cos(x)\,dx = \sin(b)-\sin(a) \tag{1}$$ through Riemann sums. We have to compute: $$ \lim_{n\to +\infty}\frac{b-a}{n}\sum_{k=1}^{n}\cos\left(a+\frac{(b-a)k}{n}\right) \tag{2}$$ but the RHS of $(2)$ is a telescopic sum in disguise, hence $(1)$ boils down to proving $$ \lim_{n\to +\infty}\frac{(b-a)}{n}\,\cos\left(\frac{(b-a)+(a+b) n}{2 n}\right)\frac{\sin\left(\frac{b-a}{2}\right)}{\sin\left(\frac{b-a}{2 n}\right)}=\sin(b)-\sin(a)\tag{3}$$ which is not that difficult. This is my guess on de Roberval's method.
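A numerical illustration of $(1)$ (a sketch; the step counts are arbitrary): the right-endpoint Riemann sum in $(2)$ approaches $\sin(b)-\sin(a)$ as $n$ grows:

```python
import math

def riemann_cos(a, b, n):
    """Right-endpoint Riemann sum for the integral of cos over [a, b]."""
    h = (b - a) / n
    return h * sum(math.cos(a + h * k) for k in range(1, n + 1))

a, b = 0.3, 2.1
exact = math.sin(b) - math.sin(a)
for n in (10, 100, 10000):
    print(n, abs(riemann_cos(a, b, n) - exact))   # the error shrinks like 1/n
assert abs(riemann_cos(a, b, 10000) - exact) < 1e-3
```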
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48", "answer_count": 4, "answer_id": 2 }
Is it possible to swap sums like that? Say that I have two sums like this : $$\sum_{a=0}^n\sum_{b=0}^m f_{ab}$$ Would it be true to say that this expression can be considered as equal to : $$\sum_{a=0}^m\sum_{b=0}^n f_{ab}$$ As long as the expression that comes after the sums is the same is both cases ? If it is true, is it easy to prove ? Thank you ! EDIT : This is what I want to prove :
Not at all, look: $$\sum_{a=1}^2\sum_{b=1}^3 \frac ab=\sum_{a=1}^2 \left(\frac a1+\frac a2+\frac a3\right)=\frac 11+\frac 12+\frac 13+\frac 21+\frac 22+\frac 23=\frac {11}2,$$ and $$\sum_{a=1}^3\sum_{b=1}^2 \frac ab=\sum_{a=1}^3 \left(\frac a1+\frac a2\right)=\frac 11+\frac 12+\frac 21+\frac 22+\frac 31+\frac 32=9.$$ You can swap the sums, but not just the upper limits. You do have for all finite sums: $$\sum_{a=0}^n\sum_{b=0}^m f_{ab}=\sum_{b=0}^m\sum_{a=0}^n f_{ab}.$$ Edit: What you want to prove works because you are dealing with finite sums, and you changed the indices too.
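The distinction is easy to check mechanically. In this sketch, swapping only the limits changes the value, while swapping the order of summation together with its limits (each limit staying attached to its own index) does not:

```python
from fractions import Fraction

def double_sum(outer_hi, inner_hi, f):
    """Sum over a = 1..outer_hi of the sum over b = 1..inner_hi of f(a, b)."""
    return sum(f(a, b) for a in range(1, outer_hi + 1)
                       for b in range(1, inner_hi + 1))

term = lambda a, b: Fraction(a, b)

assert double_sum(2, 3, term) == Fraction(11, 2)   # the first example above
assert double_sum(3, 2, term) == 9                 # only limits swapped: different!
# Swapping the sums *and* keeping each limit with its own index gives the same value:
assert double_sum(3, 2, lambda b, a: term(a, b)) == Fraction(11, 2)
```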
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Solve $\cot^2x=\csc x$ in degrees $$\cot^2x = \csc x$$ I have to solve for $x$ in degrees. Here's what I did: $$\cot^2x=\csc^2x - 1$$ $$\csc^2x -\csc x - 1 = 0$$ If $y=\csc x$: $$y^2-y-1=0$$ and now I cannot proceed to the given answers of 38.2° and 141.2°.
since $$\frac{\cos^2(x)}{\sin^2(x)}=\frac{1}{\sin(x)}$$ you will get $$\cos^2(x)=\sin(x)$$ this means $$1-\sin^2(x)-\sin(x)=0$$ and so you get a quadratic equation in $\sin(x)$. Solving $\sin^2(x)+\sin(x)-1=0$ gives $\sin(x)=\frac{\sqrt5-1}{2}$ (the other root lies outside $[-1,1]$); equivalently $\csc(x)=\frac{1+\sqrt5}{2}$, consistent with your $y^2-y-1=0$. Hence $x\approx 38.2^\circ$ or $x\approx 180^\circ-38.2^\circ=141.8^\circ$.
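One can finish numerically to reach the quoted answers. A sketch: the admissible root of the quadratic is $\sin x=\frac{\sqrt5-1}{2}$ (equivalently, $\csc x$ equals the golden ratio), giving $38.2^\circ$ and its supplement (which suggests the quoted $141.2^\circ$ is a misprint for $141.8^\circ$):

```python
import math

phi = (1 + math.sqrt(5)) / 2           # positive root of y^2 - y - 1 = 0
sin_x = 1 / phi                        # csc x = phi  =>  sin x = 1 / phi
x1 = math.degrees(math.asin(sin_x))    # first-quadrant solution
x2 = 180 - x1                          # sine is also positive in the second quadrant

assert round(x1, 1) == 38.2
assert round(x2, 1) == 141.8
for deg in (x1, x2):                   # both really satisfy cot^2 x = csc x
    r = math.radians(deg)
    assert abs((math.cos(r) / math.sin(r)) ** 2 - 1 / math.sin(r)) < 1e-9
```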
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the volume of the solid obtained by rotating the region between the curves around the line $y = -1$ I'm stuck on this volume problem - every time I calculate the volume I get zero. Here is the problem: Find the volume of the solid obtained by rotating the region between the curves around the line $y = -1$. $$ \begin{cases} y = \sin x \\[1ex] y = \cos x\\[1ex] \pi/4 \leq x \leq 5\pi/4 \end{cases} $$ Here's my work https://drive.google.com/file/d/0B29hT1HI-pwxSHVwVWxsNk11S00/view?usp=sharing - once I get my volume integral, I use a calculator (symbolab) to plug in my bounds, and I keep getting zero. If anyone could show me the steps to solving this problem, that'd be amazing. Thank you
Your set up is not completely correct. You do have $\pi \int _{\pi / 4}^{5\pi/4} R^2-r^2 dx$ where $R=1+\sin (x)$ and $r=1+\cos (x)$ but then you claim that $R=\cos (x) $ and $r= \sin (x)$ and this is incorrect. To see why (I believe you confused a trigonometric identity because you were correct until that simplification), just remember that $$\sin (x)^2+\cos (x)^2=1$$ If you calculate the above integral with the given $R$ and $r$, you should get $4\sqrt{2}\pi$
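A numeric cross-check of the corrected washer integral (a sketch using a simple trapezoid rule; names and step count are my own):

```python
import math

def washer(x):
    R = 1 + math.sin(x)        # outer radius, measured from the axis y = -1
    r = 1 + math.cos(x)        # inner radius
    return R ** 2 - r ** 2

a, b, n = math.pi / 4, 5 * math.pi / 4, 100000
h = (b - a) / n
trap = h * ((washer(a) + washer(b)) / 2
            + sum(washer(a + k * h) for k in range(1, n)))   # trapezoid rule
volume = math.pi * trap
assert abs(volume - 4 * math.sqrt(2) * math.pi) < 1e-6       # about 17.7715
```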
{ "language": "en", "url": "https://math.stackexchange.com/questions/1931787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate $\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T\frac{\zeta(\frac{3}{2}+it)}{\zeta(\frac{3}{2}-it)}dt$ as $\sum_{n=1}^\infty\frac{\mu(n)}{n^3}$ Using a well known theorem for Dirichlet series it can be justified that (for $T>0$) $$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T\frac{\zeta(\frac{3}{2}+it)}{\zeta(\frac{3}{2}-it)}dt=\sum_{n=1}^\infty\frac{\mu(n)}{n^3},$$ where $\zeta(s)$ is the Riemann Zeta function and $\mu(n)$ is the Möbius function. Question. Is it possible to do the calculations directly in this example? I am asking whether you can show us how to deduce the identity directly, first calculating the integral and then the limit. Is it possible? Many thanks.
$$\frac{\zeta(3/2+it)}{\zeta(3/2-it)} = \sum_{n=1}^\infty n^{-3/2} \sum_{d | n} \mu(d) (d^2/n)^{it}$$ Note it converges absolutely when $t$ is real. If $x \ne 1$ : $$\lim_{T \to \infty} \frac{1}{2T}\int_{-T}^T x^{it}dt = \lim_{T \to \infty} \frac{1}{2T} \frac{x^{iT}-x^{-iT}}{i\ln x} = 0$$ (with $x=1,x^{it} =1$ you have $\lim_{T \to \infty} \frac{1}{2T}\int_{-T}^T 1 dt = 1$) Hence $$\lim_{T \to \infty} \frac{1}{2T}\int_{-T}^T \frac{\zeta(3/2+it)}{\zeta(3/2-it)} dt = \sum_{n=1}^\infty n^{-3/2} \sum_{d | n} \mu(d) \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^T (d^2/n)^{it}dt$$ $$ = \sum_{n=1}^\infty n^{-3/2} \sum_{d | n} \mu(d) 1_{n = d^2} = \frac{1}{\zeta(3)}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Verification: Give a definition of a $G$-Set so that it may be viewed as an algebra A $G$-Set for those who may not be familiar with the terminology is as follows: Let $G$ be a group and $S$ be a set. $G$ acts on $S$ via the map $\star : G \times S \to S $ where $e \star s = s$ and $g\star (h \star s) = gh \star s, \forall s \in S, g,h \in G $ and so $\langle S, \star \rangle$ is a $G$-set An algebra is a pair $\langle A, F \rangle $ where $A$ is the universe and $F$ is the family of operations on $A$ Now I need to give a definition that presents $G$-sets as an algebra which is equivalent to the above definition of a $G$-set I thought one such definition would be : $\langle S, \star \rangle =\langle S, \star_{g},\star_{h} ... \rangle$ (***) where we have $\star = \{\star_{g} : g \in G\}$ $\star_{g} s = g \star s, \forall g \in G, \forall s \in S$ That is, each element of $G$ becomes associated with a unary operation in the signature of (***). So we would have $\star_{e}s = e \star s = s, \forall s \in S$ and $\star_{g}(\star_hs) = \star_g(h \star s) = g \star (h \star s) = gh \star s = \star_{gh}s$ This seems valid, as we have our universe $S$ with a signature of operations and can axiomatize it by the above equations.
Your definition is fine, although using $\star$ for what is (as you point out) a unary operation maybe isn't ideal. For a group $G$ acting on a set $X$, define the $G$-set $\langle X, \overline{G}\rangle$ to be the algebra with universe $X$ and operations $\overline{G} = \{\bar{g} \mid g \in G\}$, one (unary) operation for each $g\in G$. Then $\bar{g}x$ would denote the action of the group element $g$ on the element $x\in X$. As you point out, we require $\bar{e} = \operatorname{id}_X$ (the identity function on $X$), and $\overline{gh} = \bar{g} \circ \bar{h}$. Essentially, this is what you have already figured out for yourself, in slightly different notation. See also Bill Lampe's very nice and concise presentation of $G$-sets as universal algebras, which develops some of the properties of $G$-sets from this perspective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
5 odd-numbered taxis out of 9 to 3 airports A fleet of 9 taxis must be dispatched to 3 airports: three to airport A, five to B and one to C. If the cabs are numbered 1 to 9, what is the probability that all odd-numbered cabs are sent to airport B? What I have have come up with so far: * *Probability the cabs are odd is $\frac59$ *Probability they are sent to only airport B is $\frac13$ *$\frac59 + \frac13 = \frac89$ Is this the right answer?
There is only $1$ way to send all $5$ odd numbered cabs to $B$, against an unrestricted distribution of $\binom95 = 126$ Thus $Pr = \dfrac1{126}$
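A brute-force confirmation over all $\binom93\binom65=504$ dispatches (a sketch in exact arithmetic; the names are mine):

```python
from itertools import combinations
from fractions import Fraction

cabs = range(1, 10)
odd = {1, 3, 5, 7, 9}
favourable = total = 0
for to_a in combinations(cabs, 3):              # choose 3 cabs for airport A
    rest = [c for c in cabs if c not in to_a]
    for to_b in combinations(rest, 5):          # choose 5 of the rest for B
        total += 1                              # the one remaining cab goes to C
        if set(to_b) == odd:
            favourable += 1

assert total == 504                             # C(9,3) * C(6,5)
assert Fraction(favourable, total) == Fraction(1, 126)
```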
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to prove this relation between the laplacian of the logarithm and the dirac delta function? Why is this true in two dimensions? $$\nabla^2\bigg(\ln(r)\bigg)=2\pi\delta^{(2)}(\mathbf{r}),$$ where $\delta^{(2)}$ denotes the two-dimensional $\delta$-function and $r=\sqrt{x^2+y^2}$ in Cartesian coordinates. I understand that both function will look the same. But I do not know how to prove this rigorously.
Hint: Let $\varphi \in C^\infty_0(\mathbb{R}^2)$. You need to show \begin{align} \int_{\mathbb{R}^2} \log|x| \Delta \varphi(x)\ dx = 2\pi \varphi(0). \end{align} This can be done by splitting the left-hand side into \begin{align} \int_{B(0, \epsilon)} \log|x| \Delta \varphi(x)\ dx + \int_{\mathbb{R}^2\backslash B(0, \epsilon)} \log |x| \Delta \varphi(x)\ dx = I_1+I_2. \end{align} For $I_2$, use integration by parts to put the Laplacian on to $\log |x|$ (don't forget the boundary term). For $I_1$, show it vanishes as $\epsilon\rightarrow 0$. Remember the boundary term? Show that it converges to the desired quantity as $\epsilon \rightarrow 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to add IEEE 754 half precision numbers I'm getting stuck with an exercise on adding two IEEE 754 half precision numbers, the numbers are: $1110001000000001$ $0000001100001111$ I have tried to solve it using this procedure: Half precision is: $1$ sign bit, $5$ bits exponent and $10$ bits mantissa So I rewrote the numbers in this way: $1\hspace{1em}11000\hspace{0.5em}1000000001$ $0\hspace{1em}00000\hspace{0.5em}1100001111$ So the value of the first exponent is $24$; I subtract $15$ from $24$ and the result is $9$. But I don't know how to continue, and I haven't found a good reference or tutorial on this. Can someone explain the procedure to solve this?
The information you need is all in the Wikipedia pages on floating point arithmetic and IEEE 754. I agree with the comment suggesting that you may do better on a computer stackexchange site, but let me sketch a few hints about the mathematics that may help you understand the Wikipedia pages. The basic mathematical idea is that a number $x$ with sign bit $s \in \{0, 1\}$, mantissa $c \in \Bbb{N}$ and exponent $q \in \Bbb{Z}$ in base $2$ represents the number $$x = (-1)^s \times c \times 2^q$$ (The Wikipedia pages also refer to the mantissa $c$ as the "significand" or "coefficient".) For each defined precision, there are rules for the allowable ranges of $c$ and $q$. If you want to add to $x$ another IEEE 754 number $x'$ say represented by $s'$, $c'$ and $q'$, you first adjust the representations so the exponent is the smaller of $q$ and $q'$. E.g., if $q \le q'$, you represent $x'$ with exponent $q$ using the identity: $$x' = (-1)^{s'} \times c' \times 2^{q'} = (-1)^{s'} \times (2^{q'-q}c') \times 2^q$$ Once you have made the exponents the same, you can meaningfully add or subtract the adjusted mantissas and calculate the sign of the result according to the sign bits $s$ and $s'$, giving say: $$x + x' = (-1)^{s''} \times c'' \times 2^q$$ E.g., if $s = 0$, $s' = 1$ and $c < 2^{q'-q}c$, this will be $-1^1 \times (2^{q'-q}c' - c) \times 2^q$. You then "normalise" the result, i.e., you round and scale to make it conform to the rules for the required precision (if possible). With care, you can arrange for the intermediate results in all these calculations to involve just one or two additional bits. As you will find from the Wikipedia pages cited above, there are various special cases and optimisations in the representation. E.g., as you seem to know, the exponent is represented by adding a bias value dependent on the precision ($15$ in the case of half precision.) 
If a number can be normalised, you can infer that the top bit of the mantissa is $1$ and the representation omits it. Denormalised numbers are represented with the exponent bits all $0$ and do include all bits of the mantissa (this applies to one of the numbers in your example).
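Python's `struct` module understands the IEEE 754 binary16 format (format character `'e'`, available since Python 3.6), which makes it easy to check the decoding of the two numbers in the question and the rounding of their sum. A sketch (the helper name is mine):

```python
import struct

def half_bits_to_float(bits):
    """Decode a 16-bit IEEE 754 half-precision pattern via struct's 'e' format."""
    return struct.unpack('>e', bits.to_bytes(2, 'big'))[0]

a = half_bits_to_float(0b1110001000000001)
b = half_bits_to_float(0b0000001100001111)

# a: sign 1, exponent 11000 -> 24 - 15 = 9, mantissa 1.1000000001_2
assert a == -(1 + 513 / 1024) * 2 ** 9 == -768.5
# b: exponent 00000 -> subnormal, value = (mantissa / 2^10) * 2^(-14)
assert b == 0b1100001111 * 2.0 ** -24

# Add in double precision, then round the sum back to half precision:
s = struct.unpack('>e', struct.pack('>e', a + b))[0]
assert s == -768.5   # b is far smaller than half an ulp of a (0.25 here)
```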
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If the radius of a sphere is increased by $10\%$, by what percentage is its volume increased? Question: If the radius of a sphere is increased by $10\%$, by what percentage is its volume increased? Use Calculus. Answer: It increases by approximately $33.1\%$. I did the above question using calculus but my answer came out to be 30%. Here's the solution. What is wrong in it? $V = \frac{4}{3} πr^3$ $dV = \frac 4 3 π \cdot 3 r^2 dr$ $dV = 4 π r^2 dr$ since $dr = \frac{10}{100} r$ therefore $dV = 4 π r^2 \cdot \frac{10}{100} r$ $dV = \frac 4 {10} π r^3$ now $dV/V = [ \frac 4{10} πr^3] / [ \frac 4 3 πr^3]$ $dV/V = 3/10$ and $100dV/V = \frac 3{10} \times 100 = 30\% $
we have $$V_1=\frac{4}{3}\pi \cdot r^3$$ then we get $$V_2=\frac{4}{3}\pi\cdot r^3\left(\frac{11}{10}\right)^3$$ and so $$\frac{V_2}{V_1}=\left(\frac{11}{10}\right)^3=1.331,$$ that is, the volume increases by $33.1\%$. Your differential computation $\frac{dV}{V}=3\,\frac{dr}{r}=0.3$ is only the first-order approximation of this exact change, which is why it returns $30\%$ instead of $33.1\%$.
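The gap between the exact $33.1\%$ and the differential's $30\%$ is just the higher-order terms of the cube; a two-line check:

```python
ratio = 1.10 ** 3                    # volume scales with the cube of the radius
exact_increase = (ratio - 1) * 100   # exact percentage change
assert round(exact_increase, 1) == 33.1
# the differential dV/V = 3 dr/r keeps only the first-order term:
assert abs(3 * 0.10 * 100 - 30) < 1e-9
```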
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Showing Euclidean metric and metric in $R^{2}$ produces same topology Question: Prove that the Euclidean metric and the metric $d_\infty \left ( x,y \right ):=\max\left \{ \left | x_{1}-y_{1} \right |,\left | x_{2}-y_{2} \right | \right \}$ defines the same topology in $\mathbb{R}^{2}$. The Euclidean metric is defined as $d_2:\mathbb{R}^{n} \times \mathbb{R}^{n}\rightarrow \mathbb{R}$ $\vec{X} \times \vec{Y} \mapsto d\left ( \vec{X},\vec{Y} \right ) $ I would like to request for a useful hint to kickstart my attempt. Thanks in advance.
To show that $d_1(x,y) = \sqrt{(x_1 - y_1)^2 + (x_2-y_2)^2}$ and the distance $d_2(x,y)=\max\{|x_1-y_1|, |x_2-y_2|\}$ produce the same topology, it is enough to see the following inequality: there exist constants $c_1$ and $c_2$ such that for all $x,y$, $c_1 d_2(x,y) \leq d_1(x,y) \leq c_2 d_2(x,y)$. Proof: On one hand, $$d_1(x,y) = \sqrt{(x_1 - y_1)^2 + (x_2-y_2)^2} \leq \sqrt{2\max((x_1 - y_1)^2,(x_2-y_2)^2)} \leq \sqrt{2}\cdot d_2(x,y).$$ On the other hand, $$ d_2(x,y) = \max\{|x_1-y_1|, |x_2-y_2|\} = \sqrt{\max\{|x_1-y_1|, |x_2-y_2|\}^2} \leq \sqrt{|x_1-y_1|^2 + |x_2-y_2|^2} = d_1(x,y). $$ Hence, the result follows. From here, I'll leave you to show, using the definition of the topology given by a metric, that the two topologies are the same. Hint: show that open sets given by one topology are contained in the other, and vice versa. By the way, any two metrics derived from norms on $\mathbb{R}^2$ generate the same topology. You have only picked out a special case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Series Expansion for $\ln(x)$ I have a mathematics assignment, which requires me to proof that $$\ln\frac{2}{3} = \sum_{n=1}^{\infty}\frac{(-1)^{n}}{2^{n}n}$$. I know, I can solve this by proving $\ln x$ = $\sum_{n=1}^{\infty }\frac{1}{n}\left ( \frac{x-1}{x} \right )^{n}$, but I don't know how to prove this, so can anybody offer some help?
Using the geometric series $$\frac{1}{1-x}=\sum_{k=0}^{\infty} x^{k}, \qquad |x|<1,$$ and integrating both sides, $$-\ln(1-x)=\sum_{k=0}^{\infty}\frac{x^{k+1}}{k+1},$$ so $$\ln (1-x)=-\sum_{k=1}^{\infty} \frac { x^k}{k}. $$ Substituting $x=\frac{y-1}{y}$ (so that $1-x=\frac1y$) gives $$\ln (y)= \sum_{k=1}^{\infty} \frac{1}{k} \left( \frac{y-1}{y} \right) ^{k}. $$
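A numerical check of both the general formula and the special case $y=2/3$, where $\frac{y-1}{y}=-\frac12$ (the partial-sum lengths are arbitrary):

```python
import math

def ln_series(y, terms=200):
    """Partial sum of sum_{k>=1} (1/k) * ((y - 1) / y)^k; valid for y > 1/2."""
    t = (y - 1) / y
    return sum(t ** k / k for k in range(1, terms + 1))

target = sum((-1) ** n / (2 ** n * n) for n in range(1, 60))
assert abs(target - math.log(2 / 3)) < 1e-12      # the identity in the question
assert abs(ln_series(2 / 3) - math.log(2 / 3)) < 1e-12
assert abs(ln_series(2.0) - math.log(2.0)) < 1e-12
```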
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Prove $\lim_{(x,y)\to(1,1)} x^2+xy+y=3$ Prove that $$\lim_{(x,y)\to(1,1)} x^2 + xy + y = 3$$ using the epsilon-delta definition. What I have tried: Let $\epsilon > 0$ be arbitrary. We must show that for every $\epsilon$ we can find $\delta>0$ such that $$0 < \|(x,y) - (1,1)\| < \delta \implies \|f(x,y) - 3\| < \epsilon$$. Or equivalently, $$0 < \sqrt{(x-1)^2 + (y-1)^2} < \delta \implies |x^2+xy+y-3| < \epsilon$$ The problem is finding the $\delta$. I have been trying to manipulate $|x^2+xy+y-3|$ with no success. $|x^2+xy+y-3|$ $=|(x^2-1)+y(x+1)-2|$ $=|(x-1)(x+1)+y(x+1)-2|$ $=|(x+1)[(x-1)+y]-2|$ $=|(x+1)||[(x-1)+(y-1)-1|$ Any help is appreciated!
\begin{align} x^2+xy+y-3 &= (x-1)^2+2x-1+(x-1)(y-1)+x+y-1+(y-1)+1-3 \\ &=(x-1)^2+(x-1)(y-1)+(y-1)+3x+y-4 \\ &=(x-1)^2+(x-1)(y-1)+(y-1)+3(x-1)+(y-1)\\ &=(x-1)^2+(x-1)(y-1)+2(y-1)+3(x-1)\\ \end{align} Let $\delta= \min(1, \frac{\epsilon}7),$ Then \begin{align} |x^2+xy+y-3| &\leq |x-1|^2+|x-1||y-1|+2|y-1|+3|x-1| \\ &\leq 2\delta^2+5\delta \\ & \leq 7 \delta \\ & \leq \epsilon \end{align} Alternatively, let me work from where you left off, there is a mistake at the last line of your equation, it should be \begin{align} (x+1)[(x-1)+(y)]-2 &=(x+1)[(x-1)+(y-1+1)]-2 \\ &=(x+1)[(x-1)+(y-1)]+x+1-2 \\ &=(x+1)[(x-1)+(y-1)]+(x-1) \end{align} Choose $\delta = \min(1, \frac{\epsilon}7)$ Then $|x-1|\leq \delta$ implies $1-\delta \leq x \leq 1+\delta$ and hence $|x+1|\leq 3,$ Hence, \begin{align} |(x+1)[(x-1)+(y-1)]+(x-1)| &\leq 3 [|x-1|+|y-1|]+|x-1|\\ & \leq 3(2\delta)+\delta \\ & = 7\delta \\ & \leq \epsilon \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1932891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Multiplication of series Suppose $a_n\geq0$ and $\sum a_n$ is convergent. Show that $ \sum 1/(n^2\cdot a_n)$ is divergent. I haven't been able to get any result from any of my approaches (which include the general tests for positive series). However if I were to create $\sum b_n$ such that the terms are in the same order as in $\sum a_n$ except the terms for which $a_n=0$ have been omitted, then of course I would be able to solve it. But would this approach be correct. I was also wondering whether some result related to p test could be used. Or, if you have a better approach please do share it
Assuming $a_n\ne 0$: If $\sum\limits_{n=1}^{\infty}{\frac{1}{n^2a_n}}<\infty$, then by Cauchy-Schwarz we would have $$\left(\sum\limits_{n=1}^{\infty}{\frac{1}{n}}\right)^2 = \left(\sum\limits_{n=1}^{\infty}{\sqrt{a_n}\frac{1}{n\sqrt{a_n}}}\right)^2 \le \sum\limits_{n=1}^{\infty}{a_n}\sum\limits_{n=1}^{\infty}{\frac{1}{n^2a_n}}<\infty,$$ contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integral representation of $\sum_{k=0}^{n} \frac{x^k}{(k!)^2}$? I would like to know if there is any integral representation of the following sum : $\displaystyle{S_n(x) := \sum_{k=0}^{n} \frac{x^k}{(k!)^2}}$. I'm willing to have an idea of how fast this sums goes to infinity when $x$ is a sequence $(x_n )$ such that $x_n\to \infty$, that is I'm looking for $\lim_{n\to\infty} S_n(x_n)$. The only thing I know is that $\lim_{n\to\infty} S_n(x) = I_0(2\sqrt{x})$ for $x$ fixed, and $I_0(x)\sim_{x\to \infty} \frac{e^x}{\sqrt{2\pi x}}$ where $I_0$ is the modified Bessel function of the first kind and of order $0$.
Since by De Moivre's formula $$\binom{2k}{k}= \frac{4^k}{\pi}\int_{-\pi/2}^{\pi/2}\cos(\theta)^{2k}\,d\theta \tag{1}$$ we have $$f(x)=\sum_{k\geq 0}\frac{x^k}{k!^2}=\int_{-\pi/2}^{\pi/2}\sum_{k\geq 0}\frac{(4x)^k\cos(\theta)^{2k}}{\pi(2k)!}\,d\theta \tag{2}$$ hence $$ \boxed{\,f(x)=\color{red}{\frac{1}{\pi}\int_{-\pi/2}^{\pi/2}\cosh\left(2\sqrt{x}\cos\theta\right)\,d\theta}.} \tag{3}$$
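A numerical check of $(3)$ (a sketch; the integrand is smooth and $\pi$-periodic, so the trapezoid rule over a full period converges very fast here):

```python
import math

def f_series(x, terms=60):
    """Partial sum of sum_{k>=0} x^k / (k!)^2."""
    total, term = 0.0, 1.0            # term holds x^k / (k!)^2
    for k in range(terms):
        total += term
        term *= x / ((k + 1) ** 2)
    return total

def f_integral(x, n=20000):
    """(1/pi) * integral of cosh(2 sqrt(x) cos t) over [-pi/2, pi/2], trapezoid rule."""
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    g = lambda t: math.cosh(2 * math.sqrt(x) * math.cos(t))
    s = (g(a) + g(b)) / 2 + sum(g(a + k * h) for k in range(1, n))
    return h * s / math.pi

for x in (0.5, 1.0, 4.0):
    assert abs(f_series(x) - f_integral(x)) < 1e-9
```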
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
show $h : \underline{A} \to \underline{B}$ is a homomorphism iff $h$ is a subuniverse of $A \times B$ show $h : \underline{A} \to \underline{B}$ is a homomorphism iff $h$ is a subuniverse of $\underline{A} \times \underline{B}$ where $\underline{A}$ and $\underline{B}$ are similar algebras $\Rightarrow$ Assume $h$ is a homomorphism, $h \subset A \times B$ as $h = \{(a,b) : a \in A \wedge h(a) = b \in B\}$ and additionally by assumption we have for all n-ary operations $f$ in the type of these algebras we have that $h(f^{A}(a_1, ... , a_n)) = f^{B}(h(a_1),...,h(a_n))$ where $f^{A}$ and $f^{B}$ are operations on $A$ and $B$, respectively. this is where I get stuck in this direction, I believe what the key is to show this forms a subalgebra of the direct product which gives directly the result that we get a subuniverse $\Leftarrow$ Assume $h$ is a subuniverse of $\underline{A} \times \underline{B}$, $h$ forms a subset of $A \times B$ such that it is closed under the fundamental operations of $\underline{A} \times \underline{B}$ which gives us a subalgebra. for the converse, I am not sure how I extract that $h$ gives us a homomorphism
If $h : \mathbf{A} \to \mathbf{B}$ is a homomorphism, and $f$ is an $n$-ary operation of the type of the algebras, then take $n$ elements of $h$ as a subset of $A \times B$: $$(a_1,b_1), \ldots, (a_n,b_n) \in h,$$ which means they have the form (as you say) $$(a_1,h(a_1)), \ldots, (a_n,h(a_n)).$$ Now we want to prove that $$f( (a_1,h(a_1)), \ldots, (a_n, h(a_n)) ) \in h.$$ Now $f$, in $\mathbf{A} \times \mathbf{B}$, is computed coordinate-wise, so that \begin{align} f( (a_1,h(a_1)), \ldots, (a_n, h(a_n)) ) &= ( f(a_1, \ldots, a_n), f(h(a_1), \ldots, h(a_n)) ) \\ &= (f(a_1, \ldots, a_n),h(f(a_1, \ldots, a_n))), \end{align} and this pair, indeed, belongs to $h$. For the converse, if $h$ is a subuniverse of $\mathbf{A} \times \mathbf{B}$, and (as we both agree with Alex Kruckman) $h:A\to B$ is a map, to see that $h$ is a homomorphism, suppose $f$ is as before, and $a_1, \ldots, a_n \in A$, so that $h(a_1), \ldots, h(a_n) \in B$, and $$f( (a_1,h(a_1)), \ldots, (a_n,h(a_n)) ) \in h,$$ because $h$ is a subuniverse of $\mathbf{A} \times \mathbf{B}$. Now, as before, $f$ is computed coordinate-wise, so that \begin{align} f( (a_1,h(a_1)), \ldots, (a_n,h(a_n)) ) &= ( f(a_1, \ldots, a_n), f(h(a_1), \ldots, h(a_n)) )\\ &= ( f(a_1, \ldots, a_n), h(f(a_1, \ldots, a_n)) ), \end{align} where the last equality follows from the fact that $h$ is a map so, if $f(a_1, \ldots, a_n) \in A$, then certainly $$( f(a_1, \ldots, a_n), h(f(a_1, \ldots, a_n)) ) \in h,$$ and is the unique such pair given the first coordinate. But we have already noticed that $f( (a_1,h(a_1)), \ldots, (a_n,h(a_n)) ) \in h$, and therefore that $( f(a_1, \ldots, a_n), f(h(a_1), \ldots, h(a_n)) ) \in h$, and so these elements must be equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Create a logical expression that has this truth table The following table has 3 input values. I need to make a logical expression that has this truth table.

X Y Z  OUTPUT
0 0 0    0
0 0 1    1
0 1 0    0
0 1 1    0
1 0 0    1
1 0 1    1
1 1 0    0
1 1 1    1

I attempted combinations like AND with XOR or XNOR with AND. But I can't seem to find the combination that matches the logic table to get the output of the table.
Consider the rows where the output equals $1$; all we need to do is write the combination that produces each of these rows and then take the disjunction (OR) of them: $$ Output = (\neg X\land \neg Y \land Z) \lor (X \land \neg Y \land \neg Z) \lor (X \land \neg Y \land Z) \lor (X\land Y \land Z) $$
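As a sanity check, the sum-of-products expression above can be evaluated row by row; a small Python script (mine, not part of the original answer) confirms that it reproduces the table:

```python
def output(x, y, z):
    # one product term per row with output 1: 001, 100, 101, 111
    return ((not x and not y and z) or
            (x and not y and not z) or
            (x and not y and z) or
            (x and y and z))

table = {
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 0, (0, 1, 1): 0,
    (1, 0, 0): 1, (1, 0, 1): 1, (1, 1, 0): 0, (1, 1, 1): 1,
}
results = {row: int(bool(output(*row))) for row in table}
```

Any row-by-row match confirms the disjunction is correct; minimization (e.g. with a Karnaugh map) could then shorten it.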
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Easier way to calculate the derivative of $\ln(\frac{x}{\sqrt{x^2+1}})$? For the function $f$ given by $$ \large \mathbb{R^+} \to \mathbb{R} \quad x \mapsto \ln \left (\frac{x}{\sqrt{x^2+1}} \right) $$ I had to find $f'$ and $f''$. Below, I have calculated them. But isn't there a better and more convenient way to do this? My method: $$ f'(x)=\left [\ln \left (\frac{x}{(x^2+1)^\frac{1}{2}} \right) \right ]'=\left (\frac{(x^2+1)^\frac{1}{2}}{x} \right)\left (\frac{x}{(x^2+1)^\frac{1}{2}} \right)'=\left (\frac{(x^2+1)^\frac{1}{2}}{x} \right) \left (\frac{(x^2+1)^\frac{1}{2}-x[(x^2+1)^\frac{1}{2}]'}{x^2+1} \right)=\left (\frac{(x^2+1)^\frac{1}{2}}{x} \right) \left (\frac{(x^2+1)^\frac{1}{2}-x\cdot\frac{1}{2}(x^2+1)^{-\frac{1}{2}}\cdot 2x}{x^2+1} \right)=\left (\frac{(x^2+1)^\frac{1}{2}}{x} \right) \left (\frac{(x^2+1)^\frac{1}{2}-x^2(x^2+1)^{-\frac{1}{2}}}{x^2+1} \right)=\frac{(x^2+1)-x^2}{x(x^2+1)}=\frac{1}{x(x^2+1)} $$ and $$ f''(x)=\left(\frac{1}{x^3+x}\right)'=-\frac{3x^2+1}{(x^3+x)^2}. $$ This took me much more than 1.5 hours just to type into LaTex :'(
By implicit differentiation: Let $$ y(x) = \log\left[\frac{x}{\sqrt{x^2 + 1}}\right]. $$ Then $$ (x^2 + 1)e^{2 y(x)} = x^2. $$ Differentiating both sides, $$ (x^2 + 1)e^{2y(x)}y'(x) + x e^{2y(x)} = x. $$ Solving for $y'(x)$, $$ y'(x) = \frac{x(e^{-2y(x)}-1)}{(x^2 + 1)} = \frac{1}{x(x^2+1)}. $$
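To double-check the closed form $y'(x)=\frac{1}{x(x^2+1)}$ numerically, one can compare it with a central-difference quotient at a few sample points (a quick sketch of mine, not part of the original answer; the sample points are arbitrary):

```python
import math

def f(x):
    # the function from the question, defined for x > 0
    return math.log(x / math.sqrt(x * x + 1))

def fprime_claimed(x):
    # the closed form obtained by implicit differentiation
    return 1.0 / (x * (x * x + 1))

def central_diff(g, x, h=1e-6):
    # symmetric difference quotient, accurate to O(h^2)
    return (g(x + h) - g(x - h)) / (2 * h)

checks = [abs(central_diff(f, x) - fprime_claimed(x))
          for x in (0.5, 1.0, 2.0, 5.0)]
```

The discrepancies should all be far below the truncation error of the difference quotient.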
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Sequence that is neither increasing, nor decreasing, yet converges to 1 Give an example of a sequence which is neither increasing after a while, nor decreasing after a while, yet which converges to 1. My solution: $1.01,\ .99,\ 1.001,\ .999,\ 1.0001,\ .9999,\ \text{etc}\dots$ Does that satisfy all the conditions? Also, judging by the instructions, do you think I would have to define that sequence? In which case, I could do $\{x_n\} = 1 + .01^n$ for odd $n$ and $1 - .01^n$ for even $n$ (which would change the sequence, but just increases the rate at which it approaches $1$). The definition of an increasing sequence used is the next term being bigger than OR equal to the preceding term. And dually for decreasing.
Yes, your sequence satisfies all required conditions. Another example sequence: $$a_n = \begin{cases} 1+\frac 1n & \text{if } \log_{10}n \in \mathbb N \cup \{0\} \\ 1 & \text{otherwise} \end{cases}$$ has terms: $$\begin{align} a_1 & = 2 \\ a_{10} & = 1.1 \\ a_{100} & = 1.01 \\ a_{1000} & = 1.001 \\ a_{10000} & = 1.0001 \\ \ldots \end{align}$$ and all remaining terms equal $1$.
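A quick numerical sketch (the window size and cutoffs are my choices) confirms the claimed behaviour on an initial segment of the example sequence: it keeps rising and falling arbitrarily late, while its distance to the limit $1$ shrinks:

```python
pows = {10 ** k for k in range(6)}          # powers of 10 in the sampled range

def a(n):
    # the answer's sequence: 1 + 1/n at powers of 10, else 1
    return 1 + 1 / n if n in pows else 1.0

N = 10 ** 5
terms = [a(n) for n in range(1, N + 1)]
rises = [i for i in range(N - 1) if terms[i] < terms[i + 1]]
falls = [i for i in range(N - 1) if terms[i] > terms[i + 1]]
# largest deviation from the limit 1 among the later terms
tail_dev = max(abs(t - 1) for t in terms[1000:])
```

Rises occur just before each power of 10 and falls just after, so neither monotonicity ever sets in, yet the tail deviation is bounded by $1/10^k$ past index $10^k$.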
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 7, "answer_id": 0 }
How to find Laurent series for this real valued function $1/(1-x)$? http://www.wolframalpha.com/input/?i=1%2F(1-x)+taylor This page says that $$\dfrac{1}{1-x}=\sum\limits_{n=0}^\infty x^n = 1+x+x^2+x^3 +... $$ when $|x|<1$, and $$\dfrac{1}{1-x}=\sum\limits_{n=0}^\infty -x^{-(n+1)} = -\dfrac{1}{x}-\dfrac{1}{x^2}-\dfrac{1}{x^3}-... $$ when $|x|>1$. It's easy to verify that both equations are true by calculating the sum of the geometric series. And by using taylor series expansion formula I can only get the first equation. My question is how to derive the second equation. On the Wolfram Alpha page it says the second equation is given by Laurent series. I googled Laurent series and I learned that the coefficient of the Laurent series is defined by a line integral of some complex function. I assume Laurent series on real valued functions is just a specific version of the general statement. However I have limited knowledge in complex analysis and am not able to derive it by myself. Thank you in advance.
If $\left|x\right| > 1$, then $1/\left|x\right| < 1$. Hence, $$ \frac{1}{1 - x} = \frac{1/x}{1/x - 1} = \frac{-1}{x}\cdot\left(\frac{1}{1 - 1/x}\right). $$ Now use the series for $\frac{1}{1 - u}$ for $\left|u\right| < 1$ with $u$ replaced by $1/x$.
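A numeric sanity check of the $|x|>1$ expansion (the sample point $x=2.5$ and the number of terms are arbitrary choices of mine):

```python
def partial_sum(x, terms):
    # -1/x - 1/x^2 - ... - 1/x^terms
    return sum(-x ** (-(n + 1)) for n in range(terms))

x = 2.5                    # any point with |x| > 1 works
approx = partial_sum(x, 60)
exact = 1 / (1 - x)        # here: -2/3
err = abs(approx - exact)
```

With ratio $1/x = 0.4$, sixty terms already agree with $1/(1-x)$ to machine precision.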
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of never getting k tails in infinite flips I am currently trying to solve a problem but am at a standstill. The question is: if a coin is flipped infinitely many times, what is the probability that there will never be $j$ successive tails? I have a recursive sequence $a_n$, where $a_n$ is the number of ways of flipping a coin $n$ times without getting $j$ successive tails. The recursion is $$a_n = a_{n-1} + a_{n-2} + \cdots + a_{n-j}$$ (this is all based on the flips and what sequence the flips generate). Now I am asked to prove that $a_n < c^n$ for some $c<2$ for every $n \geq j+1$. Not sure where to go from here.
Suppose we flip a coin $j$ times, and the chance that any flip lands tails is $p \in (0,1)$. Then, assuming that individual flips are independent, the probability that all flips land tails is simply $p^j$. Let's call such a group of flips a single trial. Consequently, if we repeat this experiment $n$ times--i.e., we conduct $n$ trials--the probability that none of the trials results in $j$ consecutive tails is $(1 - p^j)^n$. Now, observe that the event of seeing $j$ consecutive tails anywhere, including runs that may have started in one trial and ended in the next, is a superset of the event that some trial is all tails; equivalently, avoiding $j$ consecutive tails altogether implies that no trial is all tails. It follows that the probability in $N = nj$ flips of not observing $j$ consecutive tails is at most $(1 - p^j)^n$. Taking the limit as $n \to \infty$, we find $$\lim_{n \to \infty} (1 - p^j)^n = 0,$$ since $0 < p < 1$ implies $0 < p^j < 1$, which implies $1 > 1 - p^j > 0$. Therefore, the probability of forever avoiding $j \ge 1$ consecutive tails--no matter how large $j$ is--is zero.
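The question's recursion can also be checked directly. The dynamic program below (my sketch; the value of `j` and the sample lengths are arbitrary) counts length-$n$ coin-flip strings with no run of $j$ tails by tracking the current tail-run length, and shows that the avoidance probability $a_n/2^n$ decays to $0$:

```python
j = 3  # forbid runs of 3 tails; any j >= 1 works the same way

def count_no_run(n):
    # run[k] = number of strings built so far whose current tail-run is k
    run = [1] + [0] * (j - 1)        # the empty string has tail-run 0
    for _ in range(n):
        # a head resets the run to 0 (allowed from every state);
        # a tail shifts run k to k+1, dropping the state that would hit j
        run = [sum(run)] + run[:-1]
    return sum(run)

probabilities = [count_no_run(n) / 2 ** n for n in (5, 10, 50, 200)]
```

For $j=3$ this reproduces the tribonacci-style recursion $a_n = a_{n-1}+a_{n-2}+a_{n-3}$, and the probabilities shrink geometrically.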
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Example of space $T_3$ which is not regular Does there exist an example of a topological space in which any closed set and a point can be separated by open sets, i.e. the space is $T_3$, but there exists a pair of points which can't be separated by open sets (i.e. points are not closed), so the space is not $T_1$? Hence the space would not be regular, because regular space $= T_1 + T_3$.
Take the indiscrete topology on any set with more than one point. Then it's not $T_1$, but it's $T_3$ because any closed set which does not contain a point is empty. (In fact, any example is basically the same: a $T_3$ space is regular iff it is $T_0$, and a space is $T_3$ iff its $T_0$ quotient is regular. So the only way to get examples is to take a regular space and add topologically indistinguishable "copies" of points.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1933977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
QR algorithm for finding eigenvalues and eigenvectors of a matrix Let $A$ be symmetric and diagonalizable, and let $\{\lambda_1, \cdots, \lambda_n\} $ be its spectrum. A consequence of the Spectral Theorem assures that $\exists Q$ orthogonal s.t. $\begin{pmatrix} \lambda_1 & 0 &\cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} = D = Q^{-1} A Q = Q^T A Q.$ I want to apply the QR algorithm for finding the spectrum of $A$ and an orthonormal basis of eigenvectors of $A$, assembled into an orthogonal matrix. $A^{(0)}=A$ FOR $k = 1,2,...$ 1) get the factorization $A^{(k-1)}= Q^{(k)}R^{(k)}$ 2) $A^{(k)} = R^{(k)}Q^{(k)}$ 3) $\overline{Q}^{(k)} = Q^{(1)}\cdot ...\cdot Q^{(k)} $ I've read several notes and books and now I am quite confused. In some it is assumed that all eigenvalues are distinct, in others only symmetry is assumed. It is easy to see that all $A^{(k)}$ are similar, and therefore they have the same eigenvalues, but in some notes they say that $A^{(k)}$ converges to a diagonal matrix as $k \rightarrow \infty$, in others that $A^{(k)}$ converges to a triangular matrix. I couldn't find a formal proof of this in any case though. Last but not least, in all notes it is said that the matrix $\overline{Q}^{(k)}$ converges to a basis of eigenvectors of $A$ and that it is orthogonal, but does this also hold without assuming that all eigenvalues are distinct? And where may I find a formal proof of all this? Can anyone help me make it all clear and give me some good references?
* *eigenvalues are distinct: the more relevant property would be that all eigenvalues are semi-simple, i.e., non-defective. This is guaranteed for symmetric or more generally normal matrices. This only has to do with convergence results, and has no influence in the considered case of symmetric matrices. *$A^{(k)}$ converges to a triangular matrix: this is the result for general matrices. For symmetric matrices $A^{(k)}$ stays symmetric for all $k$, so that "triangular" translates to "diagonal". *$\bar Q^{(k)}$ converges to a basis of eigenvectors of $A$: this is only true when the limit is a diagonal matrix. For normal matrices in real arithmetic, the complex eigenvalues result in $2×2$ diagonal blocks, and the corresponding columns of the cumulative $\bar Q$ are real and imaginary parts of the pair of conjugate eigenvectors. In general, where $A^{(k)}$ is increasingly triangular, the $\bar Q$ columns form a basis for an increasing sequence of invariant subspaces. Stoer/Bulirsch wrote a book on numerical analysis, and Watkins wrote a series of papers, which can (could) be found online, on Francis' QR method and modern variants. In the netlib.org archives for the LAPACK package one can find research and white papers on the methods used.
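For concreteness, here is a plain-Python sketch of the unshifted QR iteration on a small symmetric example (classical Gram-Schmidt for the factorization; the test matrix, with eigenvalues $1,2,4$, is my choice, and this is an illustration rather than production code, which would use shifts and Householder reflections):

```python
def qr(A):
    # classical Gram-Schmidt QR factorization of a small square matrix
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    qcols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(j):
            R[k][j] = sum(qcols[k][i] * cols[j][i] for i in range(n))
            v = [v[i] - R[k][j] * qcols[k][i] for i in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        qcols.append([x / R[j][j] for x in v])
    Q = [[qcols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# symmetric test matrix with eigenvalues 1, 2, 4
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
for _ in range(200):
    Q, R = qr(A)
    A = matmul(R, Q)   # A_k = R_k Q_k stays symmetric and similar to A_0
eigs = sorted(A[i][i] for i in range(3))
```

Since the matrix is symmetric with eigenvalues of distinct magnitude, the iterates converge to a diagonal matrix carrying the spectrum, as described in the bullets above.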
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dirichlet hyperbola method : estimate # of ordered pairs Let $f$ be an arithmetic function defined by $$f(n) = |A_n|$$ where $A_n = \{(a, b) : n = ab^2\}$. Estimate $$\sum_{n \leq x} f(n)$$ where $x \in \mathbb{R}^+$, using the Dirichlet hyperbola method. The error term should be $O(x^{1/3})$. The Dirichlet hyperbola method requires the function $f$ to be written as a Dirichlet convolution of two functions. The problem is that I try to compute the values of $f$ and guess $g, h$ such that $f = g * h$, but without success. * Update * According to Adam Hughes' answer, $$f(n) = (d * \lambda)(n).$$ Set $$D(x) = \sum_{n \leq x} d(n), \quad \Lambda(x) = \sum_{n \leq x} \lambda(n).$$ Let $a, b > 0$ be such that $ab = x$; then by the Dirichlet hyperbola method $$\sum_{n \leq x} f(n) = \sum_{c \leq a}d(c)\Lambda(x/c) + \sum_{e \leq b}\lambda(e)D(x/e) - D(a)\Lambda(b).$$ The problem is that all of the examples I have seen, e.g. $\sum_{n \leq x} d(n) = \sum 1*1$, know explicitly that $\sum_{n \leq x} 1 = [x]$. However, when I try to derive the corresponding facts here, and search for information on the internet, the summation formulas for $$\Lambda(x), D(x)$$ are complicated. http://mathworld.wolfram.com/LiouvilleFunction.html I try to estimate them using big O, following the other examples, but I cannot do it. Any help, or a hint for how to do the estimation? How does one deal with $\Lambda, D$ if the summation formula is not simple, or sometimes there is no formula? Thank you in advance. PS link to a new question Estimate # of order paired function using Dirichlet hyperbola method
For each $n$ we are computing $f(n) = \displaystyle\sum_{ab^2=n} 1$. Now the indicator function of the squares is just $\sum_{d|n}\lambda(d)$, where $\lambda(d)$ is the Liouville function, $\lambda(d) = (-1)^{\Omega(d)}$. Writing this out we have $$f(n) = \sum_{d|n}\sum_{e|d}(-1)^{\Omega(e)}$$ because this will have a $1$ in the outer sum exactly when the divisor $d$ is a perfect square, and note that $a$ is totally determined by $b^2$, so we need only count the number of square factors. As always, we reverse the order of summation using $ef = d, dg = n$ to produce $$f(n) = \sum_{efg = n}(-1)^{\Omega(e)}\cdot 1(f)\cdot 1(g)$$ which reveals itself as a triple convolution, $1*1*\lambda$. Since we know $(1*1)(n) = d(n)$ we can write this as $(d*\lambda)(n)$ with $d$ the number of divisors function. Here you have something neatly written as a convolution of two basic arithmetic functions, as desired.
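One can machine-check the identity $f = d * \lambda$ for small $n$ by brute force (a verification sketch of mine, not part of the original answer):

```python
def divisors(n):
    return [k for k in range(1, n + 1) if n % k == 0]

def d(n):
    # number-of-divisors function
    return len(divisors(n))

def liouville(n):
    # lambda(n) = (-1)^Omega(n), Omega counting prime factors with multiplicity
    count, m, p = 0, n, 2
    while p * p <= m:
        while m % p == 0:
            m //= p
            count += 1
        p += 1
    if m > 1:
        count += 1
    return (-1) ** count

def f(n):
    # number of pairs (a, b) with a * b^2 = n, i.e. the number of b with b^2 | n
    return sum(1 for b in range(1, n + 1) if n % (b * b) == 0)

match = all(f(n) == sum(liouville(e) * d(n // e) for e in divisors(n))
            for n in range(1, 200))
```

For instance $f(4)=2$ (from $(4,1)$ and $(1,2)$), matching $\lambda(1)d(4)+\lambda(2)d(2)+\lambda(4)d(1)=3-2+1=2$.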
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $(X\times Y)\setminus (A\times B)$ is connected Problem. Let $\emptyset \subset A\subset X$ and $\emptyset \subset B\subset Y$. If $X$ and $Y$ are connected, show that $(X\times Y)\setminus (A\times B)$ is also connected by using the criteria of connectedness that if for any continuous function $f$ such that $f:X\to \{\pm1\}$, $f$ is constant then $X$ is connected. I began by assuming that there exists a function $f:(X\times Y)\setminus (A\times B)\to\{\pm1\}$ which is continuous but not constant but couldn't proceed any further beyond that.
The proof is essentially the same as the one linked to. Suppose we have a function $f:(X\times Y)\setminus(A\times B)\to\{\pm1\}$. Begin by choosing $a\in X\setminus A$ and $b\in Y\setminus B$ (which is possible because both are proper subsets). Now, let $(x,y)\in(X\times Y)\setminus(A\times B)$ be arbitrary. We will show that $f(x,y)=f(a,b)$. Because $(x,y)\notin A\times B$, either $x\notin A$ or $y\notin B$. Without loss of generality, suppose $x\notin A$. Then $\{x\}\times Y$ is homeomorphic to $Y$ and contained in $(X\times Y)\setminus(A\times B)$, so the restriction $f|_{\{x\}\times Y}$ is constant. Similarly, $X\times\{b\}$ is homeomorphic to $X$ and contained in $(X\times Y)\setminus(A\times B)$, so $f|_{X\times\{b\}}$ is constant. Hence $$f(x,y)=f(x,b)=f(a,b)$$ and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that there exists $c\in[a,b]$ such that $f(c)=\frac{1}{n} ( f(x_1)+f(x_2)+...f(x_n))$ Given a continuous function $f$ on $(a,b)$ and $n$ elements $x_1,x_2,\ldots,x_n$ of $(a,b)$, show that there exists $c\in[a,b]$ such that $f(c)=\frac{1}{n} ( f(x_1)+f(x_2)+\cdots+f(x_n))$. That is equivalent to showing that $$f(c)+f(c)+\cdots+f(c) \ (n \text{ times})=f(x_1)+f(x_2)+\cdots+f(x_n).$$ So consider the function $$h(x)=\big(f(x)-f(x_1)\big)+\big(f(x)-f(x_2)\big)+\cdots+\big(f(x)-f(x_n)\big).$$ I have in mind to use the IVT, but I don't know how to apply it to a sum like this.
Find $l,h \in \{1,2,\ldots ,n \}$ such that $f(x_l)\leq f(x_i)$ for all $i \in \{1,2,\ldots ,n \}$ and $f(x_h)\geq f(x_i)$ for all $i \in \{1,2,\ldots ,n \}$. Clearly such $l$ and $h$ exist. Now if $f(x_l)=f(x_h)$, then all the $f(x_i)$ are equal, so there is nothing to prove. So $f(x_l)<f(x_h)$, and now assume WLOG that $x_l<x_h$. Consider the function $g(x) = f(x) - \frac{1}{n}(f(x_1)+ \ldots + f(x_n))$ on the interval $[x_l,x_h]$. Clearly $g(x_l) \leq 0$ and $g(x_h)\geq 0$, hence by the IVT there exists $c \in [x_l,x_h]$ with $g(c) = 0$, i.e. $f(c) = \frac{1}{n}(f(x_1)+ \ldots + f(x_n))$.
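The argument can be illustrated numerically: pick any continuous function and sample points, then bisect $g(x)=f(x)-\text{mean}$ between the minimizing and maximizing sample points (the function and the points below are arbitrary choices of mine, not from the problem):

```python
import math

def f(x):
    # an arbitrary continuous function on the sampling interval
    return math.sin(x) + x / 3

xs = [0.5, 1.0, 1.7, 2.4]
mean = sum(f(x) for x in xs) / len(xs)

# bracket: the sample points where f is smallest and largest
a, b = sorted((min(xs, key=f), max(xs, key=f)))
for _ in range(80):                 # bisection on g(x) = f(x) - mean
    m = (a + b) / 2
    if (f(a) - mean) * (f(m) - mean) <= 0:
        b = m
    else:
        a = m
c = (a + b) / 2
```

By construction $g$ changes sign between the bracket endpoints, exactly as in the proof, so bisection homes in on a point $c$ with $f(c)$ equal to the average.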
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A geometric locus in a equilateral triangle Find the locus of the points $P$ in the plane of an equilateral triangle $ABC$ that satisfy : $$\max\{PA,PB,PC\} = \frac{PA+PB+PC}{2}.$$ I have never dealt with locus problems like these. So any help would be appreciated. (And please mention the intuition behind the answer too, if possible)
Tricky question. I will give you just a substantial hint. Let us consider a point $P$ on the minor $BC$-arc of the circumcircle of $ABC$. By applying Ptolemy's theorem to the cyclic quadrilateral $PBAC$ we get that $PA=PB+PC$, from which $$ PA = \frac{PA+PB+PC}{2}.$$ Can you guess now what the wanted locus is? Consider that $\max\{PA,PB,PC\}$ equals $PA$ iff $P$ lies in the $\widehat{BOC}$ angle, where $O$ is the circumcenter of $ABC$. To finish the proof, prove that along a ray emanating from $O$ in the $\widehat{BOC}$ angle, the function $PA-(PB+PC)$ has a unique zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Mathematical notation of set with $n+3$ members One of the math problems I have describes a set of numbers this way: let there be a set $A$ such that $A=\{{1,2,3,...,n+3}\}$. I don't understand what the $n+3$ means and what the set actually looks like.
The $n$ is arbitrary and finite. Basically, choose an $n$ so $$ A_n = \{ 1, 2, \ldots, n, n+1, n+2, n+3 \}.$$ For example, $A_4 = \{1, 2, \ldots, 5, 6, 7 \}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Prove that $a/(p-a) + b/(p-b) + c/(p-c) \ge 6$ Prove that $a/(p-a) + b/(p-b) + c/(p-c) \ge 6$, where $a,b,c$ are the sides of a triangle and $p$ is the semi-perimeter.
By the Cauchy-Schwarz inequality, $$\sum\limits_{cyc}\frac{a}{p-a}\geq\frac{(a+b+c)^2}{\sum\limits_{cyc}a(p-a)}=\frac{2(a+b+c)^2}{(a+b+c)^2-2(a^2+b^2+c^2)}\geq\frac{2(a+b+c)^2}{(a+b+c)^2-\frac{2}{3}(a+b+c)^2}=6,$$ where the middle equality uses $\sum\limits_{cyc}a(p-a)=p(a+b+c)-\sum\limits_{cyc}a^2=\frac{(a+b+c)^2}{2}-\sum\limits_{cyc}a^2$, and the last inequality uses $a^2+b^2+c^2\geq\frac{(a+b+c)^2}{3}$.
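A numeric spot check of the inequality over random triangles, including the equality case of the equilateral triangle (the sampling scheme is mine, not part of the answer):

```python
import random

random.seed(0)

def lhs(a, b, c):
    # a/(p-a) + b/(p-b) + c/(p-c) with p the semi-perimeter
    p = (a + b + c) / 2
    return a / (p - a) + b / (p - b) + c / (p - c)

ok = True
for _ in range(10000):
    a, b, c = (random.uniform(0.1, 1.0) for _ in range(3))
    if a + b > c and b + c > a and c + a > b:   # keep only genuine triangles
        ok = ok and lhs(a, b, c) >= 6 - 1e-9
```

Degenerate triangles push some denominator $p-a$ toward $0$, so the left side blows up; the minimum $6$ is attained only for $a=b=c$.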
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to find $f(x)$ in order to find $f^{(10)}(3)$? (a) Find the radius of convergence of $\sum_{n=1}^\infty (-1)^n \frac{(x-3)^n}{(2n+1)}$ and its derivative. (b) Denote by $f(x)$ the function represented by the above power series within its region of convergence. Find $f^{(10)}(3)$, i.e., its 10th derivative at $x = 3$. I can solve problem (a). However, I cannot find $f(x)$ in closed form. How should I go about finding it?
You don't need to find $f(x)$. To do part (b), you only need to think about the coefficients of the Taylor series of $f(x)$ at $x=3$.
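Following that hint: the coefficient of $(x-3)^{10}$ in the given series is $c_{10}=(-1)^{10}/(2\cdot 10+1)=1/21$, and Taylor's formula gives $f^{(10)}(3)=10!\,c_{10}$. A short exact computation (my sketch, spelling out the hint rather than quoting the original answer):

```python
from fractions import Fraction
from math import factorial

# coefficient of (x-3)^10 in sum_{n>=1} (-1)^n (x-3)^n / (2n+1)
c10 = Fraction((-1) ** 10, 2 * 10 + 1)   # = 1/21
deriv10 = factorial(10) * c10            # f^(10)(3) = 10! * c_10
```

The rational arithmetic keeps the result exact: $10!/21 = 3628800/21$.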
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Solving system of $9$ linear equations in $9$ variables I have a system of $9$ linear equations in $9$ variables: \begin{array}{rl} -c_{1}x_{1} + x_{2} + x_{3} + x_{4} + x_{5} + x_{6} + x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} - c_{2}x_{2} + x_{3} + x_{4} + x_{5} + x_{6} + x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} - c_{3}x_{3} + x_{4} + x_{5} + x_{6} + x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} + x_{3} - c_{4}x_{4} + x_{5} + x_{6} + x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} + x_{3} + x_{4} - c_{5}x_{5} + x_{6} + x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} + x_{3} + x_{4} + x_{5} - c_{6}x_{6} + x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} + x_{3} + x_{4} + x_{5} + x_{6} - c_{7}x_{7} + x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} + x_{3} + x_{4} + x_{5} + x_{6} + x_{7} - c_{8}x_{8} + x_{9} &= 0 \\ x_{1} + x_{2} + x_{3} + x_{4} + x_{5} + x_{6} + x_{7} + x_{8} - c_{9}x_{9} &= 0 \end{array} I want to find a general non-trivial solution for it. What would be the easiest and least time consuming way to find it by hand? I don't have a lot of background in maths, so I would very much appreciate if you actually found the solution and explained briefly. Thanks in advance! EDIT: Very important to mention is that always any $c_{i} > 1$ and any $x_{i} \geq 20$. Also it would be nice if someone posted how would a general non-trivial solution look in the form of $$S = \left \{( x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, x_{6}, x_{7}, x_{8}, x_{9}\right )\}$$
Let $$\mathrm A := 1_n 1_n^T - \mbox{diag} (1 + c_1, \dots, 1 + c_n)$$ where $c_i \neq -1$ for all $i \in \{1,2,\dots,n\}$. Using the matrix determinant lemma, $$\det (\mathrm A) = \left( 1 - \sum_{i=1}^n \frac{1}{1 + c_i} \right) (-1)^n \left( \prod_{i=1}^n (1+c_i)\right)$$ We want the homogeneous linear system $\mathrm A \mathrm x = \mathrm 0_n$ to have non-trivial solutions. Thus, we impose the equality constraint $\det (\mathrm A) = 0$, or, equivalently, $$\sum_{i=1}^n \frac{1}{1 + c_i} = 1$$ If this constraint is satisfied, using visual inspection, we conclude that all points on the line $$\left\{ \gamma \begin{bmatrix} \frac{1}{1 + c_1}\\ \vdots\\ \frac{1}{1 + c_n}\end{bmatrix} : \gamma \in \mathbb R \right\}$$ are solutions to the aforementioned homogeneous linear system.
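A quick exact check of the answer's solution ray for $n=9$, using rational arithmetic (the particular weights $w_i = 1/(1+c_i)$ below are an arbitrary choice of mine satisfying the constraint $\sum_i 1/(1+c_i)=1$):

```python
from fractions import Fraction

n = 9
# choose weights w_i = 1/(1+c_i) with sum w_i = 1 (here w_i proportional to i)
total = sum(range(1, n + 1))
w = [Fraction(i, total) for i in range(1, n + 1)]
c = [1 / wi - 1 for wi in w]     # then sum 1/(1+c_i) = 1 holds exactly
x = w                             # a representative of the solution ray

# row i of the system: sum_{j != i} x_j - c_i * x_i, which should vanish
residuals = [sum(x) - x[i] - c[i] * x[i] for i in range(n)]
```

Each residual is $1 - w_i(1+c_i) = 0$ exactly, and all the $c_i$ here exceed $1$, matching the question's side condition.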
{ "language": "en", "url": "https://math.stackexchange.com/questions/1934857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Convergence of Complex Sequence Let $z_n = \left(\frac{1+i}{3}\right)^n$ be a complex sequence. Show that $(z_n)$ converges. I'm unsure how to do this because I've only just started learning about complex sequences. If this were real and, say we replaced $i$ with just $1$, I would note that $2^n < 3^n$ for all $n \geq 1$. That said, I wonder if it's okay to use the absolute value of the complex number in the numerator and note that $(\sqrt{2})^n < 3^n$ to see that the sequence is convergent, or is this totally wrong?
Yes, the key here is that $$ \left| {1 + i \over 3} \right| \leq {2 \over 3} < 1. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determinant involving function of $x$ If $f(x)$ is a polynomial satisfying $$f(x)=\frac{1}{2} \begin{vmatrix} f(x) & f(\frac{1}{x})-f(x) \\ 1 & f(\frac{1}{x}) \end{vmatrix} $$ and $f(3)=244$ then $f(2)$ is what? My attempt— Replacing $x$ by $\frac{1}{x}$ we get $$f\left(\frac{1}{x}\right)=\frac12\begin{vmatrix} f\left(\frac{1}{x}\right) & -(f(\frac{1}{x})-f(x)) \\ 1 & f(x) \end{vmatrix}.$$ Which gave me $f\left(\frac{1}{x}\right)-f(x)=\frac{2f(x)-f(x)^2}{f(x)-1}$. Now putting the value of $f\left(\frac{1}{x}\right)$ obtained in terms of $f(x)$ in the original determinant I became helpless when it reduced to $f(x)=f(x)$ and I achieved nothing. Please help me out with this problem.thanks.
Your starting idea is indeed great. Using $f(x)$ and $f(1/x)$ both, and adding them up, we have $$ f(x)+f(1/x) = f(x)f(1/x). $$ This gives $$ f(1/x)-1=\frac{1}{f(x)-1}. \ \ \ \ (*) $$ Letting $x\rightarrow\infty$ in $(*)$, so that $1/x \to 0$, we get $f(0)=1$. Writing $n=\textrm{deg}(f)$ and multiplying both sides of $(*)$ by $x^n$, we see that $$ \frac{x^n}{f(x)-1} \ \textrm{ is a polynomial.} $$ Thus, $f(x)-1$ must be a divisor of $x^n$. The only possibilities are $$ f(x) = ax^k + 1. $$ By (*), we have $a^2=1$, giving $a=\pm 1$. With $f(3)=244$, we are left with $a=1$. Then $f(x) = x^k +1$. Again by $f(3)=244$, we obtain $3^k=243$, so $k=5$. Therefore, $$ f(x)=x^5+1, \ \textrm{which gives } f(2) = 33. $$
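One can machine-check that $f(x)=x^5+1$ satisfies the original determinant identity at sample rational points (a verification sketch of mine; the sample points are arbitrary):

```python
from fractions import Fraction

def f(x):
    return x ** 5 + 1

def det_identity(x):
    # f(x) = (1/2) * det([[f(x), f(1/x) - f(x)], [1, f(1/x)]])
    fx, fix = f(x), f(1 / x)
    return Fraction(1, 2) * (fx * fix - (fix - fx)) == fx

checks = [det_identity(Fraction(p, q))
          for p, q in [(2, 1), (3, 1), (5, 2), (-7, 3)]]
```

Exact rational arithmetic avoids any floating-point doubt, and the boundary values $f(3)=244$, $f(2)=33$ come out as stated.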
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$L^p_0(\Omega)\cap L^p(\Omega)$ dense in $L^p(\Omega)$ when $m(\Omega)=\infty$? Let $1\leq p < \infty$ and $\Omega\subset\mathbb{R}^n$ be a measurable (Lebesgue) set. I know that $L_0^p(\Omega)\cap L^p(\Omega)$ is dense in $L^p(\Omega)$ when $m(\Omega)$ is finite. For the proof I used the absolute continuity of the integral and the fact that $\Omega$ can be approximated with a compact set. Now I'm wondering if the claim is true when $m(\Omega)=\infty$. Obviously, I cannot use the same proof that I used in the case where $m(\Omega)<\infty$. If it is true, could you give a hint from where to start?
Sure it is, since $\mathbb{R}^n$ is $\sigma$-finite: intersect your $\Omega$ with sets of the form $\{k<|x|<k+1\}$ and approximate on each annulus with error at most $\epsilon/2^k$; then the total error will be less than $\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve $x^3 = x^{x^2-2x}$? Well, I guess this is somehow pretty easy, but there is something I don't understand. I know that if $\;x>0\;$ then I can compare the exponents: $\; 3=x^2-2x$, and from here I get that $\;x=3\;$ or $\;x=-1\;$, but because $\;x>0\;$ that leaves me only with $\;x=3$. Second, if both bases on both sides are equal to $1$ then that is another solution, and therefore $\;x=1\;$ is another solution. In conclusion, the solutions are $\;x=3\;$ or $\;x=1$. Now my question is why $\;x=-1\;$ is also a solution. How do I get to this solution? Am I supposed to just plug it in and check, given that I obtained it under the assumption $\;x>0$? Are there any steps I can follow for solving this kind of equation? Another question is what do you call this kind of equation? When I was looking for "exponential equations" I could only find ones with numbers in the bases. Thanks!
For $x>0$ with $x\neq 1$, $x^a=x^b\implies a=b$. You got $3=x^2-2x$, and solved this quadratic equation to get $x\in\{3,-1\}$. Although this cancellation rule needs care for negative bases (for instance $(-1)^1=(-1)^3$ with $1\neq 3$), here $x=-1$ does satisfy the original equation, since both exponents coincide: $3=(-1)^2-2(-1)$, so both sides equal $(-1)^3=-1$. The special base $x=1$ (both sides equal to $1$) must be checked separately, as you did. It depends on the equation which methods you should follow; there is no general algorithm. But keep using exponent rules, and remember to check the special bases $0, 1, -1$, and you will eventually reach all the solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Elementary number theory and quadratic Find the value(s) of $a$ for which the equation $ax^2-4x+9=0$ has integral roots (i.e. $x$ is an integer). My attempt: by trial and error I am getting the answer $a=\frac{1}{3}$. No information about the nature of $a$ is given.
I propose a graphical understanding of this question: Consider it as looking for the intersection points of these two curves $$\cases{y=4x-9\\y=ax^2}$$ * *The first one is a fixed straight line on which we define points with integer abscissas $P_k(k,4k-9) \ \ (k \neq 0).$ *the second one is a variable parabola with an "opening" coefficient $a$ we have to constrain in order that the parabola passes through the different points $P_k$. And this is possible for every values of $k$ by equating the ordinates (for all $k \neq 0$) $$a k^2 = 4 k -9 \ \ \ \Longleftrightarrow \ \ \ a=\dfrac{4k-9}{k^2}$$
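The same conclusion can be spot-checked: for every nonzero integer $k$, the value $a=(4k-9)/k^2$ makes $x=k$ a root of $ax^2-4x+9$, and $k=3$ recovers the asker's $a=1/3$. A small exact check (my sketch, using rational arithmetic):

```python
from fractions import Fraction

def a_for_root(k):
    # the answer's formula: the parabola y = a x^2 through P_k(k, 4k - 9)
    return Fraction(4 * k - 9, k * k)

ok = all(a_for_root(k) * k * k - 4 * k + 9 == 0
         for k in range(-50, 51) if k != 0)
```

Substituting back gives $a k^2 - 4k + 9 = (4k-9) - 4k + 9 = 0$ identically, confirming the construction.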
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proof that $ \|x+y\|^2 - \|x\|^2 \geq b(1 - 2^{-n})\|y\|^2 + 2^n( \|x+2^{-n}y\|^2 - \|x\|^2), \quad \forall x,y \in E, \forall n \in \mathbb N $ Suppose that $E$ is a Banach space over $\mathbb R$ satisfying the following inequality, for some $b > 0$ $$ \|x+y\|^2 + b\|x-y\|^2 \leq 2\|x\|^2 + 2 \|y\|^2, \quad \forall x,y \in E $$ I'm trying to prove that: $$ \|x+y\|^2 - \|x\|^2 \geq b(1 - 2^{-n})\|y\|^2 + 2^n( \|x+2^{-n}y\|^2 - \|x\|^2), \quad \forall x,y \in E, \forall n \in \mathbb N \quad (I) $$ My attempt: an induction proof over $n$. For $n = 1$, using the hypothesis with $x$ and $y$ replaced by $(x+y)/2$ and $x/2$, respectively, we have: $$ \|x + y/2\|^2 + b \|y/2\|^2 \leq 2\|(x+y)/2 \|^2 + 2\|x/2\|^2 = \frac{\|x+y\|^2}{2} + \frac{\|x\|^2}{2} $$ $$ \therefore \quad 2\|x+ 2^{-1}y\|^2 + b2^{-1}\|y\|^2 \leq \|x+y\|^2 + \|x\|^2 $$ $$ \therefore \quad \|x+y\|^2 - \|x\|^2 \geq b2^{-1}\|y\|^2 + 2(\|x+ 2^{-1}y\|^2 - \|x\|^2) $$ which is the inequality $(I)$ for $n = 1$. If I suppose that $(I)$ holds for some $n \geq 1$, I have to prove that $$ \|x+y\|^2 - \|x\|^2 \geq b(1 - 2^{-n-1})\|y\|^2 + 2^{n+1}( \|x+2^{-n-1}y\|^2 - \|x\|^2), \quad \forall x,y \in E \quad (II) $$ However, I don't know how to conclude that. Thank you for any help!!
I think I might have solved the problem. Let's re-write (I) as $$ \|x+y\|^2 - \|x\|^2 \geq b(\frac{2^n - 1}{2^n})\|y\|^2 + 2^n\|x+\frac{1}{2^n}y\|^2 - 2^n\|x\|^2 $$ $$ 2^n\|x+y\|^2 - 2^n\|x\|^2 \geq b(2^n - 1)\|y\|^2 + (2^n)^2\|x+\frac{1}{2^n}y\|^2 - (2^n)^2\|x\|^2 $$ $$ 2^n\|x+y\|^2 - 2^n\|x\|^2 \geq b(2^n - 1)\|y\|^2 + \|2^n x+y\|^2 - \|2^n x\|^2 $$ Now, replacing $x$ for $2x$, we have $$ 2^n\|2x+y\|^2 - 2^n\|2x\|^2 \geq b(2^n - 1)\|y\|^2 + \|2^{n+1} x+y\|^2 - \|2^{n+1} x\|^2 $$ $$ 2^{n+1} [2( \|x + y/2\|^2 - \|x\|^2 ) ] \geq b(2^n - 1)\|y\|^2 + (2^{n+1})^2\| x+\frac{1}{2^{n+1}}y\|^2 - (2^{n+1})^2\| x\|^2 $$ $$ 2( \|x + y/2\|^2 - \|x\|^2 ) \geq b\frac{2^n - 1}{2^{n+1}}\|y\|^2 + 2^{n+1}\| x+\frac{1}{2^{n+1}}y\|^2 - 2^{n+1}\| x\|^2 $$ Summing $b\frac{1}{2}\|y\|^2$ in both sides: $$ b\frac{1}{2}\|y\|^2 + 2( \|x + y/2\|^2 - \|x\|^2 ) \geq b(\frac{1}{2}+\frac{2^n - 1}{2^{n+1}})\|y\|^2 + 2^{n+1}(\| x+\frac{1}{2^{n+1}}y\|^2 - \| x\|^2) $$ Applying the inequality for $n = 1$ $$ \|x+y\|^2 - \|x\|^2 \geq b(1-\frac{1}{2^{n+1}})\|y\|^2 + 2^{n+1}(\| x+\frac{1}{2^{n+1}}y\|^2 - \| x\|^2) $$ since $\frac{1}{2}+\frac{2^n - 1}{2^{n+1}} = \frac{1}{2} (1 + 1 - \frac{1}{2^n}) = (1 -\frac{1}{2^{n+1}})$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Example of a left compatible relation on a semigroup that is not right compatible. Definition : Let $S$ be a semigroup. let $R$ be a relation on $S$. * *Left compatibility: $R$ is left compatible if $$ (\forall a , s ,t \in S) \ \ (s,t) \in R \ \ \Rightarrow (as , at) \in R $$ *Right compatibility: $R$ is is right compatible if $$ (\forall a , s ,t \in S) \ \ (s,t) \in R \ \ \Rightarrow (sa , ta) \in R $$ Give a counterexample of a relation which is left compatible but not right compatible. Similarly, a counter example of a relation which is right compatible but not left compatible. Any help would be appreciated. Thank you.
The Green's relation $\mathcal{R}$ that you mentioned in this question is left compatible (but in general not right compatible). Dually the Green's relation $\mathcal{L}$ is right compatible (but in general not left compatible)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1935681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
limit of a function of three variables I would like to ask you how to solve the limit at the origin of the following function: $$f(x,y,z)=\frac{x^3y^3z^2}{x^6+y^8+z^{10}}$$ I am quite sure that it is $0$, but I cannot find a function majorizing $f$ and going to $0$ at the origin (in order to use the sandwich thm). Thank you in advance for your help. P.S.:I am looking for an alternative method than using spherical coordinates.
Let $X=x^3$, $Y=y^4$, $Z=z^{5}$. Then $r^2=X^2+Y^2+Z^2=x^6+y^8+z^{10}$ and $$|x^3y^3z^2|=|X||Y|^{3/4}|Z|^{2/5}\leq r^{1+3/4+2/5}=r^{43/20}$$ where $$X=r\cos\theta\cos\phi,\quad Y=r\cos\theta\sin\phi,\quad Z=r\sin\theta\ .$$ Therefore, as $(x,y,z)\to(0,0,0)$, it follows that $r\to 0$ and $$\left|{x^3y^3z^2\over x^6+y^8+z^{10}}\right|\leq {r^{43/20}\over r^2}=r^{3\over 20}\to 0.$$
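Here is a quick numerical sanity check of the resulting bound $|f|\le r^{3/20}$ (my addition, not part of the original answer): at random points near the origin, the ratio $|f|/r^{3/20}$ never exceeds $1$.

```python
import random

def f(x, y, z):
    return x**3 * y**3 * z**2 / (x**6 + y**8 + z**10)

def r(x, y, z):
    # With X = x^3, Y = y^4, Z = z^5 we have r^2 = X^2+Y^2+Z^2 = x^6+y^8+z^10.
    return (x**6 + y**8 + z**10) ** 0.5

rng = random.Random(42)
ratios = []
for _ in range(20000):
    x, y, z = (rng.uniform(-0.5, 0.5) for _ in range(3))
    if x == 0.0 and y == 0.0 and z == 0.0:
        continue  # f is undefined only at the origin itself
    ratios.append(abs(f(x, y, z)) / r(x, y, z) ** (3 / 20))
```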
The smallest topology in $\mathbb C$ in which every singleton set is closed Let $\tau$ be the smallest topology on $\mathbb C$ such that every singleton set under $\tau$ is closed. Then which of the following is true? $1.\ (\mathbb C,\tau) \text{ is not Hausdorff}.$ $2.\ (\mathbb C,\tau) \text{ is compact}.$ $3.\ (\mathbb C,\tau) \text{ is connected}.$ $4.\ \mathbb Z \text{ is dense in } (\mathbb C,\tau).$ All $4$ options are correct if I consider this to be the co-finite topology on the complex numbers. But I do not know what the smallest topology would be. It won't be the discrete one, but what other options do we have here? Please discuss. Thank you.
First of all, notice that nothing about this problem is particular to $\Bbb C$; you can replace it with any infinite set whatsoever (and then $\Bbb Z$ is replaced by your favorite infinite subset). It does turn out to be true that the smallest suitable topology is the co-finite topology (so awesome that you know the answers for that topology already!). One needs to show that any topology in which every singleton set is closed contains the co-finite topology. So let $\tau$ be such a topology. Can you show that any set $U$ that is open in the co-finite topology must also be open in $\tau$, just using the properties of topologies (such as the finite intersection property) and the one extra thing we know about $\tau$?
Codes and Codewords I'm sorry for the very dumb question, but I just can't grasp the concept of codes. If C is an [n, k] linear code, what are n and k exactly? I know k is the dimension of C, and that C is a subset of the set of n-tuples over the field. Is n the dimension of the field? If so, why should it be specified? What does C look like? What do its codewords look like? I've read that codewords are just vectors in C, and that they are usually denoted by (a1, a2, ..., an), but what are these a's? Are they from the field?
An $[n,k]$ linear code over some field $F$ is a $k$-dimensional subspace of the vector space $F^n$. The codewords are elements of $F^n$, that is, they are $n$-tuples of elements of $F$. Thus yes, the $a_i$ are from the field. The $n$ is the length of the code, i.e., the length of each codeword. The $k$ is the dimension of the code, which you can think of as measuring the size of the code. In total there are $|F|^n$ strings of length $n$ over $F$, but only $|F|^k$ of them form the code.
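To make this concrete, here is a small example (my addition; the particular generator matrix is an arbitrary illustration, not from the original answer): a $[4,2]$ binary code has codewords of length $n=4$, and exactly $|F|^k = 2^2 = 4$ of the $2^4 = 16$ binary strings of length $4$ are codewords.

```python
from itertools import product

q, n, k = 2, 4, 2            # field size |F|, length n, dimension k
G = [[1, 0, 1, 1],           # generator matrix: its rows are a basis of C
     [0, 1, 0, 1]]

def encode(m):
    """Codeword for a message m in F^k: the linear combination m*G (mod q)."""
    return tuple(sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n))

codewords = {encode(m) for m in product(range(q), repeat=k)}
```

Being a subspace, the set of codewords is closed under addition, which the test below also confirms.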
Double Integral $\iint\limits_D\frac{dx\,dy}{(x^2+y^2)^2}$ where $D=\{(x,y): x^2+y^2\le1,\space x+y\ge1\}$ Let $D=\{(x,y)\in \Bbb R^2 : x^2+y^2\le1,\space x+y\ge1\}$. The integral to be calculated over $D$ is the following: \begin{equation} \iint_D \frac{dx\,dy}{(x^2+y^2)^2} \end{equation} I do not know how to approach the problem. I have tried integrating the function in cartesian coordinates but it doesn't seem to work out. I have also tried the variable change $u=x^2+y^2$ and $v=x+y$ (with the associated jacobian transformation) and again I cannot obtain the result.
Transforming to polar coordinates $(\rho,\phi)$, we find that on $x+y=1$ we have $\rho =1/(\cos(\phi)+\sin(\phi))$. Therefore, we can write $$\begin{align} \int_D \frac{1}{(x^2+y^2)^2}\,dx\,dy&=\int_0^{\pi/2}\int_{1/(\sin(\phi)+\cos(\phi))}^1 \frac{1}{\rho^3}\,d\rho\,d\phi\\\\ &=\frac12\int_0^{\pi/2}\sin(2\phi)\,d\phi\\\\ &=\frac12 \end{align}$$
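As a numerical cross-check of the iterated polar integral (my addition, not part of the original answer), a midpoint-rule sum over the same region lands very close to $1/2$:

```python
import math

def polar_midpoint_sum(n_phi=400, n_rho=400):
    """Midpoint-rule approximation of the answer's iterated integral:
    phi over (0, pi/2), rho from 1/(cos(phi)+sin(phi)) to 1, integrand
    1/rho^3 (the Jacobian factor rho is already folded in)."""
    total = 0.0
    dphi = (math.pi / 2) / n_phi
    for i in range(n_phi):
        phi = (i + 0.5) * dphi
        rho_min = 1.0 / (math.cos(phi) + math.sin(phi))
        drho = (1.0 - rho_min) / n_rho
        for j in range(n_rho):
            rho = rho_min + (j + 0.5) * drho
            total += dphi * drho / rho**3
    return total

approx = polar_midpoint_sum()
```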
Help with Epsilon-Delta proof for limits I tried to teach myself these types of proofs. I understand the reasoning behind it very well, but I have trouble understanding specific parts when simplifying inequalities. Let me give an example: Say I wanted to prove the following: $$\lim_{x\to1}(x^2+3)=4$$ I start by supposing: given $ε>0$, I want to find $δ>0$ such that $$0<|x-1|<δ => |(x^2+3)-4|<ε$$ I start by simplifying the RHS to find an expression that relates $ε$ to $δ$: $$|x^2-1|<ε$$ $$=>|(x-1)(x+1)|<ε$$ $$=>|x-1||x+1|<ε$$ At this point I was stuck and did research on how I should proceed. Apparently we can restrict $δ$ to only be at most $1$ unit away from $a$. Since we are dealing with the limit of $f(x)$ as ${x\to a}$, it is reasonable to restrict our "radius" around $a$ this way. I sort of understand this reasoning, although a more detailed explanation would be appreciated. Anyway, this implies (in my case): $$|x-1|<δ≤1$$ $$=>|x-1|<1$$ $$=>0<x<2$$ $$=>1<x+1<3$$ $$=>1<|x+1|<3$$ This means that the min value of |x+1| is larger than 1 and the max value is smaller than 3. However, here comes the part where I get stuck: according to multiple solutions I found online, it is reasonable to say that: $$|x-1||x+1|<3|x-1|$$ Now the part above I totally understand, but the following part I do not. Apparently, it is a logical step to deduce the following: $$|x-1||x+1|<3|x-1|<ε$$ $$=>3|x-1|<ε$$ How can we logically deduce that $$3|x-1|$$ is smaller than the given $ε$? In my reasoning, if $$3|x-1|$$ is larger than $$|x+1||x-1|$$ it does not necessarily mean that the former is also smaller than $ε$. For instance, if $3<5$ and $6>3$, then it does NOT mean that $6<5$, obviously. In my opinion, it is correct to deduce the following: $$|x-1|<|x+1||x-1|$$ (Using the same reasoning as earlier) $$=>|x-1|<|x+1||x-1|<ε$$ $$=>|x-1|<ε$$ This, to me, is pretty clear. Any expression smaller than the middle one is logically smaller than the third. Therefore, I set ε=δ.
But then I'm stuck again. In the proofs I looked at online, they say to set $δ=\min(1,ε)$, to pick the smaller value of the two. Why is that? To sum up, I would appreciate any feedback on my work, especially an explanation of why the step I questioned above is a valid deduction, and of why we pick the smaller value for $δ$. Thanks!
We get to choose our $\delta$; hence, to get the conclusion that $$3|x-1|<\epsilon$$ I can choose my $\delta$ to be $$\delta=\min(1, \epsilon/3)$$ Hence, for $|x-1| <\delta$, $$|(x+1)(x-1)|<3|x-1|<3\delta\leq 3(\epsilon/3)=\epsilon$$ Here $|x+1|<3$ holds because $\delta \leq 1$, and $3\delta \leq \epsilon$ holds because we chose $\delta \leq \epsilon/3$. Since I want $\delta$ to be smaller than both $1$ and $\epsilon/3$, I choose the minimum of the two quantities.
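To see the choice $\delta=\min(1,\epsilon/3)$ in action, here is a small numerical check (my addition, not from the original answer): it samples points with $0<|x-1|<\delta$ and confirms $|(x^2+3)-4|<\epsilon$.

```python
import random

def delta_for(eps):
    # The answer's choice: delta = min(1, eps/3).
    return min(1.0, eps / 3.0)

def check(eps, trials=5000, seed=0):
    """Sample x with 0 < |x - 1| < delta and confirm |(x^2 + 3) - 4| < eps."""
    rng = random.Random(seed)
    d = delta_for(eps)
    for _ in range(trials):
        x = 1.0 + rng.uniform(-d, d)
        if x == 1.0:
            continue  # the limit definition only constrains x != 1
        if not abs((x ** 2 + 3) - 4) < eps:
            return False
    return True
```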
Coordinate Systems I have two sets of axes: X, Y, Z and x', y', z'. I am given 4 relations: X is 60 degrees from x', Y is 90 degrees from x', Y is 120 degrees from z', and Y is 30 degrees from y'. Knowing all of this, how do I find the rotation matrix relating the XYZ axes to the x'y'z' axes?
If you are asking for a rotation matrix, we are speaking of two orthonormal systems. First step: determine $Y$. Its position is clear; it lies in the $y'z'$ plane, and with the given angles it cannot be anything but $(0,\sqrt {3} /2, -1/2)$. Second step: determine $X$, by imposing that its dot product with $x'=(1,0,0)$ be $1/2$, then that $X \cdot Y=0$, and finally that $|X|=1$. Finally, determine $Z=X \times Y$. Now you have the matrix and can check that it is actually a rotation. Then I suppose you know how to proceed.
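Following these steps numerically (my addition, not part of the original answer; note that $|X|=1$ only determines $X$'s free component up to a sign $a=\pm\sqrt3/4$, so this picks one of the two admissible matrices):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Work in primed coordinates, so x' = (1,0,0), y' = (0,1,0), z' = (0,0,1).
Y = (0.0, math.sqrt(3) / 2, -0.5)   # forced by the three angle conditions on Y

# X.x' = cos(60) fixes the first entry; X.Y = 0 gives X_z = sqrt(3)*X_y;
# |X| = 1 then gives X_y = a = +-sqrt(3)/4 (we pick the + sign).
a = math.sqrt(3) / 4
X = (0.5, a, math.sqrt(3) * a)

Z = cross(X, Y)                     # completes a right-handed orthonormal triple
R = [X, Y, Z]                       # rows of the rotation matrix in the primed basis
```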
Linear combination of columns In the following question I am trying to determine if vector $b$ is a linear combination of the columns of $A$. If this is false I need to explain why, but if it is true I need to write down the linear combination. Matrix $A$: $$ \begin{bmatrix} 1& -1& 2& 1\\ 2& -3& 2& 0\\ -1& 1& 2& 3 \\ -3& 2& 0& 3 \end{bmatrix} $$ Vector $b$: $$\begin{bmatrix} 2 \\ 3 \\ 6 \\ 9 \end{bmatrix}. $$ I know that the system $Ax = b$ has a solution if and only if the vector $b$ is a linear combination of the columns of $A$. Also, the system $Ax = 0$ has nontrivial solutions if and only if the columns of $A$ are linearly dependent.
If you perform row reduction on the augmented matrix $$ \begin{bmatrix} 1& -1& 2& 1&|&2\\ 2& -3& 2& 0&|&3\\ -1& 1& 2& 3 &|&6\\ -3& 2& 0& 3 &|&9\end{bmatrix}, $$ you'll obtain $$ \begin{bmatrix} 1&0&0&-1&|&-5\\ 0&1&0&0&|&-3\\ 0&0&1&1&|&2\\ 0&0&0&0&|&0 \end{bmatrix}, $$ which shows that the system $Ax=b$ is consistent. So, yes, $b$ is a linear combination of the columns of $A$.
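The resulting linear combination can be verified directly (a quick check I am adding, not part of the original answer): reading off the reduced matrix with the free variable $x_4=0$ gives $b=-5\,(\text{col }1)-3\,(\text{col }2)+2\,(\text{col }3)$.

```python
A = [[ 1, -1, 2, 1],
     [ 2, -3, 2, 0],
     [-1,  1, 2, 3],
     [-3,  2, 0, 3]]
b = [2, 3, 6, 9]

def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

# Particular solution from the reduced matrix with the free variable x4 = 0:
x_particular = [-5, -3, 2, 0]

# The free column contributes a null-space direction (x4 = t shifts x1 by +t
# and x3 by -t), so this vector should be mapped to zero:
x_null = [1, 0, -1, 1]
```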
Given u and v are harmonic in some region R prove the following If u and v are harmonic in some region R, prove the following: $ ( \frac {\partial u}{ \partial y} - \frac {\partial v}{ \partial x}) + i(\frac {\partial u}{ \partial x} + \frac {\partial v}{ \partial y})$ is analytic in R. Does this mean that if u and v are harmonic in R, then their first-order partials satisfy the Cauchy-Riemann equations in R? If so, how do I prove it? I tried taking the partial derivatives of each term to see if they would satisfy the Cauchy-Riemann equations, but that didn't work.
Define the functions $\hat{u}(x,y) := \frac{\partial u}{\partial y}(x,y) - \frac{\partial v}{\partial x}(x,y)$ and $\hat{v}(x,y) := \frac{\partial u}{\partial x}(x,y) + \frac{\partial v}{\partial y}(x,y)$. Then consider the function $f(x,y) := \hat{u}(x,y) + i \cdot \hat{v}(x,y)$; this is your function in question. Now what do you have to check if you want $f$ to be analytic in the region $R$? Of course, you need to check whether the Cauchy-Riemann equations are satisfied or not. Consider: $\hat{u}_x = \frac{\partial^2 u}{\partial y \partial x} - \frac{\partial^2 v}{\partial x^2}$ and $\hat{v}_y = \frac{\partial^2 u}{\partial x \partial y} + \frac{\partial^2 v}{\partial y^2} \Longrightarrow \hat{u}_x - \hat{v}_y = \frac{\partial^2 u}{\partial y \partial x} - \frac{\partial^2 v}{\partial x^2} - \frac{\partial^2 u}{\partial x \partial y} - \frac{\partial^2 v}{\partial y^2} = 0$, since $v$ is harmonic and $u$ has continuous partial derivatives, hence the order of differentiation doesn't matter (by Schwarz's theorem), so the mixed terms cancel each other out. This shows that your function $f$ satisfies the first of the CR equations. Now, in the same manner, check the second of the CR equations, and you'll arrive at the desired result.
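If you want to sanity-check this computation numerically (my addition; the choice of harmonic functions $u=x^3-3xy^2$ and $v=x^2-y^2$ is just an example, not from the original answer), finite differences confirm that $f=\hat u+i\hat v$ satisfies the Cauchy-Riemann equations:

```python
# Example harmonic functions (both have zero Laplacian on all of R^2):
def u(x, y): return x**3 - 3 * x * y**2
def v(x, y): return x**2 - y**2

h = 1e-5
def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

# Real and imaginary parts of the function in question:
def u_hat(x, y): return dy(u, x, y) - dx(v, x, y)
def v_hat(x, y): return dx(u, x, y) + dy(v, x, y)

def cr_residuals(x, y):
    """Cauchy-Riemann residuals for f = u_hat + i*v_hat; both should vanish
    (up to finite-difference error)."""
    return (dx(u_hat, x, y) - dy(v_hat, x, y),
            dy(u_hat, x, y) + dx(v_hat, x, y))
```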
Conditional probability that the first toss resulted in heads A fair coin is tossed until two heads have appeared. * *Given that exactly $k$ tosses were required, what is the conditional probability that the first toss resulted in heads? *If $p_k$ is the probability that at least $k$ tosses are required, find a formula for $p_k$ and find the smallest $k$ such that $p_k\le0.1$. How do I approach problems like this? For the first question I am not able to apply the Bayes/Price theorem because I am not sure how to derive the $P(A\cap B)$ expression in the numerator. For the second, I am stuck at "at least $k$ tosses".
Let $A$ be the event "the first toss was a head" and $B$ the event "$k$ tosses were required". Then $P(A|B) = P(A \cap B)/P(B)$. For $P(A \cap B)$ (the probability that the first toss was a head and then $k$ tosses were required to get a second head), note that in this case toss $1$ was a head, toss $k$ was a head, and all tosses in between were tails; therefore there is a single possible chain of tosses, with probability $P = 1/2^k$. For $P(B)$, consider that toss $k$ was a head and that tosses $1$ to $k-1$ contained exactly one head (at any position): $$P(B) = \binom{k-1}{1}\frac{1}{2^k} = \frac{k-1}{2^k}$$ Therefore $$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{1/2^k}{(k-1)/2^k} = \frac{1}{k-1}$$ Answer: $\frac{1}{k-1}$.
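Both parts can be checked by exhaustive enumeration (my addition, not part of the original answer; the second function addresses part 2 of the question: enumeration gives $p_k = k/2^{k-1}$, and the smallest $k$ with $p_k \le 0.1$ comes out as $k=8$).

```python
from itertools import product
from fractions import Fraction

def cond_prob_first_head(k):
    """P(first toss = H | the 2nd head appears exactly on toss k),
    by enumerating all 2^k equally likely sequences."""
    needed = first_head = 0
    for seq in product('HT', repeat=k):
        # "exactly k tosses" means the 2nd head lands on toss k
        if seq[-1] == 'H' and seq[:-1].count('H') == 1:
            needed += 1
            if seq[0] == 'H':
                first_head += 1
    return Fraction(first_head, needed)

def p_at_least(k):
    """P(at least k tosses required) = P(fewer than 2 heads in the first k-1)."""
    good = sum(1 for seq in product('HT', repeat=k - 1) if seq.count('H') < 2)
    return Fraction(good, 2 ** (k - 1))
```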
Numbers $1,2,3, \ldots, 2016$ arranged in a circle A student wrote the numbers $1,2,3, \ldots, n$ arranged in a circle, then began by erasing number $1$, leaving $2$, erasing $3$, leaving $4$, ... Example: if $n=10$: in the first round he erased $1,3,5,7,9$ and left $2,4,6,8,10$; in the second round he erased $2,6,10$ and left $4,8$; in the third round he left $4$ (because he erased the last number, $10$, in the second round), then he erased $8$. The last remaining number in the circle is $4$. If $n=2016$, what is the last remaining number in the circle? My progress: In the first round all the odd numbers, $n \equiv 1 \pmod 2$, are erased. In the second round the numbers $n \equiv 2 \pmod 4$ are erased, then $n \equiv 4 \pmod 8$, ..., but in the seventh round we have $n \equiv 0 \pmod{128}$.
Let $a_{n}$ be the remaining number if we are dealing with the numbers $1,2,3,4,5,\dots,n$. If $n$ is sufficiently large, then after erasing $1$ and leaving $2$ we have the numbers $3,4,\dots,n,2$ ahead of us. Comparing this with the situation in which we have the numbers $1,2,3,\dots,n-2,n-1$ ahead of us, we conclude that $a_{n}=a_{n-1}+2$ if $a_{n-1}<n-1$ and $a_{n}=2$ otherwise. So, next to the evident $a_1=1$, we have the recurrence: $$a_{n+1}=\begin{cases} a_{n}+2 & \text{ if }a_{n}<n\\ 2 & \text{otherwise}\end{cases}$$ Looking at the sequence $\left(a_{n}\right)$, the following conjecture arises: $$a_{n}=2n-2^{\lceil\log_{2}n\rceil}$$ This conjecture can be proved by induction, and we conclude: $$a_{2016}=4032-2^{11}=1984$$
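The conjectured formula is easy to confirm by direct simulation (my addition, not part of the original answer):

```python
def last_remaining(n):
    """Simulate the process: go around the circle erasing every other number,
    starting by erasing 1; the erase/keep parity carries over between rounds."""
    alive = list(range(1, n + 1))
    erase = True
    while len(alive) > 1:
        keep = []
        for x in alive:
            if not erase:
                keep.append(x)
            erase = not erase
        alive = keep
    return alive[0]

def formula(n):
    # Conjectured closed form from the answer: a_n = 2n - 2^ceil(log2(n)).
    m = 1
    while m < n:
        m *= 2
    return 2 * n - m
```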