Quasiconformal mappings: Metric definition In the lecture notes http://users.jyu.fi/~pkoskela/quasifinal.pdf (Prof. Koskela has made them freely available from his webpage, so I am guessing it is OK that I paste the link here) quasiconformality is defined by saying that $\displaystyle \limsup\limits_{r \rightarrow 0} \frac{L_{f}(x,r)}{l_{f}(x,r)}$ must be uniformly bounded in $x,$ where $\displaystyle L_{f}(x,r):=\sup\limits_{\vert x-y \vert \leq r} \{ \vert f(x)-f(y) \vert \}$ and $\displaystyle l_{f}(x,r):=\inf\limits_{\vert x-y \vert \geq r} \{ \vert f(x)-f(y) \vert \}.$ I have three questions concerning this definition: 1) The main question: When he proves that a conformal mapping is quasiconformal he says (at the beginning of page 5): "Thus, given a vector $h$, we have that $|Df(x, y)h| = |\nabla u||h|$. By the complex differentiability of $f$ we conclude that: $\limsup\limits_{r \rightarrow 0} \dfrac{L_{f}(x,r)}{l_{f}(x,r)}=1$." And I don't quite understand how he did that step. Is he perhaps using the mean value theorem and the maximum modulus principle? 2) Second question: Even accepting the previous argument, he only shows that conformal mappings are quasiconformal in dimension $2.$ How does one do this in general? Also, is this definition the same if we replace $\vert x-y \vert \leq r$ and $\vert x-y \vert \geq r$ by $\vert x-y \vert =r$? The former bounds the latter trivially, but more than that I do not know. 3) What would be a nice visual interpretation of a quasiconformal mapping? How would a map with possibly infinite distortion at some points look? Thanks
To answer 1), he's only using the definition of the $\mathbb{R}^2$ derivative. $Df(x) : \mathbb{R}^2 \to \mathbb{R}^2$ is a linear transformation and is characterized by the formula $$\lim_{h \to 0} \frac{f(x+h) - f(x) - Df(x)(h)}{|h|} = 0 $$ Fix $|h|=r$ for very small $r$ and you will see from this that the ratio $L_f(x,r)/ l_f(x,r)$ is very close to $1$. Regarding 2), your question is too vast. Higher-dimensional quasiconformal theory is different from the 2-dimensional theory. To answer 3), remember that conformal maps are maps that take tiny round circles to shapes that are very close to round circles. Quasiconformal maps take tiny round circles to shapes that are very close to ellipses of uniformly bounded eccentricity. To answer the second part of 3), probably there is a simple formula for a map which takes a nested sequence of circles contracting down to a point in the domain to a nested sequence of ellipses of higher and higher eccentricity.
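To illustrate numerically (a Python sketch of this limit argument; the helper name, the sample map $f(z)=z^2$, and the base point $x=1+i$ are my own choices, and I sample only the circle $|y-x|=r$, which for a map that is injective near $x$ captures the small-$r$ behavior of $L_f$ and $l_f$):

```python
import cmath
import math

def distortion_ratio(f, x, r, samples=2000):
    # Sample |f(x) - f(y)| over the circle |y - x| = r; for a differentiable
    # map injective near x this approximates L_f(x,r) / l_f(x,r) for small r.
    vals = [abs(f(x) - f(x + r * cmath.exp(2j * math.pi * k / samples)))
            for k in range(samples)]
    return max(vals) / min(vals)

f = lambda z: z * z          # conformal, and injective near x = 1 + i
x = 1 + 1j
ratios = [distortion_ratio(f, x, r) for r in (1e-1, 1e-2, 1e-3)]
print(ratios)                # decreasing toward 1 as r shrinks
```

The ratios shrink toward $1$ at rate roughly $1 + r/|x|$, exactly as the linearization $f(x+h) \approx f(x) + Df(x)h$ predicts.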
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How many ways to reach a given tennis-score? Let's say a tennis player wins a set with a game score of 6-3. In how many different ways can we reach this score? Assuming H means the home-player won the game and A means the away-player won the game, one permutation would be HHHHHAAAH. Note that the winner of the set will always have to win the last game (so maybe I could reduce this to how many permutations are there of the set of HHHHH AAA?) I realize I have almost solved the problem but I'm still having problems grasping it, any ideas or if you could point me in the correct directions would be helpful. For those unfamiliar with the tennis scoring system, a set is a sequence of games. The set score is simply a count of how many games each player won. For purposes of this question, all you need to know is that the first person to win a total of 6 games wins the set.
As you saw yourself, you're looking for the number of permutations of five H's and three A's. That's a string of $8$ symbols, and the A's can occupy any $3$ of the $8$ positions, so the desired number is simply the number of ways to choose $3$ things from a set of $8$ things. This is the binomial coefficient $$\binom83=\frac{8!}{3!5!}=\frac{8\cdot7\cdot6}{6}=56\;.$$
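As a sanity check, a brute-force enumeration in Python agrees:

```python
from itertools import permutations
from math import comb

# Distinct orderings of five H's and three A's (the sixth, final H is forced
# to be the last game, so it is not permuted).
arrangements = set(permutations("HHHHHAAA"))
print(len(arrangements))   # 56
assert len(arrangements) == comb(8, 3)
```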
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Seemingly simple high school combinatorics proof doesn't add up Prove that $\binom{n}{n-2}\binom{n+2}{n-1}$ is an integer for all $n\in\mathbb Z^+$. My take on this: Recall: $${n\choose k} = \frac{n!}{k!(n-k)!}$$ So our problem reads (simplified): $$\frac{n!}{(n-2)!\,2!} \cdot \frac{(n+2)!}{(n-1)!\,3!}$$ But simply substituting $n=1$ does not work for the problem (the answer becomes $-\frac{1}{2}$) and therefore the original assumption must be wrong? Or am I misunderstanding? I'm stuck in the proof: Multiplying these fractions together I can get: $$\frac{n!(n+2)!}{12(n-2)!(n-1)!}$$ and I can divide $n!$ by $(n-1)!$ to get $$\frac{n(n+2)!}{12(n-2)!}$$ but I'm stuck here. This still does not clearly show that the answer is an integer.
By definition $\dbinom{n}k=0$ when $k<0$, so $$\binom{1}{-1}\binom{3}0=0\cdot1=0\;.$$ Added: In the edited question you’ve reduced the case $n\ge 2$ to showing that $$\frac{n(n+2)!}{12(n-2)!}$$ is an integer. Do a bit more cancellation to get $$\frac{n^2(n+2)(n+1)(n-1)}{12}\;.$$ Now show that any product of four consecutive integers is a multiple of $12$.
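Both the divisibility claim and the overall identity are easy to confirm numerically (a Python sketch; the range is arbitrary):

```python
from math import comb

for n in range(2, 200):
    val = n * n * (n + 2) * (n + 1) * (n - 1)
    # (n-1) n (n+1) (n+2) is a product of four consecutive integers, hence
    # divisible by 12 (indeed by 24); the extra factor n is harmless.
    assert val % 12 == 0
    # and the quotient agrees with the original binomial product
    assert comb(n, n - 2) * comb(n + 2, n - 1) == val // 12
print("checked n = 2..199")
```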
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find Inverse of ideal $I= \langle 3, 1+2\sqrt{-5} \rangle$ Question- Find Inverse of ideal $I = \langle 3, 1 + 2\sqrt{-5} \rangle$ of $O_K$ (ring of integers of algebraic number field $K$), where $K = \mathbb{Q}(\sqrt{-5})$. First of all can $I$ be simplified more than $I = \langle 3, 1 + 2\sqrt{-5} \rangle = \langle 3, 2 - 2\sqrt{-5} \rangle$ which is not much help. Now if because $I$ is a prime ideal then, $I^{-1} = I' = {\alpha \in K :\ \alpha I \subseteq O_K}$. While attempting to find $I'$, I started by letting $x + y{\sqrt-5} \in K$ and then when proceeded, what I got is $3x \in \mathbb{Z}$ and $3y, 11y \in \mathbb{Z}$ but this says that no rational $y$ can exist. So obviously I did something wrong. I guess my answer should be in terms of $I$, can somebody help here?
Hints: * *$I=\langle 3,1-\sqrt{-5}\rangle$. *If $J=\langle 3,1+\sqrt{-5}\rangle$, then prove that $IJ=\langle 3\rangle$. *Deduce that $I^{-1}$ is $J$ times a principal fractional ideal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Confusion regarding limiting variances (Casella, Statistical Inference, 2nd edition, example 10.1.8) In the book Statistical Inference (George Casella, 2nd ed.), page 470, there is an example: $\bar{X}_n$ is the mean of $n$ iid observations, with $\operatorname{E}X=\mu$ and $\operatorname{Var}X=\sigma^2$. "If we take $T_n=1/\bar{X}_n$, we find that the variance is $\operatorname{Var}T_n=\infty$, so the limit of the variances is infinity." Why is the limit of the variances infinity? I can go as far as $$\operatorname{Var}\frac 1{\bar{X}_n}=\operatorname{E}_{\bar{X}_n}\left[\left(\frac1{\bar{X}_n}-\operatorname{E} \frac 1{\bar X_n} \right)^2 \mid \mu\right] $$ but what comes next? I know that $\lim_{n\to\infty}\operatorname{Var}{\bar X_n}=0$. However, $\lim_{n\to\infty}1/\operatorname{Var}{\bar X_n}\not=\lim_{n\to\infty}\operatorname{Var}(1/\bar X_n)$. How can we show the variance approaches infinity for sufficiently large $n$? Thanks!
In the example, the mean $\overline{X}_n$ is taken of $n$ iid normal observations. Therefore, $\overline{X}_n$ also has a normal distribution with mean $\mu$ and variance $\sigma^2/n$; its probability density function is therefore $f(x)=\frac1{\sqrt{2\pi\sigma^2/n}}e^{\frac{-(x-\mu)^2}{2\sigma^2/n}}$. When we try to compute the mean of $T_n=1/\overline{X}_n$, we find: $$E(|T_n|) = \int_{-\infty}^{\infty} \frac1{\sqrt{2\pi\sigma^2/n}}e^{\frac{-(x-\mu)^2}{2\sigma^2/n}}\frac1{|x|}\ dx=\infty$$ since near $x=0$ the integrand is bounded below by a constant multiple of $1/|x|$, which has infinite integral. Thus each $T_n$ has undefined mean and hence also undefined variance. I'm not sure why Casella says that $\text{Var }T_n=\infty$; I think it would be more correct to simply say that the variance of each $T_n$ is undefined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove the sum of the even Fibonacci numbers Let $f_n$ denote the $n$th Fibonacci number. Prove that $f_2\:+\:f_4\:+...+f_{2n}=f_{2n+1}-1$ I am having trouble proving this. I thought to use induction as well as Binet's formula, $f_n=\frac{\tau^n-\sigma^n}{\sqrt5}$, where $\tau=\frac{1+\sqrt5}{2}$ and $\sigma=\frac{-1}{\tau}$. Can someone give me a hand? Thanks!
Using the recurrence relation for $f_n$ we find \begin{align}f_2 + f_4 + \cdots + f_{2n} = (f_3 - f_1) + (f_5 - f_3) + \cdots + (f_{2n+1} - f_{2n-1}), \end{align} which telescopes to $f_{2n+1} - f_1 = f_{2n+1} - 1$.
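The identity is cheap to confirm numerically (Python, with the usual indexing $f_1=f_2=1$):

```python
def fib(n):
    a, b = 0, 1            # f_0 = 0, f_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

# telescoping identity: f_2 + f_4 + ... + f_{2n} = f_{2n+1} - 1
for n in range(1, 30):
    assert sum(fib(2 * k) for k in range(1, n + 1)) == fib(2 * n + 1) - 1
print("identity holds for n = 1..29")
```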
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does having positive second derivative at a point imply convexity in some neighborhood? Suppose that I have a real valued function of a single variable $f(x)$ which is twice differentiable in some open interval $I$. Then, I know from calculus that if $f''(x) >0 $ on $I$, then $f$ is convex on $I$. But, what I am wondering is that if I only know that $f''(x) > 0$ at a particular point $x_0$ in $I$, then can I say that $f(x)$ is locally convex at $x_0$? That is, can I find a small neighbourhood around $x_0$ in which $f$ is convex? If this is true, does this idea that a positive second derivative at a point means local convexity generalize?
No, having $f''(x_0)>0$ does not imply that $f$ is convex in some neighborhood of $x_0$. Take the function $$ g(x) = x+2x^2\sin(1/x),\quad g(0)=0 $$ which, as shown here, is not increasing in any neighborhood of $0$ despite $g'(0)=1$. (It oscillates rapidly on small scales.) Then let $f(x)=\int_0^x g(t)\,dt$. This function is twice differentiable (since $g$ is once differentiable), has $f''(0)=1>0$ but in any neighborhood of $0$, it fails to be convex because $f'$ fails to be increasing. Of course, if you assume $f''$ is continuous, then $f''(x_0)>0$ implies $f''>0$ in some neighborhood, and convexity follows.
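One can watch $f'=g$ fail to be monotone numerically (a Python sketch; the grid and window are arbitrary choices):

```python
import math

def g(x):
    # g = f' from the counterexample; g(0) = 0, g'(0) = 1, yet g is not
    # increasing on any interval around 0: g'(x) = 1 + 4x sin(1/x) - 2cos(1/x)
    # dips to about -1 whenever cos(1/x) is near 1.
    return x + 2 * x * x * math.sin(1.0 / x) if x != 0 else 0.0

xs = [k * 1e-5 for k in range(1, 5001)]     # fine grid on (0, 0.05]
vals = [g(x) for x in xs]
decreases = sum(1 for i in range(len(vals) - 1) if vals[i] > vals[i + 1])
print(decreases)    # adjacent decreases exist, so g is not monotone near 0
assert decreases > 0
```

Shrinking the window toward $0$ shows the same behavior at every scale, which is exactly why no neighborhood of $0$ works.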
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Updated: Prove completely $\int^\infty_0 \cos(x^2)dx=\frac{\sqrt{2\pi}}{4}$ using Fresnel Integrals Prove completely $\int^\infty_0 \cos(x^2)dx=\frac{\sqrt{2\pi}}{4}$ I've tried substituting $x^2 = t$ but could not proceed at all thereafter in the integration. Any help would be appreciated. I should have mentioned at the start that I am trying to use Fresnel integrals. That's why I was trying to substitute $t=x^2$, since I'm nearly positive that is the first step. However, thereafter I am lost.
As is common, use $f(z)=e^{-iz^2}=\cos(z^2)-i\sin(z^2)$ Now $$\int_{-\infty}^{\infty}e^{-iz^2}{\rm d}z=\int_{-\infty}^{\infty}e^{-\left(e^{i\pi/4}z\right)^2}{\rm d}z=\frac1{e^{i\pi/4}}\int_{-\infty}^{\infty}e^{-x^2}{\rm d}x=e^{-i\pi/4}\sqrt{\pi}=\sqrt{\frac{\pi}2}-i\sqrt{\frac{\pi}2}$$ Now, since $f(z)$ is even: $$\int_0^{\infty}\cos(x^2){\rm d}x=\Re\left(\frac12\int_{-\infty}^{\infty}e^{-iz^2}{\rm d}z\right)=\frac12\sqrt{\frac{\pi}2}=\frac{\sqrt{2\pi}}4$$
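As a numerical cross-check of the value (Python; the half-period averaging below is my own device for taming the slowly decaying oscillatory tail, not part of the contour argument):

```python
import math

def partial(N, steps_per_unit=5000):
    # midpoint rule for the integral of cos(x^2) over [0, N]
    n = int(N * steps_per_unit)
    h = N / n
    return h * sum(math.cos(((k + 0.5) * h) ** 2) for k in range(n))

# The tail beyond N oscillates with amplitude ~ 1/(2N).  Averaging two
# cutoffs half a period apart (N2^2 = N1^2 + pi) cancels the leading term.
N1 = 20.0
N2 = math.sqrt(N1 ** 2 + math.pi)
approx = 0.5 * (partial(N1) + partial(N2))
print(approx, math.sqrt(2 * math.pi) / 4)   # should agree to several digits
```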
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
IMO problem number theory Determine the greatest positive integer $k$ that satisfies the following property. The set of positive integers can be partitioned into $k$ subsets $A_1,A_2,A_3,\ldots,A_k$ such that for all integers $n\ge 15$ and all $i\in\{1,2,\ldots,k\}$ there exist two distinct elements of $A_i$ such that their sum is $n$.
The greatest such number is $3$. First we exhibit a construction for $k = 3$, and then we prove that $k \ge 4$ is impossible. PART 1 We build the three sets $A_1, A_2, A_3$ as: $$ \begin{align} \\&A_1 = \{1,2,3\} \cup \{3m\ |\ m \ge\ 4\} \\&A_2 = \{4,5,6\} \cup \{3m - 1\ |\ m \ge\ 4\} \\&A_3 = \{7,8,9\} \cup \{3m - 2\ |\ m \ge\ 4\} \end{align} $$ Here we easily see that:

 * every $n\ge13$ can be represented as the sum of two distinct elements of $A_1$,
 * every $n\ge15$ can be represented as the sum of two distinct elements of $A_2$,
 * every $n\ge17$ can be represented as the sum of two distinct elements of $A_3$.

So we are only left to check that $15$ and $16$ can be represented as sums of two distinct elements of $A_3$, which is given by: $$ \\15 = 7+8 \\16 = 7+9 $$ PART 2 Here we prove that $k \ge 4$ is impossible. Assume such a construction exists, i.e. the sets $A_1,A_2,\ldots,A_k$ satisfy the condition for some $k \ge 4$. If $A_1,A_2,\ldots,A_k$ satisfy the conditions, then so do $A_1,A_2,A_3,A_4 \cup A_5 \cup \cdots \cup A_k$, so we may assume $k = 4$, with sets $A_1,A_2,A_3,A_4$. Now construct the sets $B_i = A_i \cap \{1,2,\ldots,23\}$. Then for every $i \in \{1,2,3,4\}$ each of the $10$ numbers $15,16,\ldots,24$ can be represented as the sum of two distinct numbers in $B_i$ (two distinct positive integers summing to at most $24$ are both at most $23$). Therefore, with $n = |B_i|$, we must have $\binom{n}{2} \ge 10$, and so $|B_i| \ge 5$ for all $i \in \{1,2,3,4\}$, since $\binom{5}{2} = 10$. But $|B_1| + |B_2| + |B_3| + |B_4| = 23$, because the $A_i$ partition the positive integers and hence the $B_i$ partition $\{1,2,\ldots,23\}$. Since each $|B_i| \ge 5$ and the four sizes sum to $23 < 24$, there exists $j \in \{1,2,3,4\}$ with $|B_j| = 5$. Let $B_j = \{x_1,x_2,x_3,x_4,x_5\}$. The sums of two distinct elements of $A_j$ representing the numbers $15, 16, \ldots, 24$ must then be exactly the $\binom52 = 10$ pairwise sums of the elements of $B_j$. Let $S$ be the sum of all these pairwise sums. Each $x_i$ appears in exactly $4$ of the pairs, so $S = 4(x_1 + x_2 + x_3 + x_4 + x_5)$. On the other hand, $S = 15 + 16 + 17 + \cdots + 24 = 195$, which gives $4(x_1 + x_2 + x_3 + x_4 + x_5) = 195$. This is impossible, since $195$ is not divisible by $4$. Thus we have proved that $k \ge 4$ is impossible, and the answer is $k = 3$.
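Part 1's construction is easy to verify by machine (a Python sketch; the cutoff 200 is an arbitrary finite stand-in for the infinite sets):

```python
def A(i, limit=200):
    head = {1: {1, 2, 3}, 2: {4, 5, 6}, 3: {7, 8, 9}}[i]
    tail = {3 * m - (i - 1) for m in range(4, limit)}   # 3m, 3m-1, 3m-2
    return head | tail

for i in (1, 2, 3):
    S = A(i)
    for n in range(15, 100):
        # n must be a sum of two *distinct* elements of A_i
        assert any(n - a in S and n - a != a for a in S), (i, n)
print("every n in [15, 99] is a sum of two distinct elements of each A_i")
```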
{ "language": "en", "url": "https://math.stackexchange.com/questions/1159934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Non-trivial examples of power series which are uniformly convergent on $[0,1)$ and left-continuous at $x = 1$ The question is motivated by a more extensive problem that needs a formal proof, but I am not interested in help on the proof itself, but I'd like to see some examples of such power series. I put non-trivial in the title because it has turned out that power series such as: $$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} $$ ...are not very interesting as they are uniformly convergent on $\mathbb{R}$ and "trivially" left-continuous at $x = 1$ due to regular continuity at $x = 1$. The power series I have toyed with do not offer insight or intuition on the proof I am working on, so I am looking for some example that cannot be trivially deduced to be left-continuous at $x = 1$. Summary : I am looking for a non-trivial power series such that $\sum_{n=0}^{\infty} a_n x^n$ converges uniformly on $[0,1)$ and has the property that for its limit function $f(x)$ we have: $$ \lim_{x \to 1^-} f(x) = f(1)$$ Thank you!
Using Abel's Theorem you can come up with lots of examples, for example $$\sum_{n=1}^\infty\frac{x^n}{n^p}\;,\;\;\forall\,p>1\;,\;\;\text{with convergence radius}\;\;R=1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Mutual difference of vectors squared, does it have a name? Given a set of $n$ vectors $\def\vv{\vec{v}} \vv_i$ with the additional property that they all have the same absolute value $||\vv_i||=c$, define the average of the vectors as $\vv = \frac{1}{n}\sum_{i=1}^n \vv_i$. If all $\vv_i$ are identical we have $||\vv|| = c$, but if they have different directions, a not too complicated derivation yields $$ c^2 - ||\vv||^2 = \frac{1}{n^2}\sum_{i<j}^n ||\vv_i-\vv_j||^2 .$$ The sum on the right side is the average mutual delta squared of the individual vectors. It vaguely reminds me of variance, which it is obviously not. Does the formula have a name? Does it appear prominently in another context?
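For what it's worth, the identity checks out numerically for concrete unit vectors (a Python sketch; the angles are arbitrary):

```python
import math

def both_sides(vectors):
    n = len(vectors)
    dim = len(vectors[0])
    c2 = sum(x * x for x in vectors[0])                      # common squared norm
    avg = [sum(v[k] for v in vectors) / n for k in range(dim)]
    lhs = c2 - sum(x * x for x in avg)
    rhs = sum(sum((vectors[i][k] - vectors[j][k]) ** 2 for k in range(dim))
              for i in range(n) for j in range(i + 1, n)) / n ** 2
    return lhs, rhs

# unit vectors at assorted angles in the plane
vs = [(math.cos(t), math.sin(t)) for t in (0.3, 1.1, 2.0, 4.4, 5.9)]
lhs, rhs = both_sides(vs)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12
```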
Meanwhile I learned that the right-hand side is identical to the variance of the $\vec{v}_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of Exponents and Primes Is it necessary that for every prime $p$, at least one of $1+p$, $1+p+p^2$, $1+p+p^2+p^3$, ... is prime? Is it also true that infinitely many of $1+p$, $1+p+p^2$, $1+p+p^2+p^3$, etc. are prime? I am thinking of using something similar to Euclid's proof of the infinitude of primes, given that similar stuff happens. Clarification: my questions are for all $p$.
For a case as simple as $p=2$, we don't even know the answer to the last question. Primes of this form are called Mersenne primes: http://en.wikipedia.org/wiki/Mersenne_prime. For the first question note that $1+p+\cdots+p^n=11\cdots 1$ ($n+1$ ones) in base $p$. Primes of this form are called repunit primes. It is known that base $4$ and base $9$ repunits with at least three digits are never prime. I don't think it is known whether or not there exists a prime number $p$ such that there are no base $p$ repunit primes. See also: http://en.wikipedia.org/wiki/Repunit#Repunit_primes. EDIT: In fact, in any quadratic base, no repunit with at least three digits is prime (the two-digit repunit $1+k^2$ can still be prime, e.g. $1+2^2=5$): Let $k\in\mathbb{N}-\{1\}$ and $n\ge 2$. Then: $$1+k^2+\cdots+(k^2)^n=\frac{(k^2)^{n+1}-1}{k^2-1}=\frac{(k^{n+1}-1)(k^{n+1}+1)}{k^2-1}$$ Now if $n$ is odd, we have $k^{n+1}=(k^2)^{m}$ (with $m=\frac{n+1}{2}$), hence: $$k^{n+1}-1=(k^2)^m-1=(k^2-1)\left((k^2)^{m-1}+(k^2)^{m-2}+\cdots+1\right)$$ So then $k^{n+1}-1$ is divisible by $k^2-1$, and hence: $$\frac{(k^{n+1}-1)(k^{n+1}+1)}{k^2-1}$$ is divisible by $k^{n+1}+1$; since odd $n\ge 2$ means $n\ge 3$, the cofactor $\frac{k^{n+1}-1}{k^2-1}$ exceeds $1$, so $k^{n+1}+1$ is a proper divisor and the repunit is not prime. If $n$ is even then: $$k^{n+1}+1=(k+1)(k^n-k^{n-1}+k^{n-2}-k^{n-3}+\cdots+1)$$ And $$k^{n+1}-1=(k-1)(k^n+k^{n-1}+\cdots+1)$$ Hence $$\frac{(k^{n+1}-1)(k^{n+1}+1)}{k^2-1}=\frac{(k^{n+1}-1)(k^{n+1}+1)}{(k-1)(k+1)}$$ $$=(k^n+k^{n-1}+\cdots+1)(k^n-k^{n-1}+k^{n-2}-k^{n-3}+\cdots+1)$$ is a product of two factors greater than $1$, so again not prime.
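These factorizations can be checked mechanically for small $k$ and $n \ge 2$ (a Python sketch; the ranges are arbitrary):

```python
def repunit(base, digits):
    return sum(base ** j for j in range(digits))

for k in range(2, 8):
    for n in range(2, 12):
        R = repunit(k * k, n + 1)          # 1 + k^2 + ... + (k^2)^n
        if n % 2 == 1:                     # n odd: k^(n+1) + 1 divides
            d = k ** (n + 1) + 1
        else:                              # n even: 1 + k + ... + k^n divides
            d = repunit(k, n + 1)
        assert R % d == 0 and 1 < d < R    # a proper divisor, so R is composite
print("checked k = 2..7, n = 2..11")
```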
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that the function $f(x, y)$ = $xy$ is continuous. How do I show that $xy$ is continuous? I know that the product of two continuous functions is continuous but how do I show that $x$ is continuous and $y$ is continuous?
The function $f(x,y) = x$ is continuous since given $\epsilon > 0$ and $(a,b)\in \Bbb R^2$, setting $\delta = \epsilon$ makes $$|f(x,y) - f(a,b)| = |x - a| = \sqrt{(x - a)^2} \le \sqrt{(x - a)^2 + (y - b)^2} < \epsilon$$ whenever $\sqrt{(x - a)^2 + (y - b)^2} < \delta$. Similarly, the function $g(x,y) = y$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A group whose automorphism group is cyclic Is there an Abelian group $A$ which is not locally cyclic whose automorphism group is cyclic ?
This MathOverflow question cites this paper, which says there are torsion-free groups of any finite rank with automorphism group $C_2$, and which in turn cites this paper, in which Theorem III says there are Abelian groups of all finite ranks $\ge 3$ with automorphism group $C_2$. Rank $>1$ means they can't be locally cyclic. (The paper describes a group with automorphism group $C_2$ as having only 'trivial' automorphisms.) So the answer is yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
When is the supremum/infimum an accumulation point? Trying to show that if a sequence converges, it has either a maximum, a minimum, or both, I reached a dead end. Assuming the sequence is not constant, it is still bounded and its supremum and infimum aren't equal. Then I assumed that the supremum and infimum are not in the sequence. I want to show that there are two subsequences that converge to each of them, but for that to happen I have to show they are accumulation points. I tried to use the definition but failed. I know intuitively that, under my assumption, they have to be accumulation points, but I can't derive it from the definitions. Any help?
Consider the sequence $\{\mathbf{x}_n\}_{n\in\mathbb{N}} \subseteq A$, where $A \subseteq \mathbb{R}^n$. If this sequence is convergent, then it is bounded. Also, the sequence must converge to $\mathbf{x} \in \overline{A}$. Show that for all $\epsilon > 0$, there are finitely many $\mathbf{x}_n \notin N(\mathbf{x}, \epsilon) \cap A$ where $N(\mathbf{x},\epsilon)$ is some neighborhood of $\mathbf{x}$. Then, use the fact that a finite set is bounded and that $N(\mathbf{x}, \epsilon_0) \cap A$ is bounded for some $\epsilon_0$ to prove your claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The congruence $(34\times 10^{(6n-1)}-43)/99 \equiv -1~\text{ or }~5 \pmod 6$ Trying to prove this congruence: $$ \frac{34\times 10^{(6n-1)}-43}{99} \equiv-1~\text{ or }~5 \pmod 6,\quad n\in\mathbb{N}$$ Progress: I brought it to the equivalent form $$34\times 10^{6n-1}-43\equiv -99\pmod{6\cdot 99}$$ but how do I proceed further? $6\cdot 99$ is not a small number.
${\ 6\mid f_{n+1}-f_n =\, \dfrac{34\cdot(\color{#c00}{10^6\!-1}) 10^{6n\!-1}}{99}\ }$ by ${\ 3\cdot 99\mid \color{#c00}{10^6\!-1} = 999999 = 99\cdot\!\!\!\!\!\!\!\!\!\!\!\underbrace{10101}_{\equiv\, 1+1+1\pmod 3}}$ So $\,{\rm mod}\ 6\!:\ f_{n+1}\!\equiv f_n\,\overset{\rm induct}\Rightarrow\,f_n\equiv f_1\equiv -1$
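For a sanity check, exact integer arithmetic confirms the congruence for many $n$ (Python; the range is arbitrary):

```python
for n in range(1, 60):
    numerator = 34 * 10 ** (6 * n - 1) - 43
    assert numerator % 99 == 0            # the quotient f_n is an integer
    f = numerator // 99
    assert f % 6 == 5                     # f_n = 5 = -1 (mod 6)
print("holds for n = 1..59")
```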
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $x=\cos t,y=\cos(2t+\pi/3)$ find an analytical relation between $x$ and $y$. I'm having a bit of trouble figuring this out. At the moment this is the near solution I have: $$y=\frac12(2\cos^2 t-1)-\sqrt{3}\sin t\cos t.$$ I should be just about to solve it but find myself stuck. I appreciate any hint on how to "eliminate" that sine, or any other way in which I could express $x$ as an expression that depends on $y$. Thanks!
You're almost there. You have correctly used that $$ \cos\left(2t+\frac{\pi}{3}\right)=\frac{1}{2}\cos2t-\frac{\sqrt{3}}{2}\sin2t $$ so $$ 2y=2\cos^2t-1-2\sqrt{3}\sin t\cos t=2x^2-1-2x\sqrt{3}\sin t. $$ Thus $$ 2x\sqrt{3}\sin t=2x^2-1-2y. $$ Hence $$ 12x^2(1-x^2)=(2x^2-1-2y)^2. $$
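A quick numerical check of the final relation (a Python sketch; the sample values of $t$ are arbitrary):

```python
import math

for t in (0.1, 0.7, 1.3, 2.9, 4.2, 5.5):
    x = math.cos(t)
    y = math.cos(2 * t + math.pi / 3)
    # squaring the sine-elimination step: 12 x^2 (1 - x^2) = (2x^2 - 1 - 2y)^2
    assert abs(12 * x ** 2 * (1 - x ** 2) - (2 * x ** 2 - 1 - 2 * y) ** 2) < 1e-9
print("relation holds at all sampled t")
```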
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Predictable Processes in Brownian Setting Maybe it's a silly question. I've been reading Protter's book on stochastic integration. And all the integrands are required to be predictable. But from what I can recall, in the traditional stochastic calculus in Brownian setting (i.e., integration with respect to a Brownian motion), the integrands only need to be in $H^2$ (or $L_{\text{loc}}^2$) and there does not seem to be any mentioning of predictability. So is it that the Protter's general setting does not entirely include the traditional Brownian setting as a special case, or that the predictable $\sigma$-algebra is special in the Brownian setting (e.g., every adapted process is predictable)? EDIT: I found a result saying that if all martingales are continuous, then the predictable sigma-algebra equals the optional sigma-algebra (which is generated by all adapted, cadlag processes). So in the case of Brownian filtration, "predictable" is equivalent to "optional" by the Martingale Representation Theorem. But can one go further?
It is a bit subtle, but in the particular case of Brownian motion you don't need to assume the integrand is predictable. However, that is only because you work in a particular $L^2$ space where your adapted process is a.s. equal to a predictable one. Thus, the loss of generality is an illusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof: Number theory: Prove that if $n$ is composite, then the least $b$ such that $n$ is not $b$-pseudoprime is prime. I'm looking to prove this, but am not too sure how: If $n$ is composite, then the least $b$ such that $n$ is not a $b$-pseudoprime ($b$-psp) is prime. Thanks!
Let $b$ be the least such base and assume it is not prime. Then $b=cd$ with $1<c,d<b$, and $b^{n-1}\not\equiv 1 \pmod n$. Since $b^{n-1}=c^{n-1}d^{n-1}\not \equiv 1$, at least one of $c^{n-1}$ and $d^{n-1}$ is not $\equiv 1\pmod n$, contradicting the minimality of $b$.
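An empirical check of the statement (Python; here "$n$ is a $b$-pseudoprime" is taken to mean $b^{n-1}\equiv 1 \pmod n$, and the bound 2000 is arbitrary):

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(4, 2000):
    if is_prime(n):
        continue
    b = 2
    while pow(b, n - 1, n) == 1:   # n passes the Fermat test to base b
        b += 1                     # terminates: any prime factor of n is a witness
    assert is_prime(b), (n, b)
print("least witness is prime for every composite n < 2000")
```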
{ "language": "en", "url": "https://math.stackexchange.com/questions/1160949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Differentiating this problem $\frac{2t^{3/2}}{\ln(2t^{3/2}+1)}$ How does one differentiate the function $$y(t)=\frac{2t^{3/2}}{\ln(2t^{3/2}+1)}.$$ (I am still trying to understand MathJax and not sure what is wrong with the expression.) Anyway, how do I start solving this? Do I take the $\ln$ of both sides? If so, I get the log of the top minus the log of the bottom, which involves the log of a log. If I do the quotient rule right away, I get the log expression in the bottom squared. Help please?
In this case the quotient rule is probably the best option. The symmetry of the $2t^{3/2}$ in the top and bottom makes me suspect that some things might end up canceling out.
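Following that suggestion, here is a sketch of the quotient-rule derivative checked against a central finite difference (Python; the helper names are mine):

```python
import math

def y(t):
    u = 2 * t ** 1.5
    return u / math.log(u + 1)

def dy(t):
    # quotient rule with u = 2 t^(3/2), u' = 3 t^(1/2):
    # y' = (u' ln(u+1) - u u'/(u+1)) / ln(u+1)^2
    u = 2 * t ** 1.5
    du = 3 * t ** 0.5
    L = math.log(u + 1)
    return (du * L - u * du / (u + 1)) / L ** 2

for t in (0.5, 1.0, 2.0, 5.0):
    h = 1e-6
    numeric = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(numeric - dy(t)) < 1e-5
print("quotient rule matches finite differences")
```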
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is an example of a nonmetrizable topological space? Find topological spaces $X$ and $Y$ and a function $f:X\to Y$ which is not continuous, but which has the property that $f(x_n)\to f(x)$ in $Y$ whenever $x_n\to x$ in $X$ I know this is true if $X$ is metrizable, so I want a counterexample when $X$ is not metrizable.
You want an example of a function $f:X\to Y$ that is sequentially continuous but not continuous. Let $X=\omega_1+1$, the space of all ordinals up to and including the first uncountable ordinal, with the order topology, and let $Y=\{0,1\}$ with the discrete topology. Let $$f:X\to Y:\alpha\mapsto\begin{cases} 0,&\text{if }\alpha<\omega_1\\ 1,&\text{if }\alpha=\omega_1\;. \end{cases}$$ Then $f$ is not continuous, because $f^{-1}\big[\{1\}\big]=\{\omega_1\}$ is not open. However, $f$ is sequentially continuous, because a sequence $\langle\alpha_n:n\in\omega\rangle$ in $X$ converges to $\omega_1$ if and only if there is an $m\in\omega$ such that $\alpha_n=\omega_1$ for all $n\ge m$ and hence such that $f(\alpha_n)=1=f(\omega_1)$ for all $n\ge m$. Added: Here’s a simpler version of essentially the same idea, though in this case the space $X$ is no longer compact. Define a new topology $\tau$ on $\Bbb R$ as follows: $$\tau=\{U\subseteq\Bbb R:0\notin U\text{ or }\Bbb R\setminus U\text{ is countable}\}\;.$$ Let $X$ be $\Bbb R$ with the topology $\tau$. If $x\in\Bbb R\setminus\{0\}$, then $\{x\}$ is open: $x$ is an isolated point of $X$. However, all nbhds of $0$ are big: if $0\in U\in\tau$, then $\Bbb R\setminus U$ is a countable set. (Note: countable includes finite.) It’s a standard and pretty straightforward exercise to show that if $\sigma=\langle x_n:n\in\Bbb N\rangle$ is a convergent sequence in $X$, then $\sigma$ is eventually constant, i.e., there are an $m\in\Bbb N$ and an $x\in X$ such that $x_n=x$ for all $n\ge m$. In other words, the only sequences in $X$ that converge are the ones that absolutely have to converge, because from some point on they’re constant. Now define $$f:X\to Y:x\mapsto\begin{cases} 0,&\text{if }x\ne 0\\ 1,&\text{if }x=0\;. \end{cases}$$ Then $f$ is not continuous, because $f^{-1}\big[\{1\}\big]=\{0\}$, which is not open in $X$. 
However, $f$ is sequentially continuous: if $\langle x_n:n\in\Bbb N\rangle$ is a sequence in $X$ converging to some $x\in X$, there is an $m\in\Bbb N$ such that $x_n=x$ for all $n\ge m$, so of course $f(x_n)=f(x)$ for all $n\ge m$, and $\langle f(x_n):n\in\Bbb N\rangle$ converges to $f(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the line integral of $f(x,y)=(x,y)$ not zero along the circle? I am asked to determine whether $f(x,y)=(x,y)$ is a gradient or not. It is clear that there exists a function $g$ whose partial derivatives with respect to $x$ and $y$ equal the first and second components of $f$. So $f$ is a gradient. If $f$ is a gradient, then its line integral along any closed path must be zero. I considered the circle of radius $1$ centered at the origin, parametrized from $0$ to $2\pi$. This is a closed path. I integrated $f$ along the circle and ended up getting $2\pi$. Why is it different?
$$\int_C f(x,y)d\vec x=\int_0^{2\pi}f(\cos \theta,\sin\theta)\cdot d(\cos\theta,\sin \theta)=\int_0^{2\pi}(\cos\theta,\sin\theta)\cdot (-\sin\theta,\cos\theta)d\theta=\int_0^{2\pi}\underbrace{(-\cos\theta\sin\theta+\cos\theta\sin\theta)}_{=0}d\theta=0.$$
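Discretizing the same parametrization shows the integrand cancels pointwise (a Python sketch):

```python
import math

N = 100000
total = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    fx, fy = math.cos(t), math.sin(t)        # f(x, y) = (x, y) on the circle
    dxdt, dydt = -math.sin(t), math.cos(t)   # tangent of (cos t, sin t)
    # each term is -cos(t) sin(t) + sin(t) cos(t) = 0
    total += (fx * dxdt + fy * dydt) * (2 * math.pi / N)
print(total)   # essentially 0, up to rounding
assert abs(total) < 1e-9
```

Getting $2\pi$ typically comes from accidentally integrating the tangent field $(-\sin\theta,\cos\theta)$ against itself instead of $f$.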
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On the definition of clone of relations I am reading A short introduction to clones and I am stuck at this definition ($A$ is a set and $R_A$ the set of finitary relations on $A$) Definition A subset $R\subseteq R_A$ is called a clone of relations on $A$ if: (i) $\varnothing\in R$ (ii) $R$ is closed under general superposition, that is, the following holds: for an arbitrary index set $I$, let $\sigma_i\in R^{(k_i)}$ ($i\in I$) and let $\phi:k\longrightarrow\alpha$ and $\phi_i:k_i\longrightarrow\alpha$ be mappings where $\alpha$ is some cardinal number. Then the relation defined by $${\displaystyle\bigwedge}^{\phi}_{(\phi_i)_{i\in I}} (\sigma_i)_{i\in I}:=\{r\circ \phi| \forall i\in I: r\circ \phi_i\in \sigma_i,r\in A^{\alpha}\}$$ belongs to $R$. Here are some questions: 1) What is $k$? 2) What does it mean that $\phi_i$ is a map? What is a map from a number to a number? 3) I know what the composition of two or many relations is, but what is meant here by $r\circ \phi_i$?
Here are some questions: 1) What is $k$? $k$ is the natural number that is the arity of the relation mentioned in part (ii) of the definition. 2) What does it mean that $\phi_i$ is a map? What is a map from a number to a number? map=function. 3) I know what the composition of two or many relations is, but what is meant here by $r\circ \phi_i$? $r\circ \phi_i$ is the composition of the function $r\colon \alpha\to A$ with the function $\phi_i\colon k_i\to \alpha$. It is therefore a function $k_i\to A$, hence is a potential element of $\sigma_i$, since $\sigma_i\subseteq A^{k_i}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Irreducible representations of $C(T,B(X))$ Let $T$ be a compact topological space, $X$ a finite-dimensional Hilbert space, $B(X)$ the algebra of operators in $X$, and $C(T,B(X))$ the $C^*$-algebra of continuous maps from $T$ into $B(X)$ (with the pointwise algebraic operations and the uniform norm). I think all irreducible representations of $C(T,B(X))$ must be of the form $$ \pi:C(T,B(X))\to B(X),\qquad \pi(f)x=f(t)x,\qquad x\in X,\ f\in C(T,B(X)), $$ for some $t\in T$. How is this proved?
Strictly speaking, the answer is no. The following is true: every irreducible representation is unitarily equivalent to a "point-evaluation". That is, there exists $t\in T$ and a unitary operator $U\in B(X)$ such that $$ \pi:C(T,B(X))\to B(X),\qquad \pi(f)x=(U^*f(t)U)x,\qquad x\in X,\ f\in C(T,B(X)). $$ It follows from a general theorem for continuous fields of $C^*$-algebras, see e.g. the book by J.Dixmier "$C^*$-algebras and representation theory", Corollary 10.4.4. You can prove it yourself by showing (similarly to the case $C(T,\mathbb C)\ $) that every closed two-sided ideal $I\subseteq C(T,B(X))$ is of the form $$ I_S=\{f\in C(T,B(X))\ \ \colon\ f|_S\equiv 0\} $$ for some closed $S\subseteq T.$ The kernel of an irreducible representation is then $I_{\{t\}}$ for some $t\in T.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How come $\left(\frac{n+1}{n-1}\right)^n = \left(1+\frac{2}{n-1}\right)^n$? I'm looking at one of my professor's calculus slides and in one of his proofs he uses the identity: $\left(\frac{n+1}{n-1}\right)^n = \left(1+\frac{2}{n-1}\right)^n$ Except I don't see why that's the case. I tried different algebraic tricks and couldn't get it to that form. What am I missing? Thanks. Edit: Thanks to everyone who answered. Is there an "I feel stupid" badge? I really should have seen this a mile away.
HINT: $1+\frac{2}{n-1}=\frac{n-1}{n-1}+\frac{2}{n-1}=\frac{n+1}{n-1}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Exotic bijection from $\mathbb R$ to $\mathbb R$ Clearly there are no continuous bijections $f,g~:~\mathbb R \to \mathbb R$ such that $fg$ is a bijection from $\mathbb R$ to $\mathbb R$. If we omit the continuity assumption, is there such an example? Notes (to follow from Dustan's comments): By definition $fg~:~x \mapsto f(x)\times g(x)$ and not $f\circ g$. If there were continuous bijections, just look at the limits of $f$ and $g$ at $+\infty$ and $-\infty$ to conclude that $fg$ can't be a bijection.
Let $f(x)=x$, and define $g$ piecewise by $$ g(x) = \begin{cases} -x ,& x \in \cdots \cup (-16,-8] \cup (-4,-2] \cup \cdots &= \bigcup_k -[2 \cdot 4^k, 4 \cdot 4^k) \\ 4x ,& x \in \cdots \cup (-8,-4] \cup (-2,-1] \cup \cdots &= \bigcup_k -[4^k, 2 \cdot 4^k) \\ 0 ,& x = 0 \\ x ,& x \in \cdots \cup [1,2) \cup [4,8) \cup \cdots &= \bigcup_k [4^k, 2 \cdot 4^k) \\ -\frac14 x ,& x \in \cdots \cup [2,4) \cup [8,16) \cup \cdots &= \bigcup_k [2 \cdot 4^k, 4 \cdot 4^k) \\ \end{cases} $$ (where $-[b,a)$ just means $(-a,-b]$). This $g$ is a bijection from $\mathbb{R}$ to $\mathbb{R}$, since * *$g$ maps $\bigcup_k -[2 \cdot 4^k, 4 \cdot 4^k)$ one-to-one onto $\bigcup_k [2 \cdot 4^k, 4 \cdot 4^k)$. *$g$ maps $\bigcup_k [2 \cdot 4^k, 4 \cdot 4^k)$ one-to-one onto $\bigcup_k -[2 \cdot 4^k, 4 \cdot 4^k)$. *$g$ maps $\bigcup_k -[4^k, 2 \cdot 4^k)$ one-to-one onto itself. *$g$ maps $\bigcup_k [4^k, 2 \cdot 4^k)$ one-to-one onto itself. *$g$ maps $0$ to itself. And $fg$ is a bijection from $\mathbb{R}$ to $\mathbb{R}$, since * *$fg$ maps $\bigcup_k -[2 \cdot 4^k, 4 \cdot 4^k)$ one-to-one onto $\bigcup_k -[4 \cdot 16^k, 16 \cdot 16^k)$. *$fg$ maps $\bigcup_k [2 \cdot 4^k, 4 \cdot 4^k)$ one-to-one onto $\bigcup_k -[16^k, 4 \cdot 16^k)$. *$fg$ maps $\bigcup_k -[4^k, 2 \cdot 4^k)$ one-to-one onto $\bigcup_k [4 \cdot 16^k, 16 \cdot 16^k)$. *$fg$ maps $\bigcup_k [4^k, 2 \cdot 4^k)$ one-to-one onto $\bigcup_k [16^k, 4 \cdot 16^k)$. *$fg$ maps $0$ to itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 1 }
Inverse of continuous function If we have a continuous function $f$ between two topological spaces that is one to one and onto, can we conclude that $f^{-1}$ is continuous?
No. As a counterexample, take $ f:[0,1) \to S^1 $ given by $$ f(x) = (\cos(2\pi x),\sin(2\pi x)) $$ or, depending on your preferred definition of $S^1$, $$ f(x) = e^{2 \pi i x} $$ However, it is useful to note that if $f:X \to Y$ is one to one and onto with $X$ compact and $Y$ Hausdorff, then $f^{-1}$ must be continuous since any continuous map between these spaces takes closed sets to closed sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1161912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the equation of a line parallel to the y-axis, that goes through the point $(\pi,0)$ I have been trying to do this problem and I am very confused. I know the gradient is infinity when any line is parallel to the y-axis, therefore, $y = \infty \cdot x + c$, right ($y = mx + c$ being the general equation of a straight line)? We know $y = 0$, therefore $0 = \infty \cdot \pi + c$ and so $c = -\infty \cdot \pi$. Therefore, $y = \infty \cdot x - \infty \cdot \pi$. And therefore $y = 0$ which would seem to check-out? But, the answer I have been given (by my teacher) says $x = \pi$. Please can somebody explain how you come to this answer?
Is this in $\mathbb{R}^2$? If so, the equation $y=mx+b$ will not help you, as the slope of any line parallel to the $y$-axis is undefined. Instead, a vertical line (parallel to $y$-axis) has the equation $x=a$, where $a$ is the $x$-intercept of the line. This should clarify the (correct) answer provided by your teacher.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
About multivariable quadratic polynomials Say one has a polynomial function $f : \mathbb{C}^n \rightarrow \mathbb{C}$ such that it is quadratic in any of its variables $z_i$ (for $i \in \{ 1,2,..,n\}$). Then it follows that for any $i$ one can rewrite the polynomial as $f = A_i (z_i - a_i)(z_i - b_i)$, where the constants depend on the $i$ chosen. * *But does it also follow that one can write the function in the form, $f = \prod_{i=1}^n (B_i (z_i - c_i)(z_i - d_i) )$ ? If the above is not true then what is the simplest decomposition that can be written for such a $f$? * *Like is it possible to redefine the basis in $\mathbb{C}^n$ to some $Z_i$ such that $f = \sum_{i} (a_i Z_i^2 + b_i Z_i + c_i)$ ? *Like any extra leverage that can be gotten if one knows that $f$ is real-rooted?
You can't expect the $A_i,a_i,b_i$ to be constant. One could have $$f=z_1^2+z_2^2=(z_1+iz_2)(z_1-iz_2).$$ What about if $$f=z_1^2+z_2^2+z_3^2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
You construct a rectangular box with volume $K\ \text{cm}^3$. Prove that a cube uses the least amount of material to construct the box.
Let $x$, $y$ and $z$ be the lengths of the sides of the box. If we suppose this box is closed, we must minimize the function $f(x,y,z)=2xy+2xz+2yz$ restricted to $xyz=K$. Substituting $z=\frac{K}{xy}$, we can write the surface area of the box, call it $V$, as follows $$V=2\left(xy+\frac{K}{y}+\frac{K}{x}\right)$$ If we fix $x$ then we have $V_y=2\left(x-\frac{K}{y^2}\right)$ and $V_{yy}=\frac{4K}{y^3}>0$, so $V$ reaches its minimum value for this $x$ when $y=\sqrt{\frac{K}{x}}$. Now we look for where the function $x\mapsto2\left(2\sqrt{Kx}+\frac{K}{x}\right)$ reaches its minimum value.
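As a numerical sanity check (not part of the argument above), here is a short Python sketch with the sample value $K=8$, for which the cube has side $2$ and surface area $24$; a grid search over the two free side lengths finds its minimum exactly at the cube:

```python
K = 8.0                                    # sample volume (my choice for the demo)

def surface(x, y):
    # surface area of a closed box with sides x, y and z = K/(x*y)
    return 2 * (x * y + K / x + K / y)

cube_side = K ** (1 / 3)                   # ≈ 2.0 for K = 8
cube_area = surface(cube_side, cube_side)  # ≈ 24.0

# coarse grid search over positive side lengths 0.1, 0.2, ..., 9.9
best = min((surface(i / 10, j / 10), i / 10, j / 10)
           for i in range(1, 100) for j in range(1, 100))
print(cube_area, best)                     # the grid minimum sits at x = y = 2.0
```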
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Can we predict the past? Can we use probability rules to predict the occurrence of an event which has already happened in the past or already formed? For example, hemoglobin is a protein formed of $141$ amino acids connected like a chain with specific order, the first amino acid is Leucine (we have only $20$ types of amino acids forming any protein), is it valid to say that probability of leucine being first in this chain is $\frac1{20}$ so the probability to get the $141$ a.a hemoglobin with such order by chance is $\left(\frac{1}{20}\right)^{141}?$ or this prediction makes no sense as we already have hemoglobin formed with such order in the nature?
Probabilities can represent a state of limited knowledge of events that have already happened. This can occur, for example, in games of cards after the hands have already been dealt but before players have revealed their cards through play. Each player has perfect knowledge of his or her own hand but only probabilistic knowledge of the cards held by other players. So we could say, given certain conditions, but not knowing what proteins these conditions would give rise to, what is the probability that the conditions would give rise to hemoglobin? The conditions that yield the answer $\left(\frac{1}{20}\right)^{141},$ however, would be something along the lines of we choose exactly $141$ amino acids one at a time from a practically infinite "bag" in which each amino acid occurs with equal frequency. Moreover we assume only one attempt. That is not how proteins form under any realistic circumstances. There were undoubtedly a great many times that hemoglobin could first have been encoded by some creature's DNA during the history of life on Earth. So even without taking into account the probabilistic dependencies that may exist between the encoding of one amino acid in a protein and the encoding of the next, the probability under the limited one-time uniform-random-choice conditions does not seem to be of much interest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Find the locus of $w$ $$ \text{Find the locus of $w$, where $z$ is restricted as indicated:} \\ w = z - \frac{1}{z} \\ \text{if } |z| = 2 $$ I have tried solving this by multiplying both sides by $z$, and then using the quadratic equation. I get $z = \frac{w \pm \sqrt{w^2+4}}{2}$. I then set $0 \leq w^2+4$. But I still have no idea on how to solve this. Thanks in advance.
let $z = 2(\cos t + i \sin t)$. Then $$w= u + iv= z - \frac 1 z=2(\cos t + i \sin t ) - \frac 1 2(\cos t - i\sin t). $$ that gives you $$u = \frac 3 2\cos t, v = \frac 52 \sin t, \, \text{ which is an ellipse } \frac 49 u^2 + \frac 4 {25} v^2 = 1. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integral of an inverse Let $f(x)=x^3−2x^2+5$. Then find the integral $$\int_{37}^{149} \! f^{-1}(x) \, \mathrm{d}x$$ I know the inverse theorem for differentiation.( I don't think we can apply it here). Is there other theorem for integration.(I am not finding the inverse and then integrating).
Here $f(x)$ is a cubic polynomial. $f'(x)=3x^2-4x=x(3x-4)>0$ for $x>\frac43$, so $f$ is strictly increasing there; since $f(4)=37$ and $f(6)=149$, $f$ maps $[4,6]$ one to one and onto $[37,149]$, so $f^{-1}$ exists on $[37,149]$. Now use change of variables. Let $x=f(t)$ for $x\in [37,149]$. Then $f^{-1}(x)=t$ and $dx=f'(t)\,dt$, so $f^{-1}(x)\,dx=tf'(t)\,dt$. Hence, $\displaystyle \int_{37}^{149} f^{-1}(x)\,dx=\int_{f^{-1}(37)}^{f^{-1}(149)} tf'(t)\,dt=\int_{4}^{6} tf'(t)\,dt$.
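A numerical check of this substitution (a sketch; it computes $f^{-1}$ by bisection and uses the easily verified values $f(4)=37$, $f(6)=149$):

```python
def f(x):
    return x**3 - 2 * x**2 + 5

def f_inv(y):
    # bisection, valid since f is strictly increasing for x > 4/3
    lo, hi = 4 / 3, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def midpoint(g, a, b, n=20000):
    # composite midpoint rule for the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

direct = midpoint(f_inv, 37, 149, n=5000)                        # ∫ f⁻¹(x) dx
substituted = midpoint(lambda t: t * (3 * t**2 - 4 * t), 4, 6)   # ∫ t f'(t) dt
exact = 3 * (6**4 - 4**4) / 4 - 4 * (6**3 - 4**3) / 3            # antiderivative 3t⁴/4 − 4t³/3
print(direct, substituted, exact)                                # all ≈ 1732/3 ≈ 577.33
```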
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Limit of an unknown cubic expression Let $f(x)=ax^3+bx^2+cx+d$ and $g(x)=x^2+x-2$. If $$\lim_{x \to 1}\frac{f(x)}{g(x)}=1$$ and $$\lim_{x \to -2}\frac{f(x)}{g(x)}=4$$ then find the value of $$\frac{c^2+d^2}{a^2+b^2}$$ Since the denominator is tending to $0$ in both cases, the numerator should also tend to $0$, in order to get indeterminate form. But it led to more and more equations.
Let us make the ansatz $$f(x)=a(x+\alpha)(x-1)(x+2).$$ Taking the two limits gives $$a(1+\alpha)=1,\qquad a(\alpha-2)=4\implies a=-1,\ \alpha=-2,$$ so $$f(x)=-(x-1)(x+2)(x-2)=-x^3+x^2+4x-4.$$ Hence $c^2+d^2=16+16=32$ and $a^2+b^2=1+1=2$, so $$\frac{c^2+d^2}{a^2+b^2}=\frac{32}{2}=16.$$
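A quick numerical confirmation of both limits and the final ratio (sketch):

```python
def f(x):
    return -x**3 + x**2 + 4 * x - 4

def g(x):
    return x**2 + x - 2

# away from the roots of g, f/g = -(x - 2), so the limits are 1 and 4
for point, limit in [(1.0, 1.0), (-2.0, 4.0)]:
    assert abs(f(point + 1e-7) / g(point + 1e-7) - limit) < 1e-5

ratio = (4**2 + (-4)**2) / ((-1)**2 + 1**2)   # (c^2 + d^2)/(a^2 + b^2)
print(ratio)                                   # 16.0
```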
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
$Y=X+Z$ and $X \perp Z$ and $Y$, $X$, $Z$ continuous. Is $X \perp Z|Y$? Let $Y=X+Z$. Assume that $Y$, $X$, $Z$ are continuously distributed. Let $X \perp Z$. Prove / disprove that $X \perp Z | Y$. I've seen a bunch of these types of problems on math.SE (and almost all disprove by counterexample) but in all problems I've seen the counterexamples use discrete distributions. I suspect there are still counterexamples when the variables are continuous but I couldn't find a counterexample and just wanted to make sure that there is nothing special about discrete distributions. Here's a counterexample if the variables are discrete: Let $X \in \{0,1\}$ and $Z \in \{0,1\}$ independently. Then conditional on $Y=2$, we know that $X=Z=1$. In the continuous case I was trying to convince myself that the statement is false by letting $Z$ take on large values and $X$ small values then if $Y$ takes on large values then $Z$ must also be large but then couldn't quite make the step that this would imply that $X$ and $Z$ are dependent. Quick note on notation: $X \perp Z$ means $X$ is independent of $Z$ and $X \perp Z|Y$ means $X$ is independent of $Z$ conditional on $Y$. Thanks for help.
Other than in degenerate examples, intuitively, it seems obvious that $X,Z$ are dependent given $Y$ because given both $Z$ and $Y$, the value of $X$ is determined exactly. This applies for both discrete and continuous random variables. In the continuous case, for any distributions of $X,Z:\;\;$ $f_{X,Z\mid Y}(x,z\mid y) = 0$ whenever $x+z\neq y$ but in general $f_{X\mid Y}(x\mid y)\;f_{Z\mid Y}(z\mid y) \neq 0$. This is sufficient to show that $X,Z$ are dependent given $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1162948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inclusion relation between two summability methods Let $0\leq x<1$ and $s_n$ be a sequence of partial sums of the series $\sum_{n=0}^{\infty}a_n$. It is called that the series $\sum_{n=0}^{\infty}a_n$ is $(A)$ or Abel summable to $s$ if $$\lim_{x\to1^-}(1-x)\sum_{n=0}^{\infty}s_nx^n=s,$$ and the series $\sum_{n=0}^{\infty}a_n$ is called $(L)$ summable to $s$ if $$\lim_{x\to1^-}\frac{-1}{\log(1-x)}\sum_{n=0}^{\infty}\frac{s_n}{n+1}x^{n+1}=s.$$ I need help to prove $(A)$ summability of the series $\sum_{n=0}^{\infty}a_n$ to $s$ implies $(L)$ summability of the series $\sum_{n=0}^{\infty}a_n$ to $s$. That is $(L)$ summability includes $(A)$ summability.
$$ \begin{matrix} \text{Let} & F(x) = \sum\limits_{n=0}^\infty \frac{s_n}{n+1} x^{n+1} &\text{so}& F'(x) = \sum\limits_{n=0}^\infty s_n x^n \\ \text{and} & G(x) = -\log(1-x) &\text{so}& G'(x)=\frac1{1-x} \\ \text{then} & -\dfrac1{\log (1-x)}\sum\limits_{n=0}^\infty \dfrac{s_n}{n+1}x^{n+1}=\dfrac{F(x)}{G(x)} &\text{and}& (1-x)\sum\limits_{n=0}^\infty s_nx^n=\dfrac{F'(x)}{G'(x)}. \\ \end{matrix} $$ We want to prove that $\lim\limits_{x\to1-0} \dfrac{F'(x)}{G'(x)}=s$ implies $\lim\limits_{x\to1-0} \dfrac{F(x)}{G(x)}=s$. As $\lim\limits_{x\to1-0}G(x)=\infty$, this is true by L'Hospital's rule. (Most textbooks state L'Hospital's rule for limits of the forms $\frac00$ and $\frac{\pm\infty}{\pm\infty}$, but in the case $\lim |G|=\infty$ no assumption on $\lim F$ is required.)
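As a numerical illustration (my own example, not part of the proof), take $a_n=(-1)^n$, whose series diverges but is Abel summable to $\frac12$; evaluating both means at $x$ close to $1$ shows each approaching $\frac12$, the $(L)$-mean much more slowly:

```python
import math

x, N = 0.9999, 300_000       # x near 1; N large enough that x^N is negligible
s = 0                        # partial sums s_n of Σ(-1)^n are 1, 0, 1, 0, ...
abel = 0.0
l_num = 0.0
xn = 1.0                     # running power x^n
for n in range(N):
    s = 1 - s
    abel += s * xn
    l_num += s / (n + 1) * xn * x     # s_n x^(n+1) / (n+1)
    xn *= x

abel *= 1 - x
lmean = -l_num / math.log(1 - x)
print(abel, lmean)           # ≈ 0.50003 and ≈ 0.54: both tend to 1/2 as x → 1⁻
```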
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Contour integration of cosine of a complex number I am trying to find the value of $$ -\frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \cos\left(be^{i\theta}\right) \mathrm{d}\theta,$$ where $b$ is a real number. Any helps will be appreciated!
Because cosine is an even function you may write the integral as $$-\frac1{2 \pi} \int_{-\pi/2}^{3 \pi/2} d\theta \, \cos{\left ( b e^{i \theta} \right )} = -\frac{1}{i 2 \pi} \oint_{|z|=1} dz \frac{\cos{b z}}{z} $$ which, by the residue theorem or Cauchy's theorem, is $$-\frac{1}{i 2 \pi} i 2 \pi \cos{0} = -1$$
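A numerical check (sketch; the sample values of $b$ are my own) confirming that the value is $-1$ for every real $b$:

```python
import cmath, math

def value(b, n=20000):
    # midpoint rule for  -1/π ∫_{-π/2}^{π/2} cos(b e^{iθ}) dθ
    h = math.pi / n
    total = sum(cmath.cos(b * cmath.exp(1j * (-math.pi / 2 + (i + 0.5) * h)))
                for i in range(n))
    return -total * h / math.pi

for b in (0.5, 2.0, 7.3):
    print(b, value(b))       # ≈ -1 (with vanishing imaginary part) for each b
```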
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Solving recurrence equations with positive indices by generating functions I don't know how to solve recurrence equations with positive indices like $$a_{n+2} + 4a_{n+1}+ 4a_n = 7$$ by generating functions. How does one solve this kind of problem?
Hint. You may just multiply both sides of the following relation by $x^{n+2}$ and summing it: $$ a_{n+2} + 4a_{n+1}+4a_n = 7 \tag1 $$ to get $$ \sum_{n=0}^{\infty}a_{n+2}x^{n+2} + 4x\sum_{n=0}^{\infty}a_{n+1}x^{n+1}+4x^2\sum_{n=0}^{\infty}a_n x^n= 7x^2\sum_{n=0}^{\infty}x^n $$ or $$ \sum_{n=2}^{\infty}a_{n}x^{n} + 4x\sum_{n=1}^{\infty}a_{n}x^{n}+4x^2\sum_{n=0}^{\infty}a_n x^n= 7x^2\sum_{n=0}^{\infty}x^n \tag2 $$ equivalently, setting $\displaystyle f(x):=\sum_{n=0}^{\infty}a_{n}x^{n} $, you formally get $$ f(x)-a_0-a_1x+4x(f(x)-a_0)+4x^2f(x)=7x^2\frac{1}{1-x} $$ that is $$ (2x+1)^2f(x)=\frac{7x^2}{1-x}+(4a_0+a_1)x+a_0 $$ $$ f(x)=\frac{7x^2}{(1-x)(2x+1)^2}+\frac{(4a_0+a_1)x+a_0}{(2x+1)^2} \tag3 $$ Then by partial fraction decomposition and power series expansion, you are able to identify coefficients of both sides of $(3)$.
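The partial-fraction decomposition in $(3)$ leads to the closed form $a_n=\frac79+(-2)^n(A+Bn)$, with $A,B$ fixed by $a_0,a_1$. A short sketch (initial values chosen arbitrarily for the demo) checking this against direct iteration, using exact rational arithmetic:

```python
from fractions import Fraction as F

a0, a1 = F(0), F(0)                     # sample initial values (my choice)

# iterate the recurrence  a_{n+2} = 7 - 4 a_{n+1} - 4 a_n
seq = [a0, a1]
for _ in range(18):
    seq.append(7 - 4 * seq[-1] - 4 * seq[-2])

# closed form from the generating function: a_n = 7/9 + (-2)^n (A + B n)
A = a0 - F(7, 9)
B = (F(7, 9) - a1) / 2 - A
closed = [F(7, 9) + (-2) ** n * (A + B * n) for n in range(20)]

assert seq == closed
print([int(t) for t in seq[:6]])        # [0, 0, 7, -21, 63, -161]
```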
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to prove this limit is $0$? Let $f:[0,\infty )\rightarrow \mathbb{R}$ be a continuous function such that: * *$\forall x\ge 0\:,\:f\left(x\right)\ne 0$. *$\lim _{x\to \infty }f\left(x\right)=L\:\in \mathbb{R}$. *$\forall \epsilon >0\:\exists x_0\in [0,\infty)$ that $0<f\left(x_0\right)<\epsilon $. Prove that: $$L=0$$ So far, I thought about assuming that $L\ne 0$ to get a contradiction. Assume that $f\left(0\right)<0$. Now, choose an arbitrary $\epsilon $>$0$, and we know that $\exists x_0>0$ such that $0<f\left(x_0\right)<\epsilon$. $f$ is continuous, so from the Intermediate value theorem, $\exists c\in \left[0,x_0\right]: \space f\left(c\right)=0$, and it's a contradiction to (1). So, $f\left(0\right)\ge 0$. (Actually, I think it means that $\forall c\in \left[0,\infty \right): \space f\left(c\right)\ge 0$, and it contradicts that $L<0$). I get stuck while proving the case when $L>0$. Can someone guide me? Thanks in advance!
First note that $f(x) >0$ for all $x$. This follows from 3. and the fact that $f$ is continuous (and so $f([0,\infty))$ is connected). It follows from this that $L \ge 0$. Let $\alpha(x) = \min_{t \in [0,x]} f(t)$. Since $[0,x]$ is compact, we see that $\alpha(x) >0$ for all $x$. Property 3. shows that $\lim_{x \to \infty} \alpha(x) = 0$. If $L>0$, then there would be some $m>0$ such that $\alpha(x) \ge m$ for all $x$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
On a property of split short exact sequences Let $A_{\bullet}, B_\bullet$ and $C_\bullet$ be three short exact sequences of groups (not necessarily abelian) out of which $A_\bullet$ and $B_\bullet$ are split. Assume that there is again a short exact sequence, $$0 \to A_{\bullet} \to B_{\bullet} \to C_{\bullet} \to 0$$of the short exact sequences. Is it necessary that $C_\bullet$ is also split?
$\newcommand{\ZZ}{\mathbb{Z}}$ Here is what I believe to be a counterexample, where all groups are $\ZZ$-modules (i.e., abelian groups): $A_1 = 2 \ZZ$. $A_2 = \left\{\left(i,j\right) \in \ZZ \times \ZZ \mid 4 \mid 2i+j\right\}$ (the first "$\mid$" here is a "such that" symbol, while the second one is a "divides" sign). $A_3 = 2 \ZZ$. The injection $A_1 \to A_2$ sends each $i$ to $\left(i, 0\right)$, and the surjection $A_2 \to A_3$ sends each $\left(i, j\right)$ to $j$. $B_1 = \ZZ$. $B_2 = \ZZ \times \ZZ$. $B_3 = \ZZ$. The injection $B_1 \to B_2$ sends each $i$ to $\left(i, 0\right)$, and the surjection $B_2 \to B_3$ sends each $\left(i, j\right)$ to $j$. $C_1 = \ZZ / 2 \ZZ$. $C_2 = \ZZ / 4 \ZZ$. $C_3 = \ZZ / 2 \ZZ$. The injection $C_1 \to C_2$ sends each $\overline{i}$ to $\overline{2i}$, and the surjection $C_2 \to C_3$ sends each $\overline{i}$ to $\overline{i}$. The maps $A_1 \to B_1$ and $B_1 \to C_1$ are the canonical inclusion and projection that one would expect. Same for the maps $A_3 \to B_3$ and $B_3 \to C_3$. The map $A_2 \to B_2$ is the canonical inclusion. The map $B_2 \to C_2$ sends each $\left(i, j\right)$ to $\overline{2i+j}$. The sequences $A_\bullet$ and $B_\bullet$ are exact, and therefore are split because any short exact sequence which ends with a free module splits. But the sequence $C_\bullet$ is a non-split exact sequence. That is, unless I've made a mistake, for which there is plenty of occasion...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
convergence of a numerical series I would like to study the convergence of the numerical series $$ S_n=\sum_{k= 1}^n u_k=\sum_{k= 1}^n \frac{1}{\left(\sqrt[k]{2}+\log k\right)^{k^2}}. $$ I tried the Cauchy rule (i.e. evaluating $\lim _{k\rightarrow +\infty}(u_k)^{\frac 1 k}$) but without success.
$$\frac1{\left(\sqrt[k]2+\log k\right)^{k^2}}<\frac1{\left(\sqrt[k]{2}\right)^{k^2}}=\frac1{2^k},$$ so the series converges by comparison with the geometric series $\sum_k 2^{-k}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
H-function for the following integral I stumbled upon the integral $\int\limits_0^{+\infty} u^\nu\exp(-au-bu^\rho)du$, $\Re(a)>0,\,\,\Re(b)>0,\,\,\rho>0$. I cannot find any way to represent it using the Fox-H function. Any hints? PS: This is the Krätzel function, right? The one known as reaction rate integral.
Expand the factor $\exp(-bu^\rho)$ into a Taylor series. You get: $\int_0^\infty u^\nu \exp(-au-bu^\rho)du=\sum_{k=0}^\infty \frac{1}{k!} (-b)^k \int_0^\infty u^{\nu+k \rho} \exp(-au)du$. With the substitution $v=au$ you get the following series of Gamma functions: $\int_0^\infty u^\nu \exp(-au-bu^\rho)du=\sum_{k=0}^\infty \frac{1}{k!} (-b)^k \left(\frac{1}{a}\right)^{\nu + k \rho + 1} \Gamma(\nu + k \rho + 1)$. It is possible to express such sums of Gamma functions by contour integrals. For example, let $f(s)$ be an arbitrary function; then one has the following contour integral: $\frac{1}{2 \pi i}\oint \Gamma(qs+r) f(s) ds = \sum_{k=0}^\infty \operatorname{Res}\left(\Gamma(qs+r),\frac{-k-r}{q}\right) f\left(\frac{-k-r}{q}\right)$. Here you see what the poles of the Gamma function are. It holds that $\operatorname{Res}\left(\Gamma(qs+r),\frac{-k-r}{q}\right)$ is proportional to the factor $\frac{(-1)^k}{k!}$. Now the H-function can be used. The coefficients $(-b)^k \left(\frac{1}{a}\right)^{\nu + k \rho + 1}$ arise when a suitable $z$ is chosen for the factor $z^{-s}$.
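The series representation can be checked numerically; here is a sketch with sample parameters (my own choice, satisfying $\Re a,\Re b>0$ and $\rho>0$) comparing the integral with the Gamma-function series:

```python
import math

a, b, rho, nu = 1.5, 0.3, 0.7, 0.5      # sample parameters (assumption for the demo)

# left side: midpoint rule for ∫_0^∞ u^ν exp(-a u - b u^ρ) du, truncated at U
def integrand(u):
    return u**nu * math.exp(-a * u - b * u**rho)

U, n = 60.0, 300_000
h = U / n
integral = sum(integrand((i + 0.5) * h) for i in range(n)) * h

# right side: the series  Σ_k (-b)^k / k! · a^{-(ν+kρ+1)} · Γ(ν+kρ+1)
series = sum((-b) ** k / math.factorial(k)
             * a ** -(nu + k * rho + 1) * math.gamma(nu + k * rho + 1)
             for k in range(60))
print(integral, series)                  # the two values agree closely
```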
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Linear least squares problems solved using LU decomposition I have been given this datafile for which I have to solve $Ax=b$, in which $A$ is a matrix and $x$ and $b$ are vectors. The datafile consists of two vectors, one with the $x$-coordinates and the other with the $y$-coordinates. I don't know how I should create the $A$ matrix from that; would it be a matrix of size (length of dataset) $\times$ 2?
Here is how you get the matrix. You substitute the pairs $(x_i,y_i)$ in the model $$ y = \beta_0+\beta_1 x +\beta_2 x^2 + \epsilon $$ to get the system of equations $$ y_i = \beta_0+\beta_1 x_i +\beta_2 x_i^2 + \epsilon, \quad i=1,2,\dots, n $$ then you will get the matrix.
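To make this concrete, here is a sketch (pure Python, with synthetic data standing in for the actual datafile) that builds the design matrix for the quadratic model and solves the normal equations $A^TA\beta=A^Ty$ with a hand-rolled LU decomposition:

```python
def lu_solve(M, rhs):
    n = len(M)
    # Doolittle LU without pivoting (fine for the SPD normal-equations matrix)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = M[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):
            L[j][i] = (M[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # forward substitution (L z = rhs), then back substitution (U beta = z)
    z = [0.0] * n
    for i in range(n):
        z[i] = rhs[i] - sum(L[i][k] * z[k] for k in range(i))
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (z[i] - sum(U[i][k] * beta[k] for k in range(i + 1, n))) / U[i][i]
    return beta

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]       # stand-in data
ys = [5 + 2 * x - x * x for x in xs]           # exact quadratic, no noise

A = [[1.0, x, x * x] for x in xs]              # (len of dataset) x 3 design matrix
AtA = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(3)] for i in range(3)]
Aty = [sum(A[r][i] * ys[r] for r in range(len(A))) for i in range(3)]
beta = lu_solve(AtA, Aty)
print(beta)                                    # ≈ [5.0, 2.0, -1.0]
```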
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Trees have a vertex of degree $1$ Prove that any tree has a vertex of degree $1$. Let graph $G=(V,E)$ have $n$ vertices and $m$ edges where $m<n$. We need to prove that the minimum degree of, $\delta (G)=1$ Since G is connected then there exists a path from $u$ to $v$ such that $u,v \in V$. Is what I have said so far correct? I really don't know how to carry it on. I need a formal answer.
This is trivial: every tree on $n$ vertices has $n-1$ edges. If every vertex had degree at least $2$, the sum of the degrees would be at least $2n$; since the sum of the degrees is twice the number of edges, the tree would then have at least $n$ edges, which is impossible.
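Both facts ($n-1$ edges, hence a degree-$1$ vertex) can be spot-checked on random labelled trees generated from Prüfer sequences (sketch):

```python
import random

def tree_from_pruefer(seq):
    # decode a Prüfer sequence into the edge list of a labelled tree
    n = len(seq) + 2
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in range(n) if degree[u] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [u for u in range(n) if degree[u] == 1]
    edges.append((u, w))
    return n, edges

random.seed(1)
for _ in range(100):
    m = random.randint(3, 40)
    n, edges = tree_from_pruefer([random.randrange(m) for _ in range(m - 2)])
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    assert len(edges) == n - 1          # a tree on n vertices has n-1 edges
    assert min(deg) == 1                # so some vertex has degree 1
print("checked 100 random trees")
```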
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
In how many ways you can write $2015$ as a $20x+15y$? In how many ways you can write $2015$ as a $20x+15y$ where $x$ and $y$ are natural numbers? So probably I can do it using Euclid's algorithm but right now I am not sure how to do it. Could anyone explain step-by-step how to do exercises like this? I will be so grateful!
This might be overkill, but it couldn't do any harm. Let $$\begin{align*} f : \mathbb Z \times \mathbb Z &\rightarrow \mathbb Z \\ (x,y) &\mapsto 20x + 15y \end{align*}$$ We are looking for solutions to $f(x,y) = 2015$. Suppose you've found two solutions $(x,y)$ and $(X,Y)$. Then $f(X,Y) - f(x,y) = 0 = f(X-x,Y-y)$ and so $20(X-x) + 15(Y-y) = 0$. Let $n = X-x$ and $m = Y-y$. We can then see that we have: $$(X,Y) = (x,y) + (n,m)$$ where $20n + 15m = 0$. Note that any solution to this will add to $(x,y)$ to give another solution. The point of this is to illustrate that any two solutions are 'separated' by a tuple $(n,m)$ which satisfies the equation $20n + 15m = 0$. This means that if we find just one solution to the original problem, we can generate all possible solutions by allowing ourselves to add any and all solutions for $(n,m)$. All that is left to determine is the number of solutions to $20n + 15m = 0$. Any two such solutions $(n,m)$ and $(N,M)$ can be added to yield another solution. There is also the solution $(n,m) = (0,0)$. So what we have on our hands is a group of $2$-tuples. It is clear that in order to specify a pair $(n,m)$, we only need to specify one of the integers. This uniquely identifies the pair as there will be a unique solution given that one integer. So this group is finitely generated of rank $1$. All we must do then is find a generating element of the group. $(3,-4)$ must be a generating element because $3$ and $-4$ are coprime and so this $2$-tuple could not have been generated by another element. All possible $2$-tuple solutions to the null equation are thus $d(3,-4) : d \in \mathbb Z$ and so all possible solutions to the problem are: $$(x,y) + d(3,-4) : d \in \mathbb Z$$ ...where $(x,y)$ is some solution to the problem. $(100,1)$ is an obvious choice. So the complete set of solutions is: $$(100,1) + d(3,-4) : d \in \mathbb Z$$ Finally, since the problem asks for natural numbers, we need $x = 100+3d \ge 0$ and $y = 1-4d \ge 0$ (requiring $x,y \ge 1$ instead gives the same range here), i.e. $-33 \le d \le 0$. So there are exactly $34$ ways.
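A brute-force check (sketch) that the parametric family $(100,1)+d(3,-4)$ really lists every natural-number solution:

```python
# all natural-number solutions of 20x + 15y = 2015 by brute force
brute = sorted((x, y)
               for x in range(0, 101)
               for y in range(0, 135)
               if 20 * x + 15 * y == 2015)

# the family (100, 1) + d(3, -4); x, y >= 0 forces -33 <= d <= 0
family = sorted((100 + 3 * d, 1 - 4 * d) for d in range(-33, 1))

assert brute == family
print(len(brute))                       # 34 solutions
```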
{ "language": "en", "url": "https://math.stackexchange.com/questions/1163922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Cheeger inequalities for nonregular graphs I'm looking for a reference for something I thought was easy and well known. There are (at least) two definitions of expander graphs. There is a combinatorial definition via edge expansion, and an algebraic definition using the spectral gap. Neither of these definitions require the graph to be regular. Now, I always thought that the Cheeger inequalities implied that these definitions were equivalent up to the constants. However, when I looked up the Cheeger inequalities it seems that they only talk about regular graphs. Is there a version of Cheeger's inequalities for nonregular graphs as well? In general, is it true that a family of (not necessarily regular) graphs is a family of expanders in the first sense iff they are expanders in the second sense?
For graphs that are not regular, the right matrix to look at is $A_G' := D^{-1/2}A_GD^{-1/2}$. (Here $D^{-1/2}$ is simply the diagonal matrix whose $(i,i)$th entry is $(\deg(i))^{-1/2}$; we assume there are no isolated vertices, so none of the degrees is zero.) This matrix is sometimes called the normalized adjacency matrix of a graph. Note that it is also symmetric. Now consider the vector $u$ whose $i$th entry is $(\deg(i))^{1/2}$, so that $D^{-1/2}u = 1_n$ and hence $A_G'u = D^{-1/2}A_GD^{-1/2}u = D^{-1/2}A_G1_n = u$. The last equality is because $A_G1_n$ is a vector whose $i$th entry is $\deg(i)$. Thus $u$ is an eigenvector with eigenvalue $1$. It turns out $\lambda_{\max}(A_G') = 1$. This is not entirely trivial. From the variational characterization of $\lambda_{\max}$, we have: $$\lambda_{\max}(A_G')=\max_{x}\frac{x^TA_G'x}{x^Tx} =\max_{x} \frac{\sum_{ij\in E}(2x_ix_j)/(\deg(i)\deg(j))^{1/2}}{\sum_i x^2_i}.$$ Notice that $\sum_{ij\in E}\left(x_i^2+x_j^2\right) =\sum_{k=1}^n\deg(k)x_k^2$. Applying $2ab\le a^2+b^2$ on each edge, with $a=x_i/(\deg(i))^{1/2}$ and $b=x_j/(\deg(j))^{1/2}$, we get $$\sum_{ij\in E}\frac{2x_ix_j}{(\deg(i)\deg(j))^{1/2}}\leq \sum_{ij\in E}\left(\frac{x_i^2}{\deg(i)}+\frac{x_j^2}{\deg(j)}\right)=\sum_i x_i^2.$$ So $\lambda_{\max}(A_G')\leq 1$. It turns out that Cheeger's inequality also holds in terms of the second smallest eigenvalue of the normalized Laplacian $L_G' := I - A_G'$ (without the factor $d$ in the denominator that appears in the $d$-regular statement; no regularity assumption is needed).
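The eigenvalue-$1$ claim is easy to check numerically on a small non-regular graph; a sketch (pure Python, with the fixed vector $u_i=\sqrt{\deg(i)}$ and a power iteration for $\lambda_{\max}$):

```python
import math, random

# a small non-regular graph (degrees 1, 3, 3, 1, 2), not bipartite
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (2, 4)]
n = 5
deg = [0] * n
for p, q in edges:
    deg[p] += 1
    deg[q] += 1

def norm_adj(x):
    # y = D^{-1/2} A_G D^{-1/2} x
    y = [0.0] * n
    for p, q in edges:
        w = 1 / math.sqrt(deg[p] * deg[q])
        y[p] += w * x[q]
        y[q] += w * x[p]
    return y

u = [math.sqrt(d) for d in deg]
assert all(abs(a - b) < 1e-12 for a, b in zip(norm_adj(u), u))   # A'_G u = u

random.seed(0)
x = [random.random() + 0.1 for _ in range(n)]
for _ in range(2000):                    # power iteration
    y = norm_adj(x)
    s = math.sqrt(sum(t * t for t in y))
    lam = sum(a * b for a, b in zip(x, y))   # Rayleigh quotient (x is a unit vector)
    x = [t / s for t in y]
print(lam)                               # ≈ 1.0 = λ_max of the normalized adjacency matrix
```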
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is there a difference between the calculated value of Pi and the measured value? The mathematical value of Pi has been calculated to a ridiculous degree of precision using mathematical methods, but to what degree of precision has anyone actually measured the value of Pi (or at least the ratio of diameter to circumference), by actually drawing a circle and then measuring the diameter and circumference? If these two values differ, is the resulting difference (discounting inaccuracy in measurement) the result of the curvature of the surface on which the circle is drawn, or in the case of a circle in space in zero gravity (as much as that can exist), the curvature of space-time?
There's an underlying error in the question, namely the assumption that being in a curved space would result in a "different measured value of $\pi$". What happens in a curved space is that the ratio between a circle's circumference and diameter is no longer the same for all circles. More precisely, the ratio will depend on the size of the circle. For small circles (with diameter tending towards 0) the ratio will converge towards the one unchanging mathematical constant $\pi$ -- as circles get larger the ratio will become either larger or smaller according to whether the curvature of space is negative or positive. However, $\pi$ as the limit of $\frac{\text{circumference}}{\text{diameter}}$ for small circles is the same mathematical constant for all possible curvatures of space. According to the General Theory of Relativity we live in a slightly curved space. This has been measured directly in the vicinity of Earth by the Gravity Probe B experiment. The experiment didn't actually measure the circumference of a large circle, but the results imply that the geometric circumference of a circle approximating the satellite's orbit around the earth is about one inch shorter than $\pi$ times its diameter, corresponding to $\frac CD\approx 0.9999999984\, \pi$. (The curvature is caused by Earth's mass being inside the orbital circle. A circle of the same size located in empty space would have a $\frac CD$ much closer to $\pi$). Science fiction authors sometimes get this wrong. For example in Greg Bear's Eon there's a mathematician character who concludes she's in a curved area of space by measuring the value of $\pi$ and getting a nonstandard value. I headdesked -- it doesn't work that way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Complex Analysis: Confirmation for a question in my textbook? I'm being told that $$\frac{\exp(1+i3\pi)}{\exp(-1+i\pi /2)}= \exp(2)i$$ I keep getting $-\exp(2)i$. I have no idea how they didn't get that to be negative.
Relatively straightforward: $$\frac{\exp(1+i3\pi)}{\exp(-1+i\pi /2)}= \frac{e \exp(i3\pi)}{e^{-1}\exp(i\frac{\pi}{2})} = \frac{e\times(-1) }{e^{-1}\times i} = -e^2 \frac{1}{i}$$ But $\frac{1}{i} = -i$ (because $-1 = i\times i$, then you divide by $i$) so you get $$\frac{\exp(1+i3\pi)}{\exp(-1+i\pi /2)}=ie^2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How to compute the summation of a binomial coefficient/ show the following is true $\sum\limits_{k=0}^n \left(2k+1\right) \dbinom{n}{k} = 2^n\left(n+1\right)$. I know that you have to use the binomial coefficient, but I'm not sure how to manipulate the original summation to make the binomial coefficient useful.
Consider $$f(x)=(1+x)^n=\sum_{k=0}^n\binom{n}{k}x^k$$ so $$f'(x)=\sum_{k=1}^{n}k\binom{n}{k}x^{k-1}$$ Hence $$f'(1)=\sum_{k=1}^{n}k\binom{n}{k}=\sum_{k=0}^{n}k\binom{n}{k}$$ On the other hand $$f'(x)=n(1+x)^{n-1}\Rightarrow f'(1)=n2^{n-1}$$ It follows that $$\sum_{k=0}^{n}k\binom{n}{k}=n2^{n-1}$$ Similarly we have $$\sum_{k=0}^n\binom{n}{k}=f(1)=2^n$$ Put all stuff together gives $$\sum_{k=0}^n(2k+1)\binom{n}{k}=2^n(n+1)$$
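Not part of the answer above, but the identity (and the two intermediate sums used in the derivation) are easy to sanity-check with `math.comb` (Python 3.8+):

```python
from math import comb

# Check sum_{k=0}^{n} (2k+1) C(n,k) == 2^n (n+1) for a range of n.
for n in range(0, 20):
    lhs = sum((2 * k + 1) * comb(n, k) for k in range(n + 1))
    assert lhs == 2 ** n * (n + 1)

# The two ingredients separately: sum k*C(n,k) = n*2^(n-1) and sum C(n,k) = 2^n.
n = 10
assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```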
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Mathematical induction for inequalities: $\frac1{n+1} + \frac1{n+2} + \cdots +\frac1{3n+1} > 1$ Prove by induction: $$\frac1{n+1} + \frac1{n+2} + \cdots +\frac1{3n+1} > 1$$ adding $1/(3m+4)$ as the next $m+1$ value proves pretty fruitless. Can I make some simplifications in the inequality that because the $m$ step is true by the inductive hypothesis, the 1 is already less than all those values?
More generally (one of my favorite phrases), let $s_k(n) =\sum\limits_{i=n+1}^{kn+1} \frac1{i} $. I will show that $s_k(n+1)>s_k(n)$ for $k \ge 3$. In particular, for $n \ge 1$, $s_3(n) \ge s_3(1) =\frac1{2}+\frac1{3}+\frac1{4} =\frac{6+4+3}{12} =\frac{13}{12} > 1 $. $\begin{array}\\ s_k(n+1)-s_k(n) &=\sum\limits_{i=n+2}^{kn+k+1} \frac1{i}-\sum\limits_{i=n+1}^{kn+1} \frac1{i}\\ &=\sum\limits_{i=n+2}^{kn+1} \frac1{i}+\sum\limits_{i=kn+2}^{kn+k+1} \frac1{i} -\left(\frac1{n+1}+\sum\limits_{i=n+2}^{kn+1} \frac1{i}\right)\\ &=\sum\limits_{i=kn+2}^{kn+k+1} \frac1{i}-\frac1{n+1}\\ &=\sum\limits_{i=2}^{k+1} \frac1{kn+i}-\frac1{n+1}\\ &=\frac1{kn+2}+\frac1{kn+k+1}+\sum\limits_{i=3}^{k} \frac1{kn+i}-\frac1{n+1}\\ \end{array} $ $\sum\limits_{i=3}^{k} \frac1{kn+i} \ge \sum\limits_{i=3}^{k} \frac1{kn+k} = \frac{k-2}{kn+k} $. If we can show that $\frac1{kn+2}+\frac1{kn+k+1} \ge \frac{2}{kn+k} $, then $s_k(n+1)-s_k(n) \ge \frac{2}{kn+k}+\frac{k-2}{kn+k}-\frac1{n+1} = \frac{k}{kn+k}-\frac1{n+1} = \frac{1}{n+1}-\frac1{n+1} =0 $. But $\begin{array}\\ \frac1{kn+2}+\frac1{kn+k+1}-\frac{2}{kn+k} &=\frac{kn+k+1+(kn+2)}{(kn+2)(kn+k+1)}-\frac{2}{kn+k}\\ &=\frac{2kn+k+3}{(kn+2)(kn+k+1)}-\frac{2}{kn+k}\\ &=\frac{(2kn+k+3)(kn+k)-2(kn+2)(kn+k+1)}{(kn+2)(kn+k+1)(kn+k)}\\ \end{array} $ Looking at the numerator, $\begin{array}\\ (2kn+k+3)(kn+k)-2(kn+2)(kn+k+1) &=2k^2n^2+kn(k+3+2k)+k(k+3)\\ &-2(k^2n^2+kn(k+3)+2(k+1))\\ &=2k^2n^2+kn(3k+3)+k(k+3)\\ &-2k^2n^2-2kn(k+3)-4(k+1)\\ &=kn(3k+3)+k(k+3)\\ &-kn(2k+6)-4(k+1)\\ &=kn(k-3)+k(k+3)-4(k+1)\\ &=kn(k-3)+k^2-k-4\\ &> 0 \quad\text{for $k \ge 3$} \end{array} $ and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Integral of $\frac{e^{-x^2}}{\sqrt{1-x^2}}$ I am stuck at an integral $$\int_0^{\frac{1}{3}}\frac{e^{-x^2}}{\sqrt{1-x^2}}dx$$ My attempt is to substitute $x=\sin t$; however, there may be no elementary primitive of $e^{-\sin^2 t}$. So does this integral have a closed-form value? If it does, how can we evaluate it? Thank you!
We have: $$ I = \int_{0}^{\arcsin\frac{1}{3}}\exp\left(-\sin^2\theta\right)\,d\theta \tag{1}$$ but since: $$ \exp(-\sin^2\theta) = \frac{1}{\sqrt{e}}\left(I_0\left(\frac{1}{2}\right)+2\sum_{n\geq 1}I_n\left(\frac{1}{2}\right)\cos(2n\theta)\right)\tag{2} $$ we have: $$ I = e^{-1/2}\left(\arcsin\frac{1}{3}\right) I_0\left(\frac{1}{2}\right)+e^{-1/2}\sum_{n\geq 1}\frac{1}{3n}\, I_n\left(\frac{1}{2}\right)U_{2n-1}\left(\sqrt{\frac{8}{9}}\right)\tag{3}$$ where $I_m$ is a modified Bessel function and $U_k$ is a Chebyshev polynomial of the second kind.
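A numerical sanity check of the expansion $(2)$ is straightforward in pure Python, since $I_n(x)$ has the rapidly converging power series $I_n(x)=\sum_{k\ge 0}(x/2)^{n+2k}/(k!\,(n+k)!)$. (This check is mine, not part of the original answer.)

```python
from math import exp, sin, cos, factorial

def bessel_i(n, x, terms=25):
    """Modified Bessel function I_n(x) from its power series."""
    return sum((x / 2) ** (n + 2 * k) / (factorial(k) * factorial(n + k))
               for k in range(terms))

# Check identity (2) pointwise: exp(-sin^2 t) equals the cosine series,
# since exp(-sin^2 t) = e^{-1/2} e^{cos(2t)/2}.
for t in [0.0, 0.3, 0.7, 1.2]:
    lhs = exp(-sin(t) ** 2)
    rhs = exp(-0.5) * (bessel_i(0, 0.5)
                       + 2 * sum(bessel_i(m, 0.5) * cos(2 * m * t)
                                 for m in range(1, 12)))
    assert abs(lhs - rhs) < 1e-12
```

Truncating at $m=12$ is already far below double precision here, because $I_m(1/2)$ decays like $(1/4)^m/m!$.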
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Evaluate $ \int \frac{e^x}{\left({1+\cos(x)}\right)} dx$ Background: I was in the process of solving some interesting integrals from this site, only to find out I needed a lot more practice before becoming familiar with special functions. So while doing some problems, I encountered some difficulty with one particular integral; I happened to incorrectly copy it onto a notebook. But I'm curious to know as to how exactly I can evaluate this particular integral. Essentially, I need help in evaluating the following integral :- $$ \int \frac{e^x}{\left({1+\cos(x)}\right)} dx$$ Question: How exactly can I evaluate this integral? Both solutions as well as hints would be greatly appreciated. Note: Original problem had $\cosh(x)$ instead of $\cos(x)$.
The integrand does not possess an elementary antiderivative. This can be shown using either Liouville's theorem or the Risch algorithm. However, doing so requires advanced knowledge of abstract algebra. Alternately, expand $~\dfrac1{1+\cos x}~$ into its binomial series, then switch the order of summation and integration to obtain an infinite series, which you might rewrite in terms of hypergeometric functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that: $\lim\limits_{x \to +\infty}\frac{(\log x)^b}{x^a} = 0$ and $\lim\limits_{x \to +\infty}\frac{x^b}{e^{ax}} = 0$ If $a > 0$ and $b > 0$, show that $$\lim_{x \to +\infty}\frac{(\log x)^b}{x^a} = 0 \tag{1}$$ and $$\lim_{x \to +\infty}\frac{x^b}{e^{ax}} = 0 \tag{2}$$ Attempts: $(1)$ Given that $$\log x = \frac{\ln x}{\ln 10}$$ Then $$ \begin{align*} \lim_{x \to +\infty}\frac{(\log x)^b}{x^a} &= \lim_{x \to +\infty}\left(\frac{\ln x}{\ln 10}\right)^b\cdot \lim_{x \to +\infty}\frac{1}{x^a}\\ &= \lim_{x \to +\infty}\left(\frac{\ln x}{\ln 10}\right)^b\cdot 0\\ &= 0 \end{align*} $$ I'm not sure whether this is right. There is a theorem in my textbook which says: If $f(x)$ is an infinitesimal function as $x \to a$, and $g(x)$ is a bounded function, then $\lim_{x \to a}f(x)\cdot g(x)$ is an infinitesimal (i.e. $= 0$). The rightmost limit is indeed an infinitesimal, but it seems that the leftmost one is unbounded. Does the theorem not hold here? $(2)$ $$\lim_{x \to +\infty}\frac{x^b}{e^{ax}} = 0$$ Here I'm puzzled. So far I've done just this: $$\lim_{x \to +\infty}\frac{x^b}{e^{ax}} = \lim_{x \to +\infty}\frac{e^{b \ln x}}{e^{ax}} = ...$$
You must know something more, if you cannot use De l'Hospital. For instance $$ \lim_{x \to +\infty} \frac{x^b}{e^x}=0 \quad\hbox{for every $b>0$} $$ or $$ \lim_{x \to +\infty} \frac{x}{e^{ax}}=0 \quad\hbox{for every $a>0$}. $$ Indeed, $$ \frac{x^b}{e^{ax}}=\left( \frac{x}{e^{\frac{a}{b}x}} \right)^b $$ or $$ \frac{x^b}{e^{ax}}=\left( \frac{x^{\frac{b}{a}}}{e^{x}} \right)^a $$ For sure you cannot use the theorem in your book, since $\log x$ does not remain bounded as $x \to +\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1164895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find a linear operator s.t. $A^{2}=A^{3}$ but $A^{2}\neq A$? From Halmos's Finite-Dimensional Vector Spaces, question 6a section 43, the section after projections. Find a linear transformation A such that $A^{2}(1-A)=0$ but A is not idempotent (I remember A is idempotent iff it is a projection). I had no luck.
(Too long for a comment.) You have no luck because you depend on luck. It really doesn't take much to solve the problem. If $A\ne A^2=A^3$, there must exist a vector $x$ so that when $y=Ax$ and $z=A^2x$, we have $y\ne z$ but $Az=z$. In other words, by applying $A$ repeatedly on $x$, we get the following chain of iterates: $$ x \mapsto y \mapsto \underbrace{z}_{\ne y} \mapsto z. $$ Now it is utterly easy to construct an example of $A$ with this chain of iterates. For instances: * *if we put $x=\pmatrix{1\\ 0},\ y=\pmatrix{0\\ 1}$ and $z=0$, we get ahulpke's answer (here $x$ is his "vector 2" and $y$ is his "vector 1"); *if we put $x=\pmatrix{1\\ 0\\ 0},\ y=\pmatrix{0\\ 1\\ 0}$ and $z=\pmatrix{0\\ 0\\ 1}$, we get Omnomnomnom's answer.
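A quick check of the second construction above (a sketch in plain Python; the columns of the matrix are the images of the basis vectors under $x \mapsto y \mapsto z \mapsto z$):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A sends e1 -> e2 -> e3 -> e3 (columns are the images of the basis vectors).
A = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 1]]

A2 = matmul(A, A)
A3 = matmul(A2, A)
assert A2 == A3          # A^2 (1 - A) = 0
assert A2 != A           # ...but A is not idempotent
```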
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Find : $ dy/dx, y=\sqrt{4x^2 - 7x - 2}$ The problem says Find $dy/dx, y=\sqrt{4x^2 - 7x - 2}$ So far I changed it to $(4x^2 - 7x - 2)^{1/2}$ I don't know where to go from there.
hint: Use the chain rule: $\left(\sqrt{u(x)}\right)' = \dfrac{u'(x)}{2\sqrt{u(x)}}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
If we have a square matrix thats invertible, do its row and column space coincide? If we have a square matrix thats invertible, do its row and column space coincide? Regarding an nxn invertible matrix: -The row space of the matrix is R^n -The column space of the matrix is R^n -The rank of the matrix is n Is this a sufficient way of proving the question, or am I missing something?
$\newcommand{\Reals}{\mathbf{R}}$The row space and column space of an $n \times n$ matrix are not generally equal, e.g., $$ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\quad \text{row space} = \{0\} \times \Reals,\quad \text{column space} = \Reals \times \{0\}. $$ The row space and column space of an $n \times n$ matrix do always have the same dimension, however, and if this dimension is $n$, then each space is equal to $\Reals^{n}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can I demonstrate that $x-x^9$ is divisible by 30? How can I demonstrate that $x-x^9$ is divisible by $30$ whenever $x$ is an integer? I know that $$x-x^9=x(1-x^8)=x(1-x^4)(1+x^4)=x(1-x^2)(1+x^2)(1+x^4)$$ but I don't know how to demonstrate that this number is divisible by $30$.
Let's factor $x^9-x$ like you have done: $$ x^9-x=(x-1)x(x+1)(x^2+1)(x^4+1).\tag{$*$} $$ Let's look at the RHS. The product of the first 2 terms is divisible by $2$ because it consists of 2 consecutive integers. Similarly, the product of the first 3 terms is divisible by $3$. Now, if you had $$ (x-2)(x-1)x(x+1)(x+2) $$ then of course that would be divisible by $5$ as well. But note this $$ (x-1)x(x+1)(x^2+1)-(x-2)(x-1)x(x+1)(x+2)=5x(x^2-1)\equiv 0\pmod{5}. $$ So the product of the first 4 terms of the RHS of ($*$) is also divisible by $5$. Now you're done.
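Since $x-x^9 \bmod 30$ depends only on $x \bmod 30$, the claim can also be verified by brute force over 30 residues; a quick Python check (an illustration, not a replacement for the proof):

```python
# x - x^9 mod 30 only depends on x mod 30, so checking 30 residues suffices.
assert all((x - x ** 9) % 30 == 0 for x in range(30))

# Spot-check some larger (and negative) integers directly.
for x in [-17, 123, 10 ** 6 + 7]:
    assert (x - x ** 9) % 30 == 0
```

Python's `%` returns a nonnegative remainder even for negative operands, so the same test covers negative integers.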
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Factoring Polynomials in Fields I always have problems factorizing polynomials that have no linear factors left. For example $x^5-1$ in $\mathbb{F}_{19}$. It's easy to find the root $1$ and split it off: $x^5-1 = (x-1)(x^4+x^3+x^2+x+1)$. I think the last part must split into two irreducible polynomials of degree 2: $(x^2+ax+b)(x^2+cx+d)$. I expanded it and compared the coefficients to find values for $a,b,c,d$, but it wasn't solvable. Is this approach correct, or are there any other procedures or tricks to solve such a problem? Thank you.
There is a trick to see that the polynomial is reducible. The multiplicative group $\mathbb{F}_{19^2}^*$ of the field of $19^2$ elements has $19^2 - 1 \equiv 0 \mod 5$ elements, so it contains a fifth root of unity. So the minimal polynomial of a fifth root of unity over $\mathbb{F}_{19}$, which divides $x^5 - 1$, has degree $2$. So your polynomial splits into quadratics. I don't know a good way to find the factorization though, besides brute force. It turns out to be $x^5 - 1 = (x - 1)(x^2 + 5x + 1)(x^2 - 4x + 1)$.
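The brute force mentioned above is tiny here: comparing coefficients of $(x^2+ax+b)(x^2+cx+d)=x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$ with $x^4+x^3+x^2+x+1$ over $\mathbb{F}_{19}$ is just a four-deep loop. A Python sketch:

```python
p = 19

# Search for x^4+x^3+x^2+x+1 = (x^2+ax+b)(x^2+cx+d) over F_19 by brute force.
# Expanding gives: a+c = 1, b+d+ac = 1, ad+bc = 1, bd = 1 (all mod 19).
found = []
for a in range(p):
    for b in range(p):
        for c in range(p):
            for d in range(p):
                if ((a + c) % p == 1 and (b + d + a * c) % p == 1
                        and (a * d + b * c) % p == 1 and (b * d) % p == 1):
                    found.append((a, b, c, d))

# (a,b,c,d) = (5,1,15,1) is among the solutions, i.e. the factors
# x^2+5x+1 and x^2-4x+1 (since 15 = -4 mod 19).
assert (5, 1, 15, 1) in found
```

The list `found` also contains the same factorization with the two factors swapped.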
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that for any two Cauchy sequences of rational numbers, their difference is equivalent to a sequence of nonnegative numbers Let $a_n$ and $b_n$ be Cauchy sequences of rational numbers; either $b_n-a_n$ or $a_n-b_n$ is a sequence of nonnegative numbers. I don't really understand how this is true. I think first we have to assume $a_n\neq b_n$. If we assume that they are not the same sequence, how does this work?
The statement that either $b_n-a_n$ or $a_n-b_n$ is a sequence of nonnegative numbers is not true in this form: you can modify the first few elements of a sequence anyhow without affecting its being Cauchy. What is true is either * *one of the sequences $b_n-a_n$ and $a_n-b_n$ is equivalent to (= has the same limit as) a Cauchy sequence of nonnegative (rational) numbers, as Arthur answered, or *there is an index $N$ such that one of $a_n-b_n$ and $b_n-a_n$ is nonnegative for all $n>N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that $\lim_{n \to \infty} \int_{0}^{3} \sqrt{\sin \frac{x}{n} + x + 1}\, dx$ exists and evaluate it Prove that the following limits exist and evaluate them. c) $\lim_{n \to \infty} \int_{0}^{3} \sqrt{\sin \frac{x}{n} + x + 1}\ \text dx$ I need to use the following theorem from analysis: Suppose $f_n \to f$ uniformly on a closed interval $[a,b]$. If each $f_n$ is integrable on $[a,b]$, then so is $f$ and $\lim_{n \to \infty} \int_{a}^{b}f_n(x) \text dx = \int_{a}^{b}[\lim_{n \to \infty}f_n(x) ]\text dx $. My attempt: Let $f_n = \sqrt{\sin \frac{x}{n} + x + 1}$. Recall $|\sin\frac{x}{n}| \leq 1$ for all $x$ in $[0,3]$. First we try to find $\lim_{n\to \infty} f_n = f(x)$, so that for every $\epsilon > 0$ there is an $N$ such that $n \geq N$ implies $|f_n(x) - f(x) | < \epsilon$. However, I cannot see what the limit of $f_n = \sqrt{\sin \frac{x}{n} + x + 1}$ is because of the sine. Can someone please help me? I would really appreciate it.
I am not sure that this could be the answer you expect; so, please forgive me if I am off-topic. When $n$ is large, we can approximate $\sin(x)$ by its Taylor expansion built at $x=0$. The problem is that only the first term of the expansion can be used (otherwise we would bump into nasty elliptic integrals). $$I_n=\int_{0}^{3} \sqrt{\sin (\frac{x}{n}) + x + 1}\, dx\approx \int_{0}^{3} \sqrt{\frac{x}{n} + x + 1}\, dx=\frac{2 \left(\sqrt{\frac{3}{n}+4}\, (4 n+3)-n\right)}{3 (n+1)}$$ the limit of which is $\frac{14}3$, which is, as expected, the value of $\int_{0}^{3} \sqrt{ x + 1}\, dx$. Comparing with the results of numerical integration (given in parentheses), the match is quite good, as shown below for a few values of $n$. $$I_{5}=4.92550\cdots (4.91865)$$ $$I_{10}=4.79798\cdots (4.79709)$$ $$I_{15}=4.75465\cdots (4.75438)$$ $$I_{20}=4.73282\cdots (4.73271)$$ A Taylor expansion of the approximation gives $$I_n \approx \frac{14}{3}+\frac{4}{3 n}-\frac{5}{24 n^2}+O\left(\left(\frac{1}{n}\right)^3\right)$$ showing how the limit is reached. Curve fitting the results of the numerical integration for the range $5\leq n\leq 100$ leads to $$I_n=\frac{14}{3}+\frac{1.34057}{n}-\frac{0.389332}{n^2}$$ which is extremely close.
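The numerical values quoted in parentheses can be reproduced with a short composite-Simpson integrator (pure Python; the step count and tolerances below are my own choices):

```python
from math import sin, sqrt

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule with 2m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i + 1) * h) for i in range(m))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m))
    return s * h / 3

def I(n):
    return simpson(lambda x: sqrt(sin(x / n) + x + 1), 0, 3)

# Reproduce the quoted value for n = 10 and the limit 14/3.
assert abs(I(10) - 4.79709) < 1e-4
assert abs(simpson(lambda x: sqrt(x + 1), 0, 3) - 14 / 3) < 1e-9
# The integrals decrease toward 14/3, consistent with I_n ~ 14/3 + 4/(3n).
assert I(5) > I(10) > I(20) > 14 / 3
assert abs(I(1000) - 14 / 3) < 0.002
```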
{ "language": "en", "url": "https://math.stackexchange.com/questions/1165941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $c_{n} > 0$ then $\sum_{0}^{n}c_{k}x^{k} > 0$ for some $x \in \mathbb{R}$? Let $n \geq 1$ be an integer and let $c_{0}, \dots, c_{n} \in \mathbb{R}$. If $c_{n} > 0,$ is there necessarily an $x \in \mathbb{R}$ such that $$\sum_{0}^{n}c_{k}x^{k} > 0?$$ I just realized that for a while I had implicitly taken this for granted. However, when I would like to give a rigorous proof then I find it is not that obvious.
Yes. Suppose $x>0$ and consider $f(x) = {\sum_{k=0}^n c_k x^k \over x^n} = \sum_{k=0}^n c_k {1 \over x^{n-k}}$. Then $\lim_{x \to \infty} f(x) = c_n>0$, hence there is some $M$ such that if $x \ge M$, then $f(x) >0$. Hence ${\sum_{k=0}^n c_k x^k } >0$ for $x \ge M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What kind of mathematics is required by game theory? I want to learn about game theory, but I do not know if I have the necessary background to do so. What kind of mathematics does game theory involve the most? What are some of the things that an undergrad in mathematics might not have seen which arises in game theory?
Perhaps the main things that you will come across, and with which you will not be familiar as an undergraduate student, are fixed point theorems (functional analysis) and linear and/or dynamic programming. Generally, for an undergraduate course in game theory you will mostly need to be familiar with the following: * *solving quadratic equations, maximizing/minimizing functions (mostly polynomial functions), *certainly some combinatorics (mainly in cooperative game theory) and some basics in probability and - depending on the professor - *the basics of linear programming. *additionally, basic concepts from linear algebra (calculating the determinant of a matrix etc.) can also be required. In sum, if you are in "good shape" as far as the basics of linear algebra, probability and calculus are concerned, then you will have no problem. But do not think that it stops here in game theory...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Series Convergence $\sum_{n=1}^{\infty} \frac{1}{n} \left(\frac{2n+2}{2n+4}\right)^n$ I have to show if this series converges or diverges. I tried using asymptotics, but it's not formally correct as they should work only when the arguments are extremely small. Any ideas? $$ \sum_{n=1}^{\infty} \frac{1}{n} \left(\frac{2n+2}{2n+4}\right)^n $$
Hint: Put $n=k-2$ and simplify the fraction inside the parentheses.
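To see numerically where the hint leads (not part of the hint itself): the $n$th term times $n$ tends to $e^{-1}$, so the series compares with the harmonic series and diverges.

```python
from math import exp

def term(n):
    return (1 / n) * ((2 * n + 2) / (2 * n + 4)) ** n

# n * term(n) -> 1/e, so the terms behave like (1/e)(1/n): limit comparison
# with the divergent harmonic series shows the series diverges.
for n in [10 ** 3, 10 ** 4, 10 ** 5]:
    assert abs(n * term(n) - exp(-1)) < 10 / n
```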
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Find basis vectors and dimension Find basis vectors and the dimension of the space described by the following system of equations: $$2x_1-x_2+x_3-x_4=0 \\ x_1+2x_2+x_3+2x_4=0 \\ 3x_1+x_2+2x_3+x_4=0$$ I did the RREF of the matrix and as a result I get: $$\begin{pmatrix} 1 & 2 & 1 & 2 \\ 0 & -5 & -1 & -5 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$ Thus I think that the independent variables will be $x_1,x_2$ and the basis vectors are the solutions of this system when $x_1=0,x_2=1 $ and $x_1=1,x_2=0$, but the answer to the question says that the independent variables are $x_2,x_4$. Am I doing something wrong?
My definition of RREF is different from yours it seems, and I calculated RREF form of augmented matrix to be: $$ \left( \begin{array}{cccc|c} 1 & 0 & 3/5 & 0 & 0\\ 0 & 1 & 1/5 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 \end{array} \right) $$ From this you set $x_3=t, x_4=s$ where $t,s$ are (scalar) parameters, and you can describe your solution space with these two independent variables. Then $x_1=-\frac{3}{5} t$, $x_2= -\frac{1}{5}t-s$. So $$ (x_1,x_2,x_3,x_4)=\left(-\frac{3}{5} t,\; -\frac{1}{5}t-s,\; t,\; s\right)=\left(-\frac{3}{5},\; -\frac{1}{5},\; 1,\; 0\right)t+\left(0,\; -1,\; 0,\; 1\right)s $$ and you easily see that dimension is $2$.
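As a check (mine, not the answerer's), the two basis vectors can be substituted back into the original three equations using exact rational arithmetic:

```python
from fractions import Fraction as F

# Coefficient rows of the homogeneous system.
rows = [[2, -1, 1, -1],
        [1, 2, 1, 2],
        [3, 1, 2, 1]]

# Basis vectors read off from x3 = t, x4 = s.
v1 = [F(-3, 5), F(-1, 5), F(1), F(0)]   # t = 1, s = 0
v2 = [F(0), F(-1), F(0), F(1)]          # t = 0, s = 1

for v in (v1, v2):
    for row in rows:
        assert sum(c * x for c, x in zip(row, v)) == 0
```

Since `v1` and `v2` are clearly linearly independent (look at the last two coordinates), the solution space indeed has dimension $2$.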
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Complex solutions of equations in Maple Is it possible to compute all complex solutions of the equation $$ e^z = 1 $$ in Maple? That is, I need Maple print all solutions $z=2\pi k I$. What procedure do I have to use? Thank you very much in advance!
It is done by `solve(exp(z)=1,z,AllSolutions=true);` The output will be `2*I*Pi*_Z1~` The `_Z1` represents some constant, and the tilde implies that there is some assumption on the constant, which in this case means that it is an integer. `getassumptions(_Z1);` tells you that it must be an integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Barycentric coordinates in a triangle - proof I want to prove that the barycentric coordinates of a point $P$ inside the triangle with vertices in $(1,0,0), (0,1,0), (0,0,1)$ are distances from $P$ to the sides of the triangle. Let's denote the triangle by $ABC, \ A = (1,0,0), B=(0,1,0), C= (0,0,1)$. We consider triangles $ABP, \ BCP, \ CAP$. The barycentric coordinates of $P$ will then be $(h_1, h_2, h_3)$ where $h_1$ is the height of $ABP$, $h_2 \rightarrow BCP$, $h_3 \rightarrow CAP$ I know that $h_1 = \frac{S_{ABP}}{S_{ABC}}$ and similarly for $h_2, \ h_3$ My problem is that I don't know how to prove that if $P= (p_1, p_2, p_3)$ then $(h_1 + h_2 + h_3)P = h_1 (1,0,0) + h_2 (0,1,0) + h_3 (0,0,1)$ Could you tell me what to do about it? Thank you!
Let $A_i$ $\>(1\leq i\leq3)$ be the vertices of your triangle $\triangle$, and let $P=(p_1,p_2,p_3)$ be an arbitrary point of $\triangle$. Then the cartesian coordinates $p_i$ of $P$ satisfy $p_1+p_2+p_3=1$, and at the same time we can write $$P=p_1A_1+p_2A_2+p_3A_3\ ,$$ which says that the $p_i$ can be viewed as barycentric coordinates of $P$ with respect to $\triangle$. We now draw the normal $n_3$ from $P$ to the side $A_1A_2$ of $\triangle$. This normal will be orthogonal to $\overrightarrow{A_1A_2}=(-1,1,0)$ and to $s:=(1,1,1)$; the latter because $n_3$ lies in the plane of $\triangle$. It follows that $\overrightarrow{A_1A_2}\times s=(1,1,-2)$ has the proper direction. We now have to intersect $$n_3:\quad t\mapsto (p_1,p_2,p_3) +t(1,1,-2)$$ with the plane $x_3=0$ and obtain $t={p_3\over2}$. Therefore the distance from $P$ to $A_1A_2$ is given by $$h_3={p_3\over2}\sqrt{1+1+4}=\sqrt{3\over2}\>p_3\ .$$ The conclusion is that the barycentric coordinates of $P$ are not equal to the three heights in question, but only proportional to these.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Solve $\sin(z) = 2$ There are a number of solutions to this problem online that use identities I have not been taught. Here is where I am in relation to my own coursework: $ \sin(z) = 2 $ $ \exp(iz) - \exp(-iz) = 4i $ $ \exp(2iz) - 1 = 4i \cdot \exp (iz) $ Then, setting $w = \exp(iz),$ I get: $ w^2 - 4iw -1 = 0$ I can then use the quadratic equation to find: $ w = i(2 \pm \sqrt 3 )$ So therefore, $\exp(iz) = w = i(2 \pm \sqrt 3 ) $ implies $ e^{-y}\cos(x) = 0 $, thus $ x = \frac{\pi}{2} $ $ ie^{-y}\sin(x) = i(2 \pm \sqrt 3 ) $ so $ y = -\ln( 2 \pm \sqrt 3 ) $ So I have come up with $ z = \frac{\pi}{2} - i \ln( 2 \pm \sqrt 3 )$ But the back of the book has $ z = \frac{\pi}{2} \pm i \ln( 2 + \sqrt 3 ) +2n\pi$ Now, the $+2n\pi$ I understand because sin is periodic, but how did the plus/minus come out of the natural log? There is no identity for $\ln(a+b)$ that I am aware of. I believe I screwed up something in the calculations, but for the life of me cannot figure out what. If someone could point me in the right direction, I would appreciate it.
Setting $w=e^{iz},$ we need to solve the equation $w^2-4iw-1=0.$ The solutions to this quadratic equation are $w=i(2+\sqrt 3)$ and $w=i(2-\sqrt 3).$ Let's deal with the first solution. We need to find $z=x+iy$ such that $e^{iz}= e^{ix}e^{-y}= i(2+\sqrt 3).$ This implies $\cos x =0.$ As you point out, that has solution set $\pi/2 + n\pi, n\in \mathbb Z.$ But there is another implication: In order to get $2+\sqrt 3$ as the imaginary part, we have to delete all $\pi/2 + n\pi$ for $n$ odd, as they lead to negative imaginary values. This is why we end up with $\pi/2 + 2n\pi.$ At this point, let's say goodbye to the original post for inspiration, as things are a little cloudy there. The easiest way to do this is write $e^{ix}e^{-y}=i(2+\sqrt 3)= e^{i\pi/2}(2+\sqrt 3).$ This tells us that $x= \pi/2 +2n\pi,$ and $-y= \ln(2+\sqrt 3).$ Solving for $z$ in the case $w=i(2-\sqrt 3)$ is the same. So in all, the solutions to the original problem are $z = (\pi/2 +2n\pi) -i\ln(2\pm\sqrt 3),n\in \mathbb Z.$
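Both families of solutions are easy to verify with `cmath` (a numerical spot check, not a proof):

```python
import cmath
from math import pi, log, sqrt

# z = pi/2 + 2*n*pi +/- i*ln(2 + sqrt(3)) should satisfy sin(z) = 2,
# since sin(pi/2 + iy) = cosh(y) and cosh(ln(2+sqrt(3))) = 2.
y = log(2 + sqrt(3))
for n in range(-3, 4):
    for sign in (1, -1):
        z = complex(pi / 2 + 2 * n * pi, sign * y)
        assert abs(cmath.sin(z) - 2) < 1e-12

# The deleted candidates x = pi/2 + (2n+1)*pi give sin(z) = -2 instead.
z_bad = complex(pi / 2 + pi, y)
assert abs(cmath.sin(z_bad) + 2) < 1e-12
```

The last assertion illustrates why the odd multiples of $\pi$ have to be discarded, as explained above.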
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Prove that a number that consists of $3^n$ ones is divisible by $3^n$ I can't even make sense of this question. Isn't this just like asking, "Prove that 3 is divisible by 3." Isn't any number divisible by itself? Is this all there is to this question—it seems like there must be more to it.
We can use induction. I prefer to show that the number that "consists of" $3^n$ $9$'s is divisible by $9\cdot 3^n$. The number whose decimal representation consists of $3^n$ consecutive $9$'s is $10^{3^n}-1$. For the induction step, note that $10^{3^{k+1}}-1=x^3-1$ where $x=10^{3^k}$. This factors as $(x-1)(x^2+x+1)$. By the induction assumption, $x-1$ is divisible by $9\cdot 3^k$. Also, $3$ divides $x^2+x+1$, so $9\cdot 3^{k+1}$ divides $x^3-1$. Remark: Or else we could show that the number whose decimal representation consists of $3^{k+1}$ consecutive $1$'s is the number with $3^k$ consecutive $1$'s, times a number of the shape $1000\cdots 01000\cdots 01$. The second number is divisible by $3$ by the usual divisibility test.
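Both formulations are easy to test by brute force for small $n$ (Python integers are arbitrary precision, so even the 729-digit repunit is no problem):

```python
# The repunit with 3^n ones, as an integer, is divisible by 3^n.
for n in range(0, 7):
    repunit = int("1" * 3 ** n)
    assert repunit % 3 ** n == 0

# Equivalently, 10^(3^n) - 1 (that many nines) is divisible by 9 * 3^n.
for n in range(0, 7):
    assert (10 ** (3 ** n) - 1) % (9 * 3 ** n) == 0
```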
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Show $17$ does not divide $5n^2 + 15$ for any integer $n$ Claim: $17$ does not divide $5n^2 + 15$ for any integer $n$. Is there a way to do this aside from exhaustively considering $n \equiv 0$, $n \equiv 1 , \ldots, n \equiv 16 \pmod{17}$ and showing $5n^2 + 15$ leaves a remainder of anything but $0$. It's easy but tedious. Since if $n \equiv 0$ then $5n^2 + 15 \equiv 15$ so that 17 does not divide $5n^2 + 15$. I did four more cases successfully, got bored, and skipped to the last case which also worked. Thanks in advance for any responses.
${\rm mod}\ 17\!:\ 5(n^2\!+3)\equiv0\,\Rightarrow\,n^2\equiv-3\,\Rightarrow\,n^4\equiv 9\,\Rightarrow\, n^8\equiv -4\,\Rightarrow\,n^{16}\equiv -1$ contra little Fermat
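The exhaustive check the question wanted to avoid is a one-liner anyway, and it also confirms the reason behind the argument above: $-3 \equiv 14$ is not a quadratic residue mod $17$. A Python sketch:

```python
# 5n^2 + 15 mod 17 depends only on n mod 17, so 17 cases settle it.
residues = {(5 * n * n + 15) % 17 for n in range(17)}
assert 0 not in residues

# Equivalent statement: -3 = 14 (mod 17) is not a quadratic residue mod 17.
assert all(pow(n, 2, 17) != 14 for n in range(17))
```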
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Prove that the equation $x^{2}-x\sin(x)-\cos(x)=0$ has only one root in the closed interval $(0,\infty)$. Here's the graph (http://www.wolframalpha.com/input/?i=%28x%5E2%29-xsenx-cosx%3D0). The part I'm having trouble with is proving that the root is unique. I can use the intermediate value theorem to find the interval where the root is, but from that I'm lost. I know you can look at the first or second derivative, but I don't know how to use that when the sign of sin and cos varies. Thanks.
$f(x)=x^2-x \sin x -\cos x$ and $f'(x)=2x-\sin x - x\cos x+\sin x=2x-x \cos x=x(2-\cos x)$. Clearly $2-\cos x>0$, so for $x>0$ we have $f'(x)> 0$; therefore $f$ is strictly increasing on $(0,\infty)$. Now since $f(0)=-1<0$ and $f(x)\to+\infty$ as $x\to\infty$, $f$ has exactly one root for $x>0$. (Graphs of $f(x)$ and of $f'(x)$ illustrate this.)
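Since $f$ is strictly increasing on $(0,\infty)$, bisection on any bracketing interval finds the unique positive root; a small Python sketch (the interval and iteration count are my own choices):

```python
from math import sin, cos

def f(x):
    return x * x - x * sin(x) - cos(x)

# f(0) = -1 < 0 and f(2) > 0, and f is strictly increasing on (0, oo),
# so bisection converges to the one positive root.
lo, hi = 0.0, 2.0
assert f(lo) < 0 < f(hi)
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
assert abs(f(root)) < 1e-9
```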
{ "language": "en", "url": "https://math.stackexchange.com/questions/1166972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How is N defined? I understand that $\mathbb{Z}$ and $\mathbb{Q}$ are defined as ... $$\mathbb{Z} = \mathbb{N} \cup \{ -n \mid n \in \mathbb{N} \}$$ $$\mathbb{Q} = \left\{ \frac{a}{b} \mid a,b \in \mathbb{Z} \right\}$$ ... but how is $\mathbb{N}$ defined? (and how is the order on these sets defined?)
You can make a construction of natural numbers based on set theory by defining $0:=\{\,\}=\varnothing$ and the successor of $n$ $($denoted by $S(n))$ as $S(n)=n\cup\{n\}$. And so for instance: * *$1:=S(0)=\varnothing\cup\{\varnothing\}=\{\varnothing\}$. *$2:=S(1)=\{\varnothing\}\cup\{\{\varnothing\}\}=\{\varnothing,\{\varnothing\}\}$. Now, as concerns the ordering: Let $a,b\in\mathbf{N}$, then $a\gt b$ if and only if $b\in a$.
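This construction can be modeled directly with Python frozensets (a sketch of the definitions above, nothing canonical):

```python
def succ(n):
    """S(n) = n union {n}, with sets modeled as frozensets."""
    return frozenset(n) | frozenset([n])

zero = frozenset()            # 0 := {}
one = succ(zero)              # 1 = {0} = {emptyset}
two = succ(one)               # 2 = {0, 1} = {emptyset, {emptyset}}
three = succ(two)

assert one == frozenset([zero])
assert two == frozenset([zero, one])

# The ordering: a > b if and only if b is an element of a.
assert zero in three and one in three and two in three
assert two not in one
```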
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the ring of integers initial in Ring? In Algebra Chapter 0 Aluffi states that the ring $\Bbb{Z}$ of integers with usual addition and multiplication is initial in the category Ring. That is for a ring $R$ with identity $1_{R}$ there is a unique ring homomorphism $\phi:\Bbb{Z}\rightarrow{R}$ defined by $\phi(n)={n\cdot 1_{R}}$ $(\forall{n\in\Bbb{Z}})$, which makes sense for rings such as $\Bbb{Q},\Bbb{R},\Bbb{C}$ which have $\Bbb{Z}$ as a subring, but I fail to see how $\phi$ works when the codomain is a ring which doesn't contain $\Bbb{Z}$. If someone could provide examples of ring homomorphisms from $\Bbb{Z}$ to rings other than the rings mentioned above I would appreciate it.
Any such morphism $f$ satisfies $f(n)=f(\sum_{i=1}^n 1) = \sum_{i=1}^n f(1) = \sum_{i=1}^n 1_R = n 1_R$ for $n>0$, and the cases $n=0$ and $n<0$ follow since a ring morphism preserves $0$ and additive inverses. This gives uniqueness. Existence is trivial: just define $f$ by the previous formula. So you have the definition for any ring $R$ with unit. ;-) Note that the image of $\mathbf{Z}$ always lies in $R$'s center. Remark. $n 1_R$ for $n\in\mathbf{Z}$ simply means $0$ if $n=0$, $\sum_{i=1}^n 1_R$ if $n>0$, and the opposite if $n<0$. I spell this out because I think this is the only thing bothering you.
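To see this in a ring that does not contain $\Bbb Z$, here is a small sketch of mine (the modulus $12$ is an arbitrary choice) of $n\mapsto n\cdot 1_R$ in $\Bbb Z/12\Bbb Z$, built purely by repeated addition of the unit:

```python
def phi(n, modulus=12):
    """n · 1_R in Z/12Z, computed as repeated addition of the unit."""
    one_r = 1 % modulus
    total = 0
    step = one_r if n >= 0 else -one_r
    for _ in range(abs(n)):
        total = (total + step) % modulus
    return total

assert phi(0) == 0 and phi(1) == 1
assert phi(-3) == (-3) % 12                    # agrees with reduction mod 12
assert (phi(7) + phi(5)) % 12 == phi(7 + 5)    # additive
assert (phi(7) * phi(5)) % 12 == phi(7 * 5)    # multiplicative
```

Although $\Bbb Z/12\Bbb Z$ contains no copy of $\Bbb Z$, the map is still forced by the formula; it turns out to be reduction mod $12$.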
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Is $\sqrt{x^2} = x$? Does the $\sqrt{x^2}$ always equal $x$? I am trying to prove that $i^2 = -1$, but to do that I need to know that $\sqrt{(-1)^2} = -1$. If that is true then all real numbers are imaginary, because an imaginary number is any number that can be written in terms of $i$. For example, 2 can be written as $i^2 + 3$. Does this work or did I make an error?
* *It is not true that $\sqrt{x^2} = x$. As a very simple example, with $x=-2$, we obtain $$ \sqrt{(-2)^2} = \sqrt{4} = 2 \ne -2. $$ In general, if $x \in \mathbb{R}$, then $\sqrt{x^2} = |x|$. Things get more complicated when you start working with complex numbers, but I think that a discussion of "branches of the square root function" is quite a bit beyond the scope of this question. *There is a serious problem of definitions in the question. The question asserts "...all real numbers are imaginary, because an imaginary number is any number that can be written in terms of $i$." However, this is not the definition of an imaginary number. An imaginary number is a number $z$ such that there is some real number $y$ such that $z = iy$, where $i$ is the imaginary unit. A number such as $i^2 + 3$ is not an imaginary number, since there is no real number $y$ such that $2 = i^2 + 3 = iy$. On the other hand, it is reasonable to say that every real number is a complex number. A complex number is a number $z$ such that there are $x,y\in\mathbb{R}$ such that $z = x + iy$. In the case of the example given, we have $$ i^2 + 3 = (-1) + 3 = 2 = 2 + i0. $$
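A one-line numeric check of the $\sqrt{x^2}=|x|$ identity (trivial, but it makes the sign issue concrete):

```python
import math

for x in [-2.0, -0.5, 0.0, 3.5]:
    assert math.sqrt(x * x) == abs(x)   # never equals x itself when x < 0
assert math.sqrt((-2.0) ** 2) == 2.0 != -2.0
```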
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Inequality involving exponential of square roots How can I show that: $$ 2e^\sqrt{3} \leq 3e^\sqrt{2} $$ ? (that's all I have) Thank you so much!
Let $f(x)=\frac{e^{x}}{x^2}$. Then $$f'(x)=e^x\left(\frac{x-2}{x^3}\right)$$ So $f'(x)<0$ for $0<x<2$. Then $f(\sqrt{3})<f(\sqrt{2})$ and your result follows.
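A numerical cross-check of this monotonicity argument (mine, not the answerer's):

```python
import math

def f(x):
    return math.exp(x) / (x * x)

# f'(x) = e^x (x - 2) / x^3 is negative throughout (0, 2)
assert all(math.exp(x) * (x - 2) / x ** 3 < 0 for x in [0.5, 1.0, 1.41, 1.73, 1.9])

# sqrt(2) < sqrt(3) < 2, so f decreasing gives f(sqrt(3)) < f(sqrt(2)),
# which rearranges to 2 e^sqrt(3) < 3 e^sqrt(2)
assert f(math.sqrt(3)) < f(math.sqrt(2))
assert 2 * math.exp(math.sqrt(3)) < 3 * math.exp(math.sqrt(2))  # 11.30... < 12.33...
```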
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Have I obtained the proper solution to this PDE? I'm a little stuck on this. Consider $ u_t -(1+t^2)u_x = \phi(x,t) \quad u(x,0)=u_0(x)$ Via the method of characteristics, the total derivative of $u(x,t)$ is $$\frac{du}{dt} = \dfrac{\partial u}{\partial t} + \frac{dx}{dt}\frac{\partial u}{\partial x}\>. $$ Therefore, the characteristics satisfy $$\frac{dx}{dt} = -(1+t^2) \implies x(t)=-\left(t+\frac{t^3}{3}\right) +x_0\>, $$ And the value of $u$ satisfies $$ \frac{du}{dt} = \phi(t)\>.$$ Here, the $x$ dependence has been dropped, as $x$ is a function of $t$. Now, I could write the solution to this DE as $$u(x(t),t) = u_0(t_0) + \int_{t_0}^{t} \phi(z) dz = u_0(t_0) -\Phi(t_0) + \Phi(t)$$ But that doesn't seem to get me anywhere. The solution should have the initial profile moving along the characteristic, with the magnitude of the profile changing according to the solution to the differential equation for $u$ w.r.t $t$. So how can I rectify this?
Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example: $\dfrac{dt}{ds}=1$ , letting $t(0)=0$ , we have $t=s$ $\dfrac{dx}{ds}=-(1+t^2)=-1-s^2$ , letting $x(0)=x_0$ , we have $x=x_0-s-\dfrac{s^3}{3}=x_0-t-\dfrac{t^3}{3}$ $\dfrac{du}{ds}=\phi(x,t)=\phi\left(x_0-s-\dfrac{s^3}{3},s\right)$ , letting $u(0)=f(x_0)$ , we have $u(x,t)=f(x_0)+\int_0^s\phi\left(x_0-r-\dfrac{r^3}{3},r\right)dr=f\left(x+t+\dfrac{t^3}{3}\right)+\int_0^t\phi\left(x+t-r+\dfrac{t^3-r^3}{3},r\right)dr$ $u(x,0)=u_0(x)$ : $f(x)=u_0(x)$ $\therefore u(x,t)=u_0\left(x+t+\dfrac{t^3}{3}\right)+\int_0^t\phi\left(x+t-r+\dfrac{t^3-r^3}{3},r\right)dr$
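The closed form above can be sanity-checked numerically. Below is my own sketch with hypothetical choices $u_0(x)=\sin x$ and $\phi(x,t)=xt$ (neither comes from the post); the inner integral is done by the midpoint rule and the PDE is tested by central differences.

```python
import math

def u0(x):                # hypothetical initial profile (my choice)
    return math.sin(x)

def phi(x, t):            # hypothetical right-hand side (my choice)
    return x * t

def u(x, t, steps=2000):
    """u(x,t) = u0(x + t + t^3/3) + ∫_0^t phi(x + t - r + (t^3 - r^3)/3, r) dr."""
    h = t / steps
    integral = 0.0
    for k in range(steps):
        r = (k + 0.5) * h                         # midpoint rule
        integral += phi(x + t - r + (t ** 3 - r ** 3) / 3, r) * h
    return u0(x + t + t ** 3 / 3) + integral

# check the PDE u_t - (1 + t^2) u_x = phi by central differences
x, t, eps = 0.7, 0.9, 1e-4
u_t = (u(x, t + eps) - u(x, t - eps)) / (2 * eps)
u_x = (u(x + eps, t) - u(x - eps, t)) / (2 * eps)
residual = u_t - (1 + t * t) * u_x - phi(x, t)
assert abs(residual) < 1e-4
assert u(x, 0.0) == u0(x)   # the initial condition is exact at t = 0
```

Along a characteristic, $\frac{du}{dt}=u_t+\frac{dx}{dt}u_x=u_t-(1+t^2)u_x=\phi$, which is what the residual check verifies pointwise.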
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\sum_{n=1}^{\infty}\frac{n^2}{a^2_{1}+a^2_{2}+\cdots+a^2_{n}}$ is also convergent? Let $a_{n}>0$, $n\in N^{+}$, be a sequence such that $\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{a_{n}}$ is convergent. Show that $$\sum_{n=1}^{\infty}\dfrac{n^2}{a^2_{1}+a^2_{2}+\cdots+a^2_{n}}$$ is also convergent. A related result: perhaps this also holds? $$\sum_{k=1}^{n}\dfrac{k^2}{a^2_{1}+\cdots+a^2_{k}}\le\left(\dfrac{1}{a_{1}}+\cdots+\dfrac{1}{a_{n}}\right)^2$$
The Polya-Knopp's inequality (that is an instance of Hardy's inequality for negative exponents) states that for any $p\geq 1$ and for every positive sequence $\{a_n\}_{n\in\mathbb{N}}$ we have: $$ \frac{N^{\frac{p+1}{p}}}{(p+1)\left(a_1^p+\ldots+a_N^p\right)^{1/p}}+\sum_{n=1}^N \left(\frac{n}{a_1^p+\ldots+a_n^p}\right)^{1/p} \leq (1+p)^{\frac{1}{p}}\sum_{n=1}^{N}\frac{1}{a_n},\tag{1}$$ hence by taking $p=2$ it follows that: $$ \sum_{n= 1}^{N}\frac{\sqrt{n}}{\sqrt{a_1^2+a_2^2+\ldots+a_n^2}}\leq \sqrt{3}\sum_{n=1}^{N}\frac{1}{a_n}\tag{2} $$ Now we re-write the LHS of $(2)$ by partial summation. Let $Q_n^2\triangleq a_1^2+\ldots+a_n^2$ and $h(n)\triangleq\sum_{k=1}^{n}\sqrt{k}$: $$\sum_{n=1}^N \frac{\sqrt{n}}{Q_n}=\frac{h(N)}{Q_N}-\sum_{n=1}^{N-1}h(n)\left(\frac{1}{Q_{n+1}}-\frac{1}{Q_n}\right)=\frac{h(N)}{Q_N}+\sum_{n=1}^{N-1}h(n)\frac{a_{n+1}^2}{Q_n Q_{n+1}(Q_{n+1}+Q_n)} $$ since $h(n)\geq\frac{2}{3}n^{3/2}$, it follows that: $$ \frac{2N\sqrt{N}}{Q_N}+\sum_{n=1}^{N-1}\frac{n^{3/2} a_{n+1}^2}{Q_{n+1}^3}\leq 3\sqrt{3}\sum_{n=1}^{N}\frac{1}{a_n}.\tag{3}$$ If we let $g(n)=\sum_{k=1}^{n}k^2$ and apply partial summation to the original series we get: $$ \sum_{n=1}^{N}\frac{n^2}{Q_n^2}=\frac{g(N)}{Q_N^2}+\sum_{n=1}^{N-1}g(n)\frac{a_{n+1}^2}{Q_n^2 Q_{n+1}^2}\tag{4}$$ hence by $(3)$ we just need to show that $\frac{g(n)}{Q_n Q_{n+1}}$ is bounded by some constant times $\frac{h(n)}{Q_n+Q_{n+1}}$, or: $$ g(n)\left(Q_n+ Q_{n+1}\right) \leq K \cdot h(n) Q_n Q_{n+1} $$ or: $$ \frac{1}{Q_n}+\frac{1}{Q_{n+1}}\leq K\cdot\frac{h(n)}{g(n)} \tag{5}$$ that follows from the fact that $\frac{\sqrt{n}}{Q_n}$ is summable by $(2)$. Edit: A massive shortcut. If a positive sequence $\{b_n\}$ is such that $\sum {b_n}$ is convergent, then $\sum n b_n^2 $ is convergent too, since $\{n b_n\}$ must be bounded in order that $\sum b_n$ converges. So we can just use this lemma and $(2)$ to prove our claim.
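These bounds are easy to spot-check numerically. The sketch below is mine; the test sequence $a_n=n^2$ is an arbitrary choice with $\sum 1/a_n<\infty$. It verifies inequality $(2)$ on a long partial sum, and also the final shortcut lemma with $b_n=1/a_n$.

```python
import math

N = 200
a = [n * n for n in range(1, N + 1)]        # a_n = n^2, so sum 1/a_n converges

# inequality (2): sum_{n<=N} sqrt(n)/Q_n <= sqrt(3) * sum_{n<=N} 1/a_n
Q_sq, lhs, rhs = 0.0, 0.0, 0.0
for n, an in enumerate(a, start=1):
    Q_sq += an * an                          # Q_n^2 = a_1^2 + ... + a_n^2
    lhs += math.sqrt(n) / math.sqrt(Q_sq)
    rhs += 1.0 / an
assert lhs <= math.sqrt(3) * rhs             # roughly 2.0 <= 2.8 here

# shortcut lemma: sum b_n < inf forces {n b_n} bounded, so sum n b_n^2 < inf;
# here b_n = 1/n^2 and sum n b_n^2 = sum 1/n^3, whose partial sums stay small
lemma_sum = sum(n / (an * an) for n, an in enumerate(a, start=1))
assert lemma_sum < 1.3
```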
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Prove that $n^3+2$ is not divisible by $9$ for any integer $n$ How to prove that $n^3+2$ is not divisible by $9$?
Suppose, $\exists n\in \mathbb{N}$ such that $n^3+2\equiv 0 \pmod{9}\implies n^{6}\equiv 4\pmod{9}$, which is not true since\begin{array}{rl} n^6\equiv 1\pmod 9 & \mbox{if $n$ and $9$ are relatively prime, by Euler's theorem, since $\phi(9)=6$.}\\ n^6\equiv 0\pmod 9 & \mbox{otherwise} \end{array}
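Since $n^3+2 \bmod 9$ only depends on $n \bmod 9$, the claim also reduces to a nine-case check, which takes one line:

```python
residues = {(n ** 3 + 2) % 9 for n in range(9)}
assert 0 not in residues            # n^3 + 2 is never divisible by 9
assert residues == {1, 2, 3}        # because cubes mod 9 are only 0, 1, 8

# the sixth-power step from the answer: n^6 mod 9 is only ever 0 or 1, never 4
assert {pow(n, 6, 9) for n in range(9)} == {0, 1}
```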
{ "language": "en", "url": "https://math.stackexchange.com/questions/1167968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
How to evaluate $\sum_{n=1}^\infty \frac{1}{n 2^{2n}}$? Evaluate $$\sum_{n=1}^\infty \frac{1}{n2^{2n}}$$ I'd be glad for guidance, because, frankly, I don't have clue here.
We could write this just as well as $$ \sum_{n=1}^\infty \frac{1}{n}(1/4)^n $$ Consider the function $$ f(x) = \sum_{n=1}^\infty \frac{1}{n}x^n \quad x \in (-1,1) $$ noting that $f(0) = 0$. We evaluate $$ f'(x) = \sum_{n=0}^\infty x^n = \frac{1}{1-x} \quad x \in (-1,1) $$ It follows that $$ f(1/4) = f(0) + \int_0^{1/4} f'(t)\,dt = \int_0^{1/4} \frac 1{1-t}\,dt = \ln\frac{4}{3} $$
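A quick numeric confirmation of the value (my own check):

```python
import math

partial = sum(1 / (n * 4 ** n) for n in range(1, 40))
# the antiderivative gives -ln(1 - 1/4) = ln(4/3) ≈ 0.2877
assert abs(partial - math.log(4 / 3)) < 1e-12
```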
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Algorithm that decides whether collection of primes exists which satisfies 3 equations Suppose someone gives you a list of $n$ positive numbers $(a_1, \ldots , a_n)$, together with an upper limit $N$ and asks you to find prime numbers $p_1, \ldots ,p_n$ in the range $2, \ldots , N$ satisfying $p_2 = p_1 + a_1$ and $p_3 = p_2 + a_2$ ... and ... $p_n = p_{n-1} + a_{n-1}$ How do you write down a general algorithm for deciding whether or not a collection of prime numbers $(p_1, \ldots , p_n)$ exists that satisfies these equations? What will it look like?
Here is some pseudo-code for my brutish approach to it.
Let A = {a1, a2, ..., an}
Let sieve = {p | p is prime and less than N}
If ((size of sieve < size of A) OR (some a[i] == 1 for i > 1)) return False
Let primeDiffs = {sieve[i] - sieve[i-1] | i in [2..(size of sieve)]}
For i in [0..(size of primeDiffs - size of A)] {
    matched = True
    For j in [1..(size of A)] {
        If (A[j] /= primeDiffs[i+j]) { matched = False; break }
    }
    If (matched) return True
}
return False
Essentially just generate all the primes that you need, build the set of the consecutive differences of those primes, and slide over that set seeing if all of your set $\{a_1, a_2, \dotsc, a_n\}$ matches your current position in the set of differences. (Note the matched flag: on a mismatch the scan must move on to the next starting position rather than fall through and report success.) You can do some clever checking on the $\{a_1, a_2, \dotsc, a_n\}$ set at the start like make sure none of $\{a_2, a_3 \dotsc, a_n\}$ are $1$ (the only primes with a difference of $1$ are $2$ and $3$).
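A runnable Python version of the same idea (my own translation; the function names and the cumulative-offset bookkeeping are mine):

```python
from itertools import accumulate

def sieve_primes(N):
    """All primes up to N by a simple Eratosthenes sieve."""
    is_p = [True] * (N + 1)
    is_p[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, N + 1, p):
                is_p[m] = False
    return [p for p in range(2, N + 1) if is_p[p]]

def find_prime_chain(gaps, N):
    """Smallest chain p_1, p_1 + gaps[0], ..., all prime and <= N, else None."""
    primes = sieve_primes(N)
    prime_set = set(primes)
    offsets = [0] + list(accumulate(gaps))
    for p in primes:
        if p + offsets[-1] > N:
            break
        if all(p + off in prime_set for off in offsets):
            return [p + off for off in offsets]
    return None

assert find_prime_chain([4, 2], 100) == [7, 11, 13]   # 7 -> 11 -> 13
assert find_prime_chain([7], 100) is None             # no two primes <= 100 differ by 7
```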
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Explicit formula for IFS fractal dimension Is there an explicit formula for finding the box counting dimension of an arbitrary IFS fractal, such as the IFS fern or any other random IFS fractal? If not, is there at least a summation, or recurrence relation that could find the fractal dimension? An example of how this would work would also be appreciated. If there is really no formula, is it just because we don't know it or because it cannot be expressed?
For graph-directed IFS of similarities satisfying an open set condition with every cycle contracting, it's possible to compute the Hausdorff dimension: Hausdorff Dimension in Graph Directed Constructions R Daniel Mauldin, S C Williams Transactions of AMS, vol309 no2, October 1988, 811-829 A regular IFS can be considered as a graph-directed IFS with one node, so every similarity must be contracting. However, similarities are not very general (you can have IFS with all sorts of transformation functions), box-dimension is sometimes different from Hausdorff dimension, and I don't know if any algorithm exists for verifying that the IFS passes the open set condition. I have a Javascript implementation of the algorithm described in the paper (view page source), with some more information in two blog posts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Notation for Average of a Set? In particular, I have some set $S = \{s_1, s_2, s_3, ..., s_n\}$ and a subset $S^\prime$, and I want to denote the average of the elements in $S^\prime$. I would generally just use $\frac{\sum\limits_{i=1}^n s_i}{n}$, but $S^\prime$ only contains some of the elements of $S$ and so this won't work. This page suggests that the proper notation is $\left<S^\prime\right>$, but I wasn't able to find this anywhere else. Is this notation common, or is there some other accepted notation that I could use? Thanks! (This is in a computer science paper, if it makes a difference.)
Physicists may use $\langle S'\rangle$; statisticians might write $\bar{S'}$. But since you mention $\displaystyle\sum_{i=1}^n s_i /n$, I suggest $\displaystyle\sum_{s\in S}s/|S|$ or $\displaystyle\sum_{s\in S'}s/|S'|$, as the case may be. In some contexts one could write $\displaystyle\sum S/|S|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Geometry problem (triangle) How to solve the following problem Let $P$ be a point inside or outside (but not on) of a triangle $\Delta ABC$. Prove that $PA +PB +PC$ is greater than half of the perimeter of the triangle. That is, show $$ PA+PB+PC > \frac{1}{2}(AB+BC+CA) $$
Just use the fact that the sum of two sides of a triangle is always bigger than the third side. For example, if $P$ was inside then you look at the (small) triangle $\Delta PBC$, there $$PB+PC \geq BC$$ Similarly we get $$PA+PC \geq AC$$ and $$PB+PA \geq AB.$$ Now add all three inequalities and divide by $2$; each inequality is strict unless $P$ lies on the corresponding segment, which is excluded since $P$ is not on the triangle.
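A brute-force numeric check of the conclusion (my sketch; the particular triangle and the sampling box are arbitrary choices):

```python
import math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
semi = (dist(A, B) + dist(B, C) + dist(C, A)) / 2   # half the perimeter

random.seed(0)
for _ in range(1000):
    P = (random.uniform(-5, 9), random.uniform(-5, 8))   # inside or outside
    assert dist(P, A) + dist(P, B) + dist(P, C) > semi
```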
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I solve these types of limits such as $\lim_{x\to\infty}\frac{7x-3}{2x+2}$ I guess the best question I have is how does my strategy change when I get a limit such as $$\lim_{x\to\infty}\dfrac{7x-3}{2x+2}$$ What is essential to have as an understanding to solve these problems? Help welcomed.
The method you should be taking is to take advantage of algebra, and in some cases L'Hospital's rule. Here, as $x$ approaches infinity both numerator and denominator tend to infinity, so we have the form $\infty/\infty$ and it is valid to use the rule: $(7x-3)'=7$ and $(2x+2)'=2$, so $\lim_{x \to \infty} \frac{7x-3}{2x+2} = \frac{7}{2}$. Alternatively, divide numerator and denominator by $x$ and use the fact that $\lim_{x \to \infty} \frac{3}{x} = \lim_{x \to \infty} \frac{2}{x} = 0$, which gives the same answer $\frac{7}{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Does $\|u\|=\|u^\ast\|$ imply $\|uh\| = \|u^\ast h\|$? Let $H$ be a Hilbert space and $u \in B(H)$ and let $u^\ast$ denote its adjoint. I know that $\|u\|=\|u^\ast\|$. But now I am wondering: Does $\|u\|=\|u^\ast\|$ imply $\|uh\| = \|u^\ast h\|$ for all $h\in H$? At first I thought that yes but on second thought I can't argue why it should be true.
Consider $H=\mathbb{R}^2$, and $u\in B(H)$ is given by the matrix $\begin{bmatrix} 1& 1\\ 0 &1\end{bmatrix}$, then $u^*$ is the transpose $\begin{bmatrix} 1& 0\\ 1 &1\end{bmatrix}$. Let $h=\begin{bmatrix} 1\\ 2 \end{bmatrix}$, then $$uh=\begin{bmatrix} 3\\ 2 \end{bmatrix},\quad u^*h=\begin{bmatrix} 1\\ 3 \end{bmatrix}$$ clearly $\Vert uh\Vert\neq\Vert u^*h\Vert$. In fact, it is not hard to show that $\Vert uh\Vert=\Vert u^*h\Vert,\forall h\in H$ if and only if $u$ is normal, i.e. $uu^*=u^*u$.
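The counterexample is easy to run; the sketch below (mine) also checks the normal direction on a rotation matrix, which is orthogonal and hence normal:

```python
import math

def apply(M, h):
    return (M[0][0] * h[0] + M[0][1] * h[1], M[1][0] * h[0] + M[1][1] * h[1])

def norm(v):
    return math.hypot(v[0], v[1])

u = ((1, 1), (0, 1))
u_adj = ((1, 0), (1, 1))      # the transpose/adjoint
h = (1, 2)

assert apply(u, h) == (3, 2) and apply(u_adj, h) == (1, 3)
assert norm(apply(u, h)) != norm(apply(u_adj, h))   # sqrt(13) vs sqrt(10)

# for a normal operator the two norms agree for every h; e.g. a rotation
c, s = math.cos(0.7), math.sin(0.7)
r, r_adj = ((c, -s), (s, c)), ((c, s), (-s, c))
assert abs(norm(apply(r, h)) - norm(apply(r_adj, h))) < 1e-12
```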
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $\lim_{x \to \infty} \left(\frac{x^2+1}{x^2-1}\right)^{x^2}$ How to calculate the following limit? $$\lim\limits_{x \to \infty} \left(\frac{x^2+1}{x^2-1}\right)^{x^2}$$
The quantity in brackets tends to $1$ as $x\to\infty$ while the exponent tends to infinity, so this is a $1^\infty$ indeterminate form. One can show that if $f(x)\to 1$ and $g(x)\to\infty$ as $x\to a$, then $\lim_{x\to a} f(x)^{g(x)} = e^{\lim_{x\to a}(f(x)-1)g(x)}$. Here $(f(x)-1)g(x)=\dfrac{2}{x^2-1}\cdot x^2\to 2$ as $x\to\infty$, so the limit equals $e^2$.
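A numeric sanity check of the value (mine):

```python
import math

for x in [10.0, 100.0, 1000.0]:
    val = ((x * x + 1) / (x * x - 1)) ** (x * x)
    assert abs(val - math.e ** 2) < 30 / (x * x)   # converges to e^2 ≈ 7.389
```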
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 6 }
Completion of borel sigma algebra with respect to Lebesgue measure There are two ways of extending the Borel $\sigma$-algebra on $\mathbb{R}^n$, $\mathcal{B}(\mathbb{R}^n)$, with respect to Lebesgue measure $\lambda$. * *The completion $\mathcal{L}(\mathbb{R}^n)$ of $\mathcal{B}(\mathbb{R}^n)$ with respect to $\lambda$, i.e. chuck in all sets contained in Borel sets of measure $0$. *let $\lambda^*$ be outer Lebesgue measure on $\mathcal{P}(\mathbb{R}^n)$, and take $\mathcal{L}'(\mathbb{R}^n)$ to be those $E$ such that for all $A\subseteq\mathbb{R}^n$, $\lambda^*(A)=\lambda^*(A\cap E)+\lambda^*(A\cap E^\complement)$. We know that $\mathcal{L}'(\mathbb{R}^n)\supset\mathcal{B}(\mathbb{R}^n)$ and $ \mathcal L'(\mathbb R^n) $ is complete, so $\mathcal L'(\mathbb R^n)\supset\mathcal{L}(\mathbb{R}^n)$. But does the reverse inclusion also hold?
From the definition of the outer measure $\lambda^{*}$, you can show that if $A\in \mathcal{L}'$ then there's a $G_{\delta}$ set $B$ so that $A\subseteq B$ and $\lambda^{*}(B\setminus A)=0$. After that, the answer to this question is an easy yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1168953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 2, "answer_id": 1 }
Example of equivalent metrics on the same set such that uniform continuity of some function is not preserved Give example of a set $X$ and two metrics $d_1,d_2$ on $X$ such that $(X,d_1)$ and $(X,d_2)$ are topologically equivalent but there exist a function $f:X \to X$ which is uniformly $d_1$ continuous but not uniformly $d_2$ continuous . What I know is that the metrics cannot be strongly equivalent . Please help . Thanks in advance
Let $X=(0,1)$, and let $d_1$ be the usual Euclidean metric on $X$. For $x,y\in X$ let $$d_2(x,y)=\left|\frac1x-\frac1y\right|\;.$$ * *Verify that $d_2$ is a metric on $X$ and is topologically equivalent to $d_1$. *Consider the function $f(x)=1-x$.
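To see concretely why $f(x)=1-x$ fails to be uniformly $d_2$-continuous (a sketch of mine, using exact rationals to avoid rounding): the points $x_n=1-\frac1n$ get $d_2$-close to one another, while their images $\frac1n$ stay $d_2$-distance exactly $1$ apart.

```python
from fractions import Fraction

def d2(x, y):
    return abs(1 / x - 1 / y)

for n in [10, 100, 1000]:
    x = 1 - Fraction(1, n)
    y = 1 - Fraction(1, n + 1)
    assert d2(x, y) == Fraction(1, n * (n - 1))   # inputs arbitrarily d2-close
    assert d2(1 - x, 1 - y) == 1                  # images stay d2-far apart
```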
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\psi$ is upper semicontinuous $\Longleftrightarrow\{z:\psi(z)<c\}$ is open Let $\Omega\subseteq\Bbb C$ be open. A function $\psi:\Omega\to[-\infty,+\infty[$ is called upper semicontinuous if $\psi(z_0)\ge\limsup_{z\to z_0}\psi(z)\;\;\forall z_0\in\Omega$. How can I show that $\psi$ is upper semicontinuous IFF $\{z:\psi(z)<c\}$ is open $\forall c\in\Bbb R$? I have no ideas! Can someone help me? Thanks a lot!
Suppose $\psi$ is upper semicontinuous. Fix $c\in \Bbb R$ and let $X_c := \{z : \psi(z) < c\}$. Let $z$ be a limit point of $\Bbb C \setminus X_c$. Then $z = \lim_{n\to \infty} z_n$ for some sequence $z_n$ in $\Bbb C \setminus X_c$. Since $\psi(z_n) \ge c$ for all $n \in \Bbb N$, $\limsup_{n\to \infty} \psi(z_n) \ge c$. Since $\psi$ is upper semicontinuous and $z_n \to z$, $\psi(z) \ge \limsup_{n\to \infty} \psi(z_n) \ge c$. Therefore $z \in \Bbb C \setminus X_c$, and consequently $\Bbb C\setminus X_c$ is closed. Therefore, $X_c$ is open. Conversely, suppose $X_c$ is open for all $c\in \Bbb R$. Given $\epsilon > 0$ and $z_0 \in \Bbb C$, $X_{\psi(z_0) + \epsilon}$ is an open set. So since $z_0 \in X_{\psi(z_0) + \epsilon}$, there exists $\delta > 0$ such that for all $z$, $|z - z_0| < \delta$ implies $z\in X_{\psi(z_0) + \epsilon}$. That is, there is $\delta$-neighborhood of $z_0$ such that $\psi(z) < \psi(z_0) + \epsilon$ for all $z$ in the neighborhood. Hence, $\limsup_{z\to z_0} \psi(z) \le \psi(z_0) + \epsilon$. Since $\epsilon$ was arbitrary, $\limsup_{z\to z_0} \psi(z) \le \psi(z_0)$. Hence, $\psi$ is upper semicontinuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$L_1 \cap L_2$ is dense in $L_2$? We were talking about Fourier series the other day and my professor said that the requirement that a function be in $L_1 \cap L_2$ wasn't a huge obstacle, because this is dense in $L_2$. Why is this true?
An element $f$ of $\mathbb L^2$ can be approximated for the $\mathbb L^2$ norm by a linear combination of characteristic function of measurable sets of finite measure. Such a function is integrable, hence the function $f$ can be approximated for the $\mathbb L^2$ norm by an element of $\mathbb L^1\cap \mathbb L^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of a Limit Using the formal definition of convergence, Prove that $\lim\limits_{n \to \infty} \frac{3n^2+5n}{4n^2 +2} = \frac{3}{4}$. Workings: If $n$ is large enough, $3n^2 + 5n$ behaves like $3n^2$ If $n$ is large enough $4n^2 + 2$ behaves like $4n^2$ More formally we can find $a,b$ such that $\frac{3n^2+5n}{4n^2 +2} \leq \frac{a}{b} \frac{3n^2}{4n^2}$ For $n\geq 2$ we have $3n^2 + 5n \leq 2\cdot 3n^2$. For $n \geq 0$ we have $4n^2 + 2 \geq \frac{1}{2}4n^2$ So for $ n \geq \max\{0,2\} = 2$ we have: $\frac{3n^2+5n}{4n^2 +2} \leq \frac{2 \cdot 3n^2}{\frac{1}{2}4n^2} = 3$ To make $\frac{3}{4}$ less than $\epsilon$: $\frac{3}{4} < \epsilon$, $\frac{3}{\epsilon} < 4$ Take $N = \frac{3}{\epsilon}$ Proof: Suppose that $\epsilon > 0$ Let $N = \max\{2,\frac{3}{\epsilon}\}$ For any $n \geq N$, we have that $n > \frac{3}{\epsilon}$ and $n>2$, therefore $3n^2 + 5n \leq 6n^2$ and $4n^2 + 2 \geq 2n^2$ Then for any $n \geq N$ we have $|s_n - L| = \left|\frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}\right|$ $ = \frac{3n^2 + 5n}{4n^2 + 2} - \frac{3}{4}$ $ = \frac{10n-3}{8n^2+4}$ Now I'm not sure on what to do. Any help will be appreciated.
Perhaps simpler: With the Squeeze Theorem: $$\frac34\xleftarrow[n\to\infty]{}\frac{3n^2}{4n^2}\le\frac{3n^2+5n}{4n^2+2}\le\frac{3n^2+5n}{4n^2}=\frac34+\frac54\frac1{n}\xrightarrow[n\to\infty]{}\frac34+0=\frac34$$ With arithmetic of limits: $$\frac{3n^2+5n}{4n^2+2}=\frac{3n^2+5n}{4n^2+2}\cdot\frac{\frac1{n^2}}{\frac1{n^2}}=\frac{3+\frac5n}{4+\frac2{n^2}}\xrightarrow[n\to\infty]{}\frac{3+0}{4+0}=\frac34$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Strange Double sum 3 Could you explain how to get the following double sum: $$\sum _{j=0}^{\infty } \sum _{k=0}^{\infty } \frac{2 (-1)^{k+j}}{(j+1)^2 k!\, j! \left((k+1)^2+(j+1)^2\right)}=(\gamma -\text{Ei}(-1))^2$$ where $\text{Ei}$ is the ExpIntegral?
Hint: $$\begin{align} S &=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{2(-1)^{k+j}}{k!j!(j+1)^2\left[(k+1)^2+(j+1)^2\right]}\\ &=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+j}}{k!j!(j+1)^2\left[(k+1)^2+(j+1)^2\right]}+\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+j}}{k!j!(j+1)^2\left[(k+1)^2+(j+1)^2\right]}\\ &=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+j}}{k!j!(j+1)^2\left[(k+1)^2+(j+1)^2\right]}+\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+j}}{k!j!(k+1)^2\left[(k+1)^2+(j+1)^2\right]}\\ &=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+j}}{k!j!(k+1)^2(j+1)^2}\\ &=\left(\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!(k+1)^2}\right)\left(\sum_{j=0}^{\infty}\frac{(-1)^{j}}{j!(j+1)^2}\right)\\ &=\left(\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!(k+1)^2}\right)^2\\ \end{align}$$
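The symmetrization step above can be confirmed numerically (my check; $M=20$ terms in each index is already far beyond double precision because of the factorials):

```python
import math

M = 20
double = sum(
    2 * (-1) ** (k + j)
    / ((j + 1) ** 2 * math.factorial(k) * math.factorial(j)
       * ((k + 1) ** 2 + (j + 1) ** 2))
    for j in range(M) for k in range(M)
)
single = sum((-1) ** k / (math.factorial(k) * (k + 1) ** 2) for k in range(M))
assert abs(double - single ** 2) < 1e-12
# the inner sum equals Ein(1) = gamma - Ei(-1) ≈ 0.79660
assert abs(single - 0.7965996) < 1e-5
```

Reindexing with $m=k+1$ turns the single sum into $\sum_{m\ge1}\frac{(-1)^{m-1}}{m\cdot m!}=\operatorname{Ein}(1)=\gamma-\operatorname{Ei}(-1)$, which is where the closed form comes from.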
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of boundedness of a function Let $|x|<1$ and $f(x)=\displaystyle\frac{e^{\frac{1}{1+x}}}{(|x|-1)^{-2}}.$ Is $f(x)$ bounded?
Careful: since $(|x|-1)^{-2}>1$ for $|x|<1$, dividing by it gives $f(x)=(1-|x|)^2e^{\frac{1}{1+x}}\le e^{\frac{1}{1+x}}$, but this bound blows up as $x\to-1^+$, and excluding the point $x=-1$ from the domain does not by itself make $f$ bounded. In fact $f$ is unbounded on $|x|<1$: for $-1<x<0$ we have $|x|=-x$, so $f(x)=(1+x)^2e^{\frac{1}{1+x}}$; writing $s=1+x\to0^+$ and using $e^t\ge t^4/4!$ gives $f(x)=s^2e^{1/s}\ge\frac{1}{24\,s^2}\to\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can all linear transformations be calculated with a matrix? For all finite vector spaces and all linear transformations to/from those spaces, how can you prove/show from the definition of a linear transformation that all linear transformations can be calculated using a matrix.
The most general way to show this is to show that the space $L(U,V)$ of linear transformations between finite dimensional vector spaces $U$ and $V$ over a field $F$ is isomorphic to the vector space of m x n matrices with coefficients in the same field $F$. However, although the proof isn't difficult, it needs considerably more machinery than just the definition of a linear transformation and building to the isomorphism theorem takes a number of pages. An excellent and self contained presentation of linear transformations and matrices which ends with the isomorphism theorem can be found in Chapter 5 of the beautiful online textbook by S. Gill Williamson of The University of California at San Diego. It's Theorem 5.13. But my strongest advice to you is to work through the entire chapter-not only is it a really beautiful and clear presentation of the structure of the vector space L(U,V), you'll only fully understand the proof after working through the chapter. It's worth the effort.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Showing a set is Dense in Metric Space $(\Omega, d)$ Let $(\Omega, d)$ be a metric space, and let $A,B \subseteq \Omega$ such that * *$A \subseteq \overline{B}$ and *$A$ is dense in $\Omega$. Show that $B$ is also dense in $\Omega$. Here is my attempt at a solution: Let $x \in \overline{B}$. Then there is a sequence $(x_n)_{n \in \mathbb{N}} \in B$ such that $x_n \to x$. Since $A \subseteq \overline{B}$, then $x_n \to a$ for some $a \in A$. We are also given that $A$ is dense in $\Omega$, so $d(x,A) = 0$. But $x \in \overline{B}$, so $d(x, \overline{B}) = 0$ too. In the following sentence, I quote a Lemma from my textbook that reads: Let $(\Omega, d)$ be a metric space, and let $A , B \subseteq \Omega$. Then the following assertions hold: a). $\overline{\emptyset} = \emptyset, \quad \overline{\Omega} = \Omega$. b). $ A \subseteq \overline{A}$. c). $A \subseteq B \Rightarrow \overline{A} \subseteq \overline{B}$. I quote part (b) by stating this: $B \subseteq \overline{B}$, so $d(x, \overline{B}) = 0 \Rightarrow d(x, B)=0$ which means that $B$ is dense in $\Omega$. As an alternative solution, instead of going off of part (b) of this Lemma, I consider using part (c) as follows: If $\overline{A} \subseteq \overline{B}$, then $\Omega \subseteq \overline{B}$ since we are given that $A$ is dense in $\Omega$. So $\Omega \subseteq \overline{B}$ and $B \subseteq \Omega$, so we conclude that $B$ is dense in $\Omega$ since its closure is equal to the entire space. Is this the right way to approach this problem? Any suggestions or hints will be greatly appreciated. Much thanks in advance for your time and patience.
Your alternative solution is correct and is definitely the way to go. There are some problems with your first attempt; I’ll quote part of it with comments. Let $x \in \overline{B}$. You’re trying to show that $B$ is dense in $\Omega$, so this is the wrong place to start: if you’re going to use this sequences approach, you need to start with an arbitrary $x\in X$ and show that there’s a sequence in $B$ that converges to it. Then there is a sequence $(x_n)_{n \in \mathbb{N}} \in B$ such that $x_n \to x$. True, but (as noted above) not really useful. Since $A \subseteq \overline{B}$, then $x_n \to a$ for some $a \in A$. This is not true unless $x$ happens to be in $A$. Since $A$ may well be a proper subset of $\overline{B}$, there’s no reason to suppose that $x\in A$. It’s possible to make this kind of argument work, however. Let $x\in X$ be arbitrary. $A$ is dense in $X$, so there is a sequence $\langle x_n:n\in\Bbb N\rangle$ in $A$ that converges to $x$. $A\subseteq\overline{B}$, so $x_n\in\overline{B}$ for each $n\in\Bbb N$. Thus, for each $n\in\Bbb N$ there is a sequence $\langle x_{n,k}:k\in\Bbb N\rangle$ in $B$ that converges to $x_n$. To complete the proof that $B$ is dense in $\Omega$, you need to extract from that collection of sequences one that converges to $x$. This is possible but a bit messy. For each $n\in\Bbb N$ there are $k(n),\ell(n)\in\Bbb N$ such that $$d(x_{k(n)},x)<\dfrac1{2^{n+1}}$$ and $$d(x_{k(n)},x_{k(n),\ell(n)})<\dfrac1{2^{n+1}}\;,$$ so that $$d(x_{k(n),\ell(n)},x)<\frac1{2^n}$$ by the triangle inequality. Clearly the sequence $\langle x_{k(n),\ell(n)}:n\in\Bbb N\rangle$ converges to $x$ and lies entirely in $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find $y$ from $y' = e^{2x}-e^x y$? The problem asks me to find $y(x)$ from the equation $$y' = e^{2x}-e^x y$$ The $y'$ is $dy/dx$ right, so wouldn't the correct step be to integrate right away? If not, should I change some terms before integrating? I'm fairly new to this, and am unaware of rules so please be clear in explanation, thank you very much.
You can use the method of integrating factors. First write $$y' + e^x y = e^{2x}.$$ An integrating factor for the equation is $\exp(\int e^x\, dx) = e^{e^x}$. So we multiply both sides by $e^{e^x}$. $$e^{e^x}y' + e^x e^{e^x}y = e^{2x}e^{e^x}.$$ The left hand side is $(e^{e^x} y)'$. So $$(e^{e^x}y)' = e^{2x}e^{e^x}.$$ By integration, $$e^{e^x} y = \int e^{2x} e^{e^x}\, dx + c,$$ where $c$ is a constant. Let $u = e^x$. Then $du = e^x\, dx$, thus $$\int e^{2x}e^{e^x}\, dx = \int e^x e^{e^x} e^x\, dx = \int ue^u\, du = (u - 1)e^u + c' = (e^x - 1)e^{e^x} + c',$$ where $c'$ is constant. Hence $$e^{e^x}y = (e^x - 1)e^{e^x} + C,$$ where $C$ is a constant. So $$y = e^x - 1 + Ce^{-e^x}.$$
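A quick numerical verification of the general solution (my own check over a few constants $C$, using central differences for $y'$):

```python
import math

def y(x, C):
    return math.exp(x) - 1 + C * math.exp(-math.exp(x))

# check y' = e^{2x} - e^x y at several points for several constants C
for C in [0.0, 1.0, -2.5]:
    for x in [-1.0, 0.0, 0.5]:
        eps = 1e-6
        lhs = (y(x + eps, C) - y(x - eps, C)) / (2 * eps)
        rhs = math.exp(2 * x) - math.exp(x) * y(x, C)
        assert abs(lhs - rhs) < 1e-6
```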
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is the tangent of 22.5 degrees not 1/2? Sorry for the stupid question, but why is the tangent of 22.5 degrees not 1/2? (Okay... I get that that the tangent of 45 degrees is 1 ("opposite" =1, "adjacent" =1, 1/1 = 1. Cool. I am good with that.) Along those same lines, if the "opposite" drops to 1/2 relative to the "adjacent" i.e., "opposite" = 1, "adjacent" = 2 therefore 1/2. What am I missing? Thanks in advance for your help.
The basic error being made is the assumption that you are converting one unit into another unit such as when you convert meters into yards. Degrees are units but tangent represents a ratio.
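Because tangent is a ratio and not a unit conversion, it is not linear in the angle; in fact $\tan 22.5^\circ=\sqrt2-1$ by the half-angle formula, which a one-liner confirms:

```python
import math

t = math.tan(math.radians(22.5))
assert abs(t - (math.sqrt(2) - 1)) < 1e-12    # tan 22.5° = √2 − 1 ≈ 0.4142, not 0.5
assert not math.isclose(t, math.tan(math.radians(45)) / 2)  # halving the angle ≠ halving tan
```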
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 7 }
Properties of resolvent operators I am asked to prove the identities of $(12)$ and $(13)$, which are given on page 438 of the textbook PDE Evans, 2nd edition as follows: THEOREM 3 (Properties of resolvent operators). (i) If $\lambda,\mu \in \rho(A)$, we have $$R_\lambda - R_\mu=(\mu-\lambda)R_\lambda R_\mu \quad \text{(resolvent identity)} \tag{12}$$ and $$R_\lambda R_\mu = R_\mu R_\lambda \tag{13}.$$ If it helps, here are the relevant definitions on the previous page. Also, $A$ is a closed linear operator on the real Banach space $X$, with domain $D(A)$. DEFINITIONS. (i) We say a real number $\lambda$ belongs to $\rho(A)$, the resolvent set of $A$, provided the operator $$\lambda I - A : D(A) \to X$$ is one-to-one and onto. (ii) If $\lambda \in \rho(A)$, the resolvent operator $R_\lambda : X \to X$ is defined by $$R_\lambda u := (\lambda I - A)^{-1} u.$$ What can I do to prove the first identity $$R_\lambda - R_\mu=(\mu-\lambda)R_\lambda R_\mu$$ at least? I got stuck after writing the following: $$R_\lambda - R_\mu = (\lambda I - A)^{-1} - (\mu I - A)^{-1}$$ and $$(\mu - \lambda) R_\lambda R_\mu = (\mu - \lambda) (\lambda I - A)^{-1} (\mu I - A)^{-1}$$ Perhaps the second identity requires similar justification, so I can try to do that on my own after getting help with the first one.
Hint: For the first identity, look at $$ (\lambda I - A)(R_{\lambda}-R_{\mu})(\mu I-A) $$ on $D(A)$.
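For what it's worth, here is one way the hint unwinds, using only that $(\lambda I - A)R_\lambda = I$ on $X$ and $R_\mu(\mu I - A) = I$ on $D(A)$:

$$(\lambda I - A)(R_\lambda - R_\mu)(\mu I - A) = (\mu I - A) - (\lambda I - A)R_\mu(\mu I - A) = (\mu I - A) - (\lambda I - A) = (\mu - \lambda)I$$

on $D(A)$. Applying $R_\lambda$ on the left and $R_\mu$ on the right recovers $$R_\lambda - R_\mu = (\mu - \lambda)R_\lambda R_\mu.$$ For (13), swap $\lambda$ and $\mu$ in (12): $R_\mu - R_\lambda = (\lambda - \mu)R_\mu R_\lambda$, and comparing with (12) gives $R_\lambda R_\mu = R_\mu R_\lambda$ when $\lambda \neq \mu$ (the case $\lambda = \mu$ being trivial).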
{ "language": "en", "url": "https://math.stackexchange.com/questions/1169977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
$A ⊂ B$ if and only if $A − B = ∅$ I need to prove that $A ⊂ B$ if and only if $A − B = ∅$. I have the following "proof": $$ A \subset B \iff A - B = \emptyset$$ proof for $\implies:$ $$\forall x \in A, x \in B$$ Therefore, $$A - B = \emptyset$$ proof for $\impliedby$: If $$A - B = \emptyset$$ then $$\forall x \in B, x \in A$$ since $\forall x \in B, x \in A$, $$ A \subset B $$ However the whole thing seems to be incredibly "fragile" and relies on circular logic (see how I just switched the sets in the for all statements) Is this a valid proof? Is there a better way to write it?
One might proceed more directly by noting that $A\subseteq B$ is equivalent to $$\forall x,(x\in A)\implies(x\in B),$$ which is equivalent to $$\forall x,\neg(x\in A)\vee(x\in B).$$ The negation of this is $$\exists x:(x\in A)\wedge\neg(x\in B),$$ which is equivalent to $$\exists x:(x\in A)\wedge(x\notin B),$$ which is equivalent to $$\exists x:(x\in A\setminus B).$$ Negating once more then shows us that $A\subseteq B$ is equivalent to $$\forall x,\neg(x\in A\setminus B),$$ at which point we're basically done.
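The equivalence can also be checked exhaustively on a small universe using Python's built-in set operations (`<=` is subset, `-` is set difference); the universe $\{1,2,3,4\}$ below is of course an arbitrary choice.

```python
from itertools import chain, combinations

def subsets(s):
    # All subsets of s, as a list of sets
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))]

U = {1, 2, 3, 4}
# A ⊆ B holds exactly when A − B is empty, for every pair of subsets of U
for A in subsets(U):
    for B in subsets(U):
        assert (A <= B) == (A - B == set())
```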
{ "language": "en", "url": "https://math.stackexchange.com/questions/1170190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Equivalence of another formula for the number of $r$-combinations with repetition allowed Basically it means choosing r things out of n, where order doesn't matter, and you are allowed to pick a thing more than once. For example, $\{1, 1, 2\}$ out of $\{1, 2, 3, 4\}$. I managed to find another solution: $$ {n \choose r} + (r-1){n \choose r-1} + (r-2){n \choose r-2} + \cdots + {n \choose 1} $$ I am having trouble proving that these two are equivalent.
As has already been pointed out, unfortunately the two solutions are not equivalent. However, if we use Pascal's Rule:- $${n \choose k} = {n-1 \choose k-1} + {n-1 \choose k}$$ and apply this $r$ times to ${r+n-1 \choose r}$ the following solution can be shown to be equivalent:- $${r-1 \choose r-1}{n \choose r} + {r-1 \choose r-2}{n \choose r-1} + {r-1 \choose r-3}{n \choose r-2} + \cdots + {r-1 \choose 0}{n \choose 1}$$ In other words the following relationship holds:- $${r+n-1 \choose r}=\sum_{k=1}^r{r-1 \choose k-1}{n \choose k}$$ Perhaps allowing the repetition of multiple elements at the same time results in the binomial terms ${r-1 \choose k-1}$ for $k\in \{1,2,..,r\}.$
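As a numerical cross-check of the claimed relationship (the ranges of $n$ and $r$ below are arbitrary):

```python
from math import comb

# C(r+n-1, r) == sum_{k=1}^{r} C(r-1, k-1) * C(n, k)
# (math.comb returns 0 when k > n, so terms past k = n drop out automatically)
for n in range(1, 9):
    for r in range(1, 9):
        lhs = comb(r + n - 1, r)
        rhs = sum(comb(r - 1, k - 1) * comb(n, k) for k in range(1, r + 1))
        assert lhs == rhs
```

For instance, with $n = 4$, $r = 3$: $\binom{6}{3} = 20$ and $\binom{2}{0}\binom{4}{1} + \binom{2}{1}\binom{4}{2} + \binom{2}{2}\binom{4}{3} = 4 + 12 + 4 = 20$.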
{ "language": "en", "url": "https://math.stackexchange.com/questions/1170237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Show that a Möbius transformation $S$ commutes with $T$ if $S$ and $T$ have the same fixed points. Let $T$ be a Möbius transformation such that $T$ is not the identity. Show that a Möbius transformation $S$ commutes with $T$ if $S$ and $T$ have the same fixed points. Here is what I know so far: 1) if $T$ has fixed points, say $z_1$ and $z_2$, then $S^{-1}TS$ has fixed points $S^{-1}(z_1)$ and $S^{-1}(z_2)$ 2) if $T$ is a dilation then $0$ and $\infty$ are its only fixed points, but if $T$ is a translation then $\infty$ is its only fixed point. Assume that $S$ and $T$ have the same fixed points $z_1$ and $z_2$; then by 1) $S^{-1}TS$ and $T^{-1}ST$ have the same fixed points $S^{-1}(z_1)=T^{-1}(z_1)$ and $S^{-1}(z_2)=T^{-1}(z_2)$ I know that $T$ is not the identity, but I can't assume it is a dilation or a translation to use 2), because it could also be an inversion, right? I wonder if anyone would please lend me a hand from here.
Assume that $S$ and $T$ are two Moebius transformations of the extended $z$-plane $\overline{\Bbb C}$ having the same fixed points $z_1$, $z_2\in{\Bbb C}$, $\>z_1\ne z_2$. Then we can introduce "temporarily" in $\overline{\Bbb C}$ a new complex coordinate $$w:={z-z_1\over z-z_2}\ .$$ The point $z=z_1$ gets the $w$-coordinate $w_1={z_1-z_1\over z_1-z_2}=0$, and the point $z=z_2$ gets the $w$-coordinate $w_2=\infty$. In terms of this new coordinate both $S$ and $T$ are again given by Moebius expressions, but now the fixed points have $w$-coordinate values $0$ and $\infty$; whence $S$ and $T$ appear as dilations: $$S(w)=\lambda w, \quad \lambda\ne 1;\qquad T(w)=\mu w,\quad \mu\ne1\ .$$ When expressed in terms of $w$ the two transformations obviously commute; therefore they have to commute as well when expressed in terms of the original coordinate $z$. – A similar argument takes care of the case $z_1\in{\Bbb C}$, $z_2=\infty$. When $S$ and $T$ both have exactly one fixed point $z_0\in {\Bbb C}$ then we can replace $z_0$ by $w_0=\infty$ as before. Now $S$ appears as $$S(w)=\alpha w+\beta\ .$$ When $\alpha\ne1$ then $S$ would have a second fixed point $w_2={\beta\over 1-\alpha}$. It follows that $S$ and $T$ are of the form $$S(w)=w+\beta,\quad\beta\ne0;\qquad T(w)=w+\gamma,\quad \gamma\ne0\ ;$$ whence they commute.
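A small numerical illustration of the conjugation argument (the fixed points $1, -1$ and the multipliers below are arbitrary choices): build two Möbius maps fixing the same pair of points by conjugating dilations through $w = (z - z_1)/(z - z_2)$, and check that they commute.

```python
def dilation_with_fixed_points(lam, z1, z2):
    """Möbius map fixing z1 and z2: conjugate w -> lam*w by w = (z - z1)/(z - z2)."""
    def f(z):
        w = (z - z1) / (z - z2)         # send the fixed points to 0 and infinity
        w = lam * w                     # dilation in the w-coordinate
        return (z2 * w - z1) / (w - 1)  # invert the coordinate change
    return f

z1, z2 = 1 + 0j, -1 + 0j
S = dilation_with_fixed_points(2 + 0j, z1, z2)
T = dilation_with_fixed_points(3j, z1, z2)

# Both compositions are the dilation with multiplier lam*mu in the w-coordinate,
# so S and T commute (up to floating-point error).
for z in (0.3 + 0.7j, 2 - 1j, -0.5 + 0.1j):
    assert abs(S(T(z)) - T(S(z))) < 1e-9
assert abs(S(z1) - z1) < 1e-12 and abs(T(z1) - z1) < 1e-12
```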
{ "language": "en", "url": "https://math.stackexchange.com/questions/1170332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }