Wave Equation - like 4th Order PDE How does one solve a fourth-order PDE of the form $\frac{\partial^4y}{\partial x^4}=c^2\frac{\partial^2y}{\partial t^2}$? It looks like a one dimensional wave equation, but I'm unfortunately very bad at PDEs.
You do almost the same thing as people explained in your other question. Unfortunately, you can only factor the operator into $$ \left(\frac{\partial^2}{\partial x^2} - c\frac{\partial}{\partial t}\right) \left(\frac{\partial^2}{\partial x^2} + c\frac{\partial}{\partial t}\right)y = 0. $$ Then you have to solve a heat-equation-like problem. If your domain is finite, you should try separation of variables.
{ "language": "en", "url": "https://math.stackexchange.com/questions/193951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
(Help with) A simple yet specific argument to prove Q is countable I was asked to prove that $\mathbb{Q}$ is countable. Though there are several proofs of this, I want to prove it through a specific argument. Let $\mathbb{Q} = \{x\mid nx+m=0;\ n,m\in\mathbb{Z},\ n\neq 0\}$. I would like to go with the following argument: given that we know $\mathbb{Z}$ is countable, there are only countably many $n$ and countably many $m$, therefore there can only be countably many equations AND COUNTABLY MANY SOLUTIONS. The instructor told me that though he liked the argument, it doesn't follow directly that there can only be countably many solutions to those equations. Is there any way of proving that without it being a traditional proof of "$\mathbb{Q}$ is countable"?
This is essentially the Cartesian product of two copies of $\mathbb{Z}$: sending each of your equations to its pair of coefficients $(n,m)$ embeds the collection of equations into $\mathbb{Z}\times\mathbb{Z}$, and each equation has at most one solution. $\mathbb{Z}\times\mathbb{Z}$ is countable (for instance, enumerate the pairs along diagonals of increasing $|n|+|m|$), so $\mathbb{Q}$ is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
zeros of linear recurrence sequences Given a linear recurrence sequence $\{a_n\}_{n\geq 0}$, how does one decide whether there are infinitely many zeros, or only finitely many?
The characteristic polynomial of your recurrence is $$x^{m-1}-c_{m-1}x^{m-2}-\cdots-c_1$$ Here's something that follows from the Skolem-Mahler-Lech Theorem: if the recurrence has infinitely many zeros, then the characteristic polynomial has two distinct roots whose ratio is a root of unity. Careful: this is not an "if and only if". Another good source is Chapter 2 of the book Recurrence Sequences by Graham Everest, Alf Van Der Poorten, Igor Shparlinski and Thomas Ward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Need MATLAB help related to an iteration method I am reading an iteration method for computing the Moore-Penrose generalized inverse of a given matrix $A$, which is given as follows: $X_{k+1} = (1+\beta)X_{k} - \beta X_{k} A X_{k}$ where $X_{k}$, $k = 0,1,\dots$ is a sequence of approximations for computing the Moore-Penrose generalized inverse, $X_{0} = \beta A'$ is the initial approximation, $0<\beta\leq 1$, and $A'$ is the transpose of matrix $A$. $d_{k} = \|X_{k+1} - X_{k}\|_{fro}$ is the error matrix norm (Frobenius norm). I have made the following MATLAB program for computing the Moore-Penrose generalized inverse by the above-mentioned method, but I am unable to write code for the stopping criterion, which says: perform the iteration until $|d_{k+1}/d_{k} - \beta -1|> 10^{-4}$. Please help me with this. I would be very much thankful to you.
The prep before your loop should stay the same. The appropriate script is

    A = ...;                  % as you have given
    beta = ...;               % whatever you want, 0 < beta <= 1
    X0 = beta*A';             % calculate initial estimate
    dklast = NaN; dk = NaN;   % differences from the two most recent iterations
    iter = 0; maxiter = 100;
    % force at least two iterations so dk and dklast hold real values before
    % the ratio test means anything (NaN comparisons are false in MATLAB, so
    % without the guard the loop body would never run at all)
    while (iter < 2 || abs(dk/dklast - beta - 1) > 1e-4) && (iter < maxiter)
        iter = iter + 1;                   % keep count of iterations
        X1 = (1+beta)*X0 - beta*X0*A*X0;   % calculate new iterate
        dklast = dk;                       % shift the previous difference
        dk = norm(X1-X0,'fro');            % determine new difference
        X0 = X1;                           % copy current iterate to "old" iterate
    end

I am wondering why you are using this convergence test at all. I would recommend using `dk = norm(X1*A - eye(size(A,2)),'fro');` which measures how close X1 is to being a left inverse of $A$. Your termination criterion would then be `while dk > (some_tolerance) && iter < maxiter ... end`. As you currently have it, you are measuring how much X1 changes from X0, which may be small while X1 is still not an approximate inverse (or pseudoinverse) of $A$.
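If it helps to sanity-check the scheme outside MATLAB, here is a small Python/NumPy sketch of the same iteration. The test matrix and the choice $\beta=0.9$ are purely illustrative (convergence of this family of iterations does depend on $\beta$ and on the singular values of $A$, so treat the parameters as assumptions, not recommendations):

```python
import numpy as np

def pinv_iterate(A, beta=0.9, tol=1e-10, maxiter=500):
    """X_{k+1} = (1+beta) X_k - beta X_k A X_k, starting from X_0 = beta A^T.

    Stops with the residual test from the answer: how far the current
    iterate is from being a left inverse of A.
    """
    X = beta * A.T
    for _ in range(maxiter):
        X1 = (1 + beta) * X - beta * (X @ A @ X)
        if np.linalg.norm(X1 @ A - np.eye(A.shape[1]), 'fro') < tol:
            return X1
        X = X1
    return X

# illustrative full-column-rank matrix, singular values 1 and 0.5
A = np.array([[1.0, 0.0],
              [0.0, 0.5],
              [0.0, 0.0]])
X = pinv_iterate(A)
print(np.allclose(X, np.linalg.pinv(A), atol=1e-8))  # True
```

For $\beta=1$ the update reduces to the classical Newton-type iteration $X_{k+1}=X_k(2I-AX_k)$; for $\beta<1$ the convergence near the fixed point is linear with rate roughly $1-\beta$.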
{ "language": "en", "url": "https://math.stackexchange.com/questions/194130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that in a discrete metric space, every subset is both open and closed. I need to prove that in a discrete metric space, every subset is both open and closed. Now, I find it difficult to imagine what this space looks like. I think it consists of all sequences containing ones and zeros. Now in order to prove that every subset is open, my book says that for $A \subset X $, $A$ is open if $\,\forall x \in A,\,\exists\, \epsilon > 0$ such that $B_\epsilon(x) \subset A$. I was thinking that since $A$ will also contain only zeros and ones, it must be open. Could someone help me iron out the details?
The discrete metric just says that $$d(x,x)=0$$ $$d(x,y)=1,\ x\neq y$$ So say your ball has radius $r$. If $r<1$ then the only point it contains is the point it's centred on. So any single point has a ball of some radius around it containing only that point. This is the same thing as $B_{0<r<1}(x)=\{x\}$, so we know that every singleton is open. And now we're actually done! Since now we know that any point $x$ in a set $A$ has a ball containing it, because we can always construct a ball that only contains $x$! Since all sets are open, their complements are open as well. This implies that all sets are also closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 4, "answer_id": 1 }
infinity sum of numbers Consider the series $$1^5 + 2^5 + 3^5 + \cdots + (10^n)^5.$$ The sum of this series begins with the digits $16666\ldots$. As more and more numbers are included in the series, the result starts with more and more of the digits $16666\ldots$: for example, if the last number is $1000$ or $10000$ or $100000$ and so on, the leading digits of the final sum get closer and closer to $16666\ldots$. If this is true (of course it is), can we conclude that $$1^5 + 2^5 + 3^5 + \cdots = \frac 1 6\,?$$ Greetings.
The orthogonal projection of this question onto the subspace of sensible questions is answered by @sos440's comment. Claim: $$\frac{1^m + 2^m + \cdots + n^m}{n^{m+1}}\to \frac{1}{m+1}.$$ Proof: Note $\text{LHS} = \frac{1}{n}((1/n)^m + (2/n)^m + \cdots + ((n-1)/n)^m + 1^m)$ is a Riemann sum approximating the integral $\int_0^1 x^m\,dx = \frac{1}{m+1}$. Thus we can say that $$1^5 + 2^5 + \cdots + (10^n)^5 \sim \frac{1}{6} (10^n)^6,$$which will look like $1666\ldots$ in base $10$, as you correctly observe.
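The claim in the last line is easy to test numerically (a purely illustrative check):

```python
def power_sum_ratio(n, m=5):
    """(1^m + 2^m + ... + n^m) / n^(m+1), which should approach 1/(m+1)."""
    return sum(k**m for k in range(1, n + 1)) / n**(m + 1)

print(power_sum_ratio(10**4))  # close to 1/6 = 0.1666...
```

The leftover $O(1/n)$ error is exactly why only the *leading* digits of the sum look like $1666\ldots$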
{ "language": "en", "url": "https://math.stackexchange.com/questions/194256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Show me some pigeonhole problems I'm preparing myself for a combinatorics test. A part of it will concentrate on the pigeonhole principle. Thus, I need some hard to very hard problems on the subject to solve. I would be thankful if you could send me links, books, or just a lone problem.
If $\alpha>0$ is irrational, then the fractional parts of $n\alpha$ form a dense subset of $[0,1]$. Given a positive integer $n>0$, the Fibonacci numbers modulo $n$ are periodic. These are very famous results from number theory, and there are proofs of both using the pigeonhole principle.
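The second fact is cheap to check computationally; the pigeonhole proof works because there are only $n^2$ possible consecutive residue pairs $(F_k \bmod n,\, F_{k+1} \bmod n)$, so some pair must repeat. A small illustrative script:

```python
def pisano_period(n):
    """Period of the Fibonacci sequence modulo n.

    Guaranteed to exist by pigeonhole: at most n^2 distinct
    consecutive residue pairs, so the pair (0, 1) must recur.
    """
    a, b = 0, 1
    for k in range(1, n * n + 1):
        a, b = b, (a + b) % n
        if (a, b) == (0, 1):  # starting pair recurs: period found
            return k
    return None

print(pisano_period(10))  # 60: the Fibonacci numbers mod 10 repeat with period 60
```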
{ "language": "en", "url": "https://math.stackexchange.com/questions/194312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 4 }
Finding $X$ When $Y=XX'$ Consider matrices $Y\in\mathbb{R}^{n\times n}$ and $X\in\mathbb{R}^{n\times m}$ where $m\geq n$. $X$ is unknown but $Y=XX'$, which implies that $Y$ is positive semidefinite, and positive definite when $X$ has full row rank (I see no reason why this couldn't alternately be expressed as a problem with $Y=X'X$; a different $Y\in\mathbb{R}^{m\times m}$ would still be known). What's the easiest method to find $X$? I was thinking of minimizing the Frobenius norm, but wasn't sure if there was some relatively straightforward thing that I'm missing.
Consider $X = I$, then $X' = I$ and $Y = I$. On the other hand, if $X = -I$, then $X' = -I$, but $Y = I$ still. Are there any other assumptions you can make about $X$? In fact, for any $X$, $(-X)(-X)' = XX'$, so there are always at least two solutions. Additionally, if $$X_1 = \left(\begin{array}{cc} a_1 & b_1 \\ 0 & 0\end{array}\right),\quad X_2 = \left(\begin{array}{cc} a_2 & b_2 \\ 0 & 0\end{array}\right)$$ with $a_1^2 + b_1^2 = a_2^2 + b_2^2$ then $Y = X_1 X_1' = X_2 X_2'$. I think maybe something needs to be said about the singular values of $X$ to get better uniqueness - maybe that they're all non-degenerate?
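For a concrete instance of the $2\times 2$ example, take $(a_1,b_1)=(3,4)$ and $(a_2,b_2)=(5,0)$, so that $a_1^2+b_1^2=a_2^2+b_2^2=25$ (numbers chosen purely for illustration):

```python
import numpy as np

X1 = np.array([[3.0, 4.0], [0.0, 0.0]])
X2 = np.array([[5.0, 0.0], [0.0, 0.0]])
print(np.allclose(X1 @ X1.T, X2 @ X2.T))  # True: both give [[25, 0], [0, 0]]

# More generally X is only determined up to X -> X Q with Q orthogonal,
# since (X Q)(X Q)' = X Q Q' X' = X X'.
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(np.allclose((X1 @ Q) @ (X1 @ Q).T, X1 @ X1.T))  # True
```

The orthogonal-factor ambiguity is the general statement behind the $\pm X$ observation above.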
{ "language": "en", "url": "https://math.stackexchange.com/questions/194359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Character of $S_3$ I am trying to learn about the characters of a group but I think I am missing something. Consider $S_3$. This has three elements which fix one thing, two elements which fix nothing and one element which fixes everything. So its character should be $\chi=(1,1,1,0,0,3)$ since the trace is just equal to the number of fixed elements (using the standard representation of a permutation matrix). Now I think this is an irreducible representation, so $\langle\chi,\chi\rangle$ should be 1. But it's $\frac{1}{6}(1+1+1+9)=2$. So is the permutation matrix representation actually reducible? Or am I misunderstanding something?
No, the permutation representation is not irreducible: it contains the trivial representation, with character $\chi_U = (1,1,1)$. Here I choose to represent characters using conjugacy classes. The first position is the conjugacy class corresponding to $3 = 1 + 1 + 1$ and has 1 element. The second is the conjugacy class corresponding to $3 = 2 + 1$ and contains 3 elements. The third corresponds to $3 = 3$ and contains 2 elements. The trace of the permutation representation is the number of fixed points, so, as you mentioned, $\chi_P = (3, 1, 0)$. The permutation representation always contains a copy of the trivial representation, spanned by $e_1 + e_2 + e_3$, where $e_1, e_2, e_3$ form the basis of the vector space of the permutation representation. Let $U$ denote the trivial representation. Then $\chi_P = \chi_U + \chi_V$ for some representation $V$. Since $\chi_U = (1, 1, 1)$, you must have $\chi_V = (2, 0, -1)$. By taking the inner product of $\chi_V$ with itself, you see that $\langle \chi_V, \chi_V \rangle = 1$. Hence $V$ is an irreducible representation; $V$ is called the standard representation of $S_3$. Hence $P = U \oplus V$.
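These inner products can be checked directly, weighting each conjugacy class by its size (1, 3, and 2 elements; all characters here are real, so no conjugation is needed):

```python
from fractions import Fraction

class_sizes = [1, 3, 2]   # classes of S3: identity, transpositions, 3-cycles
chi_P = [3, 1, 0]         # permutation character (number of fixed points)
chi_U = [1, 1, 1]         # trivial character
chi_V = [p - u for p, u in zip(chi_P, chi_U)]   # (2, 0, -1), the standard rep

def inner(chi1, chi2):
    g = sum(class_sizes)  # |S3| = 6
    return Fraction(sum(s * a * b for s, a, b in zip(class_sizes, chi1, chi2)), g)

print(inner(chi_P, chi_P))  # 2, so P is reducible
print(inner(chi_V, chi_V))  # 1, so V is irreducible
```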
{ "language": "en", "url": "https://math.stackexchange.com/questions/194418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
More and more limits for sequences So here goes a bit of homework: $$\lim_{n\to\infty}{\left(\frac{3n^2+2n+1}{3n^2-5}\right)^{\frac{n^2+2}{2n+1}}}$$ Well, this would trivially lead to: $$\lim_{n\to\infty}{\left(\frac{3+\frac{2}{n}+\frac{1}{n^2}}{3-\frac{5}{n^2}}\right)^{\frac{n\left(1+\frac{2}{n^2}\right)}{2+\frac{1}{n}}}}$$ Which is clearly an indeterminate form of type "$1^\infty$". Now, I can't really get through this step... Any hints?
The standard trick for dealing with $1^\infty$ forms is to take logs; it’s very useful if you don’t see anything slicker. Let $$L=\lim_{n\to\infty}{\left(\frac{3n^2+2n+1}{3n^2-5}\right)^{\frac{n^2+2}{2n+1}}}\;;$$ then $$\begin{align*} \ln L&=\ln\lim_{n\to\infty}{\left(\frac{3n^2+2n+1}{3n^2-5}\right)^{\frac{n^2+2}{2n+1}}}\\ &=\lim_{n\to\infty}\ln{\left(\frac{3n^2+2n+1}{3n^2-5}\right)^{\frac{n^2+2}{2n+1}}}\\ &=\lim_{n\to\infty}\left(\frac{n^2+2}{2n+1}\right)\ln\left(\frac{3n^2+2n+1}{3n^2-5}\right)\;, \end{align*}$$ where the second step uses the continuity of the log function. This is an $\infty\cdot 0$ form, which you can easily convert to a $\frac00$ form: $$\lim_{n\to\infty}\frac{\ln\left(\frac{3n^2+2n+1}{3n^2-5}\right)}{\frac{2n+1}{n^2+2}}\;.$$ Once you know $\ln L$, recovering $L$ is trivial; just remember to do it!
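As a numeric sanity check on whatever value you derive (not a substitute for finishing the algebra), the terms do creep toward $e^{1/3}$, which is what the final $\frac00$ form works out to:

```python
from math import exp

def term(n):
    base = (3*n**2 + 2*n + 1) / (3*n**2 - 5)
    return base ** ((n**2 + 2) / (2*n + 1))

# the sequence approaches a finite limit; compare with exp(1/3)
print(term(10**6), exp(1/3))
```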
{ "language": "en", "url": "https://math.stackexchange.com/questions/194468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Order of pole and evaluating residue $f(z)=\frac{ze^{iz}}{z^2+a^2}$ I need to determine the order of the poles and compute the residues. To compute this, we were told to (in general) write $f(z)=\frac{g(z)}{(z-z_0)^m}$, and choose $g$ so that $m$ is minimized (a natural number; this is the order of the pole). My issue is twofold: 1) How do we know, in general, that we have picked a $g$ which minimizes the order? 2) How do I handle this particular function (and others with essential singularities)? I know I may also write $f(z) =\frac{ze^{iz}}{(z+ia)(z-ia)}$, so I know the singularities are $\pm ia$, plus an 'essential singularity' (at infinity). But I'm not sure how to compute the residues.... EDIT: $\lim_{z\to \infty} \frac{ze^{iz}}{z^2+a^2}=\lim_{z\to \infty} \frac{ze^{iz}}{z^2}= \lim_{z\to \infty} \frac{e^{iz}}{z}=\infty$ So I can't use the result for when the limit exists and is finite (zero or non-zero).
The practical advice to determine the order of a pole $w$ of a function $f(z)$ is the following: write $f(z)$ in the form \begin{equation} f(z)=\frac{g(z)}{(z-w)^m} \end{equation} where $g$ is holomorphic near $w$ and $g(w) \neq 0$; then $m$ is the order of the pole $w$ (the condition $g(w)\neq 0$ is exactly what makes $m$ minimal). In your example, you already have the function $f(z)$ in the nice form: for $w=ia$ pick $g(z)=\frac{z e^{iz}}{z+ia}$, which is clearly non-zero when evaluated at $z=ia$, so the order of the pole $z=ia$ is 1. By the way, I assumed $a \neq 0$; try to work out on your own what happens when $a=0$.
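Since the pole is simple, the residue at $ia$ is just $g(ia)=\dfrac{ia\,e^{-a}}{2ia}=\dfrac{e^{-a}}{2}$. Here's an illustrative numeric cross-check via $\frac{1}{2\pi i}\oint f\,dz$ over a small circle around the pole (the radius and sample count are arbitrary choices):

```python
import cmath
from math import exp, pi

a = 1.0
f = lambda z: z * cmath.exp(1j * z) / (z**2 + a**2)

def residue_numeric(center, r=0.5, N=2000):
    """Approximate (1/(2*pi*i)) times the contour integral of f over a
    circle of radius r about `center`, via the trapezoid rule."""
    total = 0j
    for k in range(N):
        theta = 2 * pi * k / N
        z = center + r * cmath.exp(1j * theta)
        dz = 1j * r * cmath.exp(1j * theta) * (2 * pi / N)
        total += f(z) * dz
    return total / (2j * pi)

print(residue_numeric(1j * a))  # approximately 0.18394 + 0j
print(exp(-a) / 2)              # e^{-a}/2 = g(ia)
```

The trapezoid rule on a closed contour converges very fast here, so even modest $N$ reproduces the residue to high accuracy.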
{ "language": "en", "url": "https://math.stackexchange.com/questions/194518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Continuous Brouwer's fixed point theorem via Stokes's theorem? Let $B$ denote the closed unit ball in $\mathbf{R}^n$. Brouwer's fixed point theorem states that every continuous map $f:B\to B$ has a fixed point. There is a simple proof using Stokes's theorem, at least for the special case in which $f$ is smooth, as presented on Wikipedia here. The page also states that this case contains the full generality of the theorem, because if $f:B\to B$ is continuous without fixed points then $\epsilon = \inf_{x\in B} |x-f(x)| > 0$, so we can just convolve (each component of) $f$ with a smooth bump $\psi:\mathbf{R}^n\to\mathbf{R}$ supported on $\epsilon B$ to get a smooth counterexample to the theorem. Unfortunately, as it stands the proof doesn't work, because the distance of $f(B)$ to $\partial B$ could well be zero, in which case $\tilde{f} = \psi\ast f$ might not satisfy $\tilde{f}(B)\subset B$. Does anybody see a resolution to this difficulty? EDIT, following Willy's answer. I've just realised that I was confused when I asked this question. $\tilde{f}(B)\subset B$ was never really an issue; the issue was rather that convolution isn't fully defined near the boundary. The most immediate interpretation is to extend $f:B\to B$ by $0$ to $\mathbf{R}^n\to B$, but then mollifying $f$ doesn't give you a uniformly nearby $\tilde{f}$. The interpretation that works is to extend $f:B\to B$ to any uniformly continuous $F:\mathbf{R}^n\to B$, such as $$F(x) = \begin{cases} f(x) & \text{if $|x|\leq 1$,}\\ f(x/|x|) & \text{if $|x|\geq 1$,}\end{cases}$$ and then mollify.
Sean's last comment inspired the following answer: Let $100\epsilon < \inf |x - f(x)|$. Let $g(x) = \frac{1}{1 + 10\epsilon} f(x)$. Then by triangle inequality we have that $|x - g(x)| > \epsilon/2$. Let $h: (1+10\epsilon)^{-1}B \to (1+10\epsilon)^{-1}B$ be the smooth map formed by $$ h(x) = \eta* g(x) $$ where $\eta$ is a mollifier supported in $\epsilon B$. We have that $h(x)$ is smooth and has no fixed points etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Show that the group $G$ is of order $12$ I am studying some exercises about semidirect products and came across this solved one: Show that the order of the group $G=\langle a,b\mid a^6=1,a^3=b^2,aba=b\rangle$ is $12$. Our aim is to show that $|G|\leq 12$ and then that $G=\mathbb Z_3 \rtimes\mathbb Z_4=\langle x\rangle\rtimes\langle y\rangle$. So we need a homomorphism from $\mathbb Z_4$ to $\mathrm{Aut}(\mathbb Z_3)\cong\mathbb Z_2=\langle t\rangle$ to construct the semidirect product as we wish: $$\phi:=\begin{cases} 1\longrightarrow \mathrm{id},\\ y^2\longrightarrow \mathrm{id},\\ y\longrightarrow t,\\ y^3\longrightarrow t. \end{cases}$$ Here, I do know how to construct $\mathbb Z_3 \rtimes_{\phi}\mathbb Z_4$ by using $\phi$ according to the definition. My question starts from this point: instead of computing with $(a,b)(a',b')=(a\phi_b(a'),bb')$, the solution suddenly takes $$\alpha=(x,y^2), \qquad \beta=(1,y)$$ and notes that these elements satisfy the relations in the group $G$. All of this is right and understandable, but how could I have found elements like $\alpha, \beta$? Is finding generators like $\alpha, \beta$ really the standard approach for this kind of problem? Thanks for your help.
$aba=b$ implies that $bab^{-1}=a^{-1}$, hence $bab^{-1}a^{-1}=a^{-2}$; it follows that $G'\supseteq \langle a^{-2}\rangle\cong \mathbb{Z}_3$. Also, $b^{-1}a^2b=(b^{-1}ab)^2 = (b^{-1}aba.a^{-1})^2=(b^{-1}.ba^{-1})^2=a^{-2}\in \langle a^2\rangle$. Hence $\langle a^2\rangle$ is normalised by both $b$, and $a$; $\langle a^2\rangle\triangleleft G$. Then $G/\langle a^2\rangle=\langle \bar{a},\bar{b}\colon \bar{a}^2=1, \bar{b}^2=\bar{a}, \bar{b}\bar{a}\bar{b}^{-1}\bar{a}^{-1}=1\rangle = \langle \bar{b}\colon \bar{b}^4=1\rangle \cong \mathbb{Z}_4$. Hence $G'=\langle a^2\rangle\cong \mathbb{Z}_3$, and $G/G'\cong \mathbb{Z}_4$. Further $aba=b$ implies $abab=b^2$, i.e. $(ab)^2=b^2=a^3$. Therefore, $(ab)^4=(a^3)^2=1$, and $(ab)^2\neq 1$; hence $\langle ab\rangle$ is cyclic subgroup of $G$ of order $4$, and clearly, it should intersect trivially with $G'$ (compare orders). Hence $G=G'\rtimes \langle ab\rangle \cong \mathbb{Z}_3\rtimes \mathbb{Z}_4$. ........................................................... To construct semidirect product of $\mathbb{Z}_3=\langle x\rangle$ by $\mathbb{Z}_4=\langle y\rangle$, note that $y$ acts on $\langle x\rangle$ by conjugation, and hence $y^{-1}xy\in \{x,x^2\}$. If $y^{-1}xy=x$, then $G$ will be abelian, and if $y^{-1}xy=x^2$, then $G$ will be the non-abelian group of order 12 (it is different from $D_{12}$, and $A_4$; why?).
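To see concretely that $\alpha=(x,y^2)$ and $\beta=(1,y)$ do satisfy the relations of $G$, the multiplication in $\mathbb{Z}_3\rtimes\mathbb{Z}_4$ (with $y$ acting by inversion, as in the last paragraph) can be brute-forced; a small illustrative script, encoding elements as pairs $(x \bmod 3,\ y \bmod 4)$:

```python
# (x1,y1)(x2,y2) = (x1 + (-1)^y1 * x2 mod 3, y1 + y2 mod 4)
def mul(g, h):
    x1, y1 = g
    x2, y2 = h
    sign = -1 if y1 % 2 else 1
    return ((x1 + sign * x2) % 3, (y1 + y2) % 4)

e = (0, 0)
def power(g, n):
    r = e
    for _ in range(n):
        r = mul(r, g)
    return r

a = (1, 2)   # the alpha = (x, y^2) of the solution
b = (0, 1)   # the beta  = (1, y)

# the defining relations of G: a^6 = 1, a^3 = b^2, aba = b
print(power(a, 6) == e, power(a, 3) == power(b, 2), mul(mul(a, b), a) == b)

# and <a, b> really is the whole group of order 12 (naive closure)
gens = {a, b}
elems = {e}
while True:
    new = {mul(g, h) for g in elems | gens for h in elems | gens} | elems | gens
    if new == elems:
        break
    elems = new
print(len(elems))  # 12
```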
{ "language": "en", "url": "https://math.stackexchange.com/questions/194665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is there a direct, elementary proof of $n = \sum_{k|n} \phi(k)$? If $k$ is a positive natural number then $\phi(k)$ denotes the number of natural numbers less than $k$ which are prime to $k$. I have seen proofs that $n = \sum_{k|n} \phi(k)$ which basically partitions $\mathbb{Z}/n\mathbb{Z}$ into subsets of elements of order $k$ (of which there are $\phi(k)$-many) as $k$ ranges over divisors of $n$. But everything we know about $\mathbb{Z}/n\mathbb{Z}$ comes from elementary number theory (division with remainder, bezout relations, divisibility), so the above relation should be provable without invoking the structure of the group $\mathbb{Z}/n\mathbb{Z}$. Does anyone have a nice, clear, proof which avoids $\mathbb{Z}/n\mathbb{Z}$?
For a prime $p$ and $k\ge1$, $$ \phi\!\left(p^k\right)=(p-1)p^{k-1}\tag1 $$ Furthermore, the result is true for powers of a prime: $$ \begin{align} \sum_{d|p^m}\phi(d) &=\phi(1)+\sum_{k=1}^m\phi\!\left(p^k\right)\tag{2a}\\ &=1+(p-1)\sum_{k=1}^mp^{k-1}\tag{2b}\\ &=1+(p-1)\frac{p^m-1}{p-1}\tag{2c}\\[9pt] &=p^m\tag{2d} \end{align} $$ Given $(m_1,m_2)=1$, for every $d$ with $d\mid m_1m_2$ there are unique $d_1$ and $d_2$ so that $d=d_1d_2$ and $d_1\mid m_1$ and $d_2\mid m_2$. Thus, $$ \sum_{d_1|m_1}\phi(d_1)\sum_{d_2|m_2}\phi(d_2)=\sum_{d|m_1m_2}\phi(d)\tag3 $$ Therefore, since $\phi$ is multiplicative, $m\mapsto\sum\limits_{d|m}\phi(d)$ is multiplicative. $(2)$ and $(3)$ show that $$ \sum_{d|n}\phi(d)=n\tag4 $$
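Identity $(4)$ is also cheap to verify by brute force straight from the definition of $\phi$ (an illustrative check, not part of the proof):

```python
from math import gcd

def phi(k):
    """Count the j in 1..k with gcd(j, k) == 1 (so phi(1) == 1)."""
    return sum(1 for j in range(1, k + 1) if gcd(j, k) == 1)

# sum of phi over the divisors of n recovers n itself
for n in range(1, 200):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
print("verified for n = 1..199")
```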
{ "language": "en", "url": "https://math.stackexchange.com/questions/194705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 8, "answer_id": 7 }
Is using a step function to limit the value of a function considered inelegant? I am a programmer, and recently we had some problems with a certain function $f$ that tends to infinity near 10, and we cannot have such great values as a result. A colleague was trying to tweak the original function to keep it close to a fixed number, say 10. I suggested using a step function (a programming if) to solve it quickly, but he said that it could be seen as sloppy or lazy. Considering that neither of us is a mathematician or a math guru: is this approach viewed negatively by mathematicians? Edit: Thanks a lot for your responses. Sadly I do not recall the exact function, and my colleague is absent; as soon as he is back I will ask him for the formula.
"You know we all became mathematicians for the same reason: we were lazy." -- Max Rosenlicht
{ "language": "en", "url": "https://math.stackexchange.com/questions/194762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate $\lim\limits_{x\to a}\frac{a^{a^{x}}-{a^{x^{a}}}}{a^x-x^a}$ Please help me solve $\displaystyle\lim_{x\to a}\frac{a^{a^{x}}-{a^{x^{a}}}}{a^x-x^a}$.
Since this is calculus why not try with L'Hospital? $$\lim_{x\to a}\frac{a^{a^x}-a^{x^a}}{a^x-x^a}=\lim_{x\to a}\frac{a^xa^{a^x}\log^2a-ax^{a-1}\log a\cdot a^{x^a}}{a^x\log a-ax^{a-1}}=$$ $$=\frac{a^aa^{a^a}\log^2a-a^a\log a\cdot a^{a^a}}{a^a\log a-a^a}=\frac{a^{a^a}\log^2a-\log a\cdot a^{a^a}}{\log a -1}=a^{a^a}\log a$$
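A quick numeric spot-check of the result, evaluating the ratio close to $x=a$ for the illustrative choice $a=2$ (where $a^{a^a}\log a = 16\ln 2 \approx 11.09$):

```python
from math import log

a = 2.0
f = lambda x: (a**(a**x) - a**(x**a)) / (a**x - x**a)

# near x = a the ratio should settle at a^(a^a) * ln(a)
print(f(a + 1e-6), a**(a**a) * log(a))
```

Don't push the offset much below $10^{-6}$ in floating point: both numerator and denominator vanish at $x=a$, so the cancellation eventually dominates.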
{ "language": "en", "url": "https://math.stackexchange.com/questions/194813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 3 }
Counterexamples to "Naive Induction" I was teaching a nine-year-old friend about prime numbers. When I asked him if he thought there were finitely or infinitely many primes, he answered confidently that there must be an infinite number. "How do you know?" I asked. "Because I can keep thinking up larger and larger primes. It's easy!" By way of proof, he came up with a new larger prime. I call this "Naive Induction" (there might be a better term). I am looking for not-too-complicated counterexamples where

* It appears to be the case that there are infinitely many members of a set, or (equivalently) that some property is true for all integers, but
* It can be demonstrated that there is a largest member of the set, or a largest number with some property.

Any suggestions? Thanks.
This isn't what you're looking for, but it's semi-related. Euler's polynomial $n^2 + n + 41$ generates primes for the integers $n = 0$ to $39$. If you just start plugging in numbers and checking, it seems that it always gives a prime!
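The streak, and where it breaks, are easy to check (at $n=40$ the value is $40^2+40+41 = 41^2$):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# prime for every n from 0 to 39 ...
print(all(is_prime(n*n + n + 41) for n in range(40)))        # True
# ... but composite at n = 40, since 40^2 + 40 + 41 = 41^2
print(is_prime(40*40 + 40 + 41), 40*40 + 40 + 41 == 41*41)   # False True
```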
{ "language": "en", "url": "https://math.stackexchange.com/questions/194879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Prove: If $a\mid m$ and $b\mid m$ and $\gcd(a,b)=1$ then $ab\mid m\,$ [LCM = product for coprimes] Prove: If $a\mid m$ and $b\mid m$ and $\gcd(a,b)=1$ then $ab\mid m$ I thought that $m=ab$ but I was given a counterexample in a comment below. So all I really know is $m=ax$ and $m=by$ for some $x,y \in \mathbb Z$. Also, $a$ and $b$ are relatively prime since $\gcd(a,b)=1$. One of the comments suggests to use Bézout's identity, i.e., $aq+br=1$ for some $q,r\in\mathbb{Z}$. Any more hints? New to this divisibility/gcd stuff. Thanks in advance!
Hint $\rm\, \ a,b\mid m\iff ab\mid am,bm \!\!\overset{\rm\color{#0a0} U\!\!}\iff ab\mid \overbrace{(am,bm)}^{\large \color{#c00}{ (a,\,b)\,m}}\!\iff ab/(a,b)\mid m$ Remark $\ $ We used $\rm\color{#0a0} U$= gcd Universal Property and the gcd distributive law $\rm\:\color{#c00}{(a,b)\,c} = (ac,bc).\ $ If above we use Bezout's Identity to replace the gcd $\rm\:(a,b)\:$ by $\rm\:j\,a + k\,b\:$ (its linear representation) then we obtain the proof by Bezout in lhf's answer (here in divisibility language). The above proof is more general than Bezout-based proofs since there are rings with gcds not of linear (Bezout) form, e.g. $\,\rm \Bbb Z[x,y]\,$ the ring of polynomials in $\,\rm x,y\,$ with integer coefficients, where $\,\rm gcd(x,y) = 1\,$ but $\rm\, x\, f + y\, g\neq 1\,$ (else evaluating at $\rm\,x,y = 0\,$ yields $\,0 = 1).\,$ The proof shows that $\rm\ a,b\mid m\iff ab/(a,b)\mid m,\ $ i.e. $\ \rm lcm(a,b) = ab/(a,b)\ $ using the universal definition of lcm. $ $ The OP is the special case $\rm\,(a,b)= 1.$
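The equivalence the hint establishes, $a,b\mid m \iff ab/(a,b)\mid m$, can be spot-checked exhaustively on a small range (illustrative only; the proof above is what actually covers all integers):

```python
from math import gcd

def check(a, b, bound=200):
    """m is divisible by both a and b exactly when lcm(a,b) = a*b/gcd(a,b) divides m."""
    l = a * b // gcd(a, b)
    for m in range(1, bound):
        assert (m % a == 0 and m % b == 0) == (m % l == 0)

for a in range(1, 20):
    for b in range(1, 20):
        check(a, b)
print("lcm(a,b) = ab/gcd(a,b) verified on a small range")
```

The OP's special case is $\gcd(a,b)=1$, where the divisor $ab/(a,b)$ is simply $ab$.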
{ "language": "en", "url": "https://math.stackexchange.com/questions/194961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
What is the number of equivalence classes formed by a merge I have this question related to the number of equivalence classes of equivalence relations. If $R_1$ and $R_2$ are two equivalence relations on a set $A$ with number of equivalence classes of $R_1 = n_1$ and number of equivalence classes of $R_2= n_2$. What is the number of equivalence classes formed by $R_1 \cap R_2\,$?
$R_1\cap R_2$ could have as few as $\max\{n_1,n_2\}$ and as many as $n_1n_2$ equivalence classes, depending on how $R_1$ and $R_2$ interact. Let $n$ be the number of equivalence classes of $R_1\cap R_2$.

* If $R_1\subseteq R_2$, then $n_2\le n_1$, and $\langle x,y\rangle\in R_1\cap R_2$ iff $\langle x,y\rangle\in R_1$, so $n=\max\{n_1,n_2\}$.
* If every $R_1$-class has non-empty intersection with every $R_2$-class, then $R_2$ divides each $R_1$-class into $n_2$ $(R_1\cap R_2)$-classes, and $n=n_1n_2$.

Added: A simple example of the latter case is given by $A=\{0a,0b,0c,1a,1b,1c\}$, where for $\alpha,\beta\in A$ we set $\alpha\,R_1\,\beta$ if $\alpha$ and $\beta$ have the same first character, and $\alpha\,R_2\,\beta$ if $\alpha$ and $\beta$ have the same second character. In the table below, the rows within the body of the table are the $R_1$-equivalence classes, the columns within the body of the table are the $R_2$-equivalence classes, and each of the six cells within the body of the table is an $(R_1\cap R_2)$-equivalence class. $$\begin{array}{c|ccc} &a&b&c\\ \hline 0&0a&0b&0c\\ 1&1a&1b&1c \end{array}$$ Here $n_1=2$, $n_2=3$, and $n=6$. (In fact in this case $R_1\cap R_2$ is the identity relation on $A$.) Added2: $\operatorname{trans}(R_1\cup R_2)$ can have as few as $1$ and as many as $\min\{n_1,n_2\}$ equivalence classes. In the example that I added above, the transitive closure of $R_1\cup R_2$ has only one equivalence class.
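The six-element example can be checked mechanically, representing each relation as a set of ordered pairs (a small illustrative script):

```python
A = ["0a", "0b", "0c", "1a", "1b", "1c"]

# R1: same first character; R2: same second character
R1 = {(p, q) for p in A for q in A if p[0] == q[0]}
R2 = {(p, q) for p in A for q in A if p[1] == q[1]}

def num_classes(R):
    """Number of equivalence classes: each element's class is the set of
    elements related to it, collected without duplicates."""
    return len({frozenset(q for q in A if (p, q) in R) for p in A})

print(num_classes(R1), num_classes(R2), num_classes(R1 & R2))  # 2 3 6
```

Note that intersecting the relations as sets of pairs is exactly $R_1\cap R_2$, so the count $2\cdot 3 = 6$ matches the table.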
{ "language": "en", "url": "https://math.stackexchange.com/questions/195026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Need a crash course in Fourier analysis, recommend resources I need to be able to understand everything about Fourier analysis ASAP. Could you recommend one or two references or books that are considered 'the book' for learning this subject?
It really depends on what level of Fourier analysis you're talking about, and whether you're coming at it from the applied (for example, how to use Fourier series or transforms in solving PDE's) or the pure real/harmonic/functional analysis sides. On the pure side, I'd recommend Edwards, "Fourier Series: A Modern Introduction" and Rudin, "Fourier Analysis on Groups",
{ "language": "en", "url": "https://math.stackexchange.com/questions/195071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
rotating 2D coordinates I've tried googling this, but I always end up somewhere that just says it's easy. Anyhow, I have a coordinate system where I need to rotate a bunch of points. It's all 2D. The coordinates vary and so do the angles, but the point to rotate around is always $(0,0)$. I've looked a bit at this post, but I haven't really been able to make it work: https://stackoverflow.com/questions/2259476/rotating-a-point-about-another-point-2d It mentions subtracting the pivot point, but should I subtract the distance to the pivot point? Since my pivot point is $(0,0)$ it sounds too easy to just subtract 0 (and it also doesn't give me the results I expect). As an example, I have a point at $(2328.30755138590, 1653.74059364716)$ (very accurate, I know). I need to rotate it $5.50540590872993$ degrees around $(0,0)$. I would expect it to end up at $(2339.68319170805, 1878.18099075262)$ (based on a rotation in my CAD software), but I don't really see how I can get it to do that. Actually, I need to rotate it around $(0, 1884.25802838571)$. Sorry. I had gotten some coordinate systems mixed up.
Let's say you want to rotate a point $P = (a, b)$ about a point $Q = (c, d)$ by an angle $\theta_0$. You first calculate the co-ordinates of $P$ taking $Q$ as the origin, which are $(a-c, b-d)$. Convert this representation of $P$ from the cartesian co-ordinate system to the polar co-ordinate system. The polar co-ordinates of $P$ (taking $Q$ as origin) are $$\left (\sqrt{(a-c)^2+(b-d)^2},\ \theta \right ),\qquad \theta=\operatorname{atan2}(b-d,\,a-c),$$ where $\operatorname{atan2}$ is the two-argument arctangent (using $\sin^{-1}$ alone loses the quadrant). All you have to do now is increase the angle by $\theta_0$. Lastly, convert the polar co-ordinates back to cartesian co-ordinates using $(r, \theta) \mapsto (r\cos\theta, r\sin\theta)$. But note that these co-ordinates still take $Q$ as the origin; to find the co-ordinates w.r.t. the real origin, just add $c$ and $d$ to the $x$ and $y$ co-ordinates respectively. Note: to rotate a point $(a,b)$ in cartesian co-ordinates by an angle $\phi$ you just multiply the matrix $\left[ \begin{array}{ c c } \cos\phi & -\sin\phi \\ \sin\phi & \cos \phi \end{array} \right]$ by $\left[ \begin{array}{ c } a \\ b \end{array} \right]$; hence you can apply this matrix multiplication to $\left[ \begin{array}{ c } a-c \\ b-d \end{array} \right]$ and then add $c$ and $d$ to the $x$ and $y$ co-ordinates.
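Since you're a programmer, the whole recipe is only a few lines; here is a sketch (tested on clean angles, not on your CAD numbers, since the rotation direction your CAD package uses isn't stated):

```python
from math import atan2, cos, hypot, radians, sin

def rotate(point, pivot, angle_deg):
    """Rotate `point` about `pivot` by `angle_deg`, counter-clockwise."""
    a, b = point
    c, d = pivot
    # shift so the pivot is the origin, go to polar form ...
    r = hypot(a - c, b - d)
    theta = atan2(b - d, a - c)   # atan2 gets the quadrant right
    # ... add the angle, convert back, and shift the pivot back in
    phi = theta + radians(angle_deg)
    return (c + r * cos(phi), d + r * sin(phi))

print(rotate((1, 0), (0, 0), 90))    # approximately (0, 1)
print(rotate((2, 1), (1, 1), 180))   # approximately (0, 1)
```

If your CAD software rotates clockwise, negate the angle.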
{ "language": "en", "url": "https://math.stackexchange.com/questions/195141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Inequality. $\sum{(a+b)(b+c)\sqrt{a-b+c}} \geq 4(a+b+c)\sqrt{(-a+b+c)(a-b+c)(a+b-c)}.$ Let $a,b,c$ be the side-lengths of a triangle. Prove that: I. $$\sum_{cyc}{(a+b)(b+c)\sqrt{a-b+c}} \geq 4(a+b+c)\sqrt{(-a+b+c)(a-b+c)(a+b-c)}.$$ What I have tried: \begin{eqnarray} a-b+c&=&x\\ b-c+a&=&y\\ c-a+b&=&z. \end{eqnarray} So $a+b+c=x+y+z$ and $2a=x+y$, $2b=y+z$, $2c=x+z$, and our inequality becomes: $$\sum_{cyc}{\frac{\sqrt{x}\cdot(x+2y+z)\cdot(x+y+2z)}{4}} \geq 4\cdot(x+y+z)\cdot\sqrt{xyz}. $$ Or, if we introduce one more notation, $S=x+y+z$, we obtain: $$\sum_{cyc}{\sqrt{x}(S+y)(S+z)} \geq 16S\cdot \sqrt{xyz} \Leftrightarrow$$ $$S^2(\sqrt{x}+\sqrt{y}+\sqrt{z})+S(y\sqrt{x}+z\sqrt{x}+x\sqrt{y}+z\sqrt{y}+x\sqrt{z}+y\sqrt{z})+xy\sqrt{z}+yz\sqrt{x}+xz\sqrt{y} \geq 16S\sqrt{xyz}.$$ To complete the proof we would have to prove that: $$y\sqrt{x}+z\sqrt{x}+x\sqrt{y}+z\sqrt{y}+x\sqrt{z}+y\sqrt{z} \geq 16\sqrt{xyz}. $$ Is this last inequality true? II. Knowing that $$p=\frac{a+b+c}{2},$$ we can rewrite the inequality: $$\sum_{cyc}{(2p-c)(2p-a)\sqrt{2(p-b)}} \geq 8p \sqrt{2^3 \cdot (p-a)(p-b)(p-c)} \Leftrightarrow$$ $$\sum_{cyc}{(2p-c)(2p-a)\sqrt{(p-b)}} \geq 16p \sqrt{(p-a)(p-b)(p-c)}$$ Does this help? Thanks :)
I couldn't solve this with elementary means. Take $\sum_{cyc}{\sqrt{x}(S+y)(S+z)} \geq 16S\cdot \sqrt{xyz} $ and note that it is enough to prove it for $S=1$ because it is homogeneous. Substituting $x\to x^2$, etc., consider the function $f(x,y,z)=(x^2+1) (y^2+1) z+(x^2+1) y (z^2+1)+x (y^2+1) (z^2+1)-16 x y z$. It is enough to prove it is non-negative if $\sum_{cyc}{x^2}=1$. Let's denote $p[i]=x^i+y^i+z^i$. Now notice that the following decomposition is valid: $$12f(x,y,z)=12 p[1] - 32 p[1]^3 + p[1]^5 + 108 p[1] p[2] - 4 p[1]^3 p[2] + 3 p[1] p[2]^2 - 76 p[3] + 2 p[1]^2 p[3] - 2 p[2] p[3]$$ Using our assumption $p[2]=1$ this simplifies to $12f(x,y,z)=123 p[1] - 36 p[1]^3 + p[1]^5 - 78 p[3] + 2 p[1]^2 p[3]$. Let's put $x=p[1]$ and $a=p[3]$. Because $p[2]=1$ we can deduce the ranges for $a$ and $x$ as $1 \ge p[3] \ge 1/\sqrt{3}$ and $ \sqrt{3}\ge p[1] \ge 1$. Also we need to keep in mind that from the power means inequality $p[1]/3 \le \sqrt[3]{p[3]/3}$. We need to prove that the function $g(x)=123 x - 36 x^3 + x^5 - 78 a + 2 a x^2$ (which is $12f$) is non-negative in our domain. It turns out that this problem is numerically tractable: this function is decreasing within our domain and becomes zero only when $a=\sqrt{3}/3$ and $x=\sqrt{3}$. The calculations however are quite tedious. I am sure that a better way to solve this exists.
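The long polynomial decomposition above is easy to mistype, so here is a quick numerical sanity check of the identity $12f = 12p_1 - 32p_1^3 + \dots$ at a few sample points (my own addition; pure Python, no computer algebra, and the sample points are arbitrary):

```python
def f(x, y, z):
    # the function from the answer
    return ((x*x + 1) * (y*y + 1) * z + (x*x + 1) * y * (z*z + 1)
            + x * (y*y + 1) * (z*z + 1) - 16 * x * y * z)

def decomposition(x, y, z):
    # the claimed power-sum decomposition of 12 f
    p1, p2, p3 = x + y + z, x*x + y*y + z*z, x**3 + y**3 + z**3
    return (12*p1 - 32*p1**3 + p1**5 + 108*p1*p2 - 4*p1**3*p2
            + 3*p1*p2**2 - 76*p3 + 2*p1**2*p3 - 2*p2*p3)

for point in [(1, 1, 1), (1, 2, 3), (0.3, 0.7, 1.9)]:
    assert abs(12 * f(*point) - decomposition(*point)) < 1e-9
print("decomposition agrees at all sampled points")
```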
{ "language": "en", "url": "https://math.stackexchange.com/questions/195185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Average Distance Between Random Points on a Line Segment Suppose I have a line segment of length $L$. I now select two points at random along the segment. What is the expected value of the distance between the two points, and why?
It is possible to solve this using limits, however the solution is somewhat inelegant. Split the line L into $(k+1)$ equal segments by placing points $p_1$, $p_2$, ... $p_{k}$ on the line. The length of each segment is $\dfrac L{k+1}$. Let $S_i$ be the sum of distances from point $p_i$ to all other points. $S_i$ = $ \dfrac L{k+1} \sum_{j=1}^{i-1} j + \dfrac L{k+1} \sum_{j=1}^{k-i} j $ $S_i$ = $\dfrac L{2(k+1)} ( (k-i) (k-i+1) + (i-1) (i) )$ $S_i$ = $\dfrac L{2(k+1)} ( 2i^2 - 2i(k+1) + k(k+1) )$ Now, the sum of all distances from every point to every other point is $\sum_{i=1}^{i=k} S_i$. Also, the number of ordered pairs of points (a point may be paired with itself, at distance $0$, matching two independent choices) is $k^2$. Therefore the average $A_k$ is $\dfrac{\sum_{i=1}^{i=k} S_i}{k^2}$ $\sum_{i=1}^{i=k} S_i = \dfrac L{2(k+1)} \sum_{i=1}^{i=k} ( 2i^2 - 2i(k+1) + k(k+1) ) $ $A_k = \dfrac L{2(k+1)k^2} \sum_{i=1}^{i=k} ( 2i^2 - 2i(k+1) + k(k+1) ) $ $A_k = \dfrac {2L}{2(k+1)k^2} \dfrac{k(k+1)(2k+1)}6 - \dfrac {2L(k+1)}{2(k+1)k^2} \dfrac{k(k+1)}2 + \dfrac {L(k(k+1))}{2(k+1)k^2}k $ $A_k = \dfrac {L(2k+1)}{6k} - \dfrac {L(k+1)}{2k} + \dfrac {L}{2} $ $A_k = \dfrac {L}{6k} (2k+1 - 3k-3 + 3k) $ $A_k = \dfrac {L}{3k} (k-1) $ $A_k = \dfrac L3 - \dfrac L{3k} $ Now, we just need to make the line segments infinitely small, i.e. infinitely large $k$, and we're done. $\lim_{k \to +\infty} A_k$ = $ \dfrac L3 - \lim_{k \to +\infty} \dfrac {L}{3k} = \dfrac L3 \space\blacksquare $
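As a sanity check on the limit $L/3$, a short Monte Carlo simulation (my own addition, shown here with $L=1$):

```python
import random

def average_distance(L=1.0, trials=100_000, seed=0):
    """Estimate the mean distance between two independent uniform
    points on a segment of length L."""
    rng = random.Random(seed)
    return sum(abs(rng.uniform(0, L) - rng.uniform(0, L))
               for _ in range(trials)) / trials

# Should be close to L/3 = 0.333... for L = 1.
print(average_distance())
```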
{ "language": "en", "url": "https://math.stackexchange.com/questions/195245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 11, "answer_id": 5 }
Open cover rationals proper subset of R? If I were to cover each rational number by a non-empty open interval, would their union always be R? It seems correct to me intuitively, but I am quite certain it is wrong. Thanks
Take an irrational number $x \in \mathbb{R} - \mathbb{Q}$. If $q \in \mathbb{Q}$ is any rational number then $\left| x - q \right| = r_q > 0$. The union $\displaystyle \bigcup_{q \in \mathbb{Q}} (q-r_q, q+r_q)$ then does not contain $x$, and so in particular does not cover $\mathbb{R}$. [Interestingly, it does cover $\mathbb{R}-\{x\}$; but you could change your intervals in the union to force this not to be the case.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/195313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 1 }
An analytic geometry question + algebra We have a Cartesian coordinate system with the points M (a,b) Q (4,2) and P (x,y) but I don't think you need P to solve this one, only M and Q. M is the middle of a circle with a radius r, and Q is a point on the circle (P is too, that's why I think P is redundant). What is the equation of this circle? I thought it was: (4-a)^2 + (2-b)^2 = r^2 Is this correct my friends?
$M(a,b)$ is the center of the circle and $r$ is the radius. A defining property of any circle is that every point on the circle is at the same distance $r$ from the center. So let $(x,y)$ be any point on the circle. What is the distance of that point from the center? $$d=\sqrt{(x-a)^2 + (y-b)^2} \quad \quad \text{Why ??}$$ We know that this distance is the same for every point on the circle and is equal to $r$. So $$\sqrt{(x-a)^2 + (y-b)^2} = r$$ $$(x-a)^2 + (y-b)^2 = r^2$$ Hence this is the equation of the circle with center at $(a,b)$ and radius $r$. (Your equation $(4-a)^2+(2-b)^2=r^2$ is the special case saying that the particular point $Q(4,2)$ lies on this circle.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/195380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Kernel of Linear Functionals Problem: Prove that for all non zero linear functionials $f:M\to\mathbb{K}$ where $M$ is a vector space over field $\mathbb{K}$, subspace $(f^{-1}(0))$ is of co-dimension one. Could someone solve this for me?
Since $f$ is nonzero, there is some $v_0 \in M$ with $f(v_0) \neq 0$; take $u = v_0/f(v_0)$, so that $f(u) = 1$. Then for any $v \in M$ you can write $v = f(v) u + w$ where $w \in f^{-1}(0)$, since $f(w) = f(v - f(v) u) = 0$. That says $M = {\mathbb K} u + f^{-1}(0)$, so $f^{-1}(0)$ has codimension $1$.
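A concrete numerical illustration of the decomposition $v = f(v)u + w$ (the functional and the vectors below are my own example on $\mathbb{R}^3$, not part of the answer):

```python
def f(v):
    # a sample nonzero linear functional on R^3
    return v[0] + 2 * v[1] + 3 * v[2]

u = (1.0, 0.0, 0.0)                    # chosen so that f(u) = 1
v = (4.0, -1.0, 2.0)                   # an arbitrary vector
w = tuple(vi - f(v) * ui for vi, ui in zip(v, u))
print(w, f(w))                         # w lands in the kernel: f(w) = 0.0
```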
{ "language": "en", "url": "https://math.stackexchange.com/questions/195504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Elevator probability problem An elevator in a building starts with five passengers and stops at seven floors. If each passenger is equally likely to get off on any floor and all the passengers leave independently of each other, what is the probability that no two passengers will get off at the same floor?
We can rephrase the question as: in how many ways can the passengers choose floors so that no two get off together? There are five passengers and seven floors. The first passenger has seven floors to choose from, the second has six, and so on, giving $7\cdot 6\cdot 5\cdot 4\cdot 3 = 2520$ favourable outcomes. The total number of ways five passengers can choose among seven floors is $7^5 = 16807$. The probability is therefore $2520/16807 = 360/2401 \approx 0.15$.
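The count described above works out as follows (a small check of the arithmetic, not part of the original answer):

```python
from math import perm

favourable = perm(7, 5)    # 7 * 6 * 5 * 4 * 3 ordered choices of distinct floors
total = 7 ** 5             # each of the 5 passengers independently picks a floor
print(favourable, total, favourable / total)
```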
{ "language": "en", "url": "https://math.stackexchange.com/questions/195574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When is the image of a null set also null? It is easy to prove that if $A \subset \mathbb{R}$ is null (has measure zero) and $f: \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz then $f(A)$ is null. You can generalize this to $\mathbb{R}^n$ without difficulty. Given a function $f: X \rightarrow Y$ between measure spaces, what are the minimal conditions (or additional structure) needed on $X$, $Y$ and $f$ for the image of a null set to be null? Any generalization (containing the above as a special case) is appreciated. Apparently if $X$ and $Y$ are $\sigma$-compact metric spaces with the $d$-dimensional Hausdorff measure and $f$ is locally Lipschitz then the result holds. Can we be more general? I would like to see something without a metric.
There is no condition on the map alone that could work for arbitrary measure spaces. If the measures can be arbitrary, rather than depending on the metric space structure of the underlying space (e.g. Hausdorff measure), then nothing about the map itself can tell you which sets will be null sets in the image measure space. For example, take some measure on the image space and a map from some other measure space which sends null sets to null sets; now add to the image measure a measure supported on the image of some null set. The map no longer sends null sets to null sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Publishing an article after a book? If I first publish an article, I may afterward publish a book containing material from the article. What about the reverse: if I first publish a book, does it make sense to publish a fragment of it as an article AFTERWARD?
To add to the previous answers, it's very important to realize that unless you self-publish a book or article, it is almost always the case that the publisher will hold the copyright. That will be part of the contract you sign with the publisher. That means that you can't legally extract any part of the first work for publishing elsewhere, even though you wrote it. To avoid being sued for copyright infringement in a case like this you must obtain permission from the publisher (to whom you signed over the copyright). In my experience, book publishers are generally happy to give you permission to extract part of your book for publishing elsewhere, as long as they don't see any negative impact on the sales of the original. It's not quite as straightforward for journal publishers: they can often be quite difficult when it comes to using your article as part of a compilation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Compute expectation of certain $N$-th largest element of uniform sample A premier B-school has 2009 students. The dean, a math enthusiast, asks each student to submit a randomly chosen number between 0 and 1. She then ranks these numbers in a list of decreasing order and decides to use the 456th largest number as the fraction of students that are going to get an overall pass grade this year. What is the expected fraction of students that get a passing grade? I am not able to think in any direction, as it is really difficult to comprehend.
A naive approximation would be: assume the $N=2009$ given numbers are equidistant within $[0,1]$. Then the $k$th smallest should be about $\frac{k-\frac12}N$, and hence the 456th largest = 1554th smallest $\approx \frac{1554-\frac12}{2009}=\frac{3107}{4018}\approx 0.773$. But this reasoning is a bit handwaving. To be precise you should evaluate, for $x\in[0,1]$: what is the probability density $p(x)$ that exactly 1553 guesses are $<x$, one guess is at $x$, and the remaining 455 are $>x$? Then the expected value we want is $\int_0^1 x p(x) dx$. (This is the density of an order statistic; it gives the exact value $\frac{1554}{2010}=\frac{259}{335}\approx 0.7731$.) Try this latter method for some smaller values of $N$ and $k$ and see how much it differs from the straightforward guess $\frac{k-\frac12}N$.
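A simulation of the 456th largest of 2009 uniform draws (my own check; the estimate should land near the order-statistic mean $1554/2010 \approx 0.7731$, and hence also near the naive guess):

```python
import random

def kth_largest_mean(n=2009, k=456, trials=1_000, seed=1):
    """Monte Carlo estimate of E[k-th largest of n iid Uniform(0,1) draws]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        draws = sorted(rng.random() for _ in range(n))
        total += draws[n - k]          # k-th largest = (n-k+1)-th smallest
    return total / trials

print(kth_largest_mean())              # should be near 1554/2010 = 0.7731...
```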
{ "language": "en", "url": "https://math.stackexchange.com/questions/195772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
In Need of Ideas for a Small Fractal Program I am a freshman in high school who needs a math related project, so I decided on the topic of fractals. Being an avid developer, I thought it would be awesome to write a Ruby program that can calculate a fractal. The only problem is that I am not some programming god, and I have not worked on any huge projects (yet). So I need a basic-ish fractal 'type' to do the project on. I am a very quick learner, and my math skills greatly outdo that of my peers (I was working on derivatives by myself last year). So does anybody have any good ideas? Thanks!!!! :) PS: my school requires a live resource for every project we do, so would anybody be interested in helping? :)
Mandelbrot set in Ruby : http://eigenjoy.com/2008/02/22/ruby-inject-and-the-mandelbrot-set/ See also : http://en.wikibooks.org/wiki/Fractals HTH
{ "language": "en", "url": "https://math.stackexchange.com/questions/195830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Integer solutions to $ x^2-y^2=33$ I'm currently trying to solve a programming question that requires me to calculate all the integer solutions of the following equation: $x^2-y^2 = 33$ I've been looking for a solution on the internet already but I couldn't find anything for this kind of equation. Is there any way to calculate and list the integer solutions to this equation? Thanks in advance!
Note that this is equivalent to solving $(y+n)^2-y^2=33$ where $x=y+n$, and that $(y+n)^2-y^2=(2y+n)n$. Since $33=1\cdot 33=3\cdot 11$, the choices $n=1,3,11,33$ give $2y+n=33,11,3,1$, i.e. $y=16,4,-4,-16$, and hence $(x,y)=(17,16),(7,4),(7,-4),(17,-16)$. Since only $x^2$ and $y^2$ appear in the equation, the signs of $x$ and $y$ may be flipped independently, giving the eight integer solutions $(\pm 17,\pm 16)$ and $(\pm 7,\pm 4)$.
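Since the original post mentions this is for a programming question, a brute-force enumeration confirms the list. The search bound below is my own choice; it is large enough because $x^2 = y^2 + 33$ forces $|x| \le 17$:

```python
# All integer pairs (x, y) with x^2 - y^2 = 33 in a box that
# provably contains every solution.
solutions = [(x, y)
             for x in range(-40, 41)
             for y in range(-40, 41)
             if x * x - y * y == 33]
print(solutions)
```

This prints the eight pairs $(\pm 7, \pm 4)$ and $(\pm 17, \pm 16)$.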
{ "language": "en", "url": "https://math.stackexchange.com/questions/195904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 1 }
Working with Clocks I have frequently seen problems like how many times between _ and _ will the minute and hour hand be together, or be 90 degrees apart. So if someone can give me a complete solution to the following three parts I would be grateful! a) How would we find the number of times the minute and hour hand are together from 12:00 a.m and 12:00 p.m? b) How many times will the minute and hour hand be diametrically opposite? c) How many times will the minute and hour hand be 90 degrees apart?
Hint: imagine placing the clock on a turntable that rotates very slowly, so that the hour hand doesn't move at all. The clock itself rotates once counter-clockwise every 12 hours. The hour hand doesn't move at all. How many times around does the minute hand go every 12 hours?
{ "language": "en", "url": "https://math.stackexchange.com/questions/196008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
how to calculate the exact value of $\tan \frac{\pi}{10}$ I have an extra homework problem: to calculate the exact value of $ \tan \frac{\pi}{10}$. From the WolframAlpha calculator I know that it's $\sqrt{1-\frac{2}{\sqrt{5}}} $, but I have no idea how to calculate that. Thank you in advance, Greg
Look at this: How to prove $\cos \frac{2\pi }{5}=\frac{-1+\sqrt{5}}{4}$? Since $\frac{2\pi }{5}+\frac{\pi}{10}=\frac{\pi}{2}$, you get $\sin \frac{\pi}{10}=\cos \frac{2\pi }{5}$, and from that $\tan \frac{\pi}{10}$.
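A one-line numerical confirmation of the claimed closed form (my own addition):

```python
import math

# tan(pi/10) and the closed form from the question should agree.
print(math.tan(math.pi / 10))                 # 0.3249196962...
print(math.sqrt(1 - 2 / math.sqrt(5)))        # the claimed value
```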
{ "language": "en", "url": "https://math.stackexchange.com/questions/196067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 1 }
Minimal polynomial of a matrix over a field I remember from my linear algebra courses that if I have a $n\times n$ matrix with coefficients in a field (denoted as $A$) and I have a polynomial $P $ over the field s.t. $P(A)=0$ and a decompostion $P=f(x)g(x)$ over the field then $f(A)=0$ or $g(A)=0$. This was used to calculate the minimal polynomial of $A$. My question is: Is the statement above that $f(A)=0$ or $g(A)=0$ is correct or maybe I remember wrong ? the reason I am asking is that there are non zero matrices $B,C$ s.t. $BC=0$ so I don't see how the conclusion was made
This isn't correct. For example, the matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ satisfies $p(A)=0$ where $p(t)=t^2$, however $p(t)=f(t)g(t)$ where $f(t)=g(t)=t$ and $f(A) \ne 0 \ne g(A)$. There are some similar-ish results that you might be thinking of. I'll list a few. Let $A$ be a matrix with characteristic polynomial $p$ and minimal polynomial $m$ over a given (well-behaved) field. * *$A$ satisfies its minimal polynomial; that is, $m(A) = 0$. *If $f$ is a polynomial then $f(A)=0$ if and only if $m\, |\, f$; in particular, $p(A)=0$. *If $m(t)=f(t)g(t)$ and $f(A)=0$ then $g$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/196109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove $\frac{1}{a(1+b)} + \frac{1}{b(1+c)} + \frac{1}{c(1+a)} \geq \frac{3}{ (abc)^{\frac{1}{3}}\big( 1+ (abc)^{\frac{1}{3}}\big) }$ using AM-GM I need to prove this inequality by the AM-GM method. Any ideas how to do it? $$\frac{1}{a(1+b)} + \frac{1}{b(1+c)} + \frac{1}{c(1+a)} \geq \frac{3}{ (abc)^{\frac{1}{3}}\big( 1+ (abc)^{\frac{1}{3}}\big) }$$
We may assume without loss of generality that $abc = k^3$, which enables us to make the substitution $\displaystyle a = \frac{kq}{p}, b = \frac{kr}{q}, c = \frac{kp}{r}$. Now, $\displaystyle a(1+b) = \frac{kq}{p} \left(1+ \frac{kr}{q} \right) = \frac{k(q+kr)}{p}$. Thus, the inequality reduces to proving (after cancelling $k$ from the denominator on both sides) $$\frac{p}{q+kr} +\frac{q}{r+kp} +\frac{r}{p+kq} \ge \frac{3}{1+k} $$ By Cauchy Schwarz, we get $$\left(\frac{p}{q+kr} +\frac{q}{r+kp} +\frac{r}{p+kq} \right) ( p(q+kr) + q(r+kp) + r(p+kq)) \ge (p+q+r)^2$$ Thus, we have that $$\frac{p}{q+kr} +\frac{q}{r+kp} +\frac{r}{p+kq} \ge \frac{(p+q+r)^2}{(1+k)(pq+qr+rp)} \ge \frac 3{1+k}$$ due to the well known $(p+q+r)^2 \ge 3(pq+qr+rp)$
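A random-sampling check of the original inequality (my own addition; not a proof, just a sketch to build confidence, and the sampling range is arbitrary):

```python
import random

rng = random.Random(0)
for _ in range(10_000):
    a, b, c = (rng.uniform(0.1, 5.0) for _ in range(3))
    lhs = 1 / (a * (1 + b)) + 1 / (b * (1 + c)) + 1 / (c * (1 + a))
    g = (a * b * c) ** (1 / 3)        # the geometric mean (abc)^(1/3)
    rhs = 3 / (g * (1 + g))
    assert lhs >= rhs - 1e-12         # small tolerance for float rounding
print("inequality holds on all sampled triples")
```

Equality occurs at $a=b=c$, e.g. $a=b=c=1$ gives both sides equal to $3/2$.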
{ "language": "en", "url": "https://math.stackexchange.com/questions/196176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Problem in proving a set is a Sigma Algebra Let $(\Omega,\mathscr A)$ be a measurable space. If $\varnothing \subset X \subset \Omega$, let $$\mathscr F = \{ F \subseteq \Omega, F = X \cap Y, Y \in \mathscr A\} \;. $$ I need to prove that $\mathscr F$ is a $ \sigma$-Algebra on $X$. So, I have to show that * *$\varnothing \in \mathscr F$ *If $F \in \mathscr F$, then $F^C \in \mathscr F $ *If $F_i \in \mathscr F$, then $\bigcup_{i=1}^\infty F_i \in \mathscr F $ I have trouble showing conditions 2 and 3.
HINTS: For both (2) and (3), note that $F\in\mathscr{F}$ iff there is a $Y_F\in\mathscr{A}$ such that $F=X\cap Y_F$. (2) What is $X\cap(\Omega\setminus Y_F)$? (3) What is $X\cap\bigcup_iY_{F_i}$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/197263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Epsilon-Delta Proof for $x^n$ tends to 0 What is the epsilon proof that $x^n \rightarrow 0$ as $n \rightarrow \infty$ provided $|x| < 1 $? I only know it's true because I know the geometric series converges, which implies its terms must tend to 0, but never seen an epsilon proof of this simple fact.
If $x=0$ the statement is trivial, so assume $0<|x|<1$ and let $|x|=\dfrac{1}{1+t}$. Since $0<|x|\lt 1$, we have $t\gt 0$. By the Binomial Theorem, $(1+t)^n \ge 1+nt\gt nt$, so $|x^n|\lt\dfrac{1}{nt}$. Now it is easy to find $N$ such that if $n \gt N$, then $|x^n|\lt\dfrac{1}{nt}\lt \epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
If $\sum a_n$ converges, then $\sum \sqrt{a_na_{n+1}}$ converges Prove that if the positive term series $\sum^{\infty}_{n=1}a_n$ is convergent, also $\sum^{\infty}_{n=1}\sqrt{a_na_{n+1}}$ is convergent. Prove that if the positive term series $\sum^{\infty}_{n=1}a_n$ and $\sum^{\infty}_{n=1}b_n$ are convergent, also $\sum^{\infty}_{n=1}a_nb_n$ is convergent. I've tried to solve it using the comparison test, but with no results.
One can also do these problems with Cauchy-Schwarz which states that for $a_n,b_n\geq 0,$ we have $\displaystyle \sum a_n b_n \leq \left(\sum a_n^2 \right)^{1/2} \left(\sum b_n^2 \right)^{1/2}.$ The second problem then follows by the argument in the comments of Brian's answer. It also gives $$\sum \sqrt{a_n a_{n+1} } \leq \left(\sum a_n \right)^{1/2} \left(\sum a_{n+1} \right)^{1/2} $$ which solves the first problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
$\mathbb{R}$ represented using an infinite union of finite sets containing reals? If $S_1$, $S_2$, $\dots$ are sets of real numbers and if $\bigcup_{j=1}^{\infty}{S_j} = \mathbb{R}$ then one of the sets $S_j$ must have infinitely many elements. I believe at least one of the $S_j$ must be an infinite set, but I can't work out a proof. What's the trick I'm missing?
Suppose that all sets were finite, the union of countably many finite sets is countable, but the real numbers are not. [This argument uses the axiom of choice, however it is true without the axiom of choice that a countable union of finite sets of real numbers is countable]
{ "language": "en", "url": "https://math.stackexchange.com/questions/197455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to prove that $\lim_{n \to \infty} n x^{n} = 0 $ when $0Intuitively it's easy, but hard to prove by the epsilon-delta method: $$ \lim_{n \to \infty} n x^{n} = 0$$
We have $nx^n=\exp(\log n)\exp(n\log x)=\exp(n\log x+\log n)$, so it's enough to show that $n\log x+\log n\to -\infty$ as $n\to +\infty$. We use the fact that $\log n\leq \sqrt n$ for $n$ large enough to see that $$n\log x+\log n\leq n\log x+\sqrt n=n\left(\log x+\frac 1{\sqrt n}\right).$$ As $\log x<0$, $\log x+\frac 1{\sqrt n}<\frac{\log x}2$ for $n$ large enough hence $$n\log x+\log n\leq n\frac{\log x}2,$$ which gives the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 9, "answer_id": 0 }
Reduction of polynomials mod p 1-Let $f_1,f_2\in\mathbb{Z}[X]$ be two different irreducible monic polynomials. Is it true that for almost all primes $p$ (that is, for all but a finite number of primes), the polynomials $\bar{f}_1$ and $\bar{f}_2$ have no common roots in $\mathbb{F}_p$? (here $\bar{f}$ means reduction modulo $p$).
By Gauss' lemma, $f_1$ and $f_2$ remain irreducible over $\mathbb{Q}[x]$. The Euclidean algorithm then gives polynomials $g_1,g_2\in\mathbb{Q}[x]$ such that $g_1(x)f_1(x)+g_2(x)f_2(x)=1$. For all primes $p$ which do not divide the denominator of any coefficient of $g_1$ or $g_2$, we can reduce the above equation mod $p$, and conclude that $\overline{f}_1$ and $\overline{f}_2$ are coprime, so don't share a root.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Problem understanding “and”,“or” and importance of “()” in set theory I was reading the distributive law of sets (I keep coming back to basic maths when needed, forget it after some time, then come back again. Like I'm in a loop): $A\cup(B \cap C)=(A\cup B)\cap(A\cup C)$ The proof (which I'm assuming everyone knows) has a transition between lines which baffled me, namely: $x \in A$ or ($x\in B$ and $x\in C$) ($x\in A$ or $x\in B$) and ($x\in A$ or $x \in C$) In the second line, did they just apply the distributive law? (in the proof of the distributive law itself Oo) Or did they simply assume "and" behaves like "+" etc., as in the following: $2 \times (A + B) \equiv (2\times A) + (2\times B)$ Another question would be: ($x\in A$ or $x\in B$) or $x\in C\implies x\in A$ or ($x\in B$ or $x\in C$) Can I just open the brackets?
It does seem quite naughty to do this sort of thing; but yes, you are allowed. But only if you know what you're doing. However, it does not assume the result. $\cap$ and $\cup$ are set operations, whereas 'and' and 'or' are logical operations. More precisely, in logic, we can write $p \wedge q$ to mean '$p$ and $q$' and $p \vee q$ to mean '$p$ or $q$'. By use of truth-tables, we can prove things like $p \vee (q \wedge r) \leftrightarrow (p \vee q) \wedge (p \vee r)$, and these proofs have nothing specifically to do with set theory. But then these logical rules transfer across by setting $p$ to be the assertion $x \in A$, $q$ to be the assertion $x \in B$, and so on. $+$ and $\times$ are arithmetic operators and thus yet another thing altogether from $\cap$,$\cup$ and $\wedge$,$\vee$.
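The truth-table verification mentioned above can be done mechanically; a tiny script of my own that checks all eight truth assignments:

```python
from itertools import product

# Verify p OR (q AND r)  <=>  (p OR q) AND (p OR r) by exhaustion.
for p, q, r in product([False, True], repeat=3):
    assert (p or (q and r)) == ((p or q) and (p or r))
print("the distributive equivalence holds for all 8 truth assignments")
```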
{ "language": "en", "url": "https://math.stackexchange.com/questions/197636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Specific homotopy between complex conjugation and the identity. Consider the set $\mathcal{C} = C^{\infty}(\mathbb{C}^*, \mathbb{C}^*)$, where $\mathbb{C}^* = \mathbb{C}\backslash\{0\}$. Both $f(z) = z$ and $g(z) = \bar{z}$ can be seen as elements in $\mathcal{C}$. Question: Is there a (smooth, not necessarily analytic) homotopy $H : [0, 1] \times \mathbb{C}^* \rightarrow \mathbb{C}^*$ between $f$ and $g$? I tried what seemed to me like natural choices, such as deforming the imaginary part, but the problem is to avoid producing some function which maps a non-zero complex number to zero. Motivation: In case anyone is wondering, this problem arises in showing that two complex line bundles over the $2$-sphere are (smoothly) isomorphic. The bundles are $L_g^*$ and $L_{1/g}$, where $g : \mathbb{C}^* \rightarrow \mathbb{C}^*$ is the gluing cocycle (there is only one, since the $2$-sphere is covered by two stereographic projections). Thanks.
Community verdict (from comments by t.b. and Michael): $z$ and $\bar z$ are not homotopic in $C(\mathbb C^*,\mathbb C^*)$. More precisely, the set of homotopy classes of continuous maps $h\in C(\mathbb C^*,\mathbb C^*)$ is $\{[z\mapsto z^n]:n\in\mathbb Z\}$, and all classes $[z\mapsto z^n]$ are distinct (distinguished by their action on the fundamental group). The complex conjugation belongs to $[z\mapsto z^{-1}]$. At least tangentially related reference: Stein Manifolds and Holomorphic Mappings: The Homotopy Principle in Complex Analysis by Franc Forstnerič.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$\int\frac{x^3}{\sqrt{4+x^2}}$ I was trying to calculate $$\int\frac{x^3}{\sqrt{4+x^2}}\,dx$$ Doing $x = 2\tan(\theta)$, $dx = 2\sec^2(\theta)~d\theta$, $-\pi/2 < \theta < \pi/2$ I have: $$\int\frac{\left(2\tan(\theta)\right)^3\cdot2\cdot\sec^2(\theta)~d\theta}{2\sec(\theta)}$$ which is $$8\int\tan(\theta)\cdot\tan^2(\theta)\cdot\sec(\theta)~d\theta$$ Now I got stuck ... any clues what's the next substitution to do? I'm sorry for the formatting. Could someone please help me with the formatting?
Let $u = x^2 + 4$, $du = 2x\,dx$: \begin{align*} I &= \frac{1}{2} \int \frac{u - 4}{\sqrt{u}}du \end{align*} Should be easy to take it from there.
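Carrying the hint through (this completion is my own working, not part of the answer): $\frac12\int\frac{u-4}{\sqrt u}\,du = \frac13 u^{3/2}-4u^{1/2}+C$, i.e. $F(x)=\frac13(x^2+4)^{3/2}-4\sqrt{x^2+4}+C$. A simple midpoint-rule check compares $F(2)-F(0)$ against the integral numerically:

```python
import math

def F(x):
    # candidate antiderivative from u = x^2 + 4: (1/3) u^{3/2} - 4 u^{1/2}
    u = x * x + 4
    return u ** 1.5 / 3 - 4 * math.sqrt(u)

def integrand(x):
    return x ** 3 / math.sqrt(4 + x * x)

n, a, b = 100_000, 0.0, 2.0
h = (b - a) / n
midpoint = sum(integrand(a + (i + 0.5) * h) * h for i in range(n))
print(midpoint, F(b) - F(a))           # the two numbers should agree closely
```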
{ "language": "en", "url": "https://math.stackexchange.com/questions/197744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
What is the value of $w+z$ if $1<w<x<y<z$ and $wxyz=770$? I am having trouble solving the following problem: If the product of the integers $w,x,y,z$ is $770$, and if $1<w<x<y<z$, what is the value of $w+z$? (ans $=13$) Any suggestions on how I could solve this problem? Thanks in advance!
The number $770$ is the product of the prime numbers $2,5,7,11$, and $1<2<5<7<11$. Thus, the answer is $2+11 = 13$. Hope this helps
{ "language": "en", "url": "https://math.stackexchange.com/questions/197820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Maclaurin expansion of $\arcsin x$ I'm trying to find the first five terms of the Maclaurin expansion of $\arcsin x$, possibly using the fact that $$\arcsin x = \int_0^x \frac{dt}{(1-t^2)^{1/2}}.$$ I can only see that I can interchange differentiation and integration but not sure how to go about this. Thanks!
The following Python + SymPy script: from sympy import *; x = Symbol('x'); print(series(asin(x), x, 0, 11)) produces the following output: x + x**3/6 + 3*x**5/40 + 5*x**7/112 + 35*x**9/1152 + O(x**11) Therefore, assuming that the implementation is correct, the answer to your question is: $$\displaystyle\arcsin(x) \approx x + \frac{1}{6} x^3 + \frac{3}{40} x^5 + \frac{5}{112} x^7 + \frac{35}{1152} x^9$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/197874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 8, "answer_id": 1 }
Quotient Riemann surfaces Let $\mathbb{H}$ be an upper half plane (this is a Riemann surface), then $PSL(2,\mathbb{Z})$ acts on $\mathbb{H}$ and it is well-know that $$ \mathbb{H}/PSL(2,\mathbb{Z})\cong \mathbb{C} $$ is again a Riemann surface. There are however three fixed points on $\mathbb{H}$, namely $$ e^{\frac{2\pi i}{6}}, \ \ \ i, \ \ \ e^{\frac{2\pi i}{3}}. $$ I hence think the image of these points should be considered as orbifold point of $\mathbb{H}/PSL(2,\mathbb{Z})$. I know there is a map $z\mapsto z^n$ which gives a new local parameter at each orbifold point, but is it really natural? It seems to me that this new chart is not really natural as it is not conformal at the orbifold point (it maps an angle $\theta$ to $n\theta$, right?) More generally if a finite group $G$ acts on a Riemann surface $S$, we can ask a similar question about the quotient $S/G$. Should one think of $S/G$ as a smooth Riemann surface or an orbifold?
Yes, I'd agree with the conclusion of your observation. The traditional local re-parametrization to avoid discussion of orbifolds is made possible by the fact that the isotropy group of a point on a Riemann surface is a subgroup of a circle-group, so a discrete subgroup of that isotropy group is a finite cyclic group, thus admitting "unwinding" by the $z\rightarrow z^n$ maps the question mentions. In higher dimensions, orthogonal groups $SO(n)$ (with $n>2$) have non-abelian finite subgroups, so this dodge is not reliably available.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
A dubious proof using Weierstrass-M test for $\sum^n_{k=1}\frac{x^k}{k}$ I have been trying to prove the uniform convergence of the series $$f_{n}(x)=\sum^n_{k=1}\frac{x^k}{k}$$ Obviously, the series converges only for $x\in(-1,1)$. Consequently, I decided to split this into two intervals: $(-1,0]$ and $[0,1)$ and see if it converges on both of them using the Weierstrass M-test. For $x\in(-1,0]$, let's take $q\in(-1,x)$. We thus have: $$\left|\frac{x^k}{k}\right|\leq\left|x^k\right|\leq\left|q^k\right|$$ and since $\sum|q^n|$ is convergent, $f_n$ should be uniformly convergent on the given interval. Now let's take $x\in[0,1)$ and $q\in(x,1)$. Now, we have: $$\left|\frac{x^k}{k}\right|=\frac{x^k}{k}\leq\ x^k\leq{q^k}$$ and once again, we obtain the uniform convergence of $f_n$. However, not sure of my result, I decided to cross-check it by checking whether $f_n$ is Cauchy. For $x\in(-1,0]$, I believe it was a positive hit, since for $m>n$ we have: $$\left|f_{m}-f_{n}\right|=\left|f_{n+1}+f_{n+2}+...f_{m}\right|\leq\left|\frac{x^n}{n}\right|\leq\frac{1}{n}$$ which is what we needed. However, I haven't been able to come up with a method to show the same for $x\in[0,1)$. Now, I am not so sure whether $f_n$ is uniformly convergent on $[0,1)$. If it is, then how can we show it otherwise, and if it isn't, then how can we disprove it? Also, what's equally important - what did I do wrong in the Weierstrass-M test?
Your choice of $q$ depends on $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Generating integral triangles with two equal sides How can I generate all triangles which have integral sides and area, and exactly two of their three sides equal? For example, a triangle with sides ${5,5,6}$ satisfies these terms.
Heron's formula says the area of a triangle with sides $a, b, c$ is $$Area = \sqrt{s(s-a)(s-b)(s-c)}$$ where $s$ is the semiperimeter, $s = \frac{a+b+c}{2}$. Now, your assumption is two sides are equal, so $a, b, b$. The area is now $$Area = \sqrt{s(s-a)(s-b)^2} = (s-b)\sqrt{s(s-a)}$$ Now $s-b$ might not be an integer, if $s$ is not, but it will at worst be a fraction of the form $\frac{z}{2}$. I will deal more with this later. For now, we want $\sqrt{s(s-a)}$ to be an integer, so $s(s-a)$ must be a perfect square. Now, $s = \frac{a + 2b}{2} = \frac{a}{2} + b$ so this simplifies to wanting $$s(s-a) = (\frac{a}{2} + b)(\frac{a}{2} + b - a) = (b + \frac{a}{2})(b - \frac{a}{2}) = b^2 - \frac{a^2}{4}$$ a perfect square, so let's say it equals $c^2$. Thus, we want all integer solutions to $$c^2 + \left(\frac{a}{2}\right)^2 = b^2$$ If we happen to get a solution such that $(s-b)\sqrt{s(s-a)}$ is not an integer, then multiply all side lengths by 2 to get a similar triangle with $s$ and thus $s-b$ an integer. Other than that, the problem is reduced to finding all solutions to this equation, which is a well known problem with a well known solution.
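Following the answer's reduction, a triangle $(a,b,b)$ has integer area exactly when the altitude $h$ to the base satisfies $h^2+(a/2)^2=b^2$, i.e. $(h, a/2, b)$ is a Pythagorean triple. A brute-force sketch of a generator built on that observation:

```python
def isosceles_integer_triangles(limit):
    """Triangles (base a, equal sides b, b) with integer sides and area,
    found by requiring the altitude h = sqrt(b^2 - (a/2)^2) to be a
    positive integer, i.e. (h, a/2, b) a Pythagorean triple."""
    found = []
    for b in range(1, limit + 1):          # length of the two equal sides
        for half_a in range(1, b):         # half the base; half_a < b keeps h real
            h2 = b * b - half_a * half_a
            h = int(h2 ** 0.5)
            # nudge the float estimate, then test exactly
            while h * h > h2:
                h -= 1
            while (h + 1) * (h + 1) <= h2:
                h += 1
            if h > 0 and h * h == h2:
                found.append((2 * half_a, b, b, half_a * h))  # last entry = area
    return found

print(isosceles_integer_triangles(10))
# includes (6, 5, 5, area 12) and (8, 5, 5, area 12)
```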
{ "language": "en", "url": "https://math.stackexchange.com/questions/198034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Distance between real finite dimensional linear subspaces Is there a usual distance between linear subspaces $V, W$ of an $n$-dimensional normed vector space with inner product? In the case of hyperplanes one could use the angle (based on the inner product of the vector space). What can be used in the case of subspaces with lower dimension (not necessarily equal)? e.g. $\dim(V) = n-2$ and $\dim(W) = n-4$. Thanks
There is, if both subspaces have the same dimension. You can actually make the set of $k$-dimensional subspaces of a vector space $V$ into a metric space (a manifold, in fact) called the Grassmannian, denoted $\mathrm{Gr}(k,V)$. The distance between two subspaces $W$ and $W'$ is then $\|P_W-P_{W'}\|$ where $P_X$ denotes projection onto $X$ and $\|\cdot\|$ is the operator norm.
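The distance $\|P_W-P_{W'}\|$ can be computed concretely. A minimal pure-Python sketch for the simplest case — one-dimensional subspaces (lines through the origin) of $\mathbb R^2$, where the operator norm of the symmetric $2\times 2$ difference can be read off its eigenvalues; in higher dimensions one would form the projections from orthonormal bases and take the largest singular value of $P_W-P_{W'}$:

```python
import math

def proj_line(ux, uy):
    """Projection matrix onto span{(ux, uy)} in R^2 (u need not be unit)."""
    n2 = ux * ux + uy * uy
    return [[ux * ux / n2, ux * uy / n2],
            [ux * uy / n2, uy * uy / n2]]

def sym2_opnorm(m):
    """Operator norm of a symmetric 2x2 matrix = max |eigenvalue|."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    mid, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
    return max(abs(mid + rad), abs(mid - rad))

def grassmann_dist(u, v):
    P, Q = proj_line(*u), proj_line(*v)
    D = [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]
    return sym2_opnorm(D)

# Distance between the x-axis and the 45-degree line:
print(grassmann_dist((1, 0), (1, 1)))  # sin(45°) ≈ 0.7071
```

For lines in the plane this distance is exactly the sine of the angle between them, which matches the "angle" metric mentioned in the question.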
{ "language": "en", "url": "https://math.stackexchange.com/questions/198111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Basic set questions I would really appreciate it if you could explain the set notation here $$\{n ∈ {\bf N} \mid (n > 1) ∧ (∀x,y ∈ {\bf N})[(xy = n) ⇒ (x = 1 ∨ y = 1)]\}$$ 1) What does $∀x$ mean? 2) I understand that $n ∈ {\bf N} \mid (n > 1) ∧ (∀x,y ∈ {\bf N})$ means $n$ is part of set $\bf N$ such that $(n > 1) ∧ (∀x,y ∈ {\bf N})$. What do the $[\;\;]$ and $⇒$ mean? 3) Prove that if $A ⊆ B$ and $B ⊆ C$, then $A ⊆ C$ I could prove it by drawing a Venn diagram but is there a better way?
$\forall$ means "for all" or "for each". $\forall x, y \in \mathbb{N} ...$ means that no matter what natural numbers $x, y$ you choose, what follows has to be true. The square brackets are just for grouping things; they behave like parentheses for most purposes. The $\implies$ means "implies". $A \implies B$ means that whenever $A$ is true, $B$ must also be true too. Knowing all this, let's decipher the notation. All those symbols you posted define a set; let's call it $S$. Then we have $S = \{n \in \mathbb{N}\ |\ (n > 1) \wedge (\forall x, y \in \mathbb{N})[(xy = n) \implies (x = 1 \vee y = 1)] \}$. The first thing is $n \in \mathbb{N}$. This tells us that whatever the elements of $S$ are, they will be natural numbers. Then comes the vertical bar (a forward slash or a colon are also sometimes used), which tells us that if $n$ is going to be in $S$, then $n$ has to satisfy all the following conditions. We have two conditions joined by an "and" symbol. This means that both have to be true at the same time if $n$ is to be in $S$. $n > 1$ just means that we exclude $1$ from $S$. The other part is $(\forall x, y \in \mathbb{N})[(xy = n) \implies (x = 1 \vee y = 1)]$. As I said, the square brackets can be replaced by parentheses if you prefer: $(\forall x, y \in \mathbb{N})((xy = n) \implies (x = 1 \vee y = 1))$. The first thing here is $\forall x, y \in \mathbb{N}$. This means that no matter which pair of natural numbers $x,y$ we choose, what follows has to be true. If we manage to find just one pair $x,y$ that don't fulfill the conditions we're about to set, $n$ can't be in $S$. The condition is: $(xy = n) \implies (x = 1 \vee y = 1)$. So if the product of the $x,y$ we picked is $n$, this means that at least one of them is $1$. This must be true no matter what $x,y$ we use. Therefore, $S$ is the set of prime numbers. Note something: if the $x,y$ we picked don't satisfy $xy = n$, then we don't care what happens next. 
The definition of $A \implies B$ is that whenever $A$ is true then $B$ must also be true. If $A$ is false, well, we don't care what happens to $B$. As for your last question: we want to prove that if $A \subseteq B$ and $B \subseteq C$ then $A \subseteq C$. Let's work from the definition: $A \subseteq B$ means that for every $x \in A$ that we choose, it also happens that $x \in B$. Likewise, $B \subseteq C$ means that for all $x \in B$, $x \in C$. Let's take some $a \in A$. The fact that $A \subseteq B$ tells us that $a \in B$. But since $a \in B$, $B \subseteq C$ implies that $a \in C$. We have proved that no matter what element of $A$ we pick, it will also belong to $C$. Therefore, $A \subseteq C$.
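The set-builder condition can be tested mechanically, which makes the conclusion ("$S$ is the set of prime numbers") easy to check for small $n$. A sketch that translates the condition $n>1 \wedge \forall x,y\,(xy=n \Rightarrow x=1 \vee y=1)$ literally:

```python
def in_S(n):
    """Membership test for {n in N | n > 1 and
       for all x, y in N: x*y = n implies x = 1 or y = 1}."""
    if n <= 1:
        return False
    # checking all pairs with x*y = n and x, y in 1..n suffices,
    # since any factor of n is at most n
    return all(x == 1 or y == 1
               for x in range(1, n + 1)
               for y in range(1, n + 1)
               if x * y == n)

print([n for n in range(1, 20) if in_S(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```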
{ "language": "en", "url": "https://math.stackexchange.com/questions/198270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
How to calculate standard deviation with streaming inputs? Is there a formula that is capable of operating on streaming inputs and approximating standard deviation of the set of numbers?
You can use the formula $\sigma = \sqrt{\bar{x^2}-(\bar x)^2}=\sqrt{\frac {\sum x^2}N-\left(\frac {\sum x}N\right)^2}$ Each sum can be accumulated as the data comes in. The disadvantage compared to averaging the data first and subtracting the average from each item is you are more prone to overflow and loss of significance, but mathematically it is equivalent.
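A small streaming accumulator built on exactly this formula might look like the following sketch (it computes the population standard deviation; for very long streams or data far from zero, the numerically safer standard alternative is Welford's online algorithm, which tracks the mean and the sum of squared deviations instead):

```python
import math

class StreamingStd:
    """Accumulate sum(x) and sum(x^2) as values arrive; the (population)
    standard deviation is sqrt(mean of squares - square of the mean)."""
    def __init__(self):
        self.n = self.s = self.s2 = 0
    def push(self, x):
        self.n += 1
        self.s += x
        self.s2 += x * x
    def std(self):
        m = self.s / self.n
        # max(..., 0.0) guards against a tiny negative from rounding
        return math.sqrt(max(self.s2 / self.n - m * m, 0.0))

acc = StreamingStd()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    acc.push(x)
print(acc.std())  # 2.0
```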
{ "language": "en", "url": "https://math.stackexchange.com/questions/198336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 0 }
Example where $f\circ g$ is bijective, but neither $f$ nor $g$ is bijective Can anyone come up with an explicit example of two functions $f$ and $g$ such that: $f\circ g$ is bijective, but neither $f$ nor $g$ is bijective? I tried the following: $$f:\mathbb{R}\rightarrow \mathbb{R^{+}} $$ $$f(x)=x^{2}$$ and $$g:\mathbb{R^{+}}\rightarrow \mathbb{R}$$ $$g(x)=\sqrt{x}$$ $f$ is not injective, and $g$ is not surjective, but $f\circ g$ is bijective Any other examples?
If $X$ is any set at all with $\left| X \right| > 1$ then the diagonal map $$\begin{align}\Delta : X &\to X \times X \\ x &\mapsto (x,x)\end{align}$$ and the projection map $$\begin{align}\pi : X \times X &\to X\\ (x,y) &\mapsto x \end{align}$$ satisfy $\pi \circ \Delta = \text{id}_X$, which is bijective, and yet neither $\Delta$ nor $\pi$ is bijective. In a similar vein, if $f : X \to Y$ is any function and $\left|Y\right| > 1$ then we can take the graph $\Gamma_f : X \to X \times Y$ given by $\Gamma_f(x) = (x,f(x))$ and the same projection. The above example is the special case where $f(x)=x$.
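The diagonal/projection example can be checked on the smallest interesting set, $X=\{0,1\}$ — a quick sketch verifying that $\pi\circ\Delta$ is the identity while $\Delta$ misses points and $\pi$ collapses them:

```python
X = {0, 1}

def diag(x):          # Δ : X -> X × X
    return (x, x)

def proj(p):          # π : X × X -> X
    return p[0]

# π ∘ Δ is the identity on X, hence bijective ...
assert all(proj(diag(x)) == x for x in X)

# ... yet Δ is not surjective (it misses (0, 1)) and π is not injective:
image = {diag(x) for x in X}
assert (0, 1) not in image
assert proj((0, 0)) == proj((0, 1))
print("checked")
```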
{ "language": "en", "url": "https://math.stackexchange.com/questions/198379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 1 }
Equivalence statement of Ratio Test 1. Series $\sum a_n$ diverges if (i) there exists $N\in \mathbb{N}$ such that $n≧N ⇒ |a_{n+1}/a_n|≧1$. 2. Series $\sum a_n$ diverges if (ii) $\limsup |a_{n+1}/a_n|>1$. Here, both 1 & 2 are true. It is easy to see that (ii) implies (i), but is the converse true? Is 2 more general than 1? Plus, it's usual that the root test is harder to apply than the ratio test, but I think the root test is easier when a sequence $\{a_n\}$ has infinitely many zero terms, since $|a_{n+1}/a_n|$ cannot be defined. Am I right, or is there a "trick" to avoid this? EDIT: I just noticed that this post is wrong. Statement 2 is indeed false.
Take $a_n=1$ to see that i) does not imply ii): here $|a_{n+1}/a_n|=1$ for every $n$, so i) holds, but $\limsup |a_{n+1}/a_n| = 1$, which is not $>1$. Note also why i) forces divergence: i) means that the sequence $\{|a_n|\}$ is non-decreasing from $N$ on, whereas the terms would have to converge to $0$ in order to have convergence of the series $\sum_{n=1}^{+\infty}a_n$; a non-decreasing sequence with $|a_N|>0$ cannot tend to $0$, so the series diverges. (We have to assume $a_n\neq 0$ in order for the ratios in i) to make sense.) In the other direction, ii) $\Rightarrow$ i) is not true either: take $a_n=2^{(-1)^n}$ for example — the ratios alternate between $4$ and $1/4$, so $\limsup |a_{n+1}/a_n| = 4 > 1$, yet $|a_{n+1}/a_n|\geq 1$ fails for every second $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Row reduction over any field? EDIT: as stated in the first answer, my initial question was confused. Let me restate the question (I have to admit that it is now quite a different one): Let's say we have a matrix $A$ with entries from $\mathbb{C}$. Such matrices form a vector space over $\mathbb{R}$ and also form another vector space over $\mathbb{C}$. If we want to row reduce $A$, we see that often we cannot row reduce it over $\mathbb{R}$, that is, using elementary operations involving only scalars which are real numbers; however, we can row reduce it using operations that involve complex numbers. In conclusion, it seems to me that the row reduction of a matrix with elements from a field (all those matrices form a vector space) may or may not be possible, depending on the underlying field of that vector space. So this helpful tool (row reduction) is not always available for us to use when we get a little farther from, let's say, the most "elementary" vector spaces. Is my observation correct? Question as initially stated (please ignore): Consider the vector space of complex matrices over the field of (i) complex numbers and (ii) real numbers. It is straightforward to find examples where a complex matrix can be row-reduced (say, to the identity matrix) in case (i) but cannot be row-reduced in case (ii). What gives? So, we can assume that a matrix may be invertible over $\mathbb{C}$ but not over $\mathbb{R}$? Thanks in advance.
Your question is confused. What would it mean for a non-real complex matrix to be invertible over $\mathbb R$? That it has an inverse in the set of real matrices? This seems unlikely, as the inverse of a real matrix is always real, if it exists, and being inverse matrices is a symmetric relation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability On Numbers Each of 20 identical cards is numbered with exactly one of the numbers 1,2,3,.....20. One card is drawn randomly and it is known that the number on the card is less than 13. What is the probability that the number on the card is an even number?
If the card's value is less than 13, then given the other information its value must be in the range 1..12. This range has an even number of elements (12), and thus exactly half of them (6) will be even, so the probability of the card having an even value is $\dfrac{6}{12} = \dfrac{1}{2} =.5$. The facts that (1) the set actually contains more than 12 cards, and that (2) the overall probability of drawing any card in the full set with an even number is also .5, are immaterial; you are told the card's value is in a subset of the full set (cards < 13), thus you are determining the probability of it being in a subset of the subset (even cards < 13), as if the subset were the full set of cards. If we did not know the card was < 13, and were asked what the probability was of the card being an even number < 13, that's a very different question.
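The "subset of the subset" reasoning is exactly how one computes a conditional probability by restricting the sample space — a short sketch with exact fractions:

```python
from fractions import Fraction

cards = range(1, 21)                       # values 1..20
condition = [c for c in cards if c < 13]   # we are told the card is < 13
favourable = [c for c in condition if c % 2 == 0]

p = Fraction(len(favourable), len(condition))
print(p)  # 1/2
```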
{ "language": "en", "url": "https://math.stackexchange.com/questions/198612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exactly one nontrivial proper subgroup Question: Determine all the finite groups that have exactly one nontrivial proper subgroup. My attempt is that the order of the group $G$ has to be a positive nonprime integer $n$ which has only one divisor, since any divisor $a$ of $n$ will form a proper subgroup of order $a$. Since $4$ is the only nonprime number that has only one divisor, which is $2$, all groups of order $4$ have only one nontrivial proper subgroup (Z4 and D4).
You began reasonably, but then you went off the track. First, $D_4$ won’t work: it’s the Klein $4$-group, $(\Bbb Z/2\Bbb Z)\times(\Bbb Z/2\Bbb Z)$, which has three non-trivial proper subgroups, one generated by each of the non-identity elements. Secondly, the fact that some integer $n$ divides the order of $G$ does not ensure that $G$ has a subgroup of order $d$, as noted in the comments. You do, however, have the first Sylow theorem available. (Note: This replaces the nonsense that I wrote originally.) Finally, $4$ isn’t the only non-prime with only one non-trivial divisor: for each prime $p$, $p^2$ is such a number. You want at least the groups $\Bbb Z/p^2\Bbb Z$, or in your notation $\Bbb Z_{p^2}$, for all primes $p$. Can you identify exactly what the non-trivial proper subgroup is in each of these groups?
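For the cyclic groups $\Bbb Z_n$ the count is easy to automate, since subgroups of $\Bbb Z_n$ correspond exactly to divisors of $n$: "exactly one nontrivial proper subgroup" means exactly three divisors. A brute-force sketch (it only checks the cyclic case; ruling out non-cyclic groups needs the group-theoretic argument above):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Subgroups of the cyclic group Z_n correspond to divisors of n, so
# "exactly one nontrivial proper subgroup" means exactly 3 divisors.
hits = [n for n in range(2, 60) if len(divisors(n)) == 3]
print(hits)  # [4, 9, 25, 49] -- squares of primes
```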
{ "language": "en", "url": "https://math.stackexchange.com/questions/198744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Modular Arithmetic over a Matrix What are the rules for modular arithmetic when multiplying two matrices? I want to calulate $C = AB \mod{n}.$ Aside from the obvious way of performing the modulo after the multiplication, when and where can i safely perform the modulo during the multiplication algorithm? [1] Normally: $C_{ij}=\displaystyle\sum\limits_{k=0}^m A_{ik}B_{kj}$ Can I take each of these summands $A_{ik}B_{kj} \mod{n}$, as follows? [2] $C_{ij}=\displaystyle\sum\limits_{k=0}^m [ A_{ik}B_{kj}\pmod{n} ]$ Here is an example: $A = \left(\begin{array}{cc} 9 & 2 \\ 10 & 10 \\ \end{array}\right)$ $B = \left(\begin{array}{cc} 7 & 3 \\ 1 & 6 \\ \end{array}\right)$ $C = \left(\begin{array}{cc} 65 & 39 \\ 80 & 90 \\ \end{array}\right)$ $C \equiv \left(\begin{array}{cc} 2 & 4 \\ 3 & 6 \\ \end{array}\right) \mod{7}$ edit: using [2] $C \mod 7= \left(\begin{array}{cc} 2 & 11 \\ 3 & 6 \\ \end{array}\right)$ This doesn't result in the same matrix.
Once you pass into modular arithmetic, you're stuck there: $C \bmod{7}$ has values in the integers $\bmod{7}$, not in the integers themselves. But modulo 7, $\left(\begin{matrix}2&11\\3&6\end{matrix}\right)=\left(\begin{matrix}2&4\\3&6\end{matrix}\right)$ simply because $11\equiv 4 \pmod{7}$. And in general, yes, you can apply $[2]$; you can even get $C\bmod{7}\equiv(A\bmod{7})(B\bmod{7}) \pmod{7}$. All this follows from the facts that $$ab \equiv (a\bmod{n})(b\bmod{n}) \pmod {n}$$ and $$a+b \equiv (a\bmod{n})+(b\bmod{n}) \pmod{n}$$
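A quick check of formula $[2]$ in Python — note the one extra reduction mod $n$ after the sum, which is what turns the $11$ in your computed matrix into the canonical representative $4$:

```python
def matmul_mod(A, B, n):
    """Multiply reducing every partial product mod n (formula [2]),
    with one final reduction of each entry to get canonical residues."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum((A[i][k] * B[k][j]) % n for k in range(inner)) % n
             for j in range(cols)] for i in range(rows)]

A = [[9, 2], [10, 10]]
B = [[7, 3], [1, 6]]

# reduce only after the full multiplication, for comparison:
plain = [[sum(A[i][k] * B[k][j] for k in range(2)) % 7 for j in range(2)]
         for i in range(2)]
print(matmul_mod(A, B, 7))  # [[2, 4], [3, 6]]
assert matmul_mod(A, B, 7) == plain
```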
{ "language": "en", "url": "https://math.stackexchange.com/questions/198919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is there a minimal diverging series? Is there a function $f:\mathbb{N} \to \mathbb{R}^+$ such that its series $\sum_{n=0}^\infty f(n)$ diverges but the series of every function in $o(f)$ converges?
Sadly, no such function can exist. For suppose $f$ is such a function. Define the partial sums: $$F(n) := \sum_{i=0}^n f(i) $$ Then you can find a function that diverges more slowly, say: $$ G(n) := \sqrt{F(n)} $$ This new $G$ is increasing, so it is the sequence of partial sums for $g$ given as: $$ g(n) = G(n) - G(n-1) $$ It remains to check that $g = o(f)$: $$ \frac{g(n)}{f(n)} = \frac{G(n) - G(n-1)}{G(n)^2 - G(n-1)^2} = \frac{1}{G(n) + G(n-1)} < \frac{1}{G(n)} = o\left(1\right)$$ where the $o(1)$ approximation follows from divergence of $G$.
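A numerical illustration of this construction with the harmonic series ($f(n)=1/n$): the partial sums of $g$ are $\sqrt{H_n}$, which still grows without bound, while $g(n)/f(n)=1/(G(n)+G(n-1))\to 0$:

```python
import math

N = 10**5
F = G = prev_G = 0.0
ratios = []
for n in range(1, N + 1):
    f = 1.0 / n            # harmonic series: sum f(n) diverges
    F += f
    prev_G, G = G, math.sqrt(F)
    g = G - prev_G         # partial sums of g are sqrt(F): still divergent
    ratios.append(g / f)

print(G)            # sqrt(H_N): unbounded as N grows
print(ratios[-1])   # g(N)/f(N) -> 0, so g = o(f)
```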
{ "language": "en", "url": "https://math.stackexchange.com/questions/198999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Cantor ternary set problem Let $C$ be the Cantor ternary set. If $x,y \in C$, then obviously $x-y \in [-1,1]$. Conversely, I want to prove that if $w \in [-1,1]$, then there exist $x,y \in C$ such that $x-y=w$. How can I prove this?
Let $w$ in $[-1,1]$, then $w=-1+2u$ with $u$ in $[0,1]$. Like every number in $[0,1]$, $u$ can be written in base $3$, that is, as a series of negative powers of $3$, namely, $$ u=\sum\limits_{k\geqslant1}\frac{u_k}{3^k},\qquad u_k\in\{0,1,2\}. $$ Let $w_k=2u_k-2$, hence $w_k$ is in $\{-2,0,2\}$. Then $$ w=-1+\sum\limits_{k\geqslant1}\frac{w_k+2}{3^k}=\sum\limits_{k\geqslant1}\frac{w_k}{3^k}. $$ For every $k\geqslant1$, define * *$x_k=0$ and $y_k=2$ if $w_k=-2$, *$x_k=y_k=0$ if $w_k=0$, *and $x_k=2$ and $y_k=0$ if $w_k=2$. Then $w_k=x_k-y_k$ for every $k$ hence $w=x-y$ with $$ x=\sum\limits_{k\geqslant1}\frac{x_k}{3^k},\qquad y=\sum\limits_{k\geqslant1}\frac{y_k}{3^k}. $$ Since $x_k$ and $y_k$ are in $\{0,2\}$ for every $k$, both $x$ and $y$ are in $C$.
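The digit-splitting construction above is fully algorithmic, so it can be run to any finite depth — a sketch using exact rational arithmetic (restricted to $w\in[-1,1)$; for $w=1$ one would use the expansion $u=0.222\ldots_3$):

```python
from fractions import Fraction

def cantor_pair(w, depth=30):
    """Follow the construction: write u = (w+1)/2 in base 3 and split each
    digit into Cantor digits x_k, y_k in {0, 2} with x_k - y_k = w_k."""
    u = (Fraction(w) + 1) / 2
    x = y = Fraction(0)
    for k in range(1, depth + 1):
        u *= 3
        d = int(u)          # base-3 digit u_k in {0, 1, 2}
        u -= d
        wk = 2 * d - 2      # in {-2, 0, 2}
        xk, yk = (0, 2) if wk == -2 else ((2, 0) if wk == 2 else (0, 0))
        x += Fraction(xk, 3**k)
        y += Fraction(yk, 3**k)
    return x, y             # x - y approximates w to within 3**(-depth)

x, y = cantor_pair(Fraction(1, 2))
print(float(x), float(y), float(x - y))  # x - y ≈ 0.5
```

Both outputs have only digits $0$ and $2$ in base $3$ by construction, so they are (truncations of) points of $C$.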
{ "language": "en", "url": "https://math.stackexchange.com/questions/199062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Breakdown of solution to inviscid Burgers equation Let $u = f(x-ut)$ where $f$ is differentiable. Show that $u$ (almost always) satisfies $u_t + uu_x = 0$. Under what circumstances is it not necessarily satisfied? This is a question in a tutorial sheet I have been given and I am slightly stuck with the second part. To show that $u$ satisfies the equation I have differentiated it to get: $u_t = -f'(x-ut)u$ $u_x = f'(x-ut)$ Then I have substituted these results into the original equation. The part I am unsure of is where it is not satisfied. If someone could push me in the right direction it would be much appreciated.
Following @martini's comment, and rearranging, you should find $$ (1+tf')u_t=-uf',$$ omitting the argument of $f'$ for convenience. Likewise, $$ (1+tf')u_x=f'.$$ These lead to $$(1+tf')(u_t+uu_x)=0,$$ from which the equation $u_t+uu_x=0$ follows, provided the coefficient $1+tf'(x-ut)$ is non-zero. This is the source of the 'almost always' aspect of the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show $8\mid n^2-1$ if $n$ is an odd positive integer. Show that $n^2-1$ is divisible by $8$, if $n$ is an odd positive integer. Please help me to prove whether this statement is true or false.
Viewing the integer numbers modulo 8, write $a \equiv b$ for $8|(a-b)$. (This structure, $\mathbb Z_8$, is obtained by declaring $8\equiv 0$, and it is compatible with the operations $+$ and $\cdot$.) We have the following set of odd residues: $\{ 1,3,5,7\}$. Or, rewriting by $5\equiv-3$ and $7\equiv -1$, this is only $$\{ 1,3,-3,-1\}$$ The squares of these: $$(\pm 1)^2 = 1\ \text{ and }\ (\pm 3)^2=9\equiv 1$$ Hence $n^2\equiv 1 \pmod 8$, i.e. $8\mid n^2-1$, for every odd $n$.
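Since there are only four odd residues to square, the claim is also trivially machine-checkable — a one-line brute-force confirmation:

```python
# Every odd n is congruent to one of 1, 3, 5, 7 mod 8, and each of these
# squares to 1 mod 8.  Brute-force check over many odd n:
assert all((n * n - 1) % 8 == 0 for n in range(1, 10001, 2))
assert {(r * r) % 8 for r in (1, 3, 5, 7)} == {1}
print("n^2 - 1 is divisible by 8 for every odd n up to 9999")
```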
{ "language": "en", "url": "https://math.stackexchange.com/questions/199185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 10, "answer_id": 0 }
How to find the least path consisting of the segments AP, PQ and QB Let $A = (0, 1)$ and $B = (2, 0)$ in the plane. Let $O$ be the origin and $C = (2, 1)$. Consider $P$ moving on the segment $OB$ and $Q$ moving on the segment $AC$. Find the coordinates of $P$ and $Q$ for which the length of the path consisting of the segments $AP$, $PQ$ and $QB$ is least.
If you draw it, $ACBO$ is a rectangle, and the path $APQB$ is a zig-zag. Reflect the $PQB$ part across the line $OB$ ($P$ and $B$ stay, $Q$ goes to $Q'$, say), and let $AC$ go to $A'C'$ under this reflection, a horizontal segment on the line $y=-1$. Reflect $Q'B$ across this new line $A'C'$, taking $B$ to $B'$. Drawn? Then $AP+PQ+QB = AP+PQ'+Q'B'$, and this is clearly minimal iff the unfolded path from $A$ to $B'$ is straight. That is, $P$ and $Q$ sit one third and two thirds of the way along their original segments.
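A numerical confirmation of the unfolding argument (a sketch using a coarse grid search): the minimum lands at $P=(2/3,0)$, $Q=(4/3,1)$, with total length $\sqrt{13}$, the straight-line distance from $A$ to the doubly reflected endpoint $(2,-2)$.

```python
import math

A, B = (0.0, 1.0), (2.0, 0.0)

def path_len(p, q):
    """Length of A -> P=(p,0) -> Q=(q,1) -> B."""
    return (math.dist(A, (p, 0.0)) + math.dist((p, 0.0), (q, 1.0))
            + math.dist((q, 1.0), B))

best = min((path_len(p / 100, q / 100), p / 100, q / 100)
           for p in range(0, 201) for q in range(0, 201))
print(best)  # length ≈ sqrt(13) ≈ 3.606, p ≈ 2/3, q ≈ 4/3
```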
{ "language": "en", "url": "https://math.stackexchange.com/questions/199230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Combinatorially prove that $\sum_{i=0}^n {n \choose i} 2^i = 3^n $ So I'm not sure at all how to prove things using a combinatorial proof. Where to do I start? What do i need to think about etc. For example how would i prove $$\sum_{i=0}^n {n \choose i} 2^i = 3^n $$
The general strategy is to count the same thing in two different ways. But this is too general a recipe to be truly useful. The devil is in the details: What shall we count in two different ways? In our case, $3^n$ is the number of $n$-letter words in the alphabet $\{a,b,c\}$. These words can be also counted as follows. Choose the $i$ places that will get an $a$ or a $b$. There are $\dbinom{n}{i}$ ways to do this. For each such choice there are $2^i$ ways to fill the chosen places with $a$'s and/or $b$'s. So $\dbinom{n}{i}2^i$ counts the words that have a total of $i$ $a$'s and/or $b$'s, or equivalently the number of words that have $n-i\,$ $c$'s. Now add up, $i=0$ to $n$. This counts all the words.
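The two counts really do agree, and for small $n$ one can enumerate the words directly — a quick sketch comparing both sides of the identity:

```python
from math import comb
from itertools import product

n = 6
# Left side: choose i positions for the a's and b's, fill them in 2^i ways.
lhs = sum(comb(n, i) * 2**i for i in range(n + 1))
# Right side: count the n-letter words over {a, b, c} directly.
rhs = sum(1 for _ in product("abc", repeat=n))
print(lhs, rhs)  # 729 729
```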
{ "language": "en", "url": "https://math.stackexchange.com/questions/199289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the inverse function of $\ x^2+x$? I think the title says it all; I'm looking for the inverse function of $\ x^2+x$, and I have no idea how to do it. I thought maybe you could use the quadratic equation or something. I would be interesting to know.
We can write $x^2+x=(x+\frac{1}{2})^2 - \frac{1}{4}$ by completing the square. If you restricted yourself to the domain $x \ge -\frac{1}{2}$ then this function would be invertible, and its inverse would be the function $$x \mapsto \sqrt{x+\frac{1}{4}} - \frac{1}{2}$$ You could do a similar thing on the domain $x \le -\frac{1}{2}$; but the function has no well-defined inverse on $\mathbb{R}$.
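A small round-trip check of this branch-inverse (a sketch; the formula is valid for inputs $y\geq -1/4$, matching the range of $f$ on $x\geq -1/2$):

```python
import math

def f(x):
    return x * x + x

def f_inv(y):
    """Inverse of f on the branch x >= -1/2; defined for y >= -1/4."""
    return math.sqrt(y + 0.25) - 0.5

# f_inv undoes f on the restricted domain:
for x in [-0.5, 0.0, 1.0, 2.5]:
    assert abs(f_inv(f(x)) - x) < 1e-12

print(f_inv(2))  # 1.0, since f(1) = 2
```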
{ "language": "en", "url": "https://math.stackexchange.com/questions/199377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 1 }
Find the necessary and sufficient conditions on $a$, $b$ so that $ax^2 + b = 0$ has a real solution. This question is really confusing me, and I'd love some help but not the answer. :D Is it asking: What values of $a$ and $b$ result in a real solution for the equation $ax^2 + b = 0$? $a = b = 0$ would obviously work, but how does $x$ come into play? There'd be infinitely many solutions if $x$ can vary as well ($a = 1$, $x = 1$, $b = -1$, etc.). I understand how necessary and sufficient conditions work in general, but how would it apply here? I know it takes the form of "If $p$ then $q$" but I don't see how I could apply that to the question. Is "If $ax^2 + b = 0$ has a real solution, then $a$ and $b =$ ..." it?
If $a\neq 0$ and $ax^2 + b=0$, then $x^2=\dfrac{-b}{a}$. The solutions are therefore $x=\pm\sqrt{-b/a}$. For the equation to have a real solution, $-b/a$ must be non-negative. A fraction is non-negative if the numerator and denominator are both positive, or both negative, or the numerator is $0$ and the denominator is not $0$. In the degenerate case $a=0$ the equation reads $b=0$, which has a real solution (indeed every real $x$ works) precisely when $b=0$.
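One possible way to package the whole case analysis, including the degenerate case $a=0$ where the division by $a$ is not available — a sketch:

```python
def has_real_solution(a, b):
    """Does a*x**2 + b == 0 have a real solution x?"""
    if a == 0:
        return b == 0          # equation degenerates to b == 0
    return -b / a >= 0         # need x**2 = -b/a with a nonnegative RHS

assert has_real_solution(1, -4)      # x = ±2
assert not has_real_solution(1, 4)   # x**2 = -4 is impossible over R
assert has_real_solution(-2, 6)      # x**2 = 3
assert has_real_solution(0, 0)       # every x works
assert not has_real_solution(0, 5)
print("ok")
```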
{ "language": "en", "url": "https://math.stackexchange.com/questions/199430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Pursuit Curve. Dog Chases Rabbit. Calculus 4. (a) In Example 1.21, assume that $a$ is less than $b$ (so that $k$ is less than $1$) and find $y$ as a function of $x$. How far does the rabbit run before the dog catches him? (b) Assume now that $a=b$, and find $y$ as a function of $x$. How close does the dog come to the rabbit? Example 1.21 A rabbit begins at the origin and runs up the $y-axis$ with speed $a$ feet per second. At the same time, a dog runs at speed $b$ from the point $(c,0)$ in pursuit of the rabbit. What is the path of the dog? Solution: At time $t$, measured from the instant both the rabbit and the dog start, the rabbit will be at the point $R=(0,at)$ and the dog at $D=(x,y)$. We wish to solve for $y$ as a function of $x$. $$\frac{dy}{dx}=\frac{y-at}{x}$$ $$xy'-y=-at$$ $$xy''=-a\frac{dt}{dx}$$ Since the $s$ is a arc length along the path of the dog, it follows that $\frac{ds}{dt}=b$. Hence, $$\frac{dt}{dx}=\frac{dt}{ds}\frac{ds}{dx}=\frac{-1}{b}\sqrt{1+(y')^2}$$ $$xy''=\frac{a}{b}\sqrt{1+(y')^2}$$ For convenience, we set $k=\frac{a}{b}$, $y'=p$, and $y''=\frac{dp}{dx}$ $$\frac{dp}{\sqrt{1+p^2}}=k\frac{dx}{x}$$ $$\ln\left({p+\sqrt{1+p^2}}\right)=\ln\left(\frac{x}{c}\right)^k$$ Now, solve for $p$: $$\frac{dy}{dx}=p=\frac{1}{2}\Bigg(\left(\frac{x}{c}\right)^k-\left(\frac{c}{x}\right)^k\Bigg)$$ In order to continue the analysis, we need to know something about the relative sizes of $a$ and $b$. Suppose, for example, that $a \lt$ $b$ (so $k\lt$ $1$), meaning that the dog will certainly catch the rabbit. Then we can integrate the last equation to obtain: $$y(x)=\frac{1}{2}\Bigg\{\frac{c}{k+1}\left(\frac{x}{c}\right)^{k+1}-\frac{c}{1-k}\left(\frac{c}{x}\right)^{k-1}\Bigg\}+D$$ Again, this is all I have to go on. I need to answer questions (a) and (b) stated at the top.
The book from which this question comes contains a typo: "1.18" should read "1.21". Example 1.18 is irrelevant for this problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
uncountable subset of $C[0,1]$ has uniformly convergent subsequence If $S$ is an uncountable subset of $C[0,1]$, then there is a uniformly convergent sequence $\{f_n\}$ of distinct functions of $S$. I know how to do this for $C^1[0,1]$ since $S \subset \cup_{m,n \ge 0} \{f: \sup_{[0,1]} |f(x)| \le m, \sup_{[0,1]} |f'(x)| \le n \}$ and then $\{f: \sup_{[0,1]} |f(x)| \le m, \sup_{[0,1]} |f'(x)| \le n \}$ must be uncountable for some $m,n$ (since otherwise $S$ would be countable) and so contains a uniformly convergent sequence by Arzelà–Ascoli.
Suppose $S$ is such that no $x \in C[0,1]$ is the uniform - that is, normwise - limit of a sequence $(x_n)$ of distinct elements of $S$. Let $x \in C[0,1]$; then there is an $\epsilon_x > 0$ such that $U_{\epsilon_x}(x)= \{y \in C[0,1] \mid \|x-y\| < \epsilon_x\}$ has finite intersection with $S$ (otherwise, for each $n \in\mathbb N$ pick $x_n \in U_{1/n}(x) \cap S$ pairwise distinct, giving $x_n \to x$). We have $C[0,1] = \bigcup_x U_{\epsilon_x}(x)$. As $C[0,1]$ is a separable metric space, it is Lindelöf, therefore there is a countable $A \subseteq C[0,1]$ with $C[0,1] = \bigcup_{x\in A} U_{\epsilon_x}(x)$. Now $S = \bigcup_{x \in A} (U_{\epsilon_x}(x) \cap S)$ is a countable union of finite sets, hence countable. As your $S$ is uncountable, there is a normwise convergent sequence $(x_n)$ of distinct elements of $S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Question about norms and coarseness of topology I've been thinking about norms and asked myself the following question: If I have two norms $\|\cdot\|_A$ and $\|\cdot\|_B$ with $\|\cdot\|_A \leq \|\cdot\|_B$, which topology is coarser, that is, has less open sets? I tried to answer it as follows, is this correct? It is enough to think about the ball of radius one around zero. Since $\|\cdot\|_A \leq \|\cdot\|_B$, there are more poins in $B_{\|\cdot\|_A}(0,1)$ than in $B_{\|\cdot\|_B}(0,1)$. In particular, there is a point that is in $B_{\|\cdot\|_A}(0,1)$ but not in $B_{\|\cdot\|_B}(0,1)$. Around this point we cannot make an epsilon $A$-ball that is contained in the $B$-unit-ball. Hence the $B$-unit ball is not open in the $A$-topology, hence the $B$-topology is coarser than the $A$-topology. Thanks for help!
Let $\|\cdot\|_A \leq \|\cdot\|_B$. Consider the open unit ball $B_A (0,1)$. Let $x \in B_A (0,1)$. Since $B_A (0,1)$ is open with respect to $\|\cdot\|_A$, there exists an $\varepsilon$ such that $B_A(x,\varepsilon) \subset B_A(0,1)$. Since $\|\cdot\|_A \leq \|\cdot\|_B$, $B_B(x,\varepsilon) \subset B_A(x,\varepsilon)$, hence $B_A(0,1)$ is also open in the $B$-topology. Hence from $\|\cdot\|_A \leq \|\cdot\|_B$ we can conclude that $T_A \subset T_B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What are imaginary numbers? At school, I really struggled to understand the concept of imaginary numbers. My teacher told us that an imaginary number is a number that has something to do with the square root of $-1$. When I tried to calculate the square root of $-1$ on my calculator, it gave me an error. To this day I still do not understand imaginary numbers. It makes no sense to me at all. Is there someone here who totally gets it and can explain it? Why is the concept even useful?
I just think of imaginary numbers as a definition. In the "real world" you cannot take the square root of $-1$ (which is what is happening with your calculator). However, we just define some "number", call it $i$, such that $i^2=-1$, add it to our number system and see what happens. So when you study imaginary numbers, you are just "seeing what happens". One can then write every complex number as $a+ib$ where $a,b\in\mathbb{R}$ ($a$ and $b$ are real numbers) and $i^2=-1$. In his comment, ivan is taking this pair $(a,b)$ and pointing out that this pair defines a point on a plane (so, like, a piece of paper, as when you draw a graph). This is the way that people often view imaginary numbers - as points on the plane (and the plane is the Complex Plane, or an Argand diagram).
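Incidentally, the calculator error has a direct analogue in programming: a real-valued square root rejects $-1$, but a complex-aware one does not. Python's built-in complex type implements exactly the pairs $(a,b) \leftrightarrow a+ib$ described above:

```python
import cmath

# Python's complex type implements a + ib with i**2 == -1:
i = complex(0, 1)           # also written 1j
assert i * i == -1

# Unlike math.sqrt, cmath.sqrt happily takes the root of a negative number:
print(cmath.sqrt(-1))       # 1j

# Arithmetic with pairs (a, b) <-> a + ib, viewed as points in the plane:
z = 3 + 4j
print(z.real, z.imag, abs(z))  # 3.0 4.0 5.0 (abs = distance from 0)
```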
{ "language": "en", "url": "https://math.stackexchange.com/questions/199676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "515", "answer_count": 22, "answer_id": 6 }
Cardinality of $R[x]/\langle f\rangle$ via canonical remainder reps. Suppose $R$ is a field and $f$ is a polynomial of degree $d$ in $R[x]$. How do you show that each coset in $R[x]/\langle f\rangle$ may be represented by a unique polynomial of degree less than $d$? Secondly, if $R$ is finite with $n$ elements, how do you show that $R[x]/\langle f\rangle$ has exactly $n^d$ cosets?
We will use the following fact (it should have been proved in the above-mentioned class): Let $K$ be any field. Then $K[x]$ is an Euclidean ring, i.e., for every two polynomials $f,g \in K[x]$ such that $g \neq 0$, there exist (unique!) polynomials $q, r$ such that $f = g \cdot q + r$ and $\deg r < \deg g$, with $\deg 0 = -\infty$. Now, let $\overline{g}$ be any coset of $R[x]/\langle f \rangle$, and $g$ its representative. By division with remainder, we have $g = f \cdot q + r$ with $\deg r < \deg f$ and ... (fill in the gaps here). For the second part, count the polynomials with degree $< d$.
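Division with remainder — and the resulting count of $n^d$ cosets — can be checked computationally for a small finite field. A sketch over $GF(2)$ with $f = 1 + x + x^2$ (coefficient lists are lowest-degree first; the degree-$d$ leading coefficient of the divisor is invertible since $R$ is a field):

```python
from itertools import product

def polydivmod_rem(f, g, p):
    """Remainder of f divided by g in GF(p)[x].
    Coefficients are lists, lowest degree first; result has degree < deg g,
    padded with zeros to a tuple of length deg g."""
    f = [c % p for c in f]
    while f and f[-1] == 0:          # trim leading zeros
        f.pop()
    inv = pow(g[-1], -1, p)          # leading coefficient of g is invertible
    while len(f) >= len(g):
        c = (f[-1] * inv) % p
        shift = len(f) - len(g)
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gi) % p
        while f and f[-1] == 0:
            f.pop()
    return tuple(f + [0] * (len(g) - 1 - len(f)))

p, f = 2, [1, 1, 1]                  # f = 1 + x + x^2 over GF(2), d = 2
# every polynomial of degree <= 5 reduces to a remainder of degree < 2:
reps = {polydivmod_rem(list(c), f, p) for c in product(range(p), repeat=6)}
print(len(reps))  # 4 = p**d distinct cosets
```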
{ "language": "en", "url": "https://math.stackexchange.com/questions/199694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove or Disprove: ∃x ∈ N such that ∀y ∈ N, 2x ≤y + 1 The first thing that I tried to do is: let y be an arbitrary natural number. I then tried to choose a value for x, but I cannot think of a value in which 2x ≤ y + 1.. So I then tried to prove the negation: ∀x ∈ N, ∃y ∈ N, 2x > y + 1 So I then let x be an arbitrary natural number and tried to set a value for y .... but the smallest value that y can be is x, correct? Because anything less than x and y would not be a natural number all of the time. So I do not know how to solve this, because I can't think of anything that works.
Translate the statement into simple English: Is there a natural number (call it $x$), such that for every other natural number $y$, the number $y + 1$ is at least twice as large as $x$? And try rephrasing/simplifying that statement: Is there a natural number $x$ which is less than or equal to $(y+1)/2$, for every natural number $y$? (The translation into simple English is meant to help you with the quantifiers, here — for the algebra of dividing by two, using math rather than English will probably be less cumbersome.) Because we're talking about a number which is smaller than or equal to some expression $f(y)$, where $f(y)$ grows with $y$, your best approach is to consider the smallest allowed $y$, and see if any values of $x$ work.
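A finite brute-force scan (my own illustration; it assumes the convention $\mathbb N=\{1,2,3,\dots\}$, and of course proves nothing by itself) confirms the "smallest allowed $y$" heuristic: the only surviving candidate is $x=1$, and the failure for larger $x$ already occurs at $y=1$.

```python
def works(x, ys):
    # does 2x <= y + 1 hold for every sampled y?
    return all(2 * x <= y + 1 for y in ys)

ys = range(1, 1000)                      # finite stand-in for N = {1, 2, 3, ...}
candidates = [x for x in range(1, 1000) if works(x, ys)]
```

With the convention $0\in\mathbb N$, the same scan over `range(0, 1000)` leaves $x=0$ as the survivor instead.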
{ "language": "en", "url": "https://math.stackexchange.com/questions/199765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about prime ideals and union of ideals The questions asks to show that if $A$ is a ring and $I, J_{1}, J_{2}$ ideals of $A$, and $P$ is a prime ideal, then $I \subset J_{1} \cup J_{2} \cup P$ implies $I \subset J_{1}$ or $J_{2}$ or $P$. I've been trying to work with the contrapositive and use that $I \subset J_{1} \cup J_{2}$ implies $I \subset J_{1}$ or $I \subset J_{2}$ in order to ascertain an element of $I$ that's not in the union, but I had no luck. Can anyone please shed some light? Thanks a lot in advance!
As commented by Georges Elencwajg, this is a special case of the prime avoidance lemma. Here is a proof for your situation. Assume the claim is wrong. First we deal with the special case that $I$ is contained in the union $K_1 \cup K_2$ of just two of the three ideals $J_1, J_2, P$. By our assumption there is an $a\in I\setminus K_2$ and a $b\in I\setminus K_1$. Since $I\subset K_1\cup K_2$, we get $a\in K_1$ and $b\in K_2$, and from $a + b\in I$ we get w.l.o.g. $a + b\in K_1$. Thus $b = (a + b) - a\in K_1$, a contradiction. It remains to consider the case that there exist elements $a\in I\setminus (P \cup J_2)$, $b\in I\setminus (P\cup J_1)$ and $p\in I\setminus (J_1\cup J_2)$. Since $I\subset J_1\cup J_2\cup P$, this forces $a\in J_1$, $b\in J_2$ and $p\in P$. The element $x = ab + p$ lies in $I$ and therefore in at least one of the ideals $J_1$, $J_2$ or $P$. If $x\in J_1$, then $p = x - ab\in J_1$ (note that $a\in J_1$ and the ideal property of $J_1$ give $ab\in J_1$), a contradiction. In the same way, $x\in J_2$ yields a contradiction. If $x\in P$, then $ab = x - p\in P$. By the primality of $P$ we get $a\in P$ or $b\in P$, which is again a contradiction.
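As a sanity check (my own illustration, separate from the proof), one can brute-force the statement in the ring $\Bbb Z$, where every ideal is $n\Bbb Z$, where $a\Bbb Z\subset d\Bbb Z$ iff $d\mid a$, and where the union condition only needs checking over one period of length $\operatorname{lcm}$:

```python
from math import lcm

def contained_in_union(a, divisors, bound):
    # is every multiple of a divisible by some d? one period (the lcm) suffices
    return all(any(m % d == 0 for d in divisors) for m in range(a, bound, a))

def avoidance_holds(a, b1, b2, p):
    # I = a*Z, J1 = b1*Z, J2 = b2*Z, P = p*Z with p a prime number
    bound = lcm(a, b1, b2, p) + 1
    if contained_in_union(a, (b1, b2, p), bound):
        return any(a % d == 0 for d in (b1, b2, p))  # a*Z sits inside d*Z iff d divides a
    return True                                      # hypothesis not satisfied: nothing to check

checked = all(avoidance_holds(a, b1, b2, p)
              for a in range(1, 13)
              for b1 in range(1, 9)
              for b2 in range(1, 9)
              for p in (2, 3, 5, 7))
```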
{ "language": "en", "url": "https://math.stackexchange.com/questions/199838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Calculus and Physics Help! If a particle's position is given by $x = 4-12t+3t^2$ (where $t$ is in seconds and $x$ is in meters): a) What is the velocity at $t = 1$ s? Ok, so I have an answer: $v = \frac{dx}{dt} = -12 + 6t$ At $t = 1$, $v = -12 + 6(1) = -6$ m/s But my problem is that I want to see the steps of using the formula $v = \frac{dx}{dt}$ in order to achieve $-12 + 6t$... I am in physics with calc, and calc is only a co-requisite for this class, so I'm taking it while I'm taking physics. As you can see calc is a little behind. We're just now learning limits in calc, and I was hoping someone could help me figure this out.
Think of it this way. On an $x$ versus $t$ plot, the slope of the curve is the rate of change of $x$ with respect to $t$, which is given by $$v=\lim_{\Delta t\to 0}\frac{\Delta x}{\Delta t}=\frac{dx}{dt}.$$ For more information, check this out http://en.wikipedia.org/wiki/Derivative
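For this particular $x(t)$ the limit can be carried out by hand: $$\frac{\Delta x}{\Delta t}=\frac{x(t+\Delta t)-x(t)}{\Delta t}=\frac{-12\,\Delta t+6t\,\Delta t+3\,\Delta t^2}{\Delta t}=-12+6t+3\,\Delta t\;\longrightarrow\;-12+6t \quad\text{as }\Delta t\to 0.$$ A quick numerical version of the same limit (my own illustration) shows the secant slopes at $t=1$ approaching $-6$ m/s:

```python
def x_pos(t):
    return 4 - 12 * t + 3 * t**2

def avg_velocity(t, dt):
    # slope of the secant line between t and t + dt
    return (x_pos(t + dt) - x_pos(t)) / dt

slopes = [avg_velocity(1.0, dt) for dt in (0.1, 0.01, 1e-4, 1e-6)]
```

Each slope equals $-6 + 3\,\Delta t$ exactly, so shrinking $\Delta t$ walks the values down to $-6$.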
{ "language": "en", "url": "https://math.stackexchange.com/questions/199865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What set of graphs can be drawn on a plane such that no edges intersect? It's seems like all acyclic graphs can, but not all cyclic graphs (I.e. The fully connected 4 node graph can, but the fully connected 5 node graph cannot) Also, is there a name for this property? (Don't know if it makes a difference, but if it does, let's assume edges can only be drawn as straight lines and that nodes are drawn as points, not shapes w/ an area.) Edit: Follow up question: While Wikipedia has taught me that planarity testing can be done in a computationally efficient manner, I wonder about a related problem: given a non-planar graph, is it possible to determine the smallest number edges that must be removed to create a planar graph in some manner that's more efficient than simple brute force?
Planar graphs are those which have a $\mathbb{R}^2$ embedding without edge crossings. Kuratowski's Theorem states that a graph is planar if and only if it has no $K_5$ or $K_{3,3}$ minor.
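Short of running a full planarity test, Euler's formula gives quick necessary (but not sufficient) conditions: a simple planar graph with $v\ge 3$ vertices has at most $3v-6$ edges, and at most $2v-4$ if it is also bipartite. A small sketch (my own illustration; note the Petersen graph passes these bounds yet is non-planar, so they can only ever rule graphs out):

```python
def edge_count_ok(v, e, bipartite=False):
    # necessary (NOT sufficient) planarity condition from Euler's formula,
    # valid for simple graphs with v >= 3
    bound = 2 * v - 4 if bipartite else 3 * v - 6
    return e <= bound

k4_ok = edge_count_ok(4, 6)                     # passes (K4 is indeed planar)
k5_ok = edge_count_ok(5, 10)                    # fails: K5 cannot be planar
k33_ok = edge_count_ok(6, 9, bipartite=True)    # fails: K3,3 cannot be planar
```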
{ "language": "en", "url": "https://math.stackexchange.com/questions/199961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the discrete metric can not be obtained from $X\neq\{0\}$ If $X \neq \{ 0\}$ is a vector space. How does one go about showing that the discrete metric on $X$ cannot be obtained from any norm on $X$? I know this is because $0$ does not lie in $X$, but I am having problems. Formalizing a proof for this. This is also my final question for some time, after this I will reread the answers, and not stop until I can finally understand these strange spaces.
Suppose $(X,d)$ is a vector space equipped with the discrete metric, and suppose, for contradiction, that $d$ comes from a norm $\lVert\cdot\rVert$. Since $X\neq\{0\}$ we can pick $x\neq y\in X$ (take $x\neq 0$ and $y=0$). For every scalar $\alpha\neq 0$ we have $\alpha x\neq \alpha y$, so the discrete metric gives $$\lVert\alpha x-\alpha y\rVert = d(\alpha x,\alpha y)=1.$$ On the other hand, by homogeneity of the norm, $$\lVert\alpha x-\alpha y\rVert =\lVert\alpha(x-y)\rVert = \lvert\alpha\rvert\,\lVert x-y\rVert=\lvert\alpha\rvert\cdot 1=\lvert\alpha\rvert.$$ Choosing any $\alpha$ with $\lvert\alpha\rvert\neq 1$, say $\alpha=\tfrac12$, gives $1=\tfrac12$, a contradiction!
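The contradiction is exactly a failure of homogeneity: a norm-induced metric must satisfy $d(\alpha x,\alpha y)=|\alpha|\,d(x,y)$, and the discrete metric does not. A tiny numerical illustration (names are my own):

```python
def discrete_metric(u, v):
    return 0 if u == v else 1

x, y = (1.0, 0.0), (0.0, 0.0)          # two distinct vectors
a = 0.5
ax = tuple(a * c for c in x)
ay = tuple(a * c for c in y)

lhs = discrete_metric(ax, ay)          # still 1: the scaled points remain distinct
rhs = abs(a) * discrete_metric(x, y)   # 0.5: what homogeneity of a norm would force
```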
{ "language": "en", "url": "https://math.stackexchange.com/questions/200023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
If $f$ is uniformly differentiable $\Longrightarrow $ $f'$ is continuous I'm not sure how to go about proving this theorem: Let $U\subset \mathbb{R}^m$ (open set) and $f:U\longrightarrow \mathbb{R}^n$ a differentiable function such that: $\forall \epsilon>0\,,\exists \delta>0:|\!|h|\!|<\delta,[x,x+h]\subset U \Longrightarrow |\!|f(x+h)-f(x)-f'(x)(h)|\!|<\epsilon |\!|h|\!|$ then it holds that $f':U\longrightarrow \mathcal{L}(\mathbb{R}^m,\mathbb{R}^n)$ is continuous.We can also say that $f'$ is uniformly continuous? Any hints would be appreciated.
If you interchange the roles of $x$ and $x+h$ (and replace $h$ by $-h$), then you see that $$ |f(x)-f(x+h)+f'(x+h)(h)|<\epsilon|h|, $$ which, upon combining with what you have, and using the triangle inequality, shows that $$|(f'(x+h)-f'(x))(h)|<2\epsilon|h|$$ for all admissible $h$. This is the key estimate: it controls the difference of the derivatives along every small increment, uniformly in $x$, and the continuity of $f'$ follows from it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/200096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Help with odd partial derivatives in velocity $\bar v^2 = \dot x ^2+\dot y^2$ I am doing a physics -course Tfy-0.2061. My teacher claims that this is velocity squared, $\bar v^2 = \dot x ^2+\dot y^2$. I cannot understand why it is not $\bar v^2 = (\dot x +\dot y)^2$. If distance is $\bar d = \bar x + \bar y$. Then velocity is $\partial_t \bar d = \dot x + \dot y$. Now just square it to get $$\bar v^2 = \dot x^2 +2\dot x\dot y +\dot y^2 \not = \dot x^2 +\dot y^2.$$ What does my teacher mean by velocity $\bar v^2 = \dot x ^2+\dot y^2$? P.s. the goal was to do something called "nopeuden radiaalinen komponentti" that probably means radial component of velocity. I don't just understand what it means, some angular velocity? I am doing the exercise 3b here, sorry not in English. Trial 1 The only way that my teacher can be correct is if $y_0=0$ and $x_0=0$ because
This is shorthand for $$|v|^2=v\cdot v=\dot{x}^2+\dot{y}^2,$$ i.e. a statement about the length of the velocity vector whose components are $(\dot{x},\dot{y})$.
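A numerical illustration (my own; central differences stand in for the dots): for a point moving on the unit circle, $\dot x^2+\dot y^2$ recovers the squared speed $1$ at every instant, while $(\dot x+\dot y)^2$ does not.

```python
import math

def velocity(components, t, h=1e-6):
    # central-difference approximation to the time derivative of each component
    return [(f(t + h) - f(t - h)) / (2 * h) for f in components]

vx, vy = velocity([math.cos, math.sin], 0.7)   # motion on the unit circle
speed_squared = vx**2 + vy**2                  # xdot^2 + ydot^2: equals 1 here
not_the_speed = (vx + vy)**2                   # (xdot + ydot)^2: depends on direction
```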
{ "language": "en", "url": "https://math.stackexchange.com/questions/200157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A specific consequence of Cauchy's integral formula If f is holomorphic in an open subset $G \subset \mathbb{C}$, and if $f'(a)\neq0$ for some $a \in G$, then there exists $r>0$ such that \begin{eqnarray}|f'(z)-f'(a)|<|f'(a)|,\end{eqnarray} for $z \in D(a,r)$ ($D$ for 'disk' with centre $a$, radius $r$). The above is what I intend to prove. I've tried to use Cauchy's integral formulae, i.e \begin{eqnarray}2\pi i f'(z)=\int_{\partial D(a,r)}\frac{f(w)}{(w-z)^2} \, dw,\end{eqnarray} or \begin{eqnarray}2\pi i f'(z)=\int_{\partial D(a,r)}\frac{f'(w)}{w-z} \, dw,\end{eqnarray} but I don't get anywhere? If someone would give a hint I'd appreciate it.
You don't need Cauchy's integral formula for this. A holomorphic function is infinitely differentiable, so its derivative is continuous. Your inequality is an instance of the $\delta$-$\epsilon$-definition of the continuity of $f'$, with $\delta=r$ and $\epsilon=|f'(a)|\gt0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/200205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Problem with Ring $\mathbb{Z}_p[i]$ and integral domains Let $$\Bbb Z_p[i]:=\{a+bi\;:\; a,b \in \Bbb Z_p\,\,,\,\, i^2 = -1\}$$ -(a)Show that if $p$ is not prime, then $\mathbb{Z}_p[i]$ is not an integral domain. -(b)Assume $p$ is prime. Show that every nonzero element in $\mathbb{Z}_p[i]$ is a unit if and only if $x^2+y^2$ is not equal to $0$ ($\bmod p$) for any pair of elements $x$ and $y$ in $\mathbb{Z}_p$. (a)I think that I can prove the first part of this assignment. Let $p$ be not prime. Then there exist $x,y$ such that $p=xy$, where $1<x<p$ and $1<y<p$. Then $(x+0i)(y+0i)=xy=0$ in $\mathbb{Z}_p[i]$. Thus $(x+0i)(y+0i)=0$ in $\mathbb{Z}_p[i]$. Since none of $x+0i$ and $y+0i$ is equal to $0$ in $\mathbb{Z}_p[i]$, we have $\mathbb{Z}_p[i]$ is not an integral domain. However, I don't know how to continue from here.
Hint $ $ unit $\rm\alpha\in \Bbb Z_p[{\it i}]\iff$ unit $\alpha\bar \alpha\in \Bbb Z_p \iff \alpha\bar \alpha\not\equiv0,\:$ since $\rm\:\alpha\:|\:1\iff \alpha\bar\alpha\:|\:1\:$ via norms. Remark $\ $ The point of expressing it this way (vs. prior answer) is that it explicitly highlights how the norm, being a multiplicative homomorphism, must preserve purely multiplicative properties, such as the property of being a unit. That is the essence of the matter here (and throughout number theory), so it is well-worth explicit emphasis.
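A quick computational check of the criterion in part (b) (my own illustration): for $p=3$ the norm $x^2+y^2$ never vanishes nontrivially, so every nonzero element of $\Bbb Z_3[i]$ is a unit, while for $p=5$ we have $1^2+2^2\equiv 0$, and correspondingly $(1+2i)(1-2i)\equiv 0$ exhibits zero divisors.

```python
def norm_vanishes_nontrivially(p):
    # does x^2 + y^2 = 0 (mod p) have a solution other than (0, 0)?
    return any((x * x + y * y) % p == 0
               for x in range(p) for y in range(p) if (x, y) != (0, 0))

def mul_mod(a, b, p):
    # multiply a = a0 + a1*i and b = b0 + b1*i in Z_p[i]
    return ((a[0] * b[0] - a[1] * b[1]) % p,
            (a[0] * b[1] + a[1] * b[0]) % p)

field_p3 = not norm_vanishes_nontrivially(3)   # Z_3[i]: every nonzero element is a unit
zero_div_p5 = mul_mod((1, 2), (1, -2), 5)      # (1+2i)(1-2i) = 5 = 0 in Z_5[i]
```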
{ "language": "en", "url": "https://math.stackexchange.com/questions/200259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Understanding Why If a set is open in $\mathbb{R}^n$, Today in lectures we were doing a brief review of some metric spaces stuff and I'm not quite sure about something we did: If we define two metrics as $d_1(x,y)= \max_{i=1,\ldots,n} |x_i-y_i|$ and the metric $d_2=\displaystyle\sum_{i=1}^n |x_i-y_i|$ show that a set is open in $(\mathbb{R}^n,d_1)$ iff it is open in $(\mathbb{R}^n,d_2)$ So this is the same in $\mathbb{R}^n$ as in $\mathbb{R}^2$ so for simplicity I'm just looking at it in $\mathbb{R}^2$ just now. So as $\max_{i=1,2} |x_i-y_i| \leq \displaystyle\sum_{i=1}^2 |x_i-y_i| \leq \sqrt{2\times\max_{i=1,2} |x_i-y_i|^2}=\sqrt{2} \max_{i=1,2} |x_i-y_i|$ Which gives $d_1(x,y)\leq d_2(x,y)\leq\sqrt{2}d_1(x,y)$ So I understand why this shows that any open set in $(\mathbb{R}^2,d_2(x,y))$ is open in $(\mathbb{R}^2,d_1(x,y))$ as we can make a smaller open ball round points but I can't see why this shows the other direction? Thanks for any help
Suppose $d_1,d_2$ are two metrics on a set, and there is a constant $c>0$ such that for all points $x,y$ we have $d_1(x,y)\le c\, d_2(x,y)$. Let $U$ be a $d_1$-open set and $y\in U$; then there is an $r>0$ with the $d_1$-ball $B_1(y,r)=\{z : d_1(y,z)\lt r\}$ contained in $U$. If $z$ lies in the $d_2$-ball of radius $r/c$ about $y$, then $d_1(y,z)\le c\,d_2(y,z)\lt r$, so $z\in B_1(y,r)\subset U$. Thus every point of $U$ has a $d_2$-ball around it inside $U$, i.e. $U$ is $d_2$-open. The same argument shows that if there is also a constant $b>0$ such that for all $x,y$ we have $d_2(x,y)\le b\, d_1(x,y)$, then every $d_2$-open set is a $d_1$-open set. The only thing you need to know about $c$ and $b$ is that both are positive.
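A numerical spot check of the two inequalities for these particular metrics (my own illustration; note the clean constant in $\Bbb R^2$ is $b=2$, since $|u_1|+|u_2|\le 2\max(|u_1|,|u_2|)$, and any positive constant serves equally well for the openness argument):

```python
import random

def d_max(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

def d_sum(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
comparable = all(d_max(p, q) <= d_sum(p, q) <= 2 * d_max(p, q)
                 for p in pts for q in pts)
```

The constant $2$ is sharp: it is attained at pairs such as $(0,0)$ and $(1,1)$.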
{ "language": "en", "url": "https://math.stackexchange.com/questions/200342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Failure during calculating the matrix exponential, but where? I have to calculate $e^{At}$ of the matrix $A$. We are learned to first compute $A^k$, by just computing $A$ for a few values of $k$, $k=\{0\ldots 4\}$, and then find a repetition. $A$ is defined as follows: $$ A = \begin{bmatrix} -2 & 2& 0 \\ 0 & 1 & 0 \\ 1 & -1 & 0 \end{bmatrix} $$ Because I couldn't find any repetition I used Wolfram|Alpha which gave me the following, http://goo.gl/JxyIg: $$ \frac{1}{6} \begin{bmatrix} 3(-1)^k2^{k+1} & 2(2-(-1)^k2^{k+1}) & 0 \\ 0 & 6 & 0 \\ 3(-(-2)^k+0^k) & 2(-1+(-2)^k) & 6*0^k \end{bmatrix} $$ Then $e^{At}$ is calculated as followed (note that $\sum_{k=0}^{\infty}\frac{0^kt^k}{k!} = e^{0t} = 1$, using that $0^0 = 1$): $$ e^{At} = \begin{bmatrix} \frac{1}{6}\sum_{k=0}^{\infty}\frac{3(-1)^k2^{k+1}t^k}{k!} & \frac{1}{6}\sum_{k=0}^{\infty}\frac{2(2-(-1)^k2^{k+1})t^k}{k!} & 0 \\ 0 & \frac{1}{6}\sum_{k=0}^{\infty}\frac{6t^k}{k!} & 0 \\ \frac{1}{6}\sum_{k=0}^{\infty}\frac{3(-(-2)^k+0^k)t^k}{k!} & \frac{1}{6}\sum_{k=0}^{\infty}\frac{2(-1+(-2)^k)}{k!} & \frac{1}{6}\sum_{k=0}^{\infty}\frac{6^k*0^k}{k!} \end{bmatrix} $$ Now this matrix should give as a answer $$ \begin{bmatrix} e^{-2t} & e^{2t} & 0 \\ 0 & e^{t} & 0 \\ e^{t} & e^{-t} & 1 \end{bmatrix} $$ Now when I compute this answer of $e^{At}$, I get different answers for some elements. Only the elements $A_{11} = e^{-2t}$, $A_{13} = A_{21} = A_{23} = A_{33} = 1$ and $A_{22} = e^t$. However when I calculate $A_{12}$ I get the following: $$ A_{12}=\frac{1}{6}\sum_{k=0}^{\infty}\frac{2(2-(-1)^k2^{k+1})t^k}{k!}=\frac{2}{6}\left(\sum_{k=0}^{\infty}\frac{2t^k}{k!}-\sum_{k=0}^{\infty}\frac{(-1)^k2^{k+1}t^k}{k!}\right)=\frac{4}{6}\left(\sum_{k=0}^{\infty}\frac{t^k}{k!}-\sum_{k=0}^{\infty}\frac{(-1)^k2^{k}t^k}{k!}\right)=\frac{4}{6}\left(e^t-e^{-2t}\right) $$ Which is of course not equal to $e^{2t}$. Where do I make a mistake? Or does maybe Wolfram|Alpha make a mistake, I know it is correct for $0\ldots 4$.
Corrected : Your $A^k$ is right but for $e^{At}$ you should just have multiplied every term by $\frac {t^k}{k!}$ before computing the sum. So no $6^k$ in the central term for example. Further (as explained by Robert Israel) for $k=0$ your $A^k$ expression is still valid with only diagonal $1$ terms (so no $1$ elsewhere). Last you seem to be supposing that $e^{At}$ will simply be the exponential of each term : this is not true as shown by your $A_{1,2}$ term (i.e. the 'should give...' part is not right). I'll add that, in Mathematica, you must use MatrixExp to compute a matrix exponential and not Exp that returns the exponential of the individual terms! Result : Hoping this helped more,
{ "language": "en", "url": "https://math.stackexchange.com/questions/200437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Determine the equations needed to solve a problem I am trying to come up with the set of equations that will help solve the following problem, but am stuck without a starting point - I can't classify the question to look up more info. The problem: Divide a set of products among a set of categories such that a product does not belong to more than one category and the total products within each category satisfies a minimum number. Example: I have 6 products that can belong to 3 categories with the required minimums for each category in the final row. For each row, the allowed categories for that product are marked with an X - eg. Product A can only be categorized in CatX, Product B can only be categorized in CatX or CatY. $$ \begin{matrix} Product & CatX & CatY & CatZ \\ A & X & & \\ B & X & X & \\ C & X & & \\ D & X & X & X \\ E & & & X\\ F & & X & \\ Min Required& 3 & 1 & 2\\ \end{matrix} $$ The solution - where * marks how the product was categorized: $$ \begin{matrix} Product & CatX & CatY & CatZ \\ A & * & & \\ B & * & & \\ C & * & & \\ D & & & * \\ E & & & *\\ F & & * & \\ Total & 3 & 1 & 2\\ \end{matrix} $$
Robert has already answered your question, but I will expand upon what he wrote. Suppose that you have $m$ products and $n$ categories. Then, you have a binary assignment matrix $X \in \{0,1\}^{m \times n}$ whose $(i,j)$-th entry, which we denote by $x_{ij}$, is given by * *$x_{ij} = 1$ if product $i$ is assigned to category $j$. *$x_{ij} = 0$ otherwise. There are some constraints on this matrix, namely: * *since a product cannot belong to more than one category, we have that there's only one entry equal to $1$ per row. We can write that as $X 1_n = 1_n$ where $1_n$ is the $n$-dimensional vector whose entries are all equal to $1$. *since the total number of products within each category must be greater than a given number $b_j$, we have that the sum of the elements in the $j$-th column of $X$ will be greater or equal than $b_j$. We can write that as $1_m^T X \geq b^T$, where $\geq$ applied to vectors denotes entry-wise $\geq$. In the example you gave, we have $m = 6$ and $n = 3$. If you want to brute-force this problem, you could generate all $2^{18} = 262144$ binary matrices of dimensions $6 \times 3$, and keep only the ones that satisfy the equality constraint $X 1_n = 1_n$ and the inequality constraint $1_m^T X \geq b^T$. However, there are much smarter ways of solving the problem. For example, you could start with a zero matrix and then pick one entry in each row and make it equal to $1$, which guarantees that $X$ satisfies the equality constraint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/200503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
proving a inequality about sup Possible Duplicate: How can I prove $\sup(A+B)=\sup A+\sup B$ if $A+B=\{a+b\mid a\in A, b\in B\}$ I want to prove that $\sup\{a+b\}\le\sup{a}+\sup{b}$ and my approach is that I claim $\sup a+ \sup b= \sup\{\sup a + \sup b\}$ and since $\sup a +\sup b \ge a+b$ the inequality is proved. Is my approach correct?
It is better to think about what the inequality says than just doing formal manipulations. On the left-hand side you have less freedom than on the rigt-hand side, this is easier to see if you write the argument of $a$ and $b$ explicitely, as such: $$ \sup_x\{a(x)+b(x)\} \le \sup_x a(x) + \sup_x b(x) $$ On the LHS, you must vary $x$ simultaneously in $a$ and in $b$, on the RHS you can vary $x$ separately in $a$ and in $b$. In that way, every value you can get on the LHS can be matched on thje RHS, but not in the opposite direction. That makes the inequality obvious.
{ "language": "en", "url": "https://math.stackexchange.com/questions/200558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to solve an nth degree polynomial equation The typical approach of solving a quadratic equation is to solve for the roots $$x=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$$ Here, the degree of x is given to be 2 However, I was wondering on how to solve an equation if the degree of x is given to be n. For example, consider this equation: $$a_0 x^{n} + a_1 x^{n-1} + \dots + a_n = 0$$
An approach that immediately comes to mind is to first establish the relationship (non-linear, typically) between each root of the polynomial (assumed, complex) and the coefficients of the power series, and then solve the resulting set of $n$ equations for the $n$ variables i.e, the roots, using an iterative numerical algorithm (say, Newton-Raphson extended to $n$ equations in $n$ variables) computed in the complex domain (to ensure convergence). A pretty good approach to get the power series coefficients is by using discrete convolution for polynomial multiplication (see here ). However, I was wondering on how to solve an equation if the degree of $x$ is given to be $n$. The only thing that's now between you and your solutions (all $n$ of them) is a computer program.
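One classical way to make such a complex-domain iteration concrete (my own substitution for the Newton-Raphson idea above, not the only option) is the Durand-Kerner / Weierstrass method, which updates all $n$ root estimates simultaneously. The sketch below assumes a monic polynomial and has no convergence safeguards:

```python
def poly_eval(coeffs, z):
    # coeffs[k] is the coefficient of z**k
    return sum(c * z**k for k, c in enumerate(coeffs))

def durand_kerner(coeffs, iters=100):
    # simultaneous iteration for all roots of a MONIC polynomial
    n = len(coeffs) - 1
    # powers of a non-real seed give distinct starting guesses
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new_roots = []
        for i, z in enumerate(roots):
            denom = 1.0
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w
            new_roots.append(z - poly_eval(coeffs, z) / denom)
        roots = new_roots
    return roots

found = sorted(durand_kerner([2.0, -3.0, 1.0]), key=lambda z: z.real)  # z^2 - 3z + 2
```

For the quadratic $z^2-3z+2=(z-1)(z-2)$ the estimates converge to the roots $1$ and $2$.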
{ "language": "en", "url": "https://math.stackexchange.com/questions/200617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59", "answer_count": 10, "answer_id": 9 }
Question: Find all values of real number a such that $ \lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2} $ exists. Thanks in advance for looking at my question. I was tackling this limits problem using this method, but I can't seem to find any error with my work. Question: Find all values of real number a such that $$ \lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2} $$ exists. My Solution: Suppose $\lim_{x\to1}\frac{ax^2+a^2x-2}{x^3-3x+2}$ exists and is equals to $L$. We have $$\lim_{x\to1}{ax^2+a^2x-2}=\frac{\lim_{x\to1}ax^2+a^2x-2}{\lim_{x\to1}x^3-3x+2}*\lim_{x\to1}x^3-3x+2=L*0=0$$ Therefore, $\lim_{x\to1}{ax^2+a^2x-2}=0$ implying $a(1)^2+a^2(1)-2=0$. Solving for $a$, we get $a=-2$ or $a=1$. Apparently, the answer is only $a=-2$. I understand where they are coming from, but I can't see anything wrong with my solution either.
Your first step is fine, but it only gives a necessary condition: if the limit exists, the numerator must tend to $0$, which yields $a=-2$ or $a=1$. You still have to check each candidate. Since $x^3-3x+2=(x-1)^2(x+2)$, for $a=1$ the quotient is $\frac{(x-1)(x+2)}{(x-1)^2(x+2)}=\frac{1}{x-1}$, which has no limit as $x\to1$; for $a=-2$ it is $\frac{-2(x-1)^2}{(x-1)^2(x+2)}=\frac{-2}{x+2}\to-\frac{2}{3}$. Hence only $a=-2$ works.
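A numerical look at the two candidates makes the difference visible (my own illustration): for $a=-2$ the quotient settles near $-\frac{2}{3}$, while for $a=1$ it behaves like $\frac{1}{x-1}$ and blows up with opposite signs on the two sides of $x=1$.

```python
def ratio(a, x):
    return (a * x**2 + a**2 * x - 2) / (x**3 - 3 * x + 2)

near_one = [1 + s * h for h in (1e-3, 1e-5) for s in (+1, -1)]
vals_minus2 = [ratio(-2, x) for x in near_one]
vals_plus1 = [ratio(1, x) for x in near_one]
```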
{ "language": "en", "url": "https://math.stackexchange.com/questions/200678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Basic homework question about rotations in linear algebra Math.SE! I'd like some help understanding the premise of the following question: A rotation of $\mathbb{R}^2$ about the origin is a linear mapping $R_\psi$ given by $R_\psi$ $\begin{pmatrix} r\cos\phi \\ r\sin\phi \\ \end{pmatrix}$ = $\begin{pmatrix} r\cos(\phi+\psi) \\ r\sin(\phi+\psi) \\ \end{pmatrix}$ for $0\leq\psi<2\pi$ and where any vector $v\in \mathbb{R}^2$ can be written as $\begin{pmatrix} r\cos\phi \\ r\sin\phi \\ \end{pmatrix}$ where $r$ is the length of $v$ and $\phi$ is the angle between $v$ and the positive $x$-axis. Verify that $R_\psi = T_A$ where $A=[R_\psi]_E=\begin{pmatrix}\cos \ \psi&-\sin \ \psi\\ \sin \ \psi&\cos\ \psi\\ \end{pmatrix}$ and $T_A(v)=Av$ for $v \in V$. It wasn't difficult to actually verify this result - my real question is this: how can I obtain the fact $[R_\psi]_E=\begin{pmatrix}\cos \ \psi&-\sin \ \psi\\ \sin \ \psi&\cos\ \psi\\ \end{pmatrix}$ (where $E$ is the standard basis), and, more generally, how do I determine what $[R_\psi]_B$ is for any arbitary basis $B$ of $\mathbb{R}^2$? Thanks in advance for any help!
Given a linear transformation $T : V \to V$ and an ordered basis $\mathcal{B} = \{b_1, \dots, b_n\}$ of $V$, the standard matrix of $T$, with respect to $\mathcal{B}$, is given by $$[T]_{\mathcal{B}} = [T(b_1)\ \cdots\ T(b_n)]$$ where $T(b_i)$ is expressed in the basis $\mathcal{B}$. That is, $T(b_i)$, expressed in the basis $\mathcal{B}$, is column $i$ of $[T]_{\mathcal{B}}$. For your particular linear transformation, note that $(1, 0)^t = (\cos 0, \sin 0)^t$, so $R_{\psi}((1, 0)^t) = (\cos\psi, \sin\psi)^t$, the first column of $[R_{\psi}]_E$. Now note that $(0, 1)^t = (\cos\frac{\pi}{2}, \sin\frac{\pi}{2})^t$, so $R_{\psi}((0, 1)^t) = (\cos(\psi + \frac{\pi}{2}), \sin(\psi + \frac{\pi}{2}))^t = (-\sin\psi, \cos\psi)^t$, the second column of $[R_{\psi}]_E$.
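Concretely, for the second part: if $P$ is the matrix whose columns are the vectors of $B$ written in the standard basis, then $[R_\psi]_B = P^{-1}[R_\psi]_E\,P$. A small numerical check (my own illustration, with an arbitrary choice of $B$) verifies the defining property that column $0$ gives the $B$-coordinates of $R_\psi(b_1)$:

```python
import math

def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

psi = math.pi / 3
A = [[math.cos(psi), -math.sin(psi)],
     [math.sin(psi),  math.cos(psi)]]          # [R_psi]_E

b1, b2 = [1.0, 1.0], [1.0, -1.0]               # an arbitrary (illustrative) basis B
P = [[b1[0], b2[0]], [b1[1], b2[1]]]           # columns of P are the basis vectors
M = mat_mult(inv2(P), mat_mult(A, P))          # [R_psi]_B = P^{-1} [R_psi]_E P

# column 0 of M should give the coordinates of R_psi(b1) in the basis B:
lhs = mat_vec(A, b1)
rhs = [M[0][0] * b1[0] + M[1][0] * b2[0],
       M[0][0] * b1[1] + M[1][0] * b2[1]]
```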
{ "language": "en", "url": "https://math.stackexchange.com/questions/200738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Conditions for Moving Function Outside Sine Argument Are there multiple discrete functions $f[n]$ such that \begin{align} \sin \left( f[n] x[n] \right) = f[n] \sin \left( x[n] \right) \end{align} I know that the above equation holds for $f[n] = 0$, $f[n] = 1$, $f[n] = u[n]$ (where $u[n] = 0$ for $n < 0$ and 1 otherwise) and $f[n] = \delta[n]$ (where $\delta[n] = 1$ for $n = 0$ and $0$ otherwise). If there are others, are there any special conditions for $f[n]$ and $x[n]$ that allow for this type of "commuting"? Is it basically just variations of functions that modulate between 0 and 1?
If $x[n]=\pi/2$, then $\sin(f[n]\,\pi/2)=f[n]$. Sine can only take on values between $-1$ and $1$, so $f[n]\in [-1,1]$. \begin{align} \sin \left( f[n] x[n] \right) - f[n] \sin \left( x[n] \right) =0 \end{align} This equation has to hold for all functions $x[n]$, so let's take $x[n]=\sqrt{2}$ to narrow down which functions $f[n]$ may work. A plot of $f[n]$ vs $\sin(f[n]\sqrt{2})-f[n]\sin(\sqrt{2})$ for $f[n]\in [-1,1]$ (plot omitted) shows that the only values that $f[n]$ can take are $-1$, $0$ and $1$: $$f[n]\in\{-1, 0, 1\}$$ Alternate (analytic) solution: \begin{align} &f\sin(x)=\sin(fx)\\ &\text{apply $\frac{d^2}{dx^2}$ to both sides:}\\ &-f\sin(x)=-f^2\sin(fx)\\ &\text{divide both sides by $-f$ (the case $f=0$ satisfies the original equation trivially):}\\ &\sin(x)=f\sin(fx)\\ &\text{substitute into the first equation:}\\ &f^2\sin(fx)=\sin(fx)\\ &\text{for this to hold for all $x$ we need $\sin(fx)\equiv 0$, i.e. $f=0$, or $f^2=1$:}\\ &f=-1,0,1 \end{align}
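The plotted observation can be reproduced without a plot (my own sketch): scan a grid of constant values $f$ and keep those for which $\sin(fx)=f\sin(x)$ holds at several generic sample points.

```python
import math

def commutes(f, xs, tol=1e-9):
    return all(abs(math.sin(f * x) - f * math.sin(x)) < tol for x in xs)

xs = [0.3, math.sqrt(2), 2.5]                    # a few "generic" sample points
grid = [k / 100 for k in range(-150, 151)]       # f from -1.5 to 1.5 in steps of 0.01
survivors = [f for f in grid if commutes(f, xs)]
```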
{ "language": "en", "url": "https://math.stackexchange.com/questions/200787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Statements in Euclidean geometry that appear to be true but aren't I'm teaching a geometry course this semester, involving mainly Euclidean geometry and introducing non-Euclidean geometry. In discussing the importance of deductive proof, I'd like to present some examples of statements that may appear to be true (perhaps based on a common student misconception or over-generalisation), but are not. The aim would be to reinforce one of the reasons given for studying deductive proof: to properly determine the validity of statements claimed to be true. Can anyone offer interesting examples of such statements? An example would be that the circumcentre of a triangle lies inside the triangle. This is true for triangles without any obtuse angles - which seems to be the standard student (mis)conception of a triangle. However, I don't think that this is a particularly good example because the misconception is fairly easily revealed, as would statements that hold only for isoceles or right-angled triangles. I'd really like to have some statements whose lack of general validity is quite hard to tease out, or has some subtlety behind it. Of course the phrase 'may appear to be true' is subjective. The example quoted should be understood as indicating the level of thinking of the relevant students. Thanks.
A common attempt to trisect an angle is to construct an isosceles triangle, $\triangle ABC$, with $\angle ABC$ the desired angle; then trisect the opposite side $\overline{AC}$, finding points $D,E$ on $\overline{AC}$ with $AD = DE = EC$. One might guess that $m\angle ABD = m\angle DBE = m\angle EBC$ but in fact this is never the case. Although this isn't quite what you were asking for, the best example of a subtly flawed geometric proof I know of can be seen at Wikipedia here.
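The failure is easy to see numerically (my own illustration): place the apex $B$ at the origin, trisect the opposite side, and compare the three angles at $B$; the middle one comes out strictly larger than the two outer ones.

```python
import math

def angle_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(dot / (math.hypot(*u) * math.hypot(*v)))

# isosceles triangle with apex B at the origin and horizontal base AC
A, C = (-1.0, 2.0), (1.0, 2.0)
D, E = (-1 / 3, 2.0), (1 / 3, 2.0)    # AD = DE = EC trisects the base

outer = angle_between(A, D)           # angle ABD (= angle EBC by symmetry)
middle = angle_between(D, E)          # angle DBE
total = angle_between(A, C)           # the full angle ABC
```

Here the outer angles are about $0.2985$ rad while the middle angle is about $0.3303$ rad, so the three parts are not equal (though they do still sum to the whole angle).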
{ "language": "en", "url": "https://math.stackexchange.com/questions/200834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Mathematical model for truncation of digits. I wanted to find a mathematical expression to represent the truncation of the least $D$ digits of a number with radix $r$ without using the "floor" operation. So a $Q$-digit number written as $r_{Q-1}r_{Q-2}\ldots r_0$ should become $r_{Q-1}r_{Q-2}\ldots r_{D}$ after truncation. I think I came up with a satisfactory expression for my research for the truncated value of a number $n$ in a certain system \begin{align} \mathbb{T}\left\{n\right\} = \frac{1}{r^D} \left[ \left< n \right>_{N} - \left< n \right>_M \right] \end{align} where $N = r^Q$ and $M = r^D$ and $\left<n\right>_N$ returns the remainder of $n$ divided by $N$. I need the modulo of $n$ by $N$ because only the bottom $Q$ digits of $n$ are truncated. But I am struggling showing the following \begin{align} \mathbb{T}\left\{r^D n\right\} = \mathbb{T} \left\{r^D n + m\right\} \end{align} for $0 \leq m < r^D$ and $r^D +m > N$. Clearly this is true from my experience in computer science, but when I evaluate the expression for $r^D n+m$ mathematically, I get \begin{align} \mathbb{T}\left\{r^D n + m \right\} &= \frac{1}{r^{D}} \left[ \left<r^D n + m\right>_N - \left<r^D n + m \right>_M\right] \\ &= \frac{1}{r^D} \left[ \left<r^D n + m \right>_N - \left< \left<r^D n \right>_M + \left< m \right>_M \right>_M \right] \\ &= \frac{1}{r^D} \left[ \left< r^D n + m \right>_N - \left< m \right>_M \right]\\ &= \frac{1}{r^D} \left[ \left< r^D n + m \right>_N - m \right] \end{align} which in no way that I am familiar with readily simplifies to \begin{align} \mathbb{T} \left\{ r^D n \right\} = \frac{1}{r^D} \left[ \left< r^D n \right>_N \right] \end{align} I am sure that it does, but I might be missing some elementary modulo arithmetic tricks that other users of this site might readily have on hand.
Just working with the question in the comments, note that since $r^D=M$ and $0\le m\lt r^D$ we have $r^Dn$ reduced modulo $M$ is zero, and $m$ reduced modulo $M$ is $m$. So we are left with trying to prove that $r^Dn+m$ reduced modulo $N$ and $r^Dn$ reduced modulo $N$ differ by $m$. So write $r^Dn=Nq+t$ with $0\le t\lt N$. Note $t=r^Ds$ for some $s\lt r^{Q-D}$. Then $r^Dn+m=Nq+u$ where $$u=t+m\lt r^Ds+r^D=r^D(s+1)\le r^Q=N$$ so $r^Dn+m$ reduced modulo $N$ is $t+m$, as desired.
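A direct numerical check of both the definition and the identity (my own illustration; with $r=10$, the operator $\mathbb T$ literally drops the last $D$ of the bottom $Q$ digits):

```python
def trunc(n, r, Q, D):
    N, M = r ** Q, r ** D
    return ((n % N) - (n % M)) // M

r, Q, D = 10, 5, 2
example = trunc(9876543, r, Q, D)        # bottom Q digits are 76543; drop 43, keep 765

identity_holds = all(
    trunc(r**D * n, r, Q, D) == trunc(r**D * n + m, r, Q, D)
    for n in range(0, 3000, 7)
    for m in range(r**D)
)
```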
{ "language": "en", "url": "https://math.stackexchange.com/questions/200892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does this group theory statement ask for? This is a question that was asked in my group theory examination today: Let $G$ be a finite cyclic group generated by an element $x$ of $G$. If $y(\ne x)\in G$ is also a generator of $G$, find the relation between the elements $x$ and $y$. I do not think that given an arbitrary finite cyclic group one can give a nice relation between any two of its generators. For example if $\mathbb{Z}/50\mathbb{Z}$ what is the relation between 7 and 49 or 23 and 31 or say 3 and 43? I have not been able to understand clearly what kind of a relation the question asks for. I know that $x=y^m$ for some $m$, and $m$ is then coprime to the order of the group but I do not know how could this give a relation involving only $x$ and $y$. So what is the question asking for and what in general is a relation between $x$ and $y$?
Well, there are some very basic relations between $\,x\,,\,y\,$, for example say $\,|G|=n\,$ , then: $$\begin{align*}*&\;\;\;|x|=|y|\\**&\;\;\;y=x^k\,\,,\,\,x=y^s\;\;,\;\text{for some}\;\;k,s\in\Bbb Z\;\;\text{with}\;\;(n,k)=(n,s)=1\end{align*}$$
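In the additive model $\Bbb Z/n\Bbb Z$ (generated by $x=1$, so that "$x^k$" means $k\cdot 1=k$) both relations are easy to verify computationally (my own illustration): the generators are exactly the residues coprime to $n$.

```python
from math import gcd

n = 12

def generated_by(g):
    # the cyclic subgroup generated by g in Z/nZ (written additively)
    return {(g * k) % n for k in range(n)}

generators = [g for g in range(1, n) if len(generated_by(g)) == n]
coprime = [k for k in range(1, n) if gcd(k, n) == 1]
```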
{ "language": "en", "url": "https://math.stackexchange.com/questions/200951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
In how many ways one can answer a 20-question exam with two-choice questions? There is a problem in my book: In how many ways one can answer a 20-question exam with two-choice questions? which I would solve as follows: Since there are 20 questions, and each question has two choices, there are $20 \cdot {{2} \choose 1}$ but I am wrong according to the solution written in the book. Why I can't do it this way? I know what the correct answer is, I just want to know why my solution doesn't work in this case.
Let's see what $20\binom{2}{1}$ actually counts. For any question, like the $7$-th, you have $\binom{2}{1}$ choices. So there are $\binom{2}{1}$ ways to answer the $7$-th question and leave all the others blank. Now your $20\binom{2}{1}$ counts all the different ways to answer exactly one of the $20$ questions and leave the others blank. My candidate for the right answer is $3^{20}$. For at any question, we can choose the first answer, or the second, or decide the question is too hard and go on to the next question. But the book probably gave $2^{20}$. Think of choosing the first answer to a question as the letter $a$, and choosing the second answer as the letter $b$. Then the number of ways to answer every question is just the number of "words" of length $20$ over the alphabet $\{a,b\}$. One useful way to see whether one has the right answer is to do a hand count in "small" cases, and check one's conjectured formula against that count. For $n=1$, your method gives $2$, which is correct. For $n=2$, your method gives $4$, which is correct. Good so far. For $n=3$, your method gives $6$, which is not right: listing gives $8$ ways.
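As a quick check of the three counts above (a brute-force sketch, not part of the original answer), one can enumerate the small cases in Python; `-` stands for a blank question:

```python
from itertools import product

def count_all_answered(n):
    """Words of length n over {a, b}: one choice per question (gives 2^n)."""
    return sum(1 for _ in product("ab", repeat=n))

def count_with_blanks(n):
    """Each question: first answer, second answer, or left blank (gives 3^n)."""
    return sum(1 for _ in product("ab-", repeat=n))

def count_exactly_one(n):
    """Exactly one question answered, the rest blank: what n*C(2,1) counts."""
    return sum(1 for w in product("ab-", repeat=n) if sum(c != "-" for c in w) == 1)

for n in range(1, 5):
    print(n, count_all_answered(n), count_with_blanks(n), count_exactly_one(n))
```

For $n=3$ this prints the counts $8$, $27$, $6$: the hand count $8$ from the answer, the "blanks allowed" count $3^3$, and the $6$ that the questioner's formula $3\binom{2}{1}$ actually counts.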
{ "language": "en", "url": "https://math.stackexchange.com/questions/201014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Are the only sets in $\mathbb{R^1}$ which are both open and closed $\mathbb{R^1}$ and $\emptyset$? As the title says, how does one prove that the only sets in $\mathbb{R^1}$ which are both open and closed are $\mathbb{R^1}$ and $\emptyset$? I tried to prove it by contradiction, but I can't really show that the assumption implies the contrary.
Let $S\subset \Bbb R$ be non-empty, open and closed, and fix $x_0\in S$. Let $I:=\{r>0:[x_0-r,x_0+r]\subset S\}$. As $S$ is open, $I$ is non-empty. Suppose $I$ is bounded, and let $\{r_n\}$ be a sequence which increases to $\sup I$. Then $x_0\pm r_n\in S$ for each $n$, and as $S$ is closed, $x_0\pm\sup I\in S$. But $S$ is open, so we can find $\delta>0$ such that $x_0\pm(\sup I+t)\in S$ for $0\leq t\leq \delta$, hence $\sup I+\delta\in I$, a contradiction. So $I$ is unbounded, and therefore $S=\Bbb R$. Note that such an approach works for $\Bbb R^d$ instead of $\Bbb R$: just replace the interval $[x_0-r,x_0+r]$ by the closed ball $\bar B(x^{(0)},r):=\{x\in\Bbb R^d:\max_{1\leq j\leq d}|x_j-x_j^{(0)}|\leq r\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Prove that $\zeta (4)\le 1.1$ Prove the following inequality $$\zeta (4)\le 1.1$$ I saw on the site some proofs for $\zeta(4)$ that use Fourier or Euler's way for computing its precise value, and that's fine and I can use it. Still, I wonder if there is a simpler way around for proving this inequality. Thanks!
$$ \begin{eqnarray} \zeta(4)&<&\sum_{n=1}^{9}\frac{1}{n^4} +\left(\sum_{n=10}^{\infty}\frac{1}{n^2}\right)^2 \\ &=& \sum_{n=1}^{9}\frac{1}{n^4} + \left(\frac{\pi^2}{6}-\sum_{n=1}^{9}\frac{1}{n^2}\right)^2 \\ &=& 1.0929965... \end{eqnarray} $$
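One can confirm the arithmetic numerically (an added sketch, not part of the original answer); note that $\pi^4/90$ is the exact value of $\zeta(4)$, included for comparison:

```python
from math import pi

# sum_{n=1}^{9} 1/n^4, plus the tail bound (sum_{n>=10} 1/n^2)^2:
# for nonnegative terms, sum of squares <= square of sum, so this bounds zeta(4).
partial_fourth = sum(1 / n**4 for n in range(1, 10))
tail_squares = pi**2 / 6 - sum(1 / n**2 for n in range(1, 10))
bound = partial_fourth + tail_squares**2

zeta4 = pi**4 / 90  # exact value of zeta(4), approximately 1.0823
print(bound)        # approximately 1.0929965, below 1.1 as claimed
```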
{ "language": "en", "url": "https://math.stackexchange.com/questions/201151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Inclusion Exclusion and lcm I would like to show that for any positive integers $d_1, \dots, d_r$ one has $$ \sum_{i=1}^r (-1)^{i+1}\biggl( \sum_{1\leq k_1 < \dots < k_i \leq r} \gcd(d_{k_1}, \dots , d_{k_i})\biggr) ~\leq~ \prod_{i=1}^r\biggl( \prod_{1\leq k_1 < \dots < k_i \leq r} \gcd(d_{k_1}, \dots , d_{k_i}) \biggl)^{(-1)^{i+1}}. $$ Note that the right-hand side of the above inequality is exactly $\operatorname{lcm}(d_1,\dots,d_r)$. Also note that if we denote the left-hand side by $L(d_1, \dots, d_r)$, then one has that $$ L(d_1, \dots, d_r) = L(d_1, \dots, d_{r-2}, d_{r-1}) + L(d_1, \dots, d_{r-2}, d_{r}) - L(d_1, \dots, d_{r-2}, \text{gcd}(d_{r-1},d_r)). $$ Thanks for the help!
For any $r\geq 1$ and any positive integers $d_1, \dots d_r\in \mathbb Z_{>0}$ define $L(d_1,\dots , d_r)$ (think logarithmic lcm) to be \begin{equation*} L(d_1,\dots , d_r) ~:=~ \sum_{i=1}^r (-1)^{i+1}\Big(\sum_{1\leq k_1 < ... < k_i\leq r} \text{gcd}(d_{k_1}, \dots, d_{k_i}) \Big). \end{equation*} It is straightforward to check that $L$ is symmetric, homogeneous of degree 1 and that \begin{equation*} (i) ~~~~~ L(d_1,\dots , d_r) = L(d_1,\dots , d_{r-1}) + L(d_1,\dots , d_{r-2}, d_r) - L(d_1,\dots , d_{r-2}, \text{gcd}( d_{r-1},d_r)). \end{equation*} Furthermore it follows directly from symmetry and property $(i)$ that \begin{equation*} (ii) ~~~~\text{if} ~~ d_r \big \vert d_i~~ \text{for some}~ 1\leq i \leq r-1 \text{, then} ~~L(d_1,\dots , d_r) = L(d_1,\dots , d_{r-1}). \end{equation*} The third property we want to establish is that \begin{eqnarray*} (iii) ~~~L(d_1,\dots , d_r) %&=& \sum_{i=1}^r (-1)^{i+1}\Big(\sum_{1\leq k_1 < ... < k_i\leq r} \text{gcd}(d_{k_1}, \dots, d_{k_i}) \Big) %\\ &\leq& \prod_{i=1}^r \Big(\prod_{1\leq k_1 < ... < k_i\leq r} \text{gcd}(d_{k_1}, \dots, d_{k_i}) \Big)^{(-1)^{i+1}} = \text{lcm}( d_1 ,\dots d_r), \end{eqnarray*} with equality if and only if for some $1\leq i \leq r$, $L(d_1,\dots , d_r) $ can be reduced to $ L(d_i)$ in the sense of property $(ii)$, i.e. $ d_j \big \vert d_i$ for all $j\neq i$. To see this let $G$ be the cyclic group of order $\text{lcm}( d_1 ,\dots, d_r)$ and pick elements $c_1, \dots, c_r\in G$ of order $d_1, \dots, d_r$ respectively. One obviously has $$ \# \Big ( \bigcup_{i=1}^r ~ \langle c_i \rangle \Big )~\leq ~ \# G. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(\star) $$ Since $G$ is cyclic one has $\# \big( \langle c_{k_1} \rangle \cap \dots \cap \langle c_{k_i} \rangle \big) = \text{gcd}(d_{k_1}, \dots, d_{k_i})$. Hence by the inclusion-exclusion principle the left-hand side of equation ($\star$) is equal to $L(d_1,\dots, d_r) $ and the right-hand side is equal to $\text{lcm}( d_1 ,\dots, d_r)$.
Observe that equality holds iff one of the $c_i$ has order $\text{lcm}( d_1 ,\dots, d_r)$, which proves the claim.
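The inequality $(iii)$ can also be spot-checked by brute force (an added sketch, not part of the proof); `L` implements the inclusion-exclusion sum directly:

```python
import random
from math import gcd
from itertools import combinations
from functools import reduce

def L(ds):
    """Inclusion-exclusion sum of gcds over all non-empty subsets (the lhs)."""
    total = 0
    for i in range(1, len(ds) + 1):
        sign = (-1) ** (i + 1)
        total += sign * sum(reduce(gcd, sub) for sub in combinations(ds, i))
    return total

def lcm_all(ds):
    """lcm of a list, built pairwise via lcm(a, b) = a*b // gcd(a, b)."""
    return reduce(lambda a, b: a * b // gcd(a, b), ds)

# Random tuples: L never exceeds the lcm.
random.seed(0)
for _ in range(200):
    ds = [random.randint(1, 60) for _ in range(random.randint(1, 4))]
    assert L(ds) <= lcm_all(ds), ds
```

For example $L(4,6) = 4+6-\gcd(4,6) = 8 < 12 = \operatorname{lcm}(4,6)$, while $L(2,4) = 2+4-2 = 4 = \operatorname{lcm}(2,4)$, in line with the equality criterion ($2 \mid 4$).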
{ "language": "en", "url": "https://math.stackexchange.com/questions/201205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why the final equality in this verification of positive semi-definiteness? In showing that a particular Hessian is positive semidefinite the author writes: $$ \frac{2}{y^3} \left[ \begin{array}{cc} y^2 & -xy \\ -xy & x^2 \end{array} \right] = \frac{2}{y^3} \left[ \begin{array}{c} y & -x \\ \end{array} \right]^T \left[ \begin{array}{c} y & -x \\ \end{array} \right]$$ I can see from the left-hand side that the matrix is positive semidefinite, but I'm not sure what the factorization on the right is getting at or making reference to in terms of positive semidefiniteness of a matrix.
I presume $y > 0$, else it's not true. Any matrix of the form $A^T A$ where $A$ is a real $m \times n$ matrix (or in this case a vector, which can be considered as an $m \times 1$ matrix) is positive semidefinite. This is because for any real vector $v$, $v^T A^T A v = (A v)^T (A v) \ge 0$.
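A small numerical illustration (added here, with arbitrary sample values $x=3$, $y=2$): every quadratic form $v^T M v$ is nonnegative precisely because the factorization collapses it to $\frac{2}{y^3}(yv_1-xv_2)^2$:

```python
def hessian(x, y):
    """The 2x2 matrix (2/y^3) * [[y^2, -x*y], [-x*y, x^2]], assuming y > 0."""
    c = 2 / y**3
    return [[c * y * y, -c * x * y], [-c * x * y, c * x * x]]

def quad_form(M, v):
    """Compute v^T M v for a 2x2 matrix M."""
    return sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

M = hessian(3.0, 2.0)
# v^T M v = (2/y^3) * (y*v1 - x*v2)^2 >= 0 for every test vector v.
for v in [(1, 0), (0, 1), (1, 1), (-2, 5), (3, -7)]:
    assert quad_form(M, v) >= 0
```

Note also that the matrix has rank 1 (its determinant is $0$), as any outer product $A^TA$ of a single row vector must.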
{ "language": "en", "url": "https://math.stackexchange.com/questions/201262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof: If n is a perfect square, $\,n+2\,$ is NOT a perfect square "Prove that if n is a perfect square, $\,n+2\,$ is NOT a perfect square." I'm having trouble picking a method to prove this. Would contraposition be a good option (or even work for that matter)? If not, how about contradiction?
You don't even need algebra to do this – think squared paper. The only thing to specify about a square is its side length, so if you want to show a bigger square is a perfect square, the only thing you can do is increase the side length. But if you increase the side length by 1, you need to add at least 1 square to each side, and 1 on the corner, so you need to add at least three squares. This is aside from the case where $n = 0$. But $2$ isn't a perfect square, so that's fine.
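The same fact can be checked by brute force (an added sketch; the bound `N` is an arbitrary choice for the check). Consecutive squares $k^2$ and $(k+1)^2$ differ by $2k+1 \ge 3$ for $k \ge 1$, so no square is exactly $2$ more than another:

```python
# All perfect squares below N^2; the set membership test then confirms
# that s + 2 is never itself a square (N is a hypothetical cutoff).
N = 10_000
squares = {k * k for k in range(N)}
assert all(s + 2 not in squares for s in squares)
print("no two perfect squares below", N * N, "differ by 2")
```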
{ "language": "en", "url": "https://math.stackexchange.com/questions/201315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 9, "answer_id": 2 }
Necessary and sufficient conditions for $G$ to have $A(G)$ isomorphic to an operator algebra? Let $G$ be a locally compact group. We denote by $A(G)$ the Fourier algebra of $G$. An operator algebra is a closed subalgebra of $B(H)$ where $H$ is a Hilbert space. What are the necessary and sufficient conditions for $G$ to have a Fourier algbra which is isomorphic to an operator algebra?
Thanks to Ebrahim Samei for this proof. Assume $G$ is infinite, and assume $A(G)$ is isomorphic to an operator algebra; then it is Arens regular [1]. It is shown in [2] and [3] that if $A(G)$ is Arens regular, then $G$ is discrete and non-amenable. By [4], for discrete groups, a cb-homomorphism into $B(H)$ is similar to a $*$-representation. But no $*$-representation of $A(G)$ into $B(H)$ can have a closed range (see the proof of Theorem A in [5]). This contradicts the assumption that $A(G)$ is cb-isomorphic to an operator algebra. If $G$ is finite the result is true. It is worth mentioning that a weighted Fourier algebra can be an operator algebra [6]. [1] Operator Algebras and Their Modules: An Operator Space Approach By David P. Blecher, Christian Le Merdy (corollary 2.5.4) [2] Forrest, Brian(3-WTRL), Arens regularity and discrete groups. Pacific J. Math. 151 (1991), no. 2, 217–227. [3] Forrest, Brian(3-WTRL), Arens regularity and the $A_p(G)$ algebras. Proc. Amer. Math. Soc. 119 (1993), no. 2, 595–598. [4] Brannan, Michael; Samei, Ebrahim The similarity problem for Fourier algebras and corepresentations of group von Neumann algebras. J. Funct. Anal. 259 (2010), no. 8, 2073–2097. [5] Choi, Yemon; Samei, Ebrahim Quotients of Fourier algebras, and representations which are not completely bounded. Proc. Amer. Math. Soc. 141 (2013), no. 7, 2379–2388. [6] http://www.icmat.es/NTHA/WS-OSHA/notes/Lee.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/201354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Inner Measures of Subsets How can I show that for $A \subset B$, it must be true that $m_*(A) \leq m_*(B)$? That is, the inner measure of $A$ is less than or equal to the inner measure of $B$. I understand how to show a similar proof for the outer measure, but is there an explicit way to show it for the inner measure? Do I just extend the outer measure proof by showing that $\forall A$, $m_*(A) \leq m^*(A)$? How would I show that this wouldn't then violate the inner measure inequality (if, say, the inner measure of the set differs more from its outer measure than the subset's does)?
It is clear from the definition of inner measure: $m_*(A) = \sup \{m(S): S \in \Sigma, S \subseteq A\}$, since if $A \subset B$, all $S$ that are considered for $m_*(A)$ are also considered for $m_*(B)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/201419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }