Why are the Cauchy-Riemann equations in polar form 'obvious'? In my book on complex analysis I'm asked to prove the Cauchy-Riemann equations in polar form, which I did. However, at the end of the question the author asks why these relations are 'almost obvious'. Now I get the derivation using chain rules and also the idea of approach along a circle and along a radial line and then equating. But the fact that the author asks this question to me suggests that there is an even simpler way of seeing this. That or maybe the author just regards one of these approaches to be 'almost obvious'. So I'm looking for a more intuitive (so it also does not need to be 100% rigorous) way of thinking about the Cauchy-Riemann relations in polar form. These relations are $$u_r=\frac{1}{r}v_\theta, \quad \frac{1}{r}u_\theta = -v_r$$
A function $u+iv$ is complex differentiable if the derivative of $u$ in any direction $h$ is equal to the derivative of $v$ in the direction obtained by rotating $h$ by $90$ degrees counterclockwise. This should make sense if you think of the imaginary axis as the real axis rotated by $90$ degrees counterclockwise. The Cauchy-Riemann criterion for differentiability is the observation that it's enough to check the above for horizontal and vertical $h$. (Because these directions span the plane.) This is expressed by $$u_x = v_y\quad\text{ and }\quad u_y = - v_x$$ where $-v_x$ is the derivative of $v$ in the direction of the negative $x$-axis (the result of rotating the $y$-axis counterclockwise). In polar coordinates, we use the radial and tangential directions instead. The rate of change in the tangential direction is $\frac{1}{r}\frac{\partial}{\partial \theta}$ because of the angle-distance conversion. Since rotating the positive $r$-direction counterclockwise gives the positive $\theta$-direction, $$ u_r = \frac{1}{r}v_\theta $$ And rotating the positive $\theta$-direction counterclockwise gives the negative $r$-direction, so $$ \frac{1}{r}u_\theta = -v_r $$
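A quick way to convince yourself of the polar form, if symbolic computation is at hand, is to check it on a concrete holomorphic function. Here is a small SymPy sketch; the choice $f(z)=z^3$ is just an illustrative assumption, and any holomorphic $f$ would do.

```python
import sympy as sp

r, theta = sp.symbols('r theta', real=True)
x, y = r * sp.cos(theta), r * sp.sin(theta)

# spot-check with f(z) = z**3, an arbitrary holomorphic function chosen for illustration
expr = sp.expand((x + sp.I * y) ** 3)
u, v = sp.re(expr), sp.im(expr)

# polar Cauchy-Riemann equations: u_r = (1/r) v_theta   and   (1/r) u_theta = -v_r
print(sp.simplify(sp.diff(u, r) - sp.diff(v, theta) / r))   # 0
print(sp.simplify(sp.diff(u, theta) / r + sp.diff(v, r)))   # 0
```

Both printed expressions simplify to $0$, matching $u_r=\frac{1}{r}v_\theta$ and $\frac{1}{r}u_\theta=-v_r$.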
{ "language": "en", "url": "https://math.stackexchange.com/questions/1268368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If $Ha\subseteq Kb$ for some $a,b\in G$, show that $H \subseteq K$. Let $H$ and $K$ be subgroups of a group $G$. If $Ha\subseteq Kb$ for some $a,b\in G$, show that $H \subseteq K$. I constructed a proof by contradiction and I am wondering whether or not it is free from flaws. I thank you in advance for taking your time to inspect my work! Let us assume to the contrary that $H \nsubseteq K$. Then $\exists h \in H$ such that $h \notin K$. As $Ha\subseteq Kb$, then $H \subseteq Kba^{-1}$. So for this $h \in H$, it is of the form $h=kba^{-1}$ for some $k \in K$. Hence, $h=kba^{-1} \notin K$ by hypothesis $\Rightarrow h=kba^{-1} \in Kc$ for some $c \in G$, which then implies that $k \in Kcab^{-1}$. This last implication only holds if $cab^{-1}=1$ $\Rightarrow c=ba^{-1}$. As $h=kba^{-1} \in Kc$, then $h=kc \in Kc \Rightarrow h \in K$, which contradicts our assumption and thus concludes the proof.
Put $g=ba^{-1}$, then $H \subseteq Kg$. Since $1 \in H$, we have $1=kg$ for some $k \in K$. Hence $g=k^{-1}$ and $Kg=Kk^{-1}=K$ (since $K$ is a subgroup), so $H \subseteq K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1268588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving the existence of a supremum $A=\{(1+(1/n))^n \mid n\text{ is taken from positive integers}\}$ How can I prove that the set above has a supremum? I've started with an assumption that $(1+(1/n))^n < 3$ for every positive integer n but I could not find a way to prove it too. Could you please give me a hint?
Outline: Using the Binomial Theorem, we find that $$\left(1+\frac{1}{n}\right)^n \le 1+\frac{1}{1!}+\frac{1}{2!}+\cdots+\frac{1}{n!}.$$ The term $\frac{1}{3!}$ is less than $\frac{1}{2^2}$. Continue.
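In case the "Continue" step is wanted explicitly, here is one way to finish (a sketch of the intended estimate): since $k!\ge 2^{k-1}$ for every $k\ge 1$, $$\left(1+\frac{1}{n}\right)^n \le 1+\frac{1}{1!}+\frac{1}{2!}+\cdots+\frac{1}{n!} \le 1+1+\frac{1}{2}+\frac{1}{2^2}+\cdots+\frac{1}{2^{n-1}} < 1+2 = 3,$$ so every element of $A$ is less than $3$; a nonempty set of reals that is bounded above has a supremum by the completeness axiom.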
{ "language": "en", "url": "https://math.stackexchange.com/questions/1268692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove there exists $m$ and $k$ such that $ n = mk^2$ where $m$ is not a multiple of the square of any prime. For any positive integer $n$, prove that there exists integers $m$ and $k$ such that: $$n = mk^2 $$ where $m$ is not a multiple of the square of any prime. (For all primes $p$, $p^2$ does not divide $m$) I didn't have much success proving this inductively, any suggestion?
Write the prime decomposition of $n$ as $n = \prod p_i^{a_i}$. Set $m= \prod_{a_i \text{ odd}} p_i$; then $\frac{n}{m}$ has only even-exponent prime factors (if any) and can be represented as $k^2$ (which may possibly be equal to $1$).
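The same idea can be phrased computationally. Here is a small Python sketch (trial division, for illustration only) that sends the even part of each exponent into $k^2$ and the odd remainder into $m$:

```python
def squarefree_decomposition(n):
    """Return (m, k) with n == m * k**2 and m not divisible by any prime square."""
    m, k = 1, 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            k *= p ** (e // 2)   # the even part of the exponent goes into k**2
            if e % 2:            # an odd exponent leaves one factor of p in m
                m *= p
        p += 1
    m *= n                       # whatever is left is prime (or 1) with exponent 1
    return m, k

print(squarefree_decomposition(360))  # (10, 6), since 360 = 10 * 6**2
```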
{ "language": "en", "url": "https://math.stackexchange.com/questions/1268774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is every diagonal matrix the product of 3 matrices, $P^{-1}AP$, and why? In trying to figure out which matrices are diagonalizable, why does my textbook pursue the topic of similar matrices? It says that "an $n \times n$ matrix A is diagonalizable when $A$ is similar to a diagonal matrix. That is, $A$ is diagonalizable when there exists an invertible matrix $P$ such that $P^{-1}AP$ is a diagonal matrix." I do not understand why it would begin with considering similar matrices. I mean, what is the motivation?
Matrices are connected to linear maps of vector spaces, and there's a concept of a basis for a vector space. (A basis is a set of vectors such that every element can be written uniquely as a linear combination of the basis elements.) Now, bases aren't unique, so if you want to see what your matrix looks like with respect to a different basis, this is equivalent to conjugating by the change-of-basis matrix, as you've written above. A matrix is said to be diagonalisable if there is some basis with respect to which the matrix becomes diagonal. If you read further on in your textbook, it should tell you all this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1268931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integration in complex measure Let $v$ be a complex measure in $(X,M)$. Then $L^{1}(v)=L^{1}(|v|)$. I have made: $L^1(v)\subset L^1(|v|)$?. Let $g\in L^1(v)$ As $v<<|v|$ and $|v|$ is finite measure, then for chain rule, $g.(\frac{dv}{d|v|})\in L^1(|v|)$. As $|\frac{dv}{d|v|}|=1\;|v|-a.e$, then $|g|=|g||\frac{dv}{d|v|}|\;|v|-a.e$, then $\int |g|d|v|=\int |g||\frac{dv}{d|v|}|d|v|=\int |g.\frac{dv}{d|v|}|d|v|< \infty$. Therefore $g\in L^1(|v|)$.
Let $g\in L^1(|v|)$. For $j=r,i$ we have $v_{j}^{\pm} \leq v_j^{+}+v_j^{-}=|v_j| \leq |v|$, hence $g\in L^1(v_r^{+})\cap L^1(v_i^{+})\cap L^1(v_r^{-})\cap L^1(v_i^{-})$, and therefore $g\in L^1(v)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Filling in a missing portion of a truth table I have the following truth table: $$ \boxed{ \begin{array}{c|c|c|c} a & b & c & x \\ \hline F & F & F & F \\ F & F & T & F \\ F & T & F & F \\ F & T & T & F \\ T & F & F & T \\ T & F & T & T \\ T & T & F & F \\ T & T & T & T \end{array}} $$ I am supposed to use any logical operator: and, or, not. I tried like $10$ formulas already, and I am not able to think of what combination of $a,b,c$ will get that $x$ column right. Have I missed something really silly?
You need to think about what it means to find a formula $x$ that fits the truth table. The easiest way to generate such a formula is simply to write a disjunction of all the cases where you want $x$ to be true. Essentially, $x$ will say "I am true when this combination of $A$, $B$ and $C$ occurs, or when that one occurs, or when that one occurs, etc.". What you have to do is look up the lines where $x$ is true, get the corresponding truth values of $A$, $B$ and $C$, connect them with a conjunction, and link everything up with disjunctions. By looking at the three lines where $x$ is true, you get: $$x:=(A\land\neg B\land\neg C)\lor(A\land\neg B\land C)\lor(A\land B\land C)$$ Make sure you understand what this means and how the truth table follows. Now, any other formula fitting the truth table will be equivalent to $x$. For instance, we can "improve" $x$ by noticing that it requires $A$ to always be true, such that we can take it out of the parentheses: $$x=A\land\big((\neg B\land\neg C)\lor(\neg B\land C)\lor(B\land C)\big)$$ The part inside the big brackets is listing three out of four possible cases for the truth values of the pair $(B,C)$. In fact, the missing case is $(B\land\neg C)$, so that we can rewrite this segment to say "any combination of $B$ and $C$ works, except for $(B\land\neg C)$. We do so using a negation: $$x=A\land\neg(B\land\neg C)$$ And this is the most succinct expression of $x$ using only the connectives $\left\{\land,\neg\right\}$. Furthermore, you can compare it to the truth table: $x$ is true whenever $A$ is true except for the specific case where $B$ is true and $C$ is false. This corresponds precisely to the last four lines of the table, without the penultimate.
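If you want to double-check both formulas against the table mechanically, a short Python enumeration does it (the row order below matches the table, $F$ before $T$):

```python
from itertools import product

def x_dnf(a, b, c):
    # disjunction of the three rows where x is true
    return (a and not b and not c) or (a and not b and c) or (a and b and c)

def x_simplified(a, b, c):
    # the compressed form  A and not(B and not C)
    return a and not (b and not c)

target = [False, False, False, False, True, True, False, True]  # the x column of the table
for (a, b, c), expected in zip(product([False, True], repeat=3), target):
    assert x_dnf(a, b, c) == x_simplified(a, b, c) == expected
print("both formulas reproduce the table")
```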
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Question about working in modulo? This question is in essence asking for understanding of a step in Fermats theorem done Group style. For any field the nonzero elements form a group under field multiplication. So let us take the Field $Z_p$. The group $Z_p$ - {0} form a group under field multiplication. This is my question. For any group $G$, let $a$ be an element of $G$, then $1)$$H$ = {$a^n$| $n$ is an element of $Z$} If we are working in $Z_p$ does that mean we can restrict the $n$ in equation 1 to just the elements from $0$ to $p-1$? Thanks yall
It is a theorem that a finite subgroup of the nonzero elements of a field is cyclic, and as a corollary, the nonzero elements of a finite field form a cyclic group. If the field has size $n$, this group has size $n-1$, and yes, the powers $a^0,a^1,\dots,a^{n-2}$ of any element $a$ already exhaust the cyclic subgroup generated by $a$. In fact, even that is often overkill.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integral over a sequence of sets whose measures $\to 0.$ If $ f \in L_p$ with $1 \leq p \leq \infty $ and $\{A_n\}$ is a sequence of measurable sets such that $ \mu (A_n) \rightarrow 0,$ then $ \int_{A_n} f \rightarrow 0$. Can someone give me a hint?
My proof might be a bit lengthy, but I think it is a fairly standard way of dealing with this kind of problem (assume $p<\infty$; for $p=\infty$ the claim is immediate since $|\int_{A_n} f|\le\|f\|_\infty\,\mu(A_n)$). First, since $f\in L^{p}$, the dominated convergence theorem (applied to $f_n= |f|^p 1_{\{|f|>n\}}\leq |f|^p$, with $f_n\rightarrow 0$ a.e. because $f$ is finite a.e.) gives $$\forall \epsilon >0,\ \exists N >1 \ \text{ s.t. } \int_{\{|f|>N\}} |f|^p <\epsilon/2.$$ Now since $|f|\le|f|^p$ on $\{|f|>N\}$ (there $|f|>N\ge 1$), $$\left|\int_{A_n}f \right|\leq \int_{A_n\cap\{|f|>N\}}|f| + \int_{A_n\cap\{|f|\leq N\}}|f| \leq \int_{\{|f|> N\}}|f|^p +N\mu(A_n)\leq \epsilon/2 + N\mu(A_n).$$ Since $N$ depends only on $\epsilon$ and $\mu(A_n)\to0$, for $n$ large enough we have $\mu(A_n)<\epsilon/(2N)$, so that $$\left|\int_{A_n}f \right|\leq \epsilon.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
To Find Eigenvalues Find the eigenvalues of the $6\times 6$ matrix $$\left[\begin{matrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ \end{matrix}\right]$$ The options are $1, -1, i, -i$ It is a real symmetric matrix and the eigenvalues of a real symmetric matrix are real. Hence $i$ and $-i$ can't be its eigenvalues. Then what else we can say? Is there any easy way to find it?
Here, as you say, it is a real symmetric matrix, so all its eigenvalues are real; this rules out $i$ and $-i$, leaving only $1$ and $-1$ among the options. Now the sum of the eigenvalues equals the trace of the matrix, which here is $0$; if only $1$ (or only $-1$) were an eigenvalue, the trace would be nonzero. Hence both $1$ and $-1$ are eigenvalues.
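As a purely numerical sanity check (not needed for the argument), NumPy confirms that the eigenvalues are $1$ and $-1$, each with multiplicity three:

```python
import numpy as np

A = np.zeros((6, 6))
for i, j in [(0, 3), (1, 4), (2, 5), (3, 0), (4, 1), (5, 2)]:
    A[i, j] = 1  # the permutation matrix from the question

print(np.linalg.eigvalsh(A))  # [-1. -1. -1.  1.  1.  1.]
```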
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 9, "answer_id": 2 }
How to solve this nonstandard system of equations? How to solve this system of equations $$\begin{cases} 2x^2+y^2=1,\\ x^2 + y \sqrt{1-x^2}=1+(1-y)\sqrt{x}. \end{cases}$$ I see $(0,1)$ is a root.
We solve the second equation for $y$: $$y=\frac{-x^2+\sqrt{x}+1}{\sqrt{x}+\sqrt{1-x^2}}$$ and substitute into the first equation and solve for $x$. It is then seen that $(0,1)$ is the only real-valued solution. Computer analysis finds complex solutions where $x$ is a root of a certain degree-$16$ polynomial. Edit: the precise polynomial is $$ x^{16}+12 x^{15}+30 x^{14}-96 x^{13}-79 x^{12}+360 x^{11}+70 x^{10}-804 x^9-92 x^8+972 x^7+230 x^6-600 x^5-207 x^4+192 x^3+62 x^2-36 x+1 =0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Convergence, finding limit $\lim_{n \to \infty} \frac{2^n}{n!}$ I just came across an exercise, however I don't know how to find the limit of $$\lim_{n \to \infty} \frac{2^n}{n!}$$ can any body help? Of course this is not homework, I'm only trying out example myself, from https://people.math.osu.edu/fowler.291/sequences-and-series.pdf page 38. I know that the limit exists, and it is $0$, checked it in on the wolfram, but I don't know how to solve it. thanks in advance!
Notice that $$\frac{2^n}{n!} = {{2 *2*2*2*...*2} \over {1*2*3*4*...*n}}=2*{{2*2*...*2} \over {3*4*...*n}} \leq 2*1*1*...*{{2} \over {n}}={4 \over n}$$ So $$0 \leq \frac{2^n}{n!} \leq {4 \over n}\xrightarrow[n\to\infty]{}0$$ And hence $\lim_{n \to \infty} \frac{2^n}{n!}=0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Number of Partitions proof How do I prove that the # of partitions of n into at most k parts equals the # of partitions of n+k into exactly k parts? I was trying to improve my ability of bijective-proofs, unfortunately I was not able to find a working bijection to transform the Ferrers Diagram. Can someone give me a hint of what to do? I'd appreciate any help regarding this. Thanks :)
I present a proof using PET (the Polya Enumeration Theorem) for future reference. We will be using $Z(S_k)$, the cycle index of the symmetric group on $k$ elements. The number of partitions of $n$ into at most $k$ parts is given by $$[z^n] Z(S_k)\left(\frac{1}{1-z}\right)$$ where the fraction includes the empty term to account for slots being left blank, which represents at most $k$ non-empty parts. On the other hand, the number of partitions of $n+k$ into exactly $k$ parts is $$[z^{n+k}] Z(S_k)\left(\frac{z}{1-z}\right)$$ where we have omitted the empty term. This is $$[z^{n+k}] z^k Z(S_k)\left(\frac{1}{1-z}\right)$$ which is $$[z^{n}] Z(S_k)\left(\frac{1}{1-z}\right)$$ and we have equality as claimed. Remark. We have a case here where the bivariate generating function $$\prod_{q\ge 1} \frac{1}{1-ux^q}$$ would appear not to be all that useful.
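Independently of the generating-function argument, the identity is easy to test on small cases. Here is a Python sketch using the standard partition recurrences (written only as a cross-check):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def at_most(n, k):
    """Partitions of n into at most k parts (equivalently, by conjugation, parts of size <= k)."""
    if n == 0:
        return 1
    if n < 0 or k == 0:
        return 0
    return at_most(n, k - 1) + at_most(n - k, k)

@lru_cache(maxsize=None)
def exactly(n, k):
    """Partitions of n into exactly k positive parts."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return exactly(n - 1, k - 1) + exactly(n - k, k)

assert all(at_most(n, k) == exactly(n + k, k) for n in range(12) for k in range(1, 8))
print("at_most(n, k) == exactly(n + k, k) on all small cases tested")
```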
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Elementary problems that would've been hard for past mathematicians, but are easy to solve today? I'm looking for problems that due to modern developments in mathematics would nowadays be reduced to a rote computation or at least an exercise in a textbook, but that past mathematicians (even famous and great ones such as Gauss or Riemann) would've had a difficult time with. Some examples that come to mind are group testing problems, which would be difficult to solve without a notion of error-correcting codes, and -- for even earlier mathematicians -- calculus questions such as calculating the area of some $n$-dimensional body. The questions have to be understandable to older mathematicians and elementary in some sense. That is, past mathematicians should be able to appreciate them just as well as we can.
I would say that computing the Fourier coefficients of a tame function is a triviality today, even at an engineering math 101 level. Ph. Davis and R. Hersh tell the long and painful story of Fourier series. I quote from their book: "Fourier didn't know Euler had already done this, so he did it over. And Fourier, like Bernoulli and Euler before him, overlooked the beautifully direct method of orthogonality [...]. Instead, he went through an incredible computation, that could serve as a classic example of physical insight leading to the right answer in spite of flagrantly wrong reasoning." (Fifth Ch. "Fourier Analysis".)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 3 }
A problem on convergent subsequences Find a sequence $\langle X_n\rangle$ such that $$L_{X_n}=\left\{\frac{n+1}{n}:n\in\mathbb N\right\}\cup \{1\}.$$ Where $L_{X_n} = \{ p\in \mathbb{R} : \text{There exists a subsequence } \langle X_{n_k}\rangle \text{ of } \langle X_n\rangle \text{ such that} \lim X_{n_k} = p\}$. Justify your answer. Since given $L_{X_n}$ is an infinite set, it was difficult for me to find a sequence in which I could select infinite ordered unending set of real numbers ( Subsequences) in such a way that they converge to the given real numbers in $L_{X_n}$. Can anyone please help me in order to solve this? *An edit was made to define $L_{X_n}$
For example $$\left( 2, 2,\frac{3}{2}, 2,\frac{3}{2} , \frac{4}{3},2,\frac{3}{2} , \frac{4}{3} ,\frac{5}{4} ,...\right)$$ Each value $\frac{m+1}{m}$ occurs infinitely often, so it is the limit of a constant subsequence; the last entries of the successive blocks form the subsequence $2,\frac{3}{2},\frac{4}{3},\frac{5}{4},\dots$, which converges to $1$; and since every term of the sequence belongs to $\left\{\frac{n+1}{n}:n\in\mathbb N\right\}$, no subsequential limits other than these values and $1$ can occur.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why the $GCD$ of any two consecutive Fibonacci numbers is $1$? Note: I've noticed that this answer was given in another question, but I merely want to know if the way I'm using could also give me a proof. I did the following: $$F_n=F_{n-1}+F_{n-2} \\ F_n=[F_{n-2}+F_{n-3}]+[F_{n-3}+F_{n-4}]\\F_n=F_{n-2}+2F_{n-3}+F_{n-4}\\ F_n=[F_{n-3}+F_{n-4}]+2[F_{n-4}+F_{n-5}]+[F_{n-5}+F_{n-6}]\\F_n=F_{n-3}+3F_{n-4}+3F_{n-5}+F_{n-6}\\ \dots \tag{1}$$ I guess that the coefficients of the $F_n$'s might indicate something that could prove it. But I'm not sure if it's possible. Perhaps the impossibility of writing the expression as: $$n(b_1a_1+b_2a_2+\dots +b_n a_n)$$ With $n,b_n\in\mathbb{N}$ would show that. But I'm not sure on how to proceed. This should be true because if $a$ and $b$ have a common divisor $d$, then: $$a+b=a'd+b'd=d(a'+b')$$ It is possible to extend this: $$a+b+c=da'+db'+dc'=d(a'+b'+c')$$ I have noticed that the numbers that appear in the expansion I've shown in $(1)$ seems to be the Pascal's triangle. So perhaps these numbers as coefficients of the $F_n$'s might indicate that it's not possible to write them as: $$d(a'+b'+c')$$
If $\,f_n = a f_{n-1}\! + f_{n-2}\,$ then induction shows $\,(f_n,f_{n-1}) = (f_1,f_0)\,$ since $\, (f_n,f_{n-1}) = (a f_{n-1}\! + f_{n-2},\,f_{n-1}) = (f_{n-2},f_{n-1}) = (f_1,f_0)\,$ by induction. Remark: Similarly one can prove much more generally that the Fibonacci numbers $\,f_n\:$ comprise a strong divisibility sequence: $\,(f_m,f_n) = f_{(m,n)},\:$ i.e. $\,\gcd(f_m,f_n) = f_{\gcd(m,n)}.\:$ Then the above is just the special case $\,m=n\!+\!1.\:$
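A quick numerical check of both the statement and the stronger divisibility property mentioned in the remark (small indices only, purely as a sanity check):

```python
from math import gcd

fib = [0, 1]
for _ in range(2, 40):
    fib.append(fib[-1] + fib[-2])   # fib[n] = F_n

# consecutive Fibonacci numbers are coprime
assert all(gcd(fib[n], fib[n + 1]) == 1 for n in range(1, 39))
# strong divisibility: gcd(F_m, F_n) = F_gcd(m, n)
assert all(gcd(fib[m], fib[n]) == fib[gcd(m, n)] for m in range(1, 40) for n in range(1, 40))
print("checked up to F_39")
```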
{ "language": "en", "url": "https://math.stackexchange.com/questions/1269956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
What does it mean if the standard Hermitian form of complex two vectors is purely imaginary? If $v,w \in \mathbb{C}^n$, what does it mean geometrically for $\langle v , w \rangle$ to be purely imaginary?
To understand geometrically what it means for $\left<v,w\right>$ to be purely imaginary, it is necessary to understand geometrically how a Hermitian inner product $\left<\cdot,\cdot\right>_{\mathbb C}$ on $\mathbb C^n$ relates to the Euclidean inner product $\left<\cdot,\cdot\right>_{\mathbb R}$ on $\mathbb R^{2n}$. These are related because $\mathbb C^n$ as a real vector space is simply $\mathbb R^{2n}$. The key to the relationship is that multiplication by $i$ on $\mathbb C^n$ corresponds to applying a real linear map $J\colon \mathbb R^{2n}\to\mathbb R^{2n}$ such that $J^2=-1$. Concretely, you think of $\mathbb R^{2n}$ as $n$ copies of $\mathbb R^2$, and $J$ as the linear map that rotates all the planes simultaneously by $90^\circ$. Geometrically, this is a choice of axis through the whole space $\mathbb R^{2n}$ around which to do rotations. Then the formula expressing the relationship between the Hermitian inner product on $\mathbb C^n$ and the Euclidean inner product is $$\left<v,w\right>_{\mathbb C}=\left<v,w\right>_{\mathbb R}+i\left<v,Jw\right>_{\mathbb R}$$ (this is the formula when the Hermitian inner product is complex-linear in the first variable and conjugate-linear in the second; otherwise the imaginary part is $\left<Jv,w\right>_{\mathbb R}$ instead). From this, the geometric interpretation is easy. The real part of the Hermitian inner product is $0$ if the two vectors are perpendicular in $\mathbb R^{2n}$. The imaginary part of the Hermitian inner product is $0$ if rotating one of the two vectors by $90^\circ$ (around the designated axis!) makes them perpendicular. This is somewhat hard to visualize because the smallest non-trivial example requires thinking about $\mathbb R^4$, which is $1$ dimension higher than comfortable. You can check the formula with the following easy computation of dot products. Let $(z_1,\dots,z_n),(w_1,\dots,w_n)\in\mathbb C^n$ be given as $(a_1,b_1,\dots,a_n,b_n)\in\mathbb R^{2n}$ and $(c_1,d_1,\dots,c_n,d_n)\in\mathbb R^{2n}$. Then $$\begin{align*} (z_1,\dots,z_n)\cdot\overline{(w_1,\dots,w_n)}&=\sum_{j=1}^nz_j\cdot\overline w_j=\sum_{j=1}^n(a_j+ib_j)(c_j-id_j)\\ &=\sum_{j=1}^na_jc_j+b_jd_j+i(a_j(-d_j)+b_jc_j)\\ &=\sum_{j=1}^n(a_j,b_j)\cdot(c_j,d_j)+i((a_j,b_j)\cdot J(c_j,d_j))\\ &=(a_1,b_1,\dots,a_n,b_n)\cdot(c_1,d_1,\dots,c_n,d_n)+i\,(a_1,b_1,\dots,a_n,b_n)\cdot J(c_1,d_1,\dots,c_n,d_n) \end{align*}$$
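If it helps, the displayed relation $\left<v,w\right>_{\mathbb C}=\left<v,w\right>_{\mathbb R}+i\left<v,Jw\right>_{\mathbb R}$ can also be spot-checked numerically. Here is a small NumPy sketch (random vectors; the convention is complex-linear in the first slot):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
v = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)

def real_embedding(z):
    """(z_1, ..., z_n) in C^n  ->  (a_1, b_1, ..., a_n, b_n) in R^{2n}."""
    return np.column_stack([z.real, z.imag]).ravel()

def J(x):
    """Rotate each (a_j, b_j)-plane by 90 degrees counterclockwise: (a, b) -> (-b, a)."""
    y = x.reshape(-1, 2)
    return np.column_stack([-y[:, 1], y[:, 0]]).ravel()

hermitian = np.dot(v, w.conj())                # sum_j z_j * conj(w_j)
vr, wr = real_embedding(v), real_embedding(w)
print(np.allclose(hermitian, np.dot(vr, wr) + 1j * np.dot(vr, J(wr))))  # True
```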
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Show that holomorphic function $f: \mathbb{C} \rightarrow \mathbb{C}$ is constant Let $f: \mathbb{C} \rightarrow \mathbb{C}$ be a holomorphic function such that the values of $f$ lie on the line $y=ax+b$. Show that $f$ is constant. I think I should use the Cauchy-Riemann equations, but I don't know what it means that the values of $f$ are on the line $y=ax+b$. Can you explain that to me? Thanks in advance.
Your hypothesis reads: $$f(x+iy) = u(x,y) + i(a u(x,y) + b),$$ that is, the point $f(x+iy)$ as a point in $\Bbb R^2$ belongs to the said line. The Cauchy-Riemann equations read, abbreviating the notation: $$u_x = a u_y, \quad u_y = -au_x.$$ So $u_x = -a^2u_x \implies (1+a^2)u_x = 0$ and... oh, you can conclude now! (if you need one more push, please tell me)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Algebraic Curves and Second Order Differential Equations I am curious if there are any examples of functions that are solutions to second order differential equations, that also parametrize an algebraic curve. I am aware that the Weierstrass $\wp$ - Elliptic function satisfies a differential equation. We can then interpret this Differential Equation as an algebraic equation, with solutions found on elliptic curves. However this differential equations is of the first order. So, are there periodic function(s) $F(x)$ that satisfy a second order differential equation, such that we can say these parametrize an algebraic curve? Could a Bessel function be one such solution?
The functions $\sin(x)$ and $\cos(x)$ solve a second-order differential equation, namely $$u'' = -u,$$ and $(\cos(x), \sin(x))$ parametrizes the algebraic curve $x^2 + y^2 = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Basic doubt about cosets Studying some basic group theory I had the following doubt: For $H$ subgroup of a finite group $G$ (doesn't matter invariance of $H$), is it true that $$|G/H|=|\{aHa^{-1}:a \in G\}| \space (\text{where G/H is the quotient of G by H})?$$ I've tried to define the most natural function between these two sets that comes to my mind, $$f:G/H \to \{aHa^{-1}:a \in G\}$$$$aH \to aHa^{-1}$$ but I couldn't show that $f$ is well defined and that it is injective. I would appreciate if someone could tell me whether these two sets have the same number of elements or not.
No, it is not true. In fact, if $H$ is a normal subgroup of $G$, then (by the definition of normal) we have that for any $a\in G$, $$aHa^{-1}=H$$ so that $$\{aHa^{-1}:a\in G\}=\{H\}$$ has exactly one element, whereas there is more than one element of $G/H$ as long as $G\neq H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
show $\sum_{k=0}^n {k \choose i} = {n+1 \choose i+1}$ show for n $\geq i \geq 1 : \sum_{k=0}^n {k \choose i} = $ ${n+1} \choose {i+1}$ i show this with induction: for n=i=1: ${1+1} \choose {1+1}$ = $2 \choose 2$ = 1 = $0 \choose 1$ + $1 \choose 1$ = $\sum_{k=0}^1 {k \choose 1}$ now let $\sum_{k=0}^n {k \choose i} = $ ${n+1} \choose {i+1}$ for n+1: ${n+1+1} \choose {i+1}$ = $n+1 \choose i$ + $n+1 \choose i$ = $ n+1 \choose i$ + $\sum_{k=0}^n {k \choose i} = \sum_{k=0}^{n+1} {k \choose i} $ Is this the right way ?
We can also use the recurrence from Pascal's Triangle and telescoping series: $$ \begin{align} \sum_{k=0}^n\binom{k}{i} &=\sum_{k=0}^n\left[\binom{k+1}{i+1}-\binom{k}{i+1}\right]\\ &=\sum_{k=1}^{n+1}\binom{k}{i+1}-\sum_{k=0}^n\binom{k}{i+1}\\ &=\binom{n+1}{i+1}-\binom{0}{i+1}\\ &=\binom{n+1}{i+1} \end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
For which values of $a,b,c$ is the matrix $A$ invertible? $A=\begin{pmatrix}1&1&1\\a&b&c\\a^2&b^2&c^2\end{pmatrix}$ $$\Rightarrow\det(A)=\begin{vmatrix}b&c\\b^2&c^2\end{vmatrix}-\begin{vmatrix}a&c\\a^2&c^2\end{vmatrix}+\begin{vmatrix}a&b\\a^2&b^2\end{vmatrix}\\=ab^2-a^2b-ac^2+a^2c+bc^2-b^2c\\=a^2(c-b)+b^2(a-c)+c^2(b-a).$$ Clearly, $$\left\{\det(A)\neq0\left|\begin{matrix}c\neq b\\a\neq c\\b\neq a\\a,b,c\neq 0\end{matrix}\right.\right\}\\$$ Is it sufficient to say that the matrix is invertible provided that all 4 constraints are met? Would Cramer's rule yield more explicit results for $a,b,c$ such that $\det(A)\neq0$?
Suppose the matrix is singular. Then so is its transpose, which must annihilate some non-zero vector $(h,g,f)$. This gives three equations: $$ fa^2 + ga+ h =0 \\ fb^2 + gb+ h =0 \\ fc^2 + gc+ h =0 \\ $$ We know from the algebra of fields that a nonzero polynomial of degree at most two can have at most two roots, so the three values $a,b,c$ cannot all be different.
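The same conclusion can be read off from the factored determinant; a one-line SymPy check (purely as confirmation of the computation in the question):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[1, 1, 1], [a, b, c], [a**2, b**2, c**2]])
# factors (up to sign and ordering) as (b - a)(c - a)(c - b), so A is invertible
# exactly when a, b, c are pairwise distinct; no extra condition such as a, b, c != 0 is needed
print(sp.factor(A.det()))
```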
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 1 }
Suppose $v_1,...,v_m$ is a linearly independent list; will the list of vectors $Tv_1,...,Tv_m$ be a linearly independent list? Suppose $v_1,...,v_m$ is a linearly independent list of vectors in $V$ and $T∈L(V,W)$ is a linear map from $V$ to $W$. Will the list of vectors $Tv_1,...,Tv_m$ be a linearly independent list in $W$? If it is, please give a rigorous proof; if not, give a counter-example. I know that if we reverse the condition and conclusion, it's easy to prove that $v_1,...,v_m$ is linearly independent, but I'm getting stuck on this one.
Short Answer: No; for example, let $Tv=0$ for all $v\in V$. Long Answer: If the null space of $T$ is $\{0\}$ (i.e. $T$ is injective), you can conclude that $Tv_1, \dots, Tv_m$ are linearly independent. Otherwise, independence can fail, as the example above shows (though it need not fail for every particular list).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Completion with respect a collection of measures. Given a set of measures $\mathcal{M}$ on a measurable space $(\Omega,\mathscr{F})$, the $\mathcal{M}$-completion of $\mathscr{F}$ is defined as $$ \overline{\mathscr{F}}^\mathcal{M}:=\bigcap_{\mu\in\mathcal{M}}\mathscr{F}^\mu,$$ where $\mathscr{F}^\mu$ is the completion of $\mathscr{F}$ with respect to the measure $\mu$. Let $\mathscr{N}_\mathcal{M}$ denote the set of all $\mathcal{M}$-null sets, that is, the collection of all sets $A\subset\Omega$ such that $\mu^*(A)=0$ for all $\mu\in\mathcal{M}$. It is clear that $$\sigma(\mathscr{F},\mathscr{N}_\mathcal{M})\subset\overline{\mathscr{F}}^\mathcal{M}.$$ The question is whether both are actually the same. I think not, but can't think of a counter example at the moment. The relevance of the question has to do with the notion of sufficient $\sigma$--algebra and minimal sufficient $\sigma$--algebras in statistics.
The answer is no. Example. Let $\Omega=[0,1]$. $$\mathcal{F}=\{Y\subseteq[0,1] : Y \,\text{is countable or} \, [0,1]\setminus Y\,\text{is countable}\}$$ Let $\mathcal{M}=\{\delta_x: x \in [0,1]\}$ be the family of Dirac probability measures on $(\Omega,\cal{F})$. Then $\cal{N}_M=\{ \emptyset\}$. Hence $\sigma(\cal{F},\cal{N}_M)=\sigma(\cal{F},\{ \emptyset\})=\cal{F}$. On the other hand, $\overline{\cal{F}}^{\mathcal{M}}=\cal{P}([0,1])$, because $\cal{F}^{\delta_x}=\cal{P}([0,1])$ for each $x \in [0,1]$, where $\cal{P}([0,1])$ denotes the power set of the set $[0,1]$. Clearly, $\cal{P}([0,1])\neq \cal{F}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1270878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Existence of a holomorphic function with the desired property Let $D=\{z\in \mathbb C:|z|<1\}$. Then does there exist 1.a holomorphic function $f:D\to \overline D$ with $f(0)=0$ and $|f(\frac{1}{3})|=\frac{1}{4}$ ? 2.a holomorphic function $f:D\to \overline D$ with $f(0)=0$ and $f(\frac{1}{3})=\frac{1}{2}$ ? How to approach these?I am new to complex analysis.So please do provide some hints or the theorems that can help me here?
For 1, use $\frac{3}{4}·\frac{1}{3} = \frac{1}{4}$ and $\lvert\frac{3}{4}\rvert < 1$. For 2, use the open mapping theorem and the Schwarz lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Describing groups with given presentation? $\langle x,y\ |\ xy=yx,x^5=y^3\rangle$ and $\langle x,y\ |\ xy=yx,x^4=y^2\rangle$. I'm trying to describe the groups with presentations $\langle x,y\ |\ xy=yx,x^5=y^3\rangle$ and $\langle x,y\ |\ xy=yx,x^4=y^2\rangle$. I have some problems getting a good picture of what they look like... For the first one, $xy=yx$ allows us to say that any element in the group can be written as $x^iy^j$ for some $i,j$. More precisely, using $x^5=y^3$, we can always reduce $j$ to $0,1$ or $2$, whereas $i$ could have any value. So I was thinking about $\mathbb{Z} \times C_3$. But the other way around (reducing $i$ this time) it's also $C_5 \times \mathbb{Z}$. But then I should be able to prove that both descriptions are equivalent, which I can't do... For the second, I proceed similarly and get to the same kind of problem... Could you tell me where I'm going wrong? Thank you very much in advance!
The group $\langle x,y | xy=yx, x^5=y^3 \rangle$ is the quotient of $\langle x,y | xy=yx \rangle = \mathbb{Z} \oplus \mathbb{Z}$ (with $x=(1,0)$ and $y=(0,1)$) by the (normal) subgroup generated by $5x-3y=(5,-3)$, and similarly $\langle x,y | xy=yx , x^4=y^2 \rangle$ is the quotient of $\mathbb{Z} \oplus \mathbb{Z}$ by the (normal) subgroup generated by $(4,-2)$. Now in general we have the result for $0 \neq (n,m) \in \mathbb{Z} \oplus \mathbb{Z}$, that $$(\mathbb{Z} \oplus \mathbb{Z})/\langle (n,m) \rangle \cong \mathbb{Z} \oplus \mathbb{Z}/\langle d \rangle,$$ where $d := \mathrm{gcd}(n,m)$. I am sure that this has been explained a couple of times on math.SE, but here you can also see it "directly": Write the first group additively as $\langle x,y : 5x=3y \rangle$ (the commutativity being implicit; i.e. we take a presentation as an abelian group). We may rewrite the relation as $2x=3(y-x)$, i.e. $2x=3y'$ after replacing $y$ by $y'+x$. The new relation can be written as $2(x-y')=y'$, i.e. $2x'=y'$ after replacing $x$ by $y'+x'$. But now $y'$ is superfluous, and we see that the group is freely generated by $x'=2x-y$. Does this remind you of the Euclidean algorithm? That's exactly what happens in the Smith normal form for $1 \times 2$-matrices, and the Smith normal form lets you decompose finitely generated abelian groups into direct sums of cyclic groups.
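If you want to see the Smith normal form computed mechanically, SymPy provides a helper for it (this sketch assumes `smith_normal_form` from `sympy.matrices.normalforms`; the $1\times2$ relation matrices encode $5x=3y$ and $4x=2y$):

```python
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

# Z^2 modulo the row space of the relation matrix
print(smith_normal_form(sp.Matrix([[5, -3]]), domain=sp.ZZ))  # Matrix([[1, 0]])  ->  Z/1 + Z = Z
print(smith_normal_form(sp.Matrix([[4, -2]]), domain=sp.ZZ))  # Matrix([[2, 0]])  ->  Z/2 + Z
```

So the first group is infinite cyclic and the second is $\mathbb{Z}\oplus C_2$, matching the general statement above.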
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How to Separate Quasi-Linear PDE I'm attempting to solve the non-homogenous quasi-linear PDE below: $$z\frac{\partial z}{\partial x} - \alpha y\frac{\partial z}{\partial y} = \frac{\beta}{x^3y}$$ From what I've read in texts, the general form of a quasi-linear PDE is defined as $$a(x,y,z)\frac{\partial z}{\partial x} + b(x,y,z)\frac{\partial z}{\partial y} - c(x,y,z) = 0$$ with solutions (called characteristic curves) $\phi(x,y,z) = C_1$ and $\psi(x,y,z) = C_2$ given by the characteristic equations $$\frac{dx}{a} = \frac{dy}{b} = \frac{dz}{c}$$ When I set up these equations for my problem, I find $$a(x,y,z) = z$$ $$b(x,y,z) = -\alpha y$$ $$c(x,y,z) = \frac{\beta}{x^3 y}$$ which leads to $$\frac{dx}{z} = -\frac{dy}{\alpha y} = \frac{x^3ydz}{\beta}$$ Due to the coupling in the last term I cannot find a way to separate these to get two expressions containing a total derivative. Could anyone help?
Where does this equation come from? I'm actually leaning towards there being no solution, for two reasons - one, Maple doesn't return one, and more seriously, I've tried solving it a couple of different ways and run into seemingly insurmountable problems in all of them. An example - if you take the second two terms $\frac{\mathrm{d}y}{b}=\frac{\mathrm{d}z}{c}$ and divide through you get \begin{equation} \frac{\mathrm{d}z}{\mathrm{d}y} = -\frac{\beta}{\alpha}\frac{1}{x^3y^2}, \end{equation} which you can solve easily enough by integrating with respect to $y$. Since we're integrating with respect to only one variable, this gives us the answer up to an undetermined function of the other - in this case performing the integration gives us \begin{equation} z(x,y) = \frac{\beta}{\alpha}\frac{1}{x^3y} + f(x). \end{equation} If you try the same thing with $\frac{\mathrm{d}x}{a}=\frac{\mathrm{d}z}{c}$, you get \begin{equation} \frac{\mathrm{d}z}{\mathrm{d}x} = \frac{\beta}{x^3yz}, \end{equation} which you can solve by separation of variables - taking the $z$ over to the other side and integrating with respect to $x$, you end up with \begin{equation} z^2(x,y) = -\frac{\beta}{x^2y} + g(y) \end{equation} for some unknown function $g(y)$. However, this is clearly not going to agree with what we got out of our first calculation. It doesn't seem like this impediment can be removed. I suspect the problem arises from the combination of the nonlinearity of the equation with the non-analyticity of the $\frac{\beta}{x^3y}$ term; however, I'm not completely comfortable with this still and would be happy to be wrong.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
solubility of nonlinear overdetermined PDE for holonomic scaling of a frame This question is essentially a tweak of under what conditions can orthogonal vector fields make curvilinear coordinate system? . Suppose we have a frame of vector fields $\nu_i$ on $\mathbb{R}^n$. As discussed in the linked question, these $\nu_i$ will (at least locally) form the $\frac{\partial}{\partial x^i}$ of a coordinate system on $\mathbb{R}^n$ iff they commute pairwise. Suppose that the $\nu_i$ do not commute. When is it possible (say, locally) to find $n$ non-vanishing functions $f_i$ such that $$[f_i\nu_i,f_j\nu_j]=0?$$ This collection of equations can be regarded as an overdetermined system of $\frac{1}{2}n^2(n-1)$ nonlinear PDE. What is the solubility condition for this system?
Without loss of generality, we consider strictly positive functions $ f_i := \exp {g^i} $ and solve the problem for the functions $g^i$. After calculation, the equation $[\exp g^i \cdot v_i, \exp g^j \cdot v_j] = 0$ is the same as $$ [v_i, v_j] = v_j g^i \cdot v_i - v_i g^j \cdot v_j .$$ Therefore, when the given involutive frame has dimension 2, i.e., $$ [v_1, v_2] = \alpha \cdot v_1 - \beta \cdot v_2 $$ for two smooth functions $\alpha$ and $\beta$, we only need to solve the following two independent differential equations: $$ v_2 g^1 = \alpha,\quad v_1 g^2 = \beta.$$ Locally, this is always possible. For a frame of higher dimension, our rewriting of the equations forces each pair of vector fields to be involutive. In this case, we denote by $\Gamma^i_j$ the corresponding coefficients (they are smooth functions): $$ [v_i, v_j] = \Gamma^i_j \cdot v_i - \Gamma_i^j \cdot v_j, \quad i \neq j. $$ And we then need to solve the overdetermined system for each $g^i$ : $$ v_j g^i = \Gamma^i_j, \quad j \neq i.$$ The following sufficient condition for the existence of $g^i$ comes from the Frobenius theorem, stated as Theorem 19.27 in Lee's Introduction to Smooth Manifolds, $$ v_k \Gamma^i_j - v_j \Gamma^i_k = \Gamma^k_j \cdot \Gamma^i_k - \Gamma^j_k \cdot \Gamma^i_j , \quad \forall k \neq i, \, j \neq i,$$ which is eq.(19.10) in the book. It is obvious that the above condition is also necessary as $ v_k(v_j g^i) - v_j(v_k g^i) = [v_k, v_j] g^i $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Right notation to recurse over a sequence or list I have a function $f(x, a)$ which is invoked over all the elements of a sequence feeding the result to the next call, with $x$ being the next element in the list and $a$ the accumulated result. What is the correct (and popular) notation to write this properly? Lets say $F(p, a)$ is the recursive function which takes the sequence, $p$, and runs $f(x, a)$ on each element of $p$. I was thinking of defining it something like this: $F(p, a) = \begin{cases} a & \text{if } p = \langle \rangle\\ f(F(\langle x_1, ..., x_{n-1} \rangle, a), x_n) & \text{if } p = \langle x_1, ..., x_n \rangle \end{cases} $ But for some reason it seems incorrect for the case when $p$ has just one element. Is there a better way to define it? Maybe some way of denoting the head and tail of the list?
What you are describing is the concept of folding. Your definition seems right to me; it is the left fold (the recursion peels off the last element and combines it with the result of folding the prefix). Note that there is also the right fold: $$ F(p,a) = \begin{cases} a & ; p=[] \\ F([x_1,\ldots,x_{n-1}], f(x_n, a)) & ; p = [x_1, \ldots, x_n]; n \ge 1 \end{cases} $$ EDIT: The left fold can also be defined with the colon operator (which attaches an element to the front of a list), threading the accumulator through a tail recursion: $$ F(p,a) = \begin{cases} a & ; p=[] \\ F(xs, f(x, a)) & ; p = x : xs \end{cases} $$ That's the way it is usually done in Haskell...
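In code form (a Python sketch; the names are illustrative), the accumulator-threading definition from the EDIT is exactly what a simple loop, or `functools.reduce`, computes:

```python
from functools import reduce

def F(p, a, f):
    """Left fold: feed the accumulator a and each element of p into f, front to back."""
    for x in p:
        a = f(x, a)
    return a

f = lambda x, a: a + x   # an illustrative choice of the step function f(x, a)
print(F([1, 2, 3, 4], 0, f))                          # 10
print(reduce(lambda a, x: f(x, a), [1, 2, 3, 4], 0))  # 10, the same fold via functools.reduce
```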
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find integers $m$ and $n$ such that $14m+13n=7$. The Problem: Find integers $m$ and $n$ such that $14m+13n=7$. Where I Am: I understand how to do this problem when the number on the RHS is $1$, and I understand how to get solutions for $m$ and $n$ in terms of some arbitrary integer through modular arithmetic, like so: $$14m-7 \equiv 0 \pmod {13} \iff 14m \equiv 7 \pmod {13}$$ $$\iff m \equiv 7 \pmod {13} $$ $$\iff m=7+13k \text{, for some integer }k.$$ And repeating the same process for $n$, yielding $$ n=-7(2k+1) \text{, for some integer } k. $$ I then tried plugging these in to the original equation, thinking that I only have one variable, $k$, to solve for, but they just ended up canceling. The only way I can think to proceed from here is brute force, but I imagine there's a more elegant way to go about this. Any help would be appreciated here.
If $14x+13y=1$ then multiplying by $7$ gives $14(7x)+13(7y)=7.$
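To make the hint concrete (this is just one pair, not the only one): $14\cdot 1+13\cdot(-1)=1$, so multiplying through by $7$ gives $$14\cdot 7+13\cdot(-7)=7,$$ i.e. $m=7$, $n=-7$ works; more generally $m=7+13k$, $n=-7-14k$ for any integer $k$, which agrees with the congruence computation in the question.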
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 0 }
using a formula to make a statement with the use of greater than, less than symbol I am trying to make statement using the greater than less, less than symbol. I am unsure if the use of $\lt\gt$ symbols have to facing the same direction in an equation. Could you say: $x \lt x + y \gt y$ would this mean that $x$ is less than $x+y$ and also $y \lt x+y$ I'm going to use this for the engraving in my (future) husband wedding band. Just trying to say that we are better together. Hope this makes sense! Thank you!!!
You are correct in that the interpretation of $x<x+y>y$ would be $x<x+y$ and $y<x+y$. This isn't the usual way such a thing is notated - sometimes people write both equations, and sometimes people write $x,y<x+y$. Still, personally I think $x<x+y>y$ has a satisfying symmetry which $x,y<x+y$ doesn't have. And of course, congratulations!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What can be said about the relationship between row equivalent and invertible matrices? If $rref(A)$ = $I$ and $rref(B)$ = $I$, can we say that $A$ and $B$ are row equivalent? My intuition says Yes, but I can't really prove it. If this is true, does that mean that all invertible matrices of the same dimensions are row equivalent? (Assume that $A$, $B$ and $I$ are $n$ by $n$) I couldn't find an answer to this, maybe it's there somewhere, but in that case I couldn't understand it. Thanks.
Yes -- if you can show that the inverse of an elementary row operation is itself an elementary row operation, the proof is essentially already done (just apply the inverses of matrix $B$'s operations to both sides). And yes, this fact (plus a proof that an invertible matrix's reduced row echelon form is always the identity matrix) does imply that all invertible matrices of the same dimension are row-equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show the probability that the sum of these numbers is odd is 1/2 Setting Let $S$ be a set of integers where at least one of the integers is odd. Suppose we pick a random subset $T$ of $S$ by including each element of $S$ independently with probability $1/2$, Show that $$\text{Pr}\Big[\Big(\sum_{i \in T} i\Big) = \text{odd}\Big] = \frac{1}{2}$$ Solution given Let $x$ be an odd integer in $T$. Then for every subset $S \subseteq T - \{x\}$, exactly one of $S$ and $S \cup \{x\}$ has an odd total, and both are subsets of $T$. Thus exactly half of the subsets have an odd total. Problem The solution makes very little sense to me. If someone has an alternate solution, or could pose the solution in a more explicable manner, it'd be really nice.
This is a proof by pairing - it establishes a bijection between sets with even sums and sets with odd sums, to show that there are the same number of subsets of each kind. Let $U$ be the set which contains all the elements of $S$ except for the odd integer $x$. Every subset $T$ of $S$ either contains $x$ or it doesn't. If it doesn't, then $T$ is a subset of $U$. We can add the element $x$ to any set $T\subset U$ to get a subset $V$. We match $T$ with $V$. One of these sets has an even sum, and the other has an odd sum. Do we get all the possible sets $V$? Well, if we have a $V$ containing $x$, we can strike out the element $x$ and get a set $T\subset U$. We have a perfect matching. We can't tell which of the two sets in each matching has an odd sum, but we know precisely one of them will. We therefore know that half of the sets have odd sums and half have even sums.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1271998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
One tailed or two tailed Ok so this the question: An administrator at a medium-sized hospital tells the board of directors that, among patients received at the Emergency room and eventually admitted to a ward, the average length of time between arriving at Emergency and being admitted to the ward is 4 hours and 15 minutes. One of the board members believes this figure is an underestimate and checks the records for a sample of 25 patients. The sample mean is 6 hours and 30 minutes. Assuming that the population standard deviation is 3 hours, and that the length of time spent in Emergency is normally distributed, use the sample data to determine whether there is sufficient evidence at the 5% level of significance to assert that the administrator's claim is an underestimate. The first scenario is that that for the null hypothesis the mean is equals to 4hrs 15mins. For the alternative hypothesis the mean is not equals to 4hrs 15mins. So it could be less or more. However the question says that one the board memeber thinks that it might be an UNDERESTIMATE so that means the alternative hypothesis must be higher than 4hrs 15 mins? Right? So that opens the possibility to a another scenario which is: In the null hypothesis the mean is equals to or less than 4hrs 15mins. And in the alternative hypothesis the mean is greater than 4hrs 15mins. So the first scenario is a two tailed test and the second scenario is one tailed test. The question asks me whether it is a one tailed test or two tailed, and there is only one correct answer. But I am not sure which one is correct. From my perspective both make sense. If someone could give me the correct answer and explain it to me it would help me out a lot.
There is only one scenario. The administrator's statement has to be taken at face value and is the null hypothesis. $H_0: t=4.25$ hours The board member has an alternative hypothesis. $H_1: t>4.25$ hours This is one-tailed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Hypothesis testing rejection question If a hypothesis is rejected at the $0.025$ level of significance, then it may be rejected or not rejected at the $0.01$ level. Is this statement true? If it is true can you explain it to me why? I cannot seem to understand why it would be true. Also if it is any level more than $0.025$, should it be rejected?
If a claim is rejected at $0.01$ significance level, it will be rejected at $0.025$, but the other way around need not be true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the "homogenous solution" to a second-order linear homogenous DE always valid? That is, given an equation $ay''+by'+cy = 0$, I know that solutions are of the form $e^{rt},$ where r is a constant computed from $ar^2 + br + c = 0$. For some reason, I have written down in my notes adjacently that the "homogenous solution" is $c_1e^{r_1t}+c_2e^{r_2t}$. I also didn't note what precisely that was, but my assumption was that it means any equation of the form $c_1e^{r_1t}+c_2e^{r_2t}$ also satisfies the initial equation. Is the homogenous solution really valid for $ay''+by'+cy = 0$ in all places? If so, why is the starting point that the solutions are of the form $e^{rt}$, and why is the homogenous solution valid anyway? If not, what is meant by the "homogenous solution"?
An $n$th order, linear homogeneous ODE has $n$ linearly independent solutions (see this for why). For a $2$nd order linear homogeneous ODE, we then expect two solutions. In your case, the coefficients of the ODE are constants. This gives rise to the notable family of solutions that you've noted: $c_1e^{r_1t}+c_2e^{r_2t}$. You've likely seen this, but let's assume $y=e^{rt}$. Upon substitution into the ODE, we can see our assumption is satisfied and indeed this is one of many solutions: $$ay''+by'+cy = 0$$ $$a(e^{rt})''+b(e^{rt})'+ce^{rt} = 0$$ $$ar^2e^{rt}+bre^{rt}+ce^{rt} = 0$$ $$ar^2+br+c = 0$$ Note that the division by $e^{rt}$ is permitted since $e^{rt}>0$. This last line, though, is just a quadratic equation with roots: $$r = \frac{-b \pm \sqrt{b^2 -4ac}}{2a}$$ The two roots give $r_1$ and $r_2$, and thus let us construct a general solution. Since the ODE is linear, we sum the two solutions (which happen to be linearly independent; look up the Wronskian for further info). A sum of solutions is itself a solution, hence $$y = c_1e^{r_1t}+c_2e^{r_2t}$$ The above is your real-valued solution for DISTINCT roots $r_1 \ne r_2$. But what if $r_1 = r_2$ in the case of repeated roots? Or $r_1, r_2 \in \mathbb{C}$ in the case of complex roots? There are workarounds for these cases, and the answer will again be represented as a sum of two linearly independent solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When is shear useful? I'd never heard of the shear of a vector field until reading this article. Shear is the symmetric, tracefree part of the gradient of a vector field. If you were to decompose the gradient of a vector field into antisymmetric ($\propto$ curl), symmetric tracefree (shear), and tracefull(?) ($\propto$ divergence) parts you'd get that it has components: $$\frac {\partial A_i}{\partial x^j} = \sigma(A)_{ij} - \frac 12 \epsilon_{ijk}(\nabla \times A)_k + \frac 13 \delta_{ij} (\nabla \cdot A)$$ where $\sigma(A)$ is the shear and is defined as $$\sigma(A)_{ij} = \frac 12 \left(\dfrac {\partial A_i}{\partial x^j} + \frac {\partial A_j}{\partial x^i}\right) - \frac 13 \delta_{ij} (\nabla \cdot A)$$ Is the shear of a vector field only useful in fluid mechanics? Does it have any use in pure mathematics -- perhaps analysis or differential geometry? If so, does anyone have a reference?
In the special case of two dimensions the shear operator is known as the Cauchy-Riemann operator (or $\bar \partial$ operator, or one of two Wirtinger derivatives), and is denoted $\dfrac{\partial}{\partial \bar z}$. It is certainly useful in complex analysis. The $n$-dimensional case comes up in the theory of quasiconformal maps where the operator $\sigma$ is known as "the Ahlfors operator", even though the concept certainly predates Ahlfors and goes back at least to Cauchy. Sometimes the name is expanded to be more historically accurate. Other times it's called "the distortion tensor" or the $n$-dimensional Cauchy-Riemann operator. Examples of usage: * *$L^\infty$−extremal mappings in AMLE and Teichmüller theory by Luca Capogna, page 4. *Logarithmic potentials, quasiconformal flows, and Q-curvature by Mario Bonk, Juha Heinonen, and Eero Saksman, page 10. *Another proof of the Liouville theorem by Zhuomin Liu, page 328.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Fundamental theorem of calculus related problem Let $$F(t) = \int ^t_0 f(s,t)\, ds$$ How can I find out $F'(t)$? I guess we should apply chain rule but I can't succeed. Well, perhaps, separation of variables?
Let $$G(u,v)=\int_0^uf(s,v)\,ds$$ Note that if $u(t)=t$ and $v(t)=t$, then $F(t)=G(u,v)$. Now $$ \begin{align} \frac{d}{dt}F(t)&=\frac{d}{dt}G(u,v)\\&=\frac{\partial G}{\partial u}\frac{du}{dt}+\frac{\partial G}{\partial v}\frac{dv}{dt}\\ &=f(u,v)\cdot1+\int_0^u\frac{\partial}{\partial v}f(s,v)\,ds\cdot1\\ &=f(t,t)+\int_0^t\left.\frac{\partial}{\partial v}f(s,v)\right|_{v=t}\,ds\\ \end{align}$$
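For reassurance, the formula can be verified symbolically on a concrete integrand. Here is a SymPy sketch; the choice $f(s,t)=t^2\sin s$ is an arbitrary assumption made only for illustration:

```python
import sympy as sp

s, t = sp.symbols('s t')
f = t**2 * sp.sin(s)                       # illustrative choice of f(s, t)

F = sp.integrate(f, (s, 0, t))             # F(t) = int_0^t f(s, t) ds
direct = sp.diff(F, t)                     # differentiate after integrating
leibniz = f.subs(s, t) + sp.integrate(sp.diff(f, t), (s, 0, t))  # f(t, t) + int_0^t f_t(s, t) ds

print(sp.simplify(direct - leibniz))       # 0
```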
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
No. of Comparisons to find maximum in $n$ Numbers Given $n$ numbers, we want to find the maximum. In order to find the maximum in a minimal amount of comparisons, we define a binary tree s.t. we compare $n'_1=\max(n_1,n_2)$, $n'_2=\max(n_3,n_4)$; $n''_1=\max(n'_1,n'_2)$ and so on. How many maximum comparisons are needed to find the maximum of those $n$ numbers? When drawing and trying out on paper, I came up with $n-1$ comparisons, but I'm not able to prove this. Any ideas? It's more or less easy to see if $n$ is divsible by 2.
It is fairly straightforward to prove with induction that $k-1$ comparisons are enough to find the maximum of $n_1,\dotsc,n_k$. * *Clearly we need no comparisons to find the maximum of $n_1$. *Consider now the numbers $n_1,\dotsc,n_k$ and assume that we can find the maximum of $k-1$ numbers with $k-2$ comparisons. Then we need only one last comparison because $\max(n_1,\dotsc,n_k) = \max(\max(n_1,\dotsc,n_{k-1}),n_k)$. On the other hand, this is the best case possible, too: suppose that $n_1$ is the maximum of $n_1,\dotsc,n_k$. To decide that it actually is the maximum you need to compare it with each of $n_2,\dotsc,n_k$, i.e. you need $k-1$ comparisons.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a concept of asymptotically independent random variables? To prove some results using a standard theorem I need my random variables to be i.i.d. However, my random variables are discrete uniforms emerging from a rank statistics, i.e. not independent: for two $u_1$,$u_2$ knowing $u_1$ gives me $u_2$. Yet, my interest is in the asymptotics for ($u_i$) growing infinitely large, and thus these r.v. are becoming more and more independent. Is there a concept to formalize this idea? Is this standard? Can I have an entry point in the literature?
If $X_n$ and $Y_n$ are dependent for every finite $n$, but $(X_n, Y_n)$ converges in distribution (or "weakly") to some random vector $(X,Y)$ in which $X$ and $Y$ are independent, then we say that $X_n$ and $Y_n$ are asymptotically independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proving direct sum when field is NOT of characteristic $2$. Let $\mathbb{F}$ be a field that is not of characteristic $2$. Define $W_1 = \{ A \in M_{n \times n} (\mathbb{F}) : A_{ij} = 0$ whenever $i \leq j\}$ and $W_2$ to be the set of all symmetric $n \times n$ matrices with entries from $\mathbb{F}$. Both $W_1$ and $W_2$ are subspaces of $M_{n \times n} (\mathbb{F})$. Prove that $M_{n \times n} (\mathbb{F})=W_1 \oplus W_2$. I know how to prove that $M_{n \times n} (\mathbb{F})=W_1 \oplus W_2$ for any arbitrary field $\mathbb{F}$ (by showing $W_1 \cap W_2 = \{0\}$ and any element of $M_{n \times n} (\mathbb{F})$ is sum of elements of $W_1$ and $W_2$). But what does "$\mathbb{F}$ be a field that is NOT of characteristic $2$" mean here (I am aware of definition of characteristic of a field), i.e. how does the characteristic of field affect the elements in $M_{n \times n} (\mathbb{F})$? And how does it change the proof? Thanks.
When you take the field $F$ to be of characteristic $2$, i.e. if the field is $\mathbb Z_2$ and the matrix is over $\mathbb Z_2$, do you see that the problem has nothing left in it? Hint: Try to write down a matrix over $\mathbb Z_2$ and see what happens.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
On Learning Tensor Calculus I am highly intrigued in knowing what tensors are, but I don't really know where to start with respect to initiative and looking for an appropriate textbook. I have taken differential equations, multivariable calculus, linear algebra, and plan to take topology next semester. Since Summer break is right at the corner, I was wondering if anyone would tell me if my mathematical background is able to handle tensor calculus, or if I need to know other subjects in order to be ready to learn about tensor. Any and all help would be appreciated.
Have a look at: http://www.amazon.com/Einsteins-Theory-Introduction-Mathematically-Untrained/dp/1461407052/ref=sr_1_fkmr0_1?s=books&ie=UTF8&qid=1431089294&sr=1-1-fkmr0&keywords=relativity+theory+for+the+mathematically+unsohisticated which is Øyvind Grøn, Arne Næss: Einstein's Theory: A Rigorous Introduction for the Mathematically Untrained This is a serious book! The second author is (well, was ...) a retired philosopher who required of the first author that everything in the book should be understandable to him, without needing pencil and paper. Still it reaches some graduate level. By the way, the philosopher author was very famous, at least in Norway. The first review on Amazon says: "This is simply the clearest and gentlest introduction to the subject of relativity. This is NOT a popular reading, it does get into the math but the introduction and pace is what sets this text apart. The authors starts from baby steps and builds up the theory. The pace is easy and explanation are clear and the mathematical details are not left to the reader to figure out like in most text books.The physics is also explained very clearly From an electronics engineering background and not having dealt with tensors before, this text help me bridge the gap and allows me to fully appreciate the machinery underlying this wonderful theory.I have tried reading Schultz,Lawden etc to understand the subject on my own but this text is a much better intro to the subject." and other reviews are also very positive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Linearization of an implicitly defined function $f(x,y,z)=e^{xz}y^2+\sin(y)\cos(z)+x^2 z$ Find equation of tangent plane at $(0,\pi,0)$ and use it to approximate $f(0.1,\pi,0.1)$. Find equation of normal to tangent plane. My attempt: I found that tangent plane is $(2\pi-1)(y-\pi)=0$ or $y=\pi$ (all partial derivatives except $y$ equal 0). I don't know how to find linear approximation from it. What is the equation for linear approximation? Possibly it is $L(x,y,z)=f(0,\pi,0)+(2\pi-1)(y-\pi)$
This is not an implicit function. An implicit function is like $0=e^{xz}y^2+\sin(y)\cos(z)+x^2 z$. Instead, your function is a 3D explicit function. You are correct on your linear approximation $L(x,y,z)=f(0,\pi,0)+(2\pi-1)(y-\pi)$ and that is exactly the tangent plane $w=f(0,\pi,0)+(2\pi-1)(y-\pi)$.
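If you want to double-check this numerically, here is a short sketch assuming `sympy` is available (the variable names are mine): it confirms that the only nonzero partial derivative at $(0,\pi,0)$ is $f_y=2\pi-1$, so the linear approximation of $f(0.1,\pi,0.1)$ is simply $f(0,\pi,0)=\pi^2$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x*z)*y**2 + sp.sin(y)*sp.cos(z) + x**2*z

p0 = {x: 0, y: sp.pi, z: 0}
grad = [sp.diff(f, v).subs(p0) for v in (x, y, z)]
print(grad)                                   # [0, 2*pi - 1, 0]

L = f.subs(p0) + grad[1]*(y - sp.pi)          # the linearization; the x- and z-terms vanish
print(L.subs(y, sp.pi))                       # pi**2  -> approximate value of f(0.1, pi, 0.1)
print(sp.N(f.subs({x: sp.Rational(1, 10), y: sp.pi, z: sp.Rational(1, 10)})))  # ~9.97 for comparison
```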
{ "language": "en", "url": "https://math.stackexchange.com/questions/1272931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Restrict $x$ in an equation, but keeping only one equation How do you put restrictions on the $x$ in an equation without writing more than one equation? This is a two part question: * *How to take out a section of the graph of an equation? *How to take out everything but a section of the graph of an equation? For example, to take out $x$ from $-1$ to $1$ in $y=|x|$, you can change the equation to $$y=|x|\times\dfrac{\sqrt{|x|-1}}{\sqrt{|x|-1}}$$ I need to take a function, say $f(x)$, and restrict it to $1 < x < 4$.
You're on the right track. If you want to restrict the domain to $1 < x < 4$, simply multiply and divide by square roots that enforce the two separate restrictions $x>1$ and $x<4$. In this case, those would be $\sqrt{x-1}$ and $\sqrt{4-x}$. Put it all together and you have $f(x)_{restricted} = f(x) \cdot \frac{\sqrt{x-1}}{\sqrt{x-1}} \cdot \frac{\sqrt{4-x}}{\sqrt{4-x}}$. The process for eliminating everything but a certain region is similar.
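Here is a quick numerical illustration of why the trick works (a sketch assuming `numpy`; the helper name `restrict` is my own): outside $1<x<4$ the square-root factors evaluate to NaN, which wipes out the value of $f$.

```python
import numpy as np

def restrict(f, x):
    x = np.asarray(x, dtype=float)
    gate1 = np.sqrt(x - 1) / np.sqrt(x - 1)   # NaN unless x > 1
    gate2 = np.sqrt(4 - x) / np.sqrt(4 - x)   # NaN unless x < 4
    return f(x) * gate1 * gate2

xs = np.array([0.5, 1.0, 2.5, 4.0, 5.0])
with np.errstate(invalid='ignore'):
    print(restrict(np.abs, xs))               # [nan nan 2.5 nan nan]
```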
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Logical puzzle. 3 Persons, each 2 statements, 1 lie, 1 true I got a question at university which I cannot solve. We are currently working on RSA encryption and I'm not sure what that has to do with the question. Maybe I miss something. Anyway, here is the question: Inspector D interviews 3 people, A, B and C. All of them give 2 statements, where 1 statement they say is true and 1 is wrong. The inspector knows that and he also knows that exactly one is guilty. Here are the statements:
C did it, but B doesn't know it; it was just a lucky guess. The relation to RSA would be that you have one secret and one publicly verifiable statement, but it is somewhat abstract.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Irreducibility and factoring in $\mathbb Z[i], \mathbb Z[\sqrt{-3}]$ In $\mathbb Z[i]$, prove that $5$ is not irreducible. In $\mathbb Z[\sqrt{-3}]$, factor $4$ into irreducibles in two distinct ways. I am completely stumped on how to do this. I really need all the help I can get and a possible walkthrough.
For the first assignment you correctly found $$5 = (1+2i)(1-2i)\ .$$ This already shows that $5$ is reducible in $\mathbb Z[i]$, as claimed. To provide a little intuition on how to find these factors, consider the third binomial rule for the product $$(a+bi)(a-bi) = a^2 - (bi)^2 = a^2+b^2$$ So any sum of perfect squares is reducible with such factors. Now it should be easy to see that $5$ fulfills this criteria: $5=4+1 = 2^2 + 1^2$. This immediately gives rise to two factorisations of $5$ in $\mathbb Z[i]$, one for each choice of $(a,b)$: $$5 = (1+2i)(1-2i) = (2+i)(2-i)$$ Here is a starter on the second assignment: $$(a+b\sqrt{-3}) \cdot (c+d\sqrt{-3}) = ac - 3bd + (ad+bc)\sqrt{-3}$$ So you need to find two distinct integer solutions to the equations $$\begin{align*} ac - 3bd & = 4 \\ ad+bc & = 0 \end{align*}$$ One solution is $(a,b,c,d) = (1,-1,1,1)$ corresponding to $$4 = (1-\sqrt{-3})(1+\sqrt{-3})$$ Now you must find another solution to get the second factorisation.
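To complement the hint, here is a small brute-force search (my own sketch, not part of the answer; the search range is an arbitrary choice). It looks for integer solutions of $ac-3bd=4$, $ad+bc=0$ with neither factor a unit, and turns up both factorisations $4=2\cdot 2=(1-\sqrt{-3})(1+\sqrt{-3})$:

```python
from itertools import product

units = {(1, 0), (-1, 0)}
found = set()
for a, b, c, d in product(range(-3, 4), repeat=4):
    # (a + b*sqrt(-3)) * (c + d*sqrt(-3)) = (a*c - 3*b*d) + (a*d + b*c)*sqrt(-3)
    if a*c - 3*b*d == 4 and a*d + b*c == 0:
        if (a, b) not in units and (c, d) not in units:
            found.add(frozenset({(a, b), (c, d)}))

for pair in sorted(map(sorted, found)):
    print(pair)   # up to signs: [(2, 0)] i.e. 2*2, and [(1, -1), (1, 1)] i.e. (1 - sqrt(-3))(1 + sqrt(-3))
```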
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is a topological space isomorphic to some group? I'm reading topology without tears to develop intuition before attempting Munkres. I noticed how similar a topological space was to a group in Abstract Algebra. 1.1.1 Definitions. Let $X$ be a non-empty set. A set $\tau$ of subsets of $X$ is said to be a topology on $X$ if * *$X$ and the empty set, $\varnothing$, belong to $\tau$, *the union of any (finite or infinite) number of sets in $\tau$ belongs to $\tau$, and *the intersection of any two sets in $\tau$ belongs to $\tau$. The pair $(X,\tau)$ is called a topological space. Can you see why $X$ and $\varnothing$ behave like identities? And $\cup$ and $\cap$ are like binary operations. And this 'group' is closed under both operations. Is there a group a topological space is isomorphic to? Or are these just coincidences?
Not in general. But you're not too far from the truth.¹
Generally, if $X$ is a set, then $\mathcal P(X)$ is a group using symmetric difference as a commutative addition operator. It's even a ring when using $\cap$ for multiplication. But in the case of $\cap$ and $\cup$ as candidates for addition we run into some problems, since neither is reversible. Namely, $A\cup B=A\cup C$ does not imply that $B=C$, which means that there is no additive inverse. We can try to argue with symmetric difference again; but a topology need not be closed under symmetric differences, e.g. $(-1,1)\mathbin\triangle(0,1)=(-1,0]$, and the latter is not open in the standard topology of $\Bbb R$.
¹ Note, by the way, that a group is a set with an operation on that set, like $\Bbb Z$ with $+$, whereas a topological space is a set with a collection of subsets rather than an operation. So I took the liberty of understanding the question a bit differently: as asking whether $\tau$, the topology itself, is a group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How do I show that the inversion mapping for linear transforms is continuous in the operator norm? I'm working through some analysis textbooks on my own, so I don't want the full answer. I'm only looking for a hint on this problem. My question is related to this question, but the textbook I'm working through approaches it without (directly) using matrices. The textbook I'm using states that "when using the operator norm as a metric, the inversion map is continuous" but I'm struggling to prove that. I know I need to prove that given that $S$ is an invertible linear transform from $\mathbb{R}^n$ to $\mathbb{R}^n$, if I pick some $\epsilon > 0$, there is a $\delta > 0$ such that $||(T+S)^{-1} - S^{-1}|| < \epsilon$ if $||T|| < \delta$ and $T$ is some linear transform between the same spaces. That's the definition of continuity in the operator norm. $T$ doesn't necessarily have to be invertible, because (for example) if $S$ is the linear transform represented by the $n \times n$ identity matrix, $T$ could be the matrix of all zeros (the zero transform), and obviously $S+T$ is still invertible, but $T = 0$ isn't. I understood the book's proof of the lemma that for any invertible linear transform $S$, there is some $\delta > 0$ and $M > 0$ such that if $||T|| < \delta$, then $||(S+T)^{-1}|| < M$ but that's the only lemma that's given in this section. Is this all I (somehow) need to prove continuity, along with the definitions of the operator norm, or is their a trick I'm missing?
Here are some algebraic manipulations that may serve as a hint for you. $$ \|(T+S)^{-1}-S^{-1}\|=\bigl\|(T+S)^{-1}\bigl(S-(T+S)\bigr)S^{-1}\bigr\|=\bigl\|(T+S)^{-1}TS^{-1}\bigr\|\le \|(T+S)^{-1}\|\,\|S^{-1}\|\,\|T\| $$ From here you should be able to deduce the result using the information given to you (the lemma bounds $\|(T+S)^{-1}\|$ when $\|T\|$ is small).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there a well-known type of differential equation consisting of $y$, $y'$, and $y''$ multiplied together? Is there some sort of well known type of differential equation consisting of first and second derivatives multiplied together? For example (I just made this up): $$y''(y')^2-y-x^2=0$$ Edit: Are there any applications of these sort of differential equations?
The height profile of a viscous gravity currents on horizontal surfaces can be described using a similarity solution as $$ (y^3y')' + 1/5(3\alpha+1)xy' - 1/5(2\alpha-1)y = 0, $$ on the domain $ 0 \leq x \leq 1$. Analytic solutions exist for $\alpha = 0$. Is this similar enough to what you're looking for?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve by using substitution method $T(n) = T(n-1) + 2T(n-2) + 3$ given $T(0)=3$ and $T(1)=5$ I'm stuck solving by substitution method: $$T(n) = T(n-1) + 2T(n-2) + 3$$ given $T(0)=3$ and $T(1)=5$ I've tried to turn it into homogeneous by subtracting $T(n+1)$: $$A: T(n) = T(n-1) + 2T(n-2) + 3$$ $$B: T(n+1) = T(n) + 2T(n-1) +3$$ $$A - B = T(n) - T(n+1) = T(n-1) + 2T(n-2) + 3 - (T(n) + 2T(n-1) + 3)$$ $$2T(n) - T(n+1) - T(n-1) + 2T(n-2) = 0$$ Assume $T(n) = x^{n}$ $$2x^{n} - x^{n+1} - x^{n-1} + 2x^{n-2} = 0$$ Dividing each side by $x^{n-2}$ leaves me with the impossible (beyond the scope of this class) equation: $$2x^2 - x^3 - x + 2 = 0$$ How can I solve this using substitution method?
Suppose we look for a constant solution $t_n = a$. Then $a$ must satisfy $a = a+2a + 3$, which gives $a = -3/2$. Make the change of variable $$a_n = t_n + 3/2, \qquad t_n = a_n - 3/2.$$ Then $a_n$ satisfies the homogeneous recurrence $$a_n= a_{n-1}+ 2a_{n-2}, \qquad a_0 = 9/2,\ a_1=13/2.$$ Now look for solutions of the form $$a_n = \lambda^n \text{ where }\lambda^2-\lambda - 2 = 0\to \lambda = 2, -1.$$ The general solution is $$a_n = c\,2^n + d(-1)^n, \qquad c+d = 9/2,\ 2c-d = 13/2,$$ which gives you $$c = 11/3, \qquad d = 5/6.$$ Finally $$t_n = \frac{11}3\, 2^n +\frac 56 \, (-1)^n -\frac32. $$
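A quick numerical check of the closed form (my own sketch; exact fractions avoid rounding issues):

```python
from fractions import Fraction as F

def closed(n):
    return F(11, 3) * 2**n + F(5, 6) * (-1)**n - F(3, 2)

t = [F(3), F(5)]
for n in range(2, 15):
    t.append(t[n - 1] + 2*t[n - 2] + 3)

assert all(closed(n) == t[n] for n in range(15))
print([int(v) for v in t[:8]])    # [3, 5, 14, 27, 58, 115, 234, 467]
```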
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Real analysis: Show the limit $\lim nμ(E_n) = 0$. Let $(X, R, μ)$ be a measurable space. Let $f$ be a measurable and integrable function on $X$. Let $E_n = \{x ∈ X|f(x) > n\}$. Prove that $\lim nμ(E_n) = 0$. I know that $\mu(E_n)$ is equal to its outer measure, and it goes to $0$. But I do not know how show that and show $n\mu(E_n)$ goes to $0$.
$$\lim \;n\mu(E_n) \le \lim \int_{\{x:|f(x)|>n\}} n \;d\mu \leq \lim \int_{\{x:|f(x)|>n\}} |f| \;d\mu = \int_{\{x:|f(x)|=\infty\}} |f| \;d\mu=0$$ The first inequality uses that $E_n \subseteq \{x:|f(x)|>n\}$ for $n\ge 0$. That last equality follows because $f \in L^1$ implies that $|f|<\infty$ a.e., and the second-to-last equality follows because $\{x:|f(x)|=\infty\} = \bigcap_n \{x:|f(x)|>n\}$ is the intersection of a decreasing family of sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Math problems that are impossible to solve I recently read about the impossibility of trisecting an angle using compass and straight edge and its fascinating to see such a deceptively easy problem that is impossible to solve. I was wondering if there are more such problems like these which have been proven to be impossible to solve.
Constructing an algorithm to solve any Diophantine equation has been proven to be equivalent to solving the halting problem, as is computing the Kolmogorov complexity (optimal compression size) of any given input, for any given universal description language. In general I think what you're looking for is either problems that are proven to be equivalent to the halting problem, or problems that are formally independent from, say, ZFC, such as the Continuum Hypothesis or more generally Gödel's incompleteness theorems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 9, "answer_id": 0 }
Find a sequence $a_n$ such that its sum converges, but the sum of its logarithm doesn't - Generalisation Help This question came up in a past paper that I was doing, but it seems to be a fairly common, standard question. Give an example of a sequence $a_n$ such that $\sum(a_n)$ converges, but $\sum(\log(a_n))$ does not converge. Give an example of a sequence $b_n$ such that $\sum(b_n)$ does not converge, but $\sum(\log(a_n))$ does. Now, this particular question isn't all that hard (I just used $\frac{1}{n!}$ for the convergent series in both, and used the inverse function to get the non-convergent series). However, I got my answer through a bit of rational thinking, and maybe having seen similar questions before. My question is, can any of the tests for convergence be used to do their job backwards, ie. starting with the fact that the summation of the series converges (and the summation of the log of the series does not converge)? Are there any other ways of doing this, other than having a bit of a think? Thanks a lot!
In this case, it is easy to find a simple criterion. For $\sum \log(a_n)$ to converge, we must first have, as a necessary condition, that $\log(a_n)$ converges to $0$ (the terms of a convergent series must tend to $0$), and this only happens when $a_n$ converges to $1$ (which in turn implies that $\sum a_n$ diverges). So, we limit our search to sequences converging to $1$. Suppose that the sequence we are looking for takes the form $a_n = 1 + b_n$, where $b_n$ converges to $0$ and is positive, just like $a_n$ is positive. Then $\log(a_n) \sim b_n$. Thus, you may choose any sequence $b_n$ of positive entries such that $\sum b_n$ converges. In other words, any $(a_n)$ of positive entries, having the form $a_n = 1 + b_n$ where $b_n$ has positive terms and $\sum b_n$ converges, will satisfy: $\sum a_n$ diverges and $\sum \log(a_n)$ converges. Conversely, any convergent series $\sum a_n$ of positive terms will necessarily have $\sum \log(a_n)$ divergent, because then $\lim \log(a_n) \neq 0$. In general, this depends on the function at hand and its particular properties. In places, other tests are more useful and other facts should be employed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1273959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Decision to play on in perfect square die game Question:You have a six-sided dice, and you will receive money that equals to sum of all the numbers you roll. After each roll, if the sum is a perfect square, the game ends and you lose all the money. If not, you can decide to keep rolling or stop the game. If your sum is $35$ now, should you keep play? Attempt: Should we be incorporating the $35$ already accumulated, in which case we have: \begin{equation*} E(\text{Winnings after 1 throw}) = \frac{1}{6}(-35)+\frac{5}{6}(35+4+3.5). \end{equation*} The $4$ comes from the fact that if we don't throw a one in the next toss, the average is $20/5 = 4$. And if we do not throw a one on the next toss, we are guaranteed not to hit a perfect square on the second toss after that, so we expect to make an extra $3.5.$ My approach: you lose your 35 dollars with 16.67% chance, or with 5/6 chance you keep your 35, plus gain the average of the next roll (2+3+4+5+6)/5, and also after the first roll, you are guaranteed to be too far away from the next perfect square (49) so you have an added 3.5 to look forward to with certainty
If you count rolling a $1$ as $-35$, then your reference point is your current $35$, and you shouldn't include $35$ in the chance of further winnings. On the other hand, you can certainly keep rolling while your score is below $43$. So I would take the value of the next roll to be at least $(1/6)(-35)+(5/6)8$ which is positive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Contour integration of: $\int_C \frac{2}{z^2-1}\,dz$ I want to calculate this (for a homework problem, so understanding is the goal) $$\int_C \frac{2}{z^2-1}\,dz$$ where $C$ is the circle of radius $\frac12$ centre $1$, positively oriented. My thoughts: $$z=-1+\frac12e^{i\theta},\,0\leq\theta\leq2\pi$$ $$z'=\frac{i}{2}e^{i\theta},f(z) = \frac{}{}$$, from here it is going to get really messy, so I doubt that is the direction I should go. Where should I go? Or is it meant to get messy from here on out?
Use residue theorem to find integrals of this type. And observe that the only singular point for this analytic function in the disk above is at $z=1$.
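For reference, here is a sketch of the bookkeeping with `sympy` (assumption: `sympy` is available; the partial-fraction step is mine, not part of the hint). The residue of $\frac{2}{z^2-1}$ at $z=1$ is $1$, so the integral over the positively oriented circle is $2\pi i$:

```python
import sympy as sp

z = sp.symbols('z')
f = 2/(z**2 - 1)
print(sp.apart(f, z))                     # 1/(z - 1) - 1/(z + 1)
print(sp.residue(f, z, 1))                # 1: the only pole inside |z - 1| = 1/2
print(2*sp.pi*sp.I*sp.residue(f, z, 1))   # 2*I*pi, the value of the integral
```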
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Function whose limit does not exist at all points There are functions which are discontinuous everywhere and there are functions which are not differentiable anywhere, but are there functions with domain $\mathbb{R}$ (or "most" of it) whose limit does not exist at every point? For example, $ f:\mathbb{R}\to\mathbb{N}, f(x) = $ {last digit of the decimal representation of $x$}. Is this even a valid function?
Things can get wild: There are functions $f:\mathbb {R}\to \mathbb {R}$ such that for every interval $I$ of positive length, $f(I) = \mathbb {R}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Integration of $\int \frac{e^x}{e^{2x} + 1}dx$ I came across this question and I was unable to solve it. I know a bit about integrating linear functions, but I don't know how to integrate when two functions are divided. Please explain. I'm new to calculus. Question: $$\int \frac{e^x}{e^{2x} + 1}dx$$ Thanks in advance.
You may write $$ \int \frac{e^x}{e^{2x} + 1}dx=\int \frac{d(e^x)}{(e^{x})^2 + 1}=\arctan (e^x)+C $$ since $$ \int \frac{1}{u^2 + 1}du=\arctan u+C. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exhibit a bijective function $\Bbb Z \to \Bbb Z$ with infinitely many orbits I've the following exercise: Give an example of a bijective function $\Bbb Z\rightarrow\Bbb Z$ with infinitely many orbits. What would be its infinite orbits?
HINT: Recall that there is a bijection between $\Bbb Z$ and $\Bbb{Z\times Z}$. Can you find a bijection $f\colon\Bbb{Z\times Z\to Z\times Z}$ with infinitely many infinite orbits?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that if $p\ge 7$ then $\exists n\in\Bbb{Z}$ such that $10^{n(p-1)}\equiv1 \mod 9p$. Prove that if $p\ge 7$ then $\exists n\in\Bbb{Z}$ such that $10^{n(p-1)}\equiv1 \mod 9p$. Edit: $p$ is prime, of course. I tried using theorems regarding Euler, but I can't seem to arrive at something useful. I could really use your guidance.
By pigeon-hole, there exist $k,m$ with $10^{k(p-1)}\equiv 10^{m(p-1)} \pmod{9p}$ and wlog. $k>m$. Since $10$ is coprime to $9p$ (here we use that $p\ge 7$, so $p \neq 2, 3, 5$), we can cancel $10^{m(p-1)}$ and find $10^{(k-m)(p-1)}\equiv 1\pmod{9p}$.
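As a small illustration (mine, not part of the proof): in fact the multiplicative order of $10$ modulo $9p$ always divides $p-1$ for $p\ge 7$, since $10\equiv 1 \pmod 9$ and $10^{p-1}\equiv 1\pmod p$ by Fermat, so already $n=1$ works. The snippet below checks a few cases, assuming `sympy` is available:

```python
from sympy.ntheory import isprime, n_order

for p in (7, 11, 13, 17, 19, 23, 29):
    assert isprime(p)
    d = n_order(10, 9*p)              # multiplicative order of 10 modulo 9p
    print(p, d, (p - 1) % d == 0)     # the order divides p - 1, so 10**(p-1) = 1 (mod 9p)
```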
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
$x^9 - 2x^7 + 1 > 0$ $x^9 - 2x^7 + 1 > 0$ Solve in real numbers. How would I do this without a graphing calculator or any graphing application? I only see a $(x-1)$ root and nothing else, can't really factor an eighth degree polynomial ... Thanks.
Hint: You know that $x^9-2x^7+1=(x-1)(x^8+x^7-x^6-x^5-x^4-x^3-x^2-x-1)$. Note that the second factor is positive for $x\rightarrow \pm \infty$, and is $-1$ for $x=0$, so it has at least two real roots. You can use Sturm's theorem to find how many real roots $x^8+x^7-x^6-x^5-x^4-x^3-x^2-x-1$ has, but it is very laborious, and it doesn't give the value of the roots. A numerical computation shows that it really has only two real roots, $x \sim 1.38$ and $x \sim -1.44$.
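A quick numerical check of the hint (my own sketch, assuming `numpy`): it verifies the division by $x-1$ and locates the two real roots of the degree-$8$ factor.

```python
import numpy as np

p = [1, 0, -2, 0, 0, 0, 0, 0, 0, 1]        # x^9 - 2x^7 + 1
q, r = np.polydiv(p, [1, -1])              # divide by (x - 1)
print(r)                                   # [0.]  -> the division is exact
real = sorted(z.real for z in np.roots(q) if abs(z.imag) < 1e-9)
print(real)                                # approximately [-1.44, 1.38]
```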
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Least Square method, find vector x that minimises $ ||Ax-b||_2^2$ Given the matrix $$A = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 2 \\ 0 & -1 & -1 \end{pmatrix}$$ and $b = [1\ \ 4\ -2]^T$, find $x$ such that $||Ax - b||_2^2$ is minimised. I know I have to do something along the lines of $A^TAx = A^Tb$, and I got the vector $(1/3)* [4\ 7\ 0] ^T$. However the answer is $x = (1/3)* [4\ 7\ 0] ^T + \lambda*[-1\ -1 \ \ 1]^T $. I have no clue where the $\lambda*[-1\ -1 \ \ 1]^T$ comes from. I would really appreciate some help.
The vector $\begin{bmatrix} -1 \\ -1 \\1\end{bmatrix}$ is in the nullspace of $A^TA$. So $A^TAx=A^TA\begin{bmatrix} 4/3 \\ 7/3 \\0\end{bmatrix}+\lambda A^TA \begin{bmatrix} -1 \\ -1 \\1\end{bmatrix}=A^TA\begin{bmatrix} 4/3 \\ 7/3 \\0\end{bmatrix}+0$
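A small numerical illustration (my own sketch, assuming `numpy`): the residual $\|Ax-b\|_2$ is the same for every $\lambda$, which is exactly why the minimiser is only determined up to multiples of $(-1,-1,1)$.

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [1., 1., 2.],
              [0., -1., -1.]])
b = np.array([1., 4., -2.])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular minimiser
n = np.array([-1., -1., 1.])
print(np.allclose(A @ n, 0))                 # True: n is in the nullspace of A (hence of A^T A)
for lam in (0.0, 1.0, -2.5):
    print(lam, np.linalg.norm(A @ (x0 + lam*n) - b))   # same residual for every lambda
```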
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
List all the elements of ${\mathbb F_2}[X]/(X^3+X^2+1)$. I just don't get this. Looking through past papers, I came across this problem, Q. Let $F_2$ be a field with 2 elements. Let $P=x^3+x^2+1\in F_2[X]$. $I$ is the ideal of $F_2[X]$ generated by $P$. List all elements of the factor ring $F_2[X]/I$. My answer; technically, I do get it correct. Having come across similar problems multiple times, I just guessed it'll be in the form $a+bx+cx^2$, where each coefficient is from the said field. Seems like the 2 elements are 0,1 so I can just list all combinations of it. But, my question is, say what does $1+x^4+I$ become? I know I should treat $x^3+x^2+1=0$, but substitution doesn't help. And I cannot "decompose" or maybe factorize $1+x^4$ such that I can get a multiple of $x^3+x^2+1$ and let it be "absorbed' to $I$. It just confuses me, how I should specifically manipulate $1+x^4$, $1+x^5$, $x^5$ or anything like that. Well, if I haven't explained my confusion clearly, simply put; Would someone please give me steps to obtaining what $1+x^4+I$, $1+x^5+x^7+I$ and so might be, and how I should think about doing it??? Maybe because it's extremely abstract but in the past 3 months, including my lecturer and textbook, no one is really able to explain it with extensive clarity really, or maybe math is just not my thing. This idea of Factor Rings, while I can give definitions if asked, just doesn't click. It would be great if someone can help me out, thanks so much in advance
Use the division algorithm: For $f, g \in F[x]$, there exist $q, r \in F[x]$ such that $f(x) = q(x)g(x)+r(x)$ with $\deg r < \deg g$. For the example: we wish to interpret $f(x) = x^4+1$ modulo $g(x) = x^3+x^2+1$. Using the division algorithm, we have $$f(x) = (x+1)g(x)+(x^2+x),$$ so $(x^4+1)+I = (x^2+x+(x+1)g(x))+I = (x^2+x)+I$ because $g(x) \in I$.
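If it helps to see the mechanics, here is a sketch that automates the "divide and keep the remainder" step over $\mathbb F_2$, assuming `sympy` is available (the representatives chosen are the ones asked about in the question):

```python
from sympy import Poly, symbols

x = symbols('x')
g = Poly(x**3 + x**2 + 1, x, modulus=2)

for f in (x**4 + 1, x**7 + x**5 + 1):
    r = Poly(f, x, modulus=2).rem(g)
    print(f, '->', r.as_expr())        # x**4 + 1 -> x**2 + x,  x**7 + x**5 + 1 -> x + 1
```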
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
simplifying equations I have been trying to study analysis of algorithms with mathematical induction, yet I found my algebra skills are very poor. So now I have begun restudying algebra (factoring, distributive property, simplifying, expanding) so I can handle mathematical equations better. But no matter what I do, I can't seem to explain how the following equation works. $$ \frac{n(n+1)}{2}+(n+1) = \frac{(n+1)(n+2)}{2} $$ I think $$ \frac{n(n+1)}{2}+(n+1) $$ has been simplified to $$ \frac{(n+1)(n+2)}{2} $$ I have a feeling I am thinking something stupid, but I really don't understand how the right side of the equation has been obtained.

* Can anyone explain how the formula got simplified?
* What part of algebra should I study to understand equations like this?
Note that both expressions have the common factor $(n+1)$, which we can factor out. Hence we write $$\frac{n(n+1)}{2}+(n+1)=(n+1)\left[\frac{n}{2}+1\right]=(n+1)\left[\frac{n}{2}+\frac{2}{2}\right]=(n+1)\frac{n+2}{2}=\\=\frac{(n+1)(n+2)}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1274998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How does $-\frac{1}{x-2} + \frac{1}{x-3}$ become $\frac{1}{2-x} - \frac{1}{3-x}$ I'm following a solution that is using a partial fraction decomposition, and I get stuck at the point where $-\frac{1}{x-2} + \frac{1}{x-3}$ becomes $\frac{1}{2-x} - \frac{1}{3-x}$ The equations are obviously equal, but some algebraic manipulation is done between the first step and the second step, and I can't figure out what this manipulation could be. The full breakdown comes from this solution $$ \small\begin{align} \frac1{x^2-5x+6} &=\frac1{(x-2)(x-3)} =\frac1{-3-(-2)}\left(\frac1{x-2}-\frac1{x-3}\right) =\bbox[4px,border:4px solid #F00000]{-\frac1{x-2}+\frac1{x-3}}\\ &=\bbox[4px,border:4px solid #F00000]{\frac1{2-x}-\frac1{3-x}} =\sum_{n=0}^\infty\frac1{2^{n+1}}x^n-\sum_{n=0}^\infty\frac1{3^{n+1}}x^n =\bbox[4px,border:1px solid #000000]{\sum_{n=0}^\infty\left(\frac1{2^{n+1}}-\frac1{3^{n+1}}\right)x^n} \end{align} $$ Original image
$$ -\frac{1}{x-2} = \frac{1}{-(x-2)} = \frac{1}{-x+2} = \frac{1}{2-x}$$ and $$ \frac{1}{x - 3} = \frac{1}{-3 + x} = \frac{1}{-(3 - x)} = -\frac{1}{3-x}$$ Thus $$ -\frac{1}{x-2} + \frac{1}{x - 3} = \frac{1}{2-x} - \frac{1}{3-x} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 6 }
If I have the value of $\sqrt{1.3}$ could it be possible to find other square roots from that value? using the manipulation of surds? If I have the value of $\sqrt{1.3}$ could it be possible to find other square roots from that value? using the manipulation of surds?
You can use the knowledge of $\sqrt{1.3}$ to calculate $\sqrt{x}$ only if $x = 1.3*y$, where $y$ is some other number whose square root you already know. So $\sqrt{5.2}$ is easy, because it's just $\sqrt4*\sqrt{1.3}$, which you can calculate, assuming that you know that $\sqrt{4}=2.$ To calculate $\sqrt{2.6}$, you could argue that this is just $\sqrt2 *\sqrt{1.3}$, but that doesn't help unless you already know $\sqrt2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to know if an integral is well defined regardless of path taken. I can calculate \begin{equation*} \int_0^i ze^{z^2} dz=\frac{1}{2e}-\frac12, \end{equation*} but why can I calculate this irrelevant to the path taken? Is this since it is analytic everywhere - if so, how would I go about verifying this? I can't see how to apply the Cauchy Riemann equations here since I don't know how I would break this into the sum of a real and complex component.
Going to the Cauchy-Riemann equations is not a good way of showing that $z\mapsto ze^{z^2}$ is differentiable everywhere, just like reasoning explicitly about $\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$ wouldn't be a good way to investigate whether the real function $x\mapsto xe^{x^2}$ is differentiable everywhere. Instead note that the symbolic differentiation rules you learned in ordinary real calculus still work in the complex case: The product and sum of differentiable functions are differentiable, the composition of differentiable functions is differentiable, and so forth -- with the expected derivatives! So $$ \frac{d}{dz} ze^{z^2} = e^{z^2} + z(\frac d{dz}e^{z^2}) = e^{z^2} + z\cdot 2z \cdot e^{z^2} $$ by the product rule and then the chain rule. Since this computation works for every $z$, the function $ze^{z^2}$ is differentiable everywhere in the complex plane, and thus analytic, so Cauchy's integral theorem applies to it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Approximating non-rational roots by rational roots for a quadratic equation Let $a,b,c$ be integers and suppose the equation $f(x)=ax^2+bx+c=0$ has an irrational root $r$. Let $u=\frac p q$ be any rational number such that $|u-r|<1$. Prove that $\frac 1 {q^2} \leq |f(u)| \leq K|u-r|$ for some constant $K$. Deduce that there is a constant $M$ such that $\left|r - \frac p q\right| \geq \frac M {q^2}$. (This is useful in approximating the nonrational zeros of a polynomial.) (I wasn't sure what a non-rational root is. And used intermediate value theorem for $x=u-1$ and $x=u$ but couldn't move further.)
A non-rational root is simply a root that is not a rational number (i.e. it is irrational). The problem has nothing to do with the intermediate value theorem, it is an exercise in pure algebra. First, note that if $r$ is an irrational root, then the other root $r'$ must also be irrational (assume that $r'$ is rational; since $r+r'= - \frac b a$ which is rational, this would imply $r$ rational which is a contradiction). We shall use this in a moment. Now $|f(u)| = \frac {|a p^2 + bpq + c q^2|} {q^2}$, so let us show that the numerator is $\geq 1$. This numerator is an integer, so it is either $0$, or $\geq 1$. If it is $0$, then divide by $q^2$ and get $f(\frac p q)=0$. But this means that $f$ has a rational root, impossible according to the first paragraph. So, $\frac 1 {q^2} \leq |f(u)|$. Also, since $f(r)=0$, $|f(u)| = |f(u) - f(r)| = |a(u^2-r^2) + b(u-r)| = |u-r| \cdot |a(u+r)+b|$. Now, it is not clear what is meant by "constant" in the statement of the problem: constant with respect to what? I am going to assume that one means "constant with respect to $u$" (considering $r$ fixed). Then $|u+r| \leq |u-r| + 2|r| \leq 2|r| + 1$ and then $|a(u+r) + b| \leq |a| \cdot |u+r| + |b| \leq |a|(2|r|+1)+|b|$. If you call this last number $K$, then $|f(u)| \leq K |u-r|$. Finally, combine the two inequalities: $\frac 1 {q^2} \leq |f(u)| \leq K|u-r|$, so $\left|r - \frac p q\right| = |u-r| \geq \frac 1 {K q^2}$. Taking $M = \frac 1 K$ gives $\left|r - \frac p q\right| \geq \frac M {q^2}$, as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Elementary proof that $-1$ is a square in $\mathbb{F}_p$ for $p = 1 \mod{4}$ I am trying to proof that $-1$ is a square in $\mathbb{F}_p$ for $p = 1 \mod{4}$. Of course, this is really easy if one uses the Legendre Symbol and Euler's criterion. However, I do not want to use those. In fact, I want to prove this using as little assumption as possible. What I tried so far is not really helpful: We can easily show that $\mathbb{F}_p^*/(\mathbb{F}_p^*)^2 = \{ 1\cdot (\mathbb{F}_p^*)^2, a\cdot (\mathbb{F}_p^*)^2 \}$ where $a$ is not a square (this $a$ exists because the map $x \mapsto x^2$ is not surjective). Now $-1 = 4\cdot k = 2^2 \cdot k$ for some $k\in \mathbb{F}_p$. From here I am trying to find some relation between $p =1 \mod{4}$ and $-1$ not being a multiple of a square and a non-square.
For $p\ge 3$, $\mathbb F_p^*$ is a cyclic group of order $p-1$. If $g$ is a generator, then $g^\frac{p-1}2= -1$. In particular $-1$ will be a square if and only if $\frac{p-1}{2}$ is even - i.e. if $p \equiv 1 \pmod 4$. Note that this proof isn't fundamentally different from those using Fermat's little theorem. Indeed utilising the group theoretic structure of $\mathbb F_p^*$ gives one way of proving it.
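As a concrete illustration of the cyclic-group argument (my own sketch, assuming `sympy` is available): taking a primitive root $g$ and setting $x=g^{(p-1)/4}$ gives $x^2=g^{(p-1)/2}\equiv-1 \pmod p$ whenever $p\equiv 1\pmod 4$.

```python
from sympy.ntheory import isprime, primitive_root

for p in (5, 13, 17, 29, 101):
    assert isprime(p) and p % 4 == 1
    g = primitive_root(p)
    x = pow(g, (p - 1)//4, p)          # x^2 = g^((p-1)/2) = -1 (mod p)
    print(p, g, x, (x*x) % p == p - 1)
```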
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Result of the limit: $\lim_{x \to \infty} \sqrt{x + \sin(x)} - \sqrt{x}$? From my calculations, the limit of $\lim_{x \to \infty} \sqrt{x + \sin(x)} - \sqrt{x}$ Is undefined due to $sin(x)$ being a periodic function, but someone told me it should be zero. I was just wondering if someone could please confirm what the limit of this function is? Thanks Corey :)
or use the binomial theorem $$ \sqrt{x+\sin x}-\sqrt{x} = \sqrt{x} \left(\sqrt{1 + \frac{\sin x} {x}}-1\right) \\ = \sqrt{x}\left( \frac{\sin x}{2x} + O\Big(\frac1{x^2}\Big) \right) = \frac{\sin x}{2\sqrt x} + O\big(x^{-3/2}\big) \to 0 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 6 }
Coffee Shop Meeting $A$ and $B$ decide to meet at a cafe between $5$ p.m. and $6$ p.m. They agree that the person who arrives first at the cafe would wait for exactly $15$ minutes for the other. If each of them arrives at a random time between $5$ p.m. and $6$ p.m., what is the probability that the meeting takes place? I figured that if one of them arrive at the first minute then the probability of the two meeting each other would be $15/60$, because the second person could arrive from the $1^{st}$ minute till the $15^{th}$ minute and meet with him. Similarly if the first person arrives at the second minute the probability would be $16/60$. This will go on till the $14^{th}$ minute and the probability would be $29/60$. The probability will remain $29/60$ till the $45^{th}$ minute, after which it will gradually decrease in the order $28/60, 27/60,... , 15/60.$ I am not sure if my approach is correct. Also I am stuck after a point with my approach. Please explain elaborately how to solve such questions.
Let $X$ and $Y$ be the times, in units of hours after 5 p.m., at which $A$ and $B$ arrive. I assume here that they are uniformly distributed on $[0,1]$ and independent. Then the meeting happens provided $|X-Y| \leq 1/4$. So the probability of the meeting is $$\frac{\int_{|x-y| \leq 1/4,0 \leq x \leq 1,0 \leq y \leq 1} dx dy}{\int_{0 \leq x \leq 1,0 \leq y \leq 1} dx dy}.$$ That is, it is the area of the region in the plane where they meet divided by the area of the square (which is just $1$). This region is the square except for the two triangles which lie above $y=x+1/4$ and below $y=x-1/4$. These have height and width $3/4$, so their areas are each $9/32$, which add up to $9/16$. So the area of the region is $7/16$, which is also the probability of the meeting. A similar argument can be done when you assume that $X$ and $Y$ have a discrete distribution instead (as you seem to be doing in the original question).
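A quick Monte Carlo sanity check of the $7/16$ answer (my own sketch, plain Python):

```python
import random

trials = 10**6
hits = sum(abs(random.random() - random.random()) <= 0.25 for _ in range(trials))
print(hits / trials)    # close to 7/16 = 0.4375
```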
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
When should matrices have units of measurement? As a mathematician I think of matrices as $\mathbb{F}^{m\times n}$, where $\mathbb{F}$ is a field and usually $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$. Units are not necessary. However, some engineers have told me that they prefer matrices to have units such as metres, kg, pounds, dollars, etc. Assigning a unit of measurement to each entry to me seems restrictive (for instance if working with dollars then is $A^2$ allowed?). Here are a few things that I would like to understand more deeply: * *Are there examples where it is more appropriate to work with matrices that have units? *If units can only restrict the algebra, why should one assign units at all? *Is there anything exciting here, or is it just engineers trying to put physical interpretations on to matrix algebra? Also, see: https://stackoverflow.com/questions/11975658/how-do-units-flow-through-matrix-operations
The book by George W. Hart mentioned in the comments is actually a quite good reference on the topic. Furthermore, I also gave a talk on how a C++ library can be designed that is able to solve the topic for the general case of non-uniform physical units in vectors and matrices: https://www.youtube.com/watch?v=J6H9CwzynoQ
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Determine values of k for a matrix to have a unique solution I have the following system and need to find for what values of $k$ the system has i) a unique solution, ii) no solution, iii) an infinite number of solutions: $$(k^3+3k)x + (k+5)y + (k+3)z = k^5+(k+3)^2$$ $$ky + z = 3$$ $$(k^3+k^2-6k)z = k(k^2-9)$$ Putting this into a matrix I get $M =\left[ \begin{array}{cccc} k^3 + 3k & k+5 & k+3 & k^5+(k+3)^2 \\ 0 & k & 1 & 3 \\ 0 & 0 & k^3+k^2-6k & k(k^2-9)\\ \end{array}\right]$ I understand for ii) we need the bottom row to read $0 \ 0 \ 0 \ k(k^2-9)$ with $k(k^2-9)\neq 0$, which happens for $k=2$. For iii) we need the bottom row to read $0 \ 0 \ 0 \ 0$, which happens when $k=0$ and $k=-3$. Because the first column only contains a value in the top row, I can't use elementary row operations to change the value in $a_{11}$, so I don't know how I can make it into the identity matrix?
$\textbf{HINT:}$ Check the determinant of the coefficient matrix: if $\det\neq 0$ the system has a unique solution. If $\det=0$, look at the last row; when $a_{33}=0$, check $a_{34}$:

* if $a_{34}\neq0$ then there are no solutions;
* if $a_{34}=0$ then there are infinitely many solutions (for this system the remaining equations are consistent in that case).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding a nullspace of a matrix. I am given the following matrix $A$ and I need to find a nullspace of this matrix. $$A = \begin{pmatrix} 2&1&4&-1 \\ 1&1&1&1 \\ 1&0&3&-2 \\ -3&-2&-5&0 \end{pmatrix}$$ I have found a row reduced form of this matrix, which is: $$A' = \begin{pmatrix} 1&0&3&-2 \\ 0&1&-2&3 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix}$$ And then I used the formula $A'x=0$, which gave me: $$A'x = \begin{pmatrix} 1&0&3&-2 \\ 0&1&-2&3 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$ Hence I obtained the following system of linear equations: $$\begin{cases} x_1+3x_3-2x_4=0 \\ x_2-2x_3+3x_4=0 \end{cases}$$ So I just said that $x_3=\alpha$, $x_4=\beta$ and the nullspace is: $$nullspace(A)=\{2\beta-3\alpha,2\alpha-3\beta,\alpha,\beta) \ | \ \alpha,\beta \in \mathbb{R}\}$$ Is my thinking correct? Thank you guys!
Since $x_1=2x_4-3x_3$ and $x_2=2x_3-3x_4\Rightarrow$ if $(x_1,x_2,x_3,x_4)\in$ nullspace($A$): $$(x_1,x_2,x_3,x_4)=(2x_4-3x_3,2x_3-3x_4,x_3,x_4)=x_3(-3,2,1,0)+x_4(2,-3,0,1)$$ So Nullspace$(A)=\langle (-3,2,1,0),(2,-3,0,1) \rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1275856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Regularity theorem for PDE: $-\Delta u \in C(\overline{\Omega})$ implies $u\in C^1(\overline{\Omega})$? I stumbled over this question in the context of PDE theory: Let $U$ be connected,open and bounded in $\mathbb{R}^n$ and $u \in C^0(\overline{U}) \cap C^2(U)$ and $\Delta u \in C^0(\overline{U})$ with $u|_{\partial U} = \Delta u|_{\partial U} = 0.$ Does this imply that $u \in C^1(\overline{U})$?
If $U$ is a bounded domain with regular boundary then one approach is the following: Consider the problem $$ \left\{ \begin{array}{rl} -\Delta u=f &\mbox{ in}\ U, \\ u=0 &\mbox{on } \partial U. \end{array} \right. $$ Since $f\in C(\overline{U})$ and $U$ is bounded, we also have that $f\in L^p(U)$ for each $p\in [1,\infty)$. This implies in particular (see [1], Chapter 9) that $u\in W_0^{2,p}(U)$ for each $p\in [1,\infty)$. Now take $p$ big enough to conclude that $W_0^{2,p}(U)$ is continuously embedded in $C^1(\overline{U})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Factor Modules/Vector Spaces and its basis with canonical mappings I'm having trouble with factor modules now. Well, specifically the following question from a past paper. Q. T=$R^2$ i.e. the real plane, and define $f:T\to T$ with respect to the standard basis which has the 2 by 2 matrix $$A=\left( \begin{matrix} -6 & -9 \\ 4 & 6 \end{matrix} \right).$$ Give the basis $C$ for T/kerf and write the matrix for the mapping T/kerf ->T induced by $f$, with respect to the basis $C$ and the basis $B=(-3e_1+2e_2, 2e_1-e_2)$ where $e_1, e_2$ are the standard basis for $T$. I figured $kerf$ has to be $\{v \in T; v=k(\frac{-3}{2} ; 1)\}$ for some real $k$. (Am I correct?) So, if $u,v \in T$ and if $u+kerf = v+kerf$ then $u-v \in kerf$. I can equate $u-v = k(\frac{-3}{2} ; 1)$ but to be honest, I don't see this leading me anywhere. My exam's tomorrow and I don't really have so much time to spend on one question right now... Can anyone please explain how I should think of this problem? I might be asking much, but an answer with steps would be more than appreciated... My textbook doesn't even have examples like this and I am absolutely in the dark.
You are correct about what the kernel is. When you say "Give the basis $C$..." I assume you mean "Give a basis $C$...". I.e. you have to find a basis. $T/ \text{ker} f$ is one-dimensional since $\dim(T/S) = \dim(T) - \dim(S)$. Any nonzero vector of a one-dimensional space constitutes a basis. (I.e. pick any vector $v$ not in $\ker f$; then $v + \ker f$ is a basis for $T/\ker f$.) The image of the map $T / \ker f \to T$ is one-dimensional also, so it consists of multiples of a single vector (in this case it consists of scalar multiples of $(-3,2)$, say). Write $(-3,2)$ as a linear combination of the vectors of the basis $B$. This is equivalent to solving $$\left( \begin{matrix} -3 & 2 \\ 2 & -1\end{matrix} \right)\left( \begin{matrix} x \\ y \end{matrix} \right) = \left( \begin{matrix} -3 \\ 2\end{matrix} \right).$$ If you solve the above (you should be able to solve it without doing any work or writing anything down), and you pick as your basis for $T/ \ker f$ a vector $v$ such that $f(v) = (-3,2)$, then your matrix will be $$\left( \begin{matrix} x \\ y\end{matrix} \right).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conditional Probability Four fair 6-sided dice are rolled. The sum of the numbers shown on the dice is 8. What is the probability that 2's were rolled on all four dice? The answer should be 1/(# of ways 4 numbers sum to 8) However, I can't find a way, other than listing all possibilities, to find the denominator.
I proved a relevant theorem here (for some reason this was closed): Theorem 1: The number of distinct $n$-tuples of whole numbers whose components sum to a whole number $m$ is given by $$ N^{(m)}_n \equiv \binom{m+n-1}{n-1}.$$ To apply this to your problem, you need to take into account that the theorem allows a minimum value of zero, but each die shows a minimum value of one. So subtract one from each die; we are then looking for the number of 4-tuples of whole numbers whose components sum to 4. We are lucky that no die could possibly show more than 6 here (in fact none can exceed 5, since the four dice sum to 8 and each shows at least 1), so the upper bound for a 6-sided die never interferes. The denominator is thus given by $$\binom 73 = 35, $$ which I have verified by enumeration.
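A brute-force check (my own sketch, plain Python): among the $6^4$ equally likely ordered outcomes, exactly $35$ have sum $8$, and exactly one of them is $(2,2,2,2)$, so the conditional probability is $1/35$.

```python
from itertools import product

outcomes = [roll for roll in product(range(1, 7), repeat=4) if sum(roll) == 8]
print(len(outcomes))                                   # 35
print(outcomes.count((2, 2, 2, 2)) / len(outcomes))    # 1/35 ~ 0.0286
```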
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Perfect matching problem We have a graph $G = (V,E)$. Two players are playing a game in which they are alternately selecting edges of $G$ such that in every moment all the selected edges are forming a simple path (path without cycles). Prove that if $G$ contains a perfect matching, then the first player can win. A player wins when the other player is left with edges that would cause a cycle. I tried with saying that $M$ is a set of edges from perfect matching and then dividing the main problem on two sub problems where $M$ contains an odd or even number of edges, but it lead me nowhere. Please help.
The first player always plays edges of the perfect matching, starting with an arbitrary matching edge; the second player can never play a matching edge. The key observation is that after each of the first player's moves, every vertex on the path is matched (by the perfect matching) to another vertex on the path. Hence, when the second player extends the path from an endpoint to a new vertex $v$, the edge he used is not a matching edge, and the matching partner of $v$ is not yet on the path; so the first player can answer with the matching edge at $v$, which extends the path without closing a cycle and restores the invariant. The first player therefore always has a legal move, and since the game is finite, it is the second player who is eventually forced to create a cycle (at the latest, when all of the matching edges have been used the path covers all of the vertices, so any further edge must form a cycle).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is this problem asking? (limits of functions) I was asked to prove or find a counterexample of the statement "Suppose that $f: \mathbb{R}\to \mathbb{R}$, and let $x_k=f(\dfrac{1}{k})$. Then $\lim_{k\to\infty} f(x_k)=\lim_{t\to0} f(t)$." I'm not sure what the statement is saying. If anyone could explain it, it would be appreciated.
There are two different types of limits being used in the statement $$\lim_{k\to\infty}f(x_k)=\lim_{t\to 0} f(t).$$ On the left we have a sequence: $$f(x_1), f(x_2), f(x_3), \ldots$$ to which we might apply the definition of $x_k$ to rewrite as $$f(f(1)), f(f(1/2)), f(f(1/3)), \ldots$$ The limit of this sequence is given by $L$ if for every real number $\epsilon > 0$, there exists a natural number $N$ such that for all $n > N$, we have $|f(f(1/n)) − L| < \epsilon$. On the right, we have the limit of a function, which is given by $K$ if for all $\epsilon>0$, there exists a $\delta>0$ such that for all $t\in\mathbb{R}$ satisfying $0<|t−0|<\delta$, the inequality $|f(t)−K|<\epsilon$ holds. So the question is asking you to determine whether, for any arbitrary function $f\colon \mathbb{R}\to\mathbb{R}$, the limits $K$ and $L$ here must be equal. For a concrete example of what's going on, consider the function $f(x)=x$. Then $f(f(1/k))=f(1/k)=1/k$, so the sequence is just $$1,1/2,1/3,\ldots$$ and it is easy to show that the limit is zero by taking $N\geq 1/\epsilon$. On the other hand $\lim_{t\to 0} f(t)$ is also zero, since in the definition of the limit we can always just take $\delta=\epsilon$. So for this particular function, the statement holds. Does it hold for all others?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Consider $n, k ∈ \mathbb{N}$. Prove that $\gcd(n, n + k)\mid k$ Consider $n, k\in\mathbb N$. Prove that $\gcd(n, n + k)\mid k$ Here is my proof. Is it correct? A proposition in number theory states the following: $$\forall a, b \in \mathbb{Z}, \ \gcd(n,m)\mid an+bm$$ A corollary (consequence) to this is that if $an+bm=1$ for some $a, b \in \mathbb{Z}$, then $\gcd(n,m)=1$. Again in our problem $(n+k)-n=k$, therefore we can use this to see that $\gcd(n,n+k)=k$.
Even easier: let $d=\gcd(n,n+k)$.

* If $d=1$ then $d \mid k$.
* If $d>1$ then $d \mid n$ and $d \mid n+k$, i.e. $d\cdot q_1=n$ and $d\cdot q_2=n+k$. Altogether $d\cdot q_2 = d\cdot q_1 +k$, so $d(q_2-q_1)=k \Rightarrow d \mid k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
For what values of k is this singular matrix diagonalizable? So the matrix is the following: \begin{bmatrix} 1 &1 &k \\ 1&1 &k \\ 1&1 &k \end{bmatrix} I've found the eigenvalues, which are $0$ with an algebraic multiplicity of $2$ and $k+2$ with an algebraic multiplicity of $1$. However I'm having trouble with the eigenvectors of the eigenvalue $0$. I've put this through Wolfram and I should've got: \begin{pmatrix} -k\\ 0\\ 1 \end{pmatrix} and \begin{pmatrix} -1\\ 1\\ 0 \end{pmatrix} But when finding the eigenvectors, what I end up with (taking $x_{1}=r$ and $x_{2}=t$) is \begin{Bmatrix} r\\ t\\ \frac{-r-t}{k} \end{Bmatrix} which, even if I separate it, wouldn't be the vectors Wolfram gave me. Is there something I'm missing? Also, is there a better way to find out the initial answer?
Hint: Substitute $r = -k$ and $t = 0$ to get the first eigenvector Wolfram alpha is giving you, and $r = -1$ and $t = 1$ to get the second one. Wolfram alpha isn't giving you the entire two-dimensional eigenspace, but only an eigenbasis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$y^5 =(x+2)^4+(e^x)(ln y)−15$ finding $\frac{dy}{dx}$ at $(0,1)$ Unsure what to do regarding the $y^5$. Should I convert it to a $y$= function and take the $5$ root of the other side. Then differentiate? Any help would be great thanks.
Unfortunately we cannot rearrange this to find $y$. Fortunately we can still use implicit differentiation to find a derivative. Take the derivative of all terms w.r.t. $x$, using the chain rule and treating $y$ as a function of $x$ - we obtain $$ 5y^4\frac{\mathrm{d}y}{\mathrm{d}x} = 4(x+2)^3 + \mathrm{e}^x\ln y +\frac{\mathrm{e}^x}{y}\frac{\mathrm{d}y}{\mathrm{d}x}. $$ Note that we used the product rule for the $\mathrm{e}^x\ln y$ term. Now the trick is to rearrange to solve for the derivative, and then substitute $(x,y) = (0,1)$. Can you take it from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many 1's can a regular 0,1-matrix contain? A matrix of order $n$ has all of its entries in $\{0,1\}$. What is the maximum number of $1$ in the matrix for which the matrix is non singular.
It should be $n^2-n+1$. Let $v=e_1+\cdots+e_n$, where $\{e_1, \cdots, e_n\}$ is the standard orthonormal basis. Then the matrix with row vectors $\{v, v-e_1, \cdots, v-e_{n-1}\}$ is nonsingular (subtract the first row from each of the others to obtain $-e_1,\dots,-e_{n-1}$ together with $v$) and it contains $n^2-n+1$ ones. This is the maximum: a $0,1$-matrix with more than $n^2-n+1$ ones has fewer than $n-1$ zero entries, so at least two of its rows contain no zero at all, i.e. two rows consist entirely of ones, and such a matrix is singular.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Proving that there exists a horizontal chord with length $1/n$ for a continuous function $f: [0,1] \to \mathbb R$ Given a continuous function $f: [0,1] \to \mathbb R$ and that the chord which connects $A(0, f(0)), B(1, f(1))$ is horizontal then prove that there exists a horizontal chord $CD$ to the graph $C_f$ with length $(CD) = 1 / n$ where $n$ is a nonzero natural number. I have proven the statement but (by methods of reductio ad absurdum and signum) but I was wondering if there is a proof by induction, namely proving that there exists some $x_n \in [0,1]$ such that $f(x_n) = f(x_n + 1/n)$. My logic is the following: * *Proving the statement for $n=2$ (pretty easy, just apply Bolzano's theorem to f(x) - f(x + 1/2) ). Note that $f(0)=f(1)$ *For $n = k$ we let $h_k : A_k \to \mathbb R$ be a continuous function where $$A_k = \bigg [0, \frac {k-1} k \bigg ] \\ h_k(x) = f(x) - f \bigg (x + \frac 1 k \bigg )$$ which satisfies our hypothesis (the existence of at least one $x_k: h_k(x_k) = 0$) *Proving the statement for $n = k + 1$. I was thinking of applying Bolzano's theorem to some interval but I have not found such an interval. Is there one?
Intervals with length $\frac{1}{k}$ and $\frac{1}{k+1}$ overlap pretty badly, so I do not think it is a good idea to use induction on $k$. Instead, just take $h_n:[0,1-1/n]\to\mathbb{R}$: $$ h_n(x) = f\left(x+\frac{1}{n}\right)-f(x) $$ which is obviously continuous. If $h_n$ is zero somewhere in $I=[0,1-1/n]$ we have nothing to prove, and the same holds if $h_n$ changes its sign over $I$, by continuity. However, $h_n$ cannot always have the same sign, because in such a case: $$ 0 = f(1)-f(0) = \sum_{k=0}^{n-1}h_n\left(\frac{k}{n}\right)\neq 0$$ we get a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1276889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Identity of tensor products over an algebra I hope this question hasn't been asked before, but I wasn't sure what to search so as to check. Suppose $R$ is a commutative ring with 1 and that $A$, $B$ are unital $R$-algebras. Suppose further that $M$, $M'$ are $A$-modules and $N$, $N'$ are $B$-modules. Is the following isomorphism of $A \otimes_R B$-modules correct, and if so, how to prove it? $$(M \otimes_R N) \otimes_{A \otimes_R B} (M' \otimes_R N') \cong (M \otimes_A M') \otimes_R (N \otimes_B N').$$ Here we view $A \otimes_R B$ as an $R$-algebra in the typical way, while $M$, $N$, $M'$, $N'$ are $R$-modules via the embedding $r \mapsto r \cdot 1$, and $M \otimes_R N$ is an $A \otimes_R B$-module via the factor-wise action, etc. The obvious thing to try is the map $(m \otimes n) \otimes (m' \otimes n') \mapsto (m \otimes m') \otimes (n \otimes n')$, but showing it's well-defined seems an intractable nightmare. Thanks for any advice.
For $M,M',N,N'$ free there is an obvious, well-defined isomorphism. In fact, if $M=A^{(I)}$, $M'=A^{(I')}$, $N=B^{(J)}$, $N'=B^{(J')}$, both sides are naturally isomorphic to $(A\otimes_R B)^{(I\times I'\times J\times J')}$. In the general case, take free resolutions of $M,M',N,N'$; they induce free resolutions of both $(M \otimes_R N) \otimes_{A \otimes_R B} (M' \otimes_R N')$ and $(M \otimes_A M') \otimes_R (N \otimes_B N')$ as $A\otimes_R B$-modules, and we have an obvious isomorphism between the resolutions. Thus our modules are isomorphic as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Factorising polynomials over $\mathbb{Z}_2$ Is there some fast way to determine whether a polynomial divides another in $\mathbb{Z}_2$? Is there some fast way to factor polynomials in $\mathbb{Z}_2$ into irreducible polynomials? Is there a fast way to find $n$ such that a given polynomial in $\mathbb{Z}_2$ divides $x^n-1$? Could there be a strategy/checklist to check this as fast as possible? Please help me.
To check for divisibility is easy: since $\mathbb Z_2$ is a discrete valuation ring, for any two polynomials $f, g$, $f$ divides $g$ iff it divides $g$ over the fraction field $\mathbb Q_2$ and the content of $f$ divides the content of $g$. Testing this is just Euclidean division, which is “fast” enough by any definition. To factor into irreducibles, you first want to write the Newton polygon of your polynomial; this gives you a first factorization (by Hensel's lemma). Each segment of this polygon, however, gives you a product of irreducibles; factoring this is then equivalent to factoring over the residue field $\mathbb F_2$. (This is the case of a horizontal segment of the Newton polygon; you can bring a non-horizontal segment to this case by substituting $x \leftarrow c x$, where $c$ has the appropriate valuation.) In total, full factorization over $\mathbb Z_2$ should be at most cubic, which is still probably reasonably fast.
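For the residue-field step, a computer algebra system can do the factoring over $\mathbb F_2$ directly. A small SymPy sketch (the polynomials below are just examples I chose for illustration):

```python
from sympy import symbols, Poly, factor_list

x = symbols('x')

# Factor x^7 - 1 over F_2; modulus=2 tells SymPy to work modulo 2.
print(factor_list(x**7 - 1, x, modulus=2))

# Divisibility over F_2: f divides g iff polynomial division leaves remainder 0.
f = Poly(x**3 + x + 1, x, modulus=2)
g = Poly(x**7 - 1, x, modulus=2)
print(g.rem(f))   # 0, so x^3 + x + 1 divides x^7 - 1 over F_2
```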
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$N^2 = T^*T$, $N$ is non-negative, and $T$ is invertible, how to prove $N$ is also invertible? I'm reading Hoffman's "Linear Algebra", and came across this line that I couldn't figure out. In $\S9.5$ Spectral Theory, p. 342, he mentions: Let $V$ be a finite-dimensional inner product space and let $T$ be any linear operator on $V$. Then $N$ is uniquely determined as the non-negative square root of $T^*T$. If $T$ is invertible, then so is $N$ because $$\langle N\alpha, N\alpha\rangle = \langle N^2\alpha, \alpha \rangle = \langle T^*T\alpha, \alpha \rangle = \langle T\alpha, T\alpha\rangle.$$ I'm lost at the last line: I have no problem with the equation, but how would that prove that "If $T$ is invertible, then so is $N$"?
If $T$ is invertible then $\ker T=\{0\}$. The equation $$ \|N\alpha\|^2 = \langle N\alpha,N\alpha\rangle = \langle T\alpha,T\alpha\rangle= \|T\alpha\|^2 $$ implies $\ker N = \ker T$. Hence $\ker N=\{0\}$, and since $N$ is a linear mapping from the finite-dimensional space $V$ to $V$, it is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to do this integral. I need to do this: $$\int_0^\infty e^{ikt}\sin(pt)\,dt$$ I already have the solution, I'm just clueless on how to actually calculate it. I've tried several changes of variables, integration by parts, and extending it to a $(-\infty,\infty)$ integral so it'd be a simple Fourier transform, but I can't solve it. I would appreciate a solution or hints. Thank you very much. EDIT: The result must be of the form: $$\frac{a}{s}\pm ibs\text{ sign}(k)\delta(-p^2)$$ where $a,b$ are constants, and $s$ is the modulus of a vector measured with a metric of Minkowski type $\text{diag}(-1,1,1,1)$, where the $0$ component is $k$ and the $3d$ part has modulus $p$. All in all, the integral comes from calculating a Green function that ends up being: $$\frac{1}{4\pi(x-x')}\int_0^\infty e^{-ik(t-t')}\sin[k(x-x')]dk$$ And the result should be: $$\frac{1}{8\pi^2(x-x')^2}-\frac{i}{8\pi}\text{ sign}(t-t')\delta(-(x-x')^2)$$ The constants in each term $\textit{may}$ be wrong, but that's the structure of the result.
Under the condition that all integrals exist ($|\operatorname{Im}(p)|<\operatorname{Im}(k)$) $$\begin{align} \int_0^\infty e^{ikt}\sin(pt)\,dt&=\frac{1}{2\,i}\int_0^\infty e^{ikt}(e^{ipt}-e^{-ipt})\,dt\\ &=\frac{1}{2\,i}\Bigl(\frac{e^{i(k+p)t}}{i(k+p)}-\frac{e^{i(k-p)t}}{i(k-p)}\Bigr|_0^\infty\Bigr)\\ &=\frac12\Bigl(\frac{1}{k+p}-\frac{1}{k-p}\Bigr). \end{align}$$
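As a numerical sanity check (my addition, not part of the original answer), one can give $k$ a positive imaginary part so the integral converges and compare against the closed form; the values of $k$ and $p$ below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

k = 1.3 + 0.2j   # arbitrary, with Im(k) > |Im(p)| so the integral converges
p = 0.7

integrand = lambda t: np.exp(1j * k * t) * np.sin(p * t)
re, _ = quad(lambda t: integrand(t).real, 0, np.inf)
im, _ = quad(lambda t: integrand(t).imag, 0, np.inf)

print(re + 1j * im)                      # numerical value of the integral
print(0.5 * (1/(k + p) - 1/(k - p)))     # closed form; should agree to quad's tolerance
```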
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the practical meaning of derivatives? I mean, practically, integration means a sum of all the components, and the integral can be visualized as the area below a curve. Is there a similar intuition or geometric meaning for the derivative?
Interestingly, the history of Calculus is a bit backwards from the way it is taught in schools. It began with the investigation into finding areas of certain geometric shapes, a field of Mathematics called Quadrature. This led the way to integration theory. One landmark result is due to Fermat, who demonstrated: $$\int_0^a x^n dx = \frac{a^{n+1}}{n+1}.$$ This was done using a geometric series and methods for factoring polynomials. It wasn't until half a century later that the derivative was invented. What does the derivative represent? Well, the quick answer is that it measures the "instantaneous rate of change" of a function at a point $x$. A better way to think about it is that the derivative is the slope of the line that best approximates a function at a point $x$. This is evident when we consider the definition of the derivative given as follows. Let $f:\mathbb{R}\to\mathbb{R}$ be a function. We say that $f$ is differentiable at a point $x$ if there exists an $M$ for which $$\lim_{h\to0} \frac{|f(x+h)-f(x) - Mh|}{|h|} = 0.$$ We then say $f'(x)=M$. This says that when $h$ is small, $Mh$ becomes a good approximation of $f(x+h)-f(x)$. In other words, $$f(x+h)-f(x) \approx Mh = f'(x)[(x+h) - x].$$ Thus we can approximate the deviation of $f(x+h)$ from $f(x)$ by using the derivative, $f(x+h) \approx f'(x) h + f(x)$. It was later found that integration and differentiation were intimately connected. This is known as the Fundamental Theorem of Calculus, and it is what connects the two main ideas of Calculus, Integration and Differentiation. It tells us that if a continuous function $f$ is used to define $$F(x) = \int_0^x f(t) dt$$ then $F'(x) = f(x)$. In other words: $$\frac{d}{da} \frac{a^{n+1}}{n+1} = \frac{d}{da} \int_0^a x^n dx = a^n.$$
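To make the "best linear approximation" reading concrete, here is a tiny numeric sketch (the function and the point are arbitrary choices of mine):

```python
# f(x) = x**3 at x = 2, so M = f'(2) = 12.  As h shrinks, the error of the
# linear approximation f(x) + M*h shrinks faster than h itself.
f = lambda x: x**3
x, M = 2.0, 12.0
for h in (0.1, 0.01, 0.001):
    error = abs(f(x + h) - f(x) - M * h)
    print(h, error, error / h)    # error/h -> 0, exactly as the definition requires
```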
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem on a numerical sequence Give an example of a sequence $(A_n)$ which is not convergent, but such that the sequence $(B_n)$ defined by $\displaystyle B_n = \frac{A_1+A_2+\cdots+A_n}{n}$ is convergent.
You can also consider the alternating $0,1$ sequence, i.e. $$A_n = 0 \mbox{ if $n$ is odd} $$ $$A_n =1 \mbox{ if $n$ is even }$$ Then $B_n \to 1/2$, but as $A_n$ is oscillatory, it does not converge. For more details, refer to Cesàro sums and Cesàro means.
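A quick numeric illustration of the Cesàro means converging (just a sanity check):

```python
# A_n = 0 for odd n, 1 for even n: A_n does not converge, but the running
# averages B_n = (A_1 + ... + A_n)/n tend to 1/2.
total = 0
for n in range(1, 100001):
    total += 0 if n % 2 else 1
    if n in (10, 100, 1000, 100000):
        print(n, total / n)    # approaches 0.5
```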
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Extending an automorphism to the integral closure I need some help to solve the second part of this problem. I would also appreciate corrections to my solution to the first part. The problem is the following. Let $\sigma$ be an automorphism of the integral domain $R$. Show that $\sigma$ extends in a unique way to the integral closure of $R$ in its field of fractions. My solution: denote by $\overline{R}$ the normalization of $R$ and by $\overline{\sigma}$ the extension of $\sigma$. Given some $r/s\in \overline{R}$, we can define $\overline{\sigma}(r/s) = \sigma(r)/\sigma(s)$. From this I can show that $\overline{\sigma}(r/s)\in\overline{R}$. So this looks like a decent extension. I guess the extension is supposed to be a homomorphism (or even an automorphism), although this is not said explicitly. The problem comes when I try to show it is unique. I don't know what to do; all my approaches failed.
The uniqueness is fairly trivial: if $\bar{\sigma}$ is an extension, then it must be defined as you said. Indeed $\bar{\sigma}(r/s)=\bar{\sigma}(r\cdot s^{-1})=\bar{\sigma}(r)\cdot\bar{\sigma}(s)^{-1}=\sigma(r)\cdot\sigma(s)^{-1}=\sigma(r)/\sigma(s)$. And indeed $\bar{\sigma}$ is an automorphism if $\sigma$ is, since the inverse is given by the unique extension of $\sigma^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
proof that $\frac{x^p - 1}{x-1} = 1 + x + \dots + x^{p-1}$ is irreducible I am reading the group theory text of Eugene Dickson. Theorem 33 shows this polynomial is irreducible $$ \frac{x^p - 1}{x-1} = 1 + x + \dots + x^{p-1} \in \mathbb{Z}[x]$$ He shows this polynomial is irreducible in $\mathbb{F}_q[x]$ whenever $p$ is a primitive root mod $q$. By Dirichlet's theorem there are infinitely many primes $q = a + ke$, so this polynomial is "algebraically irreducible", I guess in $\mathbb{Q}[x]$. Do you really need a strong result such as the infinitude of primes in arithmetic sequences in order to prove this result? Are there alternative ways of demonstrating this is irreducible for $p$ prime? COMMENTS Dirichlet's theorem comes straight out of Dickson's book. I am trying to understand why he did it. Perhaps he did not know Eisenstein's criterion. It's always good to have a few proofs on hand. Another thing is that Eisenstein's criterion is no free lunch since it relies on Gauss's lemma and ultimately on extending unique factorization from $\mathbb{Z}$ to $\mathbb{Z}[x]$.
To clarify an issue that came up in the comments: by Gauss's lemma a monic polynomial is irreducible over $\mathbb{Q}[x]$ iff it's irreducible over $\mathbb{Z}[x]$, and by reduction $\bmod p$, if a monic polynomial is irreducible $\bmod p$ for any particular prime $p$, then it's irreducible over $\mathbb{Z}[x]$. So once you know that the polynomial is irreducible $\bmod q$ for any prime $q$ which is a primitive root $\bmod p$, it suffices to exhibit one such prime. Unfortunately, as far as I know this is no easier than Dirichlet's theorem. Among other things, it's a straightforward exercise to show that Dirichlet's theorem is equivalent to the assertion that for any $a, n$ with $n \ge 2$ and $\gcd(a, n) = 1$ there is at least one prime (rather than infinitely many primes) congruent to $a \bmod n$. Regarding your last comment, Eisenstein's criterion is still much, much, much easier to prove than Dirichlet's theorem, even with all of the details filled in.
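A quick SymPy illustration of the criterion (my own example with $p=5$: $2$ is a primitive root mod $5$, while $11 \equiv 1 \pmod 5$ is not):

```python
from sympy import symbols, factor_list

x = symbols('x')
f = 1 + x + x**2 + x**3 + x**4        # (x^5 - 1)/(x - 1), i.e. p = 5

# 2 is a primitive root mod 5, so f stays irreducible mod 2 ...
print(factor_list(f, x, modulus=2))
# ... while mod 11 (where F_11 already contains 5th roots of unity) f splits completely.
print(factor_list(f, x, modulus=11))
```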
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Calculate fractional part of square root without taking square root Let's say I have a number $x>0$ and I need to calculate the fractional part of its square root: $$f(x) = \sqrt x-\lfloor\sqrt x\rfloor$$ If I have $\lfloor\sqrt x\rfloor$ available, is there a way I can achieve (or approximate) this without having to calculate any other square roots? I would like to avoid square roots for performance reasons as they are usually a rather slow operation to calculate. As an example of what I am thinking about, here is one option: $$f(x)\approx\frac{x-\lfloor\sqrt x\rfloor^2}{(\lfloor\sqrt x\rfloor+1)^2-\lfloor\sqrt x\rfloor^2}$$ As $x$ gets large, this approximation becomes better and better (less than 1% error if $\sqrt x\ge40$), but for small $x$, it's quite terrible (unbounded error as $x$ goes to $0$). Can I do better? Preferably an error of less than 1% for any $\sqrt x<10000$. The holy grail would be to find a formula that doesn't need any division either. Aside: In case anyone is interested why I need this: I'm trying to see if I can speed up Xiaolin Wu's anti-aliased circle drawing functions by calculating the needed variables incrementally (Bresenham-style)
After a bit more reading on Ted's Math World (first pointed out by @hatch22 in his answer), I came across a quite beautiful way to perform this calculation using the Pythagorean theorem: Given some estimate $y$ of the square root (in the case here we have$\lfloor\sqrt{x}\rfloor$), if we let the hypotenuse of a right triangle be $x+y^2$ and one of its other two sides $x-y^2$, then the remaining side will have length $2y\sqrt{x}$, i.e. it will contain the desired answer. The angle $\alpha$ formed by the side of interest and the hypotenuse can be calculated as: $$\alpha=\sin^{-1}\frac{x-y^2}{x+y^2}\approx\frac{x-y^2}{x+y^2}$$ where the approximation is the first term of the Maclaurin Series for $\sin^{-1}$. The side of interest can then be calculated from: $$2y\sqrt{x}=(x+y^2)\cos\alpha\approx(x+y^2)\cos\frac{x-y^2}{x+y^2}\approx(x+y^2)\left(1-\frac{1}{2}\left(\frac{x-y^2}{x+y^2}\right)^2\right)$$ Where the second approximation are the first two terms of the Maclaurin Series for $\cos$. From this, we can now get: $$\sqrt{x}\approx\frac{x+y^2}{2y}\left(1-\frac{1}{2}\left(\frac{x-y^2}{x+y^2}\right)^2\right)=\frac{x^2+6xy^2+y^4}{4y(x+y^2)}$$ To get the fractional part of $\sqrt{x}$ in the range $0..255$, this can be optimized to: $$y_{\,\text{square}}=y\times y$$ $$s=x+y_{\,\text{square}}$$ $$r=\frac{(s\times s\ll6) + (x\times y_{\,\text{square}}\ll8)}{s\times y}\,\,\&\,\,255$$ where $\ll$ signifies a bit-wise shift to the left (i.e. $\ll6$ and $\ll8$ are equivalent to $\times\,64$ and $\times\,256$ respectively) and $\&$ signifies a bit-wise and (i.e. $\&\,255$ is equivalent to $\%\,256$ where $\%$ stands for the modulus operator). The amazing part is that despite the minimal Maclaurin Series used, if we can use the closer of $\lfloor\sqrt{x}\rfloor$ and $\lceil\sqrt{x}\rceil$ as the estimate $y$ (I have both available), the answer in the range $0..255$ is actually EXACT!!! for all values of $x\ge1$ that don't lead to an error due to overflow during the calculation (i.e. $x<134\,223\,232$ for 64-bit signed integers and $x<2\,071$ for 32-bit signed integers). It is possible to expand the usable range of the approximation to $x<2\,147\,441\,941 $ for 64-bit signed integers and $x<41\,324$ for 32-bit signed integers by changing the formula to: $$r=\left(\frac{s\ll6}{y} + \frac{x\times y\ll8}{s}\right)\,\&\,\,255$$ But due to the earlier rounding, this leads to a reduction in the accuracy such that the value is off by $1$ in many cases. Now the problem: A little bit of benchmarking and reading indicates that on many processors a division operation is actually not much faster than a square root. So unless I can find a way to get rid of the division as well, this approach isn't actually going to help me much. :( Update: If an accuracy of $\pm 1$ is acceptable, the range can be increased significantly with this calculation: $$k = \frac{(x + y \times y) \ll 5}{y}$$ $$r = \left(\left(k + \frac{x \ll 12}{k}\right)\ll 1\right)\,\&\,\,255$$ For 32-bit signed integers, this works for any $x<524\,288$, i.e. it breaks down as soon as $x \ll 12$ overflows. So, it can be used for circles up to radius 723 pixels. Note, that $y$ does not change on every step of the Bresenham algorithm, so $1/y$ can be pre-calculated and therefore does not add a full second division to the algorithm.
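For what it's worth, here is a small Python harness (my own sketch, not from the answer above) that implements the integer formula and compares it against an exact reference; since $\lfloor 256\sqrt{x}\rfloor = \lfloor\sqrt{65536\,x}\rfloor$, the low byte of `math.isqrt(x << 16)` serves as the reference value, which makes the exactness claim easy to test over any range of $x$.

```python
import math

def frac_sqrt_byte(x):
    """Fractional part of sqrt(x) scaled to 0..255, via the integer formula above,
    using the closer of floor(sqrt(x)) and ceil(sqrt(x)) as the estimate y."""
    f = math.isqrt(x)
    y = f if x - f * f <= (f + 1) * (f + 1) - x else f + 1
    y_sq = y * y
    s = x + y_sq
    return ((s * s << 6) + (x * y_sq << 8)) // (s * y) & 255

# Reference: floor(256*sqrt(x)) = isqrt(65536*x); its low byte is the scaled fraction.
mismatches = [x for x in range(1, 200000)
              if frac_sqrt_byte(x) != math.isqrt(x << 16) & 255]
print(len(mismatches), mismatches[:10])
```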
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Finding the integral I've come across the following integration problem online: $$\int{\frac{x^3}{x^4-2x+1}}\,\,dx$$ This is beyond my current knowledge of integrals, but it seems to be a very interesting integration problem. Would anyone be able to give a tip on how to begin solving the expression? If there are any techniques, please elucidate - e.g. substitution, trigonometric identities, etc. Thanks!
HINT: You can solve the integral in this way: $$\int{\frac{x^3}{x^4-2x+1}}\,\,dx=$$ $$\frac{1}{4}\int{\frac{4x^3+2-2}{x^4-2x+1}}\,\,dx=$$ $$\frac{1}{4}\int{\frac{4x^3-2}{x^4-2x+1}}\,\,dx+\frac{1}{4}\int{\frac{2}{x^4-2x+1}}\,\,dx=$$ $$\frac{1}{4}\ln|x^4-2x+1|+\frac{1}{4}\int{\frac{2}{x^4-2x+1}}\,\,dx,$$ and then you have to factor the denominator.
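For the remaining piece, a computer algebra system confirms how the denominator factors; a small SymPy sketch (just a check, not a full solution):

```python
from sympy import symbols, factor, diff, log, simplify

x = symbols('x')
den = x**4 - 2*x + 1
print(factor(den))    # (x - 1)*(x**3 + x**2 + x - 1): use this for partial fractions

# Sanity check of the logarithmic term in the hint:
print(simplify(diff(log(den) / 4, x) - (4*x**3 - 2) / (4*den)))    # 0
```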
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Notation for a vector with constant equal components of arbitrary dimension Is there a standard notation to specify vectors of the form $(c,c,\ldots,c)$, where $c$ is some constant? For example, suppose I have the vector $(c,c,c,c,c)$, which has 5 components. Is there a standard (shorthand) way to write such a vector? I am dealing with such vectors of arbitrary size, so I would like a better way to specify them.
You can use $\overrightarrow{(c)}_n$ or $\overrightarrow{c}$. Another common convention is to write $c\,\mathbf{1}_n$ (or just $c\,\mathbf{1}$), where $\mathbf{1}_n$ denotes the all-ones vector of length $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Fermat's Little Theorem: How does $(3^7)^{17}$ leave the same remainder when divided by $17$ as $3^7$? As it is used in this explanation, how and why does $(3^7)^{17}$ leave the same remainder when divided by $17$ as $3^7$? Thanks!
Because $a^p \equiv a \pmod p$ (where $p$ is a prime) by Fermat's little theorem. (Think of $3^7$ as $a$.)
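A one-line numeric confirmation (just a sanity check):

```python
# Fermat's little theorem with a = 3**7 and p = 17: a^p and a leave the same remainder.
a, p = 3**7, 17
print(pow(a, p, p), a % p)    # both print the same residue
```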
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Find volume of solid using cylindrical shell method Here's my problem: I have to use the shell method to find the volume of the solid obtained by rotating the region bounded between $f(x)=4x-x^2$ and $y=3$ about $y=2$. I understand I have to first solve for $x$, which is weird because I have two $x$ terms with different powers. So does this mean I am going to have $x$ on both sides? How would I use the shell method if this is the case?
You only need one of those equations, since the two halves of the region we want to rotate are mirror images of each other, if that makes sense. So you should solve that equation above and get $x =\pm \sqrt{4-y}+2$. We'll use the right-hand side of the parabola and rotate it. We will also use the axis of symmetry of our parabola to help guide us to what we need to put in our formula: $2 \cdot \int_3^4 2 \pi (\sqrt{4-y}+2-2) \cdot (y-2) dy$. The factor of 2 in front of the integral comes from the fact that the region extends on both sides of $x=2$, so each shell's width is doubled. This answer should be equivalent to what we get doing the washer method: $\pi \int_1^3 ((4x-x^2-2)^2-(3-2)^2) dx$.
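A quick symbolic check that the two set-ups agree (a sketch; the common value should come out to $56\pi/15$):

```python
from sympy import symbols, integrate, sqrt, pi

x, y = symbols('x y')

shell  = 2 * integrate(2*pi*sqrt(4 - y)*(y - 2), (y, 3, 4))
washer = integrate(pi*((4*x - x**2 - 2)**2 - 1), (x, 1, 3))
print(shell, washer)    # both print 56*pi/15
```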
{ "language": "en", "url": "https://math.stackexchange.com/questions/1277957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inequality used in the proof of Kolmogorov Strong Law of Large Numbers I'm trying to show that the convergence follows. $$\sum_{k \geq 1} \frac{\sigma^2_k}{k^2} < \infty \Rightarrow \lim_{M \rightarrow \infty}\frac{1}{M^2}\sum_{k \leq M} \sigma^2_k=0.$$ Let's consider $D_k = \sum_{n \geq k} \displaystyle\frac{\sigma_n^2}{n^2}$ for $k \geq 1$, and it is noted that for $k=1$ we have: $$D_1 = \sum_{n \geq 1} \frac{\sigma_n^2}{n^2} < \infty \text{ (by hypothesis)}.$$ From this I have proved that $\lim_{k \rightarrow \infty} D_k = \lim_{k \rightarrow \infty} \sum_{n \geq k} \frac{\sigma_n^2}{n^2} = 0$. Now note the following: \begin{eqnarray*} D_2 &=& \sum_{n \geq 2} \frac{\sigma_n^2}{n^2} = \frac{\sigma_2^2}{2^2} + \frac{\sigma_3^2}{3^2} + \frac{\sigma_4^2}{4^2} + \ldots\nonumber \\ D_3 &=& \sum_{n \geq 3} \frac{\sigma_n^2}{n^2} = \frac{\sigma_3^2}{3^2} + \frac{\sigma_4^2}{4^2} + \ldots \nonumber\\ D_4 &=& \sum_{n \geq 4} \frac{\sigma_n^2}{n^2} = \frac{\sigma_4^2}{4^2} + \ldots \nonumber\\ &\vdots& \\ \lim_{k \rightarrow \infty} D_k &=& \lim_{k \rightarrow \infty}\sum_{n \geq k} \frac{\sigma_n^2}{n^2} = 0. \nonumber \end{eqnarray*} Then I considered an $M$ with $M \geq 1$ and came to the following equation: $$\frac{1}{M^2}\sum_{k=1}^M \sigma^2_k = \frac{1}{M^2} \sum_{k=1}^M k^2\left(D_k - D_{k+1}\right).$$ I found in a book the following inequality: $$\frac{1}{M^2}\sum_{k=1}^M \sigma^2_k = \frac{1}{M^2} \sum_{k=1}^M k^2 \left(D_k - D_{k+1}\right) \leq \frac{1}{M^2} \sum_{k=1}^M (2k - 1)D_k \quad\text{(how could I justify this inequality?)}$$ It is easy to see that \begin{eqnarray} \frac{\sigma^2_k}{k^2} &=& D_k - D_{k+1}\nonumber \\ &=& \sum_{n \geq k} \displaystyle\frac{\sigma_n^2}{n^2} - \sum_{n \geq k+1} \displaystyle\frac{\sigma_n^2}{n^2} \nonumber \\ &=& \left\{\displaystyle\frac{\sigma_k^2}{k^2} + \displaystyle\frac{\sigma_{k+1}^2}{(k+1)^2} + \ldots \right\} - \left\{\displaystyle\frac{\sigma_{k+1}^2}{(k+1)^2} + \displaystyle\frac{\sigma_{k+2}^2}{(k+2)^2} + \ldots \right\}. \end{eqnarray} It also holds that $$\lim_{M \rightarrow \infty} \frac{1}{M^2} \sum_{k=1}^M (2k - 1)D_k = 0,$$ but I don't know how to justify that step. Could you please give me a suggestion on how to justify the last step? Thank you very much for your help.
\begin{align*} \sum_{k=1}^M k^2(D_k-D_{k+1}) &= \sum_{k=1}^M k^2 D_k - \sum_{k=1}^M k^2 D_{k+1} \\ &= \sum_{k=1}^M k^2 D_k - \sum_{k=2}^{M+1} (k-1)^2 D_k \\ &= D_1 + \sum_{k=2}^M (k^2-(k-1)^2) D_k - M^2 D_{M+1} \\ &= D_1 + \sum_{k=2}^M (2k-1) D_k - M^2 D_{M+1} \\ &= \sum_{k=1}^M (2k-1) D_k - M^2 D_{M+1} \\ &\le \sum_{k=1}^M (2k-1) D_k \end{align*} The final inequality holds because $D_{M+1} \ge 0$.
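A quick numeric spot-check of the summation-by-parts identity (my own sketch; the $D_k$ below are just a random nonincreasing, nonnegative sequence):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50
D = np.sort(rng.random(M + 1))[::-1]      # D[0] = D_1 >= ... >= D[M] = D_{M+1} >= 0
k = np.arange(1, M + 1)

lhs = np.sum(k**2 * (D[:M] - D[1:]))                     # sum k^2 (D_k - D_{k+1})
rhs = np.sum((2*k - 1) * D[:M]) - M**2 * D[M]            # sum (2k-1) D_k - M^2 D_{M+1}
print(np.isclose(lhs, rhs), lhs <= np.sum((2*k - 1) * D[:M]))   # True True
```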
{ "language": "en", "url": "https://math.stackexchange.com/questions/1278042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the probability your friend wins on his n'th toss? You and your friend roll a six-sided die. If he rolls an even number then he wins, and if you roll an odd number then you win. He goes first; if he wins on his first roll then the game is over. If he doesn't win on his first roll, then he passes the die to you. Repeat until someone wins. What is the probability your friend wins on his $n^{th}$ toss? The answer is $(\frac{1}{2})^{2n-1}$. I want to understand why this is the answer and how to derive it. I asked my professor but he said that there isn't any formula to base this off of.
Both of you have the same probability of winning and losing on a single roll, namely $\frac{1}{2}$. The probability that he wins on his first toss is $\frac{1}{2}$. He wins on his second toss when neither of you wins your first toss and he then wins his second toss. The probability that this happens is $\frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2}$. He wins on his third toss when neither of you wins your first two tosses and he then wins his third toss. The probability that this happens is $\frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2}$. He wins on his $n$th toss when neither of you wins your first $n-1$ tosses and he then rolls a win on his $n$th toss. The probability that this happens is $\frac{1}{2}^{n-1} \cdot \frac{1}{2}^{n-1} \cdot \frac{1}{2}$, which simplifies to your desired answer, $(\frac{1}{2})^{2n-1}$.
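A quick Monte Carlo check of the formula (my own sketch):

```python
import random

# Estimate P(friend wins on his n-th toss) and compare with (1/2)**(2n - 1).
trials = 200_000
counts = {}
for _ in range(trials):
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) % 2 == 0:      # friend's toss: even means he wins
            counts[n] = counts.get(n, 0) + 1
            break
        if random.randint(1, 6) % 2 == 1:      # your toss: odd means you win
            break

for n in range(1, 5):
    print(n, counts.get(n, 0) / trials, 0.5 ** (2*n - 1))
```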
{ "language": "en", "url": "https://math.stackexchange.com/questions/1278158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is Summation a linear operator Say I have the following recursive function, where the superscript does not denote a power, but instead denotes a point in time. And I wonder if it's linear in the points in space (denoted by subscript $i$). $T$ means the temperature, and $t$ means the actual time. (So timesteps in the recursion don't have to be equal.) $$f(t,T_i) = \sum_{n=1}^{m} \frac{T^n_i-T^{n-1}_i}{\sqrt{t^m-t^n} + \sqrt{t^m-t^{n-1}}}$$ In other words, does the following hold true (once again the superscript is NOT a power): $$f(t, T_0+T_1) \equiv f(t, T_0) + f(t, T_1)$$ I guess the following can then be said: $$\sum_{n=1}^{m} \frac{(T^n_0+T^n_1)-(T^{n-1}_0+T^{n-1}_1)}{\sqrt{t^m-t^n} + \sqrt{t^m-t^{n-1}}} = \\ \sum_{n=1}^{m} \frac{T^n_0-T^{n-1}_0}{\sqrt{t^m-t^n} + \sqrt{t^m-t^{n-1}}} + \sum_{n=1}^{m} \frac{T^n_1-T^{n-1}_1}{\sqrt{t^m-t^n} + \sqrt{t^m-t^{n-1}}}$$ Is that correct? And so does it prove the linearity? I guess this boils down to the question: is summation a linear operator? EDIT: Also, am I correct in removing the $i$ subscript of the time, considering that the time (vector) for all positional elements is equal?
Write $T_i=(T_i^0,\dots,T_i^m)\in\mathbb{R}^{m+1}$ (I'm assuming "temperature" is a real variable...). Your map, for fixed $t$, is equivalent to $$f(T_i,t)=[\begin{array}{ccc}r_1(t)&\cdots&r_m(t)\end{array}]MT_i,$$ where $M$ is the $m\times (m+1)$ matrix defined by the equation $$M_{ij} = \cases{-1 & i=j\\ 1 & j=i+1\\ 0 & else},$$ and the $r_n(t)$ are the denominators in your sum, for each summand $n$. With this representation, it is clearly linear. Comment: Just wanted to make sure you were aware: your function is not defined recursively.
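A small numeric illustration of the linearity (a sketch; the time grid and temperature vectors below are arbitrary):

```python
import numpy as np

t = np.array([0.0, 0.3, 0.7, 1.2, 2.0])     # increasing times t^0 .. t^m, here m = 4
m = len(t) - 1
r = 1.0 / (np.sqrt(t[m] - t[1:]) + np.sqrt(t[m] - t[:-1]))   # weights r_1 .. r_m

def f(T):
    return np.sum(r * np.diff(T))            # sum_n r_n (T^n - T^{n-1})

rng = np.random.default_rng(1)
T0, T1 = rng.random(m + 1), rng.random(m + 1)
print(np.isclose(f(T0 + T1), f(T0) + f(T1)))  # True: the map is linear in T
```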
{ "language": "en", "url": "https://math.stackexchange.com/questions/1278274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let A be an $n\times n$ non-singular symmetric matrix with entries in $(0,\infty)$. Let $A$ be an $n\times n$ non-singular symmetric matrix with entries in $(0,\infty)$. Then we can conclude that (a) $|A| > 0$ ($|A|$ denotes the determinant of $A$). (b) $A$ is a positive definite matrix. (c) $B = A^2$ is a positive definite matrix. (d) $C = A^{-1}$ is a matrix with entries in $(0,\infty)$. I took an example matrix $A= \begin{bmatrix} 1 & 2\\ 2 & 1 \\ \end{bmatrix} $, and saw that options (a), (b) and (d) are wrong. But what is the proper approach for this problem?
For (a), (b) and (d) your approach is right: you have a counterexample. Let's take a look at (c). $A$ is symmetric and non-singular, and therefore diagonalisable with real nonzero eigenvalues. $A^2$ will then have positive eigenvalues (the squares of those of $A$) on top of being symmetric, so it is positive definite. Non-singularity is an essential assumption, as $\begin{bmatrix} 1&1\\1&1\end{bmatrix}$ shows.
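A numeric check of the counterexample and of (c) (just a sanity check):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(np.linalg.det(A))            # -3.0: rules out (a)
print(np.linalg.eigvalsh(A))       # [-1, 3]: a negative eigenvalue rules out (b)
print(np.linalg.inv(A))            # has negative entries: rules out (d)
print(np.linalg.eigvalsh(A @ A))   # [1, 9]: all positive, consistent with (c)
```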
{ "language": "en", "url": "https://math.stackexchange.com/questions/1278375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }