What is it called when you have 2 subsets of a set whose elements are based on alternating by a divisor? I came across this concept when I had to figure out a way to alternate spacing in a program I was writing. I'm curious to know if there's a formal term for this concept. Here's the description. You have a divisor D [in this example it's 15]. You have a set A of contiguous natural numbers [say 1-100]. You have two subsets of A called B and C. Membership starts in B with the first number and runs through each successive number up to and including the divisor D; it then switches to the other subset until the next multiple of D is reached, and so on. This repeats until the union B$\bigcup$C of both subsets contains the entire set. To display visually: B = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100} C = {16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90} Is there a name for this kind of set alternation? Thank you!
Here's one way to solve the problem - it's another way to look at @GregMartin 's solution. Start with the set $S = \{1, \ldots, D\}$. Then $B$ contains the numbers you get by adding even multiples of $D$ to the elements of $S$; $C$ contains the sums with odd multiples. This would be an easy standalone loop in your program, or the logic could be incorporated in another loop that dealt with the numbers. To answer your question: I don't know of any particular name for this construction.
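As a sketch of that standalone loop (an illustration, not code from the original post; the function name is made up), one can build the two subsets by tracking which block of size D each number falls in, which is the same thing as sorting by even versus odd multiples of D:

```python
def alternate_partition(n, d):
    """Split 1..n into two subsets that swap every d numbers."""
    first, second = [], []
    for k in range(1, n + 1):
        # (k-1)//d counts how many full blocks of size d precede k;
        # an even block index means "even multiple of d was added",
        # so k goes to the first subset, otherwise to the second.
        if ((k - 1) // d) % 2 == 0:
            first.append(k)
        else:
            second.append(k)
    return first, second

B, C = alternate_partition(100, 15)
```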
{ "language": "en", "url": "https://math.stackexchange.com/questions/2010883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extension of Scalars and Polynomial Rings I am wondering if the following is true: let $A, R$ be commutative rings, $\varphi:R\rightarrow A$ a surjective homomorphism, and view $A$ as an $R$-algebra. Then $$ R[X_1,\dots,X_m]\otimes_R A\approx A[X_1,\dots,X_m]$$ I am thinking of applying this to the case of $R\rightarrow R/J$ where $J$ is an ideal of $R$. Any help would be greatly appreciated.
It is important to know that your special case is the only case, up to isomorphism. Here is a proof: define $\phi_1:A\to A[x_1, \dots x_n]$ by $a\mapsto a$ and $\phi_2:R[x_1, \dots x_n]\to A[x_1, \dots x_n]$ by $\sum_I a_Ix^I\mapsto \sum_I \varphi(a_I)x^I$. These maps are compatible over $R$, so by the universal property of the tensor product we obtain a map $\phi:A\otimes_R R[x_1, \dots x_n]\to A[x_1, \dots x_n]$. There is also a map in the other direction, $f:A[x_1, \dots x_n]\to A\otimes_R R[x_1, \dots x_n]$, defined on monomials by $f(a_Ix^I)=a_I\otimes x^I$ and extended linearly. It is easy to check that the two compositions are the identity. This proves the statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2010998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The total probability of a minimum amount of probability-varied choosers choosing one option over another. OK, the title may seem a bit convoluted, but here it goes. This is a simplified example of what I am trying to get at; I am really more interested in the method of figuring this out and how it can be applied to other, similar problems. 10 people are presented with a binary choice between Option A and Option B. Person 1 has a 25% chance of picking Option B, Person 2 has a 30% chance, and so on, incrementing 5% for each person (for simplicity's sake). What is the total probability, as a percentage, that at least 3 of these people will pick Option B?
An exact answer is going to be painful. Let $p_i$ be the probability that person $i$ chooses Option B, and assume these probabilities are independent. Let $Q=(1-p_1)\cdots(1-p_{10})$. Then the probability that at least three of them pick $B$ is: $$1-Q\left[1+\sum_{i=1}^{10} \frac{p_i}{1-p_i} + \sum_{1\leq i<j\leq 10}\frac{p_i}{1-p_i}\frac{p_j}{1-p_j}\right]$$ In your case, $p_i=\frac{1}{4}+\frac{i-1}{20}=\frac{4+i}{20}$, so $$\begin{align} Q&=\frac{15!}{5!\,20^{10}}\\\frac{p_i}{1-p_i}&=\frac{4+i}{16-i} \end{align}$$ (here $1-p_i=\frac{16-i}{20}$, so $Q$ is the product $\frac{15\cdot14\cdots6}{20^{10}}$). That sum is almost certainly still going to be complicated to calculate. This value just subtracts off the cases where exactly zero, one, or two of the people select $B$. The probability that exactly three people choose $B$ is: $$Q\left[\sum_{1\leq i<j<k\leq 10} \frac{p_i}{1-p_i}\frac{p_j}{1-p_j}\frac{p_k}{1-p_k}\right]$$ Again, ugly.
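For a concrete check (not part of the original answer), exact rational arithmetic makes the "painful" computation mechanical; a short dynamic program over the number of B-choosers gives the value directly:

```python
from fractions import Fraction

# Per-person probabilities of picking Option B: 25%, 30%, ..., 70%.
probs = [Fraction(4 + i, 20) for i in range(1, 11)]

# dist[k] = exact probability that exactly k people pick Option B.
dist = [Fraction(1)]
for p in probs:
    new = [Fraction(0)] * (len(dist) + 1)
    for k, q in enumerate(dist):
        new[k] += q * (1 - p)      # this person picks Option A
        new[k + 1] += q * p        # this person picks Option B
    dist = new

at_least_3 = 1 - dist[0] - dist[1] - dist[2]
```

The result can be cross-checked against a brute-force sum over all $2^{10}$ outcomes.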
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Explain how and why the cancellation of $ 6 $'s in $ \dfrac{16}{64} $ to get $ \dfrac{1}{4} $ is a fallacious statement. From elementary and middle school, most of us know that 16/64 correctly equals 1/4 because the numerator and denominator share the common factor 16. However, there is another way to "prove" that 16/64 equals 1/4 without dividing the numerator and denominator by 16. Who can explain how and why that method leads to a fallacious statement?
The wrong proof is more of a joke than a serious fallacy: $$ \frac{16}{64} = \frac{16\llap{/}}{\rlap{/}64} = \frac 14 $$ This joke exploits the notational ambiguity that writing two symbols next to each other can either mean multiplication or -- if the symbols happen to be digits -- be part of the usual decimal notation for numbers, in which case it means something quite different from multiplying the digits together. In the joke proof we pretend that $16$ and $64$ mean $1\cdot 6$ and $6\cdot 4$ (which of course they don't) and then "cancel the common factor" of $6$. This doesn't really work because the $6$ is not a factor.
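The same digit-striking "rule" can be tested exhaustively; this small search (an illustration, not from the original answer) finds every two-digit fraction where the bogus cancellation happens to give the right answer:

```python
from fractions import Fraction

# Search for all two-digit "anomalous cancellations" ab/bc = a/c,
# where striking the shared digit b happens to preserve the value.
hits = []
for a in range(1, 10):
    for b in range(1, 10):
        for c in range(1, 10):
            num, den = 10 * a + b, 10 * b + c
            if num < den and Fraction(num, den) == Fraction(a, c):
                hits.append((num, den))
```

The four hits, 16/64, 19/95, 26/65, and 49/98, are the classical anomalous cancellations; the rule fails everywhere else, which is why it is a joke rather than a method.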
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Number of ways to create a password You are allowed to choose from digits 0 to 9 (inclusive) and 20 letters to make a password with 2 digits and 4 letters. You can repeat digits and letters in this password. How many different passwords are possible? I know that there are $(10^2)(20^4)$ ways of selecting the digits and letters, but how do I account for the fact that digits and letters can go into different places in the password? Is it $\binom{6}{2}(10^2)(20^4)$?
There are six spaces you need to fill. Choose which two of the six positions hold the digits: there are $\binom{6}{2}=\frac{6!}{2!\,4!}=15$ ways to do this, and the remaining four positions automatically hold the letters. There are $10^2$ ways to fill in the digit spots, and there are $20^4$ ways to fill in the letter spots. So $\binom{6}{2}(10^2)(20^4)$ is the number of possible passwords; your guess is correct.
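To see why the position-choosing factor is $\binom{6}{2}$, here is a brute-force check on a scaled-down version of the problem (2 digit symbols, 2 letter symbols, length-4 passwords; the alphabets are shrunk only to keep the enumeration small):

```python
from itertools import product
from math import comb

digits, letters = "01", "ab"   # scaled-down alphabets
length, n_digits = 4, 2

# Count length-4 strings with exactly two digit characters by brute force.
brute = sum(
    1
    for s in product(digits + letters, repeat=length)
    if sum(ch in digits for ch in s) == n_digits
)

formula = comb(length, n_digits) * len(digits) ** 2 * len(letters) ** 2
```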
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Evaluating $\iiint_B (x^2+y^2+z^2)dV$ where $B$ is the ball of radius $5$ centered at the origin. The question asks to use spherical coords. My answer is coming out wrong and symbolab is saying I'm evaluating the integrals correctly so my set up must be wrong. Since $\rho$ is the distance from the origin to a point on it, and it's a sphere, I got $0 \le \rho \le 5$ Since it's a sphere I did $\theta$ from $0$ to $2\pi$. And then for $\phi$ I have from $0$ to $\pi$. From an example problem, $x^2+y^2+z^2=\rho^2$ Thus $$\int^\pi_0\int^{2\pi}_0\int^5_0 [( \rho^2) \rho^2 \sin(\phi)]\,d\rho \,d\theta \,d\phi$$ The answer is $\frac{312,500\pi}{7}$ and I'm getting $\frac{-1250}{\pi}$.
Partition your ball $B$ into spherical shells of radius $\rho$ $(0\leq\rho\leq5)$ and thickness $d\rho$. The volume of such a shell is $dV=4\pi \rho^2\,d\rho$. It follows that $$\int_B \rho^2\>dV=\int_0^5\rho^2\>4\pi\rho^2\>d\rho=2500\pi\ .$$
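A quick numeric sanity check of the shell computation (a sketch, not part of the original answer): the midpoint rule applied to $\int_0^5 4\pi\rho^4\,d\rho$ should land on $2500\pi$:

```python
import math

# Midpoint-rule approximation of the shell integral of rho^2 over the ball.
n = 1000
h = 5.0 / n
approx = sum(4 * math.pi * ((i + 0.5) * h) ** 4 * h for i in range(n))

exact = 2500 * math.pi
```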
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Many products can operate if, out of $n$ parts, $k$ of them are working. What is the probability that the product operates? Many products can operate if, out of $n$ parts, $k$ of them are working. Say $n = 20$ and $k = 17$, $p = 0.05$ is the probability that a part fails, and assume independence. What is the probability that the product operates? So, I'm a little confused as to why this is $P(X\geq 17)$ instead of $P(X\leq 17)$. Since there are $17$ parts that work out of $20$ parts, shouldn't the probability that parts operate be translated to "probability of at most $17$ parts working", meaning $X$ is less than or equal to $17$? I mean... this question is looking at a sample of $20$ parts and $17$ of them are said to be functioning; so it doesn't make sense to say that at least $17$ of them are working (which implies that $17$, $18$, $19$, or $20$ parts are working) since $17$ parts are the maximum parts that could operate.
"I mean... this question is looking at a sample of 20 parts and 17 of them are said to be functioning" - I think there is some misunderstanding here. The question means that if, out of the 20 parts, at least 17 are functioning, then the product still works. So if $X$ is the random variable counting the number of functioning parts out of 20, then the required probability is $P(X\geq 17)$.
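A direct computation of $P(X\geq 17)$ for the numbers given in the question ($n=20$, a $0.95$ chance each part works):

```python
from math import comb

n, p_fail = 20, 0.05
p_work = 1 - p_fail

# P(X >= 17) where X ~ Binomial(20, 0.95) counts working parts.
p_operates = sum(
    comb(n, k) * p_work**k * p_fail**(n - k) for k in range(17, n + 1)
)
```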
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Complex number equation solving Can somebody help me solve this equation? $$\left(\frac{iz}{2+i}\right)^3=-8$$ I'm translating this into $\frac{iz}{2+i}=-2$, but I reckon it's wrong ...
You are on the right track: $$\frac { iz }{ 2+i } =\sqrt [ 3 ]{ 8\cdot \left( -1 \right) } =-2\left( \cos { \frac { 2k\pi }{ 3 } } +i\sin { \frac { 2k\pi }{ 3 } } \right) ,k=0,1,2$$ (note the angle step of $\frac{2\pi}{3}$, so that each of the three values cubes to $-8$). Check for instance $k=0$: $$\frac { iz }{ 2+i } =-2\\ iz=-4-2i\\ z=\frac { -4-2i }{ i } =\frac { -4i+2 }{ { i }^{ 2 } } =-2+4i$$
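All three solutions can be checked numerically (a sketch, not from the original answer): each $z = w\,(2+i)/i$ where $w$ runs over the cube roots of $-8$.

```python
import cmath

# The three cube roots of -8: 2·exp(i(pi + 2k·pi)/3), k = 0, 1, 2.
roots_of_minus8 = [
    2 * cmath.exp(1j * (cmath.pi + 2 * cmath.pi * k) / 3) for k in range(3)
]

# Solve iz/(2+i) = w for z.
solutions = [w * (2 + 1j) / 1j for w in roots_of_minus8]
```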
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Triangle inequality with spectral norm Let $A$ be a real square matrix, I have to prove that $$\left \| A \right \|=\sqrt{\lambda_{\text{max}}(A^*A)}$$ defines a norm. I don't know how to prove the triangle inequality. I have already proved that $\|A\|=\|A\|_2=\sup_{\|x\|_2=1} \|Ax\|_2$, but the exercise is to prove without using it.
Preliminary answer: This inequality follows from Theorem 3.3.16 in Horn and Johnson (1994) - Topics in Matrix Analysis, and I think they don't use the equality you said in their proof. Although I don't understand their proof, I am trying to do so since I have this question too (from research). I will update this answer later, once I understand the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2011855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Am I correctly using the conditional probability notation in this problem? Problem: Two numbers are chosen at random from among the numbers 1 to 10 without replacement. Goal: Find the probability that the second number chosen is 5. Official Solution: $A_{i}:i \in \left\{1,2,...,10\right\}$ is the event that the first number chosen is $i$. $B$ is the event that the second number chosen is 5. Use the total probability law to get: $$\begin{equation}P(B)=\sum_{i=1}^{10}P(B|A_{i})P(A_{i}) = 9 \, \frac{1}{9} \, \frac{1}{10}=\frac{1}{10}\end{equation}$$ My Solution:$$\begin{equation*}P(B|A_{i}) = \frac{n(B)}{n(S)} = \frac{9}{9 \cdot 10} = \frac{1}{10}\end{equation*}$$ Question: How can we at all write $P(B)$? Is the probability for event $B$ not inherently conditional? You can't choose the second number without choosing the first number, so how could one write anything other than $P(B|A)$?
You are asked to evaluate the probability that the second number chosen is $5$. This is exactly $P(B)$; no condition is mentioned. Yes, you are right that you cannot choose a second number without having chosen a first number. But $P(B)$ is the probability that the second chosen number is $5$ regardless of which number is chosen first. You must choose a first number, but it is irrelevant which one. For this purpose the law of total probability is used. For instance, $P(B|A_1)$ is the probability that the second number is $5$ given that the first number is $1$. After the number $1$ has been drawn first, $9$ numbers can still be selected, and number $5$ is still among them, so $P(B|A_1)=\frac19$. And $P(A_1)$, the probability that number $1$ is selected first, is $\frac1{10}$. Thus $P(B\cap A_1)=P(B|A_1)\cdot P(A_1)=\frac{1}{90}$. The probabilities $P(B\cap A_i)$ are all $\frac{1}{90}$, except for $i=5$: $P(B\cap A_5)=0$. Thus $P(B)=\sum\limits_{i=1}^{10} P(B|A_i)\cdot P(A_i)=\underbrace{\frac1{90}+\cdots+\frac1{90}}_{9\text{ terms}}+0=\frac{9}{90}=\frac1{10}$. $P(B)$ can be calculated more easily. For a $5$ to be drawn second, it must not be drawn first. Let $\overline A_5$ be the complementary event of $A_5$, the event that $5$ is not chosen first; then $P(\overline A_5)=\frac9{10}$. And $P(B|\overline A_5)=\frac19$ is the probability that number $5$ is chosen second given that it was not chosen first. Using the law of total probability: $P(B)=P(B|\overline A_5)\cdot P(\overline A_5)+P(B|A_5)\cdot P(A_5)=\frac19\cdot \frac{9}{10}+0\cdot \frac1{10}=\frac1{10}$. I cannot really comprehend what you mean by $n(B)$ and $n(S)$, or how you got the numbers for them.
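Since the sample space here is tiny, the answer $\frac1{10}$ can also be confirmed by plain enumeration of all ordered draws (an illustration, not part of the original answer):

```python
from fractions import Fraction
from itertools import permutations

# All ordered draws of two distinct numbers from 1..10: 10·9 = 90 outcomes.
draws = list(permutations(range(1, 11), 2))
favorable = sum(1 for first, second in draws if second == 5)

p_b = Fraction(favorable, len(draws))
```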
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Projection onto subspace spanned by a single vector Given the vectors $x=(2,2,3)$ and $u_1=(0,3,0)$, vector $v_1$ is the projection of $x$ on the subspace spanned by $u_1$, so $v_1=\alpha u_1$. Determine the value of $\alpha$. My attempt I haven't dealt with a case where the subspace is spanned by only one vector before. My textbook says that I can write the projection as a linear transformation: $A^TA\alpha=A^Tx$ $A$ is the matrix that has the basis vectors as columns. The solution is $\alpha=(A^TA)^{-1}A^Tx$. When I use this I get: $A=\pmatrix{0\\3\\0}$ $A^TA=\pmatrix{0\\3\\0}(0,3,0)=9$ Then $(A^TA)^{-1}=\frac{1}{9}$, right? So $\alpha=\frac{1}{9}(0,3,0)(2,2,3)$ I can't multiply two row vectors. I must have done something wrong. Maybe I can't use the above formula.
Hint: The component of the vector $\mathbf{x}$ in the direction of $\mathbf{u_1}$ is: $$ \mathbf{v_1}=\frac{\mathbf{x}\cdot \mathbf{u_1}}{|\mathbf{u_1}|^2}\mathbf{u_1} $$
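Applying the hint to the given vectors (plain Python as an illustration; `dot` is an ad-hoc helper, not a library call):

```python
# x = (2,2,3) projected onto the span of u1 = (0,3,0).
x = (2, 2, 3)
u1 = (0, 3, 0)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

alpha = dot(x, u1) / dot(u1, u1)      # = 6/9 = 2/3
v1 = tuple(alpha * c for c in u1)     # the projection alpha·u1
```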
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$n$ balls numbered from $1$ to $n$ Consider a box containing $n$ balls numbered from $1$ to $n$. If $455$ is the number of ways to draw three balls from the box such that no two balls are consecutively numbered, find the value of $n$. Someone please help me out with this. I am not getting anywhere; how do I start?
As I tend to make mistakes with this kind of problem, I asked my friend Ruby to do some test runs. She is pretty fast at counting and makes no silly mistakes as long as I explain things well. We got different, much larger numbers until I realized that order should not matter for a draw, so drawing $1,3,5$ is considered the same as drawing $5,3,1$. Then Ruby found $455$ non-consecutive draws for $n=17$ balls, confirming Arthur's formula. Here is what I told her: balls.rb This is what she found: balls.log
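For readers without a Ruby-speaking friend, the same test run is a few lines in Python (an illustration; the original balls.rb script is not reproduced here), and it matches the closed form $\binom{n-2}{3}$:

```python
from itertools import combinations
from math import comb

n = 17
# Unordered triples from 1..n with no two consecutive numbers.
count = sum(
    1
    for triple in combinations(range(1, n + 1), 3)
    if all(b - a >= 2 for a, b in zip(triple, triple[1:]))
)
```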
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Are eigenspaces and minimal polynomials sufficient for similarity? This question comes out of the conversation in the comments of this answer. The answerer asserts the following: $\DeclareMathOperator{\rank}{rank}$ Suppose that $A$ and $B$ have the same minimal polynomial and that for all $\lambda \in \Bbb C$, $\rank(A - \lambda I) = \rank(B - \lambda I)$. Then $A$ and $B$ are similar. My question: is this true or false? I think it's false, and will attempt to build a counterexample as an answer. However, I welcome any attempts in either direction.
$\DeclareMathOperator{\rank}{rank}$The answer is no. Notably, the minimal polynomial determines the size of the largest blocks in the Jordan form, while $\rank(A - \lambda I)$ determines the total number of blocks for the eigenvalue $\lambda$. Let $J_k$ denote the $\lambda = 0$ block of size $k$. Consider the matrices $$ A = J_3 \oplus J_2 \oplus J_2\\ B = J_3 \oplus J_3 \oplus J_1 $$ $A$ and $B$ have the same characteristic polynomial ($x^7$) and the same minimal polynomial ($x^3$). They each have only the eigenvalue $\lambda = 0$, and satisfy $\rank(A) = \rank(B) = 4$. However, they are not similar: similar matrices have equal ranks of all powers, yet $\rank(A^2) = 1 \neq 2 = \rank(B^2)$.
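A machine check of the counterexample (a sketch with ad-hoc helpers, pure Python): the ranks of $A$ and $B$ agree, but the ranks of their squares differ, which rules out similarity.

```python
from fractions import Fraction

def jordan_block(k):
    """k×k nilpotent Jordan block (eigenvalue 0, ones on the superdiagonal)."""
    return [[1 if j == i + 1 else 0 for j in range(k)] for i in range(k)]

def direct_sum(*blocks):
    n = sum(len(b) for b in blocks)
    m = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                m[off + i][off + j] = v
        off += len(b)
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(m):
    """Row-echelon rank over the rationals."""
    m = [[Fraction(v) for v in row] for row in m]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            if m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

A = direct_sum(jordan_block(3), jordan_block(2), jordan_block(2))
B = direct_sum(jordan_block(3), jordan_block(3), jordan_block(1))
```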
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Smoothness of functions up to the boundary Let $\Omega$ be an open set in $R^n$ with $C^1$ boundary $\partial \Omega$. Suppose that $f$ is a $C^1$ function on $\Omega \cup \partial \Omega $, where $C^1$-smoothness of $f(x)$ on $\partial \Omega$ is defined via local $C^1$- diffeomorphism. If we define a function $g$ on $\Omega$ as $g(x) = f(x)$ for $x \in \Omega$, and fix $x_0 \in \partial \Omega$, then is it true that $D^{\alpha} g(x) \to D^{\alpha} f(x_0)$ as $x \to x_0$ for each multiindex $|\alpha| \leq 1$?
No, it's not true, because your definition of being $C^1$ on $\Omega \cup \partial \Omega$ is bad. Your definition only guarantees that the restriction of $f$ to the boundary is $C^1$, but that is not good enough: $f$ can be $C^1$ on $\Omega$ and on $\partial \Omega$ without being even continuous on $\Omega \cup \partial \Omega$. You should say that $f$ is $C^1$ at points of $\partial \Omega$ if it admits a $C^1$ extension in small open sets around them. This is a good definition and it does not require $\partial \Omega$ to have any kind of regularity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that a quotient group has an element of order 2. Let $H$, $K$ be subgroups of $G$, with $H$ a normal subgroup of $G$. Suppose the cardinality of $H\cap K$ is odd and $K$ has an element $k$ of order $2$. How would you prove that the quotient group $G/H$ has an element of order $2$? I really don't understand this material well; could it be explained at an elementary level?
If $k\in K$ has order two, this means that $k^2 = e$. Now $k$ cannot belong to $H$: if it did, then $k\in H\cap K$, which is a subgroup of $G$, and by Lagrange's theorem the order of any element divides the order of the group, so $2$ would divide the odd number $|H\cap K|$, which is impossible. Elements of the quotient group are of the form $gH$ for $g\in G$, so consider $kH$. Because $k\not\in H$, $kH\neq eH$, so $kH$ is a nonidentity element of $G/H$. But $(kH)^2 = k^2 H = eH$, so $kH$ has order two, which is what was to be shown.
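A toy instance of the argument (an illustration, not from the original answer): take $G=\mathbb{Z}_6$ under addition, $H=\{0,2,4\}$ (normal, order 3), $K=\{0,3\}$, so $H\cap K=\{0\}$ has odd order and $k=3$ has order 2.

```python
G = set(range(6))   # Z_6 under addition mod 6
H = {0, 2, 4}
K = {0, 3}
k = 3

# The coset kH, an element of G/H, and its square (kH)·(kH).
coset = frozenset((k + h) % 6 for h in H)
coset_squared = frozenset((a + b) % 6 for a in coset for b in coset)
```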
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Let $f,g$ be continuous functions from $X$ to $Y$. Let $E$ be a dense subset of $X$. Show that if $f(x)=g(x)\ \forall x \in E$, then $f(x)=g(x)\ \forall x \in X$. I'm trying to prove that if $f,g$ are continuous functions, and if $E$ is a dense subset of $X$ $(\text{or } Cl(E) = X)$, and if $f(x)=g(x)\ \forall x \in E$, then $f(x)=g(x)\ \forall x \in X$. I understand that if $f,g$ are continuous, then: $\exists \delta_1, \delta_2$ such that $\forall x \in E$ with $d(x,p)< \delta_1$, $|f(x) - f(p)| < \epsilon$, and similarly $\forall x \in E$ with $d(x,p)< \delta_2$, $|g(x) - g(p)| < \epsilon$. And by definition of closure, I know that $Cl(E) = E \cup E'$, where $E'$ is the set of accumulation points of $E$, and $p$ is an accumulation point if $\forall r>0, (E\cap N_r(p)) \backslash \{p\} \neq \emptyset$. I have zero clue on how to approach this problem. If $f(x) = g(x)$, then I'm guessing it implies that $|f(x) - f(p)| = |g(x) - g(p)|$, and so I'm guessing that $\delta_1 = \delta_2$. Help would be very much appreciated.
A little bit late, but I decided to give an alternate answer without using contradiction or the sequence definition of continuity. Let $p \in X$ and $\epsilon > 0$ be given. If $p \in E$ we are done, otherwise $p$ is a limit point of $E$. By continuity of $f, g$, there exists $\delta_1, \delta_2 >0$ s.t. $\forall x \in X$ we have $d(p,x) < \delta_1 \implies d(f(x), f(p)) < \epsilon/2$ and $d(p, x) < \delta_2 \implies d(g(p), g(x)) < \epsilon/2$. Set $\delta = \min\{\delta_1, \delta_2\}$. Since $E$ is dense in $X$, there is some $q \in E \cap B_{\delta}(p) \backslash \{p\}$. Thus, we have that $d(f(p), g(p)) \leq d(f(p), f(q)) + d(f(q), g(q)) + d(g(q),g(p)) < \epsilon$. Finally, since $\epsilon$ was arbitrary, we conclude that $d(f(p), g(p)) = 0 \implies f(p) = g(p)$ for all $p \in X$. Note: $d(f(q), g(q)) = 0$ since $f,g$ agree on $E$ by hypothesis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2012947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Line and segment relationship in the circle If the larger gear has 30 teeth and the smaller gear has 18, then the gear ratio (larger to smaller) is 5:3. When the larger gear rotates through an angle of 60°, through what angle measure does the smaller gear rotate?
The angle between two teeth on the larger gear is $\frac {360°}{30}=12°$, and on the smaller gear it is $\frac {360°}{18}=20°$. When the larger gear rotates through $60°$, it means $5$ teeth have moved $(\frac {60°}{12°}=5)$. Because these gears are meshed, the smaller gear must move by $5$ teeth too, and $5$ teeth on the smaller gear means $5\times 20°=100°$, so the smaller gear rotates through $100$ degrees.
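The same teeth-counting computation in code form (the gear counts are the ones from the problem):

```python
teeth_large, teeth_small = 30, 18
angle_large = 60

# Each tooth on the large gear spans 360/30 = 12 degrees, so 60 degrees = 5 teeth;
# each tooth on the small gear spans 360/18 = 20 degrees.
teeth_moved = angle_large / (360 / teeth_large)
angle_small = teeth_moved * (360 / teeth_small)
```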
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of Maclaurin series Find the sum of the infinite series \begin{equation} \sum_{n=2}^\infty\frac{7n(n-1)}{3^{n-2}} \end{equation} I think it probably has something to do with a known Maclaurin series, but cannot for the life of me see which one.. Any hints would be appreciated! Edit: Using your hints, I was able to solve the problem. Solving it here in case someone is wondering about the same thing: Manipulate \begin{equation}\sum_{n=2}^\infty x^n=\frac{1}{1-x}-x-1 \end{equation} by differentiating both sides: \begin{equation}\sum_{n=2}^\infty nx^{n-1}=\frac{1}{(1-x)^2}-1 \end{equation} differentiate again: \begin{equation}\sum_{n=2}^\infty (n-1)nx^{n-2}=\frac{2}{(1-x)^3} \end{equation} plugging in $\frac{1}{3}$ for x: \begin{equation}\sum_{n=2}^\infty n(n-1)(\frac{1}{3})^{n-2}=\frac{2}{(1-\frac{1}{3})^3} \end{equation} which is equivalent to \begin{equation}\sum_{n=2}^\infty \frac{n(n-1)}{3^{n-2}} \end{equation} finally, multiplying by 7 gives us \begin{equation}\begin{split}\sum_{n=2}^\infty \frac{7n(n-1)}{3^{n-2}}&=7\frac{2}{(1-\frac{1}{3})^3}\\ &=\underline{\underline{\frac{189}{4}}} \end{split} \end{equation}
Hint. We have that for $x\not=1$, and $N\geq 2$, $$\frac{d^2}{dx^2}\left(\frac{1-x^{N+1}}{1-x}\right)=\frac{d^2}{dx^2}\left(\sum_{n=0}^N x^n\right)=\sum_{n=2}^N n(n-1)x^{n-2}.$$ P.S. for the downvoters. I considered a finite sum because it is not so straightforward to say that we can interchange the differentiation and the infinite sum: for $|x|<1$, $$\frac{d^2}{dx^2}\left(\frac{1}{1-x}\right)=\frac{d^2}{dx^2}\left(\sum_{n=0}^{\infty} x^n\right)= \sum_{n=0}^{\infty} \frac{d^2(x^n)}{dx^2} =\sum_{n=2}^{\infty} n(n-1)x^{n-2}.$$
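A quick numeric confirmation of the value $\frac{189}{4}=47.25$ obtained in the question's edit (a sketch, not part of the original answer):

```python
# Partial sums of the series; terms decay like n²/3ⁿ, so 200 terms is plenty.
s = sum(7 * n * (n - 1) / 3 ** (n - 2) for n in range(2, 200))
```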
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Calculus exercise (James Stewart book) : Volume of a box I'm learning calculus but I'm very stuck in this exercise. Exercise I know that the volume is : 2HL + 2HW + LW (only LW because open top). Thanks to the exercise we know that: W = 12 - 2x L = 20 - 2x H = x Consequently I replaced the values: 2x(20 - 2x) + 2x(12 - 2x) + (20 - 2x)*(12 - 2x) = -4x^2 + 240 My function is f(x) = -4x^2 + 240 However, I don't understand why in the correction it is written 4x^3 - 64x^2 +240x ? Correction
You haven't calculated the volume of the box but the surface area of the open box (sides plus bottom). The volume is $V = LWH = (20-2x)(12-2x)\,x = 4x^3 - 64x^2 + 240x$, which is the expression in the correction.
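A spot check that the factored and expanded forms of the volume agree (an illustration):

```python
# V = x(20-2x)(12-2x) should equal 4x³ - 64x² + 240x for every x.
def v_factored(x):
    return x * (20 - 2 * x) * (12 - 2 * x)

def v_expanded(x):
    return 4 * x**3 - 64 * x**2 + 240 * x
```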
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A differential equation $(x^{2}+y^{2}+3)\frac{dy}{dx}=2x(2y-\frac{x^{2}}{y})$ How do I solve this ODE $$(x^{2}+y^{2}+3)\frac{dy}{dx}=2x(2y-\frac{x^{2}}{y})$$ I tried to find an integrating factor, but failed. Many thanks for your help.
$(y^2+x^2+3)(y^2)'=4x(2y^2-x^2)$ $(y^2+x^2+3)(y^2+x^2+3)'=4x(2y^2-x^2)+2x(y^2+x^2+3)$ $(y^2+x^2+3)(y^2+x^2+3)'=10x(y^2+x^2+3)-4x(3x^2+6)$ With $\enspace z:=y^2+x^2+3\enspace$ we get $\enspace zz'=10xz-12x^3-24x$ . I don't know how to solve this, but Wolfram solves it in a closed form with http://www.wolframalpha.com/input/?i=z(dz%2Fdx)%3D10xz-12x%5E3-24x .
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $f$ can not be expanded into a power series in a neighborhood of $0$. Let $f(x)$ be defined by $$f(x)=\begin{cases}e^{-1/x^2},& \text{ if }x \neq 0,\\0, &\text{ if }x=0.\end{cases}$$ Show that $f$ can not be expanded into a power series in a neighborhood of $0$. We know that $f$ must be infinitely differentiable to be able to expand into a power series and $e^{-1/x^2}=\sum_{0}^{\infty} \dfrac{(-1)^n}{n!\cdot x^{2n}}$ . Then what can we do next? Anyone help me please?
If you try to compute all the derivatives $f'(0)$, $f''(0)$, etc, they are all zero. So, if $f$ could be expanded into a power series in a neighborhood of 0, it would be the zero function there. This is a contradiction, as $f$ is not the zero function in any neighborhood of 0.
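One can see numerically why every derivative at $0$ vanishes: $f(x)$ goes to $0$ faster than any power of $x$, so every difference quotient in the inductive computation of $f^{(n)}(0)$ tends to $0$ (an illustration, not part of the original answer):

```python
import math

def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(x)/xⁿ is tiny near 0 for every n, i.e. f vanishes to infinite order at 0.
ratios = [f(0.1) / 0.1**n for n in range(1, 11)]
```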
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Systems of linear equations: solving by addition I have not understood this question; it confused me a lot. I have tried to solve it, but my solution was senseless. $$\begin{cases} 3x + 4y - 5 = 2x + 3y - 1\\ 6x - 2y + 2 = 4x - 3y - 5 \end{cases}$$ Can anyone help me with this?
HINT: First turn the top and bottom equations into standard form, so you end up with a system that looks like this. $$\begin{cases} Ax + By = C\\ Dx + Ey = F \end{cases}$$ This can be done by combining the $x$ and $y$-terms on the left and moving the constant terms to the right. Then solve this "normal-looking" system using addition.
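Following the hint through (an illustration, not part of the original answer; the standard forms below are derived from the given equations):

```python
from fractions import Fraction

# Standard form of the system:
#   3x+4y-5 = 2x+3y-1  ->   x + y = 4
#   6x-2y+2 = 4x-3y-5  ->  2x + y = -7
a1, b1, c1 = Fraction(1), Fraction(1), Fraction(4)
a2, b2, c2 = Fraction(2), Fraction(1), Fraction(-7)

# The y-coefficients already match, so subtracting the first
# equation from the second eliminates y.
x = (c2 - c1) / (a2 - a1)
y = c1 - a1 * x
```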
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Shortest length of line segment that bisects triangle in two equal area A segment of a line $PQ$ with its extremities on $AB$ and $AC$ bisects a triangle $ABC$ with sides $a,b,c$ into two equal areas, then find the shortest length of the segment $PQ$. I was looking for small hint as how to approach this question? I am not able to initiate.
Let $AP=:p\geq0$, $AQ=:q\geq0$. Then we have to minimize $$f(p,q):=p^2+q^2-2pq\cos\alpha$$ under a condition of the form $g(p,q):=pq-C=0$ for some $C>0$. As $p\to0+$ enforces $q\to \infty$ and therefore $f(p,q)\to \infty$, and vice versa, it follows that the minimum of $f$ is taken for certain $p>0$, $q>0$ found by using Lagrange's method. We therefore have to set up the auxiliary function $$\Phi(p,q,\lambda):=f(p,q)-2\lambda g(p,q)$$ and solve the system $$\eqalign{\Phi_p&=2p-2q\cos\alpha-2\lambda q=0 \cr \Phi_q&=2q-2p\cos\alpha-2\lambda p=0 \cr}\tag{1}$$ together with $g(p,q)=0$ for $p$ and $q$. Now $(1)$ implies $$p=(\lambda+\cos\alpha)q\quad\wedge\quad q=(\lambda+\cos\alpha)p\ ,$$ hence $p=q$. It follows that you have to choose $p=q$ such that the cut off triangle just has half the area of the original triangle, i.e., $$p=q=\sqrt{{|AB|\>|AC|\over2}}\ .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to replace an 8-sided die with other dice The exact question is: You need an 8-sided die for a game. You only have a coin, two four-sided dice and one 10-sided die. How can you replace the 8-sided die? Re-rolls are not allowed. There are several solutions to this, I've been told. I found one, but my solution wasn't one of the expected solutions. What solutions can you think of? My solution was: roll the 10-sided and the two 4-sided dice and sum the results, then flip the coin (the 2-sided die) and subtract its value, and finally divide by 2 and round up to get the result.
To keep the distribution uniform, here are some proposals: * *Roll the 10-sided die and re-roll on 9 and 10. *Roll a four-sided die, then flip the coin; add 4 on heads. *Roll a four-sided die and multiply the result by 2; flip the coin and subtract 1 on heads. There are probably a number of other solutions; the important part is to keep the distribution uniform.
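Uniformity of, say, the second proposal is easy to verify by enumerating all $4\times2$ equally likely outcomes (an illustration):

```python
from itertools import product
from collections import Counter

# Proposal: roll a four-sided die, flip a coin, add 4 on heads.
outcomes = Counter(
    d + (4 if coin == "H" else 0) for d, coin in product(range(1, 5), "HT")
)
```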
{ "language": "en", "url": "https://math.stackexchange.com/questions/2013926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 13, "answer_id": 12 }
Fundamental theorem of calculus with improper integral Suppose $f(x,y)$ is continuous everywhere except where $y=x$. Does FTC apply in a scenario where $x$ is given as a boundary? If so, how? Here's a general example. $$\frac{d}{dx}\int_x^bf(x,y) dy$$ There is a related post here that doesn't answer this question. The specific integral I'm looking at is more like $$\frac{d}{dx}\int_x^\infty \frac{g(y)}{\sqrt{y^2-x^2}} dy$$ where it is known that the original integral (prior to differentiation) converges for all $x\ge 0$. I'm tentatively approaching the derivative as $$\frac{d}{dx}\int_x^c\frac{g(y)}{\sqrt{y^2-x^2}} dy + \lim_{b\to\infty}\int_c^b\frac{d}{dx}\frac{g(y)}{\sqrt{y^2-x^2}} dy$$
I ended up solving my specific problem by introducing a substitution to set the limits of integration to something nicer. \begin{align} \frac{d}{dx}\int_x^\infty \frac{g(y)}{\sqrt{y^2-x^2}} dy & =\frac{d}{dx}\int_0^\infty \frac{g(y)}{\sqrt{u}} \frac{du}{2y} &\text{ using: } u = y^2 - x^2 \end{align} Since $u$ is a dummy variable, I don't see an issue with interchanging the order of operations. From there, one can apply the chain rule, and then undo the substitution. \begin{align} \frac{d}{dx}\int_0^\infty \frac{1}{\sqrt{u}} \frac{g(y)}{2y} du &= \int_0^\infty \frac{1}{\sqrt{u}} \frac{d}{dx}\left(\frac{g(y)}{2y}\right)du \\ &= \int_0^\infty \frac{1}{\sqrt{u}} \frac{d}{dy}\left(\frac{g(y)}{2y}\right)\frac{dy}{dx} du \\ &= \int_x^\infty \frac{1}{\sqrt{y^2-x^2}} \frac{d}{dy}\left(\frac{g(y)}{2y}\right)\frac{x}{y} (2ydy) \\ &= x\int_x^\infty \frac{1}{\sqrt{y^2-x^2}} \frac{d}{dy}\left(\frac{g(y)}{y}\right) dy \end{align} So, in conclusion: \begin{align} \frac{d}{dx}\int_x^\infty \frac{g(y)}{\sqrt{y^2-x^2}} dy &= x\int_x^\infty \frac{1}{\sqrt{y^2-x^2}} \frac{d}{dy}\left(\frac{g(y)}{y}\right) dy \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2014055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Result of Cauchy Schwarz inequality (homogeneous case) I have shown the Cauchy Schwarz inequality to be $$ \langle v_i(\vec x),v_j(\vec y)\rangle^2 \ \le \ \langle v_i(\vec x),v_i(\vec x)\rangle\langle v_j(\vec y),v_j(\vec y)\rangle.$$ How would I use this to show $$ \langle v_i(\vec x),v_i(\vec y)\rangle\ \le \ \langle v_i(\vec x),v_i(\vec x)\rangle$$ for a homogeneous case? I have considered setting $j \rightarrow i$, but that doesn't appear to be helpful.
What you want to prove does not hold in general. For a counterexample, suppose first that the vectors have dimension $1$. You want to prove that $(ab)^2\leq a^2 b^2$ implies $(ab)\leq a^2$. However, if $a=1, b=2$, then $2=(ab)>a^2=1$. More generally, for any nonzero vector $x$ and $y=\alpha x$, $\alpha>1$, we have $\langle x,y\rangle = \alpha \langle x,x\rangle >\langle x,x\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2014240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all square roots of $x^2+x+4$ in $\mathbb{Z}_5[x]$. Find all square roots of $x^2+x+4$ in $\mathbb{Z}_5[x]$. Also show that in $\mathbb{Z}_8[x]$ there are infinitely many square roots of $1$. I know how to find the square roots when in $\mathbb{Z}[x]$ however I keep getting confused with $\mathbb{Z}_5[x]$. From here I am assuming I will be able to figure out exactly why the second statement is true.
Hints: A square root of $x^2+x+4$ has the form $\pm x+a$. Let's identify: $$(\pm x+a)^2=x^2\pm 2a x+a^2=x^2+x+4.$$ We have to solve $\;\begin{cases}a^2=4\\\pm2a=1\end{cases}$. The first equation has solutions $\;a=\pm 2$. Checking the second equation, we find the square roots $$x-2\enspace\text{or}\enspace -x+2. $$ For the second question, try binomials $ax^n+b$, where the leading coefficient $a$ is nilpotent.
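These claims are easy to machine-check with plain integer arithmetic on coefficient lists (the helper `poly_mul` below is written ad hoc for this check, not a library routine):

```python
def poly_mul(p, q, mod):
    # multiply polynomials given as coefficient lists (constant term first),
    # reducing every coefficient modulo `mod`
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % mod
    return r

target = [4, 1, 1]                       # x^2 + x + 4 in Z_5[x]
print(poly_mul([3, 1], [3, 1], 5))       # (x - 2)^2, since -2 = 3 in Z_5
print(poly_mul([2, 4], [2, 4], 5))       # (-x + 2)^2, since -1 = 4 in Z_5

# in Z_8[x]: (4x^n + 1)^2 = 16x^{2n} + 8x^n + 1 = 1, for every n
for n in range(1, 5):
    p = [1] + [0] * (n - 1) + [4]        # 1 + 4x^n
    print(n, poly_mul(p, p, 8))
```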
{ "language": "en", "url": "https://math.stackexchange.com/questions/2014367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$M$ and $N$ equal in $K_0$ $\Rightarrow$ $\exists P$ such that $M\oplus P\cong N\oplus P$ Let $\mathcal P(A)$ be the category of finitely generated projective $A$-modules ($A$ is a ring with unity). Then consider the free group $F$ over the isomorphism classes of $\mathcal P(A)$. I will indicate the isomorphism classes with square brackets $[\cdot]$. It means that an element of $F$ can be written as a finite sum: $$\sum_k n_k[M_k]$$ where $n_k\in\mathbb Z$. $H$ is the subgroup of $F$ generated by the elements of the type: $$[M\oplus N]-[M]-[N]$$ Now please help me to understand why the following implication is true: $[M]=[N] \operatorname{mod } H\Rightarrow$ there exists $P\in\mathcal P(A)$ such that $[M\oplus P]=[N\oplus P]$
By the definition of a free abelian group, we have the following observation: If $[M_1] + \dotsb + [M_r] = [N_1] + \dotsb + [N_s]$, we have $r=s$ and the summands are the same up to permutations. Since the direct sum is commutative, we get an isomorphism. $$M_1 \oplus \dotsb \oplus M_r \cong N_1 \oplus \dotsb \oplus N_s.$$ Using this, the proof is straightforward: By definition you find $A_i, B_i$ and $C_j, D_j$, such that $$[M]-[N] = \sum_{i=1}^m \Big([A_i \oplus B_i]-[A_i]-[B_i]\Big) - \sum_{j=1}^n \Big([C_j \oplus D_j]-[C_j]-[D_j]\Big),$$ or equivalently: $$[M] + \sum_{i=1}^m \Big([A_i]+[B_i]\Big) + \sum_{j=1}^n [C_j \oplus D_j] = [N] + \sum_{i=1}^m [A_i \oplus B_i] + \sum_{j=1}^n \Big([C_j]+[D_j]\Big)$$ Using the observation discussed above, we get the desired result with $$P = \bigoplus_{i=1}^m \Big(A_i \oplus B_i\Big) \oplus \bigoplus_{j=1}^n \Big(C_j \oplus D_j\Big).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2014510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can the Ricci tensor determine a manifold up to homeomorphism? I don't know how to ask this question. Maybe there are many improper statements. Let $(M, g)$ and $(N, h)$ be two Riemannian manifolds and $f:M \to N$ a bijection such that $\operatorname{Ric}(x)$ is equal to $\operatorname{Ric}(f(x))$ up to a coordinate transformation. Are $M$ and $N$ homeomorphic? In fact, I feel "up to a coordinate transformation" is not suitable, because, locally, I feel (not sure) that I can choose suitable coordinates making $\operatorname{Ric}(x) = \operatorname{Ric}(f(x))$. But I also don't know how to ask it. I just want to know whether the Ricci tensor can determine a manifold up to homeomorphism.
This is not true. For example, not all Ricci-flat manifolds are homeomorphic. More explicitly, for any bijection $f$ between $\mathbb{R}^n$ and $(S^1)^n$, $\operatorname{Ric}(x) = \operatorname{Ric}(f(x)) = 0$, but $\mathbb{R}^n$ and $(S^1)^n$ are not homeomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2014616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding domain of variables in joint density for marginal density Let $(X,Y)$ have joint density $f(x,y) = \frac{1}{2}(1+x+y)$ for $0<x<1$ and $0<y<1$. So the joint density of $X$ and $U=X+Y$ is $f_{X,U}(x,u)=\frac{1}{2}(1+u)$. Now it is simple to get the domain of $X$ since it is provided in the problem description, but to get the domain of $U$ seems to be trickier, since you need to treat the cases $U<1$ and $U\geq 1$ separately. I need these values in order to get the marginal densities, but I am not quite sure what I need to do after I get $0<U-X<1$.
The domain of $(X,U)$ is a parallelogram in $\mathbf{R}^2$: $X$ is in $(0,1)$, while $U$ is in $(X,X+1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2014730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove equivalence of two Fibonacci procedures by induction? I want to prove equivalence of the following Fibonacci procedures using proof by induction. * *version: $$\text{fib_v1}(n) = \text{fib_v1}(n-1) + \text{fib_v1}(n-2)$$ , with base cases $\text{fib_v1}(0) = 0$ and $\text{fib_v1}(1) = 1$. *version: $$\text{fib_v2}(n, a_0, a_1) = \text{fib_v2}(n-1, a_1, a_0+a_1)$$ , with base case $ \text{fib_v2}(0, a_n, a_{n-1}+a_n) = a_n$, and initial values of the accumulators $a_0 = 0$ and $a_1 = 1$. The claim is $\text{fib_v1}(n) = \text{fib_v2}(n, a_0, a_1)$, which is easily seen to be true for $n = 0$. However, I'm completely stuck in the induction case. Can anyone help?
Let me first save some writing by saying $f_1 = fib_{v1}$ and $f_2 = fib_{v2}$ I think you are stuck because the claim you try to prove: $f_2(n,a_0,a_1) = f_1(n)$ is only true for $a_0 = 0$ and $a_1 = 1$. The claim is not true for other values of $a_0$ and $a_1$. The claim you really want to prove is that for all $n$, $a_0$, and $a_1$: $f_2(n,a_0,a_1) = a_0$ (for $n = 0$) and $f_2(n,a_0,a_1) = f_1(n-1)*a_0 + f_1(n)*a_1 $ (for $n>0$) Proof: Base: n=0 Then $f_2(n,a_0,a_1) = a_0$ by definition of $f_2$ Step: Assume the claim holds for $n=k$ Now consider $n = k+1$: $f_2(k+1,a_0,a_1) =$ (definition $f_2$) $f_2(k,a_1,a_0 + a_1) =$ (inductive hypothesis) $f_1(k-1) * a_1 + f_1(k) * (a_0 + a_1) =$ $f_1(k-1) * a_1 + f_1(k) * a_0 + f_1(k) *a_1 =$ $f_1(k) * a_0 + (f_1(k-1) + f_1(k)) *a_1 =$ (definition $f_1$) $f_1(k)*a_0 + f_1(k+1)*a_1 $ which is exactly what we needed to show. Finally, now that you know this for any $a_0$ and $a_1$, we can infer as a special case: $f_2(0,0,1) = 0 = f_1(0)$ and $f_2(n,0,1) = f_1(n-1)*0 + f_1(n)*1 = f_1(n)$ (for $n>0$) Hence, for any $n$: $f_2(n,0,1) = f_1(n)$
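To make the induction concrete, both procedures and both claims can be transcribed directly into Python and checked on small inputs (a sketch; names follow the question):

```python
def fib_v1(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib_v1(n - 1) + fib_v1(n - 2)

def fib_v2(n, a0, a1):
    if n == 0:
        return a0
    return fib_v2(n - 1, a1, a0 + a1)

# the special case: fib_v2(n, 0, 1) == fib_v1(n)
for n in range(15):
    assert fib_v2(n, 0, 1) == fib_v1(n)

# the strengthened claim, for arbitrary accumulators a0, a1
for n in range(1, 15):
    for a0, a1 in [(0, 1), (2, 5), (7, 3)]:
        assert fib_v2(n, a0, a1) == fib_v1(n - 1) * a0 + fib_v1(n) * a1
print("all checks passed")
```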
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In a $\triangle ABC,\angle B=60^{\circ}\;,$ Then range of $\sin A\sin C$ In a $\triangle ABC,\angle B=60^{\circ}\;,$ Then range of $\sin A\sin C$ $\bf{My\; Attempt:}$ Using the sine rule: $$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$ So $\displaystyle \frac{a}{\sin A} = \frac{b}{\sin 60^{\circ}}\Rightarrow \sin A = \frac{a}{b}\cdot \frac{\sqrt{3}}{2}$ and $\displaystyle \frac{c}{\sin C} = \frac{b}{\sin 60^{\circ}}\Rightarrow \sin C = \frac{c}{b}\cdot \frac{\sqrt{3}}{2}$ So $$\sin A\sin C = \frac{3}{4}\cdot \frac{ac}{b^2}$$ Now using the cosine rule: $$\cos B = \cos 60^{\circ} = \frac{1}{2}= \frac{a^2+c^2-b^2}{2ac}\Rightarrow b^2=a^2+c^2-ac$$ So $$\sin A\sin C = \frac{3}{4}\bigg[\frac{ac}{a^2+c^2-ac}\bigg] = \frac{3}{4}\bigg[\frac{1}{\frac{a}{c}+\frac{c}{a}-1}\bigg]\leq \frac{3}{4}$$ Using $\bf{A.M\geq G.M},$ we get $$\frac{a}{c}+\frac{c}{a}\geq 2\Rightarrow \frac{a}{c}+\frac{c}{a}-1\geq 1$$ $\bf{Added:}$ Using Jensen's inequality for the concave function $f(x) = \ln(\sin x)$: $$\ln(\sin A)+\ln(\sin C)\leq 2\cdot \ln \sin \left(\frac{A+C}{2}\right) = 2\cdot \ln \sin 60^{\circ} = 2\cdot \ln \frac{\sqrt{3}}{2} = \ln \frac{3}{4}$$ But I do not understand how to calculate the lower bound for $\sin A\sin C$. Thanks in advance!
Clearly $\sin(A)\sin(C)\geq 0$ since $A$ and $C$ are between $0^{\circ}$ and $180^{\circ}$. Let $A$ approach $0^{\circ}$. Then $\sin(A)$ approaches $0$ as well, while $\sin(C)$ is bounded above by $1$. This shows that $$\sin(A)\sin(C)\rightarrow 0$$ if $A\rightarrow 0^{\circ}$ so that $\sin(A)\sin(C)$ can be as small as needed. The lower bound is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Complex numbers: $ (\sqrt{3}+ i)^{30} $ to 'a + ib' form How do I rewrite this to 'a + ib' form? The power of 30 is troubling me. $ (\sqrt{3}+ i)^{30} $
Hint. Since $|\sqrt{3}+i|=2$, we have that $$(\sqrt{3}+ i)^{30}=2^{30}\left(\frac{\sqrt{3}}{2}+ \frac{i}{2}\right)^{30}=2^{30}\left(\cos(\pi/6)+ i\sin(\pi/6)\right)^{30}.$$ Then use De Moivre's Formula.
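A quick floating-point check of the De Moivre computation (note $30\cdot\pi/6 = 5\pi$, so the result is the real number $-2^{30}$):

```python
import cmath, math

z = math.sqrt(3) + 1j
print(abs(z), cmath.phase(z))        # modulus 2, argument pi/6

w = z ** 30
print(w)                             # approximately -2**30 = -1073741824
print(2 ** 30 * cmath.exp(1j * 30 * math.pi / 6))  # same value via De Moivre
```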
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
For a normal subgroup $N$ of $G$, is there an isomorphism from $G$ to $N\times (G/N)$? For a normal subgroup $N$ of $G$, is there an isomorphism from $G$ to $N\times (G/N)$ (the product with the quotient)? [What I thought about: The number of elements is the same, so one could establish a bijective map. But it is difficult for me to come up with a homomorphism. One could map $g$ to $(n,gN)$ in $N\times (G/N)$ and if $g\in N$, one could choose $n=g$, otherwise I thought about taking one representative from each coset $g_1N, g_2N, \cdots, g_nN$ and defining the map as $\varphi(g\in g_iN):=(g_i^{-1}g,g_iN)$. But then $\varphi(g\in g_iN)\varphi(g'\in g_jN)=(g_i^{-1}g,g_iN)(g_j^{-1}g',g_jN)=(g_i^{-1}gg_j^{-1}g',g_ig_jN)$ which does not seem to be a homomorphism.] Thanks for all answers.
No. This is not true at all. Note that if what you think were true, every solvable group would be abelian. For a specific example take $S_3$ and the subgroup $A_3$, which is cyclic of order $3$. The quotient is cyclic of order $2$. So the product is abelian, in fact cyclic of order $6$. But $S_3$ is non-abelian. It is not true for commutative groups either. It does not even work with a group of smallest cardinality that has non-trivial subgroups: the cyclic group of order $4$ has a subgroup of order $2$, the quotient is also of order $2$, so the product is the Klein four group, not the cyclic group of order $4$. There are some classes of groups where what you ask for is true (e.g., elementary abelian groups, divisible abelian groups), but in general it's not true at all.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Expected number of distinct items picked from a set, putting back an item in the box every time it is picked I have a box with $N$ items. For $W$ times, I pick an item, label it somehow, and put it back. Items that are picked twice or more are labeled only once. At the end of the process, what is the expected number of items that I have labeled? This seems quite a simple problem, I would look for this on Google but I don't know exactly what to search.
First, if you're just looking around online, you should try the coupon collector's problem. In its canonical form, the coupon collector's problem is to calculate the expected number of picks until all the coupons (or items in your box) are labelled. Your problem has the tables turned. You want to calculate the expected number of coupons (items) labelled in a specified number of picks. Second, I think it's worth pointing out the bull-headed approach to this problem and why it doesn't work! In the bull-headed approach, the expectation of a discrete random variable $X$ is $\sum_m m\cdot\text{prob}(X=m)$. In this problem we need the probability that in $w$ rounds exactly $m$ items get labelled, a computation that requires some work. First, you can choose $m$ items among the entire collection of $n$, and there are $\binom n m$ ways to make this choice. Then you need to count the number of onto functions from the rounds $\overline{w}=\{1,\dots,w\} $ to the selected items $\overline{m}=\{1,\dots,m\}$. This is equivalent to selecting all $m$ items in the $w$ rounds. Using a well-known argument from the inclusion-exclusion principle, it turns out that there are $$\sum_{j=0}^m (-1)^j\binom m j(m-j)^w$$ onto functions from $\overline{w}$ to $\overline{m}$. Finally, you have to compare these onto functions with the total number of equally likely sequences of $w$ picks from all $n$ items, which is just $n^w$. Putting all this together, the expected number of items labelled in $w$ rounds is $$\sum_{m=1}^n m\binom n m \sum_{j=0}^m (-1)^j\binom m j(m-j)^w\big/n^w.$$ The problem with this method is that I certainly don't know how to get a closed-form expression for the double sum representing the expectation. (For that matter, neither does Maple.) This impasse highlights the elegance of the solution that Henry suggested.
Also, it makes me interested in seeing a cleaned-up version of the "bull-headed" method that produces the same expected number of items that Henry got in his elegant solution, namely $n\bigl(1-(1-\frac1n)^w\bigr)$.
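For what it's worth, the comparison can at least be done by machine: with exact rational arithmetic, the double sum (with probabilities taken over all $n^w$ equally likely pick sequences) agrees with the closed form $n\bigl(1-(1-\frac1n)^w\bigr)$ on small cases. This is a verification sketch of my own, not a simplification of the sum:

```python
from math import comb
from fractions import Fraction

def expected_bruteforce(n, w):
    # sum over m of m * P(exactly m items labelled in w picks)
    total = Fraction(0)
    for m in range(1, n + 1):
        onto = sum((-1) ** j * comb(m, j) * (m - j) ** w for j in range(m + 1))
        total += Fraction(m * comb(n, m) * onto, n ** w)
    return total

def expected_closed(n, w):
    return n * (1 - Fraction(n - 1, n) ** w)

for n in range(1, 7):
    for w in range(1, 7):
        assert expected_bruteforce(n, w) == expected_closed(n, w)
print("double sum matches n(1-(1-1/n)^w) for all tested n, w")
```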
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Linear elliptic partial diff. operator Let $L$ be the operator \begin{equation} L u = \sum_{i,j=1}^{n} a_{ij}(x) u_{x_ix_j} + \sum_{i=1}^{n} \beta_i u_{x_i} + c(x) u(x) \end{equation} Assume furthermore that $L$ is elliptic, i.e. \begin{equation} 0 < \lambda {|\vec{\xi }|}^2 \leq \sum_{i,j=1}^{n} a_{ij}\xi_i\xi_j \text{ for every } \vec{\xi} \in \mathbf{R}^n\setminus\{ \vec{0}\} \text{ and for every } x \text{ in the domain} \end{equation} The last relation yields that the matrix $[A]_{ij} = a_{ij}$ is positive definite. So here's the thing: if at the point $x_0$ we have a maximum, then the matrix of the second derivatives of $u$ at the point $x_0$ is negative semi-definite, so (Hessian) $[H]_{ij}=u_{x_ix_j}(x_0)$ and $x^t H x \leq 0 $. How can we conclude that \begin{equation} \sum_{i,j=1}^{n} a_{ij}(x_0) u_{x_ix_j} (x_0) \leq 0 \end{equation} I know that the product of the matrices $A$ and $H$ is negative semi-definite (it is easy to show the last one).
If you already know how to show that $AH$ is negative semi-definite, then just note that $$\sum_{i,j=1}^n a_{ij}u_{x_ix_j} = \text{Trace}(AH)\leq 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about Improper Integrals I am trying to find whether the following integral converges or diverges, but I am a bit uncertain about the steps I'm taking. I have to following integral. $$\int_{-1}^1 \frac{e^x}{e^x-1} \,dx \\$$ Because I know it is discontinuous at x=0, I split the integral $$\int_{-1}^0 \frac{e^x}{e^x-1} \,dx + \int_{0}^1 \frac{e^x}{e^x-1} \,dx \\$$ Now I set up the limit $$\lim\limits_{t \to 0} \int_{-1}^t \frac{e^x}{e^x-1} \,dx + \lim\limits_{t \to 0} \int_{t}^1 \frac{e^x}{e^x-1} \,dx \\$$ After integrating using the following substitution: $$u={e^x-1} \, \\$$ I get the following: $$\lim\limits_{t \to 0} \int_{\frac1e-1}^{e^t-1} \frac{1}{u} \,du + \lim\limits_{t \to 0} \int_{e^t-1}^{\frac1e-1} \frac{1}{u} \,du \\$$ So finally, I get $$\lim\limits_{t \to 0} {(ln(e^t-1)-ln(\frac1e-1))} + \lim\limits_{t \to 0} (ln(\frac1e-1)-ln(e^t-1)) \\$$ So my question is, from the last line, if t is going to 0, shouldn't e^t tend to 1 and thus I would get ln (0) which would make the integral divergent. This is not the answer in the book. Where have I gone wrong? (Also, sorry for formatting, I'm still learning how to use all the symbols.)
Let us study $$I=\int_0^1\frac{e^x}{e^x-1}dx$$ with the change $t=e^x$, then $I$ becomes: $$\int_1^e\frac{dt}{t-1}$$ which diverges since $$\lim_{X\to 1^+}\int_X^2\frac{dt}{t-1}=\lim_{X\to1^+}(-\ln(|X-1|))=+\infty.$$ Your integral is therefore divergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$x+y+z+4xyz=2$ prove that$ xy+yz+xz\le 1$ Let $x$,$y$,$z$ be non-negative real numbers, for which: $x + y + z + 4xyz=2$ Prove that: $xy + yz + zx \le1$. I am sure this can be done with Cauchy-Schwarz or AM-GM, but I have long forgotten how to use those...Any help is appreciated !
I don't know about using the inequalities, but this can be solved without any tricks! \begin{eqnarray} (1-2x)(1-2y)(1-2z) & \leq & 1 \\ 1 - 2(x+y+z) + 4(xy+xz+yz)-8xyz & \leq & 1 \\ 1 - 2(x+y+z+4xyz) + 4(xy+xz+yz) & \leq & 1 \\ 1 - 4 + 4(xy+xz+yz) & \leq & 1 \\ xy+xz+yz & \leq & 1 \end{eqnarray} The first inequality uses the fact that $x,y,z\geq 0$, and the fourth line substitutes the $x+y+z+4xyz$ from the third line for $2$. Using $(1+2x)(1+2y)(1+2z)\geq 1$, one can similarly show that $xy+yz+zx\geq-1$ Great point from the comments. The first inequality is not justified: it does not follow from $x,y,z\geq 0$ alone. For example $x=y=2$, $z=0$ gives $(1-2x)(1-2y)(1-2z)=9>1$ (that point violates the constraint, but the constraint was not used in that step). On the constraint surface, the lines above show that $(1-2x)(1-2y)(1-2z)\leq 1$ is in fact equivalent to the claim $xy+yz+zx\leq 1$, so it needs its own proof and the argument as written is incomplete. The second inequality that I gave above holds, and the first one would hold if, for example, $0\leq x,y,z\leq 0.5$ (lesser restrictions are also possible).
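A brute-force numerical scan supports the claim itself: for $x,y\geq 0$ with $x+y\leq 2$, setting $z=(2-x-y)/(1+4xy)$ parametrizes the constraint surface with $z\geq 0$, and random sampling never finds $xy+yz+zx$ above $1$ (this is only evidence, of course, not a proof):

```python
import random

random.seed(0)
worst = 0.0
for _ in range(200_000):
    x = random.uniform(0, 2)
    y = random.uniform(0, 2 - x)
    z = (2 - x - y) / (1 + 4 * x * y)   # solves x + y + z + 4xyz = 2 with z >= 0
    worst = max(worst, x * y + y * z + z * x)
print("largest xy+yz+zx found:", worst)  # stays <= 1 (equality at e.g. x=y=1, z=0)
```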
{ "language": "en", "url": "https://math.stackexchange.com/questions/2015924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do we define vector spaces over a general $\mathbb{F}$ (field) rather than $\mathbb{C}$ (complex numbers)? In my linear algebra class, many times we define vector spaces over the field $\mathbb{F}$ (i.e. $\mathbb{F}^n$) and then prove things about them. The instructor has defined $\mathbb{F}$ as "either $\mathbb{C}$ or $\mathbb{R}$". Because $\mathbb{R}$ is a subset of $\mathbb{C}$, why can we not simply define vector spaces over $\mathbb{C}$ instead of introducing new notation? Isn't it true that a property holding over $\mathbb{C}$ immediately tell us it holds over $\mathbb{R}$ because $\mathbb{R}\subset \mathbb{C}$?
There are many different fields: the field of rational numbers, algebraic function fields, algebraic number fields, $p$-adic fields... The field is the set of scalars for the vector space. Is division in $\mathbb{R}$ similar to that in $\mathbb{C}$? Is the absolute value?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Does $\sum\frac{4^n(n!)^2}{(2n)!}$ converge? Does $\sum\frac{4^n(n!)^2}{(2n)!}$ converge? The ratio test is inconclusive. I know that $\sum\frac{2^n(n!)^2}{(2n)!}$ converges, if that's of any help.
Try $$ a_n = \frac{4^n (n!)^2}{(2n)!} $$ $$ \frac{a_{n+1}}{a_{n}} = 4 \frac{(n+1)^2}{(2n+1)(2n+2)} > 1 $$ conclude that $a_n$ is monotonically increasing and also $a_n > 0$, so $$ \lim \limits_{n \to \infty} a_n \ne 0 $$ So $\sum a_n $ diverges.
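The ratio computation, and the resulting growth of the terms, is easy to confirm with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def a(n):
    return Fraction(4 ** n * factorial(n) ** 2, factorial(2 * n))

for n in range(1, 30):
    assert a(n + 1) / a(n) == Fraction(4 * (n + 1) ** 2, (2 * n + 1) * (2 * n + 2))
    assert a(n + 1) > a(n)               # terms increase, so a_n cannot tend to 0
print(float(a(1)), float(a(10)), float(a(30)))  # 2.0, then steadily larger
```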
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
A picture frame measures $14$cm by $20$cm. $160$cm$^2$ of the picture shows inside the frame. Find the width of the frame. This is the question I was given, word-for-word. Is it asking for the width of the picture? Because the way I see it, the width is simply $14$... EDIT: Is this the correct interpretation? (Sorry did a really quick drawing in MS paint, red represents what I am supposed to find)
You are expected to assume that the picture frame has a constant width. That width creates a border around the picture. Note that the overall area is $280 \text{ cm}^2$, of which only $160 \text{ cm}^2$ is picture, so the frame itself covers $120 \text{ cm}^2$. You are supposed to find the width of that border.
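With that reading, if $w$ is the (constant) border width, the visible picture is $(14-2w)\times(20-2w)=160$, a quadratic in $w$ — solved numerically below just as a check:

```python
import math

W, H, pic = 14, 20, 160
# (W - 2w)(H - 2w) = pic  =>  4w^2 - 2(W + H)w + (W*H - pic) = 0
a, b, c = 4, -2 * (W + H), W * H - pic
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b - disc) / (2 * a), (-b + disc) / (2 * a)]
print(roots)   # [2.0, 15.0] -- only w = 2 leaves positive picture dimensions
```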
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 1 }
For $f(x)=4x^3+3x^2-x-1$, The Range of values $\frac{f(x_1)-f(x_2)}{x_1-x_2}$ can take is- My Attempt :- $f'(x)=12x^2+6x-1$ where $f'(x) \ge \frac{-7}{4}$. So (I think) from LMVT we can directly say that $$\frac{f(b)-f(a)}{b-a} \ge \frac{-7}{4}$$ But the answer given is $$\frac{f(b)-f(a)}{b-a} > \frac{-7}{4}$$ So my question is: is this application of LMVT correct? If yes, then why is $\frac{-7}{4}$ excluded from the given range?
Intuitively, as long as $f'(x)$ attains its minimum at one single point, there won't exist two distinct points where the secant matches that slope. To formalize, assume $f'(x)$ has a global minimum $m$, then it follows from MVT that: $$\frac{f(b)-f(a)}{b-a} \ge m \quad \text{for} \;\;\forall a \ne b$$ If $a,b$ points exist where the equality holds, then by MVT for the intervals $(a,\frac{a+b}{2})$ and $(\frac{a+b}{2},b)$: $$m = \frac{f(b)-f(a)}{b-a} = \frac{1}{2}\left(\frac{f(b)-f(\frac{a+b}{2})}{\frac{b-a}{2}} + \frac{f(\frac{a+b}{2})-f(a)}{\frac{b-a}{2}}\right) \ge \frac{1}{2}(m+m) = m $$ It follows that the middle inequality must be an equality as well, so: $$m = \frac{f(b)-f(a)}{b-a} = \frac{f(b)-f(\frac{a+b}{2})}{\frac{b-a}{2}} = \frac{f(\frac{a+b}{2}) - f(a)}{\frac{b-a}{2}} $$ From which: $$f\left(\frac{a+b}{2}\right) = \frac{f(a) + f(b)}{2}$$ It follows that $f(x)$ is both midpoint convex and concave, thus linear by continuity (see for example Function that is both midpoint convex and concave). But the given $f(x)$ is a $3^{rd}$ degree polynomial which is obviously not linear, so such points $a,b$ do not exist. Therefore the strict inequality holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding PDF of $Z \sim Q / \sqrt{Q^2 + R^2}$ where $Q, R \sim N(0,1)$? I have only done simple convolutions up to this point to find joint PDFs and am lost about how to proceed with finding the PDF of $Z \sim \frac{Q}{\sqrt{Q^2 + R^2}}$ where $Q, R \sim N(0,1)$. How do I approach finding the PDF of $Z$?
I assume that $Q$ and $R$ are independent. First, $F_Z(z)=0$ for $z\le -1$ and $F_Z(z)=1$ for $z\ge 1$. Also $$ \mathsf{P}(Z\le z)=\mathsf{E}\!\left[\mathsf{P}\!\left(Q\le z\sqrt{Q^2+R^2}\mid R\right)\right]. $$ For $z\in (-1,1)$, $$ \mathsf{P}\!\left(Q\le z\sqrt{Q^2+r^2}\right)=\mathsf{P}\!\left(Q\le z\sqrt{\frac{r^2}{1-z^2}}\right)=\Phi\!\left(z\sqrt{\frac{r^2}{1-z^2}}\right) $$ so that $$ F_Z(z)=\int_{-\infty}^{\infty}\Phi\left(z\sqrt{\frac{r^2}{1-z^2}}\right)\phi(r)dr=\frac{1}{2}+\frac{\arcsin(z)}{\pi}, \quad z\in (-1,1). $$ and the corresponding pdf is $f_Z(z)=\frac{1}{\pi\sqrt{1-z^2}}$ on $(-1,1)$.
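The arcsine law derived above is easy to corroborate by simulation (a Monte Carlo sketch, assuming independence of $Q$ and $R$ as in the answer):

```python
import random, math

random.seed(1)
N = 100_000
samples = []
for _ in range(N):
    q, r = random.gauss(0, 1), random.gauss(0, 1)
    samples.append(q / math.sqrt(q * q + r * r))

# empirical CDF vs  F_Z(z) = 1/2 + arcsin(z)/pi
for z in (-0.9, -0.5, 0.0, 0.5, 0.9):
    emp = sum(s <= z for s in samples) / N
    thy = 0.5 + math.asin(z) / math.pi
    print(f"z = {z:+.1f}   empirical {emp:.4f}   theory {thy:.4f}")
```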
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finite sum $\sum_{j=0}^{n-1} j^2$ How can I calculate this finite sum? Can someone help me? $$\sum_{j=0}^{n-1} j^2$$
I like this explanation: Each row in the first triangle sums to $j^2$ So the sum of the rows is the number we seek. Take that triangle and rotate left and right. The sum of the three triangles is then $3$ times the number we seek. But when we sum it up, we get $(2n+1)$ in every entry of the triangle, or $(2n+1)$ times the number of entries. $3\sum_\limits{j = 1}^n j^2 = (2n+1)\sum_\limits{j = 1}^n j$ Dividing by $3$ and using $\sum_{j=1}^n j = \frac{n(n+1)}{2}$ gives $\sum_{j=1}^n j^2 = \frac{n(n+1)(2n+1)}{6}$; the sum in the question stops at $n-1$, so replace $n$ by $n-1$ to get $\sum_{j=0}^{n-1} j^2 = \frac{(n-1)n(2n-1)}{6}$.
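Working out the identity $3\sum_{j=1}^n j^2=(2n+1)\sum_{j=1}^n j$ gives $\sum_{j=0}^{n-1} j^2=\frac{(n-1)n(2n-1)}{6}$ for the sum in the question, which is immediate to verify by machine:

```python
def sum_sq(n):
    return sum(j * j for j in range(n))          # sum_{j=0}^{n-1} j^2

for n in range(1, 100):
    assert sum_sq(n) == (n - 1) * n * (2 * n - 1) // 6
print(sum_sq(5))   # 0 + 1 + 4 + 9 + 16 = 30
```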
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 0 }
The locus of the point of trisection of all the double ordinates of the parabola $y^2= lx$ The locus of the point of trisection of all the double ordinates of the parabola $y^2= lx$ is a parabola whose latus rectum is ? I feel that since the double ordinates are trisected the latus rectum too should be trisected. So I thought the answer should be $\dfrac{l}{3}$. However the given answer says the latus rectum should be $\dfrac{l}{9}$. Am I going wrong somewhere ?
Am I going wrong somewhere? It seems that you think that the foci of both parabolas have the same $x$-coordinate. The coordinates of the points both on the parabola $y^2=lx$ and on $x=t$ are $(t,\pm\sqrt{lt})$. Since the double ordinates on $x=t$ are trisected, we get $$\left(t,\frac{1\cdot\sqrt{lt}+2(-\sqrt{lt})}{1+2}\right),\quad \left(t,\frac{1\cdot(-\sqrt{lt})+2\cdot \sqrt{lt}}{1+2}\right),$$ i.e. $$\left(t,\pm\frac{\sqrt{lt}}{3}\right)$$ which are on the parabola $y^2=\frac{l}{3^\color{red}{2}}x=\frac{l}{9}x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
"Predicate" vs. "Relation" What's the difference between a predicate and a relation? I read the definition that an $n$-ary predicate on a set $X$ is a function $X^n\to \{\text{true}, \text{false}\}$ where $\{\text{true}, \text{false}\}$ is the set of truth values. Also, it is well-known that an $n$-ary relation is simply a subset of $X^n$. Trivially there is a canonical bijection between the set of predicates on a particular fixed set and the set of all relations on that set. So why do these two terms "predicate" and "relation" exist? Because of historical reasons? Is there a crucial difference between them?
I like the definition from Wikipedia. It says that a relation is a concept, and a predicate is simply the indicator function of a relation. It’s like "the concept of prime numbers" and "the isPrime function". Even in your question you say that a predicate is defined on the whole set, but a relation is a subset.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Square root vs raising to $\frac{1}{2}$ What about the +/- ??? These seem equivalent, yet the raising to 1/2 seems to ignore the +/- aspect of a square root. Is one more valid than the other?
When you get a little further in math, you'll be told that raising a number to any non-integer exponent gives what's called a "multi-valued function." It's just that with a rational exponent whose lowest-terms denominator is even, two of these values are real, rather than one as for any other real exponent. Most of the time, the "principal value" is used, which for a positive real base is the positive real value.
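Software follows the same convention: both `sqrt` and the exponential form $\exp(\frac12\log x)$ return the principal value, never the $\pm$ pair (a small Python illustration):

```python
import math, cmath

print(math.sqrt(4))       # 2.0  -- the principal (positive) root, never -2.0
print(cmath.sqrt(-4))     # 2j   -- principal value; -2j also squares to -4

# x**(1/2) as exp((1/2) * log x) picks the principal branch of the logarithm
x = -4 + 0j
half_power = cmath.exp(0.5 * cmath.log(x))
print(half_power)         # approximately 2j, matching cmath.sqrt
```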
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Sample uniformly from sorted monotonic integer sequences Consider all $\binom{2x-1}{x}$ different integer valued sequences $S$ of length $x$ whose elements are from $\{1,\dots,x\}$ and where $S_i \leq S_{i+1}$. How can I sample uniformly from this set of sequences? The simplest possible strategy is to sample each number uniformly and independently from $\{1,\dots,x\}$ and then sort the resulting sequence. However I don't think this gives a uniform sample.
Stepwise Sampling Method A uniform random selection from the set of such sequences can be constructed stepwise, sampling each element in turn, using an appropriate (often non-uniform) distribution in each step. Letting $N(m,x)$ denote the number of nondecreasing length-$m$ sequences with elements in $[1..x]$, it follows that $$N(m,x)=\binom{m+x-1}{m}.$$ Thus there are $N(x,x)=\binom{2x-1}{x}$ nondecreasing length-$x$ sequences with elements in $[1..x]$. The following algorithm (implemented in Sagemath) uses the function $N(m,x)$ to produce the necessary stepwise sampling distributions that generate a uniformly distributed outcome sequence:

    def N(m, x):
        return binomial(m+x-1, m)

    def rnd_seq(x):
        seq = []
        a = 1                  # an element will be drawn from [a..x]
        e = 0                  # e is the index of the selected node
        numer = [N(x, x)]      # initial list of numerators
        for i in [1..x]:
            denom = numer[e]   # numerator of previously selected node
            numer = [N(x-i, x-a+1-j+1) for j in [1..x-a+1]]
            P = [num/denom for num in numer]  # prob's assigned to resp. elements in [0..x-a]
            G = GeneralDiscreteDistribution(P)
            e = G.get_random_element()       # random e is uniform on [0..x-a]
            y = e + a          # shift to make y in [a..x]
            a = y              # save the current random y
            seq += [y]         # append y to the sequence
        return seq

This corresponds to a rooted tree with $N(x,x)$ terminal nodes, every node being weighted by the number of terminal nodes for which it is an ancestor. Here's a picture for the case $x=3$: On my machine the above program requires less than $1$ second per sequence when $x=100$, even though the number of terminal nodes is then $N(100,100)=\binom{199}{100}\approx 10^{58.7}$. Rejection Sampling Method (inefficient) Without the monotonicity constraint, we can regard each sequence to be the unique bijective base-$x$ notation for an integer between $min=(111...1)_{\text{bijective base-}x}=\frac{x^x-1}{x-1}$ and $max=(xxx...x)_{\text{bijective base-}x}=x\cdot\frac{x^x-1}{x-1}$.
To draw a uniform sample from your monotonic subset, we can just repeatedly sample this whole range of integers uniformly until a value, say $v$, in the subset is obtained (discarding any whose bijective base-$x$ notation is not monotonic), then output the bijective base-$x$ notation for $v$.
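For readers without Sage, here is a pure-Python rendering of the same stepwise idea (my own sketch: the weight of a candidate value $y$ at each step is the number of ways to finish the sequence from $y$, so the step probabilities telescope and each full sequence is equally likely), together with an empirical uniformity check:

```python
import random
from math import comb
from collections import Counter

def N(m, x):
    # number of nondecreasing length-m sequences with elements in [1..x]
    return comb(m + x - 1, m)

def rnd_seq(x):
    seq, a = [], 1
    for i in range(1, x + 1):
        m = x - i                                    # elements still to place
        values = range(a, x + 1)
        weights = [N(m, x - y + 1) for y in values]  # completions from value y
        a = random.choices(values, weights=weights)[0]
        seq.append(a)
    return seq

random.seed(0)
counts = Counter(tuple(rnd_seq(3)) for _ in range(100_000))
print(len(counts), "distinct sequences; expected", N(3, 3))  # both 10
```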
{ "language": "en", "url": "https://math.stackexchange.com/questions/2016988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the limit of $(1-\frac2n)^n$ I am trying to find the limit of $$(1-\frac2n)^n$$ I know how $e$ is defined and I am sure the proof will involve substituting a term with $e$ at some point. But I do not really know where to start. I tried rewriting the term, simplifying it, using the binomial theorem, but all that does not seem to work out that well. Where do I start? Edit: As $n$ goes towards infinity
Note that $$ \left(1-\frac2n\right)^n = \left(1-\frac{1}{n/2}\right)^n = \left(\left(1-\frac{1}{n/2}\right)^{n/2}\right)^2 $$ Now take the limit as $\frac n2 \to \infty$.
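A quick numerical sanity check (my addition, not part of the answer): the values approach $e^{-2}\approx 0.1353$, and the error shrinks as $n$ grows.

```python
import math

target = math.exp(-2)
errors = [abs((1 - 2 / n) ** n - target) for n in (10, 100, 10_000, 1_000_000)]
```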
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Calculate:$\int _{|z|=1}\text{Re z dz}$. For a positively oriented unit circle Calculate:$\int _{|z|=1}\text{Re z dz}$. If $z=x+iy\implies \text{Re z}=x$ So $\int _{|z|=1}\text{Re z dz}=\int_{|z|=1} x d(x+iy)=\int _{|z|=1}x dx$.(as dy=0) $=\frac{x^2}{2}|_0^{2\pi }=2\pi^2$ But the answer is not matching.It is given as $\pi i$. Where am I wrong ?Please help.
I thought it might be useful to present a "brute force way forward." Proceeding, we have $$\begin{align} \oint_{|z|=1}\text{Re}(z)\,dz&=\color{blue}{\oint_{\sqrt{x^2+y^2}=1}x\,dx}+\color{red}{i\oint_{\sqrt{x^2+y^2}=1}x\,dy}\\\\ &=\color{blue}{\int_{1}^{-1} x\,dx+\int_{-1}^1x\,dx}+\color{red}{i\underbrace{\int_{-1}^1\sqrt{1-y^2}\,dy}_{=2\int_0^1\sqrt{1-y^2}\,dy}+i\underbrace{\int_1^{-1}\left(-\sqrt{1-y^2}\right)\,dy}_{=2\int_0^1\sqrt{1-y^2}\,dy}}\\\\ &=\color{blue}{0}+\color{red}{i4\underbrace{\int_0^1\sqrt{1-y^2}\,dy}_{=\pi/4\cdots\text{area of quarter circle}}}\\\\ &=i\pi \end{align}$$ as expected.
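The value $\pi i$ can also be confirmed numerically by parametrizing the unit circle as $z=e^{i\theta}$ and summing $\operatorname{Re}(z)\,dz$ over small arcs (a verification script of mine, not part of the answer):

```python
import cmath
import math

N = 20_000
total = 0j
for k in range(N):
    t0 = 2 * math.pi * k / N
    t1 = 2 * math.pi * (k + 1) / N
    z_mid = cmath.exp(1j * (t0 + t1) / 2)          # midpoint of the arc
    dz = cmath.exp(1j * t1) - cmath.exp(1j * t0)   # chord approximating dz
    total += z_mid.real * dz
```

The total comes out very close to $\pi i$, matching the brute-force computation above.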
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
moving limit through the integral sign Let $F(\alpha) = \int\limits_0^{\pi/2} \ln( \alpha^2 - \sin^2 x) \mathrm{d} x $ where $\alpha>1$. I'm tempted to argue that $F(1) = \int\limits_0^{\pi/2} \ln (1 - \sin^2 x) dx = \int\limits_0^{\pi/2} \ln \cos^2 x dx $. But $\alpha > 1$, so the only way we can do this is if the following holds: $$ \lim_{\alpha \to 1^+ } \int\limits_0^{\pi/2} \ln( \alpha^2 - \sin^2 x) \mathrm{d} x = \int\limits_0^{\pi/2} \lim_{\alpha \to 1^+} \ln( \alpha^2 - \sin^2 x) \mathrm{d} x $$ Q: are we allowed to move the limit in this way?
It’s $\enspace\lim\limits_{\alpha\to 1^+} \ln(\alpha^2-\sin^2 x) =\ln\cos^2 x\enspace$ and, because of $$\lim\limits_{\alpha\to 1^+}(( \int\limits_0^{t\pi/2}\ln(\alpha^2-\sin^2 x)dx - \int\limits_0^{t\pi/2}\ln(\cos^2 x) dx )|_{t\to 1^-}) =\frac{\pi}{2} \lim\limits_{\alpha\to 1^+} \int\limits_0^1 \ln(1+\frac{\alpha^2-1}{\cos^2(\frac{\pi}{2}x)}) dx$$ we have to prove that $$\lim\limits_{\alpha\to 1^+} \int\limits_0^1 \ln(1+\frac{\alpha^2-1}{\cos^2(\frac{\pi}{2}x)}) dx = \int\limits_0^1 \lim\limits_{\alpha\to 1^+} \ln(1+\frac{\alpha^2-1}{\cos^2(\frac{\pi}{2}x)}) dx =0$$ holds. It's $\enspace\cos(\frac{\pi}{2}x)>1-x\enspace$ for $\enspace0<x<1\enspace$, and therefore $$0< \int\limits_0^t \ln(1+\frac{\alpha^2-1}{\cos^2(\frac{\pi}{2}x)}) dx <\int\limits_0^t \ln(1+\frac{\alpha^2-1}{(1-x)^2}) dx $$ gives us an upper bound for $\enspace 0<t\leq 1 $. With $\enspace a:=\sqrt{\alpha^2-1}\to 0\enspace $ for $\enspace \alpha\to 1^+\enspace $ it's $$\int\limits_0^1 \ln(1+\frac{a^2}{(1-x)^2}) dx =$$ $$=[2a\arctan\frac{a}{1-x}-(1-x)(\ln((1-x)^2+a^2)-2)+(1-x)(\ln((1-x)^2)-2)]_0^1$$ $$\enspace \enspace = a\pi-2a\arctan a+\ln(1+a^2)\leq a\pi$$ for all $\enspace a\geq 0\enspace $, and it follows that $$\lim\limits_{a\to 0} \int\limits_0^1 \ln(1+\frac{a^2}{(1-x)^2}) dx=0 $$ and therefore the answer to your question is Yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Sum of elements of order $p$ in $(\mathbb{Z}/2p^2\mathbb{Z})^\times$? I have been working with the group $\mathbb{Z}/2p^2\mathbb{Z}$ and would like someone to correct my reasoning below. Let $p$ be an odd prime, and consider the multiplicative group $(\mathbb{Z}/2p^2\mathbb{Z})^\times$. This group is cyclic, and so contains some element $c$ of order $|(\mathbb{Z}/2p^2\mathbb{Z})^\times| = \phi(2p^2) = p(p-1)$. Now let $b = c^{k(p-1)}$ be an element of this cyclic group of order $p$ (i.e. $k \neq 0$). I am trying to find the value $1 + b + \ldots + b^{(p-1)}$ (mod $2p^2$). All equalities below are mod $2p^2$. First, we let $A = 1 + b + \ldots + b^{(p-1)}$, and note that: $A = b(b^{(p-1)} + 1 + b + \ldots + b^{(p-2)}) = bA$. Similarly, we have $A = b^kA$ for all $k$, and therefore $A = 1 + b + \ldots + b^{(p-1)} = 0$ since $b^k \neq 1$, in general. I know (from computing some examples numerically) that this is incorrect - I suspect that the sum is actually equal to $p$ mod $2p^2$ - but can't see where my reasoning is off. Any comments would be welcome.
It seems that you are including $1$ in your sum, even though it has order $1$ and not $p$. That's fine though: given an odd prime $p$, define $A$ to be the sum of the elements of order dividing $p$ modulo $2p^2$. You are correct that $A\equiv p\pmod{2p^2}$. It's not hard to show that the elements of order $1$ or $p$ modulo $2p^2$ are $$ 1, 2p+1, 4p+1, \dots, 2(p-1)p+1. $$ Indeed, we know there are exactly $p$ such elements (since the multiplicative group is cyclic); and $$ (2kp+1)^p = \sum_{j=0}^p \binom pj (2kp)^j \equiv 1 + p\cdot2kp + \sum_{j=2}^p 0 \equiv 1\pmod{2p^2}, $$ so this list must be complete. (Alternately, use the fact that elements of order $p$ modulo $2p^2$ are precisely the $(p-1)$st powers, and any such integer must be congruent to $1\pmod p$ by Fermat's little theorem.) From here we easily compute that $A = \sum_{k=0}^{p-1} (2kp+1) = p^2(p-1)+p\equiv p\pmod{2p^2}$.
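The classification and the sum are easy to verify by machine. The following check (my own script) lists all solutions of $a^p\equiv1\pmod{2p^2}$ for a few odd primes and confirms they are exactly $1, 2p+1, \dots, 2(p-1)p+1$, with sum $\equiv p$:

```python
def order_dividing_p(p):
    m = 2 * p * p
    # a^p ≡ 1 (mod 2p^2) means a has order 1 or p in the unit group
    return [a for a in range(1, m) if pow(a, p, m) == 1]

results = {p: order_dividing_p(p) for p in (3, 5, 7, 11, 13)}
```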
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Confirming that the sequence $a_n = \frac{\sqrt{n}\cos{n}}{\sqrt{n^3}-1}$ converges I'm still trying to totally grasp the differences in proofs for sequences and series. I have a sequence $a_n = \frac{\sqrt{n}\cos{n}}{\sqrt{n^3}-1}$ In order to prove that this sequence converges, would it be correct to state that since: $ -1 \le \cos{n} \le 1$ , and for $n \to \infty$ , $\sqrt{n^3}-1 \gt \sqrt{n}$ The denominator will be increasingly larger than the numerator therefore the sequence converges?
You can write \begin{equation} -\frac{\sqrt{n}}{\sqrt{n^3}-1}\le\frac{\sqrt{n}\cos{n}}{\sqrt{n^3}-1}\le\frac{\sqrt{n}}{\sqrt{n^3}-1} \end{equation} and you use the squeeze theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find three distinct triples (a, b, c) consisting of rational numbers that satisfy $a^2+b^2+c^2 =1$ and $a+b+c= \pm 1$. Find three distinct triples (a, b, c) consisting of rational numbers that satisfy $a^2+b^2+c^2 =1$ and $a+b+c= \pm 1$. By distinct it means that $(1, 0, 0)$ is a solution, but $(0, \pm 1, 0)$ counts as the same solution. I can only seem to find two; namely $(1, 0, 0)$ and $( \frac{-1}{3}, \frac{2}{3}, \frac{2}{3})$. Is there a method to finding a third or is it still just trial and error?
Here's a start that shows that any other solutions would have to have distinct $a, b, $ and $c$. In $a^2+b^2+c^2 =1$ and $a+b+c= \pm 1$, if $a=b$, these become $2a^2+c^2 = 1, 2a+c = \pm 1$. Then $c = -2a\pm 1$, so $1 = 2a^2+(-2a\pm 1)^2 =2a^2+4a^2\pm 4a+1 =6a^2\pm 4a+1 $ so $0 = 6a^2\pm 4a =2a(3a\pm 2) $. Therefore $a=0$ or $a = \pm \frac23$. If $a=b=0$, then $c = \pm 1$; if $a=b=\pm \frac23$, then $c = -2a\pm 1 =\mp \frac43 \pm 1 =\pm \frac13 $ and these are the solutions that you already have. Therefore any other solutions would have to have distinct $a, b, $ and $c$.
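It need not be pure trial and error: a short exhaustive search over rationals with small denominators (my script; the names are mine) turns up further solutions, e.g. $(6/7,\,3/7,\,-2/7)$, which satisfies $36/49+9/49+4/49=1$ and $6/7+3/7-2/7=1$:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def find_triples(max_den):
    # exact rational search: all triples with entries n/d, |n| <= d <= max_den
    sols = set()
    for d in range(1, max_den + 1):
        vals = [Fraction(n, d) for n in range(-d, d + 1)]
        for a, b, c in combinations_with_replacement(vals, 3):
            if a * a + b * b + c * c == 1 and abs(a + b + c) == 1:
                sols.add(tuple(sorted((a, b, c))))
    return sols

sols = find_triples(7)
```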
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
pigeonhole principle clarification Would the following statement be applicable to the pigeonhole principle? Or can I simply do $100 \times 50 = 5000$? What is least amount of students in a school to guarantee that there are at least 100 students from the same state?
The minimum number to guarantee that would be $99\cdot 50+1=4951$. If you had fewer, say $4950$, then there's the possibility that there are exactly $99$ students from each state. Any one additional student from any state is enough at this point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Help to understand a comment in Hoffman and Kunze's linear algebra book I'm reading Hoffman and Kunze's Linear Algebra and on page 52, the authors said: Let $P$ be the $n \times n$ matrix whose $i,j$ entry is the scalar $P_{ij}$, and let $X$ and $X'$ be the coordinate matrices of the vector $\alpha$ in the ordered bases $\mathscr{B}$ and $\mathscr{B}'$. Then we may reformulate (2-15) as $$ X = PX'. \tag{2-16} $$ Since $\mathscr{B}$ and $\mathscr{B}'$ are linearly independent sets, $X=0$ if and only if $X'=0$. Thus from (2-16) and Theorem 7 of Chapter 1, it follows that $P$ is invertible. Hence $$ X' = P^{-1}X. \tag{2-17} $$ I didn't find any mention of this result (highlighted in bold) in the book. How can I prove this fact?
Another way to see this is to use the fact that the vector $\alpha$ has unique coordinates with respect to any fixed basis (as discussed on page 50 of the same textbook). Let $\{ \beta_1,\dots,\beta_n\}$ be an ordered basis for the $n$-dimensional space $V$. We can always express the zero vector as $$ 0 = 0\beta_1 + \dots + 0\beta_n \quad. $$ By the uniqueness of coordinates, this means that the coordinates of the zero vector in every ordered basis is $$ [0]_\mathscr{B} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}, $$ and conversely, the $n \times 1$ matrix on the right is the coordinate matrix of the zero vector in every ordered basis. In particular, $X = 0 \Leftrightarrow X'=0$ because $X$ and $X'$ are the coordinate matrices of the vector $\alpha$ in the ordered bases $\mathscr{B}$ and $\mathscr{B}'$, respectively, and by the above discussion $X = 0 \Leftrightarrow \alpha = 0 \Leftrightarrow X'=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2017995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Show that the sum of a set of matrices isn't direct, and that the sum is the whole vector space. Let $U$ be upper triangular matrices in $M_2$ and $L$ lower triangular matrices in $M_2$. Show that their sum isn't direct, and that their sum is the whole vector space. I have the following definitions: If $L\cap M=\{0\}$, then we say that that is the direct sum. Notation: $L\oplus M$. If $L\oplus M=V$, we say that $M$ is a direct complement for $L$ and vice versa. I know that an upper triangular matrix can be written as $\begin{pmatrix} a & b\\0 & d\end{pmatrix}$ and lower triangular as $\begin{pmatrix} 0 & 0\\c & 0\end{pmatrix}$. If I add them up, I get $\begin{pmatrix} a &b \\c & d\end{pmatrix}$. But how do I prove their sum is the whole space $M_2$? And what about direct sum?
I think the answer above is incomplete, and I'd like to complete it. In order for $M_2(\mathbb{K})$ not to be equal to the direct sum, that is, for $M_2(\mathbb{K})$ to differ from $U \bigoplus L$, we need the definition of direct sum, which says: each $x$ in $F_1 + ... + F_n$ is written in a unique way as the sum $x = x_1 + ... + x_n$, which is equivalent to saying: for each $j = 1, ..., k$ we have $F_j \bigcap (F_1 + ... + F_{j-1} + F_{j+1}+...+ F_n) = \{0\}$. It is evident that a matrix in $M_2(\mathbb{K})$ is not written as a sum of an upper and a lower triangular matrix in a unique way. That's why we say that the direct sum is different from $M_2$. Source: Álgebra Linear, Elon Lages Lima (a Brazilian mathematician).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove using induction that $2^{4^n}+5$ is divisible by 21 I have to show, using induction, that $2^{4^n}+5$ is divisible by $21$. It is supposed to be a standard exercise, but no matter what I try, I get to a point where I have to use two more inductions. For example, here is one of the things I tried: Assuming that $21 |2^{4^k}+5$, we have to show that $21 |2^{4^{k+1}}+5$. Now, $2^{4^{k+1}}+5=2^{4\cdot 4^k}+5=2^{4^k+3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5+5\cdot 2^{3\cdot 4^k}-5\cdot 2^{3\cdot 4^k}=2^{3\cdot 4^k}(2^{4^k}+5)+5(1-2^{3\cdot 4^k})$. At this point, the only way out (as I see it) is to prove (using another induction) that $21|5(1-2^{3\cdot 4^k})$. But when I do that, I get another term of this sort, and another induction. I also tried proving separately that $3 |2^{4^k}+5$ and $7 |2^{4^k}+5$. The former is OK, but the latter is again a double induction. Is there an easier way of doing this? Thank you! EDIT By an "easier way" I still mean a way using induction, but only once (or at most twice). Maybe add and subtract something different than what I did?... Just to put it all in a context: a daughter of a friend got this exercise in her very first HW assignment, after a lecture about induction which included only the most basic examples. I tried helping her, but I can't think of a solution suitable for this stage of the course. That's why I thought that there should be a trick I am missing...
Note that $2^3 = 1\mod 7$ and hence $2^{3n} = 1 \mod 7$. Now, $4^k -1 = 0\mod 3$, and it follows that $2^{4^k-1} = 1\mod 7$ and $2^{4^k} = 2\mod 7$. Thus $$2^{4^k}+ 5 = 0 \mod 7 $$
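A quick computational check of both the statement and the mod-7/mod-3 split used above (script is mine); Python's three-argument `pow` handles the huge exponents $4^n$ efficiently via modular exponentiation:

```python
ns = range(1, 16)
mod21 = [(pow(2, 4 ** n, 21) + 5) % 21 for n in ns]  # should be all 0
mod7 = [pow(2, 4 ** n, 7) for n in ns]               # should be all 2
mod3 = [pow(2, 4 ** n, 3) for n in ns]               # should be all 1
```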
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Complement of regular language is regular The Question Let $L$ be a regular language. Prove the $\overline{L}$ is a regular language without using automata. Explanation I have an assignment to solve without using automata at all. To solve one of the questions I want to use the following: "If $L_1$, $L_2$ are regular languages, then $L_1\bigcap L_2$ is a regular language". The properties allowed to use are listed in "Allowed to use only". So I thought of using De Morgan's law: $L_1\bigcap L_2$ = $\overline{\overline {L_1}\bigcup \overline{L_2}}$ But complement is not one of the properties allowed to use. If you can show the intersection is a regular without complement and without automata that would be good as well. Allowed to use only * *Alphabet $\Sigma$ set of symbols, final. *Word $w$ finite concatenation of symbols from $\Sigma$ and final. *Sigma Kleene star $\Sigma ^*$ all words with symbols from $\Sigma$ including $\varepsilon$ the empty string. *Language $L$ subset of $\Sigma ^*$. *Let $L$ $M$ be languages under $\Sigma$. Then $L\bigcup M$, $L\bigcap M$, $L\circ M$, $\overline{L}$, $\overline{M}$, are also languages under $\Sigma$. *Language $L$ is regular, if there exists a regex r, that is a string under $\Sigma \bigcup $ {$ ∅, \circ, *, \bigcup $}, which defines $L$. * *If $L_1$, $L_2$ are regular lang then so is $L_1\bigcup L_2$. *If $L_1$, $L_2$ are regular lang then so is $L_1\circ L_2$. *If $L$ is a regular lang then so is $L^*$.
There is also an algebraic characterization of regular languages. A language $L\subset \Sigma^*$ is regular iff there exists a homomorphism (of monoids) $\phi : \Sigma^*\rightarrow M$ with $M$ a finite monoid and $$ L=\phi^{-1}(S) $$ where $S\subset M$. You conclude using the formula $\phi^{-1}(\bar{S})=\overline{\phi^{-1}(S)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Average minimum distance between $N$ points generated i.i.d. uniformly on the shell (sphere) of a ball What is the expected minimum Euclidean distance between $N$ points uniformly and independently chosen on the shell (sphere) of a 3-D ball of radius $R$? Note that the expected minimum distance might be difficult to compute, so a good lower bound is also fine. My approach I take an approach similar to this question. Suppose we put $N$ circles of radius $r$ uniformly on the surface of the ball. Therefore we could compute the expected minimum distance by the following expression \begin{align} E[D]=\int_0^\infty P(D>t) dt. \end{align} So, it remains to compute $P(D>t)$. Actually, since we are after a lower bound on $E[D]$ it is enough to give a lower bound on $P(D>t)$. Let $S_i$ be the event that the pair of circles does not intersect, which should be given by \begin{align} P(S_i) = \frac{ Surf(R)-Surf(r)}{Surf(R)}= 1-\frac{ \pi r^2}{ 4\pi R^2}=1-\frac{ r^2}{ 4 R^2} \end{align} Now the probability that the Euclidean minimum distance $D$ between the $N$ points is bigger than $t$ should be similar to \begin{align} P[ D \ge t] \stackrel{?}{=} P[ \cap S_i] \end{align} However, I think we somehow need to convert from the geodesic distance to the Euclidean distance, but I am not sure how to do that.
The asymptotics is a straightforward (or boring) generalization of my previous answer. Sketch: Take a point over the unit sphere ($R=1$). Then the "excluded area" $a(D)$ (points that lie at a Euclidean distance less than $D$ from the point) is $$a(D)=\pi D^2$$ The probability that a "link is free" (the distance between point pair $i$ is greater than $D$) is $$p(S_i) =1-\frac{1}{4}D^2 \hspace{1cm} 0\le D \le 2 \tag{1}$$ Assuming independence as an asymptotic approximation: $$p(\cap S_i) \approx (1-\frac{1}{4}D^2 )^M, \hspace{1cm} M=\frac{N(N-1)}{2} \tag{2}$$ Then, letting $t=$ minimum distance between points $$E(t) = \int P(t >D) \, dD = \int p(\cap S_i)\, dD \approx \int_{0}^2 (1-\frac{1}{4}D^2 )^M \, dD \tag{3}$$ For large $M$ this tends to $$E(t) \approx \frac{\sqrt{\pi}}{\sqrt{M}}\approx \frac{\sqrt{2 \pi}}{N} \tag{4}$$ For general ball radius $R$: $$E(t) \approx R \frac{\sqrt{\pi}}{\sqrt{M}}\approx R \frac{\sqrt{2 \pi}}{N} $$
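The asymptotic formula is easy to probe by simulation (my own Monte Carlo sketch; the tolerances and names are mine). For $N=40$ points on the unit sphere the empirical mean minimum distance should land reasonably close to $\sqrt{\pi/M}$ with $M=N(N-1)/2$:

```python
import math
import random

def sample_sphere(rng):
    # uniform point on the unit sphere via normalized Gaussians
    while True:
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        r = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        if r > 1e-12:
            return (v[0] / r, v[1] / r, v[2] / r)

def min_pairwise_dist(pts):
    return min(
        math.dist(pts[i], pts[j])
        for i in range(len(pts))
        for j in range(i + 1, len(pts))
    )

rng = random.Random(1)
N, trials = 40, 300
est = sum(
    min_pairwise_dist([sample_sphere(rng) for _ in range(N)]) for _ in range(trials)
) / trials
predicted = math.sqrt(math.pi / (N * (N - 1) / 2))  # formula (3)-(4) above, R = 1
```

The agreement is only approximate at moderate $N$, since (2) ignores correlations between pairs.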
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given a divergent series $\sum x_n$ with $x_{n} \rightarrow 0$, show there exists a divergent series $\sum y_{n} $ with $ y_n/x_n\rightarrow 0$ The problem is what the title says, with the added requrement that both series should have positive terms. I ruled out defining $y_{n}$ by $x_{n}$ divided by some function of $n$ since I can't see how we can guarantee it's sum is divergent. So instead I was looking at defining $y_{n} = max \{ x_{2^{n}} , x_{2^{n}+1} , ... , \} $ but even if this were to work, I have no idea how to prove it. So can a simple/general $y_{n}$ be found that satisfies these conditions?
Slightly more general result: If $\sum x_n$ is a positive divergent series, then there exists a positive divergent series $\sum y_n$ such that $y_n/x_n \to 0.$ (Note that if in addition we have $x_n$ bounded, then $y_n \to 0.$) Proof: There are integers $1=n_1 < n_2 < \cdots \to \infty$ such that $$\sum_{n=n_k}^{n_{k+1}-1} x_n > 1$$ for all $k.$ Let $B_k = \{n: n_k \le n < n_{k+1}\}.$ Define the sequence $y_n$ block by block: If $n\in B_k,$ set $y_n = x_n/k.$ Then $$\sum_{n=1}^{\infty}y_n = \sum_{k=1}^{\infty}\sum_{n\in B_k}\frac{x_n}{k} = \sum_{k=1}^{\infty}\frac{1}{k}\sum_{n\in B_k}x_n \ge \sum_{k=1}^{\infty}\frac{1}{k}\cdot 1 = \infty.$$ For $n\in B_k$ we have $y_n/x_n = 1/k.$ As $n\to \infty,$ $n$ moves through all blocks $B_1, B_2, \dots$ forcing $y_n/x_n \to 0.$
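The construction is completely explicit and can be carried out numerically. Here is a sketch of mine for the concrete divergent series $x_n = 1/n$: cut the sequence into blocks whose sums exceed $1$, then divide the $k$-th block by $k$:

```python
def build_y(x, num_blocks):
    # x: function giving x_1, x_2, ... (1-indexed)
    # returns the terms y_n = x_n / k for n in block k, plus the ratios y_n / x_n
    y, ratios = [], []
    n, k = 1, 1
    while k <= num_blocks:
        s = 0.0
        while s <= 1.0:              # extend block k until its x-sum exceeds 1
            s += x(n)
            y.append(x(n) / k)
            ratios.append(1.0 / k)
            n += 1
        k += 1
    return y, ratios

y, ratios = build_y(lambda n: 1.0 / n, 6)
partial_sum = sum(y)                 # bounded below by 1 + 1/2 + ... + 1/6
```

Each block contributes more than $1/k$ to $\sum y_n$, so the partial sums dominate the harmonic series, while $y_n/x_n = 1/k \to 0$.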
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof that a continuous function maps connected sets into connected sets I'm trying to prove that, if f is a function from C to C, and its domain, D, is connected, then f(D) is also connected. How would I go about doing this? The definition of conectedness at play is "S is disconnected iff there exist open disjoint sets A and B such that none contains S, but their union does", and that of continuity is "f is continuous iff, if a sequence of members of the domain tends to z, then the image of the sequence tends to f(z)".
Suppose that $g:f(D)\rightarrow\{0,1\}$ is continuous (where we equip $\{ 0, 1 \}$ with the discrete topology $\big\{ \emptyset, \{ 0 \}, \{ 1 \}, \{ 0, 1 \} \big\}$); then $g \circ f$ is continuous. Since $D$ is connected, we deduce that $g\circ f$ is constant. This implies that $g$ is constant, since $f$ maps $D$ onto $f(D)$. We deduce that $f(D)$ is connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
$I = \int_{0}^\infty t^2 e^{-t^2/2} dt$ Q: Evaluate the integral $I = \int_\limits{0}^\infty t^2 e^{-t^2/2} dt$ Hint, write $I^2$ as the following iterated integral and convert to polar coordinates: \begin{align*} I^2 &= \int_\limits{0}^\infty \int_\limits{0}^\infty x^2 e^{-x^2/2} \cdot y^2 e^{-y^2/2} \, dx \, dy \\ \end{align*} I can see the final answer is $\frac{\pi}{2}$ but I don't see how to get this. This problem is very similar to the Gaussian Integral: $I = \int_\limits{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi}$. I can follow the derivation to this. The Gaussian Integral technique of converting to polar coordinates doesn't seem to work as cleanly on this problem. I can convert to polar coordinates: \begin{align*} I^2 &= \int_\limits{0}^\infty \int_\limits{\pi/2}^\pi r^5 \sin^2 \theta \cos^2 \theta e^{-r^2/2} \, d\theta \, dr \\ \end{align*} That doesn't look easy to evaluate.
* Why not integrate by parts from the initial integral? One has $$ I=-\int_0^\infty x \cdot \left(-xe^{-x^2/2} \right)dx=-\int_0^\infty x \cdot \left(e^{-x^2/2} \right)'dx $$ then one may use the Gaussian integral $\int_0^\infty e^{-x^2/2}dx=\sqrt{\pi/2}$ to conclude.
* Another path is to differentiate the Gaussian identity $$ \int_0^\infty e^{-tx^2}dx=\frac{\sqrt{\pi }}{2 \sqrt{t}} \qquad t>0, $$ with respect to $t$ and put $t=\frac12$.
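Both routes give $I=\sqrt{\pi/2}\approx1.2533$ (so $I^2=\pi/2$, matching the hint). A direct numerical check (mine: a simple trapezoidal rule, with the tail beyond $t=12$ neglected since it is of order $e^{-72}$):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule on [a, b] with n panels
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

val = trapezoid(lambda t: t * t * math.exp(-t * t / 2), 0.0, 12.0, 120_000)
target = math.sqrt(math.pi / 2)
```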
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Find a general form of a sequence and its sum I have a problem to find a general form of the sequence \begin{align} - \frac{{n\left( {n - 1} \right)}}{{2\left( {2n - 1} \right)}},\frac{{n\left( {n - 1} \right)\left( {n - 2} \right)\left( {n - 3} \right)}}{{2 \cdot 4 \cdot \left( {2n - 1} \right)\left( {2n - 3} \right)}}, - \frac{{n\left( {n - 1} \right)\left( {n - 2} \right)\left( {n - 3} \right)\left( {n - 4} \right)\left( {n - 5} \right)}}{{2 \cdot 4 \cdot 6 \cdot \left( {2n - 1} \right)\left( {2n - 3} \right)\left( {2n - 5} \right)}}, \cdots :=a_n(k),\qquad n\ge 2 \end{align} and then to find the sum $\sum |a_n(k)|^2$, $1\le k\le n$. I have tried as follows: $a_n(k)=\frac{(-1)^kP(n,2k)}{(2k)!!A_n(k)}$, where $P(n,2k)=\frac{n!}{(n-2k)!}$ and $\,\,A_n(k):=(2n-1)(2n-3)(2n-5)\cdots (2n-2k+1),\qquad k\ge1$ Is this true. If yes, can we write $A_n(k)$ in a closed form?! After all of that what is the sum of $|a_n(k)|^2$, $1\le k\le n$.
Here is a more compact representation as a sum formula; most of it was already stated in the comment section. Since \begin{align*} a_n(k)=\frac{(-1)^kn(n-1)\cdots (n-2k+1)}{2\cdot4\cdots (2k)\cdot(2n-1)(2n-3)\cdots(2n-2k+1)}\qquad\qquad 1\leq k\leq n \end{align*} we obtain \begin{align*} a_n(k)&=(-1)^k\frac{n!}{(n-2k)!}\cdot\frac{1}{(2k)!!}\cdot\frac{(2n-2k-1)!!}{(2n-1)!!}\tag{1}\\ &=(-1)^k\frac{n!}{(n-2k)!}\cdot\frac{1}{(2k)!!}\cdot\frac{(2n-2k)!}{(2n-2k)!!}\cdot\frac{(2n)!!}{(2n)!}\tag{2}\\ &=(-1)^k\frac{n!}{(n-2k)!}\cdot\frac{1}{2^kk!}\cdot\frac{(2n-2k)!}{2^{n-k}(n-k)!}\cdot\frac{2^nn!}{(2n)!}\tag{3}\\ &=(-1)^k\frac{n!n!}{(2n)!}\cdot\frac{1}{k!(n-k)!}\cdot\frac{(2n-2k)!}{(n-2k)!}\\ &=\frac{(-1)^k\binom{n}{k}\binom{2n-2k}{n}}{\binom{2n}{n}} \end{align*} Comment:
* In (1) we use double factorials $(2n)!!=(2n)(2n-2)\cdots4\cdot 2$
* In (2) we use $(2n)!=(2n)!!(2n-1)!!$
* In (3) we use $(2n)!!=2^nn!$
I don't think that the series \begin{align*} \sum_{k=1}^n\left|a_n(k)\right|^2=\binom{2n}{n}^{-2}\sum_{k=1}^{n}\binom{n}{k}^2\binom{2n-2k}{n}^2\qquad\qquad n\geq 1 \end{align*} has a nice closed formula. The first few values of the numerator sums $\sum_{k=1}^{n}\binom{n}{k}^2\binom{2n-2k}{n}^2$ are \begin{align*} 0,4,144,3636,82000,1764400,37164736,\ldots \end{align*} but they are not known to OEIS. I've also tried some standard techniques and checked, for instance, section 2.9 in Riordan Array Proofs of Identities in Gould’s Book, which contains binomial identities of the type we need, but without success. Wolfram Alpha provides the following representation via hypergeometric series \begin{align*} \sum_{k=1}^n\left|a_n(k)\right|^2 &={}_{4}F_{3}\left(\frac{1}{2}-\frac{n}{2},\frac{1}{2}-\frac{n}{2},-\frac{n}{2},-\frac{n}{2};1,\frac{1}{2}-n,\frac{1}{2}-n;1\right)-1 \end{align*}
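Both the closed form and the quoted integers are easy to machine-check (verification script mine); the integers $0,4,144,\ldots$ are the sums $\sum_k \binom nk^2\binom{2n-2k}{n}^2$, i.e. before division by $\binom{2n}{n}^2$:

```python
from fractions import Fraction
from math import comb

def a_product(n, k):
    # the original product definition of a_n(k)
    num = Fraction(1)
    for j in range(2 * k):          # n(n-1)...(n-2k+1)
        num *= n - j
    den = Fraction(1)
    for j in range(1, k + 1):
        den *= 2 * j                # 2 * 4 * ... * (2k)
        den *= 2 * n - 2 * j + 1    # (2n-1)(2n-3)...(2n-2k+1)
    return (-1) ** k * num / den

def a_closed(n, k):
    return Fraction((-1) ** k * comb(n, k) * comb(2 * n - 2 * k, n), comb(2 * n, n))

closed_ok = all(
    a_product(n, k) == a_closed(n, k) for n in range(2, 9) for k in range(1, n + 1)
)
numerators = [
    sum(comb(n, k) ** 2 * comb(2 * n - 2 * k, n) ** 2 for k in range(1, n + 1))
    for n in range(1, 8)
]
```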
{ "language": "en", "url": "https://math.stackexchange.com/questions/2018992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The set of all equivalence classes from Cauchy sequences is complete Let $C(x)$ be the set of all Cauchy seq. on $X$ and define for $(x_n)_{n \in \mathbb{N}}, (y_n)_{n \in \mathbb{N}}$ the following relation $$ (x_n)_{n \in \mathbb{N}} \sim (y_n)_{n \in \mathbb{N}} \Leftrightarrow d_X(x_n, y_n) \to 0 \text{ when } n \to \infty $$ Let $$ \hat{X} = C(X)/\sim, \\ d_{\hat{X}}([(x_n)], [(y_n)]) = \lim_{n \to \infty} d(x_n, y_n) $$ Show that $\hat{X}$ is complete Let $(\gamma_m)_{m \in \mathbb{N}}$ be a Cauchy seq. in $\hat{X}$ i.e $\gamma_m = ([(x_n)]_m)$. How can I show that there exists $[(x_n)] \in \hat{X}$ such that for $m> n_0 \in \mathbb{N}$ $d_{\hat{X}}([(x_n)]_m, [(x_n)]) < \epsilon, \ \forall \epsilon$?
In order to avoid confusion, I will denote by $x^m$ the $m$th sequence and $x^m_n$ will be the $n$th element of the $m$th sequence: $x^m=(x^m_1,x^m_2,x^m_3,\ldots)$. We need a Cauchy sequence $y=(y_1,y_2,\ldots)$ such that $d(x^n,y)\to0$. The idea is to take a kind of diagonal: one element from each sequence, $y_1\in x^1$, $y_2\in x^2$, ... .

$\bullet$ Construction of the sequence $y$: For every $n$, the sequence $x^n$ is Cauchy, so there exists some index $k_n$ such that $|x^n_i-x^n_{k_n}|<\frac1n$ for all $i\ge k_n$. Now choose $y_n=x^n_{k_n}$.

$\bullet$ Proof that $y$ is Cauchy: Take an arbitrary $\varepsilon>0$. The sequence $x^1,x^2,\ldots$ is Cauchy in $\hat{X}$, so there is some $N$ such that $d(x^n,x^m)<\frac\varepsilon3$ for all $m,n\ge N$. Consider two arbitrary indices $m,n$ with $m,n>\max(N,\frac3\varepsilon)$. Since $\lim\limits_{i\to\infty}|x^n_i-x^m_i|=d(x^n,x^m)<\frac\varepsilon3$, there is some index $I$ such that $|x^n_i-x^m_i|<\frac\varepsilon3$ for $i\ge I$. By the definition of $k_n$, for all $i\ge k_n$ we have $|x^n_i-y_n|=|x^n_i-x^n_{k_n}|<\frac1n<\frac\varepsilon3$. Similarly, $|x^m_i-y_m|<\frac\varepsilon3$ holds for all $i\ge k_m$. Hence, with $i=\max(I,k_m,k_n)$ we have $$ |y_n-y_m| \le |y_n-x^n_i| + |x^n_i-x^m_i| + |x^m_i-y_m| < \frac\varepsilon3+\frac\varepsilon3+\frac\varepsilon3 = \varepsilon. $$ Therefore, for $m,n>\max(N,\frac3\varepsilon)$ we have $|y_n-y_m| < \varepsilon$.

$\bullet$ Proof that $d(x^n,y)\to0$: Again, fix some arbitrary $\varepsilon>0$. The sequence $y$ is Cauchy, so there is an index $I$ such that $|y_i-y_j|<\frac\varepsilon3$ for all $i,j\ge I$. Consider an arbitrary index $n\ge\max(\frac3\varepsilon,I)$. For $i\ge \max(k_n,I)$ we have $$ |x^n_i-y_i| \le |x^n_i-y_n| + |y_n-y_i| = |x^n_i-x^n_{k_n}| + |y_n-y_i| < \frac1n + \frac\varepsilon3 < \frac23\varepsilon. $$ From $i\to\infty$ we get $$ d(x^n,y) \le \frac23\varepsilon<\varepsilon.
$$ Therefore, for $n\ge\max(\frac3\varepsilon,I)$ we have $d(x^n,y) \le \frac23\varepsilon<\varepsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the matrix $A$ symmetric in the quadratic form? Given $x \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times n}$ ($A$ is not necessarily symmetric), the quadratic form is written as $x^TAx$, a scalar. We have $$x^TAx=(x^TAx)^T=x^TA^Tx,$$ that is, $x^T(A-A^T)x=0$. Why can't we conclude $A=A^T$ from $x^T(A-A^T)x=0$, where $x \ne \boldsymbol{0}$? I know it's a false statement and there are counterexamples, but it seems to me that, mathematically, $A$ should be symmetric. Could someone help explain why I can't make such an inference?
In fact, any quadratic form can be reduced to a symmetric matrix. Suppose $A$ is a non-symmetric matrix. Then $(A-A^T)/2$ is skew-symmetric since $$ \left(\frac{A-A^T}{2}\right)^T=\frac{A^T-A}{2}=-\frac{A-A^T}{2} $$ And for a skew-symmetric matrix it is always true that $$ x^T\left(\frac{A-A^T}{2}\right)x=0 $$ Since $$ x^T\left(\frac{A-A^T}{2}\right)x=\left(x^T\left(\frac{A-A^T}{2}\right)x\right)^T=x^T\left(\frac{A-A^T}{2}\right)^Tx=-x^T\left(\frac{A-A^T}{2}\right)x $$ Thus we have $$ x^TAx=x^T\left(\frac{A+A^T}{2}+\frac{A-A^T}{2}\right)x=x^T\left(\frac{A+A^T}{2}\right)x+x^T\left(\frac{A-A^T}{2}\right)x=x^T\left(\frac{A+A^T}{2}\right)x $$ Clearly, $(A+A^T)/2$ is symmetric. So a quadratic form matrix can be always reduced to a symmetric matrix.
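A small numeric illustration of the decomposition (mine; a random integer matrix is used so that the halves are exact in floating point): the skew part contributes $0$, so $x^TAx = x^TSx$ with $S=(A+A^T)/2$:

```python
import random

random.seed(0)
n = 4
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
x = [random.randint(-5, 5) for _ in range(n)]

def quad(M, v):
    # v^T M v
    return sum(v[i] * M[i][j] * v[j] for i in range(n) for j in range(n))

S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]  # symmetric part
K = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]  # skew part
```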
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Cardinality of multiplicative group of a field $GF(q)/f(x)$ where $f(x)$ is an irreducible polynomial over GF(q) If you have an irreducible polynomial $f(x)$ over $GF(q)$, then the following should be true: $GF(q)/f(x) \cong GF(q^k)$, where $k = deg(f(x))$. So the multiplicative group should have $\varphi(q^k)$ elements, where $\varphi$ is the totient. On the other hand, since $f(x)$ is irreducible, doesn't that mean that there are no divisors of $f(x)$ in $GF(q)/f(x)$, so all of its elements should be part of the multiplicative group, save for $0$. So, arguing that way, the group should have $q^k-1$ elements. It seems to me that I have a misunderstanding of the problem at hand, but I cannot figure out where my misconception lies. Can anyone point out what I'm doing wrong?
For what reason should the multiplicative group of $\operatorname{GF}(q^k)$ have $\varphi(q^k)$ elements? In general it is not true that $\operatorname{GF}(q^k)\cong\Bbb{Z}/q^k\Bbb{Z}$. In fact, if $k>1$ then $\Bbb{Z}/q^k\Bbb{Z}$ is not even a field, because $q\neq0$ and $q^{k-1}\neq0$ in $\Bbb{Z}/q^k\Bbb{Z}$ but their product is zero. Your second line of reasoning is correct; because $\operatorname{GF}(q^k)$ is a field, every element other than zero is a unit, so the cardinality of its unit group is $q^k-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the relation between two row equivalent matrices? My question is very simple. Suppose $[T]_{\mathfrak B}$ is the matrix of the linear transformation $T:V\to V$ in the basis $\mathfrak B$. If a matrix $A$ is row equivalent to $[T]_{\mathfrak B}$, what is the relation between these two matrices? Can we know something about the linear transformation $T$ just looking to the matrix $A$?
Can we know something about the linear transformation $T$ just looking to the matrix $A$? Yes, we can. For instance, the rank of $A$ is equal to that of $[T]_{\mathcal{B}}$, which tells you whether $T$ is invertible or not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Area of a trapezoid without the height How would I find the area of a non-isosceles trapezoid without knowing the height? The trapezoid's bases are $30$ and $40$, and the legs $14$ and $16$. Thanks
Area of a trapezium without knowing the height, where $a$ and $c$ are the parallel sides with $a>c$, and $b$ and $d$ are the legs (the non-parallel sides): $$\text{Area}=\frac{a+c}{4(a-c)}\sqrt{(a+b-c+d)(a-b-c+d)(a+b-c-d)(-a+b+c+d)}$$ Therefore, $$\frac{40+30}{4(40-30)}\sqrt{(40+14-30+16)(40-14-30+16)(40+14-30-16)(-40+14+30+16)}$$ $$=\frac{70}{40}\sqrt{(40)(12)(8)(20)}=\frac{7}{4}\times 277.128\ldots\approx 484.974$$
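As a sanity check (my own addition, not part of the original answer; the function name is mine), the formula and the arithmetic above can be verified in a few lines of Python:

```python
from math import sqrt

def trapezoid_area(a, c, b, d):
    """Area of a trapezoid with parallel sides a > c and legs b, d."""
    s = (a + b - c + d) * (a - b - c + d) * (a + b - c - d) * (-a + b + c + d)
    return (a + c) / (4 * (a - c)) * sqrt(s)

area = trapezoid_area(40, 30, 14, 16)
print(round(area, 3))  # 484.974
```

As an independent check, an isosceles trapezoid with parallel sides $4$ and $2$ and legs $\sqrt2$ has height $1$ and hence area $3$, which the formula reproduces.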
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Are the functions with the map $\sum a_n b^{-n}\mapsto\sum a_n c^{-n}$ continuous? Let $(a_n)$ be a sequence such that $0\le a_n<\min\{b,c\}$ and $b,c> 1$. Then define the function $$f:A\to[0,1],\quad \sum_{n=1}^\infty a_n b^{-n}\mapsto\sum_{n=1}^\infty a_n c^{-n}$$ where $A$ is the subset of $[0,1]$ where the function is defined. My question is: is every map of this kind continuous? I have a partial answer for the case of the Cantor function, but I don't know if continuity holds in the general case stated above. If these maps are indeed continuous, can you show a proof or give a hint toward one? Thank you.
For concreteness, consider the case where $b=4, c=5$. Then take any point $\mathbf{a}$ with a terminating quaternary expansion; for instance, $\mathbf {a}=\frac{9}{16}$, which corresponds to the sequence $\langle a_n\rangle = \langle 2, 1, 0, 0, 0, 0, \ldots\rangle$. Then the sequence of sequences $$\begin{align} \alpha_n&=\langle 2, 0, 3, 0, 0, 0, \ldots\rangle\\ \beta_n&=\langle 2, 0, 3, 3, 0, 0, \ldots\rangle\\ \gamma_n&=\langle 2, 0, 3, 3, 3, 0, \ldots\rangle\\ \end{align} $$ etc. clearly converges to $\mathbf{a}$ from beneath, but the sequence $f(\mathbf{\alpha}), f(\mathbf{\beta}), f(\mathbf{\gamma})$, etc. converges to $\frac25 + \frac0{25}+\frac{3}{125}+\frac{3}{625}+\ldots$ $=\frac{2}{5}+\frac{3}{100}$ $=\frac{43}{100}$ $\neq f(\mathbf{a}) = \frac{2}{5}+\frac{1}{25} = \frac{44}{100}$, so $f()$ isn't continuous from below. In fact, this is a general phenomenon; $f()$ won't be continuous at any value with a terminating base-$b$ expansion, if $b\lt c$.
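To make the counterexample concrete, here is a small numerical illustration (my own addition; the function name is mine) with $b=4$, $c=5$: the images of the approximating sequence converge to $43/100$, while $f(9/16)=44/100$.

```python
def f(digits, c=5):
    """Image of the point with base-4 digit sequence `digits` under the map."""
    return sum(a / c**n for n, a in enumerate(digits, start=1))

f_a = f((2, 1))                                # f(9/16) = 2/5 + 1/25 = 0.44
approx = [f((2, 0) + (3,) * k) for k in range(1, 25)]
print(round(f_a, 12), round(approx[-1], 12))   # 0.44 0.43
```

The approximants stay strictly below $f(9/16)$ and converge to $0.43$, exhibiting the jump from below.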
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to organize math study groups online for long-distance collaboration? How to organize math study groups online for long-distance collaboration? For example, how to organize a study group to: * *Effectively go through a textbook, *Taking notes (not necessarily collaboratively), *Organizing time, exercises, *Publishing solutions to exercises and reviewing these solutions?
From my experience running computational research projects between people in my research group (in the UK) and with our collaborators in Singapore, we tend to use a few tools. Sharelatex.com / Dropbox with LaTeX files: Writing notes for maths is easiest in LaTeX. I'm not sure if you already use it, but if not, it's a markup language which allows you to easily add mathematical and scientific notation. It is almost exclusively used in the research community and there are a ton of tutorials online. Notes and updates are well presented and easy for others to go through. Slack: As we are working in collaboration, or in your case studying together, it can be really useful to have a chat space dedicated to the study group. Facebook chat / groups and WhatsApp messages proved to be a nightmare for me before, and Slack is easy to use and free! You can also use the calendar integrations etc. Speaking Together: Even though we are often based on different continents, a voice conversation is infinitely better than just text. Of course this depends on the size of the group, but I tend to call my collaborators on a regular basis and they call one another too. I realise this isn't exactly a study group setting, but I think it's similar enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Ratio between inner products on a vector space Let $V$ be a vector space and let$ \langle,\rangle_1$ and $\ \langle,\rangle_2$ be inner products in $V$ s.t. $\langle,\rangle_1=0 \iff \langle,\rangle_2=0$. Prove $ \langle v,w\rangle_1=c\langle v,w\rangle_2$ for every $v,w \in V$. I've been struggling coming up with a solution for this. This is what I have so far: Let $U$ be a subspace of $V$. We know $\ U \oplus U^{\bot}=V$ for $\langle,\rangle_1$ and $\langle,\rangle_2$. Let $\ B=\{u_1,...u_n\}$ be an orthonormal basis for $U$ and $\ C=\{v_1,...v_k\}$ an orthonormal basis for $\ U^{\bot}$ under $\langle,\rangle_1$. $\ B'=\{\frac{u_1}{||u_1||},...,\frac{u_n}{||u_n||}\}$ is an orthonormal basis for $U$ and $\ C'=\{\frac{v_1}{||v_1||},...,\frac{v_k}{||v_k||}\}$ an orthonormal basis for $\ U^{\bot}$ under $\langle,\rangle_2$. Take$\ u \in B $ and $\ v \in C$. If I prove for base vectors the argument is valid for any vector in the subspace. I'd like to find the ratio using the norms of the bases B' and C', I'm not sure how to get at it and if this is the right direction.
You are going in the right direction, but there are some unnecessary complications. There is no need to split the space into a direct sum. Just pick an orthonormal basis $\{v_1,v_2,\ldots,v_N\}$ of $V$ (suppose its dimension is $N$) with respect to $\langle,\rangle_1$. Then this basis is at least an orthogonal basis with respect to $\langle,\rangle_2$. Now, for every pair of distinct $i,j$, consider $x=v_i+v_j$ and $y=v_i-v_j$ and their inner products with respect to both $\langle,\rangle_1$ and $\langle,\rangle_2$. The rest should be straightforward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Guess Who is the apple thief ?? At the end of class, My mathematics teacher gave us an interesting problem Which is as follow: Out of six boys exactly two were known to have been stealing apples. Harry said:Charlie and George. Donald said:Tom and Charlie James said:Donald and Tom. George said:Harry and Charlie. Charlie said:Donald and James. Tom couldn't be found. Four of the five boys interrogated had named one of the miscreants correctly and lied about the other one.The fifth boy had lied outright! Who stole the apples ?? I tried to apply the logic but failed. I thought three boys have named charlie so charlie must be one of the two, but with the same reasoning I am unable to find the other (though I think this reasoning is wrong as the boys are lying too). I shall be thankful if you can provide a logical answer to such a great question.
Charlie is obviously a good candidate to focus on first, as he is mentioned more times than anyone else. We can start by assuming he is innocent, as this will impose strong constraints on the other choices, and see if that is possible. If Charlie is not a thief, then Harry, Donald and George each named him wrongly, so each of them got at most one name right — and the single outright liar must be one of those three. (If the liar were James or Charlie, then Harry, Donald and George would each name exactly one thief correctly; since Charlie is assumed innocent, that would make George, Tom and Harry all thieves — one too many.) So two of Harry, Donald and George each correctly named one thief from among George, Tom and Harry, giving two distinct thieves, and James' and Charlie's statements must each also contain a correct name. In particular, from Charlie's statement, one of Donald or James must be a thief, and that gives us three thieves - a contradiction of the given constraint of two thieves. So it is not possible for Charlie to be innocent. Charlie is one of the apple thieves. Since Harry, Donald and George each named Charlie correctly, their other accusations are false, so George, Tom and Harry are innocent, and the outright liar is James or Charlie. Both of them accused Donald, so Donald must also be innocent — otherwise each of James and Charlie would have a correct name and neither could be the outright liar. That leaves James as the other apple thief; indeed James, having accused Donald and Tom, is the one who lied outright.
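The deduction can also be confirmed by brute force — a short Python search (my own sketch, names mine) over all pairs of suspects, keeping the pairs for which exactly four statements name one thief and the remaining statement names none:

```python
from itertools import combinations

boys = ["Harry", "Donald", "James", "George", "Charlie", "Tom"]
accusations = {
    "Harry": {"Charlie", "George"},
    "Donald": {"Tom", "Charlie"},
    "James": {"Donald", "Tom"},
    "George": {"Harry", "Charlie"},
    "Charlie": {"Donald", "James"},
}

def consistent(thieves):
    hits = sorted(len(named & thieves) for named in accusations.values())
    return hits == [0, 1, 1, 1, 1]  # four half-truths, one outright lie

solutions = [pair for pair in combinations(sorted(boys), 2) if consistent(set(pair))]
print(solutions)  # [('Charlie', 'James')]
```

The search confirms that Charlie and James form the only consistent pair of thieves.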
{ "language": "en", "url": "https://math.stackexchange.com/questions/2019892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Let $A$ be a $2\times 2$ square matrix. Prove that max $tr\ (A)$ s.t. $AA^{T}=I$ has a solution. Let $A=\bigl(\begin{smallmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{smallmatrix}\bigr)$ be a square matrix. Prove that the following optimization problem has a solution: max $tr\ (A)$ s.t. $AA^{T}=I$ where $tr$ is the trace of matrix $A$, $A^{T}$ is the transpose of matrix $A$ and $I$ is the identity matrix. My attempt This should follow from Berge's Maximum Theorem, but I have to prove two things: a.) $tr(A)$ is a continuous function. That is, $tr(A)=x_{11}+x_{22}$ is continuous. b.) $\{A: AA^{T}=I\}$ is a compact set. That is, the following set is compact: $$\begin{cases} x_{11}^2+x_{12}^2=1 \\ x_{21}^2+x_{22}^2=1 \\ x_{11}x_{21}+x_{12}x_{22}=0 \end{cases}$$ But I don't know what else to do. Any suggestion?
I might be missing something, but the condition that $AA^T = I$ implies that $A$ is an orthogonal matrix. Thus, the columns (or rows) of $A$ form an orthonormal basis for $\mathbb{R}^2$ (I'm assuming this is over the reals). Thus $|x_{ij}| \leq 1$ for all $i,j$, and therefore $tr(A) = x_{11}+x_{22} \leq 2$. Since $tr(I) = 2$ we have that the maximum is attained.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to prove that $\lim\limits_{n\to\infty} \int_0^1 \cos^n(x)\, dx = 0$ I've got this tasks to prove that: $\lim\limits_{n\to\infty} \int_0^1 \cos^n(x) \,dx = 0$ I tried to think about a partition ${0,t,1}$, and say that if $t$ is small enough, I can get: $\lim\limits_{n\to\infty} \int_0^t \cos(x)\, dx = 0$ But then I'm stuck with the rest section $[t,1]$ which approaches $1$. Any clue?
For any $0<\epsilon< 1$ $$ \int_0^1\cos^n xdx=\int_0^{\frac\epsilon2}\cos^nxdx+\int^1_{\frac\epsilon2}\cos^nxdx\le\frac\epsilon2+\cos^n(\frac\epsilon2)<\frac\epsilon2+\frac\epsilon2=\epsilon\text{ as }n\text{ large}. $$
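Numerically the decay is easy to see (my own check, using a midpoint Riemann sum; the function name is mine):

```python
from math import cos

def integral_cos_pow(n, steps=20_000):
    """Midpoint Riemann sum for the integral of cos(x)**n over [0, 1]."""
    h = 1.0 / steps
    return h * sum(cos((i + 0.5) * h) ** n for i in range(steps))

vals = [integral_cos_pow(n) for n in (1, 10, 100, 1000)]
```

The values decrease toward $0$; for $n=1$ the sum reproduces $\sin 1 \approx 0.8415$.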
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluate a given limit Evaluate the following limit: $$\lim_{x \rightarrow 4} \left( \frac{1}{13 - 3x} \right) ^ {\tan \frac{\pi x}{8}}$$ I haven't managed to get anything meaningful yet. Thank you in advance!
see my nice answer: assume $x=t+4$ $$\lim_{x \rightarrow 4} \left( \frac{1}{13 - 3x} \right) ^ {\tan \frac{\pi x}{8}}=\lim_{t \to 0} \left( \frac{1}{13 - 3(t+4)} \right) ^ {\tan \frac{\pi (t+4)}{8}}$$ $$=\lim_{t \to 0} \left( \frac{1}{1-3t} \right) ^ {-\cot \frac{\pi t}{8}}$$ $$=\lim_{t \to 0} \left(1-3t \right) ^ {\cot \frac{\pi t}{8}}$$ above limit has form $1^{\infty}$, so $$=e^{\large \lim_{t \to 0} \left(1-3t-1 \right) \cdot {\cot \frac{\pi t}{8}}}$$ $$=e^{\large \lim_{t \to 0} \left(-3t\right) \cdot {\cot \frac{\pi t}{8}}}$$ $$=e^{\large-3\cdot \frac{8}{\pi}\lim_{t \to 0} \frac{\frac{\pi t}{8}}{\tan\frac{\pi t}{8}}}=e^{-24/\pi}$$
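As a sanity check (my addition), evaluating the original expression just to the right of $x=4$ approaches $e^{-24/\pi}\approx 0.000481$:

```python
from math import tan, pi, exp

def g(x):
    return (1.0 / (13 - 3 * x)) ** tan(pi * x / 8)

target = exp(-24 / pi)
errs = [abs(g(4 + 10.0 ** -k) - target) for k in (2, 3, 4, 5)]
```

The errors shrink roughly proportionally to the distance from $4$, consistent with the limit computed above.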
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Can a sudoku with valid columns and rows be proved valid without evaluating every 3x3 inside it? I'm trying to solve a computer science challenge and have readily been able to validate whether or not the outside dimensions of a sudoku puzzle are valid. However, it doesn't check the validity of the inside squares and will "validate" the following incorrect sudoku puzzle because the 3x3 squares are not unique 1-9: [1, 2, 3, 4, 5, 6, 7, 8, 9], [2, 3, 4, 5, 6, 7, 8, 9, 1], [3, 4, 5, 6, 7, 8, 9, 1, 2], [4, 5, 6, 7, 8, 9, 1, 2, 3], [5, 6, 7, 8, 9, 1, 2, 3, 4], [6, 7, 8, 9, 1, 2, 3, 4, 5], [7, 8, 9, 1, 2, 3, 4, 5, 6], [8, 9, 1, 2, 3, 4, 5, 6, 7], [9, 1, 2, 3, 4, 5, 6, 7, 8] My question is this: if a sudoku puzzle has all valid columns and rows in the 9x9, is there a way to grab a single other set of values from the puzzle (say, for instance, the first 3x3) and know the whole puzzle to be correct? Or must one check every 3x3 for an otherwise valid whole puzzle square?
As proved in https://mathoverflow.net/q/129143, it is insufficient in general to additionally check three (or less) of the $3\times3$ blocks. On the other hand, it is enough to check four blocks: for instance, the three blocks on the diagonal and one more block. Together with rows and columns, this makes $9+9+4=22$ checks. It is in fact possible to make do with 21 checks (e.g., by checking suitable 6 rows, 6 columns, and 9 blocks). This is the best possible, as is also proved at the linked page.
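On the programming side, checking all nine blocks is cheap anyway; here is one way to do it (my own sketch, names mine), which flags the grid from the question:

```python
def groups_valid(groups):
    return all(sorted(g) == list(range(1, 10)) for g in groups)

def check(grid):
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    blocks = [[grid[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
              for br in range(3) for bc in range(3)]
    return groups_valid(rows), groups_valid(cols), groups_valid(blocks)

# the shifted grid from the question: row r is 1..9 rotated left by r
shifted = [[(r + c) % 9 + 1 for c in range(9)] for r in range(9)]
result = check(shifted)
print(result)  # (True, True, False)
```

All rows and columns pass, but the blocks fail — exactly the false positive the question describes.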
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that $\lim\limits_{n\to\infty}\frac{1}{\log(\log(n))}=0$ How to prove this limit? $$\lim_{n\to\infty }\frac{1}{\log(\log(n))}=0$$ I thought of something like $$0 \le \frac{1}{\log(\log(n))} \le \frac{1}{n}$$ Is it alright?
We have $\log x = \int_1^x {1 \over t} dt $, hence $\log$ is increasing. From the integral we see that $\log (n+1) \ge {1 \over 2} + {1 \over 3} + \cdots + { 1 \over n}$, and since the Harmonic series is divergent, we see that $\lim_{x \to \infty} \log x = \infty$. It follows that $\lim_{x \to \infty} \log (\log x) = \infty$ and so $\lim_{x \to \infty} {1 \over \log (\log x) } = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Finding the derivative of $\cos 2 x - 2 \sin x$ So, I've been teaching myself calculus, and I'm very new to all of this, so apologies in advance for what is probably a rather dumb question. I'm trying to find the derivative of the function $f(x) = \cos 2x - 2 \sin x$. I'm 99% sure that the derivative of $\cos$ is $-\sin$, and that the derivative of $\sin$ is $\cos$. So I got $-\sin 2 + \cos 1$. I just moved through from left to right - $2x$ becomes $2$ and $+-2$ becomes just plus the next thing because constants disappear, etc. However, the answer in my book is $-2 \cos x(1+2 \sin x)$. I have no clue how the book got this. Just in case I misunderstood the problem, it says, In Exercises 1 through 14, determine the derivative $f'(x)$. In each case it is understood that $x$ is restricted to those values for which the formula for $f(x)$ is meaningful. And then for each problem it gives a function like this particular one. Any help would be appreciated. Thanks!
So, the trick to this one is trigonometric identities: $$\sin(2x)=2\sin(x)\cos(x)$$ Basically, we end up with $$-2\cos(x)(1+2\sin(x))=-2\cos(x)-2\underbrace{(2\sin(x)\cos(x))}_{\large\sin(2x)}$$ And notice a small chain rule in the initial problem.
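If you want to double-check an answer like this without trigonometric identities, a numerical comparison works well (my own addition; the helper names are mine):

```python
from math import cos, sin

def f(x):
    return cos(2 * x) - 2 * sin(x)

def book_answer(x):
    return -2 * cos(x) * (1 + 2 * sin(x))

def numeric_derivative(x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

max_err = max(abs(book_answer(0.1 * k) - numeric_derivative(0.1 * k))
              for k in range(-30, 31))
```

The book's closed form agrees with the numerical derivative at every sample point.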
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Distance Between 2 Points on a Given function I was trying to find an equation for the distance between 2 points on a function, following (tracing) that function i.e. not the shortest distance (Pythagoras), rather actually tracing the distance of the function from one point to the other. I reasoned that if I zoomed up on the function and the "rectangles" produced by Riemann sum integration, in particular as (in an example case) the function is decreasing, the distance between the individual points of the top vertices of the rectangles is found by Pythagorus' and is the square root of the dx^2 + the change in height ($f(x_i) - f(x_{i-1})$). Though there is an infinite number of these, as such I eventually concluded that it is given by; $$\lim_{n \to \infty} \sum_{i=1}^n \sqrt{(\Delta x)^2 + (\Delta y)^2}$$ Where, $\Delta x = \frac{b-a}{n}, \Delta y = f(x_i) - f(x_{i-1}), x_i = a+\Delta x (i)$ This works when I tested it on straight lines (I already knew how to find the distance), though when I tried it on a parabola ($y=x^2$) from $x=-1$ to $x=1$, I didn't get an answer that seemed correct (Link - it eventually got approx. 8.26). So my question is where is my oversight? For curiosity, is there a way to simplify the above sum? What is the actual formula for finding the distance between 2 points by tracing the function? Thanks
As lordoftheshadows commented, what you need is to use the arc length formula which corresponds to your sum for very small $\Delta x$ (or very large $n$). I suggest you have a look here. Between two points on the curve $y(x)$, the formula is just $$L=\int_{a}^{b}\sqrt{1+(y'(x))^2}\,dx$$ In the case of $y(x)=x^2$. If not yet done, you will very soon learn about the antiderivatives and be able to show that $$\int\sqrt{1+4x^2}\,dx=\frac{1}{2} x\sqrt{1+4 x^2} +\frac{1}{4} \sinh ^{-1}(2 x)$$ So, using the integration bounds, $$L=\left(\frac{1}{2} b\sqrt{1+4 b^2} +\frac{1}{4} \sinh ^{-1}(2 b)\right)-\left(\frac{1}{2} a\sqrt{1+4 a^2} +\frac{1}{4} \sinh ^{-1}(2 a)\right)$$ For illustration purposes, let us use $a=0$ and $b=2$; the result would then be $$L=\sqrt{17}+\frac{1}{4} \sinh ^{-1}(4)\approx 4.64678$$ Suppose that you apply the discrete method using $\Delta x=\frac{1}{10}$; you should arrive to $\approx 4.64598$; using $\Delta x=\frac{1}{100}$ you should arrive to $\approx 4.64678$ which is "exact" for six significant figures. Just for the fun, use the discrete method with Excel varying the size of $\Delta x$; you will notice how the result converges to the correct value.
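The comparison described at the end is easy to reproduce (my own sketch of the experiment, in Python rather than Excel; the function names are mine):

```python
from math import sqrt, asinh

def exact_length(a, b):
    # antiderivative of sqrt(1 + 4x^2), evaluated between the bounds
    F = lambda x: 0.5 * x * sqrt(1 + 4 * x * x) + 0.25 * asinh(2 * x)
    return F(b) - F(a)

def discrete_length(a, b, n):
    # polygonal (chord-by-chord) approximation of the arc length of y = x^2
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    ys = [x * x for x in xs]
    return sum(sqrt(dx * dx + (ys[i] - ys[i - 1]) ** 2) for i in range(1, n + 1))

exact = exact_length(0, 2)            # ≈ 4.64678
coarse = discrete_length(0, 2, 20)    # Δx = 1/10
fine = discrete_length(0, 2, 200)     # Δx = 1/100
```

The polygonal approximations increase toward the exact value as $\Delta x$ shrinks, just as the answer describes.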
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that if $f$ is lower continuous then $f^{-1}((\alpha,\infty))$ is open Let a function $f:X\to\Bbb R$, where $X$ is a metric space. Then $f$ is lower continuous if for all $a\in X$ we have that $f(a)\le \liminf f(x_n)$ for every $(x_n)\to a$. Alternatively we can say that $f$ is lower continuous if for all $\epsilon >0$ exists some $\delta>0$ such that $$x\in\Bbb B(a,\delta)\implies f(a)-f(x)<\epsilon$$ Now I must prove that for any $\alpha\in\Bbb R$ the preimage of $(\alpha,\infty)$ is open. What I tried is set $f(a)=\alpha$, but from this approach I cant conclude that the preimage of $(\alpha,\infty)$ is open. At most I can conclude that for any $\epsilon>0$ exists a ball $\Bbb B(a,\delta)$ where some of it images belong to some set of the kind $(\alpha,\beta)$. Some hint or solution will be appreciated, thank you.
Let's use the second definition: $f$ is LSC if for every $a\in X $ and $\epsilon>0$, $\exists \delta$ such that $f(a)-f(x)<\epsilon$ whenever $x\in B_\delta(a)$. For $\alpha\in {\mathbb R}$ let $I_\alpha=f^{-1}((\alpha,\infty))$. * *Suppose $f$ is LSC. Fix $\alpha\in \mathbb R$. Then for every $a\in I_\alpha$ and $0<\epsilon <f(a)-\alpha$, there exists $\delta>0$ such that whenever $x\in B_\delta(a)$, $f(a)-f(x)<\epsilon<f(a)-\alpha$, giving $f(x)>\alpha$. Hence $B_\delta(a)\subset I_\alpha$, and as a result, $I_\alpha$ is open. *Conversely, suppose that for every $\alpha\in {\mathbb R}$, $I_\alpha$ is open. Fix $a\in X$ and $\epsilon>0$. Let $\alpha = f(a)-\epsilon$. Then $I_\alpha$ is open and contains $a$. In particular there exists $\delta>0$ such that $B_\delta(a)\subset I_\alpha$, that is, for all $x\in B_\delta(a)$, $f(x)>\alpha=f(a)-\epsilon$, or $f(a)-f(x)<\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How do I find the roots of this polynomial equation? The polynomial equation is: $x^4-5x^3+5x^2+5x-6=0$. How do I simplify this equation so that I can find its roots? Please, can anyone teach me how to find roots of equations of degree 3 and degree 4?
Hint. Look first for rational solutions: for a polynomial of degree $n$, $$a_n x^n+\cdots +a_1x+a_0$$ with integer coefficients, if $p/q\in\mathbb{Q}$ is a solution then $p$ divides $a_0$ and $q$ divides $a_n$. In your case, a very "lucky" one I would say, try with the divisors of $-6$ (note that $a_n=1$) that is: $\pm 1$,$\pm2$,$\pm3$,$\pm6$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2020951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are $\ell_p$ spaces complete under the $q$-norm? $\ell_p$ spaces are complete under the $p$-norm. The question is: if we change the norm, what can we say about the completness of the space? For example: $\ell_1$ (the space of all summable sequences) is complete or not in the two norm? If it's not complete is there a counter example showing that? Also the same question for $L_p$ spaces (Lesbegue $p$-integrable functions) with $q$-norm
In general: no. In the cases that $\mathscr l^p(X)$ or $L^p(X)$ end up being finite dimensional: yes. For $p<q$ you have $\mathscr l^p(X)\subset \mathscr l^q(X)$ but $\mathscr l^p(X)\neq\mathscr l^q(X)$ if $X$ is not a finite set. If $\mathscr l^p(X)$ were complete with $\|\cdot\|_q$ norm, then it would have to be a closed subset of $\mathscr l^q(X)$. But the space of finitely supported functions on $X$ $\mathscr l_0(X)$ is dense in $\mathscr l^q$ and lies in $\mathscr l^p$. So: $$\overline{\mathscr l^p(X)}\supseteq\overline{\mathscr l_0(X)}=\mathscr l^q(X) \supsetneq \mathscr l^p(X)$$ and $\mathscr l^p(X)$ is not closed in $\mathscr l^q(X)$ and so not complete. As to $L^p$ spaces: Note that depending on the measure space $X$ you can have both: $$L^p(X)\not\subset L^q(X)\qquad L^p(X)\not\supset L^q(X)$$ whenever $q\neq p$ and the question is ill defined. But for example if $X$ is a bounded open subset of $\mathbb R^n$ you have $L^p(X)\supset L^q(X)$, $L^p(X)\neq L^q(X)$ when $p<q$ and you can ask if $L^q$ is complete in $\|\cdot\|_p$ norm. The answer is again no with the same reasoning. The continuous functions on $X$ are in this case a dense subset in both $L^p$ and $L^q$ and the same reasoning as before applies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I derive what is $1\cdot 2\cdot 3\cdot 4 + 2\cdot 3\cdot 4\cdot 5+ 3\cdot 4\cdot 5\cdot 6+\cdots + (n-3)(n-2)(n-1)(n)$ ?? I'd like to evaluate the series $$1\cdot 2\cdot 3\cdot 4 + 2\cdot 3\cdot 4\cdot 5+ 3\cdot 4\cdot 5\cdot 6+\cdots + (n-3)(n-2)(n-1)(n)$$ Since I am a high school student, I only know how to prove such formulas (by the principle of mathematical induction). I don't know how to find the result of such a series. Please help. I shall be thankful if you guys can provide me the general solution (since I have been told that there exists a general solution, by my friend who gave me this question).
Using finite calculus we have that $$\sum k^{\underline 4}\delta k=\frac{k^{\underline 5}}{5}+C$$ where $k^{\underline 4}=k(k-1)(k-2)(k-3)$ is a falling factorial. Then taking limits $$\sum_{k=m}^nk^{\underline 4}=\sum\nolimits_m^{n+1}k^{\underline 4}\delta k=\frac{k^{\underline 5}}{5}\bigg|_m^{n+1}$$ The standard case is $$\sum_{k=4}^{n+4}k^{\underline 4}=\frac15(n+5)^{\underline 5}$$
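In ordinary notation the identity says $\sum_{k=4}^{n}k(k-1)(k-2)(k-3)=\frac{(n+1)n(n-1)(n-2)(n-3)}{5}$, since the lower boundary term $4^{\underline 5}$ vanishes. A quick check of this closed form (my addition; helper names mine):

```python
def falling(k, m):
    """Falling factorial k(k-1)...(k-m+1)."""
    prod = 1
    for i in range(m):
        prod *= k - i
    return prod

def lhs(n):
    return sum(falling(k, 4) for k in range(4, n + 1))

def rhs(n):
    return falling(n + 1, 5) // 5

ok = all(lhs(n) == rhs(n) for n in range(4, 60))
print(lhs(7), rhs(7))  # 1344 1344
```

Both sides agree for every tested $n$, e.g. $1\cdot2\cdot3\cdot4+\cdots+4\cdot5\cdot6\cdot7=1344=\frac{8\cdot7\cdot6\cdot5\cdot4}{5}$.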
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 10, "answer_id": 7 }
For which values of $1 ≤ p ≤ \infty$ is $\{f_n\} \subset (C[0,1],\lVert\cdot\rVert_p)$ a Cauchy sequence? The question is: Let $\{f_n\} \subset (C[0,1], \lVert\cdot\rVert_p)$, where $f_n(x)=1-nx$, if $0\leq x \leq \frac1n$, and $0$ otherwise. For which values of $p$, $1 \leq p \leq \infty$, is $\{f_n\}$ a Cauchy sequence? When it is a Cauchy sequence, does it converge? So, I know the definition of a Cauchy sequence. You have to start by letting $\epsilon > 0$, then there exists an $N \in \mathbb{N}$, such that if $n > m \geq N$, then $\lVert f_n - f_m \rVert_p < \epsilon$. But I can't even graph the function described above. Also, I don't know how a cauchy sequence can be related to the $\lVert \cdot \rVert_p$ . If anyone could shed some light on this question, I would deeply appreciate it.
Note that $f_n(x)=0$ if $x≥1/n$ and $|f_n(x)|≤1$ if $x\in[0,1/n]$. For that reason: $$\int_0^1|f_n(x)|^p\,dx≤\int_0^{1/n}1\,dx=1/n$$ So $$\|f_n\|_p≤\left(\frac{1}{n}\right)^{1/p}\to0$$ for all $p<\infty$.
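In fact the norm can be computed exactly (my addition): $\int_0^{1/n}(1-nx)^p\,dx=\frac{1}{n(p+1)}$, so $\|f_n\|_p=\bigl(n(p+1)\bigr)^{-1/p}\to 0$, consistent with the bound above. A numerical cross-check (function names mine):

```python
def norm_numeric(n, p, steps=100_000):
    """Midpoint-rule approximation of the L^p norm of f_n on [0, 1]."""
    h = 1.0 / steps
    s = h * sum(max(1 - n * (i + 0.5) * h, 0.0) ** p for i in range(steps))
    return s ** (1.0 / p)

def norm_exact(n, p):
    return (n * (p + 1)) ** (-1.0 / p)

checks = [(norm_numeric(n, p), norm_exact(n, p)) for n in (2, 5, 20) for p in (1, 2, 3)]
```

The numerical and exact values agree, and the exact formula makes the decay to $0$ explicit.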
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limits to infinity of a factorial function: $\lim_{n\to\infty}\frac{n!}{n^{n/2}}$ How can this limit to infinity be solved? I've tried with d'Alembert but it just keeps coming up with the wrong answer. $$\lim\limits_{n\to\infty}\frac{n!}{n^{n/2}}$$ I might have a problem in simplifying factorial numbers. Thank you in advance.
$$n!\geq\left (\frac{n}{4}\right )^{3n/4}\\\lim_{n\to\infty}\frac{\left (\frac{n}{4}\right )^{3n/4}}{n^{n/2}}=\lim_{n\to\infty}\frac{n^{n/4}}{4^{3n/4}}=\lim_{n\to \infty}\left(\frac{n}{64}\right)^{n/4}=\infty$$ The inequality holds because the largest $3n/4$ factors of $n!$ are each at least $n/4$; for instance, writing $n=4m$, $$(4m)!=1\cdots m\cdot\underbrace{(m+1)\cdots (4m)}_{3m\text{ factors}}\geq \underbrace{m\cdots m}_{3m}=m^{3m}=\left(\frac{n}{4}\right)^{3n/4}$$
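On a computer the divergence is visible immediately (my own check; working with logarithms via `lgamma` avoids overflow):

```python
from math import lgamma, log

def log_ratio(n):
    # log of n! / n^(n/2)  =  log(n!) - (n/2) log n
    return lgamma(n + 1) - 0.5 * n * log(n)

vals = [log_ratio(n) for n in (10, 100, 1000, 10_000)]
```

By Stirling, $\log(n!)\approx n\log n - n$, so this quantity grows like $\frac n2\log n - n\to\infty$, matching the computed values.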
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 2 }
Infinite Fibonacci sums $\sum_{n=1}^{\infty} \frac{1}{f_nf_{n+2}}$ - diverge or converge I am currently going through exercises regarding convergence/divergence. For my previous question I used the ratio test, and managed to get through it all okay (I think). I proved that: $$\sum_{n=1}^{\infty} \frac{n!}{n^n}$$ converges, and now I have to show whether or not an inverse Fibonacci sum converges/diverges and I'm not sure what method to use. What is the best way to tackle this problem? $$\sum_{n=1}^{\infty} \frac{1}{f_nf_{n+2}}$$ Where $f_n$ is the Fibonacci sequence, $f_n = f_{n-1} + f_{n-2}$ with initial terms $f_1 = f_2 = 1$ I don't believe it's similar to how I completed $\sum_{n=1}^{\infty} \frac{n!}{n^n}$ but let me know if I'm wrong. Based on looking at fairly similar questions on this website I have started trying to use proof by contradiction.
Another approach. The $n$th Fibonacci number is about $\varphi^n$, where $\varphi = (1+\sqrt{5})/2$ is the golden mean. Then your sum behaves like $\Sigma (1/\varphi^{2n})$. It's easy to show that converges. Of course you don't get the value of what it converges to, as in @achillehui 's nice answer.
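One can even extract the exact value (my addition): since $f_{n+1}=f_{n+2}-f_n$, we have $\frac{1}{f_nf_{n+2}}=\frac{1}{f_nf_{n+1}}-\frac{1}{f_{n+1}f_{n+2}}$, so the series telescopes to $\frac{1}{f_1f_2}=1$. A numerical check of both the value and the geometric-type decay used above:

```python
fib = [1, 1]
while len(fib) < 60:
    fib.append(fib[-1] + fib[-2])

# fib[0] = f_1, fib[1] = f_2, ...
terms = [1.0 / (fib[n] * fib[n + 2]) for n in range(50)]
partial = sum(terms)
```

The partial sum is already indistinguishable from $1$ after $50$ terms, and the term ratios settle near $1/\varphi^2\approx 0.38$.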
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Matrix derivative rule for the product of two matrices How to derive the matrix derivative of $AB$ (product of two matrices, where $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times r}$), like $$\frac{\partial AB}{\partial A}$$ and $$\frac{\partial AB}{\partial B}$$
Let $\phi((A,B)) = AB$ and note that $\phi((A+ \alpha, B+ \beta ) ) = \phi((A,B)) + \alpha B + A \beta + \alpha \beta$. Since the term $\alpha\beta$ is of second order in the increment, we have $D \phi((A,B)) ((\alpha, \beta)) = \alpha B + A \beta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counting Group Actions from a Finite Group to Itself Given a finite group $G$, how many group homomorphisms $G \to \mathrm{Perm}(G)$ are there? Alternatively, in how many ways can a finite group act on itself?
Here is some food for thought; this is far from an answer, as I don't know it. * *If $G$ is cyclic with generator $x$, then a group homomorphism from $G$ to $S(G)$ is uniquely determined by the image of $x$, and that image can be any element of $S(G)$ whose order divides $|G|$. So $G$ acts on itself in exactly as many ways as there are elements of order dividing $|G|$ in $S(G)$. (Beware that not every element of $S(G)$ qualifies: for $G=\mathbb{Z}/3\mathbb{Z}$, only $3$ of the $6$ elements of $S_3$ have order dividing $3$.) *Let $G_1$, $G_2$ be two finite groups and let $G:=G_1\oplus G_2$. If $\varphi$, respectively $\psi$, is a group homomorphism from $G_1$ to $S(G_1)$, respectively from $G_2$ to $S(G_2)$, then $\varphi\oplus\psi\colon (g_1,g_2)\mapsto\varphi(g_1)\oplus\psi(g_2)$ is a group homomorphism from $G_1\oplus G_2$ to $S(G_1)\times S(G_2)\subseteq S(G)$, acting coordinatewise. Distinct pairs $(\varphi,\psi)$ lead to distinct homomorphisms. Therefore, one has: $$|\textrm{Hom}(G,S(G))|\geqslant|\textrm{Hom}(G_1,S(G_1))|\times|\textrm{Hom}(G_2,S(G_2))|.$$ I believe this lower bound is far from being optimal. *If $G$ is finite abelian, then $G$ is a direct product of finitely many cyclic groups; this follows from the classification of finitely generated modules over principal ideal domains. If $\displaystyle G=\bigoplus_{i=1}^n\mathbb{Z}/(d_i)$, then according to the previous points there are at least $\prod_{i=1}^n|\textrm{Hom}(\mathbb{Z}/(d_i),S(\mathbb{Z}/(d_i)))|$ group actions of $G$ on itself. Notice that whenever $G$ is cyclic, this lower bound is reached. It would be an interesting question to ask when this lower bound is reached in general, but I have no clue at the moment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2021923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Every automorphism of $L:K$ fixes $K$? I know that if $L:K$ is an algebraic extension and a homomorphism $\tau: L \rightarrow L$ has the property that it fixes $K$, then $\tau$ is 1-1. Ok then, in the proof of this result we can see that $\tau$ even permutes the roots of $m_\alpha$ ($\alpha$ is an algebraic element). The question that arises for me is: if $L:K$ is a Galois (in particular algebraic) extension, what happens with the set $Aut(L)$? Does every automorphism of $L$ fix $K$? Edit I was conjecturing this because I need to show the following: Let $f(x) \in \mathbb{Q}[X]$ and let $L$ be the splitting field of $f$ over $\mathbb{Q}$ - a Galois extension. If $\alpha$ is a root of $f(x)$ and $\tau \in Aut(L)$ then $\tau(\alpha)$ is a root of $f(x)$. I tried to show this in the following way: We can write $$f(x) = \sum_{i = 0}^{n} a_i X^i$$ Since $\alpha$ is a root, applying $\tau$ we have $$\tau( f(\alpha) ) = \tau(0) = 0 = \sum_{i = 0}^{n} \tau(a_i)[\tau(\alpha)]^i$$ That is my problem: for some $i$, $\tau(a_i)$ may not be an element of $\mathbb{Q}$? In this case, $$\tau(f(\alpha)) \neq f(\tau(\alpha)) ?$$
Not necessarily. For example, $\mathrm{Aut}(\mathbb{C})$ is uncountable, while precisely two of these automorphisms (the identity map and complex conjugation) fix $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many abelian groups of order $p^{5}$ are there? I want to find how many abelian groups of order $p^{5}$ there are, up to isomorphism, where $p$ is a prime number. Are there any theorems that can help with this?
Edit: Adding my answer to your question in the comments: To use the theorem, suppose $\vert G \vert=n$ and factor $n$ into prime factors. Then you must find divisors $d_1,d_2,\dots,d_m$ such that $d_i\mid d_{i+1}$ and $d_1\cdot\dots\cdot d_{m}=n$, and your group can be $$G\cong \mathbb{Z}_{d_1}\oplus \mathbb{Z}_{d_2}\oplus \dots \oplus \mathbb{Z}_{d_m} $$ Original answer Yes, the theorem that you're looking for is the fundamental theorem of finitely generated abelian groups. From this you have that $p^5$ can factor into $$p^5=p^5$$ $$p^5=p\cdot p^4$$ $$p^5=p^2\cdot p^3$$ $$p^5=p\cdot p\cdot p^3$$ $$p^5=p\cdot p^2\cdot p^2$$ $$p^5=p\cdot p\cdot p\cdot p^2$$ $$p^5=p\cdot p\cdot p\cdot p\cdot p$$ So you have all the direct sums of those groups. $$G\cong\mathbb{Z}_{p^5}$$ $$G\cong\mathbb{Z}_{p}\oplus \mathbb{Z}_{p^4}$$ $$G\cong\mathbb{Z}_{p^2}\oplus \mathbb{Z}_{p^3}$$ $$G\cong\mathbb{Z}_{p}\oplus \mathbb{Z}_{p}\oplus \mathbb{Z}_{p^3}$$ $$G\cong\mathbb{Z}_{p}\oplus \mathbb{Z}_{p^2}\oplus \mathbb{Z}_{p^2}$$ $$G\cong\mathbb{Z}_{p}\oplus \mathbb{Z}_p\oplus \mathbb{Z}_{p}\oplus \mathbb{Z}_{p^2}$$ $$G\cong\mathbb{Z}_{p}\oplus \mathbb{Z}_p\oplus \mathbb{Z}_p\oplus \mathbb{Z}_{p}\oplus \mathbb{Z}_{p}$$
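Equivalently, the number of abelian groups of order $p^5$ is the number of partitions of $5$, which is $7$ — matching the list above. A tiny enumeration (my own sketch; the function name is mine):

```python
def partitions(n, largest=None):
    """All partitions of n with parts at most `largest`, largest part first."""
    if largest is None:
        largest = n
    if n == 0:
        return [[]]
    result = []
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            result.append([part] + rest)
    return result

parts = partitions(5)
print(len(parts), parts[0], parts[-1])  # 7 [5] [1, 1, 1, 1, 1]
```

Each partition $[e_1,\dots,e_m]$ corresponds to the group $\mathbb{Z}_{p^{e_1}}\oplus\dots\oplus\mathbb{Z}_{p^{e_m}}$.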
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Almost Complex Structure The question itself is nothing but linear algebra: Let $\{x_1,\cdots,x_n\}$ be $n$ linearly independent vectors (not necessarily orthogonal) in $\mathbb{R}^{2n}$ and let $J$ with $J^2=-1$ be the almost complex structure; we want to show that $\{x_1,\cdots, x_n, Jx_1,\cdots, Jx_{n}\}$ spans $\mathbb{R}^{2n}$. To check that they are linearly independent, we assume:\begin{equation}\begin{aligned} \sum_i a_ix_i+\sum_j b_jJx_j=0 \\ \end{aligned} \end{equation} Acting with $J$ on both sides, we have:\begin{equation} \sum_i a_i Jx_i-\sum_jb_jx_j=0 \end{equation} Then I saw somebody conclude, right away, the following, which I cannot figure out: \begin{equation} \sum_i (a_i^2+b_i^2)x_i=0 \end{equation} Thanks for your help!
In fact, choose any $x_1 \neq 0$. Then $x_1$ and $Jx_1$ are linearly independent since otherwise $J$ has a real eigenvalue. Next, choose $$ x_2 \not\in \mathop{\mathrm{span}}\{x_1,Jx_1\}. $$ Then $x_1$, $x_2$, $Jx_1$, $Jx_2$ are linearly independent since otherwise $V = \mathop{\mathrm{span}}\{x_1,x_2,Jx_1\}$ is invariant for $J$ and then $J|_V$ must have a real eigenvalue $\lambda$ since $\dim V = 3$. Then $\lambda$ is also an eigenvalue for $J$, which is a contradiction. Proceeding in this way, you find a basis of the form $x_1$, $\dots$, $x_n$, $Jx_1$, $\dots$, $Jx_n$.
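To make the answer's construction concrete, here is a small numerical illustration (my own sketch, with a hypothetical choice of vectors) using the standard almost complex structure on $\mathbb{R}^4$: picking $x_2$ outside $\operatorname{span}\{x_1, Jx_1\}$ yields four independent vectors, while the bad choice $x_2 = Jx_1$ does not, which is exactly why the answer's choice at each step matters.

```python
# Standard almost complex structure J on R^4 (so J^2 = -I):
# J e1 = e2, J e2 = -e1, J e3 = e4, J e4 = -e3.
def J(v):
    x1, x2, x3, x4 = v
    return (-x2, x1, -x4, x3)

def det(m):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

x1 = (1, 0, 0, 0)
good_x2 = (1, 2, 3, 0)   # chosen outside span{x1, J x1}, as in the answer
bad_x2 = J(x1)           # a choice inside span{x1, J x1}

print(det([x1, good_x2, J(x1), J(good_x2)]))  # -9: the four vectors are independent
print(det([x1, bad_x2, J(x1), J(bad_x2)]))    # 0: independence fails here
```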
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Alternating Series sum up to $99$ terms is $>0.2$ Prove that $$\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\cdots \cdots \cdots -\frac{1}{99}+\frac{1}{100}>0.2$$ $\bf{My\; Try::}$ We can write the series as $$\frac{1}{2}\bigg(1+\frac{1}{2}+\frac{1}{3}+\cdots \cdots +\frac{1}{50}\bigg)-\bigg(\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots \cdots +\frac{1}{99}\bigg)$$ Now, how can I proceed from here? Help required, thanks
$$\left(\frac12-\frac13\right)+\left(\frac14-\frac15\right)+\cdots+\left(\frac1{98}-\frac1{99}\right)+\frac1{100}>\left(\frac12-\frac13\right)+\left(\frac14-\frac15\right)=\frac{13}{60}=0.21\overline 6$$ Every parenthesized difference is positive (and so is the final term $\frac1{100}$), so dropping all but the first two groups can only decrease the sum, and $0.21\overline 6>0.2$.
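A quick numerical confirmation (illustrative only; the grouping argument above is the actual proof):

```python
from fractions import Fraction

# Exact partial sum 1/2 - 1/3 + 1/4 - ... + 1/100:
total = sum(Fraction((-1) ** k, k) for k in range(2, 101))
print(float(total))  # ≈ 0.3118, comfortably above 0.2

# The two pairs kept in the answer already suffice:
two_pairs = Fraction(1, 2) - Fraction(1, 3) + Fraction(1, 4) - Fraction(1, 5)
print(two_pairs, float(two_pairs))  # 13/60 ≈ 0.2167
```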
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does there exist a sequence $x_n$ such that $ \sum x_n >0$ but $ \sum x_n ^3 <0 $ where $ |x_n|<1$ Does there exist a sequence $x_n$ such that $ \sum_{n=1}^{\infty} x_n >0$ but $ \sum_{n=1}^{\infty} x_n ^3 <0 $ where $ |x_n|<1$ for all $n$? Intuitively I would say no, since cubing preserves the sign of each term and since the first sum is greater than $0$, we can match up each -ve term with a sufficient number of +ve terms such that their sum is +ve. So after cubing each of these 'partial sums' will have the same sign (i.e. also +ve) hence the overall sum is also positive. Is this correct? Is there a better way of wording the correct argument?
Take, for instance, $x_n=-3/5$ if $n$ is divisible by $3$ and $2/5$ otherwise. Then $\sum_{n=1}^\infty x_n=\infty$ while $\sum_{n=1}^\infty x_n^3=-\infty$.
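To see the two divergence rates concretely (a quick check of mine; note that $|x_n|$ is $3/5$ or $2/5$, so the hypothesis $|x_n|<1$ holds): each block of three consecutive terms contributes $2/5+2/5-3/5=1/5$ to the first series, while the cubes contribute $8/125+8/125-27/125=-11/125$.

```python
from fractions import Fraction

def x(n):
    # the sequence from the answer: -3/5 when 3 divides n, else 2/5
    return Fraction(-3, 5) if n % 3 == 0 else Fraction(2, 5)

block = sum(x(n) for n in (1, 2, 3))            # contribution of one block of three
block_cubed = sum(x(n) ** 3 for n in (1, 2, 3))
print(block)        # 1/5: the partial sums of x_n grow without bound
print(block_cubed)  # -11/125: the partial sums of x_n^3 decrease without bound
```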
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculating $\limsup\limits_{n \to \infty}a_n$ and $\liminf\limits_{n \to \infty}a_n$ with $a_n=\left( \frac{n+(-1)^n}{n} \right)^n$ I need to calculate $\limsup\limits_{n \to \infty} a_n$ and $\liminf\limits_{n \to \infty}a_n$ where $a_{n} = \left( \dfrac{n+(-1)^n}{n}\right)^n, \: n \in \mathbb{N}$ My approach is to calculate the limits for $n$ being even or odd. * *$n \pmod 2 \equiv 1$ $$\quad \lim_{n \to \infty} \left(\dfrac{n-1}{n} \right)^n$$ * *$n \pmod 2 \equiv 0$ $$\quad \lim_{n \to \infty} \left(\dfrac{n+1}{n} \right)^n$$ I don't know how I can solve the two formulas above. How should I continue?
Hint. One may recall that $$ \lim_{n \to \infty} \left(1+\frac{x}n\right)^n=e^x, \qquad x \in \mathbb{R}, $$ and one may observe that $$ \left(1-\frac1n\right)^n\le a_n \le \left(1+\frac1n\right)^n, \quad n=1,2,\cdots. $$
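Numerically, the two subsequences behave exactly as the hint suggests (an illustration of mine; the squeeze above is what justifies $\limsup a_n = e$ and $\liminf a_n = 1/e$):

```python
import math

def a(n):
    return ((n + (-1) ** n) / n) ** n

print(a(10 ** 6))      # ≈ e    (even indices: (1 + 1/n)^n)
print(a(10 ** 6 + 1))  # ≈ 1/e  (odd indices:  (1 - 1/n)^n)
```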
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
The number of ways of selecting 6 shoes from 8 pairs of shoes so that exactly 2 pairs of shoes are formed The number of ways of selecting 6 shoes from 8 pairs of shoes so that exactly 2 pairs of shoes are formed? My try: Let us first choose $2$ pairs from $8$ pairs. This can be done in $_{8}C_2$ ways. Suppose I have chosen the pairs $(1,2)$ and $(3,4)$. We then have to choose 2 more shoes such that one is not the mate of the other. For example, if I choose 5, I shouldn't choose its mate, i.e. 6. This can be done in $_{12}C_1 \cdot _{10}C_1$ ways. So the total number of ways $= {}_8C_2\cdot {}_{12}C_1\cdot{}_{10}C_1=3360$ ways. Am I correct? Please clarify.
When unsure, try with a smaller instance of the problem. Let's count the ways of choosing 2 shoes from 2 pairs so that they are not in the same pair. Your method would say start with 0 full pairs out of 2 and then multiply by $_4C_1$ and $_2C_1$: $$\binom{2}{0} \binom{4}{1} \binom{2}{1} = 8$$ But we can count directly: out of $ABab$, the choices fulfilling our conditions are $Aa$, $Ab$, $Ba$, $Bb$. That's not 8 so the formula is not correct. At the same time it shows what should have been done differently and why: divide your result by 2 to reflect that the last 2 shoes that must not be from the same pair can be picked in any order ($aA = Aa$).
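The corrected count, $\binom{8}{2}\binom{6}{2}\cdot 2\cdot 2 = 3360/2 = 1680$, can also be confirmed by brute force (an illustrative sketch of mine):

```python
from itertools import combinations

# Label 16 shoes 0..15; shoes 2k and 2k+1 form pair k.
def full_pairs(selection):
    s = set(selection)
    return sum(1 for k in range(8) if 2 * k in s and 2 * k + 1 in s)

count = sum(1 for sel in combinations(range(16), 6) if full_pairs(sel) == 2)
print(count)  # 1680, i.e. 3360 / 2
```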
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
how to calculate percentage of chance in a price negotiation I would like to know how to calculate a percentage of chance in a negotiation. Here is the problem: The product has a price of 30€, which represents an acceptance rate of 100%. The minimum price accepted is 27€, which represents an acceptance rate of 50%. How do I calculate the acceptance percentage for prices between 27€ and 30€? Thank you for your help.
This is guessing the function $r(p)$, while knowing $r(27€)=50 \%$ and $r(30€)=100 \%$. If there is nothing else, the simplest thing to do is to assume a linear law, doing linear interpolation $$ r(p) = r_1 (1 - \lambda) + r_2 \lambda \\ \lambda = \frac{p - p_1}{p_2 - p_1} $$ Example: $p=28€$, then $$ r(28€) = 50\% \left( 1 - \frac{28€-27€}{30€-27€}\right) + 100\% \frac{28€-27€}{30€-27€} = 66.67\% $$
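A minimal sketch of the interpolation in Python (the linear law is an assumption, as noted above; the function and parameter names are mine):

```python
def acceptance(price, p1=27.0, r1=50.0, p2=30.0, r2=100.0):
    """Acceptance rate (in %) by linear interpolation between the two
    known points (27 EUR, 50%) and (30 EUR, 100%)."""
    lam = (price - p1) / (p2 - p1)
    return r1 * (1 - lam) + r2 * lam

for p in (27, 28, 29, 30):
    print(p, round(acceptance(p), 2))  # 50.0, 66.67, 83.33, 100.0
```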
{ "language": "en", "url": "https://math.stackexchange.com/questions/2022955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the range of $f:[0,2\pi) \rightarrow \Bbb R, f(x)=(1+3^{\lfloor\frac{x}{\pi}\rfloor}|\sin x|)^\frac{1}{2}$. I need to check my solution for this problem: Let $f:[0,2\pi) \rightarrow \Bbb R, f(x)=(1+3^{\lfloor\frac{x}{\pi}\rfloor}|\sin x|)^\frac{1}{2}$. $1)$ Find the range of $f$. $2)$ Let $f_1=f\circ f$. Is $f_1$ a bijection? For the first one I got that the range is $[1,2]$. And for the second one, I think that $f_1$ is not a bijection, since $f$ is not a bijection, because $|\sin x|$ is not injective. Is this correct?
The given function is $$f(x)=\begin{cases} \sqrt{1+\sin x} & \text{ if } 0 \leq x < \pi \\ \sqrt{1-3\sin x} & \text{ if } \pi \leq x < 2\pi \end{cases} $$ Now observe that $f(0)=f(\pi)=1$, so $f(f(0))=f(f(\pi))$; hence the composition function is not injective, and in particular not bijective.
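A quick numerical check of both parts (my own illustration): $f(0)=f(\pi)=1$, so $f_1$ cannot be injective, and sampled values of $f$ stay within the claimed range $[1,2]$.

```python
import math

def f(x):
    # f(x) = sqrt(1 + 3^floor(x/pi) * |sin x|) on [0, 2*pi)
    return math.sqrt(1 + 3 ** math.floor(x / math.pi) * abs(math.sin(x)))

print(f(0), f(math.pi))  # both ≈ 1 (exactly 1 up to float noise)

values = [f(k * 2 * math.pi / 10000) for k in range(10000)]
print(min(values), max(values))  # ≈ 1.0 and ≈ 2.0, matching the range [1, 2]
```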
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Particular solution for $\sin(x)y'(x)+y(x)\cos(x)=-\sin(2x)$. Consider the ODE $$y'(x)\sin(x)+y(x)\cos(x)=-\sin(2x).$$ I know that a particular solution is of the form $w(x)=A\cos(2x)+B\sin(2x)$. When I substitute it into the equation, I get $$-2A\sin(2x)\sin(x)+2B\cos(2x)\sin(x)+A\cos(2x)\cos(x)+B\sin(2x)\cos(x)=-\sin(2x)$$ and thus $$\sin(2x)\Big(-2A\sin(x)+B\cos(x)\Big)+\cos(2x)\Big(2B\sin(x)+A\cos(x)\Big)=-\sin(2x),$$ and get the system $$\begin{cases} -2A\sin(x)+B\cos(x)=-1\\ 2B\sin(x)+A\cos(x)=0. \end{cases}.$$ What can I do now?
Note that you can recognize the product rule of differentiation on the left side, $$(\sin(x)y(x))'=\sin(x)y'(x)+\cos(x)y(x),$$ which greatly simplifies the equation.
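Carrying the hint one step further (a sketch of mine): integrating $(\sin(x)y(x))' = -\sin(2x)$ gives $\sin(x)\,y(x) = \tfrac12\cos(2x) + C$, and the choice $C=-\tfrac12$ turns the right side into $-\sin^2 x$, so $y(x)=-\sin x$ is one particular solution. A numerical spot-check:

```python
import math

def residual(x):
    # LHS minus RHS of  sin(x) y' + cos(x) y = -sin(2x)  for y = -sin(x)
    y, dy = -math.sin(x), -math.cos(x)
    return math.sin(x) * dy + math.cos(x) * y + math.sin(2 * x)

for x in (0.3, 1.0, 2.5):
    print(residual(x))  # ≈ 0 at each sample point
```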
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is $\lim_{x\rightarrow \infty}(1+\frac{a}{bx+c})^{dx+f}=e^{\frac{a\cdot d}{b}}$? I am studying limits through a study book, and I am given this simple rule but without any explanation. $$\lim_{x\rightarrow \infty}(1+\frac{a}{bx+c})^{dx+f}=e^{\frac{a\cdot d}{b}}$$ Why does this hold true for all values of $a, b, c, d$ and $f$ (assuming $b \neq 0$)? This is probably proven by writing the expression in the form of $e^{\ln(...)}$ and then applying l'Hôpital, but I seem to be lost. And secondly, can this also be proven without resorting to l'Hôpital? Any help would be appreciated, thanks! Solution $$\lim_{x\rightarrow \infty}(1+\frac{a}{bx+c})^{dx+f}=\lim_{x\rightarrow \infty}(1+\frac{a}{bx})^{dx}=\lim_{x\rightarrow \infty}(1+\frac{1}{\frac{bx}{a}})^{\frac{bx}{a}\cdot \frac{ad}{b}}$$ Let $u:=\frac{bx}{a}$ $$\lim_{x\rightarrow \infty}(1+\frac{1}{\frac{bx}{a}})^{\frac{bx}{a}\cdot \frac{ad}{b}}=\lim_{u \rightarrow \infty}\left((1+\frac{1}{u})^{u}\right)^{\frac{ad}{b}}=\left(\lim_{u \rightarrow \infty} (1+\frac{1}{u})^u\right)^{\frac{ad}{b}}$$ $\lim_{u \rightarrow \infty} (1+\frac{1}{u})^u$ is the definition of $e$. Here's the why: $$\lim_{u \rightarrow \infty} (1+\frac{1}{u})^u=e^{\ln(\lim_{u \rightarrow \infty} (1+\frac{1}{u})^u)}=e^{\lim_{u \rightarrow \infty} \left(u \cdot \ln(1+\frac{1}{u})\right)}=e^{\lim_{u \rightarrow \infty} \left(\frac{\ln(1+\frac{1}{u})-\ln 1}{\frac{1}{u}}\right)}=e^{\lim_{\frac{1}{u} \rightarrow 0} \left(\frac{\ln(1+\frac{1}{u})-\ln 1}{\frac{1}{u}}\right)}=e^{(\ln)'(1)}=e$$ Thus: $$\left(\lim_{u \rightarrow \infty} (1+\frac{1}{u})^u\right)^{\frac{ad}{b}}=e^{\frac{ad}{b}}$$
Assuming $b\ne 0$ we have that \begin{align}\lim_{x\rightarrow \infty}\left(1+\frac{a}{bx+c}\right)^{dx+f}&\\=&\lim_{x\rightarrow \infty}\left(1+\frac{a}{bx+c}\right)^{{(bx+c)}\cdot\frac{dx+f}{bx+c}}\\=&\left(\lim_{x\rightarrow \infty}\left(1+\frac{a}{bx+c}\right)^{bx+c}\right)^{\lim_{x\to\infty}\frac{dx+f}{bx+c}}.\end{align} Now, use that $$\lim_{x\rightarrow \infty}\left(1+\frac{a}{bx+c}\right)^{bx+c}=e^a$$ and $$\lim_{x\to\infty}\frac{dx+f}{bx+c}=\dfrac{d}{b}.$$
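As a numerical illustration (the sample parameter values are my own arbitrary choice, with $b \neq 0$), the expression at a large $x$ already matches $e^{ad/b}$ closely:

```python
import math

def expr(x, a, b, c, d, f):
    return (1 + a / (b * x + c)) ** (d * x + f)

a, b, c, d, f = 2.0, 3.0, 5.0, 4.0, -1.0
print(expr(1e7, a, b, c, d, f))  # ≈ e^(8/3) ≈ 14.39
print(math.exp(a * d / b))
```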
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }