Let $A\in M_{1\times3}(\mathbb{R})$ be an arbitrary matrix. Find the eigenvalues and eigenvectors of the matrix $A^TA$. My approach: writing $A=[a,b,c]$, $$ A^TA = \begin{bmatrix} a^2 & ab & ac\\ ab & b^2 & bc\\ ac & bc & c^2 \end{bmatrix}; \\ \lambda_1\lambda_2\lambda_3=\det(A^TA)=0 \qquad (1)\\ \lambda_1+\lambda_2+\lambda_3=\text{tr}(A^TA)=a^2+b^2+c^2 \qquad (2) $$ From $(1)$ we know that at least one eigenvalue must be $0$. Solving $(A^TA-\lambda I)x=0$ for $\lambda=0$ we get that $\dim(\ker(A^TA))=2$ (assuming $A\neq0$). Since the algebraic multiplicity has to be greater than or equal to the geometric multiplicity, and using $(2)$, we conclude that the two multiplicities have to be equal here. So we can say that $\lambda_1=a^2+b^2+c^2,\lambda_2=\lambda_3=0$. And now we just need to find the eigenvectors for the corresponding eigenvalues. Is my approach correct?
$\DeclareMathOperator{\tr}{tr}$The columns of $A^TA$ are all scalar multiples of $A^T$, so for $A\ne0$, this matrix has rank 1: its column space is spanned by $A^T$ and two of its eigenvalues are $0$. The last eigenvalue you get “for free” since the trace is equal to the sum of the eigenvalues, so it is $\tr A^TA-0-0=AA^T$, with eigenvector $A^T$. You can verify this by observing that $(A^TA)A^T=A^T(AA^T)$. The eigenspace of $0$ is just the null space of $A^TA$. Since each row of this matrix is a scalar multiple of $A$, this amounts to solving $AX=0$, which describes the set of vectors orthogonal to $A^T$. If $A=[a,b,c]\ne0$, then at least two of $[-c,0,a]$, $[0,-c,b]$ and $[b,-a,0]$ are non-zero and are obviously linearly independent, so will form a basis for this eigenspace.
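Both claims are easy to sanity-check numerically. Below is a minimal sketch in plain Python (the values of $a,b,c$ are arbitrary choices of mine): it verifies that $A^T$ is an eigenvector of $A^TA$ with eigenvalue $AA^T=a^2+b^2+c^2$, and that a vector orthogonal to $A$ lies in the kernel.

```python
# Sanity check: for A = [a, b, c], the matrix A^T A has eigenvector A^T with
# eigenvalue a^2 + b^2 + c^2, and it annihilates vectors orthogonal to A.
a, b, c = 2.0, -1.0, 3.0  # arbitrary example values

M = [[a*a, a*b, a*c],
     [a*b, b*b, b*c],
     [a*c, b*c, c*c]]  # A^T A written out explicitly

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

lam = a*a + b*b + c*c   # the one nonzero eigenvalue, AA^T
v1 = [a, b, c]          # candidate eigenvector A^T
w = [-c, 0.0, a]        # orthogonal to A, so it should be in the kernel

Mv1 = matvec(M, v1)
Mw = matvec(M, w)
print(all(abs(Mv1[i] - lam * v1[i]) < 1e-12 for i in range(3)))  # -> True
print(all(abs(t) < 1e-12 for t in Mw))                           # -> True
```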
{ "language": "en", "url": "https://math.stackexchange.com/questions/2386612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why is the word "complement" used in set theory? Maybe this should have been on the English Exchange, but why do we use the word "complement" in set theory? If I have: $$(A \cup B)'$$ Why does "complement" mean everything but the union? Is it because it is "all the things" that the original operation is not, thus it "completes" it? I looked at the dictionary and wasn't sure. Edit: $(A \cup B)'$ was only an example so I could use the complement mark. It was picked at random and was only meant to ask the question of what the word meant. It was not specifically about a union. I could have probably picked anything that allowed me to use the "tick mark" to indicate complement. My MathJax is not good and cumbersome for me, so I only used the single example.
"X complements Y" in colloquial English means basically "X has what Y lacks". This is exactly what the complement is in set theory, except that the complement of $A$ also has none of what $A$ has. That said, my suspicion is that this term actually originates in mathematical French and was borrowed directly from mathematical French into mathematical English.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2386710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
The number of partitions of the set $A$ into $k$ bounded blocks. Let $A=\{1,2,\cdots,n\}$ be a set. We want to partition this set into $k$ non-empty unlabelled subsets $B_1,B_2,\cdots ,B_k$ such that the cardinality of each $B_i$ lies between positive integers $a$ and $b$, that is, $a\leq |B_i|\leq b$. Let $D_{a,b}(n,k)$ be the number of partitions of the set $A$ into $k$ non-empty unlabelled subsets $B_1,B_2,\cdots ,B_k$ with $a\leq |B_i|\leq b$. How can one calculate the number of such partitions? I tried to obtain a recurrence relation for $D_{a,b}(n,k)$ from the definition of the Stirling numbers of the second kind, but I couldn't. Many thanks for any help and comments.
Supposing that we are trying to generalize Stirling numbers here we get the combinatorial class $$\def\textsc#1{\dosc#1\csod} \def\dosc#1#2\csod{{\rm #1{\small #2}}} \textsc{SET}_{=k}(\textsc{SET}_{a\le\cdot\le b}(\mathcal{Z}))$$ which yields the generating function $$G(z) = \frac{1}{k!} \left(\sum_{q=a}^b \frac{z^q}{q!}\right)^k.$$ Differentiate to obtain $$G'(z) = \frac{1}{(k-1)!} \left(\sum_{q=a}^b \frac{z^q}{q!}\right)^{k-1} \sum_{p=a-1}^{b-1} \frac{z^p}{p!}.$$ Extracting coefficients we find $$D_{a,b}(n+1,k) = n! [z^n] \sum_{p=a-1}^{b-1} \frac{z^p}{p!} \frac{1}{(k-1)!} \left(\sum_{q=a}^b \frac{z^q}{q!}\right)^{k-1} \\ = n! \sum_{p=a-1}^{b-1} \frac{1}{p!} [z^{n-p}] \frac{1}{(k-1)!} \left(\sum_{q=a}^b \frac{z^q}{q!}\right)^{k-1} \\ = n! \sum_{p=a-1}^{b-1} \frac{1}{p!} \frac{1}{(n-p)!} D_{a,b}(n-p, k-1) \\ = \sum_{p=a-1}^{b-1} {n\choose p} D_{a,b}(n-p, k-1).$$ The base cases here are $D_{a,b}(n, k) = 0$ if $n\lt 1$ or $k=0$ and $D_{a,b}(n, 1) = [[a\le n\le b]]$ where we have used an Iverson bracket. These were verified using enumeration, coefficient extraction from $G(z)$, and the recurrence. We also checked that $D_{1,n}(n,k)$ does indeed produce Stirling numbers. (Use the standard recurrence which exploits differentiation in a different way if you need regular Stirling numbers only.) This was the code. 
with(combinat);

ENUM :=
proc(n, k, a, b)
    local res, part, mset, inrange, psize;
    res := 0;
    part := firstpart(n);
    while type(part, list) do
        inrange := select(p -> a <= p and p <= b, part);
        psize := nops(part);
        if nops(inrange) = psize and k = psize then
            mset := convert(part, `multiset`);
            res := res + n!/mul(p!, p in part)/mul(q[2]!, q in mset);
        fi;
        part := nextpart(part);
    od;
    res;
end;

GCF := (n, k, a, b) ->
    n!*coeftayl(add(z^q/q!, q=a..b)^k/k!, z=0, n);

DX :=
proc(n, k, a, b)
    option remember;
    if n < 1 or k = 0 then return 0 fi;
    if k = 1 then
        if a <= n and n <= b then return 1; fi;
        return 0;
    fi;
    add(binomial(n-1, p)*DX(n-1-p, k-1, a, b), p=a-1..b-1);
end;

ST2 := (n, k) -> DX(n, k, 1, n);

Remark. These data are available at the OEIS, consult e.g. sequences A059022, A059023, A059024, and A059025.
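For readers without Maple, the same recurrence $D_{a,b}(n,k)=\sum_{p=a-1}^{b-1}\binom{n-1}{p}D_{a,b}(n-1-p,k-1)$ is a few lines of Python (a sketch; the function names are my own). The last line repeats the cross-check against Stirling numbers of the second kind, $D_{1,n}(n,k)=S(n,k)$.

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def D(n, k, a, b):
    """Partitions of an n-set into k unlabelled blocks with a <= |block| <= b."""
    if n < 1 or k == 0:
        return 0
    if k == 1:
        return 1 if a <= n <= b else 0
    # peel off the block containing a distinguished element: it has p more members
    return sum(comb(n - 1, p) * D(n - 1 - p, k - 1, a, b)
               for p in range(a - 1, b))

def stirling2(n, k):
    """Explicit formula for the Stirling numbers of the second kind."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

# removing the size restriction reproduces the Stirling numbers
print(all(D(n, k, 1, n) == stirling2(n, k)
          for n in range(1, 9) for k in range(1, n + 1)))  # -> True
```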
{ "language": "en", "url": "https://math.stackexchange.com/questions/2386814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The integral of complex function is zero Let $f:\mathbb{C}\rightarrow \mathbb{C}$ be a continuous function on $\mathbb{C}$ and holomorphic in $\mathbb{C}\setminus \mathbb{R}$. Prove that for every closed curve $\gamma$: $\int_\gamma f(z)\,dz=0$. So if $\gamma$ does not intersect $\mathbb{R}$ at all then we know that $\int_ \gamma f=0$ from Cauchy's theorem, but I don't know how to continue from here...
One thing you could do is define $F(z) = \int_{[0,z]}f(w)\,dw.$ If you can show $F'(z) = f(z)$ everywhere, then you'll know $F$ is entire, hence its derivative $f$ is entire. The conclusion then follows from Cauchy's theorem. To get started, suppose $z$ is in the upper half plane $\mathbb H.$ Then $z+h\in \mathbb H$ if $|h|$ is small. Let $\Delta$ be the triangle $0,z,z+h.$ Then by continuity, $$\tag 1 \int_\Delta f(w)\,dw = \lim_{\epsilon\to 0^+} \int_{\Delta +i\epsilon} f(w)\,dw.$$ Since the triangle $\Delta +i\epsilon$ lies in $\mathbb H,$ where $f$ is holomorphic, the right side of $(1)$ is $0$ for every $\epsilon.$ Hence the left side of $(1)$ is $0.$ This allows you to claim $F(z+h) -F(z)$ is the integral of $f$ along $[z,z+h].$ This leads to $F'(z) = f(z)$ quite nicely. The proof for the lower half plane is the same, and if $z\in \mathbb R,$ there are some cases to consider, but it's basically the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2386933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are there 4 consecutive numbers that are each the sum of 2 squares? Some numbers can be expressed as the sum of two squares (e.g. $10=3^2+1^2$) such as: $$0,1,2,4,5,8,9,10,13,16,17,18,20,25,...$$ Other numbers are not the sum of any two squares of integers: $$3,6,7,11,12,14,15,19,21,22,23,24,27,...$$ There are a lot of examples of $3$ consecutive numbers that are each the sum of two squares. The first is $$0=0^2+0^2\quad\quad1=1^2+0^2 \quad\quad 2=1^2+1^2$$ The largest example I've found is $$99952=444^2+896^2\quad 99953=568^2+823^2\quad 99954=327^2+945^2$$ This question seems to be related to the prime decomposition of consecutive numbers. It's pretty easy to use Fermat's theorem on sums of two squares and divisibility by $3$ to show that there can't be $6$ consecutive numbers of this type. But I can't manage to prove anything about $4$ or $5$ such consecutive numbers. There were no examples of $4$ consecutive numbers of this type less than $100000$. Are there any?
No, there aren't. Every odd integer that is the sum of two squares is congruent to $1$ modulo $4$, because every prime factor of the form $4k+3$ must occur in a power with even exponent. Among any four consecutive integers, two are odd, and one of those two is congruent to $3$ modulo $4$, so it cannot be a sum of two squares.
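The argument can also be seen directly from residues: squares are $0$ or $1 \pmod 4$, so a sum of two squares is never $\equiv 3 \pmod 4$, and any four consecutive integers contain such a number. A brute-force sieve (a sketch; the bound $100000$ matches the question) confirms there is no run of four:

```python
# Sieve: mark every n <= N that is a sum of two squares, then look for runs of 4.
N = 100_000
is_s2 = [False] * (N + 1)
x = 0
while x * x <= N:
    y = x
    while x * x + y * y <= N:
        is_s2[x * x + y * y] = True
        y += 1
    x += 1

runs = [n for n in range(N - 3) if all(is_s2[n + i] for i in range(4))]
print(runs)  # -> []  (no run of four exists, as the mod-4 argument predicts)
```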
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What will be the negation of this statement: "Every street in the city has at least one house in which we can find a person who is rich and beautiful, or highly educated and kind." Negation: "There exists a street in the city where in every house we can find no person who is rich and beautiful or highly educated and kind." Is the negation correct? Could someone please check? Thank you.
The negation is NOT(Every street in the city has at least one house in which we can find a person who is rich and beautiful or highly educated and kind.) That is: not every street in the city has at least one house in which we can find a person who is rich and beautiful or highly educated and kind. Which is equivalent to your statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
Find the real and imaginary part of z. Let $$z= \left( \frac{1 + \sin\theta + i\cos\theta}{1 + \sin\theta - i\cos\theta} \right)^n$$ Rationalizing the denominator: $$\frac{1 + \sin\theta + i\cos\theta}{1 + \sin\theta - i\cos\theta}\cdot\left( \frac{1 + \sin\theta + i\cos\theta}{1 + \sin\theta + i\cos\theta}\right) = \frac{(1 + \sin\theta + i\cos\theta)^2}{(1 + \sin\theta)^2 + \cos^2\theta}$$ $$=\frac{(1 + \sin\theta)^2 - \cos^2\theta + 2i(1 + \sin\theta)\cos\theta }{(1 + \sin\theta)^2 + \cos^2\theta}$$ thus $$x = \frac{(1 + \sin\theta)^2 - \cos^2\theta }{(1 + \sin\theta)^2 + \cos^2\theta} $$ $$y= \frac{2(1 + \sin\theta)\cos\theta }{(1 + \sin\theta)^2 + \cos^2\theta}$$ According to the binomial theorem, $$(x+iy)^n = \sum_{k=0}^n \binom{n}{k} x^{n-k}(iy)^k$$ we get $$z = \frac{1}{\left((1 + \sin\theta)^2 + \cos^2\theta\right)^n}\sum_{k=0}^n \binom{n}{k} \left((1 + \sin\theta)^2 - \cos^2\theta\right)^{n-k}\cdot\left(2i(1 + \sin\theta)\cos\theta\right)^k$$ ...and that is where I'm stuck. What do you think? Thanks for the attention.
HINT: Express the fraction as $r e^{i\varphi}$ and compute $r^n e^{i n\varphi}$.
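The hint works out very neatly here: writing $w=1+\sin\theta+i\cos\theta$, a short computation gives $w^2 = 2(1+\sin\theta)(\sin\theta+i\cos\theta)$ and $|w|^2=2(1+\sin\theta)$, so $w/\bar w = \sin\theta+i\cos\theta = e^{i(\pi/2-\theta)}$ and hence $z=e^{in(\pi/2-\theta)}$. A quick numerical sketch (my own test values of $\theta$ and $n$):

```python
import cmath
import math

theta, n = 0.7, 5  # arbitrary test values
w = 1 + math.sin(theta) + 1j * math.cos(theta)
z = (w / w.conjugate()) ** n

# the fraction equals e^{i(pi/2 - theta)}, so z = e^{i n (pi/2 - theta)}
expected = cmath.exp(1j * n * (math.pi / 2 - theta))
print(abs(z - expected) < 1e-9)  # -> True
```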
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 8, "answer_id": 0 }
Understanding the use of brackets in set theory notation I would appreciate help understanding the presence and absence of brackets in this particular example, which hopefully will clarify things for me. It is a line from a short proof by contradiction based on the Axiom of Foundation that No Set is an Element of Itself. Why are there brackets on some $S$'s and not on others. Suppose $S\in S$. Let $T=\{S\}$, then $T\bigcap S=\{S\}$, etc. Thanks
Given an arbitrary set $S$, the goal is to prove that $S$ is not an element of itself. The proposed proof proceeds by considering the set $T=\{S\}$, i.e., the set $T$ whose one and only member is the set $S$. Then $T$ is nonempty, because it has a member, namely $S$. So we can apply the axiom of regularity to conclude that $T$ has a member that has no members in common with $T$. That member of $T$ is $S$, because $S$ is the only member of $T$. So $S$ has no members in common with $T$. That is, no member of $T$ can also be a member of $S$. Well, $S$ is a member of $T$, so $S$ cannot be a member of $S$, q.e.d.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
solutions for $x^n = 1$ I'm supposed to solve this in terms of $n$, a natural number. I'm really getting tripped up on this, and I don't really know why. The only way this can have a solution is if $n = 0$, specifically the algebra I wrote to show this is $ \begin{gather*} x^n = 1\\ \log_x(x^n) = \log_x(1)\\ n\log_x(x) = 0\\ n = 0 \end{gather*} $ So, does this mean there are infinite solutions or just one unique solution? I'm not sure how this would change using $x$ as a complex number.
Your question is a bit unclear. You say that you are solving this "in terms of $n$, a natural number." The following doesn't give you all the details; you will need to figure out what the question is actually asking before you write down your solution. First question: what is your definition of natural number? Or to put it differently, is $0$ a natural number? Let me assume that it is. Second question: is $x$ a constant and you are solving for $n$, or is $n$ a constant and you are solving for $x$? Case 1a: $x$ is a positive constant not equal to $1$ and you are solving for $n$. In this case the only solution is $n=0$. For all other $n$, $x^n \neq 1$. Case 1b: $x =1$ and you are solving for $n$. Here any $n$ will work. Case 1c: $x$ is a negative constant and you are solving for $n$. In this case you have $x = -y$ for some positive $y$. So $x^n = (-1)^ny^n$. For this to be $1$ you need $(-1)^n = 1$ (otherwise the whole thing is negative because $y^n$ is always positive). This is satisfied exactly for the even numbers (including $0$). Then you need $y^n = 1$. If $y =1$, then this is true for all $n$ and your solution set is the set of all even $n$. If $y\neq 1$, then $n=0$ is the only solution. Case 2a: $n$ is a constant (natural number greater than $0$) and you are solving for $x$. Let's assume that we are looking for solutions in the complex numbers. Here the solution set is known as the $n$th roots of unity. If $$ x_0 = e^{2\pi i/n}, $$ then the $n$ solutions are $$ x_0, x_0^2, \dots, x_0^{n}. $$ If you are just solving within the real numbers, then you only get those (from the list) that are real. These will be $-1$ and $1$. You only get $-1$ when $n$ is even. Case 2b: $n=0$ and you are solving for $x$. In this case the solution set is all real (or complex) numbers except $0$. While some are going to disagree with me on this, $0^0$ is commonly not defined. (Just ask your teacher what convention you use in the class.)
Note here the case where you are solving for $n$ and $x\neq 1$ is a positive constant. In this case we got that $n=0$. And here your solution is actually correct! (within the real numbers.)
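For Case 2a, the $n$-th roots of unity are easy to generate and check numerically (a small sketch; $n=6$ is my arbitrary choice):

```python
import cmath

n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# every root satisfies x^n = 1
print(all(abs(r ** n - 1) < 1e-9 for r in roots))  # -> True

# the real ones are exactly -1 and 1 when n is even
real_roots = sorted(round(r.real, 9) for r in roots if abs(r.imag) < 1e-9)
print(real_roots)  # -> [-1.0, 1.0]
```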
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Does $\sum^{\infty}_{1}\frac{1}{k+1} = \sum^{\infty}_{2}\frac{1}{k}$ diverge because $\sum^{\infty}_{1}\frac{1}{k}$ diverges? If you manipulate the index of a series does it still converge/diverge? For example: Does $\sum^{\infty}_{1}\frac{1}{k+1} = \sum^{\infty}_{2}\frac{1}{k}$ diverge because $\sum^{\infty}_{1}\frac{1}{k}$ diverges?
$$\sum^{\infty}_{1}\frac{1}{k+1}=\frac 12 +\frac13 +\frac 14+\ldots$$ $$\sum^{\infty}_{1}\frac{1}{k}=1+\frac 12 +\frac13 +\frac 14+\ldots$$ Hence $$\sum^{\infty}_{1}\frac{1}{k+1}=\sum^{\infty}_{1}\frac{1}{k}-1$$ Since $$\sum^{\infty}_{1}\frac{1}{k}\to \infty \implies \sum^{\infty}_{1}\frac{1}{k}-\color{blue}1 \to \infty \implies \sum^{\infty}_{1}\frac{1}{k+1 }\to \infty$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How many squares does the diagonal of this rectangle go through? I have a rectangle made of tiles measuring $9$ by $12$ tiles to get $108$ tiles. A diagonal line is cut through the top left corner down to the bottom right corner. How many tiles does the diagonal go through? I had this question on a test today and I want to know the answer. I did this by literally drawing it up, and shading the squares the line went through. I got 14 squares. Now I searched this up on MSE and found this answer which asks the same question. However, I don't quite understand the answer, and I don't get what the $(N,M)$ part of the last line meant either. (I'm only a Year 7). So, can someone give me a way how to find out the answer using techniques a Year 7/8 would know. Also, please keep formulas to a minimum. (We are not allowed to use a formula in our working out, in which we have to show). Thank you
The diagonal $d$ will go through two grid points which divide it into three equal parts. Each part $d'$ is a diagonal of a $4\times3$ grid rectangle $R$. It intersects three horizontal and two vertical interior grid lines of $R$. These $5$ intersection points partition $d'$ into $6$ parts. It follows that $d'$ traverses $6$ tiles, hence $d$ traverses $3\cdot 6=18$ tiles.
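The count generalizes: the diagonal of an $N\times M$ grid of tiles passes through $N+M-\gcd(N,M)$ of them, which here gives $12+9-3=18$. A brute-force sketch confirming this (exact rational sampling is used so no tile is missed and no sample ever lands on a grid line):

```python
from fractions import Fraction
from math import gcd

N, M = 12, 9  # tiles across and down

# Sample the diagonal at parameters t = odd/(2K). Then N*t and M*t have odd
# numerators over an even denominator, so they are never integers, i.e. no
# sample sits on a grid line; floor then gives the tile each sample lies in.
cells = set()
K = 2000
for k in range(K):
    t = Fraction(2 * k + 1, 2 * K)
    cells.add((int(N * t), int(M * t)))

print(len(cells), N + M - gcd(N, M))  # -> 18 18
```

The sample spacing $1/2000$ is much finer than the smallest gap between crossing points ($1/36$ in the parameter), so every traversed tile is hit at least once.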
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving a linear matrix equation. Could you please help me solve this problem? Given an $n\times m$ matrix $Q$ whose columns are orthonormal ($\operatorname{rank}(Q)=m$), an $n\times n$ symmetric positive definite matrix $X$ (the unknown), and another $n\times n$ symmetric positive definite matrix $B$, we want to solve the following problem: $$QQ^{t}XQQ^t=B$$ Putting $A=QQ^t$, the problem is then formulated as $AXA=B$; in this case $\det(A)=0$. One manipulation that could be done is: $$Q^tQQ^tXQQ^tQ=Q^tBQ$$ and since $Q^tQ=I$ (the columns are orthonormal), the left side reduces to $Q^tXQ$. Now I'm stuck with the following equation; are there any ideas for solving it to find $X$? $$Q^tXQ=Q^tBQ$$ or equivalently $$Q^t(X-B)Q=0.$$ Thank you in advance.
If I understand the statement of the problem correctly, $A=Q Q^t$ is an orthogonal projection onto an $m$-dimensional subspace of ${\Bbb R}^n$: $A^2=A=A^t$. You have a solution iff the image of $B$ is contained in the image of $A$. In this case $ABA=B$ as well (when $B$ is symmetric), so a solution is simply $X=B$. If $B\neq ABA$ then your problem has no solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2387959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it possible to dissect a circle of radius 9 into 81 equal areas, using only circles? You can dissect a circle of radius $3$ into $9$ equal areas by placing within it $5$ unit circles in an orthogonal cross shape. You could then place $5$ of these radius $3$ circles into a circle of radius $9$ in a similar way. Is it then possible to dissect the $4$ irregular shapes each into $9$ equal areas using only circles? Or is there another entirely different way of achieving a similar result?
My first thought was to make 80 concentric circles, all centered at the center of the big circle. Choose radii $r_1, r_2, \cdots, r_{80}$ so that each band of the dart board has the area $1/81$ of the area of the big circle.
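Concretely, for a big circle of radius $9$ the radii come out to $r_i=\sqrt i$ for $i=1,\dots,80$: the disc of radius $\sqrt i$ has area $i\pi$, so every band (and the central disc) has area exactly $\pi = \frac{1}{81}\cdot 81\pi$. A quick sketch checking this:

```python
import math

R = 9.0
radii = [math.sqrt(i) for i in range(1, 81)]  # r_i = sqrt(i); the outer boundary is R = sqrt(81)

bounds = [0.0] + radii + [R]
areas = [math.pi * (bounds[i + 1] ** 2 - bounds[i] ** 2) for i in range(81)]
print(all(abs(a - math.pi) < 1e-9 for a in areas))  # -> True: all 81 regions have area pi
```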
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Difference between Ritz vectors and Eigenvectors This is probably a silly question, as it came from an error in the Eigenvectors I found using ARPACK (Fortran). In this case, the values of the Ritz vectors are identical in value to the theoretical Eigenvectors but different in sign. So what is the real difference between the two? Should I expect this behavior?
First of all, the eigenvectors corresponding to eigenvalues of multiplicity one are only unique up to a scalar. Hence, if $v$ is an eigenvector, so is $\mu v$ for $\mu \in \mathbb{C}$. Let me turn to the definition of Ritz vectors. Ritz vectors are usually approximations to the eigenvectors of a matrix $A$ that are obtained using the Arnoldi method (see here). The Arnoldi method computes matrices $H_m$ and $V_m$. If $y$ is an eigenvector of the matrix $H_m$, then $V_m y$ is called a Ritz vector and approximates an eigenvector of $A$. More generally, let $q_1, \dots, q_m$ be a set of normalized and orthogonal vectors and $Q_m = [ q_1 | \dots | q_m ]$. Then if $y$ is an eigenvector of $Q_m^T A Q_m$, then $Q_m y$ is called a Ritz vector and the corresponding eigenvalue a Ritz value of $A$ w.r.t. the subspace $\mathrm{span}\{q_1, \dots, q_m\}$. In the case of the Arnoldi method, we have $Q_m = V_m$ and $H_m = Q_m^T A Q_m$.
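So the sign discrepancy is expected: $v$ and $-v$ are equally valid (normalized) eigenvectors, and the solver fixes the scale but not the sign. A tiny stdlib sketch of the Ritz construction (matrices and subspace are my own example; the subspace happens to be invariant under $A$, so here the Ritz vectors are exact eigenvectors):

```python
import math

A = [[2, 1, 0],
     [1, 2, 0],
     [0, 0, 5]]
s = 1 / math.sqrt(2)
Q = [[s, 0.0],   # columns q1 = (1,1,0)/sqrt(2) and q2 = (0,0,1):
     [s, 0.0],   # together they span an invariant subspace of A
     [0.0, 1.0]]

# H = Q^T A Q  (2x2 projected matrix)
AQ = [[sum(A[i][k] * Q[k][j] for k in range(3)) for j in range(2)] for i in range(3)]
H = [[sum(Q[k][i] * AQ[k][j] for k in range(3)) for j in range(2)] for i in range(2)]

# H is diagonal in this example, so its eigenpairs are immediate: (H00, e1), (H11, e2)
ritz_vals = [H[0][0], H[1][1]]
ritz_vecs = [[Q[i][0] for i in range(3)], [Q[i][1] for i in range(3)]]

# each Ritz vector Qy satisfies A(Qy) = lambda (Qy): here they are true eigenvectors
ok = all(
    all(abs(sum(A[i][k] * v[k] for k in range(3)) - lam * v[i]) < 1e-12
        for i in range(3))
    for lam, v in zip(ritz_vals, ritz_vecs)
)
print(ok)  # -> True
```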
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Interpret this function notation? I have the following function notation \begin{align} f: &\, \mathbb{R} \rightarrow \mathbb R^2 \\ &x\mapsto y=f(x) \end{align} Does it actually mean \begin{gather} y=f(x) \\ (y_1,y_2)=(f_1(x),f_2(x)) \\ \begin{cases} y_1=f_1(x) \\ y_2=f_2(x)\end{cases} \qquad ? \end{gather} Or with an example, if $f(t)=(e^t,t^2)$ and $y=f(t)$, so \begin{align} (y_1,y_2)=(f_1(t),f_2(t))=(e^t,t^2) \end{align} \begin{cases} y_1=f_1(t)=e^t\\ y_2=f_2(t)=t^2 \end{cases} Is this correct?
Yes. The notation $f:\mathbb{R}\to\mathbb{R}^2$ tells you that the function takes in a single real number and returns an ordered pair of real numbers. Hence the notation $x\mapsto y=f(x)$ is telling you that $y$ is an ordered pair of real numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I fit two equal-length arcs in the opposite corners of a right triangle? I am trying to make two arcs of the same length such that they will both fit in a right triangle like so: I am given the first radius (r1) and a constant k that is the difference between the leg opposite angle one and the second radius (r2) I know that the angle of the arcs have to add up to Pi/2. I also know that this becomes impossible at some small value of k - for instance, if k was zero there is no way the two arcs could be of equal length. If I knew the angle, I could find the second radius, or vice-versa, using trigonometric functions. In the case where both the angles were Pi/4, I know that k would be r*(sqrt(2) - 1). In case anyone wants to know why I'm doing this, I'm trying to show a circle that is broken open at a point and the broken ends curve out until they are parallel and a certain distance apart. That distance will be 2*k. I want to make sure that the curved-outward parts are the same length as they would be if they were still completing the circle.
Just a hint: the same length means that $$r_1\theta =r_2 \left(\frac {\pi}{2}-\theta\right) $$ or $$\theta=\frac {\pi r_2}{2 (r_1+r_2)}. $$ In the triangle, $$\sin \theta=\frac {k+r_2}{r_1+r_2}. $$ If we put $k=xr_2$, then (using $\frac{r_1+r_2}{r_2}=\frac{\pi}{2\theta}$) $$x=\frac {\pi}{2\theta}\sin \theta-1.$$
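The first two relations are easy to check numerically (a sketch; $r_1, r_2$ are arbitrary choices of mine — pick the radii, derive $\theta$, then recover $k$ from the sine relation and confirm the two arcs have equal length):

```python
import math

r1, r2 = 3.0, 2.0                       # arbitrary radii
theta = math.pi * r2 / (2 * (r1 + r2))  # from r1*theta = r2*(pi/2 - theta)

arc1 = r1 * theta
arc2 = r2 * (math.pi / 2 - theta)
print(abs(arc1 - arc2) < 1e-12)  # -> True: equal arc lengths by construction

k = math.sin(theta) * (r1 + r2) - r2  # from sin(theta) = (k + r2)/(r1 + r2)
print(k > 0)                          # -> True for these radii
```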
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Analytic Continuation for a Product I was trying to solve the functional equation $$\phi(x)^2\phi(2x)=x^2+2x+1$$ and by assuming that $\phi(1)=1$, and setting up a recurrence relation, I found the solution $$\phi(x)=\prod_{i=0}^{\log_2(x)-1} (2^i+1)^{x-i}$$ However, this only makes sense for values of $x$ that are perfect powers of two. How can I extend this to non-powers of $2$ but still satisfy the functional equation?
$\phi$ is positive. $f(x)=\log(\phi(x))$ satisfies $$2f(x)+f(2x)=\log\left((x+1)^2\right)$$ Since you are talking about analytic continuation: for analytic solutions we can compute the derivatives at $x=0$, $$2f^{(n)}(x)+2^nf^{(n)}(2x)=\frac{d^n}{dx^n}\log\left((x+1)^2\right)$$ Therefore $$f^{(n)}(0)=\frac{1}{2+2^n}\frac{d^n}{dx^n}\log\left((x+1)^2\right)\Big|_{x=0}=\frac{(-1)^{n-1}n!}{n\left(1+2^{n-1}\right)}$$ Therefore $$f(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n\left(1+2^{n-1}\right)}x^n$$ and $$\phi(x)=\exp\left(\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n\left(1+2^{n-1}\right)}x^n\right)$$
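As a sanity check on the sign: at $n=1$ the relation reads $4f'(0)=2$, so $f'(0)=+\tfrac12$, which forces the coefficient $(-1)^{n-1}$. The truncated series then satisfies the functional equation numerically (a sketch; $x=0.3$ and $60$ terms are my arbitrary choices, well inside the radius of convergence):

```python
import math

def phi(x, terms=60):
    # phi = exp(f) with f(x) = sum_{n>=1} (-1)^(n-1) x^n / (n (1 + 2^(n-1)))
    f = sum((-1) ** (n - 1) * x ** n / (n * (1 + 2 ** (n - 1)))
            for n in range(1, terms + 1))
    return math.exp(f)

x = 0.3
lhs = phi(x) ** 2 * phi(2 * x)
rhs = (x + 1) ** 2
print(abs(lhs - rhs) < 1e-10)  # -> True: phi(x)^2 phi(2x) = (x+1)^2
```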
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that $e^{-x}x^n$ is bounded on $[0,\infty)$ and hence prove that $\int_0^\infty e^{-x}x^n \, dx$ exists. Show that $e^{-x}x^n$ is bounded on $[0,\infty)$ for all positive integral values of $n$. Using this result show that $\int_0^\infty e^{-x}x^n \, dx$ exists. My work: I know $f$ is a continuous function and $\lim\limits_{x \to \infty} f(x)=\lim\limits_{x \to \infty} \dfrac{x^n}{e^x}=\lim\limits_{x \to \infty}\dfrac{n!}{e^x}=0$, by applying L'Hospital's rule repeatedly. But how to prove its boundedness formally? Also, for the existence of the integral, my book only covers integrals where the limits are finite (Darboux's condition for integrability). So, how do I prove existence of an integral whose limits are infinite?
For the boundedness, you can take the derivative and show that it is negative on some interval $[a,\infty)$ and thus show the function is bounded on $[a,\infty)$. It is bounded on $[0,a]$ since it's continuous. The way you show the integral on the infinite interval exists is to remember the definition of an improper integral existing: $$ \lim_{b\to\infty}\int_0^b x^n e^{-x} dx$$ exists and is finite. It should be said that bounded on $[0,\infty)$ does not imply that the integral exists (take $\frac{1}{x+1}$ for instance), so that part of the question seems a bit misleading. However, it's true here and a simple way to prove it is by induction. You can show the integral exists for $n=0$ straightforwardly, and then for the induction step, use integration by parts.
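The value of the integral, $\int_0^\infty x^n e^{-x}\,dx = n!$, can be confirmed numerically; here is a sketch using composite Simpson's rule on a truncated interval (for small $n$ the tail beyond $x=60$ is negligible):

```python
import math

def simpson(f, a, b, m=6000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

n = 5
val = simpson(lambda x: x ** n * math.exp(-x), 0.0, 60.0)
print(abs(val - math.factorial(n)) < 1e-6)  # -> True: the integral is n! = 120
```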
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Decidability of irreducibility in $Z[X]$ I am interested in a naive algorithm for testing irreducibility in $\mathbb Z[X]$: given a polynomial $p=a_nX^n+\ldots+a_0$ in $\mathbb Z[X]$, is there a known explicit bound $M=M(n,\max_{i=0}^n|a_i|)$ such that if $p$ factorizes as $qr$ in $\mathbb Z[X]$, then the coefficients of $q$ and $r$ are all bounded above by $M$ in absolute value?
An alternative idea that doesn't involve the complex roots of $p$: Find $n+1$ distinct integers $x_0, x_1, \ldots, x_n$ such that $p(x_i) \neq 0$ for all i. Note that if $p$ factorizes as $p = qr$, then we have $p(x_i) = q(x_i) r(x_i)$ for all $i$. This gives us an upper bound on $\left|q(x_i)\right|$, and the values $q(x_i)$ fully determine the polynomial $q$ by polynomial interpolation in Lagrange form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to evaluate $\int_0^{\infty}\int_0^{\infty}\left(\frac{e^{-\Lambda x}-e^{-\Lambda y}}{x-y}\right)^2 \,dx\,dy$ How to evaluate $$F(\Lambda)=\int_0^{\infty}\int_0^{\infty}\left(\frac{e^{-\Lambda x}-e^{-\Lambda y}}{x-y}\right)^2 \,dx\,dy$$ where $\Lambda$ is a positive real number? I tried evaluating the innermost integral first, $\displaystyle\int_{0}^{\infty}\left(\frac{e^{-\Lambda x}-e^{-\Lambda y}}{x-y}\right)^2 dx$, and found the solution in terms of the exponential integral $E_1(u)$, and I am not sure if that would help at all.
By symmetry, for any $\Lambda>0$ we have $$\begin{eqnarray*} F(\Lambda)=\iint_{(0,+\infty)}\left(\frac{e^{-\Lambda x}-e^{-\Lambda y}}{x-y}\right)^2\,dx\,dy&=&2\int_{0}^{+\infty}\int_{0}^{x}\left(\frac{e^{-\Lambda x}-e^{-\Lambda y}}{x-y}\right)^2\,dy\,dx \\ &\stackrel{y\mapsto xz}{=}&2\int_{0}^{+\infty}\int_{0}^{1}\frac{\left(e^{-\Lambda x}-e^{-\Lambda xz}\right)^2}{x(1-z)^2}\,dz\,dx \\ &\stackrel{x\mapsto w/\Lambda}{=}&2\int_{0}^{1}\int_{0}^{+\infty}\frac{\left(e^{-w}-e^{-wz}\right)^2}{w(1-z)^2}\,dw\,dz \\ &\stackrel{\text{Frullani}}{=}&2\int_{0}^{1}\frac{2\log(1+z)-\log(4z)}{(1-z)^2}\,dz\end{eqnarray*}$$ hence $F(\Lambda)$ is constant and it equals $$\begin{eqnarray*} F(\Lambda)&=&2\int_{0}^{1}\frac{2\log\left(1-\frac{u}{2}\right)-\log(1-u)}{u^2}\,du\\&=&2\int_{0}^{1}\sum_{n\geq 2}\left(\frac{1}{n}-\frac{2}{n 2^{n}}\right)u^{n-2}\,du\\&=& 2\sum_{n\geq 2}\frac{1}{n(n-1)}\left(1-\frac{2}{2^n}\right)=\color{blue}{2\log 2}. \end{eqnarray*}$$
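The final sum can be checked in closed form, too: $\sum_{n\ge2}\frac{x^n}{n(n-1)} = x+(1-x)\log(1-x)$, so the series evaluates to $2\big[1-\big(\tfrac12+\tfrac12\log\tfrac12\big)\cdot 2\big]^{\phantom{.}}\!$... i.e. $2\log 2 \approx 1.3863$. A numerical sketch of the partial sums:

```python
import math

# partial sums of 2 * sum_{n>=2} (1 - 2/2^n) / (n(n-1)), which tend to 2 log 2;
# the tail beyond N is about 2/N, so N = 100000 gives ~2e-5 accuracy
def partial(N):
    return 2 * sum((1 - 2 * 0.5 ** n) / (n * (n - 1)) for n in range(2, N + 1))

print(abs(partial(100_000) - 2 * math.log(2)) < 1e-4)  # -> True
```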
{ "language": "en", "url": "https://math.stackexchange.com/questions/2388892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Pick out the uniformly continuous function. Pick out the uniformly continuous function for $x \in (0,1)$: $$(1) \quad \quad\quad f(x)= \cos x \,\cos \frac {\pi}x$$ $$(2)\quad \quad \quad f(x) = \sin x \, \cos \frac {\pi}x$$ I was trying this question, and I thought that since $\sin x$ and $\cos x$ are periodic and continuous, they are uniformly continuous; therefore, from my point of view, both option (1) and option (2) are uniformly continuous. But I am not sure about my answer. If anybody can help me, I would be very thankful.
In case (2) you can define it as $0$ for $x=0$. This turns it into a continuous function on $[0,1]$. In fact, $\sin(x)\cos(\pi/x)$ is continuous on $(0,1]$ and $\lim_{x\to0^+}\sin(x)\cos(\pi/x)=0$. Therefore, by Cantor's theorem, it is uniformly continuous. In case (1), for $\epsilon=1/2$ we can find $x_k=\frac{1}{2k}$ and $y_k=\frac{1}{2k+1}$, which for $k$ large will have $|x_k-y_k|$ arbitrarily small, $\cos(x_k)$ and $\cos(y_k)$ very close to $1$, and $\cos(\pi/x_k)=1$ while $\cos(\pi/y_k)=-1$. Therefore $|\cos(x_k)\cos(\pi/x_k)-\cos(y_k)\cos(\pi/y_k)|>1/2=\epsilon$. Therefore, it is not uniformly continuous on $(0,1)$ or any neighborhood of $0$.
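The two behaviors are easy to see numerically (a sketch; $k=10000$ is my arbitrary choice): at $x_k=\frac1{2k}$ and $y_k=\frac1{2k+1}$ the points are extremely close, yet function (1) jumps by about $2$, while function (2) is pinned near $0$ by the $\sin x$ factor.

```python
import math

f1 = lambda x: math.cos(x) * math.cos(math.pi / x)
f2 = lambda x: math.sin(x) * math.cos(math.pi / x)

k = 10_000
x, y = 1 / (2 * k), 1 / (2 * k + 1)

print(abs(x - y) < 1e-6)          # -> True: the points are very close...
print(abs(f1(x) - f1(y)) > 1.9)   # -> True: ...but f1 jumps by about 2
print(abs(f2(x) - f2(y)) < 2e-4)  # -> True: while f2 barely moves
```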
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Subgroups of $\{\sigma^k,\tau \sigma^k\mid 0\le k \le 7\}$ I am reading in Dummit and Foote, page $579$. We have the following group: $$G = \langle \tau,\sigma\mid \sigma^8,\tau^2,\sigma \tau = \tau \sigma^3\rangle = \{\sigma^k,\tau\sigma^k\mid 0\le k\le 7\}$$ which is just $Gal\left( \mathbb Q(i,\sqrt[8]{2})/\mathbb Q\right)$. It is stated that: "determining the subgroups of this group is a straightforward problem". How so? Is there an easy way to determine the subgroups? Here are the subgroups:
The Galois group of $K/\mathbb{Q}$ with $K:=\mathbb{Q}(\sqrt[8]{2},\zeta)$ has order $16$ and is an extension of $(\mathbb Z_8)^\times$ by $\mathbb Z_4$. For all groups of order $16$, in particular for this Galois group, the subgroups have been computed, see here, or here and similar references. One can use the classification of groups of order $1,2,4,8$ and other facts about $2$-groups. One also can use GAP. Doing the computations by hand may take some time. For the Galois group see also this MSE-question: Finding the Galois group over $\Bbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Number Theoretic Statement is ..... Prove or Disprove: There exists $A\subset\mathbb{N}$ with exactly FIVE elements, such that the sum of any three elements of $A$ is a prime number. I have no hint or insight about proving or disproving the above statement; I don't even know whether the statement is true or not. Can anyone guide me on how to approach this problem?
Consider the residues of the elements $\pmod 3$. Note that if we have $a_1 \equiv 0 \pmod 3, a_2 \equiv 1 \pmod 3, a_3 \equiv 2 \pmod 3$, then we are done - their sum is divisible by 3. So only two of the residues $\pmod 3$ can be present. But by the pigeonhole principle, that means that at least $\left\lceil\frac{5}{2}\right\rceil = 3$ of these elements have the same residue $\pmod 3$. Then their sum is divisible by $3$. Hence, no such set exists. (To be a little more precise, since the $a_i$'s are distinct, their sum must be at least $6$ if you take $0 \notin \mathbb{N}$. If you consider $0 \in \mathbb{N}$, then you could technically have $0,1,2$ all be elements to get this sum. In this case, we would then have $a_4 + 0 + 1$ and $a_4 + 0 + 2$ are both prime, which is a contradiction as one is even and strictly greater than $2$.)
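For a sanity check, here is a small exhaustive search (a sketch; it only scans subsets of $\{1,\dots,30\}$, so it illustrates rather than proves the result — the pigeonhole argument above is the actual proof):

```python
from itertools import combinations

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# exhaustive check over a small window; consistent with the mod-3 argument
hits = [s for s in combinations(range(1, 31), 5)
        if all(is_prime(a + b + c) for a, b, c in combinations(s, 3))]
print(hits)   # []
```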
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does the difference between squares of the $i^{th}$ even and odd integers increase by 4? I've been trying to find a formula for the sum of the first $n$ squares of odd numbers (a question from Spivak's calculus), and I was trying to subtract the square of the $i^{th}$ odd number from the square of the $i^{th}$ even number. I've noticed that the difference between the squares is (or seems to be) increasing by 4. This is what I'm seeing: \begin{align*} 2 ^ 2 - 1 ^ 2 = 3\\ 4 ^ 2 - 3 ^ 2 = 7 \\ 6 ^ 2 - 5 ^ 2 = 11 \\ 8 ^ 2 - 7 ^ 2 = 15 \\ 10 ^ 2 - 9 ^ 2 = 19 \\ 12 ^ 2 - 11 ^ 2 = 23 \end{align*} Every consecutive term in $3,7,11,15,19,23$ differs by 4. The fact that the difference of the squares for the $i^{th}$ even and odd integers increases by 4 seems completely arbitrary. Why 4? Is there something going on that I'm just missing? I don't know if it has something to do with Dirichlet's arithmetic progression for $4n + 3$. The pattern indicates this formula to me: $$ \sum_{k=1}^{n} (2k)^2 = \sum_{k=1}^{n}(2k-1)^2 + \sum_{k=0}^{n-1}(4k+3) $$ I believe this formula is true (though I haven't proved it yet); it will make it possible for me to use the closed form $$\sum_{k=1}^{2n}k^2 = \frac{n(2n+1)(4n+1)}{3}$$ to find the closed form for the sum of odd squares. But the question is: why is the difference between the squares of the $n^{th}$ even and odd integers equal to $4(n-1)+3$?
Choose an even number and write as $2k$. Then the first two differences between even/odd squares are: $4k^2 - (2k - 1)^2 = 4k^2 - ( 4k^2 - 4k +1) = 4k - 1$ $ (2k+2)^2 - (2k+1)^2 = 4k^2 +8k + 4 - (4k^2 +4k + 1) = 4k + 3 $ and clearly $4k+3 - (4k-1) = 4$
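The conjectured identity from the question (and the closed form used there) can be machine-checked for many values of $n$; a quick sketch:

```python
# check the conjectured identity and the closed form for sums of squares
for n in range(1, 200):
    even = sum((2 * k) ** 2 for k in range(1, n + 1))
    odd = sum((2 * k - 1) ** 2 for k in range(1, n + 1))
    arith = sum(4 * k + 3 for k in range(n))          # 3 + 7 + 11 + ...
    assert even == odd + arith                        # the conjectured formula
    assert sum(k * k for k in range(1, 2 * n + 1)) == n * (2 * n + 1) * (4 * n + 1) // 3
print("both identities hold for n = 1..199")
```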
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a geometric method to show $\sin x \sim x- \frac{x^3}{6}$ I found a geometric method to show $$\text{when}\; x\to 0 \space , \space \cos x\sim 1-\frac{x^2}{2}$$ like the one below: Suppose $R_{circle}=1 \to \overline{AB} =2$. In $\Delta AMB$ we have $$\overline{AM}^2=\overline{AB}\cdot\overline{AH} \tag{*}$$ and $$\overline{AH}=\overline{OA}-\overline{OH}=1-\cos x$$ When $x$ is very small, by $(*)$ we can conclude $$x^2 \approx 2(1-\cos x) \\\to \cos x \sim 1- \frac{x^2}{2}$$ Now I have two questions: $\bf{1}:$ Are there other ideas to prove (except Taylor series) $x\to 0 , \space \cos x\sim 1-\frac{x^2}{2}\\$ $\bf{2}:$ How can one show $\sin x \sim x- \frac{x^3}{6}$ with a geometric concept? Thanks in advance.
Not a complete answer, I am afraid. Approximating the arc $AB$ as a line segment leads to the correct approximation of $\cos(x) \sim 1 - \frac{x^2}{2}$ but leads to the inaccurate result for the corresponding sine as $ \sin(x) \sim x - \frac{x^3}{8}$. Nevertheless, I post my approach, since it has not been covered in the preceding answers. In the above diagram, we have $OA=OB=1$ and $\angle BOA = x$ Let the co-ordinates of point $B$ be $(h,k)$ Since $B$ lies on the unit circle, $$h^2 + k^2 =1\tag{1}$$ The equation of the line $AB$ is given as $y^*=m(x^*-1)$ and since B lies on the line $AB$, we have $$ k=m(h-1\tag{2})$$ where $m$ is the slope of $AB$. From $(1)$ and $(2)$, we have $$\Rightarrow h^2 + m^2(h-1)^2=1 $$ $$\Rightarrow h^2(1+m^2) - h(2m^2) + (m^2 -1)=0 $$ which gives two values of $h$, of which one value $h=1$ is for the point $A$ $$\Rightarrow h=\frac{m^2 -1}{m^2 +1}, \; k =\frac{-2m}{m^2 +1} \tag{3}$$ The length of the line segment $AB$ is assumed to be $x$, hence $$\Rightarrow (h-1)^2 + k^2 =x^2 \tag{4}$$ Using $(2)$ and $(4)$, we get $$ \Rightarrow (1+m^2)(h-1)^2 =x^2$$ $$\Rightarrow x=|h-1|\sqrt{1+m^2}=(1-h)\sqrt{1+m^2}$$ From $(3)$, we have $$ \Rightarrow x=\frac{2}{\sqrt{1+m^2}}$$ $$ \Rightarrow 1+m^2 = \frac{4}{x^2}$$ $$ \Rightarrow m = \sqrt{\frac{4-x^2}{x^2}} \tag{5}$$ Now, using $(5)$ and $(3)$, $$ h=OD=\frac{4-2x^2}{4}, \; k=DB= \frac{x\sqrt{4-x^2}}{2}$$ Thus $OD = \cos(x) \sim 1-\frac{x^2}{2}$ and $DB= \sin(x) \sim x(1-\frac{x^2}{4})^{1/2}$ This gives $\sin(x) \sim x - \frac{x^3}{8}$, which isn't quite correct.
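Numerically, the size of the discrepancy can be seen directly: the chord value $DB = x\sqrt{1-x^2/4} = x - \frac{x^3}{8}+O(x^5)$ differs from $\sin x = x-\frac{x^3}{6}+O(x^5)$ by about $-\frac{x^3}{24}$. A quick check:

```python
import math

# DB from the chord construction vs the true sine; the error ratio
# err / x^3 should approach -1/24 ~ -0.04167 as x -> 0
for x in [0.5, 0.1, 0.01]:
    chord = x * math.sqrt(1 - x * x / 4)
    err = math.sin(x) - chord
    print(x, err, err / x ** 3)
```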
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 6, "answer_id": 4 }
Finite symmetries for embeddings of genus $\geq 2$ surfaces in $\mathbb{R}^3$ Let $f : \Sigma \to \mathbb{R}^3$ be a genus $g \geq 2$ surface smoothly embedded in $\mathbb{R}^3$. Let $$ G(f) = \{ \phi \in \text{Isom}(\mathbb{R}^3) : \phi(f(\Sigma)) = f(\Sigma)\} $$ be the group of isometries of $\mathbb{R}^3$ that preserve $\Sigma$. Is the order of $G(f)$ always finite? If so is there a bound on the order of $G(f)$ (presumably in terms of $g$)?
Benson-Tilsen's paper "Isometry Groups of Compact Riemann Surfaces" seems to have part of the answer. Be careful, though; it only deals with orientation-preserving isometries ($G^+ \neq G$), so the numbers there are half of what they could be. And it doesn't deal with embedding. The important result, for the maximum possible size of the group for any surface with genus $g \geq 2$, is $$8(g+1) \leq \max|G^+| \leq 84(g-1)$$ (The max itself is a bound on the group's size, and this is a bound on the max.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Implicit differentiation for $x^2+y^2-x^2y^2=1$, where the slope of the curve must be $0$ $x^2+y^2+cx^2y^2=1$ 1- What happens to the curve when $c=-1$? Describe what appears on the screen. Can you prove it algebraically? 2- Find $y'$ by implicit differentiation. For the case $c=-1$, is your expression for $y'$ consistent with what you discovered in part (b)? I've solved the first question and found that the curve is two parallel lines ($y=1$, $y=-1$), and proved that algebraically. My proof is as follows; please correct me if I'm wrong: When $c=-1$, $x^2+y^2-x^2y^2=1$. Then $y^2-x^2y^2=1-x^2$. Then $y^2(1-x^2)=1-x^2$. Dividing both sides by $(1-x^2)$, $y^2=1$, so $y=\pm 1$ when $x\ne\pm1$. The problem is in the 2nd question. The derivative of that curve when $c=-1$ is $\left[ y'= \dfrac{y^2x-x}{y(1-x^2)}\right]$, but according to the solution of the first question the slope must be $0$. How can we make the two solutions consistent with each other?
When $c=-1$ then the curve reduces to $(y^2-1)(x^2-1)=0.$ That means either $y= 1$ or $y=-1$ or $x=1$ or $x=-1.$ So the graph consists of two vertical lines and two horizontal lines. When $y=1$ or $y=-1$ then the expression you've got for $y'$ becomes $0,$ as you'd expect given that those are two horizontal lines. When $x=1$ or $x=-1,$ then the denominator in the expression you've got for $y'$ becomes $0$ and the derivative is undefined, as you'd expect for two vertical lines.
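This can be verified symbolically (a sketch using SymPy; $y'=-F_x/F_y$ is standard implicit differentiation):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + y**2 - x**2 * y**2 - 1        # the curve with c = -1
yprime = -sp.diff(F, x) / sp.diff(F, y)  # implicit differentiation: y' = -F_x / F_y

print(sp.simplify(yprime - x * (y**2 - 1) / (y * (1 - x**2))))  # 0: matches the question
print(yprime.subs(y, 1), yprime.subs(y, -1))  # 0 0  (horizontal lines)
```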
{ "language": "en", "url": "https://math.stackexchange.com/questions/2389848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $x^2\geq 0$ an axiom? For real numbers, $x^2\geq 0$ is always true, but why actually? Is it an axiom, definition or is there a proof?
Any of the above, depending on exposition. There are two typical axiomatic approaches to ordered fields, which boil down to how to relate multiplication to the ordering. One version axiomatizes the ordering, including the requirement that if $0 \leq a$ and $0 \leq b$, then $0 \leq a \cdot b$. One can then use this, along with trichotomy, to show that $0 \leq x^2$ (using the fact that $x^2 = (-x)^2$). In the other version, one axiomatizes the positive numbers, including the requirement that $x^2$ is positive for every nonzero $x$.
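As an illustration of the first approach, here is a sketch (my phrasing, not the axioms of any particular text) of how trichotomy gives $0 \le x^2$:

```latex
\begin{align*}
&\textbf{Case } 0 \le x: && 0 \le x \cdot x = x^2
  && \text{(product of nonnegatives is nonnegative)}\\
&\textbf{Case } x < 0: && 0 \le -x \ \Longrightarrow\ 0 \le (-x)\cdot(-x) = x^2
  && \text{(sign rule: } (-x)(-x) = x^2\text{)}
\end{align*}
```

By trichotomy these two cases are exhaustive, so $0 \le x^2$ for every real $x$.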
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given two stochastic processes on a probability space, will their compound process be a valid stochastic process on the same probability space? Let the stochastic process $M=(M_t, t\ge 0)$ and the stochastic pathwise continuous increasing process $Y=(Y_t,t\ge 0)$ be defined on the probability space $(\Omega, \mathcal F, P)$. Will the compound process $M_Y=(M_{Y_t},t\ge 0)$ also be valid (measurable on the same sigma algebra from which $M$ and $Y$ map) on this probability space? If it is not valid in general, what if $M$ and $Y$ are independent of each other? Will it then be 'valid'? Clarification: Does $\{\omega\in\Omega\colon M(\omega)\in B\}\in\mathcal{F}$, $\forall B\in \mathcal{B}(\mathbb{R})$ and $\{\omega\in\Omega\colon Y(\omega)\in B\}\in\mathcal{F}$, $\forall B\in \mathcal{B}(\mathbb{R})$ $\implies$ $\{\omega\in\Omega\colon M_Y(\omega)\in B\}\in\mathcal{F}$, $\forall B\in \mathcal{B}(\mathbb{R})$ hold? $\mathcal{B}(\mathbb{R})$ being the generated Borel $\sigma$-algebra.
Your "compound" process $M_Y$ at a fixed $t\ge 0$ is the composition of two mappings $\psi_t(\omega):=(\omega,Y_t(\omega))$ and $\varphi(\omega,u):=M_u(\omega)$. The former is an $\mathcal F / \mathcal F\otimes\mathcal B$ measurable mapping of $\Omega$ to $\Omega\times[0,\infty)$. (Where $\mathcal B$ denotes the Borel subsets of $[0,\infty)$.) This is because $\mathcal F\otimes\mathcal B$ is generated by rectangles of the form $F\times B$, $F\in\mathcal F, B\in\mathcal B$, and $\psi_t^{-1}(F\times B)=F\cap Y_t^{-1}(B)$. To finish you need to know that the latter mapping $\varphi$ is $\mathcal F\otimes\mathcal B / \mathcal R$ measurable. (Here I use $\mathcal R$ to denote the Borel subsets of $\Bbb R$.) This situation is referred to as $M$ being a "measurable process" and is an additional hypothesis. For example, if each random variable $M_u$ is $\mathcal F$ measurable and $u\mapsto M_u(\omega)$ is right continuous for each $\omega$, then $M$ is a measurable process. If both $\psi_t$ and $\varphi$ are measurable as indicated, then the composite function $M_{Y_t}=\varphi\circ\psi_t$ is $\mathcal F/\mathcal R$ measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why a definite integral is not infinite If I consider the definite integral of $f(x)=1$ over $[-1,1]$ in $\mathbb{R}$, why is the integral equal to $2$ even though there are infinitely many points between $-1$ and $1$? Thanks
Because the definite integral is calculating the area between the curve and the horizontal axis. It isn't just counting points. In the case of $f(x)=1$ on $[-1,1]$, the region under the curve is a rectangle with length $2$ and height $1$. This gives an area of $2$.
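A Riemann-sum sketch makes the same point: no matter how many sample points are used, the total area is always $2$:

```python
# Riemann sums for f(x) = 1 on [-1, 1]: refining the partition changes the
# number of sample points, but the total area n * (2/n) stays equal to 2
for n in [10, 100, 1000]:
    dx = 2 / n
    total = sum(1 * dx for _ in range(n))   # f(x) = 1 at every sample point
    print(n, total)
```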
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to tell the number of solutions to a system of simultaneous equations? How many solutions are there to the following simultaneous equations?: $$ \begin{align} x - 2y + 3z = 1\\ 2x + 2y - z = 4\\ 4x - y + 5z = 6 \end{align} $$ How can I determine how many solutions there are? EDIT: I have found z = 0, y = 2/7, x = 11/7 as solutions but the answers I have say that there are infinite solutions. How can one deduce this? Please note that I am a high school student and am unable to understand advanced mathematics.
Find the augmented matrix corresponding to the system of equations, then, using Gaussian elimination, manipulate it into row echelon form. This will give you information about the number of solutions.
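As a complement, the same check can be carried out numerically (a sketch using NumPy; comparing the rank of the coefficient matrix with the rank of the augmented matrix is the usual criterion read off from the row echelon form):

```python
import numpy as np

A = np.array([[1., -2.,  3.],
              [2.,  2., -1.],
              [4., -1.,  5.]])
b = np.array([1., 4., 6.])

r = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
if r < r_aug:
    print('no solution')
elif r < A.shape[1]:
    print('infinitely many solutions')
else:
    print('unique solution:', np.linalg.solve(A, b))
```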
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Curvature of a curve is independent of the parametrization My professor defined the curvature $\kappa$ of a regular curve $\gamma:[a,b]\to\mathbb{R}^n$ at a point $t\in(a,b)$ to be $$\kappa(t) = \frac{\left|\gamma''(t)\right|}{\left|\gamma'(t)\right|^2}$$ but there was not enough time to prove that this definition is independent of the parametrization, and I am trying to show that. Let $\phi:[c,d]\to[a,b]$ be a (smooth) diffeomorphism. For readability reasons, we will denote $\gamma\circ\phi(s)$ by $\gamma$ and $\phi(s)$ by $\phi$. Then $$\begin{align*} \kappa^2(s) = \frac{\left|(\gamma\circ\phi)''(s)\right|^2}{\left|(\gamma\circ\phi)'(s)\right|^4} = \frac{|(\gamma'\phi')'|^2}{|\gamma'\phi'|^4} = \frac{|\gamma''\phi'^2+ \gamma'\phi''|^2}{|\gamma'\phi'|^4} &= \frac{\langle\gamma''\phi'^2+ \gamma'\phi'', \gamma''\phi'^2+ \gamma'\phi''\rangle}{|\gamma'\phi'|^4} \\&= \frac{|\gamma''|^2}{|\gamma'|^4} + \phi''\,\frac{\phi''|\gamma'|^2+ 2\phi'^2\langle\gamma',\gamma''\rangle}{|\gamma'\phi'|^4}. \end{align*}$$ By this calculation, if $\gamma$ is reparametrized by arc length, then $\phi'' = 0$, and we have our result. But can I ensure that the expression on the right is 0 for any diffeo $\phi$?
This expression is correct for any constant-speed parametrization, which are all related by affine diffeomorphisms (i.e. $\phi'' = 0$); so your calculation shows invariance within this restricted class. For formulae in a general coordinate see e.g. wikipedia.
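A concrete check (a sketch using SymPy): for a circle of radius $2$, the professor's formula gives $\kappa=\frac12$ in a constant-speed parametrization, but a different value after the non-affine reparametrization $t=s^2$ — so the restriction to constant-speed parametrizations is genuinely needed:

```python
import sympy as sp

s = sp.symbols('s', positive=True)

def kappa(gamma, t):
    """The professor's formula |gamma''| / |gamma'|^2."""
    g1 = [sp.diff(c, t) for c in gamma]
    g2 = [sp.diff(c, t, 2) for c in gamma]
    n1 = sp.sqrt(sum(c**2 for c in g1))
    n2 = sp.sqrt(sum(c**2 for c in g2))
    return sp.simplify(n2 / n1**2)

const_speed = [2*sp.cos(s), 2*sp.sin(s)]      # circle of radius 2, |gamma'| = 2
reparam = [2*sp.cos(s**2), 2*sp.sin(s**2)]    # same circle, t = s^2

print(kappa(const_speed, s))          # 1/2, the true curvature
print(kappa(reparam, s).subs(s, 1))   # sqrt(5)/4, not 1/2
```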
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Matrix with spectral radius greater or equal 1 such that fixed point iteration converges Let $M \in \mathbb{R}^{n \times n}$. In a lecture on numerical linear algebra we had a theorem which states that the iteration $\phi(x) = Mx+b$ converges for all $b \in \mathbb{C}^n$ and initial values $x_0 \in \mathbb{C}^n$ if and only if $\rho(M) < 1$ where $\rho$ is the spectral radius. So the question "When does $x_{k+1} = Mx_k + b$ converge for all $b \in \mathbb{C}^n$ and $x_0 \in \mathbb{C}^n$?" has the answer "if and only if $\rho(M) < 1$". My question is now "When does $x_{k+1} = Mx_k + b$ converge for all $b \in \mathbb{R}^n$ and $x_0 \in \mathbb{R}^n$?" Of course it is still sufficient that $\rho(M)<1$ but is it also necessary? If yes, I would appreciate a hint on how to prove this. EDIT: Proof of theorem mentioned above: Let $\rho(M) < 1$. Then there is some matrix norm $||\cdot ||$ such that $||M|| < 1$. Apply Banach fixed point theorem on $\phi$. Now, suppose that $\rho(M) \geq 1$. Then pick an eigenvalue $\lambda$ such that $|\lambda|\geq 1$ and let $v$ be some eigenvector to $\lambda$ (it may be complex). Then pick $b = x_0 = v$. It is easy to see, that this fixed point iteration does not converge.
If $v$ is an eigenvector for $\lambda$, then $\text{Re}(M^n v) = M^n \text{Re}(v)$ and $\text{Im}(M^n v) = M^n \text{Im}(v)$. If $\|M^n v\| > N$, then at least one of these has norm $> N/2$. Thus for $b = 0$ and at least one of $x_0 = \text{Re}(v)$ and $x_0 = \text{Im}(v)$, the iteration doesn't converge.
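A concrete instance of this argument (a sketch): take $M$ a rotation by $90°$, so $\rho(M)=1$ with eigenvalues $\pm i$; for $b=0$ and $x_0=\operatorname{Re}(v)=(1,0)$ the iterates cycle forever:

```python
import numpy as np

# M: orthogonal rotation by 90 degrees, spectral radius exactly 1,
# eigenvalues +/- i with complex eigenvectors
M = np.array([[0., -1.], [1., 0.]])
x = np.array([1., 0.])     # x0 = Re(v) for the eigenvector v = (1, i)
orbit = []
for _ in range(8):
    orbit.append(x.copy())
    x = M @ x              # b = 0
print(orbit)               # (1,0) -> (0,1) -> (-1,0) -> (0,-1) -> (1,0) -> ...
```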
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the derivative of $y=x^{\sin x}$ Could someone please explain step 3 for the following: Why do they multiply $1/y$ with $y'$? I understand that the derivative of $\ln y$ is $1/y$, but I don't understand why it is multiplied with $y'$ in step 3. Find the derivative for $y=x^{\sin x}$ Step 1: $\ln y=\ln x^{\sin x}$ Step 2: $\ln y=\sin x\ln x$ Step 3: $\frac{y'}y=\cos x\ln x+\frac{\sin x}x$ Step 4: $y'=y\left[\cos x\ln x+\frac{\sin x}x\right]=x^{\sin x} \left[\cos x\ln x+\frac{\sin x}x\right]$
The derivative of $\ln y$ with respect to $y$ is certainly $1/y$, but in this case the derivative needs to be taken with respect to x, so by the chain rule, the derivative of $\ln y$ with respect to $x$ is $y'/y$.
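The final formula can be verified symbolically (a sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x ** sp.sin(x)
lhs = sp.diff(y, x)
rhs = y * (sp.cos(x) * sp.ln(x) + sp.sin(x) / x)   # step 4 of the solution
print(sp.simplify(lhs - rhs))   # 0
```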
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Scaling a set of reals to be nearly integers I have a set $P$ of $n$ positive real numbers, for example: $$ P = \{ \pi, e, \sqrt{2} \} \approx \{3.14159, 2.71828, 1.41421\} \;. $$ Given some $\epsilon > 0$, I would like to find the smallest scale factor $s \ge 1$ so that, for each $x \in P$, $s x$ is within $\epsilon$ of an integer. More precisely, if $[z]$ is $z$ rounded to the nearest integer, then $| sx - [sx] | < \epsilon$. For example, if $\epsilon = 0.08$, then $s=7.018$ works for the above $P$: $$ 7.018 \, P \approx \{22.0477, 19.0769, 9.92495\} \;, $$ and the gaps to the nearest integers are $$ \{0.0477, 0.0769, 0.0750\} \;. $$ But I don't know that $7.018$ is the minimum. Q. What is a general procedure to compute the smallest $s$, given $P$ and $\epsilon$?
Such $s$ leads to good rational approximations for the quotients of the components (here, $\frac \pi e\approx\frac{22}{19}$, $\frac\pi{\sqrt 2}\approx\frac{22}{10}$, and $\frac e{\sqrt 2}\approx\frac{19}{10}$). Generalizing, we look for rational approximations $\alpha\approx \frac nm$ where more precisely $\frac{n-\epsilon}{m+\epsilon}<\alpha<\frac{n+\epsilon}{m-\epsilon}$. In other words, $\alpha=\frac{n+t\epsilon}{m-t\epsilon}$ with $-1<t<1$. Solving for $t$, we get $$\tag1t=t(n,m)=\frac{\alpha m-n}{(1+\alpha)\epsilon} $$ and thus are looking for a rational approximation $\frac nm$ with $$\left|\alpha-\frac nm\right|<\frac{(1+\alpha)\epsilon}{m}$$ We see that we really want to rationally approximate our number $\alpha$ (in fact several numbers simultaneously) with a relative error $\sim\epsilon$. The trial method described in Hurkyl's answer is certainly simple and good enough for many applications - as long as $\epsilon$ is not too small and the quotients are not "too irrational". In more complex cases one may save time by going more systematically through the good enough rational approximations. Playing around with Farey fractions may be of good help there. Example computation: $\frac{\pi}{\sqrt 2}$ is between $\frac 21$ and $\frac 31$. Using $(1)$, we find $t(2,1)\approx 0.86$, which is good enough (whereas the multiple $t(4,2)\approx 1.7$ is not); but $t(3,1)\approx -3$ is bad. Next we try the Farey sum $\frac{2+3}{1+1}=\frac 52$: $t(5,2)\approx -2.2$ is bad. We try the next Farey sum (noting that $\frac 52$ replaces $\frac 31$ because their $t$-values are both negative) $\frac{2+5}{1+2}=\frac 73$. As $t(7,3)\approx -1.3$, we continue with $\frac{2+7}{1+3}=\frac 94$. As $t(9,4)\approx -0.44$, we have found another candidate (in fact two candidates, as $t(18,8)\approx -0.88$ is good as well). Now we have to test both $\frac{2+9}{1+4}=\frac {11}5$ and $\frac{9+7}{4+3}=\frac{16}{7}$. 
We find that $t(11,5)\approx 0.42$ and $t(22,10)\approx 0.83$, whereas $t(16,7)\approx -1.7$. So far we have found the first few terms of a sequence of candidate fractions, or perhaps rather proportions (because we want to distinguish forms that are not in shortest terms), roughly in ascending order of either part $$2:1, 9:4, 18:8, 11:5, 22:10, \ldots $$ We can concurrently generate the corresponding sequences for $\frac\pi e$ and $\frac e{\sqrt 2}$ and readily find the smallest "match", leading to $\pi:e:\sqrt 2\approx 22:19:10$. From here, it is straightforward to find the smallest $s$.
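For comparison, the simple trial method mentioned above can be sketched as a grid search (illustrative only: the step size limits both the accuracy and any guarantee of minimality):

```python
import math

def smallest_scale(P, eps, s_max=20.0, step=1e-4):
    """Grid-search sketch of the trial method: return the first s >= 1
    (to within `step`) putting every s*x within eps of an integer."""
    n_steps = int((s_max - 1.0) / step)
    for i in range(n_steps + 1):
        s = 1.0 + i * step
        if all(abs(s * x - round(s * x)) < eps for x in P):
            return s
    return None

P = [math.pi, math.e, math.sqrt(2)]
s = smallest_scale(P, 0.08)
print(s)   # a valid scale no larger than the 7.018 found by hand
```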
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Sum of unitary operators converges to projection operator So let $\mathcal{H}$ be a Hilbert space and $U$ a unitary operator on $\mathcal{H}$. Let $I=\{v\in\mathcal{H}:U(v)=v\}$. Show that $\frac{1}{N}\sum_{n=1}^NU^n(v)\rightarrow Pv$, where $P:\mathcal{H}\rightarrow\mathcal{H}$ is the projection operator onto $I$. I don't know how to proceed with this problem. Can someone suggest hints as to how to do this?
If $v \in I$ the result is trivial. Let $v \in I^{\perp}$. The restriction of $Id - U$ to $I^{\perp}$ is bijective. (Indeed it is injective: if $x \in I^{\perp}$ and $(Id-U)(x) = 0$, then $x \in I$, so $x = 0$. It is also surjective: if $z \in \operatorname{Im}(Id-U)$ then there is $y \in \mathcal H$ such that $z = (Id -U)(y)$; since $(Id-U)(P(y)) = 0$, setting $x = (Id-P)(y) \in I^\perp$ gives $(Id-U)(x) = z$.) Let $v_N = \frac{1}{N}\sum_{n=1}^{N}U^n(v) \in I^\perp$ and $w_N = (Id-U)(v_N)$; then $w_N = \frac{1}{N}\sum_{n=1}^{N}\left(U^n(v) - U^{n+1}(v)\right) = \frac{1}{N}(U(v) - U^{N+1}(v)) \rightarrow 0$ because $||U|| \le 1$. Setting $V$ to be the inverse of the restriction of $Id - U$ to $I^\perp$, we have $v_N = V(w_N) \rightarrow 0$. So the result also holds for $v \in I^{\perp}$. Using $\mathcal H = I \oplus I ^{\perp}$ and the linearity of $P$, we have the result for all $v$.
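A finite-dimensional illustration of the statement (a sketch; $U$ is a rotation in the $xy$-plane and the identity on the $z$-axis, so $I=\operatorname{span}\{e_3\}$ and $Pv$ keeps only the $z$-component):

```python
import numpy as np

theta = 1.0
# U: orthogonal (hence unitary on R^3); rotation in the xy-plane, identity on z
U = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
v = np.array([1.0, -2.0, 0.5])

N = 100_000
acc, w = np.zeros(3), v.copy()
for _ in range(N):
    w = U @ w          # w = U^n v
    acc += w
avg = acc / N
print(avg)             # approaches Pv = (0, 0, 0.5)
```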
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving $c A + I = (\det A) S$ I have a simple matrix equation $$c A + I = (\det A) S$$ which seems linear (or perhaps quadratic) and involves the determinant of the matrix being solved for. Here, $c$ is a constant, and $S$ is a matrix. How could I solve for $A$, a symmetric matrix in $\mathbb{R}^{k\times k}$? An extremely efficient way of computing $A$ would also be fine. If it helps solve the problem, then know that $S$ corresponds to a (full-rank) covariance matrix $\frac{2}{n}X X^{\intercal}$, and $c$ corresponds to the norm of a mean-vector $\left\Vert \frac{1}{n}\overline{x}\right\Vert_2 = \frac{1}{n^2}\overline{x}^{\intercal}\overline{x}$. If there is no simple solution, a solution for the $k=2$ case is all I really need.
I'm not sure if this will help or not. So, let us do the case $k=2$. Let $A$ be a solution and let $Q$ be such that $QAQ^{-1}$ is $A$'s Jordan form. Multiplying the equation we get something of the form $$\begin{pmatrix} c\lambda_1+1 & 0\\ 0 & c\lambda_2+1 \end{pmatrix} = \lambda_1 \lambda_2 QSQ^{-1} = \lambda_1 \lambda_2 \begin{pmatrix} s_1 & 0\\ 0 & s_2 \end{pmatrix}$$ which can easily be solved for the eigenvalues of $A$. Notice that this works since symmetric matrices are diagonalizable by an orthogonal matrix. Also, both $A$ and $S$ get into Jordan form under the same base change. Thus, to find a solution, diagonalize $S$, calculate the eigenvalues of $A$ and change the basis back: $$A = Q^{-1}\begin{pmatrix} \lambda_1 & 0\\ 0 & \lambda_2 \end{pmatrix}Q.$$ Edit: To address the issue of finding eigenvalues in higher dimensions. Since we have $c\lambda_i+1 = s_i\prod\lambda_j$, we can see that $\frac{c\lambda_i+1}{s_i}$ is constant; let us denote it by $d$. So, we have $c\lambda_i+1 =s_id\implies \lambda_i =\frac{ds_i-1}{c}$. Finally, to find $d$, you need to solve $\prod\frac{ds_i-1}{c} = d$, which can be done for $k\leq 4$ by the Abel-Ruffini theorem; otherwise you would have to do it numerically.
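A numerical sketch of this recipe for $k=2$ (the matrix $S$ and the constant $c$ here are illustrative; for an SPD $S$ the quadratic for $d$ always has real roots, since its discriminant is $(s_1+s_2+c^2)^2-4s_1s_2>0$):

```python
import numpy as np

c = 0.5
S = np.array([[2.0, 0.3],
              [0.3, 1.0]])          # an illustrative SPD matrix

s, Q = np.linalg.eigh(S)            # S = Q @ diag(s) @ Q.T
s1, s2 = s
# d solves (d*s1 - 1)(d*s2 - 1) = c^2 * d,
# i.e. s1*s2*d^2 - (s1 + s2 + c^2)*d + 1 = 0
a, b = s1 * s2, s1 + s2 + c ** 2
d = (b + np.sqrt(b ** 2 - 4 * a)) / (2 * a)

lam = (d * s - 1.0) / c             # eigenvalues of A
A = Q @ np.diag(lam) @ Q.T          # change the basis back

print(np.allclose(c * A + np.eye(2), np.linalg.det(A) * S))   # True
```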
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Finding a solution to a system Let $i$ be of the form $i=2^a3^b5^c$, where $a,b,c\ge 0$ are integers. Consider numbers $x_{i,4},x_{i,6}$ where $x_{i,6}$ is defined when $2$ or $3$ divides $i$, $x_{i,4}$ is defined only when $2$ divides $i$. The constraints are * *If $2$ divides $i$, then $$x_{i, 4} + x_{i,6} \le \frac{\frac{1}{2}\frac{2}{3}\frac{4}{5}}{i}$$ *If $3$ divides $i$ but $2$ doesn't divide $i$, then $$x_{i,6} \le \frac{\frac{1}{2}\frac{2}{3}\frac{4}{5}}{i}$$ *$$\sum x_{i,4} = \sum x_{i,6} = \frac{1}{6}.$$ I know that this system has a solution, but the technique doesn't work when I include more $x_{i,j}$'s. I've never solved systems of this type; what's a good way to approach this problem?
Actually this system has no solution. First, $x_2,y_2,x_3,y_3,x_5,y_5 \in \mathbb{R}_{\geq 0}$, and $x_2+y_2=1 = \sum \limits_{k=1}^{\infty} \frac{1}{2^k}$, and $x_3+y_3 = \frac{1}{2} = \sum \limits_{k=1}^{\infty} \frac{1}{3^k}$, and $x_5 +y_5 = \frac{1}{4} = \sum \limits_{k=1}^{\infty} \frac{1}{5^k}$. So $\sum x_{i,4} = \frac{1}{2}\frac{2}{3}\frac{4}{5}* \frac{1}{2} * ((x_2+1)(x_3+1)(x_5+1)-(x_3+1)(x_5+1))$, because if $2\not|i$ then $x_{i,4}$ is not defined. And $\sum x_{i,6} = \frac{1}{2}\frac{2}{3}\frac{4}{5}* \frac{1}{3} * ((y_2+1)(y_3+1)(y_5+1)-(y_5+1))$, because if $2\not|i$ and $3\not|i$ then $x_{i,6}$ is not defined. Solving this system of equations using Wolfram|Alpha (the simplex method also works) yields that there is no solution for your system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2390917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Matrix chain rule question: what is $\frac{d}{dX} f(S)$ where $S = (A+X)^{-1}$ I'm trying to find the following derivative: $$\frac{d}{dX} f(S)$$ where $f$ is a function that takes a matrix and returns a scalar, and $S=(A+X)^{-1}$. Assume that we know what $\frac{d}{dS} f(S)$ is (for example, if $f(S)=\exp(u^\intercal S u)$, we'd have $\frac{d}{dS} f(S)=u u^\intercal f(S)$). I want to employ something like a matrix chain rule that looks like $$\frac{d}{dX} f(S) = \left(\frac{dS}{dX}\right) \left(\frac{d}{dS} f(S)\right)$$ but the problem is that the $\left(\frac{dS}{dX}\right)$ doesn't seem to make sense, and I don't know how to do a matrix-by-matrix derivative. If it helps, assume that all matrices are symmetric and PSD.
We know how to calculate the gradient with respect to $S$ $$G=\frac{\partial f}{\partial S}$$ We also know that $$\eqalign{ X &= S^{-1} - A\cr dX &= -S^{-1}\,dS\,S^{-1} &\implies dS = -S\,dX\,S \cr }$$ Let's use this to write the differential of the function, and then perform a change of variables to find a result in terms of $X$ $$\eqalign{ df &= G:dS \cr &= -G:S\,dX\,S \cr &= -S^TGS^T:dX \cr &= -S^T\,\frac{\partial f}{\partial S}\,S^T:dX \cr \cr \frac{\partial f}{\partial X} &= -S^T\,\frac{\partial f}{\partial S}\,S^T \cr }$$ where colon denotes the inner/Frobenius product, i.e. $$A:B={\rm tr}(A^TB)$$ and the cyclic properties of the trace give rise to some rules for rearranging the product, i.e. $$\eqalign{ A:BC &= AC^T:B \cr A:BC &= B^TA:C \cr A:BC &= BC:A \cr }$$ As you've discovered, the chain rule can be difficult to apply to matrix problems when the intermediate quantities, i.e. matrix-by-matrix or vector-by-matrix derivatives, are higher-order tensors. The virtue of the differential approach is that the differential of a matrix behaves like an ordinary matrix.
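The resulting formula can be checked against finite differences (a sketch; the matrices, the vector $u$, and the step size below are illustrative):

```python
import numpy as np

A = np.array([[2.0, 0.1, 0.0],
              [0.0, 2.0, 0.1],
              [0.1, 0.0, 2.0]])
X = np.array([[0.2, -0.1, 0.0],
              [0.1, 0.3, 0.05],
              [0.0, 0.1, 0.1]])
u = np.array([0.3, -0.2, 0.1])
n = 3

def f(X):
    S = np.linalg.inv(A + X)
    return np.exp(u @ S @ u)

S = np.linalg.inv(A + X)
G = np.outer(u, u) * f(X)            # df/dS for f = exp(u^T S u)
analytic = -S.T @ G @ S.T            # df/dX = -S^T (df/dS) S^T

# central finite differences, entry by entry
h = 1e-6
numeric = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = h
        numeric[i, j] = (f(X + E) - f(X - E)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))   # small: only finite-difference noise
```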
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is a graph which is a combination of two graphs connected Let $G$ and $H$ be simple graphs, we build the graph $G \times H$ such that every vertex $(u,v)$ in $G \times H$ is an ordered pair of one vertex from $G$ (the first) and another from $H$ (the latter). Additionally, two vertices $(u_1,v_1),(u_2,v_2)$ are connected if and only if $u_1$ and $u_2$ are connected in $G$, and $v_1$ and $v_2$ are connected in $H$. I need to prove that $G \times H$ is connected if and only if both graphs are connected, and at least one isn't bi-partite. Any hints / suggestions ?? Thanks in advance :D P.s sorry for the formatting, I'll correct the mathematical notation as soon as I get home
Note that $(u_1,v_1)$ and $(u_1,v_2)$ are not connected by an edge in $G\times H$. The requirement that the graphs $G$ and $H$ are connected is fairly straightforward. If say G can be decomposed into two graphs $G_1$ and $G_2$ which are not connected, then clearly there will also be no connections between $G_1\times H$ and $G_2\times H$ The requirement that at least one of the graphs is not bipartite is equivalent to requiring an odd cycle. Say such a cycle exists in graph $H$ as $h_1,h_2,...h_k$. This then allows the indirect connection of the images of any two connected points in $G$, say $g_1$ and $g_2$, through the existence of the edges $(g_1, h_1)$ $\to$ $(g_2, h_2)$ $\to$ $(g_1, h_3)$ $\to$ $\ldots$ $\to$ $(g_1, h_k)$ $\to$ $(g_2, h_1)$ $\to$ $(g_1, h_2)$ $\to$ $\ldots$ $\to$ $(g_2, h_k)$. Any two points are then connected by following a path to such a cycle. Without such a cycle the graph divides on parity distance from some reference point.
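Both directions can be illustrated computationally (a self-contained sketch; $P_3\times C_4$, with both factors bipartite, is disconnected, while $P_3\times C_3$, where $C_3$ is an odd cycle, is connected):

```python
from itertools import product
from collections import deque

def tensor_product(G, H):
    """Graphs as adjacency dicts; (u1,v1)~(u2,v2) iff u1~u2 in G and v1~v2 in H."""
    adj = {(u, v): set() for u, v in product(G, H)}
    for u1 in G:
        for u2 in G[u1]:
            for v1 in H:
                for v2 in H[v1]:
                    adj[(u1, v1)].add((u2, v2))
    return adj

def is_connected(adj):
    start = next(iter(adj))
    seen, q = {start}, deque([start])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                q.append(y)
    return len(seen) == len(adj)

path3 = {0: {1}, 1: {0, 2}, 2: {1}}                    # bipartite
cycle3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}             # odd cycle: not bipartite
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # bipartite

print(is_connected(tensor_product(path3, cycle3)))  # True
print(is_connected(tensor_product(path3, cycle4)))  # False
```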
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Normal approximations: What is the probability that player A will have at least 10 points more than player B? Two players $A$ and $B$ play the following game. Player $A$ uses a fair 8-sided die with the numbers $1, \dots, 8$ and rolls it to earn points. For each roll player $A$ collects a number of points corresponding to the number shown on the die. Player $B$ uses a fair coin and flips it to earn points. If the outcome is "heads", player $B$ collects $1$ point; if "tails", player $B$ collects $8$ points. The game is as follows: player $A$ rolls the die $n=50$ times and player $B$ flips the coin $n=50$ times, after which player $A$ has collected $Y_A$ points and player $B$ has collected $Y_B$ points. After $n=50$ trials: a) What is the probability that player $A$ will have at least $10$ points more than player $B$? Use normal approximations. b) What is the probability that player $A$ and player $B$ together will have more than $340$ points? Use normal approximations. c) What is the probability that player $B$ will have exactly $225$ points? I have calculated the means/expected values per trial to be mean($Y_A$) = mean($Y_B$) = 4.5, and the variances per trial to be Var($Y_A$) = 5.25 and Var($Y_B$) = 12.25. a) Following @callculus' answer: in a table I found $\Phi(0.32)$ to be $0.6255$. So $1 - 0.6255 = 0.3745$; that is, the probability that player A has at least 10 more points than player B is about $37\%$. b) Following the method from a): $$P(Y_A + Y_B > 340) = 1- P(Y_A + Y_B \le 340) = 1 - \Phi\left(\frac{340+0.5-450}{\sqrt{875}}\right) \\= 1 - \Phi(-3.7) \approx 0.9999 \approx 100\%$$ so $P(Y_A+Y_B > 340)$ is almost $100\%$. c) Using the binomial PMF with $50$ trials and $25$ successes: $$\frac{50!}{25!\,(50-25)!} \cdot 0.5^{25} \cdot (1-0.5)^{50-25} = 0.112275173,$$ so $P(Y_B = 225) \approx 11\%$.
To start I give you some hints: a) First, the inequality $Y_A \ge Y_B + 10$ can be transformed to $Y_A-Y_B\ge 10$. Then, using the complementary probability: $P(Y_A-Y_B\ge 10)=1-P(Y_A-Y_B\leq 9)$ The expected values of the random variables are $E(Y_A)=E(Y_B)=50\cdot 4.5$. The variances of the random variables are $Var(Y_A)=50\cdot 5.25, Var(Y_B)=50\cdot 12.25$ Let $Y_D=Y_A-Y_B$. Then $E(Y_D)=50\cdot 4.5-50\cdot 4.5=0$. And $Var(Y_D)=Var(Y_A)+Var(Y_B)=50\cdot 17.5=875$ With the help of the central limit theorem we get $$P(Y_D\geq 10)=1-P(Y_D\leq 9)\approx 1-\Phi\left(\frac{9+0.5-0}{\sqrt{875}}\right)$$ Here $+0.5$ is the continuity correction factor. b) $E(Y_A+Y_B)=E(Y_A)+E(Y_B), Var(Y_A+Y_B)=Var(Y_A)+Var(Y_B)$ c) Here you have to work out which combinations of coin flips give exactly $225$ points. If I'm right, the only combination that gives $225$ points is flipping $25$ heads and $25$ tails. Can you calculate the probability using the binomial distribution?
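Following these hints, here is a quick numerical check in plain Python (a sketch: the normal CDF is built from `math.erf`, and the final figures are my own computation rather than quoted from the post):

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

var_D = 50 * 5.25 + 50 * 12.25                 # Var(Y_A - Y_B) = 875

p_a = 1 - Phi((9 + 0.5 - 0) / math.sqrt(var_D))
print(round(p_a, 4))                           # ~0.374, matching the ~37%

p_b = 1 - Phi((340 + 0.5 - 450) / math.sqrt(var_D))
print(round(p_b, 5))                           # ~0.99989, essentially 100%

p_c = math.comb(50, 25) * 0.5 ** 50            # exactly 25 heads, 25 tails
print(round(p_c, 4))                           # ~0.1123
```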
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding the remainder of $N= 10^{10}+10^{100}+10^{1000}+\cdots+10^{10000000}$ divided by $7$ $$N= 10^{10}+10^{100}+10^{1000}+\cdots+10^{10000000}.$$ What is the remainder when $N$ is divided by $7$? The solution starts by reducing modulo $7$ (using $10 \equiv 3 \pmod 7$): $$\text{Rem}\left[(10^{10}+10^{100}+10^{1000}+\cdots+10^{10000000})/7\right] = \text{Rem}\left[(3^{10}+3^{100}+\cdots+3^{10000000})/7\right]$$ I did not understand the solution from this point on. The next steps are given as $\text{Rem}[3\cdot 3^9+3\cdot3^{99}+\cdots+3\cdot3^{9999999}]/7$ and then $\text{Rem}[\underbrace{(-3)+(-3)+\cdots+(-3)}_{7~\text{times}}]/7$
Note that $3^9$ is a power of $3^3 = 27 = 28 - 1$, so $27 \equiv -1 \pmod 7$. Hence $3^9 = 27^3 \equiv (-1)^3 = -1 \pmod 7$, and likewise $3^{99} = 27^{33} \equiv -1$, $3^{999} = 27^{333} \equiv -1$, and so on: every exponent $9, 99, \ldots, 9999999$ is $3$ times an odd number. Multiplying each term by the leftover factor of $3$ then gives $3\cdot(-1) = -3$ in every step, which is where the $\underbrace{(-3)+\cdots+(-3)}_{7~\text{times}}$ comes from. The sum is $-21 \equiv 0 \pmod 7$, so the remainder is $0$.
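A quick sanity check of this reasoning with Python's three-argument `pow` (note $-3 \equiv 4 \pmod 7$, so each term shows up as $4$):

```python
# Each term 10^(10^k) mod 7: since 10 ≡ 3 (mod 7) and 3^6 ≡ 1 (mod 7),
# only 10^k mod 6 matters, and 10^k ≡ 4 (mod 6) for every k >= 1.
terms = [pow(10, 10**k, 7) for k in range(1, 8)]
print(terms)                 # [4, 4, 4, 4, 4, 4, 4]  (and 4 ≡ -3 mod 7)
print(sum(terms) % 7)        # 0
```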
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to change the appearance of the correct answer of $\cos55^\circ\cdot\cos65^\circ\cdot\cos175^\circ$ I represented the problem in the following form and solved it: $$\begin{align}-\sin35^\circ\cdot\sin25^\circ\cdot\sin85^\circ\cdot\sin45^\circ&=A\cdot\sin45^\circ\\ -\frac{1}{2}(\cos20^\circ-\cos70^\circ)\cdot\frac{1}{2}(\cos50^\circ-\cos120^\circ)&=A\cdot\sin45^\circ\\ \cos20^\circ\cdot\cos50^\circ-\cos50^\circ\cdot\cos70^\circ+\frac{\cos20^\circ}{2}-\frac{\cos70^\circ}{2}&=A\cdot(-4)\cdot \sin45^\circ\\ \frac{1}{2}(\cos30^\circ+\cos70^\circ)-\frac{1}{2}(\cos20^\circ+\cos120^\circ)&=A\cdot(-4)\cdot \sin45^\circ\\ \frac{\sqrt{3}}{4}+\frac{\cos70^\circ}{2}-\frac{\cos20^\circ}{2}+\frac{1}{4}+\frac{\cos20^\circ}{2}-\frac{\cos70^\circ}{2}&=-2\sqrt{2}A\\ A&=-\frac{\sqrt{6}+\sqrt{2}}{16} \end{align}$$ I believe that the above answer is true. But it didn't match any of the variants below: A) $-\frac{1}{8}$ B)$-\frac{\sqrt{3}}{8}$ C) $\frac{\sqrt{3}}{8}$ D) $-\frac{1}{8}\sqrt{2-\sqrt{3}}$ E) $-\frac{1}{8}\sqrt{2+\sqrt{3}}$ I did the problem over again. After getting the same result, I thought that the appearance of my answer could be changed to match one of the above, so I tried to apply various formulas involving nested radicals, all to no avail. How can this be done?
As $\cos175^\circ=\cos(180^\circ-5^\circ)=-\cos5^\circ,$ the product equals $-\cos55^\circ\cos65^\circ\cos5^\circ$. Now use the identity $\cos x\cos(60^\circ-x)\cos(60^\circ+x)=\frac14\cos3x$ (proved the same way as in "prove that $\cos x\cos(x-60^\circ)\cos(x+60^\circ)=\frac14\cos3x$"): with $x=5^\circ$, $$4\cos(60^\circ-5^\circ)\cos5^\circ\cos(60^\circ+5^\circ)=\cos(3\cdot5^\circ),$$ so the product is $-\frac14\cos15^\circ$. Now use $15^\circ=60^\circ-45^\circ$ or $=45^\circ-30^\circ$ to evaluate $\cos15^\circ$.
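A numerical check of the resulting value (the comparison against choice E and against the asker's $-(\sqrt6+\sqrt2)/16$, which turn out to be the same number, is my own addition):

```python
import math

deg = math.pi / 180
A = math.cos(55*deg) * math.cos(65*deg) * math.cos(175*deg)

E = -math.sqrt(2 + math.sqrt(3)) / 8            # answer choice E
asker = -(math.sqrt(6) + math.sqrt(2)) / 16     # the asker's form
quarter_cos15 = -math.cos(15*deg) / 4           # -(1/4) cos 15°

print(abs(A - E) < 1e-12)                # True
print(abs(E - asker) < 1e-12)            # True: the two forms are equal
print(abs(A - quarter_cos15) < 1e-12)    # True
```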
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Showing that $\lim_{x \rightarrow \infty}\frac{e^x}{x^n} = \infty$ I have been told that I can show this by showing two things: first that $$f(x) = \frac{e^x}{x^n}>\frac{e^n}{n^n}, \quad (x > n)$$ then $$f'(x) = \frac{e^x(x-n)}{x^{n+1}}>\frac{e^{n+1}}{n^{n+1}}, \quad (x > n+1)$$ I have managed to show both of these things; the first is fairly easy, and the second follows from $$\frac{e^x(x-n)}{x^{n+1}} = \frac{e^x}{x^n} - \frac{n}{x}\frac{e^x}{x^{n}} > \frac{e^x}{x^n}\bigg(\frac{1}{n+1}\bigg) > \frac{e^{n+1}}{n^{n+1}}$$ but I do not see how this helps me deduce that $\lim_{x \rightarrow \infty}\frac{e^x}{x^n} = \infty$ Just to clarify, I know how to show this a few other ways; I am interested in showing it this way!
$e^x = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^{n+1}}{(n+1)!} + \cdots$ Hence: $e^x \gt \frac{x^{n+1}}{(n+1)!}$ for $x \gt 0$, $n \in \mathbb{N}$, $n \ge 1$. Therefore $\frac{e^x}{x^n} \gt \frac{x}{(n+1)!}$. Finally: $\lim_{x \rightarrow \infty} \frac{e^x}{x^n} \ge \lim_{x \rightarrow \infty} \frac{x}{(n+1)!} = \infty$.
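The bound in this answer can be spot-checked numerically for a few $(n, x)$ pairs (the specific pairs are my own choice):

```python
import math

# Check e^x > x^(n+1)/(n+1)!  and hence  e^x / x^n > x / (n+1)!,
# a lower bound that blows up linearly with x.
for n in (1, 3, 5):
    for x in (2.0, 10.0, 50.0):
        assert math.exp(x) > x**(n + 1) / math.factorial(n + 1)
        assert math.exp(x) / x**n > x / math.factorial(n + 1)
print("all bounds hold")
```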
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Can a set of surreal numbers be defined with arbitrary cardinality? It is my understanding that the surreal numbers form a class rather than a set, because their collection is larger than any set. Thus it would seem to follow that for any cardinality, such as $\aleph_n$ or $\beth_n$ for a fixed $n$, a set of surreal numbers can be defined with that cardinality. Is explicitly defining such a set something that can be easily done?
Yes: Since every ordinal is a surreal number, the sets you're looking for can be taken to be the initial ordinals that represent $\aleph_n$, $\beth_n$, and so forth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Conditional probability on 3 events. Say you have 3 events $A, B$, and $C$. Then you have to calculate the probability of $B$ given $A$. The formula that the answer key states: $$P(B|A)=P(B|A,C)P(C) + P(B|A,C^\complement)P(C^\complement)$$ I understand that for just two events $B$ and $A$ it is: $$P(B)=P(B|A)P(A) + P(B|A^\complement)P(A^\complement)$$ How do you derive the first formula?
Sometimes it's easier to work with intersections rather than conditionals. The key formula here is that $$ P(A) = P(A\cap B)+P(A\cap B^c)$$ which follows from the fact that $A = (A\cap B)\cup(A\cap B^c)$ and $(A\cap B)\cap(A\cap B^c) = \emptyset$ and that disjoint unions are additive. The second formula that you write down is just this with the definition of conditional probability $P(A|B) = P(A\cap B)/P(B)$ used on the RHS. So you can write $$ P(A\cap B) = P(A\cap B\cap C)+P(A\cap B\cap C^c) $$ and by definition of conditional probability, $$ P(B\mid A)P(A) = P(B\mid A\cap C)\,P(A\cap C) + P(B\mid A\cap C^c)\,P(A\cap C^c)$$ and dividing both sides by $P(A)$ and using the definition again. $$ P(B\mid A) = P(B\mid A\cap C)\,P(C\mid A)+P(B\mid A\cap C^c)\,P(C^c\mid A).$$ It appears that your first formula only applies to when $C$ and $A$ are independent so that $P(C\mid A) = P(C)$ and $P(C^c\mid A)=P(C^c).$ So unless that's an assumption of the problem, the formula is wrong.
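To see the conditioned version of the law of total probability in action, here is a tiny exact check with Python's `fractions` module (the die-roll events $A$, $B$, $C$ are my own toy example, not from the question):

```python
from fractions import Fraction

# One fair die roll: A = "even", B = "greater than 3", C = "multiple of 3".
omega = range(1, 7)

def P(event):                      # event: predicate on outcomes
    return Fraction(sum(1 for w in omega if event(w)), 6)

def Pcond(event, given):           # P(event | given)
    return P(lambda w: event(w) and given(w)) / P(given)

A = lambda w: w % 2 == 0
B = lambda w: w > 3
C = lambda w: w % 3 == 0
notC = lambda w: not C(w)

lhs = Pcond(B, A)
rhs = (Pcond(B, lambda w: A(w) and C(w)) * Pcond(C, A)
       + Pcond(B, lambda w: A(w) and notC(w)) * Pcond(notC, A))
print(lhs, rhs)    # 2/3 2/3
```

Note that the weights on the right are $P(C\mid A)$ and $P(C^c\mid A)$, not $P(C)$ and $P(C^c)$, exactly as derived above.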
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Maximum Likelihood Estimator of Uniform($-2 \theta, 5 \theta$) Let $X = (X_1, \dots, X_n)$ be a random sample from the Uniform($-2 \theta, 5 \theta$) distribution with $\theta > 0$ unknown. Find the maximum likelihood estimator (MLE) for $\theta.$ Furthermore, determine whether the MLE $\hat{\theta}$ is a function of a one-dimensional sufficient statistic for $\theta.$ Let $M = \max{ \{X_1, \dots, X_n \}}$ and $L = \min{ \{X_1, \dots, X_n \}}.$ Consider the likelihood function of $\theta$ $$L(\theta; x) = \prod_{k=1}^{n} f(x_k ; \theta) = \prod_{k=1}^{n} \frac{1}{7 \theta} \cdot \mathbf{1}_{(-2 \theta, 5 \theta)}(x_k) = \frac{1}{(7 \theta)^n} \cdot \mathbf{1}_{(-2 \theta, 5 \theta)}(m) \cdot \mathbf{1}_{(-2 \theta, 5 \theta)}(\ell) \cdot \prod_{k=1}^{n} \mathbf{1}_{\mathbf{R}}(x_k).$$ By the Factorization Theorem, it follows that $(M, L)$ is sufficient for $\theta,$ and in fact, it is easy to show that $(M, L)$ is minimal sufficient for $\theta.$ Our candidates for the MLE include $M,$ $L,$ and functions of $M$ and $L,$ e.g., the midrange $\frac{M-L}{2};$ however, I am running into difficulty finding the MLE and establishing that it gives a maximum. On first glance, it appeared that $\hat{\theta} = L$ because $m \geq \ell$ implies that $\frac{1}{(7m)^n} \leq \frac{1}{(7 \ell)^n};$ however, this is only true if $m \geq \ell > 0.$ Reading a few other posts on here, I considered the possibility that the midrange $\frac{M-L}{2}$ is the MLE for $\theta.$ Of course, the difficulty arises out of the fact that there are many possibilities for $L$ and $M$: $m \geq \ell > 0,$ $m \geq 0 > \ell,$ and $0 \geq m > \ell,$ to name a few. Can anyone offer any helpful insight or tips?
The support of $L(\theta; x)$ is given by $L\ge -2 \theta$, $M\le 5\theta$; or, equivalently, $\theta \ge M/5$ and $\theta \ge -L/2$. That is, $$\theta \ge T \triangleq \max(M/5,-L/2)$$ Because $L(\theta; x)$ is decreasing in $\theta$ over its support (for every $n\ge1$), we get $ \theta_{ML}=T$ Regarding $(M,L)$ being or not being minimal sufficient: You say "it is easy to show that $(M,L)$ is minimal sufficient for $\theta$" but I don't think that's true. $$L(\theta; x)=\frac{1}{(7\theta)^n}\mathbf{1}_{[M \le 5 \theta]} \mathbf{1}_{[L \ge -2 \theta]}=\frac{1}{(7\theta)^n} \mathbf{1}_{[T\le\theta]} $$ tells us that both $(M,L)$ and $T$ are sufficient. $T$ is clearly minimal. Because $T=f(M,L)$ (but not the reverse), $(M,L)$ cannot be minimal. Put another way, consider some $x_1$ with $(M_1,L_1)=(100,-2)$ and some $x_2$ with $(M_2,L_2)=(100,-4)$, so that $T_1=T_2=20$ Then $\frac{L(\theta; x_1)}{L(\theta; x_2)}=1$ (doesn't depend on $\theta$), but $(M_1,L_1)\ne (M_2,L_2)$, hence $(M,L)$ is not minimal.
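A quick simulation sketch (the choice of $\theta$, $n$ and the seed are my own) illustrating that $T=\max(M/5,-L/2)$ never exceeds $\theta$ and sits very close to it for large $n$:

```python
import random

random.seed(0)
theta = 2.0
n = 100_000
x = [random.uniform(-2 * theta, 5 * theta) for _ in range(n)]

M, L = max(x), min(x)
T = max(M / 5, -L / 2)       # the MLE derived above

print(T <= theta)            # True: T can never overshoot theta
print(theta - T < 0.01)      # True: and it is very close for large n
```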
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
If $0\le x_1\le\dots\le x_n\le1$, then $\sum\limits_{i=1}^n\left(x_i-\frac{i}{n+1}\right)^2\le\sum\limits_{i=1}^n\left(\frac{i}{n+1}\right)^2$ Given positive integer $n$, $0\leq x_1 \leq \dots \leq x_n \leq 1$. Prove that $$ \sum_{i=1}^n \left(x_i- \frac{i}{n+1} \right)^2 \leq \sum_{i=1}^n \left(\frac{i}{n+1}\right) ^2 $$ I try to use transformation $x_i=\sum_{k=1}^i y_k$, prove the case when n=2, with a few discussions. It seems hard for $n\geq 3$. Any help is appreciated. Thx.
Hint: you can use the Chebyshev sum inequality to prove that $$ \frac{1}{n}\sum_{i=1}^n x_i \cdot \frac{i}{n+1} \geq \left(\frac{1}{n}\sum_{i=1}^n x_i\right) \cdot \left(\frac{1}{n}\sum_{i=1}^n \frac{i}{n+1}\right) = \frac{1}{2n} \sum_{i=1}^n x_i. $$
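A quick numerical illustration of both the hint and the target inequality on a random sorted sequence (my own toy data; both sequences are increasing, so Chebyshev's sum inequality applies):

```python
import random

random.seed(1)
n = 20
x = sorted(random.random() for _ in range(n))    # 0 <= x_1 <= ... <= x_n <= 1
a = [(i + 1) / (n + 1) for i in range(n)]        # i/(n+1), also increasing

# Chebyshev's sum inequality: mean of products >= product of means
lhs = sum(xi * ai for xi, ai in zip(x, a)) / n
rhs = (sum(x) / n) * (sum(a) / n)
print(lhs >= rhs)        # True

# ... and the original inequality itself:
S1 = sum((xi - ai) ** 2 for xi, ai in zip(x, a))
S2 = sum(ai ** 2 for ai in a)
print(S1 <= S2)          # True
```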
{ "language": "en", "url": "https://math.stackexchange.com/questions/2391997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all functions of positive integers for $f(f(n))=n+2$ This is a very interesting word problem that I came across in an old textbook of mine. I know it has something to do with induction, which tends to yield the shortest, simplest proofs for pinning down all such functions, but other than that, the textbook gave no hints and I'm really not sure how to approach it. Any guidance, hints, or help would be truly greatly appreciated. Thanks in advance :) Anyway, here is the problem: Let $\mathbb{N}^+$ denote the set of positive integers. Find all functions $f:\mathbb{N}^+ \rightarrow \mathbb{N}^+$ which are strictly increasing and such that for all positive integers $n$, we have $f(f(n))=n+2.$ I found that the function $f(n)=n+1$ works, but I'm not sure whether it is the only possibility, and even if so, how to prove that it is the only solution.
Note that $f(f(1))=3$ and $f(1)\ge 1$ We can't have $f(1)=1$ because that would make $3=f(f(1))=f(1)=1$ If we had $f(1)=3$, we'd have $3=f(f(1))=f(3)$, but $f$ is strictly increasing, so this is a contradiction. As would be if $f(1)=n\gt 3$ when we'd have $3=f(f(1))=f(n)\gt f(1)\gt 3$. So we have $f(1)=2$, and then $f(2)=f(f(1))=3$ and if $f(r)=r+1$ we have $r+2=f(f(r))=f(r+1)$, and you can prove by induction.
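The forced values $f(1)=2$, $f(2)=3$, $\ldots$ can be confirmed by brute force on a small prefix of the domain (a sketch; the domain/range caps are my own choice, and the functional equation is only checked where it stays inside the prefix):

```python
from itertools import combinations

# Strictly increasing f on {1,...,6} with values in {1,...,9},
# requiring f(f(n)) = n + 2 whenever f(n) is still in the domain.
sols = []
for vals in combinations(range(1, 10), 6):
    f = dict(zip(range(1, 7), vals))
    if all(f[f[n]] == n + 2 for n in f if f[n] in f):
        sols.append(vals)
print(sols)          # [(2, 3, 4, 5, 6, 7)]  ->  f(n) = n + 1
```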
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
If $\frac {a_{n+1}}{a_n} \ge 1 -\frac {1}{n} -\frac {1}{n^2}$ then $\sum\limits_na_n$ diverges Let $(a_n)_{n \ge 1}$ be a sequence of positive real numbers such that, for every $n\ge1$, $$\frac {a_{n+1}}{a_n} \ge 1 -\frac {1}{n} -\frac {1}{n^2} \tag 2$$ Prove that $x_n=a_1 + a_2 + \cdots + a_n$ diverges. It is clear that $(x_n)$ is increasing, so it either converges or diverges to $+\infty$. I tried to prove that the limit is $+\infty$, but without success. No divergence criterion for series seems to work here. UPDATE Attempt: Suppose a stronger inequality holds, namely that, for every $n\ge1$, $$\frac{a_{n+1}}{a_n}\geqslant1-\frac1n \tag 1$$ Then: $$\frac {a_3}{a_2} \ge \frac 1 2\qquad \frac {a_4}{a_3} \ge \frac 2 3\qquad \ldots\qquad \frac {a_{n-1}}{a_{n-2}} \ge \frac {n-3}{n-2}\qquad \frac {a_n}{a_{n-1}} \ge \frac {n-2}{n-1}$$ Multiplying all the above yields $$\frac {a_n}{a_2} \ge \frac 1 {n-1}$$ The last inequality proves the divergence.
It's easy to show that, for every $n\ge3$, $$ 1 -\frac {1}{n} -\frac {1}{n^2} \ge \frac {n-2}{n-1}$$ It follows that, for every $n\ge3$, $$\frac{a_{n+1}}{a_n}\ge \frac {n-2}{n-1}$$ Thus, $$\frac {a_4}{a_3} \ge \frac 1 2\qquad \frac {a_5}{a_4} \ge \frac 2 3\qquad \ldots\qquad \frac {a_{n-1}}{a_{n-2}} \ge \frac {n-4}{n-3}\qquad \frac {a_n}{a_{n-1}} \ge \frac {n-3}{n-2}$$ Multiplying all the above, one gets: $$\frac {a_n}{a_3} \ge \frac 1 {n-2}$$ hence $$a_n\ge \frac{a_3}{n-2}$$ The last inequality together with the comparison criteria for series proves the divergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How to determine whether a curve lies on a plane? Given that a sphere $x^2+y^2+z^2=1$ and a cylinder $x^2+y^2=x$ intersect at the point $(1/2,1/2,1/\sqrt 2)$, determine whether the curve of intersection lies on a plane. We can find the curve using the following parametrization: $$ c(t)=\bigg( \frac{1}{2}+\frac{1}{2}\cos t,\frac{1}{2}\sin t,\sin\frac{t}{2}\bigg) $$ One way to solve this is to require that tangent vectors to the curve be orthogonal to the normal vector $(a,b,c)$ of a candidate plane $ax+by+cz=d$. The tangent vector is: $$ c'(t)=\bigg( -\frac{1}{2}\sin t,\frac{1}{2}\cos t,\frac{1}{2}\cos\frac{t}{2}\bigg) $$ For $t=\pi$ we have $c'(t)=( 0,1/2,0)$. Then: $$ (a,b,c)\cdot(0,1/2,0)=0\implies b=0 $$ Similarly $t=0\implies c=0$ and $t=\pi/2\implies a=0$. Therefore such a plane doesn't exist. I don't understand the logic of picking particular points, and why the fact that we found $a=b=c=0$ proves that the plane doesn't exist.
First of all, draw some pictures and you'll see that a sphere-cylinder intersection is only planar (it's a circle) when the centerline of the cylinder passes through the center of the sphere. That's not the case here. If you want "proof" as opposed to pictures and intuition, just use your parametric equations to compute four points $(x_i,y_i,z_i)$ on the curve. Then compute the following determinant $$ D = \left|\begin{matrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \\ x_4 & y_4 & z_4 & 1 \\ \end{matrix}\right| $$ Actually, $\tfrac16|D|$ is the volume of the tetrahedron having the four points as vertices, so $D$ will be zero if and only if the four points are coplanar. If you pick four points at random, you will almost certainly get a non-zero value for $D$, which will show that the curve is non-planar.
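Here is that computation carried out for the curve in the question, in plain Python (the scalar triple product of edge vectors equals the $4\times4$ determinant up to sign, so it gives the same coplanarity test; the four parameter values are my own choice):

```python
import math

def c(t):   # the parametrized intersection curve from the question
    return (0.5 + 0.5 * math.cos(t), 0.5 * math.sin(t), math.sin(t / 2))

def det4(p1, p2, p3, p4):
    # |x y z 1| determinant = +/- scalar triple product of edges from p1
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    w = [p4[i] - p1[i] for i in range(3)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

D = det4(c(0), c(math.pi / 2), c(math.pi), c(3 * math.pi / 2))
print(abs(D) > 1e-9)     # True: the four points are not coplanar
```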
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Is this an issue with the law of the excluded middle, or an issue with the proof? Part of a proof requiring you to prove that if $x^2$ is odd then $x$ is odd (given that $x \in \mathbb{N}$). It is my understanding that the contrapositive is used for this as follows. $x=2n, n \in \mathbb{N}$ $\Rightarrow x^2 = 4n^2$ $\Rightarrow x^2$ is even Then using the contrapositive: $\Rightarrow \lnot Even(x^2) \rightarrow \lnot Even(x)$ Now the Law of the excluded middle: $\Rightarrow Odd(x^2) \rightarrow Odd(x)$ So reasonably straight forward. However, my issue with this is whilst $x^2$ is even, it is even more strictly defined as a multiple of $4$. So in the contrapositive sense it doesn't feel right that it can be any even number. So what happens if that said number is 6? Not strictly divisible by 4 but still even. This is a bit difficult because this proof's conclusion is actually correct and any squared integer is either divible by 4 or odd. But that was found through exhaustion in a different way. Using the law of the excluded middle after saying that it wasn't just any even number seems spurious. Could someone please clarify if I am right with reservation about this? If not, please explain (without just saying it is contrapositive, therefore). I feel like there should be a continuing connection between the definition of what type of even and what can be deduced from that.
Ignore the broader proof - do you agree with the assertion "If $x$ is divisible by $4$, then $x$ is even?" This is all that's going on. We're always allowed to "forget" information in a proof, and this has nothing to do with the excluded middle. When you conclude "$x^2$ is even," this in no way implies that you've concluded "$x^2$ is even and that's the most that can be said."
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Let $R= \mathbb{Q}[x^2 ,x^3 ]$, the set of all polynomials over $\mathbb{Q}$ with no $x$ term. Show that the $\gcd$ of $x^5$ and $x^6$ does not exist Let $R= \mathbb{Q}[x^2 ,x^3 ]$, the set of all polynomials over $\mathbb{Q}$ with no $x$ term. Show that a gcd of $x^5$ and $x^6$ does not exist in $R$. I am reasoning for the absurd and I am assuming that $gcd (x ^ 5, x ^ 6) = ax ^ 2 + bx ^ 3\in\mathbb{Q}[x^2 ,x^3 ] $, then, for the Bezout identity there exists $p (x) = cx ^ 2 + dx^3$ and $q(x)= ex ^ 2 + fx ^ 3$ in $\mathbb{Q}[x^2 ,x^3 ]$ such that $x ^ 5p (x) + x ^ 6q (x) = ax ^ 2 + bx ^ 3$, which is absurd since $$9 = deg (x ^ 5p (x) + x ^ 6q (x)) \neq deg (ax ^ 2 + bx ^ 3) = 3.$$ Is it okay what I did? Thank you very much.
Suppose $\,d\,$ is a gcd of $\,x^5,x^6$ in $R.\,$ By the general (universal) definition of the gcd $\ \ \ c\mid x^5,x^6\! \iff c\mid d.\, $ Taking $c = d\,$ $\rm\color{#0a0}{shows}$ $\,d\mid \color{#c00}{x^5},x^6,\,$ i.e. the gcd is a common divisor. $x^3\mid x^5,x^6\ \ \Rightarrow\ \ x^3\mid d,\ $ hence $\ d = x^3,\, x^4,\,$ or $\ \color{#c00}{x^5}\, $ (up to unit factors). But none work since $x^2\mid x^5,x^6\ \ \Rightarrow\ \ \color{#90f}{x^2\mid x^3}\,$ if $\:d=x^3,\ $ $\rm\color{#0a0}{and}$ $\,d=x^4\Rightarrow\, \color{#90f}{x^4\mid x^5},\,$ and $\,d=x^5\Rightarrow\, \color{#90f}{x^5\mid x^6}.\ $ Thus every possibility for $\,d\,$ yields some $\,\color{#90f}{x^i\mid x^{i+1}}$ in $R$ $\,\Rightarrow\, x^{i+1}/x^i = x\in R,\,$ contradiction. Remark $ $ There are a couple problems in the attempted proof. First, it is easy to show that the elements of $\,\Bbb Q[x^2,x^3]$ are precisely the polynomials $\,f(x)\in \Bbb Q[x]\,$ whose coefficient of $\,x^{1}$ is zero, i.e. $\:x^2\mid f(x)-f(0).\,$ There is no reason given for your assumption that these have the very special form $\,ax^2+bx^3\,$ for $\,a,b\in \Bbb Q\,$ for the both gcd and the Bezout coefficients polynomials. Second, there is no reason to believe that if the gcd exists then it satisfies a Bezout identity. Indeed, this is proved in $\,\Bbb Q[x]\,$ by using the division algorithm, but that fails dramatically here. We cannot even perform the first step in computing the Bezout identity for $\,\gcd(x^6,x^5)\,$ by the (extended) Euclidean algorithm, i.e. we cannot divide $\,x^6\,$ by $\,x^5\,$ in $R$ to get a smaller degree remainder, since $\,x^6 = x^5 q + r\,\Rightarrow\, r=0\,$ by evaluating at $\,x=0,\,$ so $\,x^6 = x^5 q\Rightarrow\, q = x\in R,\,$ contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
relationship between the $2$-norm of a symmetric matrix and its maximum eigenvalue. If I have an $N\times N$ symmetric matrix $Q$, what is the relationship between $\|Q\|$ and its maximum eigenvalue, where $\|\cdot\|$ is the (spectral) $2$-norm?
The general answer is $$\|Q\| = \max \{|\lambda_{\max}(Q)|, |\lambda_{\min}(Q)|\},$$ i.e. for a symmetric matrix the $2$-norm is the largest eigenvalue in absolute value. To see that either term in the maximum can be the relevant one, consider $Q = I_{N}$ (where $|\lambda_{\max}|$ attains the norm) and $Q = -I_{N}$ (where $|\lambda_{\min}|$ does).
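A quick numerical confirmation with NumPy (assuming NumPy is available; the matrix size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Q = (A + A.T) / 2                      # symmetrize

spectral = np.linalg.norm(Q, 2)        # operator 2-norm (largest singular value)
eigs = np.linalg.eigvalsh(Q)           # real eigenvalues, ascending
print(np.isclose(spectral, max(abs(eigs[0]), abs(eigs[-1]))))  # True
```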
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that the limit of an abstract function with certain properties is zero. Let $$F(x) = \sum_{a \leq x} f(a)$$ where $f: A \subseteq \mathbb{R} \rightarrow \mathbb{R} $ is such that: (1) $f(a) \geq 0$ for every $a \in A$, and (2) $\displaystyle\sum_{a \in A} f(a) = 1$. How do I prove that $$\lim_{x\to -\infty} F(x) = 0\text{?}$$ I know how to do delta-epsilon proofs for limits that take $x$ to infinity, but I don't know any theorems that let me exploit the two conditions on the function, allowing me to prove this limit without having to "peek inside" the function to see what it looks like. My real analysis is very rusty.
I use the following fact below, so I thought it would be useful to prove it first. If $f:A\to\mathbb{R}$ is such that $f(a)\geq 0$ for every $a\in A$ and $\sum_{a\in A}f(a)<\infty$, then $[f>0]$ (notation for $\{a\in A \mid f(a)>0\}$) is countable. Proof: For each $x>0$, consider the set $[f>x]$. Note that $$ \operatorname{card}([f>x])x \leq \sum_{a\in[f>x]}f(a) \leq \sum_{a\in A}f(a) < \infty. $$ Thus $[f>x]$ is finite. Therefore $[f>0] = \cup_{n\in\mathbb{N}}[f>1/n]$ is countable, as it is a countable union of finite sets. $\square$ Now we consider the problem. Since $f(a)\geq0$ for all $a\in A$ and $\sum_{a\in A}f(a)$ converges, it must converge absolutely. Let $A':=\{a\in A \mid f(a)>0\}$. Then $A'$ must be countable, for otherwise $\sum_{a\in A}f(a)$ would diverge. Let's enumerate $A'=\{a_1,a_2,\ldots\}$ [thanks to Adayah for noticing mistake here]. Now we have $$ \sum_{n=1}^\infty f(a_n) = \sum_{a\in A'}f(a)=\sum_{a\in A}f(a)= 1. $$ Let $\varepsilon>0$. Since the above sum converges, there exists $N\in\mathbb{N}$ such that $\sum_{n=N}^\infty f(a_n) < \varepsilon$. Then whenever $x<\min_{n< N}a_n$, we have $$ F(x) = \sum_{a\leq x}f(a) \leq \sum_{n=N}^\infty f(a_n) < \varepsilon. $$ This proves that $\lim_{x\to-\infty}F(x)=0$. $\square$ I want to remark that we didn't actually need $\sum_{a\in A}f(a)=1$. We just needed the sum to converge. Response to old version of question: Even with the edits, there still seems to be something wrong. Consider $A=\{0\}$ and $f:A\to\mathbb{R}$ given by $f(0) = 1$. Then indeed $f(a)\geq 0$ for any $a\in A$ and $\sum_{a\in A}f(a)=1$. However, for all $x\geq 0$, we have $F(x)=1$. Therefore $\lim_{x\to\infty}F(x)=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
The length of an arc within two intersecting circles I found this mathematical expression for the length of an arc $l(r)$, i.e. the shorter arc $ACB$. In other words, why is $l(r)$ equal to the expression $l (r) = 2r \arccos (r/2R)$ in terms of $R$? I have tried hard to prove it but I couldn't. I hope someone could give a clue to the answer. The diagram and question can be found in the attached picture.
We have $$ l(r) = 2r \angle ADC. $$ So we just need to compute $\angle ADC$. Let $Q$ be the center of the circle on the left. Note that $AD=r$ and $DQ=AQ=R$. It follows that $\angle ADQ=\angle ADC=\arccos(r/2R)$. To see why, note that $\triangle ADQ$ is isosceles and drop a perpendicular from $Q$ to the midpoint of $AD$. Is that clear?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determine the highest order of an element of a Rubik's Cube group The period of a sequence of moves on a Rubik's Cube is the number of times it must be performed on a solved cube before the cube returns to its solved state. For example, a $90$° clockwise turn on the right face has a period of four; a $180$° clockwise turn on the right face and a $180$° turn on the top face has a period of $12$. Let's make a $3\times3\times3$ Rubik's Cube group $G$. Each element of $G$ corresponds to each possible scramble of the cube - the result of any sequence of rotations of the cube's faces. Any position of the cube can be represented by detailing the rotations that put a solved cube into that state. With a solved cube as a starting point, each of the elements of $G$ directly align to each of the possible scrambles of the Rubik's Cube. The cardinality of $G$ is $|G|=43{,}252{,}003{,}274{,}489{,}856{,}000=2^{27}3^{14}5^{3}7^{2}11$ and the largest order/period of any element in $G$ is $1260$. To elaborate, no algorithm needs to be performed on a cube more than $1260$ times to return it to the solved state. Now let's say we extended $G$ for other sizes of cubes, so $G_3$ is the group of a $3\times3\times3$ and $G_4$ is a the group of a $4\times4\times4$. (If this isn't a valid naming convention, forgive me, I've just begun learning group theory). Is there a way to find the highest order for any sequence of moves in $G_x$? For example, could I define a function $f$ such that $f(x)$ would give the highest order for any sequence of moves in $G_x$? What would $f$ look like? Would such a function be possible for any size of cube? Thanks a lot in advance. Once again I apologize for any mistakes I've made; feel free to point them out or correct them.
Notations are from https://ruwix.com/the-rubiks-cube/notation/ $RY$ is an element of order 1260. It's easy to check that $(RY)^{36}$ keeps all the corners intact but permutes the edges in 2 disjoint groups of order 5 and 7, respectively. So, the total order is $36*5*7=1260$. $RY$ applied 36 times - https://ruwix.com/saved-rubiks-cube/?moves=RYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRYRY
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 1, "answer_id": 0 }
Evaluating $\int x\sin^{-1}x \,dx$ I was integrating $$\int x\sin^{-1}x \,dx.$$ After applying integration by parts and some rearrangement I got stuck at $$\int \sqrt{1-x^2}\,dx.$$ Now I have two questions: (1) please suggest how to proceed from the point where I got stuck; (2) please provide an alternative way to solve the original question. Please help!!!
Alternative way: Let $x=\sin t$, so $$\int x\sin^{-1}x \,dx=\int t\sin t\cos t \,dt.$$ Integrate by parts with $u=t$ and $dv=\sin t\cos t \,dt$, and finish it!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2392990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Goodwin Staton integral $G(x) = \int_0^\infty \frac{e^{-t^2}}{t+x}dt$ and its symmetry The Goodwin Staton integral $$G(x) = \int_0^\infty \frac{e^{-t^2}}{t+x}dt$$ is said on Wikipedia to have the symmetry $$G(x) = -G(-x)$$ I'm not convinced by this symmetry... indeed if we consider $G(-x)$ and we choose $k = -t$ this integral becomes $$G(-x) = \int_0^{-\infty} \frac{e^{-k^2}}{-k-x}(-dk)$$ or $$G(-x) = -\int_{-\infty}^{0} \frac{e^{-k^2}}{k+x}dk$$ which does not seem to be equal to $-G(x)$ to me... Any suggestions ? EDIT : actually the symmetry of this integral is part from my problem. My final goal is to compute this integral : $$PV. \left( \int_{0}^{\infty} \frac{e^{-a^2(k-q)^2} \; k\; dk}{k_0^2-k^2} -\int_{0}^{\infty} \frac{e^{-a^2(k+q)^2} \; k\; dk}{k_0^2-k^2}\right) $$ If i'm not wrong, with $k\to -k$ in second integral we get that $$PV. \left( \int_{-\infty}^{\infty} \frac{e^{-a^2(k-q)^2} \; k\;dk}{k_0^2-k^2} \right) $$ Then by taking $k\to k+k_0$, we finally have $$ - PV. \left( \int_{-\infty}^{\infty} \frac{e^{-a^2(k-q)^2} \;dk}{k+2k_0} + k_0 \int_{-\infty}^{\infty} \frac{e^{-a^2(k-q)^2} \;dk}{k(k+2k_0)} \right) $$ which I do not know how to handle. These seem to be closely related to these Goodwin-Staton integrals / Dawson functions... But. Meh
I think Wikipedia is wrong. The integral does not converge for $x<0$; try e.g. int(e^(-t^2)/(t-2),t=0..infinity) in Wolfram Alpha. It can be interpreted as a Cauchy principal value (see Nico Temme's answer http://mathforum.org/kb/message.jspa?messageID=7389647). I use $$G(-x) = - \frac{1}{2} e^{-x^2}\left(\pi\; \mathrm{erfi}(x) + \mathrm{Ei}(x^2)\right), \quad x>0$$ or $$G(-x) = \sqrt{\pi} F(-x) - \frac{1}{2} e^{-x^2}\mathrm{Ei}(x^2), \quad x>0,$$ i.e. http://dlmf.nist.gov/7.5.E13 extended to negative $x$ (with the Dawson integral $F$ and the exponential integral $\mathrm{Ei}$). The Dawson integral is odd, and therefore $G(x)$ is not odd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $X$ is a CW complex, then the path components of $X$ are the components of $X$. I'm self-learning Algebraic Topology from Rotman's Introduction to Algebraic Topology and I've come across this problem: If $X$ is a CW complex, then the path components of $X$ are the components of $X$. The proof states: If $A$ is a path component of $X$ and $Y$ is a component of $X$ containing $A$ and since $A$ is both open and closed, then it follows that $A=Y$. How does it follows here? I don't see the connection.
A subset $A$ of a topological space $X$ which is both open and closed is a union of connected components. To see this, take $x\in A$ and let $C$ be the connected component of $x$. Then $C\cap A$ is closed in $C$, and $C\cap (X-A)$ is also closed in $C$ (since $A$ is open). As $C$ is connected and $x\in C\cap A$ is nonempty, you deduce that $C\cap (X-A)$ is empty, and hence $C\subset A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Dualizing simple module which is given by primitive idempotent Let $e$ be a primitive idempotent in an associative finite-dimensional $k$-algebra $A$. Then the two modules $Ae/\text{rad}(Ae)$ and $D(eA/\text{rad}(eA))$ are both simple, where $D: \text{mod}(A^{\text{op}}) \to \text{mod}(A)$ is the standard dualization. Question: Is there an easy proof of the fact (Or is it even true) that those two simple modules are isomorphic to each other as $A$-modules?
Denote by $k$ the ground field of the algebra $A$. Since both modules are simple, every nonzero homomorphism of $A$-modules $\varphi: Ae/\text{rad}(Ae) \to D(eA/\text{rad}(eA))$ will be an isomorphism. In order to find this, we first construct some $\tilde{\varphi}: Ae \to D(eA/\text{rad}(eA))$: Let $g \in D(eA/\text{rad}(eA))$ be an arbitrary linear map $eA/\text{rad}(eA) \to k$ such that $g(\overline{e}) \neq 0$. This exists since $e \notin \text{rad}(eA)$, i.e. $\overline{e} \neq 0$. We claim $eg \neq 0$: Indeed, we have $$(eg)(\overline{e}) = g(\overline{e}e) = g(\overline{e}) \neq 0$$ by definition of $g$. This gives rise to a map $\tilde{\varphi}: Ae \to D(eA/\text{rad}(eA)), \ ae \mapsto aeg$, which is clearly $A$-linear. We claim $\text{rad}(Ae) \subseteq \ker(\tilde{\varphi})$: For $ae \in \text{rad}(Ae)$ we get for all $a' \in A$ $$\tilde{\varphi}(ae)\left(\overline{ea'}\right) = (aeg)\left( \overline{ea'}\right) = g\left(\overline{ea'ae} \right) = g(0) = 0$$, where the second to last equality follows from $ea'ae \in eA\text{rad}(Ae) = e\text{rad}(A)e = \text{rad}(eA)e \subseteq \text{rad}(eA)$, i.e. $\overline{ea'ae} = 0$. Therefore, $\tilde{\varphi}(ae) = 0$ and thus $\text{rad}(Ae) \subseteq \ker(\tilde{\varphi})$. Hence we get an $A$-linear map on the quotient $$\varphi: Ae/\text{rad}(Ae) \to D(eA/\text{rad}(eA)), \ \overline{ae} \mapsto \tilde{\varphi}(ae) = aeg.$$ We are done if $\varphi \neq 0$. This is in fact the case, since $\varphi(\overline{e})(\overline{e}) = (eg)(\overline{e}) \neq 0$. Follow-up Question: Is there a more natural isomorphism?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to approximate $\pi$ using the Maclaurin series for $\sin(x)$ We have that $$\sin(x)=x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$ Now plugging in $x=\pi$, $$0=\pi - \frac{\pi^3}{3!} + \frac{\pi^5}{5!} - \frac{\pi^7}{7!} + \cdots$$ Is there a way we can use this beautiful result to calculate better and better approximations to $\pi$? EDIT: This is a problem from a Mathologer video here titled Euler's real identity NOT e to the i pi = -1 at minute 7:06. I think this problem is meant to be doable, so please try it and don't just say that it is impossible and pointless, because if that were the case, Mathologer wouldn't have put it in his video (hopefully?)
It is easy to find better approximations to $\pi$ by iteration. Let $a_{n+1}=a_n+\sin(a_n)$. If you start with $a_1$ close enough to $\pi$ the sequence converges to $\pi$. For example, $a_1=3$ will work. You can replace $\sin(x)$ with a truncated Taylor series $f(x)$ and the iteration will converge to the root of $f(x)$ closest to $\pi$.
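For the record, here is a quick numerical sketch of this iteration in Python (the code and the truncation depth are my additions, not part of the original answer), using only Maclaurin partial sums of $\sin$:

```python
import math

def taylor_sin(x, terms=16):
    # Partial sum x - x^3/3! + x^5/5! - ... (degree 31 with terms=16)
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

# Fixed-point iteration a_{n+1} = a_n + sin(a_n), started close to pi.
a = 3.0
for _ in range(6):
    a += taylor_sin(a)

print(a)  # ~3.141592653589793
```

Starting from $a_1=3$, a handful of iterations already give $\pi$ to machine precision, since the error roughly cubes at each step.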
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Find the angle NMC In triangle $ABC$, $\measuredangle B = 70^{\circ}$, $\measuredangle C = 50^{\circ} $. On $AB$ and $AC$ take points $M$ and $N$ such that $\measuredangle MCB = 40^{\circ}$, $\measuredangle NBC= 50^{\circ}$. Find $\measuredangle NMC$.
Another solution: Observe that by angle chasing $BN = CN$ and $BC = MC$. Let point $D$ be chosen on the line $AB$ so that the points $B$ and $M$ lie on the segment $AD$ and $MA = BD$. Consequently, triangles $ACM$ and $DCB$ are congruent by construction and therefore the triangle $ACD$ is equilateral. Draw the line passing through point $N$ and parallel to $AD$ and denote its point of intersection with $CD$ by $K$. Then triangle $CKN$ is also equilateral. Hence $$BN = CN = KN = CK$$ However, triangles $BCN$ and $MCK$ are congruent because $$CN = CK, \,\,\, BC = MC, \,\,\, \angle \, BCN = \angle \, MCK = 50^{\circ}$$ so $KM = BN = KN = KC$. Consequently, the circle centered at point $K$ and of radius $KM = KN = KC$ passes through the three points $C, \, M$ and $N$. Therefore, since $\angle \, CMN$ is inscribed in the circumcircle of $CMN$ while $\angle \, CKN$ is a central angle, $$\angle \, CMN = \frac{1}{2} \, \angle \, CKN = \frac{1}{2} \, 60^{\circ} = 30^{\circ}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
By using the definition of limit only, prove that $\lim_{x\rightarrow 0} \frac1{3x+1} = 1$ By using the definition of a limit only, prove that $\lim_{x\rightarrow 0} \dfrac{1}{3x+1} = 1$ We need to find $$0<\left|x\right|<\delta\quad\implies\quad\left|\dfrac{1}{3x+1}-1\right|<\epsilon.$$ I have simplified $\left|\dfrac{1}{3x+1}-1\right|$ down to $\left|\dfrac{-3x}{3x+1}\right|$ Then since ${x\rightarrow 0}$ we can assume $-1<x<1$ then $-2<3x+1<4$ which implies $$\left|\dfrac{1}{3x+1}-1\right|=\left|\dfrac{-3x}{3x+1}\right|<\left|\dfrac{-3x}{4}\right|<\left|\dfrac{-3\delta}{4}\right|=\epsilon$$ Not sure if the solution is correct
Note that $$\left|\frac{1}{1+3x}-1\right|=\left|\frac{3x}{1+3x}\right| \tag 1$$ Now, we restrict $x$ such that $x\in [-1/4,1/4]$. And with this restriction, it is easy to see that $1/4 \le 1+3x$. Using this in $(1)$ reveals that $$\left|\frac{1}{1+3x}-1\right|\le 12|x|\tag 2$$ Finally, given any $\epsilon>0$, $$\left|\frac{1}{1+3x}-1\right|<\epsilon$$ whenever $|x|<\delta =\min\left(\frac14,\frac{\epsilon}{12}\right)$.
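As a sanity check (illustrative only, not part of the proof), inequality $(2)$ can be verified numerically on a grid over $[-1/4,1/4]$:

```python
# Check |1/(1+3x) - 1| <= 12|x| on a grid over [-1/4, 1/4].
xs = [-0.25 + 0.5 * i / 1000 for i in range(1001)]
for x in xs:
    lhs = abs(1 / (1 + 3 * x) - 1)
    assert lhs <= 12 * abs(x) + 1e-12, x
print("bound holds at all", len(xs), "grid points")
```

Equality holds at $x=-1/4$, so the constant $12$ cannot be improved under this particular restriction.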
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that for $a_i>0$ $\frac{a_1+\cdots+a_n}{n}$ converges to $0$ if and only if $\frac{a_1^2+\cdots+a_n^2}{n}$ converges to $0$. Let $\{a_n\}$ be a bounded and positive sequence. Show that $$\lim_{n\to \infty}\frac{a_1+\cdots+a_n}{n}=0$$ if and only if $$\lim_{n\to \infty}\frac{a_1^2+\cdots+a_n^2}{n}=0.$$ My attempt: The "$\Rightarrow$" is obvious. Note that $$\frac{a_1^2+\cdots+a_n^2}{n}\leq |M|\cdot\frac{a_1+\cdots+a_n}{n} $$ where $|M|$ is the bound of the sequence. So the convergence of the right side implies the convergence of the left side. As for the converse direction, I really have no idea... @kimchi lover points out using the Cauchy-Schwarz inequality and I had the following attempt... $$\frac{a_1+\cdots+a_n}{n}=\frac{\frac{1}{\sqrt{n}}(a_1+\cdots+a_n)}{\frac{1}{\sqrt{n}}n}\leq \frac{(a_1^2+\cdots+a_n^2)(\frac{1}{n}+\cdots+\frac{1}{n})}{\sqrt{n}}$$
By the Cauchy–Schwarz inequality, $$ \sum_{k=1}^n \frac{1}{n}\cdot a_k \leq \sqrt{\sum_{k=1}^n a_k^2}\cdot \sqrt{\sum_{k=1}^n \frac{1}{n^2}} = \sqrt{\sum_{k=1}^n a_k^2}\cdot\sqrt{\frac{1}{n}} = \sqrt{\frac{1}{n}\sum_{k=1}^n a_k^2} $$ and you can conclude by the squeeze theorem.
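To see the inequality at work numerically (an illustration of my own, not part of the argument), take the bounded positive sequence $a_k=1/\sqrt{k}$, for which both averages tend to $0$:

```python
import math

a = [1 / math.sqrt(k) for k in range(1, 100001)]  # bounded, positive

for n in (10, 1000, 100000):
    mean = sum(a[:n]) / n
    rms = math.sqrt(sum(x * x for x in a[:n]) / n)
    assert mean <= rms  # arithmetic mean <= root-mean-square
    print(n, round(mean, 6), round(rms, 6))
```

Both columns shrink toward $0$, with the arithmetic mean always below the root-mean-square, as Cauchy–Schwarz predicts.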
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why is $\omega+1+\omega+1 = \omega+\omega+1$? Why isn't it $\omega+\omega+2$? On the other hand, is it true that $\omega+\omega = 2\omega$? Also why is it that $2 \omega=\omega$? Here $\omega$ is taken to be the limit ordinal which is just $\mathbb{N}$. I am really confused as to why ordinals can't add/multiply just like natural numbers do, since they are essentially the same thing?
I am really confused as to why ordinals can't add/multiply just like natural numbers do, since they are essentially the same thing? The ordinals generalize the natural numbers in a certain sense, but that does not mean that every property of the natural numbers carries over to the ordinals. Weird though it may be, neither ordinal addition nor ordinal multiplication are commutative. Ordinals are weird beasts. It's best at first to think of an ordinal as a particular linear order - a collection of dots laid out in a line (not every linear order corresponds to an ordinal, of course, but every ordinal corresponds to a linear order). "$\alpha+\beta$" is what you get by putting a copy of $\beta$ after a copy of $\alpha$; "$\alpha\cdot \beta$" is what you get when you replace each point in $\beta$ with a copy of $\alpha$. Reasoning pictorially, $1+\omega$ is $$1+\omega\quad=\quad{\large\bullet}\quad +\quad{\large\bullet}+{\large\bullet}+{\large\bullet}+{\large\bullet}+...\quad=\quad{\large\bullet}+{\large\bullet}+{\large\bullet}+{\large\bullet}+...\quad=\quad\omega,$$ while $\omega+1$ is $$\omega+1\quad=\quad {\large\bullet}+{\large\bullet}+{\large\bullet}+{\large\bullet}+...\quad+{{\large\bullet}}\quad=\quad {\large\bullet}+{\large\bullet}+{\large\bullet}+...{\color{red}+ \color{red}{\large\bullet}},$$ and this latter does not look like $\omega$ (unlike $\omega$, it has a last element). Here's another picture-argument: first, thinking about $2\omega$, we have $$2\omega\quad=\quad (2)+(2)+(2)+...\quad=\quad ({\large\bullet}+{\large\bullet})+({\large\bullet}+{\large\bullet})+...\quad=\quad {\large\bullet}+{\large\bullet}+{\large\bullet}+{\large\bullet}+...\quad=\quad\omega,$$ but $\omega2$ is $$\omega2\quad=\quad(\omega)+(\omega)={\large\bullet}+{\large\bullet}+{\large\bullet}+...{\color{\red}+ \color{red}{\large\bullet}}+{\large\bullet}+{\large\bullet}+...,$$ and this does not look like $\omega$ (unlike $\omega$, it has an element with no immediate predecessor). 
In your title problem, since commutativity fails we can't argue that $$\omega+1+\omega+1=\omega+\omega+1+1=\omega+\omega+2;$$ the right answer is instead to see that $$\omega+1+\omega+1=\omega+(1+\omega)+1=\omega+\omega+1.$$ Similarly, we have to keep straight the difference between $\omega+\omega,$ which is $\omega2$, and $2\omega$. So the answer, written most snappily, is $\omega2+1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2393922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Basis of a topology Does every topology have a unique basis? What I mean is if there is more than one topology that can be obtained from a given basis, which topology should I consider when they say the topology generated by the given basis?
No, there can be many bases for the same topology (but a basis generates a unique topology). For example in $\mathbb{R}^2$, for $r>0$, let $$B((x_0,y_0),r):=\{(x,y)\in \mathbb{R}^2:(x-x_0)^2+(y-y_0)^2<r^2\}$$ and $$S((x_0,y_0),r):=\{(x,y)\in \mathbb{R}^2:|x-x_0|+|y-y_0|<r\}.$$ Then $\{B((x_0,y_0),r): (x_0,y_0)\in \mathbb{R}^2, r>0\}$ and $\{S((x_0,y_0),r): (x_0,y_0)\in \mathbb{R}^2, r>0\}$ are two different bases (actually they have no set in common) for the Euclidean topology in $\mathbb{R}^2$. P.S. One more example. The topology $\big\{\emptyset,\{x\},\{y\},\{x,y\}\big\}$ for the finite set $\{x,y\}$ has the following bases: $\big\{\{x\},\{y\}\big\}$, $\big\{\emptyset,\{x\},\{y\}\big\}$, $\big\{\{x\},\{y\},\{x,y\}\big\}$, and $\big\{\emptyset,\{x\},\{y\},\{x,y\}\big\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
On the necessity of being a *dense* subset in completion of a metric space Quoted from the book Introductory functional Analysis by Erwin Kreyszig : 1.6-2 Theorem (Completion). For a metric space $X = (X, d)$ there exists a complete metric space $\bar{X}=(\bar{X}, \bar{d})$ which has a subspace $W$ that is isometric with $X$ and is dense in $\bar{X}$. This space $\bar{X}$ is unique except for isometrics, that is, if $\bar{X'}$ is any complete metric space having a dense subspace $W'$ isometric with $X$, then $\bar{X'}$ and $\bar{X}$ are isometric. Why $W$ must be a dense subset of $\bar{X}$? Is it just because generalization is from completion of $\mathbb{Q}$ (to $\mathbb{R}$) or being dense is something to do with a super-space to be complete? And if so, why? Edit - For example (0,1) is not a complete space but its closure [0,1] is. And still (0,1) is not dense in [0,1] because int((0,1)) is not empty set.
I am not sure what your question is. Why must $W$ be dense? This is by the construction in the proof of the theorem. Why is it important? Because it basically tells you that in every metric space, only adding a 'few' limit points will make it complete. This is remarkable! Assume for a second you ignore the density part. Then a good question would be: given a metric space, what is the smallest complete space containing it? The answer, as we now know, is its 'closure'.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Question about Gödel's Incompleteness Theorem Gödel used prime factorization to encode each statement with a unique number (which is Gödel numbering). But I wonder if this statement can be encoded: "If this statement can be encoded, then this statement is false." If it can be encoded, then it will be false and the truth will become "This statement can be encoded and this statement is true." And that leads to a contradiction, so it cannot be encoded. Are there any mistakes? If not, did I just prove that Gödel's numbering is wrong??
Gödel proved his result by first setting up a system of mathematical logic that can do the basics of arithmetic. It includes the symbols $\Rightarrow$ "if then", $\wedge$ "and", etc. It also includes the natural numbers and some of their operations. In order for your statement to work, you would need to encode "If this statement can be encoded, then this statement is false." into his system. Then you would need to show that this encoded statement is not itself undecidable. One or both of these things is not possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
What seemingly innocuous results in mathematics require advanced proofs? I'm interested in finding a collection of basic results in mathematics that require rather advanced methods of proof. In this list we're not interested in basic results that have tedious simple proofs which can be shorted through more advanced methods but basic results that necessarily require advanced methods. I appreciate the question asked here is very similar to another question asked on this website: It looks straightforward, but actually it isn't. Thank you for pointing this out. However, in my opinion, it does differ significantly (this is debatable). The main goal of the this discussion was to find examples that are easily digestible to non-advanced students of mathematics and related disciplines. This, I hope, will spur discussion of the dichotomy between what is considered trivial from a mathematics perspective and what may be considered intuitive. Quite often less experience students tend to gloss over fairly intuitive results under the assumption the proof follows easily. This I hope will be a good resource to show it is not the case. In particular, I was hoping to find a list of problems that may seem intuitive on inspection, but are out of the reach of elementary methods. The statement of the theorem should be able to be understood by junior undergraduate students but the proof rather inaccessible. Can you also mention why elementary methods fail to shed any light on the problem. Many thanks
Perhaps the parallel postulate? In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point It was only through trying to prove this "obvious" theorem that we discovered non-Euclidean geometries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97", "answer_count": 18, "answer_id": 8 }
How to calculate line integral over the intersection of paraboloid $z=x^2+y^2$ and plane $z=2x$ Calculate $\oint_C xyz\,dx+x^2\,dy+xz\,dz$ over the curve from the intersection of paraboloid $z=x^2+y^2$ and plane $z=2x$. The direction of the curve may be chosen as you see fit. It looks like if we chose to parametrize the curve and plug in the appropriate values straight into the integral, the calculation would be become pretty massive. I'd like to try this with Stokes' theorem. My main question is whether we're allowed to choose any surface when using Stokes' because here we have 2: paraboloid and plane, of which plane is of course much easier. Then the normal unit vector for the plane in the negative direction is: $$ n=\langle2,0,1\rangle/dS $$ The vector field is: $$ F=\langle xyz, x^2,xz\rangle $$ and $$ \operatorname{curl}F=\langle 0,xy-z,2x-xz\rangle $$ We can get the ranges for the curve: $$ x^2+y^2=2x\iff(x-1)^2+y^2=1\\ \implies0\le r\le 2\cos\theta, \quad-\pi/2\le\theta\le\pi/2 $$ Then: $$ \oint_CF\cdot dr=\iint_S \operatorname{curl}F\cdot ndS=\\ =\iint_S 2x-xz\stackrel{z=2x}{=}\iint_S 2x-2x^2=\int_{-\pi/2}^{\pi/2}\int_0^{2\cos\theta} (2r\cos\theta-2r^2\cos^2\theta)r $$ So my question is mostly if I can choose any surface when using Stokes' and whether my ranges are correct?
I'll start with some critique. First of all, your normal vector isn't quite correct: from the equation of the plane $-2x+z=0$, we get the normal vector $\mathbf{n}=\langle-2,0,1\rangle$ (or it could be its opposite, but this one gives the upward orientation, consistent with the counterclockwise orientation of the curve $C$). Fortunately, it doesn't affect your solution because the first component of curl is zero. Second, it is a really bad habit to drop differentials, representing the variables of integration, from integral notation! For example, the last line of your computation should be written as $$\iint_S (2x-xz)\,dx\,dy=\iint_S (2x-2x^2)\,dx\,dy=\int_{-\pi/2}^{\pi/2}\int_0^{2\cos\theta} (2r\cos\theta-2r^2\cos^2\theta)r\,dr\,d\theta.$$ Third, you must be much more clear regarding domains of integration. The "equality" $$\iint_S \operatorname{curl}F\cdot\mathbf{n}\,dS=\iint_S (2x-2x^2)\,dx\,dy$$ is wrong because the domains of integration in these two integrals are NOT the same and thus cannot be denoted by the same letter $\color{red}{S}$. If $S$ stands for the portion of the plane cut out by the paraboloid (or cylinder), then it's rightfully used in the first integral, but not in the second. The second one represents a double integral over a region $D$ in the $xy$-plane after you effectively parameterized the surface $S$. And this region $D$ is the disk $(x-1)^2+y^2=1$, that you correctly found. And to integrate over this $D$, it certainly makes sense to switch to polar coordinates. In the end of the day, you did get a correct double integral in polar coordinates (also see above), so you can finish solving this problem by evaluating that integral. (I presume you can do that, and you don't need us to give you the answer.) Now, a very short main answer to your main question: YES, we are allowed to choose any such surface. :-)
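For an independent check (a numerical illustration of my own, not part of the original answer), one can evaluate the line integral directly with the standard parameterization of the circle $(x-1)^2+y^2=1$, namely $x=1+\cos t$, $y=\sin t$, $z=2x$, which traverses $C$ counterclockwise as seen from above, consistent with the upward normal:

```python
import math

N = 4096
h = 2 * math.pi / N
total = 0.0
for k in range(N):
    t = k * h
    c, s = math.cos(t), math.sin(t)
    x, y, z = 1 + c, s, 2 + 2 * c           # point on C
    dx, dy, dz = -s, c, -2 * s              # derivatives w.r.t. t
    total += (x * y * z * dx + x * x * dy + x * z * dz) * h

print(total, -math.pi / 2)  # the two values agree to high accuracy
```

Both this parameterized line integral and the polar double integral above come out to $-\pi/2$, confirming that the chosen surface, orientation, and region are consistent. (A plain Riemann sum is essentially exact here because the integrand is a low-degree trigonometric polynomial in $t$.)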
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Entire number continuum is equivalent to any finite segment: Courant and Robbins Book I am reading What is Mathematics? by Richard Courant and Herbert Robbins. They discuss the fact that $\Bbb{R}$ is not countable, and afterwards they say «it is easy to show that the entire number continuum is equivalent to any finite segment, say the segment from 0 to 1 with the endpoints excluded. The desired biunique correspondence may be obtained by bending the segment at $1/3$ and $2/3$ and projecting from a point » Here is the figure in the book: Unfortunately I don't understand the construction. I am not sure what "projecting from a point" means here and how I can get the bijection. Can someone explain how to construct it?
Imagine that the dot at the center is a light bulb. Then every point on the cup-shaped part of the figure has a shadow point on the line. (That's what "projection" means in this context.) The points on the cup shaped figure correspond to the points on the unit interval using the "bending" Courant and Robbins describe.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\left( \frac {11} {10}\right) ^{n}$ is divergent. Show that $\left( \dfrac {11} {10}\right) ^{n}$ is divergent. My proof. Let $B\in\mathbb{R}$. By the Archimedean property there is a $N$ in $\mathbb{N}$ such that $N>B$. Let $\varepsilon >0$ By the Bernoulli inequality, we have $\left( 1+\varepsilon \right) ^{n}\geq 1+n\varepsilon$ for all $n\in\mathbb{N}$. Now, take $\varepsilon=( \dfrac {11} {10}-1)$. Then, we obtain, $\left( \dfrac {11} {10}\right) ^{n}\geq \dfrac {n} {10}+1$. So, for all $n\geq N$ we have $\dfrac {n} {10}+1>\dfrac {n} {10}>n>N>B.$ Thus, since $\left( \dfrac {11} {10}\right) ^{n}\geq \dfrac {n} {10}+1$, $\left( \dfrac {11} {10}\right) ^{n}>B$ for all $n\geq N$. We are done. Can you check my proof?
Easy-to-think solution: Note that $\ln$ is an increasing function, and that $\ln\Big(\dfrac{11}{10}\Big)=\ln11-\ln10=c>0$. Now $\ln\Big(\dfrac{11}{10}\Big)^n=n(\ln11-\ln10)=nc$. Since $c>0$, for every $N\in \mathbb{N}$ you can find an $n\in\mathbb{N}$ with $nc>N$ (any $n>N/c$ works, and such an $n$ exists by the Archimedean property). Hence $\ln\Big(\dfrac{11}{10}\Big)^n$ diverges to infinity, and since $\ln$ is increasing, $\Big(\dfrac{11}{10}\Big)^n$ also diverges to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finite Recursion Theorem Does there exist a version of the Recursion Theorem for finite totally ordered sets (instead of natural numbers)? There are many cases where we have a finite totally ordered set and we have to define something recursively over that set, but how can this be formalized? For example, if we have an ordered sequence $({x}_{i})_{i\in I}$ in a group, where $I$ is a finite totally ordered set, how can we formalize the definition of the product of that family?
Any finite strictly totally ordered set has a unique strict order-isomorphism to a unique initial segment of $\mathbb N$. More precisely: If $(I; \prec)$ is a strict finite total order there is a unique $n \in \mathbb N$ (namely $n = \operatorname{card}(I)$) with a unique strict order isomorphism $$ \pi \colon (I; \prec) \to ( \{1,2, \ldots, n \}; <), $$ given by $\pi(\min(I; \prec)) = 1$ and $\pi(\min(I \setminus \pi^{-1}\{1, 2, \ldots, k \}; \prec)) = k+1$. (*) Now use the regular Recursion Theorem. (*) On the surface it seems like I'm using the Recursion Theorem for $(I; \prec)$ to define $\pi$ but I really don't. The existence of $\pi$ follows easily by picking any bijection $f \colon I \to \{1, 2, \ldots, n\}$ together with a permutation $\sigma \colon \{1,2, \ldots, n \} \to \{ 1,2, \ldots, n\}$ such that for all $i,j \in I$ $$ i \prec j \iff \sigma(f(i)) < \sigma(f(j)). $$ The existence of $\sigma$ can be proved by the regular Recursion Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Calculation of the $s$-energy of the Middle Third Cantor Set As the title suggests, I am trying to calculate the $s$-energy of the middle third Cantor set. I am reading Falconer's Fractal Geometry book, available here: http://www.dm.uba.ar/materias/optativas/geometria_fractal/2006/1/Fractales/1.pdf and this is an exercise (exercise 4.9 on page 45 of the pdf - 68 of the book). First, we define the $s$-potential at a point $x\in\mathbb{R}^{n}$ as $$\phi_{s}(x)=\int\frac{\text{d}\mu(y)}{|x-y|^{s}}$$ and the define the $s$-energy as $$I_{s}(\mu)=\int\phi_{s}(x)\text{d}\mu(x)=\iint\frac{\text{d}\mu(x)\text{d}\mu(y)}{|x-y|^{s}}.$$ Now, let $F$ be the middle third Cantor set and let $\mu$ be the mass distribution on $F$ so that each $2^{k}$ $k$th level interval of length $3^{-k}$ has mass $2^{-k}$. Estimate the $s$-energy of $\mu$ for $s<\log{2}/\log{3}$, and deduce that $\dim_{\text{H}}F\geq\log{2}/\log{3}$. I get the feeling that this isn't an expecially hard exercise, but I'm stuck on where to start. Could someone help me please?
First, a general observation: if a finite measure $\mu$ on $\mathbb{R}$ has no atoms, then the diagonal $\{(x,x)\in\mathbb{R}^2\}$ has zero measure with respect to the product measure $\mu\times \mu$. To see why, partition $\mathbb R$ into $n$ intervals of measure $1/n$, and observe that the diagonal is covered by $n$ squares, each of which gets product measure $1/n^2$. So we integrate over $x\ne y$; by symmetry, it suffices to integrate over $x<y$. Write $x,y$ in base-3 as $$x = 0.\underset{n \text{ digits}}{\underbrace{\cdots}} 0\cdots , \quad y = 0.\underset{\text{same digits}}{\underbrace{\cdots}} 2\cdots $$ where $n$ is a nonnegative integer. Note that $|x-y|\ge 3^{-n-1}$. Also, the measure of all pairs $(x,y)$ as above is $$ 2^{n}(1/2)^{2(n+1)} = \frac14\cdot 2^{-n} $$ because there are $2^n$ choices of $n$ digits of $x$, and because fixing the first $(n+1)$ digits of a number restricts it to a subset of measure $(1/2)^{n+1}$. Putting it all together, $$ \iint_{x<y} |x-y|^{-s}\,d\mu(x)\,d\mu(y) \le \sum_{n=0}^\infty 3^{s(n+1)} \frac14\cdot 2^{-n} = \frac{3^s}{4} \sum_{n=0}^\infty (3^s/2)^n $$ which converges when $3^s<2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to apply Cauchy's MVT to evaluate the following? Use Cauchy's Mean Value theorem to evaluate $$\lim_{x\rightarrow 1} \left[\frac{\cos(\frac{1}{2}\pi x)}{\ln(1/x)}\right]$$ I can't understand how to apply Cauchy's MVT over here. Any hints?
$$\lim_{x\to 1}\frac{\cos\left(\frac{\pi}{2}x\right)}{-\log x}\stackrel{x\mapsto 1-z}{=}\lim_{z\to 0}\frac{\sin\left(\frac{\pi}{2}z\right)}{-\log(1-z)}$$ and since $\lim_{z\to 0}\frac{\sin z}{z}=1=\lim_{z\to 0}\frac{z}{-\log(1-z)}$ the wanted limit is $\frac{\pi}{2}$, you do not need anything fancy. If you like, you may apply de l'Hopital's rule to get $$ \lim_{x\to 1}\frac{\cos\left(\frac{\pi}{2}x\right)}{-\log x}=\frac{\pi}{2}\lim_{x\to 1}x\sin\left(\frac{\pi}{2}x\right)=\frac{\pi}{2}$$ or consider that $$\lim_{x\to 1}\frac{\cos\left(\frac{\pi}{2}x\right)}{-\log x}\stackrel{x\mapsto e^z}{=}\frac{\pi}{2}\lim_{z\to 0}\frac{1}{z}\int_{0}^{z}e^t\sin\left(\frac{\pi}{2}e^t\right)\,dt=\frac{\pi}{2}\lim_{z\to 0}\int_{0}^{1}e^{zt}\sin\left(\frac{\pi}{2}e^{zt}\right)\,dt $$ and reach the same conclusion through the dominated convergence theorem.
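A quick numerical sanity check of the limit from both sides (an illustration of my own, not part of the answer):

```python
import math

def f(x):
    return math.cos(0.5 * math.pi * x) / math.log(1 / x)

for h in (1e-2, 1e-4, 1e-6):
    print(f(1 - h), f(1 + h))  # both approach pi/2 = 1.5707963...
```

Both one-sided values approach $\pi/2\approx 1.5708$, in agreement with the computations above.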
{ "language": "en", "url": "https://math.stackexchange.com/questions/2394970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find $x\in \Bbb Z$ such that $x=\sqrt[3]{2+\sqrt{5}}+\sqrt[3]{2-\sqrt{5}}$ Find $x\in \Bbb Z$ such that $x=\sqrt[3]{2+\sqrt{5}}+\sqrt[3]{2-\sqrt{5}}$ Tried (without success) two different approaches: (a) finding $x^3$ by raising the right expression to power 3, but was not able to find something useful in the result that simplifies to an integer; (b) tried to find $a$ and $b$ such that $(a+\sqrt{b})^3=2+\sqrt{5}$ without success. The answer stated for the problem in the original source (a local Math Olympiad Contest) is $x=1$.
$$x=\sqrt[3]{2+\sqrt{5}}+\sqrt[3]{2-\sqrt{5}}\\x^3=2+\sqrt5+2-\sqrt5+3\sqrt[3]{2+\sqrt{5}}\cdot\sqrt[3]{2-\sqrt{5}}\left(\sqrt[3]{2+\sqrt{5}}+\sqrt[3]{2-\sqrt{5}}\right)\\x^3=4+3\cdot(-1)\cdot x$$ since $\sqrt[3]{(2+\sqrt5)(2-\sqrt5)}=\sqrt[3]{-1}=-1$, so $$x^3+3x-4=0 \\(x-1)(x^2+x+4)=0\\ \Rightarrow x=1 \ \text{ or } \ x^2+x+4=0,\ \Delta <0\\ \Rightarrow x=1$$
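A floating-point check of the conclusion (illustrative only; note that Python's `** (1/3)` applied to a negative base returns a complex number, so the negative cube root of $2-\sqrt5$ is taken via the absolute value):

```python
from math import sqrt

a = (2 + sqrt(5)) ** (1 / 3)       # ~ 1.618034
b = -((sqrt(5) - 2) ** (1 / 3))    # real cube root of the negative 2 - sqrt(5)
x = a + b
print(x)  # ~ 1.0
```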
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 0 }
$\sqrt[3]{8\div{\sqrt[3]{8\div{\sqrt[3]{8\div{\sqrt[3]{8\div ...} }} }} }} $=? Suppose $$a=\sqrt[3]{8\div{\sqrt[3]{8\div{\sqrt[3]{8\div{\sqrt[3]{8\div ...} }} }} }} $$ $\bf{Question}:$ Is it possible to find the value of $a$? Thanks in advance for any hint, idea or solution. $\bf{Remark}:$ I changed the first question, but I got stuck on this.
Consider the sequence, $$x_{n+1}=\sqrt[3]{\frac{8}{x_n}}=2(x_n)^{-\frac{1}{3}}$$ With $x_1=1$. Our value of interest is $\lim_{n \to \infty} x_n$. Such a sequence follows, $$\ln x_{n+1}=\ln 2-\frac{1}{3} \ln x_n$$ Hence letting $\ln x_n=a_n$ we have the linear recurrence, $$a_{n+1}+\frac{1}{3}a_{n}=\ln 2$$ $$(a_{n+1}-\frac{3}{4}\ln 2)+\frac{1}{3}(a_n-\frac{3}{4}\ln 2)=0$$ One may show $a_n-\frac{3}{4} \ln 2 \to 0$ in much the same way $(-\frac{1}{3})^n \to 0$. For instance, by first finding a closed form by letting $b_n=a_n-\frac{3}{4}\ln 2$. Hence $a_n \to \frac{3}{4}\ln 2$, thus showing that $x_n \to 2^{3/4}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Consider $ \ A \cap (B-C)$ and $ \ (A\cap B) - (A \cap C)$. Question: Consider $ \ A \cap (B-C)$ and $ \ (A\cap B) - (A \cap C)$. Are the two sets equal or is one a subset of the other? My attempt: We know that if $ \ x\in (A \cap B) - (A \cap C) \implies x \in A \cap B$ and $ \ x \notin A \cap C \implies x \in A$ and $ \ x\in B$ and $ \ x\notin C \implies x \in A $and $ \ x\in B-C \implies x \in A \cap (B-C)$ So, $ \ (A\cap B) - (A \cap C) \subseteq \ A \cap (B-C) $ Is the other way true?
Through logic it should be true using definition $\forall x: x\in (X-A) \iff (x \in X) \land (x \notin A)$ that $$\forall x: (x \in A) \land ((x \in B) \land (x\notin C)) =\forall x: ((x \in A) \land (x \in B)) \land ((x \notin C) \lor (x\notin A))$$ because in order to satisfy the formula, $x$ must be in $A$, therefore $(x \notin A)$ is always false and the rightmost term is false, meaning that $(x \notin C) \lor (x\notin A)$ is the same as $(x \notin C)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Easy exercise with boundary could I have a confirm or a suggestion about this little exercise? $\partial A=\emptyset$ if and only if $A$ is open and closed. Sol.: If $A$ is "clopen", then $Int(A)=A$ and $Cl(A)=A$, so $\partial A=A \setminus A=\emptyset $. If $\partial A=\emptyset$, then $Cl(A) \setminus Int(A)=\emptyset$. So doesn't exist a $x \in Cl(A) \cap Int(A)$. Then $Cl(A)=Int(A)$, and this is possibile only if $A$ is clopen
Your first part is O.K. Your second part is incomplete. If $\partial A=\emptyset$ then $Cl(A) \setminus Int(A)=\emptyset$. It follows that $Cl(A) \subseteq Int(A)$. Since $Int(A) \subseteq A \subseteq Cl(A)$, we get $$Int(A) = A = Cl(A).$$ Hence $A$ is clopen.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving $\sinh z = 2i$ This is my attempt at the question (I stopped early because it did not work out...) $$\sinh z = 2i \\ e^{iz} - e^{-iz} = 4i \\ e^{2iz} - 4ie^{iz} - 1 = 0 $$ solving the quadratic gives $$e^{iz} = i(2\pm \sqrt{3})$$ I stop here to check: $$\sinh z = \frac{e^{iz} - e^{-iz}}{2}= \frac{i(2\pm\sqrt{3}) + i(2\pm \sqrt{3})}{2}= i(2\pm \sqrt{3}) \neq 2i...$$ I'm not sure if I'm doing something really wrong… I've redone this a few times and I can't see it.
Actually, correct version of the last expression is $$ \sinh(z)=\frac{e^{iz}-e^{-iz}}{2}=\frac{i(2\pm\sqrt{3})+i(2\mp\sqrt{3})}{2}=2i $$ Use $z=(2+\sqrt{3})i$ or $z=(2-\sqrt{3})i $ uniformly in the all part of expression
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find all positive integers n for which the number obtained by erasing the last digit of n is a divisor of n? I know, through this, , that all numbers ending on 0 and 11, 12..19, 22, 24, 26, 28, 33, 36, 39, 44, 48, 55, 66, 77, 88, 99 are solutions. But how to prove that all 3- and more-digit numbers which do not end on 0 are not the solution?
First case: $ 100 \leq n $. In this case we $\color{Green}{\text{claim}}$ that $\color{Green}{\text{the last digit is equal to zero}}$ , conversly every integer with the last digit equal to zero has the above property. Let $$ 100 \leq n=\overline{ a_m a_{m-1} ... a_1 a_0 }= a_m10^m + a_{m-1}10^{m-1} + ... + a_110 + a_0 \ ; \ \ \ \text{i.e.} \ \ 2 \leq m $$ with $a_m \neq 0$, also on the otherhand let $$n ^ {\prime}=\overline{ a_m a_{m-1} ... a_1 }= a_m10^{m-1} + a_{m-1}10^{m-2} + ... + a_210 + a_1 \ .$$ then one can see easily that: $$\color{Blue} {n=10n^{\prime}+a_0} ,$$ also notice that $$ a_0 < 10 \ \ \ \ \ \ \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \ \ \ \ \ \ 10 \leq n^{\prime},$$ so we can conclude that: $$a_0 < n^{\prime} \ \ \ \ \ \ \ \ ,\text{i.e.} \ \ \ \ \ \ \ \ \ \ \ \ 0 \leq \dfrac {a_0} {n^{\prime}} < 1 .$$ Now notice that: $$ \dfrac{n}{n^{\prime}} = \dfrac{ 10 n^{\prime} + a_0} { n^{\prime} } = \dfrac{ 10 n^{\prime} } { n^{\prime} } + \dfrac{ a_0} { n^{\prime} } = 10 + \dfrac{ a_0} { n^{\prime} } \ \ \ \ \ \ \ \ \Longrightarrow \\ \color{Red} { 10 \leq \dfrac{n}{n^{\prime}} < 10+1=11 } , $$ so we must have: $\color{Red} { \dfrac{n}{n^{\prime}}=10 }$ , i.e. $\color{Blue} {n=10n^{\prime}+0}$ ; which implies that $\color{Green}{a_0=0}$ . Second case : $n < 100$, which can be done by a simple calculation!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Problem of diagonalizability and vector space I've been reading the solution and want you to help me understand it. problem: Let V be a real vector space with 100-by-100 real matrices. Let $A \in V$, $W_A={B \in V | AB=BA}$, and $d_A$ be the dimension of $W_A$. Assume that $A^4-5A^2+4I=0$. Find the minimum of $d_A$. solution: From the minimal polynomial of A, A can have eigenvalues 1,-1,2,-2 and let $d_1,d_{-2},d_2,d_{-2}$ be the dimensions of the eigenspaces corresponding to the eigenvalues.(I understand the above) Let $M_k$ be the vectorspace of k-by-k real matrices. Then $W_A$ is isomorphic to $M_{d_1} \oplus M_{d_{-1}}\oplus M_{d_2}\oplus M_{d_{-2}} $.(Why isomorphic?) Therefore dim($W_A$)=${d_1}^2 +{d_{-1}}^2 +{d_2}^2+ {d_{-2}}^2$.(why?) Thus we have the minimum when $({d_1},{d_{-1}},{d_2},{d_{-2}})=(25,25,25,25)$.
Since $\psi_a(x) = (x-1)(x+1)(x-2)(x+2)$ we see that all Jordan blocks of $A$ are of size one and so $A=V \Lambda V^{-1}$ where $\Lambda$ is diagonal and $\{ [\Lambda]_{kk} \}_k = \{ \pm 1, \pm 2 \}$. It is straightforward to see that $W_A = V W_{\Lambda} V^{-1}$, in particular the dimensions of $W_A, W_\Lambda$ are the same. A little more work shows that $B'\in W_\Lambda$ iff $[\Lambda]_{ii} \neq [\Lambda ]_{jj}$ implies $[B']_{ij} = 0$ for all $i,j$. Hence if $[\Lambda]_{ii} = [\Lambda ]_{jj}$, then $[B']_{ij}$ can be chosen arbitrarily. In the following let $\lambda, \mu$ take values in $\{ \pm 1, \pm 2 \}$, and let all summations run over these values. Let $I_\lambda = \{ i \mid [\Lambda]_{ii} = \lambda \}$ and note that $d_\lambda = \dim \ker (A-\lambda I) = |I_\lambda|$. Then note that $B'\in W_\Lambda$ iff $[B']_{ij} = 0$ whenever $i \in I_\lambda, j \in I_\mu$ and $\lambda \neq \mu$. If $i,j \in I_\lambda$, then $[B']_{ij}$ can take any value and since there are $d_\lambda^2$ such elements, we see that $\dim W_\Lambda = \sum_\lambda d_\lambda^2$. Hence the problem becomes $\min \{ \sum_\lambda d_\lambda^2 \mid \sum_\lambda d_\lambda = 100, \ d_\lambda \ge 1\}$, whose minimum $4 \cdot 25^2 = 2500$ is attained at $d_\lambda = 25$ for every $\lambda$.
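A brute-force check of the final minimization (my own sketch; I assume all four eigenvalues actually occur, i.e. each $d_\lambda \ge 1$ as in the quoted solution; allowing fewer distinct eigenvalues only makes the sum of squares larger):

```python
# Enumerate all splits of 100 into four positive multiplicities d1..d4
# and minimize the sum of squares.
best = min(
    d1*d1 + d2*d2 + d3*d3 + d4*d4
    for d1 in range(1, 98)
    for d2 in range(1, 99 - d1)
    for d3 in range(1, 100 - d1 - d2)
    for d4 in (100 - d1 - d2 - d3,)
)
assert best == 2500   # attained at d1 = d2 = d3 = d4 = 25
```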
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate $\lim_{x \rightarrow a} \frac{x^2 + ax - 2a^2}{\sqrt{2x^2 - ax} -a}$ I need to calculate: $$\lim_{x \rightarrow a} \frac{x^2 + ax - 2a^2}{\sqrt{2x^2 - ax} -a}$$ I get $0/0$ and can then use l'Hôpital's rule to find the limit. I can do this, but someone asked me how I can do it without using l'Hôpital's rule. I guess I have to separate a factor $(x-a)$ in the numerator and denominator. The numerator can be written as $(x-a)(x+2a)$ but I don't see how to separate $(x-a)$ in the denominator.
Hint. Note that for $x\not=a$, $$\frac{x^2 + ax - 2a^2}{\sqrt{2x^2 - ax} -a}=\frac{(x+2a)(x-a)(\sqrt{2x^2 - ax} +a)}{(2x+a)(x-a)}.$$
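As a numeric sanity check of the hint (my own sketch; I take the arbitrary value $a=3>0$, for which cancelling $(x-a)$ in the hint's identity gives the limit $2a=6$):

```python
import math

def f(x, a):
    """The original quotient, evaluated directly (undefined at x = a)."""
    return (x**2 + a*x - 2*a**2) / (math.sqrt(2*x**2 - a*x) - a)

a = 3.0  # any a > 0 works the same way
for h in (1e-3, -1e-3, 1e-5, -1e-5):
    assert abs(f(a + h, a) - 2*a) < 1e-2   # values approach 2a from both sides
```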
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Understanding the definition of a natural isomorphism On Wikipedia I can read this: If, for every object $X$ in $C$, the morphism $η_X$ is an isomorphism in $D$, then $η$ is said to be a natural isomorphism (or sometimes natural equivalence or isomorphism of functors). But I don't understand this. I thought that, if $\eta: F \to G$ is a natural transformation; $F, G: C \to D$; and $X \in C$, this implies that $\eta_X \in C$. So I don't understand, how we can talk about $\eta_X$ is an isomorphism in $D$. EDIT: So does "$\eta$ is natural isomorphism" only imply that $\exists \varepsilon. \forall \eta_X. \exists \varepsilon_X. \eta_X \circ \varepsilon_X = 1_{G(X)} \wedge \varepsilon_X \circ \eta_X = 1_{F(X)}$, where $\circ$ is vertical composition of natural transformations? So is a natural isomorphism just a natural transformation, that happens to be an isomorphism (in the category of functors)?
A natural transformation $\eta$ maps objects of a category $\mathcal C$ to arrows of a category $\mathcal D$. Specifically, if $F,G:\mathcal C\to\mathcal D,\ \ x\in Ob\mathcal C\ $ and $\eta:F\to G$, we have $\eta_x$ is an arrow $F(x)\to G(x)$ in $\mathcal D$. A natural transformation $\eta$ is a natural isomorphism if * *each component $\eta_x$ is an isomorphism (invertible arrow in $\mathcal D$) *equivalently, if $\eta$ itself is an isomorphism (invertible arrow) in the category $Fun(\mathcal C,\mathcal D)$ of functors $\mathcal C\to\mathcal D$ with natural transformations as arrows. To see the equivalence, observe that composition of natural transformations in $Fun(\mathcal C,\mathcal D)$ is done 'componentwise', i.e. $(\eta\circ\psi)_x=\eta_x\circ\psi_x$, so that the inverse can also be taken componentwise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simple statistics exercise... Too simple? I have to solve the following exercise: There are three different parts needed to construct a machine: part $1$ ($1000$ pieces), part $2$ ($400$ pieces), part $3$ ($600$ pieces). The probability for a part $1$ or part $2$ piece to be defective is $2 \%$. The probability for a part $3$ piece to be defective is $2.8\%$. An engineer takes one piece and it turns out to be defective. The question is what is the probability that the engineer took a part $2$ piece? My approach is just dividing the expected number of defective part 2 pieces by the expected number of all defective pieces, like this: $$\frac{400\times 0.02}{1000\times 0.02+400\times 0.02+600\times 0.028}$$ But it seems too simple. Does it seem correct to you? Thanks in advance
Nice intuition. To make your argument more rigorous, consider using Bayes' rule. \begin{align}P(\text{part 2}|\text{defective})=\frac{P(\text{part 2})}{P(\text{defective})}P(\text{defective}|\text{part 2}) \end{align} $$P(\text{part 2})=\frac{400}{2000}$$ $$P(\text{defective})=\frac{2}{100}\frac{1000}{2000}+\frac{2}{100}\frac{400}{2000}+\frac{2.8}{100}\frac{600}{2000}$$ $$P(\text{defective}|\text{part 2}) = \frac{2}{100} $$
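A quick check of the computation with exact fractions (my own sketch); the posterior comes out to $\frac{5}{28}\approx 0.179$:

```python
from fractions import Fraction

p_part = {1: Fraction(1000, 2000), 2: Fraction(400, 2000), 3: Fraction(600, 2000)}
p_def_given_part = {1: Fraction(2, 100), 2: Fraction(2, 100), 3: Fraction(28, 1000)}

# Law of total probability, then Bayes' rule.
p_defective = sum(p_part[k] * p_def_given_part[k] for k in p_part)
posterior = p_part[2] * p_def_given_part[2] / p_defective

assert posterior == Fraction(5, 28)
```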
{ "language": "en", "url": "https://math.stackexchange.com/questions/2395963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
which one is bigger $100^n+99^n$ or $101^n$ Suppose $n \in \mathbb{N} , n>1000$ now how can we prove :which one is bigger $$100^n+99^n \text{ or } 101^n \text{ ? }$$ I tried to use $\log$ but get nothing . Then I tried for binomial expansion...but I get stuck on this . can someone help me ? thanks in advance.
Dividing both numbers by $100^n$, it suffices to compare $1.01^n$ with $1+0.99^n$. Of course $$1.01^n>1+\frac{n}{100}>2$$ for $n>100$ (the first inequality is Bernoulli's), and obviously $1+0.99^n<2$, so $101^n$ is the larger one.
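Since Python integers are exact, the claim is easy to test directly (a quick sketch; the crossover index it finds is just a curiosity, not needed for the question):

```python
n = 1001
assert 101**n > 100**n + 99**n      # the regime the question asks about
assert 100**1 + 99**1 > 101**1      # the inequality is reversed for small n

# Smallest n at which 101^n takes over (exact integer arithmetic):
crossover = next(n for n in range(1, 200) if 101**n > 100**n + 99**n)
```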
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 3 }
Polar integrals with trigonometric Problem The prompt is to evaluate the double integrals using polar system. The given constraints are: $$ \iint_D (x^2 + y^2) \, dx \, dy $$ $$ D = \{(x,y): x \ge 0, y \ge x, x^2 + y^2 \le 2y \}$$ Upon graphing we have a circle of radius $1$ centered at $(0,1)$ on the y-axis and a line cutting through the origin and the circle. According to the constraints, the $\theta$ and $r$ ranges are $ \{\pi/4 \le \theta \le \pi/2 \}$ and $r$ has been derived as: $$ x^2+y^2-2y \le 0$$ $$r^2 \le 2r\sin \theta $$ $$ r \le 2\sin\theta$$ $$ 0 \le r \le 2\sin\theta$$ The integral we get is $\int_{\pi/4}^{\pi/2} \int_0^{2\sin \theta} r^3 \, dr \, d\theta$ After evaluating a couple of steps I get stuck here, $\displaystyle \left. \int_{\pi/4}^{\pi/2} \frac {r^4}4 \right|_0^{2\sin\theta} d\theta$ I'm not really sure about the $r$ bounds either; any help and suggestions? How do I conclude this? I'm kinda new to this, please help.
The double integral set up should be: $$ \int_{\pi/4}^{\pi/2} \int_0^{2\sin \theta} r^3 \, dr \, d\theta= \int_{\pi/4}^{\pi/2} 4\sin^4\theta \,d\theta= \int_{\pi/4}^{\pi/2} (1-\cos(2\theta))^2 \, d\theta=\cdots.$$ Also use $\cos^2(2\theta) = \dfrac{1+\cos(4\theta)}{2}$ in expanding the integrand above.
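Evaluating the remaining $\theta$-integral by hand gives $\frac{3\pi}{8}+1$ (my own computation, worth checking numerically with a simple midpoint Riemann sum):

```python
import math

def g(theta):
    """Integrand in theta after the inner r-integration: 4 sin^4(theta)."""
    return 4 * math.sin(theta)**4

a, b, steps = math.pi / 4, math.pi / 2, 100_000
h = (b - a) / steps
approx = sum(g(a + (i + 0.5) * h) for i in range(steps)) * h  # midpoint rule

exact = 3 * math.pi / 8 + 1  # hand evaluation of the theta-integral
assert abs(approx - exact) < 1e-6
```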
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How one knows it is $(n-1)$ versus $(n+1)$ I have had a problem with this concept all my life so I thought I would reach out to the experts for help! Here is the problem statement: Quote: "Consider how much work is required to multiply two $n$-digit numbers using the usual grade-school method. There are two phases to working out the product: multiplication and addition. First, multiply the first number by each of the digits of the second number. For each digit in the second number this requires $n$ basic operations (multiplication of single digits) plus perhaps some "carries", so say a total of $2n$ operations for each digit in the second number. This means that the multiplication phase requires $n(2n)$ basic operations. The addition phase requires repeatedly adding $n$-digit numbers together a total of $(n-1)$ times. If each addition requires at most $2n$ operations (including the carries), and there are $(n-1)$ additions that must be made, it comes to a total of $(2n)(n-1)$ operations in the addition phase. Adding these totals up gives about $4n^2$ total operations. Thus, the total number of basic operations that must be performed in multiplying two $n$-digit numbers is in $O(n^2)$ (since the constant coefficient does not matter)." End Why is it $(n-1)$ and $2n?$
When you add two $n$ digit numbers, you have to do $n$ additions, one for each digit, but there might be a carry at each digit, so that's another $n$ operations, and there's your $2n$. Now you have to add $n$ of these $n$-digit numbers, which means you have to do $n-1$ of these additions of pairs of numbers – right? To add two numbers, that's one addition; to add three numbers, that's two additions, and so on. But in the end, you're going for an estimate $O(n^2)$, so it makes no difference whether you use $n-1$ or $n+1$, and no difference whether you use $2n$ or $17n$ or $n/42$.
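The counting above can be written out as a toy formula (a sketch; the constants are the text's rough estimates, not exact operation counts for any real implementation):

```python
def estimated_ops(n):
    multiply_phase = n * (2 * n)         # 2n ops per digit of the multiplier
    addition_phase = (2 * n) * (n - 1)   # n-1 additions, 2n ops each
    return multiply_phase + addition_phase

assert estimated_ops(5) == 4 * 5**2 - 2 * 5   # total is 4n^2 - 2n, i.e. O(n^2)

# Swapping n-1 for n+1 changes nothing asymptotically:
alt = lambda n: n * (2 * n) + (2 * n) * (n + 1)
assert abs(estimated_ops(1000) - alt(1000)) / 1000**2 < 0.01
```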
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recursive derangement proof clarification I saw a proof of a recursive definition of $D_n$ = the number of derangements of a set of size $n$. A combinatorial proof of the recurrence for $D_n$: Here's a combinatorial proof of $D_n = (n-1)(D_{n-1} + D_{n-2})$ for $n \geq 2$ due to Euler. For any derangement $(j_1, j_2, \ldots, j_n)$, we have $j_n \neq n$. Let $j_n = k$, where $k \in \{1, 2, \ldots, n-1\}$. We now break the derangements on $n$ elements into two cases. Case 1: $j_k = n$ (so $k$ and $n$ map to each other). By removing elements $k$ and $n$ from the permutation we have a derangement on $n-2$ elements, and so, for fixed $k$, there are $D_{n-2}$ derangements in this case. Case 2: $j_k \neq n$. Swap the values of $j_k$ and $j_n$, so that we have a new permutation with $j_k = k$ and $j_n \neq n$. By removing element $k$ we have a derangement on $n-1$ elements, and so, for fixed $k$, there are $D_{n-1}$ derangements in this case. Thus, with $n-1$ choices for $k$, we have, for $n \geq 2$, $$D_n = (n-1)(D_{n-1} + D_{n-2}).$$ What I'm having trouble understanding is, if $j_n \neq n$, and $j_n=k$, why can $k$ only be any positive integer less than $n$, and does this only apply to the final element of the derangement (which is listed here as $j_n$), or does it apply to all of them? If it did apply to all of them, it seems like the only numbers being able to be chosen for a certain position would be those less than the position number, but that doesn't seem logical to me. Any help would be very much appreciated. Proof from: Mike Spivey (https://math.stackexchange.com/users/2370/mike-spivey), I have a problem understanding the proof of Rencontres numbers (Derangements), URL (version: 2012-08-28): https://math.stackexchange.com/q/83433
We do not have to pick the last element; we can pick any other position and the proof will still work. If $j_n = n$, then the permutation is not a derangement, hence $j_n = k$ where $k \neq n$. We can rephrase the proof with an arbitrary index: pick a particular index $p \in \{ 1, \ldots, n\}$. For any derangement $(j_1, j_2, \ldots, j_n)$, we have $j_p \neq p$. Let $j_p = k$, where $k \in \{1, 2, \ldots, n\}\setminus \{p\}$. We now break the derangements on $n$ elements into two cases. Case $1$: $j_k = p$. Case $2$: $j_k \neq p$.
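A brute-force check of Euler's recurrence for small $n$ (my own sketch):

```python
from itertools import permutations

def D(n):
    """Number of derangements of {0, ..., n-1}, counted by brute force."""
    return sum(
        all(p[i] != i for i in range(n)) for p in permutations(range(n))
    )

# D_n = (n-1)(D_{n-1} + D_{n-2}) for n >= 2:
for n in range(2, 8):
    assert D(n) == (n - 1) * (D(n - 1) + D(n - 2))
```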
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can't tell the difference between permutation and combination In an exam, a student has to answer 6 out of 8 questions. How many ways can she answer the 6 questions if a) there are no restrictions? b) The first 3 questions are compulsory? c) she must answer at least 3 of the first 4 questions? I am confused. Why is this question combination? I thought it is supposed to be a permutation. The 6 questions that the student is supposed to answer are specific. For example, each question is unique on its own. That is my understanding, but its wrong ._. Can anyone correct my logic?
I'm not giving actual answers but trying to resolve your confusions. I wouldn't say the questions are 'specific'. Each one is either answered or not answered; what matters is only which set of $6$ questions is answered in total. Thus, for example, answering questions $1,2,3,6,7,8$ is the same as answering $1,2,3,8,7,6$: the order (i.e. the permutation, such as which ones are answered first) doesn't matter. Hope it helps.
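A sketch of the order-doesn't-matter point (note this incidentally computes part (a)): each unordered choice of $6$ questions corresponds to $6!$ ordered sequences, which is exactly why permutations would overcount.

```python
from itertools import combinations
from math import comb, factorial

questions = range(1, 9)
picks = list(combinations(questions, 6))
assert len(picks) == comb(8, 6) == 28          # unordered choices

# Counting ordered answer sequences instead overcounts by a factor of 6!:
assert comb(8, 6) * factorial(6) == 8 * 7 * 6 * 5 * 4 * 3
```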
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Inverse elliptic integral, Weierstrass function, in other fields Take a separable cubic polynomial $4x^3-ax-b = 4 (x-e_1)(x-e_2)(x-e_3)$, let $h'(x) = (4x^3-ax-b)^{-1/2}$ and define its elliptic integral $h(x)= \int h'(x)dx$. Let $P(z) = h^{-1}(z)$ be its inverse function. Then $\displaystyle P'(z) = \frac{1}{h'(P(z))}$ and $$P'(z)^2 = 4P(z)^3-a P(z)-b \tag{1}$$ * *For $z,x \in \mathbb{C}$ this is the definition of the Weierstrass function $\wp$ of the complex elliptic curve $$E(\mathbb{C}) = \{ (x,y) \in \mathbb{C}^2, y^2 = 4x^3-ax-b\}$$ Question 1 : How to show easily that $P$ is doubly periodic ? Take a $c \in\mathbb{C}$ with $P(c) \ne 0$ and a closed-curve $\gamma : P(c) \to P(c)$ enclosing one of the roots $e_i$. Then $h \circ \gamma$ is a non-closed curve $c \to c+\omega$ and we find $$0 = \int_\gamma dx = \int_{h \,\circ\, \gamma} P'(z)dz = P(c+\omega)-P(c)$$ Thanks to $(1)$ it implies $P'(c+\omega) = \pm P'(c)$. We can show the sign is $+$ (if it was not we could double $\omega$) and the differential equation shows that $P$ is $\omega$ periodic. Applying the same process with a curve enclosing a different root will produce a different period $\omega_2$ which is $\mathbb{Z}$-linearly independent to $\omega$ (why ?) So $P$ is doubly periodic and we obtain the (Riemann surface and abelian group) isomorphism with a complex torus $$\varphi : \mathbb{C}/(\omega \mathbb{Z}+ \omega_2\mathbb{Z}) \to E(\mathbb{C}), \qquad \varphi(z) = (P(z),P'(z))$$ *Question 2 : Can we do the same with another (algebraically closed) field $K$ not contained in $\mathbb{C}$ and the corresponding elliptic curve over $K$ ? $K=\overline{\mathbb{F}}_p$ seems out of reach because it doesn't have an absolute value with which to make sense of analytic functions. What about the case $K=\overline{\mathbb{Q}}_p$ ? Algebraically, will $h$ be in some field of formal series, being the anti-derivative of $h' \in \overline{K(x)}$ ? And will its inverse function $P$ be well-defined ?
In that case, does it tell us the structure of $E(\overline{\mathbb{Q}}_p)$ ?
One way to do this is to prove there is a lattice $\Lambda$ in $\Bbb C$ whose $\wp$-function satisfies $$\wp'(z)^2=4\wp(z)^3-a\wp(z)-b.$$ Cox gives a proof in Primes of the form $x^2+ny^2$. This involves the $j$ modular function. It is a fact that $j$ is surjective, so there is $\tau$ with $$j(\tau)=1728\frac{a^3}{a^3-27b^2}.$$ Then a lattice of the form $\alpha\Bbb Z+\alpha\tau\Bbb Z$ works. As $P$ and $\wp$ satisfy the same differential equation and their Laurent series at $0$ start off the same, then their Laurent series are the same, so $P=\wp$ by analytic continuation. As $\wp$ is periodic, so is $P$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Taylor expansion of $\cos^2(\frac{iz}{2})$ Expand $\cos^2(\frac{iz}{2})$ around $a=0$ We know that $$\cos t=\sum_{n=0}^{\infty}(-1)^n\frac{t^{2n}}{(2n)!}$$ So $$\cos^2t=[\sum_{n=0}^{\infty}(-1)^n\frac{t^{2n}}{(2n)!}]^2=\sum_{n=0}^{\infty}(-1)^{2n}\frac{t^{4n}}{{4n^2!}}$$ We have $t=\frac{iz}{2}$ $$\sum_{n=0}^{\infty}(-1)^{2n}\frac{(\frac{zi}{2})^{4n}}{{4n^2!}}=\sum_{n=0}^{\infty}(-1)^{2n}\frac{(\frac{zi}{2})^{4n}}{{4n^2!}}=\sum_{n=0}^{\infty}(-1)^{2n}\frac{({zi})^{4n}}{2^{4n}{4n^2!}}=\sum_{n=0}^{\infty}(-1)^{2n}\frac{({z})^{4n}}{2^{4n}{4n^2!}}$$ But the answer in the book is $$1+\frac{1}{2}\sum_{n=1}^{\infty}\frac{({z})^{2n}}{(2n)!}$$.
$$ \begin{align} \cos^2\left(\frac{iz}{2}\right) &= \frac{1}{2}\left(1+\cos\left(2\cdot\frac{iz}{2}\right)\right) = \frac{1}{2}\bigl(1+\cos(iz)\bigr) \\ &= \frac{1}{2}\left(1+\sum_{n=0}^{+\infty} (-1)^n\frac{(iz)^{2n}}{(2n)!}\right) = \frac{1}{2}\left(1+\sum_{n=0}^{+\infty} (-1)^n i^{2n}\frac{z^{2n}}{(2n)!}\right) \\ &= \frac{1}{2}\left(1+\sum_{n=0}^{+\infty} (-1)^{2n}\frac{z^{2n}}{(2n)!}\right) = \frac{1}{2}\left(1+\sum_{n=0}^{+\infty} \frac{z^{2n}}{(2n)!}\right) \\ &= \frac{1}{2}\left(1+1+\sum_{n=1}^{+\infty}\frac{z^{2n}}{(2n)!}\right) = 1+\frac{1}{2}\sum_{n=1}^{+\infty} \frac{z^{2n}}{(2n)!} \end{align} $$
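For real $z$ we have $\cos(iz/2)=\cosh(z/2)$, so the claimed series can be checked numerically (my own sketch):

```python
import math

def series(z, terms=25):
    """Partial sum of 1 + (1/2) * sum_{n>=1} z^(2n) / (2n)!."""
    return 1 + 0.5 * sum(z**(2 * n) / math.factorial(2 * n) for n in range(1, terms))

# cos^2(iz/2) = cosh^2(z/2) for real z:
for z in (0.0, 0.5, 1.0, 2.0):
    assert abs(math.cosh(z / 2)**2 - series(z)) < 1e-12
```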
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
If $A$ is diagonalizable, find $\alpha$ and $\beta$ Let $A$ be a $5 \times 5$ matrix whose characteristic polynomial is given by $$p_A(\lambda)=(λ + 2)^2 (λ − 2)^3$$ If $A$ is diagonalizable, find $\alpha$ and $\beta$ such that $$A^{-1} = \alpha A + \beta I$$ I am unable to find the inverse of $5\times 5$ matrix, I only know how to invert $3\times 3$ matrices. I don't know how to find the values of $α$ and $β$. If anybody can help me I would be very thankful to them.
The fact that $A$ is diagonalizable means that there exists an invertible matrix $P$ such that $PAP^{-1}= \begin{bmatrix}2 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & -2 & 0 \\ 0 & 0 & 0 & 0 & -2 \end{bmatrix}$, where the multiplicities (three $2$'s and two $-2$'s) are read off from the characteristic polynomial. So $(PAP^{-1})^{-1}= PA^{-1}P^{-1}= \begin{bmatrix}\frac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{2} & 0 \\ 0 & 0 & 0 & 0 & -\frac{1}{2} \end{bmatrix}$. So $A^{-1}= P^{-1}\begin{bmatrix}\frac{1}{2} & 0 & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{2} & 0 \\ 0 & 0 & 0 & 0 & -\frac{1}{2} \end{bmatrix}P$. Finally, $A^{-1}=\alpha A+\beta I$ holds iff $\alpha\lambda+\beta=\frac{1}{\lambda}$ for both eigenvalues; solving $2\alpha+\beta=\frac{1}{2}$ and $-2\alpha+\beta=-\frac{1}{2}$ gives $\alpha=\frac{1}{4}$, $\beta=0$, i.e. $A^{-1}=\frac{1}{4}A$.
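A quick check on the diagonalized model (my own sketch; since $A^{-1}=\alpha A+\beta I$ must hold on each eigenvalue, we need $\alpha\lambda+\beta=1/\lambda$ for $\lambda=\pm 2$, which forces $\alpha=\frac14$, $\beta=0$):

```python
from fractions import Fraction

# Eigenvalues of A with multiplicities read from (l+2)^2 (l-2)^3.
eigs = [2, 2, 2, -2, -2]

alpha, beta = Fraction(1, 4), Fraction(0)
for l in eigs:
    assert alpha * l + beta == Fraction(1, l)   # alpha*l + beta = 1/l

# On the diagonal model, A^{-1} and (1/4)A are literally the same matrix:
inv_diag = [Fraction(1, l) for l in eigs]
assert inv_diag == [alpha * l + beta for l in eigs]
```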
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate $\int_{-\infty}^{\infty}\frac{1}{(x^2+4)^5}dx$ $$\int_{-\infty}^{\infty}\frac{1}{(x^2+4)^5}dx$$ I am trying to use residues. We first need to find the singularities: $x^2+4=0\iff x=\pm 2i$. Only $2i$ lies in the upper half-plane, so we take the limit $\lim_{z\to 2i}\frac{1}{(z^2+4)^5}$, but this limit blows up.
For any $a>0$ we have $$ \int_{-\infty}^{+\infty}\frac{dx}{x^2+a}=\frac{\pi}{\sqrt{a}} $$ and by applying $\frac{d^4}{da^4}$ to both sides we get: $$ 24\int_{-\infty}^{+\infty}\frac{dx}{(x^2+a)^5}=\frac{105 \pi}{16 a^4\sqrt{a}} $$ so by rearranging and evaluating at $a=4$ we get: $$ \int_{-\infty}^{+\infty}\frac{dx}{(x^2+4)^5} = \color{red}{\frac{35 \pi}{2^{16}}}.$$
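A quick numerical check of the final value (my own sketch using a midpoint Riemann sum; truncating to $[-50,50]$ is harmless since the integrand decays like $x^{-10}$):

```python
import math

def f(x):
    return 1.0 / (x * x + 4.0)**5

a, b, steps = -50.0, 50.0, 200_000
h = (b - a) / steps
approx = sum(f(a + (i + 0.5) * h) for i in range(steps)) * h  # midpoint rule

exact = 35 * math.pi / 2**16
assert abs(approx - exact) < 1e-8
```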
{ "language": "en", "url": "https://math.stackexchange.com/questions/2396943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Has this variant on multiplication by a natural number been studied before? Let $X$ denote an additively-denoted commutative monoid. Then we get an action $\star$ of $\mathbb{N}$ on the powerset $\mathcal{P}(X)$ as follows: given a natural number $n$ and a set $A \subseteq X$, define $$n \star A = \left\{x \in X : \exists_{I \in \mathbf{FinSet}}\left(|I|=n \wedge \exists_{f:I \rightarrow A}\left(x = \sum_{i \in I} f(i) \right)\right)\right\}.$$ For example: * *$2 \star \{x\} = \{2x\}$ *$3 \star \{x\} = \{3x\}$ *$2 \star \{x,y\} = \{2x,x+y,2y\}$ *$3 \star \{x,y\} = \{3x,2x+y,x+2y,3y\}$ *$2 \star \{x,y,z\} = \{2x,2y,2z,x+y,x+z,y+z\}$ So it's basically a variant on multiplication by a natural number scalar in which the thing getting added to itself $n$ times is allowed to vary. Kind of reminds me of multichoose. Question. Has this variant on scalar multiplication been studied before? If so, what is it called, and where can I learn more?
This looks like some sort of convolution on sets. For any two sets $A, B \in P(X)$, define the product $$ A \cdot B = \{a + b \mid a \in A, b \in B\}$$ For example, we have that $$ \{x, y\} \cdot \{x, y\} = \{2x, x + y, 2y\} $$ and in general, the star operator $n \star A$ is the $n$-fold product of $A$. Note that if $e \in X$ is the monoid unit, then $(P(X), \cdot)$ is a monoid with unit $\{e\}$.
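A small sketch of this product, instantiating the monoid as $(\mathbb{N},+)$ with the arbitrary choices $x=1$, $y=10$ (large enough apart that the sums stay distinguishable):

```python
def set_product(A, B):
    """A . B = {a + b : a in A, b in B} in an additive monoid."""
    return {a + b for a in A for b in B}

def star(n, A):
    result = {0}                       # {e}: the unit of (N, +) is 0
    for _ in range(n):
        result = set_product(result, A)
    return result                      # the n-fold product of A

assert star(2, {1, 10}) == {2, 11, 20}        # {2x, x+y, 2y}
assert star(3, {1, 10}) == {3, 12, 21, 30}    # {3x, 2x+y, x+2y, 3y}
```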
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Polynomial $x^{8} + x^{7} + x^{6} + x^{5} + x^{4} + x^{3} + x^{2} + x + 1$ is reducible over $\mathbb{Q}$? Clearly, there are no roots, but how can I find factors of higher degree?
This is $$(x^6+x^3+1)(x^2+x+1).$$ In fact a polynomial $$\sum_{j=0}^m x^j=x^m+x^{m-1}+\cdots+x+1$$ for $m\ge1$ is irreducible over $\Bbb Q$ iff $m+1$ is prime.
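The factorization is easy to verify by multiplying coefficient lists (a quick sketch):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x6_x3_1 = [1, 0, 0, 1, 0, 0, 1]   # x^6 + x^3 + 1
x2_x_1 = [1, 1, 1]                # x^2 + x + 1
assert poly_mul(x6_x3_1, x2_x_1) == [1] * 9   # x^8 + x^7 + ... + x + 1
```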
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Limit of $\int_0^1\frac{f(hx)}{x^2+1}dx$ when $h\to0$ Let $f\in \mathcal{C}^0\big([0,1],\mathbb{R}\big)$ and, for every $h\in(0,1]$, $$I(h)=\int_0^1\dfrac{f(hx)}{x^2+1}dx$$ For $\varepsilon >0$, show there exists $\eta>0$ such that for every $h\in(0,\eta)$, $$\left|I(h)-f(0)\frac{\pi}{4}\right|\leq \varepsilon$$ Since $\dfrac{\pi}{4} = \displaystyle\int_0^1\dfrac{1}{x^2+1}dx$ $$\left |\int_0^1\dfrac{f(hx)}{x^2+1}dx -\int_0^1\dfrac{f(0)}{x^2+1}dx \right|\leq\varepsilon$$ And now, I have got no idea how to solve this problem. I think, I should show that : $$|h|\leq \eta \implies \left|I(h)-f(0)\frac{\pi}{4}\right|\leq \varepsilon$$ but I'm not sure.
Note that we can write $$\begin{align} \left|\int_0^1 \frac{f(hx)}{x^2+1}\,dx-f(0)\frac\pi4\right|&=\left|\int_0^1 \frac{f(hx)-f(0)}{x^2+1}\,dx\right|\\\\ &\le \int_0^1 \frac{|f(hx)-f(0)|}{x^2+1}\,dx\\\\ & \le\frac\pi4 \sup_{x\in [0,1]}|f(hx)-f(0)| \end{align}$$ By continuity of $f$ at $0$, for any $\epsilon>0$ there exists a number $\eta>0$ such that $|f(hx)-f(0)|<4\epsilon/\pi$ whenever $|hx|<\eta$. Since $x\in [0,1]$, we have $|hx|\le|h|$, so whenever $|h|<\eta$ it follows that $|f(hx)-f(0)|<4\epsilon/\pi$ for every $x\in[0,1]$. Hence, $\sup_{x\in [0,1]}|f(hx)-f(0)|\le 4\epsilon/\pi$ whenever $|h|<\eta$. Putting it all together, we have for $|h|<\eta$, $$\left|\int_0^1 \frac{f(hx)}{x^2+1}\,dx-f(0)\frac\pi4\right|\le\epsilon$$
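A numerical illustration (my own sketch, with the arbitrary choice $f=\cos$, so $f(0)=1$): the values $I(h)$ approach $\pi/4$ as $h\to 0$.

```python
import math

def I(h, f, steps=20_000):
    """Midpoint-rule approximation of the integral defining I(h)."""
    dx = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += f(h * x) / (x * x + 1.0)
    return total * dx

f = math.cos  # any continuous f works; here f(0) = 1
for h in (0.1, 0.01, 0.001):
    assert abs(I(h, f) - math.pi / 4) < h   # error shrinks with h
```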
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Construct a bijection $\mathrm{Hom}_{\mathbb C}(\mathbb C[x,y]/(xy-1),\mathbb C) \to \mathbb C\setminus \{0\}$ The question is : Construct a bijection $\mathrm{Hom}_{\mathbb C}(\mathbb C[x,y]/(xy-1),\mathbb C) \to \mathbb C\setminus \{0\}$. Here $\text{Hom}_{\mathbb C}(\mathbb C[x,y]/(xy-1),\mathbb C)$ is the set of all homomorphisms $$\phi :\mathbb C[x,y]/(xy-1) \to \mathbb C\;\; \text{s.t.}\;\; \phi|_{\mathbb C}=\mathrm{id}|_{\mathbb C}.$$ Please someone give some hints how can I do this? Thank you.
A homomorphism from $R=\Bbb C[x,y]/(xy-1)$ to $\Bbb C$ is a homomorphism $\Phi$ from $\Bbb C[x,y]$ to $\Bbb C$ with $\Phi(xy-1)=0$. Each $\Bbb C$-algebra homomorphism $\Phi:\Bbb C[x,y]\to \Bbb C$ has the form $\Phi_{a,b}:f(x,y)\mapsto f(a,b)$ where $a$, $b\in\Bbb C$. Then $\Phi_{a,b}(xy-1)=ab-1$. So $\Phi_{a,b}$ defines a homomorphism on $R$ iff $ab=1$, etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does this sum to $2^n-1\ $? $$N=\sum_{i=1}^{n} C^i_n = \sum_{i=1}^n\frac{n(n-1)\cdots(n-i+1)}{i!}$$ Does $N = 2^n-1$ hold ? I mean, $C^i_n = \binom{n}{i}$. According to the binomial formula, if this summation sums from $i=0$ instead of $i=1$, then it's equal to $2^n$. Because of this, does this sum to $2^n-1$?
We know that $$(a+b)^n=\sum_{i=0}^n \binom {n}{i}a^{n-i}b^i$$ with $a=b=1$, it becomes $$2^n=\sum_{i=0}^n\binom {n}{i} $$ $$=\sum_{i=\color {red} {1}}^n\binom {n}{i}+\frac {n!}{0!(n-0)!}$$ $$=N+1$$
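A direct check of the identity with exact integer arithmetic (a quick sketch):

```python
from math import comb

# sum_{i=1}^{n} C(n, i) = 2^n - 1, since the full binomial sum is 2^n
# and the i = 0 term contributes exactly 1.
for n in range(1, 16):
    assert sum(comb(n, i) for i in range(1, n + 1)) == 2**n - 1
```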
{ "language": "en", "url": "https://math.stackexchange.com/questions/2397466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }