Direct sum of compact operators is compact I have that $T_n$ are bounded operators on $H_n$ ($n\geq 1$) and that $\sup_n ||T_n||<\infty$. Define $T=\oplus T_n$ and $H=\oplus H_n$. I want to show that $T$ is compact iff $T_n$ is compact for all $n$ and $||T_n||\rightarrow 0$. Here is what I have so far: Assume that $T$ is compact, and let $B_n$ be the unit ball in $H_n$. Then we have that $\overline{T_n(B_n)}$ is a closed subset of $\overline{T(B)}$ ($B$ being the unit ball in $H$), so we get compactness of $T_n$. To see that $||T_n||\rightarrow 0$, note that otherwise for some $\epsilon>0$ there is an infinite subsequence $\{n_i\}$ such that $||T_{n_i}||>\epsilon$. Pick unit vectors $h_{n_i}\in H_{n_i}$ with $||T_{n_i}(h_{n_i})||\geq \epsilon$, and view each $h_{n_i}$ as an element of $H$ (equal to $h_{n_i}$ in the $n_i$-position and $0$ elsewhere). This is a bounded sequence, yet $||T(h_{n_i})-T(h_{n_j})||^2\geq 2\epsilon^2$ for $i\neq j$, so $(T(h_{n_i}))$ has no convergent subsequence, contradicting compactness of $T$. For the other direction I am a little stuck. I was thinking of using a theorem that says that for a bounded operator $S$, we have that $S$ is compact iff there is a sequence $S_n$ of operators of finite rank such that $||S-S_n||\rightarrow 0$. Maybe call $S_i$ to be $T_1\oplus...\oplus T_i$ (and $0$ on the remaining summands), and argue that $S_i$ has finite rank? I can see that $||T-S_n||\rightarrow 0$, for if $h$ is a unit vector, then $$||(T-S_n)(h)||=\Big\|\sum_{k=n+1}^\infty T_k(h_k)\Big\|\leq \sup_{k\geq n+1}||T_k||\rightarrow 0,$$ but I don't know where to use the hypothesis that $\sup_n ||T_n||<\infty$ or how to show that $S_n$ has finite rank.
Fix $\varepsilon>0$. Choose $n_0$ such that $\|T_n\|<\varepsilon$ if $n> n_0$. For each $n=1,\ldots,n_0$, there exists a finite-rank $S_n$ with $\|S_n-T_n\|<\varepsilon$ (each $T_n$ is compact); put $S_n=0$ for $n>n_0$. Then $S:=\bigoplus_{n\in\mathbb N}S_n$ is finite-rank, since its range lies in the finite-dimensional space $\operatorname{ran}S_1\oplus\cdots\oplus\operatorname{ran}S_{n_0}$, and $$ \|T-S\|=\sup\{\|S_n-T_n\|:\ n\in\mathbb N\} \le\varepsilon. $$ So $T$ is a limit of finite-rank operators, hence compact.
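(For the identity $\|T-S\|=\sup_n\|T_n-S_n\|$ used here — a standard fact about direct sums — note that for a unit vector $h=(h_n)_{n\in\mathbb N}\in H$, $$\|(T-S)h\|^2=\sum_{n\in\mathbb N}\|(T_n-S_n)h_n\|^2\le\left(\sup_{n}\|T_n-S_n\|\right)^2\sum_{n\in\mathbb N}\|h_n\|^2=\left(\sup_{n}\|T_n-S_n\|\right)^2,$$ while testing $T-S$ on unit vectors supported in a single $H_n$ gives the reverse inequality.)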
{ "language": "en", "url": "https://math.stackexchange.com/questions/681707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Showing if a function is injective or surjective problem $F : \Bbb{P}(X) \rightarrow \Bbb{P}(X) ; U \mapsto (U-A) \cup (A-U)$ My intuition has been telling me that this function is bijective but I am having the most difficult time trying to show this. Any help would be appreciated, thank you!! edit: So far, I introduced sets $U_1$ and $U_2$; I am assuming $F(U_1)=F(U_2)$ and trying to show $U_1=U_2$ in order to show injectivity. So I have $(U_1-A)\cup(A-U_1)=(U_2-A)\cup(A-U_2)$ and I'm unfamiliar with how to manipulate this into $U_1=U_2$. I haven't worked much with set notation before. As for showing that $F$ is surjective, I've tried drawing some Venn diagrams to help me process the question but I'm not sure where to go. Thank you for the quick replies!
Trying it directly seems rough, especially since it's hard to picture if $U_1 = U_2$ using a Venn diagram. I'm not sure how familiar you are with this idea, but $(U - A) \cup (A-U)$ is also called the symmetric difference of $U$ and $A$. It is denoted $U \triangle A$. Using the associativity of $\triangle$ with your function $F(U) = U \triangle A$, you can show $$(F \circ F)(U) = F(F(U)) = F(U \triangle A) = (U \triangle A) \triangle A = U \triangle (A \triangle A) = U \triangle \emptyset = U.$$ You can probably show associativity (the fourth equal sign) using Venn diagrams, comparing the diagram of each side. This entire equation tells us that your function has a left and right inverse (namely itself). It's a bijection.
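If you want a sanity check before writing the proof, the involution property is easy to test by brute force on a small example (a Python sketch; the sets X and A are arbitrary illustrative choices):

    from itertools import combinations

    X = {1, 2, 3, 4}
    A = {1, 3}

    def powerset(s):
        s = list(s)
        return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    F = lambda U: (U - A) | (A - U)   # the symmetric difference U triangle A

    # F(F(U)) == U for every subset U, so F is its own inverse, hence a bijection
    print(all(F(F(U)) == U for U in powerset(X)))   # True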
{ "language": "en", "url": "https://math.stackexchange.com/questions/681798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Lower semicontinuous functional Consider the space $A=(C^1([0,1],\mathbb R),\|\cdot\|_{L^\infty})$, i.e. $C^1$ functions equipped with the $L^\infty$-norm, and look at the functional $$ \mathcal F: A\to\mathbb R_+, f\mapsto \int_{0}^1\left|f'(t)\right|~\mathrm dt. $$ This functional is not continuous. My question: Is it lower semi-continuous? And could there be a meaningful extension of $\mathcal F$ to the closure of $A$ under the $L^\infty$-norm?
This functional is DEFINITELY continuous when defined on $C^1[0,1]$ with its standard norm $\|f\|_{C^1[0,1]}=\|f\|_\infty+\|f'\|_\infty$: for $f,g\in C^1[0,1]$, $$ \lvert \mathcal Ff-\mathcal Fg\rvert=\left\lvert\int_0^1 \lvert f'\rvert-\lvert g'\rvert\,dt\right\rvert\le \int_0^1 \lvert f'-g'\rvert\,dt \le \| f'-g'\|_\infty\le \|f-g\|_{C^1[0,1]}, $$ so $\mathcal F$ is even Lipschitz in that norm. What about semicontinuity w.r.t. the $\|\cdot\|_\infty$-norm? Let's be reminded of the definitions: $\mathcal F$ is upper semi-continuous at $f=f_0$ if for every $\varepsilon>0$, there exists an open $U\subset C^1[0,1]$ (open w.r.t. the $\|\cdot\|_\infty$-norm), with $f_0\in U$, such that $$ \int_0^1 \lvert f'(x)\rvert\,dx\le\int_0^1 \lvert f'_0(x)\rvert\,dx+\varepsilon, $$ for every $f\in U$; lower semi-continuity asks instead for $\int_0^1\lvert f'\rvert\,dx\ge\int_0^1\lvert f_0'\rvert\,dx-\varepsilon$ on some such $U$. Upper semicontinuity fails at every $f_0$: if $U$ is open, then $B_\delta(f_0)\subset U$ for some $\delta>0$, and $$ f_n=f_0+\frac{\delta}{2}\sin nx\in B_\delta(f_0)\subset U, $$ while $$ \lim_{n\to\infty} \mathcal Ff_n=\infty. $$ Lower semicontinuity, on the other hand, does hold: $\mathcal Ff$ is the total variation of $f$, and total variation is lower semi-continuous under uniform (even pointwise) convergence. This also suggests the meaningful extension: the $\|\cdot\|_\infty$-closure of $A$ consists of the continuous functions (by Weierstrass approximation), and there $\mathcal F$ extends naturally to the total variation, allowing the value $+\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/681880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $m = g \circ f$ in a dagger category and $m$ is an isometry, is it possible that $f$ fails to be an isometry? Question. Suppose $m : A \rightarrow B$ is an isometry in a dagger category (by which I mean that $m^\dagger \circ m=\mathrm{id}_A$), and that we're given arrows $f : A \rightarrow Y$ and $g : Y \rightarrow B$ such that $m = g \circ f$. Is it possible that $f$ fails to be an isometry? The remainder of the question is motivation... In category theory, we have the following well-known result. Proposition. Suppose $m : A \rightarrow B$ is a split monomorphism, and that we're given arrows $f : A \rightarrow Y$ and $g : Y \rightarrow B$ such that $m = g \circ f$. Then $f$ is a split monomorphism. Proof. Since $m$ splits, let $e : B \rightarrow A$ satisfy $e \circ m = \mathrm{id}_A$. Then defining $e' = e \circ g$, we see that $$e' \circ f = (e \circ g) \circ f = e \circ m = \mathrm{id}_A.$$ Unfortunately, if we're in a dagger category and we try replacing "split monomorphism" by "isometry", the above proof doesn't seem to go through. In particular, although we can show that $f$ is a split monomorphism by defining $e' = m^\dagger \circ g$, there appears to be no guarantee that $f^\dagger$ equals $m^\dagger \circ g$.
Counterexample: let $m$ be the identity and pick mutually inverse $f$ and $g$ that aren't daggers of each other (equivalently, $f$ invertible but not unitary). For example in the dagger category of complex matrices with conjugate transpose: $$m=\begin{pmatrix}1&0\\ 0&1\\ \end{pmatrix}$$ $$f=\begin{pmatrix}1/2&0\\ 0&2\\ \end{pmatrix}$$ $$g=\begin{pmatrix}2&0\\ 0&1/2\\ \end{pmatrix}$$ Or even $m=1$, $f=1/2$ and $g=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/681974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Asymptotic expansion of $\sum_{k=0}^{\infty} k^{1 - \lambda}(1 - \epsilon)^{k-1}$ I'm reading a physics paper about percolation (http://arxiv.org/abs/cond-mat/0202259). In the paper the following asymptotic relation is used without derivation. $$ \sum_{k=0}^{\infty} k P(k) (1 - \epsilon)^{k-1} \sim \left<k\right> - \left<k(k - 1)\right> \epsilon + \cdots + c \Gamma(2 - \lambda) \epsilon^{\lambda - 2}, $$ where $P(k) = c k^{-\lambda}$ with $\lambda > 2$ is a power-law probability mass function of $k$ and the bracket means average. It is equation (12) of the paper. I have no idea how to get the relation. Especially, where does the $\epsilon^{\lambda - 2}$ term come from?
I have not been able to see where the $\epsilon^{\lambda - 2}$ term comes from. However, this may be a track: $$ \sum_{k=0}^{\infty} k P(k) (1 - \epsilon)^{k-1}=\frac{c \,\operatorname{Li}_{\lambda-1}(1-\epsilon)}{1-\epsilon}$$ when $P(k) = c k^{-\lambda}$ ($\operatorname{Li}_s$ being the polylogarithm; in terms of the Hurwitz-Lerch transcendent $\Phi(z,s,a)$ this is $c\,\Phi(1-\epsilon,\lambda-1,1)$). Maybe some asymptotic developments..?
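Following up on that track: the polylogarithm has a standard expansion about its singular point, valid for non-integer $s$. Writing $1-\epsilon=e^{-\mu}$, so that $\mu=\epsilon+O(\epsilon^2)$, $$\operatorname{Li}_s(e^{-\mu})=\Gamma(1-s)\,\mu^{s-1}+\sum_{k=0}^{\infty}\frac{\zeta(s-k)}{k!}(-\mu)^k.$$ With $s=\lambda-1$ the non-analytic piece is $\Gamma(2-\lambda)\,\mu^{\lambda-2}\sim\Gamma(2-\lambda)\,\epsilon^{\lambda-2}$, which is exactly the paper's singular term, while the integer powers of $\mu$ reproduce the moment terms $\left<k\right>-\left<k(k-1)\right>\epsilon+\cdots$.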
{ "language": "en", "url": "https://math.stackexchange.com/questions/682057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Abstract Algebra - Permutations I'm asked to show that $(1,2,3) \in S_3$ generates a subgroup which is normal. I know that I could show it explicitly but that would be tedious. I think it may have to do with the fact that $(1,2,3)$ generates all even permutations but I'm sure there's something I'm missing. Any help would be appreciated.
Actually, showing it directly would be relatively untedious as far as getting your hands dirty with group theory is concerned. There are only six elements in $S_3$, and $\langle(123)\rangle$ has three elements so it only has two cosets (where did I get "two" from?), so there isn't much work involved. I urge you to do the problem first by going this route. This kind of practice is necessary. For an abstract, and perhaps elegant, approach, you can argue that $\langle(123)\rangle$ is the only subgroup with some property (and explain why it is the only one), a property which is invariant under conjugation. This will tell you the subgroup is conjugation-invariant, i.e. normal (quiz: how does it tell us this?). Can you figure out the property? It's very basic: I mentioned it in the first paragraph. It's possible that the meat of this argument (how being the unique subgroup with a conjugation-invariant property implies the subgroup is conjugation-invariant) is what you're struggling with. But you have the right idea since the argument I gave above generalizes to showing $A_n\triangleleft S_n$ using the same argument with even permutations (although one could also show it's index two).
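If you do take the direct route, the check is also tiny to automate (a Python sketch, with permutations of $\{0,1,2\}$ written as tuples $p$ where $p[i]$ is the image of $i$):

    from itertools import permutations

    compose = lambda p, q: tuple(p[q[i]] for i in range(3))   # (p o q)(i) = p(q(i))
    inverse = lambda p: tuple(p.index(i) for i in range(3))

    S3 = list(permutations(range(3)))
    r = (1, 2, 0)                        # the 3-cycle (1 2 3), written 0-based
    H = {(0, 1, 2), r, compose(r, r)}    # the subgroup generated by r

    # normality: g H g^{-1} = H for every g in S3
    print(all({compose(compose(g, h), inverse(g)) for h in H} == H for g in S3))   # True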
{ "language": "en", "url": "https://math.stackexchange.com/questions/682135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of closed form of recursively defined sequence Let $f:\mathbb{N} \rightarrow \mathbb{N}$ be defined by $f(1) = 5, f(2) = 13$, and for $n \ge 3, f(n) = 2f(n - 2) + f(n - 1)$. Prove that $f(n) = 3\cdot 2^n + (-1)^n$ for all $n \in \mathbb N$. So far I've proved that the claim $p(n)$ is true when $n = 1, 2$. For $k \ge 3$, assume that $p(j)$ is true for all $j \in \mathbb N$, $j < k$. Now I want to prove that $p(k)$ is true. How would I go about doing that?
Hint $\ $ Let $\,S g(n) = g(n\!+\!1)$ be the shift operator. $(S-2)(2^n) = 0 = (S+1)\big((-1)^n\big)$ so their product $(S-2)(S+1) = S^2\!-S-2$ kills $\, f_n = c\,2^n + d (-1)^n\,$ for any $\,c,d\,$ independent of $\,n.$ Therefore we deduce $\, 0 = (S^2\!-S-2)f_n = f_{n+2} - f_{n+1} - 2f_n,\ $ i.e. $\ f_{n+2} = f_{n+1} + 2 f_n.$ Remark $\ $ See this answer for another example and further explanation. This explains how the proof works in TooOldForMath's answer.
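For completeness, once the recurrence and the two base cases are in place, the induction step is a two-line computation: for $k\ge 3$, assuming the closed form for $k-1$ and $k-2$, $$f(k)=f(k-1)+2f(k-2)=\left(3\cdot 2^{k-1}+3\cdot 2^{k-1}\right)+(-1)^{k-2}\left(-1+2\right)=3\cdot 2^{k}+(-1)^{k}.$$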
{ "language": "en", "url": "https://math.stackexchange.com/questions/682210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Finding Number of Automorphisms of Z8? I'm trying to find the number of automorphisms of Z8. When I google around, I find stuff like: There are 4 since 1 can be carried into any of the 4 generators. The problem hint tells me to make use of the fact that, if G is a cyclic group with generator a and f: G-->G' is an isomorphism, we know that f(x) is completely determined by f(a). Thing is, I can think of 7! 1-1 and onto mappings of Z8 onto itself. I guess I don't see exactly why 1 has to get carried into a generator...why can't I have f(n) = n + 1 (mod 8), just shifting each element one to the right? Thanks for any guidance on this, Mariogs
Note that $1$ generates $\mathbb{Z}_{8}$ as a group, so any group morphism $\varphi:\mathbb{Z}_8\rightarrow\mathbb{Z}_8$ is determined by $\varphi(1)$. Furthermore, if $\varphi$ is an automorphism, then $\varphi(1)$ generates $\mathbb{Z}_8$. The possible generators of $\mathbb{Z}_8$ are $1,3,5,7$. It then remains to check that for each possible choice of generator, there exists $\varphi$ with $\varphi(1)$ equal to the generator. This is the case, so there are $4$ possible automorphisms of $\mathbb{Z}_8$. (To see this, define $\varphi(n)=3n, 5n, 7n$ and check each of these.)
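A quick computational confirmation, using exactly the fact that a morphism is determined by $\varphi(1)$ (Python sketch):

    n = 8
    autos = [k for k in range(n)
             if sorted((k * x) % n for x in range(n)) == list(range(n))]
    print(autos)   # [1, 3, 5, 7]: x -> kx mod 8 is bijective for exactly 4 values of k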
{ "language": "en", "url": "https://math.stackexchange.com/questions/682286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
solve the differential equation by integrating directly I am trying to solve a differential equation and I don't know how to solve it when it comes to integrating directly. I'd like to know how to do this so I can start doing other problems. Thanks in advance. Solve the differential equation by integrating directly $${{\rm d}y \over {\rm d} t} = {4t + 4 \over \left(t + 1\right)^{2}}$$
If the equation is read as $y'(t)=4t+\frac4{(t+1)^2}$, then $$y(t)=2t^2+\frac{4t}{t+1}+y(0).$$ If instead it is read as written, then $$y'(t)=\frac{4t+4}{(t+1)^2}=\frac{4}{t+1}\implies y(t)=4\log(t+1)+y(0).$$ The solution of the second version on the interval $(-\infty,-1)$ would be $$ y(t)=4\log|t+1|+y(-2). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/682404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the required digit What is the digit at the $50^{th}$ place from the left of $(\sqrt{50} +7)^{50}$? I thought of binomial expansion but it was way too lengthy. Can anyone suggest any other way?
Note that $a=7+\sqrt{50}$ and $b=7-\sqrt{50}$ satisfy $a+b=14$ and $ab=-1$. Note that $a\gt 14$ so that $a^n \gt 10^n$ (by some margin) so that $a^{50}\gt 10^{50}$ and $b^{50}\lt 10^{-50}$ Note that if $Y_n=a^n+b^n$ we have $Y_n=14Y_{n-1}+Y_{n-2}$ so the $Y_r$ are integers. ($a$ and $b$ satisfy $x^2-14x-1=0$, $Y_0=2, Y_1=14$) Conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/682501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Smallest next real number after an integer This might be a silly question, but is it possible at all for n.00000...[infinite zeros]...1 to be the next real number after n? If not, why not? Firstly, I know (I think) that $$\lim_{x\to \infty} \frac{1}{10^x} = 0$$ but I'm not talking about taking its limit. Surely I'm not required to. The obvious rebuttal is that, calling this number c, the number c divided by two will be between n and c, which is obviously a closer number to n than c. But why is c necessarily divisible? There are special exceptions for other numbers. For instance, if b=0 in $\frac ab$, we say it's undefined. Why can't $\frac c2$ be similarly undefined? Or something similar to the notion that infinity divided by two is still infinity? OR, does c equal n.0, similar to how 0.999... = 1? Apologies if this question has been asked a million times before (I was not able to find it asked quite this way) or if you find it stupid.
You can define a number like this one if you consider hyperreal numbers: it would be written $n + \epsilon$, where $\epsilon$ is an infinitesimal, greater than $0$ yet smaller than every positive real number. Indeed, if you could define n.00000...[infinite zeros]...1, then you should be able to define n.00000...[infinite zeros]...2 and many other numbers. This 0.000[infinite zeros]1 plays the role of such an $\epsilon$. It can be multiplied by any real and would still be infinitesimal, and $\epsilon^2$ is an even smaller infinitesimal. But because of this hierarchy you would also have $n < \cdots < n + \epsilon^3 < n + \epsilon^2 < n + \epsilon$, and even indexing with ordinals you could not get around this problem. So even in this theory you cannot find "the" number right after $n$, only one in a subhierarchy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/682578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 10, "answer_id": 7 }
How do I prove an inequality of a strict concave function? The function $f(x)$ is strict concave, strict increasing, and $f(0) = 0$. $a, b \in \mathbf R \; and \; a < b$, how can I get that $\frac{a}{b} < \frac{f(a)}{f(b)}$? Thank you! Oh sorry I forgot to mention that $f:\mathbf R_{+} \rightarrow \mathbf R_{+}$ is continuous; and $a, b$ are positive...
$\frac{f(a)}{f(b)}-\frac{a}{b}=\frac{bf(a)-af(b)}{bf(b)}$. Now, $bf(b)>0$ for all $b>0$, since $f(\cdot)$ is increasing and $f(0)=0$. Next, $$bf(a)-af(b)=(b-a)f(a)-a(f(b)-f(a))$$ For a strictly concave differentiable function, for all $x\neq y$ in the domain, $$f(y)< f(x)+(y-x)f'(x)$$ So, $$f(b)-f(a)<(b-a)f'(a)\\ -f(a)=f(0)-f(a)<-af'(a)$$ Hence, $$bf(a)-af(b)>(b-a)f(a)-a(b-a)f'(a)=(b-a)(f(a)-af'(a))>0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/682646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Combinatorial proofs of the identity $(a+b)^2 = a^2 +b^2 +2ab$ The question I have is to give a combinatorial proof of the identity $(a+b)^2 = a^2 +b^2 +2ab$. I understand the concept of combinatorial proofs but am having some trouble getting started with this problem, any help would be appreciated.
Hint. You have $a$ different blue shirts and $b$ different pink shirts. In how many ways can you choose one shirt to wear today and one to wear tomorrow?
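Spelling the hint out: both sides count ordered pairs of choices (today's shirt, tomorrow's shirt) from the $a+b$ shirts, repeats allowed; splitting by the colors worn gives $$(a+b)^2=\underbrace{a^2}_{\text{blue, blue}}+\underbrace{b^2}_{\text{pink, pink}}+\underbrace{ab+ba}_{\text{one of each, in either order}}=a^2+b^2+2ab.$$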
{ "language": "en", "url": "https://math.stackexchange.com/questions/682849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Basic B-Spline basis function question I am studying the basic recursion formula for generating B-Spline basis functions N(i,j) of a given degree from the basis for the lower degree, and puzzling at the magic. In particular what I am having a hard time getting through my head is why the new functions obtained have one more degree of continuity (at knots) than the next lower degree. It's obvious that the degree is elevated, but it's not obvious where this continuity is coming from at the knot. Consider the degree one basis functions. They are piecewise linear functions with slope discontinuities at the knots. How is it that taking a linear combination of these functions gives a piecewise quadratic which has a continuous slope at the knots? Where is this magic coming from?!! Thanks!
Splines satisfy many more identities than the Cox-de Boor relations. As an example, the cardinal B-splines $B_n(x)$ of degree $n-1$ and support $[0,n]$ also satisfy $$B_{n+1}'(x)=B_n(x)-B_n(x-1).$$ This relation directly shows that smoothness increases with degree.
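The identity is easy to check numerically from the order-recursion itself (a Python sketch; order $n$ means degree $n-1$, support $[0,n]$):

    def B(n, x):
        """Cardinal B-spline of order n, via the standard order-recursion."""
        if n == 1:
            return 1.0 if 0 <= x < 1 else 0.0
        return (x * B(n - 1, x) + (n - x) * B(n - 1, x - 1)) / (n - 1)

    n, x, h = 3, 1.7, 1e-6
    lhs = (B(n + 1, x + h) - B(n + 1, x - h)) / (2 * h)   # central difference for B'_{n+1}
    print(lhs, B(n, x) - B(n, x - 1))                     # the two values agree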
{ "language": "en", "url": "https://math.stackexchange.com/questions/682929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
A way of finding $x \in \mathbb{Z}$, congruency I'm trying to find all $x \in \mathbb{Z}$ that satisfies this equation $$3x \equiv 1 \pmod 6$$ I tried using trial and error, but couldn't find a suitable number for x. I know that the $\mbox{gcd}$ is $3$. How would I approach this?
A solution mod $\,6\,$ remains a solution mod $\,3,\,$ yielding $\ 0\equiv 3x\equiv 1\pmod 3,\,$ contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/682984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
If $a$ and $b$ are odd then $a^2+b^2$ is not a perfect square Prove if $a$ and $b$ are odd then $a^2+b^2$ is not a perfect square. We have been learning proof by contradiction and were told to use the Euclidean Algorithm. I have tried it both as written and by contradiction and can't seem to get anywhere.
Let $a=2m+1$ and $b=2n+1$, and assume $a^2+b^2=k^2$. Then: $$(2m+1)^2+(2n+1)^2=k^2 \iff 4(m^2+m+n^2+n)+2=k^2.$$ Since $k^2$ is even, $k$ itself is even, say $k=2r$, so $$4(m^2+m+n^2+n)+2=4r^2 \iff 2(m^2+m+n^2+n)+1=2r^2.$$ The left-hand side is odd while the right-hand side is even, a contradiction. Hence the assumption $a^2+b^2=k^2$ is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/683070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Calculate possible intersection point of two lines I'd like to know the $I$ coordinates $[x, y]$. $I$ is the possible intersection point of line $|AB|$ with $|XY|$ and $|X'Y'|$. Values that are known: * *Angle $AIX$ and $AIX'$ *Coordinates of points $A$, $B$, $X$, $Y$, $X'$, $Y'$ I have no idea how to calculate intersection point of two lines that do not directly intersect. Thanks a lot in advance. PS: Gray lines are imaginary
Given the coordinates $(x,y)$ of your four points $A,B,X,Y$, the parametrized equations for the two continuation lines are $$\vec{r}_1(t) =\begin{bmatrix}x_A \\ y_A \end{bmatrix} +t\cdot\begin{bmatrix}x_B-x_A \\ y_B-y_A \end{bmatrix},$$ $$\vec{r}_2(s) =\begin{bmatrix}x_X \\ y_X \end{bmatrix} +s\cdot\begin{bmatrix}x_Y-x_X \\ y_Y-y_X \end{bmatrix}.$$ Set $\vec{r}_1(t)=\vec{r}_2(s)$ and solve for the parameters $(s_0,t_0)$. Putting the parameter value into the above equations tells you the intersection point, e.g. $$\vec{r}_I=\vec{r}_1(t_0),\qquad \text{or}\qquad \vec{r}_I=\vec{r}_2(s_0).$$ EDIT Assume for example the positions $$A:(0,1),\quad B:(0,2),\quad X:(1,1),\quad Y:(2,2)$$ (which is a bit similar to your picture) Then the line-equations are $$\vec{r}_1 (t) =\begin{bmatrix}0 \\ t \end{bmatrix},\qquad \vec{r}_2(s)=\begin{bmatrix}1+s \\ 1+s \end{bmatrix}.$$ Setting them equal gives the set of equations $$0=1+s,\\t=1+s,$$ with the solution $(s_0,t_0)=(-1,0)$. Computing your intersection point as explained above, you obtain $$\vec{r}_I = \vec{r}_1(0)=\begin{bmatrix}0 \\ 0 \end{bmatrix}.$$
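The same computation in a few lines of code (a numpy sketch, using the example points above; np.linalg.solve fails exactly when the lines are parallel):

    import numpy as np

    A, B = np.array([0.0, 1.0]), np.array([0.0, 2.0])
    X, Y = np.array([1.0, 1.0]), np.array([2.0, 2.0])

    # r1(t) = A + t(B - A), r2(s) = X + s(Y - X); equate: t(B-A) - s(Y-X) = X - A
    M = np.column_stack([B - A, -(Y - X)])
    t0, s0 = np.linalg.solve(M, X - A)
    print(A + t0 * (B - A))   # the intersection point, here [0. 0.]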
{ "language": "en", "url": "https://math.stackexchange.com/questions/683247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determining Linear Independence and Linear Dependence? I understand that when testing a set of vectors for linear dependence/independence, you can take the determinant of the (square) matrix they form, and if it is zero, then they are linearly dependent. However, how can I go about doing this for something like two $3\times 1$ vectors? Example: $$\begin{bmatrix} -6 \\ -1 \\ -7 \end{bmatrix}, \begin{bmatrix} -1 \\ -5 \\ 4\end{bmatrix} $$ The RREF yields infinitely many solutions, because row 3 reads $0=0$. Does this tell me that the system is linearly independent if it has infinitely many solutions?
Define a matrix $A\in\mathbb{R}^{3\times 2}$ by setting $A:=\begin{bmatrix}-6 & -1 \\ -1 & -5 \\ -7 & 4 \end{bmatrix} $. By performing the row operations on the matrix $A$, we can show that the null space $\mathcal{N}(A)=\left\{\begin{bmatrix}0 \\ 0\end{bmatrix}\right\}$. Since the null space of $A$ contains only the zero vector, we have $Ax=0$ if and only if $x=\begin{bmatrix}0 \\ 0 \end{bmatrix}$. But $Ax$ is the combinations of the column vectors of the matrix $A$, and thus the only combination of the vectors $v_1:=\begin{bmatrix}-6 \\ -1\\-7\end{bmatrix}$, $v_2:=\begin{bmatrix}-1 \\ -5\\4\end{bmatrix}$ which produces the zero vector $\begin{bmatrix}0 \\ 0\\0\end{bmatrix}$ is the trivial combination $(0,0)$. In other words, we have $\displaystyle\sum_{1\le k\le 2}c_kv_k=0$ if and only if $c_k=0$ for all $1\le k\le 2$. Therefore by the definition of linear independence, the vectors $v_1,v_2$ are linearly independent.
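A one-line numerical cross-check of the same conclusion:

    import numpy as np
    A = np.array([[-6, -1], [-1, -5], [-7, 4]])
    print(np.linalg.matrix_rank(A))   # 2 = number of columns, so the columns are independent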
{ "language": "en", "url": "https://math.stackexchange.com/questions/683343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Stability of a feedback system Take the following feedback system: $\dot{x} = (\theta - k_1) x - k_2 x^3$ Now my book says: For $\theta > k_1$, the equilibrium $x = 0$ is unstable. I wonder why... Furthermore my book indicates that it is easy to see that $x(t)$ will converge to one of the two new equilibria $\pm \sqrt{\frac{\theta-k_1}{k_2}}$. Again, how did they obtain this result?
Question 1: From stability theory it is known that a fixed point $x^*$ of $\dot{x}=f(x)$ is asymptotically stable if all eigenvalues of the Jacobian $f'(x^*)$ have negative real parts, and unstable if some eigenvalue has positive real part. For $f(x)=(\theta - k_1) x - k_2 x^3$ and $x^*=0$ we get $f'(0)=\theta-k_1$, so the origin is stable for $\theta<k_1$ and unstable for $\theta>k_1$. Question 2: You can use the same argument. For $\theta>k_1$ the new equilibria become stable, as (assuming $k_2>0$) we have $$f'\left(\pm \sqrt{\frac{\theta-k_1}{k_2}}\right)=\theta-k_1-3k_2\frac{\theta-k_1}{k_2}=-2(\theta-k_1)<0$$ This fundamental change of the dynamics at $\theta=k_1$ is known as a pitchfork bifurcation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/683420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Differentiating this implicit expression? I am given: $$\tfrac{1}{4}(x+y)^2 + \tfrac{1}{9}(x - y)^2 = 1$$ Using the chain rule, and factoring out $y'$, I'm left with: $$y' \left(\tfrac{1}{2}(x+y) - \tfrac{2}{9}(x-y)\right) = 0$$ Now I need to isolate $y'$ but I'm not sure how. Should I do: $$y' = \frac{1}{\tfrac{1}{2}(x+y) - \tfrac{2}{9}(x-y)}$$ Am I going about this question the correct way? Thanks
Your first step is wrong. It should be $$\frac{1}{2}(1+y')(x+y)+\frac{2}{9}(1-y')(x-y)=0$$
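Carrying that through: collecting the $y'$ terms and multiplying by $18$ to clear denominators, $$9(x+y)+4(x-y)+y'\left(9(x+y)-4(x-y)\right)=0\implies y'=-\frac{13x+5y}{5x+13y}.$$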
{ "language": "en", "url": "https://math.stackexchange.com/questions/683497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The Hedgehog space is locally compact We'll consider the hedgehog space, and use the definition from Wikipedia (link to the current revision): Let $\kappa$ be a cardinal number; the $\kappa$-hedgehog space is formed by taking the disjoint union of $\kappa$ real unit intervals identified at the origin. Each unit interval is referred to as one of the hedgehog's spines. The hedgehog space is a metric space, when endowed with the hedgehog metric $d(x,y)=|x-y|$ if $x$ and $y$ lie in the same spine, and by $d(x,y)=x+y$ if $x$ and $y$ lie in different spines. Although their disjoint union makes the origins of the intervals distinct, the metric identifies them by assigning them 0 distance. Now, I want to prove that this space is locally compact. For each $t\in [0,1]$, I will denote by $t_\alpha$ its copy in the $\alpha$-th spine, where $\alpha \in\kappa$ ($\alpha$ an ordinal). Take an arbitrary $t_{\alpha}$. If $t_{\alpha}\neq 0_{\alpha}$, then $0_{\alpha}<t_{\alpha}\le 1_{\alpha}$. Consider $s\in [0,1]$ such that $0<s<t$. Can we say that $[s_{\alpha},1_{\alpha}]$ is a compact neighborhood of $t_{\alpha}$? And what about if $t_{\alpha}= 0_{\alpha}$? Thanks!
I don't think this space is locally compact. Indeed the whole space isn't compact (when $\kappa$ is infinite), as the open cover $$\big(B((2/3)_{\alpha},1/2)\big)_{\alpha\in\kappa}\cup\big(B(0,1/2)\big)$$ clearly admits no finite subcover. If the space were locally compact, $0$ would admit a compact neighborhood $K$ with $\overline{B(0,r)}\subset K$ for some $r>0$. However, $\overline{B(0,r)}$ is clearly homeomorphic to the total space which isn't compact, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/683550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
To maximise my chance of winning one prize should I put all my entries in a single draw? Every week there's a prize draw. It's free to enter using the code from a soup tin lid. You can enter as many times as you like during the week until Monday's draw and then it starts all over. The prizes are experiences and the value to me is not especially relevant. Winning more than once might be worth more in value terms, but I only want to win once. So my question is; in order to maximise my chance of winning once should I; * *batch up my lids and enter as many of them as I can in one go, probably during the last week of the prize draws? *enter lids as I go along. It feels like (1) to me, but the maths to explain it is beyond me. To be clear, I'm not actually interested in trying to gain a winning advantage. I realise that in reality the odds are small and about even either way, but as a maths puzzle I'm interested. Say for instance that I have 10 lids. Lets assume that the number of other entries are an even number (say 5 each week), and lets say there are 10 weeks and one prize available per week. Do I have more chance of winning a prize by entering one lid over 10 weeks, or 10 lids in any other week.
If there are lots of entries (so the number $n$ of your entries doesn't much change the probability of a number winning) and the probability of one ticket winning is $p$ so that $np$ is small, then: If you enter all at once, the probability of winning is $np$. If you enter in $n$ separate weeks, the probability of winning is $1-(1-p)^n=np-\binom n2p^2+\dots$ Since the trailing terms in the binomial expansion are small, because $p$ is small, you do better to enter all at once. The reason for this is that each ticket has the same chance of winning, but when you split your numbers, there is a chance of winning more than once. When you enter all in the same week (given only one prize) you can only win once. There is a $\frac 16$ chance of throwing a $6$ with a fair cubical die. With two dice the chance of throwing at least one $6$ is $\frac {11}{36}$ rather than $\frac 13$. Where did that extra $\frac 1{36}$ go? Well the throw of double six counts one throw but two sixes. The principle is the same.
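A quick numerical illustration of the model in this answer (fixed per-ticket probability $p$, unaffected by your own entries; the numbers are arbitrary):

    p, n = 0.01, 10
    all_at_once = n * p                # n tickets in one draw: the winning events are disjoint
    spread_out = 1 - (1 - p) ** n      # one ticket in each of n draws
    print(all_at_once, spread_out)     # 0.1 vs ~0.0956: all at once does slightly better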
{ "language": "en", "url": "https://math.stackexchange.com/questions/683603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How to prove two graphs are isomorphic if and only if their complements are isomorphic? How this can be proved that two graphs $G_1$ and $G_2$ are isomorphic iff their complements are isomorphic?
Let graph $G$ be isomorphic to $H$, and let $\overline G$, $\overline H$ denote their complements. Since $G$ is isomorphic to $H$, there exists a bijection $f: V(G) \to V(H)$ such that $uv \in E(G)$ if and only if $f(u)f(v) \in E(H)$. Equivalently, there exists a bijection $f: V(G) \to V(H)$ such that $uv \notin E(G)$ if and only if $f(u)f(v) \notin E(H)$. Since the vertex sets of $G$ and $\overline G$ are the same, $f$ is also a bijection from $V(\overline G)$ to $V(\overline H)$. Now suppose $uv \in E(\overline G)$; by definition of the complement, $uv \notin E(G)$, hence $f(u)f(v) \notin E(H)$, i.e. $f(u)f(v) \in E(\overline H)$ — and conversely. Hence $\overline G$ and $\overline H$ are isomorphic. The other direction follows by applying the same argument to $\overline G$ and $\overline H$, since $\overline{\overline G}=G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/683718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Vector question regarding plane and area How do I find the area of the triangle enclosed by the $3$ points where the plane $ax+by+cz=1$ intersects the $x$, $y$ and $z$ axes? Need help on solving this as it has been bothering me for some time.
Find the points where the plane intersects the $x$, $y$, and $z$ axis. Then find the distances between these three points and use Heron's formula to find the area of the trianlge defined by them.
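A small sketch of that recipe (Python; the plane coefficients are arbitrary nonzero example values):

    import math

    a, b, c = 2.0, 3.0, 4.0
    P = [(1 / a, 0, 0), (0, 1 / b, 0), (0, 0, 1 / c)]    # intercepts with the x, y, z axes

    d1 = math.dist(P[0], P[1])
    d2 = math.dist(P[1], P[2])
    d3 = math.dist(P[2], P[0])

    s = (d1 + d2 + d3) / 2                                # Heron's formula
    print(math.sqrt(s * (s - d1) * (s - d2) * (s - d3)))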
{ "language": "en", "url": "https://math.stackexchange.com/questions/683780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of a bijection to the set of subsets? For part of a proof I wanted to show that $f: \{1,2\} \to \mathcal{S}(X)$ is a bijection, where $\mathcal{S}(X)$ is the set of subsets of $X$, which in this case I know to be $\{\emptyset , X\}$. So I define $f$ as $f(1) = \emptyset$ and $f(2) = X$. But then $f(1) = \emptyset \iff 1\in f^{-1}\emptyset$, and $f$ is a bijection means $f^{-1}$ is a bijection, so $f^{-1}\emptyset = \{f^{-1}(x) : x\in \emptyset\}$. But no such x is in the empty set, so $f^{-1}\emptyset = \emptyset$, when it should have 1 as an element. I'm confused as to what exactly is going wrong here; I'm pretty sure that $\mathcal{S}(X)$ should have size 2 so it should biject to $\{1,2\}$. Any clarifications would be appreciated!
You are confusing between $f^{-1}(a)=\{x\mid f(x)=a\}$ and $f^{-1}(a)=\{x\mid f(x)\in a\}$. The latter is sometimes written as $f^{-1}[a]$ to avoid this sort of confusion when $f(x)$ is a set itself. So $f^{-1}(\varnothing)=\{1\}$ and $f^{-1}[\varnothing]=\varnothing$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/683948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Factoring a hard polynomial This might seem like a basic question but I want a systematic way to factor the following polynomial: $$n^4+6n^3+11n^2+6n+1.$$ I know the answer but I am having a difficult time factoring this polynomial properly. (It should be $(n^2 + 3n + 1)^2$). Thank you and have a great day!
There's another approach: check that for, say, $n=\pm 1,\, 0,\,\pm 2$ your polynomial is a perfect square:$$\begin{cases}25,&n=1\\1,&n=-1\\1,&n=0\\1,&n=-2\\121,&n=2.\end{cases}$$ Therefore, at five points a polynomial of fourth degree takes perfect-square values — so you can hope to find a polynomial of the second degree whose values at those points are $$\begin{cases}\pm 5,&n=1\\\pm 1,&n=-1\\\pm 1,&n=0\\\pm 1,&n=-2\\\pm 11,&n=2.\end{cases}$$ This is an overdetermined system, but, luckily, it has a solution: wlog, we take the candidate as $ n^2+2bn+c$ (because of the coefficient of $n^4$). The value at $n=-1$ gives $1-2b+c = \pm 1$, and at $n=0$, $ c=\pm 1$. By checking the $4$ solutions of the resulting $4$ linear systems against the remaining points $n$ we can eliminate all but one: we obtain $c=1$, $b=3/2$. Finally we check that indeed $(n^2+3n+1)^2=n^4+6n^3+11n^2+6n+1 $.
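If you just want the factorization checked by machine, sympy confirms it:

    import sympy as sp
    n = sp.symbols('n')
    print(sp.factor(n**4 + 6*n**3 + 11*n**2 + 6*n + 1))   # (n**2 + 3*n + 1)**2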
{ "language": "en", "url": "https://math.stackexchange.com/questions/684036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 6, "answer_id": 1 }
Show that a set of vectors spans $\Bbb R^3$? Let $ S = \{ (1,1,0), (0,1,1), (1,0,1) \} \subset \Bbb R^3 .$ a) Show that S spans $\Bbb R^3$ b) Show that S is a basis for $\Bbb R^3 $ I cannot use the rank-dimension method for (a). Is it possible to show via combination of 3 vectors? What would the final equations be like? I tried using this method but I don't think it's the final answer. $x+z = a \\ x+y=b\\ y+z=c$ So what do I do from here. I'm not sure how to do (b).
Let $v=(v_1,v_2,v_3)\in\Bbb R^3$ be arbitrary, and let $s_1 = (1,1,0)$, $s_2 = (0,1,1)$, $s_3 = (1,0,1)$. Row reducing the augmented matrix $(s_1\ s_2\ s_3 \mid v)$ gives $$v = \frac{v_1+v_2-v_3}{2}\,s_1 + \frac{-v_1+v_2+v_3}{2}\,s_2 + \frac{v_1-v_2+v_3}{2}\,s_3,$$ and since $v$ was chosen arbitrarily this means that $S$ spans $\Bbb R^3$. Now all you need to do is show that $S$ is linearly independent. (Hint: show that $x_1s_1+x_2s_2+x_3s_3=0$ has only the trivial solution $x_1=x_2=x_3=0$; spanning plus independence makes $S$ a basis, which answers (b).)
{ "language": "en", "url": "https://math.stackexchange.com/questions/684106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
weighted sum of two i.i.d. random variables Suppose we know that $X_1$ and $X_2$ are two independently and identically distributed random variables. The distribution of $X_i$ ($i=1,2$) is $P$, and we have some constraints on $P$ that $$\mathbb{E} X_1 = 0$$ (zero-mean) and $$\mathbb{E} X_1^2 = 1$$ (variance is normalized). We denote the set of all feasible distributions as $\mathcal{D}$. Then, my question is about the function $$f(w_1,w_2,a): = \sup_{P\in \mathcal{D}} \Pr(w_1X_1+w_2X_2\ge a)$$ where $w_1\ge 0$, $w_2\ge 0$ and $a> 0$. Q1: Can we get an analytic solution of $f?$ Q2: Can we know some properties of $f$? For example, it is easy to show that $f$ is monotonically decreasing as $a$ increases when $w_1$ and $w_2$ are fixed. I want to ask if $a$ is fixed, and $w_1+w_2$ is fixed, is $f$ (quasi-)convex (or concave, monotonic...) on $w_1$? For the two questions above, I also appreciate answers for another definition of the set $\mathcal{D}:=\{ P \mid \mathbb{E} X_1 = 0, P(|X_1|>1)=0 \text{ (bounded instead of restriction on variance)}\}.$ Thank you in advance.
If you let $a$ and $\omega_1 + \omega_2 =: c$ be fixed and, for notational convenience, let $0 < \omega_1 =: t$, then \begin{align} f(\omega_1,\omega_2,a) = f(t) &= \sup_{P \in \mathcal{D}}P[t X_1 + (c-t)X_2 \geq a] \\ &= \sup_{P \in \mathcal{D}} \int P\left[X_1 \geq \left(1-\frac{c}{t}\right)X_2 + \frac{a}{t} \mid X_2\right]dP \\ &= \sup_{P \in \mathcal{D}} \left\{ \int_{X_2 < \frac{a}{t-c}} P\left[X_1 \geq \left(1-\frac{c}{t}\right)X_2 + \frac{a}{t} \mid X_2\right]dP \right. \\ &\ \qquad + \left. \int_{X_2 > \frac{a}{t-c}} P\left[X_1 \geq \left(1-\frac{c}{t}\right)X_2 + \frac{a}{t} \mid X_2\right]dP \right\} \end{align} For fixed $x_2$, the term $\left(1-\frac{c}{t}\right)x_2 + \frac{a}{t}$ converges to $0$ as $t$ increases. It does so monotonically, from below if $x_2 > \frac{a}{t-c}$ and from above otherwise, and consequently the set $\{X_1 \geq (1-\frac{c}{t})x_2 + \frac{a}{t} \}$ decreases or increases respectively. Therefore, the first integrand will almost surely increase with $t$, and the second will almost surely decrease. From this and the moment conditions alone, it doesn't seem possible to conclude monotonicity of $f$. However, if $|X_2| < 1$ holds a.s., then if $a \geq \omega_2$ only the first integrand contributes and one can say that $f$ is increasing with $t$. If $a \leq - \omega_2$, the reverse conclusion can be drawn. It isn't clear that the increase/decrease should be monotonic though, so I'll hedge and say non-decreasing/non-increasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/684167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Rewrite $f(x) = 3 \sin (\pi x) + 3\sqrt{3} \cos (\pi x)$ in the form $f(x) = A \sin (Kx+D)$ I got a question that said "Rewrite $f(x) = 3 \sin (\pi x) + 3\sqrt{3} \cos (\pi x)$ in the form $f(x) = A \sin (Kx+D)$". I'm inclined to think that since the periods are the same ($2$), the amplitudes will just add up. But I'm not sure. I also need to know the rules for combining sinusoids with different periods. I know that when you're multiplying them, the one with the longer period acts as a sort of envelope for the one with the smaller period. But what do you do with different periods when you add them? Thanks! evamvid
The standard way to combine such functions is to use the R formula ( http://www.oocities.org/maths9233/Trigonometry/RFormula.html ): Specifically, if we want to combine your function into $$a\sin{\theta} + b\cos{\theta} = R\sin{(\theta + \alpha)},$$ then it is possible to compute $R,\alpha$: $$R = \sqrt{a^2 + b^2}, \qquad \alpha=\tan^{-1}{\frac{b}{a}}.$$ In your case, $$ a=3,\quad b = 3\sqrt{3},\quad \theta=\pi x.$$ I believe it's very easy to continue on from here. I'm sure there are many proofs of this online as well, if you need to be convinced that it works.
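Carrying it through for this particular $f$: $$R=\sqrt{3^2+(3\sqrt{3})^2}=\sqrt{36}=6,\qquad \alpha=\tan^{-1}\sqrt{3}=\frac{\pi}{3},$$ so $$f(x)=6\sin\left(\pi x+\frac{\pi}{3}\right).$$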
{ "language": "en", "url": "https://math.stackexchange.com/questions/684248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Tychonoff vs. Hilbert Let $(\mathscr H_n,\langle\cdot,\cdot\rangle_n)_{n\in\mathbb N}$ be a sequence of Hilbert spaces. Let $$\mathscr H\equiv\bigoplus_{n\in\mathbb N}\mathscr H_n\equiv\left\{(h_n)_{n=1}^{\infty}\,\Bigg|\,h_n\in\mathscr H_n\,\forall n\in\mathbb N,\,\sum_{n\in\mathbb N}\|h_n\|_n^2<\infty\right\}$$ denote their direct sum, equipped with the inner product $$((h_n)_{n\in\mathbb N},(g_n)_{n\in\mathbb N})\mapsto\sum_{n\in\mathbb N}\langle h_n,g_n\rangle_n.$$ The norm $$(h_n)_{n\in\mathbb N}\mapsto\sqrt{\sum_{n\in\mathbb N}\|h_n\|_n^2}$$ naturally defines a metric topology on $\mathscr H$. I have encountered the following dilemma. If $C_n\subseteq\mathscr H_n$ is a compact set for all $n\in\mathbb N$ such that $\prod_{n\in\mathbb N} C_n\subseteq \mathscr H$, then I conjecture that $\prod_{n\in\mathbb N} C_n$ may not be compact in $\mathscr H$, seemingly in defiance of Tychonoff's theorem, because the metric topology on $\mathscr H$ may be different from the product topology corresponding to the metric topologies on $(\mathscr H_n)_{n\in\mathbb N}$. Is this conjecture correct? (I don't need a rigorous proof, I just wonder whether this is the case.)
If a product of compact sets lies within the direct sum, it will always be compact. The space in question is metrisable, so it's enough to check sequential compactness, and thanks to completeness, it's not hard to do that using the standard diagonal argument. This is not a consequence of Tychonoff's theorem, however, as Hilbert topology is much finer than the product topology. A similar argument will not work if you replace $\prod C_n$ with $\bigoplus C_n$, for example if you take $C_n=[0,1]$ in $\mathscr H_n={\bf R}$ (so that $\mathscr H=\ell^2$), the direct sum will be very far from compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/684399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
chain rule using trig functions So I have the following: $$ y = \cos(a^3 + x^3) $$ This is what I got. $$ y' = \cos(a^3 + x^3) \ ( -\sin(a^3 + x^3) ) \ ( 3a^2 + 3x^2 ) $$ I'm not sure what to do after this? Would this be the final answer?
$$d[\cos(a^3+x^3)]=-\sin(a^3+x^3)d(a^3+x^3)=-\sin(a^3+x^3)3x^2dx$$ assuming $a$ is a constant and $x$ is the variable. If you want to explicitly use the chain rule then let $$u=a^3+x^3,\frac{du}{dx}=3x^2$$ $$y=\cos u,\frac{dy}{du}=-\sin u$$ $$\frac{dy}{dx}=\frac{du}{dx}\times\frac{dy}{du}=-3x^2\sin(a^3+x^3)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/684488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show $1+\cosθ+\cos(2θ)+\cdots+\cos(nθ)=\frac{1}{2}+\frac{\sin[(n+1/2)θ]}{2\sin(θ/2)}$ Show $$1+\cosθ+\cos(2θ)+\cdots+\cos(nθ)=\frac12+\frac{\sin\left(\left(n+\frac12\right)θ\right)}{2\sin\left(\frac\theta2\right)}$$ I want to use De Moivre's formula and $$1+z+z^2+\cdots+z^n=\frac{z^{n+1}-1}{z-1}.$$ I set $z=x+yi$, but couldn't get it.
Hint. $$ \operatorname{Re}(1+e^{i\theta}+\cdots +e^{ni\theta})=\operatorname{Re}\frac{e^{(n+1)i\theta}-1}{e^{i\theta}-1} $$ where $e^{i\theta}=\cos\theta+i\sin\theta$.
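To finish from the hint, factor half-angle exponentials out of numerator and denominator: $$\frac{e^{(n+1)i\theta}-1}{e^{i\theta}-1}=\frac{e^{(n+1)i\theta/2}\cdot 2i\sin\frac{(n+1)\theta}{2}}{e^{i\theta/2}\cdot 2i\sin\frac{\theta}{2}}=e^{in\theta/2}\,\frac{\sin\frac{(n+1)\theta}{2}}{\sin\frac{\theta}{2}},$$ whose real part is $\cos\frac{n\theta}{2}\sin\frac{(n+1)\theta}{2}\big/\sin\frac{\theta}{2}$; the product-to-sum identity $2\sin\frac{(n+1)\theta}{2}\cos\frac{n\theta}{2}=\sin\left(\left(n+\frac12\right)\theta\right)+\sin\frac{\theta}{2}$ then gives the stated right-hand side.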
{ "language": "en", "url": "https://math.stackexchange.com/questions/684603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Summation of factorials modulo ten I have read that$$\sum\limits_{i=1}^n i!\equiv3\;(\text{mod }10),\quad n> 3.$$ Why is the sum constant, and why is it $3$?
Think about what you are summing: $$1+2+6+24+120+720+\dots = 33 + 120 + 720 + \dots$$ Taking the sum mod $10$, the $33$ gives $3$; can you see that all other summands are divisible by $10$? (Every $k!$ with $k\ge 5$ contains the factors $2$ and $5$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/684668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cut locus that is a geodesic Can we characterize surfaces $S$, for which cut locus $C_p(S)$ with respect to a point $p$ on $S$, is itself a geodesic between the points it passes through? This holds for example for a cylinder and therefore surfaces isometric to it which do have a cut locus. I am looking for more examples.
If the manifold is compact a topological classification can be obtained as follows: First observe that the cut locus of a closed connected manifold is connected, as noted in Jason DeVito's comment. In fact, for $p \in S$ and $q \in S \setminus \{p\}$ consider a minimal geodesic $\gamma$ from $p$ to $q$. Mapping $q$ to $\gamma(t)$, where $t$ is the first time such that $\gamma$ cannot be extended to a minimal geodesic from $p$ to $\gamma(t + \epsilon)$ for small $\epsilon > 0$, defines a map from $S \setminus \{p\}$ to $Cut(p)$. It is well known that this map is continuous (though this is not trivial). Hence $Cut(p)$ is connected if $S$ is. Let us assume that $S$ is compact and connected. Then $Cut(p)$ is compact as well. From your assumption and the above it follows that $Cut(p)$ is either homeomorphic to $\mathbb S^1$ or to a point. Via the map in the above argument one can moreover construct a homotopy equivalence $S\setminus\{p\} \simeq Cut(p)$. Thus $S\setminus\{p\}$ is either homotopy equivalent to a point or a circle. In the first case it follows that $S$ is homeomorphic to $\mathbb S^2$. In the second case it follows that $S$ is homeomorphic to $\mathbb RP^2$, which is a bit harder to see, but follows from the classification of surfaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/684821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Different basis over $\Bbb{R}$ and $\Bbb{C}$ $V$ is a finite dimensional vector space over $\Bbb{C}$ and let $\{v_1,\ldots,v_n\}$ be a basis of $V$. Show $\{v_1,iv_1,\ldots,v_n,iv_n\}$ is a basis of $V$ over $\Bbb{R}$ and conclude: $\dim_{\Bbb{R}}V=2\dim_{\Bbb{C}}V$. I have proved this is true for the case $V = \Bbb{C}^2$ using $e_1, ie_1, e_2$ and $ie_2$. How can I extend this to a general $V$?
Let $\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots,\alpha_n,\beta_n\in\Bbb R$ such that $$\alpha_1 v_1+\beta_1 i v_1+\cdots+\alpha_n v_n+\beta_n i v_n=0,$$ so with $z_k=\alpha_k+i\beta_k$ we have $$z_1v_1+\cdots+z_n v_n=0\Rightarrow z_k=0 \;\forall k$$ since $(v_1,\ldots,v_n)$ are linearly independent, hence all $\alpha_k,\beta_k$ vanish. Moreover, $(v_1,iv_1,\ldots,v_n,iv_n)$ spans $V$ over $\Bbb R$: any $v=\sum_k z_k v_k$ with $z_k=\alpha_k+i\beta_k$ equals $\sum_k(\alpha_k v_k+\beta_k\, iv_k)$. Hence we have the desired result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/684917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the ideals in ${\Bbb C}[x,y]$ that contain $f_1,f_2\in{\Bbb C}[x,y]$? This question is based on an exercise in Artin's Algebra: Which ideals in the polynomial ring $R:={\Bbb C}[x,y]$ contain $f_1=x^2+y^2-5$ and $f_2=xy-2$? Using Hilbert's (weak) nullstellensatz, one can identify all the maximal ideals of $R$ that contain $f_1$ and $f_2$. For the general ideals contain $(f_1, f_2)$, it suffices to identify ideals in $R/(f_1,f_2)$ by the correspondence theorem. But I don't see how to go on. Is there a systematic way to do it?
Here's a procedure that works more generally: the ideal $I := (x^2 + y^2 - 5, xy - 2)$ has height $2$ in $\mathbb{C}[x,y]$. To see this, note that $x^2 + y^2 - 5$ (or $xy - 2$) is irreducible over $\mathbb{C}$ (e.g. by Eisenstein), hence generates a height $1$ prime ideal in $\mathbb{C}[x,y]$, which does not contain the other generator. More generally yet, the generators of $I$ form a regular sequence, and any ideal generated by a regular sequence has height equal to the length of the sequence. Thus, the quotient $R := \mathbb{C}[x,y]/I$ has Krull dimension $\le \dim \mathbb{C}[x,y] - \text{ht}(I) = 0$, so $R$ is an Artinian ring. Every Artinian ring is a finite product of Artinian local rings, and ideals in a finite product are products of ideals, so writing $R = \prod_{i=1}^n R_i$, the number of $R$-ideals is the product of the numbers of $R_i$-ideals. In this particular case, $R = R_1 \times \cdots \times R_4$, each $R_i \cong \mathbb{C}$ by the Nullstellensatz, and $\mathbb{C}$ has precisely $2$ ideals, so $R$ has a total of $2^4 = 16$ ideals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/684984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Independent Event Complements I have the following homework assignment that I've already finished, but am confused about whether I've gotten it right or wrong, and was hoping someone could help explain so I understand the problem better. An oil exploration company currently has two active projects, one in Asia and the other in Europe. Let A be the event that the Asian project is successful and B be the event that the European project is successful. Suppose that A and B are independent events with P(A)=0.4 and P(B)=0.7 a.) If the Asian project is not successful, what is the probability that the European project is also not successful? Explain your reasoning. b.) What is the probability that at least one of the two projects will be successful? c.) Given that at least one of the two projects is successful, what is the probability that only the Asian project is successful? Here is what I've gotten for each part: $$P(A \cap B) = P(A)P(B) = (.4)(.7) = .28 $$ a.) $P(B^c) = 1 - 0.7 = 0.3 $ b.) $P(A \cup B) = P(A) + P(B) - P(A \cap B) = .4 + .7 - .28 = .82$ c.) $P(A) - P(A \cap B) = .4 - .28 = .12 $ $.12/.82 = .146$ I am confused in that the two events are independent of each other and the book states that for part a the answer should be .126 instead of what I got. Am I doing these problems correctly or am I committing some error?
(a) This is a conditional-probability question: it asks for $P(B^c \mid A^c)$. But since $A$ and $B$ are independent, so are their complements $A^c$ and $B^c$, so you may take directly $P(B^c \mid A^c)=P(B^c)=0.3$. Reason: independence means $P(A\cap B)=P(A)\,P(B)$, and this property carries over to complements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/685090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Need help with a proof concerning zero-free holomorphic functions. Suppose $f(z)$ is holomorphic and zero-free in a simply connected domain, and that $\exists g(z)$ for which $f(z) =$ exp$(g(z))$. The question I am answering is the following: Let $t\neq 0$ be a complex number. Prove that $\exists h(z)$ holomorphic such that $f(z) = (h(z))^t$. I see that the idea makes sense, but a nudge in the right direction would be appreciated.
Nudge: Remember that for real numbers $r>0$, you can define $r^t = \exp(t\ln r)$. Maybe you can do something similar for holomorphic functions?
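Making the nudge concrete: $h(z):=\exp\left(\frac{g(z)}{t}\right)$ does the job, since $$h(z)^{t}=\exp\left(t\cdot\frac{g(z)}{t}\right)=\exp(g(z))=f(z),$$ where for complex $t$ the power $h^t$ is understood as $\exp(t\log h)$, taking the holomorphic logarithm $\log h = g/t$.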
{ "language": "en", "url": "https://math.stackexchange.com/questions/685174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Tricky limit involving sine I'm trying to evaluate $$\text{lim}_{(x,y) \rightarrow (0,0)} \frac{x^4 + \text{sin}^2(y^2)}{x^4+y^4}.$$ I'm pretty sure that the limit exists and is $1$; at the very least, you get that if you approach $(0,0)$ along the lines $x=0$ and $y=0$ and $x=y$. But I can't seem to figure out how to to show that the limit is $1$. Thanks!
Consider $$1-{x^4+\sin^2(y^2)\over x^4+y^4}={y^4-\sin^2(y^2)\over x^4+y^4}={y^4\over x^4+y^4}\left(1-{\sin^2(y^2)\over y^4} \right)$$ Now $$\left|{y^4\over x^4+y^4}\right|\le1$$ for all $(x,y)\not=(0,0)$ and $$\lim_{y\to0}{\sin^2(y^2)\over y^4}=\lim_{u\to0}\left({\sin u\over u}\right)^2=1$$ That should take care of things. (Note: the term ${y^4\over x^4+y^4}$ by itself doesn't have a limit at $(0,0)$, but it doesn't need to; all we needed was for it to be bounded.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/685264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving that an uncountable set has an uncountable subset whose complement is uncountable. How does one prove that an uncountable set has an uncountable subset whose complement is uncountable. I know it needs the axiom of choice but I've never worked with it, so I can't figure out how to use. Here is my attempt (which seems wrong from the start): Let $X$ be an uncountable set, write $X$ as a disjoint uncountable union of the sets $\{x_{i_1},x_{i_2}\}$ i.e $X=\bigcup_{i\in I}\{x_{i_1},x_{i_2}\}$ where $I$ is an uncountable index (I'm pretty sure writing $X$ like this can't always be done), using the axiom of choice on the collection $\{x_{i_1},x_{i_2}\}$ we get an uncountable set which say is all the ${x_{i_1}}$ then the remaining ${x_{i_2}}$ are uncountable. Anyway how is it done, properly? I know the question has been asked in some form here but the answers are beyond my knowledge.
Your idea is generally correct. Using the axiom of choice, $|X|=|X|+|X|$, so there is a bijection between $X$ and $X\times\{0,1\}$. Clearly the latter can be partitioned into two uncountable sets, $X\times\{0\}$ and $X\times\{1\}$. Therefore $X$ can be partitioned to two uncountable disjoint sets. Indeed you need the axiom of choice to even have that every infinite set can be written as a disjoint union of two infinite sets, let alone uncountable ones.
{ "language": "en", "url": "https://math.stackexchange.com/questions/685349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Trying to get a bound on the tail of the series for $\zeta(2)$ $$\frac{\pi^2}{6} = \zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2}$$ I hope we agree. Now how do I get a grip on the tail $\sum_{k \geq N} \frac{1}{k^2}$, which goes to zero? I want to show that $\sqrt{x}\cdot \mathrm{tailend}$ is bounded as $x \to \infty$. All this to show that $x\cdot \mathrm{tailend} = \mathcal O(\sqrt{x})$. The purpose is to get the asymptotic formula for the distribution of square-free integers. p.269 Exercise 8 Stewart and Tall.
Use either the "Euler-Maclaurin summation formula" or "Abel summation formula" applied to the function $f(x) = 1/x^2$.
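An even more elementary bound suffices for this purpose: for $N\ge 2$, $$\sum_{k\ge N}\frac{1}{k^2}\le\sum_{k\ge N}\frac{1}{k(k-1)}=\sum_{k\ge N}\left(\frac{1}{k-1}-\frac{1}{k}\right)=\frac{1}{N-1}$$ by telescoping; taking $N\approx\sqrt{x}$ gives $x\cdot\mathrm{tailend}=\mathcal O(\sqrt x)$ directly.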
{ "language": "en", "url": "https://math.stackexchange.com/questions/685435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 3 }
Why must metric tensor be invertible? The metric can be written as a matrix, but why must this matrix be invertible? At the points where the matrix is singular, why is the metric not defined?
This worried me at one time as well. The way I thought about it was by working at a fixed point and using the Gram-Schmidt process for inner products on the coordinate basis $\partial_1,...,\partial_n$ to produce an orthonormal basis $e_1,...,e_n$. It's a standard and easy fact that the matrices that represent these bilinear forms are related by $I=A^tgA$, where $I$ is the identity matrix (since $\{e_i\}$ is orthonormal), $g$ is the metric $g_{ij}$ with respect to the $\{\partial_i\}$ basis, and $A$ is the change of basis matrix taking the basis of partials to the orthonormal basis. Taking the determinant of both sides of this equation, and using the fact that $\det(A)=\det(A^t)\neq 0$, we see that $\det(g)\neq 0$ so $g$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/685544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Closed form for derivative $\frac{d}{d\beta}\,{_2F_1}\left(\frac13,\,\beta;\,\frac43;\,\frac89\right)\Big|_{\beta=\frac56}$ As far as I know, there is no general way to evaluate derivatives of hypergeometric functions with respect to their parameters in a closed form, but for some particular cases it may be possible. I am interested in this case: $$\mathcal{D}=\frac{d}{d\beta}\,{_2F_1}\left(\frac13,\,\beta;\,\frac43;\,\frac89\right)\Bigg|_{\beta=\frac56}\tag1$$ Could you suggest how to evaluate $\mathcal{D}$ in a closed form, if possible?
Using Euler-type integral representation for the Gauss's function: $$ {}_2F_1(a,b; c; z) = \frac{\Gamma(c)}{\Gamma(b) \Gamma(c-b)} \int_0^1 u^{b-1} (1-u)^{c-b-1} (1-z u)^{-a} \mathrm{d}u $$ for $c = \tfrac{4}{3}$ and $b=\tfrac{1}{3}$, differentiating with respect to $a$ at $a=\tfrac{5}{6}$: $$ \left.\frac{\mathrm{d}}{\mathrm{d}a} {}_2F_1(a,\tfrac{1}{3}; \tfrac{4}{3}; \tfrac{8}{9}) \right|_{a =\tfrac{5}{6}} = -\frac{1}{3} \int_0^1 \frac{\log\left(1- \tfrac{8}{9} u\right)}{\left(1- \tfrac{8}{9} u\right)^{5/6} u^{2/3}} \mathrm{d}u \tag{1} $$ The integral can be evaluated using Mellin convolution techniques for two functions $G_1(u) = \log\left(1- \tfrac{8}{9} u\right) \mathbf{1}_{0<u<1}$, and $G_2(u) = \left(1- \tfrac{8}{9} u\right)^{-5/6} \mathbf{1}_{0<u<1}$. Asking Mathematica to evaluate the integral $(1)$ the answer comes out in closed form: 9 3^(1/6) Gamma[7/6] Gamma[4/3] (Pi + Sqrt[3] Log[3])/(2 Sqrt[Pi]) - 2 3^(1/3) (3 HypergeometricPFQ[{1/6, 1/6, 2/3}, {7/6, 7/6}, 1/9] + Hypergeometric2F1[1/6, 2/3, 7/6, 1/9] Log[3])
{ "language": "en", "url": "https://math.stackexchange.com/questions/685644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
What steps are taken to make this complex expression equal this? How would you show that $$\sum_{n=1}^{\infty}p^n\cos(nx)=\frac{1}{2}\left(\frac{1-p^2}{1-2p\cos(x)+p^2}-1\right)$$ when $p$ is positive, real, and $p<1$?
Since $0<p<1$, we have \begin{eqnarray} \sum_{n=1}^\infty p^n\cos(nx)&=&\Re\sum_{n=1}^\infty p^ne^{inx}=\Re\sum_{n=1}^\infty(pe^{ix})^n=\Re\frac{pe^{ix}}{1-pe^{ix}}=\Re\frac{pe^{ix}(1-pe^{-ix})}{|1-pe^{ix}|^2}\\ &=&p\Re\frac{-p+\cos x+i\sin x}{|1-p\cos x-ip\sin x|^2}=p\frac{-p+\cos x}{(1-p\cos x)^2+p^2\sin^2x}\\ &=&p\frac{\cos x-p}{1-2p\cos x+p^2\cos^2x+p^2\sin^2x}=\frac{p(\cos x-p)}{1+p^2-2p\cos x}\\ &=&\frac12\left(\frac{1-p^2}{1-2p\cos x+p^2}-1\right). \end{eqnarray}
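For a quick numerical sanity check of the closed form, here is a small sketch (assuming numpy; the values of $p$ and $x$ are arbitrary choices):

import numpy as np

p, x = 0.7, 1.3
partial = sum(p**n * np.cos(n * x) for n in range(1, 200))   # truncated series
closed = 0.5 * ((1 - p**2) / (1 - 2 * p * np.cos(x) + p**2) - 1)
print(partial, closed)   # both print the same value up to rounding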
{ "language": "en", "url": "https://math.stackexchange.com/questions/685705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to find this value $\left|\dfrac{z_{1}z_{2}+z_{1}z_{3}+z_{2}z_{3}}{z_{1}+z_{2}+z_{3}}\right|$ Let three complex numbers $z_{1},z_{2},z_{3}$ be such that $$z_{1}+z_{2}+z_{3}\neq 0,\quad |z_{1}|=|z_{2}|=|z_{3}|=1.$$ Find the value of $$\left|\dfrac{z_{1}z_{2}+z_{1}z_{3}+z_{2}z_{3}}{z_{1}+z_{2}+z_{3}}\right|$$ My idea: if $z_{1},z_{2},z_{3}$ are real numbers with $z_{1}=z_{2}=z_{3}=1$, then it is easy to find that $$\left|\dfrac{z_{1}z_{2}+z_{1}z_{3}+z_{2}z_{3}}{z_{1}+z_{2}+z_{3}}\right|=1$$ But I can't handle the general complex case. Thank you
Note that $$\lvert z_2z_3+z_3z_1+z_1z_2\rvert=\lvert z_1z_2z_3\rvert\cdot\lvert z_1^{-1}+z_2^{-1}+z_3^{-1}\rvert=\lvert\overline z_1+\overline z_2+\overline z_3\rvert=\lvert z_1+z_2+z_3\rvert$$ Can you point out the reason of each equality?
{ "language": "en", "url": "https://math.stackexchange.com/questions/685795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$f(x) = \arccos {\frac{1-x^2}{1+x^2}}$; $f'(0+)$, $f'(0-)$? $f(x) = \arccos {\frac{1-x^2}{1+x^2}}$. $f'(x) = 2/(1+x^2)$, but looking at the graph this is true only for $x\geq 0$. For $x\leq 0$, $f'(x) = -2/(1+x^2)$. How can I deduce the second formula, or prove that it holds?
Method $\#1:$ Let $\displaystyle\arccos\frac{1-x^2}{1+x^2}=y$, so that $$\cos y=\frac{1-x^2}{1+x^2}.\ \ \ \ (i)$$ Using the definition of principal values, $\displaystyle 0\le y\le\pi \implies 0\le\frac y2\le\frac\pi2\implies \tan\frac y2\ge0.$ Applying componendo and dividendo on $(i),$ $\displaystyle x^2=\frac{1-\cos y}{1+\cos y}=\tan^2\frac y2$ (using $\displaystyle\cos2A=\frac{1-\tan^2A}{1+\tan^2A}$). As $\displaystyle\tan\frac y2\ge0$, this gives $\displaystyle\tan\frac y2=|x|$. Method $\#2:$ Let $\displaystyle z=\arctan x$ $\displaystyle\implies -\frac\pi2< z<\frac\pi2\iff -\pi< 2z<\pi$ and $\displaystyle\tan z=x,\ \frac{1-x^2}{1+x^2}=\cos2z$. Hence $\displaystyle y=\arccos(\cos 2z)=\begin{cases} 2z=2\arctan x &\mbox{if } 0\le 2z<\pi\iff 0\le z<\frac\pi2\implies x=\tan z\ge0\ \\-2z=-2\arctan x & \mbox{if } -\pi< 2z<0\iff -\frac\pi2< z<0\implies x<0 \end{cases} $
{ "language": "en", "url": "https://math.stackexchange.com/questions/685900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $\operatorname{tr}(A+B)>\operatorname{tr}(A)$, does it hold that $\operatorname{tr}((A+B)^k)>\operatorname{tr}(A^k)$ for all $k\geq 1$ I wonder whether the following holds and if so how it could be proved: Let $A, B$ be (non-commuting) positive semi-definite matrices, If $\operatorname{tr}(A+B)>\operatorname{tr}(A)$, does it hold that $\operatorname{tr}((A+B)^k)>\operatorname{tr}(A^k)$ for all $k\geq 1$. Any ideas would be welcome. Thank you very much in advance
Very partial answer: It is true for $k=2$: $\operatorname{tr}((A+B)^2)=\operatorname{tr}(A^2+AB+BA+B^2)=\operatorname{tr}(A^2)+\operatorname{tr}(AB)+\operatorname{tr}(BA)+\operatorname{tr}(B^2)=\operatorname{tr}(A^2)+2\operatorname{tr}(AB)+\operatorname{tr}(B^2)$ by linearity of the trace and by the fact that $\operatorname{tr}(AB)=\operatorname{tr}(BA)$. Furthermore, as $B$ is psd, $\operatorname{tr}(B^2)\geq 0$, and $\operatorname{tr}(AB)\geq 0$ (see A.U. Kennington, Power concavity and boundary value problems, Indiana University Mathematics Journal Vol. 34, No. 3, 1985, p. 687-704, Appendix). Thus $\operatorname{tr}((A+B)^2)\geq \operatorname{tr}(A^2)$.
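A quick randomized check of the $k=2$ case, as a sketch assuming numpy ($A$ and $B$ are built as $XX^T$ and $YY^T$ to force positive semidefiniteness):

import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 4, 4))
A, B = X @ X.T, Y @ Y.T                                  # random psd matrices
print(np.trace((A + B) @ (A + B)) >= np.trace(A @ A))    # True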
{ "language": "en", "url": "https://math.stackexchange.com/questions/685985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does X = Y in distribution and X being Y-measurable imply Y is X-measurable? Suppose $X,Y$ are random variables taking values in some Borel space, $X \overset {d}{=} Y$, and $X$ is $Y$-measurable. It follows from the fact that $X$ is $Y$-measurable that there exists a measurable $f$ such that $X = f(Y)$ a.s. Is it the case that there exists a measurable $g$ such that $Y = g(X)$ a.s.? It seems plausible, and is true for finite-valued, discrete random variables. EDIT: I've thought about it a bit, and one consequence is $X \overset {d}{=} f(X)$, which may or may not be helpful.
No. Let $Y$ be uniformly distributed on $[-1,1]$, let $f(x) = 2|x|-1$ and let $X = f(Y)$. Then $X$ is also uniformly distributed on $[-1,1]$, but $Y$ is not $X$-measurable, since, intuitively, by looking at $X$ you cannot tell the sign of $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/686165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Inner-product question Let $V$ be $\mathbf{R}^2$ equipped with usual inner product, and $v$ be a nonzero vector. $S_v(u)= u- 2 \frac{\langle u,v\rangle}{\langle v,v\rangle } v$ and $\Phi$ be a non-empty set of unit vectors in $\mathbf{R}^2$ such that $S_v(u) \in \Phi$ and $2\langle u,v\rangle\in\mathbf{Z}$ for any $u,v \in \Phi$. I need to show that $|\Phi|=2,4$ or $6$ and describe the possible sets $\Phi$ geometrically. I can see that if $v$ is in $\Phi$, then so is $-v$, and that $S_v(u)$ is the reflection of $u$ in the line orthogonal to $v$. I've tried taking an orthonormal basis, but that just seems to confirm the $2\langle u,v\rangle \in \mathbf{Z}$ statement. Anyone have any ideas?
First, note that if $u,v \in \mathbb{S}^1$, then so is $S_v(u)$. Then $2\langle u,v\rangle \in \mathbb{Z}$ means $2\cos(\theta) \in \mathbb{Z}$, where $\theta$ is the angle between $u$ and $v$. Then $\theta=0, \pi/3, \pi/2, 2\pi/3$ or $\pi$, and you're done!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/686247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $L = \{(x,y,z) | x+y=z\}$ a regular language? Suppose $x,y,z$ are coded as decimal or their binary representations in an appropriate DFA. Is $L$ regular? My intuition tells me that the answer is no, because there are infinitely many combinations such that $x+y=z$ and a DFA must contain a finite amount of memory. Is this correct?
This is a funny question, and the answer is: it depends. First, if $x$, $y$ and $z$ are given sequentially, then the pumping lemma implies that the triplet $\langle 1(0^n),0,1(0^m)\rangle$ would have to be accepted for multiple values of $m$ and $n$, not necessarily equal. On the other hand, if the numbers are given simultaneously (e.g. on three tapes, or perhaps by triplets of digits), then it is regular. Consider the case when the automaton processes the numbers right-to-left: the only thing the automaton needs to know is the "carry" (a small number), which, in the case of addition, can only be $0$ or $1$. With the other direction, it is just the reversed language. Also, finite state transducers are even able to carry out addition for numbers given in parallel (i.e. generate $z$ from $x+y$), but there is a difference between deterministic and non-deterministic transducers, so the processing order would matter there. I hope this helps $\ddot\smile$
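To make the carry argument concrete, here is a sketch of the constant-memory check the automaton performs when the three numbers are scanned digit by digit from the right (add_ok is a hypothetical helper name):

def add_ok(x, y, z, base=10):
    # the carry is the only state a finite automaton needs to remember
    carry = 0
    while x or y or z or carry:
        s = x % base + y % base + carry
        if s % base != z % base:
            return False
        carry = s // base
        x, y, z = x // base, y // base, z // base
    return True

print(add_ok(123, 456, 579), add_ok(123, 456, 580))   # True False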
{ "language": "en", "url": "https://math.stackexchange.com/questions/686352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Tricky question about differentiability at the origin Let $f: \mathbb{R}^2 \to \mathbb{R} $ be given as $$ f(x,y) = \begin{cases} y, & \text{if }\text{ $x^2 = y $} \\ 0, & \text{if }\text{ $x^2 \neq y $} \end{cases} $$ Is this function differentiable at $(0,0)$ ?
Yeah. It is easy to show that $f$ admits partial derivatives at $(0,0)$, both equal to $0$, for example $$\frac{\partial f}{\partial x}(0,0)=\lim_{x\to 0}\frac{f(x,0)-f(0,0)}{x}=0$$ To show that $\lim_{(x,y)\to (0,0)}\frac{f(x,y)}{||(x,y)||}=0$, note that $$\Bigg|\frac{f(x,y)}{||(x,y)||}\Bigg|\leq \frac{x^2}{||(x,y)||},$$ since $f(x,y)=x^2$ or $f(x,y)=0$. Now try to show that $$\lim_{(x,y)\to(0,0)}\frac{x^2}{\sqrt{x^2+y^2}}=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/686407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Finding system with infinitely many solutions The question asks to find equation for which the system has infinitely many solutions. The system is: \begin{cases} -cx + 3y + 2z = 8\\ x + z = 2\\ 3x + 3y + az = b \end{cases} How should I approach questions like this? I tried taking it to row reduced echelon form but it got kind of messy. The answer is supposed to be: $$a - c -5 = 0$$ and $$b- 2c +2 = 0$$
You can do row reduction; consider the matrix \begin{align} \left[\begin{array}{ccc|c} -c & 3 & 2 & 8 \\ 1 & 0 & 1 & 2 \\ 3 & 3 & a & b \end{array}\right] &\to \left[\begin{array}{ccc|c} 1 & 0 & 1 & 2 \\ -c & 3 & 2 & 8 \\ 3 & 3 & a & b \end{array}\right]\quad\text{swap 1 and 2}\\ &\to \left[\begin{array}{ccc|c} 1 & 0 & 1 & 2 \\ 0 & 3 & 2+c & 8+2c \\ 0 & 3 & a-3 & b-6 \end{array}\right]\quad\text{reduce first column}\\ &\to \left[\begin{array}{ccc|c} 1 & 0 & 1 & 2 \\ 0 & 3 & 2+c & 8+2c \\ 0 & 0 & a-c-5 & b-2c-14 \end{array}\right]\quad\text{reduce second column}\\ \end{align} The system has infinitely many solutions if and only if the last row is zero, that is \begin{cases} a-c=5\\ b-2c=14 \end{cases}
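The same elimination is easy to check symbolically; a minimal sympy sketch mirroring the steps above:

import sympy as sp

a, b, c = sp.symbols('a b c')
M = sp.Matrix([[-c, 3, 2, 8],
               [ 1, 0, 1, 2],
               [ 3, 3, a, b]])
M.row_swap(0, 1)                    # swap 1 and 2
M[1, :] = M[1, :] + c * M[0, :]     # reduce first column
M[2, :] = M[2, :] - 3 * M[0, :]
M[2, :] = M[2, :] - M[1, :]         # reduce second column
print(M)                            # last row: [0, 0, a - c - 5, b - 2*c - 14]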
{ "language": "en", "url": "https://math.stackexchange.com/questions/686489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simplifying solutions I am given the differential equation $$\frac{dz}{dx} = m(c_{1}-z)(c_{2}-z)^{\frac{1}{2}}, z(0) =0$$ and have arrived at a solution: $$z(x) = c_{2} - (c_{1}-c_{2})\tan^{2}{\left[\arctan{\left(\frac{\sqrt{c_{2}}}{\sqrt{c_{1}-c_{2}}}\right)} - \frac{mx}{2}\sqrt{c_{1}-c_{2}}\right]}.$$ I was wondering if there was any more simplification that could occur here since we have the $\arctan$ within the $\tan$. I have tried using the identity for $\tan{(A-B)}$ but this doesn't seem to simplify matters.
Using the identity $$ \tan(\arctan(u)-v) = \frac{u - \tan(v)}{1 + u \tan(v)} $$ I was able to simplify your answer to $$\frac{c_1\tan\left(\frac{\sqrt{c_1-c_2}\,mx}{2}\right)\left(2c_2\tan\left(\frac{\sqrt{c_1-c_2}\,mx}{2}\right)-c_1\tan\left(\frac{\sqrt{c_1-c_2}\,mx}{2}\right)+2\sqrt{c_1-c_2}\,\sqrt{c_2}\right)}{\left(\sqrt{c_2}\,\tan\left(\frac{\sqrt{c_1-c_2}\,mx}{2}\right)+\sqrt{c_1-c_2}\right)^2}$$ I have to confess that I used Maxima to keep track of all the terms! Not sure if this is simpler than your answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/686578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
List of Common or Useful Limits of Sequences and Series There are many sequences or series which come up frequently, and it's good to have a directory of the most commonly used or useful ones. I'll start out with some. Proofs are not required. $$\begin{align} \sum_{n=0}^{\infty} \frac1{n!} = e \\ \lim_{n \to \infty} \left(1 + \frac1n \right)^n = e \\ \lim_{n \to \infty} \left(1 - \frac1n \right)^n = \frac1e \\ \lim_{n \to \infty} \frac{n}{\sqrt[n]{n!}} = e \\ \lim_{n \to \infty} \frac{1}{n} = 0 \\ \sum_{n=1}^{\infty} \frac1{n} \text{ diverges.} \end{align}$$
My opinion. The most useful series is the geometric series, in both its finite and infinite forms: $$ 1 + x + x^2 + \cdots + x^n = \frac{1 - x^{n+1}}{1-x} \quad (x \neq 1) $$ and $$ 1 + x + x^2 + \cdots = \frac{1 }{1-x} \quad (|x| < 1) $$ You can derive many others from it by substituting values and by formal manipulations (including differentiation and integration). It's as handy in combinatorics (as a generating function) as in analysis.
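For instance, differentiating the infinite form term by term inside $|x|<1$ immediately yields another standard sum, $$\frac{d}{dx}\,\frac{1}{1-x} = \frac{1}{(1-x)^2} = \sum_{n=1}^{\infty} n x^{n-1},$$ while integrating instead gives $-\ln(1-x) = \sum_{n=1}^{\infty} \frac{x^n}{n}$.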
{ "language": "en", "url": "https://math.stackexchange.com/questions/686665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Expressing $\cos(x)^6$ as a linear combination of $\cos(kx)$'s Let $$(\cos^6(x)) = m\cos(6x)+n\cos(5x)+o\cos(4x)+p\cos(3x)+q\cos(2x)+r\cos(x)+a.$$ What is the value of $a$?
$\newcommand{\+}{^{\dagger}}% \newcommand{\angles}[1]{\left\langle #1 \right\rangle}% \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}% \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}% \newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}% \newcommand{\dd}{{\rm d}}% \newcommand{\down}{\downarrow}% \newcommand{\ds}[1]{\displaystyle{#1}}% \newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}% \newcommand{\expo}[1]{\,{\rm e}^{#1}\,}% \newcommand{\fermi}{\,{\rm f}}% \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}% \newcommand{\half}{{1 \over 2}}% \newcommand{\ic}{{\rm i}}% \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow}% \newcommand{\isdiv}{\,\left.\right\vert\,}% \newcommand{\ket}[1]{\left\vert #1\right\rangle}% \newcommand{\ol}[1]{\overline{#1}}% \newcommand{\pars}[1]{\left( #1 \right)}% \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}}% \newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}% \newcommand{\sech}{\,{\rm sech}}% \newcommand{\sgn}{\,{\rm sgn}}% \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}}% \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ \begin{align} \cos^{6}\pars{x}&=\pars{\expo{\ic x} + \expo{-\ic x} \over 2}^{6}= {1 \over 2^{6}}\sum_{\ell = 0}^{6}{6 \choose \ell}\expo{\ic\pars{6 - 2\ell}x} \\[3mm]&= {1 \over 2^{6}}\sum_{\ell = 0}^{2}{6 \choose \ell}\expo{\ic\pars{6 - 2\ell}x} +{1 \over 2^{6}} \overbrace{6 \choose 3}^{\ds{20}} + {1 \over 2^{6}} \color{#f00}{\sum_{\ell = 4}^{6}{6 \choose \ell}\expo{\ic\pars{6 - 2\ell}x}} \tag{1} \end{align} \begin{align} \color{#f00}{\sum_{\ell = 4}^{6}{6 \choose \ell}\expo{\ic\pars{6 - 2\ell}x}}&= \sum_{\ell = -2}^{0}{6 \choose \ell + 6}\expo{\ic\pars{-6 - 2\ell}x} = \sum_{\ell = 2}^{0}\overbrace{6 \choose -\ell + 6}^{\ds{6 \choose \ell}}\ \expo{\ic\pars{-6 + 2\ell}x} \\[3mm]&= \color{#f00}{\sum_{\ell = 0}^{2}{6 \choose \ell}\expo{\ic\pars{-6 + 2\ell}x}} \end{align} We'll replace this result in $\pars{1}$: \begin{align} \cos^{6}\pars{x}&={5 \over 16} + {1 \over 32}\sum_{\ell = 0}^{2}{6 \choose \ell}\cos\pars{\bracks{6 - 2\ell}x} ={5 \over 16} + {\cos\pars{6x} + 6\cos\pars{4x} + 15\cos\pars{2x}\over 32} \end{align} $$ \color{#00f}{\large\cos^{6}\pars{x} ={5 \over 16} + {15 \over 32}\,\cos\pars{2x} + {3 \over 16}\,\cos\pars{4x} + {1 \over 32}\,\cos\pars{6x}} $$
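A quick numerical sanity check of the final identity, as a sketch assuming numpy:

import numpy as np

x = np.linspace(0, 2 * np.pi, 17)
lhs = np.cos(x)**6
rhs = 5/16 + 15/32 * np.cos(2*x) + 3/16 * np.cos(4*x) + 1/32 * np.cos(6*x)
print(np.allclose(lhs, rhs))   # True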
{ "language": "en", "url": "https://math.stackexchange.com/questions/686766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Concerning the sequence $\Big(\dfrac {\tan n}{n}\Big) $ Is the sequence $\Big(\dfrac {\tan n}{n}\Big) $ convergent ? If not convergent , is it properly divergent i.e. tends to either $+\infty$ or $-\infty$ ? ( Owing to $\tan (n+1)= \dfrac {\tan n + \tan 1}{1- \tan1 \tan n}$ and the non-covergence of $\Big (\tan n \Big)$ it is easy to see that if $\Big(\dfrac {\tan n}{n}\Big) $ is convergent then the limit must be $0$ , but I can not proceed further . )
Since $\pi/2$ is irrational, a theorem of Scott says there exist infinitely many pairs of positive integers $(n,m)$ with $n$ and $m$ odd such that $\left| \dfrac{\pi}{2} - \dfrac{n}{m} \right| < \dfrac{1}{m^2}$. For such $m$ and $n$ we have $|\cos(n)| < |m \pi/2 - n| < 1/m$ and thus $|\tan(n)|/n > k$ for a suitable nonzero constant $k$ (in fact any $k < 2/\pi$ should do, since $m \approx 2n/\pi$ and $|\sin(n)|$ is close to $1$). So the sequence does not converge to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/686841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
the rank of a linear transformation Let $V$ be vector space consisting of all continuous real-valued functions defined on the closed interval $[0,1]$ (over the field of real numbers) and $T$ be linear transformation from $V$ to $V$ defined by $$(Tf)(x) = \int_0 ^1 (3x^3 y - 5x^4 y^2) f(y)\,\mathrm dy$$ Why is $\operatorname{rank}(T) = 2$?
Note that $(Tf)(x) = 3x^3 \int_0^1 y f(y)\,dy - 5 x^4 \int_0^1 y^2 f(y)\,dy$, hence $Tf \in \operatorname{sp} \{ x \mapsto x^3, x \mapsto x^4 \}$, so $\dim {\cal R} T \le 2$. If we choose $f(x) = {2\over 3} -x$, the first integral vanishes and we see $(Tf)(x) = {5 \over 36} x^4$, and if we choose $f(x) = {3\over 4} -x$, the second integral vanishes and we see that $(Tf)(x) ={1 \over 8} x^3$, hence $\dim {\cal R} T = 2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/686947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculating $ x= \sqrt{4+\sqrt{4-\sqrt{ 4+\dots}}} $ If $ x= \sqrt{4+\sqrt{4-\sqrt{ 4+\sqrt{4-\sqrt{ 4+\sqrt{4-\dots}}}}}} $ then find the value of $2x-1$. I tried the usual strategy of squaring and substituting the rest of the series by $x$ again, but could not solve it.
I assume you mean $$ x=\sqrt{4+\sqrt{4-\sqrt{4+\sqrt{4-\sqrt{4\pm\ldots}}}}}$$ so that $$ \begin{align}x&=\sqrt{4+\sqrt{4-x}}\\ x^2&=4+\sqrt{4-x}\\ (x^2-4)^2&=4-x\\ 0 &= x^4-8x^2+x+12= (x^2-x-3)(x^2+x-4)\end{align}$$ Since clearly $x\ge \sqrt 4=2$, the second factor is $x^2+x-4\ge2>0$, which leaves us with the positive solution $x=\frac{1+\sqrt{13}}{2}\approx 2.3027756 $ from the first factor, so that $2x-1=\sqrt{13}$.
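One can also watch the nested radical converge by iterating the defining relation; a small sketch:

x = 2.0
for _ in range(60):
    x = (4 + (4 - x) ** 0.5) ** 0.5    # x = sqrt(4 + sqrt(4 - x))
print(x, (1 + 13 ** 0.5) / 2)          # both approximately 2.3027756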
{ "language": "en", "url": "https://math.stackexchange.com/questions/687173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Find an equation of the tangent line to the curve $y = x\cos(x)$ at the point $(\pi, -\pi)$ I concluded that the equation is $$(y + \pi) = (\cos(x) + x-\sin(x)) (x - \pi)$$ 1) Is this correct so far? Wolfram doesn't seem to process this correctly. 2) How would I expand this to get it in $y$-intercept form? I know I can plug $\pi$ into my derivative, but i'm not sure what to make of plugging $x$ into it. I feel like it wouldn't be correct, or it would be correct but it would be so verbose the data would be unusable.
Your derivative, which we need for slope, is close, but $y'$ should be $$y' =\underbrace{(1)}_{\frac d{dx}(x)}\cdot(\cos x) + (x)\underbrace{( -\sin x)}_{\frac d{dx}( \cos x)}= \cos x -x\sin x$$ Now, for slope itself, we evaluate $y'(\pi) = \cos (\pi) - \pi\sin(\pi) = -1 - 0 = -1$. That gives you the equation of the line: $$y+\pi = -(x -\pi)$$ To get the slope-intercept form, simply distribute the negative on the right, and subtract $\pi$ from each side to isolate $y$: $$y + \pi = -x+ \pi \iff y = -x$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/687234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A three-way duel (probability puzzle) This puzzle is taken from Mathematical Puzzles: A Connoisseur's Collection [Peter Winkler]. I don't understand the solution. Alice, Bob, and Carol arrange a three-way duel. Alice is a poor shot, hitting her target only 1/3 of the time on average. Bob is better, hitting his target 2/3 of the time. Carol is a sure shot. They take turns shooting, first Alice, then Bob, the Carol, then back to Alice, and so on until one is left. What is Alice's best course of action? The solution is that Alice is better of missing than hitting Carol or Bob, so she should shoot into the air. Indeed, then Bob will shot Carol, and it can be shown that it gives the greatest probability of survival for Alice. But I wonder if Bob should not voluntary shoot into the air too, so that Carol will do the same, and no one be shot. If this is the case, Alice survival probability is 1. What do you think of it? What is Alice survival probability?
Arno proved that if Alice, Bob and Carol want to survive, they will reach a stalemate, so as Daniel V points out, the goal should be to win the duel and not merely to survive. But assuming that the goal is to win the duel, if Alice and Bob shoot in the air, why couldn't Carol shoot in the air too? Because she will have to shoot Bob anyway. Indeed, if they keep shooting in the air, when Carol has two bullets left, she has to shoot someone, because she prefers having a chance to win rather than being sure not to win. As a matter of fact, if Bob does not shoot her dead in the beginning, she will shoot him. For Alice, the probability of winning a duel against Bob when it's her turn to shoot is $3/7$, and the probability of winning a duel against Carol is $1/3$, hence her probability of winning is $\frac23\cdot\frac37+\frac13\cdot\frac13=\frac{25}{63}$.
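These values are easy to corroborate by simulation; a rough Monte Carlo sketch of the strategy where Alice fires in the air (assuming Bob targets Carol first and the survivors then duel in turn order):

import random

def alice_wins():
    if random.random() < 2/3:      # Bob kills Carol: Alice duels Bob, Alice shoots first
        p_hit = [1/3, 2/3]
    else:                          # Bob misses, Carol kills Bob: Alice duels Carol
        p_hit = [1/3, 1.0]
    shooter = 0                    # 0 = Alice, 1 = her opponent
    while True:
        if random.random() < p_hit[shooter]:
            return shooter == 0
        shooter = 1 - shooter

n = 200_000
print(sum(alice_wins() for _ in range(n)) / n)   # about 25/63, i.e. 0.397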
{ "language": "en", "url": "https://math.stackexchange.com/questions/687272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 3 }
Why do you need to use the chain rule in differentiation of ln? I understand application of chain rule in the differentiation of a random function $(x^2+3)^3$. But, why do you need to use chain-rule when differentiating something like $\ln(2x-1)$; why won't it just be $\displaystyle\frac 1{2x-1}$? Please help.
Let us first call $$y = \ln(2x - 1)$$ Physicists and engineers use the simpler Leibniz calculus to calculate the differential quotient $\frac{dy}{dx}$ instead of using those pesky Newtonian fluxion dots ($\dot{y}$) or French apostrophes ($y'$) .. :-) We substitute $$u = 2x - 1$$ and get $$y = \ln(u)$$ and $$\frac{dy}{dx} = \frac{d\ln(u)}{dx} = \frac{d\ln(u)}{du}\frac{du}{dx}$$. With $$\frac{d\ln(u)}{du} = \frac{1}{u}$$ and $$\frac{du}{dx} = \frac{d(2x - 1)}{dx} = 2$$ we get $$\frac{dy}{dx} = \frac{1}{2x - 1} \cdot 2$$ Here we used the chain rule in its guise as substitution rule (or coordinate transformation $u \to x$), canceling differentials in a fraction. The important bit is the factor 2 when transforming from $u$ to $x$ coordinates. The substitution technique is even more helpful when solving integrals. $$y = \int dy + C = \int \frac{dy}{dx} dx + C = \int \frac{2}{2x-1} dx + C = \int \frac{du}{u} + C = \ln u + C = \ln(2x - 1) + C$$ for some constant $C$.
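If in doubt, a one-line sympy check confirms the extra factor; a small sketch:

import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.ln(2*x - 1), x))   # 2/(2*x - 1): the inner derivative 2 survives as a factor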
{ "language": "en", "url": "https://math.stackexchange.com/questions/687337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 9, "answer_id": 8 }
What is the number of real roots of $(\log x)^2- \lfloor\log x\rfloor-2=0$ $\lfloor\,\cdot\,\rfloor$ represents the greatest integer Question : What is the number of real roots of $(\log x)^2- \lfloor\log x\rfloor-2=0$. $\lfloor\,\cdot\,\rfloor$ represents the greatest integer function less than or equal to $x$. I know how to solve logarithm equation but due to greatest integer function I am unable to proceed further please help thanks.
Since $ [\log x] \leq \log x $ we have $(\log x)^2-\log x -2 \leq 0$. This is equivalent to $-1 \leq \log x \leq 2$. When $-1 \leq \log x < 0$, $[\log x ] =-1$, so that $\log x =\pm 1$. We see that $\log x =1$ is not in the specified range, hence $\log x =-1$ and $x =\frac{1}{10}$. When $0 \leq \log x < 1$, $[\log x] =0$, so that $\log x =\pm \sqrt{2}$. None of these values is in the range. Similarly we can use $1 \leq \log x < 2$; this will give us $x =10^{\sqrt{3}}$. When $\log x =2$, $[\log x] =2$ and the equation is satisfied. Thus $x =100$ is the third real root.
{ "language": "en", "url": "https://math.stackexchange.com/questions/687437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to ensure Topological Correctness Question: I read through an enormous amount of material on topology and knot-theory in wikipedia, but I still am stuck at the following fundamental problem: Given two representations of closed curves, how do you establish their "linkedness"? So in a really simple example, given the equations for two circles in $\mathbf{R}^3$ how do I tell if they are a Hopf link or disjoint loops? Background: Myself and a conspirator have written a simulator for rope which minimizes stored energies by means of an iterative approach. It works very well for a myriad of test cases, like a hanging segment or a catenary, and we have used it to reproduce the shape of a unit-knit. The problem comes when we tried to add rope-rope interactions. In a nutshell, you have to go to fairly extreme lengths to ensure that the ropes do not pull through each other during the minimization process. I believe this is not the way to go about things, so I am in search of a more principled answer.
Choose a generic 2-plane in 3-space and project your link onto it. Then use the idea in http://en.wikipedia.org/wiki/Linking_number#Computing_the_linking_number. To make it computationally feasible, you might have to approximate your link by a sufficiently close polygonal curve. (This answers what I think is your main question, "Given the equations for two circles in R3 how do I tell if they are a Hopf link or disjoint loops?".) EDIT: As Kevin Carlson points out in the comments, if the links can be disentangled, the linking number will be zero. If the linking number is zero, the links can be disentangled if each component is allowed to pass through itself (but not the other link), but possibly not if this not allowed (see http://en.wikipedia.org/wiki/Whitehead_link for an example).
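On the computational side, the Gauss linking integral also discretizes nicely over polygonal approximations; a rough numerical sketch (assuming numpy; the two test circles below form a Hopf link):

import numpy as np

def linking_number(c1, c2):
    # midpoint-rule discretization of the Gauss linking integral
    d1 = np.roll(c1, -1, axis=0) - c1          # edge vectors of curve 1
    d2 = np.roll(c2, -1, axis=0) - c2          # edge vectors of curve 2
    m1, m2 = c1 + d1 / 2, c2 + d2 / 2          # edge midpoints
    r = m1[:, None, :] - m2[None, :, :]        # pairwise midpoint differences
    triple = np.einsum('ijk,ijk->ij', r, np.cross(d1[:, None, :], d2[None, :, :]))
    return (triple / np.linalg.norm(r, axis=2) ** 3).sum() / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
c1 = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)        # circle in the xy-plane
c2 = np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)    # circle threaded through it
print(linking_number(c1, c2))   # close to +/-1 here; close to 0 for unlinked loops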
{ "language": "en", "url": "https://math.stackexchange.com/questions/687525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Determinant of long exact sequence Let the following be a long exact sequence of free $A$-modules of finite rank: $$0\to F_1\to F_2\to F_3\to...\to F_n\to0$$ I want to show that $\bigotimes_{i=1}^n (\det F_i)^{(-1)^{i}} \cong A$, where the exponent $-1$ means taking the dual. My attempt was to break this into SES's like $$0\to F_i\to F_i\oplus F_{i+1}\to F_{i+1}\to 0$$ from which we know that $$\frac {\det F_i\det F_{i+1}}{\det (F_i\oplus F_{i+1})}\cong A\tag{1}$$ Let me use the notation $$\begin{aligned}d_i &:= \det F_i\\ d_{i+(i+1)} &:= \det(F_i\oplus F_{i+1})\end{aligned}$$ From $(1)$, one readily gets that $d_i d_{i+1} \cdot d_{i+1} d_{i+2} = d_{i+(i+1)}d_{(i+1)+(i+2)}$ from which it follows that $$\frac {d_i d_{i+2}}{d_{i+1}}=\frac{d_{i+(i+1)}d_{(i+1)+(i+2)}}{d_{i+1}^3}$$ But I am stuck at this step.
The determinant can be determined for every finitely generated projective module, because these are precisely the locally free modules of finite rank (which doesn't have to be constant, but it is locally constant, and on each constant piece we take the corresponding exterior power). It is additive on short exact sequences (see for example Daniel Murfet's notes). Now I claim that the statement holds more generally for locally free modules of finite rank (actually also for locally free sheaves of finite rank). This generalization is needed in order to make the induction work: Let $K$ be the kernel of $F_{n-1} \to F_n$. We have an exact sequence $0 \to K \to F_{n-1} \to F_n \to 0$, which splits because $F_n$ is projective. Since $F_{n-1}$ is finitely generated projective, it follows that $K$ is finitely generated projective. Hence, $\det(K)^{-1} \otimes \det(F_{n-1}) \otimes \det(F_n)^{-1} \cong A$, i.e. $\det(K) \cong \det(F_{n-1}) \otimes \det(F_n)^{-1}$ We have the long exact sequence $$0 \to F_1 \to \dotsc \to F_{n-2} \to K \to 0.$$ By induction hypothesis, we have $$\det(F_1)^{-1} \otimes \dotsc \otimes \det(K)^{\pm 1} \cong A.$$ Now it follows $$\det(F_1)^{-1} \otimes \dotsc \otimes \det(F_{n-1})^{\pm 1} \otimes \det(F_n)^{\mp 1} \cong A.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/687687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Can someone help me understand Cramer's Rule? I'm taking notes for my class and they define Cramer's rule and afterwards give us an example problem. \begin{align*} x_1 + 2\,x_2 =& 2\\ -x_1 + 4\,x_2 =& 1 \end{align*} They compute $$\det(a_1(b)) = \begin{vmatrix}2&2\\ 1&4\end{vmatrix}$$ and then they compute $$\det (a_2(b)) = \begin{vmatrix}1&2\\-1&1\end{vmatrix}\text{.}$$ I was wondering why, for the first determinant, the column $(2,1)$ comes first, whereas for the second determinant it comes second. Shouldn't the column $(2,1)$ be the second column for both determinants?
No, the matrices are correct. For Cramer's rule, you replace the column corresponding to the variable with the column of numbers on the other side of the equals sign. As Amzoti pointed out, to get to the final answer, you divide both $\det(a_1)$ and $\det(a_2)$ by the same determinant of the matrix of the coefficients (with no replacement).
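Concretely, for the system in the question, a small numpy sketch of the rule:

import numpy as np

A = np.array([[1.0, 2.0], [-1.0, 4.0]])    # coefficient matrix
b = np.array([2.0, 1.0])                   # right-hand side

det_A = np.linalg.det(A)                   # 6
for j in range(2):
    Aj = A.copy()
    Aj[:, j] = b                           # replace column j by b
    print(np.linalg.det(Aj) / det_A)       # x1 = 6/6 = 1.0, x2 = 3/6 = 0.5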
{ "language": "en", "url": "https://math.stackexchange.com/questions/687888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\sin^2 \alpha + \sin^2 \beta - \cos \gamma < M$ given that the sum of the angles is $\pi$ Question: Find the least real value of $M$ such that the following inequality holds: $$\sin^2 \alpha + \sin^2 \beta - \cos \gamma < M$$ Given that $\alpha, \beta, \gamma \in \mathbb{R}^+$, $\alpha + \beta + \gamma = \pi$ My attempt: Step 1: Replace $\sin^2 t$ with $1 - \cos^2 t$: $2 - \cos^2 \alpha - \cos^2 \beta - \cos \gamma < M$ Furthermore, note that $- \cos \gamma = \cos (\pi - \gamma) = \cos(\alpha + \beta)$ In addition, use this identity: $-\frac{1}{2}(\cos(2x) + \cos(2y)) = -\cos^2 x - \cos^2 y + 1$ to arrive at the following: $$1 - \frac{1}{2}(\cos(2 \alpha) + \cos(2 \beta)) + \cos(\alpha + \beta) < M$$ And, conveniently, $\frac{1}{2} (\cos(2\alpha) + \cos(2\beta)) = \cos(\alpha + \beta) \cos(\alpha - \beta)$ $$1 - (\cos(\alpha + \beta))(\cos(\alpha - \beta) - 1) < M$$ $$ (\cos (\alpha + \beta))(1 - \cos(\alpha - \beta)) < M - 1$$ From the inequality $ab \leq \frac{(a + b)^2}{4}$, we have that $$(\cos (\alpha + \beta))(1 - \cos(\alpha - \beta)) \leq \frac{(1 - 2 \sin \alpha \sin \beta)^2}{4} < \frac{1}{4}$$ Since $0 < 2 \sin \alpha \sin \beta < 2$ That's all I have so far. Is it logical to then say that $M - 1 = \frac{1}{4}$? I don't think it is because that doesn't make sense to me (it's asking for the least $M$, and how do I know that $\frac{1}{4}$ is minimized?), but I am not very experienced by any means in dealing with inequalities. Though I do see that $\frac{5}{4}$ is approachable with $\alpha = \frac{\pi}{3} - h, \beta = h, \gamma = \frac{2 \pi}{3}$ where $h$ is an infinitesimally small number. Can anyone give me some guidance to finish up this question?
It should be a lot easier to look at the function: $$\sin^2(x)+ \sin^2(y)-\cos(\pi - x - y)$$ And note it is symmetric when interchanging $x$ and $y$, and noting that comparing its derivatives to zero leads to $\sin(2x)=\sin(2y)$. Thus $x=y+n\pi$. Now find the maximum value of the function: $$\sin^2(x)+\sin^2(x+n \pi)-\cos(\pi-2 x-n \pi)$$ And show that $M=3$. Edit: As the OP's question constrains $x,y>0$, and since we have shown that there are no local maxima in the region, the maximum must lie on the boundary, i.e. either $x$ or $y$ must be either $0$ or $\pi$. Examine all four options and find for instance that when $y=\pi$: $$M=\max_x\ \sin^2(x)-\cos(x) = 5/4$$
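A brute-force scan over the admissible triangle backs this up; a sketch assuming numpy:

import numpy as np

a = np.linspace(1e-4, np.pi, 1500)
A, B = np.meshgrid(a, a)
mask = A + B < np.pi                                   # keeps gamma = pi - A - B positive
f = np.sin(A)**2 + np.sin(B)**2 - np.cos(np.pi - A - B)
print(f[mask].max())                                   # about 1.25 = 5/4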
{ "language": "en", "url": "https://math.stackexchange.com/questions/687974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I represent such a transformation? Let's say I have a 2d rectangle defined by $ [0,x_0] \times [0,y_0]$. Now let's say I cut out the middle rectangle $[\frac{1}{3} x_0, \frac{2}{3} x_0] \times [\frac{1}{3} y_0, \frac{2}{3} y_0]$. Now suppose I take the hyperreal extension of this rectangle. I then "fill" back up the middle rectangle. I increase the rectangle to $ [0,x_0 + \varepsilon] \times [0,y_0 + \varepsilon]$ where $\varepsilon \in \mathbb{R}_{\varepsilon}$ (the infinitesimals). I then cut out the analogous middle rectangle of this square. I then proceed to take the standard part of this figure. Has the standard square gotten any bigger? If I repeat this process $ N \in \mathbb{N}^*$ number of times, what can be said about the standard part? Will I see the entire figure continuously increase in measure? Will I see nothing at all? The goal here is to make something like this process that is continuous. I want to be able to define something that increases the overall measure of the figure while still preserving the structure of the rectangle that has been cut out. Any thoughts? I essentially just want to be able to increase the measure, while preserving structure. Need more exposure on this question... Offering 300 rep bounty for proper answer.
As far as looking for hyperreal approaches to constructing the carpet, which is how I understood your idea, I would suggest looking first at hyperreal approaches to constructing nowhere differentiable functions. This was dealt recently in a paper by McGaffey here: http://arxiv.org/abs/1306.6900
{ "language": "en", "url": "https://math.stackexchange.com/questions/688048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Composite with a zero arrow Why must any composite with a zero arrow itself be a zero arrow? I interpret this as $a \rightarrow z \rightarrow b \rightarrow c = a \rightarrow z \rightarrow c$ (the zero arrow in the composite is $a \rightarrow z \rightarrow b$).
I assume $z$ is a zero object. That is $z$ is initial and terminal. Being initial, for all object $c$ there is a unique arrow $z \to c$, so for any arrow $b \to c$, $$ (z \to b \to c) = (z \to c) .$$ Being terminal, for all object $a$ there is a unique arrow $a \to z$. Composing (on the left) by this arrow for some $a$, you get $$ (a \to z \to b \to c) = (a \to z \to c).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/688145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The unitary implementation of $*$-isomorphisms of $B(H)$ Is it possible to construct a $*$-isomorphism of the (factor von Neumann) algebra $B(H)$ which is not unitarily implementable?
Let $\theta:B(H)\to B(H)$ be a $*$-automorphism. Fix an orthonormal basis $\{\xi_j\}$ of $H$, and write $E_{jj}$ for the corresponding rank-one projections, i.e. $E_{jj}\xi=\langle\xi,\xi_j\rangle\,\xi_j$. We can expand $\{E_{jj}\}_j$ to a system of matrix units $\{E_{kj}\}_{k,j}$, where $E_{kj}\xi=\langle\xi,\xi_j\rangle\,\xi_k$. Next notice that $\theta(E_{11})$ is also a rank-one projection (because $\theta$ being an automorphism forces it to be minimal). Let $\eta_1$ be a unit vector in the range of $\theta(E_{11})$. Define $U:H\to H$, linear, by $U\xi_j=\theta (E_{j1})\eta_1$. Then $$\langle U\xi_j,U\xi_k\rangle=\langle \theta (E_{1k}E_{j1})\eta_1,\eta_1\rangle=\delta_{kj}=\langle\xi_j,\xi_k\rangle. $$So $U $ is a unitary and $\{\eta_j\}$, where $\eta_j=U\xi_j $, is an orthonormal basis (the observation that $\theta(E_{11})$ is minimal is necessary to guarantee that $U$ is onto). Note that $\theta(E_{kj})$ is a rank-one operator sending $\eta_j\to\eta_k$ (because $\theta(E_{kj})\theta(E_{j1})=\theta(E_{kj}E_{j1})=\theta(E_{k1})$); this means that $\theta(E_{kj})\xi=\langle\xi,\eta_j\rangle\,\eta_k$. Then $$ UE_{kj}U^*\eta_t=UE_{kj}\xi_t=\delta_{j,t}U\xi_k=\delta_{j,t}\eta_k=\langle\eta_t,\eta_j\rangle\,\eta_k=\theta(E_{kj})\eta_t. $$ As finite linear combinations of the $E_{kj}$ are weakly dense and $\theta$ is normal (which implies weak-continuous on bounded sets), we get $\theta=U\cdot U^*$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/688245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Product of inverse matrices $ (AB)^{-1}$ I am unsure how to go about doing this inverse product problem: The question says to find the value of each matrix expression where A and B are the invertible 3 x 3 matrices such that $$A^{-1} = \left(\begin{array}{ccc}1& 2& 3\\ 2& 0& 1\\ 1& 1& -1\end{array}\right) $$ and $$B^{-1}=\left(\begin{array}{ccc}2 &-1 &3\\ 0& 0 &4\\ 3& -2 & 1\end{array}\right) $$ The actual question is to find $ (AB)^{-1}$. $ (AB)^{-1}$ is just $ A^{-1}B^{-1}$ and we already know matrices $ A^{-1}$ and $ B^{-1}$ so taking the product should give us the matrix $$\left(\begin{array}{ccc}11 &-7 &14\\ 7& -4 &7\\ -1& 1 & 6\end{array}\right) $$ yet the answer is $$ \left(\begin{array}{ccc} 3 &7 &2 \\ 4& 4 &-4\\ 0 & 7 & 6 \end{array}\right) $$ What am I not understanding about the problem or what am I doing wrong? Isn't this just matrix multiplication?
Note that matrix multiplication is not commutative, i.e., you'll not always have $AB = BA$. Now, say the matrix $A$ has the inverse $A^{-1}$ (i.e. $A \cdot A^{-1} = A^{-1}\cdot A = I$); and $B^{-1}$ is the inverse of $B$ (i.e. $B\cdot B^{-1} = B^{-1} \cdot B = I$). Claim: $B^{-1}A^{-1}$ is the inverse of $AB$. So basically, what I need to prove is: $(B^{-1}A^{-1})(AB) = (AB)(B^{-1}A^{-1}) = I$. Note that, although matrix multiplication is not commutative, it is, however, associative. So: (1) $(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = (B^{-1}I)B = B^{-1}B=I$; (2) $(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = (AI)A^{-1} = AA^{-1}=I$. So, the inverse of $AB$ is indeed $B^{-1}A^{-1}$, and NOT $A^{-1}B^{-1}$.
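For the matrices in the question this is easy to verify numerically; a sketch assuming numpy:

import numpy as np

A_inv = np.array([[1, 2, 3], [2, 0, 1], [1, 1, -1]])
B_inv = np.array([[2, -1, 3], [0, 0, 4], [3, -2, 1]])

print(B_inv @ A_inv)   # [[3 7 2], [4 4 -4], [0 7 6]], the book's answer for (AB)^(-1)
print(A_inv @ B_inv)   # a different matrix: the order really matters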
{ "language": "en", "url": "https://math.stackexchange.com/questions/688339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 7, "answer_id": 2 }
Rounding up to $3$ significant figures when adding If $a,\ b$ and $c$ are real numbers and you are required to find $a + b + c$ to $3$ significant figures, to how many significant figures can $a,\ b$ and $c$ be rounded beforehand so that the sum still gives the correct result?
In general, if you want $\mathrm{round}_3(a+b+c) = \mathrm{round}_3\big(\mathrm{round}_{3+k}(a)+\mathrm{round}_{3+k}(b)+\mathrm{round}_{3+k}(c)\big)$ you should make $k$ as large as you possibly can. A simple example is $a=499001,b=499.001,c=0.499001$. * If you don't do any rounding before addition you get $\mathrm{round}_3(a+b+c) = \mathrm{round}_3(499001 + 499.001 + 0.499001) = \mathrm{round}_3(499500.500001) = 500000$. * If you round $a,b,c$ before addition with $k<3$ you get $\mathrm{round}_3\big(\mathrm{round}_{3+k}(a)+\mathrm{round}_{3+k}(b)+\mathrm{round}_{3+k}(c)\big) = \mathrm{round}_3(499000+499+0.499) = \mathrm{round}_3(499499.499)=499000$. This non-intuitive behaviour results from the fact that rounding functions "jump", i.e. they are unstable at the point where they round up or down.
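The jump is easy to reproduce; a small sketch where round_sig is a hypothetical helper rounding to a given number of significant figures (it assumes nonzero input):

from math import floor, log10

def round_sig(v, sig):
    # round v to `sig` significant figures (assumes v != 0)
    return round(v, sig - 1 - floor(log10(abs(v))))

a, b, c = 499001, 499.001, 0.499001
print(round_sig(a + b + c, 3))                                  # 500000.0
print(round_sig(sum(round_sig(v, 5) for v in (a, b, c)), 3))    # 499000.0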
{ "language": "en", "url": "https://math.stackexchange.com/questions/688433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is torsion subgroup of elliptic curve birationally invariant? It's probably a very basic question: Having two birationally equivalent elliptic curves over $\mathbb{Q}$ - is the torsion subgroup unchanged under the birational equivalence?
The group structure of the torsion subgroup may be the same, but the group law may look very different! I find the following example to be interesting and related to your question. Let $E:y^2=x^3+1$ with zero at $[0,1,0]$, and consider $E':y^2=x^3+1$ where we now declare zero to be $[2,3,1]$. Then, $E$ and $E'$ are clearly birationally equivalent via the identity map but zero in $E$ does not map to zero in $E'$. Nonetheless, their torsion subgroup is in both cases $\mathbb{Z}/6\mathbb{Z}$, but the group law is different. For instance, let $P=[-1,0,1]$ and $Q=[0,1,1]$, then $$P+_E Q = [2,-3,1]$$ while $$P+_{E'} Q = [0,-1,1].$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/688520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Whether the derivative of $\ln(x)$ is $\frac{1}{x}$ for $x>0$ only Is the derivative of $\ln(x)$ equal to $\frac{1}{x}$ for $x>0$ only? Can't we write $$\frac{d}{dx} {\ln|x|} = \frac{1}{x} $$ so that we can easily get the corresponding integration formula for $\frac{1}{x}$ as $$\ln|x|$$ I have gone through this, but it discusses only integration: Is the integral of $\frac{1}{x}$ equal to $\ln(x)$ or $\ln(|x|)$?
Since $|x|=\sqrt{x^2}$ we have $\bigl(\ln|x|\bigr)'=\bigl(\ln(\sqrt{x^2})\bigr)'=\dfrac{1}{\sqrt{x^2}}\cdot\dfrac{1}{2\sqrt{x^2}}\cdot2x=\dfrac{x}{|x|^2}=\dfrac{x}{x^2}=\dfrac{1}{x}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/688607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Elliptic curve over $\mathbb{Q}$ cannot have $\mathbb{Z}_4\times\mathbb{Z}_4$ as a subgroup Show that an elliptic curve over $\mathbb{Q}$ cannot have $\mathbb{Z}_4\times\mathbb{Z}_4$ as a subgroup. We've been told that for this problem, we are not allowed to use Mazur's Theorem. Unfortunately that is the only way I can think to answer this question. It was suggested that a geometric argument can be made. Can someone point me in a proper direction? I was thinking of using Nagell-Lutz to try to show there may be more than 3 elements with an order 2. I'm not sure it can be done though.
Hint: the existence of the Weil pairing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/688703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Number of reflection symmetries of a basketball Excerpt from John Horton Conway, The Symmetries of Things, pg. 12. Basketballs have two planes of reflective symmetry, as do tennis balls. I read this sentence and it immediately struck me as incorrect: from my understanding of the pattern of lines on a basketball, there are three planes of reflective symmetry. Two correspond to the two distinct great circles in the pattern, and the third corresponds to the plane mutually perpendicular to these. I agree with the statement regarding tennis balls. But J.H. Conway is a far more respected, brilliant, and knowledgeable mathematician than I could ever hope to be, so despite my certainty I have literally spent the past 15 minutes trying to think of how I might have overlooked some detail. (I think it is safe to say we can ignore the branding/logo on the ball, as well as the pump hole.) Who is correct? If I am, is this error acknowledged somewhere (the book was published in 2008)?
In the basketball I hold in my hands just now, there really are just two planes of symmetry. The plane perpendicular to the two great circles is not a symmetry. This is because the lines which are not great circles intersect one of the great circles near one of the poles, but the other great circle near the other pole. This is not visible in the picture you made, and will be hard to visualize in a non-distorted fashion in any static image. The correct place to look for errata would likely be http://www.mit.edu/~hlb/Symmetries_of_Things/SoTerrors.html, but this issue isn't reported there. Probably because there are more basketballs marked in the way I just described.
{ "language": "en", "url": "https://math.stackexchange.com/questions/688749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Textbook for graduate number theory I am attending a graduate number theory course; the professor did not assign any textbook. The material is at an advanced/algebraic level, covering topics such as the ring of Gaussian integers, quadratic number fields, and especially Euclidean domains. Any suggestions for textbooks that I can read for self-study? I prefer books that have lots of problems and worked-out solutions. Thank you for your time and help.
A good book is Problems in Algebraic Number Theory by Murty. It covers all of those things, and more, and is 'problem-orientated,' so you do most of the work!
{ "language": "en", "url": "https://math.stackexchange.com/questions/688832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
If $(\cos \alpha + i \sin \alpha )^n = 1$ then $(\cos \alpha - i \sin \alpha )^n = 1$ Prove that if $(\cos \alpha + i \sin \alpha )^n = 1$ then $(\cos \alpha - i \sin \alpha )^n = 1$. What should I use? De Moivre's formula? Exponential form? I tried, but it doesn't work.
Yet another approach: Since $(\cos\alpha+i\sin\alpha)^n=1,$ then $$(\cos\alpha-i\sin\alpha)^n=(\cos\alpha-i\sin\alpha)^n(\cos\alpha+i\sin\alpha)^n=\bigl((\cos\alpha-i\sin\alpha)(\cos\alpha+i\sin\alpha)\bigr)^n$$ Now, expand $(\cos\alpha-i\sin\alpha)(\cos\alpha+i\sin\alpha)$. What can we do then?
{ "language": "en", "url": "https://math.stackexchange.com/questions/688894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
complement of compact set is connected Let $A$ be a compact subset of $\mathbb R$, the real numbers. Prove that the complement of $A$ in the complex numbers $\mathbb C$ is connected. My thoughts: If $A$ is compact then it is bounded, so it is contained in a finite union of bounded intervals. So if its complement in $\mathbb C$ were disconnected, it would imply $\mathbb C$ was disconnected, a contradiction.
My thoughts: Whenever possible, I prefer dealing with path-connected spaces to connected spaces, because I can more easily visualize them. If $A \subset \mathbb{R}$ is compact, then there's an $R > 0$ such that $A \subset [-R, R]$. Now if we have two points $z, w \in \mathbb{C} \setminus A$, then we have a couple of easy cases: * If only one of the two points happens to lie on $\mathbb{R}$ (say it's $z$), we can move vertically from $z$ to $\Re z + i\,\Im w$ and then move horizontally along that height to $w$. * If they happen to have the same nonzero imaginary part, then just connect them linearly. * If they happen to have differing nonzero imaginary parts, just move horizontally (left or right) until you're past $[-R,R]$, where you can safely move vertically across the real axis, and then you're free to move linearly. * If they both happen to lie on $\mathbb{R}$, move vertically from one point into a half plane, move horizontally until you have the appropriate real coordinate, and then move vertically again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/688994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $p$ is an odd prime and $a$ is a positive integer not divisible by p, then the congruence has either no solution or 2 incongruent solutions My question is as follows: Show that if $p$ is an odd prime and $a$ is a positive integer not divisible by p, then the congruence $x^2 \equiv a \pmod{p}$ has either no solution or exactly two incongruent solutions. Now, I can show that the congruence cannot have exactly one solution. Suppose $z$ is a solution. Then $z^2 \equiv (-z)^2 \equiv a \pmod{p}$, and thus, $-z$ is also a solution. If $z \equiv -z \pmod{p}$, then $2z \equiv 0 \pmod{p}$, so it must be that either $p$|$2$ or $p$|$z$. But since $p$ is odd and prime, $p$ cannot divide 2, and if $p$|$z$, then $p$|$z^2$ and so $a \equiv z^2 \equiv 0 \pmod{p}$, which implies $p$|$a$, a contradiction. Thus, $z$ and $-z$ are incongruent modulo p. Now, if I can show that the congruence has no more than 2 solutions, then I believe the problem is solved. How can I demonstrate this?
Hint $\ $ prime $\,p\mid(x-b)(x+b)\,\Rightarrow\, p\mid x-b\ \,$ or $\,\ p\mid x+b\,$ by uniqueness of prime factorizations (or some equivalent, e.g. Euclid's Lemma). Alternatively if $\,c\not\equiv \pm b\,$ is a root then $\,(x-b)(x+b) \equiv (x-c)(x+c)$ contra polynomials over a field have unique prime factorizations. Remark $\ $ More generally over a field (or domain), a nonzero polynomial has no more roots than its degree. See here for one proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/689138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Class divided into 5, probability of 2 people in the same team This might be a very simple question, but: a class of 25 students is divided into 5 teams of 5 each. What is the probability of student X and Y being in the same team? is it just 4/25?
Imagine that our heroes, A and B, are assigned to teams, in that order, with the rest being assigned later. Whatever team A is assigned to, there are $4$ empty spots on that team. The probability B is given one of these spots is $\frac{4}{24}$. Or else we can do more elaborate counting. Imagine the teams are labelled (it makes no difference to the probability). There are $\binom{25}{5}\binom{20}{5}\binom{15}{5}\binom{10}{5}\binom{5}{5}$ ways to assign the people to labelled teams, all equally likely. Now we count the number of ways A and B can end up on the same team. Which team? It can be chosen in $5$ ways. The other $3$ people on that team can be chosen in $\binom{23}{3}$ ways. And the rest of the assignments can be done in $\binom{20}{5}\binom{15}{5}\binom{10}{5}\binom{5}{5}$ ways. Divide. We get that the probability is $\frac{5\binom{23}{3}}{\binom{25}{5}}$. This simplifies to $\frac{1}{6}$.
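A simulation agrees with $\frac16$; a rough sketch:

import random

trials, same = 100_000, 0
for _ in range(trials):
    people = list(range(25))
    random.shuffle(people)
    team = {p: i // 5 for i, p in enumerate(people)}   # consecutive blocks of 5
    if team[0] == team[1]:                             # persons 0 and 1 play X and Y
        same += 1
print(same / trials)   # about 0.1667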
{ "language": "en", "url": "https://math.stackexchange.com/questions/689216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Interesting and unexpected applications of $\pi$ $\text{What are some interesting cases of $\pi$ appearing in situations that do not seem geometric?}$ Ever since I saw the identity $$\displaystyle \sum_{n = 1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$ and the generalization of $\zeta (2k)$, my perception of $\pi$ has changed. I used to think of it as rather obscure and purely geometric (applying to circles and such), but it seems that is not the case since it pops up in things like this which have no known geometric connection as far as I know. What are some other cases of $\pi$ popping up in unexpected places, and is there an underlying geometric explanation for its appearance? In other words, what are some examples of $\pi$ popping up in places we wouldn't expect?
Too long for a comment: What are some interesting cases of $\pi$ appearing in situations that are not geometric ? None! :-) You did well to add “do not seem” in the title! ;-) All $\zeta(2k)$ are bounded sums of squares, are they not ? And the equation of the circle, $x^2+y^2=r^2$, also represents a bounded sum of squares, does it not ? :-) Likewise, if you were to read a proof of why $\displaystyle\int_{-\infty}^\infty e^{-x^2}dx=\sqrt\pi$ , you would see that it also employs the equation of the circle! $\big($Notice the square of x in the exponent ?$\big)$ :-) Similarly for $\displaystyle\int_{-1}^1\sqrt{1-x^2}\,dx=\int_0^\infty\frac{dx}{1+x^2}=\frac\pi2$ , both of which can quite easily be traced back to the Pythagorean theorem. The same goes for the Wallis product, whose mathematical connection to the Basel problem is well known, the former being a corollary of the more general infinite product for the sine function, established by the great Leonhard Euler. $\big($Generally, all products of the form $\prod(1\pm a_k)$ are linked to sums of the form $\sum a_k\big)$. It is also no mystery that the discrete difference of odd powers of consecutive numbers, as well as its equivalent, the derivative of an odd power, is basically an even power, i.e., a square, so it should come as no surprise if the sign alternating sums $(+/-)$ of the Dirichlet beta function also happen to depend on $\pi$ for odd values of the argument. :-) Euler's formula and his identity are no exception either, since the link between the two constants, e and $\pi$, is also well established, inasmuch as the former is the basis of the natural logarithm, whose derivative describes the hyperbola $y=\dfrac1x$, which can easily be rewritten as $x^2-y^2=r^2$, following a rotation of the graphic of $45^\circ$. As for Viete's formula, its geometrical and trigonometrical origins are directly related to the half angle formula known since before the time of Archimedes. Etc. $\big($And the list could go on, and on, and on $\!\ldots\!\big)$ Where men see magic, math sees design. ;-) Hope all this helps shed some light on the subject.
{ "language": "en", "url": "https://math.stackexchange.com/questions/689315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76", "answer_count": 30, "answer_id": 6 }
Correspondence as a graph of a multifunction Suppose I'd like to say that the projection of $R\subset X\times Y$ on $X$ is the whole of $X$. That is, $R$ is the graph of a certain multifunction, or equivalently it is a left-total relation. I do remember seeing somewhere the term correspondence being used exactly for such purposes. Is such terminology commonly used in set theory? Could you recommend some classical textbooks where it is used?
It is standard terminology in mathematical economics. See for example, Aliprantis & Border 2007 Infinite Dimensional Analysis: A Hitchhiker's Guide. Terminology varies as to whether any subset of $X\times Y$ is a correspondence or whether the projection to $X$ has to be surjective (a "nonempty-valued correspondence"). Also, some people define it as a relation and some as a function with sets as values. Mathematical economics and related fields such as optimization theory are some of the main users of the concept. The term originated probably with Nicolas Bourbaki and was imported into economics by Gerard Debreu. In Bourbaki's book on sets, correspondences are treated in Chapter II §3, where a correspondence is defined as a triple of sets $(G,A,B)$ with $G\subseteq A\times B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/689350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Using discriminants to find order of extension Any hints on how to show $[G:H]^{2}=\frac{\operatorname{disc}(H)}{\operatorname{disc}(G)}$, where $G,H$ are free abelian groups of rank $n$ and $H\subset G\subset K$, where $K$ is a number field? Alternative formulation: how to relate $[R:\mathbb{Z}[a]]$ and $\operatorname{disc}(R)$, $\operatorname{disc}(\mathbb{Z}[a])$? Thanks
I assume that what you mean is something to this effect: If $\mathcal{O}\subseteq\mathcal{O}'$ are orders of $K$ (for some number field $K$) then $[\mathcal{O}':\mathcal{O}]\text{disc}(\mathcal{O}')=\text{disc}(\mathcal{O})$. This follows immediately from the following theorem of algebra: Theorem: Let $R$ be a PID, and $M$ a free-module of rank $n$. Suppose that $N\leqslant M$ is a free module of rank $n$. Then, there exists a basis $\{b_1,\ldots,b_n\}$ of $M$ and $r_1,\ldots,r_n\in R$ such that $\{r_1b_1,\ldots,r_nb_n\}$ is a basis for $N$. Do you see why? (Hint: what are the $r_i$?)
{ "language": "en", "url": "https://math.stackexchange.com/questions/689443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Should I divide this permutation problem into cases or are there any quicker methods? I have got an idea for the second question but I think my approach is too long and I would like to ask whether there are any other quicker methods? Eight cards are selected with replacement from a standard pack of 52 playing cards, with 12 picture cards, 20 odd cards and 20 even cards. (a) How many different sequences of eight cards are possible? (b) How many of the sequences in part (a) will contain three picture cards, three odd-numbered cards and two even-numbered cards? My solutions: (a) $52^8$ (b) Divide into cases of: when none of the picture cards are together, when two of the picture cards are together, when all of the picture cards are together; similarly for the odd and even numbered cards. I am not sure whether my working is correct but pretty sure there should be a faster way for part (b). Just for reference, the solution is $3.097\times 10^{12}$. Many thanks for all the help!
There are $\binom{8}{3}$ ways to choose the places in the sequence of $8$ cards that the picture cards will occupy. For every such choice, the places can be filled in $12^3$ ways. So this part of the job can be done in $\binom{8}{3}\cdot 12^3$ ways. For every way of dealing with the picture cards, there are $\binom{5}{3}$ ways to decide where the odd cards will go. These places can be filled in $20^3$ ways. For every way of getting this far, there are $\binom{2}{2}$ ways to decide where the even cards will go (of course this is $1$). And there are $20^2$ ways to fill these spaces.
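Putting the pieces together gives the total count: $$\binom{8}{3}\cdot 12^3\cdot\binom{5}{3}\cdot 20^3\cdot\binom{2}{2}\cdot 20^2 = 56\cdot 1728\cdot 10\cdot 8000\cdot 1\cdot 400 = 3\,096\,576\,000\,000 \approx 3.097\times 10^{12}.$$ (The reference value $3.907\times 10^{12}$ quoted in the question appears to transpose two digits of $3.097\times 10^{12}$.)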
{ "language": "en", "url": "https://math.stackexchange.com/questions/689612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does this line in Lang's "Algebra" mean? Lang's "Algebra" says the following: Let $S$ be a set. A mapping $S\times S\to S$ is sometimes called a law of composition (of $S$ into itself). I always thought $S\times S\to S$ implied a binary operation on two elements of $S$, and $S$ being closed on that binary operation. I also thought the word "composition" belonged to the world of mappings. I don't see how a binary operation can be called a mapping. EDIT: I must confess I have come across such a usage of the term "composition" before, but could never quite get the motivation behind it. Really hoping for an elaborate answer to shed light on this issue; something that I am sure confuses other autodidacts out there too.
The phrase “law of composition” is a direct translation from the French loi de composition (usually with interne added). See Bourbaki, Éléments de Mathématique. It's just a name and has nothing to do, in general, with function composition. Indeed, Lang says that sometimes a map $S\times S\to S$ is called a law of composition, probably aware of the fact that this locution is not much used in English-speaking countries. It used to be frequent also in Italian texts, under the influence of Bourbakism. A binary operation is just a map (or mapping, if you prefer): to any (ordered) pair of elements in a set $S$ it associates an element of $S$. So it's best treated as a map (mapping, function, application, representation are also used), without introducing new concepts that aren't useful.
{ "language": "en", "url": "https://math.stackexchange.com/questions/689670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Properties of the cofinite topology on an uncountable set Let $X$ be an uncountable set and let $\mathcal T = \{U \subseteq X : U = \varnothing\text{ or }U^c \text{ is finite} \}$. Is the topological space $(X,\mathcal T)$

*separable?
*Hausdorff?
*second-countable (has a countable basis)?
*first-countable (has a countable basis at each point)?

I am confident about (2): $(X,\mathcal T)$ is not a Hausdorff space, because we have the result that $(X,\mathcal T)$ is Hausdorff iff $D = \{(x,x) : x \in X \}$ is closed; but $D$ is closed iff $D^c$ is open, and $D^c$ is open iff $D$ is finite or $D^c = \varnothing$, which is not possible. So $(X,\mathcal T)$ is not Hausdorff. Please tell me about the other three options. Thank you.
Hints:

*Show that every infinite set is dense; in particular, the countably infinite sets. (Fix an infinite $A \subseteq X$, and show that $U \cap A \neq \varnothing$ for every nonempty open $U \subseteq X$.)
*Show that any two nonempty open sets have nonempty intersection.
*If $\mathcal{B} \subseteq \mathcal{T}$ is countable, then the set $A = \bigcup_{U \in \mathcal{B}} ( X \setminus U )$ is countable (since it is a countable union of finite sets). Pick $x \in X \setminus A$, and consider $V = X \setminus \{ x \}$. (Is it a union of sets in $\mathcal{B}$?)
*Very similar to the above.

For your attempt at the non-Hausdorffness, you need to be a lot more careful. You need to show that $D$ is not closed in the square $X \times X$. In order to proceed as you have done, you would first have to show that the product $X \times X$ also has the co-finite topology (since you make an appeal to this). However, this is not true. If $A, B \subseteq X$ are finite nonempty, then $( X \setminus A ) \times ( X \setminus B )$ is open in $X \times X$, but it is not co-finite (since for any $a \in A$ the uncountable set $\{ \langle a , x \rangle : x \in X \}$ is disjoint from this set).
{ "language": "en", "url": "https://math.stackexchange.com/questions/689756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Heisenberg XXX spin model Let $\pi$ be the standard representation of $sl_2(\mathbb{C})$ on $\mathbb{C}^2$. Let $p_1,p_2,p_3$ be the three Pauli matrices. Define $S^a:=\frac{1}{2}\pi(p_a)$. What do these matrices look like?
Using

In[6]:= s1 = -I/2 PauliMatrix[1];
        s2 = -I/2 PauliMatrix[2];
        s3 = -I/2 PauliMatrix[3];

we can verify that $S^1$, $S^2$ and $S^3$ satisfy the commutation relations of $\mathfrak{sl}_2(\mathbb{C})$:

In[8]:= {s1.s2 - s2.s1 - s3, s3.s1 - s1.s3 - s2, s2.s3 - s3.s2 - s1}

Out[8]= {{{0, 0}, {0, 0}}, {{0, 0}, {0, 0}}, {{0, 0}, {0, 0}}}

Here is their explicit form as given by Mathematica: $$ S^1 = \left( \begin{array}{cc} 0 & -\frac{i}{2} \\ -\frac{i}{2} & 0 \\ \end{array} \right) \quad S^2 = \left( \begin{array}{cc} 0 & -\frac{1}{2} \\ \frac{1}{2} & 0 \\ \end{array} \right) \quad S^3 = \left( \begin{array}{cc} -\frac{i}{2} & 0 \\ 0 & \frac{i}{2} \\ \end{array} \right) $$
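For readers without Mathematica, the same check can be done in Python/NumPy (an editorial sketch, not part of the original answer):

import numpy as np

# the three Pauli matrices
p1 = np.array([[0, 1], [1, 0]], dtype=complex)
p2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
p3 = np.array([[1, 0], [0, -1]], dtype=complex)
s1, s2, s3 = -0.5j * p1, -0.5j * p2, -0.5j * p3

# verify the commutation relations [s1,s2] = s3, [s3,s1] = s2, [s2,s3] = s1
for a, b, c in [(s1, s2, s3), (s3, s1, s2), (s2, s3, s1)]:
    assert np.allclose(a @ b - b @ a, c)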
{ "language": "en", "url": "https://math.stackexchange.com/questions/689876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Existence of a certain subset of $\mathbb{R}$ To every real $x$ assign a finite set $\mathcal{A}(x)\subset \mathbb{R}$ where $x\not\in \mathcal{A}(x)$. Does there exist $\mathcal{W}\subset \mathbb{R}$ such that: $$1.\;\;\mathcal{W}\cap \mathcal{A}(\mathcal{W})=\varnothing\qquad 2.\;\;|\mathcal{W}|=|\mathbb{R}|$$ This interesting problem was given to me by a friend, but I can't do it. Any ideas?
Let $\mathcal{Q}=\{[p,q]:\;p,q\in\mathbb{Q},\;p<q\}$. Since $\mathbb{Q}$ is dense in $\mathbb{R}$ and each $\mathcal{A}(x)$ is a finite set not containing $x$, we may choose a map $\phi:\mathbb{R}\to\mathcal{Q}$ such that $$(\text{i})\;\;x\in\phi(x),\qquad (\text{ii})\;\;\phi(x)\cap\mathcal{A}(x)=\varnothing.$$ Since $|\mathcal{Q}|=|\mathbb{N}|$, there exists $I\in\mathcal{Q}$ such that $\operatorname{card}\{x\in\mathbb{R}:\;\phi(x)=I\}=\mathfrak{c}$ (this uses König's theorem; see below). Let $\mathcal{W}=\{x\in\mathbb{R}:\;\phi(x)=I\}$. Then $\mathcal{W}\cap\mathcal{A}(\mathcal{W})=\varnothing$, because $\mathcal{W}\subset I$ by (i), while $I\cap \mathcal{A}(\mathcal{W})=\varnothing$ by (ii).
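Spelling out the König step (an editorial addition): $\mathbb{R}=\bigcup_{I\in\mathcal{Q}}\{x\in\mathbb{R}:\phi(x)=I\}$ is a countable union. If every set in this union had cardinality $<\mathfrak{c}$, then König's theorem (if $\kappa_i<\lambda_i$ for all $i$, then $\sum_i\kappa_i<\prod_i\lambda_i$) would give $$\mathfrak{c}=|\mathbb{R}|\leq\sum_{I\in\mathcal{Q}}\operatorname{card}\{x:\phi(x)=I\}<\prod_{I\in\mathcal{Q}}\mathfrak{c}=\mathfrak{c}^{\aleph_0}=\mathfrak{c},$$ a contradiction. So some $I$ works.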
{ "language": "en", "url": "https://math.stackexchange.com/questions/689966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Show that the equation, $x^3+10x^2-100x+1729=0$ has at least one complex root $z$ such that $|z|>12.$ Show that the equation, $x^3+10x^2-100x+1729=0$ has at least one complex root $z$ such that $|z|>12.$
Let $\alpha_1,\alpha_2,\alpha_3$ be the roots of the equation. (These are complex numbers, and existence is guaranteed by the Fundamental Theorem of Algebra.) What do we already know about these roots? We know the sum of the roots, the sum of products of the roots taken two at a time, and the product of the roots; these are expressible in terms of the coefficients of the equation. Suppose $|\alpha_1|\le 12$, $|\alpha_2|\le 12$, $|\alpha_3|\le 12$. Then $$|\alpha_1\alpha_2\alpha_3|=|\alpha_1||\alpha_2||\alpha_3|\le 12^3=1728.$$ But the product of the roots is $-1729$; that is, $|\alpha_1\alpha_2\alpha_3|=1729$. Therefore things go wrong if we suppose $|\alpha|\le 12$ for all roots $\alpha$, so the negation of this statement must be true. What is the negation? Of course, $\le$ is replaced by $>$, and "for all" is replaced by "for some". Hence the statement "The equation $x^3+10x^2-100x+1729=0$ has at least one complex root $\alpha$ such that $|\alpha|>12$" is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/690033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
how to find a matrix A given the solution? If we need, for example, to find a nonzero $3\times 3$ matrix $A$ such that a given $3\times 1$ vector is a solution to $Ax = 0$, what is the general procedure we can follow to obtain such a matrix $A$? Thank you :)
Let's suppose that your vector $v$ is a column vector. One option is to look at the matrix $B:=vv^T$: it's a $3\times 3$ matrix, and $Bv = \|v\|^2v$. Now we can look at the matrix $A:=\|v\|^2 I-B$: it is easy to check that $v$ belongs to its nullspace. We need to check that our $A$ is not zero; indeed, take any nonzero vector $w$ such that $v\perp w$; then $Aw=\|v\|^2w\ne 0$. Note that $A$ is not uniquely determined, because you have only $3$ linear equations on the $9$ entries of the matrix.
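A quick numerical illustration of this construction (an editorial sketch; the vector chosen is arbitrary):

import numpy as np

v = np.array([1.0, 2.0, 3.0])     # the prescribed nonzero solution vector
B = np.outer(v, v)                # B = v v^T, so B v = ||v||^2 v
A = (v @ v) * np.eye(3) - B       # A = ||v||^2 I - v v^T
print(A @ v)                      # approximately [0. 0. 0.]

Here $A$ is nonzero (it acts as multiplication by $\|v\|^2$ on vectors orthogonal to $v$) and annihilates $v$, as argued above.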
{ "language": "en", "url": "https://math.stackexchange.com/questions/690139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
"Classify $\mathbb{Z}_5 \times \mathbb{Z}_4 \times \mathbb{Z}_8 / \langle(1,1,1)\rangle$" I have a question that says this: Classify $\mathbb{Z}_5 \times \mathbb{Z}_4 \times \mathbb{Z}_8 / \langle(1,1,1)\rangle$ according to the fundamental theorem of finitely generated abelian groups. I would like to see how it is correctly answered. This is not homework; I'd just like to see a proof.
I'm assuming that $\langle(1,1,1)\rangle$ means the subgroup generated by $(1,1,1)$, or in other words $\{(k\bmod 5,k\bmod 4,k\bmod 8)\mid k\in\mathbb Z\}$. In that case we can see that each of the cosets that make up the quotient must contain an element of the form $(0,x,0)$. Namely, assume that $(a,b,c)$ is some element of one of the cosets; then by the Chinese Remainder Theorem we can find $k$ such that $a\equiv k\bmod 5$ and $c\equiv k\bmod 8$. Subtracting, $(a,b,c)-(k,k,k)$ gives us an element of the form $(0,x,0)$. On the other hand, $(0,x,0)$ can only be zero in the quotient when $x\equiv 0\bmod 4$ (because if $(0,x,0)\equiv(k,k,k)$ then $k\equiv 0\bmod 8$, therefore $k\equiv 0\bmod 4$, and so $x\equiv k\equiv 0\bmod 4$). So the quotient group is $\mathbb Z_4$. What this has to do with the structure theorem I don't know, though. Perhaps you're supposed to start by rewriting it to $\mathbb Z_4\times\mathbb Z_{40}/\langle(1,1)\rangle$?
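As a brute-force sanity check (an editorial addition; the variable names are made up), one can confirm that the quotient has order $4$ and is cyclic:

from itertools import product

H = {(k % 5, k % 4, k % 8) for k in range(40)}    # subgroup generated by (1,1,1); |H| = lcm(5,4,8) = 40
G = set(product(range(5), range(4), range(8)))    # |G| = 5 * 4 * 8 = 160
cosets = {frozenset(((a + x) % 5, (b + y) % 4, (c + z) % 8) for (x, y, z) in H)
          for (a, b, c) in G}
print(len(cosets))                                # 4
# (0,1,0) + H has order 4, since 2*(0,1,0) = (0,2,0) is not in H; hence the quotient is Z_4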
{ "language": "en", "url": "https://math.stackexchange.com/questions/690209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
What is $\Bbb R^{\times}$? [unit group, ring to "times" power] I'm doing some sheets for my Abstract Algebra class and I can't seem to remember the group defined as $\mathbb{R}^{\times}$. It's obviously some variation of $\mathbb{R}$ but I'm away from college on reading week so can't ask my tutor. If someone could clear up the confusion I'd be grateful.
The notation is often seen in the form $\Bbb R^*$, i.e. with a star; it means $\Bbb R\setminus \{0\}$, and $(\Bbb R^*,\times)$ is a group under multiplication. More generally, $R^{\times}$ denotes the group of units (invertible elements) of a ring $R$; since $\Bbb R$ is a field, its units are exactly the nonzero elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/690301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
All subgroups of index 2 Can we construct examples of non-abelian groups $G$ (finite or infinite) such that every non-trivial subgroup of $G$ has index $2$?
Not for groups of finite order. Suppose $G$ has a subgroup of index 2. Then the order of $G$ is even, as $G$ is a union of two cosets of such a subgroup. By Cauchy's Theorem, $G$ has an element of order two. If $|G|>4$, the index of the subgroup generated by this element is greater than 2. If $G$ has no non-trivial subgroups at all, it is cyclic (of prime order), hence abelian. Will think about the general case.
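One way to settle the general case (an editorial sketch, not part of the original answer): let $G$ be non-abelian with every non-trivial proper subgroup of index $2$, and take any $g\ne e$. Since $G$ is non-abelian, it is not cyclic, so $\langle g\rangle$ is proper and $[G:\langle g\rangle]=2$. If $g$ had infinite order, $\langle g^2\rangle$ would be a non-trivial proper subgroup with $[G:\langle g^2\rangle]=[G:\langle g\rangle]\cdot[\langle g\rangle:\langle g^2\rangle]=2\cdot 2=4$, a contradiction. So $\langle g\rangle$ is finite, and $[G:\langle g\rangle]=2$ forces $G$ to be finite, reducing to the case handled above. Hence no such group, finite or infinite, exists.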
{ "language": "en", "url": "https://math.stackexchange.com/questions/690454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$H_{n-1}(M;\mathbb{Z})$ is a free abelian group I need help with this problem: show that if $M$ is a closed connected oriented $n$-manifold, then $H_{n-1}(M;\mathbb{Z})$ is a free abelian group. Thanks.
$$0 \to \operatorname{Ext}(H_{n-1}(M),\mathbb{Z}) \to H^n(M) \to \operatorname{Hom}(H_n(M),\mathbb{Z}) \to 0$$ As the latter arrow is an isomorphism when $M$ is closed, connected and orientable, it follows that $\operatorname{Ext}(H_{n-1}(M),\mathbb{Z})=0$. You just need to understand why this implies that your $(n-1)$-st torsion group $T_{n-1}$ is $0$...
{ "language": "en", "url": "https://math.stackexchange.com/questions/690857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Validating a PDE problem solution I have the following problem, which I have tried to solve myself, and I would like someone to verify that my answer is valid. The problem is the following: By separation of variables, derive the family $$u_{mn}^{\pm}(x,y,z) = \sin(m\pi x)\cos(n\pi y)\exp(\pm\sqrt{m^2+n^2}\pi z)$$ of solutions of the problem $$u_{xx} + u_{yy} + u_{zz} = 0, \;\;\;u(0,y,z) = u(1,y,z) = u_y(x,0,z) = u_y(x,1,z) = 0$$ Here is my attempted solution: First I try to separate the PDE, by defining $u(x,y,z) = X(x)v$, where $v=Y(y)Z(z)$, and plugging this into the PDE: $$X''(x)v + X(x)v_{yy}+X(x)v_{zz} = 0$$ $$X''(x)v + X(x)(v_{yy}+v_{zz})=0$$ $$\frac{X''(x)}{X(x)}=\frac{-(v_{yy}+v_{zz})}{v}$$ Since the variables are independent, each side must equal some constant, so: $$\frac{X''(x)}{X(x)}=\frac{-(v_{yy}+v_{zz})}{v} = -\lambda^2$$ $$X''(x) = -\lambda^2X(x)$$ $$\frac{-(v_{yy}+v_{zz})}{v} =-\lambda^2$$ Now I insert $v = Y(y)Z(z)$: $$\frac{-Y''(y)Z(z)-Y(y)Z''(z)}{Y(y)Z(z)} = -\lambda^2$$ $$\frac{Y''(y)}{Y(y)}+\frac{Z''(z)}{Z(z)} = \lambda^2$$ $$\frac{Y''(y)}{Y(y)} = \lambda^2 - \frac{Z''(z)}{Z(z)}$$ $$\frac{Y''(y)}{Y(y)} = \lambda^2 - \frac{Z''(z)}{Z(z)} = -b^2$$ from which I can deduce: $$Y''(y) = -b^2Y(y)$$ $$Z''(z)= -(-b^2-\lambda^2)Z(z)$$ I set $\epsilon^2=-b^2-\lambda^2$, so I get $$Z''(z)= -\epsilon^2Z(z)$$ According to my source book, the general solution for an ODE of type $X''(x) = -\lambda^2X(x)$, where $-\lambda^2$ is some constant (possibly complex), is: $$X(x) = C_1\cos(\lambda x) + C_2\sin(\lambda x)\;\;\; (C_1, C_2 \;\;\;\;\text{constants})$$ I use this for the ODEs I have derived and I get: $$X(x) = C_1\cos(\lambda x) + C_2\sin(\lambda x)$$ $$Y(y) = C_3\cos(b y) + C_4\sin(b y)$$ $$Z(z) = C_5\cos(\epsilon z) + C_6\sin(\epsilon z)$$ From the boundary conditions I deduce that $X(0)=X(1)=Y'(0)=Y'(1)=0$ and I get: $$X(0) = C_1=0$$ $$X(1)= C_2\sin(\lambda)=0$$ Because I'm looking for non-trivial solutions, I conclude that $\lambda=n\pi$ where $n$ is some integer. Next I get: $$Y'(0) = -C_3b\sin(0)+C_4b\cos(0)=C_4b = 0,$$ so I select $C_4=0$. $$Y'(1)=-C_3b\sin(b)=0 \rightarrow b=m\pi$$ $$Y(y) = C_3\cos(m\pi y)$$ $$Z(z) = C_5\cos(\epsilon z) + C_6\sin(\epsilon z)$$ $\epsilon^2 = -n^2\pi^2 -m^2\pi^2$, so $\epsilon = \pm\sqrt{-\pi^2(m^2+n^2)} = \pm i\pi\sqrt{m^2+n^2}$, and therefore: $$u(x,y,z) = X(x)Y(y)Z(z)= $$ $$C_2C_3\sin(n\pi x)\cos(m\pi y)[C_5\cos(\pm i\pi z\sqrt{m^2+n^2}) + C_6\sin(\pm i\pi z\sqrt{m^2+n^2})]$$ Now I can select $C_2C_3 = 1$, $C_5 = 1$ and $C_6 = -i$, and I get: $$u(x,y,z) = \sin(n\pi x)\cos(m\pi y)[\cos(\pm i\pi z\sqrt{m^2+n^2}) -i\sin(\pm i\pi z\sqrt{m^2+n^2})]$$ and if I understand correctly, I know that $e^x = \cos(ix)-i\sin(ix)$, so I get: $$u_{mn}^{\pm}(x,y,z) = \sin(n\pi x)\cos(m\pi y)\exp(\pm \sqrt{m^2+n^2}\pi z)$$ It seems I got it correct, but are the operations I used valid?
The method is correct. At the end, to arrive at the given solution, you choose particular values for the coefficients. One small remark: for completeness, you should also consider the case $b=0$. This case arises when, dealing with the boundary conditions for $Y(y)$, you write $$Y^{'}(0)=C_4b=0, $$ $$Y^{'}(1)=-C_3b\sin b+C_4 b\cos b=0.$$ Looking at the first equation, you decided to put $C_4=0$. What if $b=0$ instead? Then $Y$ would be a solution of the O.D.E. $Y^{''}(y)=0$, which admits the general solution $Y(y)=A+By$, and $Z(z)$ would satisfy the O.D.E. $Z^{''}(z)=-(-\lambda^2)Z(z)$. The boundary conditions $Y^{'}(0)=Y^{'}(1)=0$ imply $Y(y)=A$, i.e. $Y(y)$ constant. The family of $u$'s associated to this case is "degenerate".
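For completeness (an editorial addition), the $b=0$ case can be pushed to the end: the boundary conditions give $Y(y)=A$ constant, and $Z^{''}(z)=\lambda^2 Z(z)$ has general solution $Z(z)=Ce^{\lambda z}+De^{-\lambda z}$, so the separated solutions are multiples of $\sin(n\pi x)\,e^{\pm n\pi z}$. These are precisely the members of the stated family whose $y$-index is $0$.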
{ "language": "en", "url": "https://math.stackexchange.com/questions/690973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the series: $\sum^\infty_{n=1}\frac{(-1)^n}{n^2}$ using Dirichlet's theorem This question was in my exam: Calculate the series $$\sum^\infty_{n=1}\frac{(-1)^n}{n^2}.$$ I answered incorrectly and the teacher noted: "You should use Dirichlet's theorem". I know my question is a bit general, but can you please explain how I should have solved this sum? Thanks in advance.
By absolute convergence you can rearrange the terms and simply write: $$\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}=\sum_{n \text{ even}}\frac{1}{n^2}-\sum_{n\text{ odd}}\frac{1}{n^2}=2\cdot\sum_{n \text{ even}}\frac{1}{n^2}-\sum_{n\geq 1}\frac{1}{n^2}=\frac{2}{4}\sum_{n\geq 1}\frac{1}{n^2}-\sum_{n\geq 1}\frac{1}{n^2}$$ $$=-\frac{1}{2}\sum_{n\geq 1}\frac{1}{n^2}=-\frac{\zeta(2)}{2}=-\frac{\pi^2}{12}.$$
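A quick numerical sanity check (an editorial addition) confirms the value:

import math

s = sum((-1) ** n / n ** 2 for n in range(1, 200000))
print(s, -math.pi ** 2 / 12)   # both print approximately -0.8224670...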
{ "language": "en", "url": "https://math.stackexchange.com/questions/691230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Localization at a maximal ideal and quotients. If we have a commutative ring $R$ and a maximal ideal $m$, then is $m/m^2$ isomorphic to $m_m/m^2_m$? Thanks.
It is enough to show that $R/\mathfrak{m} \cong R_\mathfrak{m}/\mathfrak{m}_\mathfrak{m}$, since $\mathfrak{m}/\mathfrak{m}^2$ and $\mathfrak{m}_\mathfrak{m}/\mathfrak{m}_\mathfrak{m}^2$ are just the base changes of the $R$-module $\mathfrak{m}$ to these respective rings. This is straightforward with universal properties. $R\to R/\mathfrak{m}$ and $R\to R_\mathfrak{m} \to R_\mathfrak{m}/\mathfrak{m}_\mathfrak{m}$ are both universal with respect to maps that send $\mathfrak{m}$ to $0$ and $R\setminus \mathfrak{m}$ to invertible elements.
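To make the base-change remark concrete (an editorial addition): for any ideal $I\subseteq R$ there is a natural isomorphism $I/I^2\cong I\otimes_R R/I$, and since $\mathfrak{m}_\mathfrak{m}=\mathfrak{m}\otimes_R R_\mathfrak{m}$, one gets $$\mathfrak{m}/\mathfrak{m}^2\cong\mathfrak{m}\otimes_R R/\mathfrak{m}\qquad\text{and}\qquad \mathfrak{m}_\mathfrak{m}/\mathfrak{m}_\mathfrak{m}^2\cong\mathfrak{m}\otimes_R R_\mathfrak{m}/\mathfrak{m}_\mathfrak{m}.$$ So once $R/\mathfrak{m}\cong R_\mathfrak{m}/\mathfrak{m}_\mathfrak{m}$ is established, the two quotients are isomorphic.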
{ "language": "en", "url": "https://math.stackexchange.com/questions/691469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Is $\sum_{x=1}^n (3x^2+x+1) = n^3+2n^2+3n$? I want to check whether the following equation involving a sum is true or false. How do I decide this? Please help me. $$ \sum_{x=1}^n (3x^2+x+1) = n^3+2n^2+3n$$ for all $n \in \{0,1,2,3, \dots\}$.
Not quite. Note that it fails at $n=1$. A closed form expression for the sum is $n^3+2n^2+2n$. Remark: Recall that $\sum_1^n k^2=\frac{n(n+1)(2n+1)}{6}$ and $\sum_1^n k=\frac{n(n+1)}{2}$. And of course $\sum_1^n 1=n$. Or else you can prove the result directly by induction.
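For the record, combining the standard sums quoted in the remark yields the closed form: $$\sum_{x=1}^n (3x^2+x+1) = 3\cdot\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}+n = \frac{n(n+1)(2n+2)}{2}+n = n(n+1)^2+n = n^3+2n^2+2n.$$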
{ "language": "en", "url": "https://math.stackexchange.com/questions/691566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The limit of $|z|^2/z$ in the complex plane What is the limit of $|z|^2\over z$ in the complex plane at $z_0=0$? This is how I do it: ${|z|^2\over z}={{x^2+y^2}\over {x+iy}}$; then along the real axis and the imaginary axis the limit expressions are different, namely $x$ and $y/i$, so the limit DNE. Is that correct?
Let $f(z)=|z|^2/z$ for $z\neq 0$. Then $$|f(z)|=\frac{|z|^2}{|z|}=|z|\to 0\quad\text{as } z\to 0,$$ so $$\lim_{z\to0}f(z)=0.$$ (Note that your two directional limits are $x$ and $y/i=-iy$, and both tend to $0$, so they do not actually disagree.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/691685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Who generates the prime numbers for encryption? I was talking to a friend of mine yesterday about encryption. I was explaining RSA and how prime numbers are used - the product $N = pq$ is known to the public and used to encrypt, but to decrypt you need to know the primes $p$ and $q$ which you keep to yourself. The factorization of $N$ is the hard part, and that's why RSA is safe. Then I was asked: Who actually calculates these primes, and how? They're huge, so can you do it on just a normal computer (in reasonable time)? And if not, and encryption software gets the primes from somewhere else, this third party would have a list of primes (however large) to try to factor $N$ with. Using it would be considerably easier than just brute forcing, trying to divide with every prime number up to $\sqrt{N}$. If they (or someone else) has the list, encryption isn't really safe. So, how is it actually done?
They are generated on the machine doing the encryption. Generating primes of a given size is fairly easy, and verifying that they are prime can be done much faster than trial division. 1024-bit RSA requires two 512-bit primes. On my (old) machine it takes about 34 milliseconds to generate a 512-bit prime (so generating the whole key would take about 0.07 seconds). That's about 10 milliseconds to find the prime and 25 to verify it to high certainty. If I was willing to live with 'only' one mistake in $10^{100}$ I could verify a prime in a third the time. If I wanted to instead prove that it was a prime, it would take about 1.6 seconds... but that's overkill for reasonable purposes. (Better to move to a higher bit level with less certainty.)
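To make the generate-and-test loop concrete, here is a minimal Python sketch (an editorial illustration, not the answerer's code; it relies on sympy's isprime, which uses strong probable-prime tests):

import secrets
from sympy import isprime

def random_prime(bits=512):
    while True:
        # force the top bit (so the number has the right size) and the bottom bit (odd)
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if isprime(candidate):
            return candidate

p, q = random_prime(), random_prime()
N = p * q   # a 1024-bit RSA modulus

By the prime number theorem, roughly one in $512\ln 2\approx 355$ numbers of this size is prime, so the loop only has to test a few hundred candidates on average.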
{ "language": "en", "url": "https://math.stackexchange.com/questions/691797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }