Prove: $\operatorname{null}(A - \lambda I)^{a_{\lambda}} = a_{\lambda}$ The problem here is straightforward. Let $a_{\lambda}$ be the algebraic multiplicity corresponding to $\lambda$. Prove $\operatorname{null}(A - \lambda I)^{a_{\lambda}} = a_{\lambda}$ I know the following bits: $a_\lambda$ is the highest power of $(x-\lambda)$ that evenly divides the characteristic polynomial of $A$. Is this any use at all, or am I thinking about this the wrong way?
First, each matrix has a Jordan normal form, unique up to the order of the cells. It consists of Jordan cells of the form $$J_\lambda=\begin{pmatrix}\lambda&1&0&\dots&\dots\\0&\lambda&1&0&\dots\\ \vdots&0&\ddots&\ddots&\ddots\\0&\dots&\dots&\lambda&1\\0&\dots&\dots&0&\lambda\end{pmatrix}$$ The size of this cell is called its order - denote it $ord(J_\lambda)$. It is easy to see that $(J_\lambda-\lambda I)^{ord(J_\lambda)}=0$. If $\mu\ne \lambda$, then $\det\big((J_\lambda-\mu I)^l\big)\ne0$ for all $l$. Finally, we take the whole JNF and study $(J-\lambda I)^{a_\lambda}$. Clearly, $$a_\lambda = \text {sum of orders of all cells corresponding to $\lambda$}$$ by definition of algebraic multiplicity. Therefore, each such cell (after subtracting $\lambda I$) raised to the power $a_\lambda$ is zero, since $a_\lambda \ge ord(J_\lambda)$, and all other cells remain non-singular. Thus, $$\dim\ker (J-\lambda I)^{a_\lambda} =\dim\ker (A-\lambda I)^{a_\lambda} = \text {sum of orders of all cells corresponding to $\lambda$}=a_\lambda.$$
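As a concrete sanity check of the two block facts used above (an illustrative NumPy sketch I am adding; the sizes and eigenvalues are arbitrary choices):

```python
import numpy as np

lam, mu = 2.0, 5.0
J = lam * np.eye(4) + np.diag(np.ones(3), k=1)   # one Jordan cell of order 4

N = J - lam * np.eye(4)                               # nilpotent part of the cell
print(np.allclose(np.linalg.matrix_power(N, 4), 0))   # True: (J - lam I)^ord = 0
print(np.allclose(np.linalg.matrix_power(N, 3), 0))   # False: lower powers are nonzero

M = J - mu * np.eye(4)                                # shift by a different eigenvalue
print(np.linalg.det(np.linalg.matrix_power(M, 7)))    # nonzero for every power
```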
{ "language": "en", "url": "https://math.stackexchange.com/questions/672327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
MENSA IQ Test and rules of maths In a Mensa calendar, A daily challenge - your daily brain workout. I got this and put a challenge up at work. The Challenge starts with.. Assume you are using a basic calculator and press the numbers in the order shown, replacing each question mark ...continues... What is the highest number you can possibly score? Basically, only using $+,-, * ,\div$, once in place of a question mark. $5 ? 4 ? 7 ? 3 ? 2 =$ We all worked out the operators to be $5 + 4 $ x $ 7 - 3/2 =$ Except that I calculated the answer to be $31.5$ and the others argued $30$. The answer sheet from MENSA says the calculated total is 30. Nobody actually understood the first part about using a basic calculator. I initially thought the challenge was all about the rules of maths. And when I asked why nobody else applied the rules of maths, it turned out they had all forgotten about them, not because the challenge said to use a "basic calculator". I emailed MENSA and queried them about the challenge and they replied, Thank you for your email. On a basic calculator it will be: 5 + 4 = 9 9 x 7 = 63 63 – 3 = 60 60 ÷ 2 = 30 Kind regards, Puzzle Team My Reply, Thank you for your reply. Could you please define what a basic calculator is? I tried 4 pocket, £1 calculators, and all gave me 31.5. And finally their definition. I guess what the question really means, whether you do the sum manually or on a calculator, is don't change the order of the numbers. The Casio calculators we have in the office allow you to do the sum as it appears: 5 + 4 = 9 9 x 7 = 63 63 – 3 = 60 60 ÷ 2 = 30 Kind regards, Puzzle Team So they guess the challenge meant to do it that way. Why not just say "ignore the rules of maths"? What is the point of this anyway? My original question, on Maths Stack (this one), was why MENSA used 30 instead of 31.5. And initially I did not understand that using a basic calculator meant calculating left to right by pressing equals after each operation. So what is going on here? If they wanted us to ignore the rules of maths, they should have said that. Because my basic calculator gives me 31.5 and not 30.0 (I don't have a special Casio MENSA calculator though). Windows standard calculator gives me 30. Why? None of my pocket, office, el cheapo calculators do this. Google, or Windows Scientific, give me 31.5 - as do all my electronic calculators.
It seems like a fair assumption that the rules of math should be applied to a mathematical expression. Btw, it is not stated (but I guess it's implied) that the operators must be put in place of the question marks, or else -5/4+7*32 = 222.75
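The two readings of the puzzle are easy to compare in code (a small Python sketch of my own; `ops` encodes the agreed answer $5 + 4 \times 7 - 3 \div 2$):

```python
from operator import add, sub, mul, truediv

nums = [5, 4, 7, 3, 2]
ops = [add, mul, sub, truediv]   # 5 + 4 x 7 - 3 / 2

acc = nums[0]                    # "basic calculator": press = after each operation
for op, n in zip(ops, nums[1:]):
    acc = op(acc, n)
print(acc)                       # 30.0, MENSA's left-to-right answer

print(5 + 4 * 7 - 3 / 2)         # 31.5, with the usual precedence rules
```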
{ "language": "en", "url": "https://math.stackexchange.com/questions/672393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 12, "answer_id": 10 }
Linear Algebra - Can vector $v$ be expressed as a linear combination of $u_1$ and $u_2$ I have a question: Can the vector $v = (1,2)$ be expressed as a linear combination of $u_1 = (1,3)$ and $u_2 = (4,1)$? What I have tried: $a + 4b = 1$ $3a + b = 2$ $a = 1 - 4b$ $3(1 - 4b) +b = 2$ $3 - 12b + b = 2$ $3 -11b=2$ $3 -2 = 11b$ $1 = 11b$ $b = 1/11$ $a + 4(1/11) = 1$ $a =1 - 4/11$ $a = 7/11$ Therefore, $v = 7/11x + 1/11y$ That's what I got but I know it's wrong because it doesn't seem right! Any help about how to go about doing this will be much appreciated!
Your answer is indeed correct. You can (and should) always check your solution directly: $$\frac{7}{11}(1,3) + \frac{1}{11}(4,1) = (\frac{7}{11},\frac{21}{11})+(\frac{4}{11},\frac{1}{11}) = (\frac{11}{11},\frac{22}{11}) = (1,2)$$ as desired. It's generally true that as long as $u_2$ isn't a scalar multiple of $u_1$, you can find a unique linear combination of $u_1$ and $u_2$ to give any desired $v$. That's because the condition is exactly what it takes for $\{u_1,u_2\}$ to be a basis for $\mathbb{R}^2$.
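For larger systems the same check is a two-line linear solve; here is an illustrative NumPy sketch of this exact problem (my own addition, not part of the answer above):

```python
import numpy as np

u1, u2, v = np.array([1., 3.]), np.array([4., 1.]), np.array([1., 2.])
U = np.column_stack([u1, u2])            # u1 and u2 as columns
a, b = np.linalg.solve(U, v)             # solve a*u1 + b*u2 = v
print(a, b)                              # 0.6363... and 0.0909..., i.e. 7/11 and 1/11
print(np.allclose(a * u1 + b * u2, v))   # True
```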
{ "language": "en", "url": "https://math.stackexchange.com/questions/672493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof for the formula of sum of arcsine functions $ \arcsin x + \arcsin y $ It is known that \begin{align} \arcsin x + \arcsin y =\begin{cases} \arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2}) \\\quad\text{if } x^2+y^2 \le 1 &\text{or} &(x^2+y^2 > 1 &\text{and} &xy< 0);\\ \pi - \arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2}) \\\quad\text{if } x^2+y^2 > 1&\text{and} &0< x,y \le 1;\\ -\pi - \arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2}) \\\quad\text{if } x^2+y^2 > 1&\text{and} &-1\le x,y < 0. \end{cases} \end{align} I tried to prove this myself, have no problem in getting the 'crux' $\arcsin( x\sqrt{1-y^2} + y\sqrt{1-x^2})$ part of the RHS, but face trouble in checking the range of that 'crux' under the given conditions.
Using the fact that $\displaystyle-\frac\pi2\leq \arcsin z\le\frac\pi2 $ for $-1\le z\le1$, we get $\displaystyle-\pi\le\arcsin x+\arcsin y\le\pi$. Again, $\displaystyle\arcsin x+\arcsin y= \begin{cases} -\pi- \arcsin(x\sqrt{1-y^2}+y\sqrt{1-x^2})& \mbox{if } -\pi\le\arcsin x+\arcsin y<-\frac\pi2\\ \arcsin(x\sqrt{1-y^2}+y\sqrt{1-x^2}) &\mbox{if } -\frac\pi2\le\arcsin x+\arcsin y\le\frac\pi2 \\ \pi- \arcsin(x\sqrt{1-y^2}+y\sqrt{1-x^2})& \mbox{if }\frac\pi2<\arcsin x+\arcsin y\le\pi \end{cases} $ and, since sine (like the other trigonometric ratios) is $\ge0$ for angles in $\left[0,\frac\pi2\right]$, $\displaystyle\arcsin z\begin{cases}\text{lies in } \left[0,\frac\pi2\right] &\mbox{if } z\ge0 \\ \text{lies in } \left[-\frac\pi2,0\right] & \mbox{if } z<0 \end{cases} $ Case $(i):$ Observe that if $\displaystyle x\cdot y<0\ \ \ \ (1)$, i.e., $x,y$ are of opposite sign, then $\displaystyle -\frac\pi2\le\arcsin x+\arcsin y\le\frac\pi2$. Case $(ii):$ If $x>0,y>0$, then $\displaystyle \arcsin x+\arcsin y$ will be $\displaystyle \le\frac\pi2$ according as $\displaystyle \arcsin x\le\frac\pi2-\arcsin y$. But as $\displaystyle\arcsin y+\arccos y=\frac\pi2,$ we need $\displaystyle \arcsin x\le\arccos y$. Again, as the principal value of the inverse cosine lies in $[0,\pi],$ $\displaystyle\arccos y=\arcsin(+\sqrt{1-y^2})$, so we need $\arcsin x\le\arcsin\sqrt{1-y^2}$. Now, as the sine ratio is increasing in $\displaystyle \left[0,\frac\pi2\right],$ we need $\displaystyle x\le\sqrt{1-y^2}\iff x^2\le1-y^2$ (as $x,y>0$) $\displaystyle\implies x^2+y^2\le1 \ \ \ \ (2)$ So, $(1),(2)$ are the required conditions for $\displaystyle \arcsin x+\arcsin y\le\frac\pi2$. Case $(iii):$ Now, as $\displaystyle-\frac\pi2\le\arcsin(-u)\le\frac\pi2 \iff -\frac\pi2\le\arcsin(u)\le\frac\pi2$ and $\arcsin(-u)=-\arcsin u$, use this fact to find the analogous condition when $x<0,y<0$, setting $x=-X,y=-Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/672575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 2 }
An inequality for completely positive maps. Let $f\colon A\to B$ be a contractive, completely positive, ${}^*$-preserving map between C*-algebras and take $a\in A$. How can one prove that $$0\leqslant f(a)f(a^*)\leqslant f(aa^*)?$$ Some authors take it for granted without any explanation.
One of the most important results about completely positive maps is Stinespring's Dilation Theorem. Suppose that $f:A \to B$ is a completely positive map, where $A$ and $B$ are $C^*$-algebras. Then we can find a Hilbert space $H$ such that $B \subseteq B(H)$. Stinespring's Theorem then states that there exist a Hilbert space $K$, a $*$-homomorphism $\pi: A \to B(K)$, and a bounded operator $V: H \to K$ such that $$ f(a)= V \pi(a) V^*, $$ with $\Vert f \Vert = \Vert V \Vert^2$. The desired inequality now follows easily: \begin{array}{ccc} f(aa^*)& =& V \pi(aa^*) V^* \\ &= &V \pi(a) \pi(a^*) V^* \\ & \geq &V \pi(a) V^* V \pi(a^*) V^* \\ &= &f(a) f(a^*). \end{array} Here we use that $ V^*V \leq \Vert V \Vert^2 1 = \Vert f \Vert 1 \leq 1$ because $f$ is contractive. In fact, one can prove this result using only the weaker hypothesis that $f$ is $2$-positive. This is known as Kadison's Inequality.
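This is not a proof, but the inequality is easy to test numerically in the Stinespring form $f(a)=V\pi(a)V^*$ with $\pi$ the identity representation (an illustrative NumPy sketch of my own; the dimension and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V /= np.linalg.norm(V, 2)          # spectral norm 1, so V* V <= 1 and f is contractive
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

f = lambda x: V @ x @ V.conj().T   # f(a) = V a V*, with pi = id
gap = f(a @ a.conj().T) - f(a) @ f(a).conj().T
print(np.linalg.eigvalsh(gap).min())  # >= 0 up to rounding: f(aa*) >= f(a)f(a)*
```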
{ "language": "en", "url": "https://math.stackexchange.com/questions/672664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof that a matrix is nonsingular Let $A$ be $n \times n$ matrix. Show that if $A^2 = 0$, then $I - A$ is non-singular and $(I-A)^{-1} = I+A$. The second part is easy for me, but how can I show that if $A^2 = 0$, then $I - A$ is non-singular. I found in Wolfram Alpha that "A matrix is singular iff its determinant is 0.", but how I can relate this to the given $A^2 = 0$. or is there another easier way. Thank you for your help in advance.
Compare $|I-A|$ to the characteristic polynomial of $A$: since $p_A(\lambda)=|\lambda I-A|$ gives $|I-A| = p_A(1)$, having $|I-A| = 0$ would mean $\lambda=1$ is an eigenvalue of $A$. But $A^2=0$ makes $A$ nilpotent, which necessitates that the only eigenvalue of $A$ is $0$, a contradiction; hence $I-A$ is non-singular.
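The identity $(I-A)(I+A)=I-A^2=I$ behind the "easy part" is also a one-line check (tiny Python/NumPy illustration, my own addition):

```python
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])                  # A @ A = 0
I = np.eye(2)
print(np.allclose((I - A) @ (I + A), I))  # True: I - A is invertible with inverse I + A
print(np.linalg.det(I - A))               # 1.0, in particular nonzero
```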
{ "language": "en", "url": "https://math.stackexchange.com/questions/672755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
i^i^i^i^... Is there a pattern? I was messing around with $i$ and I (haha) noticed that certain progressions arise when I keep on raising $i$ to $i$ to $i$ and so forth. Though, I am not really quite sure what is going on (and I don't have time to explore further). In other words, is there an interesting pattern in the sequence: $i$ , $i^i$, $i^{\left(i^i\right)}$, $i^{\left(i^{\left(i^i\right)}\right)}$, etc.
Actually the limit exists. Define $a_0=i$, $a_{n+1}=i^{a_n}$. Then $\lim_{n\to\infty}a_n=\frac{W(-\ln(i))}{-\ln(i)}\approx0.4383+0.3606i$, where $W(z)$ is the Lambert W function and $\ln(z)$ is the principal branch of $\log(z)$. More generally, for each $z\in\mathbb{C}$ we can define such a sequence $a_n(z)$; the limit exists only if $\frac{W(-\ln(z))}{-\ln(z)}$ is defined, and in that case they are equal. Also the proof isn't hard, just messing with the definitions. Correct me if there are any mistakes; I am just recalling what I read in high school. Reference: * *http://en.wikipedia.org/wiki/Lambert%27s_W_function *http://en.wikipedia.org/wiki/Tetration
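If you want to check this numerically (a sketch of my own using SciPy's `lambertw`; Python's `1j**a` uses the principal branch, matching the definition above):

```python
import numpy as np
from scipy.special import lambertw

c = -np.log(1j)            # -ln(i) = -i*pi/2 on the principal branch
print(lambertw(c) / c)     # approx (0.4383 + 0.3606j)

a = 1j                     # iterate the tower a_{n+1} = i**a_n
for _ in range(200):
    a = 1j ** a
print(a)                   # agrees with the closed form
```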
{ "language": "en", "url": "https://math.stackexchange.com/questions/672855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Prove: The product of any three consecutive integers is divisible by $6$. I'm new to number theory and was wondering if someone could help me with this proof. Prove: The product of any three consecutive integers is divisible by $6$. So far I have $\cfrac{x(x+1)(x+2)}{6}$; How would I go about proving this? Should I replace $x$ with $k$ and then $k$ with $k+1$ and see if the statement is true?
Of $n$, $n +1$, $n +2$, one must be even, so divisible by 2 (why?). One must be divisible by 3 (why?). So their product must be divisible by $2 \times 3$ (why?) ...
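Before writing the proof out, you can convince yourself empirically (a throwaway Python check, my own addition):

```python
print(all(n * (n + 1) * (n + 2) % 6 == 0 for n in range(-1000, 1000)))  # True
```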
{ "language": "en", "url": "https://math.stackexchange.com/questions/672936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Can an open ball have just one point? As per my understanding it cannot. Please clarify. I am new to functional analysis and am just learning. To my understanding an open ball must have at least 2 points, else its definition will not be satisfied. Now if I have just the empty set and this open ball, why can they not constitute a topology? I see it satisfying the intersection and union conditions. I agree that the question is too basic! Definition of open ball: $$B_r(x)= \{y \in E \mid d(x,y)<r \}$$ where $(E,d)$ is the metric space.
You say that you are studying functional analysis, so perhaps you are mainly interested in Banach spaces. But even in that context, you are only almost correct - any open ball of a non-trivial Banach space is infinite. The trivial Banach space $V=\{0\}$ consists of just its zero vector $0$, and thus for any $r>0$, $$B_r(0)=\{v\in V:|v-0|<r\}=\{v\in \{0\}:|v|<r\}=\begin{cases} \{0\}&\text{if }|0|<r,\\ \varnothing&\text{if }|0|\geq r \end{cases}=\{0\}=V,$$ which is therefore both an open ball and a singleton set. Of course, most metric spaces are not Banach spaces, and there are many other counterexamples. Basically, for any metric space $X$, if $x\in X$ is an isolated point, then $\{x\}$ will be an open ball of $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/673047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
A problem about Lebesgue measurable set I'm doing exercise in "REAL ANALYSIS" of Folland and got stuck on this problem. I got no clue on how to find the set $I$. Hope someone can help me solve this. Thanks so much Suppose $m$ is Lebesgue measure and $L$ is its domain. If $E \in L$ and $m(E) \gt 0$, for any $\alpha < 1$, prove that there is an open interval $I$ such that $m(E \bigcap I) \gt \alpha m(I)$
Let $E$ be a Lebesgue measurable set with $m(E)>0$, and let $f = \mathbf{1}_{E}$ be the indicator function of this set. For almost every $x\in E$ (and since $m(E)>0$, at least one such $x$ exists), we have by the Lebesgue differentiation theorem that $$\lim_{{I\ni x}\atop{\left|I\right|\rightarrow 0}}\dfrac{m(E\cap I)}{m(I)}=\lim_{{I\ni x}\atop{\left|I\right|\rightarrow 0}}\dfrac{\int_{I}f}{\left|I\right|}=f(x)=1$$ where $I$ is an open interval containing the point $x$. Use the fact that the numerator is bounded from above by $\left|I\right|$ to show the existence of such an interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/673149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find all functions $f$ satisfying a certain property How can we find all real-valued functions $f$ such that $f^{(n+1)}(x) = f^{(n)}(x)$? The question gives a hint which says: "If $n=2$, then we have $f(x) = ae^x+be^{-x}$, for some $a,b \in \mathbb{R}$. After proving this proceed by induction on $n$." I can see how to complete the proof assuming the hint, but how can I prove that fact? Clarification: We are not allowed to assume that $f(x) = ae^x+be^{-x}$, but rather need to show this when $n = 2$.
Take $f(x) = ae^x + be^{-x}$. If $n$ is odd, then $f^{(n)}(x) = ae^x - be^{-x}$, while if $n$ is even, then $f^{(n)}(x) = ae^x + be^{-x}$ (the sign of the $e^{-x}$ term alternates with the order of the derivative). So $f^{(n+1)}(x) = f^{(n)}(x)$ forces $b = -b$, i.e. $b = 0$. Therefore, $f(x) = ae^{x}$, where $a$ is an arbitrary constant. I don't think any more sophisticated method is needed to prove that $f$ is of the form $ae^{x}$ within this family.
{ "language": "en", "url": "https://math.stackexchange.com/questions/673251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How many distinct factors can be made from the number $2^5*3^4*5^3*7^2*11^1$? How many distinct factors can be made from the number $2^5*3^4*5^3*7^2*11^1$? Hmmm... So I didn't know what to do here so I tested some cases for a rule. If a number had the factors $3^2$ and $2^1$, you can make $5$ distinct factors: $2^1$, $3^1$, $3^2$, $2^1 \cdot 3^1$, $2^1 \cdot 3^2$... I don't see a pattern yet. How does one go about this? I don't think the answer is $5!$....
If $\begin{equation}x = a^p \cdot b^q\cdot c^r\cdots\end{equation}$ then there are $(p+1)(q+1)(r+1)\cdots$ numbers that divide $x$. Any number that divides $x$ will be of the form $a^\alpha\cdot b^\beta\cdot c^\gamma\cdots$ . So we have $p+1$ options for $\alpha$ because we need to consider $\alpha = 0$ also. Similarly, we have $q + 1$ options for $\beta$ and $r+1$ options for $\gamma$. Therefore, we multiply these out. To get the intuition of why this is so, let us take the example of the number $24$. $24 = 2^3\cdot3^1$ Any number that divides $24$ will be of the form $2^\alpha\cdot3^\beta$. We have 4 choices for $\alpha:0,1,2,3$ and 2 choices for $\beta:0,1$. So we have $4\cdot2 = 8$ numbers that divide 24. These can be listed out as: $$(2^03^0,2^13^0,2^23^0,2^33^0),(2^03^1,2^13^1,2^23^1,2^33^1)$$ So, $2^5*3^4*5^3*7^2*11^1$ has $6\cdot5\cdot4\cdot3\cdot2 = \boxed{720}$ divisors.
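A quick cross-check of the count with SymPy (my own illustration; `divisor_count` implements exactly this $(p+1)(q+1)\cdots$ rule):

```python
from sympy import divisor_count

n = 2**5 * 3**4 * 5**3 * 7**2 * 11
print(divisor_count(n))                       # 720
print((5+1) * (4+1) * (3+1) * (2+1) * (1+1))  # 720, the (p+1)(q+1)... product
```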
{ "language": "en", "url": "https://math.stackexchange.com/questions/673494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding a basis for $\Bbb{Q}(\sqrt{2}+\sqrt{3})$ over $\Bbb{Q}$. I have to find a basis for $\Bbb{Q}(\sqrt{2}+\sqrt{3})$ over $\Bbb{Q}$. I determined that $\sqrt{2}+\sqrt{3}$ satisfies the equation $(x^2-5)^2-24$ in $\Bbb{Q}$. Hence, the basis should be $1,(\sqrt{2}+\sqrt{3}),(\sqrt{2}+\sqrt{3})^2$ and $(\sqrt{2}+\sqrt{3})^3$. However, this is not rigorous. How can I be certain that $(x^2-5)^2-24$ is the minimal polynomial that $\sqrt{2}+\sqrt{3}$ satisfies in $\Bbb{Q}$? What if the situation was more complicated? In general, how can we ascertain thta a given polynomial is irreducible in a field? Moreover, checking for linear independence of the basis elements may also prove to be a hassle. Is there a more convenient way of doing this? Thanks.
A basis of $\mathbb Q\big[\sqrt{2},\sqrt{3}\big]$ consists of the elements $\{1,\sqrt{2},\sqrt{3},\sqrt{6}\}$, and hence its dimension over $\mathbb Q$ is equal to $4$. Clearly, all the above elements belong to $\mathbb Q\big[\sqrt{2},\sqrt{3}\big]$, and hence it remains to show that they are independent over $\mathbb Q$. There is a rather elegant way to show this, and in fact something more general: If $n_1,n_2,\ldots,n_k$ are distinct square-free positive integers, then $\sqrt{n_1},\sqrt{n_2},\ldots,\sqrt{n_k}$ are linearly independent over $\mathbb Q$. (A number $n$ is said to be square free if $k^2\mid n$, implies that $k=1$.)
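Regarding the asker's worry about the minimal polynomial: one way to check it, short of a hand proof, is SymPy's `minimal_polynomial` (a sketch of my own):

```python
from sympy import sqrt, minimal_polynomial, expand, Symbol

x = Symbol('x')
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1
print(expand((x**2 - 5)**2 - 24))                # the same polynomial
```

Degree $4$ then matches $[\mathbb Q(\sqrt2,\sqrt3):\mathbb Q]=4$, consistent with the four basis elements above.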
{ "language": "en", "url": "https://math.stackexchange.com/questions/673550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Linear Algebra - Show that $V$ is not a vector space Let $V = \{(x,y,z) | x, y, z \in R\}$. Define addition and scalar multiplication on $V$ as follows: $$(x_1, y_1, z_1) + (x_2, y_2, z_2) = (x_1,y_1+y_2,z_1+z_2)$$ $$c(x_1,y_1) = (2cx_1,cy_1)$$ where $c$ is any real number. Show that $V$, with respect to these operations of addition and scalar multiplication, is not a vector space by showing that one of the vector space axioms does not hold. Since I'm new to Linear Algebra, I don't understand how to approach this question, any help would be much appreciated!
Hint: One of the axioms is that $av + bv = (a+b)v$ for scalars $a,b$ and vector $v$. See how that works with your scalar multiplication.
{ "language": "en", "url": "https://math.stackexchange.com/questions/673671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Pole-zero cancellation Paradox Suppose we have an open-loop transfer function $$G(s) = \frac{1}{s(s+a)(s+b)}$$ If we plot the root locus for the closed-loop system we will get roughly something like this : Now the question is, when I add a new zero to the system at $-a$, the book says that we should plot the root locus without cancelling the pole-zero pair ($-a$ in this case). I have a practical doubt: in real systems, suppose we add a new zero somehow; obviously it will not be exactly at $-a$ but at some $-a+\epsilon$ where $\epsilon$ is very small. Even in that case the root locus will become something like this : Because now the asymptotes will change, since $n-m = 2$. Therefore the new asymptotes are at $\frac{\pi}{2}$ rad and $\frac{3\pi}{2}$ rad. These two plots become completely different; if I go strictly by the book then my system (even after the addition of the new zero) becomes unstable for some value of gain $K$, but if I consider the situation practically then my system is always stable. Please help me out: which one is correct?
The 2nd plot is indeed fundamentally different from the 1st, because the systems are different - the zero changes its character. The system with relative degree (number of poles minus number of zeros) 2 is more stable than the one with relative degree 1. If you think about the Bode diagram and the Nyquist plot, it has infinite gain margin. Not so with the 1st system, which can be made unstable with a high enough feedback gain. This is the reason why the D action is often used in PID controllers. In practice it is not possible to add an exact differentiator to a given system, however if it comes down to a choice between the 2 configurations, the 2nd is certainly preferable from the point of view of stability. In contrast, the system with exact pole-zero cancellation is not fundamentally different from the more realistic case of approximate cancellation - at least not when the cancelled pole is stable, as in your case. The part about the system becoming unstable even after adding the new zero is not correct. The correct diagram for the case when cancellation is exact can be obtained as the limit of the approximate cancellation that you drew in the 2nd diagram.
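The "stable at every gain" claim for the near-cancellation configuration can be illustrated numerically; a NumPy sketch of my own, with the assumed example values $a=1$, $b=2$, $\epsilon=0.01$:

```python
import numpy as np

a, b, eps = 1.0, 2.0, 0.01   # zero at -(a + eps): approximate cancellation

for K in [1, 10, 100, 1000, 10000]:
    # closed-loop characteristic polynomial: s(s+a)(s+b) + K(s + a + eps)
    poly = np.polyadd(np.poly([0.0, -a, -b]), K * np.array([1.0, a + eps]))
    print(K, np.roots(poly).real.max())   # stays negative: stable at every gain
```

By contrast, without the zero the same loop has characteristic polynomial $s^3+3s^2+2s+K$, which the Routh criterion shows goes unstable for $K>6$, matching the gain-margin remark above.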
{ "language": "en", "url": "https://math.stackexchange.com/questions/673772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding a point on a plane closest to another point I have the point $(1,1,1)$ and the plane $2x+2y+z = 0$. I want to find a point that is closest to my point on the plane. In other words, I want to find a point along the line $(1,1,1)+t(2,2,1)$ but on my plane. Notice that the vector $(2,2,1)$ is my normal vector and therefore I want to find the point parallel to this vector, but from my original point. I need a nudge to complete this problem! Thank you.
Hint: A general point on the line has the form $(x,y,z) = (1 + 2t, 1 + 2t, 1 + t)$, and for this to be on the plane, it must satisfy the equation of the plane. Plug those in and solve for $t$.
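Equivalently, in vector form the hint amounts to $t=-\frac{n\cdot p}{n\cdot n}$; a small NumPy confirmation of my own:

```python
import numpy as np

p = np.array([1., 1., 1.])
n = np.array([2., 2., 1.])   # normal of 2x + 2y + z = 0

t = -n.dot(p) / n.dot(n)     # solve n . (p + t n) = 0
q = p + t * n
print(q)                     # [-1/9, -1/9, 4/9]
print(n.dot(q))              # 0.0: the point lies on the plane
```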
{ "language": "en", "url": "https://math.stackexchange.com/questions/673883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Combinatorial proof of sum of numbers Does anyone have any insight on showing that $\sum_{i=1}^n i = {n+1\choose 2}$, through a combinatorial argument (i.e., not an algebraic argument)?
There is a way to see this inductively. Suppose you knew this formula up till $n-1$. Then, $n + 1$ choose $2$ is equal to $n$ choose $2$ plus the number of ways to choose a pair of distinct elements from $\{1,\cdots, n + 1\}$ such that one of them is the 'new' element $n+1$. This is equal to $n$ choose $1$, i.e. $n$, and so the difference between $n+1$ choose $2$ and $n$ choose $2$ must be $n$.
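(A one-line empirical check of the identity, my own addition:)

```python
from math import comb
print(all(sum(range(1, n + 1)) == comb(n + 1, 2) for n in range(1, 200)))  # True
```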
{ "language": "en", "url": "https://math.stackexchange.com/questions/673945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
evaluation of $\int\frac{1}{\sin^3 x-\cos^3 x}dx$ Evaluation of $\displaystyle \int\frac{1}{\sin^3 x-\cos^3 x}dx$ $\bf{My\; Try::}$ Given $\displaystyle \int\frac{1}{\sin^3 x-\cos^3 x}dx = \int\frac{1}{(\sin x-\cos x)\cdot (\sin^2 x-\sin x\cos x+\cos^2 x)}dx$ $\displaystyle = 2\int\frac{(\sin x-\cos x)}{(\sin x-\cos x)^2\cdot (2-\sin 2x)}dx = 2\int \frac{(\sin x-\cos x)}{(1-\sin 2x)\cdot (2-\sin 2x)}dx$ Let $(\sin x+\cos x) = t\;,$ Then $(\cos x -\sin x)dx = dt\Rightarrow (\sin x -\cos x)dx = dt$ and $(1+\sin 2x)=t^2\Rightarrow \sin 2x = (t^2-1)$ So Integral Convert into $\displaystyle 2\int\frac{1}{(2-t^2)\cdot (3-t^2)}dt = 2\int\frac{1}{(t^2-2)\cdot (t^2-3)}dt$ My Question is , is there is any better method other then that Help me Thanks.
You're missing a minus sign at one point, but other than that I think you're OK. Next, use partial fractions: $$ \frac{1}{(2-t^2)(3-t^2)} = \frac{A}{\sqrt{2}-t} + \frac{B}{\sqrt{2}+t} + \frac{C}{\sqrt{3}-t} + \frac{D}{\sqrt{3}+t} $$
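SymPy will do the first stage of this decomposition over the rationals (an illustrative snippet of my own; splitting each quadratic over $\sqrt2,\sqrt3$ then yields $A,B,C,D$):

```python
from sympy import symbols, apart

t = symbols('t')
print(apart(1 / ((2 - t**2) * (3 - t**2)), t))  # 1/(t**2 - 3) - 1/(t**2 - 2)
```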
{ "language": "en", "url": "https://math.stackexchange.com/questions/674019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Determine if $\beta = \{b_1, b_2, b_3\}$ is linearly independent. Let $\beta = \{b_1, b_2, b_3\}$. Suppose that all you know is that: * *$b_2$ is not a multiple of $b_3$. *$b_1$ is not a linear combination of $b_2, b_3$. Can you determine if $\beta$ is linearly independent? From the second assumption, I think we can tell that $b_1$ isn't a multiple of $b_2$ nor $b_3$, Right? So, All in all, we get that no vector is a multiple of the other. With that in mind, can we figure that $\beta$ is linearly independent?
Three vectors can be linearly independent without each being strictly a multiple of one other. You are correct that the vectors are linearly independent. Assuming $b_1, b_2, b_3$ are non-zero: We start with the set $\{b_2, b_3\}$ where we know that $b_2$ and $b_3$ must be linearly independent, since one is not a multiple of the other. We know that vectors are linearly independent if any one of them cannot be written as a linear combination of the others. In the case of two vectors, this essentially means that they are linearly independent if one is not equal to a multiple of the other. We have $b_1$ and are given that $b_1$ is not a linear combination of $b_2 $ and $b_3$. That tells us that the addition of a vector that is linearly independent of two linearly independent vectors gives us a set of three linearly independent vectors $\;\{b_1, b_2, b_3\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/674102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Optimization of cake pan volume from area of pan It was difficult to accurately word this question, so hopefully a bit of context will clear that up. Context: I have a cake dish that is made by cutting out squares from the corners of a 25cm by 40 cm rectangle of tin. 40cm _____________________ |_| |_| 25cm | | | | |_ _| | |_________________|_| A 3D cake tin is made by folding the edges once the squares have been cut away. What size squares must be cut out to produce a cake dish of maximum volume? My working: * *I know that the area of the pan without the squares will be: (40 - 2X) * (25-2Y) = Volume *But that's about all I can wrap my head around. I know that the pieces cut off are sqaures, so they will have the same width and length. But that's all I can think of doing... How exactly can I find what size square will produce a maximum volume for the container? I'm pretty terrible at math, and I know it looks like I've done nothing to try and solve this. But I honestly am at a bit of a loss.
Hint: Suppose we remove $x\times x$ squares. The "finished" pan will have length $40-2x$, width $25-2x$, and depth $x$. So its volume $V(x)$ is given by $$V(x)=(40-2x)(25-2x)(x).$$ You want to choose $x$ that maximizes $V(x)$. Note that we will need $0\lt x$ and $2x\lt 25$.
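Carrying the hint through with SymPy (a sketch of my own, not part of the hint):

```python
from sympy import symbols, diff, solve

x = symbols('x', positive=True)
V = (40 - 2*x) * (25 - 2*x) * x
crit = solve(diff(V, x), x)
print(crit)                          # [5, 50/3]
print([V.subs(x, c) for c in crit])  # 2250 at x = 5; a negative value at x = 50/3
```

Only $x=5$ satisfies $2x<25$, so cutting out $5\text{ cm}$ squares gives the maximal volume $2250\text{ cm}^3$.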
{ "language": "en", "url": "https://math.stackexchange.com/questions/674168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to show derivative of $y=x^\frac{1}{2}$ using limit theorem I am trying to understand why the derivative of $f(x)=x^\frac{1}{2}$ is $\frac{1}{2\sqrt{x}}$ using the limit theorem. I know $f'(x) = \frac{1}{2\sqrt{x}}$, but what I want to understand is how to manipulate the following limit so that it gives this result as h tends to zero: $$f'(x)=\lim_{h\to 0} \frac{(x+h)^\frac{1}{2}-x^\frac{1}{2}}{h} = \frac{1}{2\sqrt{x}}$$ I have tried writing this as : $$\lim_{h\to 0} \frac{\sqrt{x+h} - \sqrt{x}}{h}$$ But I can't see how to get to the limit of $\frac{1}{2\sqrt{x}}$. Whilst this is not homework I have actually been set, I would like to understand how to evaluate the limit.
Hint: Simplify $$ \lim_{h\to0}\frac{\sqrt{x+h}-\sqrt{x}}{h} = \lim_{h\to0}\frac{\sqrt{x+h}-\sqrt{x}}{h}\cdot\frac{\sqrt{x+h} + \sqrt{x}}{\sqrt{x+h} + \sqrt{x}} $$
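After multiplying by the conjugate, the $h$ cancels and the limit follows; a one-line SymPy check of my own:

```python
from sympy import symbols, sqrt, limit

x, h = symbols('x h', positive=True)
print(limit((sqrt(x + h) - sqrt(x)) / h, h, 0))  # 1/(2*sqrt(x))
```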
{ "language": "en", "url": "https://math.stackexchange.com/questions/674259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does $\tbinom{4n}{2n}$ relate to $\tbinom{2n}{n}$? I got this question in my mind when I was working on a solution to factorial recurrence and came up with this recurrence relation: $$(2n)!=\binom{2n}{n}(n!)^2$$ which made me wonder: is there also a recurrence relation for $\tbinom{4n}{2n}$ in terms of $\tbinom{2n}{n}$? Please use no factorials greater than $(2n)!$, preferably not greater than $n!$.
Here is an estimate that gives a good approximation of $\binom{4n}{2n}$ in terms of $\binom{2n}{n}$. Using the identity $$ (2n-1)!!=\frac{(2n)!}{2^nn!}\tag{1} $$ it is straightforward to show that $$ \frac{\binom{4n}{2n}}{\binom{2n}{n}}=\frac{(4n-1)!!}{(2n-1)!!^2}\tag{2} $$ Notice that $$ \begin{align} \frac{(2n-1)!!}{2^nn!} &=\frac{2n-1}{2n}\frac{2n-3}{2n-2}\frac{2n-5}{2n-4}\cdots\frac12\\ &=\frac{n-\frac12}{n}\frac{n-\frac32}{n-1}\frac{n-\frac52}{n-2}\cdots\frac{\frac12}{\;1}\\ &=\frac1{\sqrt\pi}\frac{\Gamma(n+\frac12)}{\Gamma(n+1)}\tag{3} \end{align} $$ By Gautschi's Inequality, we have $$ \frac1{\sqrt{n+1}}\le\frac{\Gamma(n+\frac12)}{\Gamma(n+1)}\le\frac1{\sqrt{n}}\tag{4} $$ Thus, $(3)$ and $(4)$ yield $$ \frac1{\sqrt{\pi(n+1)}}\le\frac{(2n-1)!!}{2^nn!}\le\frac1{\sqrt{\pi n}}\tag{5} $$ and $$ \frac1{\sqrt{\pi(2n+1)}}\le\frac{(4n-1)!!}{2^{2n}(2n)!}\le\frac1{\sqrt{\pi 2n}}\tag{6} $$ Dividing $(6)$ by $(5)$ gives $$ \sqrt{\frac{n}{2n+1}}\le\frac{(4n-1)!!}{(2n-1)!!^2}4^{-n}\le\sqrt{\frac{n+1}{2n}}\tag{7} $$ Combine $(2)$ and $(7)$ to get $$ 4^n\sqrt{\frac{n}{2n+1}}\le\frac{\binom{4n}{2n}}{\binom{2n}{n}}\le4^n\sqrt{\frac{n+1}{2n}}\tag{8} $$ or asymptotically $$ \binom{4n}{2n}\sim\frac{4^n}{\sqrt2}\binom{2n}{n}\tag{9} $$
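The bounds in $(8)$ and the asymptotic $(9)$ are easy to observe numerically (Python sketch of my own):

```python
from math import comb, sqrt

for n in [1, 5, 10, 50, 100]:
    ratio = comb(4 * n, 2 * n) / comb(2 * n, n)
    print(n, ratio / (4**n / sqrt(2)))  # tends to 1, as (9) predicts
```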
{ "language": "en", "url": "https://math.stackexchange.com/questions/674337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
is there a formula for modulo I have been trying to find a formula for modulo for a long time now. I was wondering, is this even possible? I know there are lots of solutions for this problem in computer science, but is there a solution for this problem in arithmetic? I mean, is there a function that uses only arithmetic operations that can solve this problem? (I mean operations like $\log$ or $\sqrt{}$ or something like that.)
For positive integers $x$ and $n$, a solution for $x$ modulo $n$ is $$\bmod \left( {x,n} \right) = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n - 1} \mathop \sum \limits_{k = 0}^{n - 1} i\exp \left( {j\left( {x - i} \right)\frac{{2\pi k}}{n}} \right)$$ where ${j^2} = - 1$
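The formula works because the inner sum over $k$ equals $n$ exactly when $n \mid (x-i)$ and $0$ otherwise. A numerical check of my own (note it returns $0$ when $x\equiv 0 \pmod n$, consistent with the sum running over $i=1,\dots,n-1$):

```python
import numpy as np

def mod_formula(x, n):
    i = np.arange(1, n)[:, None]
    k = np.arange(n)[None, :]
    s = np.sum(i * np.exp(1j * (x - i) * 2 * np.pi * k / n)) / n
    return s.real                                 # imaginary part cancels to ~0

for x in [0, 1, 7, 23, 100]:
    print(x, round(mod_formula(x, 15)), x % 15)   # formula vs built-in %
```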
{ "language": "en", "url": "https://math.stackexchange.com/questions/674419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
Equivalence relation and subgroup I am taking abstract algebra now, and there's a lemma: Let $H$ be a subgroup of a group $G$; for $a,b \in G$, define $a\sim b$ if $ab^{-1}\in H$; then this is an equivalence relation. I know how to prove it and how to use it in the proof of Lagrange's theorem, but can anyone give me a more intuitive mathematical explanation of it? My book has an example: let $a,b$ belong to the same congruence class; then $a\sim b$ means $a-b \in H$, as in $a\equiv b\pmod n$. But that is a particular case; can someone give me a more general one? How was the lemma created?
If $H$ is what is called a normal subgroup then the intuition is the following: There exists a homomorphism (that is, a map which preserves the group structure) $\phi: G\rightarrow K$ such that $a\sim b$ if and only if $\phi(a)=\phi(b)$. The subgroup $H$ is precisely the elements of $G$ which map to the identity of $K$, $\phi(h)=1_K$ for all $h\in H$. This is a very fundamental and important notion in group theory. For example, $\mathbb{Z}$ forms a group under addition, and there is a subgroup $H_n=\{ni; i\in\mathbb{Z}\}$. So addition modulo $n$ corresponds to a homomorphism from $\mathbb{Z}$ with kernel $H_n$. A normal subgroup, written $H\lhd G$, is one where $g^{-1}Hg=H$ for all $g\in G$, and a (group) homomorphism is a map $\phi: G\rightarrow K$ such that $\phi(g)\phi(h)=\phi(gh)$. The theorem which connects these two concepts is called the First Isomorphism Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/674539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How can you find the cube roots of $i$? I am trying to figure out what the three possibilities of $z$ are such that $$ z^3=i $$ but I am stuck on how to proceed. I tried algebraically but ran into rather tedious polynomials. Could you solve this geometrically? Any help would be greatly appreciated.
The answer of @Petaro is best, because it suggests how to deal with such questions generally, but here’s another approach to the specific question of what the cube roots of $i$ are. You know that $(-i)^3=i$, and maybe you know that $\omega=(-1+i\sqrt3)/2$ is a cube root of $1$. So the cube roots of $i$ are the numbers $-i\omega^n$, $n=0,1,2$.
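Numerically (a cmath sketch of my own):

```python
import cmath

omega = (-1 + cmath.sqrt(3) * 1j) / 2  # a primitive cube root of 1
for n in range(3):
    z = -1j * omega**n
    print(z, z**3)                      # each cube is i, up to rounding
```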
{ "language": "en", "url": "https://math.stackexchange.com/questions/674621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 2 }
Exact sequence induces exact sequences for free parts and torsion parts? Let $A$ be a PID and consider the exact sequence of finitely generated modules over $A$: $$0\longrightarrow M' \overset{f}{\longrightarrow}M\overset{g}{\longrightarrow}M''\longrightarrow 0 \tag{1}.$$ Denote the free part and torsion part by $F(M)$ etc. and $T(M)$ etc. respectively. Does the above exact sequence induce exact sequences on the free parts and torsion parts?
The sequence $$ 0\rightarrow \mathbb Z\xrightarrow{n} \mathbb Z\rightarrow \mathbb Z_n\rightarrow 0 $$ is exact in $\mathbb Z\text{-}\mathsf{Mod}$. Passing to torsion we have $$ 0\rightarrow 0\rightarrow 0\rightarrow \mathbb Z_n\rightarrow 0 $$ which is not exact. Passing to free parts we have $$ 0\rightarrow\mathbb Z\xrightarrow{n}\mathbb Z\rightarrow 0\rightarrow 0 $$ which is also not exact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/674813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Strong Induction: Finding the Inductive Hypothesis Consider this claim: Every positive integer greater than 29 can be written as a sum of a non-negative multiple of 8 and a non-negative multiple of 5. Assume you are in the inductive step and trying to prove P(n+1) using strong induction. What would be the inductive hypothesis for this problem, if formalized and written in logical notation? Answer Choices: * *∀k[(n≥k>29)∧[∃i≥0,j≥0[k=8i+5j]]] *∀k>29[∃i≥0,j≥0[k=8i+5j]] *∀k[(n≥k>29)→[∃i≥0,j≥0[k=8i+5j]]] *∃i≥0,j≥0[n=8i+5j] Please provide an explanation, I'm really trying to learn this stuff.
3) is the right one. In general, strong induction means that you do not just have $P(n)$ as your hypothesis, but "more strongly" that $\forall k\leq n\; P\left(k\right)$ is your hypothesis. Notice that $P\left(n\right)$ is a consequence of this hypothesis. Here $P(n)$ is the statement: $$n>29\Rightarrow\exists i\geq0\exists j\geq0\left[n=8i+5j\right]$$ Personally, in your case I would write $\forall k\leq n\; P\left(k\right)$ as: $$\forall k\leq n\left[k>29\Rightarrow\exists i\geq0\exists j\geq 0\left[k=8i+5j\right]\right]$$
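For intuition about why the statement itself is true (not part of the logic question; a brute-force Python check of my own):

```python
def representable(n):
    return any(n == 8*i + 5*j for i in range(n // 8 + 1) for j in range(n // 5 + 1))

print(max(n for n in range(1, 100) if not representable(n)))  # 27
print(all(representable(n) for n in range(30, 500)))          # True
```

So every integer greater than $29$ (indeed, greater than $27$) has such a representation.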
{ "language": "en", "url": "https://math.stackexchange.com/questions/674906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the proof correct for: If both ab and a + b are even then both a and b are even Show: If both $ab$ and $a + b$ are even, then both $a$ and $b$ are even Proof: Assume both $ab$ and $a + b$ are even but $a$ and $b$ are not both even. Case 1: exactly one is odd, say $a=2m+1$, $b=2n$. Hence $a+b = (2m+1) + 2n = 2(m+n) + 1$, which is odd, contradicting that $a+b$ is even. Case 2: both are odd, $a=2m+1$, $b=2n+1$. Hence $ab = (2m+1)(2n+1) = 2(2mn+m+n) + 1$, which is odd, contradicting that $ab$ is even. Therefore both $a$ and $b$ have to be even for both $ab$ and $a+b$ to be even. My questions are: 1) Is this proof correct? 2) Is this proof by contradiction, by negation, or something else?
You might also argue that as $a + b$ is even, both are odd or both are even. But if $a$ and $b$ are odd, then $a b$ is odd, contradicting the premises. So both are even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/674973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to calculate radius of convergence of the following series? How can I calculate radius of convergence of the following series? $$\Large \sum\limits_{n=0}^\infty \frac{5^{n+1}}{\sqrt[n]{(2n)!}}z^{n} $$ I tried using D'alembert convergence test but cannot figure out how to calculate. I know the answer is $\LARGE\frac{1}{5}$
This is what I have got. $$ 1\le((2n)!)^{\frac{1}{n^2}}\le((2n)^{2n})^{\frac{1}{n^2}} = (2n)^{\frac{2}{n}}=2^{\frac{2}{n}}n^{\frac{2}{n}}\xrightarrow{\scriptscriptstyle n\to\infty}1 $$ Therefore $$ \sqrt[n]{\frac{5^{n+1}}{\sqrt[n]{(2n)!}}} =\frac{5^{\frac{n+1}{n}}}{((2n)!)^{\frac{1}{n^2}}}\xrightarrow{\scriptscriptstyle n\to\infty}\frac{5}{1} $$ Hence, by the Cauchy-Hadamard formula, the radius of convergence is $\frac{1}{5}$.
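The convergence of the $n$-th root to $5$ can be watched numerically via logarithms (a Python sketch of my own; `lgamma(2n+1)` is $\log (2n)!$):

```python
from math import lgamma, log, exp

for n in [10, 100, 1000, 10000]:
    root = exp((n + 1) / n * log(5) - lgamma(2 * n + 1) / n**2)
    print(n, root)   # creeps up to 5, so R = 1/5
```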
{ "language": "en", "url": "https://math.stackexchange.com/questions/675041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
need help in complex numbers argument Any help appreciated please: use an Argand diagram to find, in the form $a+bi$, the complex numbers which satisfy the following pair of equations: $\arg(z+2)=\frac12\pi$, $\arg z=\frac23\pi$. Thanks
$z= -2 +2\sqrt{3} i$ -- to see why, first note that $arg(z+2)=\frac{1}{2}\pi$ means $z+2$ is on the positive imaginary axis, so $z$ is (in rectangular terms) 2 units left of that, so $a=-2$ (where $z=a+bi$). Now thinking of $z$ in polar form, the given condition $arg(z)=\frac{2}{3}\pi$ means $z=r e^{\frac{2}{3}\pi i}$ for whatever value of $r$ puts $z$ 2 units left of the vertical axis -- but noting the 30-60-90 triangles associated with $z$ (e.g. drop a vertical from $z$ down to the negative real axis), it must be the case that $r=4$. Converting to rectangular gives the answer above, or equivalently (again noting 30-60-90 triangle stuff), the vertical leg of the triangle is $\sqrt{3}$ times the length of the horizontal leg.
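A two-line numerical confirmation (my own addition, with Python's cmath):

```python
import cmath, math

z = 4 * cmath.exp(2j * math.pi / 3)    # r = 4, arg z = 2*pi/3
print(z)                               # (-2 + 3.464...j) = -2 + 2*sqrt(3) i
print(cmath.phase(z + 2), math.pi / 2) # arg(z + 2) = pi/2, as required
```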
{ "language": "en", "url": "https://math.stackexchange.com/questions/675162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Coin based subtraction game I'm having a problem in Game Theory where I am trying to understand how a subtraction game can be interpreted by a coin based game. From my book: The problem I'm having is if I have 9 coins and the subtraction set $ \{\ 1,2,3 \}\ $, say, and 3 of them are heads, let's say positions 5, 6 and 7 are heads $(TTTTHHHTT)$. And I want to subtract 2 from this heap of 18...I'd turn over the 7 and then turn over the 5, leaving me with 6, not 16! I explain it here: http://www.youtube.com/watch?v=eRqxC2j1Oxg&feature=youtu.be
If you have three heads coins $5$, $6$, $7$, this is equivalent to three piles of size $5$, $6$ and $7$, not a single pile of $18$ ! If you remove $2$ from the $7$-pile, you obtain three piles of size $5$, $5$ and $6$. But any impartial combinatorial game combined with itself is irrelevant (the second player can always copy the moves of the first player). Hence this is really equivalent to a pile of $6$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/675230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the cube root of a prime number rational? The question is: if $P$ is prime, is $P^{1/3}$ rational? I have been able to prove that if $P$ is prime then the square root of $P$ isn't rational (by contradiction) how would I go about the cube root?
Suppose $\sqrt[3]{P} = \dfrac{a}{b}$ where $a$ and $b$ have no common factors (i.e. the fraction is in reduced form). Then you have $$ b^3 P = a^3. $$ Both sides must be divisible by $a$ (since they are equal, and $a$ divides $a^3$). We already know that $a$ and $b$ share no common factor (we assumed the fraction is reduced), so $a$ must divide $P$. EDIT: and if $a = 1$, then $P = \dfrac{1}{b^3}$. How many integers are of the form $\dfrac{1}{B}$ for some $B$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/675327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Expected Value of 10000 coin flips We toss a fair coin 10000 times and record the sequence of the results. Then we count the number of times that a sequence of 5 heads in a row followed immediately by 5 tails in a row has occurred among these results. (Of course, this number is a random variable.) What is the expected value of this number? Enter your answer as a decimal or a fraction, whichever you prefer. I have made some progress on it, but granted I have one attempt left, I didn't want to mess this up. I have determined that the probability of 5 heads followed by 5 tails occurring in a given window of ten tosses is 0.0009765625. My line of thinking was: since we can't see this sequence occur until the 10th toss, the expected value over 10,000 flips would be 9990*0.0009765625, but this was wrong. I feel I'm very close, since for any window of ten tosses the probability is 0.0009765625.
$$N=10000,\ n=10\implies\frac{N-n+1}{2^n}=\frac{9991}{1024}\equiv9.7568359375$$
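Unpacking the formula: by linearity of expectation, sum the probability $2^{-10}$ of the fixed pattern HHHHHTTTTT over the $N-n+1$ possible starting positions (an exact-arithmetic check of my own):

```python
from fractions import Fraction

N, n = 10000, 10
p = Fraction(1, 2**n)   # P(a fixed window of 10 tosses reads HHHHHTTTTT)
windows = N - n + 1     # 9991 possible starting positions
print(windows * p, float(windows * p))  # 9991/1024 = 9.7568359375
```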
{ "language": "en", "url": "https://math.stackexchange.com/questions/675459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the value of $\lim_{x \to - \infty} \left( \sqrt{x^2 + 2x} - \sqrt{x^2 - 2x} \right)$ I am stuck on this. I would like the algebraic explanation or trick(s) that shows that the expression below has a limit of $-2$ (per the book). The wxMaxima code of the equation below. $$ \lim_{x \to - \infty} \left( \sqrt{x^2 + 2x} - \sqrt{x^2 - 2x} \right) $$ I've tried factoring out an $x$ using the $\sqrt{x^2} = |x|$ trick. That doesn't seem to work. I get $1 - 1 = 0$ for the other factor, meaning the limit is zero...but that's obviously not the correct way to go about it :( Thanks.
For $x>0$: for brevity let $A=\sqrt {x^2+2 x}.$ We have $(x+1)^2=A^2 +1>A^2>0$, so $x+1>A>0$. So we have $$0<x+1-A=(x+1-A)\frac {x+1+A}{x+1+A}=\frac {(x+1)^2-A^2}{x+1+A}=\frac {1}{x+1+A}<1/x.$$ Therefore $$(i)\quad \lim_{x\to \infty} (x+1-A)=0.$$ For $x>2$: for brevity let $B=\sqrt {x^2-2 x}.$ We have $(x-1)^2=B^2+1>B^2>0$, so $x-1>B>0$. So we have $$0<x-1-B=(x-1-B)\frac {x-1+B}{x-1+B}=\frac {(x-1)^2-B^2}{x-1+B}=\frac {1}{x-1+B}<1/(x-1).$$ Therefore $$(ii)\quad\lim_{x\to \infty}(x-1-B)=0.$$ We have $\sqrt {x^2+2 x}-\sqrt {x^2-2 x}=A-B=(A-(x+1))-(B-(x-1))+2.$ From $(i)$ and $(ii)$ we have $$\lim_{x\to \infty} (A-B)=\lim_{x\to \infty}\left[(A-(x+1))-(B-(x-1))+2\right]=0+0+2=2.$$ The idea is that when $x$ is large, $A$ is close to $x+1$ and $B$ is close to $x-1$, so $A-B$ is close to $(x+1)-(x-1)=2.$ For the original limit as $x\to -\infty$, substitute $x\mapsto -x$: the expression changes sign, so the limit is $-2$.
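A floating-point spot check of both directions (my own addition):

```python
import math

def f(x):
    return math.sqrt(x*x + 2*x) - math.sqrt(x*x - 2*x)

print(f(1e8))    # approx  2: the x -> +infinity case treated above
print(f(-1e8))   # approx -2: the x -> -infinity limit asked in the question
```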
{ "language": "en", "url": "https://math.stackexchange.com/questions/675516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Automorphisms on a field $F$ I am trying to understand this proposition with respect to algebraic closures of a field $F$. Prop: If $F$ is a finite field, then every isomorphism mapping $F$ onto a subfield of an algebraic closure $\bar{F}$ of $F$ is an automorphism of $F$. Does this mean that if we have some subfields $K_i \leq \bar{F}$ (for some $i\geq 1$) such that there exists an isomorphism $\Phi: F \rightarrow K_i$, then in reality all these $K_i = F$ and the $\Phi$'s are automorphisms? Also, how do we go about proving this? Thank you.
Yes, it does mean that for every field homomorphism $\Phi \colon F \to \overline{F}$ we have $\operatorname{im}\Phi = F$. Every $x \in F$ satisfies the relation $x^n = x$, where $n$ is the number of elements of $F$, thus is a zero of the polynomial $P(X) = X^n - X$. The degree of $P$ is $n$, so $P$ has exactly $n$ zeros in $\overline{F}$ (counting multiplicities, although here they are all $1$). But if $\Phi \colon F\to \overline{F}$ is a field homomorphism, we have $$\Phi(x)^n - \Phi(x) = \Phi(x^n-x) = \Phi(0) = 0$$ for every $x\in F$, so $\Phi(x)$ is a zero of $P$, hence an element of $F$. Since field homomorphisms are always injective, and $F$ is finite, $\Phi(F) \subset F$ implies $\Phi(F) = F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/675553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Using continued fractions to prove a bijection. Prove that $\mathbb{N}^{\mathbb{N}} \equiv_c \mathbb{R}$ $\textbf{My Attempt:}$ Let \begin{equation*} \mathbb{N}^{\mathbb{N}}=\{f: \text{all functions} \mid f: \mathbb{N} \to \mathbb{N} \} \end{equation*} Let \begin{equation*} g:= f \to f(\mathbb{N}) \end{equation*} In this mappings $f \in \mathbb{N}^{\mathbb{N}}$ is mapped to its image represented in decimal form. For example, if $f$ is a constant function such that $f(x)=c \hspace{2mm} \forall x \in \mathbb{N}$ then $g(f)=0.cccccccc \dots$. Similarly, if $f$ is the identity function then $g(f)=0.123456\dots$. Such a mapping $g$ is clearly bijective. Let \begin{equation*} h := f(\mathbb{N}) \to \mathbb{R} \end{equation*} Such that \begin{equation*} h\left (0.x_1x_2x_3\dots \right)=[x_1;x_2,x_3,x_4, \dots] \end{equation*} Thus $h$ is a mapping from the decimal expansion to the continued fraction representation of some $x \in \mathbb{R}$. Note that $h$ is onto because every element $x \in \mathbb{R}$ can be represented as a continued fraction.Also, $h$ is one-to-one because the unique decimal representation is mapped to a unique continued fraction by definition. Then $f \circ g$ is bijective. \begin{equation*} \mathbb{N}^{\mathbb{N}} \equiv_c \mathbb{R} \end{equation*} I noticed that all my decimal representations will be infinite. However, rational numbers have finite continued fraction expansions. Is there any way I could tweak my proof to account for this?
I wouldn't bother with decimal expansions here. If $\mathbb N$ means $\{1,2,3,\ldots\}$, then a function $f:\mathbb N\to\mathbb N$ corresponds to a simple continued fraction $$ f(1) + \cfrac{1}{f(2)+\cfrac{1}{f(3)+\cfrac{1}{\ddots}}} $$ This gives a bijection from the set of all functions $f:\mathbb N\to\mathbb N$ to the set of all positive irrational numbers. (Proving that it's one-to-one is something to think about. As is proving that it's onto.) Next you want a bijection from the set of all positive irrational numbers to the set of all positive real numbers. After that, the function $\log$ is a bijection from the positive reals to all reals. To find a bijection from the set of positive irrationals to the set of positive reals, I'd think about the fact that only countably many positive reals are rational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/675658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solve differential equation $ (dy/dx)(x^2) + 2xy = \cos^2(x)$ $(dy/dx)(x^2) + 2xy = \cos^2(x)$ $(dy/dx) + 2y/x = \cos^2 x$ I multiplied both sides by $e^{2\ln\ x + c}$, then rewrote the equation as $(d/dx)(y* e^{2\ln\ x + c}) = (\cos^2(x)/x^2)*(e^{2\ln\ x + c})$ Now when I try to integrate, the right side becomes complicated. Am I going about this the wrong way? I'm following the textbook.
The left-hand side is the derivative of $x^2y$. Let $u=x^2y$. Solving the differential equation $\frac{du}{dx}=\cos^2 x$ is a routine integration.
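A CAS confirms the resulting family of solutions (a SymPy sketch of my own; the antiderivative may print in an equivalent form):

```python
from sympy import symbols, Function, Eq, dsolve, cos

x = symbols('x')
y = Function('y')
ode = Eq(x**2 * y(x).diff(x) + 2*x*y(x), cos(x)**2)
print(dsolve(ode, y(x)))  # y(x) = (C1 + x/2 + sin(2*x)/4)/x**2, possibly rewritten
```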
{ "language": "en", "url": "https://math.stackexchange.com/questions/675758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Incongruent solutions to $7x \equiv 3$ (mod $15$) I'm supposed to find all the incongruent solutions to the congruency $7x \equiv 3$ (mod $15$) \begin{align*} 7x &\equiv 3 \mod{15} \\ 7x - 3 &= 15k \hspace{1in} (k \in \mathbb{Z}) \\ 7x &= 15k+3\\ x &= \dfrac{15k+3}{7}\\ \end{align*} Since $x$ must be an integer, we must find a pattern for $k$ that grants this. We know that $\frac{k+3}{7}$ must be equal to some integer, say $m$. Solving for $k$, we have $k=4+7m$. Substituting this into our value for $x$, we get: \begin{align*} x & = \dfrac{15(4+7m) + 3}{7}.\\ &= \dfrac{63}{7} + \frac{105m}{7}.\\ &= 9+15m. \end{align*} So, $x = 9+15m, m\in \mathbb{Z}.$ So, is this what I was looking for? I'm not exactly sure what is meant by incongruent solutions.
We can solve this congruence equation in elementary way also. We shall write $[15]$ to denote the word mod 15. Fine? Note that \begin{align*} &7x\equiv 3[15]\\ -&15x +7x\equiv 3-15[15] ~~\text{because}~~ 15x\equiv 0\equiv 15[15]\\ -&8x\equiv -12[15]\\ &2x\equiv 3\left[\frac{15}{\gcd(15, -4)}\right]~~\text{since}~~ax\equiv ay[m]\Rightarrow x\equiv y\left[\frac{m}{\gcd(a, m)}\right]\\ &2x\equiv 3[15]~~\text{since}~~\gcd(15,-4)=\gcd(15, 4)=1\\ &2x\equiv 3+15[15]~~\text{since} 15\equiv 0[15]\\ &2x\equiv 18[15]\\ &x\equiv 9\left[\frac{15}{\gcd(15, 2)}\right]\\ &x\equiv 9[15] \end{align*} Thus the solution is given by $x\equiv 9[15]$.
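In Python (3.8+) the whole computation is two lines, since `pow(7, -1, 15)` gives the modular inverse (my own illustration):

```python
x = (pow(7, -1, 15) * 3) % 15  # multiply both sides of 7x = 3 (mod 15) by 7^{-1}
print(x)                        # 9
print(7 * 9 % 15)               # 3, confirming the solution class x = 9 (mod 15)
```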
{ "language": "en", "url": "https://math.stackexchange.com/questions/675867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
If $A$ and $B$ are sets of real numbers, then $(A \cup B)^{\circ} \supseteq A^ {\circ}\cup B^{\circ}$ I have a proof for this question, but I want to check if I'm right and if I'm wrong, what I am missing. Definitions you need to know to answers this question: $\epsilon$-neighborhood, interior points and interiors. Notation: $J_{\epsilon}(a)$ means a neighborhood formed around a (i.e. $(a-\epsilon, a+\epsilon)$. An interior point in some set $A$ is a point where an $\epsilon$-neighborhood can be formed within the set. The set of all interior points in $A$ is denoted as $A^0$ and is called the interior. My proof for the question: I'm proving based on most subset proofs where you prove that if an element is in one set, then it must be in the other. Say $x \in A^0 \cup B^0$. Then there is a $\epsilon > 0$, where $J_{\epsilon}(x) \subseteq A$ or $J_{\epsilon}(x) \subseteq B$. This implies that $J_{\epsilon}(x) \subset A \cup B$ which then implies $x \in (A \cup B)^0$. We can conclude from here that $(A \cup B)^0 \supseteq A^0 \cup B^0$.
Your proof is good- there is no problem with it. I have reworded a bit to make things a little more clear, specifically, where you say: Then there is $\epsilon>0$, where $J_\epsilon (x)\subseteq A$ or $J_\epsilon (x)\subseteq B$. Instead it would be more appropriately stated as: If $x\in A^0\cup B^0$ then $x\in A^0$ or $x\in B^0$. Then you can assume $x$ is in $A$, prove that $x\in (A\cup B)^0$, by symmetry the same argument will work if $x\in B^0$. I have included an edited proof- but as I originally stated your proof is good, this just more directly reflects the definitions involved. Let $x\in A^0\cup B^0$. Then $x\in A^0$ or $x\in B^0$. Assume $x\in A^0$. Then there is, by definition, $\epsilon>0$ such that $J_{\epsilon}(x)\subseteq A$. Since $A\subseteq A\cup B$, we also have $J_{\epsilon}(x)\subseteq A\cup B$. Thus by definition we have $x\in (A\cup B)^0$. If instead $x\in B^0$, then by symmetry the same argument works. Thus, $A^0\cup B^0 \subseteq (A\cup B)^0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/675981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How to denote a set of functions Say there is an unknown function $h(x)$ $$\int_A^B h(x) = c$$ $A$, $B$ and $c$ are known. So $h(x)$ can have various forms on the range $[A,B]$. I want to know how to denote the set of functions for $h(x)$. I know the notation for a set is $\{...\}$. So would it be: $\{h(x)|\int_A^B h(x) = c\}$? Or is there a different way to refer to a bunch of different possible functions? I intend to narrow down this set by gradually introducing boundary restrictions/conditions. E.g. $h(x) \in \mathbb R$ and $h(x) = f(x)\cdot g(x)$ with $g(x)$ known.
Summarizing the comments: try * *$\{h \in C([A,B])\vert\int_A^B h(x)\,dx = c\}$ or *$\{h \in L^1([A,B])\vert\int_A^B h(x)dx = c\}$ depending on what kind of functions you consider. Simply saying functions with integral equal to $3$ is usually ambiguous: there are different kinds of integrals. Avoid writing $h(x)$ when you mean $h$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/676089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can a closed set in $\Bbb R$ be written in terms of open sets Is it possible to write any non-empty closed set in $\Bbb R$ as a combination of unions / intersection of open sets. Note that I don't demand just one union / intersection. I am happy with any combination (finite) of unions / intersection, but elements in each union/intersection must be open set. Any finite set, closed interval, cantor set can be written like this.
For any set $A\subset \mathbb R$ define $B(A, r)=\left\{x\in\mathbb R: \operatorname{dist}(x,A) < r\right\}$, where $\operatorname{dist}(x,A) = \inf_{y\in A}\rho(x,y)$ with metric $\rho$. $B(A,r)$ is open for any $A$ and $r$. Now for every closed set $F\subset\mathbb R$ $$F = \bigcap_{n\in\mathbb N}B(F, \frac 1 n)$$ So every closed set in $\mathbb R$ is indeed an intersection of countably many open sets (so it is a $G_\delta$ set).
{ "language": "en", "url": "https://math.stackexchange.com/questions/676173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
calculate x,y positions in circle every n degrees I am having trouble trying to work out how to calculate the $(x,y)$ point around a circle at a given distance from the circle's center. Variables I do have are: the constant distance/radius from the center ($r$) and the angle from the $y$ origin. I basically need a point ($x$ and $y$) around a circle every $18$ degrees. Excuse me if this is a very basic question but math is not my strong point :/ Ta John
For the standard orientation: $x = r\cos(\theta)$, $y = r\sin(\theta)$. For an inverted $y$-axis (as in most screen coordinate systems): $x = r\sin(\theta)$, $y = -r\cos(\theta)$. If your angle is in degrees, convert first: $\theta_{\text{rad}} = \theta_{\text{deg}} \times 0.0174532925$ (i.e. $\pi/180$), then $x = r\cos(\theta_{\text{rad}})$, $y = r\sin(\theta_{\text{rad}})$. The radian is the standard unit of angular measure; any time you see angles, assume they are in radians unless told otherwise.
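Putting it together for the asker's case of a point every 18 degrees (a Python sketch of my own; the center offset `(cx, cy)` is an addition so the circle need not sit at the origin):

```python
import math

def points_on_circle(cx, cy, r, step_deg=18):
    pts = []
    for deg in range(0, 360, step_deg):
        t = math.radians(deg)   # degrees -> radians
        pts.append((cx + r * math.cos(t), cy + r * math.sin(t)))
    return pts

print(len(points_on_circle(0.0, 0.0, 10.0)))  # 20 points, one every 18 degrees
```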
{ "language": "en", "url": "https://math.stackexchange.com/questions/676249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Prove that if $\operatorname{rank}A=n$, then $\operatorname{rank}AB=\operatorname{rank}B$ Let $A \in M_{m\times n}(\mathbb{R})$ and $B \in M_{n\times p}(\mathbb{R})$. Prove that if $\operatorname{rank}(A)=n$ then $\operatorname{rank}(AB)=\operatorname{rank}(B)$. I tried to start with definitions finding that $n \le m$, but didn't know what to do with $AB$. Please help, thank you!
Two useful facts to remember: (a) $rk(A + B) \leq rk(A) + rk(B)$ for any two $m\times n$ matrices $A,B$; (b) $rk(AB) \leq \min (rk(A),\ rk(B))$ for any $k\times l$ matrix $A$ and $l\times m$ matrix $B$. Now for your question. Note first that $rk(A)=n$ forces $n \le m$, since the rank of an $m\times n$ matrix cannot exceed $\min(m,n)$. Fact (b) already gives $rk(AB)\le rk(B)$, so it remains to prove the reverse inequality. Since $rk(A)=n$, the $n$ columns of $A$ are linearly independent, so $Av = 0$ implies $v = 0$ for $v \in \mathbb{R}^n$. Hence for $x \in \mathbb{R}^p$, $$ABx = 0 \iff A(Bx) = 0 \iff Bx = 0,$$ i.e. $\ker(AB) = \ker(B)$. Applying the rank-nullity theorem to $AB$ and to $B$ (both have domain $\mathbb{R}^p$), $$rk(AB) = p - \dim\ker(AB) = p - \dim\ker(B) = rk(B).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/676333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
Smoothness of the Picard group of a smooth curve Let $X$ be a smooth projective curve over $k=\bar{k}$ and denote its Picard group by $\operatorname{Pic}(X)$, with the usual scheme structure coming from the representability of the relative Picard functor. It's well known that $\operatorname{Pic}(X)$ is smooth of dimension $g$ everywhere in the case of characteristic $0$. For positive characteristic, Igusa and Serre constructed examples of smooth surfaces presenting singular Picard groups. What can be said about smooth curves in positive characteristic? Is the Picard group always smooth?
The Picard group splits as the product of $\mathbb{Z}$ and the Jacobian variety of $X$, and so each connected component of $\mbox{Pic}(X)$ is (non-canonically) isomorphic to the Jacobian of $X$ which is smooth. Edit: To see that the Picard group splits, consider the exact sequence $$0\to\mbox{Pic}^0(X)\to\mbox{Pic}(X)\stackrel{\deg}{\to}\mathbb{Z}\to0$$ where $\mbox{Pic}^0(X)=\{\mathcal{O}_X(D):\deg(D)=0\}$. This sequence splits since if $p_0\in X$, we have a section $\mathbb{Z}\to\mbox{Pic}(X)$ where $m\mapsto\mathcal{O}_X(mp_0)$. It is well-known (and most of the time defined this way) that the Jacobian of $X$ is isomorphic to $\mbox{Pic}^0(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/676403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why aren't these two integration methods yielding the same answer? I'm trying to solve this (not homework, if it matters), and both u-substitution and integration by parts are both yielding two different answers. Where am I going wrong? Equation: $$\int \frac{(4x^3)}{(x^4+7)}dx$$ u-substitution answer: $$=\ln\big|(x^4+7)\big|+C$$ integration by parts answer: $$=\int4x^3*(x^4+7)^{-1}dx$$ $$=4x^3*\ln\big|x^4+7\big|-\int 12x^2*(x^4+7)^{-1}dx$$ $$=4x^3*\ln\big|x^4+7\big|-(12x^2*ln\big|x^4+7\big|-\int 24x*(x^4+7)^{-1}dx)$$ $$=4x^3*\ln\big|x^4+7\big|-(12x^2*ln\big|x^4+7\big|-24x*ln\big|x^4+7\big|-\int 24(x^4+7)^{-1}dx)$$ $$= 4x^3*\ln\big|x^4+7\big|-(12x^2*\ln\big|x^4+7\big|-(24x\ln\big|x^4+7\big|-24\ln\big|x^4+7\big|))$$ $$=(4x^3-12x^2+24x-24)(\ln\big|x^4+7\big|)$$
For $u = x^4+7$, $du = 4x^3\,dx$, so $$ \int \frac{4x^3}{x^4+7} dx = \int \frac{du}{u} = \ln |x^4+7| + C. $$ The $u$-substitution answer is correct. In the by-parts attempt, the error is in the very first step: it uses $\int(x^4+7)^{-1}dx = \ln|x^4+7|$, which is false, because differentiating $\ln|x^4+7|$ produces an extra factor of $4x^3$ by the chain rule. Write out each by-parts step carefully and the error becomes clear.
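For what it's worth, a computer algebra system confirms the $u$-substitution result; a small SymPy sketch (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(4*x**3 / (x**4 + 7), x)
print(F)  # log(x**4 + 7), i.e. ln|x^4 + 7| up to a constant

# differentiating recovers the integrand, confirming the antiderivative
print(sp.simplify(sp.diff(F, x) - 4*x**3/(x**4 + 7)))  # 0
```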
{ "language": "en", "url": "https://math.stackexchange.com/questions/676499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Find value of $r$ and the limit For some $r \in \mathbb Q$, the limit $$\lim_{x \rightarrow \infty}x^r.\frac{1}2.\frac{3}4.\frac{5}6......\frac{2x-1}{2x}$$ exists and is non zero What is that value of $r$ and what is that limit equal to? I rewrote the product $\frac{1}2.\frac{3}4.\frac{5}6......\frac{2x-1}{2x}$ = $\frac{(2x)!}{2^{2x}(x!)^2}$ but it didn't help.
Let $$A = x^r\cdot\frac{1}{2}\cdot\frac{3}{4} \cdots \frac{2x-1}{2x}. \tag{1}$$ Clearly $$A < x^r\cdot \frac{2}{3}\cdot\frac{4}{5}\cdot\frac{6}{7} \cdots \frac{2x}{2x+1} \tag{2}$$ and $$A > x^r\cdot \frac{1}{2}\cdot\frac{2}{3}\cdot\frac{4}{5} \cdots \frac{2x-2}{2x-1}. \tag{3}$$ Multiplying $(1)$ and $(3)$, $A^2 > x^{2r}\cdot\frac{1}{2(2x)}$, or $A > \frac{x^r}{2 \sqrt{x}}$. The limit does not exist if $r >0.5$. Now we need to check whether the limit exists for $r \leq 0.5$. Multiplying $(1)$ and $(2)$, we get $A^2 < \frac{x^{2r}}{2x+1}$, so $A < \frac{x^r}{\sqrt{2x+1}}$, and it is clear that $A$ is finite and nonzero for $r=0.5$, with $\lim_{x \to \infty} A \leq \frac{1}{\sqrt{2}}$. I am unable to go further and give the exact value of the limit. (In fact, since $\frac{1}{2}\cdot\frac{3}{4}\cdots\frac{2x-1}{2x} = \binom{2x}{x}/4^x \sim \frac{1}{\sqrt{\pi x}}$ by Stirling's formula, the limit for $r = 0.5$ equals $\frac{1}{\sqrt{\pi}}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/676574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do I put $\sqrt{x+1}$ into exponential notation? I think $\sqrt{x+1} = x^{1/2} + 1^{1/2}$. Is this incorrect? Why or why not?
Remember the formula for fractional exponents: $$x^\frac{m}{n} = \sqrt[n]{x^m}$$ It can also be written as: $$x^\frac{m}{n} = (\sqrt[n]{x})^m$$ $\sqrt{x+1}$ can be rewritten as $\sqrt[2]{(x+1)^1}$ Using our formula, we can say that: $$\sqrt[2]{(x+1)^1} = (x+1)^\frac{1}{2}$$ Also remember that: $$x^m + y^m \neq (x+y)^m$$ You should be familiar with the Pythagorean triple $3$-$4$-$5$. It can be easily seen that: $$3^2 + 4^2 \neq (3+4)^2 = 7^2$$ $$3^2 + 4^2 = 5^2$$ I will now go back to the original question. You think that: $$\sqrt{x+1} = x^\frac{1}{2} + 1^\frac{1}{2}$$ Let us expand $x^\frac{1}{2} + 1^\frac{1}{2}$. $$x^\frac{1}{2} + 1^\frac{1}{2}$$ $$=\sqrt[2]{x^1} + \sqrt[2]{1^1}$$ $$=\sqrt{x} + \sqrt{1}$$ $$=\sqrt{x} + 1$$ $$\sqrt{x} + 1 \neq \sqrt{x+1}$$ So, because $\sqrt{x} + 1 \neq \sqrt{x+1}$, we have $\sqrt{x+1} \neq x^\frac{1}{2} + 1^\frac{1}{2}$. The correct answer is: $$\sqrt{x+1} = (x+1)^\frac{1}{2}$$ Before I forget, always remember the restrictions on $\sqrt{x+1} = (x+1)^\frac{1}{2}$. If $x < -1$, then the expression is undefined, because you cannot take the square root of a negative number within the reals. TECHNICALLY, it is not undefined, because complex numbers cover square roots of negative numbers. But for the purposes of this question, UNLESS you have learned about complex numbers and the number $i$, treat the expression as undefined if $x < -1$. RESTRICTIONS ON $\sqrt{x+1} = (x+1)^\frac{1}{2}$: $x \ge -1$, $\sqrt{x+1} \ge 0$, $(x+1)^\frac{1}{2} \ge 0$. I hope I have enlightened you a bit :) EDIT: I do not want you to think that the statement $\sqrt{x}+1=\sqrt{x+1}$ is entirely false. There is a value of $x$ that will make the equation true. Let's find it. $$\sqrt{x}+1=\sqrt{x+1}$$ $$x+2\sqrt{x}+1=x+1$$ $$2\sqrt{x}=0$$ $$4x=0$$ $$x=0$$ Checking for extraneous roots... $$\sqrt{0}+1=\sqrt{0+1}$$ $$1=\sqrt{1}$$ $$1=1$$ So, $\sqrt{x}+1=\sqrt{x+1}$ when $x=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/676634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Is ZF${}-{}$(Axiom of Infinity) consistent? Godel's theorem implies that Con(ZF) is not provable in ZF since it contains the axiom of infinity. So is it consistent if the Axiom of infinity is removed?
Your question is unclear. It is true that $\sf ZF$ cannot prove its own consistency. But $\sf ZF$ can prove the consistency of $\sf ZF-Infinity$, simply by verifying that the set of hereditarily finite sets satisfies all the axioms of $\sf ZF$ except the axiom of infinity. This set, often denoted by $HF$ or $V_\omega$ can be defined as follows, $V_0=\varnothing, V_{n+1}=\mathcal P(V_n)$, and $V_\omega=\bigcup V_n$. Note that all the elements of $V_\omega$ are finite, and their elements are finite and so on. On the other hand, if you want to ask whether or not $\sf ZF-Infinity$ proves its own consistency, then the answer is no. The reason is that this theory satisfies the requirements of the incompleteness theorem, and therefore cannot prove its own consistency (unless it is inconsistent to begin with).
{ "language": "en", "url": "https://math.stackexchange.com/questions/676720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Under what conditions on $f$ does $\|f\|_r = \|f\|_s$ for $0 < r < s < \infty$. Question: If $f$ is a complex measurable function on $X$, such that $\mu(X) = 1$, and $\|f\|_{\infty} \neq 0$, when can we say that $\|f\|_r = \|f\|_s$, given $0 < r < s \le \infty$? What I know: Via Jensen's inequality, $\|f\|_r \le \|f\|_s$ always holds true. For slightly more generality (for those interested, not because it's helpful here...): if $\mu(X) < \infty$ and $1 < r < s < \infty$, then $\|f\|_r \le \|f\|_s \mu(X)^{\frac{1}{r} - \frac{1}s}$ follows from the Hölder inequality. Also, clearly $f \equiv 1$ is a solution. What I've tried: After not making any progress trying to find conditions for $\|f\|_s \le \|f \|_r$, I have tried to decompose $X$ to gain information. However, it leads to too many variables to be helpful, but maybe someone can improve my attempt, so here it is. Let $A = \{ x : |f(x)| < 1 \}, B = \{x : |f(x)| = 1 \}$, and $C = \{x : |f(x)| > 1\}$. Then, to find the necessary conditions we can set \begin{align*} &\|f\|_s = \left( \int_{A} |f|^s d\mu + \mu(B) + \int_{C} |f|^s d\mu \right)^{1/s}\\ &= \left( \int_{A} |f|^r d\mu + \mu(B) + \int_{C} |f|^r d\mu \right)^{1/r} = \|f\|_r. \end{align*} However, I then proceeded to not make it anywhere that looked helpful later on. Any new insight/suggestions are appreciated! Thanks.
A look at the proof of Jensen's inequality is all you really need; there is no need for (more sophisticated) Hölder's inequality. For simplicity, scale $f$ so that $\|f\|_r=1$. Let $g=|f|^r$. Jensen's inequality says $\int_X g^{p}\ge 1$ (for $p=s/r>1$), which is just the result of integrating the pointwise inequality $$g^{p} \ge 1+p(g-1) \tag{1}$$ over $X$. Inequality (1) expresses the convexity of the function $g\mapsto g^p$: its graph lies above the tangent line at $g=1$. Recall that an integral of a nonnegative function is zero only when the function is zero a.e. Thus, if we have $$\int_X g^p = \int_X (1+p(g-1)) \tag{2}$$ then (1) holds as equality a.e. But (1) holds as equality only when $g=1$; this is the only point at which the tangent line to $g\mapsto g^p$ meets the graph of the function. Therefore, $g=1$ a.e. In terms of $f$, this means $|f|$ is constant a.e., as Daniel Fischer noted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/676778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Newton's Function Confusion "Suppose that $r$ is a double root of $f(x) = 0$, that is $f(x)=f'(x)=0$, $f''(x) \neq 0$, and suppose that $f$ and all derivatives up to and including the second are continuous in some neighborhood of $r$. Show that $\epsilon_{n+1} \approx \frac{1}{2}\epsilon_{n}$ for Newton's method and therby conclude that the rate of convergence is $\textit{linear}$ near a double root. (If the root has multiplicity $m$, then $\epsilon_{n+1} \approx \left [ \frac{(m-1)}{m}\right ]\epsilon_n $)". I'm a good amount of confused on this problem. So I know that $\epsilon_n = -\frac{f(x_n)}{f'(x_n)}$ (our error) and that a function with a double root can be written as $f(x) = (x-r)^2g(x)$ where $r$ is our double root. I just don't really know how to do this / start this. If I calculate $\epsilon_n$, I get $-\frac{(x-r)^2g(x)}{2(x-r)g(x) + (x-r)^2g'(x)}$, but what use is that? I think I need a decent push forward in the right direction. Help? Maybe, the $x$'s in my $\epsilon_n$ calculation are supposed to be $x_n$'s? Since we know that as $x_n \to r$, $(x_n - r) \to 0$. Then we could do something with that? That would just make it $0$ though which doesn't help us.
This solution was shown to me by a friend. I understand now! The solution is as follows: Let us state Newton's method: $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$. Our iterate $x_n$ is close to the double root $r$; define the error $e_n = r - x_n$, which is $\textit{very}$ small. The statement of linear convergence is $|e_{n+1}| \leq C|e_n|$ where $C\in \left[ 0, 1\right )$. We can then write $\underbrace{r - x_{n+1}}_{e_{n+1}} = \underbrace{r - x_n}_{e_n} + \frac{f(x_n)}{f'(x_n)}$ to get $e_{n+1} = e_n + \frac{f(x_n)}{f'(x_n)}$. From the Taylor series around $r$, we can write a function with the double root $r$ as $f(x) = (x-r)^2g(x)$, where $g(x)$ is defined in $$f(x) = (x-r)^2\left [ \underbrace{\frac{f''(r)}{2!} + \frac{f'''(r)(x-r)}{3!} + \dots}_{g(x)} \right ],$$ since $f(r) = 0$ and $f'(r) = 0$; note that $g(r) = \frac{f''(r)}{2} \neq 0$. We then calculate $\frac{f(x_n)}{f'(x_n)} = \frac{(x_n - r)^2g(x_n)}{2(x_n - r)g(x_n) + (x_n -r)^2g'(x_n)} = \frac{x_n - r}{2 + (x_n - r)\frac{g'(x_n)}{g(x_n)}}$. Substituting $x_n - r = -e_n$ gives $\frac{f(x_n)}{f'(x_n)} = \frac{-e_n}{2 - e_n\frac{g'(x_n)}{g(x_n)}}$. Since $x_n \to r$, $\lim_{n \to \infty} \frac{g'(x_n)}{g(x_n)} = \frac{g'(r)}{g(r)} = K$, a constant. Then $e_{n+1} = e_n - \frac{e_n}{2 - e_nK} = e_n\frac{1-e_nK}{2-e_nK}$. As $n$ approaches infinity, $e_n \to 0$, so $\frac{1-e_nK}{2-e_nK} \to \frac{1}{2}$. This leaves us with the conclusion that $e_{n+1} \approx \frac{1}{2}e_n$.
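A quick numerical illustration of this linear rate (my own sketch, not part of the friend's solution): Newton's method on $f(x)=(x-1)^2(x+2)$, which has a double root at $r=1$, shows the error ratio settling near $\frac12$:

```python
def f(x):
    return (x - 1.0)**2 * (x + 2.0)   # double root at r = 1

def fprime(x):
    return 2*(x - 1.0)*(x + 2.0) + (x - 1.0)**2

r = 1.0
x = 2.0                               # initial guess
for n in range(12):
    e = r - x                         # e_n = r - x_n as in the proof
    x = x - f(x) / fprime(x)          # Newton step
    e_next = r - x
    print(f"n={n:2d}  e_n={e: .3e}  e_(n+1)/e_n={e_next/e: .4f}")
```

The printed ratio column converges to $0.5$, confirming $e_{n+1} \approx \frac{1}{2}e_n$.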
{ "language": "en", "url": "https://math.stackexchange.com/questions/676888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Statistics: If $X_1$ and $X_2$ are both normally distributed then explain why $X_1 - X_2$ can be standardized with mean 0 and standard deviation of 1 I am currently studying hypothesis testing for two populations and I would like a math major or someone experienced to explain to me why this particular statistic has a mean of 0 and a standard deviation of 1: $$ z_{\bar{X_1}-\bar{X_2}} = \frac{\bar{X_1}-\bar{X_2} - \left(\mu_1 - \mu_2\right)}{\sigma_{\bar{X_1}-\bar{X_2}}}$$ The course that I'm taking is under the political science department. I see a lot of theoretical questions here and I would like it if someone can explain the fundamentals. How do you know that if $X_1$ and $X_2$ are both normally distributed then $X_1 - X_2$ is also normal? Why is it that when you standardize $X_1 - X_2$ you get the same mean and standard deviation like when $X_1$ or $X_2$ is standardized?
A linear combination of independent normal RVs is normal. The mean of $X=\sum_i a_iX_i$ is $\mu =\sum_i a_i \mu_i$, and the variance is $\sigma^2=\sum_i a_i^2\sigma_i^2$. So you have just standardized a normal RV $X \sim N(\mu,\sigma^2)$, and a standardized normal RV has distribution $N(0,1)$. See this link for proofs: The sum of $n$ independent normal random variables.
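If a simulation helps make this concrete, here is a short NumPy sketch (my own; the population parameters and sample sizes are made up) that standardizes $\bar{X_1}-\bar{X_2}$ and checks the result behaves like $N(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, sigma1, n1 = 5.0, 2.0, 40      # population 1 (assumed values)
mu2, sigma2, n2 = 3.0, 1.5, 50      # population 2 (assumed values)

reps = 100_000
x1bar = rng.normal(mu1, sigma1, (reps, n1)).mean(axis=1)
x2bar = rng.normal(mu2, sigma2, (reps, n2)).mean(axis=1)

# standard deviation of X1bar - X2bar: variances add for independent RVs
se = np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z = (x1bar - x2bar - (mu1 - mu2)) / se

print(z.mean(), z.std())   # close to 0 and 1, as the theory predicts
```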
{ "language": "en", "url": "https://math.stackexchange.com/questions/676988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is $f(a)\!=\!0\!=\!f(b)\Rightarrow (x\!-\!a)(x\!-\!b)\mid f(x)\,$ true if $\,a=b?\ $ [Double Factor Theorem] I encountered this proving problem, I can do the proof but my question is why in the condition/premise we need $a$ to be unequal to $b$? My guess is that even $a=b$, the statement is still true, is it correct? If yes, how can we prove it? If false, are there any counter-examples? The problem: Let $a$ and $b$ be two unequal constants. If $f(x)$ is a polynomial with integer coefficients, and $f(x)$ is divisible by $x-a$ and $x-b$, then $f(x)$ is divisible by $(x-a)(x-b)$. My proof (just for reference): Let $q(x)$ be the quotient and $px+r$ be the remainder (where p, r are constants that we have to find) when $f(x)$ is divided by $(x-a)(x-b)$. So we have $f(x)=(x-a)(x-b)q(x)+px+r$. We then substitute $a$ and $b$ into $f(x)$ and use factor theorem, we get $pa+r$ and $pb+r$. We solve the simultaneous equations we get $p(a-b)=0$, since $a\neq b$, $p=0$ and $r=0$. So $f(x)$ is divisible by $(x-a)(x-b)$. Helps are greatly appreciated. Thanks!
There are obvious counterexamples, e.g. $\,f = x-a.\,$ Less trivially see the remark below. Here is the theorem you seek. Bifactor Theorem $\ $ Let $\rm\,a,b\in R,\,$ a ring, and $\rm\:f\in R[x].\:$ If $\rm\ \color{#C00}{a\!-\!b}\ $ is cancellable in $\rm\,R\,$ then $$\rm f(a) = 0 = f(b)\ \iff\ f\, =\, (x\!-\!a)(x\!-\!b)\ h\ \ for\ \ some\ \ h\in R[x]$$ Proof $\,\ (\Leftarrow)\,$ clear. $\ (\Rightarrow)\ $ Applying the Factor Theorem twice, while canceling $\rm\: \color{#C00}{a\!-\!b},$ $$\begin{eqnarray}\rm\:f(b)= 0 &\ \Rightarrow\ &\rm f(x)\, =\, (x\!-\!b)\,g(x)\ \ for\ \ some\ \ g\in R[x]\\ \rm f(a) = (\color{#C00}{a\!-\!b})\,g(a) = 0 &\Rightarrow&\rm g(a)\, =\, 0\,\ \Rightarrow\,\ g(x) \,=\, (x\!-\!a)\,h(x)\ \ for\ \ some\ \ h\in R[x]\\ &\Rightarrow&\rm f(x)\, =\, (x\!-\!b)\,g(x) \,=\, (x\!-\!b)(x\!-\!a)\,h(x)\end{eqnarray}$$ Remark $\ $ The theorem may fail when $\rm\ a\!-\!b\ $ is not cancellable (i.e. is a zero-divisor), e.g. $\rm\quad mod\ 8\!:\,\ f(x)=x^2\!-1\,\Rightarrow\,f(3)\equiv 0\equiv f(1)\ \ but\ \ x^2\!-1\not\equiv (x\!-\!3)(x\!-\!1)\equiv x^2\!-4x+3$ See here for further discussion of the case of polynomials over a field or domain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/677066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Values of square roots Good-morning Math Exchange (and good evening to some!) I have a very basic question that is confusing me. At school I was told that $\sqrt {a^2} = \pm a$ However, does this mean that $\sqrt {a^2} = +2$ *and*$-2$ or does it mean: $\sqrt {a^2} = +2$ *OR*$-2$ Is it wrong to say 'and'? What are the implications of choosing 'and'/'or' Any help would be greatly appreciated. Thanks in advance and enjoy the rest of your day :)
Even though quite a bit has been said already, I wanted to add something. The numbers which you normally use in school ($-1$, $\frac{2}{3}$, $\pi$, etcetera) are called the real numbers. The set of real numbers is denoted by $\mathbb{R}$. Now the square root of any number $b$ is normally considered to be any number $x$ that satisfies $x^2 = b$, or equivalently $x^2 - b = 0$. As you pointed out, there are normally two solutions to this, so two values for $x$ will do the trick. However, working in $\mathbb{R}$ this situation is remedied by adopting the convention that the square root of $b$ will be the non-negative number $x$ that satisfies $x^2 - b=0$. So indeed, when $b= a^2$, we get $$ \sqrt{a^2} = |a|. $$ So with this convention, the solutions to $x^2 - b=0$ become $x=\sqrt{b}$ and $x = -\sqrt{b}$. It is very important to note that this is merely a convention. Even more: there are other sets of numbers we could work in, where this trick will not work! If we pass from the real numbers $\mathbb{R}$ to the so-called complex numbers, denoted $\mathbb{C}$ (check Wikipedia), we lose this! In this set of numbers, the notion of a positive number does not make sense, and it is in fact impossible to define a square root function in a nice way on the whole of $\mathbb{C}$ (if you want to know more about this, ask Google). In general there are many more things that I call "sets of numbers" here; in mathematics they are called "fields". In all of them, the square-root notion makes sense, as in solving the equation $x^2 - b=0$. However, the nice $\sqrt{{}}$ function as we have it in $\mathbb{R}$ is rarely found in other fields. Hope this context was interesting to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/677123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to calculate $\theta$ when we know $\tan \theta$. Hej I'm having difficulties calculating the angle given the tangent. Example: In a homework assignement I'm to express a complex variable $z = \sqrt{3} -i$ in polar form. I know how to solve this except for when I get to calculating the angle $\theta$. I know that $\tan \theta = -\frac{1}{\sqrt{3}}$ but I do not know how to continue and compute the angle from that.
You shouldn't use the tangent for this kind of problems; compute $$ |z|=\sqrt{z\bar{z}}=\sqrt{(\sqrt{3}-i)(\sqrt{3}+i)}= \sqrt{3+1}=2 $$ Then you have $z=|z|u$, where $$ u=\frac{\sqrt{3}}{2}-i\frac{1}{2} $$ and you need an angle $\theta$ such that $$ \cos\theta=\frac{\sqrt{3}}{2},\quad\sin\theta=-\frac{1}{2}. $$ Since the sine is negative and the cosine is positive, you see that you can take $$ \theta=-\frac{\pi}{6} $$ (the pair of values is well known). If you need an angle in the interval $[0,2\pi)$, just take $$ -\frac{\pi}{6}+2\pi=\frac{11\pi}{6}. $$
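For what it's worth, this is exactly what two-argument arctangent routines do in code: they pick the quadrant from the signs of the two components rather than from the tangent alone. A small Python sketch (my own) for $z=\sqrt3-i$:

```python
import cmath
import math

z = complex(math.sqrt(3), -1)

r, theta = cmath.polar(z)           # theta = atan2(Im z, Re z)
print(r)                            # ~ 2.0
print(theta, -math.pi / 6)          # both ~ -0.5235987755982988
print(math.atan2(z.imag, z.real))   # same angle, correct quadrant
```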
{ "language": "en", "url": "https://math.stackexchange.com/questions/677214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the contour integral. Let $f(z)=π \exp(π\cdot\overline{z})$. Let $C$ be the square whose vertices are $0,1,(1+i)$, and $i$. How can I evaluate the contour integral of $f(z)$ over $C$?
We have $$ \int_C\pi e^{\pi\bar{z}}\,\mathrm{d}\bar{z}=0 $$ On $[0,1]$ and $[1+i,i]$, $\mathrm{d}\bar{z}=\mathrm{d}z$, and on $[1,1+i]$ and $[i,0]$, $\mathrm{d}\bar{z}=-\mathrm{d}z$. Thus, $$ \begin{align} \int_C\pi e^{\pi\bar{z}}\,\mathrm{d}z &=\int_C\pi e^{\pi\bar{z}}\,\mathrm{d}\bar{z}+2\int_{[1,1+i]\cup[i,0]}\pi e^{\pi\bar{z}}\,\mathrm{d}z\\ &=2\pi i\int_0^1e^{\pi(1-it)}\,\mathrm{d}t-2\pi i\int_0^1e^{-\pi it}\,\mathrm{d}t\\ &=2\pi i(e^\pi-1)\int_0^1e^{-\pi it}\,\mathrm{d}t\\[5pt] &=-2(e^\pi-1)(e^{-\pi i}-1)\\[12pt] &=4(e^\pi-1) \end{align} $$
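As a numerical sanity check on the value $4(e^\pi - 1) \approx 88.56$, one can integrate along the four sides of the square directly (a NumPy sketch of my own):

```python
import numpy as np

def f(z):
    return np.pi * np.exp(np.pi * np.conj(z))

t = np.linspace(0.0, 1.0, 20_001)
# the four sides of the square: 0 -> 1 -> 1+i -> i -> 0
edges = [(0, 1), (1, 1 + 1j), (1 + 1j, 1j), (1j, 0)]

total = 0.0 + 0.0j
for a, b in edges:
    z = a + (b - a) * t                     # straight-line parametrization
    total += np.trapz(f(z) * (b - a), t)    # dz = (b - a) dt

print(total)                     # ~ 88.56 + 0j
print(4 * (np.exp(np.pi) - 1))   # 4(e^pi - 1) = 88.562...
```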
{ "language": "en", "url": "https://math.stackexchange.com/questions/677302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
nonlinear diophantine equation $x^2+y^2=z^2$ how to solve a diophantine equation $x^2+y^2=z^2$ for integers $x,y,z$ I strongly believe there is a geometric solution, since this is a Pythagoras theorem form or a circle with radius $z$ $x^2+y^2=z^2$ $(\frac{x}{z})^2+(\frac{y}{z})^2=1\implies x=y=\pm z$ or $0$ so we consider a line passing through points $P_1(- z,0)$ and $P(x,y)$ both on the circle $m=\frac{y}{x+z}$ $x^2+m^2(x+z)^2=z^2$ $(m^2+1)x^2+2xzm^2+(m^2-1)z^2=0$ $((m^2+1)x+(m^2-1)z)(x+z)=0$ $\frac{x}{z}=-\frac{m^2-1}{m^2+1}$ or $-1$ let $m=\frac{a}{b}\implies \frac{x}{z}=\frac{b^2-a^2}{b^2+a^2}$ $\frac{y}{z}=\frac{2ab}{b^2+a^2}$ how to get explicit $z,x,y$
Euclid's Formula says that in essence, $(m^2 - n^2)^2 + (2mn)^2 = (m^2 + n^2)^2$ for all positive integers $m > n$. This is basically a parametrization of Pythagorean Triplets with two parameters.
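Here is a tiny sketch enumerating triples from this parametrization (the names, and the primitivity test via $\gcd(m,n)=1$ with $m-n$ odd, are my additions):

```python
from math import gcd

def euclid_triples(limit):
    """Yield Pythagorean triples (a, b, c) from Euclid's formula."""
    for m in range(2, limit):
        for n in range(1, m):
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            primitive = gcd(m, n) == 1 and (m - n) % 2 == 1
            yield a, b, c, primitive

for a, b, c, primitive in euclid_triples(5):
    assert a*a + b*b == c*c
    print(a, b, c, "primitive" if primitive else "")
```

Scaling the primitive triples by arbitrary positive integers then recovers all Pythagorean triples.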
{ "language": "en", "url": "https://math.stackexchange.com/questions/677384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can we say $TT^{*}=T^{2}$ implies $T=T^{*}$? Let $A$ be a $C^{*}$-algebra, Can we say $TT^{*}=T^{2}$ implies $T^{*}=T$? for $T\in A$ I am looking for a counterexample! Thanks
Via a faithful state, we can think of $A$ as represented in some $B(H)$. We have $$ H=\ker T\oplus \overline{\text{ran} T^*}. $$ For $x\in\ker T$, we have $Tx=0$ and then $$ \|T^*x\|^2=\langle T^*x,T^*x\rangle=\langle TT^*x,x\rangle=\langle T^2x,x\rangle=0; $$ so $T^*x=0$ and $T=T^*$ on $\ker T$. Taking adjoints on $TT^*=T^2$ we have $TT^*=T^*T^*$, that is $(T-T^*)T^*=0$. This shows that $T-T^*=0$ on $\text{ran}\,T^*$, and by continuity on its closure. That is, $T=T^*$ on $\overline{\text{ran}\,T^*}$. So $T=T^*$ on all of $H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/677425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Logic puzzle: Which octopus is telling the truth? King Octopus has servants with six, seven, or eight legs. The servants with seven legs always lie, but the servants with either six or eight legs always tell the truth. One day, four servants met. The blue one says, “Altogether, we have 28 legs.” The green one says, “Altogether, we have 27 legs.” The yellow one says, “Altogether, we have 26 legs.” The red one says, “Altogether, we have 25 legs.” What is the colour of the servant who tells the truth?
This is more fundamental than it seems. The way the octopod nervous system works is different from ours. They have no body image, and their legs send no signals back to their brain. So an octopus can only know how many legs other octopuses have, and can only be sure about what it sees on the others. This means that the only octopus who can have an accurate picture is the one with the most information - the red octopus. As has been pointed out, the only option with exactly one true answer is $3 \times 7 + 1 \times 6 = 27$ legs. The first three octopuses don't know how many legs they, in fact, have, but the fourth octopus knows that the green one has six legs, and sees that the other two have seven, so he can deduce that he himself has seven legs, so he knows he has to lie. The only octopus that knows the truth is the red one, so the only octopus that actually lies is the red one. The other three do their best, but they cannot know the truth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/677495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106", "answer_count": 12, "answer_id": 5 }
Can any piecewise function be represented as a traditional equation? In "Fundamentals of Electrical Engineering" we learned about piecewise functions for the "unit-step" and "ramp" which are represented by $f(x)= \begin{cases}0, & \text{if }x< 0 \\ 1, & \text{if }x>0\end{cases}$ and $f(x)= \begin{cases}0, & \text{if }x< 0 \\ x, & \text{if }x\ge 0\end{cases}$ respectively. I was bored in calculus class and determined these functions could be represented in traditional algebra by $f(x)= \frac{|x|}{x} \cdot \frac12 + \frac12$ and $f(x)= \frac{x + |x|}2$ So this got me thinking. Can any piecewise function be represented as a traditional equation? Just as an example, how about this one: $$f(x)= \begin{cases} 5, & \text{if }x=0 \\ x^2, & \text{if }x<0 \\ \sqrt{x} & \text{if }x>0 \end{cases}$$ edit: For the sake of the question, substitute $\sqrt{x^2}$ for $|x|$
The crucial step is to come up with an acceptable way to describe indicator functions, i.e. for certain subsets $S\subseteq \mathbb R$ to replace the piecewise definition $$1_S(x)=\begin{cases}1&\text{if }x\in S\\0&\text{if }x\notin S\end{cases} $$ with something not involving piecewise, but only "traditional" definitions. Provided that taking limits is allowed as "traditional", we should accept the functions $$\begin{align}\max\{x,y\} &=\frac{x+y}{2}+\left|\frac{x-y}{2}\right|\\ \min\{x,y\} &=x+y-\max\{x,y\}\\ 1_{[0,\infty)}(x)&=\lim_{n\to\infty}\min\{e^{nx},1\} \\ 1_{(-\infty,0]}(x)&=1_{[0,\infty)}(-x)\\ 1_{[a,b)}(x) &=1_{[0,\infty)}(x-a)-1_{[0,\infty)}(x-b)\\ 1_{[a,b]}(x) &=1_{[0,\infty)}(x-a)\cdot1_{[0,\infty)}(b-x)\\ 1_{\{a\}}(x) &=1_{[0,\infty)}(x-a)\cdot 1_{[0,\infty)}(a-x)\end{align}$$ and similar combinations for arbitrary intervals $\subseteq \mathbb R$. With these you get for example $$\begin{align}f(x)&= \begin{cases} 5, & \text{if }x=0 \\ x^2, & \text{if }x<0 \\ \sqrt{x} & \text{if }x>0 \end{cases}\\& = 1_{\{0\}}(x)\cdot 5+1_{(-\infty,0)}(x)\cdot x^2+1_{(0,\infty)}(x)\cdot\sqrt x.\end{align} $$ Or, just to make sure, you may want to replace $\sqrt {x}$ with $\sqrt{1_{[0,\infty)}(x)\cdot x}$ (otherwise you'd need a convention that $0$ times undefined is $0$).
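As an aside, the limit defining $1_{[0,\infty)}$ is easy to play with numerically; a small sketch of my own, using the exact identity $\min\{e^{nx},1\} = e^{\min\{nx,\,0\}}$ to avoid overflow:

```python
import numpy as np

def step(x, n=2000):
    """min(e^{n x}, 1) == e^{min(n x, 0)} -> 1_{[0, inf)}(x) as n grows."""
    return np.exp(np.minimum(n * np.asarray(x, dtype=float), 0.0))

xs = np.array([-1.0, -0.01, 0.0, 0.01, 1.0])
print(step(xs))   # -> [0., ~0., 1., 1., 1.], approximating the indicator
```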
{ "language": "en", "url": "https://math.stackexchange.com/questions/677618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How many different elements can we obtain by multiplying all element in a group? Let $G$ be a finite group. How many different elements can we obtain by multiplying all element in a group? Of course, if $G$ is abelian the answer is one but when G is non-abelian, changing the order of the multiplication may produce new elements. My second question is actually related to my attempt to solve the first one. Let $S$ be set of all elements produced by multiplying all elements in $G$. Then, it is easy to show that $Aut(G)$ acts on $S$ naturally. I wonder whether this can be transitive.
The answer to your question is even more subtle. The set of all the possible products is always a coset of the commutator subgroup. Theorem Let $G$ be a finite group of order $n$, say $G=\{g_1, \dots, g_n\}$ and let $P(G)=\{g_{\sigma(1)}\cdot g_{\sigma(2)} \dots g_{\sigma({n-1})} \cdot g_{\sigma(n)}: \sigma \in S_n\}$. (a) If $|G|$ is odd, then $P(G)=G'$ (b) If $|G|$ is even, let $S \in Syl_2(G)$. Then $P(G)=G'$ in case $S$ is non-cyclic. If $S$ is cyclic, then $P(G)=xG'$, where $x$ is the unique element of order $2$ of $S$. This highly non-trivial and beautiful result relates to combinatorics, namely the construction of Latin squares. It heavily relies on the proof of the so-called Hall-Paige conjecture (Hall, Marshall; Paige, L. J. Complete mappings of finite groups. Pacific Journal of Mathematics 5 (1955), no. 4, 541--549.). This could be proved thanks to the classification of the finite simple groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/677688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
"reversing" non-linear equation system I'm not a mathematician and I'm facing a problem with those equations that I found in a book of history of colorscience. The equations were created by MacAdam to transform the classical colorimetric diagram of the CIE into something better. The CIE diagram plots chromaticity with 2 coordinates x,y MacAdams plots the transformed chromaticities D,M While it's easy to calculate D,M with given x,y (see codes below), I find it impossible so far to do the opposite, i.e., to find the reverse formulae that will compute x,y with given D,M Do you have any idea of how this could be done? Thanks The transformation from $x,y$ to $D,M$ is as follows: $$a = \frac{10x}{2.4x+34y+1}\\ b = \frac{10y}{2.4x+34y+1}\\ D = 3751a^2-10a^4-520b^2+13295b^3+32327ab-25491a^2b-41672ab^2+10a^3b-5227\sqrt a+2952a^\frac14\\ c = \frac{10x}{42y-x+1}\\ d = \frac{10y}{42y-x+1}\\ M = 404d - 185d^2+52d^3+69c(1-d^2)-3c^2d+30cd^3$$
Hello and welcome to the site! The question you are asking has no simple solution. Basically, you have some mapping of pairs of real numbers into some other pair of real numbers, $(D,M)=F(x,y)$, and are asking to find an inverse of $F$. There are many problems with this, the main two being:

* In general, $F^{-1}$ may not globally exist.
* Usually, if $F$ is ugly enough, there is no closed-form expression for $F^{-1}$.

That said, all is not lost! There are ways of solving the nonlinear system. They will not return an exact result but only an approximation, and will not be as quick as simply evaluating one expression. As I do not know what programming language you will be using, I suggest you look into any package or library you can find for solving nonlinear systems of equations. For example, MATLAB has a range of solutions for your problem, as do many other programs.
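To make that concrete, here is one possible sketch using SciPy's `fsolve` (my own; `forward` transcribes the $D,M$ formulas from the question, the function names and the starting guess are mine, and a reasonable guess for $(x,y)$ matters):

```python
import numpy as np
from scipy.optimize import fsolve

def forward(xy):
    """MacAdam (x, y) -> (D, M), transcribed from the question."""
    x, y = xy
    k = 2.4*x + 34.0*y + 1.0
    a, b = 10.0*x / k, 10.0*y / k
    D = (3751*a**2 - 10*a**4 - 520*b**2 + 13295*b**3 + 32327*a*b
         - 25491*a**2*b - 41672*a*b**2 + 10*a**3*b
         - 5227*np.sqrt(a) + 2952*a**0.25)
    k2 = 42.0*y - x + 1.0
    c, d = 10.0*x / k2, 10.0*y / k2
    M = (404*d - 185*d**2 + 52*d**3 + 69*c*(1 - d**2)
         - 3*c**2*d + 30*c*d**3)
    return np.array([D, M])

def inverse(D, M, guess=(0.3, 0.3)):
    """Solve forward(x, y) = (D, M) numerically; the guess matters."""
    return fsolve(lambda xy: forward(xy) - np.array([D, M]), guess)

target = forward(np.array([0.4, 0.35]))   # pretend (D, M) is given
print(inverse(*target))                    # recovers ~ (0.4, 0.35)
```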
{ "language": "en", "url": "https://math.stackexchange.com/questions/677846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to prove this matrix bound Let an $m$ by $n$ matrix $A\in\mathbb C^{m\times n}$. Denote $M=\max_i\sum_{j=1}^n|A_{ij}|$ and $N=\max_j\sum_{i=1}^m|A_{ij}|$. Prove for any two vectors $x\in\mathbb C^m$ and $y\in\mathbb C^n$, we have $$\left\vert x^TAy\right\vert\leq\sqrt{MN}|x||y|$$ Here's what I think: $$|x^TAy|=\left|\sum_{i,j}A_{ij}x_iy_j\right|\leq\sqrt{\sum_j\left(\sum_iA_{ij}x_i\right)^2\cdot\sum_jy_j^2}\leq\left|y\right|\sqrt{\sum_j\left(\sum_iA_{ij}^2\cdot\sum_ix_i^2\right)}=|x||y|\sqrt{\sum_{ij}A_{ij}^2}$$ But how do I prove $$\sum_{ij}A_{ij}^2\leq MN$$ Edit: My idea was proved to be wrong. But how do I prove the original inequality?
Let $\left\Vert X\right\Vert$ denote the spectral norm of $X$. Also, let $\left\Vert \cdot\right\Vert_{p\to p}$ denote the matrix norm induced by the $\ell_p$-norm: $$\left\Vert X\right\Vert_{p\to p}=\max_{v\neq 0} \frac{\left\Vert Xv\right\Vert_p}{\left\Vert v\right\Vert_p}.$$ Let $v$ be the principal eigenvector of $A^*A$, i.e., a unit $\ell_2$-norm vector for which we have $A^*Av=\sigma^2 v$ where $\sigma^2=\left\Vert A^*A\right\Vert=\left\Vert A\right\Vert^2$. It follows that $$\frac{\left\Vert A^*Av\right\Vert_1}{\left\Vert v\right\Vert_1}=\frac{\left\Vert \left\Vert A\right\Vert^2v\right\Vert_1}{\left\Vert v\right\Vert_1}=\left\Vert A\right\Vert^2.$$ Therefore, $$\left\Vert A\right\Vert^2\leq\left\Vert A^*A\right\Vert_{1\to 1}=\max_{u,v\neq 0}\frac{\left\vert u^*A^*Av\right\vert}{\left\Vert u\right\Vert_\infty\left\Vert v\right\Vert_1}\\ \leq \max_{u,v\neq 0}\frac{\left\Vert Au\right\Vert_\infty\left\Vert Av\right\Vert_1}{\left\Vert u\right\Vert_\infty\left\Vert v\right\Vert_1}=\left\Vert A\right\Vert_{\infty\to\infty}\cdot \left\Vert A\right\Vert_{1\to 1}.$$ It is straightforward to show that $M=\left\Vert A\right\Vert_{\infty\to \infty}$ and $N=\left\Vert A\right\Vert_{1\to 1}$. The result then follows using the fact that $\left\vert x^*Ay\right\vert\leq\left\Vert A\right\Vert\left\Vert x\right\Vert\left\Vert y\right\Vert$.
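A quick numerical spot check of this chain of inequalities (my own NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 7)) + 1j * rng.standard_normal((5, 7))

M = np.abs(A).sum(axis=1).max()   # max row sum    = ||A||_{inf -> inf}
N = np.abs(A).sum(axis=0).max()   # max column sum = ||A||_{1 -> 1}
spec = np.linalg.norm(A, 2)       # spectral norm ||A||

print(spec**2, "<=", M * N)       # the bound ||A||^2 <= M N

# hence |x^* A y| <= ||A|| |x| |y| <= sqrt(MN) |x| |y| for any x, y:
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(7) + 1j * rng.standard_normal(7)
lhs = abs(np.conj(x) @ A @ y)
rhs = np.sqrt(M * N) * np.linalg.norm(x) * np.linalg.norm(y)
print(lhs, "<=", rhs)
```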
{ "language": "en", "url": "https://math.stackexchange.com/questions/677929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Inequality $\left|\,x_1\,\right|+\left|\,x_2\,\right|+\cdots+\left|\,x_p\,\right|\leq\sqrt{p}\sqrt{x^2_1+x^2_2+\cdots+x^2_p}$ I could use some help with proving this inequality: $$\left|\,x_1\,\right|+\left|\,x_2\,\right|+\cdots+\left|\,x_p\,\right|\leq\sqrt{p}\sqrt{x^2_1+x^2_2+\cdots+x^2_p}$$ for all natural numbers p. Aside from demonstrating the truth of the statement itself, apparently $\sqrt{p}$ is the smallest possible value by which the right hand side square root expression must be multiplied by in order for the statement to be true. I've tried various ways of doing this, and I've tried to steer clear of induction because I'm not sure that's what the exercise was designed for (from Bartle's Elements of Real Analysis), but the best I've been able to come up with is proving that the statement is true when the right hand side square root expression is multiplied by p, which seems pretty obvious anyway. I feel like I'm staring directly at the answer and still can't see it. Any help would be appreciated.
This is an application of Jensen's Inequality: $$ \left(\frac1p\sum_{k=1}^p|x_k|\right)^2\le\frac1p\sum_{k=1}^p|x_k|^2 $$ since $f(x)=x^2$ is convex. Multiplying both sides by $p^2$ and taking square roots gives exactly $\sum_{k=1}^p|x_k|\le\sqrt{p}\sqrt{\sum_{k=1}^p x_k^2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Series representation of $\sin(nu)$ when $n$ is an odd integer? So, out of boredom and curiosity, today I came up with a series representation for $\sin(nu)$ when $n$ is an even integer: $$\sin(nu) = \sum_{k=1}^\frac n2 \left(\left(-1\right)^{k-1}\binom{n}{-\left|2k-n\right|+n-1}\sin\left(u\right)^{2k-1}\cos\left(u\right)^{n-2k+1}\right)\;\mathtt {if}\;n\in 2\Bbb Z$$ I was working on a similar representation for when $n$ is an odd integer, but I'm having some difficulties. There doesn't seem to be much of a pattern. If it exists, could someone please point me in its direction? If it's impossible, could you provide me with the proof?
Using complex methods and the binomial theorem, $$\eqalign{\sin(nu) &={\rm Im}(\cos u+i\sin u)^n\cr &={\rm Im}\sum_{m=0}^n \binom nm (\cos u)^{n-m}(i\sin u)^m\ .\cr}$$ As only the terms for odd $m$ contribute to the imaginary part we can take $m=2k-1$ to give $$\sin(nu)=\sum_{k=1}^{(n+1)/2}(-1)^{k-1}\binom n{2k-1}\cos^{n-2k+1}u\sin^{2k-1}u\ .$$
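A quick numerical check of the final expansion for odd $n$ (my own sketch; `math.comb` supplies the binomial coefficients):

```python
import math

def sin_nu(n, u):
    """Expansion of sin(n u) for odd n from the binomial argument above."""
    total = 0.0
    for k in range(1, (n + 1)//2 + 1):
        total += ((-1)**(k - 1) * math.comb(n, 2*k - 1)
                  * math.cos(u)**(n - 2*k + 1) * math.sin(u)**(2*k - 1))
    return total

for n in (3, 5, 7):
    u = 0.8
    print(n, sin_nu(n, u), math.sin(n * u))   # the two columns agree
```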
{ "language": "en", "url": "https://math.stackexchange.com/questions/678112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Direct proof. Square root function uniformly continuous on $[0, \infty)$ (S.A. pp 119 4.4.8) (http://math.stanford.edu/~ksound/Math171S10/Hw8Sol_171.pdf) Prove for all $e > 0,$ there exists $d > 0$ : for all $x, y \ge 0$, $|x - y| < d \implies |\sqrt{x} - \sqrt{y}| < e$. (a) Given $\epsilon>0$, pick $\delta=\epsilon^{2}$. First note that $|\sqrt{x}-\sqrt{y}|\leq|\sqrt{x}+\sqrt{y}|$. Hence if $|x-y|<\delta=\epsilon^{2}$, then $ |\sqrt{x}-\sqrt{y}|^{2}\leq|\sqrt{x}-\sqrt{y}||\sqrt{x}+\sqrt{y}|=|x-y|<\epsilon^{2}. $ Hence $|\sqrt{x}-\sqrt{y}|<\epsilon$. 1. Where does $|\sqrt{x}-\sqrt{y}|\leq|\sqrt{x}+\sqrt{y}|$ issue from? How to presage this presciently? Yes...can prove it. Square both sides. By dint of $|a|^2 = (a)^2$: $|\sqrt{x}-\sqrt{y}|^2 \leq|\sqrt{x}+\sqrt{y}|^2 \iff (\sqrt{x}-\sqrt{y})^2 \leq(\sqrt{x}+\sqrt{y})^2 \iff - 2\sqrt{xy} \le 2\sqrt{xy} \\ \iff 0 \le 4\sqrt{xy}. █ $. 2. Figue or Intuition please for $|\sqrt{x}-\sqrt{y}|\leq|\sqrt{x}+\sqrt{y}|$? Feels fey. I know $|x + y| \le |x| + |y| \iff |x - y| \le |x| + |y|$. 3. For scratch work, I started with $|\sqrt{x} - \sqrt{y}| < e$. But answer shows you need to start with $|\sqrt{x} - \sqrt{y}|^2$. How to presage this vaticly? What's the scratch work for finding $d = e^2$? 4. How does $d = e^2$ prove uniform continuity? I know this proves uniform continuity. Solution needs to prove $d = e^2$ doesn't depend on x or y?
2. You know that $\sqrt{x}$ and $\sqrt{y}$ are non-negative. The sum of two non-negative numbers is always at least as great as their difference. Alternatively, $|x| + |y| = |x + y|$ for non-negative $x$ and $y$. Thus $|\sqrt{x} - \sqrt{y}| \le |\sqrt{x}| + |\sqrt{y}| = |\sqrt{x} + \sqrt{y}|$. 3. Practise! The solution is not obvious. 4. $\delta$ doesn't depend on $x$ or $y$. It is a function solely of $\epsilon$. Thus, the same $\delta$ works for any $x$ and $y$, which is what is required for uniform continuity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
A conjecture about Pythagorean triples I noticed for the integer solutions of $a^2 + b^2 = c^2$, there don't seem to be cases where both a and b are odd numbers. In fact, I saw this property pop up on a nice question, which required you to prove it. So I have tried proving it, but I have failed so far. A reductio ad absurdum method seems to me the most likely way of proving this. So far, I have shown that if both a and b were odd, then $a^2 + b^2 = 4\cdot y^2$ (where y is some integer). This is because an odd squared is still odd and 2 odds make an even, and if $c^2$ is even, then it must have a factor of 4. However this isn't too great and my brain seems to be rather uncreative today. Any tips on how I should proceed?
The square of an odd number is always a multiple of four plus one: $(2n+1)^2=4(n^2+n)+1$. So the sum of two odd squares cannot be a multiple of four.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Extremum of functional of a complex function consider functional $E$ defined by $$E[z]=\int F(x,z(x))dx$$ where $F$ is a complex-valued nonlinear function. How can we find the function $z(x)$ so that $$G=|E|^2=EE^*=\iint F(x_1,z(x_1))F^*(x_2,z(x_2))dx_1dx_2$$ takes its maximum?
Find $z$ maximizing $$G(z):=\iint_\Omega F\big(x_1,z(x_1)\big) F^*\big(x_2,z(x_2)\big) \,dx_1 \,dx_2$$ We are given $F$; I assume it is twice differentiable, but there are methods (mentioned below) in nonlinear optimization that work for functions which are not. I assume there are no restrictions on $z$; if it must be continuous you will select a different numerical approximation. I assume $\Omega$ to be the box $[a,b]\times[a,b]\subset\mathbb{R}^2$. Let $z$ be a map from $\mathbb{R}\to\mathbb{C}$. Pick a discretization for $x$ of size $n$, e.g., divide $[a,b]$ into $n$ segments of equal length. Their length will be $\ell=\frac{b-a}{n}$ so that their center points are $x_{k}=a+(k-\frac{1}{2})\ell$. We approximate the integral by the midpoint rule; there are vastly better methods but this is suitable for a first attempt. This gives us the approximation $$G_n(z)=\ell^2\sum_{j=1}^n\sum_{k=1}^n F\big(x_j,z(x_j)\big)F^*\big(x_k,z(x_k)\big)$$ We now pick a discretized approximation to $z$ by defining (where $i=\sqrt{-1}$): $$z_k=a_k+i b_k=z(x_k)$$ and assuming $z$ is constant on the whole interval of length $\ell$ surrounding $x_k$. Now $z$ is just a sequence of complex values and we have $$G_n(z)=\ell^2\sum_{j=1}^n\sum_{k=1}^n F\big(x_j,a_j+ib_j\big)F^*\big(x_k,a_k+ib_k\big).$$ Our goal is to pick the values of $a_k$ and $b_k$ for each $k\in\{1,2,\dotsc,n\}$ maximizing $G_n$. There are a wide variety of nonlinear techniques to solve this unconstrained problem; Newton's method ought to work well if you have two derivatives and a reasonable initial guess. Your goal is to find the stationary points where the derivative is zero, and check they are a maximum by looking at the second derivative. If you don't have derivatives then you can use something like the compass method. And for what it's worth, in my opinion the hardest part will be correctly writing the derivatives of the function.
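As a concrete illustration of this recipe (my own sketch; the function `F` below is a toy stand-in, not from the question), note that the double sum factors as $G_n=\ell^2\big|\sum_j F(x_j,z_j)\big|^2$, so a general-purpose optimizer can maximize it over the $a_k, b_k$ directly:

```python
import numpy as np
from scipy.optimize import minimize

a, b, n = 0.0, 1.0, 50                    # domain and grid size (assumptions)
ell = (b - a) / n
xs = a + (np.arange(n) + 0.5) * ell       # midpoint-rule nodes x_k

def F(x, z):
    """A toy nonlinear F for illustration only; substitute your own."""
    return np.exp(1j * x) * z / (1.0 + np.abs(z)**2)

def neg_G(params):
    zk = params[:n] + 1j * params[n:]     # z_k = a_k + i b_k
    s = F(xs, zk).sum()                   # double sum collapses to |sum|^2
    return -(ell**2) * np.abs(s)**2       # minimize -G to maximize G

# a local maximizer; a multistart strategy may be needed in general
res = minimize(neg_G, x0=np.ones(2 * n), method="BFGS")
z_opt = res.x[:n] + 1j * res.x[n:]
print("max G_n ~", -res.fun)
```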
{ "language": "en", "url": "https://math.stackexchange.com/questions/678473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing that $\sinh(\mathrm{e}^z)$ is entire I am attempting to show that $\sinh(\mathrm{e}^z)$, where $z$ is a complex number, is entire. The instructions of the problem tell me to write the real component of this function as a function of $x$ and $y$, which I used algebra to do; this function is $u(x, y)=\cos(\mathrm{e}^x \sin(y))\cosh(\mathrm{e}^x \cos(y))$. The instructions then say to "state why this function must be harmonic everywhere", i.e., that the sum of the second derivatives $u_{xx}, u_{yy}$ is zero. This is where I'm stuck: Even the first derivative looks like a nightmare to compute, and the language of the problem's instructions seems to suggest that I shouldn't need to, that I should simply be able to glance at the function and know that it's harmonic, and state why in a simple sentence. Why should this function be clearly harmonic?
You have the real and imaginary parts $u(x,y)$ and $v(x,y)$, so you can check whether the Cauchy-Riemann equations are satisfied: $$u_{x} = v_{y}, \ \ \ u_{y} = -v_{x}.$$ If they hold everywhere (with continuous partial derivatives), the function is entire, and then $u$ is automatically harmonic: differentiating the two equations gives $u_{xx} = v_{yx}$ and $u_{yy} = -v_{xy}$, so $u_{xx} + u_{yy} = 0$ with no need to compute the derivatives explicitly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
If $\sum a_n$ converges, so does $\sum a_n^{\frac{n}{n+1}}$ Let $a_n$ be a positive real sequence such that the series $\sum a_n$ converges. I was asked to prove that under such circumstances $\sum a_n^{\frac{n}{n+1}}$ converges. The previous sum can be rewritten as $\sum \frac{a_n}{a_n^{1/(n+1)}}$. I can't prove the convergence of this last series... I'm sure comparison test is enough here, but other than $a_n \rightarrow 0$ there is no information that helps. Thanks for your help.
Let $A=\sum_{n=1}^\infty a_n$, and set $$S=\big\{n: a_n^{n/(n+1)}\le 2a_n\big\},$$ and $$T=\big\{n: a_n^{n/(n+1)}> 2a_n\big\}.$$ If $n\in T$, then $$ a^{n/(n+1)}_n> 2a_n\quad\Longrightarrow\quad a_n^n>2^{n+1}a_n^{n+1}\quad\Longrightarrow\quad a_n<2^{-n-1}, $$ and hence $a_n^{n/(n+1)}<\big(2^{-n-1}\big)^{n/(n+1)}=2^{-n}$. Therefore $$ \sum_{n=1}^\infty a^{n/(n+1)}_n= \sum_{n\in S} a^{n/(n+1)}_n+\sum_{n\in T} a^{n/(n+1)}_n\le 2\sum_{n=1}^\infty a_n+\sum_{n=1}^\infty 2^{-n}=2\sum_{n=1}^\infty a_n+1. $$ Thus $$ \sum_{n=1}^\infty a^{n/(n+1)}_n\le 1+2A<\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/678628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Prove that $(n!)^2$ is greater than $n^n$ for all values of n greater than 2. This problem , I assume can be proved using induction, however I am trying to find another way. Is there a simple combinatorial approach? One notices that $(n!)^2$ is equal to the number of permutations of size n squared, and that $n^n$ is the number of redundant combinations where there are n spaces and n choices. Any help would be much appreciated. Thanks
Divide $(n!)^2 > n^n$ by $n!$: it is equivalent to show $$n! = 1 \times 2 \times \ldots \times (n-1) \times n > \frac{n}{n} \times \frac{n}{n-1} \times \ldots \times \frac{n}{2} \times \frac{n}{1} = \frac{n^n}{n!}.$$ It is a bit of simple algebraic manipulation to show that each factor on the lhs is at least the corresponding factor on the rhs: $k+1 \ge \frac{n}{n-k}$ for $k \in \overline{0, n-1}$, since this rearranges to $k(n-k-1) \ge 0$. For $n > 2$ the inequality is strict whenever $0 < k < n-1$, and such a $k$ exists, so the product inequality is strict.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Is this process a martingale I was solving some practice problems in stochastics and faced the following exercise: Given Brownian motion $W(t)$ and a stochastic process $B(t)$ defined as: $$B(t) = \begin{cases} W(t), & \text{if $0 \le t < 1$} \\ tW(1), & \text{if $1 \le t < \infty$} \\ \end{cases}$$ Answer the following: * *Is $B(t)$ a martingale? *Compute $QV_2(B)$ I have never faced a problem in this form before, thus I am slightly confused, so could you help me on it? My thoughts: * *Speaking about 1, is it correct to show that $tW(1)$ is not a martingale and using this fact state that $B(t)$ is not a martingale? *Well, actually I have never seen such notation, but I guess the question is to compute the quadratic variation, so how should one do it for this sort of processes? Thank you in advance.
Hi, your first intuition is correct. Formally, you could show the statement by noting, for example, that for $t>s>1$: $$E[B_t \mid \mathcal{F}_s] = E[tW(1) \mid \mathcal{F}_s] = tW(1) \neq sW(1) = B_s$$ almost surely, since $W(1)$ is $\mathcal{F}_s$-measurable for $s \ge 1$ and $W(1) \neq 0$ a.s. For your second question, it is more a direct application of your course, if you want my opinion. For $t<1$, have you seen what the quadratic variation of a Brownian motion is? For $t>1$, you can check that a continuous process of finite variation has null quadratic variation. Best regards
{ "language": "en", "url": "https://math.stackexchange.com/questions/678764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove $f(x,y) > g(x,y)$ for all $x,y \in [0,1]$ I'm trying to prove the following: $$ 4xy + 4(1-x)(1-y) < \max\{8xy,8(1-x)(1-y),3\} \qquad \forall x,y \in [0,1] $$ In the language from the class, I'm trying to show that: $m_2 < \max\{m_1,m_3,m_4\}$ I think that's written correctly. It's derived from a microeconomics game theory problem with $3$ players. I'm trying to prove that some choice ($m_2$) has an expected value that is strictly less than at least one of the three other choices ($m_1, m_3, m_4$) no matter how $x$ and $y$ vary, assuming $x$ and $y$ between $0$ and $1$ inclusive. ($x$ and $y$ are probability variables.) Spot checks show that this is the case, but I assume calculus can be used to prove it for all cases. I don't know if it's required of us, but I want to know nonetheless. * *for $x=0, y=0$: $m_2 < m_3$. *for $x=1, y=1$: $m_2 < m_1$. *for $x =.5, y=.5$: $m_2 < m_4$. and so on. My intuition says the answer somehow involves doing something with "level set" but that's getting to the furthest reaches of my math knowledge. Thanks, I hope I'm asking this correctly!
Say that $a=4xy$ and $b=4(1-x)(1-y)$. If $a<b$, we know that $a+b<b+b=2b=8(1-x)(1-y)$. If $a>b$, we know that $a+b<a+a=2a=8xy$. If $a=b$, we have $xy=(1-x)(1-y)$. We then want to show that $a+b<3$, since $a+b=2a=2b$. Equality holds if and only if $x=1-y$. Then, the maximum of $$ 4xy+4(1-x)(1-y)=8y(1-y) $$ occurs at $y=\frac 12$. The maximum is $8\cdot \frac 12\cdot \frac 12=2<3$. Thus, we always can find a part of the right hand side that is strictly larger than the left hand side, $a+b$, and therefore the inequality holds for all $x,y\in[0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimal answer possible. Is the set of solutions satisfying this equation finite or infinite? If the set of solutions is finite, what is it? $$(2x+2y+z)/125\leq9.5$$ where $x$, $y$ and $z$ $\leq$ 250. Also $x$, $y$, and $z$ have to be integers.
Let $a = 250 - x$, $b = 250 - y$, $c = 250 - z$; since $x, y, z \le 250$ are integers, $a, b, c$ are non-negative integers. The condition becomes $$2(250 - a) + 2(250 - b) + (250 - c) \le 125 \times 9.5 = 1187.5,$$ that is, $2a + 2b + c \ge 62.5$, or equivalently (for integers) $2a + 2b + c \ge 63$. Since nothing bounds $x, y, z$ from below, $a, b, c$ can be arbitrarily large, so there are infinitely many solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/678943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
intuition for matrix multiplication not being commutative I want to have an intuition for why AB in matrix multiplication is not same as BA. It's clear from definition that they are not and there are arguments here (Fast(est) and intuitive ways to look at matrix multiplication?) that explain that this is in order to maintain certain compositional properties. Example: (column vector) $A = \left( \begin{array}{c} 1\\ 2\\ 3 \end{array} \right)$ (row vector) $B = \left(1, 5, 0\right)$ If we view a matrix as a linear combination of columns, then I read this from right to left as saying "take 1 times" of column 1 of left matrix, then "take 5 times" of column 2 of left matrix, then "take 0 times" of column 3 of left matrix. Intuitively this means the $B$ vector is the set of weights for the linear combination and the columns of $A$ are the ones being combined. This yields: $AB = \left( \begin{array}{c} 1 & 5 & 0\\ 2 & 10 & 0\\ 3 & 15 & 0 \end{array} \right)$ 1st question: is this a valid way to think of the operation? It gives the right answer here, but more generally is it correct? 2nd question: how can we apply this (or a better) intuition to the case of mutiplying $BA$? We have: $BA = \left((1\times1) + (5\times2) + (3\times0)\right) = \left(11\right)$ not sure how to think of that intuitively. one intuition that has been proposed is matrix multiplication as linear composition of functions. I'm open to that but usually I don't think of matrices like $A$ and $B$ as individually representing functions.
I know this was a while ago, but I think I have a nice answer. Remember that matrices are linear transformations, so we can rephrase the question "Is $AB$ the same as $BA$?" as "Does the order of two linear transformations matter?" The answer is clearly yes. To see why, consider two transformations of the $xy$-plane. Transformation $A$ only scales the $x$ coordinates and transformation $B$ rotates the plane by 45 degrees clockwise. Now, imagine applying both of these transformations to the standard basis. If we scale the $x$ coordinate first, then only the length of $\hat{i}$ changes. If we scale the $x$ coordinates after the rotation, though, then the rotated versions of both $\hat{i}$ and $\hat{j}$ will be affected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $a = \mathrm {sup}\ B$, how to show that the following holds? Let $B \subseteq \mathbb R$ be a nonempty set. If $a = \mathrm {sup}\ B$, then it will be the case that for all $n \in \mathbb N$ that an upper bound of $B$ is $$a +\frac {1}{n}$$ while $$a - \frac {1}{n}$$ won't be an upper bound of $B$. I attempted to prove this using mathematical induction. It is easy to show that the base case for $n = 1$ is true (by definition of supremum). Then, assuming that the property holds for some $k \in \mathbb N$ such that $k \ge 1$, we proceed to show that it holds for $k + 1$. How can we show that $$a - \frac {1}{k + 1} \le a$$ and that $$a \le a +\frac {1}{k + 1}$$
If $a + \frac 1 n$ were not an upper bound of $B$, then there would be an element $a'$ in $B$ such that $a' \gt a + \frac 1 n \gt a$, leading to a contradiction since $a$ is an upper bound of $B$. Similarly, if $a - \frac 1 n$ were an upper bound of $B$, then $a - \frac 1 n \lt a$ would contradict the fact that $a$ is the least upper bound of $B$. As T. Bongers has suggested, induction is not at all required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
local minimum of $|f|$ Suppose $f \in H(\Omega)$, where $\Omega\subset\mathbb C$ is an open set. Under what condition can $|f|$ have a local minimum? Here $|f|^2 = u^2 +v^2 = g$, say, where we write $f(x,y)= u(x,y) +i v(x,y)$. Then $g$ has a local minimum if $g_{xx} > 0$, and $g_{xx}= 2[u_x^2 +uu_{xx} +v_x ^2 +vv_{xx}]$. So as the square terms are always non-negative, the required condition is $uu_{xx}+vv_{xx} >0$. I am asking if this is a correct answer; if not, then please guide me in the right way. Thanks in advance.
If $f$ has a zero on $\Omega$, then clearly $|f|$ has a local minimum at those points. Otherwise, the open mapping principle prevents $|f|$ from having a local minimum. (If $a \in \Omega$ then $f$ maps open discs centered at $a$ to open sets. In particular if $f(a) \neq 0$ then there are nearby points $z$ where $|f(z)| < |f(a)|$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/679226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Limit of cosine function Can I evaluate the following limit $\lim_{(x, y) \to (0, 1)}\cos (x)$ as below $\lim_{(x, y) \to (0, 1)}\cos (x)=\lim_{x \to 0}\cos (x)=\cos (0)=1?$ Can I further explain why I can evaluate the limit in that way as follows? As $(x, y) \to (0, 1)$, $x \to 0$ and $y \to 1$. The cosine function is continuous on the interval $(-\infty, \infty)$. Thus, $\lim_{(x, y) \to (0, 1)} \cos (x)=\lim_{x \to 0} \cos (x)=\cos (0)=1.$
It is always true that: $$ \lim_{(x_1,\dots,x_n)\rightarrow (x^0_1,\dots,x^0_n)}f(x_i)=\lim_{x_i\rightarrow x^0_i}f(x_i) $$ without any further hypothesis on $f$: since $f$ depends only on the single variable $x_i$, the other variables play no role in the limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Combinatorial interpretation of Euclid's form for even perfect numbers Euclid showed that if $p$ is a prime such that $2^{p}-1$ is also a prime, then the number $n=2^{p-1}.(2^{p}-1)$ is perfect. Much later, Euler proved that every even perfect number is of this form. Thinking of proper divisors of a number as analogues of proper subsets of a discrete set, one can see an analogy between this number of subsets ($2^{p}-1$ for a set of $p$ elements) and the Mersenne prime occuring in Euclid's formula. So my question is: as a perfect number is defined as a number equal to the sum of its proper divisors, is there a combinatorial interpretation of Euclid's form explaining why an even perfect number is of the aforementionned form? Moreover, if $m$ is a (not necessarily even) perfect number, must there exist an integer $k$ such $2^{k}-1$ divides $m$? Thanks in advance.
Comment converted to an answer, so that this question does not remain in the unanswered queue. If $m$ is (even) perfect, then $m = 2^{p-1} (2^p - 1)$, for some Mersenne prime $2^p - 1$. So taking $k = p$, the answer to your second question is yes for even perfect numbers. As for odd perfect numbers $M$, it is currently unknown whether $2^2 - 1 = 3 \mid M$. As to your first question: I am currently unaware of any combinatorial interpretation of Euclid's form explaining why an even perfect number is of the aforementioned form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The reason behind the name "Orthogonal transformation". An orthogonal transformation is a linear transformation such that $(Tx,Ty)=(x,y)$. Orthogonality is suggestive of perpendicularity. What might have been the reason for naming a distance preserving linear transformation on a vector space "orthogonal"? Thanks!
I think it's due to the fact that, in a finite-dimensional inner product space, the columns of the matrix of the transformation with respect to an orthonormal basis are mutually orthogonal unit vectors, i.e. they form an orthonormal set. Orthogonal transformations don't just preserve distance; they also preserve the inner product. In more familiar examples, you can think of this as preserving the angle between two vectors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Easier way to show $(\mathbb{Z}/(n))[x]$ and $\mathbb{Z}[x] / (n)$ are isomorphic $$(\mathbb{Z}/(n))[x] \simeq \mathbb{Z}[x] / (n)$$ I've shown this by showing that the map that sends $\overline{1} \mapsto [1+(n)]$ (where the bar denotes the congruence class mod $n$) and $x \mapsto [x+(n)]$ is a homomorphism that is injective and surjective, which is a bit cumbersome. Is there an easier way to show this? I can't tell if this is trivial or not.
The homomorphism in the other direction is maybe easier to see. From the composite $\mathbb Z\to \mathbb Z/(n)\hookrightarrow (\mathbb Z/(n))[x]$ (canonical projection followed by canonical inclusion) and $x\mapsto x$, we obtain a ring homomorphism $\mathbb Z[x]\to (\mathbb Z/(n))[x]$ (universal property of the polynomial ring). The kernel is quite clearly the set of polynomials whose coefficients are multiples of $n$, i.e. $n\mathbb Z[x] = (n)$. Hence, by the first isomorphism theorem, we obtain an injective homomorphism $\mathbb Z[x]/(n)\to(\mathbb Z/(n))[x]$. It is also onto, since it is easy to find a preimage in $\mathbb Z[x]$ for any element of $(\mathbb Z/(n))[x]$; so it is an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Value of $\sum\limits_{n= 0}^\infty \frac{n²}{n!}$ How to compute the value of $\sum\limits_{n= 0}^\infty \frac{n^2}{n!}$ ? I started with the ratio test which told me that it converges but I don't know to what value it converges. I realized I only know how to calculate the limit of a power/geometric series.
The idea: for $$\frac{n^r+b_{r-1}n^{r-1}+\cdots+b_1 n}{n!},$$ we can rewrite the numerator as $$n(n-1)\cdots(n-r+1)+a_{r-1}\, n(n-1)\cdots(n-r+2)+\cdots+a_2n(n-1)+a_1 n+a_0,$$ which turns the fraction into $$\frac1{(n-r)!}+\frac{a_{r-1}}{(n-r+1)!}+\cdots+\frac{a_2}{(n-2)!}+\frac{a_1}{(n-1)!}+\frac{a_0}{n!},$$ where the constants $a_i$, $0\le i\le r-1$, can be found by comparing the coefficients of the different powers of $n$. Here let $$\frac{n^2}{n!}=\frac{a_0+a_1n+a_2n(n-1)}{n!}=\frac{a_0}{n!}+\frac{a_1}{(n-1)!}+\frac{a_2}{(n-2)!}$$ $$\implies n^2=a_0+n(a_1-a_2)+a_2n^2$$ $$\implies a_2=1,\ a_1-a_2=0\iff a_1=a_2=1,\ a_0=0$$ So, we have $$\sum_{n=0}^{\infty}\frac{n^2}{n!}=\sum_{n=0}^{\infty}\frac1{(n-1)!}+\sum_{n=0}^{\infty}\frac1{n!}=\sum_{n=1}^{\infty}\frac1{(n-1)!}+\sum_{n=0}^{\infty}\frac1{n!}$$ as $\displaystyle\frac1{(-1)!}=0$. Now observe that each of the two sums equals $e$ (the exponential constant), so the total is $2e$.
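A quick numerical check of the conclusion (an illustration only, not part of the derivation): the partial sums should approach $2e\approx 5.43656$.

```python
import math

# Partial sum of n^2/n!; the n = 0 term is 0 and the tail decays factorially.
total = sum(n**2 / math.factorial(n) for n in range(100))
print(total, 2 * math.e)  # both print 5.43656365691809...
```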
{ "language": "en", "url": "https://math.stackexchange.com/questions/679790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Example of a domain in R^3, with trivial first homology but nontrivial fundamental group Let $\Omega \subset \mathbb{R}^3$ be a domain. Is it true that if $H_1(\Omega)$ = 0, then $\pi_1(\Omega) = 0$? For a counterexample, the group $\pi_1(\Omega)$ needs to be a perfect group and so I was trying with the smallest one i.e. $A_5$. But I don't think the standard construction of the space from CW complexes, embeds into $\mathbb{R}^3$.
First, suppose that you have a compact connected submanifold $C$ with nonempty boundary in the 3-sphere. If some boundary components are spheres, glue to $C$ the 3-balls which they bound; this changes neither the 1st homology nor the fundamental group. Suppose the result, which I will still call $C$, still has nonempty boundary. Recall that $$ \chi(C)=\chi(\partial C)/2, $$ which immediately implies that the 1st Betti number of $C$ is positive. Hence, we are done in this case. The remaining possibility is that the original $C$ was simply connected to begin with. This answers your question positively in the case of domains which are interiors of compact manifolds with boundary. However, for general domains the answer is negative. Take a doubly wild arc in 3-space, meaning one that is wild at both ends. Then the complement is in general not simply connected but is acyclic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/679910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Showing set linear independence How do I show that the set $\{ e^x , ... ,e^{nx} \}$ is linearly independent? I tried using induction as the base case of $\{ e^x \}$ and even $\{ e^x, e^{2x} \}$ is easy, but I can't use the I.H. to go further. What I try to do with the induction step is plug in values of x and try to force the last coefficient to be 0, to no avail.
Hint: You can use the Wronskian: if the Wronskian of the functions is nonzero at even a single point, the functions are linearly independent. See here.
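To illustrate the hint concretely (a sketch assuming $n=3$; the general case is the same computation), SymPy can evaluate the Wronskian symbolically:

```python
from sympy import exp, symbols, wronskian, simplify

x = symbols('x')
# Wronskian of e^x, e^{2x}, e^{3x}; a nonzero Wronskian at any point
# implies linear independence.
W = wronskian([exp(x), exp(2*x), exp(3*x)], x)
print(simplify(W))  # 2*exp(6*x), which never vanishes
```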
{ "language": "en", "url": "https://math.stackexchange.com/questions/680017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Trigonometry, knowing 3 sides how to find the height? I have a problem where I know the 3 sides of a triangle; with these sides I can figure out what type of triangle it is. What I really want to find is the height of the triangle and another "side". Let me explain what I want with the above picture. I know $a$, $b$ and $c$, and I can calculate the 2 angles (the angle opposite $c$ and the angle opposite $b$, those at the black dots). I want to find the red dots (the bottom red dot is the other "side" I noted before: the distance from the black dot to the red dot). So I want to know the $(x,y)$ of the red dot at the top, via the combination of the height and the length of $b$ or $c$ (which also needs the two angles), in order to find the correct $(x,y)$ for the red dot at the top. Thanks.
Or use Heron's formula for the triangle's area (http://en.wikipedia.org/wiki/Heron%27s_formula): once you have the area, the height on side $a$ is $h = 2\,\text{Area}/a$, since $\text{Area}=\frac12 a h$.
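A minimal sketch of that approach (my own naming and coordinate conventions, with the base $a$ placed on the x-axis from $(0,0)$ to $(a,0)$):

```python
import math

def apex_and_height(a, b, c):
    """Apex (x, y) over base a, where side c meets (0,0) and side b meets (a,0)."""
    # From x^2 + y^2 = c^2 and (x - a)^2 + y^2 = b^2:
    x = (a*a + c*c - b*b) / (2*a)
    s = (a + b + c) / 2                    # semi-perimeter
    area = math.sqrt(s*(s-a)*(s-b)*(s-c))  # Heron's formula
    h = 2 * area / a                       # since area = a*h/2
    return (x, h)                          # the apex is (x, h)

print(apex_and_height(5, 4, 3))  # (1.8, 2.4) for the 3-4-5 triangle
```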
{ "language": "en", "url": "https://math.stackexchange.com/questions/680262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Trigonometric Series Proof I am posed with the following question: Prove that for even powers of $\sin$: $$ \int_0^{\pi/2} \sin^{2n}(x)\, dx = \dfrac{1 \cdot 3 \cdot 5\cdots (2n-1)}{2 \cdot 4 \cdot 6 \cdots 2n} \times \dfrac{\pi}{2} $$ Here is my work so far: Proof by induction. Base case $n = 2$: $$\int_0^{\pi/2} \sin^4(x)\, dx = \dfrac{1 \cdot 3}{2 \cdot 4} \times \dfrac{\pi}{2} $$ $$ \frac{3\pi}{16} = \frac{3 \pi}{16} $$ The base case succeeds (the left-hand side evaluated by MAPLE). For the inductive step I tried $$ \int_0^{\pi/2} \sin^{2n + 1}(x)\, dx = \int_0^{\pi/2} \sin^{2n} x \sin x\, dx = \int_0^{\pi/2} (1 - \cos^2x)^n \sin x\, dx$$ Let $u = \cos x$, $du = - \sin x\, dx$, which gives $$ -\int_0^{\pi/2} (1-u^2)^n\, du$$ Now I am stuck and unsure what to do with this proof ... Any help would be greatly appreciated (I also tried using the reduction formulas to no avail) EDIT I have completed the proof. I am posting this here for any other people who may have the same question... We will prove by induction that for all $n \in \mathbb{N}_{>0}$ \begin{align*} \int_0^{\pi/2} \sin^{2n} x\, dx & = \frac{1 \times 3 \times 5 \times \cdots \times (2n - 1)}{2 \times 4 \times 6 \times \cdots \times 2n} \times \frac{\pi}{2} \tag{1} \end{align*} With $k = 1$ as our base case, we have \begin{align*} \int_0^{\pi/2}\sin^2 x\, dx = \frac{1}{2}x - \frac{1}{4} \sin{2x} \bigg|_{0}^{\pi / 2} = \frac{\pi}{4} = \frac{1}{2} \times \frac{\pi}{2} \end{align*} Let $n \in \mathbb{N}_{>0}$ be given and suppose (1) is true for $k = n$. Then \begin{align*} \int_0^{\pi/2} \sin^{2n+2} x\, dx & = - \frac{1}{2n+2} \sin^{2n+1}x \cos x \bigg|_{0}^{\pi / 2} + \frac{2n+1}{2n+2} \int_0^{\pi/2} \sin^{2n} x\, dx \\ & = \frac{2n+1}{2n+2} \int_0^{\pi/2} \sin^{2n} x\, dx \\ & = \frac{2n+1}{2n+2} \times \frac{1 \times 3 \times 5 \times \cdots \times (2n - 1)}{2 \times 4 \times 6 \times \cdots \times 2n} \times \frac{\pi}{2} \\ & = \frac{1 \times 3 \times 5 \times \cdots \times (2n + 1)}{2 \times 4 \times 6 \times \cdots \times (2n+2)} \times \frac{\pi}{2} \end{align*} Thus, (1) holds for $k = n + 1$, and by the principle of induction, (1) holds for all positive integers $n$. $\square$
For $k=1$, it's straightforward to verify$$\int_0^{\pi/2}\sin^2x~dx=\int_0^{\pi/2}\frac{1-\cos 2x}2dx=\frac\pi4$$ Assume $k=n$ we have $$I_n=\int_0^{\pi/2}\sin^{2n}x~dx=\frac{(2n-1)!!}{(2n)!!}\frac\pi2$$ Then for $k=n+1$, $$\begin{align}I_{n+1}&=\int_0^{\pi/2}\sin^{2n}x(1-\cos^2x)dx\\ &=I_n-\int_0^{\pi/2}\sin^{2n}x\cos^2x~dx\\ &=I_n-\left.\frac1{2n+1}\sin^{2n+1}x\cos x\right|_0^{\pi/2}-\frac1{2n+1}\int_0^{\pi/2}\sin^{2n+1}x\sin x~dx\\ &=I_n-\frac1{2n+1}I_{n+1}\end{align}$$ Solve the recurrent relation and obtain $$I_{n+1}=\frac{2n+1}{2n+2}I_n=\frac{(2n+1)!!}{(2n+2)!!}\frac\pi2$$
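A numerical spot check of the closed form (illustration only):

```python
from math import pi, sin
from scipy.integrate import quad

for n in range(1, 6):
    lhs, _ = quad(lambda x: sin(x)**(2*n), 0, pi/2)
    rhs = pi / 2
    for k in range(1, n + 1):
        rhs *= (2*k - 1) / (2*k)   # builds (2n-1)!!/(2n)!!
    print(n, lhs, rhs)             # the two columns agree
```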
{ "language": "en", "url": "https://math.stackexchange.com/questions/680340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $a$ and $b$ are odd, prove $\gcd(a,b) = \gcd(\frac {\left| {a-b} \right |} {2}, b)$ Honestly I don't have a strong idea. I don't know where to even begin, I have considered that the $\gcd(a,b)$ is somehow less than $a-b$, but I'm not even sure why that would be the case.. Any help would be great!
Some hints. Let $g=\gcd(a,b)$. We want to show that $g\mid{\rm RHS}$. (1) $g$ is a factor of $a-b$ because. . . (2) therefore $g$ is a factor of $|a-b|$ because. . . (3) therefore $g$ is a factor of $\frac{1}{2}|a-b|$ because. . . (this is the hard bit) (4) and it is obvious that $g$ is a factor of $b$, so (5) $g$ is a factor of ${\rm RHS}$. See if you can fill in all the reasons where indicated by dots. Now you have to do the same kind of thing in the other direction: let $h={\rm RHS}$ and prove that $h$ is a factor of ${\rm LHS}$. Good luck!
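If it helps to convince yourself before writing the proof, here is a brute-force check over random odd pairs (a sanity check, not a proof):

```python
import math, random

for _ in range(10_000):
    a = 2 * random.randint(1, 10**6) + 1   # random odd a
    b = 2 * random.randint(1, 10**6) + 1   # random odd b
    assert math.gcd(a, b) == math.gcd(abs(a - b) // 2, b)
print("identity held on all samples")
```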
{ "language": "en", "url": "https://math.stackexchange.com/questions/680418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
List of functions not integrable in elementary terms When teaching integration to beginning calculus students I always tell them that some integrals are "impossible" (with a bit of expansion on what that actually means). However I must admit that the examples I give mostly come from "folklore" or guesswork. Can anyone point me to a list (not a complete list of course!) of fairly simple elementary functions whose antiderivatives are not elementary? I'm thinking of things like $\exp(x^2)$ which is the standard example, $\sin(\exp(-x))$ perhaps, things like this, not hugely complicated formulae.
Liouville's theorem in fact exactly characterizes functions whose antiderivatives can be expressed in terms of elementary functions. However, the only proof I have seen is not exactly suitable for teaching beginning calculus students. In fact, the proof of the impossibility of solving a general 5th degree polynomial by radicals (by Galois) and the proof of Liouville's theorem share a common idea. (Liouville's theorem is part of what is called differential Galois theory) If you are prepared to wade through a bit of differential Galois theory to get to the proof, you could read R.C.Churchill's notes available here. You could also try Pete Goetz's presentation here which assumes Liouville's theorem and proves the the Gaussian does not have a elementary antiderivative. Note: Proving that a certain function does not have an elementary antiderivative is often quite difficult, and reduces to the problem of showing that a certain differential equation does not have a solution. I have not seen many examples of such functions, and I do not know a reference which proves it for all the functions listed in the previous answer by sas.
{ "language": "en", "url": "https://math.stackexchange.com/questions/680478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 2 }
Why isn't a t test used when comparing two proportions? All the examples I've seen say to use a z test to compare two proportions. For example, n=13, x=0.22 versus n=10, x=0.44. Then all the examples warn that the z test doesn't work with low sample sizes. So why can't we just use a t test, which provides p values appropriate to the sample size?
Stefan may have already addressed your concerns, but here is the basic rationale for the normal approximation for comparing proportions and why the t-test is not useful: Although sample proportions are not normally distributed for any finite sample size, they approach a normal distribution as the sample size approaches infinity. In particular, if $p$ is the true proportion in the population, then in a random sample of size N the number of "successes" is binomially distributed with success probability $p$ and $N$ trials. Of course, the binomial approaches the normal distribution as $N$ gets large. For paired data where you are comparing differences in proportions, you need BOTH samples large so you can assume that the difference is also approximately normally distributed. So why not the t-test for small samples? The t-test requires that the sample be drawn from a normal population, which, as Stefan pointed out, it is not.
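A small simulation makes the discreteness problem visible (a sketch, with parameters borrowed from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 13, 0.22
phat = rng.binomial(n, p, size=100_000) / n

print(np.unique(phat))           # only 14 possible values: far from normal
print(phat.mean(), phat.std())   # yet close to p and sqrt(p*(1-p)/n)
```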
{ "language": "en", "url": "https://math.stackexchange.com/questions/680587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convolution of convolution Let us write a convolution $\int_{0}^{t} A(t-\tau) \mathrm{d}x(\tau)$ as $A \star \mathrm{d}x$. I would like to write down the expression for the double convolution $A \star \mathrm{d}x \star \mathrm{d}x$. Following the definition I obtain $ \int_{0}^{t} \int_{0} ^{t-\tau} A(t-\tau-s) \mathrm{d}x(s) \mathrm{d}x(\tau)$. Can this be given a more compact form, especially in reference to the upper limit of integration in the inner integral? I would like to perform the change of variable $t-\tau = w$ but am unsure as to how to proceed; any hint would be most appreciated, thanks
When the function is differentiable and you can write the operation as a regular convolution, you can use the fact that $\dot x\ast \dot x $ makes sense, unlike $dx\star dx$, which is not defined. In this case you would have $A\star dx\star dx = A\ast \dot x \ast \dot x = A\ast (\dot x \ast \dot x)$: $$\int_0^t A(t-u) \int_0^u \dot x(u-s)\dot x(s)\,ds\,du.$$ If you want to rewrite it as before: $$\int_0^t A(t-u) \int_0^u \dot x(u-s)\,dx(s)\,du.$$ Otherwise, you can change the limits, but at the cost of defining another function $x^t(w)=x(t-w)$ when changing $t-\tau=w$; in this case $dx^t(w)=-dx(t-w)$: $$\int_0^t \int_0^{t-\tau} A(t-\tau-s)\,dx(s)\,dx(\tau)=\int_0^t \int_0^{w} A(w-s)\,dx(s)\,dx^t(w)$$
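The discrete analogue can be checked directly, since associativity is what makes the grouping irrelevant (an illustration with arbitrary made-up sequences):

```python
import numpy as np

A  = np.array([1.0, 2.0, 3.0])
dx = np.array([0.5, -1.0, 0.25])

lhs = np.convolve(np.convolve(A, dx), dx)   # (A * dx) * dx
rhs = np.convolve(A, np.convolve(dx, dx))   # A * (dx * dx)
print(np.allclose(lhs, rhs))                # True
```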
{ "language": "en", "url": "https://math.stackexchange.com/questions/680683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
how to find this line integral and what is its answer evaluate the line integral $$\int_C (xy^2 dy-x^2y dx), $$ taken in the counter-clockwise sense along the cardioid $$r= a(1+\cos\theta)$$ Here, substituting the parametric form of the cardioid, $x=a(2\cos t-\cos2t)$, $y= a(2\sin t-\sin2t)$, and letting the parameter run from $0$ to $2\pi$, I tried to solve it, but it became complicated. Is there any other method? And could you please also show the solution by this parametric method?
I think this problem can be solved by Green's identity. Let $D$ denote the region enclosed by $C$; with $P=-x^2y$ and $Q=xy^2$ we have $Q_x-P_y=x^2+y^2$. Using the polar substitution, we have: \begin{equation} \int_C(xy^2dy-x^2ydx)=\iint_D(x^2+y^2)\,dx\,dy \\=\int_0^{2\pi}\int_0^{a(1+\cos(\theta))}r^3\,dr\,d\theta\\ =\frac{35}{16}\pi a^4 \end{equation} (I omit the routine trigonometric integration.) I want to explain why my answer differs from @Yiorgos S. Smyrlis's answer. The key is the parametric equation of the cardioid. That is, $x=a(2\cos(t)-\cos(2t))$, $y=a(2\sin(t)-\sin(2t))$ is not the parametric equation corresponding to $r=a(1+\cos\theta)$! The parametric equation corresponding to $r=a(1+\cos\theta)$ is $x=\frac{a}{2}(1+2\cos(t)+\cos(2t))$, $y=\frac{a}{2}(2\sin(t)+\sin(2t))$ (see Cardioid). I display the two pictures as follows: the 1st picture is $r=a(1+\cos\theta)$ and the 2nd is $x=a(2\cos(t)-\cos(2t))$, $y=a(2\sin(t)-\sin(2t))$. The difference is plainly visible on the x-axis. One can also check that the curve $x=a(2\cos(t)-\cos(2t))$, $y=a(2\sin(t)-\sin(2t))$ does not pass through $(0,0)$, while $r=a(1+\cos\theta)$ does. So, if we use the corresponding parametric equation $x=\frac{a}{2}(1+2\cos(t)+\cos(2t))$, $y=\frac{a}{2}(2\sin(t)+\sin(2t))$ and calculate the integral, we have: \begin{equation} \int_C(xy^2dy-x^2ydx)\\ =\int_0^{2\pi}\sin(t)^2\cos(t)a^4(4\cos(t)^5+14\cos(t)^4+17\cos(t)^3+7\cos(t)^2-\cos(t)-1)dt\\ =\frac{35}{16}\pi a^4 \end{equation} The same answer! So I think if we use the polar equation corresponding to $x=a(2\cos(t)-\cos(2t))$, $y=a(2\sin(t)-\sin(2t))$, we can also recover the result $21\pi a^4$ by using Green's identity. All of our methods are valid, but you should be careful which parametric equation you use when solving this problem. About the parametric equation of the cardioid: the polar equation tells us that every point of the curve at angle $\theta$ to the polar axis has radius $a(1+\cos\theta)$. So we can work in the complex plane: in exponential form, the curve in the complex plane is $z=a(1+\cos\theta)e^{i\theta}$. To get the parametric equation, we take the real and imaginary parts: \begin{equation} x=\Re\{ a(1+\cos\theta)e^{i\theta} \}=\frac{a}{2}(1+2\cos\theta+\cos2\theta)\\ y=\Im\{ a(1+\cos\theta)e^{i\theta} \}=\frac{a}{2}(2\sin\theta+\sin2\theta) \end{equation} That is the parametric equation.
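A numerical check of the polar computation (illustration only, with $a=1$):

```python
from math import cos, pi
from scipy.integrate import quad

# After integrating r^3 in r, the integrand in theta is (1 + cos t)^4 / 4.
val, _ = quad(lambda t: (1 + cos(t))**4 / 4, 0, 2*pi)
print(val, 35*pi/16)  # both ~6.8722
```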
{ "language": "en", "url": "https://math.stackexchange.com/questions/680787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Distance between two sets in a metric space is equal to the distance between their closures Let $A,B \subseteq \mathbb{R}^d$ be non-empty sets. Define their distance to be $$ d(A,B) = \inf \{ ||x-y|| : x \in A, \; \; y \in B \} $$ For any $A,B$, do we have that $d(A,B) = d( \overline{A}, \overline{B} )$? Is the following proof correct? Proof Note, you always have $d(A,B)\geq d(\bar{A},\bar{B})$ since the infimum is taken over a bigger set on the right-hand side. If $d(\bar{A},\bar{B}) = d$, then there are sequences $x_1,x_2,\ldots\in \bar{A}$ and $y_1,y_2,\ldots\in \bar{B}$ such that for every $\epsilon>0$ there is an $N$ with $d(x_n,y_n)\leq d+\epsilon$ for $n\geq N$. Now each of the $x_i$ is in $\bar{A}$; this means for each $x_i$ there exists $x_i'\in A$ such that $d(x_i,x_i')<\epsilon$ (in a metric space the closure is the set of limit points of $A$, so there must be an $x_i'\in A$ with $d(x_i,x_i')<\epsilon$). Similarly there exists $y_i'\in B$ such that $d(y_i,y_i')<\epsilon$. Then $ d(\bar{A},\bar{B})\geq d(x_i,y_i)-\epsilon\geq d(x_i',y_i')-3\epsilon \geq d(A,B)-3\epsilon$, where I used $d(x_i,y_i)+d(x_i',x_i)+d(y_i,y_i')\geq d(x_i',y_i')$. But $\epsilon$ is arbitrary, so we are done.
The proof is correct. Just to have a shorter proof of the same fact (the distance between sets is equal to the distance between their closures), here goes: Let $\rho = d(A,B)$. The inequality $d(\overline{A},\overline{B})\le \rho$ holds because the infimum on the left is over a larger set. In the opposite direction, take any $a\in \overline{A}$ and $b\in \overline{B}$. The goal is to prove $d(a,b)\ge \rho$. For every $\epsilon>0$ there exist $a'\in A$ and $b'\in B$ such that $d(a,a')<\epsilon$ and $d(b,b')<\epsilon$. By the triangle inequality, $\rho \le d(a',b')\le d(a,b)+ 2\epsilon$. Since $\epsilon$ can be arbitrarily small, $\rho\le d(a,b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/680855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Ordinary Differential Equation - Boundary Conditions Question The following problem has brought up some misunderstandings for me - Find the eigenvalues λ, and eigenfunctions u(x), associated with the following homogeneous ODE problem: $$ {u}''\left ( x \right )+2{u}'\left ( x \right )+\lambda u\left ( x \right )=0\; ,\; \; u\left ( 0 \right )=u\left ( 1 \right )=0 $$ Solution: Try $ u\left ( x \right )=Ae^{rx} $, which gives roots $ r=-1\pm \sqrt{1-\lambda } $. The solution takes a different form in each of the cases $$ \lambda <1\; ,\; \; \lambda =1\; ,\; \; \lambda >1 $$ For the first case $ \lambda <1 $ the general solution is $$ u\left ( x \right )=Ae^{\left ( -1+\sqrt{1-\lambda } \right )x}+Be^{\left ( -1-\sqrt{1-\lambda } \right )x} $$ $$ u\left ( x \right )=C\cosh \left ( -1+\sqrt{1-\lambda } \right )x+D\sinh \left ( -1-\sqrt{1-\lambda } \right )x $$ Applying boundaries: (this is where my question lies - how to correctly apply BCs) $$ u\left ( 0 \right )=0 \; \; \Rightarrow \; \; C+D=0 $$ (in some cases I've seen the conclusion that only $ C=0 $). Do I assume that, as $ \cosh $ is never zero, $ C=0 $ and therefore it must be that $ D=0 $? Or do I only take the result $ C=0 $ from the first BC and then apply the second BC to see what happens to $ D $? The latter (assuming $ C=0 $) gives $$ D\sinh \left ( -1-\sqrt{1-\lambda } \right )=0 $$ So either $ D=0 $ or $ \sinh \left ( -1-\sqrt{1-\lambda } \right )=0 $, i.e. $ -1-\sqrt{1-\lambda } = i\pi n $. I'm confused by what the rules are for BCs. Can anyone point out how to proceed? Thanks
Since $\sinh(0)=0$ and $\cosh(0)=1$, the first condition is simply $0=u(0)=C\cosh(0)=C$. The sum applies to the exponential representation, i.e., $0=u(0)=Ae^0+Be^0=A+B$. At the point $1$ you get $$0=u(1)=C\cosh(-1-\sqrt{1-\lambda})+D\sinh(-1-\sqrt{1-\lambda})=D\sinh(-1-\sqrt{1-\lambda})$$ after inserting $C=0$; since $\sinh(-1-\sqrt{1-\lambda})\ne0$, one concludes $D=0$. This is also what one would expect: the eigenfunctions will involve the trigonometric sine and cosine, not the hyperbolic functions. Using $v(x)=e^{x}u(x)$ one gets $$v'(x)=e^x(u'(x)+u(x))$$ and $$v''(x)=e^x(u''(x)+2u'(x)+u(x))=(1-\lambda)v(x)$$ with the boundary conditions $v(0)=0=v(1)$ carrying over. This now is the classical problem of the vibrating string, with the known eigenfunctions $v_n(x)=\sin(\pi nx)$ where $\pi^2n^2=\lambda-1$; hence $u_n(x)=e^{-x}\sin(\pi n x)$ with eigenvalues $\lambda_n=1+\pi^2n^2$.
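A symbolic verification of the resulting eigenpairs (a check, not part of the derivation):

```python
from sympy import exp, sin, pi, symbols, diff, simplify

x = symbols('x')
n = symbols('n', positive=True, integer=True)
lam = 1 + n**2 * pi**2
u = exp(-x) * sin(n*pi*x)

print(simplify(diff(u, x, 2) + 2*diff(u, x) + lam*u))  # 0
print(u.subs(x, 0), u.subs(x, 1))                      # 0, 0
```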
{ "language": "en", "url": "https://math.stackexchange.com/questions/680919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which of the following collections of subsets of $\mathbb{R}$ form a topology on $\mathbb{R}$? 1) $T_3 =\{ \emptyset, \mathbb{R}, [−a,a] : a \in \mathbb{R},a>0 \}$; 2) $T_4 = \{ \emptyset, \mathbb{R}, [−n,n], (−a,a) : a \in \mathbb{R}, a > 0, n \in \mathbb{N}^{>0} \}$. I have that $T = \{ \emptyset, \mathbb{R}, (−a,a) : a \in \mathbb{R}, a>0 \}$ is a topology, but I am not sure how to continue with these 2. To be a topology T must satisfy (i) X and the empty set, Ø, belong to τ , (ii) the union of any (finite or infinite) number of sets in τ belongs to τ, and (iii) the intersection of any two sets in τ belongs to τ Thanks
Apologies for the blunder earlier. Hope this is correct. Say $A = (0, 10)$. Then $ \bigcup_{a\in A} [-a, a] = (-10, 10) \notin T_3$. Therefore, $T_3$ is not closed under arbitrary unions and hence does not form a topology. Now to $T_4$. Here unions cause no trouble: for instance, $\bigcup_{n \in \Bbb N \setminus \{1\} } ( - \frac 2 3 + \frac 1 n, \frac 2 3 - \frac 1 n) = (-\frac 2 3, \frac 2 3)$ (a union of open intervals is open), which is an element of $T_4$. In general, a union of members of $T_4$ is either $\mathbb{R}$, or an interval $(-a,a)$ (when the supremum $a$ of the endpoints is not attained by a closed member), or an interval $[-n,n]$ (when the largest set in the family is a closed one), and all of these belong to $T_4$; likewise, the intersection of any two members is again of one of these forms. Therefore $T_3$ is not a topology, but $T_4$ is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/681005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding Arc Length using First Fundamental Form Let $v=ln(u+\sqrt{u^2+1})+C$ be the curve given on the right helicoid $x=u\cos(v),y=u\sin(v),z=2v$. Calculate the arc lengths of this curve between the points $M_1(0,0)$ and $M_2(1,ln(1+\sqrt{2}))$ So I am trying to find $r_u(u,v)$ and $r_v(u,v)$ to construct the first fundamental form, but I am having trouble with $v$ being reliant on $u$. It could be the case that I am forgetting some of my differentiation rules. Thanks for any help (If some of the numbers seem odd, my professor changed the points to these since he said the others were misprints in the book. And this is just an extra problem not homework, so any steps after would also be helpful; I couldn't find many direct examples online elsewhere)
We usually use partial derivatives to get the metric $$g_{uu}=(\cos v,\sin v,0)\cdot (\cos v,\sin v,0)=1, \\ g_{vv}=(-u\sin v,u\cos v,2)\cdot (-u\sin v,u\cos v,2)=u^2+4, \\ g_{uv}=(\cos v,\sin v,0)\cdot (-u\sin v,u\cos v,2)=0.$$ The metric is given by $$(g)= \begin{pmatrix} 1 & 0\\ 0 & u^2+4 \end{pmatrix} $$ The curve is given by $$\alpha(u)=(u,\ln(u+\sqrt{u^2+1}))\implies \alpha'(u)=\left(1,\frac{1}{\sqrt{u^2+1}}\right)$$ Then $$||\alpha'(u)||=\sqrt{1\cdot 1^2+(u^2+4)\cdot \left(\frac{1}{\sqrt{u^2+1}}\right)^2}$$ Finally, integrate $\|\alpha'(u)\|$ from $u=0$ to $u=1$ to obtain the arc length.
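Since the resulting integral has no pleasant closed form, a numerical evaluation is the practical route (a sketch):

```python
from math import sqrt
from scipy.integrate import quad

length, _ = quad(lambda u: sqrt(1 + (u**2 + 4) / (u**2 + 1)), 0, 1)
print(length)  # approximately 2.08
```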
{ "language": "en", "url": "https://math.stackexchange.com/questions/681095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Structure of the functional space $\int_ {- \infty} ^ \infty f (x) dx = 1 $ Please help me study the useful practical features of the following function space: $$\int_{-\infty}^\infty f(x) \, dx = 1$$ For example: 1) Which types of bases are most convenient for representing an element of the space? 2) How does one find an element of best approximation for a given data set $\{(x_i, y_i) \mid i = \overline{1,n} \}$?
The integral condition says very little about $f$; it selects an affine subspace of codimension $1$, which is not any more manageable than the space you began with. For any $f\in L^1(\mathbb R)$, the function $$f-c\chi_{[0,1]},\quad \text{where } c = \int_{-\infty}^\infty f -1 $$ satisfies your condition. So, if you have a convenient representation here, you have it for all $L^1(\mathbb R)$. The Haar system is a convenient basis for the linear space $\{f\in L^1(\mathbb R):\int_{-\infty}^\infty f=0\}$. Therefore, every function in your space can be written as $$f = \chi_{[0,1]}+ \sum_{n,k}c_{n,k} \psi_{n,k}$$ with $\psi_{n,k}$ being Haar functions. You can find $c_{n,k}$ using the fact that the Haar system is orthogonal in $L^2$ (even though your function need not be in $L^2$): $c_{n,k} = \int_{-\infty}^\infty f\psi_{n,k} $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/681170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can we conclude that a distribution is a $L^2$ function by testing with $L^2$? Let $T\colon \mathcal{D}\to\mathbb{R}$ be a distribution. Does $|T(f)|\leq\|f\|_2 \forall f\in\mathcal{D}$ imply $T=T_g$ for some $g\in L^2$? What if $T$ is tempered?
Your hypothesis says that $$T:\mathcal D\to\mathbb C$$ is a linear functional which is continuous in the sense of $L^2(\mathbb R)$. Since $\mathcal D$ is dense in $L^2$, by continuity we can extend $T$ to the whole of $L^2$. By the Riesz representation theorem, there exists an $L^2$ function $g$ such that $T(f) = (g,f)_{L^2}$. In other words, $T$ is given by $T_{\bar g}$, and it is therefore tempered.
{ "language": "en", "url": "https://math.stackexchange.com/questions/681271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the Center of G the same as the Centralizer of g in G? Is the center, $Z(G)$, of a group $G$ the same as the centralizer, $C(g)$, of an element $g\in G$? I have proven that $C(g)\leq G\forall g\in G$ but my homework, in a later problem, asks me to prove that $Z(G)\leq G$. This confuses me, because I thought $C(g)$ is the same as $Z(G)$.
Consider a group $(G, *)$. Let $g$ be a fixed element of $G$. The set of all the elements in $G$ that commute with $g$ is known as the centralizer of $g$, denoted by $C(g)$. The set of all the elements in $G$ that commute with every element of $G$ is known as the center of $G$, denoted by $Z(G)$. There can be many elements in $G$ that commute with a fixed element $g \in G$, but we can't conclude that those same elements commute with every element of $G$. Some elements of $C(g)$ may belong to $Z(G)$, but not all of them need to. Therefore $C(g)$ is not the same as $Z(G)$, at least most of the time. However, every element of $Z(G)$ is always in $C(g)$ for every $g$, because an element that commutes with every element of $G$ in particular commutes with the fixed element $g$. Therefore the center is always a subset of every centralizer, but not the other way around; the center might sometimes be equal to one of the centralizers. All the centralizers and the center of a group are subgroups, however. The answers mentioned before explain the concept really well, but while I was learning I broke it all up this way and it really helped. Hope this helps you all too. Thank you.
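A concrete illustration in $S_3$, with permutations encoded as tuples (my own encoding, purely for demonstration):

```python
from itertools import permutations

G = list(permutations(range(3)))                  # the 6 elements of S_3
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

def centralizer(g):
    return [h for h in G if compose(g, h) == compose(h, g)]

center = [h for h in G if all(compose(g, h) == compose(h, g) for g in G)]
print(len(centralizer((1, 0, 2))))  # 2: the transposition and the identity
print(center)                       # [(0, 1, 2)]: the center is trivial
```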
{ "language": "en", "url": "https://math.stackexchange.com/questions/681326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
An inequality for sides of a triangle Let $ a, b, c $ be sides of a triangle and $ ab+bc+ca=1 $. Show $$(a+1)(b+1)(c+1)<4 $$ I tried Ravi substitution and got a close bound, but don't know how to make it all the way to $4 $. I am looking for a non-calculus solution (no Lagrange multipliers). Do you know how to do it?
Solving $ab+bc+ca=1$ for $c$ gives $$ c=\frac{1-ab}{a+b}\tag{1} $$ The triangle inequality says that for non-degenerate triangles $$ |a-b|\lt c\lt(a+b)\tag{2} $$ Multiply $(2)$ by $a+b$ to get $$ |a^2-b^2|\lt1-ab\lt(a+b)^2\tag{3} $$ By $(3)$, we have $(a+b)^2-1+ab\gt0$; therefore, $$ \begin{align} (a+b+1)(a+b+ab-1) &=\left[(a+b)^2-1+ab\right]+(a+b)ab\\ &\gt(a+b)ab\\[6pt] &\gt0\tag{4} \end{align} $$ Furthermore, $\color{#C00000}{(3)}$ implies $$ a\ge1\implies1-b^2\le\color{#C00000}{a^2-b^2\lt1-ab}\implies b\gt a\tag{5a} $$ Similarly, $$ b\ge1\implies1-a^2\le\color{#C00000}{b^2-a^2\lt1-ab}\implies a\gt b\tag{5b} $$ Inequalities $(5)$ imply that if either $a\ge1$ or $b\ge1$, then both $a\gt1$ and $b\gt1$. Consequently, we have both $a\gt b$ and $b\gt a$. Therefore, we must have $$ a\lt1\qquad\text{and}\qquad b\lt1\tag{6} $$ Using $(1)$, $(4)$, and $(6)$, we have $$ \begin{align} abc+a+b+c-2 &=(1+ab)\frac{1-ab}{a+b}+(a+b)-2\\ &=\frac{1-a^2b^2+(a+b)^2-2(a+b)}{a+b}\\ &=\frac{(a+b-1)^2-a^2b^2}{a+b}\\ &=\frac{(a+b+ab-1)(a+b-ab-1)}{a+b}\\ &=-\frac{(a+b+ab-1)(a-1)(b-1)}{a+b}\\ &\lt0\tag{7} \end{align} $$ Therefore, since $ab+bc+ca+1=2$, $(7)$ says $$ \begin{align} (a+1)(b+1)(c+1) &=(abc+a+b+c)+(ab+bc+ca+1)\\[4pt] &\lt2+2\\[4pt] &=4\tag{8} \end{align} $$
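A random sanity check of the statement (of course not a proof; it uses the fact, shown above, that $a,b<1$):

```python
import random

for _ in range(100_000):
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    c = (1 - a*b) / (a + b)          # forces ab + bc + ca = 1
    if abs(a - b) < c < a + b:       # keep only genuine triangles
        assert (a + 1)*(b + 1)*(c + 1) < 4
print("no counterexample found")
```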
{ "language": "en", "url": "https://math.stackexchange.com/questions/681433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Invertible Matrix to Higher power I'm working on showing that if $A$ is invertible, then for any positive integer $n$, $(AMA^{-1})^n=AM^nA^{-1}$. My first idea is induction on $n$, but is there a property of $A$ that explains why its power remains $1$ or $-1$? Thanks in advance.
Hint: $AA^{-1}=A^{-1}A=I$ where $I$ is the identity matrix. Induction would be a good idea.
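A quick NumPy spot check of the identity for one random instance (illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # almost surely invertible
M = rng.standard_normal((4, 4))
Ainv = np.linalg.inv(A)

lhs = np.linalg.matrix_power(A @ M @ Ainv, 5)
rhs = A @ np.linalg.matrix_power(M, 5) @ Ainv
print(np.allclose(lhs, rhs))  # True
```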
{ "language": "en", "url": "https://math.stackexchange.com/questions/681499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What does $\prod_{n\geq2}\frac{n^4-1}{n^4+1}$ converge to? What does $\prod_{n\geq2}\frac{n^4-1}{n^4+1}$ converge to? As far as I can tell, this has no closed-form solution (not saying much, I don't know much math), but a friend of mine swears he saw a closed-form solution to this in some text he doesn't remember. Running it through WolframAlpha gives me an approximation which, entered through an inverse symbolic calculator, gets no results. This is satisfactory enough for me to believe there is no closed-form solution, but my friend really does insist there is one. Edit Okay, not meaning to sound greedy, but I'd also be interested in seeing how I'd derive a closed-form solution algebraically. I do appreciate the fast answers, though.
Since your post was interesting and the answers really nice, just for personal curiosity, I looked at the more general function $$\prod_{n=2}^\infty\frac{n^q-1}{n^q+1} $$ where $q$ in an integer. I report below some results I found interesting $$\prod_{n=2}^\infty\frac{n^2-1}{n^2+1}=\pi \text{csch}(\pi )$$ $$\prod_{n=2}^\infty\frac{n^3-1}{n^3+1}=\frac{2}{3}$$ $$\prod_{n=2}^\infty\frac{n^4-1}{n^4+1}=\frac{\pi \sinh (\pi )}{\cosh \left(\sqrt{2} \pi \right)-\cos \left(\sqrt{2} \pi \right)}$$ $$\prod_{n=2}^\infty\frac{n^6-1}{n^6+1}=\frac{\pi \left(1+\cosh \left(\sqrt{3} \pi \right)\right) \text{csch}(\pi )}{3 \left(\cosh (\pi )-\cos \left(\sqrt{3} \pi \right)\right)}$$
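The closed forms above can be confirmed numerically (the partial products converge roughly like $1/N$, so expect agreement to a few digits):

```python
from math import pi, cos, sinh, cosh, sqrt

def partial_product(q, N=100_000):
    p = 1.0
    for n in range(2, N):
        p *= (n**q - 1) / (n**q + 1)
    return p

print(partial_product(2), pi / sinh(pi))
print(partial_product(3), 2 / 3)
print(partial_product(4), pi * sinh(pi) / (cosh(sqrt(2)*pi) - cos(sqrt(2)*pi)))
```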
{ "language": "en", "url": "https://math.stackexchange.com/questions/681619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }