Prove $\frac{z^{2010} - \bar z^{2010}}{1+z\bar z}$ is an imaginary number Prove that $\frac{z^{2010} - \bar z^{2010}}{1+z\bar z}$ is an imaginary number. I understand that if $z = a+bi$ then $z - \bar z = 2bi$, and the denominator $1+z\bar z$ equals $1+|z|^2$ and is therefore a real number, so I need to prove the numerator is imaginary. I first tried $z^2 - \bar z^2$, which is imaginary: $(a+bi)^2-(a-bi)^2 = 4abi$. Also $z^{2010} = (z^{1005})^2$, so if $z^{1005}$ were imaginary this would work... but I have no clue what $z^{1005}$ is, and maybe it's a real number?
$z\bar z=|z|^2$, so the denominator is real. For the numerator, the binomial theorem gives $$(a+ib)^n-(a-ib)^n=2i\sum_{r=0}^{2r+1\le n}\binom n{2r+1}a^{n-(2r+1)}b^{2r+1}(-1)^r,$$ which is purely imaginary: the even powers of $ib$ are real and cancel in the difference, while the odd powers double.
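A quick numerical sanity check of this identity (my own sketch, not part of the original answer): for any complex $z$ and any $n$, the real parts of $z^n$ and $\bar z^n$ agree, so the difference is purely imaginary.

```python
# Spot-check: z**n - conj(z)**n has (numerically) zero real part.
residuals = []
for z in [complex(0.6, -0.3), complex(-0.4, 0.9), complex(1.1, 0.5)]:
    for n in [2, 5, 50]:
        w = z**n - z.conjugate()**n
        residuals.append(abs(w.real))
max_real_part = max(residuals)
print(max_real_part)  # 0 up to floating point
```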
{ "language": "en", "url": "https://math.stackexchange.com/questions/2900911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Find when $2^{3^{4^{...^{n}}}} \equiv 1$ (mod $n+1$) is true This is linked to my previous question; I haven't been given any clue for how to verify this modular equation: $2^{3^{4^{...^{n}}}} \equiv 1$ (mod $n+1$) How can I find the condition on $n$?
$2^d \equiv 1 \mod (n+1)$ iff $n$ is even and $d$ is divisible by the multiplicative order of $2$ mod $(n+1)$. So for the condition to be true, you need the multiplicative order of $2$ mod $(n+1)$ to be a power of $3$. The first few $n$ for which this is the case are $6, 72, 486, 510, 2592, 3408, 18150, 35550, 39366, 71118, 80190, 97686$.
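The listed values can be reproduced with a short search (my own sketch; `mult_order` and `is_power_of_3` are helpers I introduce for the demo, not from the answer):

```python
def mult_order(a, m):
    # multiplicative order of a modulo m; assumes gcd(a, m) == 1
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

def is_power_of_3(k):
    while k % 3 == 0:
        k //= 3
    return k == 1

# n must be even so that n + 1 is odd and 2 is invertible mod n + 1
found = [n for n in range(2, 600, 2) if is_power_of_3(mult_order(2, n + 1))]
print(found)
```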
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Mahlo operation, consistency border Can a (relatively consistent) cardinal notion be given so that its usual Mahlo operation is (probably at least) not consistent?
Sure - "is a successor ordinal." For any (fine, uncountable) infinite cardinal $\kappa$, the set of limit cardinals below $\kappa$ is a club which avoids the set of successor ordinals. I suspect, though, that this isn't really what you want. So in the opposite direction, let me observe: There is no known, currently-considered-consistent large cardinal property whose Mahlo version is known to be (or even suspected to be) inconsistent. This is an awkward thing to claim since it's inherently hard (if not impossible) to justify, but it is to the best of my knowledge true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of P-adic series How can I prove that $\sum_{i = 0}^\infty 2^i$ doesn't converge with respect to $|\cdot|_3$ without resorting to the fact that $\sum a_i$ converges wrt $|\cdot|_p$ if and only if $|a_i|_p \to 0$? Is it even possible?
The only thing that $\sum_0^\infty 2^n$ can converge to is $-1$. But this series does not converge $3$-adically to $-1$. If it did then $$\left|-1-\sum_{n=0}^N 2^n\right|_3<1$$ for all large enough $N$. This means that $$\sum_{n=0}^N 2^n\equiv-1\pmod 3\tag{*}$$ for all large enough $N$. But $3\nmid 2^n$ for all $n$, so if (*) holds for $N$, it fails for $N+1$. Contradiction.
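The contradiction is visible numerically (my own sketch): the partial sums $\sum_{n=0}^N 2^n = 2^{N+1}-1$ alternate between $1$ and $0$ mod $3$, so they are never eventually $\equiv -1 \equiv 2 \pmod 3$.

```python
# partial sums of 1 + 2 + 4 + ... reduced mod 3
partial_mod3 = [(2 ** (N + 1) - 1) % 3 for N in range(12)]
print(partial_mod3)  # alternates 1, 0, 1, 0, ... and never hits 2 (= -1 mod 3)
```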
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Example of a metric on $\mathbb{R}^2$ that is not induced by any norm. From looking at the properties of both what makes a function a metric and what makes a function a norm, I gather that I have to create a metric that does not satisfy the scalar multiplication property of a norm (i.e. $||ax|| \ne |a|\,||x||$). So, I went with $d(x,y) = \sqrt{|x-y|}$, since then $\sqrt{|a||x-y|} \ne |a|\sqrt{|x-y|}$. Is this the correct way to go about it? If not, I'd prefer a hint as to how I should think about creating such a metric.
Yes, this is correct. All metrics induced by a norm have to be homogeneous (scalar property) and hence counterexamples only work if they are not homogeneous themselves. Your example works just fine, you just need to verify the triangle inequality which works out nicely. Also, the discrete metric is a standard example for a metric which is not induced by a norm. I recommend also the answers to this question: Not every metric is induced from a norm
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $\lambda\in \sigma(A).$ Then, $e^{\lambda t}\in \sigma(e^{tA})$ where $\sigma(A)$ is set of all eigenvalues of matrix $A$ Let $\lambda\in \sigma(A).$ Then, $e^{\lambda t}\in \sigma(e^{tA})$ where $\sigma(A)$ is set of all eigenvalues of matrix $A$. MY TRIAL Let $\lambda\in \sigma(A).$ Then, $\exists:t\neq 0$ s.t. \begin{align}Ax=\lambda x \end{align} \begin{align}\sum^{\infty}_{n=0}\frac{A^n}{n!}=\sum^{\infty}_{n=0} \frac{ \lambda^n}{n!} \end{align} \begin{align}e^{A}=e^{\lambda } \end{align} So, \begin{align}e^{At}=\sum^{\infty}_{n=0}\frac{A^n t^n}{n!}=\sum^{\infty}_{n=0}\frac{\lambda^n t^n}{n!}=e^{\lambda t}\end{align} and we're done! I'm skeptical about my proof. Can someone please, check for me? If the proof is wrong, alternative proofs will be highly regarded. Thanks
Your idea is right but your proof is muddled (due to mixing up the scalar $t$ with the eigenvector of $A$ corresponding to $\lambda$). Suppose $v$ is an eigenvector with $Av = \lambda v$. Then $$e^{At}v = \sum_{i=0}^{\infty} \frac{(At)^i}{i!}v = \sum_{i=0}^{\infty} \frac{t^i}{i!} (A^i v) = \sum_{i=0}^{\infty} \frac{(\lambda t)^i}{i!} v = e^{\lambda t}v.$$ Therefore $v$ is an eigenvector of $e^{At}$ with eigenvalue $e^{\lambda t}$. The one additional detail that you may want to check (depending on the level of rigor expected) is that the sums in the above calculation are well-defined (they are).
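A numerical illustration of this calculation (my own sketch; the example matrix and the truncated-series exponential are assumptions for the demo, not part of the answer):

```python
import numpy as np

def expm_taylor(M, terms=40):
    # matrix exponential via truncated power series; fine for small, well-scaled M
    result, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for i in range(1, terms):
        term = term @ M / i
        result = result + term
    return result

A = np.array([[2.0, 1.0], [0.0, 3.0]])  # eigenvalues 2 and 3
v = np.array([1.0, 1.0])                # A @ v == 3 * v
t = 0.5
err = np.linalg.norm(expm_taylor(A * t) @ v - np.exp(3.0 * t) * v)
print(err)  # ~0: v is an eigenvector of e^{At} with eigenvalue e^{3t}
```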
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conflicting sign of line integrals I am studying vector analysis (Mathematical Methods by Hassani). There is a passage in the book that talks about how parameterisation ensures that a line integral will have the correct sign. What are the conditions that make a non-parameterised line integral have a different sign than a parameterised one? I tried it for other examples but I couldn't produce the conflict. Regarding the example given, it's weird to me (although not wrong?) that $d\vec{r} = -\hat{e}_x dx$. My instinct is to have a "positive" $d\vec{r}$ and let the sign resolve itself depending on the direction of integration. [The excerpt, problem statement, figure, and note referenced here were images and are not reproduced.]
It seems the author's aim is to make the reader aware of the risk of proceeding by coordinates. In that case, writing $\vec A=(A_x,A_y,A_z)$ with $A_x>0$, along path $(iv)$ we clearly have $$\vec A \cdot d\vec r=-A_xdx,$$ and this leads to the wrong result $$\int_{(a,a)}^{(0,0)} \vec A \cdot d\vec r=\int_{a}^{0} -A_xdx=\int_{0}^{a} A_xdx.$$ The mistake here is in the limits of the integral. Indeed, as the parametrization shows, for $x\in [0,a]$ we have $$r_x=a-x \implies dr_x=-dx$$ and

* $x=a \iff r_x=0$
* $x=0 \iff r_x=a$

therefore the correct step should be $$\int_{(a,a)}^{(0,0)} \vec A \cdot d\vec r=\int_{a}^{0} A_x \, dr_x=\int_{0}^{a} -A_xdx=-\int_{0}^{a} A_xdx$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Domain for index of Radical Sign What is the domain of $\sqrt[x]{a}$, and is $\sqrt[x]{a}=a^{1/x}$ always true? I was told that the domain of $\sqrt[x]{a}$ is the natural numbers while the domain of $a^{1/x}$ is the real numbers, so they are not identical. Is that true?
Were you also told not to split infinitives and that sentences shouldn't end with prepositions like "by" or "with"? By convention we agree that $\root x \of a$ refers to the principal $x$th root of $a$. If $x$ is a positive integer, then $a$ has precisely $x$ $x$th roots, and each of the other roots can be obtained by multiplying the principal root by the appropriate $x$th root of 1. For example, $\root 4 \of 4$ is the principal root of $x^4 - 4 = 0$, which of course simplifies to $\sqrt 2$, roughly 1.414213562373. The other roots are $-\sqrt 2$, $i \sqrt 2$ and $-i \sqrt 2$, which consist of $\sqrt 2$ multiplied by the nontrivial quartic roots of 1 in turn: $-1$, $i$, $-i$. And just to make sure I get some flak for this answer, I'm going to say these four roots can just as validly be expressed as $\sqrt 2$, $-\sqrt 2$, $\sqrt{-2}$ and $-\sqrt{-2}$. In the case of $\sqrt{-2}$: the number $-2$ has two square roots, and we can probably come to the agreement that $\sqrt{-2}$ represents the principal square root of 2 (which is $\sqrt 2$) multiplied by $i$. Some people will be pedantic and tell you that $\sqrt{-2}$ is undefined and you really should write $i \sqrt 2$. But if everyone understands that's what you mean, then what is the problem? If we agree that the radical symbol stands for a principal root, we should also agree that $a^{\frac{1}{x}}$ also stands for a principal root. Lastly, I'd like to mention that, depending on your TeX installation, you may or may not be able to use \root x \of a instead of \sqrt[x]{a}. I personally prefer the former to the latter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find all skew-symmetric matrices given their anti-commutator with a symmetric matrix Let $S$ be a skew-symmetric matrix and $J$ a symmetric matrix. Is it possible to find all skew-symmetric matrices $\Omega$ satisfying $$S = J\Omega + \Omega J$$ in terms of $S$ and $J$?
Vectorizing the equation yields $$\eqalign{ {\rm vec}(S) &= {\rm vec}(J\Omega I) + {\rm vec}(I\Omega J) \cr s &= (I\otimes J + J\otimes I)\,\omega \,\,{\dot =}\,\, M\omega \cr \omega &= M^+s + (I-M^+M)\,a \cr \Omega &= {\rm Mat}\big(M^+s + (I-M^+M)\,a\big) \cr }$$ where ${\rm Mat}()$ is the inverse of the ${\rm vec}()$ operation, $\otimes$ represents the Kronecker product, $M^+$ is the Moore-Penrose inverse of $M$, and $a$ is an arbitrary vector. If $M$ is full rank then $(I-M^+M)=0$ and there is only one solution, otherwise there are an infinite number, i.e. a solution for each $a$ vector.
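A numerical sketch of this recipe (my own demo: random symmetric $J$, right-hand side $S$ built from a known skew $\Omega$ so the system is consistent; with NumPy's row-major `reshape` the operator is still $I\otimes J + J\otimes I$ because $J$ is symmetric):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = rng.standard_normal((n, n))
J = J + J.T                               # symmetric
W = rng.standard_normal((n, n))
W = W - W.T                               # a known skew-symmetric Omega
S = J @ W + W @ J                         # consistent right-hand side

M = np.kron(np.eye(n), J) + np.kron(J, np.eye(n))
omega = np.linalg.pinv(M) @ S.reshape(-1) # particular solution M^+ s
Omega = omega.reshape(n, n)
residual = np.linalg.norm(J @ Omega + Omega @ J - S)
print(residual)  # ~0
```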
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Tough Divisibility Problem When the five digit number $2A13B$ is divided by $19$, the remainder is $12$. Determine the remainder of $3A21B$ when divided by $19$. $$2A13B \equiv 12 \pmod{19}$$ $$20000 + 1000A + 100 + 30 + B \equiv 12 \pmod{19}$$ $$ 12 + 12A + 5 + 11 + B \equiv 12 \pmod{19}$$ $$ 28 + 12A + B \equiv 12 \pmod{19}$$ $$ 12A + B \equiv 3 \pmod{19}$$ This is where I'm stuck.
\begin{align}30\,000+1\,000A+200+10+B\equiv x\pmod{19}&\iff-1+12A+10+10+B\equiv x\pmod{19}\\&\iff12A+B\equiv x\pmod{19}.\end{align}The given condition $20\,000+1\,000A+100+30+B\equiv12\pmod{19}$ reduces (using $20\,000\equiv12$, $1\,000\equiv12$, $100\equiv5$, $30\equiv11$) to $12A+B\equiv3\pmod{19}$. Therefore take $x=3$.
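The modular arithmetic can be verified by brute force over all digit pairs satisfying the hypothesis (my own check, not from the original answer):

```python
remainders = set()
for A in range(10):
    for B in range(10):
        n1 = 20000 + 1000 * A + 100 + 30 + B      # the number 2A13B
        if n1 % 19 == 12:
            n2 = 30000 + 1000 * A + 200 + 10 + B  # the number 3A21B
            remainders.add(n2 % 19)
print(remainders)  # {3}
```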
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Proving homeomorphism on closed ball Let $B$ be the closed unit ball in $\mathbb{R^n}$, and $a\in B,|a|<1$. Prove that $f:B\rightarrow B$, $$f(x) = (1-|x|)a+x$$ Is an homeomorphism. To me it's clear $f$ is continuous and I managed to prove it's injective. I'm only having trouble proving it's surjective, or finding an inverse. Any tips are appreciated!
This answer uses some light algebraic topology (really, just the definitions of homotopy and contractibility) to show that $f$ is surjective. From there, showing $f$ is a homeomorphism is straightforward. Note first that $f$ fixes the unit sphere (the boundary of the closed ball, i.e. its difference with the open ball): if $|x|=1$ then $f(x)=x$. Note also that the closed unit ball is contractible. Assume for contradiction that $f$ misses a point $x$. Since $f$ fixes the unit sphere, $\mathrm{im}(f)$ is not contractible (think about where $x$ is moved to under a homotopy). But $f$ is a continuous injection from a compact space to a Hausdorff space, so $\mathrm{im}(f)$ is homeomorphic to a contractible space, the closed unit ball, and therefore must itself be contractible, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2901905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Show that the number of elements of a finite set is well-defined. Given the following definition: A set $A$ is finite if it is empty or there are $n\in \mathbb{N}$ and a 1-1 onto function $f:\{1,...,n\}\to A$. In the first case we say that $A$ has $0$ elements, while in the second we say $A$ has $n$ elements. We say that $A$ is infinite if it is not finite. To show that the number of elements of a finite set is well defined, we have to prove the following: $(1)$ For all $n,k\in \mathbb{N}$ there is a 1-1 onto function $f:\{1,...,n\} \to \{1,...,k\}$ if and only if $n=k$. I intuitively understand why it is true. But I am having a hard time proving it rigorously. I have also been told that it is best proved by induction. Can you please help provide a proof for that statement?
Sketch: Hopefully you know that if $n\ne k$, then one of $\{1,2,\ldots,n\}$ and $\{1,2,\ldots, k\}$ is a proper subset of the other. Now prove by induction on $n$ that a function from $\{1,2,\ldots,n\}$ to a proper subset of itself cannot be injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does there exist a function which equals $0$ for odd inputs and $1$ for even inputs? Suppose $f(n)$ is a function that equals $0$ for odd inputs $n$ and $1$ for even inputs. Note that $n$ can only be an integer. Is there a way of explicitly defining $f(n)$ so that it satisfies the above conditions, without having to use a piecewise function?
Another option is $$f(n) = \cos^2 \left(\frac{n\pi}{2}\right)$$
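A quick check of this formula (the rounding guards against floating-point noise at the zeros):

```python
import math

f = lambda n: math.cos(n * math.pi / 2) ** 2
vals = [round(f(n), 12) for n in range(-4, 8)]
print(vals)  # 1 for even n, 0 for odd n
```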
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
Find a sequence converging to $\{1/k\}^{\infty}_{k=1}$ in $l^2$ Let $s=\{1/k\}^{\infty}_{k=1}$. Find a sequence $\{s_n\}^{\infty}_{n=1}$ of points in $l^2$ such that each $s_n$ is distinct from $s$ and such that $\{s_n\}^{\infty}_{n=1}$ converges to $s$ in $l^2$. This is a problem from Goldberg, Exercise 4.3, 3. I don't understand the question. How can I find a sequence that converges to another sequence? Is $s$ the limit of $\{1/k\}$, or is it the variable assigned to the sequence itself? If it is the limit, then it can be worked out.
Hint. Recall that $l^2$ is the normed space of all square summable sequences $s$, such that $$\|s\|^2_2=\sum_{k=1}^{\infty}s(k)^2<+\infty.$$ Let $s_n(k):=\frac{1}{k}+\frac{1}{nk}$ (also $s_n(k):=\frac{a_n}{k}$ with $a_n\to 1$ will work), then $\{s_n(k)\}_{k\geq1}\in l^2$ for each $n\geq 1$. Now consider the limit of $$\lim_{n\to \infty}\|s_n-s\|^2_2=\lim_{n\to \infty}\sum_{k=1}^{\infty}\left(s_n(k)-\frac{1}{k}\right)^2$$ and show that it is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
if $A^2 = -I$ then $A$ has no real eigenvalues Given that $A$ is a $2 \times 2$ matrix over $\mathbf R$ such that $A^2 = -I$, I need to prove that $A$ has no real eigenvalues. $A^2 = -I \rightarrow A^2 +I = 0$. I guess it is something about $x^2 +1 = 0$ having no real solutions... but can someone show me the connection to matrices (if it is about that)? I mean, how can I conclude anything from this about the characteristic polynomial of $A$?
Since $A^2+I = 0$, the polynomial $x^2+1$ annihilates the matrix $A$. Therefore the minimal polynomial $m_A$ of $A$ divides $x^2 + 1$ so $$\sigma(A) \subseteq \{\text{zeroes of } m_A\} \subseteq \{\text{zeroes of } x^2 + 1\} = \{i, -i\}$$ Hence $\sigma(A) \cap \mathbb{R} = \emptyset$.
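A concrete instance (my own example, not from the answer): the rotation-by-$90°$ matrix satisfies $A^2=-I$, and its eigenvalues are exactly $\pm i$.

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees
ok = bool(np.allclose(A @ A, -np.eye(2)))  # A^2 == -I
eig = np.linalg.eigvals(A)
max_real = float(max(abs(eig.real)))
print(eig, ok)  # eigenvalues ±i, none of them real
```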
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Logic behind bitwise operators in C I came across bitwise operations in C programming, and I realized that the XOR operator can be used to swap 2 numbers in their binary representations. For example let $$i=(65)_{10}=(1000001)_{2}, \text{ and } j=(120)_{10}=(1111000)_{2}.$$ Let $\oplus$ be the XOR operator, and observe what happens if I start with one of them, say $i$, and follow this procedure: 1) replace its value with the $\oplus$ value, yielding $$i=(0111001)_{2},j=(1111000)_{2}$$ 2) replace the other variable ($j$) with another $\oplus$ value derived from the new $i$ and the old $j$, yielding $$i=(0111001)_{2},j=(1000001)_{2}$$ 3) replace the original variable $i$ with the $\oplus$ value again, yielding $$i=(1111000)_{2},j=(1000001)_{2}$$ which shows that we would somehow have their values swapped. I found this way of programming online and I can't understand how people come up with the logic behind it. I would think it's linked to the truth table as follows, which shows by case analysis that the values can be swapped. However, I am still uncertain about the full reasoning why this works, like whether there are any mathematical theorems that I should know that can aid my understanding. PS: Sorry if the question is off-topic here; it feels like a programming question, but I am more concerned about the "logic" rather than the programming. I also drew the table myself in MS Word since I can't get the LaTeX one to work somehow.
Note that you can do the same thing without bitwise operators (at least for unsigned integer types since they can't overflow into undefined behavior): // i == x j == y i += j; // i == x+y j == y j -= i; // i == x+y j == -x i += j; // i == y j == -x j = -j; // i == y j == x Now if we do this bit for bit, but modulo 2 instead of modulo UINT_MAX+1, the XOR operation implements both addition and subtraction, and the final negation is a no-op because $-1\equiv 1$ and $-0\equiv 0 \pmod 2$. So what is left in the bitwise version is exactly i ^= j; j ^= i; i ^= j;
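The same three steps from the question, traced in Python (whose `^` on nonnegative integers behaves like C's bitwise XOR), reproduce the swap:

```python
i, j = 65, 120   # 0b1000001, 0b1111000
i ^= j           # i == x ^ y
j ^= i           # j == y ^ (x ^ y) == x
i ^= j           # i == (x ^ y) ^ x == y
print(i, j)      # 120 65
```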
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Evaluate: $ \int \frac{\sin x}{\sin x - \cos x} dx $ Consider $$ \int \frac{\sin x}{\sin x - \cos x} dx $$ Well I tried taking integrand as $ \frac{\sin x - \cos x + \cos x}{\sin x - \cos x} $ so that it becomes, $$ 1 + \frac{\cos x}{\sin x - \cos x} $$ But does not helps. I want different techniques usable here.
Let $$I_{1} = \int \frac{\sin x}{\sin x - \cos x}dx, \quad I_{2} = \int \frac{\cos x}{\sin x - \cos x}dx$$ Then $$I_{1}-I_{2} = \int 1\, dx = x + c_{1}$$ $$I_{1}+I_{2} = \int \frac{\sin x + \cos x}{\sin x - \cos x} dx = \ln|\sin x - \cos x| + c_{2} $$ Then solve simultaneously $$I_{1} = \frac{1}{2}\left(x+ \ln|\sin x - \cos x| \right) + c$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Finding $P(A \cap B)$, given $P(A\mid B)$ and $P(B\mid A)$ Given events $A$ and $B$, if $P(A\mid B)$ and $P(B\mid A)$ are known, can $P(A \cap B)$ be found? I tried the following approach and came to an answer, but doubt its veracity. Here's my attempt: I tried thinking of $P(A\mid B)$ as $P(B \implies A)$. This is logically equivalent to $(1-P(A))+P(B)-(1-P(A))P(B)$, right? This comes from $w \implies z = \neg w \lor z$ for Boolean variables. It also treats $A$ and $B$ as independent events, and uses the common formula for the probability of event one or event two happening, $P(A\cup B) = P(A)+P(B)-P(A)P(B)$ (valid under independence). In addition, it uses $P(\neg A)=1-P(A)$. Taking that approach, if you assign $P(A\mid B)=a$ and $P(B\mid A)=b$, this system of equations results: $$ 1-P(A)+P(B)-(1-P(A))P(B) = a $$ $$ 1-P(B)+P(A)-(1-P(B))P(A) = b $$ Well, this can be solved for $P(A)$ and $P(B)$. Given that solving this system with substitution involves solving a quadratic, the result is $$ P(A\cap B) = \frac{3}{2} a + \frac{3}{2} b - 2 + \frac{1}{2}(a-b)^2 $$ OR $$ P(A\cap B) = -\frac{1}{2} a - \frac{1}{2} b + 1 - \frac{1}{2}(a-b)^2 $$ Which looks dubious, since the answer here seems like it should be unique. Moreover, one but not both of these formulas gives negative probability for certain combinations of $a$ and $b$. Well, I believe applying Bayes' theorem is the right way to solve this problem, but it requires knowing either $P(A)$ or $P(B)$ beforehand. Is there a way to obtain $P(A\cap B)$ without the individual probabilities? In addition, what's wrong with the approach above? I assume it has to do with assuming independence between $A$ and $B$.
Your assumption that $$p(A\mid B)=p(A \implies B)$$ is not valid. Think of $A$ as $x>3$ and $B$ as $x>5$
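Making the hint concrete (the numbers below are my own illustration): take $x$ uniform on $[0,10)$, with $A$: $x>3$ and $B$: $x>5$. Then $P(A\mid B)=1$ because $B\subset A$, while the implication-style formula from the question gives a different number.

```python
pA, pB = 0.7, 0.5                          # P(x > 3), P(x > 5), x uniform on [0, 10)
p_A_given_B = 1.0                          # B is contained in A, so P(A | B) = 1
formula = (1 - pA) + pB - (1 - pA) * pB    # the question's implication expression
print(p_A_given_B, formula)                # 1.0 vs 0.65: they disagree
```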
{ "language": "en", "url": "https://math.stackexchange.com/questions/2902945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding Jordan canonical form of a matrix given the minimal polynomial I am trying to find the Jordan canonical form of a matrix $A$ given its minimal polynomial. Suppose $A$ is a complex $5\times 5$ matrix with minimal polynomial $X^5-X^3$. The end goal of the problem is to find the characteristic polynomial of $A^2$ and the minimal polynomial of $A^2$. I know that since the minimal polynomial of a matrix divides the characteristic polynomial of the matrix (and both are monic of degree $5$ here), $A$ has the same minimal and characteristic polynomial, namely $X^5-X^3$. Now I am trying to find the JCF (Jordan canonical form) of $A$ to make it easier to compute $A^2$, since $A$ is conjugate to its JCF. So, since the characteristic polynomial of $A$ splits into $X^3(X+1)(X-1)$, I know that the Jordan canonical form will have two blocks of size 1 corresponding to $1$ and $-1$, and blocks of total size 3 for the eigenvalue $0$. Now, my problem is that I can't figure out the form of this third part. How do I know whether it has the form $$\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$ or the form $$ \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$$ or the form $$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$ Thanks for all your help!
We have that the minimal polynomial is $X^3(X^2-1)$. Over $\Bbb C$, the exponent of the irreducible factor $(x-a)$ in the minimal polynomial gives the size of the largest Jordan block for the eigenvalue $a$. Thus we have a $3\times 3$ Jordan block corresponding to the eigenvalue $0$. The only possibility is the middle one: $$A=\left( \begin{array}{ccccc} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right).$$ To see this, note that the other two cases you gave have smaller minimal polynomials. For example, if $$A=\left( \begin{array}{ccccc} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right),$$ then $A$ is annihilated by $X^2(X^2-1)$. If it's not clear why, notice that an $n\times n$ matrix with ones on the superdiagonal and zeros elsewhere is nilpotent with minimal polynomial $X^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Determining the general solution for the trigonometric equation $ 5\cos(x)-12\sin (x) = 13 $ Given that $$5\cos(x)-12\sin (x) = 13 $$ I'm trying to find the general solution of this equation. It reminds me of the $5$-$12$-$13$ triangle. Since I don't know $x$, I couldn't proceed further. Specifically, I tried taking the derivative of the left-hand side and setting it to zero, which yields $$\dfrac{d}{dx}\big(5\cos(x)-12\sin (x)\big) = -5\sin(x)-12\cos(x) = 0$$
This has a general method: divide the whole equation by $\;\sqrt{5^2+12^2}=13\;$ , so the equation becomes $$\frac5{13}\cos x-\frac{12}{13}\sin x=1$$ Since $\;\left(\frac5{13}\right)^2+\left(\frac{12}{13}\right)^2=1\;$ , there exists $\;\alpha\in\Bbb R\;$ (in fact, we can choose this value in an infinite number of ways...) such that $\;\cos\alpha=\frac{12}{13}\;,\;\;\sin\alpha=\frac5{13}\;$ , so the equation becomes $$\sin\alpha\cos x-\sin x\cos\alpha=1\stackrel{\text{trig. identity}}\iff\sin(\alpha-x)=1\ldots$$ Try now to take it from here. And BTW: some high schools specifically forbid the use of calculus when solving trigonometric equations!
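Carrying the hint to the end (one convenient choice of $\alpha$, my own check): $\sin(\alpha-x)=1$ gives $x=\alpha-\frac{\pi}{2}-2k\pi$, which can be verified numerically for $k=0$.

```python
import math

alpha = math.atan2(5, 12)   # sin(alpha) = 5/13, cos(alpha) = 12/13
x = alpha - math.pi / 2     # from sin(alpha - x) = 1, with k = 0
lhs = 5 * math.cos(x) - 12 * math.sin(x)
print(lhs)  # 13
```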
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Question about the definition of the least upper bound property Definition: Let $A$ be a set with an order relation. We say that $A$ has the least upper bound property if every $A_0\subset A$, $A_0\neq \varnothing$, which has an upper bound has a least upper bound. Question 1: When we say "has an upper bound...", do we mean that the upper bound is in $A$? Question 2: When we say "has a least upper bound...", do we mean that the least upper bound is in $A$? Example: Consider the set $A=(-1,1)$ of real numbers in the usual order. Assuming the fact that the real numbers have the least upper bound property, it follows that the set $A$ has the least upper bound property (why?). For given any subset of $A$ having an upper bound in $A$, it follows that its least upper bound must be in $A$. For example, the subset $\{-1/2n: n\in \mathbb{N}\}$ of $A$, though it has no largest element, does have a least upper bound in $A$, the number $0$. On the other hand, the set $B=(-1,0)\cup (0,1)$ does not have the least upper bound property. The subset $\{-1/2n: n\in \mathbb{N}\}$ of $B$ is bounded above by any element of $(0,1)$, but it has no least upper bound in $B$. I have read this example very carefully, and I guess it provides examples of subsets of the reals which do and do not have the LUB property, respectively. Did I interpret the meaning of the above example correctly?
Your point is that if a set $A$ has the least upper bound property, it does not imply that every subset of A also has the least upper bound property. Yes, you are quite right. A good example is the set of rational numbers which does not have the least upper bound property while it is a subset of real numbers which has the least upper bound property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I find the limit of the following sequence $\sin ^2 (\pi \sqrt{n^2 + n})$? How can I find the limit of the following sequence: $$\sin ^2 (\pi \sqrt{n^2 + n})$$ I feel that I will use the identity $$\sin ^2 (\pi \sqrt{n^2 + n}) = \frac{1}{2}(1- \cos(2 \pi \sqrt{n^2 + n})), $$ But then what? how can I deal with the limit of $\cos (2 \pi \sqrt{n^2 + n})$? I know that $\cos (n\pi) = (-1)^n$, if $n$ is a positive integer but then what?
You can check $\sin^2(\pi\sqrt{n^2 + n})=\sin^2(\pi\sqrt{n^2 + n}-\pi n)$. So $$\sin^2(\pi\sqrt{n^2 + n})=\sin^2 \pi\frac{n}{\sqrt{n^2 + n}+n}\to \sin^2\frac{\pi}{2}=1.$$
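Numerically (my own check), the sequence does approach $1$, in line with the limit $\sin^2\frac{\pi}{2}=1$:

```python
import math

vals = [math.sin(math.pi * math.sqrt(n * n + n)) ** 2 for n in (10, 100, 10000)]
gap = 1 - vals[-1]
print(vals)  # increases toward 1
```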
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Prove that $\int_0^1\,\frac{\ln(x)}{\sqrt{1-x^2}}\,\text{d}x=-\frac{\pi}{2}\,\ln(2)$. I have discovered via contour integration that $$\int_0^\infty\,\frac{\exp(t\,u)}{\exp(u)+1}\,\text{d}u={\text{csc}(\pi\,t)}\,\left(\frac{\pi}{2}-\int_0^{\frac{\pi}{2}}\,\frac{\sin\big((1-2t)\,y\big)}{\sin(y)}\,\text{d}y\right)\tag{*}$$ for all $t\in\mathbb{C}\setminus\mathbb{Z}$ such that $\text{Re}(t)<1$. By taking $t\to 0$, I deduce that $$\int_0^\infty\,\frac{1}{\exp(u)+1}\,\text{d}u=\frac{2}{\pi}\,\int_0^{\frac{\pi}{2}}\,y\,\cot(y)\,\text{d}y\,.$$ With a step of integration by parts, I obtain $$\int_0^\infty\,\frac{1}{\exp(u)+1}\,\text{d}u=-\frac{2}{\pi}\,\int_0^{\frac{\pi}{2}}\,\ln\big(\sin(y)\big)\,\text{d}y\,.$$ Setting $x:=\sin(y)$, I get $$\int_0^\infty\,\frac{1}{\exp(u)+1}\,\text{d}u=-\frac{2}{\pi}\,\int_0^1\,\frac{\ln(x)}{\sqrt{1-x^2}}\,\text{d}x\,.$$ This shows that $$\int_0^1\,\frac{\ln(x)}{\sqrt{1-x^2}}\,\text{d}x=-\frac{\pi}{2}\,\int_0^\infty\,\frac{1}{\exp(u)+1}\,\text{d}u\,.$$ The integral $\displaystyle\int_0^\infty\,\frac{1}{\exp(u)+1}\,\text{d}u$ can be easily obtained since $$\int\,\frac{1}{\exp(u)+1}\,\text{d}u=u-\ln\big(\exp(u)+1\big)+\text{constant}\,.$$ That is, I have $$\int_0^1\,\frac{\ln(x)}{\sqrt{1-x^2}}\,\text{d}x=-\frac{\pi}{2}\,\ln(2)\,.\tag{#}$$ However, this proof is a very roundabout way to verify the equality above. Is there a more direct way to prove that (#) is true? Any technique is appreciated. A nice consequence of (*) is that $$\int_0^\infty\,\frac{\sinh(t\,u)}{\exp(u)+1}\,\text{d}u=\frac{\pi}{2}\,\text{csc}(\pi\,t)-\frac{1}{2\,t}$$ for all $t\in\mathbb{C}\setminus\{0\}$ such that $\big|\text{Re}(t)\big|<1$. This provides a proof that $$\eta(2r)=\frac{1}{(2r-1)!}\,\int_0^\infty\,\frac{u^{2r-1}}{\exp(u)+1}\,\text{d}u=\frac{\pi^{2r}}{2}\,\Biggl(\left[t^{2r-1}\right]\Big(\text{csc}(t)\Big)\Biggr)$$ for $r=1,2,3,\ldots$. Here, $\eta$ is the Dirichlet eta function.
In addition, $[t^k]\big(g(t)\big)$ denotes the coefficient of $t^k$ in the Laurent expansion of $g(t)$ about $t=0$. This also justifies the well known results that $$\eta(2r)=\frac{\left(2^{2r-1}-1\right)\,\big|B_{2r}\big|\,\pi^{2r}}{(2r)!}\text{ and }\zeta(2r)=\frac{2^{2r-1}\,\big|B_{2r}\big|\,\pi^{2r}}{(2r)!}$$ for $r=1,2,3,\ldots$, where $\left(B_j\right)_{j\in\mathbb{Z}_{\geq0}}$ is the sequence of Bernoulli numbers and $\zeta$ is the Riemann zeta function. Similarly, $$\begin{align}\int_0^\infty\,\frac{\exp(t\,u)-1}{\exp(u)-1}\,\text{d}u&=\ln(2)+2\,\int_0^{\frac{\pi}{2}}\,\frac{\sin\big((1-t)\,y)\,\sin(t\,y)}{\sin(y)}\,\text{d}y\\ &\phantom{aaaaa}-\cot(\pi\,t)\,\left(\frac{\pi}{2}-\int_0^{\frac{\pi}{2}}\,\frac{\sin\big((1-2t)\,y\big)}{\sin(y)}\,\text{d}y\right)\,,\end{align}$$ for all $t\in\mathbb{C}\setminus\mathbb{Z}$ such that $\text{Re}(t)<1$. This gives $$\int_0^\infty\,\frac{\sinh(t\,u)}{\exp(u)-1}\,\text{d}u=\frac{1}{2\,t}-\frac{\pi}{2}\,\cot(\pi\,t)$$ for all $t\in\mathbb{C}\setminus\{0\}$ such that $\big|\text{Re}(t)\big|<1$. Another consequence of (*) is that $$\int_0^{\frac{\pi}{2}}\,\frac{\sin(k\,y)}{\sin(y)}\,\text{d}y=\frac{\pi}{2}\,\text{sign}(k)$$ for all odd integers $k$. It is an interesting challenge to determine the integral $\displaystyle \int_0^{\frac{\pi}{2}}\,\frac{\sin(k\,y)}{\sin(y)}\,\text{d}y$ for all even integers $k$.
\begin{align} \int_0^1 x^\alpha\ dx &= \dfrac{1}{\alpha+1}\\ \dfrac{d}{d\alpha}\int_0^1 x^\alpha\ dx &=\dfrac{d}{d\alpha}\dfrac{1}{\alpha+1}\\ \int_0^1 x^\alpha\ln x\ dx &=\dfrac{-1}{(\alpha+1)^2} \end{align} \begin{align} I &= \int_{0}^{1} \frac{\ln x}{\sqrt {1-x^2}}dx \\ &= \int_0^1\ln x\sum_{n=0}^\infty{2n\choose n}\dfrac{1}{4^n}x^{2n}\, dx\\ &= \sum_{n=0}^\infty{2n\choose n}\dfrac{1}{4^n}\dfrac{-1}{(2n+1)^2} \\ &= \color{blue}{-\dfrac{\pi}{2}\ln2} \end{align} using the arcsine series $$\sum_{n=0}^\infty{2n\choose n}\dfrac{1}{4^n}\dfrac{x^{2n+1}}{2n+1}=\arcsin x,$$ which gives $$\sum_{n=0}^\infty{2n\choose n}\dfrac{1}{4^n}\dfrac{1}{(2n+1)^2}=\int_0^1\frac{\arcsin x}{x}\,dx=\int_0^{\pi/2}y\cot y\,dy=\frac{\pi}{2}\ln 2.$$
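The value $\frac{\pi}{2}\ln 2$ of the series $\sum_{n\ge0}\binom{2n}{n}\frac{1}{4^n(2n+1)^2}$ can be checked numerically (my own sketch); the term ratio $c_{n+1}=c_n\cdot\frac{2n+1}{2n+2}$ for $c_n=\binom{2n}{n}/4^n$ keeps the computation in floats:

```python
import math

c, s = 1.0, 0.0             # c tracks C(2n, n) / 4**n
for n in range(2000):
    s += c / (2 * n + 1) ** 2
    c *= (2 * n + 1) / (2 * n + 2)
target = math.pi / 2 * math.log(2)
print(s, target)            # partial sum approaches (pi/2) ln 2
```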
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
Pell's equation (or a special case of a second order diophantine equation) Question Find integers $x,y$ such that $$x^2-119y^2=1.$$ So far I've tried computing the continued fraction of $\sqrt{119}$ to find the minimal solution, but either I messed up or I don't know where to stop computing a rough approximation of said square root. Please help.
The algorithm in the first answer works for the general case, but in this specific case, noticing that $119$ is very close to the perfect square $121$ suggests that $y=11$ is worth a look. Then, $119\cdot 121=(120-1)(120+1)=120^2-1$ leads directly to $x=120, y=11$ as a solution.
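To confirm that $(120, 11)$ really is the minimal (fundamental) solution, a brute-force search is enough here; a sketch in Python (this verifies minimality directly rather than via the continued-fraction method):

```python
import math

# Fundamental solution of x^2 - d*y^2 = 1: for each y in turn,
# test whether 1 + d*y^2 is a perfect square.
def fundamental_solution(d):
    y = 1
    while True:
        x2 = 1 + d * y * y
        x = math.isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

assert fundamental_solution(119) == (120, 11)
assert 120 ** 2 - 119 * 11 ** 2 == 1
```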
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Extension operator for Lipschitz domain in Sobolev spaces Suppose we have a Lipschitz domain $\Omega \subset \mathbb{R}^2$. Let $u$ be a function in the Sobolev space $W^{1\,,\,p}(\Omega)$. Since $\Omega$ is Lipschitz, there is an extension operator $P: W^{1\,,\,p}(\Omega) \to W^{1\,,\,p}(\mathbb{R}^2)$ such that \begin{equation} P \, u \, |_{\Omega}=u \tag{1} \end{equation} Q$1$. Can someone give me a reference for $(1)\,$? Q$2$. How to extend $u$ in this case?
Let us say that a domain $\Omega \subset \mathbb{R}^n$ is an extension domain if there exists a bounded linear operator $\mathcal{E} \colon W^{1,p}(\Omega) \to W^{1,p}(\mathbb{R}^n)$ such that $\mathcal{E}u(x) = u(x)$ for $x \in \Omega$. It is proved in P. W. Jones, Quasiconformal mappings and extendability of functions in Sobolev spaces, Acta Math. 147 (1981), 71–88, that every uniform domain is an extension domain. Since domains with Lipschitz boundary are uniform, the result follows. In general, the construction of the extension is done via a local reduction (change of variables that reduced the boundary of $\Omega$ to a particular shape) and a patching by means of partitions of unity. You can find several examples in standard books about Sobolev spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the inverse Laplace transform of $\frac{2s + 1}{s(s + 1)(s + 2)}$ without using partial fractions? I'm wondering if we can perhaps use the convolution theorem to find the inverse Laplace transform of $\dfrac{2s + 1}{s(s + 1)(s + 2)}$? I can find it using partial fraction decomposition, but it is not obvious to me whether this is the convolution of two functions. Thank you for any help.
It is not immediately the convolution of two functions, but you can split it into two fractions, use convolution on each one, and add the results.
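One convenient split (an illustrative sketch, not the only one): since $\frac{2s+1}{s(s+1)}=\frac1s+\frac1{s+1}$, we can write $$\frac{2s+1}{s(s+1)(s+2)}=\left(\frac1s+\frac1{s+1}\right)\cdot\frac1{s+2},$$ so the inverse transform is the single convolution $(1+e^{-t})*e^{-2t}$. A quick numerical check against the partial-fraction answer, which works out to $\frac12+e^{-t}-\frac32 e^{-2t}$:

```python
import math

def conv(g, h, t, n=4000):
    # midpoint-rule approximation of (g * h)(t) = ∫_0^t g(u) h(t-u) du
    dt = t / n
    return sum(g((k + 0.5) * dt) * h(t - (k + 0.5) * dt) for k in range(n)) * dt

g = lambda u: 1 + math.exp(-u)      # inverse transform of 1/s + 1/(s+1)
h = lambda u: math.exp(-2 * u)      # inverse transform of 1/(s+2)

for t in (0.5, 1.0, 2.0):
    f_conv = conv(g, h, t)
    f_pf = 0.5 + math.exp(-t) - 1.5 * math.exp(-2 * t)  # partial-fraction answer
    assert abs(f_conv - f_pf) < 1e-4
```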
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the sequence $\frac{e^n}{n}$ convergent? Is the sequence $$\frac{e^n}{n}$$ convergent? I think it is not because $\log n < n$, implying that $\frac{e^n}{n} >1$ and hence the limit does not exist. Which probably also means that the sequence is unbounded. Am I right? Please correct me if I am wrong.
No it is not. Notice that $$a_{n+1}=\dfrac{e^{n+1}}{n+1}=e\cdot \dfrac{e^n}{n}\cdot \dfrac{n}{n+1}>\dfrac{e}{2}a_n>1.3a_n$$therefore $$a_{n+1}>1.3a_n>(1.3)^2a_{n-1}>\cdots>(1.3)^na_1=e\cdot(1.3)^n$$which is unbounded since $e\cdot(1.3)^n$ is unbounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 2 }
Smart Integration Tricks I am in the last year of my school and studied integration this year I have done several Integration techniques like Integration * *By substitution *By partial fractions *By parts *Trigo. substitutions *Tangent half angle substitution and many other basic methods of integration. So I wanted to ask about some integration tricks that might prove quite helpful. Not something advanced which is taught at higher level of studies But some smart integration tricks at school level only.
Another neat trick is to add different forms of integrals to obtain a much simpler one. For example, if we let a function $f$ be such that $f(x)f(-x)=1$ and we want to evaluate $$I=\int_{-1}^1\frac1{1+f(x)}\,dx$$ then we could replace $x$ by $-x$ giving $$I=-\int_1^{-1}\frac1{1+f(-x)}\,dx=\int_{-1}^1\frac{f(x)}{1+f(x)}\,dx$$ and adding gives $$2I=\int_{-1}^1\,dx=2\implies I=1.$$
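A quick numerical sanity check of this trick with $f(x)=e^x$, which satisfies $f(x)f(-x)=1$ (the step count is an arbitrary choice):

```python
import math

# midpoint rule for ∫_{-1}^{1} dx / (1 + e^x); the trick predicts the value 1
N = 10_000
h = 2 / N
I = sum(1 / (1 + math.exp(-1 + (k + 0.5) * h)) for k in range(N)) * h

assert abs(I - 1) < 1e-6
```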
{ "language": "en", "url": "https://math.stackexchange.com/questions/2903959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 2 }
Does $\frac{n}{\sum\limits_{k=1}^{n}\Big(\frac{k}{k+1}\Big)^k}$ converge? Does the sequence $$\displaystyle \frac{n}{\sum\limits_{k=1}^{n}\Big(\frac{k}{k+1}\Big)^k}$$ converge? Attempt. Since $\Big(\frac{k}{k+1}\Big)^k \rightarrow 1/e\neq 0$ and the terms are positive, the series $\sum\limits_{k=1}^{\infty}\Big(\frac{k}{k+1}\Big)^k$ diverges to $+\infty$. I find hard to determine if $n$ or the sum goes faster to $+\infty.$ Thanks in advance.
If $a_n\to L,$ then as is well known, $(a_1+\cdots + a_n)/n \to L.$ Since $[n/(n+1)]^n \to 1/e,$ we therefore have $$\frac{\sum_{k=1}^{n}[k/(k+1)]^k}{n} \to \frac{1}{e}.$$ Taking reciprocals gives the limit of $e.$
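A numerical sketch of this limit (the cutoff $n$ is arbitrary):

```python
import math

# since (k/(k+1))^k -> 1/e, the Cesàro averages converge to 1/e,
# so n divided by the partial sum converges to e
n = 100_000
s = sum((k / (k + 1)) ** k for k in range(1, n + 1))

assert abs(n / s - math.e) < 1e-2
```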
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
What can you conclude about the signs of the numbers $\lambda$ and $\omega$? Let $A\in M_n(\mathbb R)$ be a symmetric matrix such that $a_{11}=-1$, $a_{22}=-1$, $a_{33}=1$, and let the spectrum of $A$ be $\{\lambda,\omega\}$ with $\lambda>\omega$. Suppose $N(A-\lambda I)=L(v_1,v_2)$ and $N(A-\omega I)=L(v_3)$, where $v_1=(1,1,0)$, $v_2=(1,1,1)$, $v_3=(1,-1,0)$. What can you conclude about the signs of $\lambda$ and $\omega$? Please explain the answer. Here is what I have so far: since this matrix is not positive definite, it does not follow that the eigenvalues are positive; it is not negative definite either, because $e_3^TAe_3=a_{33}>0$, so it does not follow that the eigenvalues are negative. The only thing I am unsure about is the meaning of $\lambda>\omega$: the spectrum lists $\lambda$ first and then $\omega$, but maybe that ordering does not mean anything, since in a set one usually writes the smaller number before the bigger one. Also, since $A=A^T$, by the spectral theorem $A=Q\Lambda Q^T$, so there exist orthogonal eigenvectors; does that mean $A$ has three linearly independent columns, so that no eigenvalue is zero? Do you see more information?
Clearly $n=3$ because $A$ diagonalizes so the eigenvectors span the space. We have $$N(A - \lambda I) = \operatorname{span}\left\{\pmatrix{1 \\ 1 \\ 0}, \pmatrix{1 \\ 1 \\ 1}\right\} = \operatorname{span}\left\{\frac1{\sqrt2}\pmatrix{1 \\ 1 \\ 0}, \pmatrix{0 \\ 0 \\ 1}\right\}$$ $$N(A - \omega I) = \operatorname{span}\left\{\pmatrix{1 \\ -1 \\ 0}\right\} = \operatorname{span}\left\{\frac1{\sqrt2}\pmatrix{1 \\ -1 \\ 0}\right\}$$ so $A$ diagonalizes in the orthonormal basis $\left\{\frac1{\sqrt2}\pmatrix{1 \\ 1 \\ 0}, \pmatrix{0 \\ 0 \\ 1}, \frac1{\sqrt2}\pmatrix{1 \\ -1 \\ 0}\right\}$ to $D = \begin{bmatrix}\lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \omega \end{bmatrix}$. Therefore if we set $P = \begin{bmatrix} \frac1{\sqrt2} & 0 & \frac1{\sqrt2} \\ \frac1{\sqrt2} & 0 & -\frac1{\sqrt2} \\ 0 & 1 & 0\end{bmatrix}$ we have $$\begin{bmatrix} -1 & * & * \\ * & -1 & * \\ * & * & 1 \\ \end{bmatrix} = A = PDP^T = \begin{bmatrix} \frac{\lambda}{2}+\frac{\omega}{2} & \frac{\lambda}{2}-\frac{\omega}{2} & 0 \\ \frac{\lambda}{2}-\frac{\omega}{2} & \frac{\lambda}{2}+\frac{\omega}{2} & 0 \\ 0 & 0 & \lambda \\ \end{bmatrix} $$ It follows that $\lambda = 1$ and $\omega = -3$.
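A quick numerical check of this conclusion (a sketch; the $3\times3$ arithmetic is hand-coded just to stay dependency-free):

```python
import math

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

r = 1 / math.sqrt(2)
P = [[r, 0, r],
     [r, 0, -r],
     [0, 1, 0]]
lam, om = 1, -3
D = [[lam, 0, 0], [0, lam, 0], [0, 0, om]]
Pt = [[P[j][i] for j in range(3)] for i in range(3)]

A = matmul(matmul(P, D), Pt)

# the diagonal must be -1, -1, 1 as required by the problem
assert abs(A[0][0] + 1) < 1e-12
assert abs(A[1][1] + 1) < 1e-12
assert abs(A[2][2] - 1) < 1e-12

# and v3 = (1,-1,0)/sqrt(2) must be an eigenvector for omega = -3
v3 = [r, -r, 0]
Av3 = [sum(A[i][k] * v3[k] for k in range(3)) for i in range(3)]
assert all(abs(Av3[i] - om * v3[i]) < 1e-12 for i in range(3))
```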
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Triangles: isosceles and angles inside with another triangle I have what seems like a basic triangle problem, and I am not certain whether the given information is sufficient to find the angle. We have one main isosceles triangle and another one inside of it. I have attached a diagram, and we wish to find the angle in red: The blue labels mean: lengths PQ = QR. The green labels mean: lengths QS = QT. The given angle PQS = 24 (deg). We wish to find the angle in red, angle RST. I am trying to figure this out using only geometric principles, without using a system of algebraic equations. I tried to create parallel lines to help, but it did not work; I tried to use the exterior angle theorem, but still could not get it. I hope someone here can help me with this.
Let $\widehat{QRP}= \widehat{RPQ}=x$, so $\widehat{RQP}=180-2x$, so $\widehat{RQS}=156-2x$, so the sum of the angles $\widehat{QST}$ and $ \widehat{QTS}$ is $24+2x$, so they are both $x+12$, so $\widehat{RTS}=168-x$, so $\widehat{RST}=12$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Forking but not dividing Definition. A formula $\phi(x,a)$ divides over a set $B$ if there are $k<\mathbb{N}$ and a sequence $(a_i)_{i<\omega}$ such that (1) $\text{tp}(a/B)=\text{tp}(a_i/B)$, for all $i<\omega$; (2) $\{\phi(x,a_i)\}_{i<\omega}$ is $k$-inconsistent. Definition. A formula $\phi(x,a)$ forks over a set $B$ if there are $n\in\mathbb{N}$ and formulas $\psi(x,b_1),\dots, \psi(x,b_n)$ such that (1) for each $i=1,\dots, n$, the formula $\psi_i(x,b_i)$ divides over $B$; (2) $\phi(x,a)\models \bigvee_{i=1}^{n} \psi_i(x,b_i)$. It is clear that dividing implies forking. Question. Does forking always imply dividing? If no, is there a formula that forks but does not divide?
No, forking does not always imply dividing. In simple theories dividing and forking are the same, but they are not the same in general. Look at the following example. Example. Let $\mathcal{L}=\{ R^{(3)} \}$ be a language which consists of a ternary relation. Consider the $\mathcal{L}$-structure $\mathcal{M}=\big(\mathbb{S}^1, R \big)$ where $\mathbb{S}^1$ is the unit circle around the origin on the plane and $R(x,y,z)$ holds if and only if $y$ lies on the shorter arc between $x$ and $z$, ordered clock-wise, including the endpoints. Now, let $a,b$ and $c$ be three equidistant points on $\mathbb{S}^1$. Then $$\mathcal{M}\models \forall x\bigg( R(a,x,b)\vee R(b,x,c) \vee R(c,x,a) \bigg)$$ Claim(1). Each of $R(a,x,b), R(b,x,c)$ and $R(c,x,a)$ 2-divides over $\emptyset$. Proof of Claim(1). We will show, for instance, the formula $R(a,x,b)$ 2-divides over $\emptyset$. Let $a, a_0,b_0, a_1,b_1,\dots, b$ be a sequence of consecutive points on $\mathbb{S}^1$. Then $(a_ib_i)_{i<\omega}$ is a sequence such that i) $\text{tp}(a_i,b_i)=\text{tp}(a,b)$ for each $i<\omega$; ii) $\big\{ R(a_i,x,b_i) \big\}_{i<\omega}$ is 2-inconsistent. Therefore $R(a,x,b)$ 2-divides over $\emptyset$. Similarly we can prove that $R(b,x,c)$ and $R(c,x,a)$ 2-divides over $\emptyset$ as well. Claim(2). The formula $x=x$ forks over $\emptyset$ but does not divide over $\emptyset$. Proof of Claim(2). Since $x=x\models R(a,x,b)\vee R(b,x,c)\vee R(c,x,a)$ and by Claim(1) each of $R(a,x,b), R(b,x,c)$ and $R(c,x,a)$ 2-divides over $\emptyset$, $x=x$ forks over $\emptyset$. But $x=x$ does not divide over $\emptyset$, because if $x=x$ divides over $\emptyset$, then there are $k<\omega$ and an $\emptyset$-indiscernible sequence $(a_i)_{i<\omega}$ such that $\big\{ x=x \big\}_{i<\omega}$ is $k$-inconsistent which is a contradiction. Therefore the formula $x=x$ forks but does not divide.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to find a bijective mapping such that a) $f: \mathbb{N} \rightarrow \mathbb{N} \cup \{0\}$ I'm thinking $f(x) = x-1$, because in order to get 0, it would just be 1-1. Is this correct? I can show this is a bijection because it is surjective, i.e.: $ y = x - 1 \implies x = y+ 1$ and $f(x) = f(y+1) = y+1-1 = y$ and it is injective because $f(a) = a -1 = f(b) = b-1$ and adding one to both sides yields $a = b$. b) $f: \mathbb{N} \rightarrow \mathbb{N} \backslash \{1,2,3,...2017\}$ I'm a bit confused on this, if I do $f(x) = x + 2017$ and I can show this is a bijection the same way. Is this correct?
That would be correct. This is a variant of the Hilbert hotel "technique", whereby an infinite set (and only an infinite set) can essentially absorb or lose an extra element. As Cantor said, "I see it, but I don't believe it."
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to factor polynomials in $\mathbb{Z}_{n}[x]$ I realize there already is a question almost identical to this (here), but the answers given are a bit too vague. The problem I have is that I'm looking to factor the polynomial $$x^2+23x+18 \text{ in } \mathbb{Z}_{28}$$ in as many ways as possible. I've tried trial and error, since we're looking for $a,b \in \mathbb{Z_{28}}$ such that $$(x+a)(x+b)=x^2+23x+18$$ So $ab=18$ and $a+b=23$. But this seems too tedious and I haven't found an answer yet. So if anyone has a method for me to follow (sort of a recipe) or some clues as to how I can discover the method myself, it would be much appreciated. Thanks.
Hint: Solve mod $4$ and mod $7$ and use the Chinese Remainder Theorem. EDIT: OK, let's do the mod $4$ part. You want $$ x^2 + 3 x + 2 \equiv (x+a)(x+b) \mod 4 $$ Thus $$ \eqalign{a b &\equiv 2 \mod 4\cr a + b &\equiv 3 \mod 4\cr} $$ One of $a$ and $b$, let's say $a$, must be even, but can't be $0$, so it's $2$. Then the second equation tells you $b = 1$. I'll let you do it mod $7$: let's say the result is $(x+c)(x+d)$, where $c$ and $d$ are different. Now you'll have two possibilities for the factorization $(x+a)(x+b) \mod 28$, because the factors mod $4$ and the factors mod $7$ can pair up in two ways: either $$a \equiv 2 \mod 4, \; a \equiv c \mod 7, b \equiv 1 \mod 4, b \equiv d \mod 7$$ or $$a \equiv 2 \mod 4, \; a \equiv d \mod 7, b \equiv 1 \mod 4, b \equiv c \mod 7$$ In either case, once you have chosen $a$ and $b$ mod $4$ and mod $7$, CRT gives you the values mod $28$.
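If you want to double-check the final answer, an exhaustive search over $\mathbb{Z}_{28}$ is tiny; a sketch (monic linear factors only, as in the question):

```python
# Search over a <= b in Z_28 for (x+a)(x+b) = x^2 + 23x + 18,
# i.e. a + b ≡ 23 and a*b ≡ 18 (mod 28).
solutions = sorted(
    (a, b)
    for a in range(28)
    for b in range(a, 28)
    if (a + b) % 28 == 23 and (a * b) % 28 == 18
)

# exactly the two factorizations produced by the CRT pairings above
assert solutions == [(6, 17), (10, 13)]
```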
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Find $f(10)=?$ when the following condition is given. Let $f:\mathbb{R} \to \mathbb{R}$ be such that $\vert f(x)-f(y) \vert \le (x-y)^3$ for all $x,y \in \mathbb{R}$, and $f(2)=5$; then $f(10)=?$ This question is from an old assignment on the topic Limits, Continuity and Differentiability. Though I didn't get the answer, I tried in the following way: $$ \vert f(x)-f(y) \vert \le (x-y)^3 \implies 0\le \vert f(x)-f(y) \vert \le (x-y)^3 \implies x-y\ge0 \implies x\ge y\ \ \forall\ (x,y) \in \text{domain}. $$ Putting $x=10$ and $y=2$ gives $|f(10)-5| \le 8^3$, so $f(10) \in (-8^3+5,\,8^3+5)$. But I couldn't get any further. Please help. (I think we need to use the squeeze theorem.) EDIT (the previous inequality is wrong; answer as helped by Przemysław Scherwentke and mengdie1982): $$\vert f(x)-f(y) \vert \le |x-y|^3 \implies -|x-y|^2 \leq \frac{f(x)-f(y)}{x-y}\leq |x-y|^2, ~~~\forall x \neq y,$$ so $$0\leq\lim_{y \to x}\frac{f(x)-f(y)}{x-y} \leq 0 \implies f'(x) = 0,~~~\forall x \in \mathbb{R};$$ hence $f(10)=5$.
Maybe, the inequality condition of the problem should be $$|f(x)-f(y)|\leq |x-y|^3,~~\forall x,y \in \mathbb{R}.$$ Thus, $$-|x-y|^2 \leq \frac{f(x)-f(y)}{x-y}\leq |x-y|^2, ~~~\forall x \neq y.$$ Now, fix $x$ and take all the limits under the process $y \to x.$ We may obtain $$0\leq\lim_{y \to x}\frac{f(x)-f(y)}{x-y} \leq 0,$$ which implies $$f'(x) = 0,~~~\forall x \in \mathbb{R}.$$Hence, $f(x)$ is a constant function. As a result, $$f(10)=f(2)=5.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
age-based word problem Peter's age is three years more than three times his son's age. After three years, Peter's age will be ten years more than twice his son's age. What is Peter's present age? I have tried to put this into algebra, but not sure if correct? $x =$ Peter's son's age $p =$ Peter's age \begin{align*} 3x + 3 & = p\\ 10 + 2x & = 3 \end{align*}
The first equation ($3x + 3 = p$) is correct, but you lost your focus on the second equation. So now Peter's age is $p$ and his son's age is $x$. After three years, Peter's age will be $p+3$ and his son's will be $x+3$. So "After three years, Peter's age ($p+3$) will be ten years more than twice his son's age" becomes $$ (p+3) = 10 + 2(x+3) $$ When reading this kind of word problem, you just need to keep your head cool, advance slowly and write down exactly what you know.
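If you want to confirm the arithmetic, a tiny brute-force scan over possible ages does it (a sketch; the upper bound 120 is an arbitrary choice):

```python
# p = 3x + 3 now, and in three years (p + 3) = 10 + 2(x + 3)
solutions = [(x, 3 * x + 3) for x in range(120)
             if (3 * x + 3) + 3 == 10 + 2 * (x + 3)]

assert solutions == [(10, 33)]  # the son is 10, Peter is 33
```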
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Is a density nonnegative almost everywhere? Let $(\Omega,\mathcal F,P)$ be a probability space. Let $\xi:\Omega\to\mathbb R$ be a random variable. Let $P_\xi$ be the distribution of $\xi$. Suppose $P_\xi$ is a continuous distribution. (i. e. There is a Borel function $f_\xi:\mathbb R\to\mathbb R$ such that for all $B\in\mathcal B(\mathbb R)$, $P_\xi(B)=\int_Bf_\xi dx$. ($f_\xi$ is the density of $\xi$.)) Is $f_\xi$ nonnegative almost everywhere? (i. e. Is there a Lebesgue measurable set $A$ such that $\mu(A)=0$ and $f_\xi(x)\geq0$ for all $x\in\mathbb R-A$, where $\mu$ is the Lebesgue measure?) Here's my attempt: Let $C=f_\xi^{-1}((-\infty,0))$. It is clear that $C$ is a Borel set. So if we prove that $\mu(C)=0$, then we are done. Assume $\mu(C)>0$. Let's prove that $\int_C f_\xi dx<0$ so that $P_\xi(C)<0$, a contradiction. Assume $\int_C f_\xi dx\geq0$. By the definition of $C$, it is clear that $\int_C f_\xi dx$ can't be greater than $0$, so we have $\int_C f_\xi dx=0$. Let $f_\xi^-:=-\operatorname {min}\{f_\xi,0\}$. Choose a sequence $(s_n)$ of measurable, nonnegative simple functions which monotonically increases and $s_n\to f_\xi^-$ pointwise. By the Lebesgue's monotone convergence theorem, we have $\int_Cs_ndx\to\int_Cf_\xi^- dx$. So $\int_Cs_ndx$ should be $0$ for all $n$. I tried to derive a contradiction here using the fact that $f_\xi^-(x)>0$ for all $x\in C$, but I couldn't do it.
The part after "Assume $\mu(C)>0$" in your question should be replaced by this: $C=\bigcup_{n=1}^{\infty}C_n$ where $C_n:=f_{\xi}^{-1}((-\infty,-\frac1n])$, so that $C_1\subseteq C_2\subseteq\cdots$ and consequently $\mu(C_n)\uparrow\mu(C)$. Then assuming $\mu(C)>0$ leads to $\mu(C_n)>0$ for $n$ large enough, and consequently $$P_{\xi}(C_n)=\int_{C_n} f_{\xi}\;d\mu\leq -\frac1n\mu(C_n)<0.$$ A contradiction, and you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $(a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b})+i(c\bar{c}-d\bar{d})\text{Im}(c\bar{b}-c\bar{a}-a\bar{b})$ is real This is an exercise in Remmert's Theory of Complex Functions, GTM 122, page 17. Show that for $a,b,c,d\in\mathbb{C}$ with $\lvert a\rvert=\lvert b\rvert=\lvert c\rvert$ the complex number $$(a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b})+i(c\bar{c}-d\bar{d})\text{Im}(c\bar{b}-c\bar{a}-a\bar{b})$$ is real. I let the above expression $w$ and tried showing the $w-\bar{w}=0$, but the expression is too complicated to handle and it seems like $w-\bar{w}$ depends on $d$, also. Is there any simpler way of showing $w$ is real? Please enlighten me.
If $b=0$ or $c=0$, then the complex number equals $0$ which is real. In the following, $b\not =0$ and $c\not=0$. In order to prove that the claim is true, it is sufficient to prove the following two lemmas : Lemma 1 : $$\text{Im}(c\bar{b}-c\bar{a}-a\bar{b})=-\frac{\bar a(a-b)(b-c)(c-a)}{2bc}$$ Lemma 2 : $$\text{Im}((a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b}))=(c\bar c-d\bar d)\times \frac{\bar a(a-b)(b-c)(c-a)}{2bc}$$ From the two lemmas, we see that the complex number equals $\text{Re}((a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b}))$ which is real, so the claim is true. Proof for lemma 1 : Using $a\bar a=b\bar b=c\bar c$, we have $$\begin{align}\text{Im}(c\bar{b}-c\bar{a}-a\bar{b}) &=\frac 12\left((c\bar{b}-c\bar{a}-a\bar{b})-\overline{(c\bar{b}-c\bar{a}-a\bar{b})}\right) \\\\&=\frac 12\left((c\bar{b}-c\bar{a}-a\bar{b})-(\bar c b-\bar ca-\bar a b)\right) \\\\&=\frac 12\left(c\frac{a\bar a}{b}-c\bar{a}-a\frac{a\bar a}{b}-\frac{a\bar a}{c} b+\frac{a\bar a}{c}a+\bar a b\right) \\\\&=\frac{\bar a}{2bc}\left(c^2a-bc^2-ca^2-ab^2+a^2b+b^2c\right) \\\\&=\frac{\bar a}{2bc}\left(c^2(a-b)-c(a-b)(a+b)+ab(a-b)\right) \\\\&=\frac{\bar a(a-b)}{2bc}\left(c^2-c(a+b)+ab\right) \\\\&=\frac{\bar a(a-b)}{2bc}(c-a)(c-b) \\\\&=-\frac{\bar a(a-b)(b-c)(c-a)}{2bc}\qquad\square\end{align}$$ Proof for lemma 2 : Using $a\bar a=b\bar b=c\bar c$, we have $$\begin{align}&\text{Im}((a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b})) \\\\&=\frac 12(a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar b)-\frac 12\overline{(a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b})} \\\\&=\frac 12(a-b)(c-d)(\bar{a}-\bar{d})(\bar{c}-\bar{b})-\frac 12(\bar a-\bar b)(\bar c-\bar d)(a-d)(c-b) \\\\&=\frac 12(a-b)(c-d)(\bar{a}-\bar{d})\left(\frac{a\bar a}{c}-\frac{a\bar a}{b}\right)-\frac 12\left(\bar a-\frac{a\bar a}{b}\right)\left(\frac{a\bar a}{c}-\bar d\right)(a-d)(c-b) \\\\&=\frac{a\bar a}{2bc}(a-b)(c-d)(\bar{a}-\bar{d})\left(b-c\right)-\frac{\bar a}{2bc}\left(b-a\right)\left(a\bar a-c\bar d\right)(a-d)(c-b) \\\\&=\frac{\bar 
a(a-b)(b-c)}{2bc}\left(a(c-d)(\bar{a}-\bar{d})-c(\bar c-\bar d)(a-d)\right) \\\\&=\frac{\bar a(a-b)(b-c)}{2bc}\left((c-a)c\bar c-(c-a)d\bar d\right) \\\\&=\frac{\bar a(a-b)(b-c)(c-a)(c\bar c-d\bar d)}{2bc} \\\\&=(c\bar c-d\bar d)\times \frac{\bar a(a-b)(b-c)(c-a)}{2bc}\qquad\square\end{align}$$
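The claim can also be spot-checked numerically by sampling random $a,b,c$ of a common modulus and an arbitrary $d$; a sketch (sample counts and tolerance are ad hoc):

```python
import cmath
import random

def expr(a, b, c, d):
    # the number claimed to be real whenever |a| = |b| = |c|
    return ((a - b) * (c - d) * (a.conjugate() - d.conjugate())
            * (c.conjugate() - b.conjugate())
            + 1j * (abs(c) ** 2 - abs(d) ** 2)
            * (c * b.conjugate() - c * a.conjugate() - a * b.conjugate()).imag)

random.seed(1)
for _ in range(200):
    r = random.uniform(0.5, 2.0)  # common modulus of a, b, c
    a, b, c = (r * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
               for _ in range(3))
    d = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    assert abs(expr(a, b, c, d).imag) < 1e-9
```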
{ "language": "en", "url": "https://math.stackexchange.com/questions/2904979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Curvature function and rate of change of angle Let $\gamma:(a,b)\rightarrow \mathbb{R}^2$ be a smooth curve with $\| \dot{\gamma}(s)\|=1$ for all $s\in (a,b)$. Fix $s_0\in (a,b)$ and let the unit vector $\dot{\gamma}(s_0)$ be represented by $(\cos \phi_0,\sin\phi_0)$. Then there is smooth function $\phi$ with $\phi(s_0)=\phi_0$ such that $$\dot{\gamma}(s)=(\cos\phi(s),\sin\phi(s))$$ for all $s\in (a,b)$. The proof goes as follows: let $$\dot{\gamma}(s)=(f(s),g(s))$$ so that $f(s)^2+g(s)^2=1$ for all $s$. Define $$\phi(s)=\phi_0 + \int_{s_0}^s (f\dot{g}-g\dot{f})du$$ It is then shown that this is required $\phi$ in the theorem. Q. I didn't get intuition for choice (definition) of $\phi$. How do we justify the choice of $\phi$ above? Reference: Elementary differential geometry by Pressley, Proposition 2.2.1 (New edition) Using the explicitly defined angular function $\phi$, the curvature function is given by $$\kappa_s =\frac{d\phi}{ds}.$$
The intuition is that you want to obtain $\phi$ by integrating $\kappa=\frac{d\phi}{ds}$. You get $$\phi(s)=\phi_0 + \int_{s_0}^s \kappa(u)\, du$$ by the fundamental theorem of calculus. From the Frenet–Serret formulas you can derive $$\kappa=\langle \ddot \gamma,J \dot \gamma \rangle$$ where $\dot\gamma$ is interpreted as the tangent vector field $T$ and $J$ is the anticlockwise rotation by $\frac{\pi}{2}$. This yields $$\phi(s)=\phi_0 + \int_{s_0}^s \langle \dot T(u),JT(u) \rangle\, du=\phi_0 + \int_{s_0}^s (f\dot{g}-g\dot{f})\, du$$ as desired.
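A concrete sanity check on the unit circle, where the signed curvature is $1$ and hence $\phi(s)=\phi_0+s$ (a sketch using finite differences):

```python
import math

# unit-speed circle: γ(s) = (cos s, sin s), so γ'(s) = (-sin s, cos s) = (f, g)
f = lambda s: -math.sin(s)
g = lambda s: math.cos(s)

eps = 1e-6
df = lambda s: (f(s + eps) - f(s - eps)) / (2 * eps)  # central difference
dg = lambda s: (g(s + eps) - g(s - eps)) / (2 * eps)

# the integrand f*g' - g*f' should equal the signed curvature, here 1,
# so φ(s) = φ_0 + s for the circle
for s in (0.0, 0.7, 2.1):
    assert abs(f(s) * dg(s) - g(s) * df(s) - 1.0) < 1e-6
```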
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $u$ and $w$ belong to the same connected component, does there exist any $u$-$w$ path containing $v$?
If $v$ is a cut vertex of $G$, then $G-v$ is disconnected and has at least two components , $G_1$ and $G_2$. Take $u \in G_1$ and $w \in G_2$. You ask, “what happens if $u$ and $w$ lie in the same connected component of $G-v$?” If $u$ and $w$ were in the same component of $G-v$, let's say $G_1$, there would be a path in $G_1$ connecting $u$ to $w$. This path would not contain $v$, because $G_1$ is a component of $G-v$. It might happen that some paths in $G$ connecting $u$ to $w$ contain $v$, but not all of them. But notice that $u$ and $w$ aren't arbitrary vertices—far from it. We only need to show that such vertices exist somewhere in $G$. So we carefully require that $u$ and $w$ come from different components of $G-v$. Now we know there is a path connecting $u$ to $w$ in $G$ (because $G$ is connected), but there is no path connecting $u$ to $w$ in $G-v$ (because $G-v$ is not connected, and $u$ and $w$ are in different components), so every path from $u$ to $w$ in $G$ must not be a path in $G-v$. That is, every path from $u$ to $w$ in $G$ must contain $v$.
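The argument can be illustrated on a small concrete graph with cut vertex $v=2$ by enumerating all simple paths (a sketch; the graph is made up for illustration):

```python
# Graph on {0,..,4} with cut vertex 2: removing it leaves components {0,1}, {3,4}
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (2, 4)}
adj = {i: set() for i in range(5)}
for x, y in edges:
    adj[x].add(y)
    adj[y].add(x)

def simple_paths(u, w, path=None):
    # all simple paths from u to w, by depth-first search
    path = path or [u]
    if u == w:
        yield list(path)
        return
    for nxt in adj[u]:
        if nxt not in path:
            yield from simple_paths(nxt, w, path + [nxt])

# endpoints in different components of G - 2: every path passes through 2
paths = list(simple_paths(0, 4))
assert paths and all(2 in p for p in paths)

# endpoints in the same component: some path avoids 2 (here, the edge 0-1)
assert any(2 not in p for p in simple_paths(0, 1))
```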
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Constant in front of the characteristic polynomial of a matrix In a text, I saw that the characteristic polynomial for an $n\times n$ matrix A with eigenvalues $e_{1}, \ldots e_{n}$ can be written $$p(\lambda) = (\lambda - e_{1})(\lambda - e_{2}) \cdots (\lambda - e_{n}).$$ But shouldn't there be a constant in front of this polynomial? Like $$p(\lambda) = K(\lambda - e_{1})(\lambda - e_{2}) \cdots (\lambda - e_{n}).$$
In principle it could happen if we knew nothing about the characteristic polynomial's structure. However, it turns out that it will always be monic. One possible definition for the determinant of a matrix is $$ \det A = \sum_{\sigma \in \mathbb{S}_n}\operatorname{sgn}(\sigma) \cdot a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}. $$ Since the characteristic polynomial of a matrix $A$ is: $$ \chi_A = \det(\lambda I - A) = \sum_{\sigma \in \mathbb{S}_n}\operatorname{sgn}(\sigma) \cdot (\lambda I -A)_{1,\sigma(1)}\cdots (\lambda I -A)_{n,\sigma(n)}, $$ this expression is a polynomial in $\lambda$, because it is a sum of products of factors that are either $\lambda - a_{ii}$ or an entry of $A$. The maximum exponent for $\lambda$ would occur, then, if $\lambda$ appears in every factor of a term, that is, in the term $(\lambda - a_{11})\cdots(\lambda - a_{nn})$, whose highest power of $\lambda$ is $\lambda^n$: recall that the degree of a product is the sum of the degrees.
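The Leibniz-formula argument can be replayed exactly in code: represent each entry of $\lambda I - A$ as a degree-$\le 1$ polynomial and expand the sum over permutations (a sketch; the sample matrix is arbitrary):

```python
from itertools import permutations

def pmul(p, q):
    # product of polynomials given as coefficient lists (index = power)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def sign(perm):
    # sgn(σ) via inversion count
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def charpoly(A):
    n = len(A)
    # entry (i,j) of λI - A as a polynomial in λ: [-a_ij, 1] or [-a_ij]
    M = [[[-A[i][j], 1] if i == j else [-A[i][j]] for j in range(n)]
         for i in range(n)]
    chi = [0]
    for perm in permutations(range(n)):
        term = [sign(perm)]
        for i in range(n):
            term = pmul(term, M[i][perm[i]])
        chi = padd(chi, term)
    return chi  # chi[-1] is the leading coefficient

A = [[2, 1, 0], [0, 3, 4], [5, 0, 1]]
chi = charpoly(A)
assert len(chi) == 4 and chi[-1] == 1      # degree 3 and monic
assert chi[2] == -(2 + 3 + 1)              # next coefficient is -trace
assert chi[0] == -26                       # constant term is det(-A) = -det A
```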
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Degree of curve where matrix of polynomials has rank 1 My question is about a step in Exercise 12.8 on page 442 of 3264 & All That by Eisenbud and Harris. Chapter 12 is about Porteous' formula. The exercise reads: Let $A=(P_{i,j})$ be a $2 \times 3$ matrix whose entries $P_{i,j}$ are general polynomials of degree $a_{i,j}$ on $\mathbb{P}^3$. Assuming that $a_{1,j}+a_{2,k}=a_{1,k}+a_{2,j}$ for all $j$ and $k$--so that the minors of $A$ are homogeneous--what is the degree of the curve where $A$ has rank $1$? Shuai Wang posted a solution on page 21 of this document https://www.math.columbia.edu/~tedd2013/intersectiontheory.pdf which seems correct, but I'm struggling to understand one step of it. He claims that the curve where $A$ has rank $1$ is the degeneracy locus of the following bundle map: $\mathcal{O}_{\mathbb{P}^3}(a_{22}) \oplus \mathcal{O}_{\mathbb{P}^3}(a_{12}) \to \mathcal{O}_{\mathbb{P}^3}(a_{11}+a_{22}) \oplus \mathcal{O}_{\mathbb{P}^3}(a_{12}+a_{22}) \oplus \mathcal{O}_{\mathbb{P}^3}(a_{12}+a_{23})$. \begin{align*} \begin{bmatrix} a_{22} & a_{12} \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \end{align*} I have a few questions about this bundle map. Why is $\mathcal{O}_{\mathbb{P}^3}(a_{11}+a_{22}) \oplus \mathcal{O}_{\mathbb{P}^3}(a_{12}+a_{22}) \oplus \mathcal{O}_{\mathbb{P}^3}(a_{12}+a_{23})$ the target space of this map? Why is the degeneracy locus of this map equal the locus of the maximal minors of $A$? The maximal minors of $A$ are $\{a_{11}a_{22}-a_{21}a_{12}, a_{11}a_{23}-a_{13}a_{21}, a_{12}a_{23}-a_{22}a_{13} \}$ while the result of the matrix multiplication is \begin{align*} \begin{bmatrix} a_{22} & a_{12} \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} = \begin{bmatrix} a_{11}a_{22}+a_{21}a_{12} & a_{12}a_{22}+a_{12}a_{22} & a_{22}a_{13}+a_{12}a_{23} \end{bmatrix}. 
\end{align*} In general, how does one find a map of vector bundles such that its degeneracy locus is the locus of maximal minors of a given matrix? (If possible, it would be nice for the vector bundles to be written as sums of line bundles.) This seems like an important tool for applying Porteus' formula.
I believe you got confused with the map. Let's define a bundle map by right multiplication with $$ A = \begin{bmatrix} P_{11} & P_{12} & P_{13} \\ P_{21} & P_{22} & P_{23} \end{bmatrix} $$ whose degrees are respectively $$ \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$ Start by taking $\begin{bmatrix} R & S \end{bmatrix}$, a section of $\mathcal{O}_{\mathbb{P}^3}(a_{22}) \oplus \mathcal{O}_{\mathbb{P}^3}(a_{12})$. Then $$ \begin{bmatrix} R & S \end{bmatrix} A = \begin{bmatrix} RP_{11}+SP_{21} & RP_{12}+SP_{22} & RP_{13}+SP_{23} \end{bmatrix} =: C $$ The identity $a_{1,j}+a_{2,k}=a_{1,k}+a_{2,j}$ says that this map is well defined and the target is $\mathcal{O}_{\mathbb{P}^3}(r) \oplus \mathcal{O}_{\mathbb{P}^3}(s)\oplus \mathcal{O}_{\mathbb{P}^3}(t)$, for some integers $r,s,t$. These numbers are the corresponding degrees of the entries of $C$, so $r= a_{22}+a_{11}$, $s= a_{22}+a_{12}$ and $t= a_{22}+a_{13} = a_{12}+a_{23}$. Now you want to calculate the points $p\in \mathbb{P}^3$ such that the map $(x,y) \mapsto \begin{bmatrix} x & y \end{bmatrix}A(p)$ is not injective, the so-called degeneracy locus. This coincides with the vanishing of the minors of $A(p)$, hence the degeneracy locus of this bundle map is given by the vanishing of the minors of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
determinant and invertibility for matrices I'm reading this in my text: I don't understand this at all: (see attachment) It's the second part that I don't get. I get that if B has a row of 0s, then the determinant is 0 (I could cofactor expand this and it'd be 0). But here's what I don't get: * *if $|B| = 0$, why does this imply that $|A| = 0$? Where does this come from? *if A is row equivalent to I, why does this mean that A is invertible?
Note that every elementary row operation is a multiplication of your matrix by an invertible matrix. So if you can reduce your matrix to the identity matrix via elementary row operations, that means you have multiplied your matrix by some invertible matrices to get $I$; therefore the product of those matrices is the inverse of your matrix. Thus your matrix is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Zeroes of a polynomial. Evaluate an expression Let $x_1,x_2,x_3$ be the zeros of the polynomial $7x^3+24x^2+2016x+i$. Evaluate $(x_1^2+x_2^2)(x_2^2+x_3^2)(x_3^2+x_1^2)$. My thoughts: I've tried $7(x-x_1)(x-x_2)(x-x_3)=0$ and expanded it out to match the polynomial given and got an ugly system of equations (which I can share). I'm not sure if I should start off with this equation or go a different way.
Let $g(x)=b_3 x^3 + b_2 x^2 + b_1 x + b_0$ be a cubic having as zeros exactly $x_1^2+x_2^2$, $x_2^2+x_3^2$, $x_3^2+x_1^2$. Then $(x_1^2+x_2^2)(x_2^2+x_3^2)(x_3^2+x_1^2)=-b_0/b_3$. Now $x_1^2+x_2^2=p_2-x_3^2$ etc. and so the roots of $g$ are given by $h(x_i)$ where $h(x)=p_2-x^2$. Note that $p_2=(x_1+x_2+x_3)^2-2(x_1 x_2 + x_2 x_3 + x_3 x_1)$ is easily computed from the coefficients of $f(x)=7x^3+24x^2+2016x+i$. Finally, Wikipedia tells us that $g$ is given by the resultant $\operatorname {Res}_{x}(y-h(x),f(x))$, which you can find using WA.
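As a numerical cross-check of this approach, here is a short pure-Python sketch (the helper name `value_from_coeffs` and the test roots are my own, not from the question). It evaluates the product directly from the coefficients via Vieta's formulas, using the identity $\prod_i(p_2-x_i^2)=-f(\sqrt{p_2})\,f(-\sqrt{p_2})/b_3^2$, where $s=\sqrt{p_2}$ satisfies $s^2=p_2$:

```python
import cmath

def value_from_coeffs(b3, b2, b1, b0):
    """(x1^2+x2^2)(x2^2+x3^2)(x3^2+x1^2) for b3 x^3 + b2 x^2 + b1 x + b0."""
    e1, e2 = -b2 / b3, b1 / b3          # elementary symmetric functions (Vieta)
    p2 = e1 * e1 - 2 * e2               # x1^2 + x2^2 + x3^2
    f = lambda x: ((b3 * x + b2) * x + b1) * x + b0
    s = cmath.sqrt(p2)
    # prod_i (p2 - x_i^2) = prod (s - x_i)(s + x_i) = -f(s) f(-s) / b3^2
    return -f(s) * f(-s) / (b3 * b3)

# validate the method on a cubic whose roots we know
r = [1 + 2j, -3 + 0j, 0.5j]
b3 = 7
b2 = -b3 * (r[0] + r[1] + r[2])
b1 = b3 * (r[0] * r[1] + r[1] * r[2] + r[2] * r[0])
b0 = -b3 * r[0] * r[1] * r[2]
direct = (r[0]**2 + r[1]**2) * (r[1]**2 + r[2]**2) * (r[2]**2 + r[0]**2)
assert abs(value_from_coeffs(b3, b2, b1, b0) - direct) < 1e-8

# now apply it to the polynomial in the question
answer = value_from_coeffs(7, 24, 2016, 1j)
print(answer)
```

The first block validates the identity on a cubic with known roots; the last line applies it to $7x^3+24x^2+2016x+i$ without ever computing its roots.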
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
approximation to standard normal distribution I have a sequence of independent, but non-identically distributed Bernoulli random variables $X_i$ taking on value $1$ with probability $p_{i1}\cdot p_{i2}$, where $i=1,\ldots,n$. Let $X=\sum_{i=1}^n X_i$. Using the Lyapunov central limit theorem, I have approximated the probability $\Pr[X \geq k]$ by the standard normal distribution. However, for some values of $k$ the approximated value is not close to the exact value. I wanted to ask why this happens. Thanks in advance for your help.
Summary Notation: Let $F_n$ be the cumulative distribution function for $X = \sum_{I=1}^n X_i$, and let $F$ be the cumulative distribution function $Z$ where $Z$ follows a standard normal distribution. The result that you have only establishes that for any fixed $k$, $P[Z \geq k]$ will be an increasingly good approximation to $P[X \geq k]$ as $n \rightarrow \infty$. Details You have convergence in distribution (by the Lyapunov central limit theorem): for any $x \in \mathbb{R}$, $F_n(x) \rightarrow F(x)$ as $n \rightarrow \infty$. (This holds for any real value $x$ since $F$ is continuous on the real numbers $\mathbb{R}$.) But (I think that) what you're seeing is that for any given $n$, the closeness of $F_n(x)$ to $F(x)$ can vary with $x$. Hence in particular, the closeness of $F_n(k)$ to $F(k)$ can vary as $k$ increases. Therefore since $P[X \geq k] = 1 - F_n(k)$ and $P[Z \geq k] = 1 - F(k)$, the closeness of $P[Z \geq k]$ to $P[X \geq k]$ can also vary as $k$ increases. In other words, the convergence in distribution is pointwise convergence and not uniform convergence. That is, in establishing that for any $x \in \mathbb{R}$, $F_n(x) \rightarrow F(x)$ as $n \rightarrow \infty$, the following is true: $$ \text{pointwise convergence:}\ \text{for any}\ x \in \mathbb{R}\ \text{and for any}\ \epsilon > 0, \text{there exists}\ N > 0\ \text{such that if}\ n \geq N \ \text{then}\ |F_n(x) - F(x) |< \epsilon $$ The following is not necessarily true: $$ \text{uniform convergence:}\ \text{for any}\ \epsilon > 0, \text{there exists}\ N > 0\ \text{such that for any}\ x \in \mathbb{R}, \text{if}\ n \geq N \ \text{then}\ |F_n(x) - F(x) |< \epsilon $$
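To see concretely how the accuracy of the normal approximation varies with $k$, here is a small pure-Python experiment (the heterogeneous $p_i$ values are invented for illustration). It computes the exact distribution of $X$, a Poisson-binomial, by convolution and compares tail probabilities with the CLT approximation at several values of $k$:

```python
import math

def pois_binom_pmf(probs):
    """Exact pmf of a sum of independent Bernoulli(p_i) via convolution."""
    pmf = [1.0]
    for p in probs:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] * p if k > 0 else 0.0)
               for k in range(len(pmf) + 1)]
    return pmf

def phi_tail(z):
    """P[Z >= z] for a standard normal Z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

probs = [0.2 + 0.5 * i / 50 for i in range(50)]   # 50 heterogeneous Bernoullis
pmf = pois_binom_pmf(probs)
mu = sum(probs)
sigma = math.sqrt(sum(p * (1 - p) for p in probs))

for k in (int(mu), int(mu + 2 * sigma), int(mu + 3 * sigma)):
    exact = sum(pmf[k:])
    approx = phi_tail((k - mu) / sigma)   # plain CLT, no continuity correction
    print(k, exact, approx, abs(exact - approx))
```

The printed errors differ from one $k$ to another, which is exactly the pointwise (non-uniform) convergence described above.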
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do we get an inequality from an equation? In the paper "Linear forms in the logarithms of real algebraic numbers close to 1", it is written on page 5 that $\varLambda \leq \frac{1}{by^n}$ (see equation 7 on page 5). But we get it from an equation. As I understand it, it should be $\varLambda = \frac{1}{by^n}$. How could $\varLambda$ be less than $\frac{1}{by^n}$? If it is $\varLambda \leq \frac{1}{by^n}$, then why not $\varLambda \geq \frac{1}{by^n}$?
$$a=b$$ implies both $$a\le b$$ and $$a\ge b.$$ It's the author's choice to weaken the comparison for the requirements of the exposition. Quiz: Are these propositions true ? ($\land$ is and, $\lor$ is or) * *$a=b\implies a\le b\land a\ge b$ ? *$a=b\implies a\le b\lor a\ge b$ ? *$a=b\implies a< b\lor a=b\lor a> b$ ? *$a=b\implies a< b\land a=b\land a> b$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2905938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Hilbert-Schmidt operator defined by non-orthogonal basis I have the following operator on $L^2(0,1)$ $$ Tf = \sum_{n \geq 0} 2^{-n}\langle f,v_n\rangle v_n$$ where $v_n(t) = t^n$. I was able to prove that it is Hilbert-Schmidt, but now I need to calculate its Hilbert-Schmidt norm and its integral kernel. The problem is that it is defined through an independent system of vectors (the polynomials) which are dense but not orthogonal (if it had been $v_n(t) = \exp(2\pi i nt)$ it would have been much easier). How do I identify the eigenvectors? I was thinking about setting $t = \exp(2\pi i x)$ and reducing to the orthogonal case somehow, but I reach incorrect results and I am not sure where I am going wrong. Can anybody help me?
The operator is given by $$ Tf(x)=\int_0^1 f(t) \sum_{n\ge 0} \frac{(xt)^n}{2^n}\, dt =\int_0^1 f(t)\frac{2}{2-xt}\, dt.$$ Let $K(t, x):=2(2-xt)^{-1}$. This is the kernel you are looking for. The squared Hilbert-Schmidt norm of $T$ is given by the integral $$ \iint_{[0, 1]^2} \frac{4dtdx}{(2-xt)^2}= 2\log(2).$$ (thanks for correcting the mistake in my computations)
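A quick numerical sanity check of both claims here, the geometric-series kernel and the value $2\log 2$, sketched in plain Python with a midpoint rule:

```python
import math

# the kernel: sum_n (x t)^n / 2^n is a geometric series collapsing to 2/(2 - x t)
x0, t0 = 0.7, 0.3
series = sum((x0 * t0 / 2) ** k for k in range(80))
assert abs(series - 2 / (2 - x0 * t0)) < 1e-12

# midpoint rule for the squared Hilbert-Schmidt norm over [0,1]^2
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        t = (j + 0.5) * h
        total += 4.0 / (2.0 - x * t) ** 2
total *= h * h
print(total, 2 * math.log(2))   # both ~1.3863
```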
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the product of elementary matrices necessarily invertible? Why is the product of elementary matrices necessarily invertible? I understand that each elementary matrix is invertible, but why is their product also invertible? Is it the indirect result of this theorem?
That $A$ is invertible means precisely that there is another matrix $A^{-1}$ such that $$AA^{-1}=I=A^{-1}A.$$ It is easy to show that the inverse, if it exists, must be unique. Now suppose $A,B$ to be invertible, and denote $C:=AB$. $C$ will be invertible if we can find $C^{-1}$ such that $CC^{-1}=I=C^{-1}C$. Observe that $$C(B^{-1}A^{-1})=(AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=AIA^{-1}=AA^{-1}=I.$$ Similarly we get $(B^{-1}A^{-1})C=I$. Therefore, since inverses are unique, by definition we can conclude that $C^{-1}=B^{-1}A^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Why is it that $x\sim y$ implies that $[x]=[y]$? I have a doubt about the proof of this property: For $x\in A$, let $[x]$ be the set $[x]=\{a\in A\mid a\sim x\}$. Then for any $x,y\in A$, either a) $[x]=[y]$ or b) $[x]\cap[y]=\varnothing$. I understand up to the part where they prove that if $c\sim x$ and $c\sim y$ for some $c$, then $x\sim y$. But the conclusion that $x\sim y$ implies $[x]=[y]$ confuses me. I need help with this. Thank you very much.
If you understood that far, you're practically done. If $x\sim y$, then take $c\in[x]$, then $c\sim x$ and therefore $c\sim y$, since $x\sim y$, so $c\in[y]$. Therefore $[x]\subseteq[y]$. The argument for $[y]\subseteq[x]$ is similar, and therefore $[x]=[y]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does every finite non-trivial complete group have even order? Does every finite non-trivial complete group have even order? I checked three well known classes of complete groups, and this statement is true for them all: 1) Symmetric groups: All symmetric groups have even order (a well known fact) 2) Automorphism groups of non-abelian simple groups: All non-abelian simple groups are of even order by Feit-Thompson theorem. Thus they have elements of order 2. And, as all non-abelian simple groups are centreless, this element is not in its centre. Thus, the conjugation by it is an automorphism of order 2. That means, that all automorphism groups of non-abelian simple groups have even order. 3) Holomorphs of cyclic groups of odd order (here is the proof, why they are complete: Is the statement that $ \operatorname{Aut}( \operatorname{Hol}(Z_n)) \cong \operatorname{Hol}(Z_n)$ true for every odd $n$?): All cyclic groups are abelian and thus all cyclic groups of even order have automorphism of order 2, that maps all their elements to their inverse. Thus both their automorphism group and their holomorph are of even order. However, I do not know, how to prove this statement in general. Any help will be appreciated.
No. There was at one time a conjecture that this is true, but an example of a complete group of order $3\cdot 19\cdot 7^{12}$ was produced by R.S. Dark in “A complete group of odd order”.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Comparing Fisher Information of sample to that of statistic Let $X_1,...,X_n$ be Bernoulli($p$) where $p$ is unknown, and $n>2$, and let $T=X_1+X_2$. My task is to calculate the information about $p$ in the entire sample and compare it to the information about $p$ given by the statistic. After a few lines of work, I obtain the following expression for information contained in the sample: $$I_X(p)= n*E_p[(\frac{X_1p^{-1}(1-p)+X_1+1}{1-p})^2] $$ To calculate this, I first calculated the information given by one observation, and then multiplied that information by $n$. Now, after some work, I obtained the following expression for information contained in $T$: $$I_T(p)=E_p[(\frac{Tp^{-1}(1-p)-1}{1-p})^2]$$ Now, I know from previous questions that $T$ is sufficient for $p$. Hence, $I_X(p)$ should equal $I_T(p)$. However, I have no idea how I am to compare these two quantities, because one has an $n$, and the other has an $X_2$ (in the $T$). Any advice?
The Fisher Information Matrix (FIM) is the negative of the expectation of the Hessian of the log likelihood function, namely \begin{equation} I(p) = - E H(p) \end{equation} where $I(p)$ is the FIM and $H(p)$ is the Hessian. Your likelihood function for independent samples is \begin{equation} L(p) = f(x_1\vert p) \ldots f(x_n \vert p) \end{equation} where \begin{equation} f(x_i \vert p) = p^{x_i} (1 - p)^{1 - x_i} \end{equation} Both equations above give us the likelihood function \begin{equation} L(p) = p^{ \sum x_i} (1 - p)^{n - \sum x_i} \end{equation} The log likelihood becomes \begin{equation} l(p) = \log L(p) = \log \big( p^{ \sum x_i} (1 - p)^{n - \sum x_i} \big) \end{equation} Denoting $\bar{x} = \frac{1}{n} \sum x_i$ for the sake of simple presentation, one gets: \begin{equation} l(p) = n\bar{x} \log(p) + n(1 - \bar{x})\log(1-p) \end{equation} The score function (gradient of the log likelihood) is \begin{equation} s(p) = l'(p) = n \frac{\bar{x}}{p} - n \frac{(1- \bar{x})}{1 - p} \end{equation} The Hessian becomes \begin{equation} H(p) = s'(p) = - \frac{n(1-2p)\bar{x} + np^2}{p^2(1-p)^2} \end{equation} The FIM becomes \begin{equation} I(p)= -E H(p) = E\big[ \frac{n(1-2p)\bar{x} + np^2}{p^2(1-p)^2} \big] = \frac{n(1-2p)E\bar{x} + np^2}{p^2(1-p)^2} \end{equation} But $E\bar{x} = p$, so \begin{equation} I(p) = \frac{n(1-2p)p + np^2}{p^2(1-p)^2} = \frac{n}{p(1-p)} \end{equation} Now, \begin{equation} T = X_1 + X_2 \end{equation} and the distribution of $T$ (a sum of just two Bernoulli trials) is \begin{equation} g(T=t\vert p) = Pr(T = t) = C_2^t p^t(1-p)^{2-t} \end{equation} Using Bayes' theorem, we know that \begin{equation} f(x_1 \ldots x_n \vert T = t, p) = \frac{f(x_1 \ldots x_n, T=t \vert p)}{g(T= t \vert p)} \end{equation} Let's get $f(x_1 \ldots x_n, T=t \vert p)$; the joint likelihood is \begin{equation} f(x_1 \ldots x_n , T = t\vert p) = p^{t + \sum\limits_{i=3}^n x_i}(1-p)^{n - t - \sum\limits_{i=3}^n x_i} \end{equation} so \begin{equation} f(x_1 \ldots x_n \vert T = t, p) = \frac{p^{t + \sum\limits_{i=3}^n x_i}(1-p)^{n - t - \sum\limits_{i=3}^n x_i}}{C_2^t p^t(1-p)^{2-t}} = \frac{1}{C_2^t}\, p^{\sum\limits_{i=3}^n x_i}(1-p)^{\,n-2-\sum\limits_{i=3}^n x_i} = h(p) \end{equation} which is still a function of $p$. Hence, $T$ is NOT a sufficient statistic for $p$. However, if $T$ is the sum of ALL your samples, then you'd get a sufficient statistic.
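The sufficiency claim is easy to check by brute force. Below is a small Python sketch (the outcome $x=(1,0,1,0)$ and the values of $p$ are arbitrary choices of mine): the conditional probability of the full sample given $T=X_1+X_2$ still depends on $p$, while the conditional given the sum of all observations does not. Note that the marginal of $T$ involves only two trials, $P(T=t)=\binom{2}{t}p^t(1-p)^{2-t}$:

```python
from math import comb, isclose

def joint(x, p):                 # P[X = x] for independent Bernoulli(p)
    return p ** sum(x) * (1 - p) ** (len(x) - sum(x))

n = 4
x = (1, 0, 1, 0)                 # a particular outcome; here T = x_1 + x_2 = 1
t = x[0] + x[1]

def cond_given_T(x, p):          # P[X = x | T = t], T = X_1 + X_2
    pT = comb(2, t) * p ** t * (1 - p) ** (2 - t)
    return joint(x, p) / pT

def cond_given_S(x, p):          # P[X = x | S = sum of all n observations]
    s = sum(x)
    pS = comb(n, s) * p ** s * (1 - p) ** (n - s)
    return joint(x, p) / pS

# conditional given T still depends on p  ->  T is not sufficient
print(cond_given_T(x, 0.3), cond_given_T(x, 0.6))   # 0.105 vs 0.12
# conditional given the full sum does not ->  the full sum is sufficient
print(cond_given_S(x, 0.3), cond_given_S(x, 0.6))   # both 1/6

# sanity check of the algebra: -E[H(p)] collapses to n/(p(1-p))
for p in (0.1, 0.37, 0.8):
    assert isclose((n * (1 - 2 * p) * p + n * p * p) / (p * p * (1 - p) ** 2),
                   n / (p * (1 - p)))
```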
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving equation with fraction I don't understand how to get from the second to the third step in this equation: $ - \frac { \sqrt { 2 - x ^ { 2 } } - x \left( \frac { - x } { \sqrt { 2 - x ^ { 2 } } } \right) } { \left( \sqrt { 2 - x ^ { 2 } } \right) ^ { 2 } } = - \frac { \sqrt { 2 - x ^ { 2 } } + \frac { x ^ { 2 } } { \sqrt { 2 - x ^ { 2 } } } } { \left( \sqrt { 2 - x ^ { 2 } } \right) ^ { 2 } } = - \frac { \frac { 2 - x ^ { 2 } } { \sqrt { 2 - x ^ { 2 } } } + \frac { x ^ { 2 } } { \sqrt { 2 - x ^ { 2 } } } } { \left( \sqrt { 2 - x ^ { 2 } } \right) ^ { 2 } } = - \frac { 2 } { \left( \sqrt { 2 - x ^ { 2 } } \right) ^ { 3 } } $ Why can we just add $ 2 - x ^ { 2 } $ in the numerator? Step 1 to step 2, as well as step 3 to step 4 is clear to me.
It's simply that for $a>0$, $$\sqrt{a} = \sqrt{a}\cdot\underbrace{\left(\frac{\sqrt{a}}{\sqrt{a}}\right)}_1$$ $$=\frac{\sqrt{a}\cdot\sqrt{a}}{\sqrt{a}}$$ $$=\frac{(\sqrt{a})^2}{\sqrt{a}}$$ $$=\frac{a}{\sqrt{a}}$$ In your case, "$a$" is $2-x^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Is the piecewise-defined function differentiable The function is defined as $$f(x)=\begin{cases}x^2, &\text{ for }x\leq 1\\ \sqrt{x}, &\text{ for }x>1\end{cases}$$ Is this function differentiable at $x=1$? I thought that since $\lim_{x\to 1} f'(x)$ exists, it IS differentiable. And I think this limit does exist, so it should be differentiable. The book says no. My logic must not be correct here.
Taking $\lim\limits_{x\to 1}f'(x)$ is not the definition of differentiability at $x=1$; we must look at the one-sided limits of the difference quotient at $1$. If $f'(1)$ did exist, the left and right hand limits defining the derivative would be equal. If we denote by $f'_-(1)$ and $f'_+(1)$ the left and right derivatives at $x=1$ respectively, we get \begin{align*} f'_-(1) &= 2\\ f'_+(1) &=\frac{1}{2} \end{align*} The one-sided derivatives exist, but they are not equal. Thus, $f$ is not differentiable at $x=1$.
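Numerically, the two one-sided difference quotients at $x=1$ make the failure visible (a throwaway Python check):

```python
f = lambda x: x**2 if x <= 1 else x**0.5

h = 1e-7
left  = (f(1) - f(1 - h)) / h    # backward difference at x = 1
right = (f(1 + h) - f(1)) / h    # forward difference at x = 1
print(left, right)               # ~2 and ~0.5
```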
{ "language": "en", "url": "https://math.stackexchange.com/questions/2906868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Getting $p_y(y) = p_x(g^{-1}(y)) \left| \frac{\partial{x}}{\partial{y}} \right|$ by solving $| p_y(g(x)) \ dy | = | p_x (x) \ dx |$? My textbook has a very brief section that introduces some concepts from measure theory: Another technical detail of continuous variables relates to handling continuous random variables that are deterministic functions of one another. Suppose we have two random variables, $\mathbf{x}$ and $\mathbf{y}$, such that $\mathbf{y} = g(\mathbf{x})$, where $g$ is an invertible, continuous, differentiable transformation. One might expect that $p_y(\mathbf{y}) = p_x(g^{−1} (\mathbf{y}))$. This is actually not the case. As a simple example, suppose we have scalar random variables $x$ and $y$. Suppose $y = \dfrac{x}{2}$ and $x \sim U(0,1)$. If we use the rule $p_y(y) = p_x(2y)$, then $p_y$ will be $0$ everywhere except the interval $\left[ 0, \dfrac{1}{2} \right]$, and it will be $1$ on this interval. This means $$\int p_y(y) \ dy = \dfrac{1}{2},$$ which violates the definition of a probability distribution. This is a common mistake. The problem with this approach is that it fails to account for the distortion fo space introduced by the function $g$. Recall that the probability of $\mathbf{x}$ lying in an infinitesimally small region with volume $\delta \mathbf{x}$ is given by $p(\mathbf{x}) \delta \mathbf{x}$. Since $g$ can expand or contract space, the infinitesimal volume surrounding $\mathbf{x}$ in $\mathbf{x}$ space may have different volume in $\mathbf{y}$ space. To see how to correct the problem, we return to the scalar case. 
We need to preserve the property $$| p_y(g(x)) \ dy | = | p_x (x) \ dx |$$ Solving from this, we obtain $$p_y(y) = p_x(g^{-1}(y)) \left| \dfrac{\partial{x}}{\partial{y}} \right|$$ or equivalently $$p_x(x) = p_y(g(x)) \left| \dfrac{\partial{g(x)}}{\partial{x}} \right|$$ How do they get $p_y(y) = p_x(g^{-1}(y)) \left| \dfrac{\partial{x}}{\partial{y}} \right|$ or equivalently $p_x(x) = p_y(g(x)) \left| \dfrac{\partial{g(x)}}{\partial{x}} \right|$ by solving $| p_y(g(x)) \ dy | = | p_x (x) \ dx |$? Can someone please demonstrate this and explain the steps?
$p_X(x)dx$ represents the probability measure $\mathbb{P}_X$ which is the probability distribution of the random variable $X$; it is defined by its action on measurable positive functions by $$\mathbb{E}(f(X))=\int_{\Omega}f(X)d\mathbb{P}=\int_{\mathbb{R}}f(x)d\mathbb{P}_X(x)=\int_{\mathbb{R}}f(x)p_X(x)dx.$$ Now, we consider a new random variable $Y=g(X)$ (with some conditions on $g$), and we seek $p_Y$, the probability density of $Y$. So we calculate, for an arbitrary measurable positive function $f$, the expectation $\mathbb{E}(f(Y))$ in two ways: First, $$\mathbb{E}(f(Y))=\int_{\mathbb{R}}f(y)\color{red}{p_Y(y)dy}\tag1$$ Second, $$\eqalignno{\mathbb{E}(f(Y))&=\mathbb{E}(f(g(X)))\cr &=\int_{\mathbb{R}}f(g(x))p_X(x)dx\qquad\text{now a change of variables}\cr &=\int_{\mathbb{R}}f(y)\color{red}{p_X(g^{-1}(y))\left|\frac{dx}{dy}\right|dy}&(2) }$$ Now, because $f$ is arbitrary, comparing (1) and (2) we get $$p_Y(y)=p_X(x)\left|\frac{dx}{dy}\right|, \quad\text{where $y=g(x)$.}$$ Or, better $$p_Y(y)=p_X(g^{-1}(y))\left|\frac{1}{g'(g^{-1}(y))}\right|\iff p_Y(g(x))|g'(x)|=p_X(x).$$
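For the textbook's example $y=x/2$ with $x\sim U(0,1)$, a tiny numeric check contrasts the naive rule with the corrected one; midpoint sums stand in for the integrals $\int p_Y(y)\,dy$:

```python
g_inv = lambda y: 2 * y                          # x = g^{-1}(y) for g(x) = x/2
p_x = lambda x: 1.0 if 0 <= x <= 1 else 0.0      # U(0,1) density

# wrong rule:  p_y(y) = p_x(g^{-1}(y))              -> integrates to 1/2
# right rule:  p_y(y) = p_x(g^{-1}(y)) * |dx/dy|, with |dx/dy| = 2
n = 100000
h = 1.0 / n
wrong = sum(p_x(g_inv((i + 0.5) * h)) * h for i in range(n))
right = sum(p_x(g_inv((i + 0.5) * h)) * 2 * h for i in range(n))
print(wrong, right)   # ~0.5 and ~1.0
```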
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute $\lim\limits_{n \to \infty} {\frac{1 \cdot 3 \cdot 5 \cdots(2n - 1)}{2 \cdot 4 \cdot 6 \cdots (2n)}}$ EDIT: @Holo has kindly pointed out that my concept of ln rules used in this question is wrong. However, the intuition behind using the tangent of a curve to find the sum to infinity of a series still stands. Therefore, I won't be editing the post. I tried expanding out the equation from the question and got $${a_{n}} = \frac{1}{2} .\frac{3}{4} . \frac{5}{6} ...\frac{2n -1}{2n}$$ I then tried taking the ln of the equation which works out to $$\ln(1 - \frac{1}{2}) + \ln(1 - \frac{1}{4}) + ...$$ Here is my question. I used the rules of ln functions and took $ln(\frac{1}{1/2}) + ln(\frac{1}{\frac{1}{4}})$ + ... Since ln(1) is 0, shouldn't the limit work out to 0 as n tends to infinity? Then $$ln(L) = 0, where L = limit$$ $$L = e^0$$ $$L = 1$$ However, the limit is actually 0, and my tutor used a method which I couldn't understand as he tried approximating the function $y = ln(x)$ to $y = x - 1$, as he says that the linear equation is actually a tangent to the ln curve. Could someone please explain this intuition behind it? He differentiated the equation, and since the curve cuts the x-axis at x = 1, he got the linear curve $y = x - 1$. He did mention that the linear equation is just an approximation, but said such an approximation would be more than sufficient.
A completely different approach is to write $${a_{n}} = \frac{1}{2} .\frac{3}{4} . \frac{5}{6} ...\frac{2n -1}{2n}=\frac {(2n)!}{(2^nn!)^2}$$ because you can divide a $2$ out of each term on the bottom and get $n!$ and you can multiply top and bottom by the bottom. Now feed it to Stirling $$a_n\approx\frac{(2n)^{2n}e^{2n}}{e^{2n}2^{2n}n^{2n}\sqrt{\pi n}}=\frac 1{\sqrt{\pi n}}\to 0$$
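A quick Python check of the asymptotics, computing $a_n$ by the running product:

```python
import math

def a(n):
    """1/2 * 3/4 * ... * (2n-1)/(2n)."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2 * k - 1) / (2 * k)
    return prod

for n in (10, 100, 1000, 10000):
    print(n, a(n), a(n) * math.sqrt(math.pi * n))  # second ratio -> 1
```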
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
To be concave on $[0,1]$, $f(t)\leq f(0)+tf'(0)$ is enough? Suppose $f(t)>0$. $f(t)\leq f(0)+tf'(0)$ if and only if $f$ is concave over $[0,1]$. Is the above statement true? There must be a counterexample in my opinion.
An obvious counterexample would be $f(t)=\sin(2\pi t)+2$ (the shift by $2$ keeps $f$ strictly positive on all of $[0,1]$), which is neither convex nor concave but fulfills your inequality: $f(0)=2$, $f'(0)=2\pi$, and $\sin(2\pi t)\le 2\pi t$ for $t\ge 0$.
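Here is a short numerical verification; I use $\sin(2\pi t)+2$ rather than $+1$ so that $f$ stays strictly positive at $t=3/4$, as the hypothesis $f(t)>0$ requires:

```python
import math

f = lambda t: math.sin(2 * math.pi * t) + 2   # shifted up to keep f > 0 on [0,1]
f0, df0 = f(0.0), 2 * math.pi                 # f(0) and f'(0)

ts = [i / 1000 for i in range(1001)]
assert all(f(t) > 0 for t in ts)                        # positivity hypothesis
assert all(f(t) <= f0 + t * df0 + 1e-12 for t in ts)    # the tangent inequality

# not concave: the second difference changes sign on [0,1]
d2 = lambda t, h=1e-3: f(t + h) - 2 * f(t) + f(t - h)
print(d2(0.25), d2(0.75))   # negative, positive
```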
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Relation between dilogarithm and its complex conjugate I am looking for a relation between the dilog and its complex conjugate, that is can I simplify the following summation of terms $$f(z) = \text{Li}_2(z) + (\text{Li}_2(z))^*?$$ I have looked through the many identities that are known to exist among such functions on the Wolfram pages but did not find any involving the complex conjugate. If $z>1$ then $\text{Li}_2(z)$ is complex such that the combination $f(z)$ is real so it would be nice if $f(z)$ may be simplified to a dilog with an argument not appearing on the branch cut or something alike.
The real part $\Re \mathrm{Li}_2(x) = \frac{1}{2}f(x)$ for $x>1$ can be computed with the Euler reflection formula (see https://en.wikipedia.org/wiki/Polylogarithm#Dilogarithm) $$\Re \mathrm{Li}_2(x) = \frac{\pi^2}{6} - \mathrm{Li}_2(1-x) - \ln x \ln|1-x|$$ where I have used $\Re \ln(1-x) = \ln|1-x|$ for $x > 1$
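One can verify this numerically in plain Python, computing $\mathrm{Li}_2$ of the negative argument by its defining series and $\Re\,\mathrm{Li}_2(x)$ as the real integral $-\int_0^x \ln|1-t|/t\,dt$ (the log singularity at $t=1$ is integrable). The check below uses $x=2$, where the exact value $\Re\,\mathrm{Li}_2(2)=\pi^2/4$ is known:

```python
import math

def li2_series(y, terms=20000):
    """Li2(y) by its defining series; valid for |y| <= 1."""
    return sum(y ** k / k ** 2 for k in range(1, terms))

def re_li2(x, n=200000):
    """Re Li2(x) for x > 1 as -int_0^x ln|1-t|/t dt (midpoint rule)."""
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h        # midpoints avoid t = 0 and t = 1
        total += math.log(abs(1 - t)) / t
    return -total * h

x = 2.0
lhs = re_li2(x)
rhs = math.pi ** 2 / 6 - li2_series(1 - x) - math.log(x) * math.log(abs(1 - x))
print(lhs, rhs, math.pi ** 2 / 4)   # all ~2.4674, i.e. Re Li2(2) = pi^2/4
```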
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let us say that we are picking 2 letters from a set of 12. How would you describe the sample space? Let's say the first letter is from the following set: {A,B,C,D,E,F}. And the second letter is from the following set: {a,b,c,d,e,f}. Would this be simply $12\choose2$=66? Or would it be $_{12}P_2=132$? I tried to do it out and got $6$ ways to do the capital letter, 6 ways to do the lowercase letter, and then multiplied by 2 since $Aa\neq aA$. This was $6\times 6\times 2=72$ which is not equal to the answers I got using permutations and combinations. Which way is the correct method?
Let $S = \{A, B, C, D, E, F\}$; let $T = \{a, b, c, d, e, f\}$. Then the sample space is $$S \times T = \{(s, t) \mid s \in S, t \in T\}$$ that is, the set of ordered pairs in which the first element is a member of set $S$ and the second element is a member of set $T$. Since there are six choices for the first element and six choices for the second element, the number of such ordered pairs is $6 \cdot 6 = 36$. Note that since the first element of the ordered pair is in set $S$ and the second element of the ordered pair is in set $T$, the outcome $aA$ is not a possible outcome. The number $$\binom{12}{2}$$ represents the number of two-element subsets of a set with twelve elements. You would get $72$ if you considered sequences of one uppercase and one lowercase letter if there were no requirement that the uppercase letter precede the lowercase letter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recognizing a $4\times4$ matrix Suppose I have $$ \Sigma=\begin{pmatrix}\sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22}\end{pmatrix},\quad S=\begin{pmatrix}s_{11} & s_{12} \\ s_{21} & s_{22}\end{pmatrix} $$ both of which are actually covariance matrices of two $2\times 1$ random vectors. So, in particular, $\Sigma$ and $S$ are symmetric. Now I have a $4\times 4$ matrix: $$ A=\begin{pmatrix} \sigma_{11}s_{11} & \sigma_{11}s_{12} & \sigma_{12}s_{11} & \sigma_{12}s_{12} \\ \sigma_{12}s_{11} & \sigma_{12}s_{12} & \sigma_{22}s_{11} & \sigma_{22}s_{12} \\ \sigma_{11}s_{12} & \sigma_{11}s_{22} & \sigma_{12}s_{12} & \sigma_{12}s_{22} \\ \sigma_{12}s_{12} & \sigma_{12}s_{22} & \sigma_{22}s_{12} & \sigma_{22}s_{22} \end{pmatrix} $$ I would like to write $A$ as a function of $\Sigma$ and $S$ (without referring to $\sigma$'s and $s$') using matrix operations ($\otimes$, multiplication, inversion, addition, and maybe something else I cannot think of). How do I do that please?
I might be wrong, but one way to do it may be through combining Kronecker product and Hadamard product: $$ \biggl(\begin {bmatrix} 1 & 1 \end {bmatrix} \otimes S \otimes \begin{bmatrix} 1 \\ 1 \end {bmatrix}\biggr) \circ \biggl(\begin{bmatrix} 1 \\ 1 \end {bmatrix}\ \otimes \Sigma \otimes \begin {bmatrix} 1 & 1 \end {bmatrix}\biggr) $$
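A quick brute-force check of this identity on arbitrary symmetric $2\times2$ matrices; the helpers `kron` and `hadamard` are minimal pure-Python stand-ins for `numpy.kron` and elementwise multiplication:

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def hadamard(A, B):
    """Elementwise (Hadamard) product."""
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

s11, s12, s22 = 2.0, 3.0, 5.0          # arbitrary symmetric S
g11, g12, g22 = 7.0, 11.0, 13.0        # arbitrary symmetric Sigma
S     = [[s11, s12], [s12, s22]]
Sigma = [[g11, g12], [g12, g22]]
row, col = [[1, 1]], [[1], [1]]

left  = kron(kron(row, S), col)        # ([1 1] (x) S) (x) [1;1]
right = kron(kron(col, Sigma), row)    # ([1;1] (x) Sigma) (x) [1 1]
A = hadamard(left, right)

expected = [
    [g11*s11, g11*s12, g12*s11, g12*s12],
    [g12*s11, g12*s12, g22*s11, g22*s12],
    [g11*s12, g11*s22, g12*s12, g12*s22],
    [g12*s12, g12*s22, g22*s12, g22*s22],
]
assert A == expected
print("identity verified")
```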
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $\lim\limits_{s\to0^+}s^z=\lim\limits_{s\to 0^+} e^{z\ln s}=0$ where $z\in\mathbb C$ and $Re(z)>0$ Prove that $\displaystyle\lim_{s\to0^+}s^z=\lim_{s\to 0^+} e^{z\ln s}=0$ where $z\in\mathbb C$ and $Re(z)>0$ Using the $\epsilon-\delta$ definition we have Let $\epsilon>0.$ We have to find $\delta>0$ such that $0<s<\delta$ implies $|e^{z\ln s}|<\epsilon$ I have no idea what to do next, could anyone help please? Maybe a hint? or maybe 2 hints? Note I am not asking for the proof since I know this site doesn't work like that.
Hint: What does the image of $\ln s$ for $s\in(0,\delta)$ look like in the complex plane? How does that vary with $\delta$. What does the image of $z\ln s$ look like, given that $z$ has positive real part? How does that vary with $\delta$? What is then the maximal size of $e^{z\ln s}$, given $\delta$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2907936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Given $\cos^2 x = 2 \sin x \cos x$, why can't I cancel $\cos x$ to get $\cos x = 2 \sin x$? If I have a function where I know $\cos^2 x = 2 \sin x \cos x$. Why can I not cross out $\cos x$ on both sides, because I get different values for $\cos x = 2 \sin x$?
You may not divide both sides of an equation by $0$. So you can handle the problem with case analysis: if $\cos x=0$, the equation holds; if $\cos x\ne0$, the equation can be reduced to $\cos x=2\sin x$. Now you solve the two cases independently.
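A small numeric illustration of the two cases; on $[0,2\pi)$ the solution families are $\cos x=0$ and $\tan x=\tfrac12$:

```python
import math

lhs = lambda x: math.cos(x) ** 2
rhs = lambda x: 2 * math.sin(x) * math.cos(x)

# case cos x = 0:
for x in (math.pi / 2, 3 * math.pi / 2):
    assert abs(lhs(x) - rhs(x)) < 1e-12

# case cos x != 0, reduced to cos x = 2 sin x, i.e. tan x = 1/2:
for x in (math.atan(0.5), math.pi + math.atan(0.5)):
    assert abs(lhs(x) - rhs(x)) < 1e-12

# cancelling cos x blindly loses the first family of solutions:
x = math.pi / 2
print(math.cos(x), 2 * math.sin(x))   # ~0 vs 2: "cos x = 2 sin x" fails here
```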
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If we say "classes of non-zero integers modulo $n$", why does this not include the $0$ class? I suppose this is a bit of a wording question more than anything else - I'm working through group theory and was learning that "the (classes of) non-zero integers modulo $p$ form an Abelian group under multiplication." It's the wording of this that gets me a bit confused. Let's say $p = 3$. From my understanding the group described is meant to contain $\{[1],[2]\}$. I completely get why this would be an Abelian group under multiplication. However, if we're looking at all the non-zero integers modulo $3$, it seems to me like the "non-zero" attribute binds to the integer part only. So it's $\mathbb{Z} - \{0\}$ (which would include $3,-3,6,-6,...$) modulo $3$. Hence, since integer multiples of 3 are included in this set, $[0]$ would also be included. But clearly my interpretation can't be the case since including $[0]$ would mean it's not a multiplicative group. In summary, it's like I'm having trouble with the order of operations here: Interpretation 1: (set of non-zero integers) modulo $p$ Interpretation 2: set of non-zero (integers modulo $p$) So am I supposed to interpret it like interpretation 2? Since it seems like interpretation 1 would include $[0]$.
I think the "blocking" isn't and shouldn't be "[non-zero integers] [modulo $p$]" but rather "[non-zero][integers modulo $p$]" You have three [integers modulo $3$]. They are $[53], [-216]$ and $[3{,}691]$. $[53]$ and $[3{,}691]$ are non-zero [integers modulo $3$]. And $[-216]$ is not a non-zero [integer modulo $3$].
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
integral of $\int_{0}^{2014}{\frac{\sqrt{2014-x}}{\sqrt{x}+\sqrt{2014-x}}dx} $ Solve $\int_{0}^{2014}{\frac{\sqrt{2014-x}}{\sqrt{x}+\sqrt{2014-x}}dx} $. Answer is 1007. I tried multiplying $\sqrt{x}-\sqrt{2014-x}\;$, which results in $\frac{\sqrt{2014-x}(\sqrt{x}-\sqrt{2014-x})}{2x-2014}=$$\frac{\sqrt{2014x-x^2}}{2x-2014}-... \\ =\frac{\sqrt{2014/x-1}}{2-2014/x}-... $ I got stuck so I tried substituting $u=2014-x$, thus $\int_{0}^{2014}{\frac{u}{\sqrt{2014-u}+u}}du=... ?$ I found the value of 1007 using the value of integrand at x=0, 1007 and 2014. But cannot solve integral. How can it be solved?
Let $$A=\int_{0}^{2014}\frac{\sqrt{2014-x}}{\sqrt{x}+\sqrt{2014-x}}\,dx.$$ Substituting $u=2014-x$ gives $$A=\int_{2014}^{0}\frac{\sqrt{u}}{\sqrt{u}+\sqrt{2014-u}}\,(-du)=\int_{0}^{2014}\frac{\sqrt{u}}{\sqrt{u}+\sqrt{2014-u}}\,du.$$ Adding the two expressions for $A$ makes the integrands sum to $1$, so $$2A=\int_0^{2014}dx=2014\implies A=1007.$$
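The symmetry trick is easy to confirm numerically (midpoint rule in plain Python):

```python
import math

def integrand(x):
    return math.sqrt(2014 - x) / (math.sqrt(x) + math.sqrt(2014 - x))

n = 200000
h = 2014.0 / n
est = sum(integrand((i + 0.5) * h) for i in range(n)) * h
print(est)   # ~1007

# symmetry: integrand(x) + integrand(2014 - x) = 1
assert abs(integrand(500.0) + integrand(1514.0) - 1.0) < 1e-12
```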
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Monotonicity of the function $(1+x)^{\frac{1}{x}}\left(1+\frac{1}{x}\right)^x$. Let $f(x)=(1+x)^{\frac{1}{x}}\left(1+\frac{1}{x}\right)^x, 0<x\leq 1.$ Prove that $f$ is strictly increasing and $e<f(x)\leq 4.$ In order to study the Monotonicity of $f$, let $$g(x)=\log f(x)=\frac{1}{x}\log (1+x)+x\log \left(1+\frac{1}{x}\right).$$ And $f$ and $g$ has the same Monotonicity. By computation, $$g'(x)=\frac{1}{x^2}\left(\frac{x}{1+x}-\log (1+x)\right)+\log \left(1+\frac{1}{x}\right)-\frac{1}{1+x}.$$ As we know $\frac{x}{1+x}-\log (1+x)\leq 0$ and $\log \left(1+\frac{1}{x}\right)-\frac{1}{1+x}\geq 0$. So it does not determine the sign of $g'(x)$. If we compute the second derivative $g''(x)$, you will find it is also difficult to determine the sign of $g''(x)$. Our main goal is to prove $$\frac{1}{x^2}\left(\frac{x}{1+x}-\log (1+x)\right)+\log \left(1+\frac{1}{x}\right)-\frac{1}{1+x}>0.$$ Is there some tricks to prove this result. Any help and hint will welcome.
Michael Rozenberg's answer has it all. Here are two remaining proofs in Michael Rozenberg's answer: * *Show: $\ln{x}\leq\frac{2(x-1)}{1+x}$ We have $\ln(1+y)\leq\frac{2y}{{2+y}}$ for $-1<y<0$ (see e.g. in https://en.wikipedia.org/wiki/List_of_logarithmic_identities#Inequalities), i.e. $\ln(x)\leq\frac{2(x-1)}{{1+x}}$ for $0<x<1$. This is exactly what needs to be shown. *Show: $\ln(1+x)\leq\frac{x(2x+1)}{(1+x)^2}$ We have $\ln(1+x)\leq\frac{x}{\sqrt{1+x}}$ (see e.g. in https://en.wikipedia.org/wiki/List_of_logarithmic_identities#Inequalities), so we may prove $\frac{1}{\sqrt{1+x}}\leq\frac{2x+1}{(1+x)^2}$ or $(1+x)^3-{(1+2x)^2}\leq 0$ or $x^2 - x - 1<0$ or $x( 1-x) + 1>0$ which is true since $1-x \ge 0$.
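A numerical check of the claimed behavior on a grid of $(0,1]$ (not a proof, just a sanity test of monotonicity and the bounds $e<f(x)\le 4$):

```python
import math

def f(x):
    return (1 + x) ** (1 / x) * (1 + 1 / x) ** x

xs = [i / 1000 for i in range(1, 1001)]    # grid over (0, 1]
vals = [f(x) for x in xs]

assert all(a < b for a, b in zip(vals, vals[1:]))   # strictly increasing
assert all(v > math.e for v in vals)                # lower bound e
assert vals[-1] == f(1.0) == 4.0                    # maximum 4, attained at x = 1
print(vals[0], vals[-1])   # close to e at the left end, exactly 4 at x = 1
```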
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
An Induction Problem, What Am I Supposed To Prove? I have encountered an induction problem which I don't understand. What I don't understand is what it is asking me to prove. I don't want a solution. The problem is: If $u_1=5$ and $u_{n+1}=2u_n-3(-1)^n$, then $u_n=3(2^n)+(-1)^n$ for all positive integers. Am I supposed to prove $u_{n+1}=2u_n-3(-1)^n$ or $u_n=3(2^n)+(-1)^n$ is true for all positive integers?
You are supposed to prove $u_n=3(2^n)+(-1)^n$. $u_1=5$ and $u_{n+1}=2u_n-3(-1)^n$ are the conditions you are supposed to make use of.
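Having identified what to prove, a brute-force check (my addition) confirms that the closed form agrees with the recurrence for many $n$, which is reassuring before attempting the induction.

```python
def u_closed(n):
    # the formula to be proved: u_n = 3 * 2^n + (-1)^n
    return 3 * 2 ** n + (-1) ** n

def u_from_recurrence(N):
    # the given data: u_1 = 5 and u_{n+1} = 2 u_n - 3 (-1)^n
    us = [5]
    for n in range(1, N):
        us.append(2 * us[-1] - 3 * (-1) ** n)
    return us

match = all(u_from_recurrence(60)[n - 1] == u_closed(n) for n in range(1, 61))
```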
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Maximum value of $ab+bc+ca$ given that $a+2b+c=4$ Question: Find the maximum value of $ab+bc+ca$ from the equation, $$a+2b+c=4$$ My method: I tried making quadratic equation(in $b$) and then putting discriminant greater than or equal to $0$. It doesn't help as it yields a value greater than the answer. Thanks in advance for the solution.
Without using calculus: Substituting $c=4-2b-a$, we get $$ab+bc+ca=ab+(a+b)(4-2b-a)=(4(a+b)-(a+b)^2)-b^2$$ and since $f(x)=4x-x^2=4-(x-2)^2$ has maximum at $(2,4)$, substituting $x:=a+b$ gives $$ab+bc+ca\le4-b^2\le4.$$
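A grid search over the constraint surface (my addition, purely as a sanity check) agrees: the maximum is $4$, attained e.g. at $a=c=2$, $b=0$.

```python
def objective(a, b):
    c = 4 - a - 2 * b            # enforce the constraint a + 2b + c = 4
    return a * b + b * c + c * a

grid = [i / 10 for i in range(-100, 101)]      # -10.0 ... 10.0 in steps of 0.1
best = max(objective(a, b) for a in grid for b in grid)
```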
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
$\int_{\mathbb R} \frac{1}{(1+ |t-y|)^r} \|f(y)\|_{L^1} dy \leq \frac{C}{r-1} \|f\|_{C(\mathbb R, L^1(\mathbb R))}$ for some $r>0$? Let $f:\mathbb R \to L^1(\mathbb R):t\mapsto f(t)\in L^1(\mathbb R).$ [So $f(t):\mathbb R \to \mathbb C: x\mapsto f(t)(x)$] Assume that $f$ is continuous with time variable $t$. Assume that $f$ is nice For $t>0,$ Can we say $\int_{\mathbb R} \frac{1}{(1+ |t-y|)^r} \|f(y)\|_{L^1} dy \leq \frac{C}{r-1} \|f\|_{C(\mathbb R, L^1(\mathbb R))}$ for some $r>0$? Where $C(\mathbb R, L^1(\mathbb R))$ denotes class of continuous functions $f:\mathbb R \to L^1(\mathbb R).$
By the norm on the continuous function mapping $\mathbb{R} \to L^1(\mathbb{R})$ I assume you mean the maximum-norm, i.e. $$ \|f\|_{C(\mathbb{R},L^1(\mathbb{R}))} = \sup_{t\in \mathbb{R}} \|f(t)\|_{L^1(\mathbb{R})} $$ If that is the case, then the answer to your question is yes. Indeed, for any $t\in \mathbb{R}$ and $r > 1$ one has the estimate: $$\int_{\mathbb R} \frac{1}{(1+ |t-y|)^r} \|f(y)\|_{L^1} dy \leq \frac{2}{r-1} \|f\|_{C(\mathbb R, L^1(\mathbb R))}.$$
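The constant $\frac{2}{r-1}$ comes from $\int_{\mathbb R}(1+|u|)^{-r}\,du=\frac{2}{r-1}$ for $r>1$. A numeric spot check (mine, not from the answer) for $r=3$, where the exact value is $1$:

```python
def kernel_mass(r, L=200.0, n=400_000):
    # midpoint approximation of the integral of (1 + |u|)^(-r) over [-L, L];
    # the discarded tail is of order (1 + L)^(1 - r), negligible for r = 3
    h = 2 * L / n
    return h * sum((1 + abs(-L + (i + 0.5) * h)) ** (-r) for i in range(n))

approx = kernel_mass(3.0)        # exact full-line value: 2 / (3 - 1) = 1
```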
{ "language": "en", "url": "https://math.stackexchange.com/questions/2908892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inequality Involving $\int_V \left(\int_{B(x,\epsilon)} \eta_{\epsilon}(x-y) |f(y)|^p dy\right) dx $ I was reading the book "Partial Differential Equation" written by Lawrence C. Evans, coming up with a question. On page 718, Evans wrote $$\int_V \left(\int_{B(x,\epsilon)} \eta_{\epsilon}(x-y) |f(y)|^p dy\right) dx \\ \leq \int_W |f(y)|^p \left(\int_{B(y,\epsilon)} \eta_{\epsilon}(x-y) dx\right) dy$$ Where $V,W,U$ are open sets, $V\subset\subset W\subset \subset U$, and $\eta_{\epsilon}(x) := \frac{1}{\epsilon^n} \eta(\frac{x}{\epsilon})$, $\eta$ is the standard mollifier, $f \in L_{loc}^p(U)$ I want to ask why the inequality holds. I was trying to prove it so hard but I have no ideas.
This is an application of Fubini's theorem, followed by the fact that the integrand is positive and $V$ is a subset of $W$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Stuck On A Proof By Induction I need to prove true for all integers greater than and equal to 1 using induction. I'll skip the base case, and the inductive assumption, and jump straight to the inductive step: = What I've done now is to say that is less than . But I don't know what to do from beyond there.
One way by induction (as opposed to recognizing the inequality as just AM-GM for $1,2,\ldots,n\,$):   write the inequality to prove as $\,\color{blue}{2^n n! \le (n+1)^n}\,$, and take this to be the inductive assumption. Then, to prove the inductive step for $\,n+1\,$: $$ 2^{n+1} (n+1)! = 2(n+1) \cdot \color{blue}{2^n n!} \;\;\le\;\; 2(n+1)\cdot\color{blue}{(n+1)^n} = 2(n+1)^{n+1} $$ To complete the inductive step, it is sufficient to show that the RHS is: $$ 2(n+1)^{n+1} \le (n+2)^{n+1} \;\;\iff\;\; \left(\frac{n+2}{n+1}\right)^{n+1} \ge 2 \;\;\iff\;\;\left(1 + \frac{1}{n+1}\right)^{n+1} \ge 2 $$ But the latter holds true by Bernoulli's inequality, which concludes the proof. [ EDIT ]   To note that this particular case (positive integer exponent and $\ge1$ base) does not require the full power of Bernoulli's inequality, and the result can be simply derived from the binomial expansion $\,\left(1 + \frac{1}{n+1}\right)^{n+1}= 1 + \binom{n+1}{1}\cdot \frac{1}{n+1}+\ldots \ge 1 + (n+1)\cdot \frac{1}{n+1} = 2\,$.
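Since both sides are integers, the inequality $2^n\,n!\le(n+1)^n$ can be verified in exact arithmetic for many $n$ (my addition); note it is an equality at $n=1$.

```python
import math

holds = all(2 ** n * math.factorial(n) <= (n + 1) ** n for n in range(1, 300))
tight_at_1 = (2 ** 1 * math.factorial(1) == (1 + 1) ** 1)   # equality at n = 1
```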
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $f:[a,b]\to\Bbb{R}$ be continuous. Does $\max\{|f(x)|:a\leq x\leq b\}$ exist? Let $f:[a,b]\to\Bbb{R}$ be continuous. Does \begin{align}\max\{|f(x)|:a\leq x\leq b\} \end{align} exist? MY WORK I believe it does and I want to prove it. Since $f:[a,b]\to\Bbb{R}$ is continuous, then $f$ is uniformly continuous. Let $\epsilon> 0$ be given, then $\exists\, \delta>$ such that $\forall x,y\in [a,b]$ with $|x-y|<\delta,$ it implies $|f(x)-f(y)|<\epsilon.$ Then, for $a\leq x\leq b,$ \begin{align} f(x)=f(b)+[f(x)-f(b)]\end{align} \begin{align} |f(x)|\leq |f(b)|+|f(x)-f(b)|\end{align} \begin{align} \max\limits_{a\leq x\leq b}|f(x)|\leq |f(b)|+\max\limits_{a\leq x\leq b}|f(x)-f(b)|\end{align} I am stuck at this point. Please, can anyone show me how to continue from here?
Because $[a,b]$ is compact, every sequence in $[a,b]$ has a subsequence that limits to a point in $[a,b]$. Pick a sequence $x_n \in [a,b]$ such that $\lim_{n \rightarrow \infty} |f(x_n)| = \sup_{[a,b]}|f|$. Now get a convergent subsequence $x_{n_k}$ that converges to some $x \in [a,b]$. By continuity of $|f|$, $|f(x)| = |f( \lim_{k\rightarrow \infty}x_{n_k})| = \lim_{k\rightarrow \infty} |f(x_{n_k})| = \lim_{n \rightarrow \infty} |f(x_n)| = \sup_{[a,b]}|f|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Find the probability $P(X<Y)$ Suppose $X$ and $Y$ are two independent Poisson $(\lambda)$ random variables. Find $P(X<Y)$. My attempt $:$ \begin{align*}P(X<Y) &= \sum\limits_{x=0}^{\infty} \sum\limits_{y=x+1}^{\infty} P(X=x,Y=y) \\ &= \sum\limits_{x=0}^{\infty} \sum\limits_{y=x+1}^{\infty} P(X=x) P(Y=y) \\ &= e^{-2\lambda} \sum\limits_{x=0}^{\infty} \frac {{\lambda}^x} {x!} \sum\limits_{y=x+1}^{\infty} \frac {{\lambda}^y} {y!} \\ &= e^{-2\lambda} \sum\limits_{x=0}^{\infty}\left( \frac{e^{\lambda}\lambda^x}{x!} - \frac{\lambda^x}{x!} \sum\limits_{y=0}^x \frac {\lambda^y}{y!} \right) \\ &= 1 - e^{-2\lambda} \sum\limits_{x=0}^{\infty} \left( {{\frac {{\lambda}^x} {x!}} \sum\limits_{y=0}^{x} {\frac {{\lambda}^y} {y!}}} \right) \end{align*} Now how do I calculate $$\sum\limits_{x=0}^{\infty} \left( \frac {{\lambda}^x} {x!} \sum\limits_{y=0}^{x} \frac {{\lambda}^y} {y!} \right)$$ Please help me in this regard. Thank you very much.
For the special case of $X$ and $Y$ being identically distributed, you have $$P(X < Y) + P(Y < X) + P(Y = X) = 1$$ $$2 P(X < Y) + P(X = Y) = 1$$ $$ P( X < Y) = 1/2 (1 - P(X = Y))$$ So it reduces to computing $P(X =Y)$ whose computation appears here Probability that two independent Poisson random variables with same paramter are equal
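A truncated-sum check (my addition; $\lambda$ and the cutoff are arbitrary test values) confirms the symmetry identity $P(X<Y)=\frac{1}{2}(1-P(X=Y))$ numerically:

```python
import math

def pois(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, N = 1.5, 60      # cutoff N chosen so the neglected tail is negligible

p_less = sum(pois(x, lam) * pois(y, lam)
             for x in range(N) for y in range(x + 1, N))
p_equal = sum(pois(k, lam) ** 2 for k in range(N))
```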
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Matrix eigenvalues Consider the matrix $$A_n=\begin{bmatrix} a & b & 0 & 0 & 0 & \dots & 0 & 0 & 0 \\ c & a & b & 0 & 0 & \dots & 0 & 0 & 0 \\ 0 & c & a & b & 0 & \dots & 0 & 0 & 0 \\ 0 & 0 & c & a & b & \dots & 0 & 0 & 0 \\ 0 & 0 & 0 & c & a & \dots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \dots & a & b & 0 \\ 0 & 0 & 0 & 0 & 0 & \dots & c & a & b \\ 0 & 0 & 0 & 0 & 0 & \dots & 0 & c & a \end{bmatrix}_{n\times n}$$ The matrix with $a=2$ and $b=c=-1$ is encountered in finite difference discretization of $u_{xx}.$ (a) If $D_n = \det(A_n),$ show that $D_n = aD_{n-1}-bcD_{n-2}.$ (b) Solve the recurrence analytically to obtain $D_n$ as a function of $n.$ (and ofcourse $D_n$ will also depend on $a, b, c.$) (c) Obtain the eigenvalues of $A_n.$ (Hint: Replace $a$ by $a-\lambda$)$$ $$ $$ $$(a)Part can be shown easily by just simple Laplace expansion. (b)We see that $D_0=1, D_1=a$. Let $D_n=r^n$ be a solution of the recurrence relation \begin{equation} D_n=aD_{n-1}-bcD_{n-2} \end{equation} Then characteristic equation corresponding to (1) \begin{alignat*}{3} &\quad & r^n-ar^{n-1}+bcr^{n-2} &=0 \\&\implies &r^2-ar+bc &=0 \\&\implies &r_1=\tfrac{a-\sqrt{a^2-4bc}}{2}, r_2 &=\tfrac{a+\sqrt{a^2-4bc}}{2} \end{alignat*}$ $ Case 1: $a^2-4bc=0$ $r_1=r_2=\frac{a}{2}$ General solution of (1) : $D_n=(C_1+nC_2)(\frac{a}{2})^n$, where $C_1$ and $C_2$ are arbitrary constants. For $n=0$, we get $C_1=D_0=1$. For $n=1$, we get $(C_1+C_2)\frac{a}{2}=D_1=a\implies C_2=1$ Hence $D_n=(1+n)(\frac{a}{2})^n$ $$ $$Case 2: $a^2-4bc\neq0$ General solution of (1) : $D_n=C_1r_1^n+C_2r_2^n$, with where $C_1$ and $C_2$ are arbitrary constants. 
For $n=0$, we get $C_1+C_2=D_0=1$ For $n=1$, we get $C_1r_1+C_2r_2=D_1=a\implies (C_1+C_2)\frac{a}{2}+(C_2-C_1)\frac{\sqrt{a^2-4bc}}{2}=a \implies 2C_2-1=\frac{a}{\sqrt{a^2-4bc}} \implies C_2=\frac{r_2}{\sqrt{a^2-4bc}}$ $\therefore C_1=\frac{-r_1}{\sqrt{a^2-4bc}}$ Hence $D_n=\frac{r_2^{n+1}-r_1^{n+1}}{\sqrt{a^2-4bc}}=\frac{1}{2^{n+1}\sqrt{a^2-4bc}}[(a+\sqrt{a^2-4bc})^{n+1}-(a-\sqrt{a^2-4bc})^{n+1}]$ $$------------------------------------$$ I have done this far, but I'm stuck now. Is there any simpler expression for $D_n$? How to obtain eigenvalues, if we consider replacing $a$ by $a-\lambda$?
You got questions (a) and (b) already. For (c) the eigenvalues, you need the characteristic equation $\det (A_n - \lambda I) = 0$. This is the same as $D_n = \det (A_n) = 0$, if in there $a$ is replaced by $a-\lambda$. From your result, $$ 0 = D_n({\rm a \; replaced}) =\frac{1}{2^{n+1}\sqrt{(a-\lambda)^2-4bc}}[(a-\lambda+\sqrt{(a-\lambda)^2-4bc})^{n+1}-(a-\lambda-\sqrt{(a-\lambda)^2-4bc})^{n+1}]$$ i.e. for $(a-\lambda)^2-4bc \ne 0$ (denominator $\ne 0$) we have $$ (a-\lambda+\sqrt{(a-\lambda)^2-4bc})^{n+1}=(a-\lambda-\sqrt{(a-\lambda)^2-4bc})^{n+1}$$ or (be careful to obtain all the roots in $\sqrt[n+1]{1}$) $$ a-\lambda+\sqrt{(a-\lambda)^2-4bc}=(a-\lambda-\sqrt{(a-\lambda)^2-4bc})\exp{(2\pi i k/(n+1))}$$ for $k = 0,1,\cdots,n$. Indexing the $\lambda_k$ with $k$, you get the results. E.g. $\lambda_0 = a \pm 2 \sqrt{bc}$ but that contradicts the above condition $(a-\lambda)^2-4bc \ne 0$. Since $k=0$ is excluded, the general result is $\lambda_k = a \pm 2 \sqrt{bc} \cos(\frac{\pi k}{n+1})$ for $k = 1,2,\cdots,n$. Since $\cos(x) = -\cos(\pi -x)$, one of the two signs in $\pm$ actually suffices: $\lambda_k = a - 2 \sqrt{bc} \cos(\frac{\pi k}{n+1}) = a + 2 \sqrt{bc} \cos(\pi - \frac{\pi k}{n+1}) \\= a + 2 \sqrt{bc} \cos(\frac{\pi (n+1-k)}{n+1}) =a + 2 \sqrt{bc} \cos(\frac{\pi m}{n+1}) $ where $1 \le m = n+1-k \le n$, so the results with the positive sign are reproduced with the same range of the counting variable $m$. We show the general result by plugging $\lambda_k = a + 2 \sqrt{bc} \cos(\frac{\pi k}{n+1})$ (plugging $\lambda_k = a - 2 \sqrt{bc} \cos(\frac{\pi k}{n+1})$ works as well ) in the determining equation for the eigenvalues. 
Indeed $$ 2 \sqrt{bc} [\cos(\frac{\pi k}{n+1})+\sqrt{\cos^2(\frac{\pi k}{n+1})-1}]=2 \sqrt{bc} [\cos(\frac{\pi k}{n+1})-\sqrt{\cos^2(\frac{\pi k}{n+1})-1}]\exp{(2\pi i k/(n+1))}$$ or $$ \cos(\frac{\pi k}{n+1}) + i \sin(\frac{\pi k}{n+1})=[\cos(\frac{\pi k}{n+1}) - i \sin(\frac{\pi k}{n+1})]\exp{(2\pi i k/(n+1))}$$ or $$ \exp{(\pi i k/(n+1))}=\exp{(-\pi i k/(n+1))}\exp{(2\pi i k/(n+1))}$$ which is an identity. By the way, technically, what you have here is a tridiagonal Toeplitz matrix, where references can be found easily.
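As a numerical cross-check (my addition), the eigenvalue formula can be tested against the determinant recurrence from part (a), with $a$ replaced by $a-\lambda$. Below I use the finite-difference case $a=2$, $b=c=-1$ mentioned in the question (so $bc=1>0$; I take the $-$ sign form of $\lambda_k$, which gives the same set as the $+$ form by the symmetry noted above). The check that $D_7=8$ also confirms the case-1 formula $D_n=(n+1)(a/2)^n$.

```python
import math

def det_rec(alpha, b, c, n):
    # part (a): D_n = alpha * D_{n-1} - b*c*D_{n-2}, with D_0 = 1, D_1 = alpha
    d_prev, d_cur = 1.0, alpha
    for _ in range(2, n + 1):
        d_prev, d_cur = d_cur, alpha * d_cur - b * c * d_prev
    return d_cur if n >= 1 else d_prev

a, b, c, n = 2.0, -1.0, -1.0, 7
eigs = [a - 2 * math.sqrt(b * c) * math.cos(math.pi * k / (n + 1))
        for k in range(1, n + 1)]
residuals = [abs(det_rec(a - lam, b, c, n)) for lam in eigs]   # all near 0
```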
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Find the volume bounded above the sphere $r=2a\cos\theta$ and below the cone $\phi=\alpha$, where $0<\alpha<\frac{\pi}{2}$ Find the volume bounded above the sphere $r=2a\cos\theta$ and below the cone $\phi=\alpha$, where $0<\alpha<\frac{\pi}{2}.$ I'm supposed to use triple integrals in spherical coordinates to solve it. The image I have in my mind is as such After some thinking I thought that the volume of the cone may be expressed as such (please note that the horizontal line represents the x-y plane, I forgot to include it in the picture): $$\int^\frac{\pi}{2}_b \int^{2a}_0\int^{2\pi}_0 r^2\sin\theta\, d\phi\, dr \,d\theta$$ after transformation in spherical coordinates. Is it correct? I'm not confident in this answer. The most troublesome part is the spherical cap on top. I know there's a formula for it in Cartesian coordinates, but I have absolutely no clue on how to find its volume via integration in spherical coordinates. [FYI: The answer is $4\pi a^3(1-\cos^4\alpha)$] Also, I have tried plain old geometry, ie, finding the volume of the sphere, then subtracting off the volume of the cone and spherical camp through their usual formulas, but I get some weird expression like $\frac{4}{3}\pi a^3 (2\sin^2\alpha(\sin^2\alpha-\cos^2\alpha)+1-6\sin^5\alpha \cos\alpha).$ If the diagram is wrong, please do let me know.
HINT I would use cylindrical coordinates with * *$x=r\cos \theta$ *$y=r\sin \theta$ *$z=z$ and * *$0\le \theta \le 2\pi$ *$2a \cos^2 \alpha\le z\le 2a $ *$0\le r \le 2a \cos \alpha\sin \alpha$ *$dV=r\,dz\, dr\, d\theta$ that is $$V=\int_0^{2\pi} d\theta\int^{2a}_{2a \cos^2 \alpha}\, dz\int^{2a \cos \alpha\sin \alpha}_0 r\,dr$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that the sequence $\left \{ \frac{x_{n}}{y_{n}} \right \} \rightarrow \frac{x}{y}$ Suppose $\left \{ x_{n} \right \}$ and $\left \{ y_{n} \right \}$ converge to the limits $x$ and $y$, respectively. Also, suppose that $y_{n}$'s are nonzero. I want to show that the sequence $\left \{ \frac{x_{n}}{y_{n}} \right \} \rightarrow \frac{x}{y}$. Let $\epsilon >0$. We can pick an $N$ such that $n \geq N$ $\Rightarrow \left | \frac{x}{y} - \frac{x_{n}}{y_{n}}\right | < \epsilon$. I start from $$ \left | \frac{x}{y} - \frac{x_{n}}{y_{n}}\right |$$ $$=\left | \frac{x}{y} - \frac{x}{y_{n}} + \frac{x}{y_{n}} - \frac{x_{n}}{y_{n}}\right | $$ $$= \left | \frac{x\left (y_{n}- y \right )}{y y_{n}} + \frac{\left ( x-x_{n} \right )}{y_{n}}\right | $$ $$\leq \left | \frac{x}{y} \right | \frac{1}{\left | y_{n} \right |} \left | y-y_{n} \right | + \frac{1}{|y_{n}|}\left | x-x_{n} \right | $$ by triangle inequality. I know that I want to make the two terms $\frac{\epsilon}{2}$ + $\frac{\epsilon}{2}$. Since $y_{n}$ converges to $y$, can make $\left | y - y_{n} \right |< \frac{\epsilon \left | y \right |}{\left ( \left | x \right | + 1\right )}$ for some $N_{1}$. But the $y_{n}$ term is giving me a problem, which I can't get rid off. Any suggestions on how to proceed from here (or approaching the problem from a different angle) would be greatly be appreciated.
You can proof $$\lim_{n\to\infty}\frac{1}{y_n}=\frac{1}{y}$$ and then use the product rule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proving that a function is continuously differentiable using decay of fourier series Let $\mathbb{S}^1=\mathbb{R}\backslash\mathbb{Z}$. Let $\alpha$ be an irrational number, and consider the equation $$g(x+\alpha)-g(x)=p(x), x\in \mathbb{S}^1$$ for an unknown function $g$, with a given function $p\in C^\infty(\mathbb{S}^1)$, such that $$\int_{\mathbb{S}^1} p(x)dx=0$$ Give a condition on $\alpha$ that guarantees $g\in C^1(\mathbb{S}^1)$ for any such function $p$. $\textbf{Thoughts}$ Using Fourier series I was able to deduce that $$\hat{g}(n)=\frac{\hat{p}(n)}{e^{in\alpha}-1}, n \ne 0$$ I was thinking to prove that $g$ is continuously differentiable it might be enough to prove that $\{n\hat{g}(n)\}$ is absolutely summable. We also have arbitrary decay for $\hat{p}(n)$ in that sense for any $k>0$ $\hat{p}(n)\leq \frac{C_k}{n^k}$ Although I am a bit concerned about the choice of $\alpha$ since for irrational $\alpha$, $\{n\alpha\}$ is equidistributed so we can have a subsequence converging to 1. Perhaps there's a way out. Any help is appreciated.
You need to use the fact that $|e^{2 \pi i \alpha n} - 1|$ is comparable to the infimum over all natural numbers $m$ of $$ |m - \alpha n | = n \, \Big| \frac{m}{n} - \alpha \Big| $$ That quantity measures how well your number can be approximated by rational numbers. You want to have some lower bound of the form: $$ \Big| \frac{m}{n} - \alpha \Big| > \frac{C}{n^d}. $$ That holds for algebraic numbers of degree $d$, if I recall correctly. See the Wikipedia article on Liouville's theorem.
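To make "comparable" concrete (my addition): $|e^{2\pi i\theta}-1|=2|\sin\pi\theta|$, and if $\|\theta\|$ denotes the distance from $\theta$ to the nearest integer, then $4\|\theta\|\le 2|\sin\pi\theta|\le 2\pi\|\theta\|$. A numeric spot check with $\alpha=\sqrt2$ as a stand-in irrational:

```python
import cmath, math

def circle_gap(theta):
    return abs(cmath.exp(2j * math.pi * theta) - 1)   # = 2 |sin(pi theta)|

def dist_to_nearest_int(theta):
    return abs(theta - round(theta))

alpha = math.sqrt(2)
ok = all(
    4 * dist_to_nearest_int(alpha * n) - 1e-9
    <= circle_gap(alpha * n)
    <= 2 * math.pi * dist_to_nearest_int(alpha * n) + 1e-9
    for n in range(1, 2000)
)
```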
{ "language": "en", "url": "https://math.stackexchange.com/questions/2909902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How many roots does the equation $z^{2018}=2018^{2018}+i$ have? Consider the equation $$ z^{2018}=2018^{2018}+i$$ where $i=\sqrt{-1}$. How many complex solutions as well as real solutions does this equation have? My attempt: I took the polar form as the equation has very difficult to handle when using $z=x+iy$. So I set $z=re^{iθ}$, which yields $$ (re^{iθ})^{2018}=2018^{2018}+e^{i\frac{\pi}{2}}$$ After this I was not able to handle it.
There are several red herrings in the question and, as the problem is stated, you don't need to describe the solutions. Your problem can be generalized to $z^n=a+i$, where $n$ is a positive integer and $a$ is real. Clearly, $z$ cannot be real, nor can $a+i$ be zero. Thus the solutions are all complex (not real) and they are the $n$-th roots of $a+i$. There are $n$ of them.
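A small-scale numeric illustration (mine; I use $n=12$ and $a=5$ in place of $2018$ and $2018^{2018}$, since only the principle matters): the $n$ roots of $z^n=a+i$ all satisfy the equation and none is real.

```python
import cmath, math

def nth_roots(w, n):
    # the n-th roots of w: |w|^(1/n) * exp(i (arg w + 2 pi k) / n)
    r, phi = abs(w), cmath.phase(w)
    return [r ** (1.0 / n) * cmath.exp(1j * (phi + 2 * math.pi * k) / n)
            for k in range(n)]

n, a = 12, 5.0
roots = nth_roots(a + 1j, n)
errors = [abs(z ** n - (a + 1j)) for z in roots]   # all tiny
```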
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
How to apply limit properties in this case? I was asked to find the following limit: $$ \lim_{x \to 1} \frac{1 - \sqrt x}{1 - x} $$ I worked it out using direct substitution that the limit is $\frac{1}{2}$. Initially I was trying a more algebraic approach, finding separately the limits of $$ \lim_{x \to 1} 1 - \sqrt x = 0 $$ $$ \lim_{x \to 1} 1 - x = 0 $$ And then applying limit properties for division and multiplication: $$ \lim_{x \to a} (f \cdot g)(x) = l \cdot m $$ $$ \lim_{x \to a} (\frac{1}{g})(x) = \frac{1}{m} $$ But that doesn't work, since the limit in the denominator will $0$. So besides direct substitution, the limit properties are of no use in this case when there is a $0$ in the denominator?
Since the function involves a square root of $x$ we tacitly assume $x >0$ and from $x\to 1$ we assume that $x\neq 1$. Then we may use $1- x = (1 - \sqrt x)(1 + \sqrt x)$. It follows that $$ \lim_{x \to 1} \frac{1 - \sqrt x}{1 - x} = \lim_{x \to 1} \frac{1 - \sqrt x}{(1 - \sqrt x)(1 + \sqrt x)} = \lim_{x \to 1} \frac{1}{1 + \sqrt x} = \frac{1}{2} $$
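Numerically (my addition), values of the quotient near $x=1$ do approach $\frac12$, consistent with the factorization:

```python
def g(x):
    return (1 - x ** 0.5) / (1 - x)

near_one = [g(1 + h) for h in (1e-3, -1e-3, 1e-4, -1e-4, 1e-5, -1e-5)]
```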
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Sum of all powers of two Prove that for any positive integer $n$, there exists a nonnegative integer $k$ with the property that $n$ can be written as a sum of the numbers $2^0,2^1,\dots,2^k$, each appearing once or twice. It seems that we should begin with the canonical representation of $n$ as a sum of powers of two. To make sure that every powers of two appears, we will need to "break down" some powers of two to fill in the gaps.
Basically, as you say, fill in the gaps. We can always write a positive integer $n$ as a sum of powers of $2$ using the binary expansion: $$n = \delta_0 2^0 + \delta_1 2^1 + \ldots + \delta_k 2^k,$$ where $\delta_i \in \lbrace 0, 1\rbrace$. Take the least $m$ such that $\delta_m = 0$, and consider the least $l > m$ such that $\delta_l = 1$. Then, we can write: $$n = 2^0 + \ldots + 2^{m-1} + 2\cdot 2^m + 2^{m+1} + \ldots + 2^{l-1} + \delta_{l+1} 2^{l+1} + \ldots + \delta_k 2^k.$$ Note that, while this doesn't necessarily reduce the number of gaps, it does push the start of the first gap further along. You could consider using strong induction on $\lfloor\log_2(n)\rfloor - m$, where $m$ is the start of the first gap.
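The argument is constructive, and one can code an equivalent closed-form version directly (my sketch, not from the answer): a representation with coefficients in $\{1,2\}$ over $2^0,\dots,2^k$ exists iff $2^{k+1}-1\le n\le 2(2^{k+1}-1)$, so pick the unique such $k$, give every power one copy, and distribute the excess $m=n-(2^{k+1}-1)\le 2^{k+1}-1$ according to its binary digits.

```python
def one_or_two_rep(n):
    # coefficients c_0..c_k with each c_i in {1, 2} and n = sum c_i * 2^i
    k = 0
    while 2 ** (k + 2) - 1 <= n:       # find the largest k with 2^(k+1) - 1 <= n
        k += 1
    m = n - (2 ** (k + 1) - 1)         # excess; 0 <= m <= 2^(k+1) - 1
    return [1 + ((m >> i) & 1) for i in range(k + 1)]

ok = all(
    sum(c * 2 ** i for i, c in enumerate(one_or_two_rep(n))) == n
    and set(one_or_two_rep(n)) <= {1, 2}
    for n in range(1, 5001)
)
```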
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Suppose $r\lt1$. Prove that the series $\sum x_n$ is convergent. Let $x_n$ be a sequence in $\mathbb R$ and suppose $r = \lim_{n\rightarrow\infty} \root {n} \of {|x_n|}$ exists! Suppose $r\lt1$. Prove that the series $\sum x_n$ is convergent. I'm struggling to get started on this question, follow up questions include 'what if $r > 1$, $r = 0$', but after getting help with this one I should be able to pick it up. My work so far: $$\lim_{n\rightarrow\infty}|x_n| \le \lim_{n\rightarrow\infty}(|x_n|)^{\frac{1}{n}}\lt 1$$ Intuitively this to me means $\lim_{n\rightarrow\infty}|x_n| = 0$ since $r^n \rightarrow 0$ as $n\rightarrow \infty$. But this simply doesn't show a single thing when it comes to the convergence of the series of $x_n$. Any guidance is appreciated!
Note, clearly $0\le r$. By $r \lt 1$, we know there exists $R \in (r,1)$ such that $0\le r \lt R \lt 1$. Now choose $\epsilon = R - r$ then by hypothesis, there exists an $N$ such that $\forall n \ge N$ we have: $$|\root n\of{|x_n|} - r| \le R - r$$ Hence $|\root n\of{|x_n|}| \le R$ for all $n \ge N$ (Triangle Inequality). Notice; $0 \le \root n\of{|x_n|} \le R$ and the cool thing is this is equivalent to: $0 \le |x_n| \le R^n$ and $\sum R^n$ converges (geometric and $R \lt 1$), hence by comparison $\sum |x_n|$ converges!
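A concrete instance of the argument (my illustration): take $x_n=n^2(0.4)^n$, so $\sqrt[n]{|x_n|}\to 0.4$; with $R=0.7$ the terms are eventually dominated by $R^n$, and the tail is bounded by a geometric tail.

```python
def term(n):
    return n ** 2 * 0.4 ** n       # here |x_n|^(1/n) -> r = 0.4 < 1

R, M = 0.7, 500                    # any R with r < R < 1 works
N = next(n for n in range(1, M)
         if all(term(m) <= R ** m for m in range(n, M)))
tail = sum(term(n) for n in range(N, M))
geometric_tail = R ** N / (1 - R)            # sum of R^n over n >= N
total = sum(term(n) for n in range(1, M))    # exact limit: 0.56 / 0.216
```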
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $x$ is less than $\pi/2$ then show that $i\cos^{-1}(\sin x + \cos x)$ has two real values Here's how I tried : Let $$i\cos^{-1}(\sin x +\cos x) =y$$ So $$\cos^{-1}(\sin x + \cos x) = -iy$$ So $$\sin x +\cos x =\cos(iy)$$ Now $$\sqrt{2} \cos\left(2n\pi + x- \frac{\pi}{4}\right)= \cos(iy)$$ What now?
Notice that $$i\cos^{-1}(\sin x + \cos x)=i\cos^{-1}(\sqrt 2\cos (x-\dfrac{\pi}{4}))$$I assume it must also be that $x>0$ otherwise for example for $x=-\dfrac{\pi}{4}$ the expression would become zero and has one real value. If so, we have $$-\dfrac{\pi}{4}<x-\dfrac{\pi}{4}<\dfrac{\pi}{4}\\1<\sqrt 2\cos (x-\dfrac{\pi}{4})<\sqrt 2$$therefore $\cos^{-1}(\sqrt 2\cos(x-\dfrac{\pi}{4}))$ would be imaginary. To see this and see for which complex numbers $z$ the cosine would be a real number greater than $1$ we refer to the following expansion of cosine$$\cos(z)=\cos(x+iy)=\cos x\cosh y-i\sin x\sinh y$$if $\cos z=r>1$ we have: $$\cos x\cosh y=r\\\sin x\sinh y=0$$obviously $y\ne0$ (if so $z$ would become real and this is impossible), therefore $x=0$ and $y=\pm \cosh^{-1} r$ so we have $$z=\pm i\cosh^{-1} r$$here $1<r<\sqrt 2$ and $$iz=i\cos^{-1}r=i\cos^{-1}(\sqrt 2\cos(x-\dfrac{\pi}{4}))=\pm \cosh^{-1}r\qquad,\qquad 1<r<\sqrt 2$$which has two real values.
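Numerically (my addition), the principal branch in Python shows exactly this behaviour: for $0<x<\frac\pi2$ the value $\sin x+\cos x$ lies in $(1,\sqrt2]$, its $\cos^{-1}$ is purely imaginary, and $i\cos^{-1}(\cdot)$ is real with magnitude $\cosh^{-1}(\sin x+\cos x)$ (the two real values being this $\pm$ pair).

```python
import cmath, math

x = 0.3                              # any sample with 0 < x < pi/2
r = math.sin(x) + math.cos(x)        # lies in (1, sqrt(2)]
w = 1j * cmath.acos(r)               # principal value; the other value is -w
```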
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
About definition of Ergodic theorem Let $(X,\Sigma, \mu)$ be a probability space, and $T:X\rightarrow X$ be a measure-preserving transformation. We say $\mu$ is ergodic with respect to $T$ if for every $E\in \Sigma $ with $T^{-1}(E) = E$, either $\mu(E)=1$ or $0$. I have a very fundamental problem about this definition: the result "$\mu(E)=1$ or $0$" does not have any information about $T$ or does not say anything about $T$. So is the condition "$T^{-1}(E)=E, \forall E\in \Sigma$" necessary? $T^{-1}(E)=E$ should depend on the choice of $T$; I pick a particular $T$ such that $T^{-1}(E)=E$. However, $T$ does not show up in "$\mu(E)=1$ or $0$"
For any $E\in \Sigma$, let $P(E)$ be the implication "$\left(T^{-1}E=E\right)\Rightarrow \left(\mu(E)\in \{0,1\}\right)$". Ergodicity of $\mu$ with respect to $T$ means that for all $E\in\Sigma$, proposition $P(E)$ is true. Since the first part of the implication involves $T$, the definition of ergodicity also involves $T$. Note also that "$T^{-1}(E)=E, \forall E\in \Sigma$" does not hold in general. For example, if $\Sigma$ contains the singletons, this would force $T$ to be the identity.
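On a finite probability space the definition can be checked exhaustively (my toy example, not part of the answer): for the rotation $T(x)=x+1 \bmod 5$ with the uniform measure, the only invariant sets are $\emptyset$ and $X$, so the measure is ergodic; for $T=\mathrm{id}$ it is not, since e.g. $E=\{0\}$ is invariant with $\mu(E)=1/5$.

```python
from itertools import combinations

X = range(5)
mu = lambda E: len(E) / 5                    # uniform measure on 5 points

def is_ergodic(T):
    # test every subset E with T^{-1}E = E (equivalently T(E) = E, T bijective)
    for k in range(6):
        for E in map(set, combinations(X, k)):
            if {T(x) for x in E} == E and mu(E) not in (0.0, 1.0):
                return False
    return True

rotation_ergodic = is_ergodic(lambda x: (x + 1) % 5)   # True
identity_ergodic = is_ergodic(lambda x: x)             # False
```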
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
General statements for the second derivative of a function I am working on a task about second derivative. The task is: $f(x)$ on $(-1,1)$ has the values $f(-1)=-10$, $f(0)=-10$ and $f(1)=-3$. What can you say about the values for first and second derivative? For the first derivative I use the mean value theorem and find $f'(c)$ for different intervals. For the second derivative I have some statements: 1) $|f '' (c)|>\frac{7}{2}$ 2) $|f '' (c)|>7$ Are there any theorems or rules I can use in order to check if these statements are true or not? Thanks!
Hint If we consider $x\in(-1,1)$ then $$|f'(x)|\le k\implies |f(x)-f(0)|\le k\left|x\right|\le k\implies |f(1)-f(0)|\le k.$$ Since $f(1)-f(0)=7,$ what can we say about $k?$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
characteristic function of a convolution of measures Take the probability measures $\mu,\nu$ on $\mathbb{R}$ and denote $\varphi_{\mu}$ (the same for $\nu$) its characteristic function. Why holds $$\varphi_{\mu *\nu}(t)=\varphi_{\mu}(t)\cdot\varphi_{\nu}(t)$$ where $\mu*\nu$ denotes the convolution of $\mu$ and $\nu$?
Neither do we need to assume that $\mu$ and $\nu$ admit densities with respect to a common reference measure, nor do we need to restrict this result to $\mathbb R$. Remember that if $(E,\mathcal E)$ is a measurable group and $$\tau:E^2\to E\;,\;\;\;(x,y)\mapsto xy$$ denotes the group operation, then $\mu\ast\nu$ is the pushforward $\tau(\mu\otimes\nu)$ of the product measure $\mu\otimes\nu$ under $\tau$. Assuming that $E$ is a $\mathbb R$-Banach space (consider as a group with the group operation being the addition) and $\mathcal E=\mathcal B(E)$, we immediately obtain \begin{equation}\begin{split}\varphi_{\mu\ast\nu}(x')&=\int\tau(\mu\otimes\nu)({\rm d}x)e^{{\rm }i\langle x',\:x\rangle}\\&=\int\mu({\rm d}x)\int\nu({\rm d}y)e^{{\rm }i\langle x',\:\tau(x,\:y)\rangle}\\&=\int\mu({\rm d}x)e^{{\rm }i\langle x',\:x\rangle}\int\nu({\rm d}y)e^{{\rm }i\langle x',\:y\rangle}=\varphi_\mu(x')\varphi_\nu(x')\end{split}\tag1\end{equation} for all $x'\in E'$.
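For discrete measures on $\mathbb Z$ the identity can be verified directly (my sketch; the two measures are arbitrary test data): build $\mu*\nu$ as the pushforward of $\mu\otimes\nu$ under addition and compare characteristic functions.

```python
import cmath
from collections import defaultdict

mu = {0: 0.2, 1: 0.5, 3: 0.3}        # a discrete probability measure on Z
nu = {-1: 0.6, 2: 0.4}               # another one

conv = defaultdict(float)
for x, p in mu.items():
    for y, q in nu.items():
        conv[x + y] += p * q         # pushforward of mu x nu under (x, y) -> x + y

def phi(m, t):
    # characteristic function of the discrete measure m at t
    return sum(p * cmath.exp(1j * t * x) for x, p in m.items())

max_err = max(abs(phi(conv, t) - phi(mu, t) * phi(nu, t))
              for t in (0.0, 0.3, 1.7, -2.2))
```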
{ "language": "en", "url": "https://math.stackexchange.com/questions/2910885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Groups of order $360$ have a subgroup of order $10$ I want to prove that groups of order $360$ must have a subgroup of order $10$. By Sylow's theorem, the number of Sylow $5$-subgroups $n_5 \equiv 1 \pmod 5$ and $n_5\mid 360$. There are three solutions: $1, 6, 36$ (let me know if I missed any). If $n_5=1$, then the only one is normal, making the product with an element of order $2$ we get a subgroup of order $10$. If $n_5=36$, then pick any Sylow $5$-subgroup, $[G:N_{G}(P)]=36$. It follows that $N_G(P)$ is a subgroup of order $10$. But how to deal with the case when $n_5=6$?
If $n_5=6$ and $P$ is a Sylow 5-subgroup, you have $|N_G(P)|=60$. Set $H=N_G(P)$, and we find a subgroup of order $10$ in $H$. Your arguments work. In $H$, $n_5$ is either $1$ or $6$. If $n_5=1$, multiply the Sylow $5$-subgroup with a cyclic subgroup of order $2$. If $n_5=6$ and $Q$ is a Sylow $5$-subgroup, then $|N_H(Q)|=10$. So, $N_H(Q)\leq H\leq G$ is a subgroup of order $10$.
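The divisor bookkeeping can be confirmed mechanically (my addition): the candidates for $n_5$, i.e. the divisors of $360$ congruent to $1 \bmod 5$, are exactly $1,6,36$, and the sharper Sylow condition $n_5\mid 72$ yields the same set, so the question missed nothing.

```python
n5_from_360 = sorted(d for d in range(1, 361) if 360 % d == 0 and d % 5 == 1)
n5_from_72 = sorted(d for d in range(1, 73) if 72 % d == 0 and d % 5 == 1)
# both lists equal [1, 6, 36]
```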
{ "language": "en", "url": "https://math.stackexchange.com/questions/2911121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
The limit of the function $ \frac{\log_{2}(x)-1}{x^2-4}$ as $x \to 2$ I want to solve the limit: $$\lim_{x \to 2}{\frac{\log_{2}(x)-1}{x^2-4}} $$ The problem is I can't use L'Hospital and Taylor Series to solve it. In my attempt I'm stuck here: $$ \lim_{x \to 2}{\frac{\frac{\ln(x)}{\ln(2)}-1}{x^2-4}} $$ $$ \lim_{x \to 2}{\frac{\frac{-\ln(2)+\ln(x)}{\ln(2)}}{(x-2)(x+2)}} $$ I know the answer is: $\frac{1}{\ln(256)}$ , but how can I make a substitution to get there?
A very standard trick for calculating limits when L'Hopital or Taylor series are not allowed is using the definition of the derivative of some function at some point. Usually, this "some point" is the point at which you take the limit and the "some function" is the "complicated" one in the denominator. In your case, if you look at the definition of the derivative of the function $f(x)=\log_2(x)$ at $x=2$, you have $$ f'(2)=\lim_{x\to 2}\frac{f(x)-f(2)}{x-2}= \lim_{x\to 2}\frac{\log_2x-1}{x-2}.\tag{1} $$ Note that (1) is not far away from the limit you want! Actually, you have $$ \frac{\log_2x-1}{x^2-4}=\frac{\log_2x-1}{x-2}\cdot\frac{1}{x+2}.\tag{2} $$ On the other hand, $$ f'(2)=\frac{1}{x\ln 2}\bigg|_{x=2}=\frac{1}{2\ln 2}.\tag{3} $$ Now you can combine (1), (2) and (3) to get your answer. (Note that in your answer, $\ln 256=8\ln 2$.)
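Numeric confirmation (my addition) that the quotient tends to $\frac1{\ln 256}=\frac1{8\ln 2}\approx 0.1803$:

```python
import math

def h(x):
    return (math.log2(x) - 1) / (x ** 2 - 4)

target = 1 / math.log(256)                    # = 1 / (8 ln 2)
near_two = [h(2 + d) for d in (1e-3, -1e-3, 1e-4, -1e-4)]
```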
{ "language": "en", "url": "https://math.stackexchange.com/questions/2911227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Number of solutions for a given logical equation I came across the following question while studying logic and cannot find a solution for it anywhere. I am studying by myself and think I just don't know exactly the right terms to search for it online (I'm not sure it is called a logical equation so excuse the title of this question in case it isn't): Given the proposition $P$, its logical value is defined as $[P] = 0$, in case $P$ is false, and $[P] = 1$, in case $P$ is true. Consider the following open sentences defined in the set of integers: $ P_i(x): x \le 5$ $ P_{ii}(x): x \ge 3$ $ P_{iii}(x): $ x is odd $ P_{iv}(x): x \ge 6$ How many solutions does the following equation have? $ x = [P_i(x)] + 2 \cdot[P_{ii}(x)]+3\cdot[P_{iii}(x)]+4\cdot[P_{iv}(x)]$ I've made this jsfiddle and from there, I can count the number of solutions through a loop. In this case I've looped from 0 to 1000 and it yields 2 solutions. Though I can clearly reason it wouldn't be possible for a very large number to work here, since these are all sums of multiplications of 0s or 1s, I am having a hard time articulating exactly why. How would you go about finding the largest number possible, in this very specific case? So you wouldn't have to loop through values of X too far off from it?
In addition to @Rushabh Mehta's answer: note that $[P_i]$ can be seen as a "classic" (aka Pre-Calculus) function. For example, $f_1=[P_1]$ is simply the function $$f_1(x)=[P_1(x)]=\left\{ \begin{array}{rl} 1, & x \leqslant 5 \\ 0, &x>5 \end{array} \right.$$ Therefore, you can study the given equation $x=f_1(x)+2f_2(x)+3f_3(x)+4f_4(x)$ using all the techniques you already know for this kind of problem: considering cases (as in Rushabh Mehta's answer), drawing the graph carefully, and so on.
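On the "how far to loop" question: each bracket contributes at most its coefficient, so the right-hand side lies between $0$ and $1+2+3+4=10$, and any solution must satisfy $0 \le x \le 10$. A short sketch of the check (an illustration, not the jsfiddle code):

```python
def rhs(x):
    # [P_i] + 2[P_ii] + 3[P_iii] + 4[P_iv]; Python booleans act as 0/1
    return (x <= 5) + 2 * (x >= 3) + 3 * (x % 2 != 0) + 4 * (x >= 6)

# the right-hand side is between 0 and 10, so these are the only candidates
solutions = [x for x in range(0, 11) if x == rhs(x)]
```

This recovers the same count of 2 solutions the jsfiddle loop found, namely $x=6$ and $x=9$.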
{ "language": "en", "url": "https://math.stackexchange.com/questions/2911315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Doubt on the definition of ordered topology given in 'Foundations of Topology By C. Wayne Patty' If the underlined symbol is as printed, the definition is confusing. Is it $\mathscr T$ or $\mathscr S$? Please help me with the definition.
All marked $\mathcal{T}$ should be $\mathcal{S}$, of course. <rant> It's too bad so many math-books are submitted as pdf's from Latex sources without the need for human copy editors any more. Huge savings, I know. But earlier all published maths texts went through human eyes during proof reading and type setting and such errors were rarer. </rant> Nitpick: note the proof only works if there are at least two points in $X$. If there is only one, so $X = \{p\}$, then $\mathcal{S} = \{\emptyset\}$ and that's a fine subbase (for the discrete = indiscrete topology on $X$) but not in this author's view.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2911407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding value of $\lim_{n\rightarrow \infty}\frac{1+\frac{1}{2}+\frac{1}{3}+ \cdots +\frac{1}{n^3}}{\ln(n)}$ Find the value of $$\lim_{n\rightarrow \infty}\frac{1+\frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{n^3}}{\ln(n)}$$ My try: Using Stolz-Cesaro, let $\displaystyle a_{n} = 1+\frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{n^3}$ and $b_{n} = \ln(n)$. So $\displaystyle \frac{a_{n+1}-a_{n}}{b_{n+1}-b_{n}} = \lim_{n\rightarrow \infty}\frac{1}{{(n+1)^3}}\cdot \frac{1}{\ln\bigg(1+\frac{1}{n}\bigg)} = 0$ Please explain if what I have done above is right.
To answer your question directly: the mistake is in $a_{n+1}-a_n$. For example, if $n = 10$, then this difference is given by $$a_{11}-a_{10} = 1/1331 + 1/1330 + \cdots + 1/1001$$ since $11^3 = 1331$ and $10^3=1000$. Once you correct this, you will still need to do something similar to what Gabriel suggested, viewing the summation as upper and lower Riemann sums. Edit 2: if you now use Stolz-Cesaro, which is possible, it works out as Gimusi did.
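Reading the numerator as $1+\frac12+\cdots+\frac1{n^3}=H_{n^3}$, the limit comes out as $3$, since $H_{n^3}\approx \ln(n^3)+\gamma = 3\ln n+\gamma$. A brute-force sanity check of this (just a numerical sketch):

```python
import math

n = 100
# numerator: the harmonic sum up to n^3 (here 10^6 terms)
a_n = sum(1 / k for k in range(1, n**3 + 1))
ratio = a_n / math.log(n)   # should be near 3 for large n
```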
{ "language": "en", "url": "https://math.stackexchange.com/questions/2911688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Log transforming an ODE I'm doing some numerical simulations of an exponential-growth-like system which, for simplicity, has the form: $$ \frac{dx}{dt}= ax + bxy \quad\quad \frac{dy}{dt}= cy + dxy $$ For some parameter values I get instability in the simulation, though I remember reading a paper which used log transformations to prevent this. Any ideas on how I could do this or how to rewrite the equations as: $$ \frac{d\log(x)}{dt}= \ldots \quad\quad \frac{d\log(y)}{dt}= \ldots $$
(If you need more information, for example Lyapunov functions, this equation is similar to the Lotka-Volterra equation.) Dividing the first equation by $x$: $$\frac{1}{x} \frac{dx}{dt}=a+by$$ i.e.: $$\frac{d \log(x)}{dt}=a+by$$ and similarly, dividing the second by $y$: $$\frac{d \log(y)}{dt}=c+dx$$
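A minimal sketch of using this in a simulation (the parameter values below are arbitrary toy choices, just for illustration): stepping $(\log x,\log y)$ with Euler and exponentiating keeps the state positive by construction, and over a short horizon it tracks the direct scheme closely:

```python
import math

a, b, c, d = 0.5, -0.02, -0.4, 0.01   # arbitrary toy parameters
x, y = 10.0, 10.0                     # direct state
lx, ly = math.log(x), math.log(y)     # log-transformed state
dt, steps = 1e-4, 10_000              # integrate up to t = 1

for _ in range(steps):
    # Euler step on the original equations
    x, y = x + dt * (a * x + b * x * y), y + dt * (c * y + d * x * y)
    # Euler step on d(log x)/dt = a + b y,  d(log y)/dt = c + d x
    lx, ly = lx + dt * (a + b * math.exp(ly)), ly + dt * (c + d * math.exp(lx))
```

The log-form state `exp(lx)`, `exp(ly)` can never go negative, which is one way the transformation tames instabilities.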
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lower bound for complex polynomial beyond circle of radius R If we have a polynomial with $c_i$ a complex number $$c_nz^n + c_{n-1}z^{n-1} + \cdots + c_1 z + c_0$$ then $$|P(z)| > \frac{|c_n|R^n}{2}$$ when $|z| > R$, for some $R$. I have tried using the triangle inequality, where I obtain $|P(z)| \leq |c_n||z|^n + \cdots + |c_0|$. But I seem to keep getting stuck. Any hints on how to move forward? Thank you!
You have $$\frac{|P(z)|}{|z|^n} =\left|c_n+\frac{c_{n-1}}z+\frac{c_{n-2}}{z^2}+\cdots+\frac{c_0}{z^n}\right|.$$ Show that if $|z|$ is large enough, then $$\left|\frac{c_{n-1}}z+\frac{c_{n-2}}{z^2}+\cdots+\frac{c_0}{z^n}\right|<\frac{|c_n|}2$$ etc.
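A numerical illustration of the resulting bound for a sample polynomial (my choices of $P$ and $R$ here are just for demonstration — any $R$ with $|c_{n-1}|/R+\cdots+|c_0|/R^n<|c_n|/2$ works):

```python
import cmath
import math

def P(z):
    return z**3 - 5*z + 1   # sample polynomial, c_3 = 1

R = 10.0   # large enough: 0/R + 5/R^2 + 1/R^3 = 0.051 < |c_3|/2
# sample |P| on the circle |z| = R
min_abs = min(abs(P(R * cmath.exp(2j * math.pi * k / 1000)))
              for k in range(1000))
```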
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Confusion about Nelson's proof of Liouville's theorem Nelson's proof of Liouville's theorem (in the case $n=2$) is as follows: Consider a bounded harmonic function on Euclidean space. Since it is harmonic, its value at any point is its average over any sphere, and hence over any ball, with the point as center. Given two points, choose two balls with the given points as centers and of equal radius. If the radius is large enough, the two balls will coincide except for an arbitrarily small proportion of their volume. Since the function is bounded, the averages of it over the two balls are arbitrarily close, and so the function assumes the same value at any two points. Thus a bounded harmonic function on Euclidean space is a constant. I tried to formalize it. Let $f:\mathbb{C}\to\mathbb{C}$ be a holomorphic bounded function. If $z,w\in\mathbb{C}$ we have that \begin{align*} |f(z)-f(w)| &= \frac{1}{\pi r^2}\left|\int_{D(z,r)}f(x+iy)\:\mathrm{d}x\:\mathrm{d}y - \int_{D(w,r)}f(x+iy)\:\mathrm{d}x\:\mathrm{d}y\right|\\ &= \frac{1}{\pi r^2}\left|\int_{A}f(x+iy)\:\mathrm{d}x\:\mathrm{d}y - \int_{B}f(x+iy)\:\mathrm{d}x\:\mathrm{d}y\right| \\ &\leq \frac{2}{\pi r^2}(\sup |f|)\int_A 1\:\mathrm{d}x\:\mathrm{d}y \\ &= \frac{2}{\pi r^2}(\sup |f|) \left(2r^2\cos^{-1}\left(\frac{d}{2r}\right)-\frac{d}{2}\sqrt{4r^2-d^2}\right), \end{align*} where $d=|z-w|$. Observe what $A$ and $B$ are in the following drawing: I would expect that the right hand side tends to $0$ when $r\to\infty$. However, that is not the case. How should I formalize Nelson's proof?
Ok, let's calculate the volume $|A|$ of $A$. WLOG I let $z=0$ and $w=d > 0$. Then $$ \frac{|A|}2 = \int_{d/2}^{r+d}\sqrt{r^2-(x-d)^2}\,dx - \int_{d/2}^{r}\sqrt{r^2-x^2}\,dx = \int_{-d/2}^{d/2}\sqrt{r^2-x^2}\,dx. $$ Substituting $x = r\sin t$ gives $$ \frac{|A|}2 = r^2\int_{-\arcsin(d/2r)}^{\arcsin(d/2r)}\cos^2t\,dt = \frac{r^2}2\left[\cos t\sin t + t\right]_{-\arcsin(d/2r)}^{\arcsin(d/2r)}. $$ As $\cos(\arcsin(x)) = \sqrt{1-x^2}$, $$ \frac{|A|}2 = r^2\left(\frac{d}{2r}\sqrt{1-\frac{d^2}{4r^2}}+\arcsin\frac{d}{2r}\right)\,\le\,r^2\left(\frac d{2r} + \arcsin\frac d{2r}\right). $$ Now, for small positive $x$ we have $\arcsin x\le 2x$, hence $|A|\le 3dr$ for large $r$.
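As a cross-check (a numerical sketch), this closed form for $|A|$ agrees with computing $|A| = \pi r^2$ minus the standard lens (intersection) area that appeared in the question:

```python
import math

r, d = 1.0, 1.0   # radius and center distance, with d < 2r

# |A| from the computation above
u = d / (2 * r)
area_derived = 2 * r**2 * (u * math.sqrt(1 - u**2) + math.asin(u))

# |A| = pi r^2 - (intersection area of the two disks)
lens = 2 * r**2 * math.acos(u) - (d / 2) * math.sqrt(4 * r**2 - d**2)
area_direct = math.pi * r**2 - lens
```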
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
To prove that $f(z) = constant$ if $f'(z) = 0$, why is it necessary to prove that $u$ and $v$ are constant for all paths? In Churchill's "Complex Variables and Applications", when proving the following statement: $f'(z) = 0 \space\space \forall z \in \mathbb{D}\subset \mathbb{C} \implies f(z) = constant \space\space \forall z \in \mathbb{D}$ First we write $f = u + iv$. Since $f'(z)$ exists in all points on the domain and it equals $0$, then $u'_{x} + i v'_{x} = 0$ and, because it fulfills the Cauchy-Riemann equations, $v'_{y} - i u'_{y} = 0$. So $u'_{x} = u'_{y} = v'_{x} = v'_{y} = 0$. Then he goes on proving that for every path between any two points in $\mathbb{D}$, the directional derivatives of $u$ and $v$ are $0$ and thus $u(x,y) = a$, $\space\space$ $v(x,y) = b$ and $f(z) = a + ib = constant$ for all $z \in \mathbb{D}$. I don't understand why this last step is necessary. If the derivatives of $u$ and $v$ are already known to be $0$, doesn't that already imply that $u$ and $v$ must be constant, and so is $f$? I apologize if this is too basic, but I'm very rusty on my multivariable real calculus.
The problem here is that the C-R equations are only concerned with derivatives in the $x$ and $y$ directions but none of the infinitely many directions in between. Therefore it doesn't follow entirely trivially that $f$ is constant. However, we can assume the set that $f$ is defined on to be open and locally path connected. It is easy to show that any two points $z_0$ and $z_1$ in a path component can be joined by a sequence of horizontal and vertical paths. Along each of those sub-paths the change in $f$ is zero, so going along the path we get $f(z_0) = f(z_1)$. Thus $f$ is constant on the path component, i.e. locally constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does it mean to have an absolute value equal an absolute value? I have no problem reading absolute value equations such as $|x -2| = 2$. I know this means that the distance of some real number is $2$ away from the origin. Because the origin splits the number line into a negative side and a positive side, the numbers inside the absolute value symbol will be $2$ and $-2$, since those are the only two numbers $2$ units away from the origin. Then, it's just a matter of finding the values of $x$ which will give $2$ and $-2$ inside the absolute value. Therefore, $|x - 2| = 2$ becomes $x - 2 = 2$ or $x - 2 = -2$, and the solutions are $\{0, 4\}$. But when I see $|3x - 1| = |x + 5|$, I have no idea what this means. I know how to solve it, but I don't know how this relates to the distance from the origin or how to interpret this on a number line. My initial interpretation is to say, "the absolute value of some unknown number is the absolute value of some unknown number," but that doesn't tell me the distance from $0$. My Algebra textbook gave the following definition: If $|u| = |v|$, then $u = v$ or $u = -v$. But I can't really tell why this is the case.
Your interpretation is good. Any value $v$ is at distance $|v|$ from the origin. Sometimes we are given the distance and are asked to find the original value. When something ($\in \mathbb{R}$) is at distance $|w|$ from the origin, it has a value either $w$ or $-w$. As you said, $|x-2|=2$ means that $(x-2)$ is at distance $2$ from the origin. The same goes for the example that confuses you; $$|3x-1|=|x+5|$$ means that $(3x-1)$ is at distance $|x+5|$ from the origin. And what can we conclude from this? That the value of $(3x-1)$ is either $(x+5)$ or $-(x+5)$, and that is what your textbook says using $u$ and $v$. [Also, you can flip it and say that $(x+5)$ is at distance $|3x-1|$ from the origin, and those are the redundant cases you see mentioned in other answers.] And to directly refer to your title question: having two absolute values equal means, in terms of distance from the origin, that they are both equally far from the origin. So $$|u|=|v|$$ means that $u$ and $v$ are equally far from the origin. How far specifically? Exactly $|u|$ (or $|v|$, because they are equal).
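A quick verification sketch of the two cases for this particular equation: $3x-1=x+5$ gives $x=3$, and $3x-1=-(x+5)$ gives $x=-1$, and a brute scan finds no other integer solutions:

```python
# the two case solutions of |3x - 1| = |x + 5|
x1 = 3    # from 3x - 1 =  (x + 5)
x2 = -1   # from 3x - 1 = -(x + 5)

# brute-force scan over a range of integers
found = [x for x in range(-100, 101) if abs(3 * x - 1) == abs(x + 5)]
```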
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Show there is a bijection between $\mathbb R^n$ and $\mathbb R^N$ Let $N=\{1,2,3,...,n\}$; show there is a bijection between $\mathbb R^n$ and $\mathbb R^N$. Define $f: \mathbb R^n \rightarrow \mathbb R^N$ by $f(x_1,..,x_n)=g_{(x_1,...,x_n)}$ where the n-tuple is an element of $\mathbb R^n$ and $g_{(x_1,...,x_n)}:N \rightarrow \mathbb{R}$ is defined by: $g_{(x_1,...,x_n)}(i)=x_{i}$ for $i \in N$ What I want is for $g$ to map the elements in $N$ to the corresponding real number in the n-tuple. But I'm not sure that I can define $g$ in this way. But if I can, here is my proof that this is injective. Suppose $(x_1,...,x_n) \neq (y_1,...,y_n)$; then there is at least one $i \in N$ such that $x_i \neq y_i$. Then $g_{(x_1,...,x_n)}(i)=x_i \neq y_i=g_{(y_1,...,y_n)}(i)$ for some $i \in N$. Since $f(x_1,..,x_n)=g_{(x_1,...,x_n)}$, then $f(x_1,...,x_n)\neq f(y_1,...,y_n)$. My issue, and why I think this doesn't work, is that when I want to prove this is surjective, I want to prove that for any $g$ there is an n-tuple such that $g=g_{(x_1,...,x_n)}$, but the way I've defined $g$ seems to make it depend upon this tuple already existing.
I would just appeal to cardinal arithmetic. We have $$\mathfrak c = |\Bbb R|=2^{\aleph_0}=2^{|\Bbb N|}\\ \mathfrak c = |\Bbb R|\le |\Bbb R^n|\le |\Bbb R^{\Bbb N}|=(2^{|\Bbb N|})^{|\Bbb N|}=2^{|\Bbb N|\cdot|\Bbb N|}=2^{|\Bbb N|}=\mathfrak c$$ so $|\Bbb R^n|=\mathfrak c$. Since $N$ has exactly $n$ elements, $|\Bbb R^N|=|\Bbb R^n|=\mathfrak c$, so in particular a bijection between $\Bbb R^n$ and $\Bbb R^N$ exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
An ant is to walk from $A$ to $B$. Calculate the number of paths. An ant has to walk from the lower-left corner to the upper-right corner of a $4 \times 4$ square grid. At each move it can take either a step to the right or a step upward, in any order. Calculate the number of paths.
Realise that the ant in question must take 4 steps to the right and 4 upwards. Representing a step towards the right as R and an upward step as U, the ant can choose paths like RRRUUUUR, URURURUR, etc. Each path is thus an arrangement of 4 R's and 4 U's, so the number of paths is $\binom{8}{4}=70$.
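Counting these arrangements — each path is a string of 4 R's and 4 U's, giving $\binom{8}{4}=70$ — can be confirmed by brute force:

```python
from itertools import product
from math import comb

# enumerate all 8-step R/U strings and keep those with exactly 4 R's
paths = [p for p in product("RU", repeat=8) if p.count("R") == 4]
n_paths = len(paths)
```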
{ "language": "en", "url": "https://math.stackexchange.com/questions/2912854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Will this series converge? $\sum \frac {1/2 + (-1)^n}{n}$ Will this series converge? $\sum \frac {1/2 + (-1)^n}{n}$ My try: the Dirichlet, Abel and Leibniz rules cannot be used. $\sum \frac {1/2 + (-1)^n}{n} = \sum \frac {1}{2n} + \sum\frac {(-1)^n}{n} $. Is it possible to write it in that way? If yes, then how? Can anyone please help me out?
$$\sum\frac{1/2+(-1)^n}{n}=\frac{1}{2}\sum\frac{1}{n}+\sum\frac{(-1)^n}{n}$$ I believe that the first summation is called the harmonic series and is divergent, whilst the second summation is convergent. Since the sum of a divergent series and a convergent series must diverge, the original series diverges.
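Numerically, the partial sums keep growing like $\frac12\ln N-\ln 2$, which is a quick way to see the divergence (just a sanity sketch):

```python
import math

def partial_sum(N):
    return sum((0.5 + (-1)**n) / n for n in range(1, N + 1))

s4 = partial_sum(10**4)
s6 = partial_sum(10**6)
gamma = 0.5772156649015329          # Euler-Mascheroni constant
predicted = 0.5 * (math.log(10**6) + gamma) - math.log(2)
```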
{ "language": "en", "url": "https://math.stackexchange.com/questions/2913073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that there is at most one entire function satisfying the following condition. Show that there is at most one entire function $f:\mathbb C\to \mathbb C$ with $f(0)=2+3i$ satisfying $$f'(z)=\sin(z)f(z)+e^{z^2}$$ for all $z\in\mathbb C$. My question: Although this is a problem I encountered in a course on complex analysis, I kept wondering if I could use the theory of differential equations to solve it: the function $g(f):=\sin(z)f+e^{z^2}$ is continuous in $f$ (it is a polynomial in $f$), which means it satisfies a local Lipschitz condition, and thus the corresponding IVP $f'=g(f),\ f(0)=2+3i$ should have a unique solution on $\mathbb C$ (by Picard-Lindelöf). This means there can't be more than one such function. Does this work?
That's right, if you have a proof of Picard-Lindelöf that works for holomorphic functions in the plane. Luckily you don't need any non-trivial theorems here, because first-order linear equations are trivial via integrating factors. Say $f_1$ and $f_2$ are two solutions, and $h=f_1-f_2$. Then $h'(z)-h(z)\sin(z)=0$, so if $\mu(z)=e^{\cos(z)}$ then $(\mu h)'=0$. Hence $\mu h$ is constant; since $h(0)=0$ and $\mu$ never vanishes, $h\equiv 0$, i.e. $f_1=f_2$.
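The integrating-factor identity $(\mu h)'=\mu\,(h'-h\sin z)$ can be sanity-checked numerically with an arbitrary test function (here $h(z)=z^2+1$, a made-up choice just for the check):

```python
import math

def h(z):  return z * z + 1          # arbitrary test function
def hp(z): return 2 * z              # its derivative
def mu(z): return math.exp(math.cos(z))

z0, eps = 0.7, 1e-6
# central difference approximation of (mu h)'(z0)
lhs = (mu(z0 + eps) * h(z0 + eps) - mu(z0 - eps) * h(z0 - eps)) / (2 * eps)
rhs = mu(z0) * (hp(z0) - h(z0) * math.sin(z0))
```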
{ "language": "en", "url": "https://math.stackexchange.com/questions/2913192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Iterations of a multivariable function How do you define iterations of multivariable functions? To be clear (example): if $f: \mathbb R^2 \to \mathbb R$, how do you define $f \circ f$, or $f \circ \cdots \circ f$? I admit that this question sounds very odd, but I think I need to define or learn about this. (Why? I want to generalize this (Carleman matrix) to multivariable functions to solve this (Multivariable Carleman matrix) or this (same question, but on a different site)!) And I think this concept may be quite reasonable because there is something like multiplication of matrices that have different dimensions. My assumption is that $f \circ f \circ \cdots \circ f$ is also $\mathbb R^2 \to \mathbb R$. Any suggestions are appreciated.
The composition is undefined as $$f \circ f=\mathbb{R}^2\xrightarrow{f}\mathbb{R}\xrightarrow{?}\underline{?}.$$ However if you have a function $\mathbb{R}^2\xrightarrow{F}\mathbb{R}^2,$ then we can easily form the composition.
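A tiny illustration of the distinction (with a made-up $F$): a map $\mathbb{R}^2\to\mathbb{R}$ can't be fed its own output, while a map $F:\mathbb{R}^2\to\mathbb{R}^2$ iterates without trouble:

```python
def f(x, y):
    # R^2 -> R : the output is a single number, so f(f(...)) makes no sense
    return x + y

def F(p):
    # R^2 -> R^2 : output has the same shape as the input, so F can be iterated
    x, y = p
    return (x + y, x * y)

p = (1.0, 2.0)
for _ in range(3):
    p = F(p)     # (1,2) -> (3,2) -> (5,6) -> (11,30)
```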
{ "language": "en", "url": "https://math.stackexchange.com/questions/2913277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the radius of a cylinder of given volume V if its surface area is a minimum. This question is driving me crazy, as I'm not sure how they've got the answer. The surface area is given as $S = 2\pi r^2 + \frac {1}{50r} $ and they are asking for the value of $r$ for which $S$ is minimum. The derivative of this (I hope!) is $4\pi r - \frac {1}{50r^2}$. Then to find the value of $r$ when $S$ is a minimum, I presume you set the derivative equal to $0$. The book shows the value $(200\pi)^{-\frac 13}$ but I'm not sure how they've got this figure from setting the derivative to $0$. Any insight would be appreciated! Edit: My bad, the answer is raised to negative $\frac {1}{3}$; you live, you learn. Thanks for pointing this out.
By the classical formulas, $$V=\pi r^2h$$ and $$S=2\pi r^2+2\pi rh.$$ Eliminating $h$, $$S=2\pi r^2+\frac{2V}r.$$ Then cancelling the derivative, $$4\pi r-\frac{2V}{r^2}=0$$ or $$r^3=\frac V{2\pi}.$$
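Tying this back to the question: the given $S = 2\pi r^2+\frac1{50r}$ corresponds to $\frac{2V}{r}=\frac1{50r}$, i.e. $V=\frac1{100}$, so $r^3=\frac{V}{2\pi}=\frac1{200\pi}$ and $r=(200\pi)^{-1/3}$ — the book's answer. A quick numerical confirmation:

```python
import math

V = 1 / 100                              # since 2V/r must equal 1/(50 r)
r_star = (V / (2 * math.pi)) ** (1 / 3)  # = (200 pi)^(-1/3)

def S(r):
    return 2 * math.pi * r**2 + 2 * V / r
```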
{ "language": "en", "url": "https://math.stackexchange.com/questions/2913363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Matrix notation $i$ $j$ Let $A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}$ be an $n \times n$ matrix such that $a_i \cdot a_i = 1$ for all $i$ and $a_i \cdot a_j = 0$ for all $i \neq j$. I'm familiar with $i$ indicating the row and $j$ indicating the column, but I'm not sure what these dot products actually refer to. Let's say we have $B = \begin{bmatrix} \frac{1}{\sqrt2} & \frac{1}{\sqrt2}\\ -\frac{1}{\sqrt2} & \frac{1}{\sqrt2}\\ \end{bmatrix}$. What does $a_i \cdot a_i =1$ and $a_i \cdot a_j = 0$ mean here?
With the definitions below, such a $B$ does in fact fulfill your constraints, and so does its transpose $$B^\top=\begin{bmatrix}\dfrac{1}{\sqrt 2}&-\dfrac{1}{\sqrt 2}\\\dfrac{1}{\sqrt 2}&\dfrac{1}{\sqrt 2}\end{bmatrix},$$ since both have orthonormal columns. Here we define $$a_m=\begin{bmatrix}b_{1m}\\b_{2m}\\\vdots\\b_{nm}\end{bmatrix},\qquad \forall m$$and $$a_i\cdot a_j=\sum_{k=1}^{n}b_{ki}b_{kj}$$
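A quick numerical check of this definition against the $B$ from the question (taking $a_m$ to be the $m$-th column): the dot products come out as $a_i\cdot a_i=1$ and $a_1\cdot a_2=0$, so the columns are orthonormal:

```python
import math

s = 1 / math.sqrt(2)
B = [[ s, s],
     [-s, s]]            # the matrix from the question

def col_dot(M, i, j):
    # a_i . a_j = sum_k b_{ki} b_{kj}, with columns indexed from 0 here
    return sum(M[k][i] * M[k][j] for k in range(len(M)))

d11 = col_dot(B, 0, 0)
d22 = col_dot(B, 1, 1)
d12 = col_dot(B, 0, 1)
```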
{ "language": "en", "url": "https://math.stackexchange.com/questions/2913485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }