Show that $2^{2^n}+5$ is divisible by $3$ for all $n \geq 1.$ This is part of a larger proof that I am trying to do, but this is a fact I'm trying to wrap my head around and prove. Based on a Sage program, I was able to deduce by pattern that $2^{2^n}+5$ is divisible by $3$ for all $n \geq 1.$ If you factor numbers of this form until $n=6$ on Sage, you get that a prime factor of all of them is $3.$ I tried proving this through induction, but I couldn't quite figure out how to make the argument for the inductive step. I know modular arithmetic is the way to go, but it isn't something we've studied yet in class! Thank you for your help.
$2^{2^n}+5 \equiv (-1)^{2^n}-1 \mod 3$. If $n\ge1$, the exponent is even, and the expression becomes $1-1\equiv 0 \mod 3$.
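The congruence argument can be sanity-checked numerically. Here is a quick sketch in Python (the function name is my own); three-argument `pow` reduces mod $3$ without ever building the enormous power:

```python
# Check that 2^(2^n) + 5 ≡ 0 (mod 3) for n = 1..20.
# pow(2, 2**n, 3) computes 2^(2^n) mod 3 without constructing the huge integer.
def residue_mod_3(n):
    return (pow(2, 2 ** n, 3) + 5) % 3

checks = [residue_mod_3(n) for n in range(1, 21)]
```

Note that for $n=0$ the value is $2^1+5=7$, which is not divisible by $3$, consistent with the restriction $n\ge 1$.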
{ "language": "en", "url": "https://math.stackexchange.com/questions/4263864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Why $\nabla \omega = e^i \otimes \nabla _{e_i}\omega $? While trying to understand the formal adjoint of div, I read the book, but I don't know how to get $$ \nabla \omega = e^i \otimes \nabla _{e_i}\omega \tag{1} $$ In my view, we have $$ \nabla \omega(e_i, e_j,...)=(\nabla_{e_i}\omega)(e_j,...) = e_i(\omega(e_j,...))-\omega(\nabla_{e_i}e_j,...) -... \tag{2} $$ I can't get (1) from (2). I feel that I am on the wrong track; it seems there are some things I haven't learned yet.
Note that in equation (1) we have $$\nabla \omega = \color{red}{e^i} \otimes \nabla _{\color{blue}{e_i}}\omega\tag{$\star$}\label{a}$$ that is, the $e^i$ are the covectors dual to the ONB vectors $e_i$. If we want to evaluate $\nabla \omega$ at $e_2$, for example, we have: $$\nabla \omega(e_2,- )=(\nabla_{e_2}\omega)(-)$$ and from \eqref{a} we have (note that $e^2(e_2)=1$ and $e^i(e_2)=0$ for $i \neq 2$) $$(\nabla \omega)(e_2,- ) = (\color{red}{e^i} \otimes \nabla _{\color{blue}{e_i}}\omega)(e_2,- )=\color{red}{e^i}(e_2)\,(\nabla _{\color{blue}{e_i}}\omega)(-)=(\nabla _{\color{blue}{e_2}}\omega)(-)$$ In short, equation \eqref{a} just says that if you want to compute $\nabla\omega$ at $e_k$, you roughly just look at the $k$-th term of $e^i \otimes \nabla _{e_i}\omega$, since only the factor $e^k$ survives evaluation at $e_k$. If you evaluate both sides of the equation you want to prove, you will see that they are indeed equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4263991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of having exactly $v$ different letters in password Consider a password with $t$ characters, with a character set of size $n^2$. What is the probability that there are $v$ distinct letters in the password? So if the password was "abccdadde", $v$ would be $5$ and $t=9$. So far I have deduced that if $t=n=v$ the probability is given by: $$\frac{n^{2}!}{\left(n^{2}-n\right)!n^{2n}}$$ And when $t=v+1=n+1$ the probability is given by:$$\frac{n\left(n+1\right)!n^{2}!}{2n!\left(n^{2}-n\right)!n^{2\left(n+1\right)}}$$
Denote $n^2=N$ for ease of notation. The number of ways to partition the $t$ positions into $v$ nonempty blocks (each block to be assigned a distinct actual character later) is by definition the second-kind Stirling number $S(t,v)$. Then there are ${}^NP_v=\binom Nvv!$ ways to assign the actual characters to the blocks, so the number of admissible passwords is $$\binom Nvv!S(t,v)$$ and the probability is this over the total password count $N^t$.
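As a sketch (the function names are mine), the formula can be cross-checked against a brute-force enumeration for small parameters:

```python
from itertools import product
from math import comb, factorial

def stirling2(t, v):
    # second-kind Stirling number via the standard inclusion-exclusion formula
    return sum((-1) ** j * comb(v, j) * (v - j) ** t for j in range(v + 1)) // factorial(v)

def prob_exactly_v(N, t, v):
    # probability that a length-t word over an N-letter alphabet uses exactly v letters
    return comb(N, v) * factorial(v) * stirling2(t, v) / N ** t

def brute_force(N, t, v):
    # enumerate all N**t words and count those with exactly v distinct letters
    hits = sum(1 for w in product(range(N), repeat=t) if len(set(w)) == v)
    return hits / N ** t
```

For $N=4,\ t=v=2$ this reproduces the asker's special-case formula $\frac{4!}{2!\cdot 4^2}=0.75$.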
{ "language": "en", "url": "https://math.stackexchange.com/questions/4264185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the probability of all three companies going bankrupt? So I had a question on a test which I can't stop thinking about. The question is: say that there are three companies $(A,B,C)$ that are independent of each other. The probability of company $B$ going bankrupt is $0.20$. The probability of both $A$ and $C$ going bankrupt is $0.07$, and the probability of $B$ and $C$ going bankrupt is $0.08$. If we know that at least two of the companies have gone bankrupt, what is the probability of all three of them having gone bankrupt? $$P(B) = 0.2$$ $$P(A\cap C) = 0.07$$ $$P(B\cap C) = 0.08$$ Isn't this just $P(\textrm{all three bankrupt}\; |\; \textrm{at least two of the companies having gone bankrupt})$? And if we use the rule $P(A\;|\;B) = P(A)$ for independent events (if we can?), shouldn't the answer to this question just be $P(\textrm{all three bankrupt})$? Which would just be $P(A) \cdot P(B) \cdot P(C)$? Is there something I've done wrong? This was a question on an exam for a basic statistics course at uni. I'm just confused whether this is really the correct answer or whether it's supposed to be way harder.
$P(\textrm{all three bankrupt}\; |\; \textrm{at least two bankrupt})$.......if we use the rule $P(A\;|\;B)= P(A)$ for independent events (if we can?), shouldn't the answer to this question just be $P(\textrm{all three bankrupt})?$ But the event that all three have gone bankrupt is not independent of the event that at least two have gone bankrupt. $$P(\text{all three bankrupt}|\text{at least two bankrupt})\\= \frac{P(\text{all three bankrupt})}{P(\text{at least two bankrupt})}\\= \frac{P(A\cap B\cap C)}{P(A\cap B)+P(B\cap C)+P(C\cap A)-2P(A\cap B\cap C)}\\= \frac{0.014}{0.035+0.08+0.07-2(0.014)}\\=8.92\%$$
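Under the stated independence of the companies, the intermediate numbers in this answer can be reconstructed step by step; a sketch (variable names are mine):

```python
# Reconstruct the probabilities used in the answer from the given data,
# assuming A, B, C are mutually independent.
P_B = 0.20
P_AC = 0.07
P_BC = 0.08

P_C = P_BC / P_B            # independence of B and C gives P(C) = 0.4
P_A = P_AC / P_C            # independence of A and C gives P(A) = 0.175
P_ABC = P_A * P_B * P_C     # 0.014
P_AB = P_A * P_B            # 0.035

P_at_least_two = P_AB + P_BC + P_AC - 2 * P_ABC
conditional = P_ABC / P_at_least_two   # about 0.0892, i.e. 8.92%
```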
{ "language": "en", "url": "https://math.stackexchange.com/questions/4264544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do we know the number of elements in a group knowing that two Sylow subgroups intersect trivially? Here is the question I want to solve: Prove that if $|G| = 2907,$ then $G$ is not simple. Here is my idea for the solution: By Dummit and Foote, a simple group $G$ is a group in which the only normal subgroups are the trivial ones, namely $1$ and $G.$ So what we want to show here is that the given group $G$ has a proper nontrivial normal subgroup, i.e., we want to show that there is a unique Sylow $p$-subgroup, i.e., $n_p = 1$ for some prime $p.$ Also, we have $|G|= 2907 = 3^{2} \times 17 \times 19$. And we know that if $p^k$ is the highest power of a prime $p$ dividing the order of a finite group $G$, then a subgroup of $G$ of order $p^k$ is called a Sylow $p$-subgroup of $G$. So, our group $G$ has Sylow $p$-subgroups for three primes: * $P$ a Sylow $3$-subgroup of order $3^2.$ * $Q$ a Sylow $17$-subgroup of order $17.$ * $R$ a Sylow $19$-subgroup of order $19.$ We will start with the largest prime: the number of Sylow $19$-subgroups. We have $$n_{19} \mid 3^2 \times 17 = 153 \quad \quad (1)$$ But we also know that $n_{19}\equiv 1 \pmod{19},$ which means that $$n_{19} = 1 + 19k \quad \quad (2)$$ Therefore $$n_{19} = 1 \ (k=0)$$ or $$n_{19} = 153 \ (k=8).$$ Next, the number of Sylow $17$-subgroups. We have $$n_{17} \mid 3^2 \times 19 = 171 \quad \quad (1)$$ But we also know $n_{17}\equiv 1 \pmod{17},$ which means that $$n_{17} = 1 + 17k \quad \quad (2)$$ Therefore $n_{17} = 1 \ (k=0)$ or $n_{17} = 171 \ (k=10).$ But then I do not know how to count the number of elements in the group, knowing that two distinct Sylow subgroups (namely the Sylow $17$- and Sylow $19$-subgroups) intersect trivially. Could anyone help me with this please?
If $n_{17}=171$ and $n_{19}=153$, then the number of elements of order $17$ is $171\times 16=2736$ and the number of elements of order $19$ is $153 \times 18=2754$, since distinct subgroups of prime order intersect only in the identity. Together with the identity, this gives at least $2736+2754+1=5491$ elements, which exceeds $|G|=2907$ — impossible. Hence $n_{17}=1$ or $n_{19}=1$, which gives a normal Sylow subgroup. Done.
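The counting in this answer is simple enough to verify directly; a quick sketch:

```python
# Each of the 171 Sylow 17-subgroups contributes 16 non-identity elements of
# order 17; distinct subgroups of prime order meet only in the identity.
# Likewise each of the 153 Sylow 19-subgroups contributes 18 elements of order 19.
order_17_elements = 171 * 16
order_19_elements = 153 * 18
lower_bound = 1 + order_17_elements + order_19_elements  # identity + both element sets
```

Since the lower bound exceeds $|G| = 2907$, at least one of $n_{17}, n_{19}$ must equal $1$.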
{ "language": "en", "url": "https://math.stackexchange.com/questions/4264654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it "safe" to evaluate $\lim_{x \to \infty} \frac{1}{\frac{1}{x} +x}$ as follows? If I want to evaluate the following limit: $$\lim_{x \to \infty} \frac{1}{\frac{1}{x} +x},$$ is it valid to use regular arithmetic rules to come to $$\frac{\lim_{x \to \infty} 1}{\lim_{x \to \infty} \frac{1}{x} +\lim_{x \to \infty} x}?$$ Evaluating existing limits gives $$\frac{1}{0 +\lim_{x \to \infty} x},$$ but writing something such as $$\frac{1}{\infty}=0$$ does not seem legal, although it feels intuitive that the limit is $0$ from the first expression. Is there a more sound way of stating this? Perhaps with different laws of limits which I seem to be lacking knowledge of? Thanks in advance for your help!
As explained in comments, your method is not valid because the hypotheses for the quotient and sum limit laws are not satisfied. However, the common technique of (convert to polynomials, then) factor out the big part works. \begin{align*} \lim_{x \rightarrow \infty} \frac{1}{\frac{1}{x} + x} &= \lim_{x \rightarrow \infty} \left( \frac{1}{\frac{1}{x} + x} \cdot \frac{x}{x} \right) \\ &= \lim_{x \rightarrow \infty} \frac{x}{1 + x^2} \end{align*} Here the largest power of $x$ appearing in the numerator or denominator is $2$, so factor out $x^2/x^2$. Continuing the display, \begin{align*} &=\lim_{x \rightarrow \infty} \left( \frac{x^2}{x^2} \cdot \frac{1/x}{1/x^2 + 1} \right) \\ &=\frac{\lim_{x \rightarrow \infty} 1/x}{\lim_{x \rightarrow \infty} (1/x^2 + 1)} \\ &=\frac{\lim_{x \rightarrow \infty} 1/x}{\lim_{x \rightarrow \infty} 1/x^2 + \lim_{x \rightarrow \infty} 1} \\ &= \frac{0}{0 + 1} \\ &= 0 \end{align*} This may seem like "put something in just so we can take it and a little bit more out again", and this is a valid observation. Clearing the nested denominators is just to make it very clear where the power of $x$ is coming from. However, we can do the same thing in the original form: $$ \lim_{x \rightarrow \infty} \left( \frac{x}{x} \cdot \frac{1/x}{1/x^2 + 1} \right) $$ again arranging for the numerator and denominator to have finite limits with the denominator not having limit $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4264836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Little notation question about matrix multiplications / quadratic forms I'm a bit confused about expanding out the notation of a product of matrices, in the context of quadratic forms. If $x \in \mathbb{R}^n, \, \, A \in \mathbb{R}^{n \times n}$ then $$x^TAx = \sum_{i,j=1}^na_{ij}x_ix_j$$ But then if I consider a matrix $X \in \mathbb{R}^{n \times n}$ how should I write the expanded form of $$X^TAX = \, \,...\, \, ?$$ This time the result will be a matrix. Will it be something like $$X^TAX = \sum_{i,j=1}^n a_{ij}x_ix_j^T$$ and if yes, why? Sorry if this is pretty straightforward, but I always seem to get a little bit stuck with matrix notation. Many thanks, James
The $(i,j)$-coefficient of the matrix $X^TAX$ is given by \begin{equation} \sum_{k,l=1}^nx_{ki}a_{kl}x_{lj}, \end{equation} where $x_{ij}$ (resp. $a_{ij}$) denotes the $(i,j)$-coefficient of $X$ (resp. $A$).
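A quick way to convince yourself of this formula is to compare it with a direct matrix product on random integer matrices; a sketch using only the standard library (the helper names are mine):

```python
import random

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(P, Q):
    # plain triple-loop matrix product
    rows, inner, cols = len(P), len(Q), len(Q[0])
    return [[sum(P[i][k] * Q[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

random.seed(0)
n = 3
X = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

direct = matmul(transpose(X), matmul(A, X))
# entrywise formula: (X^T A X)_{ij} = sum_{k,l} x_{ki} a_{kl} x_{lj}
by_formula = [[sum(X[k][i] * A[k][l] * X[l][j] for k in range(n) for l in range(n))
               for j in range(n)] for i in range(n)]
```

Integer entries make the comparison exact, with no floating-point tolerance needed.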
{ "language": "en", "url": "https://math.stackexchange.com/questions/4265001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A “general definition” of Riemann sum Suppose $f$ is Riemann integrable on $[0,1]$. Prove that $$\lim_{n\rightarrow \infty}\frac{1}{\phi(n)}\sum_{1\leq k\leq n,(k,n)=1}f\left(\frac{k}{n}\right)=\int_0^1f(x)dx$$ Here $\phi(n)$ is Euler's function. My attempt: Let $\mu$ be the Möbius function. $$\begin{align} LHS-RHS&=\frac{\sum_{k=1}^{n}\sum_{d|(k,n)}\mu(d)f(\frac{k}{n})}{n\sum_{d|n}\frac{\mu(d)}{d}}-\frac{1}{n}\sum_{k=1}^{n}f(\frac{k}{n})\\ &=\frac{1}{n}\frac{\sum_{k=1}^{n}(\sum_{d|(k,n)}\mu(d)-\sum_{d|n}\frac{\mu(d)}{d})f(\frac{k}{n})}{\sum_{d|n}\frac{\mu(d)}{d}} \end{align} $$ But it does not seem to work. Does anyone know how to prove it? Thank you
Let $(X_n)_{n\geq 1}$ be a sequence of independent random variables such that $$ \mathbf{P}(X_n = k) = \frac{1}{\phi(n)} \quad\text{for each $k$ between $1$ and $n$ coprime to $n$}. $$ In particular, we have $$ \mathbf{E}[f(X_n)] = \frac{1}{\phi(n)} \sum_{\substack{1 \leq k \leq n \\ (k, n) = 1}} f(k/n) $$ for any test function $f$ on $[0, 1]$. Then we have the following observations: Step 1. Let $f$ be bounded, measurable, and $1$-periodic. If $(m, n) = 1$, then by the Chinese remainder theorem, \begin{align*} \mathbf{E}[f(X_{mn})] &= \frac{1}{\phi(mn)}\sum_{\substack{1 \leq k \leq mn \\ (k, mn) = 1}} f\left(\frac{k}{mn}\right) \\ &= \frac{1}{\phi(m)\phi(n)} \sum_{\substack{1 \leq j \leq m \\ (j, m) = 1}} \sum_{\substack{1 \leq k \leq n \\ (k, n) = 1}} f\left(\frac{jn + km}{mn}\right) = \mathbf{E}[f(X_m + X_n)]. \end{align*} Here, the last equality follows from the independence of the $X_n$'s. Step 2. If $n = p^r$ is a prime power, then for any bounded and measurable $f$ on $\mathbb{R}$, $$ \mathbf{E}[f(X_{p^r})] = \frac{p}{p-1} \left( \frac{1}{p^r} \sum_{k=1}^{p^r} f(k/p^r) \right) - \frac{1}{p-1} \left( \frac{1}{p^{r-1}} \sum_{k=1}^{p^{r-1}} f(k/p^{r-1}) \right). $$ Now assume in addition $f$ is Lipschitz continuous with Lipschitz constant $\|f\|_{\text{Lip}}$. Then by the above formula, for each prime power $n = p^r$, we get \begin{align*} &\left| \mathbf{E}[f(X_{n})] - \int_{I} f(x) \, \mathrm{d}x \right| \\ &\leq \frac{p}{p-1} \sum_{k=1}^{p^r} \int_{(k-1)/p^r}^{k/p^r} \left| f(k/p^r) - f(x) \right| \, \mathrm{d}x + \frac{1}{p-1} \sum_{k=1}^{p^{r-1}} \int_{(k-1)/p^{r-1}}^{k/p^{r-1}} \left| f(k/p^{r-1}) - f(x) \right| \, \mathrm{d}x \\ &\leq \frac{2\|f\|_{\text{Lip}}}{p^{r-1}(p-1)} \leq \frac{4\|f\|_{\text{Lip}}}{n}. \end{align*} Step 3. For each $n$, let $q_n$ denote the largest prime power dividing $n$. Since there are only finitely many $n$'s for which $q_n$ is bounded by each given number, it follows that $q_n \to \infty$ as $n \to \infty$.
Write $m_n = n/q_n$ so that $n = q_n m_n$. If $f $ is 1-periodic and Lipschitz continuous, then by the previous steps, \begin{align*} \left| \mathbf{E}[f(X_n)] - \int_{0}^{1} f(x) \, \mathrm{d}x \right| &= \left| \mathbf{E}[f(X_{q_n} + X_{m_n})] - \mathbf{E} \int_{0}^{1} f(x+X_{m_n}) \, \mathrm{d}x \right| \\ &\leq \mathbf{E}\biggl[ \frac{4}{q_n} \| f(\cdot + X_{m_n}) \|_{\text{Lip}} \biggr] = \frac{4}{q_n} \| f \|_{\text{Lip}}. \end{align*} Therefore it follows that $\mathbf{E}[f(X_n)] \to \int_{0}^{1} f(x) \, \mathrm{d}x$. Step 4. Finally, let $f$ be Riemann integrable on $[0, 1]$. Then for each $\varepsilon > 0$, by modifying the step functions realizing the upper/lower Darboux sums for $f$, we can find 1-periodic and Lipschitz continuous functions $\psi, \varphi$ on $\mathbb{R}$ such that $\psi \leq f \leq \varphi$ on $[0, 1]$ and $\int_{0}^{1} (\varphi(x) - \psi(x)) \, \mathrm{d}x < \varepsilon$. Using this, we get $$ \limsup_{n\to\infty} \mathbf{E}[f(X_n)] \leq \limsup_{n\to\infty} \mathbf{E}[\varphi(X_n)] = \int_{0}^{1} \varphi(x) \, \mathrm{d}x \leq \int_{0}^{1} f(x) \, \mathrm{d}x + \varepsilon, $$ and similarly $$ \liminf_{n\to\infty} \mathbf{E}[f(X_n)] \geq \liminf_{n\to\infty} \mathbf{E}[\psi(X_n)] = \int_{0}^{1} \psi(x) \, \mathrm{d}x \geq \int_{0}^{1} f(x) \, \mathrm{d}x - \varepsilon. $$ Letting $\varepsilon \downarrow 0$, the proof is complete.
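The statement itself is easy to probe numerically before (or after) proving it. A sketch (function names mine): for a smooth test function, the coprime average is already extremely close to the integral once $n$ has a large prime-power factor:

```python
from math import gcd

def coprime_average(f, n):
    # (1/phi(n)) * sum of f(k/n) over 1 <= k <= n with gcd(k, n) = 1
    values = [f(k / n) for k in range(1, n + 1) if gcd(k, n) == 1]
    return sum(values) / len(values)

# f(x) = x^2 has integral 1/3 over [0, 1]
approx = coprime_average(lambda x: x * x, 10_000)
```

For $n = 10^4 = 2^4 \cdot 5^4$ the deviation from $1/3$ is far below $10^{-6}$.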
{ "language": "en", "url": "https://math.stackexchange.com/questions/4265173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Is this function, $f$, a homomorphism under addition and subtraction? Allow me to construct $f$ $\textbf{Lemma}$: For all $x \in [0,1]$, there exists a sequence $\{b_k\}_{k=1}^\infty$ such that $$\forall k \in \mathbb{N}, b_k \in \{0,1\}$$ And $$x = \sum_{k=1}^\infty \frac{b_k}{2^k}$$ $\textbf{Proof}$: For any number $x \in [0,1]$, it has a base 2 form (binary) consisting of only $0$ before the Radix point. For example, $$1.0_{10} = 0.11\bar 1_2$$ $$0.5_{10} = 0.10\bar 0_2$$ $$0.0_{10} = 0.00\bar 0_2$$ Consider the sequence $\{b_k\}_{k=1}^\infty$, where $b_k$ is the value at the $k$th location past the Radix point of $x$'s base 2 form. Because $b_k$ is a bit in a base 2 number, for all $k \in \mathbb{N}$, $b_k \in \{0, 1\}$. In addition, because of the method used to convert base 2 numbers to base 10, and $x \in [0,1]$ $$x = \sum_{k=1}^\infty \frac{b_k}{2^k}$$ And our lemma is true. Using this Lemma, we construct a sequence of functions $$\{b_k(x)\}_{k=1}^\infty:\ [0,1] \rightarrow \{0, 1\}$$ Such that for some $x \in [0,1]$ and some corresponding $\{b_k\}_{k=1}^\infty$ as defined in our lemma, $b_k(x) = b_k$. Notice that for all $x \in [0,1]$, $$x = \sum_{k=1}^\infty \frac{b_k(x)}{2^k}$$ We will now use this function $b_k(x)$ to construct $f:\ [0,1] \rightarrow [0,1]$ Define $$f(x) := \sum_{k=1}^\infty \frac{2b_k(x)}{3^k}$$ Is this function a homomorphism under addition and subtraction? That is for $x,y \in [0,1]$, does $$(x+y) \in [0,1] \Rightarrow f(x + y) = f(x) + f(y)$$ $$(x-y) \in [0,1] \Rightarrow f(x - y) = f(x) - f(y)$$ The purpose of this is to show that $f$ is continuous. If $f$ is homomorphic, it is not too hard to prove. I have tried to prove it is continuous without this assumption but keep getting stuck. The function intrigued me and can be thought of as first converting $x$ to binary, replacing the 1's with 2's, and then converting it from ternary to decimal.
This map is not additive (even when restricted to cases where it is well-defined). Consider $x = y = 1/3$. Then $$ x = y = 1/3 = 0.\overline{01}_2 \\ x+y = 2/3 = 0.\overline{10}_2 \\ f(x) = f(y) = 0.\overline{02}_3 = 1/4 \\ f(x+y) = 0.\overline{20}_3 = 3/4 $$ and $f(x+y) \ne f(x)+f(y)$. Is $f$ continuous? Yes, it is continuous at the points with a unique binary expansion. For a counterexample we have to look at a point with two different binary expansions: $$ 1/2 = 0.1\overline{0}_2 = 0.0\overline{1}_2 . $$ So consider a sequence $a_n$, $$ a_{2n} = 0.10^n1\overline{0}_2,\\ a_{2n+1} = 0.01^n\overline{0}_2 , $$ where $0^n$ means $n$ zeros in a row and $1^n$ means $n$ ones in a row. Then $a_n$ converges to $1/2$, with terms alternately above and below $1/2$. But $f(a_n)$ does not converge. The even terms $f(a_{2n})$ converge to $2/3$ from above. The odd terms $f(a_{2n+1})$ converge to $1/3$ from below.
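The non-additivity at $x=y=1/3$ can be checked with exact rational arithmetic; here is a sketch (function names mine), truncating the series at $K$ terms, so the tail is at most $\sum_{k>K} 2/3^k = 3^{-K}$:

```python
from fractions import Fraction

def binary_digits(x, K):
    # first K binary digits of x in [0, 1); exact because x is a Fraction
    digits = []
    for _ in range(K):
        x *= 2
        b = int(x)
        digits.append(b)
        x -= b
    return digits

def f_truncated(x, K=60):
    # partial sum of f(x) = sum_k 2*b_k(x) / 3^k; the neglected tail is < 3**-K
    return sum(Fraction(2 * b, 3 ** k)
               for k, b in enumerate(binary_digits(x, K), start=1))

f_third = f_truncated(Fraction(1, 3))       # should be near 1/4
f_two_thirds = f_truncated(Fraction(2, 3))  # should be near 3/4
```

So $f(1/3)+f(1/3)\approx 1/2$ while $f(2/3)\approx 3/4$, confirming the failure of additivity. (For dyadic rationals this algorithm picks the terminating expansion, which is exactly the ambiguity the continuity counterexample exploits.)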
{ "language": "en", "url": "https://math.stackexchange.com/questions/4265460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What do ideals of $\mathbb{C}[x,y]$ containing $(y-x^2)$ look like? I encountered this statement in my ring theory course in the section on the correspondence theorem: Every ideal of the polynomial ring $\mathbb{C}[x, y]$ that contains $y − x^2$ has the form $I = (y − x^2,p(x))$, for some polynomial $p(x)$. In our course we have defined the ideal $(a_1,\dots,a_n)$ as $$(a_1,\dots,a_n) = \{a_1x_1+\dots+a_nx_n\mid x_{i}\in R\}$$ where $R$ is the ring in question. I fail to see why the statement is true, however. Consider the ideal $(y-x^2, y^2-x)$, which contains $(y-x^2)$. Isn't this in direct contradiction to the above? Why do we have to be limited only to a single variable polynomial?
Question: "Every ideal of the polynomial ring $\mathbb C[x,y]$ that contains $y−x^2$ has the form $I=(y−x^2,p(x))$, for some polynomial $p(x)$." Answer: There is for every $F(x,y)\in k[x,y]$ an equality $$ F(x,y)=F(x,x^2+(y-x^2))=F_0(x)+F_1(x)(y-x^2)+\dotsm +F_d(x)(y-x^2)^d $$ hence $$ F(x,y)=f_0(x)+f_1(x,y)(y-x^2) $$ for polynomials $f_0(x) \in k[x]$ and $f_1(x,y)\in k[x,y]$. Hence for any ideal $$ (y-x^2) \subseteq J:=(y-x^2,F_1(x,y),\dotsc,F_k(x,y)) $$ you may write $$ F_i(x,y)=f_i(x)+g_i(x,y)(y-x^2)\tag{*} $$ and hence $$ J=(y-x^2,f_1(x),... ,f_k(x)) $$ is an equality of ideals. The ideal $I:=(f_1(x),..,f_k(x))\subseteq k[x,y]$ is generated by one element $(f(x))$ (here we use that $k[x]$ is a PID) and it follows $J=(y-x^2,f(x))$. Hence you may choose $p(x):=f(x)$ where $(f_1(x),..,f_k(x))=(f(x)) \subseteq k[x]$ in the decomposition (*) - $f(x)$ is a generator for the ideal $(f_1,..,f_k) \subseteq k[x]$. Example: The inclusion $(y-x^2) \subseteq (x-a,y-a^2)$ for $a\in k$. We get $$y-a^2=y-x^2+x^2-a^2=y-x^2+(x-a)(x+a)$$ hence $$(y-a^2,x-a)=(y-x^2+(x-a)(x+a),x-a)=(y-x^2,x-a)$$ and hence $$(y-x^2) \subseteq (y-a^2,x-a).$$ Hence since $k$ is a field it follows $\mathfrak{m}:=(y-a^2,x-a)\in Spec(k[x,y])(k)$ is a $k$-rational point for every $a\in k$ contained in $V(y-x^2)$. It follows $$(y-a^2,x-a)\in Spec(k[x,y]/(y-x^2))(k).$$ Hence your ideal $(y-x^2) \subseteq k[x,y]$ is contained in a 1-dimensional family of maximal ideals $(x-a,y-a^2)$ with $a\in k$. Question: "I fail to see why the statement is true, however. Consider the ideal $(y−x^2,y^2−x)$, which contains $(y−x^2)$. Isn't this in direct contradiction to the above? Why do we have to be limited only to a single variable polynomial?" Example: Let $F_1(x,y)=y^2-x$. You get $$F_1=-x+(x^2+y-x^2)^2= x^4-x+(x^2+y)(y-x^2)=f_1(x)+g_1(x,y)(y-x^2)$$ with $f_1(x):=x^4-x, g_1(x,y):=x^2+y$. Hence $$(y-x^2,y^2-x)=(y-x^2,x^4-x+(x^2+y)(y-x^2))=(y-x^2,x^4-x).$$ Hence using the above "algorithm" you may choose $f(x)=x^4-x$.
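The key identity $y^2-x=x^4-x+(x^2+y)(y-x^2)$ from the example is a polynomial identity, so it can be certified by evaluating both sides on a grid of integer points; a sketch:

```python
def lhs(x, y):
    return y * y - x

def rhs(x, y):
    return x ** 4 - x + (x * x + y) * (y - x * x)

# Two polynomials of degree <= 4 in x and <= 2 in y that agree on a 5 x 3
# grid of points must be identical, so this check is actually a proof.
agree = all(lhs(x, y) == rhs(x, y) for x in range(5) for y in range(3))
```

Setting $y = x^2$ kills the second term of the right-hand side, which is exactly the statement that $y^2-x$ reduces to $f_1(x)=x^4-x$ modulo $(y-x^2)$.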
{ "language": "en", "url": "https://math.stackexchange.com/questions/4265820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do you explain intuitively that in split-complex numbers, $0^{\frac12+\frac{j}2}=\frac12-\frac{j}2$ and $0^{\frac12-\frac{j}2}=\frac12+\frac{j}2$? How do you explain intuitively that in split-complex numbers, $0^{\frac12+\frac{j}2}=\frac12-\frac{j}2$ and $0^{\frac12-\frac{j}2}=\frac12+\frac{j}2$? Usually zero to any power is zero, except when the power is zero itself, or otherwise cannot be defined at all. But in split-complex numbers, when we rise zero to a power of some zero divisors, we obtain other zero divisors. But this does not work with all zero divisors, for instance, $0^{\frac{j}2-\frac12}$ would be infinite. Is there an intuitive, for instance, geometric explanation for the equality in the title?
Note that for real $x$, we have: $$e^x\cosh x-e^x\sinh x=e^x\left(\dfrac{e^x+e^{-x}}2-\dfrac{e^x-e^{-x}}2\right)=e^xe^{-x}=1\tag{$\star$}$$ Now let $a$ be a positive real number, and set $t=(\ln a)/2$ for convenience. We have \begin{align} &\phantom{=,,}a^{(1\pm j)/2}\\&=\left(\exp(2t)\right)^{(1\pm j)/2}\\&=\exp\left(t\pm jt\right)\\&=\exp(t)\exp(\pm jt)\\&=\exp(t)\left(\cosh(t)\pm j\sinh(t)\right)\\&=\exp(t)\sinh(t)+1\pm j\exp(t)\sinh(t)\text{ by }(\star)\\&=1+\exp(t)\sinh(t)(1\pm j)\text{.}\end{align} Geometrically, this means that for positive $a$, $a^{(1\pm j)/2}$ always lies on the line $y=\pm(x-1)$ (parallel to the "null cone"). In the limit as $a$ approaches $0$ from above, we have $t\to -\infty$. But $e^{t}\sinh t=\dfrac{e^{2t}-1}2\to-1/2$, so that $a^{(1\pm j)/2}\to 1-(1\pm j)/2=(1\mp j)/2$. This is a good reason to define $0^{(1\pm j)/2}=(1\mp j)/2$.
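Numerically, the limit is easy to observe. A sketch (representing a split-complex number $u+vj$ as the pair $(u,v)$; the function name is mine):

```python
import math

def power_half_plus_half_j(a):
    # a^{(1+j)/2} for a > 0, via a^{(1+j)/2} = e^t (cosh t + j sinh t), t = ln(a)/2
    t = math.log(a) / 2
    return (math.exp(t) * math.cosh(t), math.exp(t) * math.sinh(t))

u, v = power_half_plus_half_j(1e-12)  # should approach (1 - j)/2 = (0.5, -0.5)
```

By $(\star)$ we have $u - v = 1$ exactly, i.e. the point always lies on the line $y = x - 1$ parallel to the null cone, as claimed.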
{ "language": "en", "url": "https://math.stackexchange.com/questions/4265975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Two independent spanning trees of $2$-connected graph I want to prove the following statement: Let $u$ be a vertex in a $2$-connected graph $G$. Then $G$ has two spanning trees such that for every vertex $v$, the $u,v$-paths in the trees are independent. I tried to show this, but ended up proving another statement instead. A graph with $\vert V(G) \vert \geq 3$ is $2$-connected iff for any two vertices $u$ and $v$ in $G$, there exist at least two independent $u,v$-paths. And I can assure you that it is true, since I found it in other papers. I think this one may help me prove the desired statement, but I have no idea how to use it properly. Would you help me find such a way, or suggest another proof of the first statement?
You could try building up two trees $T_1,T_2$ such that $V(T_1) = V(T_2)$ (here $V(T_i)$ denotes the vertex set of $T_i$), $u \in V(T_1)$ and for every vertex $v \in V(T_1) - \{u\}$ the $u-v$ paths in $T_1$ and $T_2$ are independent (i.e. internally disjoint). At the beginning, let $T_1$ and $T_2$ consist of the single vertex $u$. Then while $V(T_1) \neq V$, choose an outgoing edge $e = u'v'$, where $u' \in V(T_1)$. Because $G$ is $2$-connected, you can find a path $P$ not containing $u'$ that goes from $v'$ to some vertex of $V(T_1)$ (Edit: as pointed out in the comments, this is not true for the first step and we need to modify the argument slightly in this case). Then you can use $P$ and $e$ to extend $T_1$ and $T_2$ to larger trees while maintaining the properties above. The above strategy can be also rephrased as taking an open ear decomposition of $G$ whose initial cycle contains $u$, and then constructing two suitable spanning trees from the ear decomposition "one ear at a time".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4266252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A combinatorial question about odd and even numbers Consider an $n$-tuple. Each index of the tuple is filled with a symbol chosen from the following set comprising four symbols: \begin{equation} S = \{a, b, c, d\}. \end{equation} Note that there are $4^{n}$ possible fillings. I want to count the number of possible tuples such that $b$-s, $c$-s, and $d$-s all occur in it an even number of times (not necessarily the same even number for each.) For example, if $n = 4$, then $bbba$ is not a valid tuple --- as it has $3$ $b$-s, which is an odd number. However $bbcc$ is a valid tuple --- as both $b$ and $c$ occur an even number of times. For $n = 2$, the valid choices are $aa, bb, cc, dd$. Hence, the number of such tuples is $4$. For $n = 3$, the valid choices are $aaa, abb, acc, add, bba, cca, dda, bab, cac, dad$. So, $10$ choices. Is there a general pattern?
Since this is tagged combinatorial-proofs, here’s a purely bijective solution. Let the “signature” of a tuple be the numbers of $b$s, $c$s, and $d$s mod 2, and let $x_s$ be the number of tuples with signature $s$ (so we seek $x_{000}$). By the symmetry $b → c → d → b$, we have $x_{100} = x_{010} = x_{001}$ and $x_{110} = x_{101} = x_{011}$. We can count all $4^n$ tuples as follows: $$x_{000} + 3x_{100} + 3x_{011} + x_{111} = 4^n. \tag{1}$$ Consider the transformation that flips the first letter that’s $a$ or $b$ to $b$ or $a$, respectively. This is a self-inverse transformation that exchanges signatures $000 ↔ 100$, $010 ↔ 110$, $001 ↔ 101$, $011 ↔ 111$, except that it fails on the $2^n$ tuples made entirely from $c$ and $d$. If $n$ is even, the signatures of these failing tuples are half $000$ and half $011$; if $n$ is odd, they’re instead half $010$ and half $001$. So: \begin{gather*} x_{000} - x_{100} = 2^{n - 2} + (-2)^{n - 2}, \tag{2} \\ x_{011} - x_{111} = 2^{n - 2} + (-2)^{n - 2}, \tag{3} \\ x_{010} - x_{110} = 2^{n - 2} - (-2)^{n - 2}, \tag{4} \\ x_{001} - x_{101} = 2^{n - 2} - (-2)^{n - 2}. \tag{5} \\ \end{gather*} By $\text{(1)} + 7·\text{(2)} + \text{(3)} + 4·\text{(4)}$, we have $$8x_{000} = 4^n + 8(2^{n - 2} + (-2)^{n - 2}) + 4(2^{n - 2} - (-2)^{n - 2}), \\ x_{000} = \frac{4^n + 3·2^n + (-2)^n}{8}.$$
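The closed form derived above can be checked against brute force for small $n$; a sketch:

```python
from itertools import product

def count_valid(n):
    # count length-n words over {a,b,c,d} where b, c, d each appear an even number of times
    return sum(1 for w in product("abcd", repeat=n)
               if all(w.count(ch) % 2 == 0 for ch in "bcd"))

def closed_form(n):
    return (4 ** n + 3 * 2 ** n + (-2) ** n) // 8

checks = [(count_valid(n), closed_form(n)) for n in range(1, 9)]
```

This reproduces the asker's values $4$ for $n=2$ and $10$ for $n=3$.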
{ "language": "en", "url": "https://math.stackexchange.com/questions/4266460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Find weak derivative of sign-like function Let $f:\mathbb{R}^2\to \mathbb{R}$ be a function defined as follows. $$f(x,y)=\begin{cases} 1, &\text{if } x>y\\ -1,&\text{if } x<y. \end{cases}$$ To compute the weak derivative $f_x$ of $f$, I proceeded as follows. Assume $w$ be a weak derivative of $f$ in the sense of distribution. By definition of weak derivative, we have $$\int_{\mathbb{R}^2}f(x,y)\,\varphi_x\,\mathrm dx\,\mathrm dy=-\int_{\mathbb{R}^2}w\,\varphi \,\mathrm dx\,\mathrm dy.$$ After a computation, the LHS becomes $$\int_{\mathbb{R}^2}f(x,y)\,\varphi_x\,\mathrm dx\,\mathrm dy = -2\int_{\mathbb{R}}\varphi(y,y)\,\mathrm dy.$$ Then $$\int_{\mathbb{R}^2}w\,\varphi \,\mathrm dx\,\mathrm dy=2\int_{\mathbb{R}}\varphi(y,y)\,\mathrm dy.$$ It looks like $w$ is related to Dirac delta distribution, but I don't know how to get an explicit formula for $w$. Any help would be appreciated.
You were almost there! Just remark that using the definition of $\delta_0(x-y) = \delta_y(x)$ gives $$ \int_{\Bbb R} \varphi(y,y)\,\mathrm dy = \int_{\Bbb R} \left(\int_{\Bbb R}\varphi(x,y) \, \delta_y(x) \,\mathrm dx\right)\mathrm dy \\ \qquad= \int_{\Bbb R^2} \varphi(x,y) \, \delta_0(x-y) \,\mathrm dx\,\mathrm dy = \langle\delta_0(x-y),\varphi\rangle $$ and so you just proved that $f_x = 2\,\delta_0(x-y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4266629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Non-probabilistic/non-combinatoric proof of $\sum_{k=n}^{2n} \frac{{k \choose n}}{2^k}=1$ The summation $$S_n=\sum_{k=n}^{2n} \frac{{k \choose n}}{2^k}=1~~~~(1)$$ has been proved using a probabilistic/combinatoric method on MSE earlier: Combinatorial or probabilistic proof of a binomial identity Here we give an analytic proof for (1): $$S_n=\sum_{k=n}^{2n} \frac{{k-1 \choose n-1}+{k-1 \choose n}}{2^k}~~~~~(2)$$ Let $k-1=p$, then $$S_n=\sum_{p=n-1}^{2n-1} \frac{{p \choose n-1}+{p \choose n}}{2^{p+1}}~~~~(3)$$ Take the first term $$\sum_{p=n-1}^{2n-1} \frac{{p \choose n-1}}{2^{p+1}}=\sum_{p=n-1}^{2n-2} \frac{{p \choose n-1}}{2^{p+1}}+\frac{{2n-1 \choose n-1}}{2^{2n}}=\frac{S_{n-1}}{2}+\frac{{2n-1 \choose n-1}}{2^{2n}}~~~~~(4)$$ Now take the second term $$\sum_{p=n-1}^{2n-1} \frac{{p \choose n}}{2^{p+1}}=\sum_{p=n}^{2n} \frac{{p \choose n}}{2^{p+1}}+\frac{{n-1 \choose n}}{2^{n}}-\frac{{2n \choose n}}{2^{2n+1}}~~~~(5)$$ The second term on the RHS vanishes, and the third term is equal and opposite to the second term on the RHS of (4), since ${2n \choose n}=2{2n-1 \choose n-1}$. Adding (4) and (5), we get $$S_n=\frac{S_{n-1}}{2}+\frac{S_{n}}{2} \implies S_n=S_{n-1}$$ So $S_n$ is constant (independent of $n$), and therefore $S_n=S_{n-1}=\dots=S_1=1.$ The question is: What could be other analytic proofs of (1) without using the ideas of probability/combinatorics?
We seek to show that $$\sum_{k=n}^{2n} 2^{-k} {k\choose n} = 1$$ or alternatively $$S_n = \sum_{k=0}^n 2^{-k} {k+n\choose n} = 2^n.$$ Using an Iverson bracket we find $$\sum_{k\ge 0} 2^{-k} {k+n\choose n} [z^n] \frac{z^k}{1-z} = [z^n] \frac{1}{1-z} \sum_{k\ge 0} 2^{-k} {k+n\choose n} z^k \\ = [z^n] \frac{1}{1-z} \frac{1}{(1-z/2)^{n+1}}.$$ This is $$\mathrm{Res}_{z=0} \frac{1}{z^{n+1}} \frac{1}{1-z} \frac{1}{(1-z/2)^{n+1}} \\ = (-1)^n 2^{n+1} \mathrm{Res}_{z=0} \frac{1}{z^{n+1}} \frac{1}{z-1} \frac{1}{(z-2)^{n+1}}.$$ Residues sum to zero and the residue at infinity is zero by inspection. Hence we take minus the sum of the residues at $z=1$ and $z=2.$ We obtain for $z=1$ $$-(-1)^n 2^{n+1} (-1)^{n+1} = 2^{n+1}.$$ We write for $z=2$ $$- (-1)^n 2^{n+1} \mathrm{Res}_{z=2} \frac{1}{((z-2)+2)^{n+1}} \frac{1}{(z-2)+1} \frac{1}{(z-2)^{n+1}} \\ = (-1)^{n+1} \mathrm{Res}_{z=2} \frac{1}{(1+(z-2)/2)^{n+1}} \frac{1}{(z-2)+1} \frac{1}{(z-2)^{n+1}}.$$ This yields $$(-1)^{n+1} \sum_{q=0}^n (-1)^q 2^{-q} {n+q\choose n} (-1)^{n-q} = - \sum_{q=0}^n 2^{-q} {n+q\choose n} = - S_n.$$ We have shown that $S_n = 2^{n+1} - S_n$ or $S_n = 2^n$ as claimed.
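Both forms of the identity can be verified with exact rational arithmetic; a sketch:

```python
from fractions import Fraction
from math import comb

def original_sum(n):
    # sum_{k=n}^{2n} C(k, n) / 2^k, claimed to equal 1
    return sum(Fraction(comb(k, n), 2 ** k) for k in range(n, 2 * n + 1))

def shifted_sum(n):
    # S_n = sum_{k=0}^{n} C(k+n, n) / 2^k, claimed to equal 2^n
    return sum(Fraction(comb(k + n, n), 2 ** k) for k in range(n + 1))

results = [(original_sum(n), shifted_sum(n)) for n in range(1, 16)]
```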
{ "language": "en", "url": "https://math.stackexchange.com/questions/4266830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
For any sets $A$ and $B$ show that: $(A \cup B) \diagdown (A \cap B) = (A \diagdown B) \cup (B \diagdown A)$ I have the question "solved" and I understand what the question is asking; I just feel like the solution I have is not sufficiently "mathematical" and would like some guidance writing it in a more sophisticated way. If the way I answered it is sufficient for full marks, however, that would be good to know as well. It just feels too... casual. The goal of this problem is to show that the set on the LHS of the equation is a subset of the set on the right and that the converse is true as well. So the set on the LHS of the equation can be written in set-builder notation as: $(A \cup B) \diagdown (A \cap B)$ $=$ {$x: x\in A$ or $x\in B, x\notin A$ and $B$} Similarly the set on the RHS of the equation can be written in set-builder notation as $(A \diagdown B) \cup (B \diagdown A)$ $=$ {$x: x\in A$, $x \notin B$ or $x \in B$, $x\notin A$} If these set-builder notation write-ups are wrong please let me know. * *To show LHS is a subset of RHS we let $x \in$ {$x: x\in A$ or $x\in B, x\notin A$ and $B$} and note that $x$ can be an element of either $A$ or $B$ but not both simultaneously, thus we can see that $x \in$ $(A \diagdown B) \cup (B \diagdown A)$ *To show RHS is a subset of LHS we let $x \in$ {$x: x\in A$, $x \notin B$ or $x \in B$, $x\notin A$} and note that if $x \in A$ then $x$ is not an element of $B$ and if $x \in B$ then $x$ is not an element of $A$ hence $x$ could never be an element of $A$ and $B$ and thus $x \in$ $(A \cup B) \diagdown (A \cap B)$ I feel like I've done nothing more than write the sets in set-builder notation and then describe them and say this equals that. I can see how they are equivalent but it doesn't seem like enough work. Am I missing something here? Any help is greatly appreciated.
$$A\Delta B =(A \cup B) \diagdown (A \cap B) = (A \diagdown B) \cup (B \diagdown A)$$ This set is called the symmetric difference of $A$ and $B$. Note that on both sides we have exactly those elements which are in one and only one of the two sets, so the two sides are equal. The symmetric difference is also defined for three or more sets. One notable property of the symmetric difference is the associative property $$ A\Delta (B\Delta C)=(A\Delta B)\Delta C$$ which makes a good exercise for students of a set theory course.
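A quick check of the identity with Python's built-in set operations (a sketch; the example sets are arbitrary):

```python
# Check (A ∪ B) \ (A ∩ B) == (A \ B) ∪ (B \ A) on a few example sets.
def sym_diff_lhs(A, B):
    return (A | B) - (A & B)

def sym_diff_rhs(A, B):
    return (A - B) | (B - A)

examples = [
    (set(), set()),
    ({1, 2, 3}, {3, 4, 5}),
    ({1, 2}, {1, 2}),
    ({1, 2, 3}, set()),
]
for A, B in examples:
    # ^ is Python's built-in symmetric difference, an independent cross-check
    assert sym_diff_lhs(A, B) == sym_diff_rhs(A, B) == A ^ B

# Associativity: A Δ (B Δ C) == (A Δ B) Δ C
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
assert A ^ (B ^ C) == (A ^ B) ^ C
```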
{ "language": "en", "url": "https://math.stackexchange.com/questions/4267099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Is $\lim_{x\to a} \frac{\tan{f(x)}}{f(x)}$ always 1? [And other generalization of related limits] I've just discovered something really interesting (Or to be precise, it's been discovered for ages but I've just realized it now.) We know these: $$\lim_{x\to 0} \frac{\tan{x}}{x} = 1$$, $$\lim_{x\to 0} \frac{\sin{x}}{x} = 1$$, and $$\lim_{x\to 0} \frac{1-\cos{x}}{x} = 0$$ However, can we generalize it like this: For all polynomial $f(x)$ that has roots $a_i$, where $a_i\in \Bbb{Z}$, then $$\lim_{x\to a_i} \frac{\tan{f(x)}}{f(x)} = 1$$, $$\lim_{x\to a_i} \frac{\sin{f(x)}}{f(x)} = 1$$, and $$\lim_{x\to a_i} \frac{1 - \cos{f(x)}}{f(x)} = 0$$ I'm not going to write all functions that I've been tried, but here's one of them: $f(x)=x^2+x-6$ has roots $-3$ and $2$. That means, $$\lim_{x\to -3} \frac{\tan{(x^2+x-6)}}{x^2+x-6} = 1$$ and $$\lim_{x\to 2} \frac{\tan{(x^2+x-6)}}{x^2+x-6} = 1$$ and perhaps it holds for the other related limits. Nevertheless, how to prove it if this is true (please, give a counterexample if it isn't)? Also, what if $a_i\in \mathbb{R}$? Thanks in advance!
Whenever you have $\lim_{x\to0}f(x)=l$ and $a$ is a zero of a continuous function $g$, then $\lim_{x\to a}g(x)=g(a)=0$, and therefore $\lim_{x\to a}f\bigl(g(x)\bigr)=l$. This applies to all of your limits.
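A numeric illustration of this composition argument, using the question's example $f(x)=x^2+x-6$ with roots $2$ and $-3$ (a Python sketch):

```python
import math

def ratio(x):
    # tan(f(x)) / f(x) for the question's polynomial f(x) = x^2 + x - 6
    f = x**2 + x - 6
    return math.tan(f) / f

# Approaching either root, f(x) -> 0 continuously, so the ratio tends to 1.
for root in (2.0, -3.0):
    for h in (1e-3, 1e-5, 1e-7):
        assert abs(ratio(root + h) - 1.0) < 1e-4
```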
{ "language": "en", "url": "https://math.stackexchange.com/questions/4267295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the $6$-tuples $(x_1,x_2,x_3,x_4,x_5,x_6)$ to determine the least number of needed nurses A simple examination in a hospital shows that the hospital needs the following number of nurses at different times of the day: \begin{array}{|c|c|c|} \hline \text{course} & \text{time} & \text{least number of needed nurses} \\ \hline 1 & 6\;\text{am}\;\text{to}\;10\;\text{am} & 60\\ \hline 2 & 10\;\text{am}\;\text{to}\;2\;\text{pm} & 70\\ \hline 3 & 2\;\text{pm}\;\text{to}\;6\;\text{pm} & 60\\ \hline 4 & 6\;\text{pm}\;\text{to}\;10\;\text{pm} & 50\\ \hline 5 & 10\;\text{pm}\;\text{to}\;2\;\text{am} & 20\\ \hline 6 & 2\;\text{am}\;\text{to}\;6\;\text{am} & 30\\ \hline \end{array} As soon as the nurses arrive at the hospital, they introduce themselves at the beginning of each course and start their work, and they can work for $8$ hours. The hospital wants to know how many nurses it needs to hire so that it has the necessary manpower for the different periods of work. Formulate the problem in mathematical programming. Define $x_i$ as the number of nurses starting their work at the beginning of the $i$-th course; then we are going to find the $6$-tuple $(x_1,x_2,x_3,x_4,x_5,x_6)$ to minimize $z=\sum_{i=1}^{6} x_i$ assuming the following relations hold: $$x_1 \ge 60$$ $$x_1+x_2 \ge 70$$ $$x_2+x_3 \ge 60$$ $$x_3+x_4 \ge 50$$ $$x_4+x_5 \ge 20$$ $$x_5 +x_6\ge 30$$ $$x_6 +x_1\ge 60$$ $$x_i \in \mathbb Z_{ \ge 0},\; i \in \{1,...,6\}$$ However my source answer removes the first constraint $x_1 \ge 60$, but I think that's necessary (assuming the hospital starts its work at $6$ am). I'm not looking for the answer itself; instead I want to know whether the inequality $x_1 \ge 60$ should be mentioned or not.
The $x_1 \ge 60$ constraint should be omitted. Every nurse covers two consecutive courses, and every course is covered by two consecutive sets of nurses. So every variable should appear twice, and every constraint should contain two variables.
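The pairing structure, and the fact that the resulting model attains its lower bound, can be checked directly (a Python sketch; the candidate schedule below is one hypothetical optimum, not the only one):

```python
# Each nurse starting course i covers courses i and i+1 (cyclically),
# so every coverage constraint pairs two consecutive variables.
demand = [60, 70, 60, 50, 20, 30]  # courses 1..6, 0-indexed here

def feasible(x):
    # course i is covered by nurses who started at course i-1 and at course i
    # (x[i - 1] wraps around via Python's negative indexing)
    return all(x[i - 1] + x[i] >= demand[i] for i in range(6))

# Lower bound: summing the constraints for courses 2, 4, 6 gives
# (x1+x2) + (x3+x4) + (x5+x6) >= 70 + 50 + 30 = 150.
candidate = [60, 10, 50, 0, 20, 10]
assert feasible(candidate) and sum(candidate) == 150  # the bound is attained
```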
{ "language": "en", "url": "https://math.stackexchange.com/questions/4267425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Defining convergence in terms of the set of rationals in the gap between $\liminf$ and $\limsup$ Consider a real sequence $x_n$ and the set $A = \{ r | \liminf x_n < r < \limsup x_n \, , \, r \in \mathbb Q\}$. Is it true that $|A| = 0$ if and only if $x_n$ converges to a real limit? My attempt: Note that if $\liminf x_n < r < \limsup x_n$ for some real $r$ then there exists a $q \in \mathbb Q$ close enough to $r$ such that $\liminf x_n < q < \limsup x_n$ since the rationals are dense in the reals. Also, if there exists a rational $q$ such that $\liminf x_n < q < \limsup x_n$ then the inequality is satisfied for a real value since $\mathbb Q \subset \mathbb R$. So, the set $A$ above has cardinality $0$ if and only if the set $B$ has cardinality $0$ where $B = \{r | \liminf x_n < r < \limsup x_n, r \in \mathbb R\}$. But $B$ has cardinality $0$ only when $\liminf x_n = \limsup x_n$ which implies that $x_n$ converges to a real limit. I could use some feedback on my reasoning.
If $|A|$ means the measure of $A$, then $|A|=0$ for any sequence $x_n$. If $|A|$ means the cardinality of $A$, then $|A|=0$ if and only if $x_n$ converges to a finite or infinite limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4267627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving that expression can be expressed as a sum of $3n-1$ natural square numbers that are bigger than one $n$ is a natural number. Prove that $(2n+1)^3 - 2$ can be expressed as sum of $3n - 1$ natural square numbers, which are bigger than one. Is this a valid proof? Showing that $((4+1)^3 - 2) - ((2+1)^3 - 2)=98= 3^2 + 5^2 + 8^2$ can be expressed as a sum of $(6-1)-(3-1) = 3$ natural square numbers that are bigger than one. Then showing for $n=k+1$ and $n=k$, that it can be expressed as $(3(k+1) - 1) - (3k-1) = 3$ natural square numbers?
You have made a good start on using induction to prove the hypothesis that $$p(n) = (2n + 1)^3 - 2 \tag{1}\label{eq1A}$$ can be expressed as the sum of $3n - 1$ perfect squares, each greater than $1$. First, the base case is $n = 1$, where $p(1) = 3^3 - 2 = 25$, which can be expressed as the sum of $3n - 1 = 2$ perfect squares as $25 = 9 + 16 = 3^2 + 4^2$. Next, assume the hypothesis is true for $n = k$ for some $k \ge 1$. Then, as you indicated, showing that $p(k+1) - p(k)$ can be expressed as the sum of $3$ squares means these squares can be added to the $3k - 1$ squares already used for $p(k)$, thus showing that $p(k + 1)$ is a sum of $(3k - 1) + 3 = 3(k + 1) - 1$ perfect squares. Note that $$\begin{equation}\begin{aligned} p(k + 1) - p(k) & = ((2(k + 1) + 1)^3 - 2) - ((2k + 1)^3 - 2) \\ & = (2k + 3)^3 - (2k + 1)^3 \\ & = 8k^3 + 3(4)(3)k^2 + 3(2)(9)k + 27 - (8k^3 + 3(4)k^2 + 3(2)k + 1) \\ & = 8k^3 + 36k^2 + 54k + 27 - (8k^3 + 12k^2 + 6k + 1) \\ & = 24k^2 + 48k + 26 \\ & = (4k^2 + 4k + 1) + (4k^2 + 12k + 9) + (16k^2 + 32k + 16) \\ & = (2k + 1)^2 + (2k + 3)^2 + (4k + 4)^2 \end{aligned}\end{equation}\tag{2}\label{eq2A}$$ Since $k \ge 1$, each perfect square above is $\gt 1$. As explained earlier, this means the hypothesis is also true for $n = k + 1$ so, by induction, it's true for all $n \ge 1$.
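Both the algebraic identity and the recursive construction can be verified numerically (a Python sketch):

```python
# Identity used in the inductive step (eq. 2):
for k in range(1, 50):
    assert (2*k + 3)**3 - (2*k + 1)**3 == (2*k + 1)**2 + (2*k + 3)**2 + (4*k + 4)**2

# Build the list of square roots recursively and check the sum of squares.
squares = [3, 4]           # base case n = 1: 25 = 3^2 + 4^2
for k in range(1, 20):     # extend from n = k to n = k + 1
    squares += [2*k + 1, 2*k + 3, 4*k + 4]
    n = k + 1
    assert len(squares) == 3*n - 1
    assert sum(s*s for s in squares) == (2*n + 1)**3 - 2
    assert all(s > 1 for s in squares)
```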
{ "language": "en", "url": "https://math.stackexchange.com/questions/4267767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the ordering of integrals important in a 2D integral? Let's consider $\int_a^b\int_c^d f(x,y)dxdy$: do we know that $x$ will vary from $a$ to $b$ and $y$ from $c$ to $d$, or said similarly, is the ordering crucial in the definition? Same question for the ordering of dx and dy? I'm not asking about Fubini's theorem: I ask explicitly if $\int_a^b\int_c^d f(x,y)dxdy$=$\int_c^d\int_a^b f(x,y)dxdy$, that is: could $x$ be either from $a$ to $b$ or from $c$ to $d$, without changing the result?
Usually $\int_a^b\int_c^d f(x,y)dxdy$ is interpreted as $$\int_a^b\left(\int_c^d f(x,y)dx\right)dy$$ and this is, in general, different from $$\int_c^d\left(\int_a^b f(x,y)dx\right)dy$$ as you can see using a simple function as $f(x,y)=x$
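This can be made concrete with $f(x,y)=x$ and illustrative bounds (a Python sketch using a midpoint-rule approximation):

```python
def iterated(f, x_lo, x_hi, y_lo, y_hi, n=400):
    # midpoint rule for  ∫_{y_lo}^{y_hi} ( ∫_{x_lo}^{x_hi} f(x, y) dx ) dy
    hx = (x_hi - x_lo) / n
    hy = (y_hi - y_lo) / n
    total = 0.0
    for i in range(n):
        y = y_lo + (i + 0.5) * hy
        total += sum(f(x_lo + (j + 0.5) * hx, y) for j in range(n)) * hx * hy
    return total

f = lambda x, y: x
# x from 0 to 2, y from 0 to 1:  ∫_0^1 ( ∫_0^2 x dx ) dy = 2
# swapping the roles of the limit pairs gives a different value:
# x from 0 to 1, y from 0 to 2:  ∫_0^2 ( ∫_0^1 x dx ) dy = 1
assert abs(iterated(f, 0, 2, 0, 1) - 2.0) < 1e-6
assert abs(iterated(f, 0, 1, 0, 2) - 1.0) < 1e-6
```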
{ "language": "en", "url": "https://math.stackexchange.com/questions/4267904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Translating English statements into quantifiers and predicates. Consider the following statement: "All your friends are perfect." This can be translated into quantifiers and predicates as follows : Let P (x) be “x is perfect” and let F (x) be “x is your friend” and let the domain be all people. So the statement can be written as : ∀x (F (x) → P (x)). However, consider the statement: At least one of your friends is perfect. The answer is ∃x (F (x) ∧ P (x)), but why it is not ∃x (F (x) → P (x)) ?
The last expression can be translated into English as "There is at least one person such that, if he is your friend, then he is perfect." So there is the possibility that you don't have friends. The first expression requires you to have at least one friend. $F\land T=F$, while $F\to T=T$. Similarly, $F\land F=F$, while $F\to F=T$.
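A tiny finite model makes the difference concrete (a Python sketch; the domain and predicate values are hypothetical):

```python
# Domain: three people. You have no friends; one person happens to be perfect.
people = ["alice", "bob", "carol"]
friend = {"alice": False, "bob": False, "carol": False}
perfect = {"alice": False, "bob": True, "carol": False}

exists_and = any(friend[x] and perfect[x] for x in people)
# F -> P is logically equivalent to (not F) or P
exists_imp = any((not friend[x]) or perfect[x] for x in people)

assert not exists_and  # "at least one of your friends is perfect" is false
assert exists_imp      # vacuously true: any non-friend satisfies F(x) -> P(x)
```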
{ "language": "en", "url": "https://math.stackexchange.com/questions/4268074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Generating series for ternary strings without 000 and not ending with 0 I would like to find a formula for $T_n$, the number of ternary strings of length $n$ so that they do not contain three consecutive zeroes, and they do not end with $0$ as well. I can find a recurrence relation for $T_n$, however I wonder if we can express $T_n$ as a coefficient of a generating series. Thanks for any help!
I see that you work with recurrence relations, and you are willing to learn. Hence, I will share a very powerful method with you. You can handle nearly all recurrence relation problems with it. Our method is the Goulden–Jackson cluster method. I put that link there for you; you can read it to learn the details. Let's turn to our question. We can claim that $\color{red}{|}$the number of strings that do not have three consecutive zeros$\color{red}{|}$ = $\color{blue}{|}$the number of strings that do not have three consecutive zeros and end with $0$$\color{blue}{|}$ + $\color{red}{|}$the number of strings that do not have three consecutive zeros and end with $1$$\color{red}{|}$ + $\color{red}{|}$the number of strings that do not have three consecutive zeros and end with $2$$\color{red}{|}$. Here, we are looking for $\color{red}{|}$the number of strings that do not have three consecutive zeros and end with $1$$\color{red}{|}$ + $\color{red}{|}$the number of strings that do not have three consecutive zeros and end with $2$$\color{red}{|}$. Right? We can see that the number of strings of length $n$ that do not have three consecutive zeros and end with $1$ is equal to the number of strings of length $n-1$ that do not have three consecutive zeros. The same is valid for the strings that end with $2$. Now it is time to use our powerful method. We will find a generating function and read off the count for length $n-1$. Let's first do the calculation for the strings ending with $1$ that do not have $3$ consecutive zeros. We said that this equals the number of strings of length $n-1$ that do not have three consecutive zeros. We know that our alphabet consists of $3$ elements, $V=\{0,1,2\}$, and our bad word is $\{000\}$.
According to the paper $(p.7)$ the generating function $A(z)$ is $$A(z)=\frac{1}{1-dz - \text{weight}(C)}$$ with $d=|V|=3$, the size of the alphabet, and $C$ the weight-numerator of bad words with $$\text{weight}(C)= \text{weight}([000])$$ $$\text{weight}([000])= -z^3 - (z^2 +z)\text{weight}([000])$$ $$A(z)= \frac{1}{1-3z + \frac{z^3}{1+z+z^2}}$$ $$A(z)= \frac{1+z+z^2}{1-2z-2z^2 -2z^3}$$ We also obtain the same result for the strings ending with $2$. Then the sum of them gives us the generating function we want, namely $$\frac{2+2z+2z^2}{1-2z-2z^2 -2z^3}$$ Now we should convert it into a recurrence relation. We recall that if a generating function has a representation as a rational function of the form \begin{align*} A(z)=\sum_{n=0}^\infty a_n z^n=\frac{P(z)}{Q(z)} \end{align*} with $P(z), Q(z)$ polynomials, $\deg Q=q>\deg P$ and \begin{align*} Q(z)=1+\alpha_1 z+\alpha_2 z^2+\cdots + \alpha_q z^q \end{align*} then the coefficients $a_n$ follow the recurrence relation \begin{align*} a_{n+q}+\alpha_1 a_{n+q-1}+\alpha_2 a_{n+q-2}+\cdots +\alpha_q a_{n}=0\qquad\qquad n\geq 0 \end{align*} Then our recurrence relation is $$a_{n-1}=2a_{n-2}+2a_{n-3}+2a_{n-4}$$ Calculation via Wolfram. It means that for $n=4$ there are $52$ such strings, for $n=5$ there are $152$ such strings, for $n=15$ the answer is $6843168$, and so on.
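The counts produced by the recurrence can be cross-checked against brute-force enumeration (a Python sketch):

```python
from itertools import product

def brute(n):
    # ternary strings of length n with no "000" substring, not ending in 0
    return sum(1 for s in product("012", repeat=n)
               if "000" not in "".join(s) and s[-1] != "0")

def via_recurrence(n):
    # b_m = number of ternary strings of length m avoiding "000";
    # b_m = 2(b_{m-1} + b_{m-2} + b_{m-3}), and the answer is 2 * b_{n-1}
    b = [1, 3, 9]
    while len(b) < n:
        b.append(2 * (b[-1] + b[-2] + b[-3]))
    return 2 * b[n - 1]

assert via_recurrence(4) == brute(4) == 52
assert via_recurrence(5) == brute(5) == 152
```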
{ "language": "en", "url": "https://math.stackexchange.com/questions/4268228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Showing that a constructed point is the circumcenter of a given triangle For reference (exact copy of the question): In the figure shown, what notable dot is "$E$" for triangle $ABC$? (answer: circumcenter) Original figure: My progress: The original figure does not make it clear whether the centers of the circles are coincident or whether points $A$ and $C$ are tangent to the smaller circle. So I made this figure that seems to me to be the correct one. Based on it I can say with certainty that: $\angle E=90^\circ\\ EA=EC \therefore \triangle EAC(\text{isosceles})\implies \angle C=\angle A = 45^o$ So, $AE$ isn't a angle bisector and therefore $E$ isn't incenter $AE$ is not perpendicular to $BC$ therefore $E$ is not orthocenter The prolongation of $AE$ is not midpoint of $BC$ so $E$ is not barycenter Need to show that $EA=EB$...
$\small \angle EAC=\angle AOC=\angle ECA=\angle CJA=45^\circ$ (alternate segment theorem) Also $\small \angle BAC=\angle CJO$ and $\small \angle BCA=\angle AOJ$. (exterior angles of cyclic quadrilateral $\small AOJC$) Say $\small \angle AJO=\alpha$ and $\small \angle COJ=\beta$. Thus $\small \angle BAE=\alpha$ and $\small \angle BCE=\beta$. Considering $\small \triangle AOJ$, $\small \alpha+\beta+45^\circ=90^\circ\implies\alpha+\beta=45^\circ$ Considering $\small \triangle ABC$, $\small \alpha+\beta+45^\circ+45^\circ+\angle ABC=180^\circ$ $\small \implies\angle ABC=45^\circ$ Already found $\small \angle AEC=90^\circ$, $\small \angle ABC=\frac12 \angle AEC$. Also $\small AE=EC$. Therefore $\small E$ is the circumcentre of $\small \triangle ABC$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4268642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that the random walk X is null recurrent I am having trouble showing the relationship of stationary distribution of this random walk. Question: Consider a random walk $X$ on $\mathbb{Z}$ defined by $p(1)=p(0)=p(-1)=1 / 3$ (note that $\left.p_{x, y}=p(y-x)\right)$. Define $$ \begin{aligned} T &:=\inf \left\{n \geq 1: X_{n}=0\right\} \\ q_{i} &:=P_{i}(T=\infty) \end{aligned} $$ Prove that the random walk $X$ is null recurrent by showing that it cannot have any stationary distributions. My attempt: Edit With the help of Teresa Lisbon, I get that the recursive relationship is $$\pi(k)=\frac{1}{3} \pi(k)+\frac{1}{3} \pi(k-1)+\frac{1}{3} \pi(k+1)$$ This simplifies to the relationship of simple symmetric random walk where: $$\pi(k)=\frac{1}{2} \pi(k-1)+\frac{1}{2} \pi(k+1)$$ I see that this is a first order difference equation with the answer given as $$\pi_{k}=a+b k$$ for some constants a and b. But now I am a bit stuck as I don't know what is the boundary condition.
Clearly it is a slowed-down version of the simple random walk on $\mathbb{Z}$, so it is recurrent. Consider the measure $\pi_i=1$ for all $i$. Then $\pi_i = \frac{1}{3}\pi_{i-1} + \frac{1}{3}\pi_{i+1} + \frac{1}{3}\pi_i$, which means $\pi P = \pi$, where $P$ is the (symmetric) transition matrix. So $\pi$ is invariant. By Theorem 1.7.6 and Theorem 1.7.7 from J.R. Norris's Markov Chains, any invariant measure should be a scalar multiple of $\pi$. Since $\sum_{i\in\mathbb{Z}} \pi_i = \infty$, there can be no invariant distribution; therefore the random walk is null recurrent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4268977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $\frac{1}{m+1} = \sum_{n=0}^\infty \frac{2^{-2n-m-1} (2n+m)!}{(n+m+1)!n!}$? I have the strong assumption that for $m \in \mathbb N_0$ we have $$\frac{1}{m+1} = \sum_{n=0}^\infty \frac{2^{-2n-m-1} (2n+m)!}{(n+m+1)!n!}=\sum_{n=0}^\infty \frac{2^{-2n-m-1}}{2n+m+1}\binom{2n+m+1}{n}.$$ But I didn't find an explicit way to show it. In case you wanna know where this series is from: I'm still trying to solve How to prove an equality involving an infinity series and the modified Bessel functions?. If you plug in $s=\alpha=\beta=1$ you eventually come up with the series equation from above. At least the series equation would show the special case (in fact the other direction is not true).
I guess I post my approach, which is based on the idea of Claude Leibovici, for clarity. Using Legendre duplication we have $$(2n+m)!=\Gamma(2n+m+1) = \Gamma(n+ \tfrac{m+1}{2})\Gamma(n+ \tfrac{m+2}{2})2^{2n+m}/\sqrt{\pi}.$$ So we have $$\sum_{n=0}^\infty \frac{2^{-2n-m-1} (2n+m)!}{(n+m+1)!n!}=\frac{1}{2\sqrt{\pi}} \sum_{n=0}^\infty \frac{ \Gamma(n+ \tfrac{m+1}{2})\Gamma(n+ \tfrac{m+2}{2})}{\Gamma(n+m+2)n!} = \frac{1}{2\sqrt{\pi}} \frac{ \Gamma(\tfrac{m+1}{2})\Gamma(\tfrac{m+2}{2})}{\Gamma(m+2)} F(\tfrac{m+1}{2},\tfrac{m+2}{2};m+2;1).$$ By Gauss's summation theorem, $$F(\tfrac{m+1}{2},\tfrac{m+2}{2};m+2;1)=\frac{\Gamma(m+2)\Gamma(m+2-\tfrac{m+1}{2}-\tfrac{m+2}{2})}{\Gamma(m+2-\tfrac{m+1}{2})\Gamma(m+2-\tfrac{m+2}{2})}=\frac{\Gamma(m+2)\Gamma(\tfrac{1}{2})}{\Gamma(\tfrac{m+3}{2})\Gamma(\tfrac{m+2}{2})},$$ so the sum equals $$\frac{1}{2\sqrt{\pi}}\cdot\frac{\Gamma(\tfrac{m+1}{2})\,\sqrt{\pi}}{\Gamma(\tfrac{m+3}{2})}=\frac{\Gamma(\tfrac{m+1}{2})}{2\cdot\tfrac{m+1}{2}\,\Gamma(\tfrac{m+1}{2})}=\frac{1}{m+1}.$$
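A numeric check that the series indeed sums to $\frac{1}{m+1}$, as stated in the question (a Python sketch; the terms decay like $n^{-3/2}$, so convergence is slow and the tolerance is loose):

```python
def partial_sum(m, terms=200_000):
    # term_0 = 2^{-(m+1)} * m! / ((m+1)! * 0!) = 2^{-(m+1)} / (m+1)
    t = 0.5**(m + 1) / (m + 1)
    s = 0.0
    for n in range(terms):
        s += t
        # ratio term_{n+1} / term_n from the factorial definition of the terms
        t *= (2*n + m + 1) * (2*n + m + 2) / (4 * (n + m + 2) * (n + 1))
    return s

for m in range(4):
    assert abs(partial_sum(m) - 1 / (m + 1)) < 5e-3
```

The running-ratio trick avoids computing huge factorials directly.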
{ "language": "en", "url": "https://math.stackexchange.com/questions/4269110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Enlarging a probability space to get independence Am reading Lemma 5.2 from this paper by Dvoretzky: Let $X$ be a real random variable on a probability space $(\Omega,\mathcal A,P)$ satisfying $|X|\leq 1$, let $\mathcal F=\sigma(X)$, and let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal A$. Then $$E\Big[\big|E[X|\mathcal G]-E[X]\big|\Big]\leq 4\alpha(\mathcal F,\mathcal G) \quad\quad\quad (1)$$ where $\alpha(\mathcal F,\mathcal G)=\sup_{F\in\mathcal F, G\in\mathcal G} \big|P(F\cap G)-P(F)P(G)\big|$. At some point in the proof the author writes: "Let $\tilde{X}$ be a random variable with the same distribution as $X$ and independent of $\mathcal G$ (if need be the probability space can be enlarged in order to carry such a random variable)." I don't see how to achieve this result, and how to do it without changing the quantities that appear in $(1)$. Any ideas on how to proceed? Thanks a lot for your help. EDIT: I tried to write down the details below. Any comment is very appreciated.
Let $Y$ be a random variable on a probability space $(\Omega',\mathcal A',P')$ having the same distribution as $X$. On $\Omega \times \Omega'$ with the product $\sigma$-field and the measure $P\times P'$, define $\tilde{X}(\omega,\omega')=Y(\omega')$. Then $\tilde{X}$ has the same distribution as $X$ and it is independent of $\mathcal G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4269263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$\mathbb{P}(A|B)+\mathbb{P}(A^c|B^c)=1$ is false in general Let $A,B$ be some events in probability space $\Omega$. Show that $$\mathbb{P}(A|B)+\mathbb{P}(A^c|B^c)=1$$ is false in general. I thought of giving an example where events $A$ and $B$ are independent, but then I get the correct equation because $$\mathbb{P}(A|B)+\mathbb{P}(A^c|B^c)=\frac{\mathbb{P}(A\cap B)}{\mathbb{P}(B)}+\frac{\mathbb{P}(A^c \cap B^c)}{\mathbb{P}(B^c)}=\mathbb{P}(A) + \mathbb{P}(A^c)=1.$$ My next idea was to check all other cases, like when $A \subset B$ or when $A=B$, but everything seems to work out well. Can you help me think of an example?
$$P(A|B)+P(A^c|B^c)=1\\\iff \frac c{c+d}+\frac a{a+b}=1\\\iff 2ac+bc+ad=ac+ad+bc+bd\\\iff ac=bd\\\iff A \text{ and } B \text{ are }\textbf{independent events}.$$ The last equivalence is from my previous derivation here. Therefore, the given statement is not in general true. P.S. To say that the given statement is “false in general” may suggest that it cannot be true, which is not quite correct. P.P.S. I wrote this related Answer to another Question asked around the same time.
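Reading $a=P(A^c\cap B^c)$, $b=P(A\cap B^c)$, $c=P(A\cap B)$, $d=P(A^c\cap B)$ (one consistent assignment of the four cell probabilities), the equivalence can be checked numerically (a Python sketch):

```python
def check(a, b, c, d):
    # a = P(A^c ∩ B^c), b = P(A ∩ B^c), c = P(A ∩ B), d = P(A^c ∩ B)
    assert abs(a + b + c + d - 1) < 1e-9
    lhs = c / (c + d) + a / (a + b)           # P(A|B) + P(A^c|B^c)
    independent = abs(a * c - b * d) < 1e-12  # equivalent to P(A∩B) = P(A)P(B)
    return lhs, independent

# Independent cells: P(A) = 0.5, P(B) = 0.4, so c = 0.2, d = 0.2, b = 0.3, a = 0.3
lhs, ind = check(0.3, 0.3, 0.2, 0.2)
assert ind and abs(lhs - 1) < 1e-12

# Dependent counterexample: the sum is not 1
lhs, ind = check(0.4, 0.1, 0.3, 0.2)
assert not ind and abs(lhs - 1) > 0.01
```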
{ "language": "en", "url": "https://math.stackexchange.com/questions/4269479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How is that $E[g(Y)X]=E[g(Y)E(X\mid Y)]$? I came across this definition of conditional expected value: For a σ-algebra $\mathcal{G}$, $E[X\mid \mathcal{G}]$ is defined to be the random variable that satisfies: (i)$E[X\mid \mathcal{G}]$ is $\mathcal{G}$-measurable. (ii)$E[\mathbb{1}_CX]=E[\mathbb{1}_CE(X\mid \mathcal{G})]$ for all $C\in\mathcal{G}$ I don't get this equality, can someone explain?
You just need to think of $X$ as a measurable function from $(\Omega,\mathcal{F})$ to $\mathbb{R}$. Now, given a coarser $\sigma$-algebra $\mathcal{G}$ on $\Omega$, with respect to which $X$ might not be measurable, our goal is to find a measurable function $Y$ (with respect to $\mathcal{G}$) that resembles $X$. It's natural, then, to ask for $Y$ to "behave" the same way as $X$ at least in the eyes of the coarser one of the two $\sigma$-algebras, which is $\mathcal{G}$. That is, we want $$\int_{A}XdP=\int_{A}YdP$$ to hold true for $A\in\mathcal{G}$. This is just a rewrite of $$\mathbb{E}\left(1_{A}X\right)=\mathbb{E}\left(1_{A}Y\right),\text{ for }A\in\mathcal{G}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4269633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can Linearization from Calculus I be used as a Multivariable function? In my math class yesterday we learned about linearization of a function to approximate values in a small range of x values (followed by Newton's method of approximating the zeros of functions). We were given the general formula, $$L(x) = f(a)+f^\prime(a)(x-a)$$ where $a$ (usually an integer) is a value 'close enough' to represent the actual value of $x$ you want to know. I ran over this concept in my head through the rest of the day, and I came to a question. What if one considered all values of a and x for a given function? Essentially, $$L(x,a) = f(a)+f^\prime(a)(x-a)$$ Ex. Let $f(x) = x^3 + 2x+1$ (normally the function is more complex but I needed something with fairly simplistic algebra). We would then have $$f^\prime(x) = 3x^2+2$$ Suppose $a$ represents all possible approximation values, $$L(x, a)=a^3+2a+1+(3a^2+2)(x-a)$$ Simplifying, $$= a^3+2a+1+3a^2x+2x-3a^3-2a$$ $$\downarrow$$ $$= -2a^3+3a^2x+2x+1$$ Is this kind of function useful?
It's potentially useful in that it gives you all the linear approximations at once, but no, I don't think you'd often want to deal with it in practice. I don't know of a context where your $L(x,a)$ would be useful (but I'd be happy to be proven wrong by another answer). The linear approximation is typically bad when $x$ is far from $a$ - you don't actually care what the tangent line at $a=5$ thinks might be the value of the function at $x=173.2$. So in practice, you'd likely be taking linear approximation at a well-chosen $a$ near your value of interest, just as you describe. When we are interested in generic information rather than linear approximation at a point, then note that this basically just combines the information of $f(a)$ and $f'(a)$ in a standard way, so that we may as well keep the flexibility of the two separate functions/values.
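The trade-off can be made concrete with the question's example $f(x)=x^3+2x+1$ (a Python sketch; note that the fully simplified closed form is $-2a^3+3a^2x+2x+1$):

```python
def f(x):
    return x**3 + 2*x + 1

def L(x, a):
    # tangent-line approximation of f at a, evaluated at x
    return f(a) + (3*a**2 + 2) * (x - a)

# Equivalent closed form after simplification: -2a^3 + 3a^2 x + 2x + 1
assert all(abs(L(x, a) - (-2*a**3 + 3*a**2*x + 2*x + 1)) < 1e-9
           for a in (-2.0, 0.5, 3.0) for x in (-1.0, 0.0, 2.0))

# Good near a, poor far from a:
assert abs(L(2.01, 2.0) - f(2.01)) < 1e-3  # x close to a
assert abs(L(10.0, 2.0) - f(10.0)) > 100   # x far from a
```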
{ "language": "en", "url": "https://math.stackexchange.com/questions/4269782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Superposition of two cosine terms This is an example problem from my textbook. I read through its solution, but was confused by the following step. $$x=\frac{\omega^{2}A\cos(\omega t-\delta)}{\sqrt{(\omega_{0}^{2}-\omega^{2})^{2}+\omega^{2}\gamma^{2}}}+A\cos\omega t$$ The textbook proceeds to say that, quote Since $x$ is a superposition of two cosine terms in $\omega t$, we can write it as $$x=C(\omega)\cos(\omega t-\alpha)$$, where $$[C(\omega)]^2=\frac{A^{2}(\omega_{0}^{4}+\omega^{2}\gamma^{2})}{(\omega_{0}^{2}-\omega^{2})^{2}+\omega^{2}\gamma^{2}}$$ I am confused how it gets there
First, recall that $\cos(\omega t - \delta) = \cos \omega t \cos \delta + \sin \omega t \sin \delta$. This lets you write the expression as $M \cos \omega t + N \sin \omega t$, where $M$ and $N$ may depend on $\omega$ and $\delta$ but are independent of $t$. Then, set $M \cos \omega t + N \sin \omega t = C \cos(\omega t - \alpha) = C \cos \omega t \cos \alpha + C \sin \omega t \sin \alpha$, so you can then solve $C \cos \alpha = M$ and $C \sin \alpha = N$ for $C$ and $\alpha$.
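A numeric check of the amplitude–phase conversion (a Python sketch; $C=\sqrt{M^2+N^2}$ and $\alpha=\operatorname{atan2}(N,M)$ are one consistent choice, and the values of $M$, $N$, $\omega$ are illustrative):

```python
import math

M, N, omega = 2.0, -1.5, 3.0
C = math.hypot(M, N)       # C = sqrt(M^2 + N^2)
alpha = math.atan2(N, M)   # solves C cos(alpha) = M and C sin(alpha) = N

for t in (0.0, 0.3, 1.7, 4.2):
    lhs = M * math.cos(omega * t) + N * math.sin(omega * t)
    rhs = C * math.cos(omega * t - alpha)
    assert abs(lhs - rhs) < 1e-12
```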
{ "language": "en", "url": "https://math.stackexchange.com/questions/4269961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Value of $k$ for a probability game to be fair To make it clear, I am not asking the following question, though it is related. My question is instead the following. A person pays $k$ dollars to play a game. In the game, they roll a single 6-sided die. If it lands anywhere from $1$ to $5$ inclusive, they win exactly that many dollars. If they land on $6$, then the game terminates. What is the value of $k$ so that the game is fair? I am stuck for the following reason. It is clear that the expected number of rolls to land on a specific value is $6$. However, my concern is the journey it took to finally land on that $6$ to terminate the sequence. The money won for a $(1,1,1,6)$ journey is very different from the money won on a $(5,5,5,6)$ journey. I know that we want the expectation to be zero, but I am not sure what to define as the random variable $X$, much less so how to compute $E(X)$.
Let $N$ denote the number of rolls needed to arrive at result $6$ and let $X_i$ denote the gain at the $i$-th roll, so that the total gain is $$X=X_1+\cdots+X_{N-1}.$$ Given $N=n$, each of the first $n-1$ rolls is uniform on $\{1,\dots,5\}$, so $$\mathbb E[X_i\mid N=n]=\frac15[1+2+3+4+5]=3\quad\text{and}\quad\mathbb E[X\mid N=n]=3(n-1).$$ Applying the law of total expectation we find, since $\mathbb EN=6$: $$\mathbb EX=\sum_{n=1}^{\infty}P(N=n)\,\mathbb E[X\mid N=n]=\sum_{n=1}^{\infty}P(N=n)\cdot 3(n-1)=3(\mathbb EN-1)=3\cdot5=15$$ Another route (suggested by @lulu): if $\mu$ denotes $\mathbb EX$, then $$\mu=\frac16\cdot0+\sum_{j=1}^5\frac16(j+\mu)=\frac52+\frac56\mu,$$ telling us directly that $\mu=15$.
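Both routes can be checked by simulation (a Python sketch; the fixed seed makes the run deterministic):

```python
import random

def play(rng):
    # play one game: accumulate winnings until a 6 is rolled
    total = 0
    while True:
        roll = rng.randint(1, 6)
        if roll == 6:
            return total
        total += roll

rng = random.Random(0)
trials = 200_000
mean = sum(play(rng) for _ in range(trials)) / trials
assert abs(mean - 15.0) < 0.3  # expected total gain is 15
```

So a fair price to play is $15.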
{ "language": "en", "url": "https://math.stackexchange.com/questions/4270146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is a refinement useful for? From Wikipedia: A refinement of a cover $C$ of a topological space $X$ is a new cover $D$ of $X$ such that every set in $D$ is contained in some set in $C$. Formally, I find this definition easy to understand, but I am interested in why we need to introduce such term. One way to use it is for defining the (Lebesgue) covering dimension - the smallest number $n$ such that for every cover, there is a refinement in which every point in $X$ lies in the intersection of no more than $n + 1$ covering sets. The only intuition I have is that the covering dimension tells how well can we cover the space by "non-overlapping" sets. But I still do not see, why we need to introduce the refinement. Could you help me gain some intuition about refinement, and possibly also what role does it plays in the definition of the covering dimension? Thank you.
If we imagine a general open covering of (say) the real line, it could be very messy: the open sets could be disconnected, many of the open sets could be identical or nearly identical, and so on. Thinking about refinements allows us to clean up the covering so that it takes the "nicest" form: a sequence of open intervals overlapping slightly at the ends. For example, when an open set is disconnected, we can break it into its interval components; when two of the open sets are identical, we can just take one of them. That "nicest" form then suggests a canonical structure to the space: in this case, a refinement in which at most two open sets overlap at any point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4270544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{n \to \infty}\left[\,\sqrt{\,2\,}\,\frac{\Gamma\left(n/2 + 1/2\right)}{\Gamma\left(n/2\right)} - \,\sqrt{\,n\,}\right]$ I am trying to find this limit: $$\lim_{n \to \infty}\left[\,\sqrt{\,2\,}\,\frac{\Gamma\left(n/2 + 1/2\right)}{\Gamma\left(n/2\right)} - \,\sqrt{\,n\,}\right]$$ I know that $$\frac{\sqrt{2}\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})}=\begin{cases}\sqrt{\frac{\pi}{2}} \frac{(n-1)!!}{(n-2)!!},\ n\ is\ even\sim \sqrt{n-1}\\ \sqrt{\frac{2}{\pi}} \frac{(n-1)!!}{(n-2)!!},\ n\ is\ odd \sim\sqrt{n+1}\end{cases}$$ However, I can not find the limit above. Based on the numeric study, it seems like the limit is 0:
$$f(n)=\sqrt{2}\,\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})} - \sqrt{n}=\sqrt{2}\,y- \sqrt{n}$$ Take logarithms $$\log(y)=\log \left(\frac{\Gamma \left(\frac{n+1}{2}\right)}{\Gamma \left(\frac{n}{2}\right)}\right)=\log \left(\Gamma \left(\frac{n+1}{2}\right)\right)-\log \left(\Gamma \left(\frac{n}{2}\right)\right)$$ Use Stirling approximation twice $$\log(y)=\frac{1}{2} \log \left(\frac{n}{2}\right)-\frac{1}{4 n}+\frac{1}{24 n^3}+O\left(\frac{1}{n^5}\right)$$ Continue with Taylor $$y=e^{\log(y)}=\sqrt{\frac n 2}\Bigg[1-\frac{1}{4 n}+\frac{1}{32 n^2}+O\left(\frac{1}{n^3}\right)\Bigg]$$ $$f(n)=-\frac 1{4\sqrt {n}}\Bigg[1-\frac{1}{8 n}-\frac{5}{32 n^2}+O\left(\frac{1}{n^3}\right)\Bigg]$$ Use it for $n=9$; the above truncated series gives $$f(9) \sim -\frac{2551}{31104}=-0.0820152\cdots$$ while the exact value is $$f(9)=-3+\frac {128}{35} \sqrt{\frac 2 \pi}=-0.0820222\cdots$$
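The quoted exact value at $n=9$ and the truncated series can be compared directly (a Python sketch; `lgamma` is used so the check also works for large $n$ without overflow):

```python
import math

def f(n):
    # sqrt(2) * Gamma((n+1)/2) / Gamma(n/2) - sqrt(n), via lgamma to avoid overflow
    ratio = math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2))
    return math.sqrt(2) * ratio - math.sqrt(n)

# exact closed form quoted at n = 9
exact9 = -3 + (128 / 35) * math.sqrt(2 / math.pi)
assert abs(f(9) - exact9) < 1e-9

# truncated asymptotic series  f(n) ≈ -(1/(4 sqrt(n))) (1 - 1/(8n) - 5/(32 n^2))
def series(n):
    return -1 / (4 * math.sqrt(n)) * (1 - 1 / (8 * n) - 5 / (32 * n * n))

assert abs(series(9) - f(9)) < 1e-5
assert abs(f(10**6)) < 1e-3  # consistent with the limit being 0
```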
{ "language": "en", "url": "https://math.stackexchange.com/questions/4270745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Solving differential equation $af'(x)^2-f''(x)=0$ I want to solve the differential equation $$af'(x)^2-f''(x)=0$$ where $a $ is constant and we have the boundary conditions $f(0)=0, f (1)=c $ for some positive $c$. If $a=0$ we get $f (x)=cx$ but what if $a\ne 0$. Could someone show how to solve this differential equation?
Set $y=f(x)$. The problem can be rewritten as $a(y')^2-y''=0$. Set $u:=y'$, so we get $au^2-u'=0\Rightarrow u'=au^2\Rightarrow \int \frac {du}{u^2}=\int adx\Rightarrow -\frac 1u=ax+C\Rightarrow u=-\frac {1}{ax+C}$, or by setting $c:=-C$ we get $u=\frac {1}{c-ax}$. Finally we have $y'=\frac {1}{c-ax}\Rightarrow\int dy=\int \frac {dx}{c-ax}\Rightarrow y=-\frac 1a \ln|c-ax|+K$, where $K$ is a real constant.
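As a sanity check (my own sketch, with sample constants chosen so that $c-ax>0$ on the test range), one can verify numerically via central finite differences that this $y$ satisfies $a(y')^2-y''=0$:

```python
import math

a, c, K = 2.0, 5.0, 1.0          # sample constants (c - a*x > 0 on the test range)

def y(x):
    return -math.log(abs(c - a * x)) / a + K

def residual(x, h=1e-5):
    # central finite differences for y' and y''
    y1 = (y(x + h) - y(x - h)) / (2 * h)
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return a * y1**2 - y2

for x in (0.0, 0.5, 1.0):
    print(residual(x))            # all ≈ 0 (up to finite-difference error)
```

The constants $c$ and $K$ would then be pinned down by the boundary conditions $f(0)=0$, $f(1)=c$ of the original problem.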
{ "language": "en", "url": "https://math.stackexchange.com/questions/4271107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Prove the sequence converges We have a bounded sequence $a_n$ such that, $a_n - a_{n+1} \le 1/2^n$. Define $b_n = a_n - 1/2^{n-1}$ I need to show $a_n$ and $b_n$ converge. For me, The intuition Is that since the difference between two $a_n$ terms is very small when ${n\rightarrow \infty}$, $a_n$ must converge, But I have difficulties proving it. Plus, I don't understand why they are asking calculate the limits of both $b_n$ and $a_n$. They should be equal!
If we had $|a_n-a_{n+1}|\le 1/2^n$, then we could use the standard trick to show that $\{a_n\}$ is a Cauchy sequence, by showing that $|a_m-a_n|\le|a_m-a_{m+1}|+\cdots+|a_{n-1}-a_n|\le 1/2^m+\cdots<1/2^{m-1}$, which is sufficiently small for sufficiently large $m$; but the absolute value is missing here. To overcome this difficulty, we need to use the condition that $a_n$ is bounded, with the help of the sequence $\{b_n\}$. The given condition implies that $b_n-b_{n+1}=(a_n-a_{n+1})-1/2^n\le 0$, therefore $\{b_n\}$ is non-decreasing, and it is easy to show that $b_n$ is bounded since $a_n$ is bounded. So $b_n$ converges, and since $a_n=b_n+1/2^{n-1}$ with $1/2^{n-1}\to 0$, $a_n$ converges to the same limit — which also answers why the two limits are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4271294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $d(A, B) =|(A\Delta B)|$ a metric on all finite subsests? Let $F(S) $ be the set of all finite subsets of a set $S$. For all $A, B ∈ F(S)$, let $∆ (A, B)=(A\setminus B) ∪ (B\setminus A) $ be the symmetric difference between A and B. Let $d(A, B ) $ be the cardinality of $∆ (A, B).$ Is $d$ a metric? I am able to prove positivity, definiteness, symmetry. For any three sets $A, B, C$ I have to show $|(A\Delta B)| \le |(A \Delta C) |+| (C\Delta B) |$. My intuition suggests triangle inequality may also hold, but I can't make it rigorous, please elaborate.
Since symmetric difference is commutative and associative, we have $$A\Delta B\ =\ (A\Delta C)\,\Delta\, (C\Delta B)\ \subseteq (A\Delta C)\cup(C\Delta B)$$ and the latter has at most $|A\Delta C|+|C\Delta B|$ elements.
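A randomized sanity check of the triangle inequality (my own sketch; Python's `^` operator on sets is exactly the symmetric difference):

```python
import random

def d(A, B):
    return len(A ^ B)   # cardinality of the symmetric difference

random.seed(0)
universe = range(20)
for _ in range(1000):
    A = {x for x in universe if random.random() < 0.5}
    B = {x for x in universe if random.random() < 0.5}
    C = {x for x in universe if random.random() < 0.5}
    assert d(A, B) <= d(A, C) + d(C, B)
print("triangle inequality held in all trials")
```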
{ "language": "en", "url": "https://math.stackexchange.com/questions/4271464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is the statement $(\forall x\in\mathbb R)(x^2\ge0)$ true even when $x$ is not a real number? It seems obvious that the statement $(\forall x\in\mathbb R)(x^2\ge0)$ is true. To prove it, however, we need to show that the implication $x\in\mathbb R\to x^2\ge0$ holds for all $x$. But suppose that $x$ is not a real number. Then, $x^2$ is not defined, and so presumably $x^2\ge0$ does not have a truth value (and nor does $x\in\mathbb R\to x^2\ge0$). Therefore, it is unclear to me how the implication $x\in\mathbb R\to x^2\ge0$ can be true for all $x$. What am I missing?
There are two ways to remedy this issue (namely the well-formedness of the statement): Locally: Restrict the language (or domain of discourse) so that this statement is only well formed with respect to real numbers, that is, quantification is over real numbers. In this case there is no need to include the hypothesis $(x\in \mathbb R)$ to a quantifier. This would result in the statement "In the theory of real numbers, $\forall x. x^2 \geq 0$." Globally: If we choose to keep your domain of discourse to be, say, set theory, then we must generalize all real function and relation symbols so that they are total. For example, multiplication and ordering on real numbers produce a value for any set. Since we add the hypothesis that $x\in \mathbb R$ whenever quantifying over $x$, it does not matter how we define these operations on sets. This results in the statement "In the theory of sets, $\forall x. (x\in \mathbb R)\implies (x^2\geq 0)$, and whenever $x\not\in \mathbb R$ or $y\not\in\mathbb R$, we define $x * y := 0$ and $x \geq y$ to be true." There is also a third option if you view your statement in type theory, in which multiplication and ordering are defined only for real numbers, but quantification is now indexed by a type. This guarantees that the following is well formed: "$\forall_{\mathbb R} x. x^2 \geq 0$."
{ "language": "en", "url": "https://math.stackexchange.com/questions/4271650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Calculate $\lfloor \log _{2} \pi + \log _{\pi} 5 +\log_5 8 \rfloor=?$ Calculate $$\lfloor \log _{2} \pi + \log _{\pi} 5 +\log_5 8 \rfloor=?$$ I suppose that: $$\log_2 3 < \log _{2} \pi <\log_2 4 $$ But how can I approximate the $\log_2 3$? Can somebody give an idea?
$\newcommand{\fl}[1]{\left\lfloor #1 \right\rfloor}$Let $a = \log_{2}(\pi)$, $b = \log_{\pi}(5)$, and $c = \log_{5}(8)$. Then, we have $abc = \log_{2}(8) = 3$. Thus, by the AM-GM inequality, we have $$a + b + c \geqslant 3 \sqrt[3]{3} \geqslant 4.$$ (The second inequality follows by showing that $\sqrt[3]{3} \geqslant \frac{4}{3}$. To check that, simply note that $3^4 > 4^3$ and then take cube roots and rearrange.) Thus, $\fl{a + b + c} \geqslant 4$. To show that it is equal to $4$, it suffices to show that $$a + b + c < 5.$$ Claim 1. $a < 2$. Proof. $\log_{2}(\pi) < \log_{2}(4) = 2$. $\square$ Claim 2. $b < \frac{5}{3}$. Proof. Note that $$\log_{\pi}(5) < \frac{5}{3} \iff 5^3 < \pi^5.$$ But the second statement is clearly true since $$5^3 = 125 < 243 = 3^5 < \pi^5. \qquad \square$$ Claim 3. $c < \frac{4}{3}$. Proof. As before, this reduces to showing that $$8^3 < 5^4,$$ which is easily checked. $\square$ Thus, we have $$a + b + c < 2 + \frac{5}{3} + \frac{4}{3} = 5,$$ as desired.
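A floating-point check of these estimates (not a proof, just a sanity check of $abc=3$, the floor, and the three claims):

```python
import math

a = math.log(math.pi, 2)      # log_2(pi)
b = math.log(5, math.pi)      # log_pi(5)
c = math.log(8, 5)            # log_5(8)

print(a * b * c)              # = log_2(8) = 3 (up to rounding)
print(a + b + c)              # ≈ 4.349, so the floor is 4
print(math.floor(a + b + c))  # 4

# The three claims used for the upper bound:
print(a < 2, b < 5 / 3, c < 4 / 3)   # True True True
```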
{ "language": "en", "url": "https://math.stackexchange.com/questions/4271824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Show that the set $U$ contains the set $S$ I want to show that the set $U=\{\left(x, y\right)\in\mathbb R^2 \ | \ \left(x-1\right)^2 + \left(y-1\right)^2 <3\}$ contains the set $S=\{\left(x, y\right)\in\mathbb R^2 \ | \ x \geq1; \ y\geq0; \ x^2+y^2 \leq 4\}$ It is easy to see that $U$ contains $S$ graphically. But I don't know how to show it mathematically. If I can get a little hint, I will appreciate it.
$U$ and $S$ are connected (and convex) subsets of the plane. To prove that $S \subseteq U$ or $U \subseteq S$, you need to: * *Find one point, i.e. $(1,0)$, that belongs to both of $S$ and $U$. *Prove that the boundary of $S$ doesn't intersect the boundary of $U$. In addition, to prove specifically that $S \subseteq U$ you need to: *Find one point of $U$ that does not belong to $S$.
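A brute-force grid check complementing (not replacing) the outline above — sample points of $S$ on a fine grid and confirm each lies in $U$, with margin (my own sketch):

```python
def in_S(x, y):
    return x >= 1 and y >= 0 and x * x + y * y <= 4

def in_U(x, y):
    return (x - 1) ** 2 + (y - 1) ** 2 < 3

n = 400
worst = 0.0
for i in range(n + 1):
    for j in range(n + 1):
        x = 1 + i / n          # S lies inside the box [1,2] x [0,2]
        y = 2 * j / n
        if in_S(x, y):
            assert in_U(x, y)
            worst = max(worst, (x - 1) ** 2 + (y - 1) ** 2)
print("max of (x-1)^2+(y-1)^2 over the sample:", worst)   # 2.0 < 3, at (2,0)
```

The sampled maximum of $(x-1)^2+(y-1)^2$ over $S$ is $2$, attained at $(2,0)$, consistent with $S\subseteq U$ with room to spare.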
{ "language": "en", "url": "https://math.stackexchange.com/questions/4272015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Asking for help on a question about integral: $\int_0^\pi \frac{x\sin x}{3+\cos^2 x}\mathrm{d}x$ Given the following intergral, and the fact that $a$ and $c$ are prime numbers, $$\int_0^\pi \frac{x\sin x}{3+\cos^2 x}\mathrm{d}x= \frac{π^a}{b\sqrt c} $$ Evaluate $a+b+c$ I've tried to solve the intergral by using intergration by part where I let $u$ be $x$ and $dv$ be $\frac{\sin x}{3+\cos^2 x}$. Which then gave me $$\int_0^\pi \frac{x\sin x}{3+\cos^2 x}\mathrm{d}x = \frac{-x}{\sqrt 3}\arctan{\left(\frac{\cos x}{\sqrt 3}\right)}\Big|_0^π + \int_0^\pi \frac{1}{\sqrt 3}\arctan{\left(\frac{\cos x}{\sqrt 3}\right)}\mathrm{d}x$$ But I'm pretty much out of luck at this point since I don't know how to solve $\int_0^π v\mathrm{d}u$ in this case and I believe there is a better way to do this.
So I eventually figured out the answer to $$I_1 = \int_0^\pi \frac{x\sin x}{3+\cos^2 x}\mathrm{d}x$$ By using the following property: $$\int_a^b f(x)\mathrm{d}x = \int_a^b f(a+b-x)\mathrm{d}x$$ we can prove that $$I_1 = \frac{π}{2}\int_0^π \frac{\sin x}{3+\cos^2 x}\mathrm{d}x = \frac{π}{2}I_2$$ where $I_2 = \int_0^π \frac{\sin x}{3+\cos^2 x}\mathrm{d}x$ (the property applies because the integrand is $x$ times a function of $\sin x$ alone, since $\cos^2 x = 1-\sin^2 x$). To solve for $I_2$ we substitute $u$ for $\cos x$, which gives us $$I_2=\frac{1}{\sqrt 3}\arctan \left(\frac{u}{\sqrt 3}\right) \Big|_{-1}^1=\frac{π\sqrt 3}{9}$$ Therefore $$I_1=\frac{π}{2}\cdot\frac{π\sqrt 3}{9}=\frac{π^2}{6\sqrt 3}$$ And $a+b+c = 11$
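A numerical check of the closed form (my own sketch, using hand-rolled composite Simpson's rule rather than any particular quadrature library):

```python
import math

def integrand(x):
    return x * math.sin(x) / (3 + math.cos(x) ** 2)

def simpson(f, lo, hi, n=2000):          # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += f(lo + k * h) * (4 if k % 2 else 2)
    return s * h / 3

I1 = simpson(integrand, 0, math.pi)
closed_form = math.pi ** 2 / (6 * math.sqrt(3))
print(I1, closed_form)                   # both ≈ 0.9497
# pi^a / (b sqrt(c)) with a = 2, b = 6, c = 3, so a + b + c = 11
```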
{ "language": "en", "url": "https://math.stackexchange.com/questions/4272163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Zeros of Jacobi elliptic functions For the past month or so, I have been studying Jacobi elliptic functions $sn$, $cn$ and $dn$. However I stumbled upon a problem and it goes as following: I am trying to find $z \in \mathbb{C}$, such that $sn(z) = sin({\psi}) = 0$, where $\psi$ is the inverse of the integral: $$z(\psi) = \int_0 ^\psi \frac{1}{\sqrt{1 - k^2 \sin^2(x)}} dx .$$ By definig constant $K$ as: $$K = K(k) = \int_0 ^\frac{\pi}{2} \frac{1}{\sqrt{1 - k^2 \sin^2(z)}} dz $$ I was able to prove by induction that: $$2K = \int_{\pi(k-1)}^{\pi k}\frac{1}{\sqrt{1 - k^2 \sin^2(z)}} dz $$ for $k \in \mathbb{Z}.$ To find zeros of $sn$ I did the following: $$sn(z) = \sin(\psi) = 0 \iff \psi = n\pi \iff z = \int_0 ^{n\pi} \frac{1}{\sqrt{1 - k^2 \sin^2(z)}} dz $$ for $n \in \mathbb{Z}$. I rewrote the last integral as: $$z = \sum_{k=1}^n \int_{\pi(k-1)}^{\pi k}\frac{1}{\sqrt{1 - k^2 \sin^2(z)}}dz = \sum_{k=1}^n 2K = 2nK.$$ Now it appears as is I found all zeros of $sn(z)$ for $z\in \mathbb{C}$, however there are also zeros of the form $2iK’$, where $K’$ is: $$ K’ = K(k’) = 2K = \int_0^\frac{\pi}{2}\frac{1}{\sqrt{1 - k’^2 \sin^2(z)}} dz $$ where $k’ = \sqrt{1 - k^2}.$ Now I obviously made mistake somewhere however I have no idea where. I would appreciate any help or tips.
You should look into Jacobi's Imaginary Transformation. It is $$\sin\varphi=i\tan\psi$$ under this, $$\int\frac{d\varphi}{\sqrt{1-k^2\sin^2\varphi}}= i\int\frac{d\psi}{\sqrt{1-k^{\prime 2}\sin^2\psi}}$$ This allows you to prove that the periods of $sn(z)$ are $2K$ and $4K^{\prime}i$. Modulo these periods you have all the roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4272296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I depict $~u\left(r\right)=\frac{1}{r}\exp\left(ir\right)~$? I want to depict the graph of the following formula . $$ u \left( r \right) = \frac{1}{ r } \exp\left(i r\right) $$ Needless to say as we depict a locus of $~ \exp\left(i r \right) ~$ , the circle with radius $~ 1 ~$ can be shown . As we call $~ \frac{1}{ r } ~$ as an amplitude , then this amplitude becomes smaller as $~ r ~$ increases . Hence I thought the graph is like as the below one . Is this graph is close to the correct one ? I even want to depict it using some software .
Assuming $r$ is a real number, we can expand the expression using $e^{ix}=\cos(x)+i\sin(x)$, so we get $u(r)= \frac{\cos(r)}{r}+i\frac{\sin(r)}{r}$. You can then plot the real component $\frac{\cos(r)}{r}$ and the imaginary component $\frac{\sin(r)}{r}$ separately; each is a damped oscillation whose amplitude decays like $\frac1r$. (The original plots, made using Desmos, are not reproduced here.) Hopefully that helps.
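Since the question asks about software: here is a small Python sketch (standard library only; the actual drawing is left to whichever plotting tool you prefer) that generates samples of the two component curves and confirms they agree with $\cos(r)/r$ and $\sin(r)/r$:

```python
import cmath, math

def u(r):
    return cmath.exp(1j * r) / r

# Real part traces cos(r)/r, imaginary part sin(r)/r (damped oscillations)
for r in (0.5, 1.0, 5.0, 20.0):
    val = u(r)
    print(r, val.real, val.imag)
    assert math.isclose(val.real, math.cos(r) / r)
    assert math.isclose(val.imag, math.sin(r) / r)
# Feed such (r, u(r).real) and (r, u(r).imag) samples to any plotting tool
# over a range of r > 0 to reproduce the two curves.
```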
{ "language": "en", "url": "https://math.stackexchange.com/questions/4272497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $1-\frac{1}{2} x^{2} \leq \cos x,\: \forall x \in \mathbb{R}$ Prove that $$1-\frac{1}{2} x^{2} \leq \cos x,\: \forall x \in \mathbb{R}$$ I am trying to prove this using taylor's theorem: We have: $$f(x)=f(0)+f'(0)x+\frac{x^2}{2!}f''(0)+...$$ Now approximating the function as a polynomial of degree $3$ we get: $$\cos x=1-\frac{x^2}{2!}+\frac{\sin c}{6}x^3$$ where $c \in (0, x)$ Case $1.$ if $0 < x \le \pi$, then we have $\sin c \geq 0$ and $x^3 >0$, so we get $$\cos x \geq 1-\frac{x^2}{2}$$ Case $2.$ If $\pi < x \leq 2\pi$, i am confused, because $\sin c<0$ Any help?
My answer's approach resembles that of MotylaNogaTomkaMazura but since his $f'[x]$ is wrong and I can't comment on his answer, I hope my answer will give you a better idea. Let $$f[x] = \cos[x] - 1 + \frac{1}{2}x^2$$ for $x \ge 0$ $$f'[x] = x - \sin[x]$$ We can prove $f'[x] \ge 0$ by taking its derivative: $$f''[x] = 1 - \cos[x] \ge 0$$ since $\cos[x] \le 1$. Since $f'[0] = 0$ and $f''[x] \ge 0, f'[x] \ge 0.$ Then again, $f[0] = 1 - 1 + 0 = 0$ and $f'[x] \ge 0$; therefore $f[x] \ge 0$ and thus $$\cos[x] \ge 1 - 1/2 x^2$$ Since $f[x]$ is an even function, $$\cos[x] \ge 1 - 1/2 x^2$$ for all $x$.
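A quick numerical illustration of the result (not part of the proof, my own sketch): check that $g(x)=\cos x-1+\tfrac{x^2}{2}\ge 0$ on a grid, with equality only at $x=0$.

```python
import math

def g(x):
    return math.cos(x) - 1 + x * x / 2

xs = [i / 100 for i in range(-1000, 1001)]   # grid on [-10, 10]
print(min(g(x) for x in xs))                 # 0.0, attained at x = 0
assert all(g(x) >= 0 for x in xs)
```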
{ "language": "en", "url": "https://math.stackexchange.com/questions/4272758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Product of a divergent limit and a convergent limit approaching infinity I am studying convergent and divergent series and therefore have to review some limits. While doing a practice problem I encountered the follwoing problem: $$\lim_{\ n \rightarrow \infty } \frac {{(-1)}^{n}\sqrt{n+1}}{n}$$ In this case, the ${(-1)^n}$ diverges according to Symbolab. However, the other bit converges to $0$. $$\lim_{\ n \rightarrow \infty }\frac {\sqrt{n+1}}{n} = \lim_{\ n \rightarrow \infty }\sqrt {\frac {{n+1}}{n^2}}=\lim_{\ n \rightarrow \infty }\sqrt {\frac1n+\frac1{n^2}}=0$$ My assumption is that if we were to split the first part of the equation into two options we could say ${-1}^n$ is either negative or positive depending if $n$ is odd or even. In any case, we would still be multiplying by $0$ making the limit of the whole function $0$. Is my line of thought wrong? And if so, why cannot I analyze the function that way? My guess would be that I am treating $\infty$ like a number so my analysis would be invalid.
Your idea is fine: since $a_n=\frac {\sqrt{n+1}}{n} \to 0$ and $b_n=(-1)^n$ is bounded by $|b_n|\le M=1$, it is a general result, by the squeeze theorem, that $$\left|a_n \cdot b_n \right|\le |a_n|M \to 0 \cdot M =0 \implies a_n \cdot b_n \to 0$$
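A numerical illustration of the squeeze bound (my own sketch):

```python
import math

def a(n):
    return (-1) ** n * math.sqrt(n + 1) / n

for n in (10, 1000, 10 ** 6):
    # |a(n)| is bounded by sqrt(n+1)/n, which tends to 0
    print(n, a(n), math.sqrt(n + 1) / n)
assert abs(a(10 ** 6)) <= math.sqrt(10 ** 6 + 1) / 10 ** 6
```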
{ "language": "en", "url": "https://math.stackexchange.com/questions/4273109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Glass Bridge Game in Squid Game (Episode 7/ Game 5) Spoilers for Squid Game. In the show, there is a game where people try to cross a bridge, made of $n=18$ rows of 2 side by side glass panes, which they must cross one row at a time. One glass pane can support a person while the other will break, causing the person to fall and get eliminated. Each person must select which glass pane to jump onto, from one row to the next, and try to reach the other side without falling through. In the show, some people cross the same bridge later than others, so they can tell which of the steps already crossed are sturdy or not. Assume that the selection of the sturdy and weak glass panes are random, that later players take the same steps that previous players took up to the point that they fall through (i.e. no forgetting which pane is sturdy or guessing again on an already solved pane). Ignore all human elements, like people trying to force others to fail to figure out future panes, or being able to tell the difference between tempered sturdy panes and weaker panes (i.e. guessing is random). Given $n$ rows of glass panes, how many players would it take until there is a player with a $>50\% $ chance of crossing the bridge? In the show there are 16 players and $n=18$ rows of glass, so what is the most likely outcome, in terms of number of people being able to cross the bridge?
Observe that the person $k$ in line has a total advancement in the bridge distributed as $S_k+k$, where $S_k\sim \mbox{NB}(k,p)$, where $p=1/2$ is the correct tile selection probability. Now, let $n=16$ be the total number of people and $m=18$ the bridge length. Then the probability of the $k$-th person traversing is \begin{align*} \mathbb{P}(S_k+k> m)= 0.407,\, 0.593,\quad \mbox{for}\quad k=9,\,10,\:\: \mbox{respectively}, \end{align*} and thus player number $10$ is the first to have more than $50\%$ chance of traversing. Now, define the random variables $$D=\mbox{number of dead people},\quad S=\mbox{number of survivors}.$$ Then observing that $S_{k+1}$ is the sum of $S_k$ and an independent geometric random variable, say $G$, we obtain the pmf of $D$ as \begin{align*} p_D(k)=\mathbb{P}(D=k)&=\mathbb{P}(S_k+k\le m,S_{k}+k +G>m)\\ &=\sum_{n=1}^\infty \left(F_{S_k}(m-k)-F_{S_{k}}(m-k-n)\right)p(1-p)^{n-1}. \end{align*} Finally, we get that for the above parameters, $$\mathbb{E}(S)=n-\mathbb{E}(D)=n-\sum_{k=1}^n kp_D(k)=7.$$
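The numbers above can be reproduced with an equivalent reformulation (my own, under the same modeling assumptions): each of the $m$ tiles is guessed wrong independently with probability $1/2$, and player $k$ crosses iff fewer than $k$ wrong guesses occur among the $m$ tiles, i.e. $W\le k-1$ with $W\sim\mathrm{Binomial}(m,1/2)$. A Python sketch:

```python
from math import comb

m, n, p = 18, 16, 0.5        # bridge length, players, guess-failure probability

def pmf(w):                  # W = number of wrong-guess tiles, W ~ Binomial(m, 1/2)
    return comb(m, w) * p ** w * (1 - p) ** (m - w)

def crosses(k):              # player k crosses iff fewer than k wrong tiles occur
    return sum(pmf(w) for w in range(k))

print(round(crosses(9), 3), round(crosses(10), 3))    # 0.407 0.593

expected_dead = sum(min(w, n) * pmf(w) for w in range(m + 1))
print(round(n - expected_dead))                       # 7 expected survivors
```

This matches the negative-binomial computation: $0.407$ for player $9$, $0.593$ for player $10$, and $7$ expected survivors.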
{ "language": "en", "url": "https://math.stackexchange.com/questions/4273290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 0 }
Closed-form of an integral I came across the following integral: $$ \int_0^{2\pi} \frac{\sin \theta \ d \theta}{(x-a \cos \theta)^2+(y-b \sin \theta)^2}$$ where $x,y$ are real variables independent of $\theta$ and $0<b<a$. Now I was wondering if it could be written in a closed-form. I have been trying a number of different things but nothing seems to be working. Is there anyone how knows if this is even possible at all? And if so, would you be so kind to help me in the right direction? Any hint that gets me in the right direction is much appreciated.
Claude expanded on the half-tangent case, so let me do the contour integral version. Assuming $b^2x^2+a^2y^2\neq a^2b^2$ (and maybe $(x,y)\neq(\pm\sqrt{a^2-b^2},0)$ too, for simplicity). The substitution $z=e^{i\theta}$ gives \begin{align*} \int_0^{2\pi}\frac{\sin\theta\,\mathrm{d}\theta}{(x-a\cos\theta)^2+(y-b\sin\theta)^2} &=\int_{\mathbb{T}}\frac{\frac12(z^{-2}-1)\,\mathrm{d}z}{(x-\frac12a(z+z^{-1}))^2+(y-\frac1{2i}b(z-z^{-1}))^2}\\ &=\int_{\mathbb{T}}\frac{2(1-z^2)\,\mathrm{d}z}{(2xz-a(z^2+1))^2+(2yz+ib(z^2-1))^2}\\ &=\int_{\mathbb{T}}\frac{2(1-z^2)\,\mathrm{d}z}{((a+b)z^2-2(x+iy)z+(a-b))((a-b)z^2-2(x-iy)z+(a+b))} \end{align*} The poles are at $$\require{color} z_{{\color{red}\pm},{\color{blue}\pm}}=\frac{w_{\color{red}\pm}{\color{blue}\pm}\sqrt{w_{\color{red}\pm}^2-a^2+b^2}}{a{\color{red}\pm}b},\quad w_{\pm}=x\pm iy $$ and you should be able to work out the residues and whether $z$ lies inside/on/outside the unit circle. If you have assumed $(x,y)\neq(\pm\sqrt{a^2-b^2},0)$, we have four simple poles. Hence $$ \int_0^{2\pi}\frac{\sin\theta\,\mathrm{d}\theta}{(x-a\cos\theta)^2+(y-b\sin\theta)^2}=2\pi i\sum_{\epsilon_1,\epsilon_2\in\{\pm\}} \frac{1_{\lvert z_{\epsilon_1,\epsilon_2}\rvert\leq1}+1_{\lvert z_{\epsilon_1,\epsilon_2}\rvert<1}}{2}\operatorname*{res}_{z=z_{\epsilon_1,\epsilon_2}}f $$ where $f(z)=\frac{2(1-z^2)}{((a+b)z^2-2(x+iy)z+(a-b))((a-b)z^2-2(x-iy)z+(a+b))}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4273426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Can any rotation be a vector? The rotation of a body can be specified by the direction of the axis of rotation, and the angle of rotation about the axis. Does that make any rotation a vector?
The rotations on $\mathbb R^3$ have the structure of a group and are not a vector space, so rotations cannot be vectors. This is because the combination of two rotations depends on the order (it is not commutative), so it cannot be represented as an addition operation of vectors. As you noted, we need three numbers $(v_1,v_2,v_3)$ for a vector that identifies the axis of rotation, with the condition that the vector is normalized: $v_1^2+v_2^2+v_3^2=1$, and a fourth number for the angle of rotation around this axis. So we need four numbers with a condition on three of them. These four numbers can form the objects of different mathematical structures that are possible representations of the group of rotations. Possible representations are the $3\times3$ orthogonal matrices with unit determinant or the unit quaternions, which are a generalization of the complex numbers with four components.
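A concrete illustration of the non-commutativity (my own sketch, with plain-Python $3\times3$ matrices rather than any particular library): composing a quarter-turn about the $x$-axis with a quarter-turn about the $z$-axis gives different results in the two orders.

```python
import math

def rot_x(t):   # rotation by angle t about the x-axis
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):   # rotation by angle t about the z-axis
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A, B = rot_x(math.pi / 2), rot_z(math.pi / 2)
print(matmul(A, B))
print(matmul(B, A))    # different: composition of rotations is not commutative
```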
{ "language": "en", "url": "https://math.stackexchange.com/questions/4273624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Expanding $(1-i)^\frac{1}{3}$ using De Moivre's formula I want to rewrite $(1-i)^ \frac 1 3$ using de Moivre's formula. I defined $z := 1 - i$, then $r_z = \sqrt{2}$ and $1 = \sqrt2\cos\theta$ and $-1 = \sqrt2 \sin\theta \Rightarrow \theta = -\frac \pi 4$ So: $$1 - i = \sqrt{2}\left(\cos\left(-\frac \pi 4\right) + i\sin\left(-\frac \pi 4\right)\right)$$ $$(1-i)^\frac 1 3 = 2^ \frac 1 6 \left(\cos\left(-\frac \pi {12}\right) + i\sin\left(-\frac \pi {12}\right)\right)$$ Am I correct with this derivation?
* *Your final step is invalid (discarding solutions!) because it misapplies De Moivre's theorem, which does not generally hold for non-integer powers like $\frac13.$ So, for example, \begin{align}1^\frac12&=\left(\cos(2\pi)+i\sin(2\pi)\right)^{\frac12}\\&\neq\cos(\frac12\times2\pi)+i\sin(\frac12\times2\pi)\\&=-1.\end{align} This is because in the complex world, raising a number to a non-integer power generally outputs multiple values. *In your exercise, the goal is to determine $(1-i)^\frac13,$ i.e., the third roots of unity of $1-i.$ This process (abbreviating $(\cos x+i\sin x)$ as $\text{cis }x)$ invokes not De Moivre's theorem but the multi-valued definition of $e^z:$ \begin{align}(1-i)^\frac13&=\left[\sqrt2 \text{ cis} \left(-\frac\pi4\right)\right]^\frac13\\&=2^\frac16\text{ cis} \left(\frac{-\frac\pi4+2k\pi}3\right)\\&=2^\frac16\text{ cis} \left(\frac{8k-1}{12}\pi\right)\\&=2^\frac16\text{ cis} \left(\frac{-\pi}{12}\right),\,2^\frac16\text{ cis} \left(\frac{7\pi}{12}\right)\text{ or }\,2^\frac16\text{ cis} \left(\frac{5\pi}{4}\right).\end{align}
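A quick check with Python's `cmath` (my own sketch, not part of the answer) confirming that all three listed values cube back to $1-i$:

```python
import cmath, math

z = 1 - 1j
r = abs(z) ** (1 / 3)                       # 2**(1/6)
# the three angles (8k - 1) * pi / 12 for k = 0, 1, 2
roots = [r * cmath.exp(1j * (8 * k - 1) * math.pi / 12) for k in range(3)]
for w in roots:
    print(w, w ** 3)                        # each cube ≈ 1 - 1j
    assert cmath.isclose(w ** 3, z, rel_tol=1e-12)
```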
{ "language": "en", "url": "https://math.stackexchange.com/questions/4273796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Convexity of a function in $\mathbb{R^2}$ I have to find if this function is convex or not... $$f(x_1, x_2) = 2x_1^2 −x_1x_2 + x_2^2 −3x_1 + e^{2x_1+x_2}$$ For me its convex (I have plot the surface and it seems convex). To prove it, I tried many things but It didn't work. First, $2x_1^2$, $x_2^2$ and $e^{2x_1+x_2}$ are convex (trivial), $-3x_1$ is affine so also convex. Now, I have to show that $−x_1x_2$ but I am not really sure of that Thank you in advance !
Hint: $-x_1 x_2$ is not convex, but you can combine it with the other quadratic terms.
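Following the hint: the quadratic terms $2x_1^2-x_1x_2+x_2^2$ together have constant Hessian $\begin{pmatrix}4&-1\\-1&2\end{pmatrix}$, which is positive definite (leading minors $4$ and $7$), the affine term contributes nothing, and $e^{2x_1+x_2}$ adds the positive-semidefinite rank-one matrix $e^{2x_1+x_2}\begin{pmatrix}4&2\\2&1\end{pmatrix}$ — so the total Hessian is positive definite and $f$ is convex. A numerical spot check of this (my own sketch):

```python
import math, random

def hessian(x1, x2):
    e = math.exp(2 * x1 + x2)
    # quadratic part [[4, -1], [-1, 2]] plus e^(2*x1+x2) * [[4, 2], [2, 1]]
    return [[4 + 4 * e, -1 + 2 * e],
            [-1 + 2 * e, 2 + e]]

def min_eigenvalue(H):
    (a, b), (_, d) = H                      # symmetric 2x2: closed-form eigenvalues
    t, det = a + d, a * d - b * b
    return t / 2 - math.sqrt(t * t / 4 - det)

random.seed(1)
mins = [min_eigenvalue(hessian(random.uniform(-3, 3), random.uniform(-3, 3)))
        for _ in range(1000)]
print(min(mins) > 0)                        # True: Hessian PD at every sampled point
```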
{ "language": "en", "url": "https://math.stackexchange.com/questions/4273962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find a homogeneous system that will give all the elements of B. If A = $\begin{bmatrix} 1 & -1 \\ 0 & -2 \end{bmatrix}$, set B = {X ∈ $M_2(\mathbb{R})$∣AX + $X^\intercal$A = 0} Find a homogeneous system that will give all the elements of B. My attempt: $$AX + X^\intercal A = 0$$ $$AX = - (X^\intercal A)$$ $$\left(\cfrac{1}{A}\right)AX = - (X^\intercal A)\left(\cfrac{1}{A}\right)$$ $$X = - X^\intercal$$ This would mean that all skew-symmetric matrices would satisfy the conditions. However, I tried plugging in random skew-symmetric matrices but AX + $X^\intercal$A wouldn't equate to 0. I may have erred in my solution: I just don't know where.
You cannot infer from the step $AX=-(X^TA)$ that $A^{-1}AX=-(X^TA)A^{-1}$. When two square matrices are equal, say $X=Y$, you can left-multiply both sides by a matrix $M$ to get another equality $MX=MY$, or you can right-multiply both sides by $M$ to get $XM=YM$. However, you cannot infer that $MX=YM$, because matrix multiplication in general is not commutative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4274114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
On improving the upper bound $I(m^2) \leq \frac{2p}{p+1}$, if $p^k m^2$ is an odd perfect number with special prime $p$ (Preamble: This question is an offshoot of this earlier MSE post.) Let $p^k m^2$ be an odd perfect number (OPN) with special prime $p$ satisfying $p \equiv k \equiv 1 \pmod 4$ and $\gcd(p,m)=1$. Denote the classical sum of divisors of the positive integer $x$ by $\sigma(x)=\sigma_1(x)$, the deficiency of $x$ by $D(x)=2x-\sigma(x)$, and the abundancy index of $x$ by $I(x)=\sigma(x)/x$. Note that it is trivial to prove that $$\frac{p+1}{p} \leq I(p^k) < \frac{p}{p-1}$$ from which we obtain $$\frac{2(p-1)}{p} < I(m^2) = \frac{2}{I(p^k)} \leq \frac{2p}{p+1}.$$ This implies that $$\frac{2}{p+1} \leq \frac{D(m^2)}{m^2} < \frac{2}{p}.$$ Taking reciprocals, multiplying by $2$, and subtracting $1$, we get $$p-1 < \frac{\sigma(m^2)}{D(m^2)} \leq p,$$ where we note that $(2/p) < p-1$. Now consider the quantity $$\bigg(\dfrac{D(m^2)}{m^2} - \dfrac{2}{p}\bigg)\bigg(\dfrac{\sigma(m^2)}{D(m^2)} - \dfrac{2}{p}\bigg).$$ This quantity is negative. Thus, we obtain $$I(m^2) + \bigg(\dfrac{2}{p}\bigg)^2 < \dfrac{2}{p}\bigg(\dfrac{D(m^2)}{m^2} + \dfrac{\sigma(m^2)}{D(m^2)}\bigg) = \dfrac{2}{p}\Bigg(\bigg(2-I(m^2)\bigg) + \bigg(\dfrac{I(m^2)}{2-I(m^2)}\bigg)\Bigg).$$ Now, let $z_1=I(m^2)$. Then we have the inequality $$z_1 + \bigg(\dfrac{2}{p}\bigg)^2 < \dfrac{2}{p}\Bigg(\bigg(2-z_1\bigg) + \bigg(\dfrac{z_1}{2-z_1}\bigg)\Bigg),$$ from which we obtain $$\dfrac{2(p-1)}{p} < z_1=I(m^2) < 2$$ using WolframAlpha. Here is my inquiry: QUESTION: In this closely related MSE question, we were able to derive the improved lower bound $$\frac{2(p-1)}{p}+\frac{1}{pm^2}<I(m^2).$$ Can we similarly derive an improved upper bound for $I(m^2)$, that is hopefully better than $$I(m^2) \leq \frac{2p}{p+1}?$$ If we cannot, then can you explain why? 
MY ATTEMPT Consider the quantity $$\bigg(\dfrac{D(m^2)}{m^2} - \dfrac{2}{p+1}\bigg)\bigg(\dfrac{\sigma(m^2)}{D(m^2)} - \dfrac{2}{p+1}\bigg).$$ This quantity is nonnegative. Thus, we obtain $$I(m^2) + \bigg(\dfrac{2}{p+1}\bigg)^2 \geq \dfrac{2}{p+1}\bigg(\dfrac{D(m^2)}{m^2} + \dfrac{\sigma(m^2)}{D(m^2)}\bigg) = \dfrac{2}{p+1}\Bigg(\bigg(2-I(m^2)\bigg) + \bigg(\dfrac{I(m^2)}{2-I(m^2)}\bigg)\Bigg).$$ Now, let $z_2 = I(m^2)$. Then we have the inequality $$z_2 + \bigg(\dfrac{2}{p+1}\bigg)^2 \geq \dfrac{2}{p+1}\Bigg(\bigg(2-z_2\bigg) + \bigg(\dfrac{z_2}{2-z_2}\bigg)\Bigg),$$ from which we obtain $$\dfrac{4}{p+3} \leq z_2=I(m^2) \leq \dfrac{2p}{p+1},$$ using WolframAlpha, which does not improve on the previous known bounds for $I(m^2)$.
This is an experimental attempt - my apologies for any silly mistakes. Since we have $$I(m^2) \leq \dfrac{2p}{p+1}$$ and because $p^k m^2$ is perfect, then we have $$I(p^k)I(m^2) = I(p^k m^2) = 2$$ where we have used the fact that the abundancy index function is multiplicative. Hence, we obtain (via iteration) $$\text{First iteration: } I(m^2) \leq I(p^k)I(m^2)\cdot\dfrac{p}{p+1} \leq 2I(p^k)\cdot\dfrac{p^2}{(p+1)^2} = \dfrac{2p^2\Bigg(p^{k+1} - 1\Bigg)}{p^k (p-1)(p+1)^2}$$ $$\text{Second iteration: } I(m^2) \leq 2I(p^k)\cdot\dfrac{p^2}{(p+1)^2}$$ $$= \bigg(I(p^k)\bigg)^2 {I(m^2)} \cdot\dfrac{p^2}{(p+1)^2} \leq 2\bigg(I(p^k)\bigg)^2 \cdot\dfrac{p^3}{(p+1)^3}$$ $$\ldots$$ $$\ldots$$ $$\ldots$$ by recursively replacing $2$ with $I(p^k)I(m^2)$ and then bounding $I(m^2)$ from above by $2p/(p+1)$. Note that, at the $n^{\text{th}}$ iteration, we have the inequality $$I(m^2) \leq 2\bigg(I(p^k)\bigg)^n \cdot \dfrac{p^{n+1}}{(p+1)^{n+1}}.$$ Repeating the process ad infinitum, we get $$I(m^2) \leq \lim_{n \rightarrow \infty}{\Bigg(2\bigg(I(p^k)\bigg)^n \cdot \dfrac{p^{n+1}}{(p+1)^{n+1}}\Bigg)} = \lim_{n \rightarrow \infty}{\Bigg(\dfrac{2\bigg(I(p^k)\bigg)^n}{\dfrac{(p+1)^{n+1}}{p^{n+1}}}\Bigg)},$$ a limit which is of the indeterminate form $$\dfrac{+\infty}{+\infty}.$$ Hence, we may apply L'Hôpital's rule, by separately considering $$h(p) = \Bigg(\dfrac{2\bigg(I(p^k)\bigg)^n}{\dfrac{(p+1)^{n+1}}{p^{n+1}}}\Bigg)$$ as a function of $p$, and $$j(k) = \Bigg(\dfrac{2\bigg(I(p^k)\bigg)^n}{\dfrac{(p+1)^{n+1}}{p^{n+1}}}\Bigg)$$ as a function of $k$. (Also, note that there is "currently no L'Hôpital's rule for multiple variable limits". The closest thing that I could find on arXiv is this preprint by Gary R. Lawlor.) I will stop here for the time being. Update: (October 13, 2021 - 11:37 AM) - Manila time Since the denominator of $j(k)$ is strictly a function of $p$ and $n$, then there is no need to consider $j(k)$. Working on $h(p)$ now, will post an update in a bit. 
Update: (October 13, 2021 - 12:27 PM) - Manila time Cross-posted this answer as a question to MO, since the computations involved are somewhat tedious and complicated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4274437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Modules, Invariant Dimension Property, Direct Sum Let R be a ring with no zero divisors such that for all $r,s \in R$ there exist $a,b \in R$, not both zero, with ar + bs = 0. (a) If $R = K \oplus L$ (module direct sum), then K = 0 or L = 0. (b) If R has an identity, then R has the invariant dimension property. This question is from my module theory assignment and I am stuck on the problem. (a) I am completely stumped by this and I don't have any idea. (b) for all $r,s\in R$ there exist $a,b \in R$, not both 0, with ar + bs = 0. Now, this is a thing I deduced: Let X be the basis, say $X=\{x_1 , ... ,x_m\}$. Now, let m be odd, then for all {$x_1,x_2$}, {$x_3, x_4$},...,{$x_{m-2} ,x_{m-1}$} there exist r and s for each couple such that $rx_{p}+ sx_{q} =0$ and $rx_{m+1}=0$. Then, $x_{m+1}$ will also be 0 as R doesn't have a zero divisor. Now, if m is even then all such elements will be made 0 by m/2 couples, which means that any such set X is linearly independent. So, I think 1 is the only element in the basis and any other basis must contain only 1 element and hence invariant dimension property.
For part (b), recall that to show $R$ has the invariant dimension property, we need to show that every basis for a free $R$-module $F$ has the same cardinality. In other words, if $R^m\cong R^n$, $m=n$. The method you are trying to use does not quite work. Let $x_1,x_2\in F$. Since $x_1,x_2$ are not elements of $R$, we cannot assume that there exists $a,b\in R$ such that $ax_1+bx_2 = 0$, so the proof breaks down a bit. This question has been posted a lot on this site, and I don't think I have seen the following proof for it, so I will post a way to proceed: Suppose $F$ is a free module on $R$ such that both $F\cong R^n$ and $F\cong R^m$. This implies that $R^n\cong R^m$. Without loss of generality, suppose $m>n$. By taking the module quotient by $R^{n-1}$, we see that $R\cong R^{m-n+1}$, so $R\cong R\oplus R^{m-n}$ as an $R$-module. By part (a), either $R=0$ or $R^{m-n} = 0$, a contradiction. This implies that $m=n$, so $R$ has the invariant dimension property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4274543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that $\rho =3\cos\varphi$ is a circle Practising some exercises in Physics. In one of the questions they said that $\rho =3\cos\varphi$ is a circle and asked to prove that it is a circle. I'm trying to figure out how to do it. On the internet I saw that the general form of a circle in central $r_0,\varphi$ and radius $a$ is: $$ r^2 - 2 r r_0 \cos(\theta - \varphi) + r_0^2 = a^2 $$ Do I need to compare the two?
You are given $\rho = 3 \cos \varphi $ From which it follows that $x = \rho \cos \varphi = 3 \cos^2 \varphi $ and $y = \rho \sin \varphi = 3 \cos \varphi \sin \varphi $ Simplifying the expressions, we get $x = \frac{3}{2} (1 + \cos 2 \varphi) $ and $y = \frac{3}{2} \sin 2 \varphi $ Thus $\cos 2 \varphi = -1 + \dfrac{2}{3} x $ and $\sin 2 \varphi = \frac{2}{3} y $ And since $\cos^2 \theta + \sin^2 \theta = 1$ for any $\theta$ , then $\left(-1 + \dfrac{2}{3} x \right)^2 + \left(\dfrac{2}{3} y \right)^2 = 1$ Multiplying through by $\left( \dfrac{3}{2} \right)^2$ , we get, $ \left(x - \dfrac{3}{2} \right)^2 + y^2 = \left( \dfrac{3}{2} \right)^2 $ Which is an equation of a circle centered at $ \left(\dfrac{3}{2}, 0 \right) $ and having a radius of $\left( \dfrac{3}{2} \right) $.
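As a quick numeric sanity check of the result above (my own sketch, not part of the original answer): sample the polar curve and verify that every point lies at distance $3/2$ from the claimed center $(3/2, 0)$. The function name is mine.

```python
import math

def max_center_deviation(samples=1000):
    # sample rho = 3 cos(phi) for phi in [-pi/2, pi/2] (where rho >= 0)
    # and record how far each Cartesian point is from the center (3/2, 0)
    dev = 0.0
    for i in range(samples):
        phi = -math.pi / 2 + math.pi * i / (samples - 1)
        rho = 3 * math.cos(phi)
        x, y = rho * math.cos(phi), rho * math.sin(phi)
        dev = max(dev, abs(math.hypot(x - 1.5, y) - 1.5))
    return dev
```

Since $x - 3/2 = \tfrac32\cos 2\varphi$ and $y = \tfrac32\sin2\varphi$, the deviation is zero up to rounding.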
{ "language": "en", "url": "https://math.stackexchange.com/questions/4274743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Let $A \subset \Bbb R^n$ and $f:A \to \Bbb R$. Show that if $A = \bigcup_{i} B_i$ and $f\mid_{B_{i}}$ is measurable for any $i$ then $f$ is measurable Let $A \subset \Bbb R^n$ and $f:A \to \Bbb R$. Show that if $A = \bigcup_{i} B_i$ for $B_1, B_2, \dots$ and $f\mid_{B_{i}}$ is measurable for all $i$, then $f$ is measurable. Since $f\mid_{B_{i}}$ is measurable for all $i$ we have that $E=\{x \in A \mid f\mid_{B_{i}}(x) > a \}$ is measurable for any $i$. Now since this holds for any $i$ $$E= \{x \in A \mid f\mid_{B_{i}}(x) > a \} = \{x \in A \mid f\mid_{\bigcup_i B_{i}}(x) > a \} = \{x \in A \mid f(x) > a \}$$ and so $f$ is measurable. Is the solution here correct?
Consider a measurable set $S \subseteq \mathbb{R}$. Then consider that $$\begin{aligned} f^{-1}(S) &= f^{-1}(S) \cap A \\ &= f^{-1}(S) \cap \bigcup\limits_{i \in \mathbb{N}} B_i \\ &= \bigcup\limits_{i \in \mathbb{N}} f^{-1}(S) \cap B_i \\ &= \bigcup\limits_{i \in \mathbb{N}} f|_{B_i}^{-1}(S) \end{aligned}$$ which is a countable union of measurable sets, hence measurable. So $f$ is measurable. Note that the fact that $A \subseteq \mathbb{R}^n$ is not used here at all, nor is the fact that the codomain of $f$ is $\mathbb{R}$. Any measure spaces could have been used instead. The key facts which are needed are that intersections distribute over arbitrary unions and that if $X \subseteq Y$, $g : Y \to Z$, $W \subseteq Z$, then $g^{-1}(W) \cap X = g|_X^{-1}(W)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4274911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
existence of the minimizer for a finite dimensional subspace of a normed vector space Let $X$ be a normed vector space and let $Y$ be a finite dimensional subspace. If $x \in Y^C$, then we need to prove that there exists $y_0 \in Y$ such that $||x-y_0||= d(x,Y)= \inf_{y\in Y}||x-y||=M$. My work: by the definition of the infimum, we have for each $n \in \mathbb{N}$ an element $y_n\in Y$ such that: $||x-y_n|| < M +\frac{1}{n}$ but $ M\leq ||x-y_n||$, thus we let $n\rightarrow \infty$ and we get: $\lim_{n\rightarrow \infty}||x-y_n||=M$. I need to show that this sequence $\{y_n\}$ has a convergent subsequence by proving that it is bounded. I was able to prove $\{y_n\}$ is bounded, but I'm not sure how to show that it has a convergent subsequence.
In a finite-dimensional normed space, closed bounded balls are compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4275079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given two pythagorean triples, generate another I don't know if this has been asked before, but I could not find any existing answer. I noticed that for any pair of primitive pythagorean triples (not necessarily distinct), let's say:

a² + b² = c²
d² + e² = f²

Then there is at least another primitive triple where:

g² + h² = (cf)²

And there are 2 if the starting triples are distinct. So, for example:

(3,4,5) and (5,12,13) -> (16, 63, 65) and (33, 56, 65)
(5,12,13) and (8,15,17) -> (21, 220, 221) and (140, 171, 221)
(3,4,5) (5,12,13) (8,15,17) -> (817,744,1105) (943,576,1105) (1073,264,1105) (1104,47,1105)
(3,4,5) and (3,4,5) -> (7,24,25)

I think there is an explanation for that, a property of pythagorean triples, or in general of diophantine equations. Is it true in every case? Is there a way to calculate the two legs of the resulting triple(s)?
Given any pair of triples \begin{align*} A_1^2+B_1^2=C_1^2\\ A_2^2+B_2^2=C_2^2 \end{align*} There are $\space 2^{n-1}\space$ other triples: $\space A_3^2+B_3^2=(C_1\times C_2)^2=C_3^2\space$ where $\space n\space$ is the number of distinct prime factors $\space f\equiv 1 \pmod 4\space$ of $\space C_3\space.\quad$ To find these, we begin with Euclid's formula $\space A=m^2-k^2\quad B=2mk\quad C=m^2+k^2.\quad $ We solve the $\space C$-function for $\space k\space$ and test a defined range of $\space m$-values to see which yield integers. $$C=m^2+k^2\implies k=\sqrt{C-m^2}\\ \qquad\text{for}\qquad \bigg\lfloor\frac{ 1+\sqrt{2C-1}}{2}\bigg\rfloor \le m \le \big\lfloor\sqrt{C-1}\big\rfloor$$ The lower limit ensures $m>k$ and the upper limit ensures $k\in\mathbb{N}$. Take the example \begin{align*} 15^2+8^2=17^2\\ 21^2+20^2=29^2 \end{align*} $$C=17\cdot29=493\implies \\ \bigg\lfloor\frac{ 1+\sqrt{986-1}}{2}\bigg\rfloor=16 \le m \le \big\lfloor\sqrt{493-1}\big\rfloor=22\quad \\ \text{and we find} \quad m\in\{18,22\}\implies k\in\{13,3\}\\$$ $$F(18,13)=(155,468,493)\qquad F(22,3)=(475,132,493)\quad $$
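The search described above is easy to automate; here is a small Python sketch (the function name is mine) that scans the stated $m$-range and keeps the values where $k=\sqrt{C-m^2}$ is an integer.

```python
import math

def triples_with_hypotenuse(C):
    """Pythagorean triples (A, B, C) from Euclid's formula with m^2 + k^2 = C."""
    found = []
    m_lo = (1 + math.isqrt(2 * C - 1)) // 2   # lower limit ensures m > k
    m_hi = math.isqrt(C - 1)                  # upper limit ensures k >= 1
    for m in range(m_lo, m_hi + 1):
        k2 = C - m * m
        k = math.isqrt(k2)
        if k * k == k2:
            found.append((m * m - k * k, 2 * m * k, C))
    return found
```

For $C = 17\cdot29 = 493$ this recovers $(155, 468, 493)$ and $(475, 132, 493)$, matching the worked example.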
{ "language": "en", "url": "https://math.stackexchange.com/questions/4275309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
One can find $a_i, a_j, a_k$ sides of a triangle among a certain sequence $a_1, ..., a_{2n+1}$ where $1<a_i<2^n$ Suppose $a_1, a_2, ..., a_{2n+1}$ are distinct positive integers so that $1<a_i<2^n$ for all $1\leq i\leq2n+1$, show that there exists $a_i, a_j, a_k$ ($1\leq i<j<k\leq2n+1$) so that they are the sides of a triangle. This is my attempt at this problem: WLOG, suppose $a_1<a_2<...<a_{2n+1}$. We have to prove that $a_i+a_j>a_k$ , $a_i+a_k>a_j$, $a_j+a_k>a_i$. Since $i<j<k$, we can obviously see that $a_i+a_k>a_j$ and $a_j+a_k>a_i$ because $a_k>a_i$ and $a_j$. So the problem reduces to showing that there are $i,j,k$ so that $a_i+a_j>a_k$. This is a problem related to the Pigeonhole principle, so I suppose I need to find a way to make the "containers" and "pigeons". I tried to work around the $2^n$ tight bound, but didn't quite get the answer. One more thing I found is that we can safely say we only need to find 3 consecutive numbers, since it will maximize $a_i+a_j$ and minimize $a_k$, but I can't quite figure out what to do with that. I would appreciate a hint to this problem and thank you. Edit: I found an answer using the Pigeonhole principle: Since we have $2n+1$ numbers and $1\leq a_i<2^n$, at least $3$ numbers must fall into one of the $n$ "containers" $[2^0, 2^1), [2^1, 2^2), [2^2, 2^3), ..., [2^{n-1}, 2^n)$. Assume $a_x, a_y, a_z\in[2^k, 2^{k+1})$ for some $0\leq k\leq n-1$, with $a_x<a_y<a_z$. Then we have $a_x+a_y\geq2^k+2^k=2^{k+1}>a_z$. The cases $a_z+a_y>a_x$ and $a_z+a_x>a_y$ are trivial.
Proceed by contradiction, assume every value is in range $(1,2^n)$ and there is no triangle. One has $a_1\geq 1$ and $a_2\geq 1$ and by induction one has $a_{m+2}\geq a_{m} + a_{m+1}$ for every $m$ if there is no triangle with $a_m,a_{m+1},a_{m+2}$. It follows one has $a_m \geq f_{m}$ for every $m$, where $f_m$ is the usual fibonacci sequence, however we can show $f_{2n+1} \geq 2^n$ by induction. The case $n=0$ holds with equality. For the inductive step we have $f_{2(n+1)+1} = f_{2n+1} + f_{2n+2} \underbrace\geq_{\hspace{-1cm}\text{induction hypothesis}\hspace{-1cm}} 2^n + 2^n \geq 2^{n+1}$. We have reached a contradiction as $a_{2n+1} \geq 2^n$.
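The Fibonacci bound in this argument is easy to check numerically; the sketch below (my naming) computes $f_{2n+1}$, the minimal possible value of $a_{2n+1}$ when no three terms form a triangle, and compares it with $2^n$.

```python
def min_growth_without_triangle(n):
    # if no three terms form a triangle, the sorted sequence satisfies
    # a_{m+2} >= a_m + a_{m+1}, so a_m >= f_m (Fibonacci with f_1 = f_2 = 1);
    # return f_{2n+1}, the forced lower bound on the last term
    f = [0, 1, 1]
    while len(f) <= 2 * n + 1:
        f.append(f[-1] + f[-2])
    return f[2 * n + 1]
```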
{ "language": "en", "url": "https://math.stackexchange.com/questions/4275861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Volume of Revolution, Cross-Section Confusion I know this is a basic question, but I'm having a hard time with the following from https://www.sfu.ca/math-coursenotes/Math%20158%20Course%20Notes/sec_volume.html (not HW): The region in the first image is the cross-section above x = 1/2 The base of a solid is the region between f(x) = $x^2 - 1$ and $g(x) = -x^2 + 1$. Find the volume of the solid. What I don't understand is how the cross-sections are equilateral triangles. For the area above x = 1/2, it's not an equilateral triangle because it's a curve, though it does look like one so maybe we can approximate. But choosing something like x = 0, the region above curves in on itself so it definitely isn't an equilateral triangle at all. So I don't see how equilateral triangles relates to that region. What am I missing?
The solid is indeed built up using equilateral triangles. The triangle that sits on top of the line $x=\frac12$ is outlined in red in the figure below. In three-dimensional Cartesian coordinates, the two lower vertices of the triangle are at $(x,y,z) = \left(\frac12, -\frac34, 0\right)$ and $(x,y,z) = \left(\frac12, \frac34, 0\right)$ in the $x,y$ plane; the upper vertex is at $(x,y,z) = \left(\frac12, 0, \frac34\sqrt3\right).$ The part of the solid to the right of $x=\frac12$ has been cut off in this figure; the part to the left of the red triangle is the solid on the left side of $x=\frac12.$ It is in fact all equilateral triangles but you can't see any of them as triangles because one side of every triangle is hidden behind the figure and another side is hidden beneath. The blue curve in the lower left part of the diagram is the graph of the function $y = x^2 - 1$ in the $x,y$ plane. The graph of the function $y = 1 - x^2$ is completely hidden behind the solid object. Another way to describe the solid is that it is the solid region that is below the curved surface given by the equation $$z = \sqrt3(y - (x^2 - 1)), \tag1$$ also below the curved surface given by the equation $$z = \sqrt3((1 - x^2) - y), \tag2$$ and above the $x,y$ plane, which is given by the equation $$z = 0. \tag3$$ Follow this link for another view of the two curved surfaces and this plane. If you set $x$ to a fixed value between $-1$ and $1$ and consider each of the equations $(1),$ $(2),$ and $(3)$ to define $z$ as a function of $y,$ you can plot these functions and find that their graphs are three lines that form an equilateral triangle. It may seem like this does not have much to do with volumes of revolution, since nothing got revolved here, but it's actually a very basic intuition about volume that goes back at least to Archimedes. The usual disk or washer method that is taught in beginning calculus is just one example of this intuition.
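As a numeric cross-check of this description (my own sketch, not part of the original answer): the cross-section at $x$ is an equilateral triangle of side $s(x)=(1-x^2)-(x^2-1)=2(1-x^2)$, so its area is $\frac{\sqrt3}{4}s^2=\sqrt3\,(1-x^2)^2$, and integrating over $[-1,1]$ should give the volume $16\sqrt3/15$.

```python
import math

def solid_volume(N=100_000):
    # midpoint-rule integration of the cross-sectional area over [-1, 1]
    h = 2.0 / N
    total = 0.0
    for i in range(N):
        x = -1.0 + (i + 0.5) * h
        total += math.sqrt(3) * (1 - x * x) ** 2
    return total * h
```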
{ "language": "en", "url": "https://math.stackexchange.com/questions/4275996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Biconditionals and Conjunctions in Truth Tables Given that a biconditional $p\iff q$ is True what can be concluded from the statement $\lnot p\land \lnot q$? In a worded example: I wear my running shoes if and only if I exercise. (True) I am not exercising AND I am not wearing my running shoes. (?) If we set up a truth table, the biconditional is True in two of the four occurrences, but we see that $\lnot p\land \lnot q$ is both True and False, which would mean there is no conclusion, correct?
Restating Ryan G's answer, which I completely agree with: Given $P \iff Q$ you know that there are only two possibilities: Possibility (1) : $P \wedge Q$. Possibility (2) : $(\neg P) \wedge (\neg Q).$ Since the only information available is that the biconditional is true, there is no way of determining which of Possibility (1) or Possibility (2) pertains.
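The two rows of the truth table where the biconditional holds can be enumerated mechanically; this small brute-force sketch (mine, not part of the original answer) confirms that $(\neg P)\wedge(\neg Q)$ takes both truth values on them, so nothing can be concluded.

```python
# keep only the (P, Q) assignments where P <-> Q is true
rows = [(P, Q) for P in (False, True) for Q in (False, True) if P == Q]

# evaluate (not P) and (not Q) on the surviving rows
values = {(not P) and (not Q) for P, Q in rows}
```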
{ "language": "en", "url": "https://math.stackexchange.com/questions/4276175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
if $M=\left( \begin{matrix} A &B \\B^{T} &C \end{matrix} \right)$ is a positive definite matrix,Prove that $|M|\leq|A||C|$ Let $A,B,C$ be n-order matrices. If $$M=\left( \begin{matrix} A &B \\B^{T} &C \end{matrix} \right)$$ is a positive definite matrix, prove that $|M|\leq|A||C|$. My attempt: Since $A$ and $C$ are necessarily also positive definite, there are invertible $P$ and $Q$ such that $P^{T}AP=E_n$ and $Q^{T}CQ=E_n$, hence we have $$\left( \begin{array}{cc} P^{T} &0 \\0 &Q^{T} \end{array} \right) \left( \begin{array}{cc} A &B \\B^{T} &C \end{array} \right) \left( \begin{array}{cc} P &0 \\0 &Q \end{array} \right)=\left( \begin{array}{cc} E_n &P^TBQ \\Q^TB^TP &E_n \end{array} \right)$$ so taking determinants of both sides we have $|M|\,|A|^{-1}|C|^{-1} = \det(\text{RHS})$, but I'm stuck here. Does anyone know how to prove it? Thank you.
To finish the proof via congruence: $P^TAP = I_n$ and $Q^TCQ = I_n$ $\implies \det\big(P^TP\big) = \det\big(A\big)^{-1}$ and $\det\big(Q^TQ\big) = \det\big(C\big)^{-1}$ $\mathbf 0\prec \left[ \begin{array}{cc} P^{T} &\mathbf 0 \\\mathbf 0 &Q^{T} \end{array} \right] \left[ \begin{array}{cc} A &B \\B^{T} &C \end{array} \right] \left[ \begin{array}{cc} P &\mathbf 0 \\\mathbf 0 &Q \end{array} \right]=\left[ \begin{array}{cc} I_n &* \\* &I_n \end{array} \right]$ taking determinants: $\det\left(\left[ \begin{array}{cc} P^{T} &\mathbf 0 \\\mathbf 0 &Q^{T} \end{array} \right]\right) \det\left(\left[ \begin{array}{cc} A &B \\B^{T} &C \end{array} \right]\right) \det\left(\left[ \begin{array}{cc} P &\mathbf 0 \\\mathbf 0 &Q \end{array} \right]\right)$ $=\det\left(\left[ \begin{array}{cc} P^{T}P &\mathbf 0 \\\mathbf 0 &Q^{T}Q \end{array} \right]\right) \cdot\det\big(M\big)$ $=\det\big(P^TP\big)\cdot \det\big(Q^TQ\big)\cdot\det\big(M\big)$ $=\det\big(A\big)^{-1}\cdot \det\big(C\big)^{-1}\cdot\det\big(M\big)$ $=\det\left(\left[ \begin{array}{cc} I_n &* \\* &I_n \end{array} \right]\right)$ $\leq 1$ by Hadamard's Determinant Inequality. So $\det\big(A\big)^{-1}\cdot \det\big(C\big)^{-1}\cdot\det\big(M\big)\leq 1$ and re-scaling each side by $\det\big(A\big)\cdot \det\big(C\big)$ gives the result. note: There is no need for $A$ and $C$ to be the same size. This proof runs verbatim when $M\succ \mathbf 0$ and $A$ and $C$ are square. With this view: a special case of this inequality occurs when $C$ is $1\times 1$, which was this question: Inequality for a determinant Via induction, said special case proves Hadamard's Determinant Inequality which is at the core of this problem.
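A randomized spot check of the block-determinant bound $\det(M)\le\det(A)\det(C)$ (often called Fischer's inequality): the determinant routine and the test harness below are my own sketch, using plain Python on random $4\times4$ positive definite matrices split into $2\times2$ blocks.

```python
import random

def det(M):
    """Determinant by Gaussian elimination with partial pivoting (nonsingular input)."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def fischer_gap(seed=0, trials=50):
    """min of det(A)*det(C) - det(M) over random 4x4 positive definite M."""
    random.seed(seed)
    gap = float("inf")
    for _ in range(trials):
        G = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
        # M = G^T G + I is symmetric positive definite
        M = [[sum(G[k][i] * G[k][j] for k in range(4)) + (i == j)
              for j in range(4)] for i in range(4)]
        A = [row[:2] for row in M[:2]]
        C = [row[2:] for row in M[2:]]
        gap = min(gap, det(A) * det(C) - det(M))
    return gap
```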
{ "language": "en", "url": "https://math.stackexchange.com/questions/4276355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Let $\{X_n:n\geq 1\}$ be a sequence of r.v. with $X_n\to X$ a.s., then $X$ is r.v I'm trying to understand the "almost sure convergence" concept. I've seen two definitions. Let $\{X_n:n\geq 1\}$ be a sequence of r.v. defined on the same probability space $(\Omega,\mathcal{A},P) $. Definition 1: Let $X:\Omega\to\mathbb{R}$ be a function. We will say that $X_n\to X$ a.s. (almost sure convergence) if $\exists N\subset\Omega : P(N)=0 $ with $\lim_n X_n(w)=X(w), \forall w\in N^c $ Definition 2: Let $X$ be a r.v. on $(\Omega,\mathcal{A},P) $, then we will say that $X_n\to X$ a.s. if $P(\lim_n X_n = X )=1$ In the second one, we are defining the limit as a r.v., but in the first one it is not said that $X$ is a r.v., so I searched for a proof that it is (trying not to use the second definition). What I found is to define $X^*(w)=\begin{cases} X(w) & w\in N^c \\ 0 &w\in N \end{cases} $; then it is said that $X_n\to X^*$. As measurable functions preserve limits, $X^*$ is a r.v., and because $X=X^*$ a.s., $X$ is a r.v. too. I'm not seeing why $\lim_n X_n(w)= 0$ when $w\in N$. Is this proof correct? I don't think so. The argument satisfies me, but we don't know if $X_n\to X^*$.
If $X_n\to X$ on $A\in\mathcal{A}$ with $\mathsf{P}(A^c)=0$, $X$ need not be measurable unless $\mathsf{P}$ is complete. On the other hand, $X_n\to X^*$ on $A$, and $X^*$ is measurable, i.e., it is a random variable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4276515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $\lvert X_{ij}\rvert \leq 1$ for all $i, j$, how do we show that the diagonal elements of $(X^T X)^{-1}$ are at least $\frac{1}{n}$? I am reformulating the question in what I hope will be a simpler, clearer way. However, I am also leaving the original formulation below. Given an $n \times p$ matrix $X$, if $\lvert X_{ij}\rvert \leq 1$ for all $i, j$, how do we show that $(X^T X)^{-1}_{j,j} \geq \frac{1}{n}$ for all $j$? Original formulation: I have a linear model $Y = X\beta + \epsilon$ where $\epsilon \sim (0_n, \sigma^2 I_n)$. The matrix $X$ is $n \times p$. If $\hat \beta$ is the least squares estimator of $\beta$ and $\lvert x_{ij} \rvert \leq 1$ for all $i, j$, then I want to show that $\text{Var}(\hat \beta_j) \geq \sigma^2 / n$ for $j = 1, \dots, p$. (I guess the point here is that matrices $X$ with small values lead to large variability in the least squares estimators for $\beta$.) I know that $\hat \beta = (X^T X)^{-1} X^T Y$, and $\text{Var}(\hat \beta) = \sigma^2 (X^T X)^{-1}$. I was thinking that if I had a formula for the diagonal entry of $(X^T X)^{-1}$ that might be handy, but I'm not sure where I am going with that. I was also thinking about the triangle inequality, but I'm not sure how to use it. One consequence of the assumption $\lvert x_{ij} \rvert \leq 1$ is that all entries in $X^T X$ will have absolute value less than $n$, right? I was also playing with a toy $3 \times 2$ example for X, but that didn't seem to lead anywhere. I appreciate any help. Edit: Response to Hyperplane's answer (NOTE: the answer was deleted) I think it is clear that proving that $(X^T X)^{-1}_{j,j} \geq \frac{1}{n}$ for all $j$ will suffice, because multiplying both sides by $\sigma^2$ will give us the desired inequality. I see that you have argued that all the diagonal elements of $(X^T X)^{-1}$ are trapped between the smallest and largest eigenvalues of $(X^T X)^{-1}$. 
In particular, \begin{align*} (X^T X)^{-1}_{j,j} \geq \frac{1}{\lambda_{\max}(X^T X)} \end{align*} because eigenvalues are inverted for an inverse matrix. Therefore, if we can show that \begin{align*} \frac{1}{\lambda_{\max}(X^T X)} &\geq \frac{1}{n}, \text{ or equivalently}\\ n &\geq \lambda_{\max}(X^T X) \end{align*} we'll essentially be done. (There is an assumption here that $\lambda_{\max}(X^T X) \neq 0$; I am not entirely sure how to argue this away.) My main concern is in your second last line. Matrix norms are new to me, but according to Wikipedia, we should have $\lambda_{\max}(X^T X) = \Vert X \Vert_2^2$, which (unless I'm mistaken) means you've shown that $\lambda_{\max}(X^T X) \leq n^2$, which is not good enough. I am hoping this is a simple misunderstanding on my part.
Note that for any positive definite matrix $A$ and unit vector $u$, we have $$ (u^\ast Au)(u^\ast A^{-1}u) =\|A^{1/2}u\|_2^2\|A^{-1/2}u\|_2^2 \ge|\langle A^{1/2}u,A^{-1/2}u\rangle|^2 =1. $$ In particular, when $A=X^TX$ and $u=e_k$, $$ (A^{-1})_{kk} =e_k^T A^{-1}e_k \ge\frac{1}{e_k^T Ae_k} =\frac{1}{\|Xe_k\|_2^2} =\frac{1}{\sum_{i=1}^nx_{ik}^2} \ge\frac1n. $$ Equalities hold if and only if $A^{1/2}e_k$ is parallel to $A^{-1/2}e_k$ and $\|Xe_k\|_2^2=n$. This means $e_k$ is a right singular vector of $X$ and $|x_{ik}|=1$ for every $i$. That is, the $k$-th column of $X$ is orthogonal to all other columns and all entries on the $k$-th column are equal to $\pm1$.
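For the question's claim $(X^TX)^{-1}_{jj}\ge 1/n$, here is a quick randomized check restricted to the two-column case, where the $2\times2$ inverse is explicit. The function name and the restriction to $p=2$ are my own; nothing here is from the original answer.

```python
import random

def min_scaled_diagonal(seed=1, n=10, trials=200):
    """min over trials of n * (X^T X)^{-1}_{jj} for random n x 2 matrices X
    with entries in [-1, 1]; the claim is that this is always >= 1."""
    random.seed(seed)
    worst = float("inf")
    for _ in range(trials):
        X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
        a = sum(x * x for x, _ in X)   # (X^T X)_{11}
        b = sum(x * y for x, y in X)   # (X^T X)_{12}
        c = sum(y * y for _, y in X)   # (X^T X)_{22}
        d = a * c - b * b              # det(X^T X)
        # diagonal of the inverse of [[a, b], [b, c]] is (c/d, a/d)
        worst = min(worst, n * c / d, n * a / d)
    return worst
```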
{ "language": "en", "url": "https://math.stackexchange.com/questions/4276717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Can the limits and derivatives of a function always be solved by applying its basic theorems? I'm studying calculus, specifically the limits and derivatives of a function. So I became interested in studying the basic theorems and the properties of limits and derivatives of a function. However, throughout the theory, the following question arises: Can the limits and derivatives of a function always be computed by applying the basic theorems? I would like to know whether this is true or false, and I think the best way to see it is through an example, which I am looking for.
I suspect the OP has something simpler than the comments in mind. Consider the function $$ f(x) = \begin{cases} x\sin \left(\frac{1}{x}\right) & x \neq 0 \\ 0 & x = 0 \end{cases} $$ You can show that $\lim_{x \to 0} f(x) = 0$, but $f'(0)$ does not exist. By contrast $$ g(x) = \begin{cases} x^2\sin \left(\frac{1}{x}\right) & x \neq 0 \\ 0 & x = 0 \end{cases} $$ has $g'(0) = 0$. You can't use the "basic theorems" to compute these limits and derivatives. You have to make some kind of special argument. For these functions, the squeeze theorem is probably the most expedient route, but you could also argue from the definition. There are other more difficult functions to compute the limits and derivatives of, but hopefully this gives you some idea that computing limits and derivatives is not always a trivial exercise. Indeed, there are cases where the best we can do is say that the limit exists and then approximate it with numerical methods.
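A small numeric illustration of the two limits above (my own sketch): the squeeze $|x\sin(1/x)|\le|x|$ forces $f(x)\to0$ as $x\to0$, while the difference quotient of $g$ at $0$ is $g(h)/h = h\sin(1/h)\to0$.

```python
import math

def check(kmax=12):
    # verify the squeeze bounds along h = 10^{-k}
    for k in range(1, kmax + 1):
        h = 10.0 ** (-k)
        # |f(h)| = |h sin(1/h)| <= |h|
        assert abs(h * math.sin(1 / h)) <= abs(h)
        # |g(h)/h| = |h sin(1/h)| <= |h|, so g'(0) = 0
        assert abs((h * h * math.sin(1 / h)) / h) <= abs(h)
    return True
```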
{ "language": "en", "url": "https://math.stackexchange.com/questions/4276908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $a_n\to\sqrt{2}$ with $a_n$ defined by $a_1=1$ and $a_{n+1}=1+\frac{1}{1+a_n}$ I want to show that the real sequence $\{a_n\}_{n=1}^\infty$, defined recursively by $a_1=1$ and $a_{n+1}=1+\frac{1}{1+a_n}$, converges to $\sqrt{2}$. The crux of matter is to prove that this sequence is convergent. By considering two subsequences consisting of the terms of even and odd indices, respectively, I was able to use the monotone sequence theorem to conclude that the original sequence truly converges. And a little algebra gives us the limit of $\sqrt{2}$. Now I'm wondering if it is possible to arrive at convergence by showing that $\{a_n\}_{n=1}^\infty$ is a Cauchy sequence. Then I consider the difference $a_{n+k+1}-a_{n+1}$ because our sequence is recursively defined. And it's like: $$a_{n+k+1}-a_{n+1}=\frac{a_n-a_{n+k}}{(1+a_{n+k})(1+a_n)}.$$ This seems to add no further information. Did I go the wrong way? Are we doomed to failure in consideration of Cauchy sequences? Thank you.
Clearly all of the terms $a_n > 0$. Now look at the difference $a_n- \sqrt{2}$ and get $$a_{n+1} - \sqrt{2} = - \frac{\sqrt{2}-1}{1+ a_n} \cdot (a_n - \sqrt{2})$$ and from here we see that $$|a_{n} - \sqrt{2}| < (\sqrt{2}-1)^{n-1} |a_1 - \sqrt{2}| = (\sqrt{2}-1)^n$$ for all $n> 1$.
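The contraction estimate can be watched numerically; this sketch (my naming) iterates the recursion and compares the error with the proved bound $(\sqrt2-1)^n$ for $n>1$.

```python
import math

def error_vs_bound(steps=25):
    """Pairs (|a_n - sqrt(2)|, (sqrt(2)-1)^n) for n = 2, ..., steps."""
    a, out = 1.0, []          # a_1 = 1
    for n in range(1, steps + 1):
        if n > 1:
            out.append((abs(a - math.sqrt(2)), (math.sqrt(2) - 1) ** n))
        a = 1 + 1 / (1 + a)   # a_{n+1} = 1 + 1/(1 + a_n)
    return out
```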
{ "language": "en", "url": "https://math.stackexchange.com/questions/4277214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\theta$ is an irrational multiple of $2\pi$ given $\cos(\theta/2)\equiv \cos^2(\pi/8)$ How do we prove that $\theta$ is an irrational multiple of $2\pi$ given $\cos(\theta/2)\equiv \cos^2(\pi/8)$? With $\operatorname{SU}(2)$ rotations, \begin{align}R_z(\pi/4)R_x(\pi/4)&=[\cos(\pi/8)I-i\sin(\pi/8)Z][\cos(\pi/8)I-i\sin(\pi/8)X]\\ &=\cos^2(\pi/8)I-i[\cos(\pi/8)X+\sin(\pi/8)Y+\cos(\pi/8)Z]\sin(\pi/8)\\ &=\cos(\theta/2)I-i(\hat{n}.\vec{\sigma})\sin(\theta/2)=R_\hat{n}(\theta)\end{align} where $\vec{n}=(\cos(\pi/8),\sin(\pi/8),\cos(\pi/8))$ and $\hat{n}=\frac{\vec{n}}{||\vec{n}||}$, and $\vec{\sigma}=(X,Y,Z)$ where $X,Y,Z$ are Pauli matrices. Thus $\cos(\theta/2)\equiv\cos^2(\pi/8)$ and $\sin(\theta/2)\equiv\sin(\pi/8)\sqrt{1+\cos^2(\pi/8)}$. Original Context in my Reference Ref. to Page 196, 214 of QC and QI by Nielsen and Chuang Any hint on the possible ways to approach this would be appreciated. Note : Publication which possibly contains the proof
Let $p,q$ be positive coprime integers and suppose by contradiction that $\theta=2\pi p/q$. Since $\cos^2(\pi/8)=(2+\sqrt2)/4$, it has algebraic degree $2$. By considering Galois conjugates of primitive roots of unity, we can show that over the rationals, the algebraic degree of $\cos(p\pi/q)$ is $\phi(q)$ when $q$ is even, and $\phi(q)/2$ when $q$ is odd. Therefore, we have $\phi(q)=2$ when $q$ is even, giving $q=4,6$ (when $q$ is odd, the only solution to $\phi(q)=4$ is $q=5$, and $\cos(p\pi/5)$ lies in $\mathbb Q(\sqrt5)$, so it cannot equal $(2+\sqrt2)/4$). As none of $\cos(\pi p_1/4)$ or $\cos(\pi p_2/6)$ equals $(2+\sqrt2)/4$ for each integer $p_1\in\{1,3\}$ and $p_2\in\{1,5\}$, we obtain the desired contradiction.
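A floating-point sanity check of the final step (my own sketch): confirm $\cos^2(\pi/8)=(2+\sqrt2)/4$ numerically, and that no rational-angle candidate $\cos(p\pi/q)$ with small $q$ (checking $q=5$ from the odd case as well, for good measure) hits this value.

```python
import math

# the target value cos^2(pi/8)
target = math.cos(math.pi / 8) ** 2

# closest candidate cos(p*pi/q) over q in {4, 5, 6}, gcd(p, q) = 1
closest = min(abs(math.cos(p * math.pi / q) - target)
              for q in (4, 5, 6) for p in range(1, q)
              if math.gcd(p, q) == 1)
```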
{ "language": "en", "url": "https://math.stackexchange.com/questions/4277384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
If $Ax=0$ for all $x\in \{b\}^{\perp}$ then prove that there exists $a\in \mathbb R^{n}\setminus\{0\}$ such that $A=a\otimes b.$ If $Ax=0$ for all $x\in \{b\}^{\perp}$ then prove that there exists $a\in \mathbb R^{n}\setminus\{0\}$ such that $A=a\otimes b.$ Also it's given that $Ab\neq 0.$ Actually I can find a vector $a\in \mathbb R^n\setminus \{0\}$ such that $A=a\otimes Ab$ because the columns or rows of $A$ span a vector space of dimension $1$. But how can I prove that $A=a \otimes b$? Any help is appreciated. Thank you. Edit: I think if we can get a solution of the equation $Ax=b$ then we are done. So can we get a solution of this equation?
The columns of $A$ span a vector space of dimension $1$, hence the matrix $A$, of rank $1$, can be written $A=uv^T$, for two nonzero vectors $u$ and $v$. Let $(e_1,e_2,\dots,e_n)$ be an orthogonal basis (not necessarily orthonormal), with $e_1=b$. Then, since $Ae_i=0$ for all $i>1$ and $Ae_1\ne0$, we get that the dot product $v^Te_i$ is zero for $i>1$ and nonzero for $i=1$. That is, the only nonzero component of $v$ in the basis $(e_i)_i$ is along $e_1=b$. That is, $v=\lambda b$, for some $\lambda\ne0$. Then, with $a=\lambda u$, we have $A=uv^T=ab^T$, which you can rewrite $A=a\otimes b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4277488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Upper Bound on Probability of Maximum Statistic from Exponential Distribution Let $X_1,\cdots,X_n$ be I.I.D. Exponential$(\lambda)$. Let $X_{(n)}$ denote the maximum order statistic. Prove that $$P\left(X_{(n)}\geq \frac{2\log(n)}{\lambda}\right)\leq\frac{1}{n}.$$ Work so far: $$ \begin{align} P\left(X_{(n)}\geq\frac{2\log(n)}{\lambda}\right)&=1-F_{X_{(n)}}\left(\frac{2\log(n)}{\lambda}\right)\\&=1-\left[F_X\left(\frac{2\log(n)}{\lambda}\right)\right]^n\\&=1-\left[1-\text{exp}\{-2\log(n)\}\right]^n\\&=1-\left[1-\frac{1}{n^2}\right]^n \end{align} $$ From here, I can't think of a clever way to change the RHS to be less than or equal to $\frac{1}{n}$. Any hints or tips would be much appreciated!
Hint: Bernoulli’s inequality: $$(1+x)^n\geq 1+nx\quad x\geq -1, n\in\mathbb N.$$
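The hint finishes the proof: with $x=-1/n^2$, Bernoulli gives $(1-1/n^2)^n \geq 1 - 1/n$, so the tail probability is at most $1/n$. A Monte Carlo sketch of this (my own, with names of my choosing) compares the simulated tail probability against the exact value and the $1/n$ bound.

```python
import random, math

def tail_check(n=20, lam=1.0, trials=20000, seed=0):
    """Simulated P(max of n Exp(lam) >= 2 log(n)/lam), the exact value
    1 - (1 - 1/n^2)^n, and the 1/n bound."""
    random.seed(seed)
    thresh = 2 * math.log(n) / lam
    hits = sum(1 for _ in range(trials)
               if max(random.expovariate(lam) for _ in range(n)) >= thresh)
    p_hat = hits / trials
    p_exact = 1 - (1 - 1 / n ** 2) ** n
    return p_hat, p_exact, 1 / n
```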
{ "language": "en", "url": "https://math.stackexchange.com/questions/4277619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Snake-like continuum vs Peano continuum A snake-like continuum is a continuum such that for every $\varepsilon >0$ there exist a collection of open sets $d_1,d_2,\ldots d_n$ with diameters less than $\varepsilon$ such that $d_i \cap d_j\neq \emptyset$ iff $|i-j|\leq 1$ and $X=\cup_{i=1}^n d_i$. A Peano continuum is a continuum such that for every $\varepsilon >0$ there exist a collection of connected sets $d_1,d_2,\ldots d_n$ with diameters less than $\varepsilon$ such that $X=\cup_{i=1}^n d_i$. Triod is an example of a Peano continuum that is not snake-like. My question is: Are all snake-like continua also Peano continua? If the answer is affirmative, it is enough to prove that any open set can be represented as a finite union of connected sets (is that true for metric spaces?). Otherwise we should find a counterexample.
Since you insist of having a continuum, here is a counter-example: Let $X$ be the subset of $R^2$ which is the union of the graph of the function $$ y=\sin(1/x), 0<x\le 1 $$ and the vertical interval $\{(0,y): -1\le y\le 1\}$. I leave it to you to check that $X$ is a snake-like continuum. It is not a Peano continuum since it is not even path-connected. However, I do not have examples of path-connected snake-like continua which are not Peano. There is a good chance, they do not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4277790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Reading functions in local coordinates (vector bundles) I'm currently studying bundles from Husemoller and don't understand the proof of Proposition $1.5$ p.$25$. The argument is to show the continuity property holds on an open covering, i.e. the one composed of the sets $p^{-1}(U)$. What I don't get is how to relate the definition of vector bundles in a way that I could use the $h$-isomorphism of the definition and the maps in the proposition, which are $a,s$. My attempt for $a$, since $s$ should be similar: I thought this is the interpretation of reading a map in charts, so I end up having the following diagram $\require{AMScd}$ \begin{CD} U \times F^k \times F^k @>{h}>> p^{-1}(U) \oplus p^{-1}(U) = q^{-1}(U)\\ @V{\tilde{a}}VV @V{a}VV\\ U \times F^k @>{h}>> p^{-1}(U) \end{CD} And I thought that it would make sense if $\tilde{a}(u,x,y) = (u,x+y)$, which is continuous if $F = \mathbb{R},\mathbb{C}, \mathbb{H}$, which are the cases I'm working with. The problem here is that I don't know how to create the diagram and whether the maps are explicit in order to have a commutative diagram. Any help in order to clarify this detail and understand this concept would be appreciated.
Let me write $E|_U$ for $p^{-1}(U)$. Let $h:E|_U \to U \times F^k$ be a trivialisation of $E|_U$, that is $h$ is a homeomorphism and $h_p : E_p \to \{p\}\times F^k$ is a vector space isomorphism. Now we can build a trivialisation of $(E\oplus E)|_U$: $$\tilde h: (E\oplus E)|_U \to U \times F^k \times F^k \\ (v\oplus w) \mapsto \left(pr_U(h(v)), pr_{F^k}(h(v)), pr_{F^k}(h(w))\right)$$ Let $$\tilde a: U \times F^k \times F^k \to U \times F^k \\ (u,x,y) \mapsto (u,x+y)$$ and $$a: (E\oplus E)|_U \to E|_U\\ (v\oplus w) \mapsto (v + w)$$ Now check that $h\circ a = \tilde a \circ \tilde h$. Then $a$ is continuous if and only if $\tilde a$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4278179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Paths in an $n \times n$ square grid with diagonal moves allowed: is it possible to simplify the sum? In another post it was asked to find the number of paths in a $5 \times 5$ square board with diagonal moves allowed. While answering that question, I found myself thinking about the $n \times n$ grid case. The total number of paths for an $n \times n$ grid is $$\sum _{k = 0} ^{n-1} {2(n-1)-k \choose k}{2(n-1)-2k \choose n-k-1} =$$ $$\sum _{k = 0} ^{n-1} \frac{(2(n-1)-k)!} {k![(n-k-1)!]^2}$$ Is there a way to find a simpler form for this sum so as to reduce the computational complexity of the formula? Just a fast empirical check in Python on the better computational complexity of the formula proposed in the answer of @Gerry Myerson: Naive implementation of the formula $\sum _{k = 0} ^{n-1} {2(n-1)-k \choose k}{2(n-1)-2k \choose n-k-1} $ :

    from math import comb

    def delannoy_1(n):
        central_delannoy = 0
        for k in range(0, n):
            central_delannoy += comb(2*(n-1)-k, k) * comb(2*(n-1)-2*k, n-k-1)
        return central_delannoy

Implementation of the formula $\sum_{k=0}^n2^k{n\choose k}^2$ :

    def delannoy_2(n):
        central_delannoy = 0
        bin_k = 1
        c = 1
        for k in range(0, n+1):
            central_delannoy += c * (bin_k * bin_k)
            c *= 2
            bin_k *= n-k-1
            bin_k //= k+1
        return central_delannoy

For $n = 3000$ the second implementation is roughly $300$ times faster than the first one (I have cheated a bit and used a better implementation for the second one; if I keep to the standard of the first one, it is just $6$ times faster).
As @BillyJoe notes, these are the central Delannoy numbers. Many formulas are given at the OEIS page. Perhaps the simplest is $$\sum_{k=0}^n2^k{n\choose k}^2$$
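A quick sanity check (added here, not part of the original answer) that this sum matches the double-binomial formula — with the shift $n \mapsto n-1$, since the question's formula for the $n\times n$ grid equals the central Delannoy number $D(n-1,n-1)$:

```python
from math import comb

def delannoy_grid(n):
    # double-binomial count of paths in an n x n grid (k = number of diagonal steps)
    return sum(comb(2*(n-1)-k, k) * comb(2*(n-1)-2*k, n-k-1) for k in range(n))

def delannoy_simple(n):
    # Gerry Myerson's formula, applied with n-1 so it counts the same paths
    return sum(2**k * comb(n-1, k)**2 for k in range(n))

print([delannoy_grid(n) for n in range(1, 6)])  # [1, 3, 13, 63, 321]
```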
{ "language": "en", "url": "https://math.stackexchange.com/questions/4278277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that there exists a $c : f(c) = g(c)$ I'll try to present a solution for this problem, and I hope I can receive feedback on what went wrong, if something went wrong of course. Let $f, g : [a, b] \to \Bbb R$ be continuous functions and $\int_{a}^{b} f(x) dx = \int_{a}^{b} g(x) dx$. Show that there exists $c \in [a, b]$ such that $f(c) = g(c)$. Solution Let's define $h(x) = \int_{a}^{x}f(t)\,dt-\int_{a}^{x}g(t)\,dt$. $h(x)$ is continuous, since $f(x)$ and $g(x)$ are continuous. I hope this argument is correct. We see $h(a) = h(b) = 0$. Applying Rolle's Theorem, we get that $\exists \xi \in (a,b) : h'(\xi) = 0$ In other terms, $f(\xi) = g(\xi)$ $\square$ Thanks!
Instead of claiming $h$ to be continuous, you need that $h$ is differentiable, in order to apply Rolle's Theorem. The differentiability follows since $f$ and $g$ are continuous, so the Fundamental Theorem of Calculus tells you that $h$ is differentiable. Apart from that, the solution is correct. Here's another method: Assume for the sake of contradiction that the statement is not true. Define $h := f - g$. By assumption, $h$ is never zero. Thus, $h$ is of one sign. WLOG, $h > 0$. But this means that $\int_a^b h > 0$, contradicting our assumption. (The work here goes into showing that $h > 0 \implies \int_a^b h > 0$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4278451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
A generalization of the distance formula It is well-known that the distance of a point (3D vector) $q$ from the plane $n \cdot ( r - r_0 ) = 0$ is given by $d=\dfrac{ | n \cdot (q - r_0) | }{ \sqrt{ n \cdot n } } $​ In this problem, I seek a generalization of the formula to $\mathbb{R}^n$, and to multiple hyperplanes. So consider a point $q \in \mathbb{R}^n$, I want to find the point $x \in \mathbb{R}^n$ which satisfies $A x = b$, where $A$ is an $m \times n$ known matrix whose rows are the normals to $m$ hyperplanes, where $m \lt n $, and $b \in \mathbb{R}^{m} $ is a known vector of $ m $ constants. Hence, $x$ lies on the intersection of $m$ hyperplanes, each described by: $$ A_i x = b_i \hspace{48pt} , i = 1, ..., m $$ where $A_i$ is the $i$-th row of matrix $A$ and $b_i$ is the $i$-th element of vector $b$. The problem is to determine the minimum distance between such an $x$ and point $q$ which is given. The distance is assumed to be the Euclidean distance, given by, $ d(x, q) = \sqrt{ (x - q)^T (x - q) }$
The problem can be stated as follows: $\text{Minimize } (x- q)^T (x - q) , \text{subject to } A x = b$ Direct application of the Lagrangian multipliers method yields the desired result. Define $g(x) = (x - q)^T (x - q) + \lambda^T(A x - b)$ where $\lambda \in \mathbb{R}^m$ is the Lagrangian multipliers vector. Then, the gradient with respect to $x$ is $\nabla_x {g} = 2 (x - q) + A^T \lambda = 0 \hspace{48pt} (1) $ and the gradient with respect to $\lambda$ is: $ \nabla_\lambda {g} = A x - b = 0 \hspace{48pt} (2) $ From $(1)$, we obtain, $x = q - \frac{1}{2} A^T \lambda \hspace{48pt} (3)$ Substituting this into $(2)$, $ A(q − \frac{1}{2}A^T \lambda) −b = 0 \hspace{48pt} (4) $ Therefore, $\lambda = −2 ({AA}^T)^{−1}(b − A q) \hspace{48pt} (5) $ Substituting this into $(3)$, $x - q = A^T ({A A}^T)^{-1} (b - A q) \hspace{48pt} (6)$ so that, $x = q + A^T ({A A}^T)^{-1} (b - A q) \hspace{48pt} (7)$ Finally, from $(6)$, the minimum distance is $\begin{equation} \begin{split} d_{\text{min}} &= \sqrt{(x - q)^T (x - q)}\\ &= \sqrt{ (b - A q)^T ({A A}^T)^{-T} {A A}^T ({A A}^T)^{-1} (b - A q)} \\ &= \sqrt{ (b - A q )^T ({A A}^T)^{-1} (b - A q)} \hspace{48pt} (8) \end{split} \end{equation}$ It is easy to see that the distance formula in $3D$ with one plane is a special case of $(8)$.
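For concreteness, formulas $(7)$ and $(8)$ can be turned into a small dependency-free script (this sketch, including the toy `gauss_solve` helper, is mine, not part of the original answer):

```python
import math

def gauss_solve(M, v):
    # tiny Gauss-Jordan solver with partial pivoting (a sketch, not production code)
    m = len(M)
    aug = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(m):
            if r != c:
                f = aug[r][c] / aug[c][c]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[c])]
    return [aug[i][m] / aug[i][i] for i in range(m)]

def closest_point(A, b, q):
    # equations (7)-(8): x = q + A^T (A A^T)^{-1} (b - A q)
    m, n = len(A), len(q)
    r = [b[i] - sum(A[i][j] * q[j] for j in range(n)) for i in range(m)]
    G = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)] for i in range(m)]
    lam = gauss_solve(G, r)
    x = [q[j] + sum(A[i][j] * lam[i] for i in range(m)) for j in range(n)]
    d = math.sqrt(sum(r[i] * lam[i] for i in range(m)))
    return x, d

# one plane z = 1 in R^3: the closest point to the origin is (0,0,1), at distance 1
print(closest_point([[0, 0, 1]], [1], [0, 0, 0]))  # ([0.0, 0.0, 1.0], 1.0)
```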
{ "language": "en", "url": "https://math.stackexchange.com/questions/4278581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are there any books written using dialogues? Last year I read Questions And Answers In School Physics. The book is based on a dialogue between a student and a teacher. A lot of concepts and ideas are seamlessly driven by the dialogue. I found the presentation to be lucid and pedagogical. I am wondering if there is books with a similar style but for mathematics.
I don't know if it suits your needs, but Logicomix by Apostolos Doxiadis and Christos Papadimitriou seems to me to be a great comic book dealing with the history and some concepts of Logic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4278697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 13, "answer_id": 10 }
Asymptotic integration of $\int_0^\infty\frac{x^{-\frac{1}{2}+a}J_{-\frac{1}{2}+a}(x\alpha)}{e^x-1}{\rm d}x$ when $\alpha \gg 1$ What is the asymptotic integration of $$\int_0^\infty\frac{x^{-\frac{1}{2}+a}J_{-\frac{1}{2}+a}(x\alpha)}{e^x-1}{\rm d}x$$ when $\alpha\rightarrow\infty$. How to compute that using standard identities? Please give a general expression and then consider a special case where $a=\frac{5}{2}$.
We assume that $a>\frac{1}{2}$ is fixed. Note that $$ \frac{{x^{a - 1/2} }}{{e^x - 1}} \sim \sum\limits_{n = 0}^\infty {\frac{{B_n }}{{n!}}x^{n + a - 3/2} } $$ as $x\to 0+$, where $B_n$ denotes the Bernoulli numbers. Then, by Theorem 2 in https://doi.org/10.1137/0507061, we find \begin{align*} \int_0^{ + \infty } {\frac{{x^{a - 1/2} }}{{e^x - 1}}J_{a - 1/2} (\alpha x)dx} & \sim \frac{1}{{2}}\sum\limits_{n = 0}^\infty {\frac{{B_n }}{{n!}}\frac{\Gamma\! \left( {\frac{n}{2} + a - \frac{1}{2}} \right)}{\Gamma\! \left( {1-\frac{n}{2}} \right)}\!\left( {\frac{2}{\alpha }} \right)^{n + a - 1/2} } \\ & = \frac{1}{2}\Gamma \!\left( {a - \frac{1}{2}} \right)\!\left( {\frac{2}{\alpha }} \right)^{a - 1/2} - \frac{{\Gamma (a)}}{{4\sqrt \pi }}\left( {\frac{2}{\alpha }} \right)^{a + 1/2} \end{align*} as $\alpha \to +\infty$. Note that the Bernoulli numbers of odd index $n\geq 3$ or the reciprocal gamma function eliminate all terms in the asymptotic expansion except the first two. The absolute error of this two-term approximation decays to zero faster in $\alpha$ than any negative power of $\alpha$. Thus, this asymptotics is rather accurate for large $\alpha$. In the special case $a=\frac{5}{2}$, the approximation is $$ \int_0^{ + \infty } {\frac{{x^2 }}{{e^x - 1}}J_2 (\alpha x)dx} \sim \frac{2}{{\alpha ^2 }} - \frac{3}{{2\alpha ^3 }}. $$ The right-hand side approaches $0$ from above for large positive values of $\alpha$. A different type of expansion may be obtained as follows.
We can expand the denominator of the integrand and integrate term-by-term using http://dlmf.nist.gov/10.22.E49, to deduce \begin{align*} \int_0^{ + \infty } {\frac{{x^{a - 1/2} }}{{e^x - 1}}J_{a - 1/2} (\alpha x)dx} & = \int_0^{ + \infty } {x^{a - 1/2} \left( {\sum\limits_{n = 1}^\infty {e^{ - nx} } } \right)J_{a - 1/2} (\alpha x)dx} \\ & = \sum\limits_{n = 1}^\infty {\int_0^{ + \infty } {x^{a - 1/2} e^{ - nx} J_{a - 1/2} (\alpha x)dx} } \\ & = \left( {\frac{\alpha }{2}} \right)^{a - 1/2} \Gamma (2a)\sum\limits_{n = 1}^\infty {\frac{1}{{n^{2a} }}{\bf F}\!\left( {a,a + \frac{1}{2};a + \frac{1}{2}; - \left( {\frac{\alpha }{n}} \right)^2 } \right)} \\ & = \left( {\frac{\alpha }{2}} \right)^{a - 1/2} \frac{{\Gamma (2a)}}{{\Gamma \!\left( {a + \frac{1}{2}} \right)}}\sum\limits_{n = 1}^\infty {\frac{1}{{(n^2 + \alpha ^2 )^a }}} \\ & = (2\alpha )^{a - 1/2} \frac{{\Gamma (a)}}{{\sqrt \pi }}\sum\limits_{n = 1}^\infty {\frac{1}{{(n^2 + \alpha ^2 )^a }}} \end{align*} for $a>\frac{1}{2}$ and $\alpha>0$. Here $\mathbf F$ stands for the regularised hypergeometric function. The last series is a generalised Mathieu series. Thus applying Theorem 1 in https://arxiv.org/abs/1601.07751v1, we obtain \begin{align*} \int_0^{ + \infty } {\frac{{x^{a - 1/2} }}{{e^x - 1}}J_{a - 1/2} (\alpha x)dx} = & \;\frac{1}{2}\Gamma\! \left( {a - \frac{1}{2}} \right)\!\left( {\frac{2}{\alpha }} \right)^{a - 1/2} - \frac{{\Gamma (a)}}{{4\sqrt \pi }}\left( {\frac{2}{\alpha }} \right)^{a + 1/2} \\ & +(2\pi )^{a - 1/2} \frac{{e^{ - 2\pi \alpha } }}{{\sqrt \alpha }}\left( {1 + \mathcal{O}\!\left( {\frac{1}{\alpha }} \right)} \right) \end{align*} as $\alpha \to +\infty$, with $a>\frac{1}{2}$ being fixed. This shows the exponential accuracy of our original two-term asymptotics.
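For $a=\tfrac52$ the closed series form above makes the asymptotics easy to check numerically (the script is mine, not part of the answer; the series is truncated at $N$ terms, which is harmless since the tail decays like $N^{-4}$):

```python
import math

def integral_via_series(alpha, N=200_000):
    # For a = 5/2 the derivation above gives
    #   I(alpha) = 3 * alpha^2 * sum_{n>=1} (n^2 + alpha^2)^(-5/2),
    # using (2*alpha)^2 * Gamma(5/2) / sqrt(pi) = 3 * alpha^2.
    return 3 * alpha**2 * sum((n*n + alpha*alpha) ** -2.5 for n in range(1, N + 1))

def two_term(alpha):
    # the two-term asymptotics 2/alpha^2 - 3/(2*alpha^3)
    return 2 / alpha**2 - 1.5 / alpha**3

# the exponentially small remainder e^{-2*pi*alpha} is negligible already here
print(integral_via_series(50.0), two_term(50.0))  # the two values agree to many digits
```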
{ "language": "en", "url": "https://math.stackexchange.com/questions/4278832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Divide the first 5n positive integers into $5$ groups such that the sum of the numbers in each group is equal Prove that for every integer $ n> 1$, it is possible to divide the first 5n positive integers into $5$ groups such that each group has exactly $n$ numbers and the sum of the numbers in each group is equal. If n is even, I divide into 5 groups as follows: $A_1: 1,3,5,...,n-1,4n+2,4n+4,…,5n$ $A_2: 2,4,6,...,n,4n+1,4n+3,…,5n-1$ $A_3: n+1,n+3,n+5,...,2n-1,3n+2,3n+4,…,4n$ $A_4: n+2,n+4,...,2n,3n+1,3n+3,…,4n-1$ $A_5: 2n+1,2n+2,…,3n$ $\Rightarrow A_1=A_2=A_3=A_4=A_5=\frac{5n^2+n}{2}$ If n is odd, I don't know how to prove. Please help me.
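The even-$n$ construction in the question can be verified mechanically; this sketch (the function name is mine) builds the five groups exactly as listed and checks sizes and sums:

```python
def five_groups(n):
    # the asker's construction, valid for even n > 1
    assert n % 2 == 0 and n > 1
    A1 = list(range(1, n, 2))           + list(range(4*n + 2, 5*n + 1, 2))
    A2 = list(range(2, n + 1, 2))       + list(range(4*n + 1, 5*n, 2))
    A3 = list(range(n + 1, 2*n, 2))     + list(range(3*n + 2, 4*n + 1, 2))
    A4 = list(range(n + 2, 2*n + 1, 2)) + list(range(3*n + 1, 4*n, 2))
    A5 = list(range(2*n + 1, 3*n + 1))
    return [A1, A2, A3, A4, A5]

for n in (2, 4, 10):
    groups = five_groups(n)
    assert all(len(g) == n for g in groups)                      # n numbers each
    assert len({sum(g) for g in groups}) == 1                    # equal sums
    assert sum(groups[0]) == (5*n*n + n) // 2                    # = (5n^2+n)/2
    assert sorted(x for g in groups for x in g) == list(range(1, 5*n + 1))
print("construction verified for n = 2, 4, 10")
```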
Whenever $p$ is some prime, the first $pn$ ($n > 1$) positive integers can be split into $p$ groups such that the sum of numbers in each group is equal. Unfortunately, this proof is only existential. $\textbf{Proof:}$ Set $S_{p,n} = \{1,...,pn\}$ and set $F_{n,p} = (2^{S_{p,n}})^{p}$ so that $F_{n,p}$ is the set of $p$-tuples where each tuple entry is a subset of $S_{p,n}$. We will say that an element $x$ of $F_{n,p}$ is valid whenever we have $$\bigcup_{j=1}^{p}(x)_{j} = S_{p,n}$$ $$(x)_{i} \cap (x)_{j} = \emptyset \text{ whenever }i \neq j$$ here $(x)_{j}$ is the projection of $x$ onto its $j-$th tuple component. We will denote the set of valid elements of $F_{n,p}$ as $F_{n,p}^{*}$. Given a finite set $A$ such that $A \subset \mathbb{N}$ we will denote $$\text{Sum}(A) = \sum_{y \in A}y.$$ We define $\phi_{n,p}: F_{n,p}^{*} \rightarrow \mathbb{C}$ via $$\phi_{n,p}(x) = \sum_{j=0}^{p-1}\text{Sum}((x)_{j+1})e^{\frac{2\pi i j}{p}}$$ Note that $$\phi_{n,p}(x) = 0 \text{ iff } \text{Sum}((x)_{1}) = \text{Sum}((x)_{2})...= \text{Sum}((x)_{p})$$ due to Eisenstein's criterion applied to cyclotomic polynomials. Now note that $$V_{n,p} := \prod_{x \in F_{n,p}^{*}} \phi_{n,p}(x)$$ is of the form $$\sum_{j=0}^{p-1}A_{j,n,p}e^{\frac{2\pi i j}{p}}$$ and by symmetry $$A_{0,n,p}=A_{1,n,p}...=A_{p-1,n,p} = A_{n,p}.$$ thus $V_{n,p} = 0$. Thus some $x^{*} \in F_{n,p}^{*}$ which satisfies $\phi_{n,p}(x^{*}) = 0$ is the configuration we are after.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4279016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Function that wraps unit circle twice around itself I'm looking for an example of a function $f: S^1 \to S^1$ (from unit circle to unit circle), that is continuous, open and surjective but not injective. I have intuitively thought of an example: a function that "wraps" the unit circle twice around itself. More precisely, this is what I had in mind. We can uniquely write points on the unit circle as $(\cos t, \sin t)$, where $t \in [0, 2 \pi)$. And the function would be $f: S^1 \to S^1$ with $(\cos t, \sin t) \to (\cos 2t, \sin 2t)$. Now, I'm not even sure if this would be a good thing to look at, and if it was, I don't know how to show that $f$ is continuous and open. Keep in mind that this is for a course of real analysis and I'm not allowed to use facts and parametrizations from complex analysis. Any comments or suggestions are very welcome.
To formally define the map you describe consider the map $t\mapsto 2t\mapsto (\cos(2t),\sin(2t))$. Clearly, this is a continuous map $\phi\colon [0,2\pi]\to S^1$ as it is a composition of continuous maps. It is,

* Open, since it is a composition of open maps.
* Surjective, since it is a composition of surjective maps.
* Not injective, as it is two-to-one.

The map $\phi$ descends to the quotient of $[0,2\pi]$ identifying the endpoints giving a map $\widetilde{\phi}\colon S^1\to S^1$. This is certainly surjective, and the properties of being open and continuous also descend to the quotient by properties of the quotient topology (look at the properties characterizing the quotient topology and chase around the commutative diagram one obtains).
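As a small numerical companion (added here; the value $t=0.7$ is arbitrary), the two-to-one behaviour is visible directly: $t$ and $t+\pi$ map to the same point of $S^1$ under $t\mapsto(\cos 2t,\sin 2t)$:

```python
import math

def wrap_twice(t):
    # the double-wrapping map t |-> (cos 2t, sin 2t)
    return (math.cos(2*t), math.sin(2*t))

t = 0.7
p, q = wrap_twice(t), wrap_twice(t + math.pi)
# the two images coincide up to floating-point error: the map is two-to-one
print(max(abs(a - b) for a, b in zip(p, q)))
```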
{ "language": "en", "url": "https://math.stackexchange.com/questions/4279338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Connection between the many different definitions of the Axiom of Choice I'm currently trying to write a paper on the Axiom of Choice. With my research I have found one very simple definition of the Axiom of Choice : "Let X be a non-empty set of non-empty sets. There exists a choice function for X." This seems intuitive to me and I feel like I can understand this from the classic shoes and socks example. But I am also seeing the Axiom of Choice defined as: "The Cartesian Product of a nonempty family of nonempty sets is nonempty." This seems less easy to understand at first glance, but reading further I understand how this could make sense. The issue is, I'm struggling to connect these two definitions. In my head I see them as two separate statements, each making sense individually. Is there an easy example to understand the Cartesian Product Definition, like can I relate it back to the sock and shoe example?
The Cartesian product in your example is a formal one: it is equivalent to a set of sets here, and to pick an element of the product is exactly to give a choice function in the sense of your first wording. Hewitt/Stromberg, Real and Abstract Analysis, GTM 25 gave an equivalence proof for: $$ \text{AC }\Leftrightarrow\text{ Tukey's Lemma }\Leftrightarrow \text{ Hausdorff maximality principle }\Leftrightarrow \text{ Zorn's Lemma }\Leftrightarrow\text{ well-ordering theorem}$$ in case you are interested in more perspectives.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4279495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
$O$ is intersection of diagonals of the square $ABCD$. If $M$ and $N$ are midpoints of $OB$ and $CD$ respectively, then $\angle ANM=?$ $O$ is intersection of diagonals of the square $ABCD$. If $M$ and $N$ are midpoints of the segments $OB$ and $CD$ respectively, find the value of $\angle ANM$. Here is my approach: Assume the side length of the square is $a$. We have $\tan(\angle AND)=2$; I draw a perpendicular segment from $M$ to $NC$, call the intersection point $H$, and then $\tan (\angle MNH)=\dfrac{\frac34a}{\frac a4}=3$ ($MH$ can be found by Thales' Theorem in $\triangle BDC$) Hence $$\angle ANM=180^{\circ}-(\tan^{-1}2+\tan^{-1}3)=45^{\circ}$$ I'm looking for other approaches to solve this problem if it is possible. Intuitively, if I drag the point $N$ to $D$ and $M$ to $O$ (the angle is clearly $45^{\circ}$ here) then by moving $N$ from $D$ to $C$ and $M$ from $O$ to $B$ with constant speed, I think the angle remains $45^{\circ}$. But I don't know how to prove it.
Hint: $\;\triangle ANJ\,$ is an isosceles right triangle. [ EDIT ] The following is about the second part of OP's question. Intuitively, If I drag the point $N$ to $D$ and $M$ to $O$  [...]  then by moving $N$ from $D$ to $C$ and $M$ from $O$ to $B$ with constant speed, I think the angle remain $45^{\circ}$. Let $\,AB=1\,$, $\,DN=z \in \left[0, \frac{1}{2}\right]\,$ and $\,\frac{OM}{OB}=\frac{DN}{DC}\,$ so that $\,z=\frac{1}{2}\,$ corresponds to the original diagram and $\,z=0\,$ corresponds to $\,M \mapsto O, N \mapsto D\,$. Then $\,OM = \frac{OB}{DC} \,DN = \frac{z}{\sqrt{2}}$, and: $$ \tan\left(\angle AND\right) = \frac{1}{z} \\ \tan\left(\angle MNC\right)=\frac{1+z}{1-z} = -\,\frac{1+\frac{1}{z}}{1 - \frac{1}{z}} = -\,\frac{\tan\left(\frac{\pi}{4}\right)+\tan\left(\angle AND\right)}{1-\tan\left(\frac{\pi}{4}\right)\,\tan\left(\angle AND\right)} \\ = \tan\left(\pi -\left(\frac{\pi}{4}+\angle AND\right)\right) $$ It follows that $\,\angle MNC + \frac{\pi}{4} + \angle AND = \pi\,$, and therefore $\,x = \frac{\pi}{4}\,$ regardless of $\,z\,$.
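The constancy of the angle in $z$ (the second part above) can also be checked numerically. The coordinate placement below is my own choice: $D=(0,0)$, $C=(1,0)$, $B=(1,1)$, $A=(0,1)$, so $O=(\tfrac12,\tfrac12)$:

```python
import math

def angle_ANM(z):
    # N on DC with DN = z; M on OB with OM/OB = z (so DN/DC = OM/OB)
    A, B, O = (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)
    N = (z, 0.0)
    M = (O[0] + z * (B[0] - O[0]), O[1] + z * (B[1] - O[1]))
    u = (A[0] - N[0], A[1] - N[1])          # vector N -> A
    v = (M[0] - N[0], M[1] - N[1])          # vector N -> M
    cos = (u[0]*v[0] + u[1]*v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(cos))

print([round(angle_ANM(z), 9) for z in (0.1, 0.25, 0.5)])  # [45.0, 45.0, 45.0]
```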
{ "language": "en", "url": "https://math.stackexchange.com/questions/4279663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Solution group of Pell's equation Let the solution space of Pell's equation $ x^2-Dy^2 = 1 $ be $ S_D = \{ (x,y) \mid x^2-Dy^2=1 \} $. There is a well-defined natural injection between $ S_{D} $ and $ {\mathbb{Q}(\sqrt D)}^\times $ (sending $(x,y)$ to $x+y\sqrt D$). Call this injection $ \varphi $, and let the solution group of Pell's equation be $ G_{\sqrt D} = \varphi(S_{D}) $. Then, what group is isomorphic to $ G_{\sqrt D} $? I computed the case $D = 2$, and got the answer that $ G_{\sqrt 2} $ and $ \mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ are isomorphic. In general, how can I compute that?
I'll assume that $D$ is a squarefree integer, and that $x$ and $y$ are required to be integers. Then this group is the unit group of the ring of integers of $\Bbb{Q}(\sqrt{D})$. Dirichlet's unit theorem answers your question for the most part. This theorem tells you that the unit group is a finitely generated abelian group, and its rank is $1$ if $D>0$ and $0$ if $D<0$. This means that for every squarefree integer $D$ there is some finite abelian group $\mu_D$ such that \begin{eqnarray*} G_{\sqrt{D}}&\cong&\Bbb{Z}\times\mu_D\qquad&\text{ if }D>0,\\ G_{\sqrt{D}}&\cong&\mu_D,\qquad&\text{ if }D<0. \end{eqnarray*} It is a nice exercise to verify that the torsion group $\mu_D$ is $\langle-1\rangle$ if $D\neq-1,-3$. The two exceptional values $D=-1$ and $D=-3$ correspond to the Gaussian and Eisenstein integers, respectively, with torsion groups $\langle i\rangle$ and $\langle\omega\rangle$ that are cyclic of orders $4$ and $6$, respectively.
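To see the rank-$1$ free part concretely for $D>0$, here is a brute-force sketch (mine, not part of the answer; only practical for small $D$) that finds the fundamental solution of $x^2-Dy^2=1$ and generates further solutions by multiplying units as $(x+y\sqrt D)(u+v\sqrt D)$:

```python
from math import isqrt

def fundamental_solution(D):
    # smallest y > 0 with D*y^2 + 1 a perfect square (naive search)
    y = 1
    while True:
        x2 = D * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

def unit_mul(D, a, b):
    # (x + y*sqrt(D)) * (u + v*sqrt(D)) = (xu + Dyv) + (xv + yu)*sqrt(D)
    (x, y), (u, v) = a, b
    return (x*u + D*y*v, x*v + y*u)

g = fundamental_solution(2)     # (3, 2), since 3^2 - 2*2^2 = 1
g2 = unit_mul(2, g, g)          # (17, 12), the next positive solution
print(g, g2)
```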
{ "language": "en", "url": "https://math.stackexchange.com/questions/4279913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Meaning of a the notation $P(E)$, where $E$ is a sheaf. I am reading a paper where I find this sentence : "Let $Y=\mathbb P^1$ and $E$ a locally free sheaf on $Y$ of rank $r$. We put $X=\mathbb P(E)$, and let us denote by $\pi:X\rightarrow Y$ the canonical map, so $\pi_*(\mathcal O_X(1))\simeq E$." My question is : What is $\mathbb P(E)$? Thank you.
$\mathbb{P}(E) = \mathrm{Proj}\left( \bigoplus_{k=0}^\infty \mathrm{Sym}^kE \right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4280076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does the quotient $\mathcal{H}/\Gamma_0(2)$ look like? We often see in books (typically of modular forms) the fundamental domain of the modular surface $\mathcal{H}/SL_2(\mathbb{Z})$. I would like to see similar pictures and know how to draw them. In particular, what does $\mathcal{H}/\Gamma_0(2)$ look like?
You might want to read about "Farey symbols". These are a compact way of describing fundamental domains for subgroups of $\operatorname{PSL}_2(\mathbb{Z})$ and how the edges are identified by the group generators. Sage or Magma can compute them for any congruence subgroup. There's a lovely survey article by Kurth and Long explaining the theory here: https://arxiv.org/abs/0710.1835.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4280285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Suppose $S \subset \mathbb{R}^n$ is a finite set and is path connected. How many points does $S$ contain? Suppose $S \subset \mathbb{R}^n$ is a finite set and is path connected. How many points does $S$ contain? My answer: $S$ must contain exactly one point. Proof: Assume that $S \subset \mathbb{R}^n$ is a finite, path-connected subset with more than one element. By definition of path-connected, there exists a continuous function $f: [0,1] \longrightarrow S$ such that $f(0)=s_1$ and $f(1)=s_2$ for any $s_1,s_2 \in S$. In order for $f$ to be continuous, there then must be uncountably infinite $s_i \in S$ with $f(x_i)=s_i$ for all $x_i$ with $0 < x_i < 1$. This contradicts our assumption that $S$ is finite, so therefore $S$ has exactly one point. $\Box$ Is this an accurate way of stating the reasoning behind this answer? Is there a simpler way of explaining why this is true?
It may be that the infinitely many points between $0$ and $1$ map to only finitely many points on $S$, so more justification is needed. Instead I would say that if $S$ is path connected, it must also be connected. But if $S$ contains more than one point, it is possible to separate $S$ into two open sets, one containing one point and one containing all the others. This is the same as Zanzag's suggestion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4280497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Induction inequality/ n in exponential I need to find out through induction for which $n \geq 0$ the following inequality holds $$3n+2^{n} \leq 3^n$$ clearly it does not work for n=1 and n=2, so my induction hypothesis is that it works for $n \geq 3$, however when performing the induction step, I am kind of stuck because $ n \rightarrow n+1$ $3 (n+1) + 2^{n+1} \leq 3^{n+1}$ $3(n+1) + 2^{n} \cdot 2 \leq 3^n \cdot 3$ I recreated the starting inequality within (overline) $3n + 3 + 2^n + 2^n \leq (2+1) \cdot 3^n$ $2^n + 3 + \overline{3n + 2^n \leq 3^n} + 2\cdot 3^n$ I tried to show that what was added on the LHS is always smaller than on the RHS, therefore $2^n+3 \leq 2\cdot 3^n$ $2^n + 3 \leq 3^n + 3^n$ so it is clear that this holds, but would the proof work that way? If yes, are there other, more elegant approaches?
Another way. For $n\geq3$ by AM-GM we obtain: $$3^n-2^n=(3-2)\left(3^{n-1}+3^{n-2}\cdot2+...+3\cdot2^{n-2}+2^{n-1}\right)\geq$$ $$\geq n\sqrt[n]{3^{(n-1)+(n-2)+\cdots+1+0}\cdot2^{0+1+\cdots+(n-2)+(n-1)}}=n\cdot6^{\frac{n-1}{2}}\geq n\cdot6^{\frac{3-1}{2}}=6n>3n.$$
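Either argument is easy to spot-check by brute force (this check is mine; note that $n=0$ also satisfies the inequality, with equality $1 \le 1$):

```python
def holds(n):
    return 3*n + 2**n <= 3**n

assert holds(0)                                   # 0 + 1 <= 1, equality
assert not holds(1) and not holds(2)              # 5 > 3 and 10 > 9
assert all(holds(n) for n in range(3, 500))
print("inequality holds for n = 0 and all 3 <= n < 500")
```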
{ "language": "en", "url": "https://math.stackexchange.com/questions/4280652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can the poles of a complex function $f(z)$ be defined as the locations where $\lvert f(z) \rvert = \infty$? According to Wikipedia, A zero of a meromorphic function $f$ is a complex number $z$ such that $f(z) = 0$. A pole of $f$ is a zero of $1/f$. Is there a reason why a pole cannot be defined as the location where $\lvert f(z) \rvert = \infty$? I was imagining this to be the case based on this plot of the magnitude of the complex gamma function:
I'd say that your assertion is approximately correct, since one characterization of a pole of $f$ at $z_o$ (as opposed to essential singularity) is that $\lim_{z\to z_o} |f(z)|=+\infty$. In fact, although, there are complications in talking about "the value $\infty$", it can be made to make sense for poles: the function takes values on the Riemann sphere $\mathbb C\mathbb P^1$, which is obtained by adding (in suitable fashion) a "point at infinity" to the complex plane $\mathbb C$. The behavior of $f$ with isolated singularity that is not a pole, but is an essential singularity, is not only captured by the negation of the condition for a pole, but, further, by the Casorati-Weierstrass theorem: in every neighborhood of an essential singularity $z_o$ of $f$, $f$ comes arbitrarily near to every value in $\mathbb C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4280785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Solving $x \ln(x) = (1+x) \ln(1+x) \ln\left(\ln(1+x)\right)$ I'm trying to find a familiar-looking solution for the following (there's a single solution >0, around 9.93): $x \ln(x) = (1+x) \ln(1+x) \ln\left(\ln(1+x)\right)$ Is there anything tractable to attack this problem? I am unable to come up with a substitution that looks promising, but I am also quite rusty.
Consider the two functions $$f(x)=x \log (x)-(x+1) \log (x+1) \log (\log (x+1))$$ $$g(x)=(x+1) \log (x+1)-(x+1) \log (x+1) \log (\log (x+1))$$ $\forall x$, $f(x)<g(x)$ and $g(x)=0$ when $x=e^e-1$. Just use Newton, Halley or Householder method with this guess. Householder iterates are $$\left( \begin{array}{cc} n & x_n \\ 0 & 14.154262 \\ 1 & 10.000627 \\ 2 & 9.9352338 \end{array} \right)$$
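A plain Newton iteration from the suggested guess $e^e-1$ converges to the same root (this sketch is mine; the derivative is approximated by a central difference instead of being written in closed form):

```python
import math

def f(x):
    return x*math.log(x) - (1 + x)*math.log(1 + x)*math.log(math.log(1 + x))

x = math.e**math.e - 1          # initial guess from g(x) = 0
for _ in range(60):
    h = 1e-6
    slope = (f(x + h) - f(x - h)) / (2*h)   # numerical derivative
    x -= f(x) / slope

print(round(x, 4))  # 9.9352
```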
{ "language": "en", "url": "https://math.stackexchange.com/questions/4280919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Switch being red knowing that it worked? (conditional probability) A spaceship has switches inside its cabinet control. 30% of them are red, 70% are blue. You've learned that the probability of a switch not to work is 0.12 if it is red, and 0.2 if it is blue. A switch is randomly pressed. What is the probability of it being a red switch knowing that it worked? I'm really stuck in what to think here. Probability is not really intuitive to me. This is what I've tried, being $R = "Red"$, $W = "Worked"$: $P(R | W) = \frac{P(R \cap W)}{P(W)}$, now I have to find each the numerator and denominator, but I can't get there with the info that I have.
One tactic with these sorts of problems with two different variables (color and does/doesn't work) is to draw a square with four boxes depicting the four possibilities. For instance, you can have left = red, right = blue, top = works, bottom = doesn't work. Studies have shown that probability problems are more intuitive if they're put in terms of portions of a given total, rather than percentages, so let's take 10k buttons. 3k are red, and 7k are blue. In the lower left box, you have red buttons that don't work, which is 12% of 3k, or 360. In the upper left box, you have red buttons that work, so that's 3000-360 = 2640. Then we have 1400 in the lower right box, and 5600 in the upper right box. We're told to find the probability given that the button works, so we just have the top row. There are 2640+5600=8240 total buttons in that row, and 2640 of them are red. So we take 2640/8240 and get 32.04%.
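The same answer via Bayes' rule directly (snippet and variable names are mine):

```python
p_red, p_blue = 0.30, 0.70
p_work_red, p_work_blue = 1 - 0.12, 1 - 0.20

p_work = p_red * p_work_red + p_blue * p_work_blue   # P(W) = 0.264 + 0.560 = 0.824
p_red_given_work = p_red * p_work_red / p_work       # P(R|W) = 0.264 / 0.824
print(round(p_red_given_work, 4))  # 0.3204
```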
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Show that $\exists x_i \in \mathbb{R}$ and $i \in \mathbb{N}$ such that $\lim_{i\to \infty} x_i = \infty$ and $\lim_{i \to \infty} x_i f(x_i) = 0$ Suppose that $f : \mathbb{R} \to \mathbb{R}$ is an integrable function. Show that there exists a sequence $(x_i)_{i \in \mathbb{N}}$ in $\mathbb{R}$ such that $$\lim_{i\to \infty} x_i = \infty \text{ and } \lim_{i \to \infty} x_i f(x_i) = 0.$$ Supposing the contrary one has that $\lim_{i\to \infty} x_i = 0$ and $\lim_{i\to \infty} x_if(x_i) = \infty$. From integrability of $f$ one has also that $\int_{\mathbb{R}} |f|< \infty$. The assumption seems to imply that $x_if(x_i)$ diverges faster than $x_i$ converges. Somehow this feels like the assumption that the integral is finite would become a problem if $\lim_{i\to \infty} x_if(x_i) = \infty$, but I don't really see how? Is there a direct way to approach this or is the contradiction the way to go?
I am assuming we mean Lebesgue integrable. The conclusion is equivalent to the claim that $\liminf_{x \to +\infty} |xf(x)| = 0$. If to the contrary $\liminf_{x \to +\infty} |xf(x)| = \epsilon>0$, we have $|f(x)| > \frac{\epsilon}{2x}$ for $x>N$ for some $N$. Thus derive a contradiction comparing $\int |f|$ and $\int \frac{\epsilon}{2x}$. It's interesting to ask whether this also holds for functions which are improper Riemann integrable, in the sense that $\lim_{(x,y) \to (\infty, \infty)} \int_{-x}^{y} f(t) \ dt $ exists, where the integrals are Riemann integrals. It turns out the result doesn't hold in that case. Define a sequence $a_0 = 0$, $a_{j} = 1/j + a_{j-1}$ for $j \geq 1$. Define $f:\mathbb{R} \to \mathbb{R}$ by $f(x) = \mathbf{1}_{x \geq 0}(x) \sum_{j \geq 0} (-1)^j\mathbf{1}_{[a_{j}, a_{j+1})}(x)$. We have $\int_{\mathbb{R}} f = \sum_{j \geq 1} (-1)^{j+1}/j$ in the improper Riemann sense but $f$ only takes on values $-1, 1$ for $x \geq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Book on advanced discrete probability? Searching for materials on discrete probability I find "first course in probability" type stuff and some seemingly rather specific things. Is there a principled, step-by-step introduction to advanced discrete probabilistic topics (e.g. in a book)? Are there any advanced general theorems in this area at all?
Discrete Probability Models and Methods, by Pierre Brémaud (Springer). Quoting from the introduction to the book: The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. [...] The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the sequence of functions $k(x + \frac{1}{n}, y)$ converge uniformly to $k(x,y)$ if $k$ is continuous? Suppose I have a function $k(x,y): [0,1] \times [0,1] \rightarrow \mathbb{R}$ such that $k$ is continuous. Suppose further that for a certain application, I am fixing the $x$-coordinate and allowing $y$ to vary, and that I define $k_x: [0,1] \rightarrow \mathbb{R}$ so that $k_x(y) = k(x,y)$. Finally, for each $n \in \mathbb{N}$, I define $k_n: [0,1] \rightarrow \mathbb{R}$ so that $k_n(y) = k(x + \frac{1}{n}, y)$. Then it is clear to me that $k_n \rightarrow k_x$ pointwise. I want to know if it is also the case that $k_n \rightarrow k_x$ uniformly. I think so, and it seems like it should be easy to prove. But I have just been totally stuck on this!
Since $[0,1]\times[0,1]$ is compact, $k$ is uniformly continuous. Take $\varepsilon>0$. Since $k$ is uniformly continuous, there is some $\delta>0$ such that$$\|(x_1,y_1)-(x_0,y_0)\|<\delta\implies\bigl|k(x_1,y_1)-k(x_0,y_0)\bigr|<\varepsilon.$$Now, take $N\in\Bbb N$ such that $\frac1N<\delta$. Then\begin{align}n\geqslant N&\implies\left\|\left(x+\frac1n,y\right)-(x,y)\right\|<\delta\\&\implies\left|k\left(x+\frac1n,y\right)-k(x,y)\right|<\varepsilon.\end{align}
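A small numerical illustration (mine; $k(x,y)=\sin(xy)$ is just a sample continuous function) of the sup-norm difference shrinking with $n$:

```python
import math

def k(x, y):
    return math.sin(x * y)   # an arbitrary continuous k on [0,1] x [0,1]

def sup_diff(x, n, samples=2000):
    # approximate sup over y in [0,1] of |k(x + 1/n, y) - k(x, y)|
    ys = (i / samples for i in range(samples + 1))
    return max(abs(k(x + 1/n, y) - k(x, y)) for y in ys)

print([sup_diff(0.3, n) for n in (10, 100, 1000)])  # decreasing toward 0
```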
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\sum_{d | n} \mu(d) (\log(d))^2=0$ If $n$ is a positive integer with more than 2 distinct prime factors, how to prove that $\sum_{d | n} \mu(d) (\log(d))^2=0$? I struggle with how to continue from this. Suppose $n=p_1 p_2 ... p_r$ (we may assume $n$ is squarefree, since $\mu(d)=0$ unless $d$ is squarefree). $\sum_{d | n} \mu(d) \log^2(d)=(-1)\sum_{p_i|n}\log^2(p_i)+(-1)^2\sum_{p_i|n,p_j|n,p_i\neq p_j}\log^2(p_i p_j)+...+(-1)^r\log^2(p_1 p_2 ... p_r)$ I thought of changing $\log^2(p_i p_j)$ to $(\log p_i + \log p_j)^2$, but am still stuck. How do I prove this statement? If the way that I've shown won't work, feel free to suggest another way to prove this.
As you noted, it's obvious that $\prod_{i=1}^k p_i^{a_i}$ (where $p_i$ is prime and $a_i\geq 1$) and $\prod_{i=1}^k p_i$ will have the same result when the sum is evaluated, so let's just assume that $n=\prod_{i=1}^k p_i$. Denote $S_i$ as the set of all divisors of $n$ that contain exactly $i$ prime factors. For example, $S_0=\{1\}$, $S_1=\{p_1,p_2,\ldots p_k\}$ and $S_k=\{p_1p_2\ldots p_k\}$. Our sum is equivalent to $$\sum_{i=0}^k\sum_{s\in S_i} \mu(s)\log^2(s)$$ $$=\sum_{i=0}^k (-1)^i\sum_{s\in S_i} \log^2(s)$$ While we can directly compute this inner sum, it will be better to exploit symmetry. For a fixed $i$, we will have $\binom{k}{i}$ different possibilities for $s$. We will then expand $(\log s)^2$, where $s$ is a product of $i$ primes. For any $s$, this expansion will have $i$ terms of the form $\log^2 p$ (where $p$ is a prime) and $i(i-1)$ terms of the form $(\log p)(\log q)$ (where $p$ and $q$ are distinct primes). Using the fact that this sum is symmetric, we get that for a fixed $i$, $$\sum_{s\in S_i} \log^2 (s)=\frac{i}{k}\binom{k}{i}\sum_{j=1}^k \log^2 (p_j)+\frac{i(i-1)}{k(k-1)}\binom{k}{i}\sum_{\substack{l,m\in [1,k]\\l\neq m}}(\log p_l)(\log p_m)$$ We can define the sums $\sum_{j=1}^k \log^2 (p_j)$ and $\sum_{\substack{l,m\in [1,k]\\l\neq m}}(\log p_l)(\log p_m)$ as constants $P_1$ and $P_2$. Hence, our original expression is equivalent to $$=\sum_{i=0}^k (-1)^i\left(\frac{i}{k}\binom{k}{i}P_1+\frac{i(i-1)}{k(k-1)}\binom{k}{i}P_2\right)$$ $$=\sum_{i=0}^k (-1)^i\left(\binom{k-1}{i-1}P_1+\binom{k-2}{i-2}P_2\right)$$ $$=P_1\sum_{i=0}^k \binom{k-1}{i-1}(-1)^i+P_2\sum_{i=0}^k\binom{k-2}{i-2}(-1)^i$$ When $k-2>0$ (which holds here, since $n$ has more than two distinct prime factors, so $k\geq 3$), using the binomial theorem, both of these sums evaluate to $0$, hence our expression is equivalent to $$=P_1(0)+P_2(0)$$ $$=\boxed{0}$$
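A quick numerical check of the identity (not part of the proof; the helper functions are my own):

```python
import math

def mobius(n):
    # Moebius function via trial-division factorization.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def mu_log2_sum(n):
    # sum over divisors d of n of mu(d) * log(d)^2
    return sum(mobius(d) * math.log(d) ** 2
               for d in range(1, n + 1) if n % d == 0)

s = mu_log2_sum(2 * 3 * 5 * 7)  # n = 210: four distinct prime factors
```

For $n=210$ (and for $n=30$, three prime factors) the sum vanishes up to floating-point noise, while for $n=6$ (only two prime factors) it equals $2\log 2\log 3\neq 0$, matching the condition $k-2>0$.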
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Strange/Unexpected behavior of an Infinite product Some friends and I were playing around with this continued fraction: We noticed when writing it out for each next step, the end behavior went either to 1 (when there was an even number of terms) or to a linear dependence on x (when there were odd). This was expected - but fitting the linear dependence gave something we didn't expect. We wrote the odd-term fraction as a product: $f(x) = (x+2m+2)\prod_{n=0}^{m}\frac{x+2n}{x+2n+1}$. We expected that the tail behavior would go as $(x+2m+2)$, but instead we found it closer to $(x+m+1)$: https://www.desmos.com/calculator/7ye4ijgoef It's clear both give identical limits for finite $m$ as $x \longrightarrow \infty$, but is there an argument as to why the significantly more accurate behavior $(x+m+1)$ occurs over the expected behavior $(x+2m+2)$ for large but finite $x$?
The limit you are looking for is $\lim_{x \to \infty}(f(x) - x)$. Write $$\lim_{x \to \infty}(f(x) - x) = 2m+2 + \lim_{x \to \infty} x \cdot \left( \prod_{n=0}^m \frac{x+2n}{x+2n+1} - 1 \right) \,.$$ The limit in the right-hand side is not $0$, but rather $-m-1$. You can compute it as follows: $$\begin{align*}\lim_{x \to \infty} x \cdot \left( \prod_{n=0}^m \frac{x+2n}{x+2n+1} - 1 \right) &= \lim_{t \to 0} \frac1t \cdot \left( \prod_{n=0}^m \frac{\frac1t+2n}{\frac1t+2n+1} - 1 \right) \\ &= \left.\frac d{dt}\right\vert_{t=0} \prod_{n=0}^m \frac{1+2n t}{1+ (2n+1)t} \,. \end{align*}$$ This is easiest to compute using a Taylor expansion: $$\begin{align*}\prod_{n=0}^m \frac{1+2n t}{1+ (2n+1)t} &= \prod_{n=0}^m (1+2n t)(1- (2n+1)t + O(t^2)) \\ &= \prod_{n=0}^m (1-t + O(t^2)) \\ &= (1-t)^{m+1} + O(t^2) \end{align*}$$ so that our derivative is $-m-1$.
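One can confirm both limits numerically (a sketch; the choices $m=3$, $x=10^6$ are arbitrary):

```python
def prod_part(x, m):
    # prod_{n=0}^{m} (x + 2n) / (x + 2n + 1)
    p = 1.0
    for n in range(m + 1):
        p *= (x + 2 * n) / (x + 2 * n + 1)
    return p

m, x = 3, 1e6
f = (x + 2 * m + 2) * prod_part(x, m)
tail = f - x                       # approaches m + 1 = 4, not 2m + 2 = 8
corr = x * (prod_part(x, m) - 1)   # approaches -(m + 1) = -4
```

The product's correction term contributes exactly the $-(m+1)$ computed above, which is why the naive guess $2m+2$ overshoots.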
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to solve $e^{-u}+\frac{u}{5}=1$ for $u$? What is the method to solve it without using a graph? Solving $e^{-u}+\frac{u}{5}=1$ without using a graph. From graphing, the line $y(u)=1$ intersects with $e^{-u}+\frac{u}{5}$ at two points $(0,1)$ & $(4.97,1)$, so that gives $u = 0$ or $4.97$. But how can I solve it analytically?
I have come up with an "algebraic" approximate solution of the curve shown in the function $y(u)=e^{-u}+\frac{u}{5}-1=0$, which is as follows: As can be noticed and observed from the graph, the curve in excess of $u>7.7$ becomes a straight line. Hence first we take the derivative of the function $y(u)$: $$y(u)=e^{-u}+\frac{u}{5}-1\tag{1}$$ $$\frac{\mathrm{d}y}{\mathrm{d}u}=-e^{-u}+\frac{1}{5} = \text{slope}$$ Now the equation of the straight line mentioned above will be $y=mu+c$, and with this slope it becomes $$y=\left(-e^{-u}+\frac{1}{5}\right)u+c\tag{2}$$ The original equation Eq. (1) in this form will be $$y=\left(\frac{e^{-u}}{u}+\frac{1}{5}\right)u-1\tag{3}$$ Comparing Eq. (2) & (3), it can be deduced that $c=-1$. Now put $c=-1$ and $y=0$ for the $u$-intercept in Eq. (2); we get: $$0=\left(e^{-u}+\frac{1}{5}\right)u-1$$ $$1=u\,e^{-u}+\frac{u}{5}$$ $$e^{-u}=\frac{1}{u}\left(1-\frac{u}{5}\right)=\frac{5-u}{5u}$$ Now put this value of $e^{-u}$ and $y=0$ into Eq. (2) again; we get: $$0=\left(-\frac{5-u}{5u}+\frac{1}{5}\right)u-1$$ Simplifying gives $$5=-(5-u)+u$$ $$10=2u$$ $$u=\frac{10}{2}=5$$ Hence $u=5$, an approximate solution. Putting this value of $u$ into Eq. (1): $$e^{-5}+\frac{5}{5}-1=0$$ $$e^{-5} = 0$$ $$(0.00673) \approx 0$$ [The idea of doing all this exercise is to see if there is an algebraic solution available or not]
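For a purely numerical answer, Newton's method starting from the $u=5$ estimate above converges in a few iterations (a sketch, not part of the derivation):

```python
import math

def g(u):
    # g(u) = e^{-u} + u/5 - 1; its roots are u = 0 and u ~ 4.97
    return math.exp(-u) + u / 5 - 1

def g_prime(u):
    return -math.exp(-u) + 1 / 5

u = 5.0  # starting guess from the approximate algebraic argument
for _ in range(20):
    u -= g(u) / g_prime(u)

root = u  # the nonzero root
```

This lands on $u\approx 4.965$, consistent with the $4.97$ read off the graph and close to the approximate value $u=5$ obtained above.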
{ "language": "en", "url": "https://math.stackexchange.com/questions/4281989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Is $\Bbb Z[x]/(2x-3)\cong\Bbb Z$? Consider the following problem. I know that $\mathbb{Q}[x]/ (2x-3) $ is isomorphic to $\mathbb{Q}$. Use the evaluation map $x \to \frac{3}{2}$. But why $\Bbb Z[x]/(2x-3)\cong\Bbb Z[1/2]$? Can I use the same evaluation map here and get $\Bbb Z[x]/(2x-3)\cong\Bbb Z$? Please help me?
Well, how do you use the evaluation map in the rational case? You consider the map $$ e_{3/2}: \mathbb{Q}[x] \to \mathbb{Q} $$ given by $e_{3/2}(f) = f(3/2)$. Now, you notice that $e_{3/2}$ is surjective (why?), that $\ker e_{3/2} = (2x - 3)$ and you use the first isomorphism theorem to conclude that $$ \frac{\mathbb{Q}[x]}{(2x - 3)} \simeq \mathbb{Q}. $$ Now, consider the same map with domain $\mathbb{Z}[x]$. What can you say about its image ? Is it well defined as a function with co-domain $\mathbb{Z}$? Can you apply the same argument?
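To see concretely why the image is $\mathbb{Z}[1/2]$ and not $\mathbb{Z}$, it can help to evaluate a few integer polynomials at $3/2$ with exact rational arithmetic (an illustration, not a proof; the sample polynomials are my own choices):

```python
from fractions import Fraction

def evaluate(coeffs):
    # Evaluate a polynomial with integer coefficients
    # (constant term first) at x = 3/2, exactly.
    x = Fraction(3, 2)
    return sum(c * x**i for i, c in enumerate(coeffs))

v1 = evaluate([0, 1])     # x       |-> 3/2
v2 = evaluate([0, 0, 1])  # x^2     |-> 9/4
v3 = evaluate([-1, 1])    # x - 1   |-> 1/2
```

Every value of $e_{3/2}$ on $\mathbb{Z}[x]$ has a power of $2$ as its denominator, and since $x-1\mapsto 1/2$ (hence $(x-1)^k\mapsto 1/2^k$), every element $a/2^k$ of $\mathbb{Z}[1/2]$ is hit. In particular the map does not land inside $\mathbb{Z}$, so the same first-isomorphism-theorem argument gives $\mathbb{Z}[x]/(2x-3)\simeq\mathbb{Z}[1/2]$, not $\mathbb{Z}$.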
{ "language": "en", "url": "https://math.stackexchange.com/questions/4282146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence of a special integral Determine the value of the integral below: $\lim_{x\rightarrow 0^+} \int_{x}^{2x} \frac{\sin{(t)}}{t^2} \,dt\ $ My attempt at a solution: I could not solve the problem, and I do not know how to continue. $1)$ I started by analysing just the integral, and after that I applied integration by parts: $\int_{x}^{2x} \frac{\sin{(t)}}{t^2} \,dt\ = -\frac{\sin{(2x)}}{2x} +\frac{\sin{(x)}}{x} - \int_{x}^{2x} \frac{-\cos{(t)}}{t} \,dt\ $ $\int_{x}^{2x} \frac{\sin{(t)}}{t^2} \,dt\ = -\frac{\sin{(2x)}}{2x} +\frac{\sin{(x)}}{x} + \int_{0}^{2x} \frac{\cos{(t)}}{t} \,dt\ - \int_{0}^{x} \frac{\cos{(t)}}{t} \,dt\ $ $2)$ Now, applying the limit: $\lim_{x\rightarrow 0^+} \int_{x}^{2x} \frac{\sin{(t)}}{t^2} \,dt\ = \lim_{x\rightarrow 0^+} (-\frac{\sin{(2x)}}{2x} +\frac{\sin{(x)}}{x} + \int_{0}^{2x} \frac{\cos{(t)}}{t} \,dt\ - \int_{0}^{x} \frac{\cos{(t)}}{t} \,dt\ )$ $3)$ If we prove that $\exists\lim_{x\rightarrow 0^+} \frac{\sin{(x)}}{x}$ and $\exists\lim_{x\rightarrow 0^+}\int_{0}^{x} \frac{\cos{(t)}}{t} \,dt\ $, we would be able to apply the sum of limits: $\lim_{x\rightarrow 0^+} \int_{x}^{2x} \frac{\sin{(t)}}{t^2} \,dt\ = \lim_{x\rightarrow 0^+} -\frac{\sin{(2x)}}{2x} +\lim_{x\rightarrow 0^+} \frac{\sin{(x)}}{x} + \lim_{x\rightarrow 0^+}\int_{0}^{2x} \frac{\cos{(t)}}{t} \,dt\ - \lim_{x\rightarrow 0^+} \int_{0}^{x} \frac{\cos{(t)}}{t} \,dt\ $ $4)$ I know that $\lim_{x\rightarrow 0^+}\frac{\sin{(x)}}{x} = 1$, but I do not know whether $ \exists\lim_{x\rightarrow 0^+} \int_{0}^{x} \frac{\cos{(t)}}{t} \,dt\ $ or how to find its value. I tried for a long time, but I could not find it.
In the limit as $x\to0^+$, the following two effects compete with each other: * *The interval of integration, $[x, 2x]$, shrinks. *The average magnitude of the integrand $\frac{\sin t}{t^2}$ for $x \leq t \leq 2x$ increases. One way to cancel out these two effects is to substitute $t=xu$: $$ \int_{x}^{2x} \frac{\sin t}{t^2} \, \mathrm{d}t = \int_{1}^{2} \frac{\sin (xu)}{x u^2} \, \mathrm{d}u. $$ In light of this and $\lim_{r \to 0} \frac{\sin r}{r} = 1$ together, it is natural to expect that the limit is $\int_{1}^{2} \frac{1}{u} \, \mathrm{d}u = \log 2$. To justify this, for any $\varepsilon > 0$, choose $\delta > 0$ such that $$ \left| r - 0 \right| < 2\delta \quad \Rightarrow \quad \left| \frac{\sin r}{r} - 1 \right| < \frac{\varepsilon}{\log 2}. $$ Now let $0 < x < \delta$. Then for any $u \in [1, 2]$, we have $0 < xu < 2\delta$, and so, $$ \left| \frac{\sin(xu)}{xu} - 1 \right| < \frac{\varepsilon}{\log 2}. $$ Plugging this into the integral representation above, \begin{align*} \left| \int_{1}^{2} \frac{\sin (xu)}{x u^2} \, \mathrm{d}u - \log 2 \right| &= \left| \int_{1}^{2} \frac{1}{u} \left( \frac{\sin (xu)}{xu} - 1 \right) \, \mathrm{d}u \right| < \int_{1}^{2} \frac{1}{u} \cdot \frac{\varepsilon}{\log 2} \, \mathrm{d}u = \varepsilon. \end{align*} Summarizing, we have established the $\varepsilon$-$\delta$ property for the limit $\lim_{x\to0^+} \int_{x}^{2x} \frac{\sin t}{t^2} \, \mathrm{d}t = \log 2$ and therefore we are done.
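The predicted value $\log 2\approx 0.6931$ can be checked with simple quadrature at a small $x$ (a sketch; the step count and $x=10^{-4}$ are arbitrary choices):

```python
import math

def integrand(t):
    return math.sin(t) / t**2

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

approx = simpson(integrand, 1e-4, 2e-4)  # integral from x to 2x at x = 1e-4
```

The result already agrees with $\log 2$ to many digits, since the error of the substitution argument is only $O(x^2)$.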
{ "language": "en", "url": "https://math.stackexchange.com/questions/4282447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
The matrix inequality Is it true that $$\|A\| \leq \|A^2\|$$ for $A \in SL(2,\mathbb{R})$, where $\| \cdot \|$ is the operator norm, i.e. the largest singular value? $$\left \| A \right \| =\sqrt{\lambda_{\text{max}}(A^{^*}A)}=\sigma_{\text{max}}(A).$$ It is definitely not true for $GL(2, \mathbb{R})$, as one can consider $A=\operatorname{diag}(1/3, 1/3).$
A counterexample could be $$A=\pmatrix{3&5\\-2&-3}.$$ Some thought about how you can see that this $A$ works. First of all, its determinant is $-9-(-10)=1$ as required. This also means that the product of its two eigenvalues is $1$. The trace of $A$ is $3-3=0$. So the sum of the eigenvalues is $0$. Given the product and the sum of the eigenvalues, it is clear that the eigenvalues are $+i$ and $-i$ where $i$ is the imaginary unit. Since the eigenvalues are distinct, this $A$ is diagonalizable by some complex matrix in $GL(2,\mathbb{C})$. Now think about $A^2$. It is diagonalizable as well (because $A$ is, and by the same complex matrix). The eigenvalues of $A^2$ are $-1$ and $-1$, the squares of the eigenvalues of $A$. But a diagonalizable matrix all of whose eigenvalues are equal, is really a diagonal matrix, so we have $$A^2=\pmatrix{-1&0\\0&-1}.$$ The operator norm you are asking about satisfies $$\|A\|=\sup\left\{\|Ax\|\,\middle|\, x\in\mathbb{R}^2 \text{ and } \|x\|=1\right\}$$ where the symbols $\|\cdot\|$ inside the set on the right-hand side denotes the standard (Euclidean) length of a vector in $\mathbb{R}^2$. So $\|A\|$ is the maximal length of the image of a unit vector. It is clear that $\|A^2\|=1$ since $A^2$ maps all unit vectors to unit vectors. It is also clear that $\|A\|$ is strictly larger. For simplicity, take $x=\pmatrix{0\\1}$, then $Ax=\pmatrix{5\\-3}$ whose length is $\sqrt{5^2+3^2}=\sqrt{34}$, so it follows that the operator norm of $A$ is at least this number, $\|A\|\ge\sqrt{34}$. These arguments show that any matrix $$A=\pmatrix{t&s\\-\frac{1+t^2}{s}&-t}$$ where $s\ne 0$ and $t$ are real numbers has determinant $1$ and trace $0$ and therefore will square to $\pmatrix{-1&0\\0&-1}$. By taking both $s$ and $t$ huge, you can make the length of the second column of $A$ as huge as you want. So there are matrices with arbitrarily large operator norms in $SL(2,\mathbb{R})$ whose squares have operator norm $1$.
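A direct verification of this counterexample (for a $2\times2$ matrix, $\sigma_{\max}^2$ is the larger eigenvalue of $A^TA$, which can be read off from its trace and determinant):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def op_norm(A):
    # Largest singular value of a 2x2 matrix: sqrt of the larger
    # eigenvalue of A^T A, from the trace and determinant of A^T A.
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    M = matmul(At, A)
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.sqrt((tr + math.sqrt(tr * tr - 4 * det)) / 2)

A = [[3, 5], [-2, -3]]
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # = 1, so A is in SL(2,R)
A2 = matmul(A, A)                             # = -I
nA, nA2 = op_norm(A), op_norm(A2)             # nA ~ 6.85, nA2 = 1
```

So $\|A\|\approx 6.85 > 1 = \|A^2\|$, confirming the failure of the inequality.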
{ "language": "en", "url": "https://math.stackexchange.com/questions/4282612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Compute limit of $a_n:=E[f(x_1)f(x_2)]$, for random $(x_1,\ldots,x_n)$ on unit-sphere in $R^n$ and any function $f$ with a jump discontinuity at $0$. Let $f:[-1,1] \to \mathbb R$ be continuous a.e (and assumed to be bounded, if that helps), with a jump-discontinuity at $0$ and set $$ c:=\frac{f(0^-)+f(0^+)}{2}. $$ For any integer $n \ge 2$, define $a_n$ by $$ a_n := \mathbb E[f(x_1)f(x_2)], $$ where $x=(x_1,\ldots,x_n)$ is uniform on the unit-sphere in $\mathbb R^n$. Question. Is it true that $\lim_{n \to \infty} a_n = c^2$ ? For example, if the function $f$ is given by $$ f(x) := \begin{cases}\cos^{13}(x),&\mbox{ if }x > 0,\\x^2-5,&\mbox{ else,}\end{cases} $$ then $f$ has a jump discontinuity at $0$ with $c=-2$, and numerical experiments show that $a_n \to 4=c^2$.
Disclaimer. If I've not made obvious errors, the post below answers the question in the affirmative. Suggestions / comments welcome. Define the function $F:[-1,1]^2 \to \mathbb R$ by $F(u, v) := f(u)f(v)$, and let $\mu_n$ be the marginal distribution of $(x_1,x_2)$ when $(x_1,x_2,\ldots,x_n)$ is uniform on the unit-sphere in $\mathbb R^n$. Thus, we seek the limit of the sequence of numbers defined by $a_n = \int_{\mathbb R^2}Fd\mu_n$. Now, define a sequence of functions $(f_n)_n:[-1,1] \to \mathbb R$ by $$ f_n(t) := \begin{cases}f(t),&\mbox{ if }1/n < |t| \le 1,\\ g_n(t),&\mbox{ if }|t| \le 1/n,\end{cases} $$ for any choice of functions $g_n:[-1,1] \to \mathbb R$ such that * *$g_n$ is continuous on $[-1/n,1/n]$, *$g_n(-1/n) = f(-1/n)$, *$g_n(0) = c$, *$g_n(1/n) = f(1/n)$. Define $F_n:[-1,1]^2 \to \mathbb R$ by $F_n(u,v):=f_n(u)f_n(v)$. We note the following facts: * *(1) $F_n \to F$ pointwise on $[-1,1]^2\setminus L$, where $L := \{(u,v) \mid u = 0\,\lor\, v= 0\}$, a set with $\mu(L) = \mu_n(L) = 0$ for all $n$. *(2) $F_n$ is continuous (and therefore bounded). *(3) $\mu_n \to \mu := \delta_{(0,0)}$ weakly. For example, see the comments under this question https://mathoverflow.net/q/406716/78539, and combine with Scheffé's Lemma. *(4) $\int_{\mathbb R^2} F_n d\mu_n \to c^2$. One computes $$ \left|a_n-\int_{\mathbb R^2}F_nd\mu_n\right| \le \int_{\mathbb R^2} |F-F_n| d\mu_n := A_n. $$ Thanks to this post https://mathoverflow.net/a/406728/78539, we know that $A_n \to 0$. We conclude from (4) that $a_n \to c^2$ as claimed. $\quad\quad\quad\Box$
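As a sanity check of the conclusion (not part of the proof), one can estimate $a_n$ by Monte Carlo for the example $f$ from the question; the dimension and sample count below are arbitrary choices of mine:

```python
import math
import random

def f(x):
    # The example from the question: jump at 0, with c = -2.
    return math.cos(x) ** 13 if x > 0 else x * x - 5

def estimate_a_n(n, samples, seed=0):
    # (x1, ..., xn) uniform on the sphere is g/||g|| for g standard
    # normal; we only need (x1, x2), and the squared norm of the other
    # n - 2 coordinates is chi^2_{n-2} = Gamma((n-2)/2, scale 2).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        s = rng.gammavariate((n - 2) / 2, 2)
        norm = math.sqrt(g1 * g1 + g2 * g2 + s)
        total += f(g1 / norm) * f(g2 / norm)
    return total / samples

est = estimate_a_n(n=400, samples=50000)  # should sit near c^2 = 4
```

Already at $n=400$ the estimate sits close to $c^2 = 4$, in line with the numerical experiments mentioned in the question.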
{ "language": "en", "url": "https://math.stackexchange.com/questions/4282954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can I intuitively predict the shape of a solution to ordinary differential equations? I am new to differential equations, and I realized that I don't have even an intuition as to what solutions to ordinary differential equations would roughly look like. For example, given the governing equation: $$ \frac{d^2 y}{d x^2} -\frac{d y}{d x} + y = 0 $$ for the range $$0<x<10$$ given the boundary conditions $$y(0) = 1$$ $$y(10) = 5$$ The plotted graph for the function y looks like: [plot omitted] How can you intuitively predict that y will be mostly flat in the range 0<x<6? If so, how would you intuitively picture the solution for this function y without solving for it analytically?
I don't think that it is possible to predict the shape of the solution of a general ordinary differential equation (ODE). I have found this tool useful for looking at the behaviour of ODEs; it is best for first-order ODEs. But there are some that are more common (a quick note: $A$ and $B$ are arbitrary constants) * *for example the simple harmonic oscillator $$ \frac{\partial^2 y}{\partial x^2} +k^2 y = 0 $$ which has solutions $y=A \sin(kx)+B \cos(kx)$ *a related ODE is this one $$ \frac{\partial^2 y}{\partial x^2} -k^2 y = 0 $$ which has solutions $y=A \sinh(kx)+B \cosh(kx)$ *then there is the damped simple harmonic oscillator $$ \frac{\partial^2 y}{\partial x^2} +k_0\frac{\partial y}{\partial x} +k_1^2 y = 0 $$ which has solutions $y=e^{x\alpha}(A \sinh(x\omega)+B \cosh(x\omega))$ for constants $\omega$, $\alpha$ which depend on $k_0$ & $k_1$. There are also solutions of commonly occurring ODEs that have less pretty forms * *Bessel functions, used in determining waves on a circular plate. *Legendre polynomials, used in describing spherical waves in 3d, and used to describe atomic orbitals. *Chebyshev polynomials, used in signal analysis *Hermite polynomials, several uses in signal analysis, and in the description of some quantum states. Hopefully that helps
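One partial tool for intuition is the characteristic equation: for constant-coefficient equations like the ones above, the roots of $r^2+k_0r+k_1^2=0$ already tell you the qualitative shape (oscillation for complex roots, exponentials for real roots, growth or decay from the sign of the real part). A sketch:

```python
import cmath

def char_roots(k0, k1):
    # Roots of r^2 + k0 r + k1^2 = 0, governing y'' + k0 y' + k1^2 y = 0.
    disc = cmath.sqrt(k0 * k0 - 4 * k1 * k1)
    return ((-k0 + disc) / 2, (-k0 - disc) / 2)

# Simple harmonic oscillator (k0 = 0): purely imaginary roots -> sin/cos.
r_sho = char_roots(0, 2)
# Heavy damping (k0^2 > 4 k1^2): two real negative roots -> decaying exponentials.
r_over = char_roots(5, 1)
# The equation y'' - y' + y = 0 from the question (k0 = -1, k1 = 1):
# complex roots (1 +- i sqrt(3))/2, so e^{x/2} times sines and cosines.
r_q = char_roots(-1, 1)
```

For the question's equation the real part $1/2 > 0$ means the envelope grows like $e^{x/2}$, which already hints at a slowly rising, oscillatory profile.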
{ "language": "en", "url": "https://math.stackexchange.com/questions/4283101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Circle the standing coin Dori and Ariel are playing with Mr. Fin's coin collection. When taking two special coins they noticed that one of them has a radius five times larger than the other. They decide to leave the larger coin still and see how many turns the other coin takes to circle the standing coin. How many turns does the small coin make? Attempt: I found this reasoning from an exercise sheet: If the radius of the smaller coin were "once larger than the other" it would be $2r$. So five times bigger is $6r$. Is it correct? answer: 6
The center of the small coin $A$ moves around a circle whose radius equals the radius of the fixed coin ($5r$) plus the radius of $A$ ($r$), which is $6r$. So the center of $A$ travels a distance of $2\pi\cdot 6r = 12\pi r$. The coin $A$ rolls without slipping through this distance, each rotation covering $2\pi r$. Therefore, it takes 6 rotations to complete the path.
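The arithmetic above as a one-line computation (this is the classic coin-rotation observation: the extra turn beyond $5$ comes from the revolution of the center itself):

```python
import math

r = 1.0        # radius of the rolling coin (any unit works)
R = 5.0 * r    # radius of the fixed coin

path = 2 * math.pi * (R + r)   # distance travelled by the rolling coin's center
per_turn = 2 * math.pi * r     # rolling distance covered per full turn
turns = path / per_turn        # = (R + r) / r = 6
```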
{ "language": "en", "url": "https://math.stackexchange.com/questions/4283255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Use of "Let" in proofs I'm confused about the use of the word let in proofs. For instance, if we want to prove that $A\subset B$ we can start by saying: 'let $x\in A$'. Are we assuming then that $A$ is non-empty? Another example: We can prove that if $A\times B\subset C\times D $ then $A \subset C$ and $B \subset D$ starting with let $a\in A$ and $b\in B$ But that is false if $B$ is empty. What is wrong with that proof?
There are two confusing and different uses of “Let.” In this case, when proving $A\subset B,$ you are saying “Let $x$ run through all elements of $A.$” Then you prove that $x\in B,$ too, proving that: $$\forall x:x\in A\implies x\in B\tag1$$ In this case, you don’t really need to consider $A$ empty. This is a special case of a general type of proof. If $P(x)$ and $Q(x)$ are statements, and we want to prove: $$\forall x:P(x)\implies Q(x)\tag2$$ we need not know whether $P(x)$ is ever true or not, we just need to show if $P(x)$ is true, we can conclude $Q(x).$ There are theorems like “If $n$ is an odd perfect number, then <something else about $n$>.” The proofs might start, “Let $n$ be an odd perfect number….” We still don’t know if odd perfect numbers exist, but we can still prove things about them. The other meaning would be where you only needed one element of $A.$ Then saying, “Let $x\in A\dots$” requires you to prove $A$ is non-empty first. It does take some time to get used to these two usages, since nothing in the language, only the context, hints at the meaning. It might be better in the first example to say, “If $x\in A,$ then… “ rather than “Let $x\in A…$”
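The first usage can even be checked formally: a statement $\forall x:P(x)\implies Q(x)$ is provable without knowing whether any $x$ satisfies $P(x)$. A small Lean 4 sketch (the hypothesis is deliberately impossible, yet the theorem goes through):

```lean
-- "Let n be a natural number with n < 0 ..." is fine even though
-- no such n exists: the implication is vacuously true for every n.
example : ∀ n : Nat, n < 0 → n = 42 := by
  intro n h
  exact absurd h (Nat.not_lt_zero n)
```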
{ "language": "en", "url": "https://math.stackexchange.com/questions/4283471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }