How to prove universally-quantified formula is true by contraposition? For all natural numbers $x$ and $y$, if $x+y$ is odd, then $x$ is odd or $y$ is odd. How do I prove the following statement is true by contraposition without using a truth table and theorems?
The contrapositive of "if $x+y$ is odd then either $x$ is odd or $y$ is odd" would be "if $x$ and $y$ are both even or both odd, then $x+y$ is even". The sum of two even numbers, $2a$ and $2b$, is $2(a+b)$, which is even. The sum of two odd numbers, $2a+1$ and $2b+1$, is $2(a+b+1)$, which is also even. This is sufficient. In fact, you can prove very similarly that the implication goes both ways. Edit: as originally stated, the "or" in "$x$ is odd or $y$ is odd" could be read as inclusive rather than exclusive. In that case the statement only works in one direction, and the proof by contraposition amounts to proving "$x$ even and $y$ even implies $x+y$ even", which is proved above.
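A quick brute-force check of both the statement and the contrapositive direction over a small range (Python; purely illustrative, not a proof):

```python
# Check: if x and y have the same parity, x + y is even (contrapositive),
# and if x + y is odd, then x is odd or y is odd (original statement).
def same_parity(x, y):
    return x % 2 == y % 2

for x in range(100):
    for y in range(100):
        if same_parity(x, y):
            assert (x + y) % 2 == 0          # contrapositive direction
        if (x + y) % 2 == 1:
            assert x % 2 == 1 or y % 2 == 1  # original statement
print("verified for 0 <= x, y < 100")
```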
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Harmonic series in probability mass function problem Suppose $X$ is a discrete random variable with possible values $\{1, 2, 3,\dots\}.$ Further, suppose the p.m.f. is $$c\left(\frac{1}{x}-\frac{1}{x+1}\right)\enspace\text{s.t. $c > 0$}$$ Find c and $E[X].$ Idea: We have $$1=\sum_{x=1}^{\infty}c\left(\frac{1}{x}-\frac{1}{x+1}\right)=c\left(\sum_{x=1}^{\infty}\frac{1}{x}-\sum_{x=1}^{\infty}\frac{1}{x+1}\right)$$ But $\sum_{x=1}^{\infty}\frac{1}{x}$ is the harmonic series, which diverges. Thus, there is no value for $c$. Since it diverges, $E[X]$ does not exist. Questions: Is it possible for c not to exist? Did I make a mistake? Update: $$1=\sum_{x=1}^{\infty}c\left(\frac{1}{x}-\frac{1}{x+1}\right)=c\sum_{x=1}^{\infty}\left(\frac{1}{x}-\frac{1}{x+1}\right)$$ by telescoping we have $$1=c\cdot 1$$ So, our p.m.f. is $$\left(\frac{1}{x}-\frac{1}{x+1}\right)$$ But, $$E[X]=1\cdot\frac{1}{2}+2\cdot\frac{1}{6}+3\cdot\frac{1}{12}+4\cdot \frac{1}{20}+\dots=\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\dots$$ But that diverges. So, $E[X]$ doesn't exist.
Hint: Evaluate the series up to a finite $N$ first, then take the limit as $N\to \infty$. The series up to a finite $N$ will be a telescoping series, i.e. most of the terms will cancel, making it easy to evaluate.
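A numeric sketch of the hint: the partial sums of the p.m.f. telescope to $1-\frac{1}{N+1}$ (so $c=1$), while the partial sums for $E[X]$ keep growing like the harmonic series (Python, illustration only):

```python
# Partial sums of sum_{x=1}^{N} (1/x - 1/(x+1)) telescope to 1 - 1/(N+1),
# so c = 1. The partial sums of x * p(x) = 1/(x+1) grow roughly like log N.
def pmf_partial_sum(N):
    return sum(1 / x - 1 / (x + 1) for x in range(1, N + 1))

def expectation_partial_sum(N):
    return sum(x * (1 / x - 1 / (x + 1)) for x in range(1, N + 1))

assert abs(pmf_partial_sum(10**4) - (1 - 1 / (10**4 + 1))) < 1e-12
# The E[X] partial sums do not settle down (divergence):
print(expectation_partial_sum(10**2), expectation_partial_sum(10**4))
```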
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Motivate why we may calculate $P[X<Y]=\int_{-\infty}^\infty P[X<Y|Y=y]f_Y(y)dy$ Given two independent random variables $X$ and $Y$ with continuous densities, we know that there exists a regular conditional distribution $P[X<Y|Y=y]$, and further that the regular conditional distribution has density $f_{X,Y}(x,y)/f_Y(y)$. How do we motivate that $P[X<Y]=\int_{-\infty}^\infty P[X<Y|Y=y]f_Y(y)dy$? This looks like a generalization of the discrete case, where we may write $\{X<Y\}$ as the disjoint union $\bigcup _{k \in \mathbb Z } \{X<k\} \cap \{Y=k \}$ and then use the countable additivity of $P$ to conclude that $$P[X<Y]=\sum_{k \in \mathbb Z }P[\{X<k\} \cap \{Y=k \}] $$ (And of course, for every $k$, $P[\{X<k\} \cap \{Y=k \}]=P[X<k|Y=k]P[Y=k]$.) Any guidance would be appreciated!
This is a special case of the very general "double expectation theorem" of conditional expectations: Theorem. Let $(\Omega,\mathcal{F},P)$ be a probability space, $\mathcal{G} \subseteq \mathcal{F}$ a $\sigma$-algebra of events, and $Z$ an $\mathcal{F}$-measurable random variable such that either $Z$ is non-negative or $Z \in L^1(P)$. Then $$ E[Z] = E[E[Z \mid \mathcal{G}]]. $$ As a special case, using the fact that the probability of an event is the same as the expected value of the indicator function of that event, we obtain Corollary. Let $(\Omega,\mathcal{F},P)$ be a probability space, $\mathcal{G} \subseteq \mathcal{F}$ a $\sigma$-algebra of events, and $A \in \mathcal{F}$ an event. Then $$ P(A) = E[P(A \mid \mathcal{G})]. $$ If we have a random variable $Y$, then conditioning on $Y$ means (by definition) conditioning on the $\sigma$-algebra $\mathcal{G} = \sigma(Y)$. In particular, $$ P(A) = E[P(A \mid Y)] $$ for any event $A$. Even more specifically, in your case we can obtain the general formula $$ P(X < Y) = E[P(X < Y \mid Y)]. $$ If $X$ and $Y$ are both discrete, then this reduces to the usual formula: $$ \begin{aligned} P(X < Y) &= E[P(X < Y \mid Y)] \\ &= \sum_y P(X < Y \mid Y = y) P(Y = y) \\ &= \sum_y P(X < Y, Y = y). \end{aligned} $$ Moreover, if $X$ and $Y$ are both absolutely continuous with joint density $f_{X, Y}$ and marginal densities $f_X$ and $f_Y$, respectively, then one can show that $$ \begin{aligned} P(X < Y) &= E[P(X < Y \mid Y)] \\ &= \int_{\mathbb{R}} P(X < Y \mid Y = y) f_Y(y) \, dy. \end{aligned} $$ This is what you asked about, and it is because $P(X < Y \mid Y)$ is the $\sigma(Y)$-measurable random variable $g(Y)$, where $g : \mathbb{R} \to \mathbb{R}$ is the function given by $$ g(y) = P(X < Y \mid Y = y) = P(X < y \mid Y = y) = \int_{-\infty}^y f_{X \mid Y}(x \mid y) \, dx. $$ Here $f_{X \mid Y}$ is the conditional density $$ f_{X \mid Y}(x \mid y) = \begin{cases} \frac{f_{X, Y}(x, y)}{f_Y(y)} &\text{if $f_Y(y) > 0$} \\ \text{undefined} & \text{otherwise}. 
\end{cases} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find all n for which z is a purely imaginary number Find every $n\in \Bbb{N}$ for which $z=\dfrac{(1-i)^{3n+4}}{{2^n}(1+i)}$ is a purely imaginary number. I know $1-i = \sqrt 2\cdot e^{\frac{7\pi}{4}i}$ Then the argument of the numerator is $(3n+4)$ $\cdot \frac{7\pi}{4} +2k\pi \quad k \in \Bbb{Z}$ And I know $1+i= \sqrt 2\cdot e^{\frac{\pi}{4}i}$ So the argument of the denominator is still $\frac{\pi}{4}$ (right?) because $2^n$ is just a positive real number? And I want the argument of $z$ to be either $\frac{\pi}{2}$ or $\frac{3\pi}{2}$? Please help me find my mistakes and finish this exercise! Many thanks in advance.
Since $2^n\in\mathbb R$ and since $\dfrac{1-i}{1+i}=-i$, the number $z$ is purely imaginary if and only if $(1-i)^{3n+3}$ is real. Can you take it from here?
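One can test the pattern numerically; the following sketch checks the real part of $z$ directly and is consistent with the argument analysis (the admissible $n$ come out as $n \equiv 3 \pmod 4$):

```python
# Numerically test for which n the number
# z = (1 - i)^(3n + 4) / (2^n (1 + i)) is purely imaginary.
def z(n):
    return (1 - 1j) ** (3 * n + 4) / (2 ** n * (1 + 1j))

purely_imaginary = [n for n in range(1, 30)
                    if abs(z(n).real) < 1e-9 * abs(z(n))]
print(purely_imaginary)  # should show 3, 7, 11, ... i.e. n = 3 (mod 4)
```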
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Verifying a $K$-basis for a primitive extension $K\subset K(\alpha)$ Let $K\subset L:=K(\alpha)$ be a primitive field extension of degree $n$ and we define $c_i\in L$ as \begin{align*} \sum_{i=0}^{n-1}c_i x^i=\frac{f^{\alpha}_K}{(x-\alpha)}\in L[x]\quad(1) \end{align*} where $f^{\alpha}_K$ is the minimal polynomial of $\alpha$ over $K$. The goal is to prove that $\{c_i\}_{i\in\{0,\ldots,n-1\}}$ is a $K$-basis for $L$. The book (considering introductions to Galois theory) does not define what a $K$-basis for $L$ is but I suppose it is defined as follows: for a (primitive) field extension $K\subset L$ of degree $n$, a $K$-basis for $L$ is defined as a set $\{c_i\}_{i\in\{0,\ldots,n-1\}}$ such that every element $l\in L$ can be written as a unique (because basis elements are independent) combination $k_0c_0+k_1c_1+\ldots+k_{n-1}c_{n-1}$ with $k_i\in K$. Is this a plausible definition? Now let's look at how to manipulate the equation at (1). This can be written as $$(x-\alpha)=\frac{f^{\alpha}_K}{\sum_{i=0}^{n-1}c_i x^i}.$$ Since the linear term $(x-\alpha)$ is definitely in $L[x]=K(\alpha)[x]$, the RHS must also be. We make the remark that $\deg f^{\alpha}_K=n$ because $K\subset L$ is a field extension of degree $n$ and also that the degree of the polynomial in $L[x]$ in the denominator is at most $n-1$. Hence, at most a quantity of $n-1$ basis elements $c_i$ is needed. Whether these elements are independent, I wouldn't know; maybe it is shown by contradiction, or by long division arguments. Can someone pull me into the right direction? Thanks for the time!
Yes, your definition of basis is correct. Here, "basis" is just the usual linear-algebra notion of basis, which applies because $L$ is indeed a $K$-vector space. Your long division idea is the right direction to head in. Setting $f^\alpha_K(x) = x^n + d_{n-1}x^{n-1} + \ldots + d_0$, the first few terms of $f^\alpha_K / (x - \alpha)$ are $$ \frac{f^\alpha_K}{x-\alpha} = x^{n-1} + (d_{n-1} + \alpha)x^{n-2} + (\alpha(d_{n-1} + \alpha) + d_{n-2})x^{n-3} + \ldots$$ Note that the coefficient of $x^{n-1-k}$ is a monic polynomial of degree $k$ in $\alpha$ with coefficients in $K$. By this triangularity, it is easy to construct $1, \alpha, \alpha^2, \ldots, \alpha^{n-1}$ as $K$-linear combinations of the $c_i$. Hence the $n$ coefficients $c_i$ span $L$ over $K$, and since $\dim_K L = n$, they form a $K$-basis for $L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A proof strategy for $L_XY=[X,Y]$ I'm trying to prove that $$L_XY=[X,Y]$$ I do realize that there are other proofs of this assertion on stackexchange. However, I'm looking for ways to prove it using my strategy, as given below: Let $\phi(t)$ be the flow of the vector field $X$ at the point $p$. Then $$L_XY=\lim\limits_{t\to 0}\frac{\phi^*_{-t}Y(\phi(t))-Y(p)}{t}$$ Now $\phi^*_{-t}Y(\phi(t))$ can be written as $Y(\phi(t))+\int_t^0\nabla_{-\phi'(s)}Y(\phi(s))ds$. Hence, $$\lim\limits_{t\to 0}\frac{\phi^*_{-t}Y(\phi(t))-Y(p)}{t}=\lim\limits_{t\to 0}\frac{Y(\phi(t))+\int_t^0\nabla_{-\phi'(s)}Y(\phi(s))ds-Y(p)}{t}$$ This can be simplified as $$\lim\limits_{t\to 0}\frac{\int_t^0\nabla_{-\phi'(s)}Y(\phi(s))ds}{t}+\lim\limits_{t\to 0}\frac{Y(\phi(t))-Y(p)}{t}$$ Can we write $$\lim\limits_{t\to 0}\frac{\int_t^0\nabla_{-\phi'(s)}Y(\phi(s))ds}{t}=-\nabla_YX$$ Because then we'll be done, as $\lim\limits_{t\to 0}\frac{Y(\phi(t))-Y(p)}{t}=\nabla_XY$
Let's just act on a function: $$ L_X(Y) (f) = X (Y(f)) -Y(X(f)),$$ by the Leibniz rule and the property $L_X(f)=X(f)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a multi-variable function bijective I understand the theory of how to prove a multi-variable function bijective; however, I somehow can prove this function neither injective nor surjective: $$f: \mathbb N\times \mathbb N \rightarrow \mathbb N, (a, b) \mapsto {(a+b)}^{2} + a$$ I tried to start with $f(a, b) = f(a', b')$ but I don't really know how to continue from there. To prove it surjective I started with: show that for every $m \in \mathbb N$ there is $(a, b)$ with $f(a, b) = m$, but was not successful there either. To make matters worse, I don't even know yet if this function is injective, surjective, neither or both. Any help would be appreciated!
The map is certainly not surjective, as has been pointed out in the comments. Let us try to prove injectivity. I will rewrite the function $(a+b)^2+a$ as $c^2+a$ where $c=a+b$, and in particular note $c\geq a$ because $b\in\mathbb{N}$. Arguing by contradiction, suppose there exist two distinct pairs $(a, c)$ and $(\hat{a}, \hat{c})$, with $a\leq c$ and $\hat{a}\leq\hat{c}$, such that $$ c^2+a=\hat{c}^2+\hat{a}. $$ If $c=\hat{c}$, this forces $a=\hat{a}$, so we may assume $c\neq\hat{c}$, and without loss of generality $c>\hat{c}$. Write $$ c^2=\hat{c}^2+\hat{a}-a, $$ and let us show the LHS is strictly larger than the RHS, deriving a contradiction. Since $a\geq 0$, we have $$ c^2\leq\hat{c}^2+\hat{a}, $$ and since $\hat{a}\leq \hat{c}$ the RHS is bounded above by $\hat{c}^2+\hat{c}$. On the other hand, $c>\hat{c}$ gives $c\geq\hat{c}+1$, so $c^2\geq\hat{c}^2+2\hat{c}+1$, which is strictly larger than the upper bound $\hat{c}^2+\hat{c}$ on the RHS. This yields the contradiction.
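A brute-force check of both claims on a small grid (Python; an illustration, not a proof):

```python
# f(a, b) = (a + b)^2 + a: check injectivity on a grid and list small
# values of m that are never attained (so f is not surjective).
def f(a, b):
    return (a + b) ** 2 + a

seen = {}
for a in range(50):
    for b in range(50):
        v = f(a, b)
        assert v not in seen          # no collisions: injective on the grid
        seen[v] = (a, b)

# Any m < 20 with a preimage must have a + b <= 4, so the grid is conclusive.
missing = [m for m in range(20) if m not in seen]
print(missing)  # e.g. 3 is skipped between f(1,0) = 2 and f(0,2) = 4
```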
{ "language": "en", "url": "https://math.stackexchange.com/questions/3135072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $f$ is closed and convex then $f = f^{**}$ proof question. If $f$ is closed and convex then $f = f^{**}$ Let $f$ be closed and convex. Then $f^*(y) = \sup_x(y^Tx - f(x))$. Since $$\{h(x) = ax + b | h(x) \le f(x) \text{ for all $x$ }\} = \{h(x) = y^Tx + c | y \in \text{dom}(f^*), c \le - f^*(y)\}$$ we have $$f(x) = \sup\{g(x) | g \text{ affine}, g(z) \le f(z) \text{ for all } z\}$$ And $$f= f^{**} $$ How is $f = f^{**}$ here? Why is $f^{**} = \sup\{g(x) | g \text{ affine}, g(z) \le f(z) \text{ for all } z\}$? If $f^*(y) \equiv \sup_x(y^Tx - f(x))$ then shouldn't $(f^{*})^*(u) = f^{**}(u) = \sup_y (u^Ty - f^*(y))$? What exactly am I missing here?
For the affine minorant part, notice that we may write $$ f^{**}(x) = \sup_{y} x^\top y - f^*(y) = \sup_{y, \beta} \left\{ x^\top y - \beta \ \middle|\ \beta \geq f^*(y) \right\} \\ = \sup_{y, \beta} \left\{ x^\top y - \beta \ \middle|\ \beta \geq \sup_{z} z^\top y - f(z) \right\} = \sup_{y, \beta} \left\{ x^\top y - \beta \ \middle|\ z^\top y - \beta \leq f(z), \; \forall z \right\}. $$ The first equality between suprema is easy to verify, and the supremum in the final expression is taken over all functions $g_y(z) := z^\top y - \beta$ such that $g_y(z) \leq f(z), \; \forall z$. However, any function $g_y(z)$ that satisfies this is an affine minorant of $f$, so you are effectively taking the supremum over all affine minorants.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3135378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to prove that the limit, as n goes to infinity, of the sequence $a_n = 3^n/7^n$ is $0$? Intuitively this seems to be true, but is there a way to actually prove it instead of just saying "the denominator gets bigger faster"? I'm not sure that the sequence can be modeled by a function, but even in doing that and using l'Hopital's rule I didn't get 0.
This is a pretty standard and basic result about sequences. Theorem: Let $x\in\mathbb {R}$. Then the sequence $\{a_n\}$ defined by $a_n=x^n$

* diverges to $\infty$ if $x>1$;
* converges to $1$ if $x=1$;
* converges to $0$ if $|x|<1$;
* oscillates finitely with values $1,-1$ if $x=-1$;
* oscillates infinitely if $x<-1$.

Your case is $x=3/7$, and then by the third bullet point above the desired limit is $0$. The proof of the theorem is not that difficult and is a very instructive one. I give below a simple proof for the first case, which also implies the third case (the one that applies here). If $x>1$ then we can write $$x^n=(1+(x-1))^n>1+n(x-1)$$ (for $n\geq 2$) via the binomial theorem. Now the RHS of the inequality tends to $\infty$ and therefore so does the LHS. Now we deal with the third case. If $x=0$ the result is obvious. So let $0<|x|<1$, so that $y=1/|x|>1$, and therefore by the last paragraph $y^n\to\infty$. And then $|x^n|=1/y^n\to 0$. Since $$-|x^n|\leq x^n\leq |x^n|$$ it follows by the squeeze theorem that $x^n\to 0$.
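The Bernoulli-type bound in the proof can be sanity-checked numerically; here $x = 3/7$ and $y = 1/x$ (Python sketch):

```python
# Numeric illustration of the bound |x|^n = 1 / y^n <= 1 / (1 + n(y - 1))
# for y = 1/|x| > 1, here with x = 3/7 (the squeeze argument in miniature).
x = 3 / 7
y = 1 / x
for n in [1, 10, 100, 1000]:
    assert x ** n <= 1 / (1 + n * (y - 1))  # from y^n >= 1 + n(y - 1)
assert x ** 1000 < 1e-300                   # the sequence collapses to 0
```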
{ "language": "en", "url": "https://math.stackexchange.com/questions/3135653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find when two sine graphs hit $y=0$ at the same time Let's say I have 2 sine functions. They could be any sine functions but I'm more interested in the ones that get bigger/smaller as they go. So we have: $$ y_1 = \sin{(x^2)} $$ $$ y_2=\sin{\left(\sqrt{x}\right)} $$ I'm interested in when they both hit $y=0$ at the same time. I don't know where to even start.
You have $x^2 = \pi n$ and $\sqrt{x} = \pi m$ for non-negative integers $n, m$. Eliminating $x$ gives $n = \pi^3 m^4$. As $\pi^3$ is irrational (indeed $\pi$ is transcendental), $m$ must be $0$, and hence $x = 0$ is the only solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3135783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Two players throw a die until the sequence $1,2,3$ appears, and the winner is the one who rolls the $3$. What is the probability the second player wins? Alameda and Belisario alternate turns throwing a fair die. Alameda plays first and they continue throwing, one at a time, until the sequence $1$-$2$-$3$ appears. Whoever throws the $3$ is the winner. What is the probability that Belisario wins? Hmmm - probability is not my strong domain! First let's see what the chances are to get a $1$-$2$-$3$ regardless of who gets it. Is it $1$ in $6\cdot 6\cdot 6$? Then if this probability is $p$, my understanding is that the probability for Belisario to win is smaller, but I can't compute it :(
Let's use states. We'll label a state according to how much of the $1,2,3$ chain has been completed and according to whose turn it is. Thus you start from $(A,\emptyset)$, and the other states are $(B,\emptyset),(X,1),(X,1,2)$, Win and Loss (where $X\in \{A,B\}$). In a given state $S$ we let $P_S$ denote the probability that $B$ will win. Thus the answer you want is $P_{A,\emptyset}$. In this way we have $6$ variables (the probabilities from Win and Loss are clear). Of course these variables are connected by simple linear equations. For instance $$P_{A,\emptyset}=1-P_{B,\emptyset}$$ and, more generally, $$P_{A,s}=1-P_{B,s}$$ where $s$ is any part of the sequence. Thus we are down to $3$ variables. (Why? Well, in the state $(A,\emptyset)$, $A$ is in the exact same position that $B$ is in in the state $(B,\emptyset)$. Thus $A$ has the same probability of winning from $(A,\emptyset)$ as $B$ has of winning from $(B,\emptyset)$. The same holds for any $s$.) Considering the first toss we see that $$P_{A,\emptyset}=\frac 16\times P_{B,1}+\frac 56\times P_{B,\emptyset}$$ (Why? Well, $A$ either throws a $1$ or something else. The probability of throwing a $1$ is $\frac 16$ and if that happens we move to $(B,1)$. If $A$ throws something else, probability $\frac 56$, then we move to $(B,\emptyset)$.) Similarly: $$P_{B,1}=\frac 16\times P_{A,1,2}+\frac 16\times P_{A,1}+\frac 46\times P_{A,\emptyset}$$ and $$P_{B,1,2}=\frac 16\times 1+\frac 16\times P_{A,1}+\frac 46\times P_{A,\emptyset}$$ (Why? Similar reasoning. Consider the possible throws $B$ might make and what states they each lead to.) Solving this system we get the answer $$\boxed {P_{A,\emptyset}= \frac {215}{431}\approx .49884}$$ Note: I used Wolfram Alpha to solve this system, but it's messy enough that there could certainly have been a careless error. I'd check the calculation carefully. Sanity check: or at least "intuition check". 
Given that this game is likely to go back and forth for quite a while before a winner is found, I'd have thought it was likely that the answer would be very close to $\frac 12$. Of course, $A$ has a small advantage from starting first (it's possible that the first three tosses are $1,2,3$ after all), so I'd have expected an answer slightly less than $\frac 12$. Worth remarking: sometimes intuition of that form can be a trap. After all, the temptation is to stop checking as soon as you get an answer that satisfies your intuition. In fact, the first time I ran this, I got an answer of $.51$ which seemed wrong. Worse, that solution showed that $P_{A,1,2}$ was about $.58$ which seemed absurd (how could $B$ have a strong advantage when $A$ is one toss away from winning?). So, I searched for and found the careless error. Second trial gave all plausible results so I checked casually and stopped. But you should do the computation again to be sure.
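Since the answer itself recommends double-checking, here is a seeded Monte Carlo simulation of the game (Python; the state transitions mirror the ones used in the linear system, and the observed frequency should land near $215/431 \approx 0.4988$):

```python
import random

# Simulate the game: players alternate rolls; whoever completes 1,2,3
# by rolling the 3 wins. Returns True when B (the second player) wins.
def b_wins(rng):
    state = ()          # progress toward the pattern (1, 2, 3)
    player = 0          # 0 = Alameda (first), 1 = Belisario (second)
    while True:
        roll = rng.randint(1, 6)
        if state == ():
            state = (1,) if roll == 1 else ()
        elif state == (1,):
            state = (1, 2) if roll == 2 else ((1,) if roll == 1 else ())
        else:  # state == (1, 2)
            if roll == 3:
                return player == 1
            state = (1,) if roll == 1 else ()
        player = 1 - player

rng = random.Random(0)
wins = sum(b_wins(rng) for _ in range(200_000))
print(wins / 200_000)   # should land near 215/431 = 0.49884...
```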
{ "language": "en", "url": "https://math.stackexchange.com/questions/3135987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
$d_p$ and $d_\infty$ in $\mathbb{C}^n$ are uniformly equivalent I need to prove this: Show that the metrics $d_p$ and $d_{\infty}$ in $\mathbb{C}^n$ are uniformly equivalent, with $p \in [1, \infty)$. So, I have in my book two definitions about equivalent metrics in a metric space $(X,d)$. 1) Two metrics $d_1$ and $d_2$ are uniformly equivalent if there are constants $a, b >0$ such that $ad_1(x,y) \leq d_2(x,y) \leq bd_1(x,y), \forall x, y \in \mathbb{C}^n$ or equivalently, if $a \leq \dfrac{d_2(x,y)}{d_1(x,y)}\leq b $ for all $x \neq y$. 2) The second definition is about topological equivalence. We say that two metrics $d_1$ and $d_2$ are topologically equivalent if any sequence convergent in the space $X$ with the metric $d_1$ also converges with the metric $d_2$, and to the same limit point. I have already proved two facts: a) If two metrics $d_1$ and $d_2$ are uniformly equivalent in $X$, then a subset $M$ of $X$ is bounded with respect to the metric $d_1$ if, and only if, $M$ is bounded with respect to the metric $d_2$. b) If two metrics are uniformly equivalent, then they are topologically equivalent. But my problem remains. I got this far: By definition we know that $$ d_p (x,y) = \left( \sum_{i=1}^{n} \ |x_i -y_i|^{p} \right)^{1/p} \quad\text{and}\quad d_\infty (x,y) = \sup_{i=1,..., n}{ |x_i - y_i| }. $$ So, by Minkowski's inequality, we have $$ d_p (x,y) = \left( \sum_{i=1}^{n} \ |x_i -y_i|^{p} \right)^{1/p} \leq \left( \sum_{i=1}^{n} \ |x_i|^{p} \right)^{1/p} + \left( \sum_{i=1}^{n} \ |y_i|^{p} \right)^{1/p} \leq M_1 + M_2 = M $$ and $$ d_\infty (x,y) = \sup_{i=1,..., n}{ |x_i - y_i| } \leq N. $$ Since $0 < d_p(x,y)$ and $0 < d_\infty (x,y)$ for all $x \neq y$, by the statements above we have that $$ 0 \leq \dfrac{d_p(x,y)}{d_{\infty}(x,y)} \leq \dfrac{M}{N}=b, b>0. $$ My problem here is how can I prove that there is a positive constant $a$ such that $a \leq \dfrac{d_p(x,y)}{d_{\infty}(x,y)}$. Thanks.
For any $p\ge 1$ and any $u=(u_1,u_2,\dots,u_n)\in\mathbb{C}^n$, $$\|u\|_p = \left(\sum_{k=1}^n |u_k|^p\right)^{1/p} \le \left(\sum_{k=1}^n \|u\|_{\infty}^p\right)^{1/p} = \left(n\|u\|_{\infty}^p\right)^{1/p}=n^{1/p}\|u\|_{\infty}$$ Estimate each term in the sum by the largest one. It's that simple. For the other direction, note that the largest $|u_k|$ is itself one of the terms of the sum, so $\|u\|_\infty \le \|u\|_p$. Together, $$\|u\|_\infty \le \|u\|_p \le n^{1/p}\|u\|_\infty.$$ For the distances, apply this to $u=x-y$.
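A numeric spot-check of the two-sided bound $\|u\|_\infty \le \|u\|_p \le n^{1/p}\|u\|_\infty$ on random complex vectors (Python, illustration only):

```python
import random

# d_p and d_inf on C^n, checked against the two-sided bound
# d_inf <= d_p <= n^(1/p) * d_inf for random complex vectors.
def d_p(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def d_inf(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

rng = random.Random(1)
n = 5
for _ in range(1000):
    x = [complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    y = [complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    for p in (1, 2, 3.5):
        assert d_inf(x, y) <= d_p(x, y, p) + 1e-12
        assert d_p(x, y, p) <= n ** (1 / p) * d_inf(x, y) + 1e-12
```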
{ "language": "en", "url": "https://math.stackexchange.com/questions/3136177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluate the limit: $\lim\limits_{n\to\infty} \frac{4^nn!}{(3n)^n}$ Evaluate the following limit $$ \lim_{n\to\infty} \frac{4^nn!}{(3n)^n} $$ I've shown the limit is equal to $0$. One may for example use the ratio test, by which: $$ \begin{align} \frac{x_{n+1}}{x_n} &= \left({4\over 3}\right)^{n+1}\frac{(n+1)!}{(n+1)^{n+1}} \cdot \left({3\over 4}\right)^{n}\frac{n^n}{n!}\\ &= \frac{4}{3}\frac{n^n}{(n+1)^n} \end{align} $$ Now taking the limit of the fraction: $$ {4\over 3}\lim_{n\to\infty} \left(n\over n+1\right)^n = {4\over 3e} < 1 $$ By this the sequence converges to $0$. Another way could be Stirling's approximation, by which: $$ \frac{4^nn!}{(3n)^n} \sim \frac{4^n}{(3n)^n}\cdot \sqrt{2\pi n}\cdot\left({n\over e}\right)^n = \left({4\over 3e}\right)^n\sqrt{2\pi n} $$ Applying the ratio test after Stirling's approximation yields the same result. The problem is that Stirling's approximation has not been introduced yet. Also this limit comes right after the problems on proving some specific statements, among which are: $$ \begin{align*} \lim_{n\to\infty} x_n = x &\implies \lim_{n\to\infty}\frac{x_1 + x_2 + \cdots +x_n}{n} = x \tag 1\\ \lim_{n\to\infty} x_n = x &\implies \lim_{n\to\infty} \sqrt[n]{x_1x_2\dots x_n} = x\tag 2\\ \lim_{n\to\infty}\frac{x_{n+1}}{x_n} = x &\implies \lim_{n\to\infty} \sqrt[n]{x_n} = x\tag 3 \end{align*} $$ Right before the problem from the question section, the book asks to find the limit of: $$ \lim_{n\to\infty} {1\over n}\sqrt[n]{(n+1)(n+2)\dots(2n)} $$ This may be easily handled by applying $(3)$. My assumption is that the author expects me to use one of those proofs I've done before; however, I don't see how any of them may be applied. Also please note that the proof of Cesaro-Stolz comes after this limit. I would appreciate it if someone could point me to a way to use $(1), (2)$ or $(3)$ for evaluating the limit. Or possibly suggest other approaches to find the limit from the question section. Thank you!
Here is another approach. Using Stirling's approximation $n!\sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$ you get $$\frac{4^nn!}{(3n)^n}\sim \sqrt{2\pi n}\frac{4^n(n/e)^n}{3^n n^n}=\sqrt{2\pi n}(\frac{4}{3e})^n$$ which clearly tends to zero as $n\to\infty$, because $4<3e$. (The symbol $\sim$ means that the ratio of the two sides tends to $1$ as $n\to\infty$.)
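Because $n!$ and $(3n)^n$ overflow quickly, one can watch the decay on a log scale, using `math.lgamma` for $\log n!$ (Python sketch):

```python
import math

# log a_n = n log 4 + log n! - n log(3n), computed without overflow;
# lgamma(n + 1) = log(n!). By Stirling, log a_n ~ n log(4/(3e)) + (1/2) log(2 pi n),
# which tends to -infinity since 4 < 3e.
def log_a(n):
    return n * math.log(4) + math.lgamma(n + 1) - n * math.log(3 * n)

print([round(log_a(n), 2) for n in (10, 100, 1000)])  # increasingly negative
```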
{ "language": "en", "url": "https://math.stackexchange.com/questions/3136272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Movement of $1/z$ in complex plane How does $\frac{1}{z}$ move in the complex plane if $z$ traces a circle of radius $r$ centered at $a+bi$? I've just started complex algebra and still have some trouble visualizing it. How does one solve this kind of problem? I don't want the complete solution, just some appropriate techniques for working with complex numbers on problems like this. Thank you!
Hint: Compute $\dfrac1{r\bigl(\cos(\theta)+i\sin(\theta)\bigr)}$.
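Beyond the hint, a classical fact worth knowing here: the inversion $z \mapsto 1/z$ maps a circle not passing through the origin to another circle. A numeric sketch (the circumcenter formula in complex coordinates is standard, but treat the details below as an illustration rather than a proof):

```python
import cmath, math

# Sample a circle (center 2 + i, radius 1/2, origin outside), map each
# point by z -> 1/z, and check the images are concyclic: equidistant from
# the circumcenter of three of them.
def circumcenter(w1, w2, w3):
    num = (abs(w1)**2 * (w2 - w3) + abs(w2)**2 * (w3 - w1)
           + abs(w3)**2 * (w1 - w2))
    den = (w1.conjugate() * (w2 - w3) + w2.conjugate() * (w3 - w1)
           + w3.conjugate() * (w1 - w2))
    return num / den

a, b, r = 2.0, 1.0, 0.5
zs = [complex(a, b) + r * cmath.exp(1j * 2 * math.pi * k / 12)
      for k in range(12)]
ws = [1 / z for z in zs]
c = circumcenter(ws[0], ws[4], ws[8])
radii = [abs(w - c) for w in ws]
assert max(radii) - min(radii) < 1e-12   # all image points on one circle
```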
{ "language": "en", "url": "https://math.stackexchange.com/questions/3136522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Polar decomposition of an operator and the partial isometry Let $A \in L(H)$, the space of bounded operators on the Hilbert space $H$. I need to show that there is a partial isometry $U$ (i.e. $U^{*}U$ and $UU^{*}$ are projections) such that $$A=U|A|$$ is the polar decomposition of $A$, where $|A|=(A^{*}A)^{1/2}$. Do you have any suggestion or solution proposal? Many thanks for any comment.
Let $\sqrt{A^*A}$ denote the unique positive square root of $A^*A$. Then \begin{align} \|Ax\|^2 &=\langle A^*Ax,x\rangle \\ & = \langle \sqrt{A^*A}x,\sqrt{A^*A}x\rangle \\ & =\|\sqrt{A^*A}x\|^2 \end{align} Define $U=0$ on $\mathcal{N}(\sqrt{A^*A})$, and on $\mathcal{N}(\sqrt{A^*A})^{\perp}=\overline{\mathcal{R}(\sqrt{A^*A})}$ define $U$ by $U\sqrt{A^*A}x=Ax$: by the norm identity above this is well-defined and isometric on $\mathcal{R}(\sqrt{A^*A})$, and hence extends continuously to the closure. Then $U$ is a partial isometry, and $A=U|A|$ is the desired decomposition, where $|A|=\sqrt{A^*A}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3136653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Example of two homotopically equivalent manifolds such that one admits a symplectic structure and the other does not A smooth manifold $M$ admits a symplectic structure if there is an alternating non degenerate $2$-form $\omega \in \Lambda^2(M)$ that is also closed i.e. $d\omega = 0.$ Usually we can express obstructions to the existence of certain tensors in terms of vanishing of some cohomology classes. On the other hand, due to the "integrability" condition $d\omega = 0$ one should expect that a set of necessary and sufficient condition for $M$ to admit a symplectic structure cannot be expressed just in homological terms. In order to support this guess I am thus looking for a concrete example: Find $M_1, M_2 $ (possibly) compact smooth manifolds (of the same dimension) such that * *$M_1$ is symplectic *$M_2$ does not admit a symplectic structure *$M_1$ is homotopically equivalent to $M_2$
There are many examples in four dimensions, in fact there are infinitely many examples where $M_1$ and $M_2$ are actually homeomorphic. Let $M_1$ be a simply connected Kähler surface which is not spin. Then $M_1$ is symplectic and is homeomorphic to the smooth manifold $M_2 = b^+\mathbb{CP}^2\# b^-\overline{\mathbb{CP}^2}$ by Freedman's Theorem. However, if $b^+ > 1$, it follows from Seiberg-Witten theory that $M_2$ does not admit a symplectic form; in particular, $M_1$ and $M_2$ are not diffeomorphic. Taubes proved that a symplectic manifold with $b^+ > 1$ has a non-zero Seiberg-Witten invariant. However, all the Seiberg-Witten invariants of $M_2$ vanish because $M_2$ is diffeomorphic to the connected sum of two manifolds with $b^+ \geq 1$. An explicit example of such a pair is $M_1 = \operatorname{Bl}_p(K3) = K3\#\overline{\mathbb{CP}^2}$, the blowup of $K3$ at a point, and $M_2 = 3\mathbb{CP}^2\# 20\overline{\mathbb{CP}^2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3136763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Continuous function and limit $ f(x,y)=\left\{\begin{matrix} (x-y)\sin\frac{1}{x}\sin\frac{1}{y} & , xy\neq 0 \\ 0 &, x=y=0 \end{matrix}\right. $ I have this function:$$ f(x,y)=\left\{\begin{matrix} (x-y)\sin\frac{1}{x}\sin\frac{1}{y} & , xy\neq 0 \\ 0 &, x=y=0 \end{matrix}\right. $$ a) Show that $ \lim_{x\rightarrow 0}[\lim_{y\rightarrow 0} f(x,y)] $ does not exist. b) Show that $ f(x,y) $ is continuous at $ (0,0) $ . For (a) I took $ \lim_{y\rightarrow 0} f(x,y) $ and I got that it's equal to $$ x\sin\frac{1}{x}\lim_{y\rightarrow 0}\sin\frac{1}{y} - \sin\frac{1}{x}\lim_{y\rightarrow 0}y \sin\frac{1}{y} $$, but $$ \lim_{y\rightarrow 0}\sin\frac{1}{y} $$ does not exist, so $ \lim_{y\rightarrow 0} f(x,y) $ does not exist and $ \lim_{x\rightarrow 0}[\lim_{y\rightarrow 0} f(x,y)] $ does not exist. Am I right? For (b) I guess that I have to show that $$ \lim_{(x,y)\rightarrow (0,0)} f(x,y) = 0 $$ How can I do this ?
The values of $\sin(1/x)\sin(1/y)$ form an interval, and so do the values of $x-y$ under the limit $(x,y)\to(0,0)$. If a function is continuous on a closed interval, its range is an interval, but the converse proposition may fail; your function is a counterexample. So you should separate your problem into two cases, with $f'(0,0)$ and $\lim_{(x,y)\to{(0,0)}}f'(x,y)$: the first limit exists but the second does not! For example, rewrite your function to get $$F_{x}'(x,y)=\sin(1/x)\sin(1/y)+(x-y)\sin(1/y)\cos(1/x)(-1/x^2)$$ $$F_{y}'(x,y)=-\sin(1/x)\sin(1/y)+(x-y)\sin(1/x)\cos(1/y)(-1/y^2)$$ Divide both by $\sin(1/x)\sin(1/y)$ to get $$1+(x-y)\cos(1/x)\left(-\frac{1}{x^{2}\sin(1/x)}\right)$$ $$-1+(x-y)\cos(1/y)\left(-\frac{1}{y^{2}\sin(1/y)}\right)$$ Since $f(x)=x^2\sin(1/x)$ (with $f(0)=0$) has a derivative at $0$, but $\lim_{x\to 0}f'(x)=\lim_{x\to 0}\bigl(2x\sin(1/x)-\cos(1/x)\bigr)$ does not exist, this result gives a definite conclusion for your problem. Thank you!
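Whatever route one takes for (b), the key estimate is $|f(x,y)| \le |x-y| \le |x| + |y|$, which forces $f(x,y)\to 0 = f(0,0)$. A numeric spot-check of that bound (Python sketch, guarding the axes where $f$ is not defined by the problem statement):

```python
import math, random

# f(x, y) = (x - y) sin(1/x) sin(1/y); since |sin| <= 1, we always have
# |f(x, y)| <= |x - y|, so f -> 0 as (x, y) -> (0, 0).
def f(x, y):
    if x == 0 or y == 0:
        return 0.0
    return (x - y) * math.sin(1 / x) * math.sin(1 / y)

rng = random.Random(2)
for _ in range(10_000):
    x = rng.uniform(-1e-3, 1e-3)
    y = rng.uniform(-1e-3, 1e-3)
    assert abs(f(x, y)) <= abs(x - y) + 1e-15   # the key bound
    assert abs(f(x, y)) <= 2e-3                 # hence small near (0, 0)
```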
{ "language": "en", "url": "https://math.stackexchange.com/questions/3136903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is there any neat way to calculate this contour integral? $$\mathrm {PV}\int_{\mathbb{R}^n} \frac{e^{i(b_1z_1+\cdots+b_nz_n)}}{\prod_{j<k}(z_k-z_j)}dz=?$$ Other than integrating this term by term? It is in fact the Fourier transform of the inverse Vandermonde determinant. I also found this question: Fourier transform of the inverse of Vandermonde determinant, which asks about the $n=3$ case.
So finally you meant the Fourier transform of the distribution $$F(x) = PV(\frac{1}{\prod_{j < k} (x_k-x_j)})$$ that is $$\hat{F}(\omega) = \int_{\mathbb{R}^n} F(x) e^{-i \langle \omega,x \rangle}d^n x$$ Then $$1=F(x)\prod_{j < k} (x_k-x_j)$$ implies $$(2\pi)^n i^{n(n-1)/2} \delta(\omega)=\prod_{j < k}(\partial_j-\partial_k) \widehat{F}(\omega) $$ so that, with $\ast$ the convolution and $T_{j,k}$ the Dirac delta distribution on the subspace orthogonal to $x_j-x_k$ $$\widehat{F}(\omega) = (2\pi)^n i^{n(n-1)/2} \quad T_{1,2} 1_{x_1-x_2 > 0} \ast \ldots \ast T_{j,k} 1_{x_j-x_k > 0} \ast \ldots \ast T_{n-1,n} 1_{x_{n-1}-x_n > 0} + P(\omega)$$ for some polynomial $P$ such that $(\prod_{j < k}(\partial_j-\partial_k))P = 0$, and we may deduce the coefficients of $P$ from symmetry considerations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to approach this question on subspaces? Determine which of the following are subspaces of the space $M$ of $3 \times 3$ matrices: all $3 \times 3$ matrices $A$ such that the trace of $A$ is $\mbox{tr}(A) = 0$. What does trace mean?
Let $V$ be a vector space. Then $W \subset V$ is a subspace of $V$ if a) $0 \in W$; b) if $u, v \in W$ then $u + v \in W$; c) if $u \in W$ and $\lambda \in \mathbb{R}$, then $\lambda u \in W$. The trace of a matrix $A$ of order 3 is given by $\mbox{tr} (A) = a_{11} + a_{22} + a_{33}$ (the sum of the entries on the main diagonal). Then the set $W$ formed by all matrices of order 3 x 3 whose trace is zero is a subspace of the matrices of order 3. In fact, just show the 3 items above. a) The null matrix is in $W$ (note that its trace is zero). b) Let $A, B \in W$; then $\mbox{tr} (A) = \mbox{tr} (B) = 0$. Hence $\mbox{tr} (A + B) = \mbox{tr} (A) + \mbox{tr} (B) = 0 + 0 = 0$. c) Let $A \in W$ and $\lambda \in \mathbb{R}$; then $\mbox{tr} (\lambda A) = \lambda a_{11} + \lambda a_{22} + \lambda a_{33} = \lambda (a_{11} + a_{22} + a_{33}) = \lambda \mbox{tr} (A) = \lambda \cdot 0 = 0$. And the result is proven.
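Not part of the original answer, but the three conditions are easy to spot-check numerically. Here is a small Python sketch (the names and the random construction are my own) that verifies a), b), c) on sample trace-zero matrices:

```python
import random

def trace(A):
    # tr(A) = a11 + a22 + a33: the sum of the main-diagonal entries
    return sum(A[i][i] for i in range(3))

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def scale(lam, A):
    return [[lam * A[i][j] for j in range(3)] for i in range(3)]

def random_trace_zero():
    # pick entries freely, then force the last diagonal entry so tr(A) = 0
    A = [[random.randint(-9, 9) for _ in range(3)] for _ in range(3)]
    A[2][2] = -(A[0][0] + A[1][1])
    return A

A, B = random_trace_zero(), random_trace_zero()
assert trace(A) == 0 and trace(B) == 0   # a) such matrices exist (incl. zero matrix)
assert trace(add(A, B)) == 0             # b) closed under addition
assert trace(scale(7, A)) == 0           # c) closed under scalar multiplication
```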
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove $ u \cdot (u \times (\nabla \times u)) = 0 $ Prove $$ u \cdot (u \times (\nabla \times u)) = 0 $$ Where '$u$' is a 3D-velocity vector. I came across this for a proof for converting Euler's Equation to the Bernoulli expression for a steady-state, in compressible fluid. Anyone know why this is the case?
For simplicity, call ${\bf v} = \nabla \times {\bf u}$. The idea is that ${\bf u} \times {\bf v}$ is perpendicular to both ${\bf u}$ and ${\bf v}$, so that $$ {\bf u} \cdot({\bf u} \times {\bf v}) = 0 $$
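As a quick sanity check (my own addition, not part of the answer): the scalar triple product ${\bf u} \cdot({\bf u} \times {\bf v})$ vanishes for any pair of vectors, which a few lines of Python confirm exactly on random integer vectors:

```python
import random

def cross(u, v):
    # standard 3D cross product
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# u . (u x v) = 0 for every u, v, because u x v is perpendicular to u
for _ in range(100):
    u = tuple(random.randint(-10, 10) for _ in range(3))
    v = tuple(random.randint(-10, 10) for _ in range(3))
    assert dot(u, cross(u, v)) == 0
```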
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all positive integers $a, b, c$ such that $21^a+ 28^b= 35^c$. It is clear that the equation can be rewritten as follows: $$ (3 \times 7)^a+(4 \times 7)^b=(5 \times 7)^c $$ If $a=b=c=2$ then this is the first possible answer to this problem. It is also obvious that the sum $(3\times 7)^a+(4\times 7)^b$ must end in $5$ and be divisible by $5$. Since $21^a$ always ends in $1$, $28^b$ should end in $4$. This determines $b$ as $b=2+4k$, an even positive integer.
Note that $21=3\times7$, $28=4\times7$ and $35=5\times7$, and so by unique factorization the numbers $21^a$, $28^b$ and $35^c$ are all distinct for all positive integers $a$, $b$ and $c$. By unique factorization we see that the left hand side of $$21^a+28^b=35^c,$$ is divisible by $7^{\min\{a,b\}}$ and hence $c\geq\min\{a,b\}$. Moreover the right hand sides of $$21^a=35^c-28^b \qquad\text{ and }\qquad 28^b=35^c-21^a,$$ are divisible by $7^{\min\{b,c\}}$ and $7^{\min\{a,c\}}$, respectively, because $28^b\neq35^c\neq21^a$. This implies $$a\geq\min\{b,c\} \qquad\text{ and }\qquad b\geq\min\{a,c\},$$ from which it follows that $a=b=c$. Dividing out the factor $7^a$ leaves us with $$3^a+4^a=5^a,$$ which clearly has the unique solution $a=2$.
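A brute-force search over a small range of exponents (my own sketch; the bound $12$ is arbitrary) corroborates that $(a,b,c)=(2,2,2)$ is the only solution there:

```python
# search all small exponent triples for 21^a + 28^b = 35^c
solutions = [(a, b, c)
             for a in range(1, 12)
             for b in range(1, 12)
             for c in range(1, 12)
             if 21**a + 28**b == 35**c]

# the proof shows this is the unique solution; the search agrees on its range
assert solutions == [(2, 2, 2)]
```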
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Find the number of solutions of $a^{x}=x$ depending on $a>0$. My try: Let $$f(x)=a^{x}-x=e^{x\cdot \ln a}-x$$ So $$f'(x)=a^{x-1}\cdot x-1$$ Then I should examine the monotonicity of $f$. But when I try to do it I have: $$f'(x)>0$$ $$a^{x-1}\cdot x>1$$ However, I don't know what I can do at this moment. I tried, for $x>0$: $(x-1)\ln a>-\ln x$, but I still do not know how to find the zeros of $f'(x)$. Can you give some tips on how to do it?
You have differentiated $a^{x}$ wrongly. It should be $a^{x}\ln(a)$. But I suggest you sketch the graphs of $y=x$ and $y=a^{x}$ on the same axes and consider the number of intersections, and the conditions on $a$ for there to be any intersections at all.
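To complement the graphing suggestion, here is a rough numerical count of intersections (my own sketch; the scan range and step size are ad hoc and would miss a tangency). The critical base for $a>1$ is $e^{1/e}\approx 1.4447$:

```python
def count_crossings(a, lo=1e-9, hi=20.0, steps=20000):
    # count sign changes of f(x) = a**x - x on a grid over (lo, hi]
    f = lambda x: a**x - x
    crossings = 0
    prev = f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        cur = f(x)
        if prev * cur < 0:
            crossings += 1
        prev = cur
    return crossings

assert count_crossings(1.2) == 2   # 1 < a < e**(1/e): two intersections
assert count_crossings(1.5) == 0   # a > e**(1/e): the graphs never meet
assert count_crossings(0.5) == 1   # 0 < a < 1: exactly one intersection
```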
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrate $\int \frac{\sqrt{x^2-1}}{x^4}dx$ I am trying to integrate $\int \frac{\sqrt{x^2-1}}{x^4}dx$ via trig substitution. I decided to substitute $x = \sec\theta$ into the square root, with $dx = \sec\theta \tan\theta\,d\theta$. $$\int \frac{\sqrt{\sec^2 \theta-1}}{\sec^4\theta}\sec\theta\tan\theta \,d\theta = \int \frac{\sqrt{\tan^2\theta + 1 - 1}}{\sec^4\theta}\sec\theta\tan\theta\,d\theta = \int \dfrac{\tan^2\theta}{\sec^3\theta}\,d\theta$$ Here is where I am currently stuck. I attempted substitution with $u = \sec\theta$, $du = \sec\theta \tan\theta\, d\theta$, but that didn't seem to work out. I wasn't able to get an integration by parts strategy working either. I think the answer lies in some sort of trigonometric identity for $\int \frac{\tan^2\theta}{\sec^3\theta}\,d\theta$ that I am overlooking to further simplify the problem, but I have no idea what it is.
$$ \begin{aligned} & \int \frac{\sqrt{x^{2}-1}}{x^{4}} d x \\\stackrel{y=\frac{1}{x}}{=} &\int\frac{\sqrt{\frac{1}{y^{2}}-1}}{\frac{1}{y^{4}}}\left(-\frac{1}{y^{2}} d y\right)\\ =& -\int y \sqrt{1-y^{2}}d y \\ =&\frac{\left(1-y^{2}\right)^{\frac{3}{2}}}{3}+C \\=&\frac{\left(x^{2}-1\right)^{\frac{3}{2}}}{3 x^{3}}+C \end{aligned} $$
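One can sanity-check the antiderivative numerically (my own addition): a central difference of $F(x)=\frac{(x^2-1)^{3/2}}{3x^3}$ should reproduce the integrand $\frac{\sqrt{x^2-1}}{x^4}$ for $x>1$:

```python
import math

def F(x):
    # candidate antiderivative from the answer
    return (x * x - 1) ** 1.5 / (3 * x ** 3)

def integrand(x):
    return math.sqrt(x * x - 1) / x ** 4

h = 1e-5
for x in [1.5, 2.0, 3.0, 10.0]:
    num_deriv = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(num_deriv - integrand(x)) < 1e-8
```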
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
why is the set $S$ not closed under addition? Some doubts: Consider $\mathbb{R}^2$ over $ \mathbb{R}. $ Given the set $ S = \{(x,0) : x \in \mathbb{R}\} \cup \{ (0,y) : y \in \mathbb{R}\}$ My question is: why is the set $S$ not closed under addition? My attempt: I think the set $S$ is closed under addition because $$ S = \{(x,0) : x \in \mathbb{R}\} \cup \{ (0,y) : y \in \mathbb{R}\}= \{(x,y) : x,y \in \mathbb{R}\}$$ Any hints/solutions? Thank you.
First of all, do you agree that $(1,1) \not\in \{(x,0) : x \in \mathbb{R}\}$ and $(1,1) \not\in \{(0,y) : y \in \mathbb{R}\}$? If yes, then it is obviously not in their union. You are confusing two different concepts, I think. Consider the plane ($\mathbb{R}^2$) and then let $l_1$ be the line $\{(x,y)\in\mathbb{R}^2: y=x\}$ and let $l_2$ be the line $\{(x,y)\in\mathbb{R}^2: y=-x\}$. Obviously, the union of these two lines is not a vector space because $(0,1)$ is not on either of these lines. But the space generated by these two lines is all of the plane. Can you see why? This is similar to your case: $$S = \underbrace{\{(x,0) : x \in \mathbb{R}\}}_{S_1} \cup \underbrace{\{ (0,y) : y \in \mathbb{R}\}}_{S_2} \subsetneq \{(x,y) : x,y \in \mathbb{R}\}$$ because $(1,0) \in S_1$ and $(0,1) \in S_2$ but $(1,1)$ is neither in $S_1$ nor in $S_2$. So, it can't be in $S=S_1 \cup S_2$ because by definition, if something is in $S_1 \cup S_2$, then it belongs to $S_1$ or $S_2$. However, it is true that $\langle S_1 \cup S_2\rangle=\mathbb{R}^2$. Can you see why?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3137844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $X$, $Y$ are independent, are $X/Y$ and $Y$ independent? If two random variables $X, Y$ are independent, does that mean that $\frac{X}{Y}$ and $Y$ are independent? For example, is it true that $E\left[\frac{X}{Y}\,\middle|\,Y\right]= E\left[\frac{X}{Y}\right]$ because of the independence of the random variables $X$ and $Y$?
Suppose that $X$ and $Y$ are independent and satisfy $\mathbb{P}( X=-1 ) =\mathbb{P}( X=1 )=1/2 $ and $\mathbb{P}( Y=-1 ) =\mathbb{P}( Y=2 )=1/2 $. Set $Z=X/Y$; then $\mathbb{P}( Z=1/2, Y=-1)=0$, however $\mathbb{P}(Z=1/2)\neq 0$ and $\mathbb{P}( Y=-1)\neq 0$. So, they are not independent. For the conditional expectation, what we do have is $E\left ( X/Y \mid Y=y \right ) = E(X/y)$.
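Since $(X,Y)$ takes only four equally likely values, the counterexample can be checked exhaustively; here is a small sketch of that check (my own addition), using exact rational arithmetic:

```python
from fractions import Fraction

quarter = Fraction(1, 4)
# the four equally likely outcomes of the independent pair (X, Y)
outcomes = [(x, y) for x in (-1, 1) for y in (-1, 2)]

def prob(event):
    return sum(quarter for (x, y) in outcomes if event(x, y))

p_joint = prob(lambda x, y: Fraction(x, y) == Fraction(1, 2) and y == -1)
p_z = prob(lambda x, y: Fraction(x, y) == Fraction(1, 2))
p_y = prob(lambda x, y: y == -1)

assert p_joint == 0                  # P(Z = 1/2, Y = -1)
assert p_z * p_y == Fraction(1, 8)   # P(Z = 1/2) * P(Y = -1)
assert p_joint != p_z * p_y          # hence Z and Y are dependent
```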
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to calculate $\int_{0}^{4} \sqrt{\frac{t+4}{t}}dt$? I need help with this problem. I'm calculating the length of the smooth simple arc of the portion of the parabola $y^2=16x$ which lies between the lines $x=0$ and $x=4$. So I first parametrized the function like this: $t=x\Rightarrow y=\pm\sqrt{16t}$, thus $f(t)=(t, \pm\sqrt{16t})$. I first started with the positive square root. I calculated the derivative and the norm of the derivative: $f'(t)=(1,\frac{2}{\sqrt{t}})$ and $\Vert f'(t)\Vert=\sqrt{1^2+(\frac{2}{\sqrt{t}})^2}=\sqrt{\frac{t+4}{t}}$. After that, I used the formula of the length:$$l(t)=\int_{0}^{4} \sqrt{\frac{t+4}{t}}dt$$ the problem is that I don't know how to integrate that. Please help me.
You can also set $u = 1+\frac4t$ to obtain $$\int_0^4 \sqrt{1+\frac4t}\,dt = \begin{bmatrix} u = 1+\frac4t \\ dt = -\frac{4}{(u-1)^2}\,du \end{bmatrix} = \int_{2}^\infty \frac{4\sqrt{u}}{(u-1)^2}\,du$$ You can decompose the latter function as $$\frac{4\sqrt{u}}{(u-1)^2} = \frac1{(\sqrt{u}-1)^2}-\frac1{(\sqrt{u}+1)^2}$$ Then notice that $$\int \frac{du}{(\sqrt{u}\pm1)^2} = \begin{bmatrix} z = \sqrt{u}\pm 1 \\ du = 2(z\mp 1)\,dz \end{bmatrix} = \int \frac{2(z\mp 1)}{z^2}\,dz$$ which should be solvable.
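Carrying the computation through gives $\int_0^4\sqrt{1+\frac4t}\,dt = 4\sqrt2+4\ln(1+\sqrt2)\approx 9.1823$; this closed form is my own addition, so here is a quick numerical cross-check, substituting $t=s^2$ first to remove the singularity at $0$:

```python
import math

# t = s^2 turns the integral into 2 * integral of sqrt(s^2 + 4) over [0, 2],
# whose integrand is smooth, so simple quadrature converges fast
def g(s):
    return 2 * math.sqrt(s * s + 4)

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

numeric = simpson(g, 0.0, 2.0)
closed_form = 4 * math.sqrt(2) + 4 * math.log(1 + math.sqrt(2))
assert abs(numeric - closed_form) < 1e-9
```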
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Question about Fourier's Inversion Theorem. This is my professor's definition of the FIT. Fourier Inversion Theorem: Assume that $f\in L^2(\mathbb{R}).$ Define the Fourier transform to be $$\widehat{f}(\xi)=\int\limits_{\mathbb{R}}f(y)e^{-iy\xi} \ dy.\tag 1$$ Then, as an equality in $L^2(\mathbb{R})$ we have the inverse $$f(x)=\frac{1}{2\pi}\int\limits_{\mathbb{R}}\widehat{f}(y)e^{ixy} \ dy. \tag2$$ These notations confuse me a lot. In $(2)$, don't we actually want to get back $f(y)$ from $\widehat{f}(\xi)$, just like it says here? But then Wikipedia kind of contradicts(?) itself here. Can anyone explain what is going on here with the symbols? I understand it this way: if we start with $f(y)$ and apply the transform, we get $\widehat{f}(\xi).$ So far so good. But when we apply the inverse transform, it should be applied to $\widehat{f}(\xi)$ and give us back our original $f(y)$?
The choice of variables that were used is confusing. Note that the $x$ and $y$ in your second equation $$f(x)=\frac{1}{2\pi}\int\limits_{\mathbb{R}}\widehat{f}(y)e^{ixy} \ dy \tag2$$ are basically "dummy" variables, with $x$ being a placeholder for the variable of the function $f$ and $y$ specifying the variable being integrated. As such, it's just as accurate to use $$f(y)=\frac{1}{2\pi}\int\limits_{\mathbb{R}}\widehat{f}(\xi)e^{iy\xi} \ d\xi \tag3$$ instead, where I've replaced the $y$ with $\xi$ and the $x$ with $y$. This shows the inverse transform does use $\widehat{f}(\xi)$ to go back to your original $f(y)$. I trust this helps to explain the issue to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate: $\int_{-1}^{1} 2\sqrt{1-x^2} dx $ Evaluate: $$\int_{-1}^{1} 2\sqrt{1-x^2} dx $$ The answer is $\pi$. My attempt: $x = \sin(u)$, $dx = \cos(u)du$ $$\int_{-1}^{1} 2 \sqrt{1-\sin^2(u)}\cos(u)du = \int_{-1}^{1} 2 \cos^2(u)du =\int_{-1}^{1} \frac{1}{2}(1+\cos(2u))du = \bigg(\frac{u}{2} + \frac{1}{2}\sin(2u) \bigg)\Bigg|_{-1}^{1}$$ I'm confused about how to proceed.
Geometrically, the unit circle can be represented as $$x^2+y^2=1$$ so $$y=\pm \sqrt{1-x^2}$$ and in your case $y=+ \sqrt{1-x^2}$. So $\int_{-1}^1\sqrt{1-x^2}\, dx $ is the area of an upper semicircle, which is $\frac{\pi}{2}$. So $$2 \int_{-1}^1 \sqrt{1-x^2}\,dx =2\cdot \frac{\pi}{2}=\pi$$
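Alternatively, the substitution route from the question can be finished numerically (my own addition): with $x=\sin u$ the limits become $u=\pm\pi/2$, and the smooth integral $\int_{-\pi/2}^{\pi/2}2\cos^2 u\,du$ evaluates to $\pi$:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# after x = sin(u), the limits x = -1..1 become u = -pi/2..pi/2
val = simpson(lambda u: 2 * math.cos(u) ** 2, -math.pi / 2, math.pi / 2)
assert abs(val - math.pi) < 1e-10
```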
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How do I find/estimate the unknown given that the equation has exactly $2$ solutions? My problem: The equation $x(x^2-7) = 2x+c$ has exactly $2$ solutions. Estimate the value(s) of $c$, writing you answer(s) in the form $c_1< c <c_2$, where $c_1$ and $c_2$ are integers. My question: How do "$2$ solutions" contribute to finding $c$? Like I couldn't find any relation between them. Thank you.
The equation $x(x^2-7)=2x+c$ is equivalent to $x^3-9x-c=0$, so consider $f(x)=x^3-9x$. Now, $$f'(x)=3x^2-9=3(x-\sqrt3)(x+\sqrt3).$$ For $f'(x)=0$ we obtain $x=\sqrt3$ or $x=-\sqrt3$. For $x=\sqrt3$ we obtain $c=-6\sqrt3$ and $$x(x^2-7)-2x-c=0$$ becomes $$(x-\sqrt3)^2(x+2\sqrt3)=0,$$ which has two different roots. For $x=-\sqrt3$ we obtain $c=6\sqrt3$ and the following equation: $$(x+\sqrt3)^2(x-2\sqrt3)=0.$$
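A numerical spot check (my own addition): with $c=6\sqrt3$ the cubic $x^3-9x-c$ has a double root at $-\sqrt3$ and a simple root at $2\sqrt3$, i.e. exactly two distinct solutions:

```python
import math

c = 6 * math.sqrt(3)
p = lambda x: x**3 - 9 * x - c    # x(x^2-7) = 2x + c  <=>  x^3 - 9x - c = 0
dp = lambda x: 3 * x**2 - 9       # its derivative

r_double = -math.sqrt(3)
r_simple = 2 * math.sqrt(3)

assert abs(p(r_double)) < 1e-9 and abs(dp(r_double)) < 1e-9  # double root
assert abs(p(r_simple)) < 1e-9 and abs(dp(r_simple)) > 1     # simple root
```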
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How did Gauss conjecture there were nine Heegner numbers? Coming from someone not very knowledgeable in algebraic number theory it seems odd. At the time they didn't have the computing power to determine whether very high values ($\gg 163$) were Heegner numbers; so why even assume there were finitely many rather than infinitely many (let alone exactly the nine there are)?
Gauss had plenty of computing power. He calculated class numbers up to 2000, and found they got scarcer as he climbed higher, with none at all after 163. That seemed enough to conjecture there weren't any more. Gauss was working with quadratic forms, rather than quadratic fields, and the bigger the discriminant, the easier it was to find inequivalent forms, so it stood to reason that eventually there would be no discriminants with just one class of forms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
How many monoids, groups, etc are there defined on a finite set of size $n$? Suppose we have a set $X$. We can define a binary operation $\cdot :X\times X\to X$, and get an algebraic structure $(X,\cdot)$. There are $|X|^{|X|^2}$ such binary operations that can be defined; i.e., let $N=n^{n^2}$ where $n=|X|$. Now, if we introduce axioms on $\cdot$, the number of binary operations we can define will of course shrink. Denote by $N_A$ the number of binary operations that satisfy axioms $A$. Denote by $N_A^*$ the number of them that are unique up to isomorphism. $N_A$ and $N_A^*$ are obviously functions of $n$. I am wondering how much the different algebraic axioms, such as associativity, commutativity, existence of inverses, etc., constrain the sizes $N_A,N_A^*$. How do $N_A,N_A^*$ depend on the different axioms? * *Are there precise formulas for $N_A$ and $N_A^*$ as functions of $n$, depending on the common axioms? (Associativity, inverses, identity, commutativity,...) *Are there interesting interaction effects between the axioms? *Is there a name for the topic I'm pointing to in this question?
For groups this is a well-studied problem with no easy answer; see the classification of finite simple groups for reference. If you take enough axioms, though, this computation can become quite simple: for instance, the number of finite abelian groups of order $n$ can be easily described; look at this question for example. In summary, it can go both ways.
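For very small $n$ such counts can simply be brute-forced. As a sketch (my own, not from the answer), enumerating all $n^{n^2}=16$ binary operations on a two-element set and testing associativity:

```python
from itertools import product

X = (0, 1)
# a binary operation on {0,1} is a table of 4 values: op[2*a + b] = a . b
ops = list(product(X, repeat=4))

def is_associative(op):
    f = lambda a, b: op[2 * a + b]
    return all(f(f(a, b), c) == f(a, f(b, c))
               for a in X for b in X for c in X)

total = len(ops)
assoc = sum(1 for op in ops if is_associative(op))
assert total == 16   # n**(n**2) with n = 2
assert assoc == 8    # labeled semigroups on a 2-element set
```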
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A decreasing transfinite sequence of subsets of a countable set. Let $X$ be a countable set and $(S_\alpha)_{\alpha< \rho}$ is a decreasing transfinite sequence of subsets of $X$ in the sense that $$ S_\alpha \supset S_\beta $$ whenever $\alpha<\beta$. Here $\rho$ is some fixed ordinal. Suppose that $(S_\alpha)_{\alpha< \rho}$ is strictly decreasing, i.e. $S_\alpha \ne S_\beta$ whenever $\alpha<\beta$, how do we show that $\rho$ must be a countable ordinal? I am sorry if this question is elementary, I have very little training in axiomatic set theory. I think I could prove by contradiction but some crucial steps are missing and I don't know how to make it rigorous.
If it is strictly decreasing, then $A_\alpha=S_\alpha\setminus S_{\alpha+1}$ is non-empty, and $A_\alpha\cap A_\beta=\varnothing$ whenever $\alpha\neq\beta$. How many pairwise disjoint non-empty subsets can a countable set have?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3138946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Conserving algebraic quantities I have the relations $$r_0^{\pm} = \frac{1}{2}\left (\frac{\hat \rho_0\zeta_1^{1/2}}{\hat \rho_1\zeta_0^{1/2}} \pm \frac{\zeta_0^{1/2}}{\zeta_1^{1/2}} \right )$$ $$r_1^{\pm} = \frac{1}{2}\left (\frac{\hat \rho_1\zeta_2^{1/2}}{\hat \rho_2\zeta_1^{1/2}} \pm \frac{\zeta_1^{1/2}}{\zeta_2^{1/2}}\right )$$ $$T_{0,1} = \frac{1}{r_{0,1}^{+}}$$ $$R_{0,1}= -\frac{r_{0,1}^{-}}{r_{0,1}^{+}}$$ and I'm trying to show that $$R_{0,1}^2+T_{0,1}^2 = 1.$$ if $\hat \rho_0 = \hat \rho_2$ and $\zeta_0=\zeta_2.$ So far I've failed to show this. When I compute the above, I get $$R_0^2+T_0^2 = \frac{\hat \rho_1^2 \zeta_0^2 + 4\hat \rho_1^2 \zeta_0 \zeta_1 - 2 \hat \rho_1 \hat \rho_0 \zeta_0 \zeta_1 + \hat \rho_0^2 \zeta_1^2}{(\hat \rho_1 \zeta_0 + \hat \rho_0 \zeta_1)^2}$$ If I could somehow find a way to factor this to achieve $1$, it would be great. Could anyone cast an eye over this to see if I'm not spotting the obvious? Thanks in advance.
For $i\in\{0,1\}$ you want to show that $R_i^2+T_i^2=1$, or equivalently $$(r_i^+)^2-(r_i^-)^2=1.$$ For the sake of legibility let $a_k:=\hat{\rho}_k$ and $b_k:=\zeta_k^{1/2}$. Then for $i=0$ we have $$r_0^{\pm}=\frac12\left(\frac{a_0b_1}{a_1b_0}\pm \frac{b_0}{b_1}\right) =\frac{a_0b_1^2\pm a_1b_0^2}{2a_1b_0b_1},$$ and so the standard identity $(x+y)^2-(x-y)^2=4xy$ shows that \begin{eqnarray*} (r_0^+)^2-(r_0^-)^2 &=&\frac{(a_0b_1^2+a_1b_0^2)^2}{4a_1^2b_0^2b_1^2} -\frac{(a_0b_1^2-a_1b_0^2)^2}{4a_1^2b_0^2b_1^2}\\ &=&4\frac{a_0a_1b_0^2b_1^2}{4a_1^2b_0^2b_1^2}=\frac{a_0}{a_1}, \end{eqnarray*} and so the identity holds if and only if $a_0=a_1$. Because $a_2=a_0$ and $b_2=b_0$, the exact same proof works for $i=1$ by changing the indices; change all the $0$'s to $1$'s and all the $1$'s to $0$'s.
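A numerical spot check of the identity $(r_0^+)^2-(r_0^-)^2=a_0/a_1$ (my own sketch, with random positive parameters):

```python
import math
import random

random.seed(1)
for _ in range(200):
    a0 = random.uniform(0.1, 5.0)             # rho_0 hat
    a1 = random.uniform(0.1, 5.0)             # rho_1 hat
    b0 = math.sqrt(random.uniform(0.1, 5.0))  # zeta_0 ** (1/2)
    b1 = math.sqrt(random.uniform(0.1, 5.0))  # zeta_1 ** (1/2)
    r_plus = 0.5 * (a0 * b1 / (a1 * b0) + b0 / b1)
    r_minus = 0.5 * (a0 * b1 / (a1 * b0) - b0 / b1)
    # (r+)^2 - (r-)^2 = a0/a1, so R^2 + T^2 = 1 exactly when a0 = a1
    assert abs((r_plus ** 2 - r_minus ** 2) - a0 / a1) < 1e-6
```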
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In a normal linear model (with intercept), show that if the residuals satisfy $e_i = a + \beta x_i$, then each residual is equal to zero. In a normal linear model (with intercept), show that if the residuals satisfy $e_i = a + \beta x_i$, for $i = 1\dots n$, where $x$ is a predictor in the model, then each residual is equal to zero. I'm not really sure how to do this, I tried writing out $e_i = y_i - \hat{y}_i$; but I wasn't able to get anywhere.
Since your regression model has an intercept, we can assume the $X$ matrix for the regression has the form $$ X = \begin{pmatrix} 1 & x_1^T \\ 1 & x_2^T \\ \vdots & \vdots \\ 1 & x_n^T \\ \end{pmatrix} $$ Note that the residual must be orthogonal to every vector in the column space of $X$. This is because the predicted value is $\hat{Y} = P_X Y$ (where $P_X$ is the orthogonal projection matrix onto the column space of $X$), and hence the residual vector is $e = Y - \hat{Y} = (I - P_X)Y$. So for any vector $c$ of appropriate dimension $$c^T e = (c^T - c^TP_X)Y.$$ Now if $c$ lies in the column space of $X$ then $$P_X c =c$$ or $$c^T = c^T P_X^T = c^T P_X \text{ since $P_X$ is symmetric}$$ and it follows for any $c$ in the column space of $X$ $$c^T e = 0.$$ Now, your condition implies $e$ itself lies in the column space of $X$ and hence must be orthogonal to itself, i.e., $e^Te = \|e\|^2 = 0$, i.e., $e = 0.$
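A tiny one-predictor example (my own sketch, using the textbook least-squares formulas) shows the residuals coming out orthogonal to both columns of $X$, the all-ones column and the predictor, hence to anything of the form $a+\beta x$:

```python
# ordinary least squares with an intercept, via the closed-form slope/intercept
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 4.2, 4.8, 6.1]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n
beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
        / sum((x - xbar) ** 2 for x in xs))
alpha = ybar - beta * xbar

resid = [y - (alpha + beta * x) for x, y in zip(xs, ys)]

# e is orthogonal to the column space of X = [1 | x]
assert abs(sum(resid)) < 1e-12                              # e . 1 = 0
assert abs(sum(e * x for e, x in zip(resid, xs))) < 1e-12   # e . x = 0
```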
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does it mean for a number to be independent of ZFC? Since the definition of the Busy Beaver function by Radó in 1962, an interesting open question has been what [is] the smallest value of $n$ for which $BB(n)$ is independent of ZFC set theory. Source: the first sentence of the abstract of the paper A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory, with the part in bold added by me (I think it was a typo, they likely forgot the "is"). They proceed to prove that such $n$ is at most 7918. But $BB(7918)$ is a number, right? So what does it mean for a number to be independent of ZFC? Bonus question: how much is $BB(7918)$?
Sure, $BB(7918)$ is some number. But it is provably beyond the capabilities of ZFC to figure out which number that is, or even an upper bound for that number. Specifically, given any (very large, but constructively described) integer $D$, the statement $BB(7918)<D$ can never be proven with ZFC. As to the bonus question, I've been fascinated with large number notation for a long time. Up-arrow notation, side-arrow notation, Graham's number and the Ackermann function are all cool, and combining them gives some truly mind-bogglingly large numbers. And, of course, one could always invent new, more powerful notation. Even using recursive functions like the arrow notations mentioned above and the Ackermann function, and constructing new notation like those, I personally believe that there isn't enough room in the observable universe to describe a number anywhere close to $BB(7918)$. And even if it were possible, it's not like we could prove it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving a function $p:V→[0,\infty)$ forms a norm on V if and only if the unit ball $\{{x∈V|p(x)<1}\}$ is convex I have a function $p:V→[0,\infty)$ where $V$ is a vector space, and we know $p$ to be positive, absolutely homogeneous and non-degenerate (i.e.: we know $p$ satisfies all norm conditions other than the triangle inequality). I need to prove that $p$ forms a norm on $V$ (i.e.: satisfies the triangle inequality) if and only if the unit ball $\{{x∈V|p(x)<1}\}$ is convex. I've shown that if the unit ball is convex, we have $p(tx+(1−t)y)<1$ for $x,y∈V$ with $p(x),p(y)<1$ and $t∈[0,1]$, but am unsure as to where to go from there, or if that's even useful. I've seen a similar question like this on here, but it is for $V=\mathbb{R}^2$, with the proof taking $x∈V$ to be a scalar, and I don't know if I can do that for a general vector space. Any help is appreciated.
Some hints: If you knew that the closed unit ball $\{ x \in V \mid \rho(x) \le 1 \}$ was convex, then for nonzero vectors $x$ and $y$ you could use the convexity to conclude: $$ \rho \left( \frac{\rho(x)}{\rho(x) + \rho(y)} \cdot \frac{x}{\rho(x)} + \frac{\rho(y)}{\rho(x) + \rho(y)} \cdot \frac{y}{\rho(y)} \right) \le 1. $$ Now, as for how you show the closed unit ball is convex if the open unit ball is: suppose $\rho(x), \rho(y) \le 1$. Then for any $\epsilon \in (0, 1)$, we have $\rho(\epsilon x) < 1$ and $\rho(\epsilon y) < 1$, so $\epsilon \rho(tx+(1-t)y) = \rho(t \cdot \epsilon x + (1-t) \cdot \epsilon y) < 1$, and so $\rho(tx + (1-t)y) < \frac{1}{\epsilon}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $n(n + 2)(n + 4)$ is either divisible by 16, or is an odd number. I understand more or less how to do this problem; however, I am having trouble actually showing that the product can be divisible by 16. Here's what I have done so far. If $n$ is an odd integer, then $n = 2k + 1$, where $k$ is any integer. $(2k + 1)(2k + 3)(2k + 5)$ must be an odd number (a product of odd numbers is odd). If $n$ is an even integer, then $n = 2k$, where $k$ is any integer: $2k(2k + 2)(2k + 4)$. How do I show that $2k(2k + 2)(2k + 4)$ is a multiple of 16?
Your approach is fine and, as noted by Randall, will lead to a solution. The other way to handle the problem is to note that if $n$ is even, either $n \equiv 2 \pmod{4}$ or $n \equiv 0 \pmod {4}$, and in either case the result will follow.
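Both cases are easy to confirm by brute force (my own addition):

```python
def check(n):
    # odd n must give an odd product; even n must give a multiple of 16
    prod = n * (n + 2) * (n + 4)
    return prod % 2 == 1 if n % 2 == 1 else prod % 16 == 0

assert all(check(n) for n in range(1, 5000))
```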
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Sum of cross terms vs sum of squares? What do we know about the sum of squares vs the sum of cross terms? Does one always dominate the other? Any theorems on that? E.g., for $a^2 + b^2 + c^2 \ < ? > \ ab + ac + bc $ for any number of terms. Thank you
For $a,b,c\in \mathbb{R}$ not all equal, $$(a-b)^2+(b-c)^2+(c-a)^2>0.$$ Expanding and dividing by $2$ gives $$a^2+b^2+c^2>ab+bc+ca.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Cartesian Product over a list of objects in MAGMA I'm currently trying to create the Cartesian product of certain objects (namely: character tables of different finite groups). However, it seems like I don't really understand the car<$...$> constructor. In the documentation it says that car<$...$> expects a list of sets or algebraic structures. If the list of character tables is called $xs$, then car<$xs$> won't work. Is it really necessary to "spell out" all the involved structures, like car<$A,B,C$>, or am I missing something else? Alternatively: shouldn't it be possible to store character tables of different finite groups in a sequence? If so, how? As for my background: I have worked quite a lot with Haskell, but it is my first time working with MAGMA. Does MAGMA implement basic higher-order functions like $fmap$ from Haskell? (So far, I implemented one myself for different types of structures (sequences, lists, tuples), but this seems quite... vulgar.) More concretely: I've got a list of polynomials $fs$ (given as a parameter of a function) over a finite field $F$. I basically want to store $CharacterTable(UnitGroup(ext<F|f>))$ where $f$ ranges through the polynomials in $fs$. I later want to define a group action on the Cartesian product of those character tables (after applying SequenceToSet on the character tables). Ideally, I'd like to have separate character tables instead of looking at the character table of the product of the groups, as I only need certain characters (those that are primitive in some sense) and it seemed more straightforward to sort them out on each group separately.
You can store the character tables in a list using [* ... *], for example F := FiniteField(3); P<x> := PolynomialRing(F); S := [ x^2 + 1, x^3 +x^2 +x +2]; CT := [* CharacterTable(UnitGroup(ext<F|f>)) : f in S *]; Then if you want to create a cartesian product space, you can use the CartesianProduct command as so: CTP := CartesianProduct([Universe(c) : c in CT]); I'm not sure how easy it is to define a group action on this, but I hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3139943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of matching 5 cards from a deck of 40 Suppose we have a deck of 40 cards which has, 5 Aces, 6 Kings, 9 Queens, 20 Jacks. A game is played where a contestant will continuously draw cards until they have 5 matching cards (not necessarily in order). The cards are drawn without replacement. I'm trying to find the probability of the contestant getting the five matching cards they get are all Aces (i.e. they have to get this before they get any other five matching cards). Does anyone have any idea on how to approach this?
Suppose a win occurs when the fifth ace is drawn on draw $r+1$, for $0 \le r \le 16$. On the previous draw, the hand must have contained four aces and no more than four of any of the other ranks, and then the contestant must draw an ace on draw $r+1$. Let's try to find the probability of being in a favorable state on draw $r$. There are $\binom{40}{r}$ possible hands of $r$ cards, all of which we assume are equally likely. We would like to count the number of hands containing exactly four aces and no more than four of any other rank, which we will call a "favorable hand". To do this, we will find the generating function of the number of favorable hands. Numbering the ranks from ace to jack as $1$ through $4$, let's say $n_i$ is the initial number of cards in the deck of rank $i$, for $i=1,2,3,4$, so $n_1 = 5$, $n_2=6$, etc. If we consider only cards of rank $i$, then the generating function for the number of hands containing zero to four of those cards is $$\sum_{j=0}^4 \binom{n_i}{j} x^j$$ The generating function for the number of ways to draw exactly four aces is simply $\binom{5}{4}x^4$; so the generating function for the number of favorable hands containing cards of all ranks is $$f(x) = \binom{5}{4}x^4 \prod_{i=2}^4 \sum_{j=0}^4 \binom{n_i}{j} x^j$$ I.e., the coefficient of $x^r$ when $f(x)$ is expanded, which we will denote by $[x^r]f(x)$, is the number of hands containing exactly four aces and no more than four of any other rank.
After some computation (I used Mathematica, but a pencil and paper computation should not be difficult), $$f(x) = 5 x^4+175 x^5+2975 x^6+32725 x^7+261800 x^8+ \\ 1544980 x^9+6741525 x^{10}+21960225 x^{11}+53723775 x^{12}+ \\ 96756975 x^{13}+122906250 x^{14} +102343500 x^{15}+45785250 x^{16}$$ The probability of a favorable hand of $r$ cards is then $$\frac{[x^r]f(x)}{\binom{40}{r}}$$ In order to win on draw $r+1$ the player must have a favorable hand on draw $r$ and then draw an ace on draw $r+1$, when there is one ace left in the deck and $40-r$ cards total. So the probability of a win on draw $r+1$ is $$\frac{[x^r]f(x)}{\binom{40}{r}} \cdot \frac{1}{40-r}$$ and the overall probability of winning is $$\sum_{r=0}^{16} \frac{[x^r]f(x)}{\binom{40}{r}} \cdot \frac{1}{40-r} = \boxed{0.00194347}$$ Edit: Prompted by one of the comments, I provided a list of learning resources for people who are new to generating functions and would like to learn about them. I have moved that list to a new question and answer here because it seems more suitable for development as a community resource. Please take a look.
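The whole computation is easy to reproduce exactly in Python with rational arithmetic (my own sketch of the method described above; it reproduces both the expansion of $f(x)$ and the boxed probability):

```python
from fractions import Fraction
from math import comb

counts = [5, 6, 9, 20]  # aces, kings, queens, jacks

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists (index = exponent)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# f(x) = C(5,4) x^4 times, for each other rank, sum_{j=0}^{4} C(n_i, j) x^j
f = [0, 0, 0, 0, comb(counts[0], 4)]
for n_i in counts[1:]:
    f = poly_mul(f, [comb(n_i, j) for j in range(5)])

# coefficients match the expansion above
assert f[4] == 5 and f[5] == 175 and f[16] == 45785250

# P(win) = sum_r [x^r]f / C(40, r) * 1/(40 - r)
p_win = sum(Fraction(f[r], comb(40, r) * (40 - r)) for r in range(4, 17))
assert abs(float(p_win) - 0.00194347) < 1e-6
```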
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Problem similar to Gambler's Ruin I am working on a problem that is similar to the gambler's ruin problem. We start with a bet of $\$1$. * *With probability $p$ the game is won and we win the amount of the bet, i.e., if we bet $\$1$, we win $\$2$. *With probability $1-p =: q$ the game is lost. If we win the game, we take our earnings and stop playing. If we lose we double our bet and keep on playing until we lose. What is the expected amount we bet in total? I would argue that the formula for the total bet looks like this, but I'm not sure if I made a mistake somewhere, as it would give negative bet values for a small $p$. $$\sum \limits_{k=1}^{\infty}(2^k-1)p(1-p)^{(k-1)}=\frac{2p}{2p-1}-1$$ Many thanks for your time and help!
You can split your sum into two separate sums: $$E(p)=\sum \limits_{k=1}^{\infty}(2^k-1)p(1-p)^{(k-1)}=p\sum \limits_{k=1}^{\infty}2^k(1-p)^{k-1} - p\sum \limits_{k=1}^{\infty}(1-p)^{k-1} \\ = 2p\sum \limits_{k=0}^{\infty}2^{k}(1-p)^{k} - p \sum \limits_{k=0}^{\infty}(1-p)^{k}$$ The sum $\sum\limits_{k=0}^{\infty}2^{k}(1-p)^{k}$ converges only if $p>\frac{1}{2}$. For $p \leq \frac{1}{2}$ we have $\sum\limits_{k=0}^{\infty}2^{k}(1-p)^{k} =\infty$. The second sum equals $\frac{1}{p}$ for any $p\neq 0$. Therefore we have: $$E(p)=\begin{cases}\frac{1}{2p-1}, & p>\frac{1}{2}\\ \infty,& p\leq \frac{1}{2}\end{cases}$$
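A quick numerical check of the closed form (a Python sketch; partial sums of the series for sample values of $p > \frac{1}{2}$):

```python
# Partial sum of E(p) = sum_{k>=1} (2^k - 1) * p * (1-p)^(k-1),
# compared against the closed form 1/(2p - 1) for p > 1/2.
def expected_total_bet(p, terms=200):
    q = 1 - p
    return sum((2**k - 1) * p * q**(k - 1) for k in range(1, terms + 1))

p = 0.7
approx = expected_total_bet(p)
exact = 1 / (2 * p - 1)
print(approx, exact)  # both close to 2.5
```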
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding integer solution to a quadratic equation in two unknowns We have an equation: $$m^2 = n^2 + m + n + 2018.$$ Find all integer pairs $(m,n)$ satisfying this equation.
Simpler start: separating variables to either side gives: $$m^2-m=n^2+n+2018$$ which factors for each variable as: $$m(m-1)=n(n+1)+2018$$ Since both pairs $(m,m-1)$ and $(n,n+1)$ are consecutive integers, both products are even, so you can divide both sides by two, giving: $$\frac{m(m-1)}{2}=\frac{n(n+1)}{2}+1009$$ But $\frac{y(y+1)}{2}$ is the form of the $y$-th triangular number, so the solutions are exactly those for which $1009$ is the difference of the two triangular numbers $T_{\vert m-1 \vert}$ and $T_{\vert n \vert}$. Solve for $n$ and $m-1$.
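A brute-force cross-check (a Python sketch; the search window of $n$ values is an arbitrary choice for illustration):

```python
from math import isqrt

# Solve m^2 = n^2 + m + n + 2018 by checking, for each n in a window,
# whether the quadratic m^2 - m - (n^2 + n + 2018) = 0 has integer roots.
solutions = []
for n in range(-1100, 1101):
    disc = 1 + 4 * (n * n + n + 2018)
    s = isqrt(disc)
    if s * s == disc:
        for m in ((1 + s) // 2, (1 - s) // 2):
            if m * m == n * n + m + n + 2018:
                solutions.append((m, n))
print(solutions)  # includes e.g. (506, 503) and (1010, 1008)
```

For instance $(m,n)=(506,503)$ corresponds to the triangular-number difference $T_{505}-T_{503}=1009$.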
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Euler's totient $\phi$ function as a product of primes Many web pages say that Euler's totient function $\phi(n)$ can be given as $$\phi(n)=n \prod_{p|n} \biggl(1- \frac{1}{p} \biggr)$$ But $\phi(1)=1$, and no primes divide $1$. Surely this gives $$\phi(1)=\prod_{p|1} \biggl(1- \frac{1}{p} \biggr)=0n=0$$ Is $\phi(1)$ a special case, or am I missing something?
There's no problem here. $\phi(1)=1$ even according to the product definition; there are no primes dividing $1$, so it's an empty product, which is $1$.
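This is easy to check computationally (a Python sketch of the product formula; for $n=1$ the loop finds no prime factors, so the empty-product convention gives $\phi(1)=1$):

```python
from fractions import Fraction

def phi(n):
    """Euler's totient via n * prod_{p | n} (1 - 1/p); empty product for n = 1."""
    result = Fraction(n)
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            result *= 1 - Fraction(1, p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:  # remaining prime factor
        result *= 1 - Fraction(1, m)
    return int(result)

print(phi(1), phi(12), phi(10))  # 1 4 4
```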
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Show that there exists a $t \in [0, 1]$ such that $\alpha(f) =f(t)$ for all $f \in C_\mathbb{R}[0, 1]$. Suppose $\alpha$ is a nonzero multiplicative linear functional on $C_\mathbb{R}[0, 1]$. Show that there exists a $t \in [0, 1]$ such that $\alpha(f) =f(t)$ for all $f \in C_\mathbb{R}[0, 1]$. I approached this by contradiction. I defined $g_t(x) = f_t(x) - \alpha(f_t)$ such that when I apply $\alpha$ to it, I get $0$ with $g_t(t) \ne 0$. I take a finite subcover of $[0,1]$ where each open set is centered around a $t$. I then define $G(x) = \sum\limits_{j = 1}^\infty (g_{t_j}(x))^2$ (note that $G \ne 0$ for all $x \in [0,1]$). I want to apply $\alpha$ to $G$ to get $0$, and then get a contradiction with $1 = \alpha(1) = \alpha(G \frac{1}{G}) = \alpha(G) \alpha(\frac{1}{G})$, but I don't know if I can apply $\alpha$ to an infinite sum. Any help would be appreciated.
Nice! Here's a more direct approach. * *$\alpha (f) = \alpha(1 f ) = \alpha(1)\alpha(f)$. Therefore $$ (*)\quad \alpha(1) =1,$$ because $\alpha$ is not identically zero. * *If $f\ge 0$, then $$ (**) \quad 0 \le \alpha (\sqrt{f})^2 = \alpha(f).$$ * *Since $0\le \|f\|_\infty-f$ and $0\le \|f\|_\infty+f$, it follows that $$ (***) \quad |\alpha(f)| \le \alpha (\|f\|_\infty) = \|f\|_\infty.$$ * *Let $\iota$ denote the identity function $\iota(t)=t$. By $(**)$, $\alpha(\iota) \ge 0$, and by $(***)$, $\alpha (\iota) \in [0,1]$. So there exists $x_0\in[0,1]$ satisfying $$ \alpha (\iota) = x_0.$$ This, along with $(*)$, linearity and the multiplicative property, implies $$\alpha (p) = p(x_0),$$ whenever $p$ is a polynomial. * *As polynomials are dense in $C[0,1]$, it follows that for every $f$ and every $n$ there exists a polynomial $P_n$ such that $\| f-P_n\|_\infty \le \frac1n$. Thus $$ |\alpha(f) - f(x_0) | \le \underset{\le \frac {1}{n}}{\underbrace{ |\alpha (f) - \alpha(P_n) |}} + \underset{=0}{\underbrace{|\alpha(P_n) - P_n (x_0)|}} + \underset{\le \frac 1n}{\underbrace{| P_n (x_0) - f(x_0)|}},$$ where the first bound on the right is due to $(***)$ applied to $f-P_n$. Since $n$ is arbitrary, $\alpha(f)=f(x_0)$. This completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Let $k$ be a non-algebraically closed field and $I\subset k[x_1,\dots, x_n]$ be maximal ideal. Is $V_{\bar{k}}(I)$ necessarily finite? Let $k$ be a non-algebraically closed field and $I\subset k[x_1,\dots, x_n]$ be a maximal ideal. $\textbf{Q:}$ Is $V_{\bar{k}}(I)=\{x\in\bar{k}^n\vert \forall f\in I, f(x)=0\}$ necessarily finite?
Yes. Let $R=k[x_1,\cdots,x_n]$ and $\overline{R}=\overline{k}[x_1,\cdots,x_n]$. Then $\overline{R}$ is an integral extension of $R$ and both rings are normal domains, and we may apply going up and going down to see that $V_{\overline{k}}(I)$ also has dimension zero, and the dimension zero closed subsets of affine space are precisely finite collections of points. Alternatively, pick a generating set for $I$ and note that there are finitely many polynomials each of finite degree in this set, so we may adjoin a finite number of algebraic elements over $k$ so that all of our generating polynomials factor completely. So over some finite extension, our ideal is now a finite product of ideals of the form $(x_1-a_1,x_2-a_2,\cdots)$. Clearly the base change of this to the algebraic closure has finitely many points, but it's the same as the base change up to the algebraic closure of our original variety.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Construct a set of points having some properties (colouring related) Construct a finite set of points $S$, all in the same plane, such that: * *Every line in the plane intersects $S$ in no more than $4$ points. *If the points of $S$ are arbitrarily coloured with two colours, there are 3 collinear points of $S$ having same colour. I think the following set $S$ of $13$ points would solve the problem (the horizontal lines are parallel): Obviously the first condition is fulfilled. Now, suppose there are two colours, blue and green. It seems that no matter how I distribute them, I end up with having 3 points with the same colour on the same line. But I cannot give a rigorous proof and I need help with this (of course, listing all the possibilities is not an option).
I'll number the points from top to bottom and on each line from left to right. We will try to construct a coloring that fails to satisfy condition $2$ and show that we can't do it. I'll think of the colors as red and green. The corners of the triangle are points $1, 10$, and $13$. If all three corners are red, then the inside points of the exterior edges, $2, 5, 6, 9, 11,$ and $12$, all must be green or one of the exterior edges satisfies condition $2$. But if $2$ and $5$ are both green, then $3$ and $4$ must both be red, and if $6$ and $9$ are both green, then $7$ and $8$ must both be red, and that means, for example, that $1, 3,$ and $7$ are collinear red points. Thus, both colors must be represented on the corners of the triangle. Let's assume $1$ is red and $10$ and $13$ are green. Then $11$ and $12$ must both be red, so $3, 4, 7,$ and $8$ must be green, and $2, 5, 6,$ and $9$ must be red, which means that $1, 2,$ and $6$ are collinear red points. By symmetry, the last possibility is that $1$ and $10$ are red and $13$ is green. Then $2$ and $6$ must be green. We also know that $11$ and $12$ must be different colors. First, assume $11$ is red. Then $3$ and $7$ must be green, so $4$ and $8$ must be red and $1, 4$, and $8$ are collinear red points. If $11$ is green and $12 $ is red, then $4$ and $8$ must be green so $3$ and $7$ must be red and $1, 3,$ and $7$ are collinear red points. We have exhausted all possibilities so we have proved it is impossible to color the points without ending up with the collinear points of the same color.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3140905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that this is a surjection and find the kernel We have a map $\alpha: G \rightarrow S(G)$, where $S(G)$ is the group of all bijections from $G$ to $G$. And $\alpha(g) = f_g$, where $f_g(a) = gag^{-1}$. It's easy to prove that this is a homomorphism and its kernel is the set of $g\in G$ such that $f_g = \operatorname{Id}_G$, i.e. $f_g(a) = gag^{-1} = a$ $\forall a\in G$. It means that $ga = ag$ $\forall a\in G$ and it is the definition of $Z(G)$, the center of group $G$. But what about proving that this is a surjection? I find it obvious by definition (I mean that $\alpha(f_g) = g$) But how to prove that this is a surjection strongly and what about my solution of kernel? Is it ok?
In general, $\alpha$ is not surjective. This follows already from the fact that $|S(G)|=n!$ when $|G|=n$ (and $n!>n$ for $n>2$). Also, we have $\alpha(g)(1)=1$ for all $g\in G$, but (unless $n=1$) there exist bijections $\in S(G)$ that map $1$ elsewhere. In summary, $\alpha$ is surjective iff $G$ is trivial.
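A concrete illustration with a small nonabelian group (a Python sketch, realizing $S_3$ as permutations of $\{0,1,2\}$): there are only $|G/Z(G)|=6$ distinct conjugation maps $f_g$, while $S(G)$ has $6!=720$ elements.

```python
from itertools import permutations
from math import factorial

# G = S_3 as permutation tuples; compose(g, h) = "g after h".
G = list(permutations(range(3)))

def compose(g, h):
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0] * 3
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

# alpha(g) = f_g, the conjugation map a -> g a g^{-1}, recorded as its tuple of images on G.
def f(g):
    return tuple(compose(compose(g, a), inverse(g)) for a in G)

inner_maps = {f(g) for g in G}
print(len(inner_maps), factorial(len(G)))  # 6 distinct conjugation maps vs 720 bijections
```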
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to show that $|D_{2n}| = 2n$ via the presentation? Consider the dihedral group $$D_{2n}= \langle a,b \mid a^n = 1 = b^2, b^{-1}ab = a^{-1}\rangle$$ How can I show that $|D_{2n}| = 2n$? I'm trying to show that we can write every element in the form $$x = a^i b^j$$ where $i= 0, 1, \dots, n-1$; $b = 0,1$. I managed to show existence, and it is clear that there are $2n$ such elements, so if I can show that every choice of $i,j$ gives a distinct element, I'll be done. Any ideas?
Assume $D_{2n}= \langle a,b \mid a^n = 1 = b^2, b^{-1}ab = a^{-1}\rangle$ has this presentation. $D_{2n}\neq\emptyset$ since $1\in D_{2n}$. Let $a,b\in D_{2n}$ be the generators, with $a\neq b$ and $a,b\neq1$, else this presentation is futile. $\textit{Claim}$: $|a|=n$ and $|b|=2$. From $b^2=1$ we get $|b|\:\Bigg|\:2$, and since $b\neq1$, $|b|=2$. From $a^n=1$ we get $|a|\:\Bigg|\:n$: indeed, if $|a|=k$, then by the division algorithm $\exists!\; q,r\in\mathbb{Z}\;:\;n=kq+r$ with $0\leq r<k$, and $$1=a^n=a^{kq+r}=a^r,$$ which forces $r=0$ since $0\le r<k=|a|$; hence $k\mid n$. The relations alone only give $|a|\mid n$ and $|b|\mid 2$; that in fact $|a|=n$ and $|b|=2$ (i.e. the presented group is not a proper quotient) follows by exhibiting a concrete group satisfying the relations in which the images of $a$ and $b$ have orders exactly $n$ and $2$: for instance, the rotation by $2\pi/n$ and a reflection in the symmetry group of a regular $n$-gon. Hence the claim. Since $D_{2n}=\langle a,b\rangle$ and $b$ can always be moved to the right using $ab=ba^{-1}$, every element of $D_{2n}$ has the form $a^ib^j,\;i\in\{0,1,\cdots,(n-1)\} \;\text{and}\;j\in\{0,1\}$. We prove that these elements are distinct for all $(i,j)$. Using $bab=a^{-1}$, we can derive * *$a^kba^{k}=b$, *$ba^kb=a^{-k}\quad$ and subsequently *$b^ma^kb^m=a^{((-1)^mk)}$. We shall also use the fact : $\langle a\rangle \cap \langle b\rangle=\{1\}\tag1$ Suppose $$a^ib^j =a^mb^k\tag2$$ where $i,m\in\{0,1,\cdots,(n-1)\}$ and $j,k\in\{0,1\}$, and consider first the case $i\neq m$ and $j\neq k$. Without loss of generality let $m>i$. Multiplying $(2)$ by $a^{-i}$ on the left and by $b^k$ on the right (and using $b^{2k}=1$) gives $$b^{j+k}=a^{m-i}\tag3$$ By $(1)$, both sides of $(3)$ must equal $1$. But $a^{m-i}=1$ with $0<m-i<n$ contradicts $|a|=n$. 
For the case in $(2)$ where $i=m$ but $j\neq k$, $$a^ib^j =a^mb^k\implies b^j=b^k\iff j=k,$$ a contradiction. For the case in $(2)$ where $i\neq m$ but $j= k$, $$a^ib^j =a^mb^k\implies a^{m-i}=1\;\text{with}\; 0<(m-i)<n \Rightarrow\Leftarrow\;\text{ as}\; |a|=n$$ Since we have shown that $a^ib^j$ is distinct for all $(i,j)$, by simple combinatorics we see that $|D_{2n}|=2n$. Proof for $(1)$: suppose $\langle a\rangle \cap \langle b\rangle\neq\{1\}$; then $b\in\langle a\rangle$. If $n$ is odd then $|b|=2$ does not divide $|\langle a\rangle|=n$, contradicting Lagrange's theorem. If $n$ is even, then $\langle a\rangle$, being cyclic, has exactly one element of order $2$, namely $a^{n/2}$, so $b=a^{n/2}$. But then $b$ commutes with $a$, and the relation $b^{-1}ab=a^{-1}$ gives $a=a^{-1}$, i.e. $a^2=1$, contradicting $|a|=n$ for $n>2$ (and for $n=2$ it gives $b=a$, contradicting $a\neq b$). Hence the fact $\langle a\rangle \cap \langle b\rangle=\{1\}$.
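To see the count concretely, one can realize the presentation by permutations, taking $a$ to be a rotation and $b$ a reflection of $n$ labeled vertices, and check that the $2n$ elements $a^ib^j$ are pairwise distinct (a Python sketch of this standard realization, not part of the presentation-only argument above):

```python
def dihedral_elements(n):
    # a = rotation i -> i+1 (mod n), b = reflection i -> -i (mod n)
    a = tuple((i + 1) % n for i in range(n))
    b = tuple((-i) % n for i in range(n))
    identity = tuple(range(n))

    def compose(g, h):
        return tuple(g[h[i]] for i in range(n))

    def power(g, k):
        result = identity
        for _ in range(k):
            result = compose(result, g)
        return result

    # relations: a^n = 1, b^2 = 1, b a b = a^{-1}
    assert power(a, n) == identity and compose(b, b) == identity
    assert compose(compose(b, a), b) == power(a, n - 1)

    # the set {a^i b^j : 0 <= i < n, j in {0,1}} as permutation tuples
    return {compose(power(a, i), g) for i in range(n) for g in (identity, b)}

for n in range(3, 8):
    print(n, len(dihedral_elements(n)))  # each line: n, 2n
```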
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does this work? $| [f(x)−g(x)] − (L−M) | \leq | f(x)−L | + | g(x)−M |$ My teacher was showing my class a proof for the limit difference rule using the epsilon-delta definition, and nearing the end, he showed us this: |[f(x)−g(x)] − (L−M)| ≤ |f(x)−L| + |g(x)−M| . I know what the triangular inequality is, and how it works, but on the left, he has a subtraction not a addition. Are there steps he didn't show, or did he just make a mistake?
The trick is that $$|M-g(x)|=|g(x)-M|$$ because $$|M-g(x)|=|(-1)(g(x)-M)|=|-1|\cdot|g(x)-M|=|g(x)-M|$$ So that $$|(f(x)-g(x))-(L-M)|=|(f(x)-L)+(M-g(x))|\leq |f(x)-L|+|g(x)-M|$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Incomplete elliptic integral and Jacobi's form The incomplete elliptic integral of the first kind is written (using trigonometric form) : $ F(\varphi,k)=\int_{0}^{\varphi} \frac{1}{\sqrt{1-k^2 \sin^2(\theta)}} \mathrm{d}\theta $. Then, it is noted everywhere that if we make the change of variable $t=\sin(\theta)$, then the integral can be re-written in the so-called Jacobi's form : $F(x,k)=\int_{0}^{x} \frac{1}{\sqrt{ (1-t^2)(1-k^2 t^2) }} \mathrm{d}t$, where $x$ is used instead of $\sin(\varphi)$... So good up to there, but, $t=\sin(\theta) \Longrightarrow \mathrm{d}t=\cos(\theta)\mathrm{d}\theta$, and, depending on $\theta$, $\cos(\theta)=\pm\sqrt{1-\sin^2(\theta)}=\pm\sqrt{1-t^2}$ So I wonder why, in the form of Jacobi, we use the positive writing of $\cos(\theta)$ ?
Because the original integral makes sense for $\theta$ in a neighborhood of $0$. There, $\cos$ is positive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are these two graphs isomorphic? Why/Why not? Are these two graphs isomorphic? According to Bruce Schneier: "A graph is a network of lines connecting different points. If two graphs are identical except for the names of the points, they are called isomorphic." Schneier, B.  "Graph Isomorphism" From Applied Cryptography John Wiley & Sons Inc. ISBN 9780471117094 According to a GeeksforGeeks article: These two are isomorphic: And these two aren't isomorphic: Manwani, C. "Graph Isomorphisms and Connectivity" From GeeksforGeeks https://www.geeksforgeeks.org/mathematics-graph-isomorphisms-connectivity/ According to a MathWorld article: "Two graphs which contain the same number of graph vertices connected in the same way are said to be isomorphic." Weisstein, Eric W. "Isomorphic Graphs." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/IsomorphicGraphs.html The details are beyond me, but the MathWorld explanation seems to conflict with the first GeeksforGeeks example; the vertices appear the same, but they appear to be connected differently. To add to the confusion, the same could be said for the second example. So I can't really deduce the facts. Please try to keep answers as clear and simple as possible for the sake of understanding. "Truth is ever to be found in the simplicity, and not in the multiplicity and confusion of things." –Isaac Newton
Both claims are correct. Mapping $$e_1 \to c_1, \qquad e_2 \to c_3, \qquad e_3 \to c_5, \qquad e_4 \to c_2, \qquad e_5 \to c_4$$ maps the edges of the left graph precisely to those of the right graph, so that map defines an isomorphism of graphs. The right graph has cycles of length $3$ (e.g., $aefa$) but the left graph does not, so the graphs cannot be isomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 3 }
Other inner products for $\mathbb{R}^n$ For $\mathbb{R}^n$, the standard inner product is the dot product. It is defined as $ \langle v,\,w\rangle = \sum_i v_i \cdot w_i $. I am aware that any scaled version, namely $ \langle v,\,w\rangle = \sum_i\lambda_i\cdot v_i \cdot w_i $ will still satisfy the 4 inner product requirements. Is there any inner product for $\mathbb{R}^n$ that is not just a scaled version of the standard dot product? I tried for $\mathbb{R}^2$ with $ \langle v,\,w\rangle = v_1 \cdot w_2 + v_2 \cdot w_1 $ but that is not positive definite.
I agree with SmileyCraft. In finite-dimensional vector spaces, bilinear transformations, like linear transformations, can be written in terms of the values they take on a given basis:$$\left \langle x,y \right \rangle=\sum_{i,j=1}^{n}x_iy_j\left \langle e_i,e_j \right \rangle.$$ I believe you can arrive at this representation without difficulty, which then proves your suspicion.
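For a concrete example answering the original question: $\langle v,w\rangle = 2v_1w_1 + v_1w_2 + v_2w_1 + 2v_2w_2$, i.e. $v^\top M w$ with $M=\begin{pmatrix}2&1\\1&2\end{pmatrix}$, is an inner product on $\mathbb{R}^2$ that is not a scaled dot product, since it has a nonzero cross term. A small Python sketch checking symmetry and positivity on sample vectors:

```python
# <v, w> = 2*v1*w1 + v1*w2 + v2*w1 + 2*v2*w2, i.e. v^T M v with M = [[2, 1], [1, 2]]
def inner(v, w):
    return 2*v[0]*w[0] + v[0]*w[1] + v[1]*w[0] + 2*v[1]*w[1]

# symmetry and positivity on a grid of integer sample vectors
samples = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
for v in samples:
    for w in samples:
        assert inner(v, w) == inner(w, v)
    if v != (0, 0):
        # <v, v> = x^2 + y^2 + (x + y)^2 > 0 (M has eigenvalues 1 and 3)
        assert inner(v, v) > 0

print(inner((1, 0), (0, 1)))  # 1, a nonzero cross term: not a scaled dot product
```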
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 4 }
What is the "determinant" of two vectors? I came across the notation $\det(v,w)$ where $v$ and $w$ are vectors. Specifically, it was about the curvature of a plane curve: $$\kappa (t) = \frac{\det(\gamma'(t), \gamma''(t)) }{\|\gamma'(t)\|^3}$$ What is it supposed to mean?
They formed a matrix by stacking $\gamma'(t)$ and $\gamma''(t)$ next to each other as column vectors. You can also regard it as the cross product of the two vectors if you extend both with a $z=0$ coordinate and take the z component of the resulting vector (that way you can relate it to the 3d formula in a way).
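As a sanity check, for a circle of radius $R$ this formula gives constant curvature $1/R$ (a small Python sketch with $\gamma(t) = (R\cos t, R\sin t)$):

```python
from math import cos, sin, hypot

def curvature(t, R):
    # gamma'(t) = (-R sin t, R cos t), gamma''(t) = (-R cos t, -R sin t)
    d1 = (-R * sin(t), R * cos(t))
    d2 = (-R * cos(t), -R * sin(t))
    det = d1[0] * d2[1] - d1[1] * d2[0]  # 2x2 determinant of the stacked column vectors
    return det / hypot(*d1) ** 3

print(curvature(0.7, 2.0))  # ~0.5, i.e. 1/R
```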
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
orthogonal chebyshev polynomials The theorem states: $\int_{-1}^{1} \frac{T_n(x)T_m(x)}{ \sqrt{1+x^2} } dx = 0 ;$ when $n\ne m $ proof: "substitute $x= \cos \theta$ " and that's it. So I am wondering whether I should start with this $\int_{-1}^{1} \frac{\cos(\theta n)\cos(\theta m)}{\sqrt{1+x^2}} dx $ or with this $\int_{-1}^{1} \frac{\cos(\theta n)\cos(\theta m)}{\sqrt{1+\cos^2 \theta}} dx $ in order to verify the proof. Note: $T_n(x)$ is the Chebyshev polynomial.
When you do this, you apply the substitution rule for integrals, fully. Everything with $x$ in it, including the denominator, the $dx$, and the limits, transform. On the other hand, the statement that you're trying to prove is incorrect. The correct form has $\sqrt{1-x^2}$ in the denominator instead.
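With the corrected weight, the substitution $x=\cos\theta$ turns the integral into $\int_0^\pi \cos(n\theta)\cos(m\theta)\,d\theta$, which vanishes for $n\neq m$. A quick numerical check (a Python sketch using the midpoint rule, with $T_2(x)=2x^2-1$ and $T_3(x)=4x^3-3x$):

```python
from math import cos, pi

def T2(x): return 2 * x**2 - 1
def T3(x): return 4 * x**3 - 3 * x

# After x = cos(theta), the weight 1/sqrt(1 - x^2) cancels against dx = -sin(theta) dtheta,
# leaving the integral over [0, pi] of T_n(cos t) T_m(cos t) dt = cos(nt) cos(mt) dt.
def midpoint_integral(f, a, b, N=2000):
    h = (b - a) / N
    return h * sum(f(a + (j + 0.5) * h) for j in range(N))

I23 = midpoint_integral(lambda t: T2(cos(t)) * T3(cos(t)), 0, pi)  # n != m
I22 = midpoint_integral(lambda t: T2(cos(t)) * T2(cos(t)), 0, pi)  # n == m
print(I23, I22)  # ~0 and ~pi/2
```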
{ "language": "en", "url": "https://math.stackexchange.com/questions/3141913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Arithmetic Progression Time and Work Problem A group of men working at the same rate can finish a job in $45$ hours. However, the men report to work, one at a time, at equal intervals over a period of time. Once on the job, each man stays until the job is finished. If the first man works five times as many hours as the last man, find : 1) The number of hours the first man works. 2) The total number of men in the group. I know it is an AP problem but I can't figure out how to solve it. I also know I have to use the formula $n = (l-a)/d + 1$ where $l$ = last term, $a$ = first term, $d$ = common difference, $n$ = no. of terms
Suppose that there are $M$ men in the group. We are told that it takes $45M$ man-hours to finish the job. Now suppose the first man comes to work at time $0$ and works until time $T,$ and that the other men arrive at intervals of $h$ hours. The second man arrives at time $h$ and works until time $T$ so he works $T-h$ hours. The second man arrives at time $2h,$ so he works $T-2h$ hours and so on. The total number of man-hours worked is $$ T + (T-h)+(T-2h)+\cdots+(T-(M-1)h))=M\left({2T-(M-1)h\over2}\right)=45M$$ so that $$2T-(M-1)h=90\tag{1}$$ The first man works $T$ hours, the last man works $T-(M-1)h$ hours and we are told that $$T=5(T-(M-1)h)$$ so that $$4T=5(M-1)h\tag{2}$$ Solving $(1)$ and $(2)$ simultaneously gives $$\begin{align} (M-1)h&=60\\ T&=75\end{align}$$ We can say that the first man works $75$ hours (without any rest!) and the last man $15,$ but the number of men is not determined. If $M=2,$ the first man works $75$ hours, the last man works $15$ and the whole job takes $$75+15=90=45M\text{ man-hours.}$$ If $M=3,$ then $h=30,$ the second man works $45$ hours, and the whole job takes $135$ man-hours. Any integer $\ge2$ gives a valid value for $M.$
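The family of solutions is easy to verify (a Python sketch using exact fractions):

```python
from fractions import Fraction

# For M men: arrival spacing h = 60/(M-1); the first man works T = 75 hours,
# the last works 15 hours, and the total is 45 man-hours per man.
for M in range(2, 11):
    h = Fraction(60, M - 1)
    hours = [75 - k * h for k in range(M)]
    assert sum(hours) == 45 * M        # total man-hours matches 45 hours * M men
    assert hours[0] == 5 * hours[-1]   # first man works 5x the last man
print("all M from 2 to 10 check out")
```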
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find value of $|z_2+z_3|$ Given that complex numbers $z_1,z_2,z_3$ lie on unit circle and $$|z_1-z_2|^2+|z_1-z_3|^2=4$$ Then find value of $|z_2+z_3|$ My try: We can take $z_1=e^{i\alpha}$, $z_2=e^{i\beta}$ and $z_3=e^{i\gamma}$ So we have $$|z_1-z_2|=2\sin\left(\frac{\alpha-\beta}{2}\right)$$ $$|z_1-z_3|=2\sin\left(\frac{\alpha-\gamma}{2}\right)$$ So we get: $$\sin^2\left(\frac{\alpha-\beta}{2}\right)+\sin^2\left(\frac{\alpha-\gamma}{2}\right)=1$$ Now we have: $$|z_2+z_3|=2\cos\left(\frac{\beta-\gamma}{2}\right)$$ Any help here?
Hint: Writing $$\frac{\beta-\gamma}{2}=\frac{\beta-\alpha}{2}+\frac{\alpha-\gamma}{2}$$ and combining this with the trigonometric identity for the cosine of a sum $$\cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)$$ we get \begin{align}\cos\left(\frac{\beta-\gamma}{2}\right)&=\cos\left(\frac{\beta-\alpha}{2}+\frac{\alpha-\gamma}{2}\right)\\ &=\cos\left(\frac{\beta-\alpha}{2}\right)\cdot\cos\left(\frac{\alpha-\gamma}{2}\right)-\sin\left(\frac{\beta-\alpha}{2}\right)\cdot\sin\left(\frac{\alpha-\gamma}{2}\right).\end{align} Now rewrite the identity $$\sin^2\left(\frac{\alpha-\beta}{2}\right)+\sin^2\left(\frac{\alpha-\gamma}{2}\right)=1$$ as $$\sin^2\left(\frac{\alpha-\beta}{2}\right)=1-\sin^2\left(\frac{\alpha-\gamma}{2}\right)=\cos^2\left(\frac{\alpha-\gamma}{2}\right),$$ and plug this into the previous to conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Alice and Bob take turns to remove numbers from a list Alice and Bob play the following game. In the beginning there is list of numbers $$\{0, 1, 2,\dotsc, 1024\}.$$ Alice starts, and removes 512 numbers of her choice. Bob continues and removes 256 numbers of his choice. Alice continues and removes 128 numbers of her choice, and so on, until only 2 numbers remain. Then Alice pays Bob the difference between the two numbers. What are the optimal strategies for Alice and for Bob, respectively?
Given a set $X$, let $A(X)$ denote the largest distance between any two points in $X$ and $B(X)$ denote the smallest distance between any two different points in $X$. For a two-element set, $A(X)=B(X)$. We start with $A(X)=1024$ and $B(X)=1$. I will show Alice can halve $A(X)$ at each of her turns, and Bob can double $B(X)$ at each of his turns. Since both players have 5 turns, this means that Alice can ensure that the result is at most 32 and Bob can ensure the result is at least 32. Note that at each turn, the player to move is given a set with an odd number of elements. Given $X=\{x_0,\dots,x_{n/2},\dots,x_n\}$ sorted ascending, we have $A(X)=x_n-x_0 = (x_n - x_{n/2}) + (x_{n/2} - x_0)=A(\{x_{n/2},\dots,x_n\})+A(\{x_0,\dots,x_{n/2}\})$. At least one of the two summands is $\leq A(X)/2$. Therefore, Alice removes all topmost or all bottommost elements on each of her turns, depending on the situation. Given $X=\{x_0,x_1,..,x_{n}\}$, Bob removes $x_i$ such that $i$ is odd. It's easy to see that this guarantees that $B(X)$ is at least doubled. A slight generalization: in your problem Alice and Bob have the same number of turns. In general, if we start with $\{0,1,\dots,2^n\}$ and remove almost-half of remaining elements at every turn, then the optimal payoff will be $2^k$ where $k$ is the number of turns controlled by Bob and $n-k$ the number of turns controlled by Alice; the players don't have to alternate.
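The two strategies are easy to simulate against each other (a Python sketch; Alice keeps whichever half has the smaller span, Bob keeps every other element):

```python
def play(points):
    """Alice (minimizer) moves first; the players alternate, each removing
    almost-half of the remaining sorted points until two remain."""
    xs = sorted(points)
    alice_turn = True
    while len(xs) > 2:
        half = len(xs) // 2  # len(xs) is odd; keep half + 1 points
        if alice_turn:
            # keep whichever half (sharing the middle point) has the smaller span
            low, high = xs[:half + 1], xs[half:]
            xs = low if low[-1] - low[0] <= high[-1] - high[0] else high
        else:
            # keep every other point, at least doubling the minimum gap
            xs = xs[::2]
        alice_turn = not alice_turn
    return xs[1] - xs[0]

print(play(range(1025)))  # 32
```

With the smaller starting set $\{0,\dots,16\}$ (two turns each) the same simulation gives $2^2=4$, matching the generalization at the end.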
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Simple Displacement of Parametric Equations Dispute A parametric equation with $\frac{dx}{dt}$ = something, $\frac{dy}{dt}$ = something, has a resultant velocity vector by the Pythagorean theorem of $\sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2}$. A calculus theorem dictates that the integral of velocity over an interval equals displacement over the interval. Thus it must be true that d=$\int\sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2}\,dt$. However, my dispute (wrong for some reason) is that if we treat the velocity components separately, displacement in x is $\int\frac{dx}{dt}\,dt$ and displacement in y is $\int\frac{dy}{dt}\,dt$. If you know displacement along x, and displacement along y, wouldn't your total displacement from origin be $\sqrt{x^2 +y^2 }$ as it is the diagonal length of the parallelogram formed by lengths x and y? Now I've read before something like diagonal length can't be approximated treating x and y separately or something. It would be like breaking down a diagonal into infinitely tiny x and y steps, instead of just drawing a diagonal line. I don't really know, but can someone explain this better? thank you a lot EDIT: This is even more confusing for me now as I remember how we are taught early on in maths that d=$\sqrt{x^2 +y^2 }$
I understand this now. The "paradox" I was referring to, with infinitely many tiny x steps and y steps, is called the staircase paradox, and the same mistake can be used to falsely claim that pi is 4. Computing $\sqrt{x^2 +y^2 }$ from the net displacements along x and y gives the straight-line distance from the starting point to the endpoint, i.e. the displacement. The arc length of a path does not just depend on the final x and y coordinates: multiple paths (e.g. a squiggly line going back and forth vs a straight line) can reach the same endpoint while having very different lengths. The arc length formula $\int\sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2}\,dt$ takes into account the true shape of the curve and finds its distance by integrating the speed, rather than just "approximating" the shape with staircases; the two quantities agree only when the path is a straight line. EDIT: There are more rigorous mathematical treatments of this, but my answer gives a simple reasoning for anyone else that may be confused in the future.
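A concrete comparison for the parabola $\gamma(t)=(t, t^2)$ on $t\in[0,1]$ (a Python sketch): the arc length exceeds the straight-line displacement $\sqrt{x^2+y^2}$ exactly because the path bends.

```python
from math import sqrt, asinh

# gamma(t) = (t, t^2): dx/dt = 1, dy/dt = 2t, speed = sqrt(1 + 4 t^2)
def arc_length(N=20000):
    h = 1.0 / N
    # midpoint-rule approximation of the integral of the speed over [0, 1]
    return h * sum(sqrt(1 + 4 * ((j + 0.5) * h) ** 2) for j in range(N))

displacement = sqrt(1**2 + 1**2)            # straight line from (0, 0) to (1, 1)
exact = sqrt(5) / 2 + asinh(2) / 4          # closed form of the arc-length integral
print(arc_length(), exact, displacement)    # ~1.4789, 1.4789..., 1.4142...
```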
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the coefficient of $x^{13}$ in the convolution of two generating functions Four thieves have stolen a collection of 13 identical diamonds. After the theft, they decided how to distribute them. 3 of them have special requests: * *One of them doesn't want more than 2 diamonds ($\leq2$). *The other one only wants a number of diamonds that's a multiple of 3. *And the other one wants an odd number of diamonds greater or equal than 3. Find in how many ways they can distribute the diamonds. My first thought was to use generating functions to find the coefficient of $x^{13}$, for this problem it would be: $f(x)=(1+x+x^2+x^3+...)(1+x+x^2)(1+x^3+x^6+...)(x^3+x^5+x^7+...)=\frac{1}{1-x} \frac{1-x^3}{1-x} \frac{1}{1-x^3}\frac{x^3}{1-x^2}=\frac{1}{(1-x)^{2}}\frac{x^3}{(1-x^2)}$ and that would be equivalent to finding the coefficient of $x^{10}$ in $\frac{1}{(1-x)^{2}}\frac{1}{(1-x^2)}$. I know that I could use the binomial theorem, but the solution I have says I should be using convolution of these two generating functions, but I have no idea how to use it to find the coefficient of $x^{10}$.
It is a laborious work. $$\frac{d^n}{dx^n}\frac{1}{(1-x)^2(1-x^2)}=-\frac{1}{2}(-2-n)_n(-1+x)^{-3-n}+\frac{1}{4}(-1-n)_n(-1+x)^{-2-n}+\frac{1}{8}(-n)_n(1+x)^{-1-n}-\frac{1}{8}(-n)_n(-1+x)^{-1-n}$$ The bracket symbol is the Pochhammer symbol. $$f^{(10)}(x)=\frac{d^{10}}{dx^{10}}\frac{1}{(1-x)^2(1-x^2)}=$$ $$\frac{-7257600(143x^{10}+715x^9+2574x^8+5148x^7+7722x^6+7722x^5+5720x^4+2860x^3+975x^2+195x+18)}{(x+1)^{11}(x-1)^{13}}$$ $$c_{10}=\frac{f^{(10)}(0)}{10!}=36$$ Or collect together all coefficients of $x^{13}$ from your series representation in your question, e.g. by a computer program.
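The convolution route the asker mentions is much more direct than differentiating (a Python sketch): the coefficients of $\frac{1}{(1-x)^2}$ are $k+1$, those of $\frac{1}{1-x^2}$ are $1$ for even $k$ and $0$ for odd $k$; form the Cauchy product and read off the coefficient of $x^{10}$.

```python
N = 11  # we only need coefficients up to x^10

a = [k + 1 for k in range(N)]                    # 1/(1-x)^2 = sum (k+1) x^k
b = [1 if k % 2 == 0 else 0 for k in range(N)]   # 1/(1-x^2) = sum x^{2j}

# Cauchy product (convolution): c_k = sum_{i=0}^{k} a_i * b_{k-i}
c = [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]
print(c[10])  # 36 = 1 + 3 + 5 + 7 + 9 + 11
```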
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
if $M \otimes_A (A_m/mA_m)=0$ for every maximal ideal $m \subset A$, then $M=0$, $M$ finitely generated Suppose $M$ is a finitely generated $A$-module. Prove that if $M \otimes_A (A_m/mA_m)=0$ for every maximal ideal $m \subset A$, then $M=0$. The subscript $_m$ means localization at $m$. First consider the exact sequence $$m \rightarrow A \rightarrow A/m \rightarrow 0.$$ Since localization of modules is an exact functor, we have $$mA_m \rightarrow A_m \rightarrow A_m/mA_m \rightarrow 0$$ is exact. Taking tensor product gives us $$M \underset{A}{\otimes}mA_m \rightarrow M \underset{A}{\otimes}A_m \rightarrow M \underset{A}{\otimes}A_m/mA_m \rightarrow0$$ which is exact. Note that $M \underset{A}{\otimes}A_m \simeq M_m$ and by assumption $M \underset{A}{\otimes}A_m/mA_m=0$. Now, using that $M$ is finitely generated, how can I show that $M_m=0$? If I can show this then since $M=0 \Leftrightarrow (M_m=0$ for all maximal ideals $m$ of $A$), I can complete the proof.
Hint: $$M \otimes_A A_{\mathfrak m}/\mathfrak mA_{\mathfrak m}\simeq M_{\mathfrak m}/\mathfrak mM_{\mathfrak m}.$$ Then use Nakayama's lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $B \subseteq \biguplus^{\infty}_{n=1} A_n$, show that $\mathbb{P}(B)=\sum^{\infty}_{n=1} \mathbb{P}(A_n) \mathbb{P}(B|A_n)$ Question: Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space with events $A,B\in\mathcal{A}$. Now, let $B \subseteq \biguplus^{\infty}_{n=1} A_n$, where $A_n \in \mathcal{A}$ for each $ n \in \mathbb{N}$. Show that: $\mathbb{P}(B)=\sum^{\infty}_{n=1} \mathbb{P}(A_n) \mathbb{P}(B|A_n)$ My attempt so far was to restructure the right-hand side to $\sum^{\infty}_{n=1}\mathbb{P}(A_n)\frac{\mathbb{P}(B \cap A_n)}{\mathbb{P}(A_n)} = \sum^{\infty}_{n=1}\mathbb{P}(B \cap A_n)$ but I don't know if I'm even on the right track, so I'm pretty much stuck at this point.
Hint: if $\biguplus$ means disjoint union, $$B = B\cap\left(\biguplus_{n=1}^\infty A_n\right) = \biguplus_{n=1}^\infty(B\cap A_n). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interpretation of Einstein notation for matrix multiplication Consider the matrix product $C = AB$ where $A \in \mathbb{R}^{m \times n}, B \in \mathbb{R}^{n \times p}$. The Einstein summation notation for this is $$ c_{ik} = a_{ij}b_{jk}. $$ Is there any example from math, physics, engineering, statistics etc. where each term in the sum $a_{ij}b_{jk}$ has a meaningful interpretation? Equivalently, I guess: suppose we did not use Einstein summation notation, and instead defined $D \in \mathbb{R}^{m \times n \times p}$ as: $$ D_{ijk} = A_{ij}B_{jk}. $$ Would the elements of $D$ mean anything with respect to $A, B, C$ or the linear transformations represented by these matrices? Obviously $C_{ik} = \sum_{j} D_{ijk}$, but what does this mean? Or suppose, instead of summing over the second index of $D$ to get $C$, we instead summed over the first index of $D$ to get $E \in \mathbb{R}^{n \times p}$: $$ E_{jk} = \sum_{i} D_{ijk}. $$ Obviously this cannot be represented in Einstein notation. So does this not have meaning as a tensor contraction, and unlikely to have a physical interpretation? Edit: For the powers of adjacency matrices (eg $C = A^2 = AA$), the terms do have an interpretation. Iff $a_{ij}b_{jk} = 1$ (not using Einstein notation), then there is a path from node $i$ to node $k$ through node $j$.
Define a third order tensor whose components are equal to zero unless all three indices are equal $${\cal H}_{ijk} = \begin{cases} 1 \quad{\rm if}\; i\!=\!j=\!k \\ 0 \quad{\rm otherwise} \\ \end{cases} $$ Then you can use Einstein notation to write $${\cal D}_{ijk} = A_{ip}{\cal H}_{pjs}B_{sk}$$ This tensor is a useful addition to standard matrix algebra. It can be used to generate a diagonal matrix $A$ from a vector $a$ (using a single-dot product) $$A = \operatorname{Diag}(a) = {\cal H}\cdot a \quad\implies A_{ij} = {\cal H}_{ijk}\,a_k$$ or to create a vector $b$ from the main diagonal of a matrix $B$ (using a double-dot product) $$b = \operatorname{diag}(B) = {\cal H}:B \quad\implies b_{i} = {\cal H}_{ijk}\,B_{jk}$$ or simply as a way to write ${\cal D}$ without resorting to index notation $${\cal D} = A\cdot{\cal H}\cdot B$$
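Not part of the original answer, but the whole setup can be illustrated numerically with NumPy's einsum, which implements exactly this index calculus — an index kept in the output spec is not summed over:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 2, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

# D_ijk = A_ij * B_jk with NO sum over j, since j appears in the output
D = np.einsum('ij,jk->ijk', A, B)

# contracting over j recovers the ordinary matrix product C = AB
assert np.allclose(np.einsum('ijk->ik', D), A @ B)

# contracting over i instead gives E_jk = (sum_i A_ij) * B_jk:
# each row j of B scaled by the j-th column sum of A
assert np.allclose(np.einsum('ijk->jk', D), A.sum(axis=0)[:, None] * B)

# the tensor H from the answer: H_pjs = 1 iff p = j = s
H = np.zeros((n, n, n))
for i in range(n):
    H[i, i, i] = 1.0

# D = A . H . B, written entirely in Einstein notation
assert np.allclose(np.einsum('ip,pjs,sk->ijk', A, H, B), D)

# diag(B) and Diag(a) via double- and single-dot products with H
M = rng.standard_normal((n, n))
assert np.allclose(np.einsum('ijk,jk->i', H, M), np.diag(M))
a = rng.standard_normal(n)
assert np.allclose(np.einsum('ijk,k->ij', H, a), np.diag(a))
```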
{ "language": "en", "url": "https://math.stackexchange.com/questions/3142957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $\sqrt{a}-\sqrt{b}$ is a root of a polynomial with integer coefficients, then so is $\sqrt{a}+\sqrt{b}$. If $\sqrt{a} - \sqrt{b}$, where $a$ and $b$ are positive integers and non-perfect squares, is a root of a polynomial with integer coefficients, then $\sqrt{a} + \sqrt{b}$ also is. It seems to hold some relationship with the quadratic formula. However, I have no idea on how to prove it.
Returning to this question to complete my work in my previous answer - the crucial insight to the following argument was provided by the answer of @robjohn. Supposing $a$, $b$ are positive integers, $a \neq b$ such that none of $a$, $b$, $ab$ are perfect squares, then for any polynomial with integer coefficients $p(x)$, it holds that $p(\sqrt a - \sqrt b) = 0 \implies p(\sqrt a + \sqrt b) = 0$ The necessary lemma, already argued by @robjohn, is that $1$, $\sqrt a$, $\sqrt b$, and $\sqrt {ab}$ are all linearly independent over $\mathbb Q$. That is, if $A,B,C,D$ are rationals such that $A + B\sqrt a + C\sqrt b + D\sqrt{ab} = 0$, then $A = B = C = D = 0$. To complete the work I presented in my previous answer, I shall prove There is no nonzero polynomial $r(x)$ of degree at most $3$ with rational coefficients such that $r(\sqrt a - \sqrt b) = 0$. In which case $H(x) = (x^2 - a - b)^2 - 4ab$ is the unique monic polynomial of minimal degree for which $H(\sqrt a - \sqrt b) = 0$. Then, the observation that $H(\sqrt a + \sqrt b) = 0$ and an argument via the polynomial division algorithm finishes the problem. I have shown before that there is no nonzero degree $1$ polynomial $r(x)$ such that $r(\sqrt a - \sqrt b) = 0$. For the quadratic case, suppose $r(x) = Ax^2 + Bx + C$ is a polynomial with integer coefficients such that $r(\sqrt a - \sqrt b) = 0$. By expanding the equation $A(\sqrt a - \sqrt b)^2 + B(\sqrt a - \sqrt b) + C = 0$, we can conclude $$(Aa + Ab + C) + B \sqrt a - B \sqrt b - 2A \sqrt{ab} = 0$$ From the lemma above, we conclude in particular that $B = 0$. However this also implies $$2A\sqrt{ab} = Aa + Ab + C$$ If $A = 0$ this forces $C = 0$ as well, making $r$ the zero polynomial; if $A \neq 0$, it implies $\sqrt{ab}$ is rational (and must therefore be an integer), contradicting the assumption that $ab$ is not a square. Finally, turning to the cubic case. Suppose $r(x)$ is a polynomial of degree $3$ such that $r(\sqrt a - \sqrt b) = 0$. 
Without loss of generality $r(x)$ has rational coefficients and is monic, in which case $r(x) = x^3 + Ax^2 + Bx + C$ for rational $A$, $B$, $C$. Expanding $$(\sqrt a - \sqrt b)^3+A(\sqrt a - \sqrt b)^2 + B(\sqrt a - \sqrt b) + C = 0$$ and applying the linear independence of $1$, $\sqrt a$, $\sqrt b$, and $\sqrt {ab}$ over $\mathbb Q$ leads to the following linear system of equations for $A$, $B$, and $C$. \begin{array}{rlc} Aa + Ab + C & =0 \quad & (1)\\ a + 3b + B &=0 & (\sqrt a) \\ -3a - b - B &=0 & (\sqrt b) \\ -2A &=0 & (\sqrt{ab}) \end{array} From the fourth equation $A = 0$, which implies by the first equation $C = 0$. But if $C = 0$ then $\frac{r(x)}{x}$ is a rational polynomial of degree $2$ with a root at $x = \sqrt a - \sqrt b \neq 0$, contradicting the previous argument. This finishes the problem.
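As a quick numeric cross-check (not part of the proof) that $H(x) = (x^2 - a - b)^2 - 4ab$ really vanishes at both $\sqrt a - \sqrt b$ and $\sqrt a + \sqrt b$:

```python
from math import sqrt, isclose

def H(x, a, b):
    # expands to x^4 - 2(a+b) x^2 + (a-b)^2, a monic integer polynomial
    return (x * x - a - b) ** 2 - 4 * a * b

for a, b in [(2, 3), (2, 5), (3, 7)]:
    for root in (sqrt(a) - sqrt(b), sqrt(a) + sqrt(b)):
        assert isclose(H(root, a, b), 0.0, abs_tol=1e-9)
```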
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
About branch points of a holomorphic map Let $F:X \to Y$ be a holomorphic map between Riemann surfaces. $q \in Y$ is a branch point if it is the image of a ramification point. How to prove that the set of branch points is a discrete subset of $Y$. This is from Rick Miranda's Algebraic curves and Riemann surfaces. Thank you.
You forgot to say $F$ is non-constant. Then again, I guess $Ram(F)$ is not defined for $F$ constant. In general for any map $F: X \to Y$ of any topological spaces $X$ and $Y$ with $X$ compact and $Y$ Fréchet/T1 and for any closed discrete subspace $A$ of $X$, we have $F(A)$ discrete. Proof: A closed discrete subspace of a compact space is finite $\implies$ $A$ is finite $\implies$ $F(A)$ is finite $\implies$ $F(A)$ is discrete, because finite subspaces of Fréchet/T1 spaces are discrete. QED Apply this to the case of $A=Ram(F)$ when $F$ is a non-constant holomorphic map between connected Riemann surfaces with $X$ compact (and thus $F$ is surjective, open, closed and proper and $Y$ is compact) to get $F(A)=Branch(F)$ is discrete. In particular, this means we do not use that $F$ is proper, closed, open, surjective, non-constant or holomorphic or that $X$ is connected or that $Y$ is connected. We can relax this to $X$ compact (and not necessarily a Riemann surface) and $Y$ Fréchet/T1 (and not necessarily a Riemann surface, Hausdorff/T2 or compact).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Definition of convergent sequence using only closure operator I have interest in Kuratowski closure axioms for topology. I would like to know how to define convergent sequence using only closure operator such that it is the same to definition of convergent sequence of equivalent axiomatic framework of topological spaces using open sets. In this post: Why is a topology made of open sets? there is an answer about Kuratowski closure axioms for topology and the fact that using them many equivalent definitions can be made. There, user "Vectornaut" states that "(WARNING: I'm kinda rusty at this, so these definitions may not be correct.)" and that "The sequence $\{ x_n \}$ converges to the point $x$ if $x$ touches every subsequence of $\{ x_n \}$." Translating this into closure operator axiomatic system (denoting closure operator by cl) we have that the sequence $\{ x_n \}$ converges to the point $x$ if $x$ is in the closure of every subsequence of $\{ x_n \}$. Is this statement true? Can someone maybe hint me towards some reference about proving equivalence of this with usual definition?
If $s: \mathbb{N} \to X$ is a sequence in $X$, we can define that $s$ converges to $x \in X$ by $$x \in \bigcap \{\operatorname{cl}(s[A]) : A \subseteq \mathbb{N} \text{ infinite }\}$$ which can be shown by considerations as William Elliott gave as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Transforming Quadrics in Characteristic 2 I’m trying to solve the following problem given in a textbook: Let $k$ be an algebraically closed field and $Q=V(F)$ a quadric in $\mathbb{P}^3(k)$, where $F$ is an irreducible polynomial in $X,Y,Z,T$, and hence gives rise to a quadratic form on $k^4$ which we assume is non-degenerate. Show that after some change of coordinates we can write $Q=V(XT-YZ)$. I’ve solved the case where $\text{char}(k)\neq2$ by diagonalising the quadratic form and then making a suitable change of coordinates. However this process involves using the correspondence between quadratic forms and symmetric bilinear forms which is not valid in characteristic $2$ (and if we could diagonalise $F$ then it would become reducible). My first issue is that I can’t find a reference defining what it means for a quadratic form to be non-degenerate in characteristic $2$, so if I tried to come up with a counterexample I wouldn’t know if it was valid or not. Beyond this, I’m not even sure that the result is true, I can’t seem to find anywhere claiming that it is. Has the textbook simply forgotten to specify that $\text{char}(k)\neq2$, or am I missing something? Any help would be much appreciated.
Having spent more time on this, I think I have solved the problem, and the result is in fact true. Arf defines non-singularity for quadratic forms of characteristic $2$ here, explained in English here. From these papers, we see that if $Q$ is non-singular over a field of characteristic $2$, we can write $$Q=(aX^2+XT+bT^2)+(cY^2+YZ+dZ^2)$$ for some $a,b,c,d\in k$ and an appropriate choice of coordinates. Let’s consider $aX^2+XT+bT^2$. If $a=b=0$ then we have $XT$ already, if say $b=0$ then we have $X(aX+T)$ so taking the inverse of the transformation sending $X\mapsto X$ and $T\mapsto aX+T$ we have $XT$. Then we assume $a,b\neq0$. Sending $X\mapsto\frac{1}{\sqrt{a}}X$ and $T\mapsto\frac{1}{\sqrt{b}}T$ we have $X^2+\alpha XT+T^2$ for $\alpha=\frac{1}{\sqrt{ab}}$. All square roots exist since $k$ is algebraically closed, and for the same reason we can also find a root $\beta$ of the polynomial $x^2+\alpha x+1$. Then sending $X\mapsto X+\frac{1}{\alpha^2}T$ and $T\mapsto\beta X+\frac{\alpha+\beta}{\alpha^2}T$ we have $XT$. This transformation is invertible since $$\begin{vmatrix}1 & \beta \\\frac{1}{\alpha^2} & \frac{\alpha+\beta}{\alpha^2}\end{vmatrix}=\frac{1}{\alpha}\neq0$$ We can repeat the same process for $Y$ and $Z$, and so we can write $Q=XT+YZ=XT-YZ$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f(x+y)=f(x)f(y)-g(x)g(y)$ and $g(x+y)=f(x)g(y)+f(y)g(x)$, with $f'(0)=0$, determine $[f(x)]^2+[g(x)]^2$ Given the expressions: $f(x+y)=f(x)f(y)-g(x)g(y)$ $g(x+y)=f(x)g(y)+f(y)g(x)$ the exercise is to show that $[f(x)]^2+[g(x)]^2$ is constant for all real $x$ and determine its value, knowing that $f$ and $g$ are real differentiable non-constant functions, and that $f'(0)=0$. I realized it looks like the $\sin$ and $\cos$ functions, so the answer must be $1$. To prove something that way, I tried showing that $f$ and $g$ were always on the interval $[-1,1]$. I have also tried to derivate each expression and plug in $x=y=0$ or only $y=0$, but was unable to develop the solution.
If we let $\phi(x)=f(x)+ig(x)$, then $\phi$ satisfies a functional equation $$ \phi(x+y)=\phi(x)\phi(y). $$ This gives $\phi(x)=\phi(x)\phi(0)$, so either $\phi \equiv 0$ or $\phi(0)=1$. Since $\phi \equiv 0$ is a trivial solution, we assume $\phi(0)=1$. Differentiating with $y$ and plugging $y=0$, we obtain $\phi'(x)=\phi'(0)\phi(x)$. Now, define $u(x)=e^{-\phi'(0)x}\phi(x)$ and observe that $$ u'(x)=e^{-\phi'(0)x}\left(\phi'(x)-\phi'(0)\phi(x)\right)=0. $$ This gives $u(x) = u(0)=\phi(0)=1$ and hence $\phi(x) = e^{\phi'(0)x}$. Since $$\phi'(0)=f'(0)+ig'(0)=ig'(0)$$ it follows $\phi(x) = e^{ig'(0)x}=e^{i\theta x}$ for some $\theta\in\Bbb R$, hence by Euler's identity $$ \left(f(x)\right)^2+\left(g(x)\right)^2 =\cos^2(\theta x)+\sin^2(\theta x) =1. $$ (Also note that trivial solution $\phi =0$ gives $\left(f(x)\right)^2+\left(g(x)\right)^2 =0$.)
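A quick numeric confirmation (my addition, not the answer's) that the family $f(x)=\cos(\theta x)$, $g(x)=\sin(\theta x)$ found above satisfies both functional equations and the constant-sum conclusion:

```python
import math
import random

random.seed(1)
theta = 1.7   # any real theta works; note f'(0) = -theta*sin(0) = 0

f = lambda x: math.cos(theta * x)
g = lambda x: math.sin(theta * x)

for _ in range(100):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert math.isclose(f(x + y), f(x) * f(y) - g(x) * g(y), abs_tol=1e-9)
    assert math.isclose(g(x + y), f(x) * g(y) + f(y) * g(x), abs_tol=1e-9)
    assert math.isclose(f(x) ** 2 + g(x) ** 2, 1.0, abs_tol=1e-12)
```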
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
If the median AM of a triangle ABC bisects the angle $\hat{A}$, then the triangle is an isosceles. Can we solve the above problem using only the criteria for congruent triangles (i.e., without using the fact that the sum of the angles of a triangle is $180^\circ$)?
This is the most basic approach, I believe. Consider the triangle $\triangle ABC$, in which $AD$ is both median and bisector. * *Extend $AD$ beyond $D$ to a point $C'$ with $DC'\cong AD$. *Then $\triangle ACD \cong \triangle C'BD$ by the SAS criterion: $CD \cong BD$ (median), $AD \cong C'D$ (construction), and the angles at $D$ are vertical. *Consequently you have $\angle BC'D \cong \angle CAD$ and $AC \cong BC'$. *For transitivity then $\angle BC'D \cong \angle BAD$, since $AD$ bisects $\angle A$. *Thus $\triangle ABC'$ is isosceles and $AB \cong BC'$. *But from transitivity and point 3. then $AB \cong AC$ and $\triangle ABC$ is isosceles.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Expected number of colors in a sampling of colored balls without replacement, Suppose there is a box containing differently colored balls. There are $G$ colors, each color having $n$ balls of this color, i.e. $G \times n$ balls. What is the expected number of different colors in a sample of size $s$ without replacement? The problem is similar to Expected number of different colors, but without replacement. In case of $G = 2$ it becomes pretty easy, so with $s$ values of 1 and 2. But in general case I can't seem to avoid double counting the combinations.
We have from first principles that the PGF in $u$ with the coefficient on $[u^q]$ representing the probability of $q$ different colors / coupons not being seen in a sample of size $s$ is given by $$\frac{1}{s!} {nG\choose s}^{-1} s! [z^s] \left(u + \sum_{k=1}^n \frac{n!}{(n-k)!} \frac{z^k}{k!}\right)^G.$$ This simplifies to $${nG\choose s}^{-1} [z^s] \left(u+\sum_{k=1}^n {n\choose k} z^k\right)^G = {nG\choose s}^{-1} [z^s] (u-1+(1+z)^n)^G.$$ As a sanity check we indeed have on setting $u=1$ $$ {nG\choose s}^{-1} [z^s] (1+z)^{nG} = 1.$$ For example, with four colors and four instances each we get for six draws the distribution $${16\choose 6}^{-1} [z^6] (u-1+(1+z)^4)^4 = {\frac {3\,{u}^{2}}{143}} +{\frac {60\,u}{143}}+{\frac {80}{143}}.$$ where e.g. the last term gives the probability that none of the colors are missing. We cannot have three colors missing, because that would leave only one color to cover all six draws, and we have only four instances of each color. With this PGF we can answer the question about the probability that $q$ colors are missing in a draw of $s$ items, which is $${nG\choose s}^{-1} [z^s] [u^q] (u-1+(1+z)^n)^G = {nG\choose s}^{-1} [z^s] {G\choose q} (-1+(1+z)^n)^{G-q} \\ = {nG\choose s}^{-1} [z^s] {G\choose q} \sum_{p=0}^{G-q} {G-q\choose p} (-1)^{G-q-p} (1+z)^{np}.$$ This yields for the probability $$\bbox[5px,border:2px solid #00A000]{ {nG\choose s}^{-1} {G\choose q} \sum_{p=0}^{G-q} {G-q\choose p} (-1)^{G-q-p} {np\choose s}}$$ which is inclusion-exclusion. Returning to the main question we thus have for the expectation of coupons that did not occur $${nG\choose s}^{-1} \left.\frac{\partial}{\partial u} [z^s] (u-1+(1+z)^n)^G \right|_{u=1} \\ = {nG\choose s}^{-1} [z^s] \left. G (u-1+(1+z)^n)^{G-1} \right|_{u=1} = {nG\choose s}^{-1} [z^s] G (1+z)^{n(G-1)}.$$ We get for the number of coupons that did occur $$\bbox[5px,border:2px solid #00A000]{ G - G {nG\choose s}^{-1} {nG-n \choose s}.}$$ E.g. 
when we draw one coupon we obtain $$G - G \frac{1}{nG} (nG-n) = G - G + G\frac{n}{nG} = 1$$ as expected. Also note that we obtain the value $G$ when $s\gt nG-n$ (second binomial coefficient is zero). This is because the maximum coverage with $G-1$ colors is $nG-n$ and with the next sample we must use the last missing color.
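The $G=4$, $n=4$, $s=6$ distribution and the boxed closed form can be confirmed by exhaustive enumeration over all $\binom{16}{6}$ draws (my check, not part of the derivation):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

G, n, s = 4, 4, 6
balls = [(color, i) for color in range(G) for i in range(n)]

counts = {}
total = 0
for draw in combinations(balls, s):
    q = G - len({color for color, _ in draw})   # colors NOT seen
    counts[q] = counts.get(q, 0) + 1
    total += 1

dist = {q: Fraction(c, total) for q, c in counts.items()}
assert dist == {0: Fraction(80, 143), 1: Fraction(60, 143), 2: Fraction(3, 143)}

# expected number of colors that DID occur matches the boxed closed form
expected = sum((G - q) * prob for q, prob in dist.items())
assert expected == G - G * Fraction(comb(n * G - n, s), comb(n * G, s))
```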
{ "language": "en", "url": "https://math.stackexchange.com/questions/3143895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simple integral with $e^x$ - how to decompose it? How to calculate this integral? $$\int \frac{e^{2x}+2}{e^x+1}dx$$ I have tried various substitions such as: $t = e^x, t = e^x + 1, t = e^x +2, t = e^{2x}$ and none seem to work. According to wolframalpha I can simplify this expression into: $$\frac{e^{2x}+2}{e^x +1} = e^x + \frac{3}{e^x+1} - 1$$ And then it'd be rather simple. But still no idea how to decompose it like that. Any tips?
\begin{align} \frac{e^{2x}+2}{e^x +1}&=\frac{(e^{x})^2+2e^x+1-2e^x+1}{e^x +1}\\ &=\frac{(e^x+1)^2-2e^x+1}{e^x +1}\\ &=e^x+1+\frac {-2e^x-2+3}{e^x +1}\\ &=e^x+1-2+\frac {3}{e^x +1}\\ &=e^x + \frac{3}{e^x+1} - 1 \end{align}
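A quick numeric check of the final decomposition, together with one antiderivative of the decomposed form — $F(x)=e^x-x-3\log(1+e^{-x})$ is my computation, worth verifying against a finite difference:

```python
import math

f = lambda x: (math.exp(2 * x) + 2) / (math.exp(x) + 1)
decomposed = lambda x: math.exp(x) + 3 / (math.exp(x) + 1) - 1
F = lambda x: math.exp(x) - x - 3 * math.log(1 + math.exp(-x))

h = 1e-6
for x in [-2.0, -0.5, 0.0, 0.7, 3.0]:
    assert math.isclose(f(x), decomposed(x), rel_tol=1e-12)
    # central difference of F reproduces the integrand
    assert math.isclose((F(x + h) - F(x - h)) / (2 * h), f(x), rel_tol=1e-5)
```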
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How can we construct an 8x8 grid with minimal squares Links to similar questions: What is the minimum number of squares needed to produce an $ n \times n $ grid? How can we draw $14$ squares to obtain an $8 \times 8$ table divided into $64$ unit squares? The second link is a similar question, but at the bottom, someone has provided a graphical representation of the problem. If anyone could provide a conclusive solution to this question, it'd be much appreciated.
For all $n\ge 4$, the optimal number of squares is $2(n-1)$. A construction, taken from joriki's answer, is as follows. If $n$ is even, * *$n-2$ squares have lower left corner $(0,0)$, whose widths comprise all integers between $1$ and $n-1$ except for $n/2$. *$n-2$ squares have upper right corner $(n,n)$, with these same widths. *Two squares have width $n/2$. One has its lower right corner at $(n,0)$, the other has its upper left corner at $(0,n)$. If $n$ is odd, * *$n-3$ squares have lower left corner $(0,0)$, whose widths comprise all integers between $1$ and $n-1$ except for $(n-1)/2$ and $(n+1)/2$. *$n-3$ squares have upper right corner $(n,n)$, with these same widths. *Two squares have width $(n-1)/2$. One has its lower right corner at $(n,0)$, the other has its upper left corner at $(0,n)$. *Two squares have width $(n+1)/2$. One has its lower right corner at $(n,0)$, the other has its upper left corner at $(0,n)$. Here is a proof that this is optimal, taken from joriki's answer. Consider the $4(n-1)$ unit line segments in the grid which have one endpoint on the outside of the grid and the other endpoint inside the grid. Each square can cover at most two of these line segments. Therefore, in order to cover all of them, you need at least $4(n-1)/2=2(n-1)$ squares.
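The construction and the count $2(n-1)$ can be checked mechanically. The script below (my own, not from the linked answers) builds the squares and verifies that every unit grid segment lies on some square's boundary; for the $8\times 8$ case it produces exactly $14$ squares:

```python
def squares(n):
    """The 2(n-1) squares of the construction, as (x0, y0, side)."""
    if n % 2 == 0:
        widths = [w for w in range(1, n) if w != n // 2]
        extra = [n // 2]
    else:
        half = [(n - 1) // 2, (n + 1) // 2]
        widths = [w for w in range(1, n) if w not in half]
        extra = half
    sq = [(0, 0, w) for w in widths]            # lower left corner (0, 0)
    sq += [(n - w, n - w, w) for w in widths]   # upper right corner (n, n)
    sq += [(n - w, 0, w) for w in extra]        # lower right corner (n, 0)
    sq += [(0, n - w, w) for w in extra]        # upper left corner (0, n)
    return sq

def boundary(x0, y0, w):
    segs = set()
    for x in range(x0, x0 + w):
        segs.add(('h', x, y0))
        segs.add(('h', x, y0 + w))
    for y in range(y0, y0 + w):
        segs.add(('v', x0, y))
        segs.add(('v', x0 + w, y))
    return segs

def covers_grid(n):
    covered = set()
    for s in squares(n):
        covered |= boundary(*s)
    grid = {('h', x, y) for x in range(n) for y in range(n + 1)}
    grid |= {('v', x, y) for x in range(n + 1) for y in range(n)}
    return grid <= covered

for n in range(4, 10):
    assert len(squares(n)) == 2 * (n - 1)
    assert covers_grid(n)
```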
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Missing constraint While reading this paper, I stumbled over the statement (1, 8) about linear independence in trees. Just to make sure, that I understand it correctly: He means conical independence and forces $a_{i, k} \geq 0$?. Linear independence as soon as the number of vertices $n$ is larger than the dimension should be false.
No, this is linear independence. A tree always has $n$ vertices and $n-1$ edges, so here we are talking about the linear independence of $n-1$ elements of an $n$-dimensional space. This should always be possible; there is no dimension argument against it. Another way to phrase the argument for proving this linear independence is by induction on the size of the tree. Suppose we have an $n$-vertex tree and vertex $s$ is a leaf with neighbor $t$. Then if $$ \sum_{vw \in E(T), v<w} a_{vw}(x_v - x_w) = 0 $$ the coefficient of $x_s$ is $a_{st}$ alone, so $a_{st} = 0$. Therefore $$ \sum_{vw \in E(T - s), v<w} a_{vw} (x_v - x_w) = 0 $$ and we have reduced the problem to one about the $(n-1)$-vertex tree $T-s$.
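The induction argument can be cross-checked numerically; here is a small NumPy illustration (mine, not the answer's), treating each edge $vw$ of a sample tree as the vector $x_v - x_w$:

```python
import numpy as np

# a sample tree on n = 6 vertices; a tree has n - 1 = 5 edges
n = 6
edges = [(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)]

# each edge vw contributes the vector x_v - x_w in R^n
M = np.zeros((len(edges), n))
for row, (v, w) in enumerate(edges):
    M[row, v], M[row, w] = 1.0, -1.0

# n - 1 vectors in an n-dimensional space, and they are independent
assert np.linalg.matrix_rank(M) == n - 1

# an extra edge closing a cycle (0-1-2-0) destroys the independence
M_cycle = np.vstack([M, np.eye(n)[0] - np.eye(n)[2]])
assert np.linalg.matrix_rank(M_cycle) == n - 1
```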
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
System of equations can be interpreted as intersection of $3$ planes in $3$-dimensional space I'm struggling with this question. I have worked out that attempting to solve for the $x_i$ leads to a contradiction, and that $$\begin{vmatrix}1&4&6\\1&-2&1\\2&14&17\end{vmatrix}=0$$ So there is no solution for $x$. But what are the planes? * *Three parallel planes *two parallel planes and one intersecting plane *three planes that intersect the other two but not at the same location So I have narrowed down the answer to 3, 4, 5. Which one is it and how do we know?
Labelling the coefficient matrix's rows as $R_1,R_2,R_3$, we have $R_3=3R_1-R_2$ with $R_1$ and $R_2$ independent; however, for the right-hand sides, $3\cdot18-(-6)\ne-6$, so the third equation is not the same linear combination of the first two. This means that * *the planes corresponding to $R_1$ and $R_2$ intersect in a line *$R_3$'s plane is parallel to that line of intersection, but not parallel to the first two planes Therefore the fifth answer is correct: there is no solution for $x$ even though none of the planes are parallel.
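The same conclusion can be verified with a rank computation. In this sketch (mine), the right-hand side $(18,-6,-6)$ is inferred from the arithmetic $3\cdot 18-(-6)\ne-6$, since the question does not reproduce it:

```python
import numpy as np

A = np.array([[1.0, 4.0, 6.0],
              [1.0, -2.0, 1.0],
              [2.0, 14.0, 17.0]])
b = np.array([18.0, -6.0, -6.0])   # assumed right-hand side

assert np.linalg.matrix_rank(A) == 2       # rows dependent: R3 = 3 R1 - R2
aug = np.column_stack([A, b])
assert np.linalg.matrix_rank(aug) == 3     # but the system is inconsistent

# no two planes are parallel: no pair of normal vectors is proportional
for i in range(3):
    for j in range(i + 1, 3):
        assert np.linalg.matrix_rank(A[[i, j]]) == 2
```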
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simplifying $\prod\limits_{k\neq j=0}^{n-1}\frac1{\lambda_{n,k}-\lambda_{n,j}}$ for $\lambda_{n,k}=\exp\frac{i\pi(2k+1)}{n}$ I have been able to show that for $n\in\Bbb N_{\geq2}$ $$\phi(n)=\int_0^1\frac{dx}{x^n+1}=\sum_{k=0}^{n-1}\Gamma_{n,k}\log\frac{\lambda_{n,k}-1}{\lambda_{n,k}}$$ Where $$\lambda_{n,k}=\exp\frac{i\pi(2k+1)}{n}$$ And $$\Gamma_{n,k}=\prod_{k\neq j=0}^{n-1}\frac1{\lambda_{n,k}-\lambda_{n,j}}$$ And I was wondering: how do we simplify $\Gamma_{n,k}$ to ease the manual calculation of $\phi(n)$ values. The integral is always real, so I am sure there is a major way we can simplify $\Gamma_{n,k}$, but I have been so far unable to find it. I do suspect however that the product $$P_n=\prod_{k=0}^{n-1}\Gamma_{n,k}$$ May play a significant role in finding the simplification I seek. For those interested, a proof. Note that $x^n+1$ may be factored as $$x^n+1=\prod_{k=0}^{n-1}(x-\lambda_{n,k})$$ Hence $$\phi(n)=\int_0^1\prod_{k=0}^{n-1}\frac1{x-\lambda_{n,k}}dx$$ Then define $\Gamma_{n,k}$ by saying that $$\prod_{k=0}^{n-1}\frac1{x-\lambda_{n,k}}=\sum_{k=0}^{n-1}\frac{\Gamma_{n,k}}{x-\lambda_{n,k}}$$ Multiplying both sides by $\prod_{j=0}^{n-1}(x-\lambda_{n,j})$: $$1=\sum_{k=0}^{n-1}\frac{\Gamma_{n,k}}{x-\lambda_{n,k}}\prod_{j=0}^{n-1}(x-\lambda_{n,j})$$ $$1=\sum_{k=0}^{n-1}\Gamma_{n,k}\prod_{k\neq j=0}^{n-1}(x-\lambda_{n,j})$$ So for any integer $0\leq m\leq n-1$ we may plug in $x=\lambda_{n,m}$ and simplify to get $$\Gamma_{n,m}=\prod_{m\neq j=0}^{n-1}\frac1{\lambda_{n,m}-\lambda_{n,j}}$$ And our result follows directly. Perhaps another motivation for easing manual calculation of this product would be that $$\sum_{k=0}^{\infty}\frac{(-1)^k}{nk+1}=\phi(n)$$ Which brings about a plethora of interesting closed forms. 
Edit: A little progress We define $$c_{n,j}=\operatorname{Re}\lambda_{n,j}=\cos\frac{\pi(2j+1)}{n}$$ And $$s_{n,j}=\operatorname{Im}\lambda_{n,j}=\sin\frac{\pi(2j+1)}{n}$$ So $$\log\frac{\lambda_{n,k}-1}{\lambda_{n,k}}=\log\left(1-\lambda_{n,k}^{-1}\right)=\log\left(1-c_{n,k}+is_{n,k}\right)$$ And we also see that $$\begin{align} \prod_{k\neq j=0}^{n-1}\frac1{\lambda_{n,k}-\lambda_{n,j}}&=\prod_{k\neq j=0}^{n-1}\frac1{e^{i\pi(2k+1)/n}-e^{i\pi(2j+1)/n}}\\ &=\prod_{k\neq j=0}^{n-1}\frac{e^{-i\pi(2k+1)/n}}{1-e^{i\pi(2j-2k)/n}}\\ &=\lambda_{n,k}^{1-n}\prod_{k\neq j=0}^{n-1}\frac12\left(1+i\cot\frac{\pi(j-k)}n\right)\\ \Gamma_{n,k}&=\frac{\lambda_{n,k}^{1-n}}{2^{n-1}}\prod_{k\neq j=0}^{n-1}\left(1+i\cot\frac{\pi(j-k)}n\right) \end{align}$$ But the remaining product I do not know how to deal with.
Defining the polynomial \begin{align} P(x)&=x^n+1\\ &=\prod_{j=0}^{n-1}\left( x- \lambda_{n,j}\right) \end{align} we can express its derivative at $x=\lambda_{n,k}$ as: \begin{align} P'(\lambda_{n,k})&=\prod_{k\neq j=0}^{n-1}\left( \lambda_{n,k}-\lambda_{n,j} \right)\\ &=\frac{1}{\Gamma_{n,k}} \end{align} But we have also $P'(x)=nx^{n-1}=n\tfrac{x^n}{x}$. Thus, as $\left(\lambda_{n,k} \right)^n=-1$, \begin{equation} P'(\lambda_{n,k})=n\frac{-1}{\lambda_{n,k}} \end{equation} Finally, \begin{equation} \Gamma_{n,k}=-\frac{\lambda_{n,k}}{n} \end{equation} This trick comes rather naturally if the integral is evaluated by the residue method, for the function $f(z)=(1+z^n)^{-1}\ln\left(\tfrac z{1-z}\right)$ along the keyhole contour.
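Both the identity $\Gamma_{n,k}=-\lambda_{n,k}/n$ and the resulting closed form for $\phi(n)$ are easy to verify numerically (my check, not part of the answer; the principal branch of the complex logarithm is safe here because $\operatorname{Re}\left(1-\lambda^{-1}\right)>0$):

```python
import cmath

def lam(n, k):
    return cmath.exp(1j * cmath.pi * (2 * k + 1) / n)

def gamma(n, k):
    # the defining product for Gamma_{n,k}
    prod = 1.0 + 0.0j
    for j in range(n):
        if j != k:
            prod /= lam(n, k) - lam(n, j)
    return prod

for n in range(2, 9):
    for k in range(n):
        assert abs(gamma(n, k) + lam(n, k) / n) < 1e-9   # Gamma = -lambda/n

def phi_closed(n):
    return sum(-lam(n, k) / n * cmath.log((lam(n, k) - 1) / lam(n, k))
               for k in range(n))

def phi_quad(n, N=200_000):
    # midpoint rule for the integral of 1/(x^n + 1) over [0, 1]
    return sum(1.0 / (((i + 0.5) / N) ** n + 1) for i in range(N)) / N

for n in (2, 3, 5):
    val = phi_closed(n)
    assert abs(val.imag) < 1e-9          # the sum is real, as expected
    assert abs(val.real - phi_quad(n)) < 1e-6
```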
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
$\| X'(t) \| = (\| X(t) \|)'$ Let $X : \mathbb{R} \to \mathbb{R}^n$ be a $C^1$ function. Let $\| \cdot \|$ be the norm $\| v \| = \max_{1 \leq i \leq n} | v_i |$. Then is it true that: $$\| X'(t) \| = (\| X(t) \|)'$$ ? I am wondering, in general: if I have any function $f : \mathbb{R}^n \to \mathbb{R}^p$ and a norm $N$ on $\mathbb{R}^p$, is it always possible to swap the norm and the differential operator, or the norm and the integral? Thank you.
For $n=1$ the identity function $X(x)=x$ is a $C^{1}$ function. In this case $|X(x)|$ is not even differentiable at $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the matrix of $A$? We know the following about the linear map $A$: $\mathbb{R}$$^3$ -> $\mathbb{R}$$^3$: $A$ is orthogonal $A$(1,2,2) = (1,2,2) The vector (2,0,-1) is eigenvector for eigenvalue -1 dim $E_1$ = 1 Determine the matrix of $A$ I'm not quite sure which properties to use, such that i can create a matrix $A$. Any help/tips? on proceeding this particular question?
From the given conditions you have two equations $Av_1=v_1$ and $Av_2=-v_2$. Notice that here additionally $v_1^Tv_2=0$, which means that both vectors are orthogonal. You can also find the image of a third vector by using as input the cross product $v_3 = v_1 \times v_2$, which is orthogonal to both; its image must be $\pm (v_1 \times v_2)$ (a transformation with an orthogonal matrix preserves lengths and angles of vectors). With this you have the transformation $A[v_1 \ \ v_2 \ \ v_1 \times v_2] = [v_1 \ \ -v_2 \ \ \pm v_1 \times v_2]$, which leads to the direct calculation $A= [v_1 \ \ -v_2 \ \ \pm v_1 \times v_2][v_1 \ \ v_2 \ \ v_1 \times v_2]^{-1} $ (two solutions). Additionally, since $\dim E_1= 1$ means that $1$ is an eigenvalue with multiplicity $1$, you can exclude one solution (the $-1$ is then an eigenvalue with multiplicity $2$).
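Here is a numerical sketch of this recipe (my code, not part of the answer), choosing the image of $v_1\times v_2$ to be $-(v_1\times v_2)$ so that the eigenvalue $1$ has multiplicity $1$:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 2.0])    # eigenvector for +1
v2 = np.array([2.0, 0.0, -1.0])   # eigenvector for -1; note v1 . v2 = 0
v3 = np.cross(v1, v2)             # orthogonal to both

P = np.column_stack([v1, v2, v3])
Q = np.column_stack([v1, -v2, -v3])
A = Q @ np.linalg.inv(P)

assert np.allclose(A.T @ A, np.eye(3))            # A is orthogonal
assert np.allclose(A @ v1, v1)
assert np.allclose(A @ v2, -v2)
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [-1.0, -1.0, 1.0])

# this A is the reflection through the line spanned by v1
assert np.allclose(A, 2 * np.outer(v1, v1) / 9 - np.eye(3))
```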
{ "language": "en", "url": "https://math.stackexchange.com/questions/3144949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit of a function, given the recurrence relation Let $f(n)$ be a function defined for $n\ge 2$ and $n\in N$ which follows the recurrence(for $n\ge 3$) $$\displaystyle f(n)=f(n-1) +\displaystyle \frac {4\cdot (-1)^{(n-1)} \cdot \left(\displaystyle \sum_{d \vert (n-1)} (\chi (d))\right) }{n-1}$$ where $d\vert (n-1)$ means $d$ divides $(n-1)$ i.e. $d$ is divisor of $(n-1)$ .Also assume that $f(2)=-4$. Where I define $$\chi(d) = \begin{cases} 1, & \text{if $d=4k+1$ where $k$ is a whole number} \\ -1, & \text{if $d=4k+3$ where $k$ is a whole number} \\ 0, & \text {if $d$ is even natural number} \end{cases}$$. Then find $$\lim_{n\to \infty} f(n)$$ First of all this is not at all an assignment or homework problem. It is just a question I came up with, when I was playing with a limit consisting of tedious geometry. Second thing, I tried to find an explicit formula for the function but it seems impossible for me. Also I tried to use the recurrence and guess the approaching value. But the function I guess approaches to some limit (which I don't know) very slowly and hence I am not able to guess the limit. Any guidance and help towards the solution would be quite helpful.
A preliminary lemma relates your $\chi$ function with the Gaussian integers: $$ 4\sum_{d\mid n}\chi(d) = r_2(n) = \left|\{(a,b)\in\mathbb{Z}^2:a^2+b^2=n\}\right| $$ Unwinding the recurrence (the given value $f(2)=-4$ is exactly the $m=1$ term), your question is equivalent to the determination of the series $$ L=\sum_{n\geq 1}\frac{(-1)^{n} r_2(n)}{n} $$ which is conditionally convergent by the Gauss circle problem: the average value of $r_2(n)$ is $\pi$, i.e. the area of the unit circle. Since $\chi_4=\chi$ is supported on the odd integers, the algebra of Dirichlet series gives the factorization $$4\sum_{n\geq 1}\frac{(-1)^{n}}{n}\sum_{d\mid n}\chi_4(d)=-4\,L(\chi_4,1)\,\eta(1)=-4\cdot\frac{\pi}{4}\cdot\log 2=\color{red}{-\pi\log 2} $$ where $L(\chi_4,1)=\frac{\pi}{4}$ is Leibniz's series for the non-principal character $\!\!\!\pmod{4}$ and $\eta(1)=\log 2$ is the alternating harmonic series.
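The preliminary lemma $4\sum_{d\mid n}\chi(d)=r_2(n)$ is easy to test by brute force (my check, not part of the answer):

```python
import math

def chi(d):
    if d % 2 == 0:
        return 0
    return 1 if d % 4 == 1 else -1

def r2(n):
    # count lattice points (a, b) with a^2 + b^2 = n
    m = math.isqrt(n)
    return sum(1 for a in range(-m, m + 1) for b in range(-m, m + 1)
               if a * a + b * b == n)

for n in range(1, 300):
    assert 4 * sum(chi(d) for d in range(1, n + 1) if n % d == 0) == r2(n)

limit = -math.pi * math.log(2)   # the claimed value, about -2.17759
```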
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I prove that $(a_1+a_2+\dotsb+a_n)(\frac{1}{a_1}+\frac{1}{a_2}+\dotsb+\frac{1}{a_n})\geq n^2$ I've been struggling for several hours, trying to prove this horrible inequality: $(a_1+a_2+\dotsb+a_n)\left(\frac{1}{a_1}+\frac{1}{a_2}+\dotsb+\frac{1}{a_n}\right)\geq n^2$, where each $a_i$ is positive and $n$ is a natural number. First I tried the usual "mathematical induction" method, but it was to no avail, since I could not show it would be true if $n=k+1$. Suppose the inequality holds true when $n=k$, i.e., $(a_1+a_2+\dotsb+a_k)\left(\frac{1}{a_1}+\frac{1}{a_2}+\dotsb+\frac{1}{a_k}\right)\geq k^2$. This is true if and only if $(a_1+a_2+\dotsb+a_k+a_{k+1})\left(\frac{1}{a_1}+\frac{1}{a_2}+\dotsb+\frac{1}{a_k}+\frac{1}{a_{k+1}}\right) -a_{k+1}\left(\frac{1}{a_1}+\dotsb+\frac{1}{a_k}\right)-\frac{1}{a_{k+1}}(a_1+\dotsb+a_k)-\frac{a_{k+1}}{a_{k+1}} \geq k^2$. And I got stuck here. The question looks like I have to use AM-GM inequality at some point, but I do not have a clue. Any small hints and clues will be appreciated.
Here is the proof by induction that you wanted. I added a more exact version of the identity used in the proof at the end. Let $s_n =u_nv_n $ where $u_n=\sum_{k=1}^n a_k, v_n= \sum_{k=1}^n \dfrac1{a_k} $. Then, assuming $s_n \ge n^2$, $\begin{array}\\ s_{n+1} &=u_{n+1}v_{n+1}\\ &=(u_n+a_{n+1}) (v_n+\dfrac1{a_{n+1}})\\ &=u_nv_n+u_n\dfrac1{a_{n+1}}+a_{n+1}v_n+1\\ &=s_n+u_n\dfrac1{a_{n+1}}+a_{n+1}v_n+1\\ &\ge n^2+u_n\dfrac1{a_{n+1}}+a_{n+1}v_n+1\\ \end{array} $ So it is sufficient to show that $u_n\dfrac1{a_{n+1}}+v_na_{n+1} \ge 2n $. By simple algebra, if $a, b \ge 0$ then $a+b \ge 2\sqrt{ab} $. (Rewrite as $(\sqrt{a}-\sqrt{b})^2\ge 0$ or, as an identity, $a+b =2\sqrt{ab}+(\sqrt{a}-\sqrt{b})^2$.) Therefore $\begin{array}\\ u_n\dfrac1{a_{n+1}}+v_na_{n+1} &\ge 2\sqrt{(u_n\dfrac1{a_{n+1}})(v_na_{n+1})}\\ &= 2\sqrt{u_nv_n}\\ &=2\sqrt{s_n}\\ &\ge 2\sqrt{n^2} \qquad\text{by the induction hypothesis}\\ &=2n\\ \end{array} $ and we are done. I find it interesting that $s_n \ge n^2$ is used twice in the induction step. Note that, if we use the identity above, $a+b =2\sqrt{ab}+(\sqrt{a}-\sqrt{b})^2$, we get this: $\begin{array}\\ s_{n+1} &=s_n+u_n\dfrac1{a_{n+1}}+a_{n+1}v_n+1\\ &=s_n+2\sqrt{u_n\dfrac1{a_{n+1}}a_{n+1}v_n}+1+(\sqrt{u_n\dfrac1{a_{n+1}}}-\sqrt{a_{n+1}v_n})^2\\ &=s_n+2\sqrt{s_n}+1+\dfrac1{a_{n+1}}(\sqrt{u_n}-a_{n+1}\sqrt{v_n})^2\\ &=(\sqrt{s_n}+1)^2+\dfrac1{a_{n+1}}(\sqrt{u_n}-a_{n+1}\sqrt{v_n})^2\\ &\ge(\sqrt{s_n}+1)^2\\ \end{array} $ with equality if and only if $a_{n+1} =\sqrt{\dfrac{u_n}{v_n}} =\sqrt{\dfrac{\sum_{k=1}^n a_k}{\sum_{k=1}^n \dfrac1{a_k}}} $.
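As a quick empirical complement (my addition), the inequality and its equality case — all $a_i$ equal — are easy to spot-check:

```python
import random

random.seed(0)

def s(a):
    return sum(a) * sum(1.0 / x for x in a)

for _ in range(1000):
    n = random.randint(1, 12)
    a = [random.uniform(0.01, 100.0) for _ in range(n)]
    assert s(a) >= n * n - 1e-9

# equality exactly when all terms coincide: s = (7c)(7/c) = 49
assert abs(s([3.1415] * 7) - 49.0) < 1e-9
```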
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Problem of probability distribution A box contains $N$ tickets numbered $1, 2, 3,...., N$. If $m$ tickets are drawn one by one from the box without replacement, then find the mean of the sum of the numbers obtained on the tickets drawn. I have approached the sum as below. Let $X_i$ denote the number on the $i$th ticket drawn, where $i= 1, 2,..., m$. The sum of the numbers obtained on the tickets drawn is $S= \sum_{i=1}^{m}X_i$ Hence, the required mean $= E(S) =E(\sum_{i=1}^{m}X_i) =\sum_{i=1}^{m}E(X_i).$ Each $X_i$ can take the values $1, 2,..., N$ with probability $\frac{1}{N}$. Then $E(X_i)= \frac{N+1}{2}$ So, $E(S)= \frac{m(N+1)}{2}$ But my doubt is in the above line 'Each $X_i$ can take the values $1, 2,..., N$ with probability $\frac{1}{N}$.' Because when the drawing is done without replacement, after each draw, the number of tickets remaining decreases by 1. So the number of values left for $X_2$ is $N-1$, not $N$. So how can the probability be $\frac{1}{N}$? Will anyone please explain where is the mistake? Thanks in advance.
Your calculation is actually correct, though it is not trivial to see it. On one hand your argument is right: after the first number is drawn, the second number ($X_2$) only has one of $N-1$ possible values. Even worse, if $X_1=1$, then the expected value of $X_2$ will be $\frac{2+3+\ldots+N}{N-1}=\frac{N^2+N-2}{2(N-1)} > \frac{N+1}2.$ OTOH, the first number could also be $X_1=N$, making the expected value for $X_2$ smaller than $\frac{N+1}2$. In the end, all of this will balance out. It should be pretty intuitive to see that if you don't consider what happens before, the value of $X_i$ can be $1,2,\ldots,N$, each with the same probability $1 \over N$. Overall, there is no higher probability to draw a $4$ than a $7$ on the third draw, for example. The step that most people have problems with is when you wrote $$E(\sum_{i=1}^mX_i) = \sum_{i=1}^mE(X_i)$$ This is correct, but most people's intuition is against this, because they say "the $X_i$ are dependent". It turns out, using the integral definition, that the equation is true even if the $X_i$ are dependent (which they of course are in our case).
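If you want to convince yourself numerically, here is a brute-force enumeration for small parameters (a Python sketch; the values $N=6$, $m=3$ are arbitrary):

```python
from itertools import permutations
from fractions import Fraction

def mean_sum(N, m):
    """Average of the sum of m tickets drawn without replacement from 1..N,
    computed exactly by enumerating all ordered draws."""
    draws = list(permutations(range(1, N + 1), m))
    return Fraction(sum(sum(d) for d in draws), len(draws))

result = mean_sum(6, 3)   # formula predicts m(N+1)/2 = 3*7/2 = 21/2
```
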
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Example of series where $\sum a_n$ convergent but $\sum n {a_n}^2$ divergent? Can anyone suggest an example of a sequence $\{a_n\}_n\subset (0,\infty)$ such that $\sum a_n$ is convergent but $\sum n {a_n}^2$ is divergent?
Since the $a_n$ must be on the order of $\frac{1}{n}$ infinitely often, we think about sparseness, so something like this will do: $a_{2^m}=\frac{1}{m^2+1}$, all other $a$'s being very small, say $a_n = 2^{-n}$ if $n$ is not a power of two. Then $\sum a_n$ converges, being dominated by $\sum_m \frac{1}{m^2+1} + \sum_n 2^{-n} < \infty$, while for $n=2^m$ we have $na_n^2 = \frac{2^m}{(m^2+1)^2}$, which obviously goes to infinity; so the terms of $\sum n {a_n}^2$ are unbounded and the series is divergent.
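A quick numeric illustration (a Python sketch): the terms of $\sum na_n^2$ along $n=2^m$ blow up, while the partial sums of $\sum a_n$ stay bounded.

```python
# Term of sum(n * a_n^2) at n = 2^m, where a_{2^m} = 1/(m^2 + 1).
def term(m):
    n = 2 ** m
    a = 1.0 / (m * m + 1)
    return n * a * a

big = term(30)   # 2^30 / 901^2, already well above 1000

# Partial sum of a_n stays bounded: the powers of two contribute
# sum 1/(m^2+1), everything else is dominated by sum 2^-n <= 1.
partial = sum(1.0 / (m * m + 1) for m in range(1, 10**5)) + 1.0
```
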
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving there are infinitely many rational numbers in $[x,y]$ Prove if $x$ and $y$ are real numbers with $x \lt y$, then there are infinitely many rational numbers in the interval $[x,y]$. What I got so far: Let $x,y \in \Bbb R$ with $x \lt y$ Let $S = [x,y]$ By the density of $\Bbb Q$ in $\Bbb R$, $\exists r \in \Bbb Q$ such that $x \lt r \lt y$ where $r \in S$. This is where I got stuck.
Okay, so I'll give it another shot given the feedback. Proof: Let $x,y \in \Bbb R$ with $x \lt y$ and $S = [x,y]$ Suppose there are only $n$ rational numbers between $x$ and $y$ such that:$$x \lt r_1 \lt \cdot \cdot \cdot \lt r_n \lt y$$ But since $\Bbb Q$ is dense in $\Bbb R$, there exists $r_{n+1} \in \Bbb Q$ such that: $$ x \lt r_{n+1} \lt r_1 \lt \cdot \cdot \cdot \lt r_n \lt y$$ which contradicts our assumption that there are only $n$ rational numbers in $[x,y]$. Therefore, there must be infinitely many rational numbers in the interval $[x,y]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Probability of Prime within radius around number Here is my question: do we have any kind of estimate about $p_{k, d}(n)$ the probability that there are at least $k$ prime numbers in a radius of $d$ around $n$? Do you have any suggestions regarding related work? For instance, we know that for $n$ there is a prime $p : n\leq p \leq 2n $ (Tchebychev, 1850), meaning: $p_{1, n/2}(\frac{3n}{2}) = 1, \forall n>1$ Also since it has been shown that there are infinitely many prime gaps at most 246: $p_{1, 246}(n) \neq 0, \forall n>1$ $^1$ I believe 246 is the smallest, though 2 is a well known conjecture
Which other unsolved problems have necessary restrictions on the prime gaps? is a related question I just got answered. As the comments on your question point out, though, there's not really a restriction. Primorials (products of all primes up to a number) have potentially massive gaps nearby: you can guarantee that all numbers from the primorial plus or minus 2, up to the primorial plus or minus one less than the first prime not in the primorial, are composite. For $30=2\cdot3\cdot5$ you get that all numbers in the ranges 24-28 and 32-36 are necessarily composite (divisible by a prime in the factorization of 30). Unsolved conjectures put bounds on $d$ for all $k$ values. Goldbach has Bertrand's postulate as a necessary condition. Legendre implies that two primes exist between $y^2$ and $(y+2)^2$, for $y$ a natural number. Grimm's implies that $d<\pi(n)$ for $k=1$, for almost all (all but finitely many) $n$. If not, then we have a pigeonhole contradiction.
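The claim about $30=2\cdot3\cdot5$ is easy to check directly (a small Python sketch):

```python
from math import gcd

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primorial = 30  # 2 * 3 * 5; the first prime not dividing it is 7
guaranteed = [primorial + k for k in range(2, 7)] + [primorial - k for k in range(2, 7)]
all_composite = all(not is_prime(n) for n in guaranteed)
# Each of them shares a factor with 30, which is why they must be composite:
shares_factor = all(gcd(n, primorial) > 1 for n in guaranteed)
```
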
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examples of non-unitary isometries on finite dimensional Hilbert spaces? I was reading the question A Finite Dimensional non-Unitary Isometry?, which gives an example of a non unitary isometry which is a map $T: R \rightarrow R^2 $. This question is based on a previous question Difference between an isometric operator and a unitary operator on a Hilbert space, in which there is an example of non-unitary isometry in an infinite-dimensional Hilbert space. Are there any examples of operators on finite dimensional Hilbert spaces $V: H_A \rightarrow H_A$ which have $V^\dagger V = \mathbb{I}$ but $V V^\dagger \neq \mathbb{I}$, or does isometry imply unitarity in this special case?
Very generally, if $X$ is a finite-dimensional vector space and $A,B:X\to X$ are linear maps such that $AB=1$, then $BA=1$. Indeed, if $AB=1$, then $A$ and $B$ must be invertible (consider their determinants), and so $$BA=BA(BB^{-1})=B(AB)B^{-1}=BB^{-1}=1.$$ From a different perspective, a linear isometry $T:X\to Y$ between two Hilbert spaces is just a map that is unitary onto its image, i.e. $T$ is unitary when considered as a map $X\to T(X)$. This implies $T(X)$ has the same dimension as $X$. So if $X$ and $Y$ have the same finite dimension, $T(X)$ must be all of $Y$ and so $T$ must actually be a unitary. What's going on in infinite dimensions is that $Y$ can have a proper subspace of the same dimension, but that can't happen in finite dimensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3145778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Find all solutions, if any, to the equation $[28][x]-[22] = [33]$ in $\mathbb{Z}_{51}$. Find all solutions, if any, to the equation $[28][x]-[22] = [33]$ in $\mathbb{Z}_{51}$. I know this simplifies to $[28][x] = [55]$, which can be rewritten as $28x \equiv 55 \bmod{51}$. From here do I use SMT and split it, then proceed to use CRT? Any suggestions are much appreciated, thanks.
You can certainly solve the congruence $\ 28x \equiv 55 \bmod{51}\ $ (which you can rewrite as $\ 7x \equiv 1 \bmod{51}\ $) by using the same procedure used in the proof of the Chinese remainder theorem. That is, if $\ x_1\ $ satisfies the congruence $\ x_1 \equiv 1 \bmod{3}\ $, and $\ x_2\ $ the congruence $\ 7x_2 \equiv 1 \bmod{17}\ $, then the unique $\ x\in \left\{0,1,\dots, 50\right\}\ $ simultaneously satisfying both of the congruences $\ x\equiv x_1 \bmod{3}\ $ and $\ x\equiv x_2 \bmod{17}\ $ will be a solution of your original congruence, $\ 7x \equiv 1 \bmod{51}\ $. Alternatively, since $\ \gcd\left(7,51\right)= 1\ $, you can also use the extended Euclidean algorithm to find integers $\ a\ $ and $\ b\ $ satisfying the equation $\ 7a + 51b = 1\ $. Then $\ x=a\ $ will be a solution of the congruence $\ 7x \equiv 1 \bmod{51}\ $.
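Both routes can be carried out in a few lines. Here is a Python sketch of the extended Euclidean route (the recursive helper is one standard formulation); it finds $a$ with $7a+51b=1$, giving $x\equiv 22 \pmod{51}$, which indeed solves the original equation:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, a, b = extended_gcd(7, 51)   # 7*a + 51*b = 1
x = a % 51                      # the inverse of 7 mod 51
check = (28 * x - 22) % 51      # plug back into the original equation
```
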
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this proof of the uniqueness of prime factorizations unnecessarily long? I am learning from the textbook Analysis I by Herbert Amann and Joachim Escher. The authors present the Prime Factorization Theorem and a proof of uniqueness. On the basis of the authors' proof, I have found a shorter way to fulfill the task, as follows: Let p be the least such number with prime factorizations $p = p_0 p_1 \cdots p_k = q_0 q_1 \cdots q_n$. We have $p_i \neq q_j$ for all $i$ and $j$, since any common factor could be divided out to give a smaller natural number $p'$ with two different prime factorizations, in contradiction to the choice of $p$. We can suppose that $p_0 ≤ p_1 ≤ \cdots ≤ p_k$ and $q_0 ≤ q_1 ≤ \cdots ≤ q_n$ as well as $p_0 < q_0$. (I quote these two paragraphs from the authors' work) Clearly, $p_0 \mid p$. It follows from $p_0 < q_0 ≤ q_1 ≤ \cdots ≤ q_n$ and the fact that $p_0, q_0, q_1, \cdots, q_n$ are all prime numbers that $p_0 \not \mid q_j$ for all $j$. Then $p_0 \not \mid q_0 q_1 \cdots q_n$ and thus $p_0 \not \mid p$, which is a contradiction. * *I would like to ask if my modification of the authors' proof is correct. I cannot understand why the authors did not take this shorter approach. *In the proof, the authors said that Consequently we have the prime factorization $$p − q = p_0 r_1 \cdots r_l$$ for some prime numbers $r_1 ,\cdots,r_l$. It seems to me that this statement is not correct, since it may be the case that $r_1=\cdots=r_l=1$, which are not prime numbers. Is my understanding of this statement correct? Thank you for your help!
* *Your proof uses the following fact: if $p_0$ is prime and $p_0\nmid a$, $p_0\nmid b$, then $p_0\nmid ab$. While this is not a very advanced number theory fact, it does require proof given the standard (in my experience) definition of a prime as a number having exactly two positive divisors. It's quite possible that the proof of uniqueness of factorizations that you included was designed to not use this fact (perhaps the author wanted it to appear earlier in the book than that fact). *Usually, mathematicians deem notation like this to include the possibility that $l=0$, that is, that there are no $r$s and that $p-q=p_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a way to find $\min\{m(|X-c|), \:c \in \mathbb{R}\}$? Suppose X is a random variable, such that $F(t) := P(X < t) \in C(\mathbb{R})$. Is there a way to find $\min\{m(|X-c|), c \in \mathbb{R}\}$? Here $m$ stands for median. I know the solutions for two particular cases: (and they both use a similar method): If $X \sim U[a, b]$, then $$P(|X - c| < t) = P(c - t < X < c + t) = \begin{cases} 0 & \quad \text{if } c + t < a\\ 0 & \quad \text{if } c - t > b\\ \frac{2t}{b - a} & \quad \text{if } a < c - t < c + t < b\\ \frac{c + t - a}{b - a} & \quad \text{if } c - t < a < c + t < b\\ 1 & \quad \text{if } c - t < a < b < c + t\\ \frac{b - c + t}{b - a} & \quad \text{if } a < c - t < b < c + t \end{cases} $$ That results in $$m(|X-c|) = \begin{cases} \frac{b + a}{2} - c & \quad \text{if } c < \frac{3a + b}{4}\\ \frac{b - a}{4} & \quad \text{if } c \in [\frac{3a + b}{4}; \frac{a+3b}{4}]\\ c - \frac{a + b}{2} & \quad \text{if } c > \frac{a+3b}{4} \end{cases}$$ And that means, that $\min\{m(|X-c|), \:c \in \mathbb{R}\} = \frac{b - a}{4}$. If $X \sim Exp(\lambda)$, then $$P(|X - c| < t) = P(c - t < X < c + t) = \begin{cases} 0 & \quad \text{if } c < -t\\ 1 - e^{-\lambda(c + t))} & \quad \text{if } c \in [-t; t]\\ 2e^{-\lambda c}sinh(\lambda t) & \quad \text{if } c > t \end{cases} $$ That results in $$m(|X-c|) = \begin{cases} \frac{\ln2}{\lambda} - c & \quad \text{if } c < \frac{\ln2}{2\lambda}\\ \frac{arsinh(\frac{e^{\lambda c}}{4})}{\lambda} & \quad \text{if } c > \frac{\ln2}{2\lambda} \end{cases}$$ And that means, that $\min\{m(|X-c|), \:c \in \mathbb{R}\} = \frac{\ln2}{2\lambda}$ However, I failed to apply this method to the general case.
Not a full solution but an idea that gives useful shortcuts (sometimes). Imagine the PDF of $X$ as a picture. The the PDF of $X-c$ is of course just shifting, and $|X-c|$ is then folding around $c$. What is $median(|X-c|)$? Since $|X-c|$ has a definite lower bound at $0$, the median is just the point $m>0$ where the CDF of $|X-c|$ reaches exactly $P(|X-c| < m) = 1/2$. If you "unfold" the picture, this corresponds to points $-m, m$ where: $$P(-m < X-c < m) = 1/2 = P(c-m < X < c+m)$$ Think of your optimization as running over all $c, m$ values, but subject to the constraint above, and your objective is minimize $m$. But this is equivalent to minimizing ${b-a \over 2}$ over all possible $a,b$ values, constrained by: $$P(a < X < b) = 1/2 = CDF(b) - CDF(a)$$ So generically, you can do this: For every $a$, define $B(a)$ to be the value $b$ s.t. $CDF(b) - CDF(a) = 1/2$. Then you find the $a$ which minimizes ${B(a)-a \over 2}$. However, graphically this view gives you some possibility for shortcuts. You're trying to find a range $(a,b)$, as narrow as possible, which still contains $1/2$ of the probability. So you look at the PDF and in many well-known cases the solution is visually obvious. E.g. * *For $X \sim U[a,b]$ it is obvious that any range of width $w = {b-a \over 2}$ which is entirely $\subset [a,b]$ will do. So your minimal $m = {w \over 2} = {b - a \over 4}$ and you actually have a choice of $c$ from the 25% to the 75% point. *For $X \sim Exp(\lambda)$ the PDF is strictly decreasing, so the optimal range must be at the front, i.e. of the form $(0,b)$. You also need $CDF(b) - CDF(0) = CDF(b) = 1/2$, so $b = median(X)$. From wikipedia :) we have $median(X) = {\ln 2 \over \lambda}$, so your minimal $m = {b - 0 \over 2} = {\ln 2 \over 2 \lambda}$ *For any $X$ whose PDF is symmetric about its mean, and which decreases monotonically away from the mean, first shift it to zero-mean, then by symmetry your optimal range is $(-z, z)$ which contains $1/2$ of the probability. 
So you look up $z = CDF^{-1}(3/4)$. In case of $N(0,1)$ you can look this up easily. In case of a triangle or a trapezoid, you do a bit of geometry. Many "common" PDFs have a single peak and decays on both sides, so the optimal range must include that. Many also have symmetry. Whether there are closed form solutions will depend on the math details but at least you now know where to look. Hope this helps!
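For the $Exp(\lambda)$ bullet, the claimed minimum $\frac{\ln 2}{2\lambda}$ can be confirmed numerically with this "narrowest half-probability window" view (a Python sketch for $\lambda=1$; the grid and bisection parameters are arbitrary):

```python
import math

def cdf(u):
    """CDF of Exp(1); zero for non-positive arguments."""
    return 1.0 - math.exp(-u) if u > 0 else 0.0

def median_abs(c):
    """Median of |X - c| for X ~ Exp(1), found by bisection on t in
    P(|X - c| <= t) = cdf(c + t) - cdf(c - t) = 1/2."""
    lo, hi = 0.0, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cdf(c + mid) - cdf(c - mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

cs = [i / 2000.0 for i in range(0, 2001)]   # grid of c values over [0, 1]
best_c = min(cs, key=median_abs)
best_m = median_abs(best_c)
predicted = math.log(2) / 2                 # claimed minimum and minimizer
```
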
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is there unique plane which passes through given point and is parallel to given line I was trying to solve a question which asks to find a plane which passes through a given point and is parallel to a given line. The given point is $M(2,-5,3)$ and the given line is given as an intersection of the planes $2x-y+3z-1=0 \text{ and } 5x+4y-z-7=0$ It is still unclear to me why there is only one unique plane which can be the answer; I think that there are more possible planes that can be answers to this.
You are right: there are infinitely many planes passing through a point and parallel to a given line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove, that for every real numbers $ x \ge y \ge z > 0 $, and $x+y+z=\frac{9}{2}, xyz=1$, the following inequality takes place Prove, that for every real numbers $ x \ge y \ge z > 0 $, and $x+y+z=\frac{9}{2}, xyz=1$, the following inequality takes place: $$ \frac{x}{y^3(1+y^2x)} + \frac{y}{z^3(1+z^2y) } + \frac{z}{x^3(1+x^2z)} > \frac{1}{3}(xy+zx+yz) $$ I've tried using the fact that $(xy+yz+zx)^2 \ge xyz(x+y+z) $ or $xy+yz+zx \le \frac{(x+y+z)^2}{3} $ I've also arrived to the fact that the inequality is equivalent to $$ \sum_{cyc}{\frac{(xz)^{7/3}}{y^{5/3}(z+y)} > \frac{1}{3}(xy+yz+zx)} $$ which is homogenous. I can't seem to find a nice way of using the given conditions for the sum and their order, thank you.
Note: I have found a solution. Observe first that, using $xyz=1$, each term of the LHS sum is of the form $\frac{x^4z^4}{y+z}$: multiplying the numerator and denominator of $\frac{x}{y^3(1+y^2x)}$ by $x^3z^4$ turns the denominator into $(xyz)^3z+(xyz)^4y=z+y$. So the inequality is equivalent to $$\sum_{cyc}{\frac{x^4z^4}{y+z}} > \frac{1}{3}{(xy+yz+zx)} $$ But from Titu's Lemma, we have $$ \sum_{cyc}{\frac{x^4z^4}{y+z}} = \sum_{cyc}{\frac{(x^2z^2)^2}{y+z}} \ge \frac{({x^2z^2+y^2x^2+z^2y^2})^2}{2(x+y+z)} \ge^{(Quadratic Mean\ge AM)} \frac{(xy+yz+xz)^4}{18(x+y+z)} $$ Hence it suffices to prove $$\frac{(xy+yz+zx)^4}{18(x+y+z)} > \frac{1}{3}{(xy+yz+zx)} $$which is equivalent to $$(xy+yz+xz)^3 > 6(x+y+z)=27 $$ or, equivalently, $$ xy+yz+xz > 3$$ which is true by the AM-GM inequality: $$xy+yz+xz \ge 3(x^2y^2z^2)^{\frac{1}{3}}=3$$ With the equality case being impossible, since it would imply $x=y=z$, implying both $x=y=z=1$ (from $xyz=1$) and $x=y=z=\frac{3}{2}$ (from $x+y+z=\frac{9}{2}$), we have only the strict version taking place: $ xy+yz+xz > 3$ Q. E. D.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $A \in \mathcal{L}(H)$ and $\langle A(u),u \rangle \geq \langle u, u \rangle$, then $A$ is invertible. Exercise : Let $H$ be a Hilbert space and $A \in \mathcal{L}(H)$ such that : $$\langle A(u),u \rangle \geq \langle u, u \rangle \; \forall u \in H$$ Show that $A$ is invertible. Attempt/Thoughts : The inequality given to hold can be transformed to $$\langle A(u),u \rangle \geq \|u\|^2$$ since $\|u\| = \sqrt{\langle u, u \rangle} $ by the definition of the inner product functional for Hilbert spaces. Now, by the Cauchy-Schwarz inequality, one can yield : $$|\langle Au, u\rangle| \leq \|Au\|\|u\|$$ Combining the two expressions now gives us : $$\langle Au, u \rangle \geq \|u\|^2 \Rightarrow |\langle Au,u\rangle| \geq \|u\|^2 \Rightarrow \|Au\| \|u\| \geq \|u\|^2$$ $$\implies$$ $$\boxed{\|Au\| \geq \|u\|}$$ Now if I consider a sequence $\{u_n\}_{n \in \mathbb N} \subset H$ such that $Au_n \to u \in H$ then it would be : $$\|A(u_n - u_m) \| \to 0 \implies \|u_n-u_m\| \to 0 \quad \text{for} \; n,m \to \infty$$ That means that $\{u_n\}_{n \in \mathbb N}$ is Cauchy and thus $A : H \to A(H)$ is injective $("1-1")$, thus invertible ? Is my approach correct ? Any tips, corrections and/or elaborations will be appreciated.
If $Au = Av$ then $$\|u-v\| \le \|Au - Av\| = 0$$ so that $u=v$ too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find limit of sum $\lim\limits_{n\to\infty}\sum_{k=1}^{100n}\frac{k^p}{n^{p+1}}$ How can I find this limit? I've tried to use the Stolz theorem, but have not succeeded. I have heard something about Riemann sums, but have not found a good algorithm for how to use them. Can you help me solve it with the help of Riemann sums, or show me an algorithm for how to use them?
I thought it might be instructive to present an approach that does not use Riemann sums. To that end, we proceed. Note that $$\sum_{k=1}^N \underbrace{\left(k^{p+1}-(k-1)^{p+1}\right)}_{=(p+1)k^p+O(k^{p-1})}=N^{p+1}$$ which by induction reveals that $$\sum_{k=1}^N k^p=\frac{N^{p+1}}{p+1}+O(N^p)\tag1$$ Hence, we have $$\sum_{k=1}^{100n}\frac{k^{p}}{n^{p+1}}=\frac{100^{p+1}}{p+1}+O\left(\frac1n\right)$$ Now let $n\to\infty$
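A numerical spot check of the limit $\frac{100^{p+1}}{p+1}$ (a Python sketch; $p=2,3$ and $n=400$ are arbitrary choices, and the $O(1/n)$ error shows up in the tolerance):

```python
def partial(p, n):
    """Compute sum_{k=1}^{100 n} k^p / n^(p+1)."""
    return sum(k ** p for k in range(1, 100 * n + 1)) / n ** (p + 1)

approx2 = partial(2, 400)   # should approach 100^3 / 3
approx3 = partial(3, 400)   # should approach 100^4 / 4
```
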
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Are all values of $x$ solutions for $e^{2\ln(\sin(x))} = 1 - e^{2\ln(\cos(x))}$ in $\mathbb R$? Do all values of $x$ in $\mathbb R$ satisfy the equation: $$e^{2\ln(\sin(x))} = 1 - e^{2\ln(\cos(x))}$$ I am asking this because, checking the WolframAlpha solution, there is the answer: (all values for $x$ are solutions over the reals), but we know that $\ln(0)$ is undefined, and the same goes for negative numbers. Wolfram Alpha solution Therefore I assume that in $\mathbb R$, zero and negative numbers don't satisfy this equation.
Note that: $$e^{2\ln \sin x} + e^{2 \ln \cos x} = 1 \Rightarrow e^{\ln (\sin x)^2} + e^{\ln (\cos x)^2} = 1 \Leftrightarrow \sin^2x + \cos^2x = 1 \rightarrow \text{true} \; \forall x \in \mathbb R$$ Restrictions apply for the initial expression to be defined: one needs $\sin x > 0$ and $\cos x > 0$, i.e. $x \in (2k\pi,\, 2k\pi + \frac{\pi}{2})$ for some integer $k$, and that narrows down the solution set. Note the usage of $\Rightarrow$ instead of an $\Leftrightarrow$ at the start: the implication only goes one way, because passing from $2\ln\sin x$ to $\ln(\sin x)^2$ discards the sign restrictions.
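Numerically this is easy to see (a Python sketch): the left-hand side equals $1$ wherever both logarithms are defined, and `math.log` rejects the other points.

```python
import math

def lhs(x):
    """Evaluate exp(2 ln sin x) + exp(2 ln cos x); raises ValueError when undefined."""
    return math.exp(2 * math.log(math.sin(x))) + math.exp(2 * math.log(math.cos(x)))

inside = lhs(0.7)   # 0.7 lies in (0, pi/2): both sin and cos are positive
try:
    lhs(2.0)        # cos(2.0) < 0, so the second log is undefined
    failed = False
except ValueError:
    failed = True
```
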
{ "language": "en", "url": "https://math.stackexchange.com/questions/3146923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Range of radical functions. Suppose we have $f(x)=\sqrt{x-1}+\sqrt{5-x}$; how do we find the range of this function? Single radicals are easy, but two of them appear in this particular function. I have the domain of the function, and I can only think of differentiation to get the maximum of the function on the valid domain in order to find the range of $f(x)$. Is differentiation the only way, or is there something easier?
By the AM-GM inequality, $$\sqrt{2+t} \sqrt{2-t} \le \frac {(2+t)+(2-t)} 2 = 2,$$ so $$(\sqrt{2+t}+\sqrt{2-t})^2 = (2+t)+(2-t)+2\sqrt{2+t}\sqrt{2-t}=4+2\sqrt{2+t}\sqrt{2-t}\le 8$$ so $$\sqrt{2+t}+\sqrt{2-t}\le\sqrt{8}.$$ Now let $x=t+3.$ Equality holds when $2+t=2-t$, i.e. at $t=0$ ($x=3$), so the maximum value is $\sqrt 8=2\sqrt 2$. Since $f$ is concave on its domain $[1,5]$, its minimum is attained at an endpoint, and $f(1)=f(5)=2$; hence the range is $[2,\,2\sqrt 2]$.
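A numerical confirmation that the extreme values are $2$ and $2\sqrt2$, with the maximum at $x=3$ (a Python sketch; the grid size is arbitrary):

```python
import math

def f(x):
    return math.sqrt(x - 1) + math.sqrt(5 - x)

# Dense grid over the domain [1, 5].
xs = [1 + 4 * i / 10**5 for i in range(10**5 + 1)]
vals = [f(x) for x in xs]
lo, hi = min(vals), max(vals)
argmax = xs[vals.index(hi)]
```
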
{ "language": "en", "url": "https://math.stackexchange.com/questions/3147086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $x_1$ and $x_2$ be independent uniform variables from [0, 2]. What is the probability that $|x_1-x_2| \leq 1$? What I have so far for the solution Since they are both continuous uniform variables. And because they are independent, we can say that $$f(x_1, x_2)=\frac{1}{4}$$ $$P(|x_1-x_2| \leq 1) = P(x_1-1 \leq x_2 \leq x_1+1)$$ $$P(x_1-1 \leq x_2 \leq x_1+1) = \int_{-\infty}^{+\infty} \int_{x_1 - 1}^{x_1 + 1}\frac{1}{4}dx_2dx_1$$ What I am having trouble with However, when I compute the aforementioned integral, I get a probability of $1$ or $100\%$ $$\int_{-\infty}^{+\infty} \int_{x_1 - 1}^{x_1 + 1}\frac{1}{4}dx_2dx_1 = \int_{0}^{2} \int_{x_1 - 1}^{x_1 + 1}\frac{1}{4}dx_2dx_1 = 1$$ I know that I am supposed to get $\frac{3}{4}$. But I have no idea how.
In addition to the condition $x_1-1 \leq x_2 \leq x_1+1$ you have to remember that $x_2$ has to lie between $0$ and $2$. For example, if $x_1 <1$ then $x_1-1 <0$ so the integral w.r.t. $x_2$ cannot start from the negative number $x_1-1$. If $x_1 <1$ then the integral w.r.t. $x_2$ starts from $0$, and if $x_1 >1$ then the integral ends at $2$. So split the integral into two parts depending on whether $x_1 <1$ or $>1$. So you have to compute $\int_0^{1} \int_0^{x_1+1} \frac 1 4 dx_2dx_1+\int_1^{2} \int_{x_1-1}^{2} \frac 1 4 dx_2dx_1$.
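Both the corrected exact computation and a quick simulation give $3/4$ (a Python sketch; the seed and sample size are arbitrary):

```python
import random
from fractions import Fraction

# Exact: (1/4) * [ integral_0^1 (x+1) dx + integral_1^2 (3-x) dx ]
#      = (1/4) * (3/2 + 3/2)
exact = Fraction(1, 4) * (Fraction(3, 2) + Fraction(3, 2))

# Monte Carlo estimate of P(|x1 - x2| <= 1) for independent U[0, 2] draws.
random.seed(42)
n = 200_000
hits = sum(abs(random.uniform(0, 2) - random.uniform(0, 2)) <= 1 for _ in range(n))
estimate = hits / n
```
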
{ "language": "en", "url": "https://math.stackexchange.com/questions/3147369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Matrices Inequality Proof Recently, I read a paper and there is a step which is not obvious to me. The statement is as follows: All matrices here are real matrices. $F$ is an arbitrary square matrix. $\Psi$ is a symmetric positive definite matrix. Let $$\lambda_{\max}(A)\equiv\text{The maximum eigenvalue of symmetric matrix A}$$ (The ambiguity comes when $A$ is not symmetric. Here I guess if $A$ is not symmetric, then $\lambda_{\max}(A)=\sqrt{\text{Maximum eigenvalue of }A^TA}$ ). Then the following inequality holds: For all $x\in \mathbb R^n$ $$x^T(I-F)^T\Psi(I-F)x\le\lambda_\max(\Psi^{-1}(I-F)^T\Psi(I-F))x^T\Psi x $$ rewrite it, $$x^T\Big[(I-F)^T\Psi(I-F)-\lambda_\max(\Psi^{-1}(I-F)^T\Psi(I-F))\Psi\Big]x\le0 $$ or $$x^T\Psi\Big[\Psi^{-1}(I-F)^T\Psi(I-F)-\lambda_\max(\Psi^{-1}(I-F)^T\Psi(I-F))I\Big]x\le0\tag{*} $$ and if $\Psi$ commutes with $(I-F)^T\Psi(I-F)$, then $\Psi^{-1},\,(I-F)^T\Psi(I-F)$ can be simultaneously diagonalized. Then $$\Psi^{-1}(I-F)^T\Psi(I-F)-\lambda_\max(\Psi^{-1}(I-F)^T\Psi(I-F))I $$ is negative semi-definite and diagonalized in a certain basis, the same one as $\Psi$. Then, in that basis, since $\Psi$ is positive definite, $\Psi\Big[\Psi^{-1}(I-F)^T\Psi(I-F)-\lambda_\max(\Psi^{-1}(I-F)^T\Psi(I-F))I\Big]$ is negative semi-definite $\Rightarrow$ the inequality holds. However, in general, $\Psi$ may not commute with $(I-F)^T\Psi(I-F)$. Are there any answers to that?
Let $A=\Psi^{-1/2}(I-F)^T\Psi(I-F)\Psi^{-1/2}$ and $y=\Psi^{1/2}x$. Then $\Psi^{-1}(I-F)^T\Psi(I-F)=\Psi^{-1/2}A\Psi^{1/2}$ is similar to $A$ and hence the inequality in question can be rewritten as $$ y^TAy\le\lambda_\max(A)y^Ty. $$ Now the inequality holds because $A$ is positive semidefinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3147491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Two forms related by an automorphism are in the same cohomology class? Let $f: M \to M$ define an automorphism on the smooth manifold M. Given a differential form $\omega \in \Omega^k$ is it true that the de Rham cohomology class of $\omega$ and $f^*\omega$ are the same? That is, does $[\omega]=[f^*\omega]$.
No. One example: take the torus $X = \mathbb{R}^2/\mathbb{Z}^2$. The flip-flop on the factors interchanges the closed forms $dx$ and $dy$ which are linearly independent in $H^1(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3147645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Formula (how to calculate) Y axis cross-point of two intersecting lines I have two lines: A) Orange (Y axis starts at: 6, ends at: -3) B) Green (Y axis starts at: 5, ends at: -2) The start/end X axis values are the same. Please note, I don't have SLOPE (angle) information; we only know what I've mentioned. How do I calculate (what formula should I use) the cross-point Y value? It's about 0.8 by my visual estimate, but I can't reach the formula... I've tried so far: mid_orange = (orange_start_Y + orange_end_Y )/2 mid_green = (green_start_Y + green_end_Y )/2 cross_point_Y= (mid_orange *m + mid_green *n )/2 I think I need correct m and n coefficients... I don't know...
I've also successfully used this formula:

xCoef = (orangeStartY - greenStartY)/(greenEndY - greenStartY - orangeEndY + orangeStartY)

CrossPointY = xCoef * (greenEndY - greenStartY) + greenStartY
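That formula is what you get by equating the two parametric line equations. Here is a Python sketch checking it on the example values from the question (orange $6\to-3$, green $5\to-2$):

```python
def cross_point_y(o_start, o_end, g_start, g_end):
    """Y value where two lines sharing the same X range intersect."""
    t = (o_start - g_start) / (g_end - g_start - o_end + o_start)
    return g_start + t * (g_end - g_start)

y = cross_point_y(6, -3, 5, -2)

# Cross-check by solving directly: orange(t) = 6 - 9t, green(t) = 5 - 7t,
# so 6 - 9t = 5 - 7t gives t = 1/2.
t = (6 - 5) / (9 - 7)
direct = 5 - 7 * t   # -> 1.5
```
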
{ "language": "en", "url": "https://math.stackexchange.com/questions/3147734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Ratio of $\frac{\zeta(2n)}{\zeta(n)}$ from infinite product involving primes Given that $$\zeta(n)=\sum_{k=1}^\infty \frac{1}{k^n}=\prod_{k=1}^\infty \frac{1}{1-\frac{1}{(p_k)^n}}\tag{1}$$ where $n>1$ and $p_k$ is the $k^{th}$ prime. Proof of the Euler product formula for the Riemann zeta function It immediately follows that $$\frac{\zeta(2n)}{\zeta(n)}=\prod_{k=1}^\infty \frac{1}{1+\frac{1}{(p_k)^n}}\tag{2}$$ The question is: Does equation (2) have an easily derivable infinite series form?
Yes, for all $s\in \Bbb C$ with $\Re(s)>1$ we have $$ \frac{\zeta(2s)}{\zeta(s)}=\sum_{n=1}^{\infty}\lambda(n)n^{-s}. $$ Here $\lambda(n)$ is the Liouville function, defined by $\lambda(1)=1$ and $$ \lambda(n)=\lambda(p_1^{e_1}\cdots p_r^{e_r})=(-1)^{\sum_{i=1}^re_i}. $$
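A numerical check at $s=2$ (a Python sketch): $\zeta(4)/\zeta(2)=(\pi^4/90)/(\pi^2/6)=\pi^2/15$, and the partial sums of $\sum\lambda(n)n^{-2}$ approach it, with $\lambda$ computed by counting prime factors with multiplicity.

```python
import math

def liouville(n):
    """Liouville function: (-1)^(number of prime factors of n, counted with multiplicity)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

partial = sum(liouville(n) / n**2 for n in range(1, 20_001))
target = math.pi**2 / 15
```
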
{ "language": "en", "url": "https://math.stackexchange.com/questions/3147884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing that the unit sphere is a surface I was going through Andrew Pressley's book and on the place where they have discussed surfaces, there is one example which deals with the unit sphere. I have understood to the point where they have taken the surface patch and the fact that it will not be able to cover the whole sphere and will leave out a semicircle. But after that they are saying they want to rotate the given surface patch by π radians about the z axis and π/2 radians about the x axis to obtain another surface patch which would cover the region not covered before. Here my doubt is, wouldn't rotating it just by π radians about the z axis solve my problem? I'm confused here. Ps. I wanted to upload a photo but the site wouldn't let me because I don't have enough reputation. It's on page 72 of Elementary differential geometry by Andrew Pressley
Lots of ways to do this. One way would be to consider the function $f: \mathbb R^3\rightarrow\mathbb R$ defined by $x \mapsto ||x||^2 $. Check that $1$ is a regular value of this smooth map and $S^2=f^{-1}(1)$ (Use Implicit Function Theorem). The other way would be to look at the stereographic projection $S^2-N\rightarrow\mathbb R^2$ where $N$ is the north pole, and similarly $S^2-S\rightarrow\mathbb R^2$ where $S$ is the south pole. The way you mention it, the parametrization is given by $(-\pi/2,\pi/2)\times (0,2\pi)\rightarrow S^2$ given by $(\theta,\phi)\mapsto(\cos\theta\cos\phi,\cos\theta\sin\phi,\sin\theta)$. This covers $S^2-\{x\in S^2:x_1\ge 0,x_2=0\}$, i.e. the sphere minus a semicircle. To address your specific doubt: rotating by $\pi$ about the $z$-axis alone is not enough, because the omitted semicircle contains the two poles $(0,0,\pm1)$, which lie on the $z$-axis and are therefore fixed by any rotation about it; the rotated patch would still miss both poles. The additional rotation by $\pi/2$ about the $x$-axis moves the omitted semicircle away from the poles. It is easy to see that if you rotate the surface patch with axis of rotation the $z$-axis followed by the $x$-axis, then the image is still in $S^2$, and it is still a surface patch since rotation is an isometry. Finally the two surface patches cover $S^2$, and this shows $S^2$ is a regular surface in $\mathbb R^3$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3148035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it true that $(a^2-ab+b^2)(c^2-cd+d^2)=h^2-hk+k^2$ for some coprime $h$ and $k$? Let us consider two numbers of the form $a^2 - ab + b^2$ and $c^2 - cd + d^2$ which are not both divisible by $3$ and such that $(a, b) = 1$ and $(c,d) = 1$. Running some computations it seems that the product $$(a^2 -ab + b^2)(c^2 - cd + d^2) $$ is still of the form $h^2 - hk + k^2$ for some suitable coprime integers $h,k$. Is this true? I tried to prove it by writing down explicitly the product and looking for patterns, but I had no luck. Any help would be appreciated!
There is this identity, valid for all $a,b,c,d$ (expand both sides to check): $[(ac+bd)^2-(ab(c^2+d^2)-(abcd)+cd(a^2+b^2))+(bc-ad)^2]=(a^2-ab+b^2)(c^2-cd+d^2)$ Hence, to write $(a^2-ab+b^2)(c^2-cd+d^2)=(h^2-hk+k^2)$ one can try $h=(ac+bd)$ $k=(bc-ad)$ $hk=(ac+bd)(bc-ad)$ This choice makes the middle terms agree under the condition $(c,d)=(2b,\,b-2a)$. For $(a,b,c,d)=(3,7,14,1)$ we get: $(49^2-49*95+95^2)=(37)*(183)=6771$
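A brute-force verification of both the general bracketed identity and the $h,k$ choice under the condition $(c,d)=(2b,\,b-2a)$ (a Python sketch; the sampling ranges and seed are arbitrary):

```python
import random

def norm(u, v):
    """The quadratic form u^2 - u*v + v^2."""
    return u * u - u * v + v * v

random.seed(1)

# The bracketed identity holds for all integers a, b, c, d.
general_ok = True
for _ in range(500):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    lhs = (a*c + b*d)**2 - (a*b*(c*c + d*d) - a*b*c*d + c*d*(a*a + b*b)) + (b*c - a*d)**2
    if lhs != norm(a, b) * norm(c, d):
        general_ok = False

# With h = ac + bd, k = bc - ad, the product equals h^2 - hk + k^2
# under the stated condition (c, d) = (2b, b - 2a).
hk_ok = True
for _ in range(500):
    a, b = random.randint(-50, 50), random.randint(1, 50)
    c, d = 2 * b, b - 2 * a
    h, k = a * c + b * d, b * c - a * d
    if norm(h, k) != norm(a, b) * norm(c, d):
        hk_ok = False

example = norm(49, 95)   # (a,b,c,d) = (3,7,14,1) gives h = 49, k = 95
```
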
{ "language": "en", "url": "https://math.stackexchange.com/questions/3148152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Vector Field Exponential Map I've got ${\bf v} = x^2\partial_x$, and I'm trying to find $\exp(\varepsilon{\bf v})$, but I'm having some trouble. If I define ${\bf v}^{n+1} = {\bf v}{\bf v}^n$ then I get a different outcome to ${\bf v}^{n+1} = {\bf v}^n{\bf v}$. For example: $${\bf v}^2 = {\bf vv} = (x^2\partial_x)(x^2\partial_x) = x^2(\partial_xx^2)\partial_x = x^2(2x)\partial_x = 2x^3\partial_x$$ $${\bf v}^3 = {\bf v}{\bf v}^2 =(x^2\partial_x)(2x^3\partial_x)=x^2(\partial_x2x^3)\partial_x = x^2(6x^2)\partial_x = 6x^4\partial_x$$ $${\bf v}^3 = {\bf v}^2{\bf v} = (2x^3\partial_x)(x^2\partial_x) = 2x^3(\partial_xx^2)\partial_x = 2x^3(2x)\partial_x = 4x^4\partial_x$$ Using ${\bf v}^{n+1} = {\bf v}{\bf v}^n$ gives $$\exp(\varepsilon {\bf v})x = \frac{x}{1-\varepsilon x}$$ While using ${\bf v}^{n+1} = {\bf v}^n{\bf v}$ gives $$\exp(\varepsilon {\bf v})x = \frac{x}{2}(1+\mathrm e^{2\varepsilon x})$$ In both cases, when $\varepsilon =0$, we get just $x$, i.e. the identity element. Also, in both cases, we get $$\lim_{\varepsilon \to 0} \frac{\mathrm d}{\mathrm d\varepsilon} \exp(\varepsilon {\bf v})x = {\bf v}$$ The same ${\bf v} \in \mathfrak g$ can't possible generate two different flows, can it?
You are looking at the simplest Lie advective flow in perturbation theory (as applied in physics: QFT) and the workhorse example in the 19th century book of Georg Scheffers cited. v generates a shift operator, and it pays to define suitable canonical coordinates, $$ y=-1/x, \qquad \Longrightarrow \qquad x^2 \partial_x=\partial_y , $$ so that you are shifting y by $\epsilon$, $$ e^{\epsilon \partial_y} ~~f(y)= f(y+\epsilon), $$ which reads $$ e^{\epsilon x^2\partial_x} ~~g(x)= g\left(\frac{-1}{y+\epsilon}\right )=g\left (\frac{x}{1-\epsilon x}\right ), $$ a standard formula in the RG advection of QFT. Your difficulties are traceable to your refusal to perform your Heaviside calculus manipulations of functions of differential operators with a "test-function" f(x) on the right, which keeps track of the proper chain rule action of noncommuting derivative operators.
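The closed form $x\mapsto x/(1-\epsilon x)$ is just the time-$\epsilon$ flow of $\dot x=x^2$, which can be confirmed by direct numerical integration (a Python sketch using a standard RK4 stepper; the step count is arbitrary):

```python
def flow_numeric(x0, eps, steps=10_000):
    """Integrate dx/dt = x^2 from t = 0 to t = eps with classical RK4."""
    h = eps / steps
    x = x0
    for _ in range(steps):
        k1 = x * x
        k2 = (x + 0.5 * h * k1) ** 2
        k3 = (x + 0.5 * h * k2) ** 2
        k4 = (x + h * k3) ** 2
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x

x0, eps = 0.5, 0.8                  # keep eps * x0 < 1 so the flow exists
closed_form = x0 / (1 - eps * x0)
numeric = flow_numeric(x0, eps)
```
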
{ "language": "en", "url": "https://math.stackexchange.com/questions/3148273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }