Does the recursive sequence $X_{n+1} = 1+\frac{1}{X_n}$ converge? I'm working on some calculus exercises and came across this one. It is familiar, yet not the same as the one for $e$. The question states: Given $X_0 = 1$ and $X_{n+1} = 1+\frac{1}{X_n}$ for $n \geq 0$, show that the sequence converges and find its limit. I tried to approach this by looking at its behavior. Does $\frac{1}{X_n}$ converge to $0$? The answer is not trivial. I also tried to express $X_{n+1}$ as a function of $X_0$, but it was hard to see whether I could simplify it into a closed-form formula. Any ideas?
It does converge. Here is a sketch of how one can proceed. From the definition, it is easy to check that $$1\leq X_n\leq2.$$ The right-hand inequality, for example, follows by induction: from $1\leq X_n$ we get $X_{n+1}=1+\frac{1}{X_n}\leq2$. It is also easy to check that $X_{2n}\leq X_{2(n+1)}$ and $X_{2(n+1)-1}\leq X_{2n-1}$, i.e. the even-indexed subsequence is increasing and the odd-indexed one is decreasing. From this you can check that $X_n$ converges, and the limit must satisfy $X=1+\frac{1}{X}$, that is, $X^2-X-1=0$. The solution we are looking for is $X=\frac{1+\sqrt{5}}{2}$.
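A quick numerical sanity check of all three claims (boundedness, the interleaved monotone subsequences, and the value of the limit):

```python
# Iterate x_{n+1} = 1 + 1/x_n from x_0 = 1 and compare with the golden ratio.
import math

def iterate(n_steps, x0=1.0):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(1.0 + 1.0 / xs[-1])
    return xs

xs = iterate(50)
phi = (1.0 + math.sqrt(5.0)) / 2.0

# All iterates stay in [1, 2] and the iteration approaches phi.
assert all(1.0 <= x <= 2.0 for x in xs)
assert abs(xs[-1] - phi) < 1e-9

# Even-indexed iterates increase, odd-indexed iterates decrease.
evens, odds = xs[0::2], xs[1::2]
assert all(a <= b for a, b in zip(evens, evens[1:]))
assert all(b <= a for a, b in zip(odds, odds[1:]))
```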
{ "language": "en", "url": "https://math.stackexchange.com/questions/3252562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
non-trivial solution of an integral equation The values of $\lambda$ for which the following equation has a non-trivial solution $\phi(x)=\lambda\int_0^{\pi} K(x,t)\phi(t)dt$, where $0\leq x\leq \pi$ and $K(x,t)= \begin{cases} \sin x\cos t & 0\leq x\leq t \\ \cos x\sin t & t\leq x\leq \pi \end{cases} $, are $(a)$ $\bigg(n+\frac{1}{2}\bigg)^2-1$, $n\in \mathbb{N}$. $(b)$ $n^2-1$, $n\in \mathbb{N}$. $(c)$ $\frac{1}{2}(n+1)^2-1$, $n\in \mathbb{N}$. $(d)$ $\frac{1}{2}(2n+1)^2-1$, $n\in \mathbb{N}$. I converted this integral equation into its equivalent second-order linear ordinary differential equation $\phi''(x)+2\lambda\cos^2(x)\phi(x)=0$ with mixed boundary conditions $\phi(0)=0$ and $\phi'(\pi)=0$. Next I need to find the eigenvalues of this ODE to find the required answer. $\lambda=0$ would give the trivial solution, so this possibility is discarded. But I cannot check the other two cases, $\lambda=\mu^2>0$ and $\lambda=-\mu^2<0$. Any help will be appreciated in this regard.
Unfortunately there seems to be a mistake in the ODE above. Substituting the kernel in and given its form, it is easy to prove that: $$\phi(x)=\lambda\sin x\int_{x}^\pi dt \cos t~\phi (t)+\lambda\cos x\int_{0}^x dt \sin t~\phi (t)$$ Then, differentiating successively, we can also prove that: $$\phi'(x)=\lambda\cos x\int_{x}^\pi dt \cos t~\phi (t)-\lambda\sin x\int_{0}^x dt \sin t~\phi (t)$$ $$\phi''(x)=-\lambda\sin x\int_{x}^\pi dt \cos t~\phi (t)-\lambda\cos x\int_{0}^x dt \sin t~\phi (t)-\lambda\cos^2 x~\phi(x)-\lambda\sin^2x~\phi(x)=-(1+\lambda)\phi(x)$$ We also obtain the constraints $\phi(0)=0,\phi'(\pi)=0$ mentioned in the question. Now we can solve the equation: setting $\omega=\sqrt{1+\lambda}$, $$\phi(x)=A\cos(\omega x)+B\sin(\omega x)$$ and applying the constraints we obtain $$A=0~~,~~\omega\pi=\Big(n+\frac{1}{2}\Big)\pi$$ and therefore the eigenvalues that produce legitimate solutions of the equation are given by $$\lambda_n=\Big(n+\frac{1}{2}\Big)^2-1$$
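One can sanity-check this numerically (not a proof): plug the $n=1$ candidate eigenfunction $\sin(3x/2)$ with $\lambda=(3/2)^2-1=5/4$ into the original integral equation and compare both sides by quadrature. A hand-rolled Simpson rule keeps the sketch dependency-free:

```python
# Verify lam * int_0^pi K(x,t) phi(t) dt == phi(x) for phi(t) = sin(3t/2),
# lam = (3/2)**2 - 1, splitting the integral at t = x where K switches branch.
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule with 2*m subintervals
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m))
    return s * h / 3.0

lam = 1.5 ** 2 - 1.0                      # (n + 1/2)**2 - 1 with n = 1
phi = lambda t: math.sin(1.5 * t)

def rhs(x):
    # for t <= x the kernel is cos(x) sin(t); for t >= x it is sin(x) cos(t)
    left = simpson(lambda t: math.cos(x) * math.sin(t) * phi(t), 0.0, x)
    right = simpson(lambda t: math.sin(x) * math.cos(t) * phi(t), x, math.pi)
    return lam * (left + right)

for x in [0.3, 1.0, 2.0, 3.0]:
    assert abs(rhs(x) - phi(x)) < 1e-8
```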
{ "language": "en", "url": "https://math.stackexchange.com/questions/3252724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
how to justify that the logistic function is the inverse of the natural logit function? Per Wikipedia, "the logistic function is the inverse of the natural logit function". The standard logistic function looks like (equation 1) $$ {\displaystyle {\begin{aligned} f(x)&={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}={\frac {1}{2}}+{\frac {1}{2}}\tanh({\frac {x}{2}})\\ \end{aligned}}} $$ and the natural logit function looks like (equation 2) $$\operatorname{logit}(p) = \log\left(\dfrac{p}{1-p}\right).$$ How can one justify that equation 1 is the inverse of equation 2?
Just calculate $f\big(\operatorname{logit}(p)\big)=p$ and $\operatorname{logit}\big(f(x)\big)=x$: $$f\big(\operatorname{logit}(p)\big) = \frac{1}{1+e^{-\log\left(\frac{p}{1-p}\right)}}=\frac{1}{1+\frac{1-p}{p}}=\frac{1}{\frac{1}{p}}=p$$ and $$\operatorname{logit}\big(f(x)\big)=\log\left(\frac{\frac{1}{1+e^{-x}}}{1-\frac{1}{1+e^{-x}}}\right)=\log\left(\frac{1}{1+e^{-x}-1}\right)=\log\left(e^x\right)=x.$$
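A quick numerical round-trip check of both identities:

```python
# Confirm f(logit(p)) == p and logit(f(x)) == x at a few sample points.
import math

def f(x):            # standard logistic function
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):        # natural logit
    return math.log(p / (1.0 - p))

for x in [-5.0, -1.0, 0.0, 0.5, 3.0]:
    assert math.isclose(logit(f(x)), x, abs_tol=1e-12)

for p in [0.01, 0.25, 0.5, 0.9, 0.999]:
    assert math.isclose(f(logit(p)), p, abs_tol=1e-12)
```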
{ "language": "en", "url": "https://math.stackexchange.com/questions/3252945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can a conditional statement have multiple premises? Is it correct to say that a conditional statement of the form p ^ q -> r has multiple premises (where the premises are the individual conjuncts p and q) or is that an incorrect use of the term "premise" when referring to implications (as the conjunction itself could be considered the only "premise" of the implication)? I ask because, in Rosen's Discrete Mathematics and its Applications, he claims that "A theorem may be the universal quantification of a conditional statement with one or more premises and a conclusion". I am unsure whether he is referencing the premises and conclusion of the universally quantified conditional statement or the premises and conclusion of something else, such as the premises assumed to be true so that the theorem can be concluded.
To prove a mathematical theorem you assume your premise and do your best to get to the conclusion. Very often, the premise you are assuming contains several claims. If you let each of those claims be a propositional letter such as $p$, $q$, and so on, you get exactly the structure you are asking about: multiple premises.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3253089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that for every $v\in\mathbb R^6$ there is a $0\ne w\in\mathbb R^6$ such that $B(v,w)=B(w,w)=0$. Let $B$ be a nondegenerate symmetric bilinear form on $\mathbb R^6$ with signature $0$. Show that for every $v\in\mathbb R^6$ there is a $0\ne w\in\mathbb R^6$ such that $B(v,w)=B(w,w)=0$. My attempt: Since $B$ has signature $0$ and $B$ is nondegenerate, we can choose a basis $e_1,...,e_6$ such that $$B(\textbf{x},\textbf{y})=x_1y_1+x_2y_2+x_3y_3-x_4y_4-x_5y_5-x_6y_6$$ where $\textbf{x}=\sum_{k=1}^6x_ke_k$ and $\textbf{y}=\sum_{k=1}^6 y_ke_k$. Then if we set $v=\sum_{k=1}^6v_ke_k$ then we need to show that there exists $w=\sum_{k=1}^6w_ke_k$ such that \begin{align} \sum_{k=1}^3 v_kw_k-\sum_{k=4}^6v_kw_k&=0\tag 1\\ \sum_{k=1}^3 w_k^2-\sum_{k=4}^6 w_k^2&=0\tag 2 \end{align} But I can't solve this system of equations.... So how to move on?
Hint Here's a geometric approach that requires proving a short lemma, namely that the punctured null cone $N := \{u \in \Bbb R^6 \setminus \{0\} : B(u, u) = 0\}$ is (path-)connected. Now, pick any $x \in N$: If $B(v, x) = 0$, we are done; otherwise $B(v, x)$ and $B(v, -x)$ have opposite signs. Since $N$ is (path-)connected, there is some path $y(t)$ in $N$ connecting $x$ and $-x$. Now, consider the function $t \mapsto B(v, y(t))$: it changes sign, so by the intermediate value theorem it vanishes at some $t_0$, and $w = y(t_0)$ works. The same approach shows, by the way, that the nondegeneracy hypothesis can be dropped, and that we can replace $\Bbb R^6$ with $\Bbb R^n$ and allow any signature $-n + 2 < s < n - 2$. In particular this argument does not apply to Lorentzian signature ($s = \pm(n - 2)$), and indeed the claim is false in that case.
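For concreteness, here is the hint carried out numerically with $B$ in standard form (an illustration only: the explicit path $y(t)$ below is one convenient null-cone path joining $x$ to $-x$, not forced by the argument):

```python
# With B(u, w) = u1 w1 + u2 w2 + u3 w3 - u4 w4 - u5 w5 - u6 w6, the path
#   y(t) = (cos t, sin t, 0, cos t, sin t, 0)
# stays on the null cone and joins x = y(0) to -x = y(pi), so
# g(t) = B(v, y(t)) changes sign and bisection finds w with
# B(v, w) = B(w, w) = 0.
import math
import random

def B(u, w):
    return sum(u[i] * w[i] for i in range(3)) - sum(u[i] * w[i] for i in range(3, 6))

def y(t):
    return (math.cos(t), math.sin(t), 0.0, math.cos(t), math.sin(t), 0.0)

random.seed(0)
v = [random.uniform(-1, 1) for _ in range(6)]

g = lambda t: B(v, y(t))
a, b = 0.0, math.pi
assert g(a) * g(b) <= 0.0          # g(pi) = -g(0), so a sign change is guaranteed

for _ in range(200):               # plain bisection
    m = 0.5 * (a + b)
    if g(a) * g(m) <= 0.0:
        b = m
    else:
        a = m

w = y(0.5 * (a + b))
assert abs(B(v, w)) < 1e-9
assert abs(B(w, w)) < 1e-12
assert any(abs(c) > 0.5 for c in w)    # w is nonzero (its entries lie on a unit circle)
```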
{ "language": "en", "url": "https://math.stackexchange.com/questions/3253192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is it possible to have 2 different but equal size real number sets that have the same mean and standard deviation? By inspection I notice that:

* Shifting does not change the standard deviation but changes the mean. For example, $\{1,3,4\}$ has the same standard deviation as $\{11,13,14\}$.
* Sets with the same (or reversed) sequence of adjacent differences have the same standard deviation. For example, $\{1,3,4\}$, $\{0,2,3\}$, $\{0,1,3\}$ have the same standard deviation, but the means are different.

My conjecture: there are no two distinct sets with the same length, mean and standard deviation. Question: Is it possible to have 2 different but equal size real number sets that have the same mean and standard deviation?
A simple way to find a counterexample to the conjecture is to focus on sets whose values are symmetrical about zero. This ensures that the two sets have the same mean, and also simplifies calculation of their standard deviations. Let $a,b$ be any real numbers. Then the set $\{a,b,-a,-b\}$ has mean zero and SD $\sqrt{(2a^2+2b^2)/4}$. Now let $c$ be any real number not equal to any member of that set and such that $c^2 < a^2+b^2$. Let $d$ be given by: $$d = \sqrt{a^2 + b^2 - c^2}$$ implying $a^2 + b^2 = c^2 + d^2$. Then the set $\{c,d,-c,-d\}$ has the same mean and SD.
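A numerical illustration of this construction (the particular values $a=1$, $b=2$, $c=0.5$ are arbitrary choices satisfying the stated conditions):

```python
# Build two different 4-element sets with equal mean (zero) and equal
# population standard deviation, following the symmetric construction above.
import math
import statistics

a, b = 1.0, 2.0
c = 0.5                              # any c not in the first set with c**2 < a**2 + b**2
d = math.sqrt(a * a + b * b - c * c) # forces c**2 + d**2 == a**2 + b**2

s1 = [a, b, -a, -b]
s2 = [c, d, -c, -d]

assert set(s1) != set(s2)            # genuinely different sets
assert math.isclose(statistics.mean(s1), statistics.mean(s2))    # both are 0
assert math.isclose(statistics.pstdev(s1), statistics.pstdev(s2))
```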
{ "language": "en", "url": "https://math.stackexchange.com/questions/3253262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 9, "answer_id": 4 }
Counterexample to a Set that cannot be Written as a Cross Product Let $f:\mathbb{R} \to \mathbb{R}$ be a function, and let $S = \{(x,f(x)):x \in \mathbb{R}\}$. Then for any $A \subseteq \mathbb{R}$ and $B \subseteq \mathbb{R}$, $S\neq A \times B$. My first thought was that this statement was false and I was searching for a counterexample. However the more I think about it I believe it is true because $A \times B$ would cause each $x \in A $ to be paired with multiple $f(x)\in B$. Thus $f$ would not be a function. Any thoughts regarding this question would be appreciated.
$S$ is of the form $A \times B$ iff $f$ is a constant: suppose $B$ has only one point $b_0$. Then, for any $x$, $(x,f(x)) \in A \times B$ so $f(x)=b_0$ and $f$ is a constant. If $B$ has two distinct points $b_1$ and $b_2$ pick any $a \in A$. Then $(a,b_1)\in A\times B=S$ so $b_1=f(a)$. Similarly, $b_2=f(a)$ so we get a contradiction to the fact that $b_1 \neq b_2$. Thus you can take any non-constant $f$ to get a counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3253568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why $A-I $ invertible implies $ I + A + A^2 + \cdots + A^{k-1} = 0 $ Let $A$ be a square complex matrix such that $1$ is not an eigenvalue of $A$ and $A^k=I_n$ for some positive integer $k$. I want to show that $$ I + A + A^2 + \cdots + A^{k-1} = 0 $$. I was reading an answer but I do not understand the last step. I understand why $A-I$ is invertible but how does it help me in arriving at the conclusion. Here is the solution I am reading: If $A^k = I$, then we have $A^k - I = 0$. Factoring, we have $$ (A - I)(I + A + A^2 + \cdots + A^{k-1}) = 0 $$ Since $1$ is not an eigenvalue, $A-I$ is invertible so that $$ I + A + A^2 + \cdots + A^{k-1} = 0 $$ Edit: Added detail. Thanks.
Since $(A - I)(I + A + A^2 + \cdots + A^{k-1})=0$ and $A-I$ is invertible, multiplying on the left by the inverse $(A-I)^{-1}$ gives: $(A-I)^{-1}(A - I)(I + A + A^2 + \cdots + A^{k-1}) = I + A + A^2 + \cdots + A^{k-1}=0.$
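A small concrete example: the $2\times 2$ rotation by $90^\circ$ satisfies $A^4=I$ and has characteristic polynomial $x^2+1$, so $1$ is not an eigenvalue, and the sum indeed vanishes:

```python
# Exact integer check with A = rotation by pi/2: A^4 = I and
# I + A + A^2 + A^3 = 0.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
A = [[0, -1], [1, 0]]      # char. poly x^2 + 1, so 1 is not an eigenvalue

A2 = matmul(A, A)
A3 = matmul(A2, A)
A4 = matmul(A2, A2)
assert A4 == I

S = [[I[i][j] + A[i][j] + A2[i][j] + A3[i][j] for j in range(2)] for i in range(2)]
assert S == [[0, 0], [0, 0]]
```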
{ "language": "en", "url": "https://math.stackexchange.com/questions/3253810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A Poisson process with a fixed maximum number of counts? For a Poisson process, we have the pmf of the number of arrivals in $(0,t]$ as: $$p(n)=\frac{(\lambda t)^n \exp(-\lambda t)}{n!}$$ where $\lambda$ is the arrival rate, and the PDF of the inter-arrival times is given as: $$f(t)=\lambda \exp(-\lambda t)$$ What if I assume that the maximum number of counts possible is $N$? I mean, in a call scenario the number of calls can never exceed some bound, say $N$. For such a scenario (with $N=500$): $\sum_{n=0}^{500}p(n)=1$ and $p(n)=0$ for $n>500$. Can we incorporate such a scenario in the Poisson process?
If I understand your question correctly, then I believe such a distribution is possible. Your original probability mass function was defined by $$P(n\text{ arrivals in }(0,t])=p(n)=\frac{(\lambda t)^n e^{-\lambda t}}{n!}$$ If you wish to limit the “number of arrivals” to $N$, then the new probability mass function will be given by the following, for $n\le N$: $$\begin{align}P(n\text{ arrivals in }(0,t] \space | \text{ at most N arrivals})&=\frac{P(n\text{ arrivals in }(0,t])}{P(\text{at most N arrivals})}\\ &= \frac{p(n)}{\sum_{k=0}^N p(k)}\\ \end{align}$$ Which should give the distribution you are looking for.
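A small sketch of the conditioned pmf (the values $\lambda t = 3$ and $N = 5$ are arbitrary choices):

```python
# Renormalize the Poisson pmf over {0, ..., N}; dividing by
# P(at most N arrivals) makes the truncated weights sum to 1.
import math

lam_t = 3.0
N = 5

def poisson_pmf(k):
    return lam_t ** k * math.exp(-lam_t) / math.factorial(k)

Z = sum(poisson_pmf(k) for k in range(N + 1))       # P(at most N arrivals)
truncated = [poisson_pmf(k) / Z for k in range(N + 1)]

assert math.isclose(sum(truncated), 1.0)            # a proper pmf on {0, ..., N}
assert all(t >= poisson_pmf(k) for k, t in enumerate(truncated))  # Z < 1 inflates each term
```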
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A non-exponential proof of $\lim_{x \to \infty} c^{1/x} = 1$ when $c>1$ My textbook asks me to prove this when we are just beginning to learn limits. $\lim \limits_{x \to \infty} c^{1/x} = 1, \ \ \ \ \ c>1$ Suppose otherwise: since $c^{1/x}>1$, there is some $\epsilon>0$ such that $$\forall N > 0, \ \exists x > N \ \ \ \ \ \text{with} \ \ \ \ \ c^{1/x} \geq 1 + \epsilon $$ $$ \begin{equation} \begin{split} &c \geq (1 + \epsilon)^x \ \ \ \ \ \ \ \ \ &&\text{since $c>0$ } \\ &c \geq (1 + \epsilon)^N \ \ \ \ \ \ \ \ \ &&\text{since $(1+\epsilon)^{x} > (1+\epsilon)^{N}$} \end{split} \end{equation}$$ Now this is obviously false, but I do not know how you would prove it without relying on an exponential function. The best I could come up with is this: prove that for any $x,y > 1$ where $y > x$, there exists some $N$ such that $x^N > y$. For any such $y$, there exists $z > y$. Let $x^N = y$. But again, this assumes that such an $N$ exists. Could you suggest alternative ways of proving this? Or do you think that what I've done is satisfactory?
I shall assume that we are taking the limit only over natural $x$, i.e., we can write it more suggestively as $$ \lim_{n\to\infty}c^{1/n}.$$ I shall also assume that you define $c^{1/n}$ as the unique positive real number $y$ such that $c=y^n$ (where the latter is just the product of $n$ identical factors). As $0<y\le1$ implies $y^n\le1$, we conclude that $c^{1/n}>1$ for our $c>1$. Let $\epsilon>0$. We want to find $N$ such that $|c^{1/n}-1|<\epsilon$ for all $n>N$. By the archimedean property of the real numbers, there exists $N\in\Bbb N$ with $N>\frac c\epsilon$. Then we find that for all $n>N$, by the Bernoulli inequality, $$ (1+\epsilon)^n\ge 1+n\epsilon>1+N\epsilon>1+c>c$$ and hence $1+\epsilon>c^{1/n}>1$ and ultimately $|c^{1/n}-1|<\epsilon$. (Note that we used $0<a<b\implies a^n<b^n$ somewhere along the way.)
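The $\epsilon$-$N$ bookkeeping above can be checked numerically: any $N > c/\epsilon$ works.

```python
# For sample tolerances eps, take N = ceil(c / eps) and confirm
# |c**(1/n) - 1| < eps for several n > N.
import math

c = 7.0
for eps in [0.5, 0.1, 0.01]:
    N = math.ceil(c / eps)
    for n in [N + 1, 2 * N, 10 * N]:
        assert abs(c ** (1.0 / n) - 1.0) < eps
```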
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to use L'Hospital's rule for $\lim_{x \to \infty} \sqrt{x} \sin( \frac{1}{x}) $ I want to find the following limit using L'Hospital's rule: $$ \lim_{x \to \infty} \sqrt{x} \sin\Big( \frac{1}{x}\Big) $$ I know that this can be solved using the squeeze theorem from Calc 1: $$ 0 < \sqrt{x}\sin\Big( \frac{1}{x} \Big) < \sqrt{x}\cdot\frac{1}{x} = \frac{1}{\sqrt{x}}, $$ since $0 < \sin( \frac{1}{x}) < \frac{1}{x} $ for $x > 0$. What I have done so far is to convert it to fraction form: $$ \lim_{x \to \infty} \sqrt{x} \sin\Big( \frac{1}{x}\Big) = \lim_{x \to \infty} \frac{\sin( \frac{1}{x})}{\frac{1}{\sqrt{x}} }$$ But what next?
Consider the Taylor series expansion of the sine function: $$\sin t = \sum_{n=0}^{\infty} (-1)^n \frac{t^{2n+1}}{(2n+1)!}.$$ Substituting $t = \frac{1}{x}$, this becomes $$\sin\frac{1}{x} = \frac{1}{x} - \frac{1}{3!\,x^3} + \frac{1}{5!\,x^5} - \ldots$$ so $\sin(\frac{1}{x}) = \frac{1}{x} + O\big(\frac{1}{x^3}\big)$; in particular $\sin(\frac{1}{x}) \sim \frac{1}{x}$ as $x \to \infty$. So rewrite your limit as $$\lim_{x \to \infty} \sqrt{x}\left(\frac{1}{x} + O\Big(\frac{1}{x^{3}}\Big)\right) = \lim_{x \to \infty}\left(\frac{1}{\sqrt{x}} + O\Big(\frac{1}{x^{5/2}}\Big)\right),$$ which goes to $0$ as $x$ goes to infinity.
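A quick numerical check of this asymptotic behavior (not needed for the argument): for large $x$ the product tracks $1/\sqrt{x}$ very closely.

```python
# |sin(u) - u| <= u**3/6 implies |sqrt(x)*sin(1/x) - 1/sqrt(x)| <= 1/(6*x**2.5),
# so the product behaves like 1/sqrt(x) and tends to 0.
import math

f = lambda x: math.sqrt(x) * math.sin(1.0 / x)

for x in [1e2, 1e4, 1e6]:
    assert abs(f(x) - 1.0 / math.sqrt(x)) <= 1.0 / (6.0 * x ** 2.5) + 1e-15

assert f(1e8) < 1e-3
```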
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving $\int_0^{\pi/4}\sin^n(x)\,dx>\frac{1}{2^{n/2}(n+2)}$ I want to prove: $$\int_0^{\pi/4}\sin^n(x)\,dx>\frac{1}{2^{n/2}(n+2)}$$ This came up when I was working on this question, which only asked for an elementary calculus solution. Some trivial lower bounds, such as: $$ \sin(x)\geq\frac{2\sqrt{2}}{\pi}x$$ or even a slightly stronger one: $$\sin(x)\geq \dfrac{3}{\pi}x\cdot 1_{[0,\pi/6]}+\left(\frac{6(\sqrt{2}-1)}{\pi}x+\dfrac{3-2\sqrt{2}}{2}\right)\cdot 1_{[\pi/6, \pi/4]}$$ were not tight enough to prove the assertion. Ideally, one could find the asymptotic expansion of $$n2^{n/2}\int_0^{\pi/4}\sin^n(x)\,dx,$$ which appears to converge to $1$.
I would (optionally) substitute $\sin x=t$ and integrate by parts twice: \begin{align}\int_0^{1/\sqrt{2}}\frac{t^n\,dt}{(1-t^2)^{1/2}}&=\frac{1}{n+1}\left(\left.\frac{t^{n+1}}{(1-t^2)^{1/2}}\right|_0^{1/\sqrt{2}}-\int_0^{1/\sqrt{2}}\frac{t^{n+2}\,dt}{(1-t^2)^{3/2}}\right)\\&=\frac{1}{n+1}\left(2^{-n/2}-\frac{1}{n+3}\left.\frac{t^{n+3}}{(1-t^2)^{3/2}}\right|_0^{1/\sqrt{2}}+\ldots\right)\\&>\frac{2^{-n/2}}{n+1}\left(1-\frac{1}{n+3}\right)=2^{-n/2}\frac{n+2}{(n+2)^2-1}>\begin{bmatrix}\text{what}\\ \text{we}\\ \text{need }\end{bmatrix}\end{align}
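A numerical spot-check of the inequality for a few values of $n$ (using a hand-rolled Simpson rule so the sketch is dependency-free):

```python
# Compare int_0^{pi/4} sin(x)**n dx with the claimed lower bound
# 1 / (2**(n/2) * (n + 2)) for several n.
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule with 2*m subintervals
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m))
    return s * h / 3.0

for n in [1, 2, 5, 10, 25]:
    integral = simpson(lambda x: math.sin(x) ** n, 0.0, math.pi / 4)
    bound = 1.0 / (2 ** (n / 2) * (n + 2))
    assert integral > bound
```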
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Integral $\int_{0} ^{3} \dfrac {1} {(x-1)^{2/3}}dx$ Evaluate the Integral $$\int_{0} ^{3} \dfrac {1} {(x-1)^{2/3}}dx$$ I got the answer to be $3(x-1)^{1/3}+C$ by integral techniques, but my friend says that it's false because the integral is improper. I couldn't understand what he meant so can you explain?
One should be wary that the integrand has a singularity at $x=1$ and thus it is wise to split the integral (call it $I$) between the subdomains $(0,1)$ and $(1,3)$, so we have $$I = \int_0^1 \frac{1}{(x-1)^{2/3}}dx + \int_1^3 \frac{1}{(x-1)^{2/3}}dx.$$ Since we have just removed a single point, nothing really changes. However, the singularity's nature may make these integrals diverge, so it should be helpful to treat them as improper integrals and follow a limit process with the integration limits. To this end, replace $1$ by $\alpha$ on the left-hand side integral and by $\beta$ on the right-hand side term. Then, take limits as $\alpha\to 1_-$ and $\beta\to 1_ +$.
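Carrying out this limit process numerically with the antiderivative $F(x)=3(x-1)^{1/3}$ (interpreting $(x-1)^{2/3}$ via the real cube root, so the integrand is $|x-1|^{-2/3}$) shows that both one-sided limits exist:

```python
# Truncate the integral away from the singularity at x = 1 and watch the
# truncated value converge to 3 + 3 * 2**(1/3) as eps -> 0.
def F(x):
    u = x - 1.0
    return 3.0 * abs(u) ** (1.0 / 3.0) * (1 if u >= 0 else -1)   # real cube root

def truncated_integral(eps):
    # int_0^{1-eps} + int_{1+eps}^3 of |x-1|^(-2/3) dx, via the antiderivative
    left = F(1.0 - eps) - F(0.0)
    right = F(3.0) - F(1.0 + eps)
    return left + right

exact = 3.0 + 3.0 * 2.0 ** (1.0 / 3.0)
values = [truncated_integral(10.0 ** -k) for k in range(2, 10)]
errors = [abs(v - exact) for v in values]

assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))   # monotone convergence
assert errors[-1] < 1e-2
```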
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Prove that $x \in \mathbb{R}$ is a limit point of a set $A \subset \mathbb{R}$ if and only if $d(x, A \setminus \{x\})=0$. I think I have it right but I would like to have it checked. We assume that $x$ is a limit point of the set $A$. Then every neighbourhood $(x-r,x+r)$, where the radius $r$ is arbitrary, intersects the set $A$, which we can phrase as: for all $r > 0$ there exists $a \in A$ with $a \ne x$ such that $d(a,x)< r$. Now, $$\inf\{d(a,x) : a \in A \text{ and } a \neq x\}=0. $$ Essentially we are letting $r$ become very small and close to $0$ to make $d(a,x)$ become $0$. Therefore, $d(x,A\setminus\{x\})=0$. Now, we assume that $d(x,A\setminus\{x\})=0$. Because $d(x,A\setminus\{x\}) = \inf\{d(a,x) : a \in A \text{ and } a \ne x\}$, we have $$ \inf\{d(a,x) : a \in A \text{ and } a \ne x\} =0. $$ Then for all $r>0$ there exists $a \in A$ with $a \ne x$ such that $d(a,x)< r$, which means every neighbourhood of $x$ contains a point of $A$ that is different from $x$. Therefore, $x$ is a limit point of the set $A$.
Improvement suggestion, staying close to the definition of infimum (no "letting $r$ get smaller and smaller" vagueness, but definitions): If $x \in A'$ then let $s=d(x,A\setminus\{x\}) \ge 0$. Suppose for a contradiction that $s >0$, and consider $B(x,s)$, which is an open ball around $x$; as $x$ is a limit point of $A$, we can find $y \neq x$ in $A$ such that $y \in B(x,s)$, or equivalently $d(x,y) < s$. But then $y \in A\setminus \{x\}$ and so $(s=)d(x, A\setminus\{x\}) \le d(x,y)(<s)$ (a lower bound of a set is $\le$ each of its elements), but then $s < s$, which is a contradiction; so $s=0$ and indeed $d(x,A\setminus\{x\})=0$ as required. Now suppose $d(x,A\setminus\{x\})=0$ and let $r>0$ be arbitrary. Then $r$ is not a lower bound of the set $D=\{d(x,y): y \in A\setminus\{x\}\}$, for otherwise it would be a lower bound strictly larger than the greatest lower bound $d(x,A\setminus\{x\})=0$. So some element of $D$ is smaller than $r$; otherwise put, there is some $y \in A \setminus \{x\}$, so $y \in A$, $y \neq x$, such that $d(x,y) < r$. But this means this $y \in B(x,r)$ as required, and so $x$ is a limit point of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is this language recognizable (Turing machines)? $L = \{ \langle M \rangle \mid M \text{ is a TM and } M \text{ accepts some string of length } 3 \}$. Is this language recognizable? The alphabet is $\Sigma = \{0, 1\}$. My attempt to prove it is recognizable: let $w_1, w_2, w_3, \dots$ be an effective enumeration of $\Sigma^*$, where $\Sigma$ is the input alphabet. We give a TM R that recognizes $L$:

    R = "On input <M>:
        for s = 1 to infinity:
            for i = 1 to s:
                run M on w_i for s steps
                if M accepts w_i within s steps and len(w_i) == 3:
                    accept"

I don't know if this is correct. My confusion: I used the enumeration of input strings to try to exhaust them all, but since we limited the length to 3, I wonder whether it would be better to choose an enumerator of TM descriptions instead. Not sure.
    1) R = "On input <M>:
    2)     for s = 1 to infinity:
    3)         for each string w of length 3:
    4)             run M on w for s steps
    5)             if M accepts w within s steps then
    6)                 accept

You are using a "dovetailing" technique. Note that there are only $8$ possible strings of length $3$, so the loop in line 3 is guaranteed to terminate. In addition, if M accepts w, it must do so in some finite number of steps s, so the loop in line 2 is also guaranteed to terminate if M accepts some string of length 3. However, R may run forever if M does not accept any string of length 3, which is fine, since R only needs to recognize $L$, not decide it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3254996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Vector fields on the $5$-sphere What is the maximum number of smooth, linearly-independent vector fields that can exist on the $5$-sphere?
1) There certainly exists at least $1$ smooth nowhere-vanishing vector field on $S^5$ (i.e. nonzero at each $x\in S^5$), namely $x=(x_1,x_2,x_3,x_4,x_5,x_6)\mapsto (x_2,-x_1,x_4,-x_3,x_6,-x_5)$. 2) There do not even exist $2$ linearly independent continuous vector fields on $S^5$. This is proved in Steenrod's The Topology of Fibre Bundles, published in 1951 (before $K$-theory was invented): Theorem 27.11, page 142. No Chern classes are used either. The only tool used is (rather advanced) homotopy theory, which is developed in Part II of the book. Another reference: as mentioned by @user10354138, Adams reproved the result above, and much more, using $K$-theory (a freshly invented technique in 1961) in his article Vector Fields on Spheres.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3255095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
variance of number of isolated vertices in random graph $G(n,p)$ Suppose we have a random graph $G(n,p)$ with $n$ vertices, where each edge is present independently with probability $p$. Calculating its expected number of isolated vertices proves quite easy: the chance of a single vertex being isolated equals $(1-p)^{n-1}$, so by linearity of expectation, the expected number of isolated vertices is $n(1-p)^{n-1}$. However, I am tasked to calculate the variance of this number, or at least a decent approximation of it, and I have no idea how to proceed.
Let $P_{n,k}$ be the probability of exactly $k$ isolated vertices in $G(n,p)$. Looking at what happens when we add a new vertex gives: $$ P_{n+1,k}=q^n P_{n,k-1} + (1-q^{n-k})q^k P_{n,k} + \sum_{i=1}^{n-k}\binom{k+i}{i}p^iq^kP_{n,k+i} $$ where:

* $q=1-p$ as usual;
* the first term corresponds to the new vertex being isolated;
* the second term corresponds to the new vertex not being isolated while the $k$ isolated vertices of $G(n,p)$ stay isolated (there is an edge from vertex $n+1$ to one of the other $n-k$ vertices, which gives the $1-q^{n-k}$ factor, and vertex $n+1$ cannot join any of the $k$ isolated vertices in $[n]$, which gives the factor $q^k$);
* the sum covers starting with a graph having $k+i$ isolated vertices, of which the new vertex is a neighbour of exactly $i$.

Using this recurrence, you can show that the probability generating function of the number of isolated vertices $$ G_n(z):=\sum_{k=0}^n P_{n,k}z^k $$ satisfies $$ G_n(z)=q^{n-1}(z-1)G_{n-1}(z)+G_{n-1}(1+q(z-1)). $$ This has the closed-form solution $$ G_n(z)=\sum_{k=0}^n\binom{n}{k}q^{nk-\binom{k+1}{2}}(z-1)^k $$ (the exponent $nk-\binom{k+1}{2}=k(n-k)+\binom{k}{2}$ counts the edges that must be absent for $k$ given vertices to be isolated) and so you obtain $$ \operatorname{Var}[\#\text{isolated vertices}]=nq^{n-1}((1-q^{n-1})+(n-1)pq^{n-2}). $$
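As an independent sanity check of the final formula (with the arbitrary small parameters $n=5$, $p=0.3$), one can enumerate all $2^{10}$ graphs on $5$ vertices and compute the exact moments of the number of isolated vertices:

```python
# Exact check of the closed-form variance by full enumeration: weight each
# of the 2**10 edge subsets by its probability and accumulate E[X], E[X^2].
import itertools
import math

n, p = 5, 0.3
q = 1.0 - p
edges = list(itertools.combinations(range(n), 2))   # the 10 possible edges

EX = EX2 = 0.0
for mask in range(1 << len(edges)):
    present = [e for i, e in enumerate(edges) if mask >> i & 1]
    prob = p ** len(present) * q ** (len(edges) - len(present))
    touched = {v for e in present for v in e}
    k = n - len(touched)                            # isolated vertices
    EX += prob * k
    EX2 += prob * k * k

var = EX2 - EX ** 2
mean_formula = n * q ** (n - 1)
var_formula = n * q ** (n - 1) * ((1 - q ** (n - 1)) + (n - 1) * p * q ** (n - 2))

assert math.isclose(EX, mean_formula, rel_tol=1e-9)
assert math.isclose(var, var_formula, rel_tol=1e-9)
```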
{ "language": "en", "url": "https://math.stackexchange.com/questions/3255436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Defining a measure on the space of infinite words over a finite alphabet All words in this question are over the alphabet $\{0,1\}$. Let $X$ be the set of words of infinite length. For a finite word $w$, write $w0$ for the word obtained by appending $0$ to the end of $w$. Define $w1$ similarly. For each finite word $w$, fix a nonnegative real number $m_w$. Assume that $m_w=m_{w0}+m_{w1}$ for each word $w$. Assume that $m_\epsilon=1$, where $\epsilon$ is the unique word of length zero. Endow $\{0,1\}$ with the discrete topology and $\{0,1\}^\mathbb{N}$ with the product topology. The latter space is naturally identified with $X$, making $X$ into a topological space. For a finite word $w$, write $A_w\subset X$ for the set of infinite words that start with $w$. Is there a countably-additive Borel measure $\mu$ on $X$ such that $\mu(A_w)=m_w$ for every finite word $w$? Is it unique? It seems to me that the answer must be positive, but I'm still struggling with measure theory.
Yes, there are probability measures on sequence space satisfying your conditions; for instance, take $m_w=2^{-|w|}$, where $|w|$ is the length of $w$, and there are many, many others. Part of the key is the Kolmogorov extension theorem, which in your case shows that the numbers $m_w$, over all finite words $w$, induce a measure on sequence space. More generally, any $\{0,1\}$-valued discrete-time stochastic process gives an example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3255558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If compact embedding $H^1_0(\Omega) \hookrightarrow H^{-1}(\Omega)$ is not surjective, how can $H^{-1}(\Omega)$ be the dual of $H^1_0(\Omega)$? Let $\Omega$ be bounded with Lipschitz boundary.

* By Rellich compactness, $\iota: H^1_0(\Omega) \hookrightarrow L^2(\Omega)$ is a compact embedding. It is also dense.
* By Riesz representation, $L^2(\Omega) \overset{\sim}{\longrightarrow}(L^2(\Omega))^*$, so $L^2(\Omega)$ is identified with its dual.
* By Hahn-Banach, the dual map $\iota^*: (L^2(\Omega))^* \hookrightarrow (H^1_0(\Omega))^* = H^{-1}(\Omega)$ is a dense compact embedding.

Thus, $\kappa: H^1_0(\Omega) \hookrightarrow H^{-1}(\Omega)$ is a dense compact embedding. It is known that a compact map between infinite-dimensional Banach spaces cannot be surjective. Here is my confusion: $H^{-1}(\Omega)$ is the dual of $H^1_0(\Omega)$. Thus, there must be an isometric isomorphism that maps every $x \in H^1_0(\Omega)$ to a dual element $x^* \in H^{-1}(\Omega)$. So the embedding $\kappa$ of $H^1_0(\Omega)$ into its dual should be surjective. Then how can $H^1_0(\Omega)$ be compactly embedded into its dual?
There exists a compact embedding $H^1_0(\Omega)\to H^{-1}(\Omega)$. There also exists an isometric isomorphism $H^1_0(\Omega)\to H^{-1}(\Omega)$. There is nothing contradictory about this, because these are two different maps. (If you view a map from a normed space to its dual as a bilinear form on the space, the first map corresponds to the $L^2$ inner product restricted to $H^1_0(\Omega)$, while the second map corresponds to the standard inner product on $H^1_0(\Omega)$ that makes it a Hilbert space. These are two different bilinear forms on $H^1_0(\Omega)$, so they give two different maps.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3255687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solve the following algebraic logarithmic inequality Solve the following logarithmic inequality, with all logs base $10$: $$\left(\frac12\right)^{\log x^2} + 2 > 3\times2^{-\log(-x)}$$ I have done many logarithmic inequalities, but I am not able to crack this one. Please give a hint or describe the approach one should try for this type of question. I am preparing for the international mathematics olympiad, so any help would be appreciated. Thanks
Hint: We need $-x>0$. Let $\displaystyle y=2^{-\log(-x)}$. Then $\displaystyle \left(\frac12\right)^{\log x^2}=\left(2^{-\log(-x)}\right)^2=y^2$. The inequality can be written as $\displaystyle y^2 -3y+ 2 > 0$.
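A quick numerical check that this substitution is an exact rewriting (the sample values of $x<0$ are arbitrary):

```python
# With y = 2**(-log10(-x)), the difference between the two sides of the
# original inequality equals y**2 - 3*y + 2 exactly.
import math

for x in [-0.01, -0.5, -2.0, -100.0]:
    y = 2.0 ** (-math.log10(-x))
    lhs = 0.5 ** math.log10(x * x) + 2.0     # note log10(x**2) = 2*log10(-x) for x < 0
    rhs = 3.0 * 2.0 ** (-math.log10(-x))
    assert math.isclose(lhs - rhs, y * y - 3.0 * y + 2.0, rel_tol=1e-9)
```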
{ "language": "en", "url": "https://math.stackexchange.com/questions/3255804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Definitions of exists unique $\exists!x_0 \in S,\,P(x_0)$ Definitions:

1. $\exists x_0 \in S,\ P(x_0)\wedge(\forall x_1,x_2 \in S,\ P(x_1)\wedge P(x_2)\rightarrow x_1=x_2)$
2. $\exists x_0 \in S,\ P(x_0)\wedge (\forall x_1 \in S,\ P(x_1)\rightarrow x_0=x_1)$
3. $\exists x_0 \in S,\ \forall x_1 \in S,\ (P(x_1) \leftrightarrow x_0 = x_1)$

Question: I saw that people sometimes use the first one and sometimes the second one in uniqueness proofs. Are they equivalent? If so, is it possible to prove it?
Passing from (1) to (2) is easy: it suffices to let $x_2=x_0$ in the uniqueness clause. Now assume (2) is true. We have $$\forall x_1,x_2 \in S \big( P(x_1) \land P(x_2) \Rightarrow x_0=x_1 \land x_0=x_2\big)$$ which leads to $$\forall x_1,x_2 \in S \big( P(x_1) \land P(x_2) \Rightarrow x_1=x_2\big)$$ and we have just demonstrated (1).
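As a finite sanity check (not a proof), one can also verify by brute force that all three definitions agree for every predicate on a small set:

```python
# Enumerate every predicate P on S = {0, 1, 2} (i.e. every subset of S)
# and evaluate the three definitions of "there exists a unique x0 with P(x0)".
from itertools import combinations

S = [0, 1, 2]

def def1(P):
    return any(P(x) for x in S) and all(
        x1 == x2 for x1 in S for x2 in S if P(x1) and P(x2))

def def2(P):
    return any(P(x0) and all(x0 == x1 for x1 in S if P(x1)) for x0 in S)

def def3(P):
    return any(all(P(x1) == (x0 == x1) for x1 in S) for x0 in S)

for r in range(len(S) + 1):
    for subset in combinations(S, r):
        P = lambda x, s=frozenset(subset): x in s
        # all three agree, and hold exactly when one element satisfies P
        assert def1(P) == def2(P) == def3(P) == (len(subset) == 1)
```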
{ "language": "en", "url": "https://math.stackexchange.com/questions/3255954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Calculate $\sum_{n=-\infty}^{\infty}\frac{1-\cos(an)}{(an)^2}$ After playing with some series on a numerical math website, it seems to me that the following identity holds: $$\sum_{n=-\infty}^{\infty}\frac{1-\cos(an)}{(an)^2}=\frac{\pi}{a}$$ It seems a little bit surprising to me, and I was wondering if there is an elementary way to see it. Convergence is trivial by comparison with $\frac{1}{n^2}$, but the specific value is interesting. I would think that some Fourier analysis might be applicable, mainly because $\pi$ appears here, but I couldn't make it work. P.S.: Even a way to see the behavior $\sum_{n=-\infty}^{\infty}\frac{1-\cos(an)}{(an)^2}\propto \frac{1}{a}$ is interesting to me, and presumably simpler. P.S.2: It seems that for large $a$ (possibly just $a>2\pi$) the claim is incorrect; see the comments. Still, it is interesting to calculate, even if only for $|a|<2\pi$.
Define $$f(x)=\sum_{n=-\infty}^{\infty}\frac{1-\cos(nx)}{(nx)^2}$$ for $x\in (0,2\pi]$. We can differentiate this function to get $$f'(x)=\frac{d}{dx}\left(\sum_{n=-\infty}^{\infty}\frac{1-\cos(nx)}{(nx)^2}\right)=\sum_{n=-\infty}^{\infty}\frac{d}{dx}\left(\frac{1-\cos(nx)}{(nx)^2}\right)$$ $$=\sum_{n=-\infty}^{\infty}\left(\frac{\sin (n x)}{n x^2}-\frac{2 (1-\cos (n x))}{n^2 x^3}\right)=\frac{1}{x^2}\sum_{n=-\infty}^{\infty}\frac{\sin(nx)}{n}-\frac{2}{x}f(x).$$ From the answer to this question, we know that for $x\in (0,2\pi)$ $$\sum_{n=1}^\infty \frac{\sin(nx)}{n}=\frac{\pi-x}{2}.$$ Since $\lim_{n\to 0}{\frac{\sin(nx)}{n}}=x$ and $\frac{\sin(nx)}{n}$ is even, this implies $$\frac{1}{x^2}\sum_{n=-\infty}^\infty \frac{\sin(nx)}{n}=\frac{1}{x^2}\left(x+2\sum_{n=1}^\infty \frac{\sin(nx)}{n}\right)=\frac{1}{x^2}(x+\pi-x)=\frac{\pi}{x^2}.$$ This then gives us the ODE $$f'(x)=\frac{\pi}{x^2}-\frac{2}{x}f(x).$$ Solving this, we get that $$f(x)=\frac{\pi}{x}+\frac{C}{x^2}$$ for some constant $C$. We can find this by noting that $$f(\pi)=\sum_{n=-\infty}^{\infty}\frac{1-\cos(n\pi)}{(n\pi)^2}=\frac{1}{2}+\frac{2}{\pi^2}\sum_{n=1}^\infty \frac{2}{(2n-1)^2}=\frac{1}{2}+\frac{2}{\pi^2}\frac{\pi^2}{4}=1.$$ Then $C=0$, and we get $f(x)=\frac{\pi}{x}$ for $x\in (0,2\pi)$. To finish the proof, note that $$f(2\pi)=\sum_{n=-\infty}^{\infty}\frac{1-\cos(n2\pi)}{(n2\pi)^2}=...+0+0+\frac{1}{2}+0+0+...=\frac{1}{2}$$ as expected.
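For what it's worth, the identity can also be sanity-checked numerically by truncating the symmetric sum. The cutoff $N$ below is an arbitrary choice (the tail is $O(1/(a^2N))$), and the $n=0$ term is taken as its limiting value $\tfrac12$:

```python
import math

def f(a, N=200_000):
    # n = 0 term is taken as its limiting value 1/2
    s = 0.5
    for n in range(1, N + 1):
        s += 2.0 * (1.0 - math.cos(a * n)) / (a * n) ** 2
    return s

val1, val_pi = f(1.0), f(math.pi)
print(val1)     # about 3.14159, i.e. pi/1
print(val_pi)   # about 1.0, matching f(pi) = 1 above
```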
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Polish spaces are continuous images of the Baire space I'm having some troubles understanding the proof of Theorem 7.9 (pag. 39) in Kechris' "Classical Descriptive Set Theory": There are two points of the proof proposed that I don't quite understand. First of all, when we define the Lusin scheme, why do we need to specify that every $F_s$ is going to be a $F_\sigma$ if right afterwards we set $F_s = \bigcup_i \overline{F_{s^\smallfrown i}}$, making it a $F_\sigma$ set by definition. Moreover I truly don't get how does he manage, at the end of the proof, to write $C_{i+1} \setminus C_i = \bigcup_j E_j^{(i)}$ with $E_j^{(i)}$ being pairwise disjoint $F_\sigma$ sets of diameter $< \epsilon$. Where do $(E_j^{(i)})_j$ come from? How do we know that $C_{i+1} \setminus C_i$ can be covered by pairwise disjoint $F_\sigma$ sets of diameter $< \epsilon$?
(ii) is somewhat superfluous. (iii) is already the intended construction: each $F_s$ is partitioned by its successors $F_{s\smallfrown i}$ and these sets are all $F_\sigma$ too. (ii) is just to reinforce and anticipate (iii), I think. (iii) is a statement of intent, not the definition of $F_s$. The final point is more subtle: $C_{i+1}\setminus C_i$ is a relatively open set of the Polish space $C_{i+1}$. Every open subset of a Polish space can be written as a countable union of pairwise disjoint small-diameter $F_\sigma$ sets. This follows as we can take a cover by small-diameter open sets, and since the set is hereditarily Lindelöf (being second countable) we can find a countable subcover of it, which we enumerate. After that, using the standard trick of subtracting all previous sets, we get a countable disjoint family of sets that are all $F_\sigma$ (each is a difference of open sets in a metric space, and open sets in a metric space are $F_\sigma$) and still small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Variational principle question regarding functions that have a minimum at the origin under a restriction. I'm going over some old assignments from a couple of terms ago and have come across a problem from my variational principles module. I looked at the function in the hint and noticed that for some points $f(x,y)<0$ but $f(0,0)=0.$ So my issue is in my understanding, particularly what is meant by 'function obtained by restricting $f(x,y)$ onto a straight line passing through the origin'. Does this mean that we take a line going through the origin in $\mathbb{R}^3$ or a line in the $x y$ plane going through the origin and consider the function $g(x)=f(x,kx)$ for some $k\in\mathbb{R}$? Or if neither of these, then what? Clearly the hint is to suggest a counterexample, but this function does not have a minimum at the origin when you restrict it as described (unless my understanding of the restriction is incorrect, which is the most likely case).
Recall the definition of a restriction of a function, for instance, from Wikipedia. Let $f$ be a function from a set $E$ to a set $F$. If a set $A$ is a subset of $E$, then the restriction of $f$ to $A$ is the function $f|_A:A\to F$ given by $f|_A(x) = f(x)$ for (each) $x$ in $A$. That is, we have to consider restrictions of the function $f(x,y)$ onto straight lines of its domain $\Bbb R^2$ passing through the origin. Such straight lines $A$ are given by a linear equation $x=0$ or $y=kx$ for some real $k$. If $A=\{(x,y):x=0\}$ then $f(x,y)|_A=y^4$ and this function attains its global minimum at $(0,0)$. If $A=\{(x,y):y=kx\}$ then $f(x,y)|_A=(x-k^2x^2)(2x-k^2x^2)=x^2(1-k^2x)(2-k^2x)$. So $f(0,0)=0$, but $f|_A$ does not always attain its minimum at $(0,0)$. For instance, for $k\ne 0$ and $x=\tfrac{3}{2k^2}$ we have $f(x,kx)<0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Recursive formula for points of algebraic curves over finite fields Given an algebraic curve $f$, let $N_f(p,k)$ be the number of points of $f$ over $\mathbb{F}_{p^k}$. I need to prove that there exists a recursive formula of order $2g+2$, where $g$ is the genus of $f$. I know that according to Shparlinski (page 160 from https://www.springer.com/la/book/9780792356622) it can be deduced from the expression $$N_f(p,k)=p^k+1-\sum_{i=1}^{2g}{w_i}^k$$ where $w_i\in \mathbb{C}$ are such that $|w_i|=\sqrt{p}$ and $\overline{w}_i=w_{i+g}$. Any help would be appreciated. Thanks!
Surely you mean $$N_f(p,k)=p^k+1-\sum_{i=1}^{2g}w_i^k?$$ If $b_1,\ldots,b_m$ and $c_1,\ldots,c_m$ are any numbers and we define $$a_n=\sum_{k=1}^m b_kc_k^n$$ then the sequence $(a_n)$ satisfies the recurrence $$a_n=-\sum_{k=1}^m u_ka_{n-k}\qquad(n\ge m)$$ where $$\prod_{j=1}^m(X-c_j)=X^m+\sum_{j=1}^m u_jX^{m-j}.$$ In your example, the $c_k$ are $p,1,w_1,\ldots,w_{2g}$ and the $b_k$ are $1,1,-1,\ldots,-1$.
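Here is a hypothetical numeric illustration of that recurrence — the roots and weights below are made-up toy values with $g=1$ and $|w|=\sqrt p$, not data from an actual curve. It builds the $u_j$ by expanding $\prod_j (X-c_j)$ and checks $a_n=-\sum_{k=1}^m u_k a_{n-k}$ for $n\ge m$:

```python
import numpy as np

p = 5.0
w = np.sqrt(p) * np.exp(0.7j)            # made-up "Frobenius root", |w| = sqrt(p)
c = np.array([p, 1.0, w, np.conj(w)])    # the c_k: p, 1, w_1, ..., w_{2g}
b = np.array([1.0, 1.0, -1.0, -1.0])     # the b_k: 1, 1, -1, ..., -1
m = len(c)

# u_j from prod_j (X - c_j) = X^m + u_1 X^{m-1} + ... + u_m, by convolution
coeffs = np.array([1.0 + 0j])
for root in c:
    coeffs = np.convolve(coeffs, [1.0, -root])
u = coeffs[1:]

a = [np.sum(b * c ** n).real for n in range(12)]   # the N_f(p, n) analogue
recurrence_ok = all(
    abs(a[n] + sum((u[k - 1] * a[n - k]).real for k in range(1, m + 1)))
    < 1e-6 * max(1.0, abs(a[n]))
    for n in range(m, 12))
print(recurrence_ok)
```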
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Card picking dependent probability without replacement - P(6,6,Red) I'm trying to teach myself statistics online @ khan academy and although it's a wonderful resource I have no one to turn to for help when I don't understand. I hope someone here might be able to give me some advice. I thought I understood it all and thought I would test myself, so I've made some questions up to try and work out the theoretical probability, then cross-check the answer against a simulation I've written in VBA/Excel to really understand it all. But.... I'm really stuck on working out the probability of picking card #1 with value 6, #2 with value 6 then #3 a Red card. Working out P(6,6) is easy for me = 4/52 * 3/51 ≈ 0.4525% My issue is that after having picked two 6's in a row, what is the probability of then picking out a red card - I just can't get my head around it. Because you could have already picked 2 red cards which would reduce the probability, or you could have picked 1 or none. I don't know how you work it out from here, and googling it doesn't help as I'm not wording my question right and not getting the results I want. Can anyone help me / push me in the right direction. ANYTHING would be great. Thanks Lewis
You got it already. Since you are aware that first two picks being red or not affects the probability of third pick being red, you should consider it case by case: Case 1: First two 6's are black. First, we can find the probability of having two black 6's in our two picks. This is nothing but $\frac{2}{52}\cdot\frac{1}{51}$. In this case, having a red card on the third pick is $\frac{26}{50}$ since we still have $26$ red cards. Case 2: One of first two picks is red. In this case, probability of having one black 6 and one red 6 is nothing but $\frac{2}{52}\cdot\frac{2}{51}\cdot2$. Here, we multiplied by $2$ since we can have our first pick a red 6 and second pick a black 6; or first pick a black 6 and second pick a red 6. And these cases are just symmetric so instead of separating them, we just multiplied the result by $2$. Then having a red card on the third pick is $\frac{25}{50}$ because we are left with $25$ red cards after first two picks. Case 3: I am leaving it to you to find the probability for this case and the answer. I can check it if you write it as a comment.
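All three cases can be cross-checked exactly with Python's fractions module — an exact alternative to the asker's VBA simulation. Fair warning: the code below also spells out Case 3 (the first two picks being the two red 6's), so it is a spoiler for the part left to the asker:

```python
from fractions import Fraction as F

# first two picks: the two black 6's, one of each colour, or the two red 6's
both_black = F(2, 52) * F(1, 51)
mixed      = F(2, 52) * F(2, 51) * 2   # black-then-red or red-then-black order
both_red   = F(2, 52) * F(1, 51)       # (this is Case 3)

# red cards left among the remaining 50 in each case: 26, 25, 24
p_red_third = (both_black * F(26, 50)
               + mixed * F(25, 50)
               + both_red * F(24, 50))

p_two_sixes = F(4, 52) * F(3, 51)      # sanity: the three cases partition P(6,6)
print(p_red_third)                      # 1/442
```

The total comes out to $\tfrac{1}{442}=\frac{4}{52}\cdot\frac{3}{51}\cdot\frac{25}{50}$ — conditional on two 6's, the third card is red with probability exactly $\tfrac12$, by the red/black symmetry among the 6's.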
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find a matrix of (any) linear map $ \varphi : \mathbb{R}^{4} \rightarrow \mathbb{R}^{3} $, so that conditions apply I'm trying to find a matrix of (any) linear map $ \varphi : \mathbb{R}^{4} \rightarrow \mathbb{R}^{3} $, for which the following conditions apply: * *the dimension of the image $ \varphi = 2$ *$ \varphi(1,1,1,1) = (1,2,1)$ *$ \varphi(1,0,1,0) = (2,1,0)$ I already determined, that the dimension of the kernel $\varphi$ will be $2$. What should be my next steps? I think I know how I would find such a matrix, that the dimension of the image is 2, but I don't know what to do about the second and the third point. Thanks!
Let's pretend for a moment that points two and three require that: * *$\phi(e_1) = w_1$ *$\phi(e_2) = w_2$ Where $e_i$ are the elements of the canonical basis for $\mathbb R^4$, $w_1 = (1,2,1)$ and $w_2 = (2,1,0)$. An immediate solution for this problem would be the linear function defined by the associations above and sending $e_3$ and $e_4$ to the zero vector in $\mathbb R^3$, represented by the matrix: $$ A= \left(\begin{matrix}1&2&0&0\\2&1&0&0\\1&0&0&0\end{matrix}\right) $$ The kernel of this function has obviously dimension 2. Now to solve the original problem, let $v_1=(1,1,1,1)$ and $v_2=(1,0,1,0)$, and consider the change of basis matrix below: $$ B= \left(\begin{matrix}1&1&0&0\\1&0&0&0\\1&1&1&0\\1&0&0&1\end{matrix}\right) $$ You can see that $Be_1 = v_1$ and $Be_2=v_2$; so inverting the matrix we have $B^{-1}v_1=e_1 \implies AB^{-1}v_1=w_1$, and similarly $AB^{-1}v_2=w_2$. $B^{-1}$ is invertible, so $\dim \ker AB^{-1} = \dim\ker A$, hence the linear function represented by $AB^{-1}$ satisfies your requirements.
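The construction is easy to verify numerically. This sketch (the variable names are made up; only the matrices $A$ and $B$ come from the answer) checks that $AB^{-1}$ sends the two prescribed vectors to their targets and has a two-dimensional image:

```python
import numpy as np

A = np.array([[1.0, 2, 0, 0],
              [2,   1, 0, 0],
              [1,   0, 0, 0]])
B = np.array([[1.0, 1, 0, 0],
              [1,   0, 0, 0],
              [1,   1, 1, 0],
              [1,   0, 0, 1]])

M = A @ np.linalg.inv(B)    # matrix of the required map in the standard bases

v1 = np.array([1.0, 1, 1, 1])
v2 = np.array([1.0, 0, 1, 0])
ok_images = bool(np.allclose(M @ v1, [1, 2, 1]) and np.allclose(M @ v2, [2, 1, 0]))
rank = np.linalg.matrix_rank(M)
print(ok_images, rank)      # True 2
```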
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Intersection and distance between two affine subspaces Given are two $m$-dimensional affine subspaces embedded in $\mathbb{R}^n$ $$a_{1i}x_{1i} + a_{2i}x_{2i} + \cdots + a_{ni}x_{ni} = a_{0i}$$ $$b_{1i}x_{1i} + b_{2i}x_{2i} + \cdots + b_{ni}x_{ni} = b_{0i}$$ where $i = 1 ... m$ define the $m$ hyperplanes defining each of the subspaces. The goal is to compute their intersection. If they do not intersect, also compute the (orthogonal) shortest distance between the two subspaces. Not sure if I formulated the problem correctly, so I'll provide examples: * *The intersection of two lines in 3D (respectively defined by the intersection of two planes, i.e., $m=2$) can either be empty or a point. In case they do not intersect, I would like to compute the shortest distance between the lines. *The intersection of two planes in 4D (respectively defined by the intersection of 3 hyperplanes, i.e., $m=3$) can either be empty, a point, or a line. In case they do not intersect, I would like to compute the shortest distance between the two planes. My approach would be to put all the hyperplanes in a matrix and compute the nullspace of the matrix to compute the span of the solution. However, I would like to get this confirmed by somebody else and I am not sure how to compute the distance. The final goal is to implement this in a program, so any (pseudo) code would be highly appreciated.
Let's start with 3D and 2 lines. If they are parallel, normalize their coefficient vectors (the common normal to both) and divide the constant term on the right by the same factor; then the difference of the normalized constant terms (taken in absolute value) is the distance between the lines. Can you then proceed and extend?
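Since the question explicitly asks for (pseudo) code, here is one generic sketch. It assumes each subspace is already in parametric form — a point plus spanning directions, which you can obtain from your stacked hyperplane system via its nullspace, as you planned — and finds the closest points by least squares. The helper name is made up:

```python
import numpy as np

def subspace_distance(p1, U1, p2, U2):
    # distance between the affine sets p1 + span(U1) and p2 + span(U2);
    # the columns of U1, U2 span the direction spaces.  A zero distance
    # (up to round-off) means the subspaces intersect at the found points.
    A = np.hstack([U1, -U2])
    x, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    s, t = x[:U1.shape[1]], x[U1.shape[1]:]
    q1, q2 = p1 + U1 @ s, p2 + U2 @ t
    return np.linalg.norm(q1 - q2)

# two skew lines in 3D
p1, U1 = np.array([0.0, 0, 0]), np.array([[1.0], [0], [0]])
p2, U2 = np.array([0.0, 1, 1]), np.array([[0.0], [1], [0]])
d = subspace_distance(p1, U1, p2, U2)
print(d)   # 1.0 up to round-off
```

Least squares minimizes $\|p_1+U_1s-p_2-U_2t\|$ over all parameters at once, which is exactly the orthogonal distance between the two affine sets.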
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is the series $\sum_{n=1}^\infty \frac{(i)^n}{n}$ convergent or divergent Does the series $$\sum_{n=1}^\infty\frac{(i)^n}{n}$$ converge or diverge? The only theorems I've covered thus far are: Theorem: A series $\sum_{n=1}^\infty (x_n+i y_n)$ converges if and only if $\sum_{n=1}^\infty x_n$ and $\sum_{n=1}^\infty y_n$ converge. Theorem: If $\sum_{n=1}^\infty z_n$ and $\sum_{n=1}^\infty w_n$ converge, then $\lim_{n\to\infty}z_n=0$. Theorem: If $\sum_{n=1}^\infty z_n$ and $\sum_{n=1}^\infty w_n$ converge, then $$\sum_{n=1}^\infty cz_n=c\sum_{n=1}^\infty z_n$$ and $$\sum_{n=1}^\infty(z_n+w_n)=\sum_{n=1}^\infty z_n+\sum_{n=1}^\infty w_n$$ Theorem: The comparison test. Thanks.
We can write $$\begin{align} \sum_{n=1}^{2N} \frac{i^n}{n}&=\sum_{n=1}^{2N} \frac{\left(e^{i\pi/2}\right)^n}{n}\\\\ &=\underbrace{\sum_{n=1}^{2N} \frac{\cos(n\pi/2)}{n}}_{\text{All of the odd terms are zero}}+i \underbrace{\sum_{n=1}^{2N} \frac{\sin(n\pi/2)}{n}}_{\text{All of the even terms are zero}}\\\\ &=\sum_{n=1}^{N} \frac{(-1)^n}{2n}+i \sum_{n=1}^{N} \frac{(-1)^{n-1}}{2n-1}\\\\ \end{align}$$ Now apply Leibniz's Test.
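Numerically the partial sums do settle down — toward $-\tfrac12\ln 2 + i\tfrac{\pi}{4}$, consistent with the two Leibniz series above, although convergence itself is all the question asks for. A quick sketch:

```python
import math

N = 200_000
z, s = 1.0 + 0j, 0.0 + 0j
for n in range(1, N + 1):
    z *= 1j            # z = i**n, exact: multiplying by i only swaps/negates parts
    s += z / n
print(s)               # near -0.34657 + 0.78540j
```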
{ "language": "en", "url": "https://math.stackexchange.com/questions/3256964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Suppose $f(x)\geq 0$, and $\int_0^{+\infty} f^2(x)dx$ is convergent. Prove that $\lim\limits_{x \to \infty}\frac{\int_0^x e^t f(t) dt}{e^x}=0.$ Suppose $f(x)\geq 0$, and $\int_0^{+\infty} f^2(x)dx$ is convergent. Prove that $\lim\limits_{x \to \infty}\dfrac{\int_0^x e^t f(t) dt}{e^x}=0.$ Notice that we are not given the continuity of $f(x)$. Hence L' Hospital's rule can not work here. If we consider apply AM-GM inequality, we obtain $$e^{x}f(x)\leq \frac{f^2(x)+e^{2x}}{2},$$ where $\int_0^{+\infty} f^2(x)dx<+\infty$ but $\int_0^{\infty} e^{2t}=+\infty$, which gives nothing helpful. How to solve it? Thanks.
A proof following Kavi Rama Murthy. Since $\displaystyle\int_0^{+\infty}f^2(x)dx$ is convergent, by Cauchy's convergence test, we have $$\forall \varepsilon>0,\exists \xi>0,\forall x>\xi ~~~s.t.~~~ \int_{\xi}^{x} f^2(t)dt< 2\varepsilon^2.$$ Thus, by the Cauchy–Schwarz inequality, we obtain $$\int_{\xi}^x e^t f(t)dt \leq \left(\int_{\xi}^x f^2(t)dt \cdot \int_{\xi}^x e^{2t}dt\right)^{\frac{1}{2}}< \left(2\varepsilon^2 \int_{\xi}^x e^{2t}dt\right)^{\frac{1}{2}}=\varepsilon \left(e^{2x}-e^{2\xi}\right)^{\frac{1}{2}},$$ which implies $$\frac{\int_{\xi}^x e^t f(t)dt}{e^x}\leq \varepsilon \left(1-e^{2(\xi-x)}\right)^{\frac{1}{2}}\leq \varepsilon$$ holds for all $x>\xi$. Therefore, taking the upper limit of both sides as $x \to +\infty$, we have $$\limsup_{x \to +\infty}\frac{\int_{\xi}^x e^t f(t)dt}{e^x}\leq \varepsilon.\tag{1}$$ Meanwhile, notice that, for the fixed $\xi$, $$\lim_{x \to +\infty}\frac{\int_0^\xi e^t f(t)dt}{e^x}=0.\tag{2}$$ Adding $(1)$ and $(2)$, we obtain $$\limsup_{x \to +\infty}\frac{\int_0^x e^t f(t)dt}{e^x}\leq\varepsilon.$$ Since the quotient is nonnegative (as $f\geq 0$), the arbitrariness of $\varepsilon>0$ implies $$\lim_{x \to +\infty}\frac{\int_0^x e^t f(t)dt}{e^x}=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3257244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the theme of analysis? It is safe to say that every mathematician, at some point in their career, has had some form of exposure to analysis. Quite often, it appears first in the form of an undergraduate course in real analysis. It is there that one is often exposed to a rigorous viewpoint to the techniques of calculus that one is already familiar with. At this stage, one might argue that real analysis is the study of real numbers, but is it? A big chunk of it involves algebraic properties, and as such lies in the realm of algebra. It is the order properties, though, that do have a sort of analysis point of view. Sure, some of these aspects generalise to the level of topologies, but not all. Completeness, for one, is clearly something that is central to analysis. Similar arguments can be made for complex analysis and functional analysis. Now, the question is: As for all the topics that are bunched together as analysis, is there any central theme to them? What topics would you say that belongs to this theme? And what are the underlying themes in these individual subtopics? Add. It may be a subjective question, but having a rough idea of what the central themes of a certain field are helps one to construct appropriate questions. As such, I think it is important. I am not expecting a single answer, but more of a diverse set of opinions on the matter.
Mathematical analysis is a mental edifice built up to describe and understand phenomena of geometry, physics, and technics in terms of formulas involving finite mathematical expressions. The core of this all is the study of functions $f:\>{\mathbb R}\to{\mathbb R}$ and their properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3257398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 7, "answer_id": 3 }
Compute $\lim_{n \to \infty}n\int_{0}^{1}\frac{\cos x}{1+e^{nx}}\,dx$, $n\in\mathbb{N}$ Let $$ I_n = n\int_{0}^{1}\frac{\cos x}{1+e^{nx}}\,dx\,,\quad n\in \mathbb N^*. $$ Calculate $\lim_{n \to \infty}I_n$.
Integrating by parts, $$ I_n=n\int_0^1\frac{\cos x}{1+e^{nx}}\,dx=-\int_0^1 \cos x\,\frac{d}{dx}\log(1+e^{-nx})\,dx \\=\log2-\cos(1)\log(1+e^{-n})-\int_0^1\sin x\,\log(1+e^{-nx})\,dx\,. $$ Now, $ \cos(1)\log(1+e^{-n}) $ tends to zero as $n\to\infty$ by continuity, while $$ \int_0^1\left|\sin x\,\log(1+e^{-nx})\right|dx\le\log2\int_0^1 |\sin x |\,dx=\log2[1-\cos(1)]\,, $$ so the integral $$ \int_0^1 \sin x\,\log(1+e^{-nx})\,dx $$ also tends to zero as $n\to\infty$ by the dominated convergence theorem and by continuity. Thus, $$ \lim_{n\to\infty}I_n=\log2\,. $$
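As a numerical sanity check (not part of the proof): substituting $u=nx$ gives $I_n=\int_0^n \cos(u/n)/(1+e^u)\,du$, which is easy to integrate by Simpson's rule. The truncation point $U$ and step count below are arbitrary choices; the neglected tail is of size about $e^{-U}$:

```python
import math

def I(n, U=40.0, steps=4000):
    # substitute u = n x:  I_n = ∫_0^n cos(u/n)/(1+e^u) du
    b = min(U, float(n))
    h = b / steps
    g = lambda u: math.cos(u / n) / (1.0 + math.exp(u))
    s = g(0.0) + g(b)
    for k in range(1, steps):      # Simpson weights 4, 2, 4, ...
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3.0

i10, i1000 = I(10), I(1000)
print(i10, i1000, math.log(2))     # I_n approaches log 2
```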
{ "language": "en", "url": "https://math.stackexchange.com/questions/3257544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solve $\frac{f'(x)f'''(x)}{(f''(x))^2}=C$ Let $f$ be three times differentiable almost everywhere and continuous. $C$ is a constant. Solve the following differential equation: $\frac{f'(x)f'''(x)}{(f''(x))^2}=C$ almost everywhere. If we remove the "almost everywhere", the solution is pretty standard: $f$ can be a log function, an exponential function, or a polynomial. To be more specific, when $C=1$, it seems that $f(x)=ae^{bx}+c$ where $a,b,c$ are constants. When $C=2$, $f(x)=a\ln(x+b)+c$. When $C\neq 1$ or $2$, $f(x)=a(x+b)^{c}+d$ for a suitable exponent $c$. Can we get more solutions if we add back the "almost everywhere"?
Updated answer after the OP changed the wording of his question. $$\frac{f'(x)f'''(x)}{(f''(x))^2}=C$$ $y(x)=f'(x)$ $$\frac{y''}{y'}=C\frac{y'}{y}$$ $$\ln|y'|=C\ln|y|+\text{constant}$$ $$y'=c_1y^C$$ $$\frac{y'}{y^C}=c_1$$ $$\frac{y^{1-C}}{1-C}=c_1x+c_2\quad\text{in case of}\quad C\neq 1$$ $$y=\left((1-C)(c_1x+c_2)\right)^{1/(1-C)}=f'$$ $$f(x)=\frac{\left((1-C)(c_1x+c_2)\right)^{(2-C)/(1-C)}}{c_1(2-C)}+c_3\qquad \begin{cases} C\neq 0 \\ C\neq 1 \\ C\neq 2 \end{cases}$$ or equivalently, absorbing the constants, $$f(x)=(ax+b)^{(2-C)/(1-C)}+c \qquad \begin{cases} C\neq 0 \\ C\neq 1 \\ C\neq 2 \end{cases} \tag 1$$ $$f(x)=ax^2+bx+c \qquad C=0 \tag 2$$ You correctly found the solutions in the cases $C=1$ and $C=2$. $$f(x)=ae^{bx}+c \qquad C=1 \tag 3$$ $$f(x)=a\ln(x+b)+c\qquad C=2 \tag 4$$ with $a\neq 0$ in all cases so that $f''(x)\neq 0$. All of the above holds if $f(x),f'(x),f''(x)$ are differentiable everywhere. If that is not the case, the ODE fails at some particular points but is valid almost everywhere. Then there are infinitely many continuous piecewise solutions: on each segment between those particular points $f'(x),f''(x),f'''(x)$ exist and $f(x)$ complies with the above equation $(1)$, $(2)$, $(3)$ or $(4)$. Two of the parameters can be determined so that there is no discontinuity between adjacent segments. If no additional condition is specified at each particular point, the third parameter remains arbitrary and $f(x)$ is likely to be multivalued, that is, to have different branches.
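A small exact check of the power-function family: for $f(x)=(ax+b)^p$ one has $f'=ap(ax+b)^{p-1}$, $f''=a^2p(p-1)(ax+b)^{p-2}$, $f'''=a^3p(p-1)(p-2)(ax+b)^{p-3}$, so $f'f'''/(f'')^2=(p-2)/(p-1)$ identically, and the exponent $p=(2-C)/(1-C)$ of equation $(1)$ recovers $C$. This sketch verifies that algebra with exact rational arithmetic (the sample values of $C$ are arbitrary):

```python
from fractions import Fraction as F

def ratio_for_power(p):
    # for f(x) = (a x + b)^p:  f' f''' / (f'')^2 = (p-2)/(p-1), for all a, b, x
    return (p - 2) / (p - 1)

checks = []
for C in [F(0), F(3), F(-1), F(5), F(1, 2), F(7, 3)]:
    p = (2 - C) / (1 - C)          # the exponent in equation (1)
    checks.append(ratio_for_power(p) == C)
print(checks)
```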
{ "language": "en", "url": "https://math.stackexchange.com/questions/3257782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Uniqueness in the Perron–Frobenius Theorem I'm working through proving the basics of Perron's theorem, but I'm stuck on uniqueness for the positive eigenvector. Given a positive square matrix $A$, I see how to use Brouwer's fixed point theorem to show the existence of a positive eigenvector $\mathbf{v}$. Call its eigenvalue $\lambda$. Clearly any constant multiple $c \mathbf{v}$ is also an eigenvector with eigenvalue $\lambda$. And following these notes I think I see why any other positive eigenvector $\mathbf{w}$ must have the same eigenvalue $\lambda$. But how do we know $\mathbf{w} = c \mathbf{v}$? Why can't $\mathbf{w}$ be independent of $\mathbf{v}$ despite having the same eigenvalue? This seems to be Corollary 1 in the linked notes, but the proof there has me stumped. (The definition of what gives us $\mathbf{u} = (1, 1, \ldots, 1)$? How could we possibly know this about $\mathbf{u}$? And how does that tell us $\mathbf{w} = c \mathbf{v}$?)
I found a nice solution here. Let $\mathbf{v}$ and $\mathbf{w}$ be positive eigenvectors of a positive matrix $A$, both associated with eigenvalue $\lambda$. And suppose for reductio that they are linearly independent. Because they're independent, we can find a constant $c$ so that $\mathbf{v}-c\mathbf{w}$ is nonnegative and nonzero, but has at least one zero entry. We just start with $c=0$, and increase until $cw_i = v_i$ for some $i$. Now observe that $$ \begin{aligned} \mathbf{v}-c\mathbf{w} &= \frac{A}{\lambda} \mathbf{v} - \frac{A}{\lambda} c\mathbf{w}\\ &= \frac{A}{\lambda} \left(\mathbf{v}- c\mathbf{w}\right). \end{aligned} $$ But $A$ is positive, so $A/\lambda$ is positive, which means $(A/\lambda) (\mathbf{v}- c\mathbf{w})$ must be positive too (recall $\mathbf{v}- c\mathbf{w}$ was nonnegative and nonzero). This contradicts our assumption that $\mathbf{v}- c\mathbf{w}$ had at least one zero. So $\mathbf{v}$ and $\mathbf{w}$ could not have been linearly independent. (Now I'm just stuck on showing that $\lambda$ must be the leading eigenvalue, i.e. $\lambda > |\lambda'|$ for every other eigenvalue $\lambda'$ of $A$. There's a short proof in Section 10.3 of Strang's Introduction to Linear Algebra, but I can't follow it. If anybody can explain it, or has another nice solution, feel free to combine it with the above solution to this problem and I'll accept their answer as the solution.)
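A quick numerical illustration (a made-up random positive matrix, not a proof): for a strictly positive matrix the leading eigenvalue comes out simple and strictly dominant in modulus, with a positive eigenvector — which is exactly the uniqueness statement above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(5, 5))     # a strictly positive matrix

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
lam, v = vals[k].real, vecs[:, k].real
v = v / v.sum()                            # fix the sign: now a positive eigenvector

# strict dominance: only one eigenvalue attains the spectral radius
leading_is_simple = int(np.sum(np.abs(vals) > lam - 1e-9)) == 1
print(lam, v, leading_is_simple)
```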
{ "language": "en", "url": "https://math.stackexchange.com/questions/3257863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Matrix function from canonical base to base $B$ Let's say I have a function $F=\begin{bmatrix}\frac{1}{2} & 1 & \frac{1}{2}\\ 0 & 1 & 0\\ -\frac{1}{2} & 1 & \frac{3}{2}\end{bmatrix}$ defined with respect to the canonical basis. I found out that the respective function defined with three variables is: $$f(x,y,z)=(\frac{1}{2}x+y+\frac{1}{2}z,y, -\frac{1}{2}x+y+\frac{3}{2}z)$$ Now, I would like to define the function $F$ with respect to the basis $B=\{(1,0,1), (1,1,1), (1, 1, -1) \}$ So I did: $$f(1, 0, 1)_{|B}=(1, 0, 1)_{|B}, f(1, 1, 1)_{|B}=(2, 1, 2)_{|B}, f(1, 1, -1)_{|B}=(1, 1, -1)_{|B}$$ My textbook states that the matrix $F_{|B} = \begin{bmatrix}1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}$, but shouldn't it be $F_{|B} = \begin{bmatrix}1 & 2 & 1 \\ 0 & 1 & 1 \\ 1 & 2 & -1\end{bmatrix}$?
Think of $f$ as a function that eats a vector $\mathbf v$ and spits out another vector $f(\mathbf v)$. If we represent $\mathbf v$ by its coordinates relative to the standard basis, which I’ll denote by $[\mathbf v]_{\mathcal E}$, then you have the formula $[f(\mathbf v)]_{\mathcal E} = F[\mathbf v]_{\mathcal E}$: the matrix $F$ expects coordinates relative to the standard basis and produces coordinates relative to the standard basis. This problem wants you to find some other matrix $F'$ such that $[f(\mathbf v)]_{\mathcal B}=F'[\mathbf v]_{\mathcal B}$ for some other ordered basis $\mathcal B$. Well, you can convert $[\mathbf v]_{\mathcal B}$ into $[\mathbf v]_{\mathcal E}$ by multiplying it by the appropriate change-of-basis matrix $B$: $[\mathbf v]_{\mathcal E} = B[\mathbf v]_{\mathcal B}$. Now you’ve got the input in the form that $F$ expects, but it still produces coordinates relative to the standard basis, so you also have to convert its output, that is, $$FB[\mathbf v]_{\mathcal B} = [f(\mathbf v)]_{\mathcal E}$$ and so $$[f(\mathbf v)]_{\mathcal B} = B^{-1}[f(\mathbf v)]_{\mathcal E} = B^{-1}FB[\mathbf v]_{\mathcal B}$$ from which $F' = B^{-1}FB$. It looks like you computed $FB$ instead—you converted the input, but forgot to convert the output.
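The computation is easy to confirm numerically; this sketch just evaluates $B^{-1}FB$ for the matrices in the question and recovers the textbook answer:

```python
import numpy as np

F = np.array([[ 0.5, 1, 0.5],
              [ 0.0, 1, 0.0],
              [-0.5, 1, 1.5]])
B = np.array([[1.0, 1,  1],
              [0.0, 1,  1],
              [1.0, 1, -1]])          # columns are the vectors of the basis B

F_B = np.linalg.inv(B) @ F @ B
print(np.round(F_B, 10))              # the textbook matrix [[1,1,0],[0,1,0],[0,0,1]]
```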
{ "language": "en", "url": "https://math.stackexchange.com/questions/3257938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Could someone verify my proof that if $f:(a,b) \rightarrow \mathbb{R}$ is uniformly continuous, then $f$ is bounded? For my proof, I had to inverse the definitions for uniform continuity and sequence convergence, so it feels a bit sketchy to me. I'd also appreciate suggestions if you have any. Let $f:(a,b) \rightarrow \mathbb{R}$ be uniformly continuous. For the sake of contradiction, assume $f$ is unbounded. Let the sequences $a_n \rightarrow a$, $b_n \rightarrow b$, and $x_n \rightarrow x_0 \in (a,b)$. Since $f$ is continuous, then $f(x_n) \rightarrow f(x_0) \in \mathbb{R}$. Since $f$ is unbounded, then either of the sequence $f(a_n),f(b_n)$ must diverge, otherwise $f$ would be bounded. Without loss of generality, assume the sequence $b_n$ diverges. Since $f$ is uniformly continuous, then for $\epsilon > 0$, there exists $\delta > 0$ such that $|x-y|<\delta$ implies $|f(x)-f(y)|<\epsilon$ for $x,y \in (a,b)$. Let $\delta > 0$. Since $b_n \rightarrow b$, then there exists some $N \in \mathbb{N}$ where $|b-b_n|<\delta$ for $n \geq N$. Restrict $x \in (a,b)$ such that $b_n<x<b$. Then $|x-b_n|<\delta$. Since $f(b_n)$ diverges, then there exists some $\epsilon > 0$ and $n \in \mathbb{N}$ with $n \geq N$ such that $|f(x)-f(b_n)| \geq \epsilon$. Therefore we can write for $|x - b_n|<\delta$ and $|f(x)-f(b_n)| \geq \epsilon$. Therefore $f$ is not uniformly continuous. This is a contradiction since $f$ is uniformly continuous. Therefore $f$ is bounded.
Suppose $f$ were unbounded on $(a,b).$ Let $x_1$ be any member of $(a,b).$ For $n\in \Bbb N$ let $x_{n+1}\in (a,b)$ with $|f(x_{n+1})|\ge 1+|f(x_n)|.$ Observe that this implies that $|f(x_m)-f(x_n)|\ge 1$ when $m\ne n.$ Let $(j(n))_{n\in \Bbb N}$ be some sub-sequence of $\Bbb N$ such that $(x_{j(n)})_{n\in \Bbb N}$ converges to some $x\in [a,b].$ Note that $m\ne n$ implies that $j(m)\ne j(n),$ which implies $|f(x_{j(m)})-f(x_{j(n)})|\ge 1.$ Then there does NOT exist $\delta>0$ such that $\forall u,v\in (a,b)\;(|u-v|<\delta \implies |f(u)-f(v)|<1).$ Because for any $\delta>0 $ the interval $I_{\delta}=(a,b)\cap (x-\delta/2,x+\delta/2)$ contains $x_{j(n)}$ for all but finitely many $n,$ so there exist $m,n$ with $m\ne n$ and $\{x_{j(m)},x_{j(n)}\}\subset I_{\delta}.$ And for such $m,n$ we have $$|x_{j(m)}-x_{j(n)}|<\delta \;\text { but }\; |f(x_{j(m)})-f(x_{j(n)})|\ge 1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why isn't this function affine? I'm studying convex optimization using Convex Optimization (Boyd & Vandenberghe) and had a question from an example used in Chapter 4.2: Convex Optimization. The specific example is as follows: $$\begin{array}{ll} \text{minimize} & f_0(x) = x_1^2 + x_2^2\\ \text{subject to} & f_1(x) = \frac{x_1}{1+x_2^2} \le 0\\ & h_1(x) = (x_1 + x_2)^2 = 0\end{array}$$ The textbook states that this problem is not a convex optimization problem because the equality constraint $h_1(x)$ is not affine. I'm aware that affine functions are linear functions composed with a translation, but when I plotted out the graph for $h_1(x)$ I get a line or plane, which are both affine sets. This led to my confusion as to why the function isn't affine if it's graph is an affine set. Why isn't it an affine function? Is my understanding of what an affine function is itself wrong?
The function $h_{1}(x)$ is not an affine function because it can't be written in the form $h_{1}(x)=a^{T}x+b$. Or, if you let $x=(1, 1)$ and $y=(2,2)$, you'll see that $(h_{1}(x)+h_{1}(y))/2=10$, while $h_{1}((x+y)/2)=9$. It's true that these points don't satisfy $h_{1}(x)=0$ and $h_{1}(y)=0$, but that doesn't make any difference in whether $h_{1}()$ is itself an affine function. The definition of "convex optimization problem" being used here requires that the functions $h_{i}(x)$ in the constraints $h_{i}(x)=0$ be affine. That's not the same thing as requiring that the set of $x$ such that $h_{1}(x)=0$ is an affine set. Under this definition of a convex optimization problem, it's possible (as this example illustrates) that the feasible region of an optimization problem is convex even though the problem is not a convex optimization problem under the definition. Why use this definition of "convex optimization problem"? One reason is that under this definition we can assume that the $g_{i}(x)$ functions are convex and the $h_{i}(x)$ functions are affine. You'll see these properties used from time to time in proofs later in the book.
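The midpoint test from the answer takes two lines to reproduce (an affine $h_1$ would make the two numbers below equal):

```python
h1 = lambda x1, x2: (x1 + x2) ** 2

chord_mid = (h1(1, 1) + h1(2, 2)) / 2       # (4 + 16) / 2 = 10.0
value_at_mid = h1(1.5, 1.5)                 # 3.0**2 = 9.0
affine_would_require = chord_mid == value_at_mid
print(chord_mid, value_at_mid, affine_would_require)
```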
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Numbers which cannot be formed We are given two numbers $a,b$ such that $a<b$. Now we have a set $\{a,a+1,a+2,\ldots, b\}$ (all numbers between $a$ and $b$, including them). Then, we have to find how many numbers cannot be formed from the above set. The only operation allowed on the set elements is addition. Note: We can add these numbers as many times as we want; only addition is allowed on these numbers. E.g.: If the numbers are 3 and 5, the set is $\{3,4,5\}$; using these, the only numbers we cannot make are $1,2$. Can anyone help me with that? I am not getting any idea. I tried the cases when $b-a = 1$.
Let us consider the numbers that can be formed by $\alpha-1$ additions: \begin{align} \alpha =1 \text{ gives }& \mathcal{A}_1 =\{a,a+1,\ldots, a+(b-a)\}\\ \alpha =2 \text{ gives } & \mathcal{A}_2 =\{2a,2a+1,\ldots, 2a+2(b-a)\}\\ \alpha =3 \text{ gives } & \mathcal{A}_3 =\{3a,3a+1,\ldots, 3a+3(b-a)\}\\ &\vdots\\ \text{In general, } \alpha \text{ gives } & \mathcal{A}_\alpha =\{\alpha a,\alpha a+1,\ldots, \alpha a +\alpha(b-a)\} \end{align} Note that $\mathcal{A}_\alpha$ is a set of consecutive numbers, and its size $|\mathcal{A}_\alpha|=\alpha(b-a)+1$ increases with $\alpha$. Suppose for some $\alpha$, $\mathcal{A}_{\alpha+1}$ is the continuation of $\mathcal{A}_{\alpha}$, i.e., there is no gap between the two consecutive groups of numbers that can be formed; then all numbers $\geq \alpha a$ can be formed. Therefore, if the last element of $\mathcal{A}_{\alpha}$ is one less than the first element of $\mathcal{A}_{\alpha+1}$, i.e., $\alpha a +\alpha(b-a)+1= (\alpha+1) a\implies \alpha=\frac{a-1}{b-a}$, then all numbers $\geq \alpha a$ can be formed using the set. Thus, the gaps between the consecutive groups of numbers that can be formed are exactly the numbers that can not be formed: $$\left\{\alpha b+1,\alpha b+2,\ldots,(\alpha+1)a-1, \quad \forall \; 0\leq \alpha < \frac{a-1}{b-a}\right\}$$ can not be formed using the given set. For example, \begin{align} a=3,b=5\implies&\frac{a-1}{b-a}=1 \implies \alpha = 0\\ &\left\{\alpha b+1,\alpha b+2,\ldots,(\alpha+1)a-1, \forall \; 0\leq \alpha < \frac{a-1}{b-a}\right\}=\left\{1,2 \right\}\\ a=3,b=4\implies&\frac{a-1}{b-a}=2 \implies \alpha = 0,1\\ &\left\{\alpha b+1,\alpha b+2,\ldots,(\alpha+1)a-1, \forall \; 0\leq \alpha < \frac{a-1}{b-a}\right\}=\left\{1,2,5 \right\}. \end{align}
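The formula can be cross-checked by brute force: a small dynamic program marks every sum reachable from $\{a,\ldots,b\}$ (the search limit is an arbitrary cap that only needs to exceed the largest gap):

```python
def unreachable(a, b, limit=200):
    reachable = {0}                       # 0 = empty sum, excluded from the output
    for s in range(1, limit + 1):
        # s is reachable iff s - d is, for some summand d in {a, ..., b}
        if any(s - d in reachable for d in range(a, b + 1)):
            reachable.add(s)
    return [s for s in range(1, limit + 1) if s not in reachable]

print(unreachable(3, 5))   # [1, 2]
print(unreachable(3, 4))   # [1, 2, 5]
print(unreachable(5, 7))   # [1, 2, 3, 4, 8, 9], matching the formula's gaps
```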
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Existence of a homeomorphism between [0,1] and X × Y I'm doing practice exam questions and am stuck on this one: Are there topological spaces $X,Y$ (each with more than one point), such that $[0,1]$ is homeomorphic to $X\times Y$? What if we replace $[0,1]$ with $\mathbb{R}$? I'm not even sure how to start tackling it; any help and clues will be appreciated! My head is leading me toward "cut points", but I'm not sure about it. Thanks in advance!
You had the right idea. $X$ and $Y$ are the image under the projections of $X\times Y$, so they must be path connected. Now on $[0,1]$ there are many points that after removing them make this space disconnected. Can this happen with $X\times Y$, assuming both have more than one point? (Hint: make a picture and try to connect two arbitrary points with a path.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Uniform convergence of: $f_n(x) = \frac{1}{(1+x^2)^n},\text{ with } x\in \mathbb{R}$ Does the sequence of functions defined by $f_n(x) = \frac{1}{(1+x^2)^n}$, with $x\in\mathbb{R}$, converge uniformly on $\mathbb{R}$?
For every $x\neq 0$ we have $f_n(x)\to 0$ (while $f_n(0)=1$ for all $n$), but for any $n$, a sufficiently small positive $x$ satisfies $f_n(x) > \frac{1}{2}$. So the sequence does not converge uniformly.
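A quick numerical illustration of the failure (the choice $x_n=\sqrt{2^{1/n}-1}$, which solves $f_n(x_n)=\tfrac12$ exactly, is mine): since $x_n\to 0$ while $f_n(x_n)=\tfrac12$, the sup-distance to the pointwise limit never drops below $\tfrac12$.

```python
import math

def f(n, x):
    # f_n(x) = 1 / (1 + x^2)^n
    return 1.0 / (1.0 + x * x) ** n

# x_n solves (1 + x^2)^n = 2, i.e. f_n(x_n) = 1/2 exactly
for n in [1, 10, 100, 1000]:
    x_n = math.sqrt(2.0 ** (1.0 / n) - 1.0)
    print(n, x_n, f(n, x_n))   # x_n shrinks toward 0, f_n(x_n) stays at 0.5
```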
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Write the linear application Given the basis $v_1=(1,0)$, $v_2=(1,1)$ of $\mathbb{R}^2$, write the unique linear application $f: \mathbb{R}^2 \to \mathbb{R}^2$ with $f(v_1)=(0,1)$, $f(v_2)=(1,0)$. I tried to solve it like $(x,y)= x(1,0) + y(1,1)$ and I found $f(x,y) = (y,x)$, which is wrong. Where am I wrong?
The linear function ("application") will be of the form $$f(v)=Mv$$ for some matrix $M$. You want this to satisfy $f(v_1)=(0,1)^T$ and $f(v_2)=(1,0)^T$. Putting $v_1,v_2$ as the columns of a matrix, this says $$M\left(\begin{smallmatrix} 1&1\\0&1\end{smallmatrix}\right)=\left(\begin{smallmatrix} 0&1\\1&0\end{smallmatrix}\right)$$ To find a matrix $M$ that satisfies this property, we calculate $$\left(\begin{smallmatrix} 1&1\\0&1\end{smallmatrix}\right)^{-1}=\left(\begin{smallmatrix} 1&-1\\0&1\end{smallmatrix}\right)$$ Multiplying by this on the right, we get $$M=M\left(\begin{smallmatrix} 1&1\\0&1\end{smallmatrix}\right)\left(\begin{smallmatrix} 1&1\\0&1\end{smallmatrix}\right)^{-1}=\left(\begin{smallmatrix} 0&1\\1&0\end{smallmatrix}\right)\left(\begin{smallmatrix} 1&-1\\0&1\end{smallmatrix}\right)=\left(\begin{smallmatrix} 0&1\\1&-1\end{smallmatrix}\right)$$ Putting it all together, we get $f(x,y)=(y,\,x-y)$ as the desired function; indeed $f(1,0)=(0,1)$ and $f(1,1)=(1,0)$. (Your attempt went wrong at the decomposition: $(x,y)\ne x(1,0)+y(1,1)$; rather $(x,y)=(x-y)\,v_1+y\,v_2$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to approach the proof of this formula for triangulations. I am currently working on an assignment for my discrete mathematics lecture. It is specifically about graphs which are triangulations and have a minimum degree of 3. Triangulation specifically means here that every area in the graph is bordered by exactly 3 edges. The formula that is to be proven is the following: Let $v_i$ be the number of vertices of degree $i$. Show that the following is true: $3 v_3 + 2 v_4 + v_5 = v_7 + 2 v_8 + 3 v_9 + ... + (X - 6) v_X + 12$ where $X$ is the maximum degree of the graph in question. Now, I am specifically supposed to use the Euler formula, that is $|V| - |E| + g = 2$. I can also calculate the number of edges and areas relative to each other, as I know that every area is bordered by 3 edges and every edge is the border between 2 areas. That means that I know that $3g = 2|E|$. But I don't know how to go on from here. I specifically struggle with the fact that the formula doesn't just count vertices, but also differentiates them based on their degree. I don't see how to get there from the Euler formula. I would appreciate any tips that could lead me in the right direction! Thanks for reading! Edit: I have just now realized that the graph is maximally planar, being a triangulation. Now I can use the fact that $|E| = 3|V| - 6$, but I am not sure where that leads me. I feel like the $-6$ in there might be useful to get to the formula from above, but I can't seem to make the connection, if it even is actually there.
If you take all the terms to one side, you get $$-3v_3-2v_4-v_5+v_7+\cdots+(X-6)v_X+12.$$ This can be expressed as $$\sum_{i=3}^{X}(i-6)v_i+12,$$ i.e. every vertex $v$ counts for $d(v)-6$ in the sum. Now use the fact that the sum of the degrees is twice the number of edges, together with $|E|=3|V|-6$.
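As a concrete sanity check (the edge list below is my own encoding of the octahedron, a triangulation in which every vertex has degree $4$, so $v_4=6$ and both sides of the identity equal $12$):

```python
from collections import Counter

# Octahedron: poles 0 and 5, equatorial cycle 1-2-3-4; every face is a triangle
edges = [(0, 1), (0, 2), (0, 3), (0, 4),
         (5, 1), (5, 2), (5, 3), (5, 4),
         (1, 2), (2, 3), (3, 4), (4, 1)]

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# Sum of the degrees is twice the number of edges
assert sum(deg.values()) == 2 * len(edges)

# Each vertex contributes d(v) - 6; for a triangulation the total is -12
print(sum(d - 6 for d in deg.values()))   # -12
```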
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the use or importance of continuity and differentiability? In a lot of mathematical proofs I often see things like "Assume $f$ is continuous" or "Assume $f$ is differentiable" and sometimes I've even seen both "Assume $f$ is continuous and differentiable", though I believe (differentiability implies/requires continuity). What is the use or utility of this? Under what kind of circumstances, going into a problem or framework, would we want something to be continuous or differentiable? Continuous I can understand as a kind of "useful for things that don't have sudden jumps out of nowhere" but when would we want something to be differentiable as well? For instance when reading about lots of probability curves I often see that these curves are defined up front as both continuous and differentiable. Why? What pushes us to start off with these definitions? What's the motivation? What do we "lose" if we do away with these assumptions?
A function being differentiable is important in a lot of analysis as a lot of theorems (such as Rolle's theorem, for instance) simply don't hold when a function is not differentiable. It's basically so that we can assume the function is "well-behaved" in a way so that we can apply standard theorems to the function; otherwise, some of these theorems aren't applicable anymore. Also, you can still have continuous curves which have some strange properties that we may want to avoid; for example, we may not want functions to have sharp (but still continuous) corners.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3258917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
challenging sum $\sum_{k=1}^\infty\frac{H_k^{(2)}}{(2k+1)^2}$ How to prove that \begin{align} \sum_{k=1}^\infty\frac{H_k^{(2)}}{(2k+1)^2}=\frac13\ln^42-2\ln^22\zeta(2)+7\ln2\zeta(3)-\frac{121}{16}\zeta(4)+8\operatorname{Li}_4\left(\frac12\right) \end{align} where $H_n^{(m)}=1+\frac1{2^m}+\frac1{3^m}+...+\frac1{n^m}$ is the $n$th harmonic number of order $m$. This problem was proposed by Cornel Valean. Here is an integral expression of the sum: $\displaystyle -\int_0^1\frac{\ln x\operatorname{Li}_2(x^2)}{1-x^2}\ dx$.
Different approach: \begin{align} S&=\sum_{n=1}^\infty\frac{H_n^{(2)}}{(2n+1)^2}\\ &=\sum_{n=1}^\infty H_n^{(2)}\int_0^1-x^{2n}\ln x\ dx\\ &=-\int_0^1\ln x\sum_{n=1}^\infty(x^2)^nH_n^{(2)}\ dx\\ &=-\int_0^1\frac{\ln x\operatorname{Li}_2(x^2)}{1-x^2}\ dx,\quad \operatorname{Li}_2(x^2)=2\operatorname{Li}_2(x)+2\operatorname{Li}_2(-x)\\ &=-2\int_0^1\frac{\ln x\operatorname{Li}_2(x)}{1-x^2}\ dx-2\int_0^1\frac{\ln x\operatorname{Li}_2(-x)}{1-x^2}\ dx\\ &=-\int_0^1\frac{\ln x\operatorname{Li}_2(x)}{1-x}\ dx-\int_0^1\frac{\ln x\operatorname{Li}_2(x)}{1+x}\ dx-\int_0^1\frac{\ln x\operatorname{Li}_2(-x)}{1-x}\ dx-\int_0^1\frac{\ln x\operatorname{Li}_2(-x)}{1+x}\ dx\\ &=-I_1-I_2-I_3-I_4 \end{align} \begin{align} I_1&=\int_0^1\frac{\ln x\operatorname{Li}_2(x)}{1-x}\ dx\\ &=\sum_{n=1}^\infty H_n^{(2)}\int_0^1 x^n\ln x\ dx\\ &=-\sum_{n=1}^\infty \frac{H_n^{(2)}}{(n+1)^2}\\ &=-\sum_{n=1}^\infty \frac{H_n^{(2)}}{n^2}+\zeta(4) \end{align} \begin{align} I_2&=\int_0^1\frac{\ln x\operatorname{Li}_2(x)}{1+x}\ dx\\ &=-\sum_{n=1}^\infty (-1)^n\int_0^1 x^{n-1}\ln x\operatorname{Li}_2(x)\ dx\\ &=-\sum_{n=1}^\infty (-1)^n\left(\frac{H_n^{(2)}}{n^2}+\frac{2H_n}{n^3}-\frac{2\zeta(2)}{n^2}\right) \end{align} \begin{align} I_3&=\int_0^1\frac{\ln x\operatorname{Li}_2(-x)}{1-x}\ dx\\ &=\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\int_0^1\frac{x^n \ln x}{1-x}\ dx\\ &=\sum_{n=1}^\infty\frac{(-1)^n}{n^2}\left(H_n^{(2)}-\zeta(2)\right) \end{align} \begin{align} I_4&=\int_0^1\frac{\ln x\operatorname{Li}_2(-x)}{1+x}\ dx\\ &=-\sum_{n=1}^\infty (-1)^nH_n^{(2)}\int_0^1x^n\ln x\ dx\\ &=-\sum_{n=1}^\infty \frac{(-1)^nH_n^{(2)}}{(n+1)^2}\\ &=\sum_{n=1}^\infty \frac{(-1)^nH_n^{(2)}}{n^2}+\frac78\zeta(4) \end{align} Combining the four integrals, we get $$S=\frac98\zeta(4)+2\sum_{n=1}^\infty(-1)^n\frac{H_n}{n^3}-\sum_{n=1}^\infty(-1)^n\frac{H_n^{(2)}}{n^2}$$ Plugging in the two sums, we get $$S=\frac13\ln^42-2\ln^22\zeta(2)+7\ln2\zeta(3)-\frac{121}{16}\zeta(4)+8\operatorname{Li}_4\left(\frac12\right) $$
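The closed form can be verified numerically with plain floats; $\zeta(3)$ and $\operatorname{Li}_4(1/2)$ are summed from their defining series, and the truncation levels below are my own choices:

```python
import math

N = 200_000
zeta2 = math.pi ** 2 / 6
zeta4 = math.pi ** 4 / 90
zeta3 = sum(1.0 / k ** 3 for k in range(1, N + 1))            # tail ~ 1/(2N^2)
li4_half = sum(1.0 / (k ** 4 * 2.0 ** k) for k in range(1, 60))
ln2 = math.log(2)

# Left side: partial sum of sum_{k>=1} H_k^(2) / (2k+1)^2
lhs, H2 = 0.0, 0.0
for k in range(1, N + 1):
    H2 += 1.0 / k ** 2
    lhs += H2 / (2 * k + 1) ** 2

rhs = (ln2 ** 4 / 3 - 2 * ln2 ** 2 * zeta2 + 7 * ln2 * zeta3
       - 121.0 / 16 * zeta4 + 8 * li4_half)

print(lhs, rhs, abs(lhs - rhs))   # agree to within ~1e-5 (truncation of lhs)
```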
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is $\left(1+\frac1n\right)^{n+1/2}$ decreasing? Using the Cauchy-Schwarz Inequality, we have $$ \begin{align} 1 &=\left(\int_n^{n+1}1\,\mathrm{d}x\right)^2\\ &\le\left(\int_n^{n+1}x\,\mathrm{d}x\right)\left(\int_n^{n+1}\frac1x\,\mathrm{d}x\right)\\ &=\left(n+\frac12\right)\log\left(1+\frac1n\right) \end{align} $$ which means that $$ \left(1+\frac1n\right)^{n+1/2}\ge e $$ This hints that $\left(1+\frac1n\right)^{n+1/2}$ might be decreasing. In this answer, it is shown that $\left(1+\frac1n\right)^n$ is increasing and $\left(1+\frac1n\right)^{n+1}$ is decreasing. The proofs use Bernoulli's Inequality. However, applying Bernoulli to $\left(1+\frac1n\right)^{n+1/2}$ is inconclusive. Attempt to show decrease: $$ \begin{align} \frac{\left(1+\frac1{n-1}\right)^{2n-1}}{\left(1+\frac1n\right)^{2n+1}} &=\left(1+\frac1{n^2-1}\right)^{2n}\frac{n-1}{n+1}\\ &\ge\left(1+\frac{2n}{n^2-1}\right)\frac{n-1}{n+1}\\[6pt] &=1-\frac{2}{(n+1)^2} \end{align} $$ Attempt to show increase: $$ \begin{align} \frac{\left(1+\frac1n\right)^{2n+1}}{\left(1+\frac1{n-1}\right)^{2n-1}} &=\left(1-\frac1{n^2}\right)^{2n}\frac{n+1}{n-1}\\ &\ge\left(1-\frac2n\right)\frac{n+1}{n-1}\\[6pt] &=1-\frac{2}{n(n-1)} \end{align} $$ Neither works. Without resorting to derivatives, is there something stronger than Bernoulli, but similarly elementary, that might be used to show that $\left(1+\frac1n\right)^{n+1/2}$ decreases?
Preliminaries: A couple of extensions to Bernoulli's Inequality. Bernoulli's Inequality says that $(1+x)^n$ is at least as big as the first two terms of its binomial expansion. It turns out, at least for $n\in\mathbb{Z}$, that a sharper inequality can be obtained using any partial sum with an even number of terms. Theorem $\bf{1}$: for $m\ge1$, $n\ge0$, and $x\gt-1$, $$ (1+x)^n\ge\sum_{k=0}^{2m-1}\binom{n}{k}x^k\tag1 $$ Proof (Induction on $n$): $(1)$ is trivial for $n=0$. Assume $(1)$ is true for $n-1$, then $$ \begin{align} (1+x)^n &=(1+x)(1+x)^{n-1}\tag{1a}\\[9pt] &\ge(1+x)\sum_{k=0}^{2m-1}\binom{n-1}{k}x^k\tag{1b}\\ &=\sum_{k=0}^{2m-1}\left[\binom{n-1}{k}+\binom{n-1}{k-1}\right]x^k+\binom{n-1}{2m-1}x^{2m}\tag{1c}\\ &\ge\sum_{k=0}^{2m-1}\binom{n}{k}x^k\tag{1d} \end{align} $$ Explanation: $\text{(1a)}$: factor $\text{(1b)}$: assumption for $n-1$ $\text{(1c)}$: multiply sum by $1+x$ $\text{(1d)}$: Pascal's Rule Thus, $(1)$ is true for $n$. ${\large\square}$ Theorem $\bf{2}$: for $m\ge1$, $n\ge0$, and $x\gt-1$, $$ (1+x)^{-n}\ge\sum_{k=0}^{2m-1}\binom{-n}{k}x^k\tag2 $$ Proof (Induction on $n$): Note that another way of writing $(2)$ is $$ (1+x)^n\sum_{k=0}^{2m-1}(-1)^k\binom{n+k-1}{k}x^k\le1\tag{2a} $$ $\text{(2a)}$ is trivial for $n=0$. Assume $\text{(2a)}$ is true for $n-1$, then $$ \begin{align} &(1+x)^n\sum_{k=0}^{2m-1}(-1)^k\binom{n+k-1}{k}x^k\\ &=(1+x)^{n-1}\sum_{k=0}^{2m-1}(-1)^k\binom{n+k-1}{k}x^k(1+x)\tag{2b}\\ &=(1+x)^{n-1}\sum_{k=0}^{2m-1}(-1)^k{\textstyle\left[\binom{n+k-1}{k}-\binom{n+k-2}{k-1}\right]}x^k-{\textstyle\binom{n+2m-2}{2m-1}}x^{2m}(1+x)^{n-1}\tag{2c}\\ &=(1+x)^{n-1}\sum_{k=0}^{2m-1}(-1)^k\binom{n+k-2}{k}x^k-\binom{n+2m-2}{2m-1}x^{2m}(1+x)^{n-1}\tag{2d}\\[9pt] &\le1\tag{2e} \end{align} $$ Explanation: $\text{(2b)}$: factor $\text{(2c)}$: multiply sum by $1+x$ $\text{(2d)}$: Pascal's Rule $\text{(2e)}$: assumption for $n-1$ Thus, $\text{(2a)}$ is true for $n$. 
${\large\square}$ Note that for positive integer exponents, Bernoulli's Inequality is the case $m=1$ of Theorem $1$, and for negative integer exponents, it is the case $m=1$ of Theorem $2$. Answer: Use the case $m=2$ of Theorem $1$: $$ \begin{align} &\frac{\left(1+\frac1{n-1}\right)^{2n-1}}{\left(1+\frac1n\right)^{2n+1}}\\ &=\left(1+\frac1{n^2-1}\right)^{2n}\frac{n-1}{n+1}\\ &\ge\left(1+\frac{2n}{n^2-1}+\frac{2n(2n-1)}{2\left(n^2-1\right)^2}+\frac{2n(2n-1)(2n-2)}{6\left(n^2-1\right)^3}\right)\frac{n-1}{n+1}\\ &=1+\frac{n^2+n+6}{3(n-1)(n+1)^4}\\[9pt] &\ge1 \end{align} $$ That is, $\left(1+\frac1n\right)^{n+1/2}$ is decreasing.
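A quick numerical confirmation of the conclusion (the range of $n$ is arbitrary; in floating point the consecutive differences shrink like $e/6n^3$, so the check is kept to $n\le 1000$ to stay above rounding noise):

```python
import math

a = [(1 + 1 / n) ** (n + 0.5) for n in range(1, 1001)]

# The sequence is strictly decreasing and stays above e
assert all(x > y for x, y in zip(a, a[1:]))
assert all(x > math.e for x in a)
print(a[0], a[9], a[-1], math.e)   # slowly decreasing toward e
```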
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
An urn contains $12$ balls, of which $7$ are black and $5$ are white. How many ways can we take $6$ balls out of the urn, $2$ of which are white? Uma urna contém $12$ bolas, das quais $7$ são pretas e $5$, brancas. De quantos modos podemos tirar $6$ bolas da urna, das quais duas sejam brancas? The original text is in Portuguese; an English version is: "An urn contains $12$ balls, of which $7$ are black and $5$ are white. How many ways can we take $6$ balls out of the urn, $2$ of which are white?"
If the balls are indistinguishable, then there is only $1$ way, because you must take $2$ white balls and $4$ black balls. If the balls are distinguishable, then you would have $\binom{5}{2}=10$ ways to choose the $2$ white balls and $\binom{7}{4}=35$ ways to choose the $4$ black balls, leading to $10\cdot 35=350$ ways total.
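Both counts are small enough to verify with the standard library (the brute force treats the balls as labeled):

```python
from itertools import combinations
from math import comb

# Direct formula: choose 2 of the 5 white and 4 of the 7 black
assert comb(5, 2) * comb(7, 4) == 350

# Brute force over all 6-ball draws from 12 labeled balls
balls = ['W'] * 5 + ['B'] * 7
count = sum(1 for draw in combinations(range(12), 6)
            if sum(balls[i] == 'W' for i in draw) == 2)
print(count)   # 350
```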
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding distribution of $y$ intercept given by line generated by two uniformly chosen points in $[0,1]^2$ The method I'm using seems super inefficient. What I did was define $4$ RVs namely $X_1,X_2,Y_1,Y_2$ and thus my two uniformly random points are $(X_1,Y_1),(X_2,Y_2)$ and hence the $y$ intercept is $-X_1(Y_2-Y_1)/(X_2-X_1)+Y_1$ but them I'll have to do Jacobian tranform multiple times. Is there any simpler method I seem to be missing?
Partial answer: Proofs that (A) $y$-intercept $\in [0,1]$ with probability $\frac12$, and (B) it is uniform within $[0,1]$. For shorthand, write $E=$ the event that $y$-intercept $\in [0,1]$. Easy proof of (A): Given any two points $P_1, P_2 \in [0,1]^2$, the line $L$ through them hits a corner of the square with probability $0$. Therefore, $L$ passes through exactly two sides of the square with probability $1$. Now rotate the square by $90^\circ, 180^\circ, 270^\circ$. The $4$ rotations are symmetric, and of the four lines exactly two pass through the left side of the square, i.e. have $y$-intercept $\in [0,1]$. Thus $P(E) = \frac12$. Harder proof of (A) and (B): Let $W = \max(X_1, X_2) =$ the $x$-coordinate of the point that is further to the right. We call this point $P$ and the other point $Q$. Conditioned on $W=w$, the point $Q$ is uniform in the rectangle $R$ with corners $(0,0), (0,1), (w,0), (w,1)$, i.e. the rectangle to the left of $x=w$. Meanwhile, the $y$-intercept $\in [0,1]$ iff $Q$ lies in the triangle $T$ formed by $P, (0,0), (0,1)$. This triangle $T$ is exactly $\frac12$ of rectangle $R$, so the conditional probability $P(E \mid W=w) = P(Q\in T \mid Q \in R) = \frac12$. Since this is true for any $w$, the unconditioned probability $P(E)$ is also $\frac12$, regardless of the distribution of $W$. More generally, the $y$-intercept $\in [0, c]$ (where $c\le 1$) iff $Q \in$ the smaller triangle $Z$ formed by $P, (0,0), (0,c)$. This triangle $Z$'s area is exactly $c$ of the area of $T$, or equivalently, ${c \over 2}$ of the area of $R$. This shows uniform distribution within $[0,1]$.
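A small Monte Carlo check of both claims (sample size and seed are arbitrary):

```python
import random

random.seed(0)
N = 200_000
hits = in_lower_half = 0

for _ in range(N):
    x1, y1, x2, y2 = (random.random() for _ in range(4))
    # y-intercept of the line through (x1, y1) and (x2, y2)
    c = y1 - x1 * (y2 - y1) / (x2 - x1)
    if 0 <= c <= 1:
        hits += 1
        if c <= 0.5:
            in_lower_half += 1

print(hits / N)                 # ~ 0.5  (claim A)
print(in_lower_half / hits)     # ~ 0.5  (uniformity within [0,1], claim B)
```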
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is this equation correct? And if so, is this famous? $$\int_1^{\infty}x^n e^{-x}dx=\frac{1}{e} \sum_{k=0}^n\frac{n!}{(n-k)!}$$ My proof: \begin{align*} &\int_{1}^{\infty} e^{-\alpha x} \, \mathrm{d}x = \frac{e^{-\alpha}}{\alpha}, \\ &\Rightarrow \qquad \int_{1}^{\infty} (-x)^n e^{-\alpha x} \, \mathrm{d}x = \sum_{k=0}^{n} \frac{n!}{k!(n-k)!} (-1)^k k! \frac{1}{\alpha^{k+1}}(-1)^{n-k}e^{-\alpha} \\ &\Rightarrow \qquad \int_{1}^{\infty} x^n e^{-\alpha x} \, \mathrm{d}x = \sum_{k=0}^{n} \frac{n!}{(n-k)!} \frac{e^{-\alpha}}{\alpha^{k+1}}. \end{align*} Setting $\alpha = 1$, we obtain \begin{align*} \int_{1}^{\infty} x^n e^{-x} \, \mathrm{d}x = \frac{1}{e} \sum_{k=0}^{n} \frac{n!}{(n-k)!}. \end{align*} $\blacksquare$
Is it famous? Probably not. Is it known? Yes. In: Gradshteyn and Ryzhik, Table of Integrals, Series, and Products, formula 3.351.2 is $$ \int_u^\infty x^n e^{-\mu x} dx = e^{-u \mu} \sum_{k=0}^n \frac{n!}{k!}\; \frac{u^k}{\mu^{n-k+1}} $$ Now if you take $u=\mu=1$ you get your formula.
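The identity can also be checked without any integration: integration by parts gives $I_n = nI_{n-1} + e^{-1}$ with $I_0 = e^{-1}$, and the claimed closed form satisfies the same recurrence. Writing $I_n = S_n/e$ with $S_n = \sum_{k=0}^n n!/(n-k)!$, the check becomes exact integer arithmetic:

```python
from math import factorial

def S(n):
    # S_n = e * I_n according to the claimed closed form
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

# Integration by parts: I_n = n*I_{n-1} + 1/e, i.e. S_n = n*S_{n-1} + 1
for n in range(1, 50):
    assert S(n) == n * S(n - 1) + 1

print([S(n) for n in range(6)])   # [1, 2, 5, 16, 65, 326]
```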
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why is there no solution set for $|x-7|<-4$? I am asked to find the solution set for $|x-7|<-4$. I arrived at $(-\infty, 3)\cup(11, \infty)$ For $x - 7 > 0$: $x-7<-4$ => $x<3$ For $x-7 < 0$: $-(x-7)<-4$ => $-x+7<-4$ => $-x<-11$ => $x>11$ So, I arrive at a solution set of: $(-\infty, 3)\cup(11, \infty)$ However, my textbook says "no solution". Why is there no solution?
When the absolute value is on the "less than" side, the conjunction is "and", not "or." You've discovered that $x<3$ AND $x>11$, which is impossible. More directly: $|x-7|\ge 0$ for every real $x$, so it can never be less than $-4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show $\frac{\sin x_1\sin x_2\cdots\sin x_n}{\sin(x_1+x_2)\sin(x_2+x_3)\cdots\sin(x_n+x_1)}\le\frac{\sin^n(\pi/n)}{\sin^n(2\pi/n)}$, for $\sum x_i=\pi$ Let $x_{i}>0$, ($i=1,2,\cdots,n$) and such that $$x_{1}+x_{2}+\cdots+x_{n}=\pi.$$ Show that $$ \dfrac{\sin{x_{1}}\sin{x_{2}}\cdots\sin{x_{n}}}{\sin{(x_{1}+x_{2})}\sin{(x_{2}+x_{3})}\cdots\sin{(x_{n}+x_{1})}}\le\left(\dfrac{\sin{\frac{\pi}{n}}}{\sin{\frac{2\pi}{n}}}\right)^n $$ This problem was also posted on MO; so far no one has solved it. I think there might be a solution here, because I've heard that there are a lot of people here who are good at and enjoy inequalities, so the chances of this inequality being solved are high, and I really look forward to the attempts.
Just to make it clear, this is not really an answer, more of a failed approach using an obvious idea to try We will assume $n \geq 3$, since otherwise the statement does not make a lot of sense (for $n = 2$ we would get $x_1 + x_2 = \pi$ so $\sin(x_1+x_2) = 0$ in the denominator, and similarly for $n = 1$). First note that since $x_i \gt 0$ and $x_1 + x_2 + \cdots + x_n = \pi$, we must have $0 \lt x_i \lt \pi$ and $0 \lt x_{i+1} + x_i \lt \pi$ for all $i = 1, \ldots n$. Therefore we also have $\sin x_i \gt 0$ and $\sin(x_{i+1} + x_i) \gt 0$ so each term is strictly positive. Taking logs we have $$\log\left[\frac{\sin{x_{1}}\sin{x_{2}}\cdots\sin{x_{n}}}{\sin{(x_{1}+x_{2})}\sin{(x_{2}+x_{3})}\cdots\sin{(x_{n}+x_{1})}}\right]$$ $$ = \log\left[\frac{\sin x_1}{\sin(x_1+x_2)}\right]+\log\left[\frac{\sin x_2}{\sin(x_2+x_3)}\right]+\cdots+\log\left[\frac{\sin x_n}{\sin(x_n+x_1)}\right]$$ Now consider the function $f(x, y) = \log\left[\frac{\sin x}{\sin(x + y)}\right] = \log\left(\sin x\right) - \log\left[\sin\left(x+y\right)\right]$ on the domain satisfying the constraints $0 \lt x \lt \pi$, $0 \lt y \lt \pi$ and $0 \lt x + y \lt \pi$. We can compute the Hessian matrix and check whether it is negative-semidefinite on this domain, which is equivalent to $f$ being concave. Assuming this was true, we could make use of Jensen's inequality for concave functions: $$\frac{f(x_1, x_2) + f(x_2, x_3) + \cdots + f(x_n, x_1)}{n} \leq f\left(\frac{x_1+x_2+\cdots+x_n}{n},\frac{x_1+x_2+\cdots+x_n}{n}\right)$$ This would imply $$\log\left[\frac{\sin{x_{1}}\sin{x_{2}}\cdots\sin{x_{n}}}{\sin{(x_{1}+x_{2})}\sin{(x_{2}+x_{3})}\cdots\sin{(x_{n}+x_{1})}}\right] \leq n \log\left[\frac{\sin\frac{\pi}{n}}{\sin\frac{2\pi}{n}}\right]$$ and the result would follow after taking exponents. So what's left is to actually compute the Hessian matrix and check the criterion for negative-semidefiniteness, and also make sure that Jensen's inequality holds in this way for multivariable functions. 
Now we find $\frac{\partial f}{\partial x} = \cot x - \cot(x+y)$ and $\frac{\partial f}{\partial y} = -\cot(x+y)$. Then we compute $$\frac{\partial^2 f}{\partial x^2} = \csc^2(x+y)-\csc^2 x$$ $$\frac{\partial^2 f}{\partial x\partial y} = \frac{\partial^2 f}{\partial y\partial x} = \csc^2(x+y)$$ $$\frac{\partial^2 f}{\partial y^2} = \csc^2(x+y)$$ Hence the Hessian matrix is given by $$\begin{pmatrix} \csc^2(x+y) - \csc^2 x & \csc^2(x+y) \\ \csc^2(x+y) & \csc^2(x+y) \end{pmatrix}$$ Its determinant is $-\csc^2(x+y)\csc^2 x$ so it is strictly negative in the domain we're working in. This means that the eigenvalues have opposite signs, and so the function is neither concave nor convex, which means that Jensen's inequality should not hold, and there ought to be counterexamples to the claim. One issue is that we are restricted in the kinds of points we can look at (they must all share a common component with another point, like $(x_1, x_2), (x_2, x_3), \cdots, (x_n, x_1)$) so it's conceivable that there might not be counterexamples that satisfy this additional constraint. I am not really sure how to deal with this at the moment.
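The analytic Hessian entries above can be cross-checked by finite differences at a sample interior point (the point and step size are my choices):

```python
import math

def f(x, y):
    return math.log(math.sin(x)) - math.log(math.sin(x + y))

x, y, h = 1.0, 0.7, 1e-4

fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
fxy = (f(x + h, y + h) - f(x + h, y - h)
       - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)

csc2 = lambda t: 1.0 / math.sin(t) ** 2
assert abs(fxx - (csc2(x + y) - csc2(x))) < 1e-4
assert abs(fyy - csc2(x + y)) < 1e-4
assert abs(fxy - csc2(x + y)) < 1e-4

# The determinant is negative, so the Hessian is indefinite:
# f is neither concave nor convex on this domain
print(fxx * fyy - fxy ** 2)   # < 0
```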
{ "language": "en", "url": "https://math.stackexchange.com/questions/3259944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 7, "answer_id": 1 }
Order of the eigenvalues of Sturm-Liouville operator Suppose we are working on a bounded interval $[a,b]$ with Sturm-Liouville operator $L$ given by $$ Lf = \frac{1}{w(x)}\left[-\frac{d}{dx}\left(p(x)\frac{df}{dx}\right)+q(x)f\right]. $$ How can I prove that the eigenvalues of the operator can be ordered as an increasing sequence such that : $\lambda_0 < \lambda_1 < \lambda_2... < \lambda_n < ... \to + \infty$
The problem needs to be a regular problem on $[a,b]$; otherwise what you say may not be true at all. If the problem is regular, and if you have endpoint conditions $$ \cos\alpha f(a)+\sin\alpha f'(a) = 0 \\ \cos\beta f(b)+\sin\beta f'(b) = 0, $$ then you basically get what you want. One way to argue is by looking at the classical solutions of the Sturm-Liouville equation $Lf=\lambda f$ with endpoint conditions $$ f(a) = -\sin\alpha,\;\;\; f'(a)=\cos\alpha. $$ Then $f(x,\lambda)$ and $f'(x,\lambda)$ will depend holomorphically on $\lambda$, which means that the solutions where $\cos\beta f(b,\lambda)+\sin\beta f'(b,\lambda)=0$ will correspond to the zeroes of a holomorphic function of $\lambda$. Then some simple estimates of $\langle Lf,f\rangle$ will show that this form is bounded below, and, therefore, the eigenvalues are bounded below. Knowing that the eigenvalues are the zeros of a power series in $\lambda$ that is not identically $0$ tells you that the set of eigenvalues does not cluster and, therefore, can be ordered.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find $\int_0^1 \frac{\ln^2x\arctan x}{x(1+x^2)}\ dx$ I came across this integral while I was working on a tough series. A friend was able to evaluate it, giving: $$\int_0^1 \frac{\ln^2x\arctan x}{x(1+x^2)}\ dx=\frac{\pi^3}{16}\ln2-\frac{7\pi}{64}\zeta(3)-\frac{\pi^4}{96}+\frac1{768}\psi^{(3)}\left(\frac14\right)$$ using integral manipulation. Other approaches are appreciated.
Start with breaking the denominator $$I=\int_0^1 \frac{\ln^2x\arctan x}{x}\ dx-\int_0^1 \frac{x\ln^2x\arctan x}{1+x^2}\ dx$$ For the first integral, use $\arctan x=\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{2n+1}$ and for the second integral, use the identity $\frac{\arctan x}{1+x^2}=\frac12\sum_{n=0}^\infty(-1)^n\left(H_n-2H_{2n}\right)x^{2n-1}$ we have $$I=\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}\int_0^1x^{2n}\ln^2x\ dx-\frac12\sum_{n=0}^\infty(-1)^n(H_n-2H_{2n})\int_0^1x^{2n}\ln^2x\ dx$$ $$=2\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^4}-\sum_{n=0}^\infty(-1)^n\frac{H_n-2H_{2n}}{(2n+1)^3}$$ $$=2\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^4}-\sum_{n=0}^\infty\frac{(-1)^nH_n}{(2n+1)^3}+2\sum_{n=0}^\infty\frac{(-1)^nH_{2n}}{(2n+1)^3},\quad H_{2n}=H_{2n+1}-\frac{1}{2n+1}$$ $$=\sum_{n=0}^\infty\frac{(-1)^{n-1}H_n}{(2n+1)^3}+2\sum_{n=0}^\infty\frac{(-1)^nH_{2n+1}}{(2n+1)^3}$$ Substitute $$\sum_{n=0}^\infty\frac{(-1)^{n-1}H_n}{(2n+1)^3}=\frac{7\pi}{16}\zeta(3)+\frac{\pi^3}{16}\ln2+\frac{\pi^4}{32}-\frac1{256}\psi^{(3)}\left(\frac14\right)$$ and $$\sum_{n=0}^\infty(-1)^n\frac{H_{2n+1}}{(2n+1)^3}=\frac1{384}\psi^{(3)}\left(\frac14\right)-\frac{1}{48}\pi^4-\frac{35}{128}\pi\zeta(3)$$ we obtain that $$I=\frac{\pi^3}{16}\ln2-\frac{7\pi}{64}\zeta(3)-\frac{\pi^4}{96}+\frac1{768}\psi^{(3)}\left(\frac14\right)$$
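The final identity is easy to test numerically from the last displayed series; $\zeta(3)$ and $\psi^{(3)}(1/4)=6\sum_{k\ge0}(k+1/4)^{-4}$ are summed directly (truncation levels are my choices):

```python
import math

N = 4000
# Harmonic numbers H_1 .. H_{2N+1}
harm = [0.0] * (2 * N + 2)
for k in range(1, 2 * N + 2):
    harm[k] = harm[k - 1] + 1.0 / k

# s1 = sum_{n>=1} (-1)^(n-1) H_n     / (2n+1)^3
# s2 = sum_{n>=0} (-1)^n    H_{2n+1} / (2n+1)^3
s1 = sum((-1) ** (n - 1) * harm[n] / (2 * n + 1) ** 3 for n in range(1, N + 1))
s2 = sum((-1) ** n * harm[2 * n + 1] / (2 * n + 1) ** 3 for n in range(N + 1))
lhs = s1 + 2 * s2

zeta3 = sum(1.0 / k ** 3 for k in range(1, 300_000))
psi3_quarter = 6 * sum((k + 0.25) ** -4.0 for k in range(200_000))
rhs = (math.pi ** 3 / 16 * math.log(2) - 7 * math.pi / 64 * zeta3
       - math.pi ** 4 / 96 + psi3_quarter / 768)

print(lhs, rhs, abs(lhs - rhs))   # essentially equal
```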
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Solving DLP by Baby Step, Giant Step I want to solve the DLP $6\equiv 2^x\pmod {101}$ using Baby Step, Giant Step. I have done the following: We have that $n=\phi (101)=100$, since $101$ is prime. $m=\lceil \sqrt{100}\rceil=10$. For each $j\in \{0,1,\ldots , 9\}$ we calculate $(j,2^j)$: \begin{align*}j \ \ \ & & \ \ \ 2^j \\ 0 \ \ \ & & \ \ \ 1 \\ 1 \ \ \ & & \ \ \ 2 \\ 2 \ \ \ & & \ \ \ 4 \\ 3 \ \ \ & & \ \ \ 8 \\ 4 \ \ \ & & \ \ \ 16 \\ 5 \ \ \ & & \ \ \ 32 \\ 6 \ \ \ & & \ \ \ 64 \\ 7 \ \ \ & & \ \ \ 27 \\ 8 \ \ \ & & \ \ \ 54 \\ 9 \ \ \ & & \ \ \ 7\end{align*} It holds that $2^{-10}=1024^{-1}\equiv 65$. For each $i\in \{0,1,\ldots , 9\}$ we calculate $6\cdot 65^i$: \begin{align*}i \ \ \ & & \ \ \ 6\cdot 65^i \\ 0 \ \ \ & & \ \ \ 6 \\ 1 \ \ \ & & \ \ \ 87 \\ 2 \ \ \ & & \ \ \ 100 \\ 3 \ \ \ & & \ \ \ 36 \\ 4 \ \ \ & & \ \ \ 17 \\ 5 \ \ \ & & \ \ \ 95 \\ 6 \ \ \ & & \ \ \ 14 \\ 7 \ \ \ & & \ \ \ 1\end{align*} Therefore for $i=7$ and $j=0$ we get the same result, and so \begin{equation*}\log_26=7\cdot 10+0=70\end{equation*} Is everything correct?
Using An Introduction to Mathematical Cryptography, J. Hoffstein, J. Pipher, J. H. Silverman, let's use Shanks's Babystep, Giantstep Algorithm: * *Let $G$ be a group and let $g \in G$ be an element of order $N \ge 2$. The following algorithm solves the discrete logarithm problem in $\mathscr{O}(\sqrt{N} \cdot \log N)$ steps. *$(1)$ Let $N$ be the multiplicative order of $g$ and then $n = 1 + \lfloor \sqrt{N} \rfloor$, so in particular $n \gt \sqrt{N}$. *$(2)$ Create two lists, List $1: g, g^2, g^3, \ldots, g^n$, List $2: h, h g^{-n}, h g^{-2n},h g^{-3n},\ldots, hg^{-n^2}$. *$(3)$ Find a match between the two lists, say $g^{i} = h g^{-j\cdot n}$. *$(4)$ Then, $x = i + jn$ is a solution to $g^x = h \pmod {p}$. For this problem, we want to perform the algorithm to solve for $x$ in $$2^x\equiv 6\pmod {101}$$ We have $$g = 2, h = 6, p = 101$$ For $p = 101$, we have order $N = \phi{(101)} = 100$ in $\mathbb{F}^*_{101}$ (here $2$ is a primitive root mod $101$), so set $$n = 1 + \lfloor \sqrt{N} \rfloor = 1 + \lfloor\sqrt{100}\rfloor = 11$$ Set $$u = g^{-n}\pmod{101} = 2^{-11} \pmod{101} = 83$$ Create the table $$\begin{array}{|c|c|c|} \hline \text{k} & g^k & h u^k \\ \hline 1 & 2 & 94 \\ \hline 2 & 4 & 25 \\ \hline 3 & 8 & 55 \\ \hline 4 & 16 & 20 \\ \hline 5 & 32 & 44 \\ \hline 6 & 64 & 16 \\ \hline 7 & 27 & 15 \\ \hline 8 & 54 & 33 \\ \hline 9 & 7 & 12 \\ \hline 10 & 14 & 87 \\ \hline 11 & 28 & 50 \\ \hline \end{array}$$ From the table, we find the collision at $16$, so we have $$x = i + jn = 4 + (6)(11) = 70$$ Hence, $x = 70$ solves the problem $2^x \equiv 6 \pmod{101}$ in $\mathbb{F}^*_{101}$. Aside: Here is a calculator and nice description of the algorithm to verify or you can solve this using Wolfram Alpha.
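A direct implementation of the algorithm above (the dictionary-based collision search in place of the two explicit lists is a standard optimization; names are mine):

```python
from math import isqrt

def bsgs(g, h, p, order=None):
    """Solve g^x = h (mod p) by babystep-giantstep; returns x or None."""
    N = order if order is not None else p - 1
    n = 1 + isqrt(N)
    # Babysteps: store g^j for j = 0 .. n-1
    baby = {}
    gj = 1
    for j in range(n):
        baby.setdefault(gj, j)
        gj = gj * g % p
    # Giantsteps: look for h * (g^-n)^i among the babysteps
    u = pow(g, -n, p)          # modular inverse power (Python 3.8+)
    cur = h % p
    for i in range(n + 1):
        if cur in baby:
            return baby[cur] + i * n
        cur = cur * u % p
    return None

x = bsgs(2, 6, 101)
print(x, pow(2, x, 101))   # 70 6
```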
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
maximum and minimum value of $\frac{x^2+y^2}{x^2+xy+4y^2}$ If $x,y\in\mathbb{R}$ and $x^2+y^2>0$, then find the maximum and minimum value of $\displaystyle \frac{x^2+y^2}{x^2+xy+4y^2}$. Plan Let $$K=\frac{x^2+y^2}{x^2+xy+4y^2}$$ $$Kx^2+Kxy+4Ky^2=x^2+y^2\Rightarrow (4K-1)y^2+Kxy+(K-1)x^2=0$$ Put $y/x=t$ and the equation is $(4K-1)t^2+Kt+(K-1)=0$. How do I solve it from here? Help me please.
From the restriction $x^2+y^2>0$, we get that $x,y$ are not both zero. If $x=0$, then $K={\large{\frac{1}{4}}}$. Suppose $x\ne 0$. Letting $t={\large{\frac{y}{x}}}$, and following your approach, we get $$(4K-1)t^2+Kt+(K-1)=0$$ which has a real solution for $t$ if and only $K={\large{\frac{1}{4}}}$ or the discriminant $$K^2-4(4K-1)(K-1)$$ is nonnegative. Equivalently, either $K={\large{\frac{1}{4}}}$ or $$-15K^2+20K-4\ge 0$$ With the restriction $K\ne{\large{\frac{1}{4}}}$, the quadratic inequality solves as $$\frac{10-2\sqrt{10}}{15}\le K\le \frac{10+2\sqrt{10}}{15},\;\;K\ne{\large{\frac{1}{4}}}$$ Noting that $$\frac{10-2\sqrt{10}}{15}<\frac{1}{4}<\frac{10+2\sqrt{10}}{15}$$ it follows that * *The minimum value of $K$ is ${\large{\frac{10-2\sqrt{10}}{15}}}\approx .2450296455$.$\\[8pt]$ *The maximum value of $K$ is ${\large{\frac{10+2\sqrt{10}}{15}}}\approx 1.088303688$.
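The two extreme values can be confirmed by a brute-force scan over the slope $t=y/x$ (the grid range and step are my choices; the $x=0$ case contributes $K=\frac14$, which lies strictly between the two extremes):

```python
import math

def K(t):
    # Value of (x^2 + y^2) / (x^2 + xy + 4y^2) with t = y/x;
    # the denominator 1 + t + 4t^2 has no real roots, so K is defined everywhere
    return (1 + t * t) / (1 + t + 4 * t * t)

ts = [i / 1000 for i in range(-20000, 20001)]   # t in [-20, 20]
vals = [K(t) for t in ts]

k_min, k_max = min(vals), max(vals)
expected_min = (10 - 2 * math.sqrt(10)) / 15
expected_max = (10 + 2 * math.sqrt(10)) / 15

print(k_min, expected_min)   # ~0.24503
print(k_max, expected_max)   # ~1.08830
```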
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
I have a doubt regarding the group {4,8,12,16} under multiplication modulo 20. I faced a question: show that $\{4,8,12,16\}$ is a group under multiplication mod 20. That is fine; I have solved the problem. But something feels strange about it: the identity element of multiplication modulo 20 should be $1$, which is not in the set. I mean, does the identity element change if I change the set I am dealing with, keeping the binary operation the same? Please give me a similar example in which the identity element under the same operation changes due to changing the set in a group.
Set $S=\{4,8,12,16\}$. The identity element of the algebraic structure $(S,\times_{20})$ is $16$. * *Is the set closed under modular multiplication: YES *Is the operation associative: $(4\times_{20}8)\times_{20}12=12\times_{20}12=\textbf{4}=4\times_{20}(8\times_{20}12)=4\times_{20}16=\bf{4}$ (associativity is in fact inherited from multiplication modulo 20) *Does there exist an identity: YES (16) *Does each element have its own inverse: YES ($4 = 4^{-1}, 8 = 12^{-1}, 12= 8^{-1}, 16 = 16^{-1}$) So, this algebraic structure is indeed a group under multiplication modulo 20.
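All four axioms are small enough to check exhaustively (a quick script of mine):

```python
from itertools import product

S = [4, 8, 12, 16]
mul = lambda a, b: a * b % 20

# Closure
assert all(mul(a, b) in S for a, b in product(S, S))
# Associativity (automatic for modular multiplication, but check anyway)
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a, b, c in product(S, S, S))
# Identity: 16
assert all(mul(16, a) == a == mul(a, 16) for a in S)
# Inverses with respect to the identity 16
inv = {a: next(b for b in S if mul(a, b) == 16) for a in S}
print(inv)   # {4: 4, 8: 12, 12: 8, 16: 16}
```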
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why is the deleted comb space still connected? I read from wikipedia that the deleted comb space: $$ (\{0\} \times \{0,1\}) \bigcup (K \times [0,1]) \bigcup ([0,1] \times \{ 0 \}) $$ is well known to be connected. But how is it connected if there's a separate point $p=\langle 0,1\rangle$? Or is it because $p$ is a single point, and is not considered to be among the open sets of $\mathbb{R}$, which are in the form of intervals?
Clearly, if $Y=\bigl(K\times[0,1]\bigr)\cup\bigl([0,1]\times\{0\}\bigr)$, then $Y$ is path connected, and therefore connected. But your set lies between $Y$ and its closure. Therefore, it is connected too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I solve $x^2 \equiv 19 \pmod {59}$. How can I solve $x^2 \equiv 19 \pmod {59}$? I know that we can just try squaring numbers from 1 to 58, but this is a very slow method; is there not a quicker one?
Hint: Use the Gauss reciprocity theorem: $$\Big({19\over 59}\Big)\Big({59\over 19}\Big) = (-1)^{{59-1\over 2}\cdot {19-1\over 2}}= -1$$ Since $$59 \equiv 2\pmod{19}$$ and if $$x\equiv 0,\pm1,\pm2,\pm3,\pm4,\pm5,\pm6, \pm7,\pm8, \pm9 \pmod{19}$$ we have $$x^2\equiv 0,1,4,9,-3,6,-2, -8,7, 5 \pmod{19}$$ we see that $$\Big({59\over 19}\Big)=\Big({2\over 19}\Big) =-1$$ so $$\Big({19\over 59}\Big) =1$$ which means $19$ is a square modulo $59$. Now if you want an explicit $x$, then you can use the brute force method, provided the modulus is not too big, as in this case: $$x≡0,±1,±2,±3,±4,±5,...±29\pmod{59}$$ and then calculate the squares of these $x$; we can see $14^2≡19\pmod{59}$.
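The brute-force search mentioned at the end is a one-liner in Python (an added sanity check, not part of the original answer); Euler's criterion confirms the Legendre-symbol computation as well:

```python
p, a = 59, 19

# Brute force: try every residue class.
roots = [x for x in range(p) if (x * x) % p == a]
assert roots == [14, 45]  # 45 = 59 - 14 is the other square root

# Euler's criterion: a is a quadratic residue mod p iff a^((p-1)/2) = 1 (mod p).
assert pow(a, (p - 1) // 2, p) == 1
```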
{ "language": "en", "url": "https://math.stackexchange.com/questions/3260996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
When is $y=|P(x)|$ differentiable? Let $P:\mathbb{R}\to\mathbb{R}$ be a polynomial function. When is $y=|P(x)|$ differentiable? I found out that $y=|P(x)|$ may be differentiable throughout $\mathbb{R}$ or it may not be. When it is not differentiable throughout $\mathbb{R}$, the points of non-differentiability occur where $P(x)=0$. Is there any more detail which can be given about this?
I've upvoted Arthur's answer, but I'd like to add a perspective on why Arthur's answer is correct, since none of the answers seem to address the why of it. Let $$\newcommand\sgn{\operatorname{sgn}}\sgn(x) = \begin{cases} -1 &x<0\\1&x\ge0\end{cases}$$ be the sign function that returns the sign of its argument. Observe that $|x|=x\sgn(x)$. Thus if $f:\Bbb{R}\to\Bbb{R}$ is any real valued function, $|f|(x) = f(x)\sgn(f(x))$. If $f$ is differentiable, then when $f(x)\ne 0$, both $f(x)$ and $\sgn(f(x))$ are differentiable, so we can apply the product rule to get $$|f|'(x) = f'(x)\sgn(f(x)) + f(x)f'(x)\sgn'(f(x)),$$ but $\sgn'(x)=0$, when $x\ne 0$, so the second term is $0$. Thus $$|f|'(x) = f'(x)\sgn(f(x)),$$ when $f(x)\ne 0$. What about when $f(x)=0$? Well, let's take a look at the limit. $$|f|'(x) = \lim_{h\to 0} \frac{|f(x+h)|-|f(x)|}{h} = \lim_{h\to 0} \frac{|f(x+h)|}{h}= \lim_{h\to 0} \left|\frac{f(x+h)}{h}\right|\sgn(h).$$ This limit exists if and only if the left and right hand limits exist and agree, but in the right hand limit $\sgn(h)=1$, and we get $|f'(x)|$, and in the left hand limit $\sgn(h)=-1$, and we get $-|f'(x)|$. Thus when $f(x)=0$, $|f|$ is differentiable at $x$ if and only if $|f'(x)|=-|f'(x)|$, which occurs if and only if $f'(x)=0$. Thus we obtain Arthur's answer, if $f:\Bbb{R}\to \Bbb{R}$ is differentiable, then $|f|$ is differentiable at $x$ if and only if $f(x)\ne 0$ or $f(x)=f'(x)=0$. Moreover, when differentiable, $|f|'(x) = f'(x)\sgn(f(x))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3261140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Inverse Laplace transformation of Bessel and exponential I'm trying to find the inverse Laplace transform of $$\mathrm{e}^{-\beta\alpha}J_0(\frac{\beta\alpha}{2})^2$$ where $\alpha=const$. Would it be an idea, since I'm looking for the $\alpha\rightarrow\infty$ limit, to calculate the asymptotic form of $J_0(x)$, i.e., $\sqrt{2/(\pi x)}\cos(x-\pi/4)$, and try it that way? But I can't seem to solve it that way either...
I think your idea works; the asymptotic form is $$\dfrac{2e^{-\alpha\beta}}{\pi\alpha\beta}\left(1+\sin \alpha\beta\right)\tag{1}$$ Also $\dfrac{1+\sin \alpha\beta}{\alpha\beta}$ is bounded and $$\dfrac{e^{-\alpha\beta}}{\alpha\beta}\left(1+\sin \alpha\beta\right)\to0$$ as $\alpha\to\infty$ where $\beta$ is a constant; then $(1)$ has an inverse Laplace transform. With $f(u)=\dfrac{e^{-u}}{u}\left(1+\sin u\right)$, $u=0$ is a simple pole and the residue of $$f(u) e^{t u}$$ is $1$; the sum of residues is $1$, so the final answer should be $\dfrac{2}{\pi}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3261250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is this a proper rendering for "$b$ is a power of 2" using TNT from Gödel, Escher, Bach In the book "Gödel, Escher, Bach", Hofstadter introduces the system of "Typographical Number Theory". One of the exercises is to write 'b is a power of 2'. I ended up coming up with the following $<(a \cdot c)=b \supset \exists a':(SS0 \cdot a')=a \land \exists c': (SS0 \cdot c')=c>$ My interpretation into English of the phrase is "$a$ and $c$ being factors of $b$ implies that both $a$ and $c$ are even", which I believe only holds true for $b$ that are powers of 2. Is this a valid way of writing 'b is a power of 2' in the system, and if it is/isn't, how can I be sure?
Your formula doesn't quite work as written, because every number has $1$ as a factor. For example, $8$ is a power of $2$, but we have $1\cdot 8 = 8$, and $1$ is not even. But the basic idea is correct. You can express "$b$ is a power of $2$" by "every factor of $b$ except for $1$ is even". I would write this as follows: $$\forall a\, ((\exists c\, a\cdot c = b) \rightarrow (a = S0 \lor \exists d\, (a = SS0\cdot d)))$$ I'm not sure my notation matches Hofstadter's (for example I strongly prefer $\rightarrow$ to $\supset$ as a symbol for implication), but I trust you can make the notational adjustments necessary to translate my formula into a formula of TNT. In the comments, you write "How can I be certain that there does not exist some spooky counter example?". You can be certain by proving it! Suppose $n$ is a natural number. We want to prove that $n$ is a power of $2$ if and only if every factor of $n$ is either even or equal to $1$. If $n$ is a power of $2$, then by uniqueness of prime factorization, the only prime number dividing $n$ is $2$. If $n$ has an odd factor $m$ with $m\neq 1$, then $m$ has an odd prime factor $p$, which is also an odd prime factor of $n$, contradiction. Conversely, if every factor of $n$ is either even or equal to $1$, then since $1$ is not prime and $2$ is the only even prime, the only prime number appearing in the prime factorization of $n$ is $2$, so $n$ is a power of $2$.
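As an added check (not in the original answer), the equivalence proved above is easy to verify exhaustively for small $n$ in Python:

```python
def is_power_of_two(n):
    # n > 0 is a power of two iff its binary expansion has a single 1 bit
    return n > 0 and n & (n - 1) == 0

def every_factor_even_or_one(n):
    # the characterization used in the TNT formula
    return all(d == 1 or d % 2 == 0 for d in range(1, n + 1) if n % d == 0)

assert all(is_power_of_two(n) == every_factor_even_or_one(n)
           for n in range(1, 500))
```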
{ "language": "en", "url": "https://math.stackexchange.com/questions/3261398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
what is the symbol $⊭$ in logic? I know that the $⊨$ symbol is the entailment symbol and $A⊨B$ means that if A is True then B must be True. But I'm confused about the $⊭$ symbol. Which one is it? * *$A⊭B$ means if A is true then B is False? = $A⊨¬B$ *$A⊭B$ means the truth of A is not any guarantee for B? *$A⊭B$ means if A is False then B must be True? = $¬A⊨B$ thank you in advance.
$A \vDash B$ means For all assignments $v$, if $A$ is true under $v$, then $B$ is true under $v$. $A \not \vDash B$ simply is the (meta-logical) negation of this statement, that is Not for all assignments $v$ it is the case that if $A$ is true under $v$, then $B$ is true under $v$ which is equivalent to There is at least one assignment $v$ such that $A$ is true under $v$ but $B$ is not which means your second option is the right one. The other two options (1 and 3) are notated in the way you already figured out by yourself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3261512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Zeroes of $\sin(z)-z^2$ I am studying for my prelims exam. I stumbled upon the following question. Show that there are infinitely many zeroes of $\sin(z)-z^2$ in the complex plane. Had it just been $f(z)=\sin(z)-z$, one can observe that $f(z+2\pi)=f(z)-2\pi$. One can, therefore, see that if $f$ takes zeros only finitely often, then $f$ must also take the value $-2\pi$ only finitely often. Picard’s theorem, therefore, tells us that $f$ is a polynomial, which is absurd. This direct approach does not seem to work for my question. So, I tried using Rouche’s theorem. For that I take $g(z)=\sin(z).$ I know the zeroes of $\sin(z)$, my idea was to find a region containing $n$ zeroes of $sin(z)$ and show that on the boundary we have $$|f-g|<|f|+|g|.$$ I could not choose the suitable region such that the above relation holds on the boundary. I am not sure if it will work or not. Of course, if it works it will prove something stronger, namely, we will in a way have a handle on the location of zeroes. Any hint would be appreciated. Moreover, my guess is that if $p(z)$ is any polynomial then $f(z)=\sin(z)-p(z)$ will have infinitely many zeroes in the complex plane. I would like to see an argument for this case. I am trying to use Rouche’s theorem but I am not able to make any progress. Also, if there is an alternate approach (which avoids Rouche’s theorem), it will also be much appreciated.
Questions like this can typically be answered using the Hadamard factorization theorem. The function $f(z)=\sin z-z^2$ has finite order, so if it has finitely many zeroes then its Hadamard factorization has the form $P(z)e^{Q(z)}$ for some polynomials $P$ and $Q$. So, we would have the equation $$P(z)e^{Q(z)}=\sin z-z^2$$ for all $z\in\mathbb{C}$. Differentiating three times we get $$R(z)e^{Q(z)}=-\cos z$$ for some other polynomial $R$. But this is impossible, since the left side has finitely many zeroes and the right side has infinitely many zeroes. (The same argument applies with $z^2$ replaced by any polynomial; you just have to differentiate enough times to make the polynomial term go away.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3261812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What is the general formula for a sequence that is neither arithmetic nor geometric? If a sequence is such that the next term is given by $a_n*a +b$, what is the general formula of the given sequence? e.g.: for $a=4,\;b=6$, the sequence is: $1,\quad1*4+6=10,\quad10*4+6=46,\quad46*4 +6=190,\;\dots$ What would the formula for the general term be? (I'm not necessarily asking specifically for this example but rather for a general sequence with the next term defined by $a_n*a +b$).
You have the recursion $$x_{n+1} = a x_n + b \qquad a \neq 1, \quad b\neq 0$$ Note that $$ x_{n+1} - \frac{b}{1-a} = a x_n + b - \frac{b}{1-a} = a x_n - \frac{ab}{1-a} = a\big(x_{n} - \frac{b}{1-a}\big)$$ which means that $y_n = x_n - \frac{b}{1-a} $ is a geometric sequence: $$ y_{n+1} = a y_n$$ It can be solved: $$ y_n = a^n y_0$$ and you get $$ x_n = y_n + \frac{b}{1-a} = a^n(x_0 - \frac{b}{1-a}) + \frac{b}{1-a}$$ $$ x_n = a^n x_0 + b\frac{1-a^n}{1-a}$$
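A short exact-arithmetic check (an illustration, not part of the original answer) confirms the closed form against direct iteration, using the $a=4$, $b=6$, $x_0=1$ example from the question:

```python
from fractions import Fraction

def closed_form(n, a, b, x0):
    a, b, x0 = Fraction(a), Fraction(b), Fraction(x0)
    return a**n * x0 + b * (1 - a**n) / (1 - a)

def iterate(n, a, b, x0):
    x = Fraction(x0)
    for _ in range(n):
        x = a * x + b
    return x

# The example from the question: 1, 10, 46, 190, ...
assert [iterate(n, 4, 6, 1) for n in range(4)] == [1, 10, 46, 190]
assert all(closed_form(n, 4, 6, 1) == iterate(n, 4, 6, 1) for n in range(12))
```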
{ "language": "en", "url": "https://math.stackexchange.com/questions/3261910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Solve $\arctan(x)-\ln(x)=0$ $f:(0, \infty)\rightarrow \mathbb{R}; f(x)=\arctan(x)-\ln(x).$ An interval $I$ in which the equation $f(x)=0$ has a real solution is: a. $(0,1)$; b.$(1,e)$; c.$(e,e^{2})$; d.$(e^{2}, \infty)$; I tried to solve $f(x)=0$ with Rolle's Theorem but the derivative has no real zeros. Also, I tried the graphical method but I get stuck finding the intersection point of the functions.
For $x>0$, the function is in fact strictly decreasing: $f'(x)=\frac{1}{1+x^2}-\frac{1}{x}=\frac{x-1-x^2}{x(1+x^2)}<0$ for all $x>0$, since $x^2-x+1$ has no real roots. Then $$f(0^+)>0,$$ $$f(1)>0,$$ $$f(e)>0,$$ $$f(e^2)<0,$$ $$f(\infty)<0$$ gives you the answer: the sign change, and hence the root, lies in $(e,e^2)$.
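A numeric check (an added illustration, not part of the original answer): since $f$ is strictly decreasing on $(0,\infty)$, bisection on $(e,e^2)$ finds the unique root:

```python
from math import atan, log, e

def f(x):
    return atan(x) - log(x)

# The sign pattern used in the answer:
assert f(1) > 0 and f(e) > 0 and f(e**2) < 0

# f is strictly decreasing, so bisection locates the unique root.
lo, hi = e, e**2
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)

assert e < lo < e**2        # the root lies in (e, e^2): option (c)
assert abs(f(lo)) < 1e-9
```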
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Convergence of weak solution for second order parabolic equation (Evans) I am trying to understand how weak convergence works in the theorem of existence of weak solutions from Evans (Theorem 3, chapter 7.1), in particular how to pass from (30) to (31). Here is the theorem with its proof. Does (31) follow from (30)? Is it necessary, passing from (30), to have (31), or can I get (31) only with weak convergence of the sequence?
(31) does follow from (30). If the two sides of (31) are different on a positive measure set of times, then possibly restricting to a smaller set you can assume without loss of generality that $$ \langle \mathbf{u}',v\rangle + B[\mathbf{u},v;t] > \langle\mathbf{f},v\rangle$$ on a positive measure set of times. Then you should be able to cook up a test function $v$ (which depends on both space, and more crucially, time) to force the same inequality in (30), leading to contradiction. Something like the Lebesgue differentiation theorem should get you started. (30), or some analogue, is required to conclude a pointwise a.e. equality like (31). It is well known that weak convergence implies nothing whatsoever about pointwise convergence: there are weakly convergent sequences that do not converge pointwise anywhere, e.g. the typewriter sequence. This is typical in PDE: if you want to generate a solution with nice properties by using a sequence that converges weakly to it, it is typical that at some point the weak convergence needs to be upgraded to some stronger notion of convergence by using the properties of the PDE.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f$ is invertible is $f\left(f^{-1}(x)\right)=x$ and $f^{-1}(f(x))=x$ always True? If $f$ is invertible, is $f\left(f^{-1}(x)\right)=x$ and $f^{-1}(f(x))=x$ always True? I have doubt in the second one since $$\sin^{-1}(\sin x)=x$$ is not always true. Any comments on this?
Yes, it is always true. It turns out that $\sin$ is not invertible. The function which is denoted by $\sin^{-1}$ (or, more generally, by $\arcsin$) is the inverse of the restriction of $\sin$ to $\left[-\frac\pi2,\frac\pi2\right]$. And so$$\left(\forall x\in\left[-\frac\pi2,\frac\pi2\right]\right):\arcsin\bigl(\sin(x)\bigr)=x$$and$$\bigl(\forall x\in[-1,1]\bigr):\sin\bigl(\arcsin(x)\bigr)=x.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why should the equality of mixed partials be "intuitively obvious"? I am reading Ted Shifrin's excellent book Multivariable Mathematics. It claims that the equality of mixed partials is "an intuitively obvious result, but the proof is quite subtle". However, I guess I must be thinking in the wrong way, because I do not see the intuition behind this result. This is how I think about it: Let $f:\mathbb{R}^2 \to \mathbb{R}$. I think of $f_x$ as a "field of slopes" in the $x$-direction. If we analyze the movement in the $y$ direction in this field of slopes, we get $f_{xy}$. Now $f_y$ is a "field of slopes" in the $y$-direction. If we analyze movement in the $x$ direction here, we get $f_{yx}$. It's unclear to me why movement in the $x$-direction in the "field of $y$-slopes" should be the same as movement in the $y$-direction in the "field of $x$-slopes".
If you write the difference quotient for a small change $\Delta x$ in $x$, and then the difference quotient of that for a change $\Delta y$ in $y$, the result is the symmetric expression $$\frac{ f(x + \Delta x, y + \Delta y) -f(x + \Delta x, y ) -f( x, y + \Delta y) +f(x,y) } {\Delta x \Delta y} . $$
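As an illustrative numeric check (not part of the original answer), the symmetric expression above approximates both mixed partials; here with the hypothetical test function $f(x,y)=x^2y^3$, for which $f_{xy}=f_{yx}=6xy^2$:

```python
def mixed_quotient(f, x, y, dx=1e-5, dy=1e-5):
    # the symmetric second difference from the answer
    return (f(x + dx, y + dy) - f(x + dx, y)
            - f(x, y + dy) + f(x, y)) / (dx * dy)

def f(x, y):
    return x**2 * y**3       # f_xy = f_yx = 6*x*y**2

# at (1, 2) both mixed partials equal 6*1*2**2 = 24
assert abs(mixed_quotient(f, 1.0, 2.0) - 24.0) < 1e-2
```

Note the expression is literally symmetric in the order of differencing, which is the intuition being appealed to.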
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
A question on "unlabeled Cayley graphs" Suppose $G$ is a finite group and $S \subset G$. Let's define $UCG(G, S)$ (unlabeled Cayley graph) as the finite unordered simple graph $\Gamma(V, E)$, where $V = G$ and $E = \{(x, y) \in G \times G| x \neq y \text{ and } (y^{-1}x \in S \text{ or } x^{-1}y \in S)\}$. Unlabeled Cayley graphs satisfy the following properties: $UCG(G, S)$ has $[G: \langle S \rangle]$ components and each of them is isomorphic to $UCG(\langle S \rangle, S)$. (Trivially follows from the definition.) A finite simple undirected graph $\Gamma$ is isomorphic to $UCG(G, S)$ for some finite group $G$ and its subset $S$ iff there exist such a group $H$ and its subset $X$ that each component of $\Gamma$ is isomorphic to $UCG(H, X)$. (=> Trivially follows from the first proposition. <= Suppose $\Gamma$ has $n$ components. Then we can take $G = H \times C_n$ and $S = X \times \{e\}$.) $UCG(G, S)$ is vertex transitive. (Consider the action of $G$ on itself by left multiplication.) $G \leq Aut(UCG(G, S))$. (Consider the action of $G$ on itself by left multiplication.) My question, however, is: Does there, for any finite simple undirected vertex transitive graph $\Gamma$, exist such a finite group $G$ and its subset $S$ that $\Gamma \cong UCG(G, S)$? Because of the second proposition we can without loss of generality also assume that $\Gamma$ is connected. However, that did not help me much...
According to Lauri, Josef; Scapellato, Raffaele (2003), Topics in graph automorphisms and reconstruction, the Petersen graph is not a Cayley graph of any group $G$ and generating set $S$. There are only two groups of order $10$, $\mathbb Z/10\mathbb Z$ and $D_5$. For each, you can look at all generating sets for which $|S\cup S^{-1}|=3$ (necessary because the Petersen graph is $3$-regular), and note that all of them would result in a graph with diameter more than $2$, whereas the Petersen graph has diameter $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\arctan (-\sqrt3) = \frac{2\pi}{3}$ using basic methods Background A recent exam I'm looking at, has the following trigonometric equation: $$\sin(\pi x) + \sqrt3 \cos(\pi x) = 0, \quad x \in [0, 2]$$ After a simple re-write, we get $$\tan(\pi x) = -\sqrt 3$$ Note, on this exam, the student doesn't have any tools or sheets available. They may or may not remember the inverse tangent here, and let's assume they do not. Question What are some solid ways that a student at this level, knowing basic trigonometry, could reason their way to the right answer here? My thoughts In a $30-60-90$ triangle, we soon discover that $\tan(60^\circ) = \sqrt3$, but this requires memorizing the ratios of the sides in a $30-60-90$. From here, we can use the unit circle to look at where the tangent function is positive and negative, and we would find the answer. Not entirely unreasonable, but if we assume they remember this, they may remember the inverse tangent just as well. Are there any better, more reasonable ways a student could be expected to find that $$\displaystyle\arctan(-\sqrt3) = -\frac{\pi}{3}$$?
I agree with one of the comments to your question that it’s worth memorizing some of the basic triangles and the associated sines and cosines. However, a 30-60-90 triangle can be quickly reverse-engineered from $\tan\theta = -\sqrt3$: we have $$\frac yx = -\sqrt3 \\ x^2+y^2=1$$ from which $4x^2=1$, so $x=\pm\frac12$ and $y=\mp\frac12\sqrt3$. The hypotenuse is, of course, $1$. I would hope that a student at that level could recognize either of the resulting triangles as half an equilateral triangle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Linear program with $\pm 1$ constraints I am trying to formulate a constraint as follows ($X, Y, Z$ are either $-1$ or $1$): If $Z$ and $Y$ both equal $-1$, then $X$ must be $1$. But, if either $Z$ or $Y$ are not $-1$, then $X$ can be $-1$ or $1$. I came up with $Z\cdot Y \le X$, but that limits $X$ to $1$ if both $Z$ and $Y$ are $1$ ($X$ can be $0$ if both are $1$). Any advice on a more accurate constraint? Thank you.
Note: if you're trying to keep a linear program, do not add constraints involving the product $Z\cdot Y$ (that would make the program nonlinear). My solution: $-3(Y+Z)\leq 5+X$ I am not sure if a better solution exists, however!
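Since there are only eight $\pm1$ assignments, the proposed constraint can be verified exhaustively (an added check, not part of the original answer):

```python
from itertools import product

def constraint(x, y, z):
    return -3 * (y + z) <= 5 + x

def desired(x, y, z):
    # X is forced to 1 exactly when both Y and Z are -1
    return not (y == -1 and z == -1 and x == -1)

assert all(constraint(x, y, z) == desired(x, y, z)
           for x, y, z in product([-1, 1], repeat=3))
```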
{ "language": "en", "url": "https://math.stackexchange.com/questions/3262918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Probability Dilemma The teacher gave us a question: We flip a coin. If it was Heads, we roll a die, and if it was Tails, we flip three other coins. What's the probability of having exactly one coin as Heads? I first calculated $n(S)= 1 \cdot 6 + 1 \cdot 2 \cdot 2 \cdot 2$, then $n(a) = 1 \cdot 6 + 1 \cdot 1 \cdot 1 \cdot 3$. $P(a)=n(a)/n(S) = 9/14$. For calculating $n(S)$ I first considered the case of the (first) coin being Heads and multiplied by $6$ (for the die), then calculated the case of the (first) coin being Tails and a $2 \cdot 2 \cdot 2$ for the coins. You'll get a sense of what I did for calculating $n(a)$. The teacher told me that my answer was incorrect, then wrote the answer as $6 \cdot (1/12) + 3 \cdot (1/16)=11/16$ and told me to find the problem with my answer for the next session. I don't think there's anything wrong with my answer; I think my teacher is wrong.
If the first flip is heads, then the die roll produces no coins showing heads, hence we will have exactly one head. (You could just leave the die alone.) If the first flip is tails, then there are exactly three out of eight outcomes from the three extra coins that result in exactly one head. Hence $$p=\frac12\cdot 1+\frac12\cdot\frac38 $$
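Exact arithmetic makes the point concrete (an added sketch, not part of the original answer): the composite outcomes are not equally likely, which is why counting them as if they were, as in the question, gives the wrong $9/14$:

```python
from fractions import Fraction as F
from itertools import product

p = F(0)

# First coin heads (prob 1/2): the die adds no coins, so exactly one coin shows heads.
p += F(1, 2) * 1

# First coin tails (prob 1/2): need exactly one head among three fair coins.
favorable = sum(1 for flips in product("HT", repeat=3) if flips.count("H") == 1)
assert favorable == 3        # HTT, THT, TTH
p += F(1, 2) * F(favorable, 8)

assert p == F(11, 16)        # the teacher's answer
```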
{ "language": "en", "url": "https://math.stackexchange.com/questions/3263051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Cartesian product of $S^1$ and $S^2$ Can anyone help me imagine $S^1 \times S^2$? I understand that $T^2=S^1 \times S^1$, but I don't know what to do with the spheres. I am not even sure if it's in 3D. Thank you!
This is a good example of a 3-dimensional manifold that cannot be embedded into the Euclidean space $\mathbb R^3$. The best way you can think of it is as a circle such that it has a sphere associated to each and every point of it. I bet you already know how to parameterise both the circle $S^1$ (call the parameter $\rho$, for instance) and the sphere $S^2$ (say the parameters are $\varphi$ and $\theta$). Then, you can just put your parameterisations together and get $$\Phi:(\rho, \varphi,\theta)\mapsto \Phi(\rho, \varphi,\theta)\in S^1 \times S^2. $$ If you are concerned with regards to its embedding into an Euclidean space $\mathbb R^n$, the best I can offer you is $n=5$. That would come with the $\Phi$ from above, which embeds $S^1$ into $\mathbb R^2$ and $S^2$ into $\mathbb R^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3263299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the number of routes from leftmost vertex to rightmost vertex Suppose that one can move from one vertex to another if, and only if, the two vertices are connected by a unique common edge. The number of routes that one can take from the leftmost vertex L through 6 edges and 5 intermediate vertices to the rightmost vertex R is ....... This is from my scholarship practice exam (pre-university level). The only way I did it correctly was by counting, but I feel it was a bit messy; it can be simplified by writing with letters such as u for up, d for down, and s for go straight. However, there could be other solutions, such as ones using combinations or permutations. The answer key provided is 12. Please advise what you think and how to solve this problem.
Try labeling the vertices and writing down the resulting adjacency matrix. Then raise it to the power of 6, and read the entry in the (i, j) position, where i and j are the labels of the left- and right-most vertices, respectively. Since the adjacency matrix is 14 x 14, you should probably use a computer. I recommend using the Wolfram Alpha website. Although it wouldn't be too hard by hand, since $A^6 = (A^2)^2 A^2$.
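The exam's graph is not reproduced here, so as a small stand-in (an illustrative addition, not part of the original answer), here is the technique on a hypothetical path graph $0-1-2-3$: entry $(i,j)$ of $A^p$ counts the walks of length $p$ from $i$ to $j$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, p):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity matrix
    for _ in range(p):
        R = matmul(R, A)
    return R

A = [[0, 1, 0, 0],     # adjacency matrix of the path 0-1-2-3
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

A3 = matpow(A, 3)
assert A3[0][3] == 1   # the only length-3 walk: 0-1-2-3
assert A3[0][1] == 2   # 0-1-0-1 and 0-1-2-1
```

For the exam problem one would build the 14 x 14 adjacency matrix the same way and read off the (L, R) entry of the sixth power.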
{ "language": "en", "url": "https://math.stackexchange.com/questions/3263441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Range of a function $|\sin x|+|\cos x|$ What is the range of the function $Y=[|\sin x|+|\cos x|]$, where $[\,]$ denotes the greatest integer function? And what is the range of the function $Y=|\sin x|+|\cos x|$? For the second one, I have tried squaring: I got $Y^{2}=1+|\sin 2x|$, so the range will be between $1$ and $\sqrt{2}$. Please help me with this...!!!
The most intuitive way to proceed might be to sketch the shape of the level sets $$ |x|+|y| = 0 \\ |x|+|y| = 1 \\ |x|+|y| = 1.5\\ |x|+|y| = 2 $$ and so forth. Which of them have points in common with the unit circle?
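A quick numeric scan (an added check, not part of the original answer) agrees with the level-set picture: the second function ranges over $[1,\sqrt2]$, so the greatest-integer version is constantly $1$:

```python
from math import sin, cos, sqrt, pi, floor

def f(x):
    return abs(sin(x)) + abs(cos(x))

vals = [f(pi * i / 10000) for i in range(20001)]   # sample a full period and more

assert min(vals) >= 1 - 1e-12 and abs(min(vals) - 1) < 1e-6
assert max(vals) <= sqrt(2) + 1e-12 and abs(max(vals) - sqrt(2)) < 1e-6
assert {floor(v) for v in vals} == {1}             # range of [|sin x|+|cos x|] is {1}
```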
{ "language": "en", "url": "https://math.stackexchange.com/questions/3263568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How to compute $\sum_{k=1}^{\infty}{\frac{1}{k^2+2k}}$? To whom this may concern, I am struggling with partial sum formulas. I don't really get why you would need to perform a partial fraction decomposition or how you know that you have to. I started by trying to get a grasp of the series: $$\sum_{k=1}^{n}{\frac{1}{k^2+2k}}=\frac{1}{3}+\frac{1}{8}+\frac{1}{15}+\frac{1}{24}+\cdots + \frac{1}{n(n+2)}$$ But the partial sum formula is, according to WolframAlpha, $\sum_{k=1}^{n}{\frac{1}{k^2+2k}}=\frac{3n^2+5n}{4(n+1)(n+2)}$ HOW?????? I beg you to be as detailed as possible; I really want to understand WHY.
Note that $\frac 1 {k^{2}+2k} =\frac 1 2(\frac 1 k -\frac 1 {k+2})$. In $(1+\frac 1 2+...+\frac 1 n) -(\frac 1 3+...+\frac 1 {n+2})$ all terms except the first two of the first sum and the last two of the second sum cancel. Can you now compute the partial sum? The answer given in WolframAlpha is correct.
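An exact-arithmetic check (an added illustration, not part of the original answer) of the telescoping against the WolframAlpha formula:

```python
from fractions import Fraction as F

def direct(n):
    return sum(F(1, k * k + 2 * k) for k in range(1, n + 1))

def formula(n):
    # the closed form reported by WolframAlpha
    return F(3 * n * n + 5 * n, 4 * (n + 1) * (n + 2))

assert direct(1) == F(1, 3)
assert all(direct(n) == formula(n) for n in range(1, 60))
```

This also matches the telescoped form $\frac12\left(1+\frac12-\frac1{n+1}-\frac1{n+2}\right)$.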
{ "language": "en", "url": "https://math.stackexchange.com/questions/3263815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof about monotonicity of functions Let $f:[a,b]\rightarrow\mathbf{R}$ Suppose $f'(x)>0$ for all x $\in (a,b)$ $\ \ \ \ \ \ \ $and $f$ is differentiable on $(a,b)$ $\ \ \ \ \ \ \ $and $f$ is continuous on $[a,b]$ Show $f$ is strictly increasing on $[a,b]$. What I tried so far: Let $x_1,x_2 \in (a,b)$, Assume $x_1<x_2$ By the MVT there exists $c$ in $(a,b)$ s.t. $f'(c) = \dfrac{f(x_2) - f(x_1)}{x_2 - x_1}$ With $f'(c)>0$ this implies $f(x_1)<f(x_2)$ That $f$ is strictly increasing on $(a,b)$ Also WTS for all x in (a,b), $f(a)<x<f(b)$ That is $f(a)<x$ and $x<f(b)$ Assume its negation is true Have $f(a)\geq x$ or $x \geq f(b)$ Now need to find some contradiction ... Any help would be appreciated. Please tell me if there is an easier proof.
Your proof is finished with the line that begins "With $\;f'(c)>0\;$ ..."! Nevertheless, I would write it down as follows: Take any two different points $\;x_1,\,x_2\in [a,b]\,,\,\,x_1<x_2\;$ (of course, it can be $\;x_1=a\;$ or $\;x_2=b\;$). Since the conditions of the MVT are fulfilled in $\;[x_1,x_2]\;$, there exists $$\;c\in (x_1,x_2)\;\; s.t. \;\;\frac{f(x_2)-f(x_1)}{x_2-x_1}=f'(c)\stackrel{\text{given!}}>0\;$$ Since $\;x_2-x_1>0\;$, we get that it must be $\;f(x_2)-f(x_1)>0\implies f\;$ is strictly increasing.$\;$ Q.E.D
{ "language": "en", "url": "https://math.stackexchange.com/questions/3263956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate $\lim\limits_{x\rightarrow 0^+} \int\limits_0^1 \ln(1+\sin(tx))dt$ Calculate $$\lim_{x\rightarrow 0^+}\int_0^1\ln(1+\sin(tx))dt$$ My try: $$\lim_{x\rightarrow 0^+} \int_0^1 \ln (1+ \sin (tx)) dt=\lim_{x\rightarrow 0^+} ([t \ln (1+\sin (tx))]^1_0 - \int_0^1 \frac{t \cos (tx) x}{1+ \sin (tx)} dt)$$ Then I want to use: $$u=1+\sin (tx), \quad du=x\cos (tx)\,dt$$ But then I have: $$\lim_{x\rightarrow 0^+}([t \ln (1+\sin (tx))]^1_0 - \int_1^{1+\sin x} \frac{\arcsin(u-1)}{ux} du)$$ So I think my idea about $u$ is not helpful and I need another idea. Can you help me?
It is not necessary to compute the integral explicitly. The following estimate is sufficient to determine the limit: If $0 \le x \le \pi$ then $\sin(tx) \ge 0$ for all $t \in [0, 1]$, so that $$ 0 \le \ln (1+ \sin (tx)) \le \sin(tx) \le tx $$ and therefore $$ 0 \le \int_0^1 \ln (1+ \sin (tx)) dt \le \frac x2 \, . $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
The min of $x + k/x$ $f(x) = x + \dfrac{k}{x} $ ($k > 0$ and $x > 0$). Given that $k$ is a constant, how do I find the $x$ that makes $f(x)$ minimal?
To avoid differentiating, notice that $$x+\frac{k}{x}=\bigg(\sqrt{x}-\sqrt{\frac{k}{x}}\bigg)^2+2\sqrt{k}$$ and that $\sqrt{x}=\sqrt{k/x}$ when $x=\sqrt{k}$. (Note that we can use $\sqrt{x}$, since it is given that $x\gt 0$)
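A quick numeric confirmation (an added check, not part of the original answer) for the hypothetical choice $k=9$, where the identity predicts the minimum $2\sqrt k=6$ at $x=\sqrt k=3$:

```python
from math import sqrt

k = 9.0

def f(x):
    return x + k / x

x_star = sqrt(k)                 # minimiser predicted by the identity
assert f(x_star) == 2 * sqrt(k)  # minimum value 6.0

grid = [0.1 * i for i in range(1, 200)]
assert min(f(x) for x in grid) >= 2 * sqrt(k) - 1e-9
```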
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Proving binomial sum equals $0$ My hypothesis is that, when $n \equiv 0 \mod 6$, then $$\sum_{k=0}^{n/3-1} \Bigg( \binom{n-2}{3k+1} 2^{3k+1}-\binom{n-2}{3k-1}2^{3k-1} \Bigg) = 0, \quad \binom{n-2}{3k-1} = 0 \text{ when } k=0$$ But I get stuck at finding a proof. I have tried induction, but that does not seem to work because the binomials change when increasing $n$. I have tried writing $$\binom{n-2}{3k-1} = \binom{n-2}{3k+1} \frac{3k(3k+1)}{(n-3k-2)(n-3k-1)}$$ but I got stuck again. I tried writing out the terms for small $n$, making 2 summations instead of $1$ but I could not figure out the answer. Perhaps there is a counterexample but with such big numbers it is hard to check.
Hint: Let $$f_n(x)=\sum_{j=0}^n\binom{n}{j}x^j=(1+x)^n$$ and consider $f_n(2e^{2\pi i/3})-f_n(2e^{-2\pi i/3})$.
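A direct computation (an added check, not part of the hint) confirms the conjectured identity for the first few admissible $n$; the convention $\binom{n-2}{-1}=0$ from the question is handled explicitly:

```python
from math import comb

def s(n):
    total = 0
    for k in range(n // 3):
        total += comb(n - 2, 3 * k + 1) * 2 ** (3 * k + 1)
        if k > 0:  # the k = 0 term of the second sum is C(n-2, -1) = 0
            total -= comb(n - 2, 3 * k - 1) * 2 ** (3 * k - 1)
    return total

assert all(s(n) == 0 for n in (6, 12, 18, 24))
```

(Evaluating the hint at $1+2e^{\pm 2\pi i/3}=\pm i\sqrt3$ explains why: for even $n$ the two values $(\pm i\sqrt3)^{n-2}$ coincide, so the difference vanishes.)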
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Mathematics in Other Languages The intention of this community wiki is to aid in learning another language in the restricted topic of mathematics. It does not matter whether one wants to learn another language for mathematics or use mathematics as a stepping stone into another language. This post focuses only on the written aspect of the language. For each language, there is a small description, which includes its difficulty and pay-off, and a list of works to help learn it, which includes one or two grammar books and several works written in the language, which are accompanied by a description that includes their difficulty of reading. If it is a historical paper or it is difficult to find in print, then a link to it will be provided if one is known. Since this is more about learning the language, works that are easier to read are preferred, although very famous works might also be mentioned, and, in the case of ancient languages, an exhaustive list of all mathematical works in this language is ideal. Works that include a heavy amount of mathematics, but whose main topic is not mathematics but a related field, such as physics or astronomy, are sometimes acceptable, especially in the case when they are written in dead languages. Some General Tips: * *Often, dictionaries and translators do not translate mathematical vocabulary well. A good way to go about this is to use Wikipedia by switching the language of the page.
French This is one of the easier languages to learn as a native English speaker. Many of its words are recognizable by most English speakers, and its grammar is similar to the grammar of English. It also has one of the highest payoffs. Currently, more mathematical works are being published in French than any other language except for English. * Essential French Grammar by Resnick (grammar book)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 6, "answer_id": 3 }
How to calculate the number of elements in all subsets of a set There are $2^N$ subsets of a set. For instance the set $\{ 1, 2 \}$ has the following subsets: $\{ \}$ $\{ 2 \}$ $\{ 1 \}$ $\{ 1, 2 \}$ I'm trying to calculate the total number of elements in all of the subsets. In the above example, it would be $4$: $\{ 2 \}$ has one element, $\{ 1 \}$ has one element and $\{ 1, 2 \}$ has two elements, giving us $4$. Is there a generic equation that could calculate this for me?
Let $A=\{a_1,a_2,\dots,a_N\}$. For each $i\in\{1,2,\dots,N\}$ and each subset $B$ of $A$, we either have $a_i\in B$ or $a_i\notin B$. There are two possibilities for each $i$, so there are $2^N$ possibilities in total: $A$ has $2^N$ subsets. For each $a_i$, there are $2^N/2=2^{N-1}$ subsets of $A$ containing it. As there are $N$ values of $i$, the total number of elements of all the subsets of $A$ is $N\cdot2^{N-1}$.
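The formula can be confirmed by brute force for small $N$, as a minimal sketch using `itertools`:

```python
from itertools import chain, combinations

def total_elements(N):
    # sum of |B| over all subsets B of {1, ..., N}
    s = range(1, N + 1)
    subsets = chain.from_iterable(combinations(s, r) for r in range(N + 1))
    return sum(len(b) for b in subsets)

for N in range(1, 11):
    assert total_elements(N) == N * 2**(N - 1)
print(total_elements(2))  # 4, matching the {1, 2} example
```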
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Explain why a line can never intersect a plane in exactly two points. Why can a line never intersect a plane in exactly two points? I know this seems like a really simple question, but I'm having a hard time figuring out how to answer it. I also tried googling the question but I couldn't find an answer for exactly what I'm looking for.
A plane is a convex set, and a convex combination (a linear combination with nonnegative coefficients summing to $1$) of any two points in a convex set lies inside the set. So if a line meets the plane in two distinct points, the entire segment between them, hence infinitely many points, lies in the plane as well.
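A small numeric illustration of the convexity argument; the plane $x+2y-z=3$ and the two points are hypothetical choices:

```python
def on_plane(p, tol=1e-12):
    x, y, z = p
    return abs(x + 2*y - z - 3) < tol   # hypothetical plane x + 2y - z = 3

A = (1.0, 1.0, 0.0)   # 1 + 2 - 0 = 3, so A is on the plane
B = (3.0, 2.0, 4.0)   # 3 + 4 - 4 = 3, so B is on the plane
assert on_plane(A) and on_plane(B)

# every convex combination (1-t)A + tB with 0 <= t <= 1 is on the plane too,
# so a line meeting the plane twice meets it in infinitely many points
for i in range(101):
    t = i / 100
    P = tuple((1 - t) * a + t * b for a, b in zip(A, B))
    assert on_plane(P)
```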
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 10, "answer_id": 3 }
Evaluate a power series by relating it to a geometric series (where coefficients depend on the index of summation). I am trying to evaluate the power series $f\left(z\right)=\sum_{n\geq0}n^{2}z^{n}$ by relating it to a geometric series. By the ratio test, the series converges (absolutely) for $\left|z\right|<1$. I assume it's not as simple as writing $f\left(z\right)=\sum_{n\geq0}\left(n^{2/n}z\right)^{n}$? Should I proceed as in the case of the standard geometric series, i.e., by writing down an expression for the $m$th partial sum and then taking the limit as $m\to\infty$?
Start with the fact that $$\frac{1}{1-x} = \sum_{n\geq 0} x^n$$ (This is the simplest formula for an infinite geometric series) Differentiate to obtain $$\frac{1}{(1-x)^2} = \sum_{n\geq 0} nx^{n-1}$$ and multiply by $x$: $$\frac{x}{(1-x)^2} = \sum_{n\geq 0} nx^{n}$$ Repeat the top two steps and you get a closed formula $$\frac{1+x}{(1-x)^3} = \sum_{n\geq 0} n^2 x^{n-1}$$ $$\frac{x(1+x)}{(1-x)^3} = \sum_{n\geq 0} n^2 x^{n}$$ Therefore, $$f(z) = \frac{z(1+z)}{(1-z)^3}$$
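The closed form can be sanity-checked against partial sums, as a minimal sketch; the cutoff of 2000 terms is an arbitrary choice that is ample for $|z|\le 0.9$:

```python
def f_closed(z):
    return z * (1 + z) / (1 - z)**3

def f_series(z, terms=2000):
    return sum(n * n * z**n for n in range(terms))

for z in (0.5, -0.3, 0.9, -0.75):
    assert abs(f_closed(z) - f_series(z)) < 1e-6
print(f_closed(0.5))  # 6.0: the classical sum of n^2 / 2^n
```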
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A question regarding the group $G = GL_2(\mathbb Z/ p\mathbb Z)$. For any prime $p$ , consider the group $G =\mathrm{ GL}_2(\mathbb Z/ p\mathbb Z)$. Then which of the following are true? 1) $G$ has an element of order $p$. 2) $G$ has exactly one element of order $p$. 3) $G$ has no p-Sylow subgroups. 4) Every element of order $p$ is conjugate to a matrix $$A = \left[\begin{matrix} 1 & a \\ 0 & 1 \end{matrix}\right]$$ where $a\in (\mathbb Z/ p\mathbb Z)^*$. My try: 2 & 3 are false obviously. And 1 is true. But I am confused about 4. 4 is true for $p=2$, but will the statement be true for every prime?
The following is a description of the group $G =\mathrm{ GL}_2(\mathbb Z/ p\mathbb Z)$ that answers all your questions: 1) The order $n$ of $G$ is $n=p(p+1)(p-1)^2$. Indeed, for the first row we have the choice between $p^2-1$ possibilities (any nonzero vector). For the second row we have the choice of $p^2$ vectors minus the $p$ vectors that are scalar multiples of the first vector. 2) The matrices of the form $\left[\begin{matrix} 1 & a \\ 0 & 1 \end{matrix}\right]$, $a\in\mathbb Z/p\mathbb Z$, form a Sylow $p$-subgroup $P$ of order $p$, since the maximal exponent of $p$ occurring in $n$ is $1$. 3) The normalizer $N$ of $P$ contains, apart from $P$ itself, the group $D$ of diagonal matrices, which has order $(p-1)^2$. Since $P \cap D = \{I_2\}$, the order of $N$ is at least $p(p-1)^2$. 4) By the orbit/stabilizer theorem the orbit of $P$ under the inner automorphisms of $G$ has at most $p+1$ elements (all the Sylow $p$-subgroups). Since every element of order $p$ lies in a Sylow $p$-subgroup, it must be conjugate to a matrix of the form defined in 2). PS: As an exercise try to use the same kind of argument to prove the same facts for $G =\mathrm{ GL}_k(\mathbb Z/ p\mathbb Z)$ for an arbitrary positive integer $k$.
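Statements 1), 2) and 4) can be brute-forced for a small prime, as a sketch with $p=3$ (the helper functions are ad hoc):

```python
from itertools import product

p = 3  # a small prime for the brute-force check; any small p works

def mmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def det(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p

def minv(A):
    (a, b), (c, d) = A
    di = pow(det(A), p - 2, p)      # inverse of det via Fermat's little theorem
    return tuple(tuple(x * di % p for x in row) for row in ((d, -b), (-c, a)))

I = ((1, 0), (0, 1))
G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if det(((a, b), (c, d))) != 0]
assert len(G) == p * (p + 1) * (p - 1)**2   # statement 1): |G| = p(p+1)(p-1)^2

def order(A):
    B, k = A, 1
    while B != I:
        B, k = mmul(B, A), k + 1
    return k

# each of the p+1 Sylow p-subgroups contributes its p-1 nonidentity elements
assert sum(1 for A in G if order(A) == p) == p * p - 1

# statement 4): every element of order p is conjugate to [[1,a],[0,1]], a != 0
unipotent = {((1, a), (0, 1)) for a in range(1, p)}
for A in G:
    if order(A) == p:
        assert any(mmul(mmul(g, A), minv(g)) in unipotent for g in G)
```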
{ "language": "en", "url": "https://math.stackexchange.com/questions/3264983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculate local maximum and minimum of $f(x)=(2+\sin x)(5-\sin x)$ I am working on my scholarship exam practice which assumes high school or pre-university math knowledge. Could you please have a look on my approach? The minimum of the function $f(x)=(2+\sin x)(5-\sin x)$ is ...... First, I began with some basic approach. We know that $-1\leq\sin x\leq1$, so I just test both values $-1$ and $1$ in the function. $f(-1)=(2+(-1))(5-(-1))=6$ $f(1)=(2+1)(5-1)=12$ Since $-1$ is the minimum of function $\sin x$, I conclude that $f(-1)=6$, which is correct when checked with the value of answer key provided. Please let me know if my approach is not always true or can apply on other similar problems. But in exam I may be uncertain if my answer is right so I tried calculus approach to check my answer. $f(x)=(2+\sin x)(5-\sin x)=10+3\sin x-\sin^2 x$ $f'(x)=3\cos x-2\sin x\cos x=0$ $2\sin x\cos x=3\cos x$ $2\sin x=3$ $\sin x=\frac{3}{2}=1.5>1$ Since 1.5 exceeds the range of $\sin x$, then I cannot use this approach. I am wondering why this is the case. Why can't we use this method to find the minimum?
Write $t=\sin x$, so $t\in[-1,1]$ and $$f(x)=(2+\sin x)(5-\sin x) = -t^2 + 3t + 10 =: g(t).$$ The parabola $g$ opens downward with vertex at $t=\frac32$, which lies outside $[-1,1]$, so $g$ is increasing on $[-1,1]$. Hence the minimum is $g(-1)=6$ and the maximum is $g(1)=12$, confirming your first approach. This also explains what went wrong with the calculus approach: $f'(x)=3\cos x-2\sin x\cos x=\cos x\,(3-2\sin x)$, and dividing by $\cos x$ discards the solutions of $\cos x=0$, i.e. $x=\pm\frac\pi2$ (mod $2\pi$). Those are exactly the critical points where $\sin x=\pm1$, giving the extrema; the other factor never vanishes, since it would require $\sin x=\frac32>1$.
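A numeric spot check of both extrema over one period (the grid size is an arbitrary choice):

```python
import math

f = lambda x: (2 + math.sin(x)) * (5 - math.sin(x))
xs = [i * 2 * math.pi / 100000 for i in range(100001)]
vals = [f(x) for x in xs]
assert abs(min(vals) - 6) < 1e-6    # attained near x = 3*pi/2, where sin x = -1
assert abs(max(vals) - 12) < 1e-6   # attained near x = pi/2, where sin x = 1
```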
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Convert system of second order PDEs to a system of first order PDEs I am attempting so convert $$H_t = D_H H_{xx} - vH_x - \alpha H Z$$ $$Z_t = D_Z Z_{xx} + \beta H Z $$ into a system of first order equations. $D_H$, $D_Z$, $\alpha$ and $v$ are constants and $H$ and $Z$ describe the population of humans and zombies which interact as with the equations above. Following the post from Harry49 in >>here<< I first tried to rewrite the equations as a linear system of balance laws $\mathbf{q_t + A ~ q_x = S ~ q}$ where $\mathbf{q}$ = $(H, H_x, H_t, Z, Z_x, Z_t)^T$. This gives me $$\begin{bmatrix} H_t \\ H_{xt} \\ H_{tt} \\ Z_t \\ Z_{xt} \\ Z_{tt} \end{bmatrix} + \mathbf{A} \begin{bmatrix} H_x \\ H_{xx} \\ H_{tx} \\ Z_x \\ Z_{xx} \\ Z_{tx} \end{bmatrix} = \mathbf{S} \begin{bmatrix} H \\ H_{x} \\ H_{t} \\ Z \\ Z_{x} \\ Z_{t} \end{bmatrix}$$ My next step would be to determine the coefficients of the matrices $\mathbf{A}$ and $\mathbf{S}$, which is where I get confused due to the coefficients that couple the equations. I am not even sure if the approach I chose is a legitimate one for my kind of problem and would appreciate any kind of help! Thank you very much!
The basic idea is to introduce new variables $X = H_x$, $Y = Z_x$, that gives directly: $$ H_t = D_H X_x - v H_x - \alpha H Z \\ X = H_x \\ Z_t = D_Z Y_x + \beta H Z \\ Y = Z_x $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $p$, $q$, $r$ and $s$ are four sides of a quadrilateral, then find the minimum value of $\frac{p^2+ q^2 + r^2}{s^2}$ with logic If $p$, $q$, $r$ and $s$ are four sides of a quadrilateral then find the minimum value of $\frac{p^2+ q^2 + r^2}{s^2}$ with logic. Please help me with this.
By the quadrilateral (triangle) inequality $s<p+q+r$ and by C-S we obtain: $$\frac{p^2+q^2+r^2}{s^2}>\frac{p^2+q^2+r^2}{(p+q+r)^2}=\frac{(1+1+1)(p^2+q^2+r^2)}{3(p+q+r)^2}\geq\frac{(p+q+r)^2}{3(p+q+r)^2}=\frac{1}{3}.$$ Equality does not occur, but it is easy to see that $\frac{1}{3}$ is the infimum: take $p=q=r$ and let $s$ tend to $p+q+r$.
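A numeric illustration that $\frac13$ is approached but never attained, using the degenerate family $p=q=r=1$, $s\to3$:

```python
def ratio(p, q, r, s):
    return (p*p + q*q + r*r) / (s*s)

for eps in (1e-1, 1e-3, 1e-6):
    s = 3 * (1 - eps)            # keeps the quadrilateral inequality s < p + q + r
    assert ratio(1, 1, 1, s) > 1/3
assert abs(ratio(1, 1, 1, 3 * (1 - 1e-6)) - 1/3) < 1e-5
```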
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does $\frac{z^2}{(z^2-1)^{\frac{1}{2}}}$ have a simple pole at $\infty$? If we define $w=\frac{1}{z}$ and Laurent expand the function about $w=0$ we have: $$\frac{1}{w(1-w^2)^{\frac{1}{2}}}=\frac{1}{w}+\frac{1}{2}w+\frac{3}{8}w^3+...$$ This implies that there is a simple pole about $w=0$, because the inverse powers of $w$ only extend to $-1$. However, is we write the expansion in terms of $z$: $$\frac{z^2}{(z^2-1)^{\frac{1}{2}}}=z+\frac{1}{2z}+\frac{3}{8z^3}+...$$ I'm not sure how to read this. How does this series, when written in terms of $z$, imply there being a pole at $z=\infty$? Furthermore, I am working through a problem that asks me to use this Laurent expansion to evaluate $\int_{C_{\infty}}\frac{z^2}{(z^2-1)^{\frac{1}{2}}}$, where $C_{\infty}$ is the circle at infinity. If my expansion is correct how would I go about solving this integral? Surely the integral is infinite?
I think your error here is where you expanded the function. Notice that $$g(z)=\frac{z^2}{(z^2-1)^{\frac{1}{2}}}$$ has no singularity at zero, but it does have singularities at $z=-1$ and $z=1$. You cannot use the power series about $z=0$ of $$f(z)=\frac{1}{({z^2-1})^{\frac{1}{2}}}$$ to evaluate anything as $z\rightarrow\infty$: a series for a function about a point $z_0$ converges only inside a circle of radius equal to the distance from $z_0$ to the nearest singularity, so the radius of convergence here is $R=1$, and any other choice of $z_0$ is similarly capped by its distance to $z=\pm1$. With all of that said, there is no need for such an expansion, because the behaviour of $g$ near infinity is transparent: $$\frac{g(z)}{z}=\frac{z}{(z^2-1)^{\frac{1}{2}}}\rightarrow 1\quad\text{as } z\rightarrow\infty,$$ so $g(z)$ behaves like $z$ there, which in the variable $w=z^{-1}$ is the behaviour of $\frac{1}{w}$ as $w\rightarrow 0$. Hence $g(z)$ has a pole of order $1$ at $z=\infty$.
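A numeric check that $g(z)/z\to1$, as a sketch; note `cmath.sqrt` uses the principal branch, so the sample points are kept in the right half-plane where that branch agrees with $(z^2-1)^{1/2}\approx z$:

```python
import cmath

def g(z):
    return z**2 / cmath.sqrt(z**2 - 1)

# g(z)/z -> 1 as z -> infinity; in w = 1/z this is the behaviour of 1/w near 0,
# i.e. a simple pole at infinity
for R in (1e2, 1e3, 1e4):
    for theta in (0.3, 1.1, -0.7, -1.2):   # arguments in the right half-plane
        z = R * cmath.exp(1j * theta)
        assert abs(g(z) / z - 1) < 1e-3
```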
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can you help with this integration problem? Integration of $\displaystyle \int \dfrac{x}{x^2-\sqrt{x}} dx.$ I attempted substituting $\sqrt{x}$ with $u$ and then solving it using partial fractions but I'm not getting the right answer. I know I could have used conjugates but I want to understand why the way I solved it at first was wrong.
You were on the right track; let me continue your work. Write $$I=\displaystyle \int \dfrac{x}{x^2-\sqrt{x}} dx.$$ First you make $u=\sqrt{x}$, so $x=u^2$ and $dx=2u\,du$, which gives $$I=\int\frac{u^2}{u^4-u}\,2u\,du=2\displaystyle \int \dfrac{u^2}{u^3-1} du.$$ Then, you make the substitution $s=u^3-1$, so $ds=3u^2du$ and $$I=\frac{2}{3}\displaystyle\int \dfrac{1}{s}ds = \frac{2}{3} \ln|s|+C. $$ From that, $$I=\frac{2}{3} \ln\left|u^3-1\right|+C=\frac{2}{3} \ln\left|x^{3/2}-1\right|+C.$$
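The antiderivative can be verified numerically: its central-difference derivative should match the integrand (the step $h$ and sample points are arbitrary choices):

```python
import math

def integrand(x):
    return x / (x*x - math.sqrt(x))

def F(x):
    # candidate antiderivative (2/3) ln|x^(3/2) - 1|
    return (2/3) * math.log(abs(x**1.5 - 1))

h = 1e-6
for x in (0.5, 1.5, 2.0, 3.7, 10.0):    # points on either side of x = 1
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-5
```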
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Quadratic inequality with a variable consisting of more than one unknown The initial inequality was $$2^{2ax+1}+2^a ≤ 2^{ax}+2^{ax+a+1},$$ after substituting $2^{ax}$ with $y$ (so that $y>0$), and transferring all the terms from the RHS to the left I got $$2\cdot y^2 − y\cdot({2^{a+1} +1)} + 2^a ≤ 0.$$ I am stuck there, I tried finding the roots of its quadratic equation and the vertex of the function but nothing worked. So my question is how to find which values of $y$ satisfy the inequality for a and x being real parameters.
Setting $y=2^{ax}$ and rearranging a bit yields $$2y^2-y\leq 2^{a+1}y-2^a,$$ and both sides factor nicely, giving $$y(2y-1)\leq2^a(2y-1).$$ Can you take it from here?
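A randomized check that the factored form is equivalent to the original inequality (samples too close to equality are skipped to avoid floating-point ties):

```python
import random

random.seed(0)
for _ in range(10000):
    a = random.uniform(-3, 3)
    x = random.uniform(-3, 3)
    lhs = 2**(2*a*x + 1) + 2**a
    rhs = 2**(a*x) + 2**(a*x + a + 1)
    y = 2**(a*x)
    factored = y * (2*y - 1) <= 2**a * (2*y - 1)
    if abs(lhs - rhs) > 1e-9 * (abs(lhs) + abs(rhs)):   # skip near-ties
        assert (lhs <= rhs) == factored
```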
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Improper Lebesgue integral $\int_{\mathbb R}\frac{\sin(x)}x~\mathrm dx$ While thinking over the idea of an improper Lebesgue integral, I came up with the following: $$\int_0^\infty\mu\{x~|~f(x)>t\}-\mu\{x~|~f(x)<-t\}~\mathrm dt$$ with $\mu$ being the Lebesgue measure. And I wanted to tackle this integral: $$\int_{\mathbb R}\frac{\sin(x)}x~\mathrm dx$$ Though I'm not even sure if this converges, and if so, to what? I can't make very good progress on this due to a lack of theory behind this sort of improper integral. This can be interpreted as finding $$\lim_{\epsilon\to0^+}\int_{\mathbb R}f_\epsilon(x)~\mathrm dx$$ where $$f_\epsilon(x)=\begin{cases}\frac{\sin(x)}x-\epsilon\operatorname{sgn}(\sin(x)),&\left|\frac{\sin(x)}x\right|\ge\epsilon\\0,&\left|\frac{\sin(x)}x\right|<\epsilon\end{cases}$$
Partial answer: We can attempt to compare the difference of the improper Lebesgue integral and the improper Riemann integral as follows: \begin{align}R(N)&=\int_{-2N\pi}^{2N\pi}\frac{\sin(x)}x~\mathrm dx-\int_{\mathbb R}f_{1/2N\pi}(x)~\mathrm dx\\&=2\int_0^{2N\pi}\min\left\{\frac1{2N\pi},\left|\frac{\sin(x)}x\right|\right\}\operatorname{sgn}(\sin(x))~\mathrm dx\\&=2\sum_{n=0}^{2N-1}\int_{n\pi}^{(n+1)\pi}\min\left\{\frac1{2N\pi},\left|\frac{\sin(x)}x\right|\right\}\operatorname{sgn}(\sin(x))~\mathrm dx\\&=2\sum_{n=0}^{N-1}\int_0^\pi\min\left\{\frac1{2N\pi},\frac{\sin(x)}{x+2n\pi}\right\}-\min\left\{\frac1{2N\pi},\frac{\sin(x)}{x+(2n+1)\pi}\right\}~\mathrm dx\end{align} It is notable that for the majority of the time, we have $$\min\left\{\frac1{2N\pi},\frac{\sin(x)}{x+2n\pi}\right\}-\min\left\{\frac1{2N\pi},\frac{\sin(x)}{x+(2n+1)\pi}\right\}=0$$ and when we don't, we have $$\min\left\{\frac1{2N\pi},\frac{\sin(x)}{x+2n\pi}\right\}-\min\left\{\frac1{2N\pi},\frac{\sin(x)}{x+(2n+1)\pi}\right\}=\mathcal O(\min\{1/N,1/n^2\})$$ which is not strong enough to deduce the limit. I suspect something can be done on each interval to show that the integral over it is $\mathcal O(1/N^2)$ or something like that, since the set of values where we have a large difference gets smaller and smaller. Intuitively the error from the integral is dictated by the left and right sides of $[0,\pi]$, where the function forms a triangular-ish shape. For small $n$, these triangles have $\mathcal O(1/N^2)$ area. For large $n$, we can use the above and get $\mathcal O(1/n^2)$ error.
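As a numeric companion, the symmetric Riemann integral $\int_{-2N\pi}^{2N\pi}\frac{\sin x}{x}\,\mathrm dx$ does converge to $\pi$, with error roughly $1/(N\pi)$ at the cutoff $2N\pi$; a sketch with a hand-rolled composite Simpson rule (the node count is an arbitrary choice):

```python
import math

def simpson(f, a, b, n=200001):     # composite Simpson; n odd = even interval count
    h = (b - a) / (n - 1)
    s = f(a) + f(b)
    for i in range(1, n - 1):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def sinc(x):
    return math.sin(x) / x if x else 1.0

N = 10
A = 2 * N * math.pi
val = simpson(sinc, -A, A)
# the symmetric Riemann integral tends to pi; at A = 2*N*pi the error is ~ 1/(N*pi)
assert abs(val - math.pi) < 2 / (N * math.pi)
```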
{ "language": "en", "url": "https://math.stackexchange.com/questions/3265954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of sequence using taylor's formula on trigonometric functions I had the following question in an exam. I wasn't able to solve the problem and I don't understand the solution. I am supposed to find the limit of the following sequence: $$ \lim_{x\to 1} \frac{\ln(x) - \sin(\pi x)}{\sqrt{x -1}} $$ where x > 1. Now the solution proceeded as follows $$ \ln(x) = x - 1 + O(|x-1|^2) $$ which didn't make sense. I know that the taylor expansion of the natural logarithm is $ \ln(x+1) = x + O(x^2) $. But why did they add the -1 and why replace x with (x-1)? They then set: $$ \sin(\pi x) = -\pi (x-1) + O(|x-1|^2) $$ which again isn't very clear to me. The taylor expansion of sin(x) is: $ \sin(x) = x + O(x^3) $. So how did they get to this expansion and how should I proceed to reach the same results? Thank you
Since $\ln(x+1)=x+O(x^2)$, $\ln(x)=\ln\bigl((x-1)+1\bigr)=x-1+O\bigl((x-1)^2\bigr)$. And since $\sin(x)=x+O(x^2)$,\begin{align}\sin(\pi x)&=-\sin(\pi x-\pi)\\&=-\sin\bigl(\pi(x-1)\bigr)\\&=-\pi(x-1)+O\bigl((x-1)^2\bigr).\end{align}So\begin{align}\frac{\ln(x)-\sin(\pi x)}{\sqrt{x-1}}&=\sqrt{x-1}\,\frac{\ln(x)-\sin(\pi x)}{x-1}\\&\to0\times(1+\pi)\\&=0.\end{align}
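A numeric check of the limit, as a sketch; the quotient should shrink like $(1+\pi)\sqrt{x-1}$:

```python
import math

def q(x):
    return (math.log(x) - math.sin(math.pi * x)) / math.sqrt(x - 1)

# near x = 1+ the quotient behaves like (1 + pi) * sqrt(x - 1), hence -> 0
for eps in (1e-2, 1e-4, 1e-6):
    assert abs(q(1 + eps)) < 2 * (1 + math.pi) * math.sqrt(eps)
assert abs(q(1 + 1e-6)) < 1e-2
```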
{ "language": "en", "url": "https://math.stackexchange.com/questions/3266133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hahn-Banach extension for positive functionals Let $H$ be a Banach space and $C\subset H$ be a convex cone (I'm thinking of $H=L^2(U)$ or $C(U)$ and $C=\{\text{positive functions on }U\}$ for some open $U\subset\mathbb R^d$). Moreover, let $H'\subset H$ a subspace and $\phi:H'\to\mathbb R$ be a functional which is * *continuous in the norm of $H$ *nonnegative on $H'\cap C$. Does there exist an extension $\tilde\phi:H\to\mathbb R$ which is both bounded on $H$ and nonnegative on $C$?
This fails in every infinite-dimensional Banach space. Let $x^*$ be a linear functional on $H$ which is not continuous. We choose $C=\{x\in H : x^*(x)\ge 0\}$. Let $x\in C\subset H$ be such that $x^*(x)=1$. For $H'$ we choose the one-dimensional subspace generated by $x$. We then choose $\phi$ as the restriction of $x^*$ to $H'$. It is clear that $\phi$ is continuous on $H'$ and nonnegative on $H'\cap C$. Suppose there is a bounded extension $\bar\phi:H\to\mathbb R$ of $\phi$ such that $\bar\phi$ is nonnegative on $C$. Since $x^*$ is not continuous, there exists a sequence $x_n$ such that $x_n\to 0$ but $x^*(x_n)=1$. It follows that $x^*(x_n-x)=0$ and thus $x_n-x\in C$. Therefore, $\bar\phi(x_n-x)\geq0$. By continuity of $\bar\phi$ it follows that $\bar\phi(-x)\geq0$. However, we also have $\bar\phi(-x)=-\bar\phi(x)=-\phi(x)=-1$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3266259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is this induction argument here valid I have this proof of 'existence of splitting fields' from Gallian's Contemporary Abstract Algebra (I have seen it elsewhere as well, and it goes in similar lines): Let $F$ be a field and let $f(x)$ be a non-constant element of $F[x]$. Then there exists a splitting field $E$ for $f(x)$ over $F$. Proof: We proceed by induction on $\deg f(x)$. If $\deg f(x) = 1$, then $f(x)$ is linear. Now suppose that the statement is true for all fields and all polynomials of degree less than that of $f(x)$. Then, there is an extension $E$ of $F$ in which $f(x)$ has a zero, say, $a_1$ . Then we may write $f(x) = (x - a_1 )g(x)$, where $g(x) \in E[x]$. Since $\deg g(x) < \deg f(x)$, by induction, there is a field $K$ that contains $E$ and all the zeros of $g(x)$, say, $a_2 , . . . , a_n$ . Clearly, then, a splitting field for $f(x)$ over $F$ is $F(a_ 1 , a_ 2 , . . . , a_ n )$. I can't digest how can we, in the induction step, assume that 'the statement is true for all fields '. I have seen this kind of argument for the first time. Please help me!
Elaborating on Lee Mosher's comment, consider the following assertion, $P(n)$, about a natural number $n$: $P(n)$: Let $F$ be a field and let $f(x)$ be a non-constant element of $F[x]$ of degree at most $n$. Then there exists a splitting field $E$ for $f(x)$ over $F$. The statement you want to prove can be written as $(\forall n)(P(n))$ and is thus a candidate for a proof by induction. The fact that $P(n)$ itself contains two implicit universal quantifiers, including "for all fields $F$", might seem too good to be true ... but mathematics is, quite simply, that good! especially (as we see here) for its ability to package whole infinite families of assertions together in a single statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3266530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the solution to this equation: $e^x+2x=0$ I can't find any program that actually solves this type of equation, and I can't find anything helpful about this type. What is the name of these equations and how do I solve this one? Thanks.
Such an equation does not have an easy formula or a "closed form" like the quadratic formula, for example. If you only care about the answer, you can try to plot it as in Plot using Desmos. It turns out that it has a real root at approximately $x=-0.352$. Alternatively you could try a "numerical method" such as Newton's iterative method. Such methods start at a guess and keep improving the approximation by applying a formula. See for example: Newton's Method.
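A minimal sketch of Newton's method for this equation; the starting guess $x_0=0$ and the iteration count are arbitrary choices:

```python
import math

f = lambda x: math.exp(x) + 2*x
fp = lambda x: math.exp(x) + 2    # derivative; always positive, so no division issues

x = 0.0
for _ in range(50):
    x -= f(x) / fp(x)             # Newton step x <- x - f(x)/f'(x)

assert abs(f(x)) < 1e-12
print(round(x, 3))  # -0.352, matching the plotted root
```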
{ "language": "en", "url": "https://math.stackexchange.com/questions/3266785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Calibrations vs. Riemannian holonomy I've begun to study the relationship between calibrations and holonomy, mainly through D.D. Joyce's Riemannian Holonomy Groups and Calibrated Geometry and partly through internet material. Pretty much everyone explains this relationship by the holonomy principle: if $H=\text{Hol}_p$ and $\varphi_0$ is an $H$-invariant $k$-form in $T_pM$, then there is a parallel $k$-form $\varphi$ in $M$ with $\nabla\varphi=0$. In particular, this means $d\varphi=0$. Rescaling $\varphi_0$ if necessary, we get that $\varphi$ is a calibration. So far, so good. After this, people start saying something about special holonomy and invariably mention Berger's classification. 1) What does special mean in this context? I thought this was an informal adjective used by Joyce, but apparently everyone uses it and I haven't found a definition for it. 2) I understand Berger's list is interesting, for they deal with irreducible manifolds. But why don't they mention symmetric manifolds, which are not on the list, like $\mathbb{R}^n$, $\mathbb{S}^n,\mathbb{R}H^n$, compact Lie groups etc. They seem pretty interesting (and numerous) to me, so why not consider them?
(1) A generic metric has (restricted) holonomy group $SO(n)$ (more precisely: the set of holonomy-$SO(n)$ metrics is comeagre in the space of all Riemannian metrics). Hence the adjective special is coined (as in the opposite of "generic") when the holonomy can be reduced to a smaller subgroup. It definitely predates Joyce (certainly Harvey and Lawson used it in their seminal paper introducing calibrations in the early 1980s). (2) Elie Cartan proved that for Riemannian symmetric spaces $G/H$, the restricted holonomy group is the identity component of the isotropy group $H$. So this is just a pure algebra problem as to which Lie group is a subgroup of another Lie group (or equivalently which Lie algebra is a subalgebra of another), hence not interesting (as in having little, if any, geometric content).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3266923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Linear maps take 0 to 0 Suppose $T$ is a linear map from $V$ to $W$ where $V$ and $W$ are subspaces of a finite-dimensional vector space over some generic field $\mathcal{F}$. Show $T(0)=0$. Proof: By additivity of a linear map, (1) $T(0)=T(0+0)=T(0)+T(0)$. Since $T(0)\in W$ and $W$ is a subspace, there exists an additive inverse for $T(0)$: denote as $k$. Add $k$ on both sides of (1), we obtain $T(0)=0$. Is this proof correct? Reference: Axler, Sheldon J. $\textit{Linear Algebra Done Right}$, New York: Springer, 2015.
Well, given two groups $(G,\cdot,e)$ and $(G',\circ,e')$ (think of the additive groups of vector spaces) and a homomorphism $\phi:G\rightarrow G'$, i.e., $\phi$ is a mapping with $\phi(g\cdot h) =\phi(g)\circ \phi(h)$. Then $e'\circ \phi(e) = \phi(e) = \phi(e\cdot e) = \phi(e)\circ \phi(e)$. By multiplying with $\phi(e)^{-1}$ from the right, we obtain $e'=\phi(e)$. Done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3267233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conformal circles (A property of inversions) I came across the following proposition in this book: If C is a circle in the Euclidean plane, iC is conformal, that is it preserves angles. Also, iC takes circles not containing the center of C to circles, circles containing the center to lines, lines not containing the center to circles containing the center, and lines containing the center to themselves. I could not figure out what this proposition means. I need to understand this statement before I move on to its proof.
A holomorphic mapping $\hat{\mathbb{C}} \rightarrow \hat{\mathbb{C}}$ that has a holomorphic inverse is called a conformal mapping of $\hat{\mathbb{C}}$. It turns out that every conformal mapping of $\hat{\mathbb{C}}$ is a Möbius transformation. The proposition you ask about asserts that a conformal map sends the points along a circle to either a circle or a line (depending on whether or not $0$ is on the circle) and sends the points on a line to either a circle or a line (depending on whether or not $0$ is on the line). I give a relatively easy to follow proof of this below. Theorem Any Möbius transformation is a composition of translations, dilations and inversions. (A Möbius transformation is a map of the form $\phi(z) = \displaystyle\frac{az+b}{cz+d}$.) A translation is of the form $\phi(z) = z+c$, a dilation is of the form $\phi(z) = \lambda z$, and an inversion is of the form $\displaystyle \phi(z) = \frac{1}{z}$. Proof Let $\phi$ be given. If $\phi(\infty)=\infty$, then we know that $\phi(z) = a z+b=(z\mapsto z+b)\circ(z\mapsto a z)$. Suppose $\phi(\infty)=c \neq \infty$. Then $\psi=(z\mapsto 1/z)\circ(z\mapsto z-c)\circ\phi$ is a Möbius transformation fixing $\infty$: $$\infty \xrightarrow{\phi} c \xrightarrow{z-c} 0 \xrightarrow{1/z} \infty$$ Since $\psi$ fixes $\infty$, it is a composition of a dilation and a translation by the first case; hence $\phi=(z\mapsto z+c)\circ(z\mapsto 1/z)\circ\psi$ is a composition of translations, dilations and inversions. Now that there is some background on the conformal maps, we move to the actual problem at hand. Theorem Möbius transformations preserve the class of circles and lines. Proof Circles and lines are clearly preserved by translations and dilations. It suffices to check that they are preserved under inversion. Consider a circle $C=\{z\in\mathbb{C} : |z-a|=r\},\ r>0$. Put $w=1/z$.
\begin{align*} z\in C&\iff|z-a|^2=r^2\\ &\iff|z|^2-\overline{a} z-a\overline{z}+|a|^2=r^2 \end{align*} If $|a|=r$ (equivalently, $C$ goes through $0$), then \begin{align*} z\in C&\iff|z|^2-\overline{a} z-a\overline{z}=0\\ &\iff1-\overline{aw}-aw=0\ \text{or}\ w=\infty\\ &\iff2\operatorname{Re}(aw)=1\ \text{or}\ w=\infty\\ &\iff\operatorname{Re}(e^{i\arg(a)}w)=\frac{1}{2|a|}\ \text{or}\ w=\infty \end{align*} This last condition describes a line. If $|a|\neq r$ (equivalently, $C$ does not go through $0$), then \begin{align*} z\in C&\iff|z|^2-\overline{a} z-a\overline{z}+|a|^2-r^2=0\\ &\iff1-\overline{aw}-aw+(|a|^2-r^2)|w|^2=0\\ &\iff|w|^2-\frac{\overline{aw}+aw}{|a|^2-r^2}+\frac{1}{|a|^2-r^2}=0\\ &\iff|w|^2-\frac{\overline{aw}+aw}{|a|^2-r^2}+\frac{|a|^2}{(|a|^2-r^2)^2}+\frac{1}{|a|^2-r^2}-\frac{|a|^2}{(|a|^2-r^2)^2}=0\\ &\iff\left|w-\frac{\overline{a}}{|a|^2-r^2}\right|^2-\frac{r^2}{(|a|^2-r^2)^2}=0 \end{align*} The last equation describes a circle (note the center is $\overline{a}/(|a|^2-r^2)$, obtained by completing the square). A similar calculation shows that under inversion, the image of a line $\ell$ is a circle (if $0\notin \ell$) or a line (if $0\in\ell$).
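A numeric check of the $|a|=r$ case, as a sketch with the hypothetical center $a=2+i$: points of the circle through $0$ are inverted and tested against the line equation $\operatorname{Re}(e^{i\arg(a)}w)=\frac{1}{2|a|}$:

```python
import cmath, math

a = 2 + 1j                 # hypothetical center; the circle |z - a| = |a| passes through 0
r = abs(a)
target = 1 / (2 * abs(a))
phase = cmath.exp(1j * cmath.phase(a))   # e^(i arg a)

for k in range(360):
    z = a + r * cmath.exp(1j * math.radians(k + 0.5))
    if abs(z) < 1e-9:
        continue           # the point 0 itself maps to infinity
    w = 1 / z
    assert abs((phase * w).real - target) < 1e-8
```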
{ "language": "en", "url": "https://math.stackexchange.com/questions/3267384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why can an algebraic extension of a local field be written as a tower of finite extensions? Let $K$ be a local field and $L$ an algebraic extension of $K$. I found a statement that $L$ can be written as $$ L = \bigcup_{n=0}^\infty L_n, $$ where $L_n$ are finite extensions of $K$ with $L_n \subseteq L_{n+1}$. I could not find any reference for this but one can show it using the known statement that there are only finitely many extensions of $K$ of a fixed degree. Does anyone know if it is possible to state a more elementary proof?
$K$ is a completion of a global field $K'$. Every finite extension $E$ of $K$ can be obtained as a completion of a finite extension $E'$ of $K'$. This implies that there are only countably many finite extensions of $K$, as $K'$ and hence $K'[x]$ are countable. Every algebraic extension $L$ of $K$ is the directed union of all finitely generated subextensions $L=\bigcup E_i$. Since there are only countably many, we can choose an enumeration $(E_n)_{n \in \Bbb N}$ and then set $L_n=E_1E_2\dots E_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3267563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that there is no isometry from $\mathbb R^3$ to $\mathbb R^2$ How should I start off? I know if $T$ is an isometry then $T=A \vec x+\vec b$ where $A$ is an orthogonal matrix and $\vec b$ is a column vector. If I can prove that there is no orthogonal matrix from $\mathbb{R^3}$ to $\mathbb{R^2}$ then will it suffice? (Which I don't think is true?) (Also my understanding of isometries is quite basic; I know isometries preserve distance and dot product.)
Consider a tetrahedron in $\mathbb R^3$. Its vertices are four points equidistant from each other. An isometry from $\mathbb R^3$ to $\mathbb R^2$ would create such a set in $\mathbb R^2$, but this is clearly impossible. One way to see this is that distinct circles in $\mathbb R^2$ intersect in at most two points. If you pick two points $A,B$ at distance $d$ apart, and you want two more points at distance $d$ from both $A$ and $B$, then you must select the intersections $C,D$ of the circles of radius $d$ around $A$ and $B$. But unfortunately, $C$ and $D$ are necessarily $d\sqrt{3}$ apart, not $d$!
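The two-circle computation checks out numerically, as a sketch with $d=1$:

```python
import math

d = 1.0
A = (0.0, 0.0)
B = (d, 0.0)
# the two intersection points of the circles of radius d around A and B
C = (d/2,  d*math.sqrt(3)/2)
D = (d/2, -d*math.sqrt(3)/2)

dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
for P in (C, D):
    assert abs(dist(A, P) - d) < 1e-12
    assert abs(dist(B, P) - d) < 1e-12
# but C and D are d*sqrt(3) apart, so no four points of the plane are
# pairwise at distance d
assert abs(dist(C, D) - d * math.sqrt(3)) < 1e-12
```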
{ "language": "en", "url": "https://math.stackexchange.com/questions/3267713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }