Matlab function input problem, can only see 1 root but supposed to see 2 – Noob question Thank you for reading this question. I'm studying numerical methods and I'm using Matlab for the practical parts. The problem: I'm supposed to find 2 different roots, y = 0, for positive x, x > 0, with the function below. When I plot it in Matlab I can only see 1. I've checked and changed my code countless times but for the life of me I can't see what I'm doing wrong. Please help, it would be greatly appreciated. Thank you. To clarify, I'm not asking for help with an algorithm, I only need help with what I'm doing wrong with inputting the function in Matlab. Again, I can only see it having 1 root, y = 0, and not 2, which is the start of the actual problem I am to solve. The function: $$ f(x) = 98x - \biggl(\frac{x^2 + x + 0.2}{x + 1}\biggr)^9 + 5xe^{-x} = 0 $$ Here is my Matlab code: x = 0:0.001:1000; y = 98.*x - ((x.^2 + x + 0.2)./(x + 1)).^9 + 5.*x.*exp(-x); What am I doing wrong? Thank you, regards / euro
Sorry for a late update, been busy. Thank you very much for the input on my problem. You are correct, the scale and step-size I was using was the culprit. Using this revised code, with a step size of 1e-10 (note that 1*exp(-10) would be $e^{-10}\approx 4.5\times10^{-5}$, far too coarse for this window), shows that there is indeed a small positive root very close to x = 0. x = -0.001:1e-10:0.001; y = 98.*x-((x.^2+x+ 0.2)./(x+1)).^9 + 5.*x.*exp(-x); figure;plot(x,y); grid; axis([ 0.0000000045 0.0000000055 -0.00000001 0.00000001]) Again, thank you very much, regards / euro
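A quick cross-check of that conclusion in Python (not from the original thread; assumes NumPy and SciPy, and the bracketing interval is an illustrative choice):

```python
# Locate the small positive root of f(x) = 98x - ((x^2+x+0.2)/(x+1))^9 + 5x*exp(-x).
import numpy as np
from scipy.optimize import brentq

f = lambda x: 98*x - ((x**2 + x + 0.2)/(x + 1))**9 + 5*x*np.exp(-x)

# Near x = 0, f(x) ~ 103*x - 0.2**9, so the root should sit near 0.2**9/103.
print(0.2**9 / 103)              # ~ 4.97e-09
print(brentq(f, 1e-12, 1e-6))    # ~ 4.97e-09, matching the plot window above
```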
{ "language": "en", "url": "https://math.stackexchange.com/questions/2349354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is u-equivalence useful? If $X$ is a standard set, we define, for $\alpha\in\ ^*X$, $U_{\alpha}=\{A\subset X\mid\alpha\in\ ^*A\}$. We can see that $U_\alpha$ is an ultrafilter on $X$. We thus define an equivalence relation on $^*X$ which is $\alpha\sim\beta$ iff $U_{\alpha}=U_{\beta}$. Is it possible, using big enough saturation, that there are distinct $\alpha,\beta\in\ ^*X$ such that $\alpha\sim\beta$ (for example, using this) ?
The paper by Di Nasso you cite was published in 2002. A later paper by Benci, Forti and Di Nasso: Benci, Vieri; Forti, Marco; Di Nasso, Mauro. The eightfold path to nonstandard analysis. Nonstandard methods and applications in mathematics, 3–44, Lect. Notes Log., 25, Assoc. Symbol. Logic, La Jolla, CA, 2006, in section 9 presents models of the hyperintegers as subsets of the Stone-Čech compactification of $\mathbb Z$. This suggests that for those models distinct hyperintegers necessarily correspond to distinct ultrafilters. Meanwhile, Di Nasso in chapter 11 of Nonstandard analysis for the working mathematician, page 445, points out that $c^+$-saturation implies that the map from the hypernaturals to $\beta \mathbb N$ (the Stone-Čech compactification) is onto. It suffices to choose a model of the hypernaturals of high enough cardinality (namely, higher than that of $\beta\Bbb N$) to ensure that there are distinct hypernaturals that define identical ultrafilters on $\Bbb N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2349477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Getting the RHS into summation form Some time ago, I wrote down this identity$$\frac 4\pi=1+\left(\frac 12\right)^2\frac 1{1!\times2}+\left(\frac 12\times\frac 32\right)^2\frac 1{2!\times2\times3}+\ldots$$And being the idiot I was, I didn't write down the RHS into a compact sum. Question: How do you write the RHS with a summation?$$1+\left(\frac 12\right)^2\frac 1{1!\times2}+\left(\frac 12\times\frac 32\right)^2\frac 1{2!\times2\times3}+\ldots=\sum\limits_{k=0}^{\infty}\text{something}$$ Obviously, there is a $k!$ in there, but that's as much as I know. The sum also includes pochhammer symbols$$(a)_n=a(a+1)\cdots(a+n-1)$$ because the RHS is a hypergeometric function.
If the general term has the form $$ a_k=\left(\frac{(2k-1)!!}{2^k}\right)^2\frac{1}{k!(k+1)!} = \left(\frac{(2k)!}{4^k k!}\right)^2\frac{1}{k!(k+1)!}$$ then $$ \sum_{k\geq 0}a_k = \sum_{k=0}\frac{1}{4^k}\binom{2k}{k}\frac{1}{4^k(k+1)}\binom{2k}{k}=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{2 d\theta}{\sqrt{1-e^{i\theta}}\left(1+\sqrt{1-e^{-i\theta}}\right)} $$ equals $\frac{8}{2\pi}=\color{red}{\frac{4}{\pi}}$ by the residue theorem or by the explicit computation of a primitive. This proves $$ \phantom{}_2 F_1\left(\frac{1}{2},\frac{1}{2};2;1\right)=\frac{4}{\pi}$$ that is related with a complete elliptic integral of the first kind by $$ \phantom{}_2 F_1\left(\frac{1}{2},\frac{1}{2};2;1\right)=\frac{2}{\pi}\int_{0}^{1}K(\sqrt{k})\,dk $$ previously solved here.
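For readers who want to confirm the value numerically before trusting the contour computation, here is a small Python sketch (my addition, standard library only); keeping a running ratio $r_k=\binom{2k}{k}/4^k$ avoids big-integer arithmetic:

```python
# Check that the sum of a_k = (C(2k,k)/4^k)^2 / (k+1) converges to 4/pi.
from math import pi

s, r = 0.0, 1.0                  # r holds C(2k,k)/4^k, starting at k = 0
for k in range(200_000):
    s += r*r / (k + 1)
    r *= (2*k + 1) / (2*k + 2)   # ratio of consecutive central binomial terms
print(s, 4/pi)                   # agree to ~5 decimals; the tail decays like 1/k^2
```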
{ "language": "en", "url": "https://math.stackexchange.com/questions/2349563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
$2^n=7x^2+y^2$ solutions My problem is related to the equation above. It actually is a very particular one. I noticed that for every positive integer $n$ there's ONE SINGLE solution $(x_1,y_1)$ such that $x_1$ and $y_1$ are ODD positive integers (I didn't prove it; I tested it with a program through roughly 30 tests). If that's true, then how can I prove it? I tried a proof by contradiction (assuming the contrary of the sentence, that is, that there could be at least one more solution for some values of $n$, though not necessarily every $n$, and running into a contradiction), which didn't work, and personally I can't "feel" why there can't be more than one solution. I hope you could help me to prove it. Thanks in advance! P.s.: $n\ge 3$.
Reducing mod $7$ gives $2^n \equiv y^2 \pmod 7$. The powers of $2$ mod $7$ cycle through $2,4,1$, and the squares mod $7$ are $0,1,2,4$, so $y^2 \bmod 7$ can only take the values $1$, $2$ and $4$; these occur for $y \equiv \pm1, \pm3, \pm2 \pmod 7$ respectively. With such values of $y$, some exponents $n$ give integers for $x$, for example $y=2$, $x=6$ for $n=8$, or $x=y=2$ for $n=5$ (and, among odd pairs, $x=y=1$ for $n=3$). This is the reason for there existing only one single $(x,y)$ for every $n$.
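The uniqueness claim itself can at least be tested well beyond the OP's 30 cases with a short brute-force sketch (my addition, plain Python):

```python
# For each n >= 3, list all odd positive pairs (x, y) with 7x^2 + y^2 = 2^n.
from math import isqrt

for n in range(3, 41):
    N = 2**n
    sols = []
    for x in range(1, isqrt(N // 7) + 1, 2):      # odd x only
        y2 = N - 7*x*x
        y = isqrt(y2)
        if y > 0 and y*y == y2 and y % 2 == 1:    # odd positive y
            sols.append((x, y))
    print(n, sols)                                # exactly one pair per n
```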
{ "language": "en", "url": "https://math.stackexchange.com/questions/2349672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f(0) = 0$ and $|f'(x)|\leq |f(x)|$ for all $x\in\mathbb{R}$ then $f\equiv 0$ Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous and differentiable function on all of $\mathbb{R}$. If $f(0)=0$ and $|f'(x)|\leq |f(x)|$ for all $x\in\mathbb{R}$, then $f\equiv 0$. I've been trying to prove this using the Mean Value Theorem, but I can't get to the result. Can someone help?
Intuitively, the solution to $|f'| \leq |f|$ with $f(0) = c$ cannot grow out of the region bounded by the solutions to $f' = +f$ and $f' = -f$ with the same initial condition $f(0) = c$. The boundary solutions are $f_\pm(x) = c e^{\pm x}$; with $c = 0$ we have $f_\pm(x) \equiv 0$. It seems so obvious intuitively, but it was not at all as easy to prove as I thought; finally I came up with a proof. But first a lemma: Lemma Let $h : [0, \infty) \to \mathbb R$ be differentiable and satisfy, for some $x_0 \geq 0$:
* $h(x_0) > 0$,
* $h'(x) > 0$ when $x > x_0$ and $h(x) > 0$.
Then $h(x) > 0$ for all $x \geq x_0$. Proof of lemma Assume that $h(a) \leq 0$ for some $a > x_0$. Since $h$ is continuous, by the intermediate value theorem, $h$ takes the value $0$ in at least one point between $x_0$ and $a$. Let $x_1 = \inf \{ t \in (x_0, a) \mid h(t) = 0 \}$. Since $h$ is continuous and $h(x_0) > 0$ we have $x_1 > x_0$ and $h(x) > 0$ when $x_0 < x < x_1$. Then $h(x_1) - h(x_0) < 0$ and by the mean value theorem there exists some $\xi \in (x_0, x_1)$ such that $h'(\xi) = (h(x_1) - h(x_0))/(x_1 - x_0) < 0$. But this contradicts that $h'(x) > 0$ when $x > x_0$ and $h(x) > 0$. Thus $h(x) > 0$ for $x \geq x_0$. Proof of statement in question Take $\lambda>0$. Let $g(x) = \lambda e^x$. Then $g-f$ satisfies the conditions of the lemma with $x_0=0$. Thus, for all $x > 0$ we have $(g-f)(x) > 0$, i.e. $f(x) < \lambda e^x$. In the same way, taking $g(x) = -\lambda e^x$ we have $f-g$ satisfying the conditions of the lemma, so $f(x) > -\lambda e^x$ for $x > 0$. Thus, for $x > 0$ we have $-\lambda e^x < f(x) < \lambda e^x$. Since $\lambda>0$ was arbitrary we must have $f(x) \equiv 0$ for $x > 0$. Reversing the function, i.e. letting $f(x) \to f(-x)$ in the above, we also get that $f(x) \equiv 0$ for $x < 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2349761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
if $X$ is $T_1$ and limit point compact then $X$ is countably compact A space $X$ is said to be countably compact if every countable open covering of $X$ contains a finite subcollection that covers $X$. I want to show that if $X$ is a $T_1$ space and limit point compact then $X$ is countably compact.
Suppose that the space is limit point compact but not countably compact. Since the space is not countably compact, open sets $U_1,U_2,\dots$ exist with $X=\bigcup_{i=1}^{\infty}U_i$ and $X\neq\bigcup_{i=1}^{n}U_i$ for every $n\in\mathbb N$. So the sets $F_i:=U_i^c$ are closed with $\bigcap_{i=1}^{\infty} F_i=\varnothing$ and $\bigcap_{i=1}^{n} F_i\neq\varnothing$ for each $n$. Let $x_n\in\bigcap_{i=1}^{n} F_i$ for each $n$ and let $A:=\{x_n\mid n\in\mathbb N\}$. For every $n$ the set $\{k\in\mathbb N\mid x_k=x_n\}$ must be finite (if not, then $x_n\in\bigcap_{i=1}^{\infty} F_i$). Then $A$ must be an infinite set, so it has a limit point $x$, since the space is limit point compact. Now fix $m\in\mathbb N$. Since the space is $T_1$, we can find a neighborhood $U$ of $x$ that has empty intersection with $\{x_1,\dots,x_{m-1}\}\setminus\{x\}$, and conclude that $x$ must be a limit point of the set $\{x_m,x_{m+1},\dots\}\subseteq F_m$. This implies $x\in F_m$, and this is true for every $m$, so that $x\in\bigcap_{i=1}^{\infty} F_i$. A contradiction is found.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2349880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Poisson equation using complex analytic method I have the following question in a complex analysis text: Find a particular solution to the following Poisson equation: $$\nabla^2u(r, \theta) = r^2 \cos \theta.$$ The solution method outlined in the text uses Wirtinger derivatives to simplify the equation, then integrate twice. So here we would have: $$4 \frac{\partial^2u}{\partial z \partial \bar{z}} = z \bar{z} \cos (\arg z).$$ Integrating with respect to $\bar{z}$ would then give: $$4 \frac{\partial u}{\partial z} = z \left( \frac{\bar{z}^2}{2} \right) \cos (\arg z) + f(z).$$ Now, here I'm stuck. I can't get this integration to work, in part because I'm not sure you can integrate $\arg(z)$. Any pointers on where I'm going wrong would be greatly appreciated. Thanks in advance.
If $u$ is a solution to this one and $v$ is a solution to $\nabla^2 v(r,\theta) = r^2 \sin \theta$, then $w = u + i v$ satisfies $$ 4 \dfrac{\partial^2 w}{\partial z \partial \overline{z}} = \nabla^2 w = r^2 \exp(i\theta) = r z= z^{3/2} \overline{z}^{1/2}$$ Integrate with respect to $z$ and $\overline{z}$, and you find one solution is $$ w = \frac{z^{5/2} \overline{z}^{3/2}}{15} = \frac{r^{3} z}{15} = \frac{r^4}{15} \exp(i\theta)$$ So taking the real part, $$u = \frac{r^4}{15} \cos(\theta)$$
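A symbolic check of the final answer, assuming SymPy is available (not part of the original answer):

```python
# Verify nabla^2 u = u_rr + u_r/r + u_thth/r^2 equals r^2*cos(theta)
# for u = r^4*cos(theta)/15, in polar coordinates.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
u = r**4 * sp.cos(th) / 15
lap = sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, th, 2)/r**2
print(sp.simplify(lap))   # -> r**2*cos(theta)
```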
{ "language": "en", "url": "https://math.stackexchange.com/questions/2350063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Mapping of zero vector under a linear transformation Let $T$ be a linear transformation from a vector space $V$ to $W$, both defined over a field $\mathbb F$. Now, $T(\mathbf0)=\mathbf0$. This implies that the zero vector in $V$ can't move, i.e. be transformed to a new vector in $W$. Why is it so? Does it also have a geometrical significance?
Since you're asking for a geometrical interpretation: First of all, if you have a linear transformation $T$ and apply it to all vectors in a vector space $V$, then the mapped vectors will form a vector space again. What's important is that the zero vector is a part of every vector space. So intuitively, when you map all vectors from a space $V$ to another space $W$, then there must be some vector that gets mapped to the zero vector. (As has already been pointed out, it follows from the linearity criterion that $T(0)=0$, which means that this vector we are looking for is indeed the zero vector from $V$.) Let's visualise what a transformation $T: \mathbb R^2 \rightarrow \mathbb R^2$ does to our space. In this case, we are not moving from one vector space to another, but rather just mapping $\mathbb R^2$ onto itself. This leads to a distortion of the space. For example, if we take the matrix $$B = \begin{pmatrix}1 & 0 \\ 1 & 1\end{pmatrix}$$ and apply it to all vectors on the unit circle, the circle is sheared into an ellipse (the sketch below regenerates the picture). Basically, what is happening is that we stretch and turn our space in different directions. Think of drawing a circle on a balloon and then, depending on your transformation, you just stretch out the surface. It should be clear now why the zero vector gets mapped to itself: if you stretch something that has $0$ length, then of course it will have no length afterwards either. You could again think of the balloon example and see for yourself that a small dot would not really change that much, no matter how much you stretch the balloon.
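A sketch that regenerates the missing picture, assuming NumPy and Matplotlib (the sampling density is an arbitrary choice):

```python
# Shear the unit circle with B = [[1, 0], [1, 1]]; the origin stays fixed.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2*np.pi, 400)
circle = np.vstack([np.cos(t), np.sin(t)])     # 2 x 400 points on the circle
B = np.array([[1, 0], [1, 1]])
image = B @ circle

plt.plot(circle[0], circle[1], label='unit circle')
plt.plot(image[0], image[1], label='B @ circle')
plt.scatter([0], [0], color='k')               # the zero vector, fixed by B
plt.gca().set_aspect('equal')
plt.legend()
plt.show()
```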
{ "language": "en", "url": "https://math.stackexchange.com/questions/2350310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Number of transitive relations We have a set $A$ with cardinality $n$. How to find the number of transitive relations on $A$? Also how do we get the following results? Number of reflexive relations on $A=2^{n^2-n}$ Number of symmetric relations on $A=2^{\frac{n(n+1)}{2}}$
The problem of finding the number of transitive relations on a set of $n$ elements is non-trivial; no simple closed form is known. The number of relations defined on the set itself grows exponentially ($2^{n^2}$). For the other two, let's consider a matrix form of representing relations (assume rows and columns are ordered by the elements, where a 1 corresponds to the existence of an ordered pair and a 0 to non-existence). On a set of $n$ elements, the matrix is a square matrix with $n$ rows and $n$ columns (each entry corresponds to an ordered pair of elements from the set). Make the following observations: the diagonal entries govern reflexivity, and the mirror entries across the diagonal govern symmetry. The total number of diagonal entries is $n$, the total number of non-diagonal entries is $n^2 - n$, and the total number of mirror-entry pairs is $\frac{n^2 - n}{2}$. Now, for a relation to be reflexive, all of the diagonal entries must be 1, while the other entries may or may not exist (either 0 or 1). In a symmetric relation, all of the mirror entries occur in pairs, i.e., either both 1 or both 0, and the diagonal entries may or may not exist (either 0 or 1). Counting this way, it's clear that the number of reflexive relations is $2^{n^2 - n}$ and the number of symmetric relations is $(2^n)(2^{\frac{n^2 - n}{2}})$, which is equal to $2^{\frac{n^2 + n}{2}}$.
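Both closed forms are easy to confirm by exhaustive enumeration for small $n$ (a Python sketch I am adding; $n=3$ already means $2^9=512$ relations, so keep $n$ small):

```python
# Count reflexive and symmetric relations on {0, ..., n-1} by brute force.
from itertools import product

def count(n):
    pairs = [(i, j) for i in range(n) for j in range(n)]
    refl = sym = 0
    for bits in product([0, 1], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        refl += all((i, i) in R for i in range(n))
        sym  += all((j, i) in R for (i, j) in R)
    return refl, sym

for n in range(1, 4):
    print(n, count(n), 2**(n*n - n), 2**(n*(n + 1)//2))   # counts match the formulas
```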
{ "language": "en", "url": "https://math.stackexchange.com/questions/2350416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Let $f$ be a differentiable function. Compute $\frac{d}{dx}g(2)$, where $g(x) = \frac{f(2x)}{x}$. Let $f$ be a differentiable function and $$\lim_{x\to 4}\dfrac{f(x)+7}{x-4}=\dfrac{-3}{2}.$$ Define $g(x)=\dfrac{f(2x)}{x}$. I want to know the derivative $$\dfrac{d}{dx}g(2)=?$$ I know that: $$\dfrac{d}{dx}g(2)=\dfrac{4f'(4)-f(4)}{4}$$ and: $$\lim_{x\to 4}\dfrac{f(x)-f(4)}{x-4}=a\in\mathbb{R}$$ so: $$\lim_{x\to 4}\dfrac{f(x)-f(4)+7+f(4)}{x-4}=\dfrac{-3}{2}$$ $$\lim_{x\to 4}\left(\dfrac{f(x)-f(4)}{x-4}+\dfrac{f(4)+7}{x-4}\right)=\dfrac{-3}{2}$$ now what?
HINT \begin{align*} \frac{dg}{dx}(2) & = \lim_{x\rightarrow 2} \frac{g(x) - g(2)}{x - 2} = \lim_{x\rightarrow 2}\frac{\displaystyle\frac{f(2x)}{x} - \frac{f(4)}{2}}{x - 2} = \lim_{x\rightarrow 2}\frac{2f(2x)-xf(4)}{2x(x-2)}\\ & = \lim_{u\rightarrow 4}\frac{2f(u) - u\displaystyle\frac{f(4)}{2}}{u\left(\displaystyle\frac{u}{2}-2\right)} = \lim_{u\rightarrow 4}\frac{4f(u)-uf(4)}{u(u-4)} = \lim_{u\rightarrow 4}\frac{4f(u)-uf(4)}{u^2 - 4u}\\ & \overset{\mathrm{L'H}}{=}\lim_{u\rightarrow 4}\frac{4f'(u)-f(4)}{2u-4} = \frac{4f'(4)-f(4)}{4} \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2351494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Approximation of Fermi-Dirac integral $\int \text{d}x \frac{f(x,\beta)}{1+e^{\beta x}}$ In physics it is quite common to find integrals of the type \begin{align} I(\beta) = \int_{-\infty}^{\infty}\text{d}x \frac{f(x)}{1+e^{\beta x}} \tag{1} \end{align} where $f(x)$ is some quantity we want to average over the Fermi-Dirac distribution $n(x) = \left(1+e^{\beta x}\right)^{-1}$, and $\beta >0$ is a positive real parameter representing the 'inverse of the temperature'. Since many times physicists are only interested in the 'low temperature' regime $\beta\gg 1$, it is common to consider the following Sommerfeld approximation: \begin{align} I(\beta)\underset{\beta \gg 1}{=} \int_{-\infty}^{0}\text{d} x~f(x)+\frac{\pi^2}{6\beta^2} f'(0)+O(\beta^{-4}). \tag{2} \end{align} This sometimes appears in mnemonic fashion as an expansion for $n(x,\beta)$ itself, \begin{align} \frac{1}{1+e^{\beta x}}\underset{\beta\gg1}{=} \theta(-x)-\frac{\pi^2} {6\beta^2}\delta'(x)+O(\beta^{-4}) \tag{3} \end{align} My question is: Can we understand Eq. (3) in a rigorous fashion? E.g. as the expansion of a distribution. If yes, can we understand the convergence of Eq. (2) as the condition for a series expansion under the integral sign? E.g. dominated convergence. My motivation I need to study an integral similar to Eq. (1) but with the crucial difference that $f(x)$ is also a function of the parameter $\beta$, and I wanted to make a similar expansion as in Eq. (2). My idea was to understand Eq. (2) as the series expansion Eq. (3) under the integral sign, and also to expand my $f$ in $\beta$. To check if this is safe, I wanted to use dominated convergence. But I am not confident this makes sense. References for rigorous discussions of Eq. (2) or Eq. (3) are welcome.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Note that $\ds{{1 \over \expo{\beta x} + 1} = \left\{\begin{array}{lcl} \ds{\Theta\pars{-x} + {\mrm{sgn}\pars{x} \over \expo{\beta\,\verts{x}} + 1}} & \mbox{if} & \ds{x \not= 0} \\[2mm] \ds{1 \over 2} & \mbox{if} & \ds{x = 0} \end{array}\right.}$ $\ds{\Theta}$ is the Heaviside Step Function. Then, \begin{align} \left.\vphantom{\Large A}\mrm{I}\pars{\beta}\right\vert_{\ \beta\ >\ 0} & = \int_{-\infty}^{\infty}{\mrm{f}\pars{x} \over \expo{\beta x} + 1}\,\dd x = \int_{-\infty}^{0}\mrm{f}\pars{x}\,\dd x + \int_{-\infty}^{\infty}{\mrm{sgn}\pars{x} \over \expo{\beta\,\verts{x}} + 1}\,\mrm{f}\pars{x}\,\dd x \\[5mm] & = \int_{-\infty}^{0}\mrm{f}\pars{x}\,\dd x + \int_{0}^{\infty} {\mrm{f}\pars{x} - \mrm{f}\pars{-x} \over \expo{\beta x} + 1}\,\dd x \\[5mm] & = \int_{-\infty}^{0}\mrm{f}\pars{x}\,\dd x + 2\sum_{n = 0}^{\infty} {\mrm{f}^{\pars{2n + 1}}\pars{0} \over \pars{2n + 1}!}\int_{0}^{\infty} {x^{2n + 1} \over \expo{\beta x} + 1}\,\dd x \\[5mm] & = \int_{-\infty}^{0}\mrm{f}\pars{x}\,\dd x + 2\sum_{n = 0}^{\infty} {\mrm{f}^{\pars{2n + 1}}\pars{0} \over \pars{2n + 1}!} \,\beta^{-\pars{2n + 2}}\int_{0}^{\infty}{x^{2n + 1} \over \expo{x} + 1}\,\dd x \\[5mm] & = \bbx{\int_{-\infty}^{0}\mrm{f}\pars{x}\,\dd x + \sum_{n = 0}^{\infty} \bracks{2\,\mrm{f}^{\pars{2n + 1}}\pars{0} \pars{1 - 2^{-2n - 1}}\zeta\pars{2n + 2}}\,\beta^{-\pars{2n + 2}}} \end{align}
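A numerical sanity check of the boxed expansion (my addition; it assumes SciPy, takes the test function $f(x)=x e^{-x^2}$ so that $\int_{-\infty}^0 f = -1/2$ and $f'(0)=1$, and uses `expit` to avoid overflow in the Fermi factor):

```python
# Compare I(beta) with the leading Sommerfeld terms -1/2 + (pi^2/6)/beta^2.
import numpy as np
from scipy.integrate import quad
from scipy.special import expit      # expit(t) = 1/(1 + exp(-t)), overflow-safe

def I(beta):
    g = lambda x: x*np.exp(-x**2) * expit(-beta*x)   # f(x) / (exp(beta*x) + 1)
    return quad(g, -np.inf, np.inf)[0]

for beta in [5.0, 10.0, 20.0]:
    approx = -0.5 + np.pi**2 / (6*beta**2)
    print(beta, I(beta), approx)     # the difference shrinks like beta**-4
```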
{ "language": "en", "url": "https://math.stackexchange.com/questions/2351604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Prove that the number of non-isomorphic ordered trees with $n$ vertices is the $n$th Catalan number. According to Wikipedia, $C_n$ is the number of non-isomorphic ordered trees with $n$ vertices. But I can't seem to be able to prove this result. How do we do that? Here the $n$th Catalan number is: $$ C_n = \frac 1{n+1} \binom{2n}{n} $$
We have from basic principles for the species of ordered trees the species equation $$\mathcal{T} = \mathcal{Z} + \mathcal{Z} \mathfrak{S}_{\ge 1}(\mathcal{T}).$$ This yields the functional equation for the generating function $T(z)$ $$T(z) = z + z \frac{T(z)}{1-T(z)}$$ which is $$T(z) (1-T(z)) = z (1-T(z)) + z T(z) = z.$$ We claim that $$[z^n] T(z) = \left.\frac{1}{m+1} {2m\choose m}\right|_{m=n-1} = \frac{1}{n} {2n-2\choose n-1}.$$ The shift in index of the Catalan numbers represents the species $\mathcal{T}$ which has one ordered tree on two nodes and not two (root node with one child node). We then have $$[z^n] T(z) = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} T(z) \; dz.$$ Put $w = T(z)$ so that $w (1-w) = z$ and $dz = (1-2w) \; dw$ to get $$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+1} (1-w)^{n+1}} w (1-2w) \; dw.$$ This yields $$[w^{n-1}] \frac{1}{(1-w)^{n+1}} - 2 [w^{n-2}] \frac{1}{(1-w)^{n+1}} \\ = {n-1+n\choose n} - 2 {n-2+n\choose n} \\ = {2n-1\choose n} - 2 {2n-2\choose n} = \left(\frac{2n-1}{n}-2\frac{n-1}{n}\right) {2n-2\choose n-1} \\ = \frac{1}{n} {2n-2\choose n-1}.$$ Remark. If an alternate approach is desired we can use formal power series on $$T(z) = \frac{1-\sqrt{1-4z}}{2}$$ where we have solved the functional equation taking the branch that has $T_0 = 0$ as in the given problem. Coefficient extraction then yields $$-\frac{1}{2} (-1)^n 4^n {1/2\choose n} = - 2^{2n-1} \frac{(-1)^n}{n!} \prod_{q=0}^{n-1} (1/2-q) \\ = - 2^{n-1} \frac{(-1)^n}{n!} \prod_{q=0}^{n-1} (1-2q) = - 2^{n-1} \frac{(-1)^n}{n!} \prod_{q=1}^{n-1} (1-2q) \\ = 2^{n-1} \frac{1}{n!} \prod_{q=1}^{n-1} (2q-1) \\ = 2^{n-1} \frac{1}{n!} \frac{(2n-2)!}{(n-1)! 2^{n-1}} = \frac{1}{n} {2n-2\choose n-1}.$$ Returning to the complex variables approach and taking the branch of the logarithm with the branch cut on the negative real axis we obtain that $T(z)$ is analytic in a neighborhood of the origin (branch point $z=1/4$ and branch cut $[1/4,\infty)$) with series expansion $z+z^2+\cdots$ so that the image of $|z|=\epsilon$ is a closed near-circle (modulus of the first term dominates) that may be deformed to a circle $|w|=\gamma.$
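The claimed coefficients are quick to confirm against the closed-form branch with SymPy (a verification sketch I am adding):

```python
# Coefficients of T(z) = (1 - sqrt(1 - 4z))/2 versus C(2n-2, n-1)/n.
import sympy as sp

z = sp.symbols('z')
T = (1 - sp.sqrt(1 - 4*z)) / 2
poly = sp.Poly(sp.series(T, z, 0, 9).removeO(), z)
coeffs = poly.all_coeffs()[::-1]            # index = power of z
for n in range(1, 9):
    print(n, coeffs[n], sp.binomial(2*n - 2, n - 1) / n)   # 1, 1, 2, 5, 14, ...
```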
{ "language": "en", "url": "https://math.stackexchange.com/questions/2351676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to calculate $\lim_{x\to\infty}\frac{x}{x-\sin x}$? I tried to solve $$ \lim_{x\to\infty}\frac{x}{x-\sin x}. $$ After dividing by $x$ I got that it equals: $$ \lim_{x\to\infty}\frac{1}{1-\frac{\sin x}{x}}. $$ Now, using L'Hôpital ($0/0$) I get that $$ \lim_{x\to\infty}\frac{\sin x}{x} = \lim_{x\to\infty}\cos x $$ and the limit at infinity of $\cos x$ is not defined. So basically I get that the overall limit of $$ \lim_{x\to\infty}\frac{x}{x-\sin x} $$ is $1$ or not defined?
With the sandwich theorem: for $x>1$, $\dfrac{x}{x+1}\leq \dfrac{x}{x-\sin x}\leq \dfrac{x}{x-1}$. $\lim \limits_{x \to +\infty}\dfrac{x}{x+1}=1\quad \text{and}\quad \lim \limits_{x \to +\infty}\dfrac{x}{x-1}=1\implies \lim \limits_{x \to +\infty}\dfrac{x}{x-\sin x}=1 $
{ "language": "en", "url": "https://math.stackexchange.com/questions/2351866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
One-Way Inverse My Algebra $2$ teacher stressed the fact that when you find the inverse $g$ of a function $f$, you must not only check that $$f \circ g=\operatorname{id}$$ but you must also check that $$g \circ f=\operatorname{id}$$ For example, if $$f(x)=x^2$$ then $$g(x)=\sqrt{x}$$ is not its inverse, because $$f(g(x))=\sqrt{x^2}=|x|\ne x$$ However, I feel that this is minor... $|x|$ is equal to $x$ half of the time (if $x$ is real) and the other half of the time, it is just $-x$. Can anyone think of an example of two functions $f$ and $g$ such that $$f \circ g=\operatorname{id}$$ but, when composed in the other order, the result is something totally wacky that is almost never equal to $\operatorname{id}$?
Let $g$ be $\arctan$ and let $f$ be $\tan$ when defined and $17$ for the rest of the inputs (for $\dfrac{\pi}{2}+k\pi$ for integers $k$). Then $f\circ g=\mathrm{id}_{\mathbb R}$ but $g\circ f(x)=x$ only for $x\in\left(-\dfrac{\pi}{2},\dfrac{\pi}2\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2351951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
Calculate Derivative of a map Consider the maps from $R^2 \to R^2$ such that $F(u, v) = (e^{u + v}, e^{u - v})$ and $G(x, y) = (xy, x^2 - y^2)$. Calculate $D(F \circ G)(1, 1)$ by directly composing. I got $F \circ G = (e^{x^2 +xy - y^2}, e^{y^2 + xy - x^2})$ But how do I get the derivative matrix?
Remember that the derivative of a vector field is its Jacobian. If $F:\mathbb{R}^2\to\mathbb{R}^2$ is a differentiable function with $F(x,y)=(f_1(x,y),f_2(x,y))$ where both $f_i$ are differentiable, then we have: $$D_{(x,y)}F = \left( \begin{array}{cc} \frac{\partial}{\partial x}f_1(x,y) & \frac{\partial}{\partial y}f_1(x,y) \\ \frac{\partial}{\partial x}f_2(x,y) & \frac{\partial}{\partial y}f_2(x,y) \\ \end{array} \right)$$ For this problem, to calculate $D(F\circ G)$ you can either use the chain rule or calculate $F\circ G$ and then compute the Jacobian directly like you are asking. Solution There are two ways to calculate the Jacobian for $F\circ G$. First, by calculating $D(F\circ G)$ directly, which is:$$D(F\circ G) = D(e^{x^2 + xy - y^2},e^{y^2 + xy - x^2}) = \left( \begin{array}{cc} e^{x^2+y x-y^2} (2 x+y) & e^{x^2+y x-y^2} (x-2 y) \\ e^{-x^2+y x+y^2} (-2x+y) & e^{-x^2+y x+y^2} (x+2 y) \\ \end{array} \right)$$ And we can also apply the chain rule by noticing that $D(F\circ G)=D(F(G))D(G) $ where $D(F(G))$ is $D(F)$ evaluated at $G$. We have: $$D(F)= \left( \begin{array}{cc} e^{x+y} & e^{x+y} \\ e^{x-y} & -e^{x-y} \\ \end{array} \right)\text{, } D(G) = \left( \begin{array}{cc} y & x \\ 2 x & -2 y \\ \end{array} \right)\text{ and }$$ $$D(F(G))= \left( \begin{array}{cc} e^{x^2+y x-y^2} & e^{x^2+y x-y^2} \\ e^{-x^2+y x+y^2} & -e^{-x^2+y x+y^2} \\ \end{array} \right)$$ We multiply $D(F(G))D(G) $ to obtain: $$\left( \begin{array}{cc} e^{x^2+y x-y^2} & e^{x^2+y x-y^2} \\ e^{-x^2+y x+y^2} & -e^{-x^2+y x+y^2} \\ \end{array} \right)\left( \begin{array}{cc} y & x \\ 2 x & -2 y \\ \end{array} \right) = \left( \begin{array}{cc} e^{x^2+y x-y^2} (y+2x) & e^{x^2+y x-y^2} (x-2 y) \\ e^{-x^2+y x+y^2} (y-2 x) & e^{-x^2+y x+y^2} (x+2 y) \\ \end{array} \right)$$ All that is left is to evaluate at $(x,y) = (1,1)$, which gives: $$D_{(1,1)}(F\circ G) = \left( \begin{array}{cc} 3 e & -e \\ -e & 3 e \\ \end{array} \right)$$
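The arithmetic above is easy to double-check symbolically, assuming SymPy (my addition):

```python
# Compose F and G, take the Jacobian, and evaluate at (1, 1).
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
G = sp.Matrix([x*y, x**2 - y**2])
F = sp.Matrix([sp.exp(u + v), sp.exp(u - v)])
FG = F.subs({u: G[0], v: G[1]})                  # direct composition
print(FG.jacobian([x, y]).subs({x: 1, y: 1}))    # Matrix([[3*E, -E], [-E, 3*E]])
```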
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The derivative of absolute value of complex function $f(x,z)$ where $x \in \mathbb{R}$ and $z \in \mathbb{C}$ Let $f: \mathbb{R} \to \mathbb{R}$ be a real function and let $z \in \mathbb{C}$ be a complex number such that $$ f(x)=|x \cdot z| $$ Let's calculate the derivative of $f$. If we apply the differentiation rules: $$ f'(x)=\dfrac{x \cdot z}{|x \cdot z|} \cdot z $$ but it's wrong; indeed $$ f(x)=|x \cdot z| = |x| \cdot |z| $$ and now $$ f'(x)=\dfrac{x}{|x|} \cdot |z| $$ So what's the derivative of $f$? In general, what's the derivative of the absolute value of a function $|f(x,z)|$ with respect to the real variable $x$, for $z \in \mathbb{C}$? Thanks.
I'm going to deal with the general problem: Given a complex valued function $$g:\quad{\mathbb R}\to {\mathbb C},\qquad x\mapsto g(x)=u(x)+i v(x)\ ,$$ one has $|g(x)|=\sqrt{u^2(x)+v^2(x)}$ and $g'=u'+i v'$. Therefore $${d\over dx}\bigl|g(x)\bigr|={u(x)u'(x)+v(x)v'(x)\over\sqrt{u^2(x)+v^2(x)}}={{\rm Re}\bigl(g(x) \overline{ g'(x)}\bigr)\over|g(x)|}\ .\tag{1}$$ In the example at hand $z$ is a constant, and $g(x):=xz$, so that $g'(x)=z$. According to $(1)$ one then has $${d\over dx}\bigl|x\,z\bigr|={{\rm Re}\bigl(xz\,\bar z\bigr)\over|x\,z|}={x\,|z|^2\over |x|\,|z|}={x\over|x|}\,|z|\qquad\bigl(xz\ne0)\ .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
The equation $a^x=x$ for $a>1$. The following problem arose when I was sketching the inverse of $f(x)=a^x$ graphically for $a>1$: 1) Is there an $a>1$ such that the equation $a^x=x$ has a unique solution $x\in\mathbb{R}$? 2) If so, then how do we find such $a$ explicitly, if possible? The answer to the first question seems to be yes: I tried solving the equation $a^x=x$ for $a=1.4$ and $a=1.5$, which has two and no solutions respectively; but I personally would love to see a proof without a geometric argument.
I guess I will add my two cents $$a^x=x$$ $$a=x^{{1}/{x}}$$ now $$e^{x}\ge 1+x$$ $$e^{(x-e)/{e}}\ge x/e$$ $$e^{x/e}\ge x$$ $$e^{1/e} \ge x^{1/x}$$ Therefore the maximum value of $a$ for which a solution exists is $e^{1/e}$. Thus there exist solutions for $a\in(1,e^{1/e}]$: two solutions for $a\in(1,e^{1/e})$, and exactly one, namely $x=e$, when $a=e^{1/e}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Unsure of how to interpret the set $\mathbb{R}^X$ where $X$ is a real vector space? I recently came across the notation $\mathbb{R}^X$ and I'm not exactly sure what it means or how to 'visualize it'. The text it comes from is the following: Let $X$ be a real vector space and let $\mathcal{F} \subset \mathbb{R}^X$ be a set of real-valued functionals on $X$. How come the set of real functionals is a subset of $\mathbb{R}^X$? Is it possible to demonstrate why this is so with some simple example?
The notation $B^A$ for sets $A$ and $B$ represents the set of all functions from $A$ to $B$. It is a generalization of the meaning of the notation $X^n:=\overbrace{X\times X\times\cdots\times X}^{n\text{ times}}$, where we can visualize $X^n$ as the set of functions from $\{1,2,\ldots,n\}$ to $X$. In your case the functionals on a vector space $V$ are defined as the maps $V\to \Bbb F$, where $\Bbb F$ is the field of the vector space $V$. Then clearly any set of functionals on $V$ is contained in $\Bbb F^V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
what is it called if numbers $a$ and $b$ are such that their sum is equal to unity Question: Is there a way to definitionally describe rational numbers $a$ and $b$ when $a+b=1$? Answer: My guess is that $a$ and $b$ may be defined as 'unitary additive complements', but this is just a guess.
In the same way that the error function, erf($x$), and the complementary error function, erfc($x$), sum to one (i.e., erf($x$) + erfc($x$) = 1) [1], so too then are $a$ and $b$ complementary under addition if $a + b = 1$. [1] https://en.wikipedia.org/wiki/Error_function#Complementary_error_function
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do you develop a recurrence relation for the function $f(n) = 5n^2 +3$, where $n \in \mathbb{Z}^+$? On an exam of mine, I was asked to find a recurrence relation for the function $f(n) = 5n^2 +3$, where $n \in \mathbb{Z}^+$. I needed to provide a base case and the actual relation itself. I know the base case is for $n = 1$, where $f(1) = 8$, but I have no idea how to derive the relation from here. The professor's answer key is as follows, but I don't understand where the intuition/motivation comes from for this solution: $f(1) = 8, f(n) = 5n^2 + 3 = 5(n - 1)^2 + 3 + 5(2n - 1) = f(n - 1) + 10n - 5$ Where do I start? The above steps seem, at least to me, to be arbitrarily and magically plucked from nowhere...
The motivation behind rewriting $f(n)$ this way is to acquire an equation of the form $$f(n)=g(f(n-1)),$$ i.e. we have to rewrite $f(n)$ so that we get some function of $f(n-1)$. Here this function (whose definition also involves $n$) is $$g(t)=t+10n-5.$$ Now you can directly read off the recurrence relation: $$x_1=8,~~x_{n}=x_{n-1}+10n-5$$
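A three-line check of the recurrence (my addition, plain Python):

```python
# Iterate x_n = x_{n-1} + 10n - 5 from x_1 = 8 and compare with 5n^2 + 3.
x = 8                         # x_1 = f(1)
for n in range(2, 11):
    x = x + 10*n - 5          # produces x_n from x_{n-1}
    assert x == 5*n*n + 3, (n, x)
print("recurrence matches 5n^2 + 3 for n = 1..10")
```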
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If $T$ is a positive operator then $I+T$ is invertible Let $T$ be a positive operator on a Hilbert space $H$. Prove that $I+T:H\to H$ is invertible and $(I+T)^{-1} \in B(H)$. Now, if I prove $I+T$ is invertible, the bounded inverse theorem implies the second part. While proving that $I+T$ is invertible, I have proved that $I+T$ is one-to-one. But now I have to prove that $I+T$ is onto. In doing so, my idea is to prove that $I+T$ is bounded below, so that $\operatorname{Range}(I+T)$ becomes closed, and then show that $\operatorname{Range}(I+T)^{\perp}=\{0\}$; then by the projection theorem we will have $\operatorname{Range}(I+T)=H$. But I couldn't execute this idea. Other ideas will also be appreciated. Thanks in advance!!
Note that $$ I+T-\lambda I=T-(\lambda -1)I. $$ So $\lambda\in\sigma(I+T)\iff \lambda-1\in\sigma(T)$. In other words, $$ \sigma(I+T)=\{\lambda+1:\ \lambda\in\sigma(T)\}. $$ As $T$ is positive, $\sigma(T)\subset[0,\infty)$. Thus $\sigma(I+T)\subset [1,\infty)$. It follows that $0\not\in\sigma(I+T)$, so $I+T$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2352929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Why does the empty set matter? I'm trying to understand why the empty set is a useful tool for mathematicians. Are there any nontrivial theorems that could only be achieved by the existence of the empty set, or is the recognition of the empty set just a conventional standard that mathematicians have adopted?
The same reason it's a useful tool in everyday life. There are lots of questions that ask for sets as answers, like...
* Which crew members are still aboard the airplane?
* What dishes are in the sink and need to be put away?
* What problems does this piece of software have that must be fixed before we can release it?
* What songs have you practiced on the piano lately?
And sometimes the answer to one of these questions is, "There aren't any." That's the empty set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove the following! $$a_0(x)\frac{d^2y}{dx^2}+a_1(x)\frac{dy}{dx}+a_2(x)y=0$$ A) Let $f_1$ and $f_2$ be two solutions of the above differential equation. Show that if $f_1$ and $f_2$ are linearly independent on $a \leq x \leq b$, and $A_1$, $A_2$, $B_1$ and $B_2$ are constants such that $A_1B_2-A_2B_1\neq 0$, then the solutions $A_1f_1+A_2f_2$ and $B_1f_1+B_2f_2$ are also linearly independent on $a \leq x \leq b$. My work is below. I assume that the solutions of the differential equation are linearly dependent; then we can write them as follows $$A_1f_1+A_2f_2=0$$ $$\frac{f_1}{f_2}=-\frac{A_1}{A_2}$$ $$B_1f_1+B_2f_2=0$$ $$B_1 f'_1+B_2 f'_2=0$$ $$\frac{f'_1}{f'_2}=-\frac{B_1}{B_2}$$ Since the two solutions are linearly independent, their Wronskian is not zero! $$W[f_1(x),f_2(x)]=f_1f_2'-f_2f_1'\neq 0$$ $W(x)\neq 0$ therefore $ W'(x)\neq 0$ $$W(x)[f_1,f_2]=(-A_1)(-B_2)-(A_2)(B_1)$$ $$(-A_1)(-B_2)-(A_2)(B_1)\neq 0$$ What I do is very foolish. Can someone propose a proper way of doing things? B) Let $\{f_1,f_2\}$ be a set of two solutions of the above differential equation and let $\{g_1,g_2\}$ be another such set; show that the Wronskians satisfy $W[f_1(x),f_2(x)]=cW[g_1(x),g_2(x)]$ for some constant $c\neq 0$. Since $f_1$ and $f_2$ are solutions, $$f_1(a_0f_2''+a_1f_2'+a_2f_2)-f_2(a_0f_1''+a_1f_1'+a_2f_1)=0$$ $$a_0(f_1f_2''-f_2f_1'')+a_1(f_1f_2'-f_2f_1')+a_2(f_2f_1-f_1f_2)=0$$ $$a_0W'[f_1(x),f_2(x)]+a_1W[f_1(x),f_2(x)]=0$$ $$W'[f_1(x),f_2(x)]=-\frac{a_1}{a_0}W[f_1(x),f_2(x)]$$ $$W'[g_1(x),g_2(x)]=-\frac{a_1}{a_0}W[g_1(x),g_2(x)]$$ $$\int\frac{dW[g_1(x),g_2(x)]}{W[g_1(x),g_2(x)]}=\int \frac{dW[f_1(x),f_2(x)]}{W[f_1(x),f_2(x)]}$$ $$W[f_1(x),f_2(x)]=cW[g_1(x),g_2(x)]$$ Totally stuck! This is bad. I can't even proceed from the question! I hope someone can help me with this question.
This is pure linear algebra: If $f_1$ and $f_2$ are linearly independent vectors in some vector space and $ad-bc\ne0$ then $g_1:=a f_1+b f_2$ and $g_2:=c f_1+d f_2$ are again linearly independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Expansion coefficients Suppose that we are given the function $f(x)$ in the following product form: $$f(x) = \prod_{k = -K}^K (1-a^k x)\,,$$ where $a$ is some real number. I would like to find the expansion coefficients $c_n$ such that: $$f(x) = \sum_{n = 0}^{2K+1} c_n x^n\,.$$ A closed form solution for $c_n$, or at least a relation between the coefficients $c_n$ (e.g. between $c_n$ and $c_{n+1}$), would be great!
Let the function $f_n(x)$ be given by $$ f_n(x) = \prod_{k=-n}^{n} \left( 1 - a^k x \right) $$ Since it is clear that it is a polynomial of degree $2 n +1$, it can be expressed as: $$ f_n(x) = \sum_{k=0}^{2n+1} c_{n,k} x^k $$ with some yet unknown coefficients $c_{n,k}$. For these coefficients it is easy to see that $c_{n,0}=1$ and $c_{n,2n+1}=-1$. More generally one could show that $c_{n,k} = -c_{n,2n+1-k}$. The functions for different values of $n$ are related by: $$ f_n(x) = f_{n-1}(x) \cdot \left(1 - a^n x\right)\left(1-a^{-n}x\right) = f_{n-1} \cdot \left[1 - \left(a^n + a^{-n}\right)x + x^2\right] $$ If we now substitute the expansion into this expression and group the terms with the same power of $x$ on both the left and right side, we can find a recurrence relation between the coefficients $c_{n,k}$ of successive functions: $$ c_{n,k} = c_{n-1,k} - \left(a^n + a^{-n}\right) c_{n-1,k-1} + c_{n-1,k-2} $$ Together with the base case $c_{0,0} = 1$, $c_{0,1} = -1$ (since $f_0(x) = 1 - x$), and $c_{0,k} = 0$ otherwise, this completely specifies the coefficients, and one could set up a program to evaluate them, because in general a simple and compact expression for them might not exist. In this particular case, however, there is such a "simple" expression: $$ c_{n,k} =\frac{\prod_{i=0}^{k-1} \left(a^{2n+1} - a^i \right)}{a^{k n}\prod_{i=1}^{k} \left(1 - a^i \right)} $$ with $0 \leq k \leq 2n+1$, where an empty product is by definition unity. By rearranging the limits of the products a few other but equivalent expressions exist. Deriving them is a lot of work so I simply presented them. The fact that they are correct, however, is a lot easier to show. For this one first checks the base case $n=0$ (the formula gives $c_{0,0}=1$ and $c_{0,1}=-1$). From there we only need to show that the expression satisfies the recurrence relation shown above, and the correctness follows from induction. I leave that proof as an exercise, but give the following hint. In the recurrence relation there are three coefficients on the right hand side for the same value of $n$. If you compare the coefficients $c_{n-1,k}$, $c_{n-1,k-1}$, and $c_{n-1,k-2}$ there are quite a few factors from the numerator and denominator that they have in common and which also appear in $c_{n,k}$. The simplest way to see such a thing is to write the recurrence relation out for some chosen values of $n$ and $k$. As another remark, consider $c_{n,k}$ for $k>n$: the number of factors in the numerator and denominator keeps on growing, which appears to lead to very long expressions. This is however not the case, as can be seen if you write all the factors out for some particular values of $n$ and $k$.
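A sketch of such a program, comparing the recurrence against a direct expansion (my addition; SymPy is assumed, and the test choices $a=2$, $n=2$ are arbitrary):

```python
# Build c_{n,k} from the recurrence and compare with expanding the product.
from functools import reduce
import sympy as sp

x = sp.symbols('x')
a = sp.Rational(2)                             # sample value of a

def coeffs_by_recurrence(n):
    c = {0: sp.Integer(1), 1: sp.Integer(-1)}  # base case: f_0(x) = 1 - x
    for m in range(1, n + 1):
        s = a**m + a**(-m)
        c = {k: c.get(k, 0) - s*c.get(k - 1, 0) + c.get(k - 2, 0)
             for k in range(2*m + 2)}
    return [sp.simplify(c[k]) for k in range(2*n + 2)]

n = 2
f = reduce(lambda p, q: p*q, [1 - a**k * x for k in range(-n, n + 1)])
direct = sp.Poly(sp.expand(f), x).all_coeffs()[::-1]   # lowest power first
print(coeffs_by_recurrence(n))
print([sp.simplify(t) for t in direct])                # identical lists
```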
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Convergence of $\sum_{n=1}^\infty \cos(\pi n^2)(\sqrt{n+1}-\sqrt{n})$ I want to test the convergence of $$\sum_{n=1}^\infty \cos(\pi n^2)(\sqrt{n+1}-\sqrt{n})$$ First of all, $\cos(\pi n^2)=-1$ if $n$ is odd, and $\cos(\pi n^2)=1$ if $n$ is even. That is, $\cos(\pi n^2)=(-1)^n$. So the summation reduces to $$\sum_{n=1}^\infty (-1)^n(\sqrt{n+1}-\sqrt{n})$$ and I don't know what to do from here. I tried the ratio test, root test and comparison test, got nothing. And I don't feel it would converge, since $$\sum_{n=1}^\infty (\sqrt{n+1}-\sqrt{n})=\sum_{n=1}^\infty\int_n^{n+1}\frac{dx}{2\sqrt x}=\int_1^\infty\frac{dx}{2\sqrt x}=\infty$$ Maybe I can use this result but I don't know how, or is this approach wrong?
Apply alternating series test. You have $a_n:= \sqrt{n+1} - \sqrt n$ decreasing, nonnegative and converging to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Limit of quotient of inverse cdfs I am trying to obtain $$\lim_{x\to0}\frac{\Phi^{-1}(1-x)}{\Phi^{-1}(1-x/n)}$$ where $\Phi^{-1}$ is the inverse cdf of the standard normal distribution and $n>0$. As there is an indeterminate form ($\infty/\infty$), I am applying l'Hôpital's rule, but the resulting expression (I mean, of the derivatives of both the numerator and denominator) is of the form $\infty/\infty$ as well. Would you give me any advice on how to proceed?
Here is another much simpler way to solve. Note that, $$\frac{\partial \Phi^{-1}(x)}{\partial x} = \frac{1}{\phi(\Phi^{-1}(x))} \ \ \text{ and } \ \ \frac{\partial \Phi^{-1}(x/n)}{\partial x} = \frac{1}{n\phi(\Phi^{-1}(x/n))}$$ Also, $$\frac{\partial \phi(\Phi^{-1}(x))}{\partial x} = \frac{-\Phi^{-1}(x)\phi(\Phi^{-1}(x))}{\phi(\Phi^{-1}(x))} = -\Phi^{-1}(x) $$ $$ \frac{\partial \phi(\Phi^{-1}(x/n))}{\partial x} = \frac{-\Phi^{-1}(x/n)\phi(\Phi^{-1}(x/n))}{n\phi(\Phi^{-1}(x/n))} = -\frac{\Phi^{-1}(x/n)}{n}$$ $$\begin{align} L &= \lim_{x \rightarrow 0} \frac{\Phi^{-1}(1-x)}{\Phi^{-1}(1-x/n)} = \lim_{x \rightarrow 0} \frac{-\Phi^{-1}(x)}{-\Phi^{-1}(x/n)} = \color{red}{\lim_{x \rightarrow 0} \frac{\Phi^{-1}(x)}{\Phi^{-1}(x/n)}} = \frac{\rightarrow -\infty}{\rightarrow -\infty}\\\\ &= \lim_{x \rightarrow 0} \frac{\frac{\partial \Phi^{-1}(x)}{\partial x}}{\frac{\partial \Phi^{-1}(x/n)}{\partial x}} = \lim_{x \rightarrow 0} \frac{n\phi(\Phi^{-1}(x/n))}{\phi(\Phi^{-1}(x))} = \frac{\rightarrow 0}{\rightarrow 0} \\\\ &= \lim_{x \rightarrow 0} \frac{n\frac{\partial \phi(\Phi^{-1}(x/n))}{\partial x}}{\frac{\partial \phi(\Phi^{-1}(x))}{\partial x}} = \color{red}{\lim_{x \rightarrow 0} \frac{\Phi^{-1}(x/n)}{\Phi^{-1}(x)}} = 1/L \end{align}$$ Therefore, $$L^2 = 1 \implies L = \pm 1$$ Clearly, $L$ cannot be negative, so $L=1$.
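The slow convergence is visible numerically, assuming SciPy (a sketch I am adding; $n=10$ is an arbitrary choice):

```python
# The ratio Phi^{-1}(1-x) / Phi^{-1}(1-x/n) creeps toward 1 as x -> 0.
from scipy.stats import norm

n = 10.0
for x in [1e-4, 1e-8, 1e-16, 1e-32]:
    print(x, norm.isf(x) / norm.isf(x/n))   # isf(x) = Phi^{-1}(1 - x)
```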
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Galois Theory books (in association with Abstract Algebra books) I know that there have been similar posts written, and I used them as a source for my question. I'm looking for a book on Galois Theory (construction of fields; algebraic extensions; classical Greek problems: constructions with ruler and compass; Galois extensions; applications: solvability of algebraic equations; the fundamental theorem of algebra; roots of unity; finite fields), which has the following characteristics:
* Logical order in the presentation of the theorems, definitions and generally of all concepts.
* Thorough analysis of each proof, example etc.
* Many examples and good exercises to solve.
* Also, to be suitable for self-study and for a first contact with the subject.
I should note that I don't like Stewart's and Rotman's books. What's your opinion of 1) Galois Theory by Baker, 2) Galois Theory by Roman, 3) Fields and Galois Theory by Howie, 4) Galois Theory by Jean-Pierre Escofier? And do you believe that it is better to read from a general abstract algebra book, such as Fraleigh's, Dummit and Foote's, or Gallian's? Thank you in advance.
I recommend Galois' Theory of Algebraic Equations, by Jean-Pierre Tignol (2nd edition, World Scientific, 2016).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
If $a + \frac{1}{a} = -1$, then the value of $(1-a+a^2)(1+a-a^2)$ is? If $a + \frac{1}{a} = -1$ then the value of $(1-a+a^2)(1+a-a^2)$ is? Ans. 4 What I have tried: \begin{align} a + \frac{1}{a} &= -1 \\ \implies a^2 + 1 &= -a \tag 1 \\ \end{align} which means \begin{align} (1-a+a^2)(1+a-a^2) &=(-2a)(-2a^2) \\ &=4a^{3} \end{align} as $1 + a^{2} = -a$ and $1 + a = -a^{2}$ from $(1)$.
Without solving the quadratic: $$ a^3 = - a^2 - a = a + 1 - a = 1 $$ which was found by using the equation $a^2 + a + 1 = 0$ twice. This means that $a$ is a non-real cube root of unity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How to conclude that the minimal polynomial is the characteristic? I am given the following matrix $$A=\begin{bmatrix} 0 & 0 & 4 & 1\\ 0& 0 & 1 & 4\\ 4 & 1 & 0 &0\\ 1 & 4 & 0 & 0 \end{bmatrix}$$ And I have to find the minimal polynomial of the matrix. The characteristic polynomial is $$K(\lambda)=-(\lambda -5)(\lambda +5)(\lambda-3)(\lambda +3)$$ The minimal polynomial $m(\lambda)$ divides the characteristic polynomial. I know that here the minimal polynomial is the characteristic polynomial, but how do I eliminate the possibilities of the linear, quadratic and cubic factors in the polynomial? When do I know the minimal polynomial is actually the characteristic polynomial?
In this case you can do easily without the characteristic polynomial. It is easy to see that even powers of $A$ will have their nonzero entries in the $2\times2$ blocks at the top left and bottom right, and the odd powers of $A$ have them (like $A$ itself) in the bottom left and top right $2\times2$ blocks. You are looking for a power$~A^k$ that equals a linear combination of lower powers, and by what I just remarked the exponents of the contributing lower powers will have the same parity as$~k$. Since $A$ is nonsingular a minimal polynomial with only odd degree terms is not possible (it would have $0$ as a root), so the smallest $k$ for which a relation exists will be even. Also you know that the minimal polynomial has degree at most$~4$, the size of the matrix. So all there is to it is compute $A^0=I$, $A^2$ and $A^4$ and find a linear combination. Concretely $A^2$ has as top left block $B=(\begin{smallmatrix}17&8\\8&17\end{smallmatrix})$ (repeated at the bottom right), and $A^4$ has as as top left block $B^2=(\begin{smallmatrix}353&272\\272&353\end{smallmatrix})$. Now $B^2$ minus $272/8=34$ times $B$ is a multiple of the identity, or you can apply the Cayley-Hamilton theorem for $B$; either way you find $B^2-34B+225I_2=0$ whence $A^4-34A^2+225I_4=0$, and $X^4-34X^2+225$ is the minimal polynomial you were after.
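Both claims are cheap to verify numerically, assuming NumPy (a sketch I am adding):

```python
# Check A^4 - 34 A^2 + 225 I = 0, and that {I, A, A^2, A^3} is independent,
# so no monic polynomial of degree <= 3 can annihilate A.
import numpy as np

A = np.array([[0, 0, 4, 1],
              [0, 0, 1, 4],
              [4, 1, 0, 0],
              [1, 4, 0, 0]], dtype=float)
A2 = A @ A
print(np.allclose(A2 @ A2 - 34*A2 + 225*np.eye(4), 0))   # True

M = np.column_stack([np.eye(4).ravel(), A.ravel(),
                     A2.ravel(), (A2 @ A).ravel()])
print(np.linalg.matrix_rank(M))                          # 4: independent
```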
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving determinant is zero Let $n\ge 2$ and $A=[\overset{\to}{a_1},\overset{\to}{a_2},\ldots,\overset{\to}{a_n}]$ an $n\times n$ matrix such that there exists $i\neq j$ such that $$\overset{\to}{a_j}=k\overset{\to}{a_i}$$ where $k\neq 0$. Show that $\det(A)=0$. How would I approach this proof or solve it?
If $a_j=ka_i$ for some $k\neq 0$ then $a_i$ and $a_j$ are linearly dependent. This implies that the rank of the matrix is less than $n$; thus if $A$ represents a linear operator it cannot be injective, hence it is not invertible and $\det(A)=0$. P.S.: I'm not sure that this answer will be useful for you; I took a little "roundabout" when talking about operator representation. In any case the above assumes that you know that if $A$ is not invertible then its determinant is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2353921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum: $\sum\limits_{n=0}^\infty \frac{n!}{(2n)!}$ I'm struggling with the following sum: $$\sum_{n=0}^\infty \frac{n!}{(2n)!}$$ I know that the final result will use the error function, but will not use any other non-elementary functions. I'm fairly sure that it doesn't telescope, and I'm not even sure how to get $\operatorname {erf}$ out of that. Can somebody please give me a hint? No full answers, please.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{n = 0}^{\infty}{n! \over \pars{2n}!} & = 1 + \sum_{n = 1}^{\infty}{\Gamma\pars{n} \over \pars{n - 1}!}\, {\Gamma\pars{n + 1} \over \Gamma\pars{2n + 1}} \\[5mm] & = 1 + \sum_{n = 0}^{\infty}{1 \over n!}\,\ \overbrace{% {\Gamma\pars{n + 1}\Gamma\pars{n + 2} \over \Gamma\pars{2n + 3}}} ^{\ds{\substack{\ds{=\ \mrm{B}\pars{n + 1,n + 2}.}\\[1mm] \ds{\mrm{B}:\ Beta\ Function}}}} \\[5mm] & = 1 + \sum_{n = 0}^{\infty}{1 \over n!}\ \overbrace{% \int_{0}^{1}t^{n}\pars{1 - t}^{n + 1}\,\dd t}^{\ds{\mrm{B}\pars{n + 1,n + 2}}} \\[5mm] & = 1 + \int_{0}^{1}\pars{1 - t}\sum_{n = 0}^{\infty} {\bracks{t\pars{1 - t}}^{\,n} \over n!}\,\dd t \\[5mm] & = 1 + \int_{0}^{1}\pars{1 - t}\expo{t\,\pars{1 - t}}\,\dd t \\[5mm] & = 1 + \int_{-1/2}^{1/2}\pars{{1 \over 2} - t}\exp\pars{{1 \over 4} - t^{2}}\,\dd t \\[5mm] & = 1 + \expo{1/4}\ \overbrace{\int_{0}^{1/2}\expo{-t^{2}}\,\dd t} ^{\ds{{1 \over 2}\,\root{\pi}\,\mrm{erf}\pars{1 \over 2}}}\qquad \pars{~\mrm{erf}:\ Error\ Function~} \\[5mm] & = \bbx{1 + {1 \over 2}\,\root{\pi}\expo{1/4}\,\mrm{erf}\pars{1 \over 2}} \approx 1.5923 \end{align}
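Numerically, with mpmath (an added check; 60 terms is far more than needed since the terms decay factorially):

```python
# Partial sums versus the boxed closed form 1 + (sqrt(pi)/2) e^{1/4} erf(1/2).
from mpmath import mp, factorial, sqrt, pi, exp, erf, mpf

mp.dps = 30
s = sum(factorial(n) / factorial(2*n) for n in range(60))
closed = 1 + sqrt(pi)/2 * exp(mpf(1)/4) * erf(mpf(1)/2)
print(s)        # 1.59228979...
print(closed)   # identical to working precision
```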
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Existence of random variable given first $k$ moments A sequence of real numbers $\{m_k\}$ is the list of moments of some real random variable if and only if the infinite Hankel matrix $$\left(\begin{matrix} m_0 & m_1 & m_2 & \cdots \\ m_1 & m_2 & m_3 & \cdots \\ m_2 & m_3 & m_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ \end{matrix}\right)$$ is positive definite. (Source: https://en.wikipedia.org/wiki/Hamburger_moment_problem) My question is, given only the first $k$ moments, is it sufficient that the top left $k \times k$ minor of the Hankel matrix be positive definite for there to exist a real random variable with those first $k$ moments? In other words, can a $k \times k$ positive definite Hankel matrix always be extended to an infinite positive definite Hankel matrix?
Yes, this works. There should really be an easy direct argument, but all I can think of right now is the following: The finite moment problem $\int x^n\, d\mu(x)=m_n$, $n=0,1,\ldots , k$, can be solved in the same way as the full problem $n\ge 0$. Namely, run Gram-Schmidt on $1,x,\ldots , x^N$ (with $2N=k$); the orthogonal polynomials will satisfy a three term recurrence $$ a_n p_{n+1} + a_{n-1}p_{n-1} + b_n p_n = xp_n , $$ and the spectral measures of the associated Jacobi matrix will solve the moment problem. In particular, you can extend to the half line $n\ge 1$ by just making up coefficients $a_n,b_n$ for $n\ge N$ at will, and any such measure will have the given moments $m_0,\ldots , m_k$ (because these only depend on the first coefficients). Its subsequent moments will give you the desired extension. In fact, there is a description of all solutions to a finite moment problem (sometimes called the Nevanlinna parametrization), which more or less works like this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Intuitive reason why $\sqrt[n]n\to 1$ as $n\to\infty$? We are aware of the limit $$ \lim_{n\to\infty}\sqrt[n]n = 1; $$ is there any geometric or otherwise intuitive reason to see why this limit holds? Edit: I am adding some context, since this question was previously put on-hold, and I think one of the main reasons was that it was poorly motivated. From theorem 8.1 of Baby Rudin, suppose the series $$ \sum_{n=0}^\infty c_nx^n $$ converges for $|x|<R$, and define $$ f(x) = \sum_{n=0}^\infty c_nx^n \qquad (|x|<R). \tag{1} $$ Among other conclusions, the function $f$ is differentiable in $(-R,R)$, and $$ f'(x) = \sum_{n=0}^\infty nc_n x^{n-1} \qquad (|x|<R). \tag{2} $$ Rudin uses the fact that $\sqrt[n]n\to 1$ as $n\to\infty$ to justify that the series in $(1)$ and the series in $(2)$ have the same radius of convergence. I recognized the limit, but it is just such a nice combination of $n$ and the $n$th-root, that I thought there ought to be some nice intuitive way to understand it, hence this question.
Here the issue is that $n \rightarrow \infty$, but for any fixed $x > 0$ we have $x^{1/n} \rightarrow 1$. So to consider $n^{1/n}$ you have to ask which "wins": the $n$ at the base or the $1/n$ in the exponent. To think about this, it might help to compare to, say, $(2^n)^{1/n}$. This tends to (and is equal to) $2$. The exponent of $1/n$ has the power to take a huge number like $2^n$ and reduce it to a constant. Since $n$ is much, much smaller than $2^n$, you might expect then that the power of $1/n$ would "win" in the end, giving a result of $1$. This is not a proof, but it gives the right intuition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 15, "answer_id": 1 }
Why don't previous events affect the probability of (say) a coin showing tails? Why doesn't a previous event affect the probability of (say) a coin showing tails? Let's say I have a fair and unbiased coin with two sides, heads and tails. The first time I toss it, the probabilities of both events are equal to $\frac{1}{2}$. This much is intuitive and clear to me. Now suppose that I toss it $1000000000$ times and the scores are: $501000000$ heads, $499000000$ tails. Now, for the $1001000000^{th}$ toss, shouldn't the probability of a tail coming up be greater than that of heads showing up? I have seen many books which say that even for the $1001000000^{th}$ toss, the probabilities of both events are equal to $\frac{1}{2}$. This seems wrong to me, since the same books affirm that if a coin is tossed a large number of times, the quantity $\frac{\text{heads}}{\text{tails}}$ will approach $1$. I know this is very elementary and naive, yet I have only superficially studied probability and I hope you all will bear with me. My objections to some of the top-voted answers: "It isn't that future flips compensate for the imbalance, it is that there are so many of them it doesn't matter." I don't get this statement. What exactly does the second sentence mean? Moreover, if what you said is true, then the following comment by a user should be wrong: "Law of large numbers." So these seem to contradict each other, I feel. Please bear with my lack of knowledge.
The assumption for a coin is that is has no memory. That means that the chance of heads is the same on every toss. For a fair coin, that chance is $\frac 12$ regardless of the history. If you toss $100$ times and get heads every time (very unlikely, but it could happen) the most probable event after a million tosses (including the $100$ you already did) is $500050$ heads and $499950$ tails. It isn't that future flips compensate for the imbalance, it is that there are so many of them it doesn't matter. Look how close the head/tail ratio would be to $1$ at that point.
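A short simulation in the same spirit (a sketch, assuming NumPy; the seed is arbitrary):
import numpy as np
rng = np.random.default_rng(42)
head_start = 100  # pretend the first 100 tosses were all heads
for extra in [10_000, 1_000_000]:
    heads = head_start + rng.integers(0, 2, extra).sum()
    tails = extra + head_start - heads
    print(extra + head_start, heads / tails)  # the head/tail ratio drifts toward 1 anyway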
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
To find a limit involving integral How to find the following limit : $\lim _{x \to \infty} \dfrac 1 x \int_0^x \dfrac {dt}{1+x^2 \cos^2 t}$ ? I am not even sure whether the limit exists or not. I tried applying L'Hospital, but then in the numerator we have differentiation under the integral sign, and the derivative comes out to be a lot messier than the original integral. Please help. Thanks in advance
For $x > 2\pi$ there exists $n \ge 1$ such that $x \in \,]2n\pi, 2(n+1)\pi]$. Since the integrand is positive and $x > 2n\pi$, $$ 0 \leq \frac{1}{x}\int_0^x\frac{dt}{1+x^2\cos^2(t)} \leq \frac{1}{2n\pi}\int_0^{2(n+1)\pi}\frac{dt}{1+(2n\pi)^2\cos^2(t)}$$ $$ = \frac{1}{2n\pi}\sum_{k=0}^n\int_{2k\pi}^{2(k+1)\pi}\frac{dt}{1+(2n\pi)^2\cos^2(t)}$$ $$ = \frac{1}{2n\pi}\sum_{k=0}^n\int_0^{2\pi}\frac{dt}{1+(2n\pi)^2\cos^2(t)}\qquad\text{(periodicity)}$$ $$ = \frac{1}{2n\pi}\sum_{k=0}^n 4\int_0^{\pi/2}\frac{dt}{1+(2n\pi)^2\cos^2(t)}\qquad\text{(symmetry of }\cos^2\text{)}$$ $$ = \frac{2(n+1)}{n\pi}\int_0^{\pi/2}\frac{dt}{1+(2n\pi)^2\cos^2(t)}.$$ Since $\cos$ is concave on $[0,\pi/2]$ we have $\cos t \ge 1-\frac{2t}{\pi}$ there, so substituting $u=1-\frac{2t}{\pi}$, $$ \int_0^{\pi/2}\frac{dt}{1+(2n\pi)^2\cos^2(t)} \leq \frac{\pi}{2}\int_0^{1}\frac{du}{1+(2n\pi)^2u^2} \leq \frac{\pi}{2}\cdot\frac{\pi/2}{2n\pi} = \frac{\pi}{8n},$$ and therefore $$\frac{1}{x}\int_0^x\frac{dt}{1+x^2\cos^2(t)} \leq \frac{2(n+1)}{n\pi}\cdot\frac{\pi}{8n} = \frac{n+1}{4n^2} \to 0$$ as $x\to\infty$, since then $n\to\infty$.
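A numeric sanity check (a sketch, assuming NumPy). Integrating over whole periods gives the closed form $\frac{1}{2\pi}\int_0^{2\pi}\frac{dt}{1+x^2\cos^2 t}=\frac{1}{\sqrt{1+x^2}}$, so the averaged integral should decay roughly like $1/x$:
import numpy as np
for x in [10.0, 50.0, 200.0]:
    t = np.linspace(0.0, x, 400_000)
    avg = (1.0 / (1.0 + x**2 * np.cos(t) ** 2)).mean()  # approximates (1/x) * integral over [0, x]
    print(x, avg, 1.0 / np.sqrt(1.0 + x**2))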
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Boolean representation of logical statements of more than two extreme conditions I had a question to determine the highest, the lowest and the middle paid of three employees. I tried to solve the problem using logical values T or F, then getting the connection between the truth tables, but I got stuck, since the first statement concerns the highest and the second concerns the lowest paid, and there is a hole about the middle one that I couldn't represent using truth values, since not-highest may be lowest or middle and vice versa. The question is below, from Rosen's discrete math book. Steve would like to determine the relative salaries of three coworkers using two facts. First, he knows that if Fred is not the highest paid of the three, then Janice is. Second, he knows that if Janice is not the lowest paid, then Maggie is paid the most. Is it possible to determine the relative salaries of Fred, Maggie, and Janice from what Steve knows? If so, who is paid the most and who the least? Explain your reasoning.
Steve would like to determine the relative salaries of three coworkers using two facts. First, he knows that if Fred is not the highest paid of the three, then Janice is. $$(F{\lt}J\vee F{\lt}M)\to (F{<}M{<}J\vee M{<}F{<}J)\tag 1$$ Clearly that means that: $$F{<}J\to (F{<}M{<}J\vee M{<}F{<}J) \tag{1.1}$$ ... and, less obviously, that: $$F{<}M\to (F{<}M{<}J)\tag {1.2}$$ Second, he knows that if Janice is not the lowest paid, then Maggie is paid the most. $$(F{<}J\vee M{<}J)\to (F{<}J{<}M\vee J{<}F{<}M) \tag 2$$ Likewise, this implies that: $$F{<}J\to F{<}J{<}M \tag{2.1}$$ ... and also that: $$M{<}J\to \bot\tag{2.2}$$ Is it possible to determine the relative salaries of Fred, Maggie, and Janice from what Steve knows? If so, who is paid the most and who the least? Explain your reasoning. Fred cannot be paid less than Janice (1.1 and 2.1 contradict), Fred cannot be paid less than Maggie (1.2 and 2.1 contradict), and Maggie cannot be paid less than Janice (2.2).   Which means Fred is paid the most, and Janice the least. $$J{<}M{<}F\tag{done}$$
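The deduction is small enough to brute-force as a sanity check (a sketch; plain Python, encoding a salary order as a tuple from lowest to highest paid):
from itertools import permutations
def consistent(order):
    lowest, highest = order[0], order[-1]
    fact1 = highest in ('F', 'J')            # Fred not highest  =>  Janice highest
    fact2 = lowest == 'J' or highest == 'M'  # Janice not lowest =>  Maggie highest
    return fact1 and fact2
print([o for o in permutations('FJM') if consistent(o)])  # [('J', 'M', 'F')]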
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
On the evaluation of a limit of a definite integral Why is it that $$ \lim_{\epsilon\to 0} \, \frac{2}{\pi} \int_0^\epsilon \frac{f(x)}{\sqrt{\epsilon^2-x^2}} \, \mathrm{d}x = f(0) \, ? $$ In the particular case when $f(x) = c$ is a constant, the identity follows forthwith. Is it possible to show that this is true for an arbitrary real valued function $f(x)$? Thanks Best fede
For any $\varepsilon>0$ $$ \frac{2}{\pi}\int_{0}^{\varepsilon}\frac{f(x)\,dx}{\sqrt{\varepsilon^2-x^2}}\stackrel{x\mapsto \varepsilon z}{=}\int_{0}^{1}f(\varepsilon z)\frac{2}{\pi\sqrt{1-z^2}}\,dz \tag{1}$$ and we have $\int_{0}^{1}\frac{2\,dz}{\pi\sqrt{1-z^2}}=1$. In particular, if $\lim_{u\to 0^+} f(u)$ exists then $$ \lim_{\varepsilon\to 0^+}\frac{2}{\pi}\int_{0}^{\varepsilon}\frac{f(x)\,dx}{\sqrt{\varepsilon^2-x^2}} = \lim_{u\to 0^+} f(u) \tag{2}$$ by the dominated convergence theorem.
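A numeric check of $(2)$ (a sketch, assuming SciPy; the further substitution $z=\sin\theta$ removes the endpoint singularity in $(1)$):
import numpy as np
from scipy.integrate import quad
f = np.cos  # any f continuous at 0, with f(0) = 1
for eps in [1.0, 0.1, 0.001]:
    val = quad(lambda th: f(eps * np.sin(th)) * 2 / np.pi, 0, np.pi / 2)[0]
    print(eps, val)  # tends to 1 = f(0)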
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
using Laplace transform for advection equation I use Laplace transform to solve an advection-diffusion equation with given boundary and initial conditions. I am stuck on the special case that only advection is considered. The advection equation is, $$ \frac {\partial{T}}{\partial{t}}+u\frac{\partial{T}}{\partial{x}}=0 $$ with initial condition $T(x,t=0)$ and boundary condition $T=T_0$ at $x=0$. Using Laplace transform, $$ s\overline{T}-T(x,t=0)+u\frac{\partial{\overline{T}}}{\partial{x}}=0 $$ The analytical solution in Laplace domain is, $$ \overline{T}=T_0(e^{-sx/u}+T(x,t=0)/s) $$ And the solution in real domain is thus, $$ T=T_0(\delta(t-x/u)+T(x,t=0)) $$ However, the solution seems incorrect. I think the correct solution should be a stepwise function. For $x<ut$,$T=T_0$, and for $x>ut$,$T=T(x,t=0)$.Any comments are appreciated.
$$ s\overline{T}-T(x,t=0)+u\frac{\partial{\overline{T}}}{\partial{x}}=0 \qquad \text{is OK,} $$ with the transformed boundary condition $\overline{T}(0,s)=\frac{T_0}{s}$. The mistake is in the solving of this equation. HINT : To make it more clear, let $\begin{cases} f(x)=T(x,t=0) \quad\text{a given function,}\\ \overline{T}=y(x) \quad\text{for each } s \\ \text{Condition : } y(0)=\frac{T_0}{s} \end{cases} \quad\to\quad sy+u\frac{dy}{dx}=f(x) $ $$y=\frac{1}{u}e^{-\frac{s}{u}x} \left( \int_0^x f(\xi)e^{\frac{s}{u}\xi}d\xi +\frac{u}{s}T_0 \right)$$ Back to the original symbols : $$\overline{T}=\frac{1}{u}e^{-\frac{s}{u}x}\left(\int_0^x T(\xi,t=0)e^{\frac{s}{u}\xi}d\xi +\frac{u}{s}T_0\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2354946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to determine the value of a matrix element given the dominant eigenvalue and all other elements? I'm working on a population dynamics model and have a matrix of vital rates representing the survival and fecundity of different life stages of the animal which are set out in a 6 x 6 matrix: \begin{pmatrix} 0 & 0 & 0 & 0 & 3 & 4\\ 0.5 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.6 & 0 & 0 & 0 & 0\\ 0 & 0 & 0.7 & 0 & 0 & 0\\ 0 & 0 & 0 & 0.8 & 0 & 0\\ 0 & 0 & 0 & 0 & 0.9 & x \end{pmatrix} I also have a vector of initial population sizes: \begin{pmatrix} 10\\ 20\\ 25\\ 25\\ 70\\ 80 \end{pmatrix} I already have an estimate for the x element in the matrix which allows me to calculate the population growth rate or dominant eigenvalue lambda. I was wondering if it's possible to calculate what value should x take if I want a lambda of 1 which is of biological relevance to the growth rate.
I don't know if negative values of $x$ make sense for your model (perhaps not), but I got $x\simeq-0.22$. For that $x$, there are only two real eigenvalues, one of which is (very nearly) $1$; the other one is about $-0.85$.
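Here is how the value drops out (a sketch, assuming NumPy). Chasing the eigenvector equation $Av=v$ through the survival chain gives $v_5 = 0.5\cdot0.6\cdot0.7\cdot0.8\,v_1$ and $v_6 = 0.9 v_5/(1-x)$, and the fecundity row $3v_5+4v_6=v_1$ then pins down $x$:
import numpy as np
s = 0.5 * 0.6 * 0.7 * 0.8          # v5 = s * v1 = 0.168 v1
x = 1.0 - 4.0 * 0.9 * s / (1.0 - 3.0 * s)
print(x)                            # approx -0.2194
A = np.zeros((6, 6))
A[0, 4], A[0, 5] = 3.0, 4.0
A[1, 0], A[2, 1], A[3, 2], A[4, 3], A[5, 4] = 0.5, 0.6, 0.7, 0.8, 0.9
A[5, 5] = x
print(np.round(np.linalg.eigvals(A), 4))  # contains 1.0 and approx -0.85; the rest are complex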
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Linear operator norm equality Let $A:\mathbb R^n \to \mathbb R^m$ be a linear operator. How can I show this equality in the operator norm? $$\sup_{x\in \mathbb R^n\setminus\{0\}} \frac{||Ax||}{||x||} = \max_{x\in \mathbb R^n , ||x||=1}||Ax||$$ I've tried to use the fact that for every $x\in \mathbb R^n$ we have $\frac{x}{||x||}\in S^{n-1}$, but didn't understand what to do next. Thanks.
If $x\neq0$, then $\left\|\frac x{\|x\|}\right\|=1$, and therefore$$\frac{\|Ax\|}{\|x\|}=\frac1{\|x\|}\|Ax\|=\left\|A\left(\frac x{\|x\|}\right)\right\|\leqslant\sup_{\|x\|=1}\|Ax\|.$$ On the other hand, if $\|x\|=1$, then$$\|Ax\|=\frac{\|Ax\|}{\|x\|}\leqslant\sup_{x\neq0}\frac{\|Ax\|}{\|x\|}.$$ Taking suprema, the two quantities are equal. Finally, the supremum over the unit sphere is attained, i.e. it is a maximum: the sphere $\{x:\|x\|=1\}$ is compact (closed and bounded in $\mathbb R^n$) and $x\mapsto\|Ax\|$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $T^{m-1}v\neq 0$ but $T^mv=0$. Show that $v,Tv,\ldots,T^{m-1}v$ are linearly independent Let $T\in\mathcal L(V)$ and $T^{m-1}v\neq 0$ but $T^mv=0$ for some positive integer $m$ and some $v\in V$. Show that $v,Tv,\ldots,T^{m-1}v$ are linearly independent. I had written a proof but Im not sure if it is correct. And in the case it would be correct I dont know how to write it better and clearly. So I have two questions: * *It is the proof below correct? *If so, how I can write it better using the same ideas? The attempted proof: 1) If $T^{m-1} v$ would be linearly dependent of $T^{m-2} v$ then exists some $\lambda\neq 0$ such that $$T^{m-1}v=\lambda T^{m-2}v\implies T^mv=\lambda T(T^{m-2}v)=\lambda T^{m-1}v=0\implies \lambda=0$$ Then $T^{m-2}v$ is linearly independent of $T^{m-1}v$. 2) Now observe that $$\lambda_1v+\lambda_2Tv+\lambda_3T^2v=0\implies T^{m-2}(\lambda_1v+\lambda_2Tv+\lambda_3T^2v)=\lambda_1T^{m-2}v+\lambda_2T^{m-1}v=0$$ so $\lambda_1,\lambda_2=0$ as we had shown previously, so the original equation reduces to $$\lambda_3T^2v=0\implies \lambda_3=0$$ thus $v,Tv,T^2v$ are linearly independent. 3) Repeating recursively the analysis in 2) for longer lists of vectors of the form $v,Tv,\ldots,T^kv$ for $k< m$ we can show that the list $v,Tv,\ldots,T^{m-1}v$ is linearly independent.
More simply put: if you had a linear combination $c_0 v + c_1 T v + \ldots + c_{m-1} T^{m-1} v = 0$ with the $c_j$ not all zero, let $c_i$ be the first nonzero coefficient. Then $$ T^i v = - \sum_{j=i+1}^{m-1} (c_j/c_i) T^j v$$ Applying $T^{m-i-1}$ to both sides, $$T^{m-1} v = - \sum_{j=i+1}^{m-1} (c_j/c_i) T^{m+j-i-1} v$$ But all terms on the right are $0$, since $m+j-i-1 \ge m$ for every $j \ge i+1$ and $T^m v = 0$. So $T^{m-1} v = 0$, contradiction!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is a set of unit vectors uniquely determined by its distances? Let $X = \{x_1, \dots, x_n\}$ be a finite set of points in $\mathbb{R}^d$. We can associate to $X$ its multiset of distances $$ D_X := \{ \lVert x_i - x_j \rVert : 1 \le i,j \le n \} \qquad \text{(read as a multiset)} $$ where $\lVert \cdot \rVert$ denotes the Euclidean norm, and each pair $(i,j)$ counts once in the multiset, so that $\#D_X = n^2$. A natural question is to ask whether $X$ can be uniquely reconstructed from $D_X$ up to Euclidean isometry (translation, rotation, and reflection). The answer turns out to be no, and there are even infinitely many counterexamples in dimension $d=1$. However, suppose we restrict the points $x_i$ to lie on the unit sphere $S^{d-1} \subset \mathbb{R}^d$. Is this constraint sufficient to ensure that $X$ is uniquely determined by $D_X$ up to Euclidean isometry?
Even for $d = 2$ there are such sets of points. For $n = 4$, take the points at positions $\lbrace 0,1,2,5 \rbrace$ and $\lbrace 0,1,5,6 \rbrace \pmod 8$ among the $8$th roots of unity: the two configurations have the same multiset of distances but are not congruent. You can also specify larger homometric systems with equal sets of distances. For $n = 6$ you can find at least $5$, and for $n = 12$ at least $18$, homometric sets of points on the circle with equal sets of distances.
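A quick verification of the $n=4$ example (a sketch, assuming NumPy; the index sets are taken as positions among the 8th roots of unity):
import numpy as np
from itertools import combinations
from collections import Counter
def dists(idx):
    pts = [np.exp(2j * np.pi * k / 8) for k in idx]
    return Counter(round(abs(p - q), 9) for p, q in combinations(pts, 2))
print(dists([0, 1, 2, 5]) == dists([0, 1, 5, 6]))  # True: equal distance multisets
The two configurations are nevertheless non-congruent: their cyclic gap sequences around the circle are $(1,1,3,3)$ and $(1,4,1,2)$.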
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proof problem involving set theory and relations Problem: If $R_1$ is defined on $\mathbb{R}$ by the relation $R_1=\{(a,b):1+ab>0\ , a, b \in \mathbb{R}\}$, then prove that $(a,b) \in R_1$ and $(b,c) \in R_1 \implies (a,c) \in R_1$ is not true for all $a,b,c \in \mathbb{R}$. My attempt: $$(a,b) \in R_1 \implies 1+ab>0 \tag{1}$$ $$(b,c) \in R_1 \implies 1+bc>0 \tag{2}$$ $$(a,c) \in R_1 \implies 1+ac>0 \tag{3}$$ But (1) and (2) do not imply (3). Therefore $(a,b) \in R_1$ and $(b,c) \in R_1 \implies (a,c) \in R_1$ is not true for all $a,b,c \in \mathbb{R}$. My problem: Is this procedure correct. Also are there any alternate approaches to prove the same result?
That (1) and (2) do not imply (3) is what you were trying to prove - you cannot assume it midway through your proof. Put another way - just because you personally don't see a way to deduce (3) from (1) and (2) doesn't mean there isn't one. In general, to show that $A$ does not imply $B$, you must give an example of $A$ occurring without $B$. In this case, that means that you need to supply an example of $a,b,c \in \mathbb{R}$ so that $1 + ab > 0$ and $1 + bc > 0$ (that's $A$) but $1 + ac \leq 0$ (i.e., $B$ does not hold). The simplest approach is to choose an example that's the easiest to work with - I like using $0$ a lot. $a = 0$ or $c = 0$ won't work, because then $1 + ac$ will automatically be $1$. But we could try $b = 0$. Then we automatically get $1 + ab > 0$ and $1 + bc > 0$, so we just need to choose an $a$ and $c$ so that $1 + ac \leq 0$. $a = 1$ and $c = -1$ works, so my counterexample is $a = 1$, $b = 0$, and $c = -1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area under a never-continuous function I was thinking about the following function, infamous for being nowhere continuous: $$f(x\in\mathbb Q)=0$$ $$f(x\notin \mathbb Q)=1$$ How would I calculate the area under this "curve" from $x=0\to1$? Can it be done? If so, I suspect the result would be very interesting... it would tell us "what proportion of numbers are rational". But my intuition tells me it cannot be done, because of the lack of a limit anywhere in this function. Also, something tells me it should be zero, because when using a Riemann Sum, the input of each term will be rational as the interval is divided into rational subintervals. Any ideas?
The area under this curve is called its Lebesgue integral and is equal to $1$, corresponding to the intuition that most numbers in the interval $[0,1]$ are actually irrational. The Lebesgue integral is an extension of the Riemann integral to a wider class of functions (called "measurable functions"). In your example, $f$ is not Riemann-integrable so there is no hope to compute its integral using Riemann sums.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier transformation on $L^2(G)$. We define the Fourier transform on $L^1(\mathbb{R})$ and extend the definition to $L^2(\mathbb{R})$. But for a compact group $G$, we define the Fourier transform for $L^2(G)$, and there is no such definition for $L^1(G)$. My question is: why does such an unusual thing happen for groups? Is this because $L^2(G)$ is a Hilbert space? I want an answer from a deeper point of view.
If $G$ is a compact group and $\mu$ is the left-invariant Haar measure on $G$, then $\mu(G)$ is finite, so $L^2(G)\subset L^1(G)$ by Hölder's inequality and we can directly define the Fourier transform on $L^2(G)$ since all the integrals in question are defined. However, $L^2(\mathbb{R})$ is not a subset of $L^1(\mathbb{R})$, which is why it is necessary to extend the Fourier transform to $L^2(\mathbb{R})$ using some dense subset.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Where did I go wrong? Analyze the logical form of $\{n^2+n+1 | n \in \mathbb{N}\} \subseteq \{2n+1 | n \in \mathbb{N}\}$ This question is taken from Velleman's $\textit{How to Prove it}$. It is in the exercises section of 2.3, Question 1c: $\{n^2+n+1 | n \in \mathbb{N}\} \subseteq \{2n+1 | n \in \mathbb{N}\}$ My work is as follows, The statement is equivalent to $\forall x(x \in \{n^2+n+1 | n \in \mathbb{N}\} \to x \in \{2n+1 | n \in \mathbb{N}\})$ (definition of a subset) $x \in \{n^2+n+1 | n \in \mathbb{N}\} \equiv \exists n \in \mathbb{N}(x=n^2+n+1)$ and $x \in \{2n+1 | n \in \mathbb{N}\} \equiv \exists n \in \mathbb{N}(x = 2n+1)$ so the final expression ends up as, $\forall x (\exists n \in \mathbb{N}(x=n^2+n+1) \to \exists n \in \mathbb{N}(x=2n+1))$ The solution provided is $\forall n \in \mathbb{N} \: \exists m \in \mathbb{N}(n^2+n+1=2m+1)$ which makes perfect sense to me looking at it in retrospect. I guess the overarching concern that I have is the methodology involved in solving the question - my approach was to break the statement down into smaller parts and then to rewrite after interpreting each individual segment (not an unreasonable strategy IMO, that seems to be what Velleman has been advocating up to this point). Was that approach incorrectly applied in this situation? Or did I make a technical error that I am just not aware of? Thanks in advance for any help/insight that can be given!
There is no error. Your solution is correct too. Two things, however: * *Although what you wrote is correct, it is always a good idea not to use the same symbol ($n$, in your case) for two different purposes. So, I would have written$$\forall x (\exists n \in \mathbb{N}(x=n^2+n+1) \to \exists m \in \mathbb{N}(x=2m+1))$$ *The provided solution is also correct, but shorter and easier to understand because there is no need to use the symbol $x$. It's as if you had written “for every number, if it belongs to the first set, then it also belongs to the second one” and the proposed solution was “every element of the first set belongs to the second one”.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Proving uniform convergence on an unbounded interval Prove that the following series $$\sum_{n=1}^{\infty}\frac{nx}{1+n^2\log^2(n)x^2}$$ converges uniformly on $[\epsilon,\infty)$ for any $\epsilon>0.$ What I have done: The function $$f_n(x)=\frac{nx}{1+n^2\log^2(n)x^2}$$ is a decreasing function on $[1,\infty)$. So on $[1,\infty)$ its maximun value is $$f_n(1)=\frac{n}{1+n^2\log^2(n)}\leq\frac{n}{n+n^2\log^2(n)}=\frac{1}{1+n\log^2(n)}=M_n.$$ By Cauchy Condensation Test, $\sum M_n$ is convergent. Therefore by the Weierstrass-M Test $\sum f_n(x)$ is convergent on $[1,\infty)$. I am not entirely sure about the following statement. Please correct me if I am wrong: If we can prove that the $\sum f_n(x)$ is uniformly convergent on both $[\epsilon,1]$ and $[1,\infty)$, then it is uniformly convergent on $[\epsilon, \infty)$. But I don't know how to prove that $\sum f_n(x)$ is uniformly convergent on $[\epsilon,1]$. I need some help.
Over $(0,\infty)$ the function $f_n$ attains its maximum at $x=\frac1{n\log n}$, and its value there is $\frac1{2\log n}$; this point lies in $[\varepsilon,\infty)$ only if $\frac {1}{n\log n}\ge\varepsilon$. Of course, if $n$ is large enough, $\frac{1}{n\log n}<\varepsilon$, so $f_n$ is decreasing on $[\varepsilon,\infty)$ and its maximum there is $f_n(\varepsilon)$. But$$f_n(\varepsilon)=\frac{n\varepsilon}{1+n^2\log^2(n)\varepsilon^2}<\frac1{n\log^2(n)\varepsilon}.$$As you wrote, we can use the Cauchy condensation test here; the finitely many initial terms do not affect the Weierstrass $M$-test.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Definition of Continuity of Real Valued Functions Definition: Let $F$ be a real valued function defined on a subset $E$ of $\mathbb{R}$. We say that $F$ is continuous at a point $ x \in E$ iff for each $\epsilon > 0$, there is a $\delta > 0$, such that if $x' \in E$ and $|x'-x|<\delta$, then $|f(x') - f(x)| < \epsilon$. This definition is taken straight out of Royden-Fitzpatrick Real Analysis. My question is more related to the intuition behind this definition: If I take the following function $F$ defined on the natural numbers(which are a subset of $\mathbb{R}$ of course), for which $F(x) = x$, that is, the identity on the natural numbers but that treats them as a subset of $\mathbb{R}$. Now this function is continous at every point of $\mathbb{N}$. If we take $\epsilon \le 1$, then we can always take some $\delta < 1$. If we take an $\epsilon > 1$, then we can always take a respective $\delta < \epsilon$, but $\delta > $ the largest natural number smaller than epsilon. So, indeed this function is continuous. Is this supposed to happen, and can't we somehow use this definition to show continuity of functions which are intuitively discontinuous at some points. Is the key element here that we say that the function is continuous/discontinuous at $x$ as a point of a specific set $E$?
A foremost condition for evaluating the limit of a function $f:E\subseteq \Bbb R\to \Bbb R$ at some point, say $x_0\in \Bbb R$, is that $x_0$ must be a limit point of $E$. In the case where $E=\Bbb N$, the set has no limit points. So forget about testing continuity at some point in $\Bbb N$: we cannot even compute the limit at any point!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2355995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If $ a=\frac{\sqrt{x+2} + \sqrt {x-2}}{\sqrt{x+2} -\sqrt{x-2}}$ then what is the value of $a^2-ax$ is equal to If $$ a=\frac{\sqrt{x+2} + \sqrt {x-2}}{\sqrt{x+2} -\sqrt{x-2}}$$ then the value of $a^2-ax$ is equal to: a)2 b)1 c)0 d)-1 Ans. (d) My attempt: Rationalizing $a$ we get, $ x+ \sqrt {x^2-4}$ $a^2=(x+\sqrt{x^2-4)^2}=2x^2-4+2x\sqrt{x^2-4}$ Now, $a^2-ax=2x^2-4+2x\sqrt{x^2-4}-x^2-x\sqrt{x^2-4}=x^2+x\sqrt{x^2-4}-4=xa-4$ Why am I not getting the intended value?
$$\dfrac a1=\dfrac{\sqrt{x+2}+\sqrt{x-2}}{\sqrt{x+2}-\sqrt{x-2}},$$ which calls for componendo and dividendo: $$\dfrac{a+1}{a-1}=\dfrac{\sqrt{x+2}}{\sqrt{x-2}}$$ Squaring we get $$\dfrac{a^2+1+2a}{a^2+1-2a}=\dfrac{x+2}{x-2}$$ Again apply componendo and dividendo: $$\dfrac{a^2+1}{2a}=\dfrac x2$$ Now simplify: $a^2+1=ax$, i.e. $a^2-ax=-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 0 }
Term by term integration fourier series $s= \sigma +it$, for $-1<\sigma<0$, we have $$ \int_{0}^\infty \sum_{n=1}^\infty \frac{ \sin 2 n \pi x}{n x^{s+1}} \, dx = \sum_{n=1}^\infty \frac{1}{n} \int_0^{\infty}\frac{\sin 2n \pi x }{ x^{s+1}} \, dx $$ For the justification, the author writes, As $\sum_{n=1}^\infty \frac{\sin 2n\pi x }{ n \pi } $ is boundedly convergent, term by term integration over any finite range is permissible. It suffices to show that $$ \lim_{\lambda \rightarrow \infty} \sum_{n=1}^\infty \frac{1}{n} \int_\lambda^\infty \frac{ \sin 2 n \pi x }{ x^{s+1}} \, ds = 0 $$ I know the series is uniformly bounded, but what is meant in the bolded sentences? Why is term by term integration permissible over finite range? Also why is this sufficient? What I think: I believe there are two steps here. * *Let $f_n(x) = \frac{ \sin 2 n \pi x }{ nx^{s+1} } $. We first prove it for "finite range", $$ \int_0^{\lambda} \sum f_n \, dx = \sum \int_0^{\lambda} f_n \, dx $$ *If we show it for the finite case, it "suffices" to prove $$ \lim_{\lambda \rightarrow 0} \sum \int_0^{\lambda} f_n - \sum \int_0^{\infty} f_n = \lim_{\lambda \rightarrow \infty} \sum \int_{\lambda}^\infty f_n = 0 $$ EDIT: Proof 1: Let $g(x)=\frac{1}{x^{s+1}}, s_m(x) = \sum_1^m \frac{ \sin 2 n \pi x }{ n \pi }$ and $s(x) = \sum_1^\infty \frac{ \sin 2 n \pi x }{ n \pi }$ . It suffices to show equality for a fixed $s$, with $-1 < \sigma < 0$. Also, as $\lambda<\infty$, it suffices to show it holds for closed unit interval $[ k , k +1 ] $ for $k \in \mathbb{N}$. Consider $$ \int_0^{1} g s \, dx - \lim _{ m \rightarrow \infty} \int_0^{1} gs_m \, dx = \lim _{ m \rightarrow \infty} \int_{\delta}^{1 - \delta} + \lim _{ m \rightarrow \infty} \Big \{ \int_0^{\delta} + \int_{\delta}^{1 - \delta} g [s - s_m] \, dx \Big \} $$ For the first term, $g$ is bounded on $[ \delta, 1-\delta]$, $s_m$ converges uniformly on this interval, the term limits to $0$ as $m \rightarrow \infty$. For the second term, as $s_m$ is boundedly convergent in $\mathbb{R}$, and $$ \int_0^{\delta} + \int_{1- \delta} ^{1} \frac{1}{x^{\sigma +1 } }\, dx \le \frac{-x^{\sigma} }{\sigma} \Big|_0^{\delta} + \Big| _{1 - \delta }^{1 } \rightarrow 0 $$ as $\delta \rightarrow 0$. The integral also limits to $0$ as $m \rightarrow \infty$. Hence, we have equality for finite range. EDIT: Proof 2(Using DCT) Let $h(x) = \sum \frac{2 n\pi x}{ n} $, which is convergent to $\pi ( [x] -x + \frac{1}{2} )$. We apply Dominated Convergence Theorem. We consider the series, $s_m(x) = \sum_1^m \frac{2n \pi x}{nx^{x+1} }$. So for any finite interval $[0 , \lambda] $, we have $$ |s_m(x) | \le M \frac{1}{x^{s+1}} $$ Further, we have, from noting that \begin{align*} \int_0^{\lambda} \frac{1}{x^{s+1}} \,ds &= \frac{-1}{s x^s} \Big|_1^{\lambda} + \pi \int_0^1 \frac{\frac{1}{2}- x }{x^{s+1}} \\ &= C_1 + C_2 x^{-s} \Big|_0^1 + C_3 x^{-s+1} \Big|_0^1 < \infty \end{align*} as $-1 < \sigma < 0 $. Thus, we have a sequence of functions $s_m(x) \rightarrow s(x)$ such that it is dominated by an integrable function. By Dominated Convergence Theorem, we have $$ \int_0^{\lambda} \sum_1^\infty \frac{ \sin 2 n \pi x }{n \pi x^{s+1}} \, dx = \sum_1^\infty \frac{1}{n \pi }\int_0^{\lambda} \frac{\sin 2 n \pi x }{x^{s+1} } \, dx $$
Read again Titchmarsh, he wrote something else. Given what you know, I would sketch a proof of the functional equation : For $\Re(s) \in (0,1)$ $$\int_0^\infty \sin(nx) x^{-s-1}dx= n^{s}\int_0^\infty \sin(x) x^{-s-1}dx$$ Thus $$\eta(1-s) \int_0^\infty \sin(x) x^{-s-1}dx = \lim_{N \to \infty} \sum_{n=1}^N (-1)^{n+1} n^{s-1}\int_0^\infty \sin(x) x^{-s-1}dx \\=\lim_{N \to \infty} \int_0^\infty \left(\sum_{n=1}^N \frac{(-1)^{n+1}}{n}\sin(nx)\right) x^{-s-1}dx$$ Also by absolute convergence and convergence of the Fourier series in $L^2(0,2\pi)$ it is clear that $$\lim_{N \to \infty} \int_1^\infty \left(\sum_{n=1}^N \frac{(-1)^{n+1}}{n}\sin(nx)\right) x^{-s-1}dx = \int_1^\infty \left(\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}\sin(nx)\right) x^{-s-1}dx\\ = (2\pi)^{s} s(2^s-1) \zeta(s) \tag{1}$$ Thus we are left with the limit as $N \to \infty$ of $$\sum_{n=1}^N \frac{(-1)^{n+1}}{n} \int_0^1 \sin(nx) x^{-s-1}dx= \sum_{n=1}^N (-1)^{n+1}n^{s-1} \int_0^{n} \sin(x) x^{-s-1}dx \tag{2}$$ Since $\sum_{n=1}^\infty (2n-1)^{s-1}-(2n)^{s-1}$ converges absolutely as well as $\int_0^\infty |\sin(x) x^{-s-1}|dx$, we get that $(2)$ converges to $C$. Finally since $\int_0^1 \left(\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}\sin(nx)\right) x^{-s-1}dx=0$ we get that $C=0$ and hence $$\eta(1-s)\int_0^\infty \sin(x) x^{-s-1}dx = (2\pi)^{s} s(2^s-1) \zeta(s) \qquad \Re(s) \in (0,1)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find the vertex form of a cubic? In a calculus textbook, i am asked the following question: Find a cubic polynomial whose graph has horizontal tangents at (−2, 5) and (2, 3) A vertex on a function $f(x)$ is defined as a point where $f(x)' = 0$. So the slope needs to be 0, which fits the description given here. So i am being told to find the vertex form of a cubic. Further i'd like to generalize and call the two vertex points (M, S), (L, G). I understand how i'd get the proper x-coordinates for the vertices in the final function: I need to find the two places where the slope is $0$. So i need to control the x-intercepts of a cubic's derivative. I start by: $(x + M) * (x + L)$ which becomes: $x^2 + x*(M+L)+M*L$ I now compare with the derivative of a cubic in the form: $ax^3 + bx^2 + cx + d$: $3a*x^2 + 2b*x + c = x^2 + (M+L)*x+M*L$ . From this i conclude: $3a = 1$, $2b=(M+L)$, $c=M*L$, so, solving these: $a=1/3$, $b=\frac{L+M}{2}$, $c=M*L$. So, putting these values back in the standard form of a cubic gives us: $\frac{1}{3} * x^3 + \frac{L+M}{2} * x^2 + L*M*x + d$ And that's where i get stumped. This works but not really. If both $L$ and $M$ are positive, or both negative, the function starts giving wrong results. But the biggest problem is the fact that i have absoloutely no idea how i'd make this fit certain requirements for the $y$-values. Only thing i know is that substituting $x$ for $L$ should give me $G$. And substituting $x$ for $M$ should give me $S$. Any help is appreciated, have a good day!
$f(x) = ax^3 + bx^2+cx +d\\ f'(x) = 3ax^2 + 2bx + c$ We have some requirements for the stationary points. $f'(x) = 3a(x-2)(x+2)\\ f'(x) = 3ax^2 - 12a = 3ax^2 + 2bx + c$ Note, in your work above you assumed that the derivative was monic (leading coefficient equal to 1). This seems to be the cause of your troubles. $b = 0, c = -12 a\\ f(x)= ax^3 - 12ax + d$ Now fit your points to find $a, d$: $f(-2)= 16a + d=5\\ f(2)= -16a + d=3$
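Finishing the fit (a sketch, assuming NumPy): solving $16a+d=5$, $-16a+d=3$ gives $a=\tfrac1{16}$, $d=4$, and one can verify both the values and the horizontal tangents:
import numpy as np
a, d = np.linalg.solve([[16.0, 1.0], [-16.0, 1.0]], [5.0, 3.0])
f = lambda x: a * x**3 - 12 * a * x + d
fp = lambda x: 3 * a * x**2 - 12 * a
print(a, d, f(-2), fp(-2), f(2), fp(2))  # 0.0625 4.0 5.0 0.0 3.0 0.0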
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Make up a reasonable definition for the bipartite complement of a bipartite graph I am shooting from the hip here and seeing what sticks. I tried this definition below which I am not sure works. If it doesn't, please, suggest more accurate definitions. The reason I need this is that I want to be able to replace the degrees in $(4,3,3,3,3,3,3,2,2)$ (which is a degree sequence of a bipartite graph) with smaller numbers. Let $V$ be the set of vertices of a bipartite graph $G.$ Then $V$ is a union of two partite sets $X, Y$ and $\sum_{v_i \in X}deg(v_i) = \sum_{u_j \in Y}deg(u_j) = m.$ We can define graph $H = (A \cup B, F) = (\text {union of vertices, edges})$ to be bipartite complement of $G$ if $\sum_{d_i \in A}deg(d_i) = \sum_{e_j \in B}deg(e_j) = m.$ For example, suppose a bipartite(?) graph $T$ has partite sets of vertices represented by their degrees as $\{3, 4\}, \{2, 5\}.$ Then $T$'s bipartite complement has partite set of vertices represented by their degrees as $\{1, 6\}, \{1, 2, 4\}.$
What you've written really doesn't make sense on a few levels. First of all, your definition of a bipartite complement of a graph is literally just another bipartite graph with the same number of edges. Additionally, in the example in the last paragraph that you give, I don't see how $\{3, 4\}$, $\{2, 5\}$ can be construed to be the degrees of all the vertices of a graph, no less a bipartite one (there aren't even five vertices, so how can a vertex have degree 5?). Also, with your first example, I don't exactly know what you mean by "able to replace the degrees in $(4,3,3,3,3,3,3,2,2)$...with smaller numbers". Let me tell you what the ordinary definition of the bipartite complement of a bipartite graph is, and you can hopefully tell if this fits your needs. Let $G$ be a bipartite graph with parts $X$ and $Y$. Let $K(X, Y)$ be the complete bipartite graph with parts $X$ and $Y$. We define the bipartite complement of $G$ as $K(X, Y) - G$. That is, the bipartite complement of $G$ is the graph that has the same two parts as $G$ and has an edge between two vertices of different parts if and only if $G$ does not have an edge between those two vertices. This definition easily generalizes to $k$-partite graphs. For example, let $G = C_6$, which is a bipartite graph with parts $X$ and $Y$ (alternate vertices around the cycle). Then the bipartite complement of $G$, which I'll denote by $\overline{G}$, is composed of three disjoint edges, i.e. a perfect matching between $X$ and $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the number of triples (a, b, c) of positive integers such that the product $a.b.c=1000$ , and $a \leq b \leq c $? What is the number of triples (a, b, c) of positive integers such that the product $a.b.c=1000$, and $a \leq b \leq c$? My try: The prime factorization of $1000$ is $2^3\cdot 5^3$ $a\cdot b \cdot c = 2^3\cdot 5^3$ $a=2^{a_1}\cdot 5^{a_2}$ $b=2^{b_1}\cdot 5^{b_2}$ $c=2^{c_1}\cdot 5^{c_2}$ $abc=2^{a_1+b_1+c_1}\cdot 5^{a_2+b_2+c_2}=2^3\cdot 5^3 $ $a_1+b_1+c_1=3$ How many ways are there such that $a_1+b_1+c_1=3$? Stars and bars method: the number of ways to choose $2$ separators ($0$s) in a string of $5$ is ${5\choose 2 }=10$, so $N(a_1+b_1+c_1=3)=10$. Similarly, $N(a_2+b_2+c_2=3)=10$ $N(abc=1000)=10\cdot 10=\boxed{100}$ Is that okay? Please write down any notes.
Your computation of $N=10$ is correct and $100$ is the number of ordered triples that have product $1000$. You have failed to account for the condition that $a \le b \le c$. All of the unordered triples that have three distinct elements have shown up six times, so you have overcounted. Those that have two or three equal elements have been counted differently. Keep going.
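Brute force confirms the overcounting (a sketch; plain Python), if you want to check your final tally against it:
from itertools import product
divs = [d for d in range(1, 1001) if 1000 % d == 0]
ordered = [t for t in product(divs, repeat=3) if t[0] * t[1] * t[2] == 1000]
print(len(ordered))                                        # 100
print(len([t for t in ordered if t[0] <= t[1] <= t[2]]))   # the count with a <= b <= c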
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Holomorphic function equal to polynomial. I have this question Let $h$ an holomorphic function in $\mathbb{C}$ and suppose that $$|h(z+7) -3|^2\leq |z-1|^9 + 42$$ for all $z$ such that $|2z+63|>177$. Prove that $h$ is a polynomial. How can I attack this problem? I can't see how because I wanted to use the Liouville theorem, but it isn't bounded in all $\mathbb{C}$. Could anyone give me an explanation?
The meromorphic function $g(z) = h(z+7)/(z-1)^9$ is bounded outside some disk, so its singularity at $\infty$ is removable. Being a meromorphic function on the Riemann sphere, it is a rational function, and so $h(z)$ is a rational function with no poles in $\mathbb C$, i.e. a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Compactness- Euclidean metric Hi, could you help me to solve this question? Let $||.||$ be any norm on $\mathbb{R}^m$ and let $B = \{ x \in \mathbb{R}^m : ||x||≤ 1 \}$. Prove that $B$ is compact. Hint: It suffices to show that $B$ is closed and bounded with respect to the Euclidean metric.
The Euclidean metric is the one coming from the norm $||x||_2=\sqrt{x_1^2+x_2^2+...+x_m^2}$, i.e. $d_2(x,y)=||x-y||_2$. Now, in a finite dimensional space $X$ all norms are equivalent, thus in particular $||.||$ is equivalent to $||.||_2$, i.e. $\exists C_1,C_2>0 $ such that $C_1||x||_2 \leqslant||x|| \leqslant C_2||x||_2, \forall x \in X$. Now $B$ is bounded with respect to the norm $||.||$ (by its very definition), thus from the equivalence of norms it is bounded with respect to $||.||_2$. Also, let $x_n \in B$ be such that $x_n \rightarrow x$. We have that $||x_n|| \leqslant 1$ and also that $||x_n|| \rightarrow ||x||$ (the norm is a continuous function). Thus $||x|| \leqslant 1 \Rightarrow x \in B$, so $B$ is closed. We proved that $B$ is closed and bounded with respect to the Euclidean metric, hence it is compact by the Heine-Borel theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
proof that $n^4+22n^3+71n^2+218n+384$ is divisible by $24$ Let $n$ be a positive integer number. How do we show that the irreducible polynomial $n^4+22n^3+71n^2+218n+384$ is divisible by $24$ for all $n$?
Another way to do it is to note: $n^4+22n^3+71n^2+218n+384 \equiv n^4 - 2n^3 - n^2 + 2n \mod 8$. If $n \equiv 0, \pm 1, \pm 2, \pm3, 4 \mod 8$ then $n^4 - 2n^3 - n^2 + 2n$ equals, respectively, $0$, $1 \mp 2 - 1 \pm 2 = 0$, $16 \mp 16 - 4 \pm 4 \in \{0, 24\}$, $81 \mp 54 - 9 \pm 6 \in \{24, 120\}$, and $4^4 - 2\cdot 4^3 - 16 + 8 = 120$, all of which are $\equiv 0 \mod 8$. So $8$ divides $n^4+22n^3+71n^2+218n+384$, and $n^4+22n^3+71n^2+218n+384 \equiv n^4 + n^3 - n^2 -n \mod 3$. If $n \equiv 0, \pm 1 \mod 3$ then $n^4 + n^3 - n^2 - n \equiv 0, 1 \pm 1 -1 \mp 1 \equiv 0 \mod 3$. So $3$ divides $n^4+22n^3+71n^2+218n+384$. So $3\cdot 8 =24$ divides $n^4+22n^3+71n^2+218n+384$.
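Since $p(n) \bmod 24$ only depends on $n \bmod 24$, an exhaustive check over one period verifies the claim (a sketch; plain Python):
p = lambda n: n**4 + 22 * n**3 + 71 * n**2 + 218 * n + 384
print(all(p(n) % 24 == 0 for n in range(24)))  # True, hence true for all integers n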
{ "language": "en", "url": "https://math.stackexchange.com/questions/2356982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Riesz representation theorem giving a different result? Evans p298 - Lax Milgram Theorem Let $H$ be a Hilbert space. Denote the inner product by $(\cdot,\cdot)$ and the natural dual pairing of spaces by $\langle\cdot,\cdot\rangle$. He gives us bilinear form $B[\cdot,\cdot]:H\times H \to \Bbb R$ and then he says that for each fixed element $u\in H$ the mapping $v\mapsto B[u,v]$ is a bounded linear functional on $H$, and hence Riesz representation theorem gives us a unique element $w\in H$ such that: $$B[u,v]=(w,v),\qquad (v\in H)$$ How exactly does this follow from RRT? Riesz gives us that any time we consider $u^*\in H^*$ there is some $u\in H$ such that: $$\langle u^*,v\rangle = (u,v).$$ So I don't follow how the RRT gives the above, I mean unless the RRT doesn't hold just for the natural pairing, but rather for any bilinear form?
Well, $v \to B[u,v]$ is a linear functional on $H$. The RRT tells you that any bounded linear functional $l$ on $H$ can be represented as $l(v)=(u,v)$ for some $u\in H$, and that the map $l \to u$ is in fact an isometric isomorphism. That is why you usually write $l=u^*$. Hence, since $v \to B[u,v]$ is a bounded linear functional you will find a $w$ such that $B[u,v]=(w,v)$; you may write $B[u,\cdot]=w^*$ if you like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Surjections and Injections Please examine the following theorem and the accompanying proof. I understand the idea behind the proof, I am just concerned that I might not have put it in the correct words. Is the argument correct? If so, can the write-up be improved? Given that $f:A\to B$ and that $g:B\to C$. Prove that if $f$ is not onto and $g$ is one-to-one, then $g\circ f$ is not onto. Proof. Assume that $f$ is not onto, $g$ is one-to-one and $g\circ f$ is onto. Since $f$ is not onto it follows that for some $x\in B$ it is the case that $$\forall a\in A(f(a)\neq x)\ (1)$$ Since $g:B\to C$ it must be that for some $c\in C$, $g(x)=c$ and since $g$ is one-to-one, $$\forall y\in B(g(y)=c\implies y=x)\ (2)$$ Since $g\circ f$ is onto it follows that for some $z\in A$, $g(f(z))=c$ but this implies the existence of some $\alpha\in B$ such that $(z,\alpha)\in f$ and $(\alpha,c)\in g$. But $g(\alpha)=c$ and $(2)$ entails that $\alpha=x$, while $(1)$ says that no element in $A$ has the image $x$ under $f$, thus no such $\alpha$ exists; consequently $$\neg\exists a\in A(g(f(a))=c)$$ contradicting the assumption that $g\circ f$ is onto. $\blacksquare$
The proof is correct. On the other hand you can avoid contradiction, by proving the contrapositive. Proposition. Given the maps $f\colon A\to B$ and $g\colon B\to C$, if $g$ is one-to-one and $g\circ f$ is onto, then $f$ is onto. Proof. Let $b\in B$. Since $g\circ f$ is onto, there exists $a\in A$ such that $$ g(b)=g\circ f(a)=g(f(a)) $$ so $b=f(a)$ because $g$ is one-to-one. Therefore $f$ is onto.$\quad\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
If $|z|<1$, Is $|z+1|+|z-1|<2√2 $ true? If $|z|<1$, Is $|z+1|+|z-1|<2√2 $ true? My attempt:- I got 4 as an upper bound, when I applied triangular inequality on $|z+1|+|z-1|$. I got lower bound as 2, $2≤|z+1|+|z-1|$. I randomly pick complex numbers in the given disk. I got the result correct. How to prove or disprove analytically? Please help me.
Taking the square of the magnitude of the LHS: $$\require{cancel} \begin{align} \big| |z+1|+|z-1|\big|^2 &= |z+1|^2+|z-1|^2+2 |z+1||z-1| \\ &= (z+1)(\bar z+1)+(z-1)(\bar z -1) + 2 \big|(z+1)(z-1)\big| \\ &= |z|^2+\cancel{z}+\bcancel{\bar z}+1 + |z|^2-\cancel{z}-\bcancel{\bar z} +1 + 2 |z^2-1|\\ &\le 2 |z|^2 + 2 + 2\big(|z|^2+1\big) \\ & \lt 8 \end{align} $$
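A random-sampling check of the strict bound (a sketch, assuming NumPy; the supremum $2\sqrt2$ is approached near $z=\pm i$ on the boundary):
import numpy as np
rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, 200_000) + 1j * rng.uniform(-1, 1, 200_000)
z = z[np.abs(z) < 1]
print((np.abs(z + 1) + np.abs(z - 1)).max(), 2 * np.sqrt(2))  # the max stays below 2*sqrt(2)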
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Doubt in this question involving logarithm If $\log2= 0.301$, then how many digits are in $2^{64}$? What I did: $$\log(2)^{64}=\log 2^{64}=64\log2=19.264$$ The number of digits comes out to be $5$, but the answer is $20$? I have written $\log 2$ raised to the power $64$.
Because the number of digits is$$[64\log2]+1=20,$$ which gives the answer. In the general case, if $n$ is the number of digits of a natural number $N$, then $n=[\log{N}]+1$. Indeed, let $N=a_0\cdot10^{n-1}+a_1\cdot10^{n-2}+...+a_{n-1}$, where $a_i\in\{0,1,...,9\}$ and $a_0\neq0$; here $n$ is the number of digits. We see that $10^{n-1}\leq N<10^n$ or $n-1\leq\log{N}<n$, which says $[\log{N}]=n-1$ and we are done!
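A direct check with exact integer arithmetic (a sketch; plain Python):
import math
print(len(str(2**64)))                      # 20
print(math.floor(64 * math.log10(2)) + 1)   # 20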
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Domain of the function $f(x)=\frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}}$ The domain of the function $f(x)=\frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}}$ is said to be $\mathbb R-\{0,1\}$, given $f(x)$ is a real valued function. I understand why that is the case, since for both $1$ and $0$ the denimonator becomes $0$ and the value is undefined. But, $$ f(x)=\frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}}=\frac{x+1}{x-1} $$ Now I don't see any problem with $x$ taking the value $0$. What really is the domain of the function and how do I justify both the scenarios ?
The domain of $f $ is $$D=\{x\in \mathbb R \;\;: \;x\ne 0 \land \frac {1}{x}\ne 1\}$$ but $$\frac {1}{x}=1\iff x=1$$ hence $$D=\{x\in\mathbb R \;\;:\;x\ne 0\land x\ne 1\} $$ $$=\mathbb R \backslash \{0,1\} $$ $$=(-\infty,0)\cup (0,1)\cup (1,+\infty) $$ and $$x\in D \implies f (x)=\frac {x+1}{x-1} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
Where is my mistake in proving a language is not regular? Trying to prove that $\big\{0^n1^m | n < m\big\}$ is not a regular language. If I take $w=0^n1^{n+x}=xyz$ with obvious $|w| > n$ then $|xy|\leq n $. Furthermore, if I let $z=1^{n+x}$ then $x$ and $y$ contain $0$'s, and given that $y\neq\epsilon$, $y$ must contain at least one $0$. The pumping lemma says that for all $k \in \mathbb{N}$, $xy^kz \in L$, but if I take $k=x+10$ then suppose $w=0^{n-1}1y^{x+10}1^{n+x}$ then $ w \notin L$ because $n + x + 9 > n + x$. Have I made a mistake anywhere in the above proof?
I hope this can help you. We prove it by contradiction: we first assume something, then derive a contradiction. Here we will assume $L$ is regular, then get a contradiction. * *assume $L$ is regular *By the pumping lemma there exists a pumping length $p\ge1$ *take the string $w=0^p1^{p+1}\in L $ such that $|w|\ge p$ *break $w$ into $xyz$ such that $|y|\ge1 \,,\,|xy|\le p $; then $\,\,xy^iz\in L $ $\,\,\,\,\forall \,i\ge 0$ $$w=\overbrace{\underbrace{0000....0000}_{xy}\underbrace{...111....1111}_z}^{2p+1}$$ case 1: $$w=\overbrace{\underbrace{0000....0000}_{|xy|=q=p}\underbrace{111....1111}_z}^{2p+1}$$ case 2: $$w=\overbrace{\underbrace{0000....000}_{|xy|=q\lt p}\underbrace{\underbrace{00...00}_{p-q}111....1111}_z}^{2p+1}$$ To show that $L$ is not a regular language, you need to consider all the decompositions of $w=xyz$ $.^{[1]}$ * $|xy| = q$ with $1\le q \le p$, so $xy$ consists of $0$'s only *hence $y=0^r$ with $1\le r\le q$ *pumping up with $i=2$ gives $xy^2z=0^{p+r}1^{p+1}$, whose number of $0$'s is $n_0 = p+r \ge p+1 = n_1$, the number of $1$'s; so the condition $n_0<n_1$ fails and $xy^2z\notin L$. This contradicts the pumping lemma, which guarantees $xy^iz\in L$ for every $i\ge0$. Therefore the language $L$ is not regular. [1] $xy$ contains only $0$'s ($n_0(w)= p$), but you can't just let $z=1^{p+1}$, because you need to consider all the decompositions: $z=0^t1^{p+1}$ with $0\le t \le p-1$ is also possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $f(x) = |x|$ is continuous on $\mathbb R$ I want to use the sequential criterion to prove that $f(x) = |x|$ is continuous on $\mathbb R$. For reference, here is the sequential criterion according to Introduction to Real Analysis by Bartle: $f:A \rightarrow \mathbb R$ is continuous at the point $c\in A$ if and only if for every sequence $(x_n)$ that converges to $c$, the sequence $(f(x_n))$ converges to $f(c)$. So my attempt was this: Let $x \in \mathbb R$ and define the sequence $(x_n) = x + \frac{1}{n}$ for $n \in \mathbb N$ so $(x_n)$ converges to $x$. Then $(f(x_n)) = |x_n| = |x + \frac{1}{n}|$ so $(f(x_n))$ converges to $|x|$. Since this is true for all $x \in \mathbb R$, therefore $f(x) = |x|$ is continuous on $\mathbb R$. My main source of doubt comes from that it seems you can show a lot of functions are continuous simply by slapping on a "$+\frac{1}{n}$". Is this proof valid? Or is there a constraint that I am not meeting?
This is not appropriate. Notice that the sequential criterion requires the condition to hold for EVERY sequence that converges to $c$. You have only looked at a single sequence, the sequence $x+1/n$. Instead, you would have to start your proof with "Let $(x_n)$ be a sequence that converges to $c$." Then, from this, use the definition of sequence convergence to derive that $(f(x_n))$ converges to $f(c)$. Edit: By your approach, the function $f(x) = 0$ if $x$ is rational and $f(x) = 1$ if $x$ is irrational would be continuous at all rational points! That's a problem, isn't it?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2357931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Integral $\int_{0}^{\infty}x^{-x}dx$ I'm trying to find a closed form for this integral:$$\int_{0}^{\infty}x^{-x}dx$$ Here's the integrand graph (image omitted); clearly the integral is convergent. My attempt is to obtain a closed form for the area under the curve. Is this possible?
$\int\limits_0^\infty x^{-x}dx=\int\limits_0^1 x^{-x}dx+\int\limits_0^1 x^{-2+1/x}dx$ (substitute $x\mapsto 1/x$ on $[1,\infty)$). The second integral is unpleasant to develop into a series. But if we can use the solution $\,z_0\,$ of $\,\int\limits_0^1 (x^{xz} - x^{-2+1/x})dx=0$, namely $\,z=z_0\approx 1.45354007846425$, then we get: $$\displaystyle \int\limits_0^\infty x^{-x}dx=\sum\limits_{n=1}^\infty \frac{1+(-z_0)^{n-1}}{n^n}$$
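A numerical sketch supporting this (assuming SciPy): the integral is about $1.9955$, and the $[0,1]$ piece matches the "sophomore's dream"-type series $\sum n^{-n}$:
from scipy.integrate import quad
import numpy as np
part1 = quad(lambda x: x**(-x), 0, 1)[0]
part2 = quad(lambda x: x**(-x), 1, np.inf)[0]
print(part1 + part2)                               # approx 1.99546
print(part1, sum(n**(-n) for n in range(1, 20)))   # both approx 1.29129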
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to calculate $\lim_{x\to 0^+} \frac{x^x- (\sin x)^x}{x^3}$ As I asked, I don't know how to deal with $x^x- (\sin x)^x$. Please give me a hint. Thanks!
$$\lim_{x\rightarrow0^+}\frac{\ln{x}-\ln{\sin{x}}}{x^2}=\lim_{x\rightarrow0^+}\left(\frac{\ln\left(1+\frac{x}{\sin{x}}-1\right)}{\frac{x}{\sin{x}}-1}\cdot\frac{\frac{x}{\sin{x}}-1}{x^2}\right)=$$ $$=\lim_{x\rightarrow0}\left(\frac{x-\sin{x}}{x^3}\cdot\frac{x}{\sin{x}}\right)=\lim_{x\rightarrow0}\frac{1-\cos{x}}{3x^2}=\frac{1}{6}\lim_{x\rightarrow0}\frac{\sin^2\frac{x}{2}}{\frac{x^2}{4}}=\frac{1}{6}$$ and since $$\frac{x^x-(\sin{x})^x}{x^3}=\frac{1+x\ln{x}+\frac{x^2\ln^2x}{2!}+...-1-x\ln\sin{x}-\frac{x^2\ln^2\sin{x}}{2!}-...}{x^3}=$$ $$=\frac{\ln{x}-\ln{\sin{x}}}{x^2}+o(1)$$ (the higher-order terms vanish in the limit, e.g. $\frac{\ln^2x-\ln^2\sin x}{2x}\sim\frac{x\ln x}{6}\to 0$), we obtain that our limit is $\frac{1}{6}$.
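A high-precision numeric check (a sketch, assuming mpmath; convergence is slow, since the quotient behaves like $x^x/6$ for small $x$):
import mpmath as mp
mp.mp.dps = 50  # the numerator is a difference of nearly equal quantities
for x in [mp.mpf('0.1'), mp.mpf('0.01'), mp.mpf('0.0001')]:
    print(x, (x**x - mp.sin(x)**x) / x**3)
print(mp.mpf(1) / 6)  # 0.1666...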
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Gradient of a matrix? I was following Stephen Boyd's convex optimisation course and came across the following slide: Can somebody explain to me how the gradient was calculated for the quadratic and least-squares objective. Is there a general method to find the gradient of a matrix?
$f$ is an ordinary real valued function. If you want you can write it componentwise as $$f(x) = {1\over 2}\sum_j\sum_k p_{jk}x_jx_k + \sum_j q_jx_j + r$$ Now the first double sum contains the $x_jx_k$ term twice if $j\ne k$ (using the symmetry $p_{jk}=p_{kj}$), and if $j=k$ it becomes an $x_j^2$ term, so the derivative with respect to $x_j$ becomes: $$f'_j(x) = \sum_k p_{jk}x_k + q_j$$ Which in matrix notation becomes $$\nabla f(x) = Px + q$$
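A finite-difference sanity check of $\nabla f(x)=Px+q$ for symmetric $P$ (a sketch, assuming NumPy):
import numpy as np
rng = np.random.default_rng(1)
n = 5
P = rng.standard_normal((n, n)); P = (P + P.T) / 2  # symmetrize
q = rng.standard_normal(n); r = 0.7
f = lambda x: 0.5 * x @ P @ x + q @ x + r
x = rng.standard_normal(n)
h = 1e-6
e = np.eye(n)
num_grad = np.array([(f(x + h * e[i]) - f(x - h * e[i])) / (2 * h) for i in range(n)])
print(np.allclose(num_grad, P @ x + q, atol=1e-5))  # True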
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Show that $X$ is simply connected iff any two paths in $X$ with the same initial and terminal points are path-homotopic Let $X$ be a path connected space. Show that $X$ is simply connected iff any two paths in $X$ with the same initial and terminal points are path-homotopic Recall $X$ simply connected means $\pi_1(X, p) = \{[c_p]\}$, and $X$ is path-connected. My Proof : Suppose $X$ is simply connected, and let $f, g$ be two paths in $X$ with initial point $p$, and terminal point $q$, then $f \cdot \bar{g} \sim c_p$, let $G$ denote this homotopy, i.e $G: f \cdot \bar{g} \sim c_p$ Define $H(s, t) = G(s, t) \cdot g$, which can be easily verified is a path homotopy between $f$ and $g$ Conversely suppose any two paths in $X$ with the same initial and terminal points, are path-homotopic. Let $h$ be any loop at a point $p \in X$, pick a point $q = h(a)$ for some $a \in [0, 1]$, then $f =h|_{[0, a]}$, and $g =\overline{h|_{[a, 1]}}$ are two paths with the same initial and terminal points, hence $f$ is path homotopic to $g$ and therefore $f \cdot \bar{g}$ is homotopic to $c_p$, and $f \cdot \bar{g} = h$, therefore $h \sim c_p$, and $\pi_1(X)$ is trivial. $\square$ Is this proof satisfactory and rigorous enough? Any comments on my proof writing skills are also greatly appreciated.
Strictly speaking $G\cdot g$ gives you a path homotopy between $g$ and $f\cdot \overline{g}\cdot g$. It is clear that you can then collapse $\overline{g}\cdot g$ with a further homotopy, but maybe it is worth adding (!?) In the converse direction you are implicitly using the same idea. $f\cdot \overline{g}$ is homotopic to $f\cdot \overline{f}$ (or to $g\cdot \overline{g}$), and that lets you collapse the latter to the constant path based at $p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Sum of four squares I was looking for numbers who can be expressed as sum of exactly four squares and not less. And I think I have found them. They are all the integers of the form $$4^{n}\,(7+8k);\;k,\,n\in\mathbb{N}$$ I have no idea how to prove this statement and I wonder if ALL the numbers which need four squares are this kind of numbers. Edit Thanks to the comments and the answer the statement can be more precise A number can be expressed as the sum of four squares and not less if and only if it has the form $4^n\,(7+8k);\;k,\,n\in\mathbb{N}$
At least it is not hard to see that $4^n(7+8k)$ cannot be written as sum of three squares: $a^2+b^2+c^2\equiv m\pmod 4$ where $m$ is the number of odd squares on the left. Hence for $n\ge 1$, any representation of $4^n(7+8k)$ must use three even squares. But then this corresponds to the representation $4^{n-1}(7+8k)=(a/2)^2+(b/2)^2+(c/2)^2$ with smaller $n$. We are thus reduced to the case $n=0$, i.e., $7+8k=a^2+b^2+c^2$. By the above argument, $a,b,c$ must be odd. As odd squares are $\equiv 1\pmod 8$, we obtain $7+8k\equiv 3\pmod 8$, contradiction. The other direction is less trivial (see comments about Legendre's theorems)
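Both directions can be checked exhaustively for small $n$ (a sketch; plain Python; "sum of three squares" here allows $0^2$, and by Lagrange every positive integer is a sum of four squares, so "needs four" means exactly "not a sum of three"):
from itertools import combinations_with_replacement
N = 2000
sq = [i * i for i in range(int(N**0.5) + 1)]
three = {a + b + c for a, b, c in combinations_with_replacement(sq, 3) if a + b + c <= N}
def special(n):
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7
print(all((n not in three) == special(n) for n in range(1, N + 1)))  # True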
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is aleph-omega a Mahlo cardinal? So I've read up a little on Mahlo cardinals and found a definition along the lines of of $\kappa$ is Mahlo iff it is inaccessible and regular cardinals below it form a stationary set. Using this definition, I can see why $\aleph_0$ and $\aleph_1$ are not Mahlo, in that the first is not inaccessible because it is countable and that the second is a successor cardinal (and therefore not inaccessible). But then I get to $\aleph_\omega$... Which is inaccessible, because it is uncountable and a weak limit cardinal. The stationary subset part is tripping me up though. The cardinals below it are $\{\aleph_0, \aleph_1, ..., \aleph_n, ... | n < \omega\}$. So I guess, is $\aleph_\omega$ Mahlo? If yes, why? And if no, why not? Further, what is the "smallest" Mahlo cardinal?
A Mahlo cardinal has to be regular, which $\aleph_\omega$ is not. $\aleph_\omega =\bigcup\aleph_n$, so $\operatorname{cf}(\aleph_\omega)=\aleph_0$. Every strong inaccessible $\kappa$ satisfies $\kappa=\aleph_\kappa$, but even that is not enough, as the lowest $\kappa$ satisfying that has $\operatorname{cf}(\kappa)=\aleph_0$. As we can't even prove that strong inaccessibles exist, we can't say where they are in the $\aleph$ hierarchy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integral of $e^{x^3+x^2-1}(3x^4+2x^3+2x)$ I understand that this looks exactly like a "Do my Homework" kind of question but trust me, I've spent hours (I won't go into detail as it's off topic). Note: I'm a High School student, our teacher gave this question as a challenge. I'm struggling with $$\int e^{x^3+x^2-1}(3x^4+2x^3+2x)\ dx$$ My Progress: I tried integration by parts, and found that$$\int e^{x^3+x^2-1}\ dx$$ was the only trouble maker (for now). So, I tried finding its integral, then gave up and tried using a calculator to see what I missed. The website said that its antiderivative is not elementary. I didn't even know what that means. New Approach: Now, I went on to trying to plot the graph of $$ e^{x^3+x^2-1}$$ to see if I could relate it to $$\int_{-a}^x e^{x^3+x^2-1}\ dx$$ and say whether $$\int e^{x^3+x^2-1}\ dx$$ exists or not. I was hoping for some discontinuity in the graph of the definite integral but I didn't seem to find any. Note: I drew the graph of the definite integral through observation and intuition, I don't think there was any other method. So is there any way of helping me?
Assume the solution to be of the form $\exp(x^3+x^2-1)Q(x)$ and differentiate this expression to get $\exp(x^3+x^2-1)\left[(3x^2+2x)Q(x)+Q'(x)\right]$. Compare this with $\exp(x^3+x^2-1)\left[3x^4+2x^3+2x\right]$ to obtain the ODE: $$(3x^2+2x)Q(x)+Q'(x)=3x^4+2x^3+2x.$$ The particular solution is $x^2$, you could also solve this ODE with separation of variables and variation of parameters to get $$Q(x)=x^2+c_1\exp(-x^3-x^2).$$ Setting $c_1=0$ will result in the particular solution $x^2$. So, the integral is given by $\exp(x^3+x^2-1)x^2+c$
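A symbolic double-check (a sketch, assuming SymPy):
import sympy as sp
x = sp.symbols('x')
F = sp.exp(x**3 + x**2 - 1) * x**2
integrand = sp.exp(x**3 + x**2 - 1) * (3*x**4 + 2*x**3 + 2*x)
print(sp.simplify(sp.diff(F, x) - integrand))  # 0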
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 1 }
Domination problem with sets Let $M$ be a non-empty and finite set, $S_1,...,S_k$ subsets of $M$, satisfying: (1) $|S_i|\leq 3,i=1,2,...,k$ (2) Any element of $M$ is an element of at least $4$ sets among $S_1,....,S_k$. Show that one can select $[\frac{3k}{7}] $ sets from $S_1,...,S_k$ such that their union is $M$. Partial solution: I can find a family of ${13\over 25}k$ such sets that no element in $M$ is in more than $3$ sets from that family. Thus we have a family of the size ${13\over 25}k$ instead of ${4\over 7}k$. Say $|M| =n$. Let's take any set independently with a probability $p$. Denote by $X$ the number of chosen sets and by $Y$ the number of elements that are ''bad'', i.e. elements which are in at least $4$ sets among the chosen sets. Note that $4n\leq 3k$. Then we have $$E(X-Y)=E(X)-E(Y) \geq kp-np^4 \geq kp (1-3p^3/4)$$ Since the function $x \mapsto x(1-3x^3/4)$ achieves its maximum at $x=\sqrt[3]{1/3}$ we have $E(X-Y)\geq {\sqrt[3]{9}\over 4}k> {13\over 25}k$. So with the method of alteration we find the constant ${\sqrt[3]{9}\over 4}$, which is about $0.051$ worse than ${4\over 7}$. The question is cross-posted to mathoverflow For a full solution with probabilistic method I'm offering $\color{red}{500}$ points of bounty at any time.
An algorithmic solution: Clearly we can assume that each element of $M$ appears in exactly $4$ subsets. Let $|M| =n$. Stage 1. Take a maximal subfamily $\mathcal{A} \subseteq \{S_1,....,S_k\} =:\mathcal{S} $ such that: $\bullet$ every member of the family $\mathcal{A}$ has 3 elements; $\bullet$ all sets in $\mathcal{A}$ are pairwise disjoint. Let $|\mathcal{A}| =a$ and let $A= \cup _{X\in \mathcal{A}} X$. Then $|A|=3a$. Now, since each $a\in A$ appears exactly $4$ times, it must appear exactly $3$ times in sets not in $\mathcal{A}$. So by double counting between $M$ and $\mathcal{S}\setminus \mathcal{A}$ we have (elements of $A$ have degree $3$ here and the others degree $4$) $$ 3\cdot 3a +4(n-3a)\leq 3\cdot (k-a)\;\;\Longrightarrow \;\;4n \leq 3k\;\;\;...(1)$$ Let us now erase all the elements of $M$ which appear in $A$, and let this new set be $M_1$, so $M_1 = M\setminus A$ (so $|M_1| = n-3a$); do the same thing in the remaining sets in $\mathcal{S}\setminus \mathcal{A}$, and we get a new family of sets $\mathcal{S}_1$. Notice that each element of $M_1$ still appears $4$ times in sets from $\mathcal{S}_1$, and that each set in $\mathcal{S}_1$ has at most $2$ elements. (Why? If one of them, say $X$, had $3$ elements, that would mean that no element of $X$ was erased, so no element of $X$ is in $A$. But then we could put $X$ into $\mathcal{A}$ and we would get a bigger family than $\mathcal{A}$, which is already maximal.) Also, let $k_1=|\mathcal{S}_1|$. Stage 2. Now take a maximal subfamily $\mathcal{B} \subseteq \mathcal{S}_1 $ such that: $\bullet$ every member of the family $\mathcal{B}$ has 2 elements; $\bullet$ all sets in $\mathcal{B}$ are pairwise disjoint. Let $|\mathcal{B}| =b$ and let $B= \cup _{X \in \mathcal{B}} X$. Then $|B|=2b$. Now, since each $b\in B$ appears exactly $4$ times, it must appear exactly $3$ times in sets not in $\mathcal{B}$. So by double counting between $M_1$ and $\mathcal{S}_1\setminus \mathcal{B}$ we have (elements of $B$ have degree $3$ here and the others degree $4$) $$ 3\cdot 2b +4(n-3a-2b)\leq 2\cdot (k_1-b)\;\;\Longrightarrow \;\;2n \leq k+5a\;\;\;...(2)$$ Let us now erase all the elements of $M_1$ which appear in $B$, and let this new set be $M_2$, so $M_2 = M_1\setminus B$ (so $|M_2| =n-3a-2b$); do the same thing in the remaining sets in $\mathcal{S}_1\setminus \mathcal{B}$, and we get a new family of sets $\mathcal{S}_2$. Notice that each element of $M_2$ still appears 4 times in sets from $\mathcal{S}_2$, and that each set in $\mathcal{S}_2$ has at most 1 element. (Why? If one of them, say $X$, had 2 elements, that would mean that no element of $X$ was erased, so no element of $X$ is in $B$. But then we could put $X$ into $\mathcal{B}$ and we would get a bigger family than $\mathcal{B}$, which is already maximal.) Also, let $k_2=|\mathcal{S}_2|$. Final stage. Now take a maximal subfamily $\mathcal{C} \subseteq \mathcal{S}_2 $ such that: $\bullet$ every member of the family $\mathcal{C}$ has 1 element; $\bullet$ all sets in $\mathcal{C}$ are pairwise disjoint. Let $|\mathcal{C}| =c$ and let $C= \cup _{X \in \mathcal{C}} X$. Then $|C|=c$. Now, since each $c\in C$ appears exactly 4 times, it must appear exactly 3 times in sets not in $\mathcal{C}$. So by double counting between $M_2$ and $\mathcal{S}_2\setminus \mathcal{C}$ we have (elements of $C$ have degree $3$ here and the others degree $4$) $$ 3\cdot c +4(n-3a-2b-c)\leq 1\cdot (k_2-c)\;\;\Longrightarrow \;\;4n \leq k+11a+7b\;\;\;...(3)$$ Clearly $C=M_2$, so we are finished with the process; that is, $c+2b+3a = |M| =n$.
All we have to check is whether the resurrected sets (that is, we restore the erased elements to all the sets) satisfy $$a+b+c\leq {3\over 7}k\;\;\;\; {\bf ?}$$ Using (1) and $3a+2b+c=n$ we get: $$ 12a+8b+4c\leq 3k.$$ Using (2) and $3a+2b+c=n$ we get: $$ a+4b+2c\leq k.$$ Using (3) and $3a+2b+c=n$ we get: $$ 2a+2b+8c\leq 2k.$$ If we add these three inequalities we get $15a+14b+14c\leq 6k$, so $$ 14(a+b+c)\leq 15a+14b+14c\leq 6k,$$ and thus $a+b+c\leq \frac{3}{7}k$, which is the desired conclusion.
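As a sanity check of the final bookkeeping, here is a sketch with sympy; the slack variables are my own encoding of inequalities (1)-(3) after substituting $n=3a+2b+c$:

```python
import sympy as sp

a, b, c, k = sp.symbols('a b c k', nonnegative=True)
n = 3*a + 2*b + c                     # every element of M was erased exactly once

s1 = 3*k - 4*n                        # slack of (1): 4n <= 3k
s2 = (k + 5*a) - 2*n                  # slack of (2): 2n <= k + 5a
s3 = (k + 11*a + 7*b) - 4*n           # slack of (3): 4n <= k + 11a + 7b

total = sp.expand(s1 + s2 + 2*s3)
print(total)                                          # 6*k - 15*a - 14*b - 14*c
print(sp.expand(total - (6*k - 14*(a + b + c))))      # -a, so 14(a+b+c) <= 6k
```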
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
Is a bijective morphism between metric spaces necessarily an isomorphism? Does the inverse morphism of a bijective isometry necessarily preserve the metric, or should the preservation of the metric for the inverse morphism be stated separately? To make myself clear, my question is: does the inverse morphism in metric spaces automatically preserve the distance (analogous to the case of algebraic structures, where an isomorphism is a bijective homomorphism), or is the situation like, e.g., topological spaces, where the continuity must be stated separately for the inverse function in homeomorphisms? EDIT: In short, the OP asks "If $f:X\to Y$ is a bijective map between metric spaces such that $$d_Y(f(x_1),f(x_2))=d_X(x_1,x_2) \; \; \; \forall x_1,x_2 \in X$$ then is the inverse map $f^{-1}:Y\to X$ also distance preserving, i.e. do we have $$d_X(f^{-1}(y_1),f^{-1}(y_2))=d_Y(y_1,y_2) \; \; \; \forall y_1,y_2 \in Y?$$"
A metric space isomorphism (a bijective isometry) automatically induces an isomorphism of the induced topologies, i.e. a bicontinuous bijection. By contrast, it is possible to have a continuous bijection that is not bicontinuous, even between metric spaces: for example, the identity map from $\mathbb R$ with the discrete metric to $\mathbb R$ with the usual metric is a continuous bijection whose inverse is not continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2358960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the condition for $f^{-1}$ to exist. If $f: \mathbb R \to \mathbb R$ is defined by $f(x)= x^3+ px^2 + qx+ k \sin{x}$, where $k,p,q \in \mathbb R$, find the condition for $f^{-1}$ to exist. Can somebody please give me a hint on how to solve this problem. EDIT (after getting hints): If we can show that $f$ is strictly increasing, then we get that $f$ is injective. So we want $f^{'}(x)>0$, and that gives me $p^2 < 3(q + k \cos{x}).$ As $f^{'}(x)$ is a quadratic in $x$ (treating $k\cos x$ as part of the constant term), we want it to be strictly positive or strictly negative; but as the coefficient of $x^2$ is positive, we can only get the derivative to be strictly positive, which happens if the discriminant is $<0.$
Guide: Check $\lim_{x \rightarrow \infty} f(x)$. Check $\lim_{x \rightarrow -\infty} f(x)$. Note that $f$ is continuous. Find conditions to make it strictly increasing.
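Following this guide, one sufficient condition (my own derivation, not part of the guide) is $p^2 < 3(q - |k|)$: then $f'(x) = 3x^2+2px+q+k\cos x \ge 3x^2+2px+q-|k| > 0$ for all $x$, so $f$ is continuous, strictly increasing, and surjective by the limits above, hence invertible. A quick numerical illustration with arbitrarily chosen parameters:

```python
import numpy as np

def fprime(x, p, q, k):
    # derivative of f(x) = x**3 + p*x**2 + q*x + k*sin(x)
    return 3*x**2 + 2*p*x + q + k*np.cos(x)

p, q, k = 1.0, 2.0, 1.0            # here p**2 = 1 < 3*(q - abs(k)) = 3
x = np.linspace(-50, 50, 200001)   # the 3x**2 term dominates outside this range
print(fprime(x, p, q, k).min())    # strictly positive, so f is increasing here
```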
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Factorial Inequality Proof I need some help proving the following fact about factorials. Let $a$ and $b$ be positive integers. Prove that if $a > b$, then $a! > b!$.
If $a>b$ then $$a!=a \cdot (a-1) \cdot \ \dots \ \cdot (b+1) \cdot b \cdot (b-1) \cdot \ \dots \ \cdot 1$$ so we would have $$b!=\frac{a!}{a \cdot (a-1) \cdot \ \dots \ \cdot(b+1)} < a! \implies b! < a!.$$
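A trivial numerical spot check of the claim (a sketch; the range is an arbitrary choice of mine):

```python
import math
from itertools import combinations

# all pairs b < a in a small range
pairs = combinations(range(1, 12), 2)
print(all(math.factorial(a) > math.factorial(b) for b, a in pairs))  # True
```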
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How does the trigonometric identity $1 + \cot^2\theta = \csc^2\theta$ derive from the identity $\sin^2\theta + \cos^2\theta = 1$? I would like to understand how the original identity $$ \sin^2 \theta + \cos^2 \theta = 1$$ leads to $$ 1 + \cot^2 \theta = \csc^2 \theta. $$ This is my working: a) $$ \frac{\sin^2 \theta}{ \sin^2 \theta } + \frac{\cos^2 \theta }{ \sin^2 \theta }= \frac 1 { \sin^2 \theta } $$ b) $$1 + \cot^2 \theta = \csc^2 \theta $$ How does the identity $ \tan^2 \theta + 1 = \sec^2 \theta$ come into the picture?
The same way; you start with $\sin^2\theta + \cos^2\theta = 1$ and divide both sides by $\cos^2\theta$.
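Explicitly, that division gives $$\frac{\sin^2 \theta}{\cos^2 \theta}+\frac{\cos^2 \theta}{\cos^2 \theta}=\frac{1}{\cos^2 \theta} \implies \tan^2 \theta + 1 = \sec^2 \theta.$$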
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Polar Coordinates transformation for Linear Homogeneous Differential Equations (1st order) While studying a book of Differential Equations I found this problem quite interesting. Suppose \begin{equation} M(x,y)dx+N(x,y)dy=0 \tag 1 \end{equation} is a homogeneous ODE. Show that the transformation $x=r \cos (\theta) $ and $y= r \sin (\theta) $ reduces the equation to a separable equation in the variables $r$ and $\theta$. It is from the book Diff. Eq's, Shepley L. Ross. So starting from the hypothesis that the equation is homogeneous, $(1)$ is equivalent to \begin{equation} \frac{dy}{dx}=g\left(\frac{y}{x}\right) \tag 2 \end{equation} So the thing is that I don't know how to relate $x$ and $y$, or more precisely how to find the relation $\dfrac{dr}{d\theta}$ (or maybe the other way around). The first thing that came to mind was $r^2=x^2+y^2$, but how do I differentiate it? I mean, I don't see clearly how to use the chain rule. I have seen this: $2rr'=2xx'+2yy'$, though it's still not clear to me how they did it. Later, a silly approach (I think so) was to take the differentials of $x$ and $y$ with respect to $r$ and $\theta$, respectively. Therefore: $$x=r \cos (\theta) \Rightarrow dx=\cos (\theta) dr $$ and $$y= r \sin (\theta) \Rightarrow dy= r \cos (\theta) d\theta .$$ Later $$\frac{dy}{dx}=\frac{r \cos (\theta) d\theta} { \cos (\theta) dr} = \frac{r d\theta} {dr}$$ So after substituting in $(2)$ \begin{equation} \frac{r d\theta} {dr}=g\left(\frac{\sin \theta }{ \cos \theta }\right) \end{equation} which reduces it to a separable equation \begin{equation} \frac{dr } {r }=\frac{d\theta} { g(\tan \theta )} \end{equation} But.... come on! At least I tried... Later the book also has as an exercise to prove that the same equation is invariant under the transformations $x=k\alpha$ and $y=k\beta$ with $k$ constant. But I think that the previous one seems more approachable. Could someone help me with this kind of problem? Thanks. :)
This is what I've got: Given this fancy thing (at least I think so..) $$ \frac{dy}{dx}=\frac{dy}{d\theta} \times \frac{d\theta}{d x} = \frac{ \frac{dy}{d\theta} } { \frac{dx}{d\theta}} $$ So $x=r\cos(\theta)$ and $y=r \sin(\theta)$, with $r$ a function of $\theta$, yield: $$\frac{dx}{d\theta}= \cos (\theta) \frac{dr}{d\theta}-r \sin(\theta)$$ and $$\frac{dy}{d\theta}= \sin (\theta) \frac{dr}{d\theta}+r \cos(\theta)$$ Therefore: $$ \frac{dy}{dx}=\frac{ \sin (\theta) \frac{dr}{d\theta}+r \cos(\theta) } { \cos(\theta) \frac{dr}{d\theta}-r \sin(\theta)} \qquad (*)$$ Knowing that (1) is equivalent to $$ \frac{dy}{dx} = g\left(\frac{y}{x}\right)=g(\tan\theta), $$ plugging in $(*)$ gives $$\frac{ \sin (\theta) \frac{dr}{d\theta}+r \cos(\theta) } { \cos (\theta) \frac{dr}{d\theta}-r \sin(\theta)} = g(\tan \theta). $$ Cross-multiplying and collecting the $\frac{dr}{d\theta}$ terms, $$\frac{dr}{d\theta}\bigl(\sin\theta - g(\tan\theta)\cos\theta\bigr) = -r\bigl(\cos\theta + g(\tan\theta)\sin\theta\bigr),$$ which separates as $$ \frac{dr}{r} = \frac{\cos\theta + g(\tan\theta)\sin\theta}{g(\tan\theta)\cos\theta - \sin\theta}\, d\theta. $$ The right-hand side depends on $\theta$ alone, so the equation is indeed separable. What do you think? :s
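A symbolic sanity check of this algebra (a sketch with sympy, which treats $g(\tan\theta)$ as an opaque expression; all names are my own):

```python
import sympy as sp

theta, r, Rp = sp.symbols('theta r Rp')   # Rp stands for dr/dtheta
g = sp.Function('g')
u = sp.tan(theta)

eq = sp.Eq((sp.sin(theta)*Rp + r*sp.cos(theta)) /
           (sp.cos(theta)*Rp - r*sp.sin(theta)), g(u))
sol = sp.solve(eq, Rp)[0]

expected = r*(sp.cos(theta) + g(u)*sp.sin(theta))/(g(u)*sp.cos(theta) - sp.sin(theta))
print(sp.simplify(sol - expected))   # 0, confirming the separated form
```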
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How to find $ \lim_{n\to\infty}\frac{\sin(1)+\sin(\frac{1}{2})+\dots+\sin(\frac{1}{n})}{\ln(n)}$ $$ \lim_{n\to\infty}\frac{\sin(1)+\sin(\frac{1}{2})+\dots+\sin(\frac{1}{n})}{\ln(n)} $$ I tried applying Cesaro-Stolz and found that the resulting ratio is $\frac{\sin\frac{1}{n+1}}{\ln\frac{n+1}{n}}$, where $\ln$ is $\log_e$, and it would be $1$ and so the limit is $0$, but in my book the answer is $2$. Am I doing something wrong, or can't Cesàro be applied here?
If your aim is to compute $$ \lim_{n\to\infty}\frac{\sin1+\sin\frac{1}{2}+\dots+\sin\frac{1}{n}}{\ln n} $$ you can indeed try and apply Stolz-Cesàro with $$ a_n=\sin1+\sin\frac{1}{2}+\dots+\sin\frac{1}{n} \qquad b_n=\ln n $$ This leads to computing $$ \lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n} = \lim_{n\to\infty}\frac{\sin\frac{1}{n+1}}{\ln(n+1)-\ln n} $$ This limit will exist if the limit $$ \lim_{x\to0^+}\frac{\sin x}{\ln\frac{1}{x}-\ln(\frac{1}{x}-1)}= \lim_{x\to0^+}-\frac{\sin x}{\ln(1-x)} $$ exists, and then they are equal. Since $\sin x\sim x$ and $-\ln(1-x)\sim x$ as $x\to0^+$, this last limit equals $1$, so the original limit is $1$.
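A numerical illustration (a sketch; convergence is slow since the error behaves like a constant over $\ln n$, but the ratios clearly drift toward $1$ rather than $2$):

```python
import math

for n in [10**3, 10**5, 10**7]:
    s = sum(math.sin(1.0/k) for k in range(1, n + 1))
    print(n, s / math.log(n))   # approaches 1 as n grows
```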
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does $\lim\limits_{(x,y)\rightarrow0}$ mean and how to show $ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$? Consider $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ where $$f(x,y):=\begin{cases} \frac{x^3}{x^2+y^2} & \textit{ if } (x,y)\neq (0,0) \\ 0 & \textit{ if } (x,y)= (0,0) \end{cases} $$ To show the continuity of $f$, I mainly want to show that $$ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$$ But what does $\lim\limits_{(x,y)\rightarrow0}$ mean? Is it the same as $\lim\limits_{||(x,y)||\rightarrow0}$, or does it mean $\lim\limits_{x\rightarrow0}\lim\limits_{y\rightarrow0}$? If so, how does one show that the above function tends to zero?
$\lim_\limits{(x,y)\to 0}$ likely means $\lim_\limits{(x,y)\to(0,0)}$, which means that $x$ and $y$ are both tending to $0$. One could use polar coordinates where $x=r\cos(\theta)$ and $y=r\sin(\theta)$ to obtain: $$\lim_{(x,y)\to(0,0)}\frac{x^3}{x^2+y^2}=\lim_{r\to 0}\frac{r^3\cos^3(\theta)}{r^2}=\lim_{r\to 0} r\cos^3(\theta)$$ Then note that $|\cos^3(\theta)|\leq 1~~~\forall\theta\in\Bbb R$.
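A quick numerical check along random directions (a sketch; the seed and the radii are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, 1000)
for r in [1e-1, 1e-3, 1e-5]:
    x, y = r*np.cos(theta), r*np.sin(theta)
    print(r, np.abs(x**3/(x**2 + y**2)).max())   # max |f| shrinks like r
```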
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 1 }
Proving the multiplicative property of limits. If $\lim_{x\rightarrow a}f(x) =L$ and $\lim_{x\rightarrow a}g(x) =M$ then prove $\lim_{x\rightarrow a}f(x)g(x) =LM$ Attempt (Should there be any statement that I should write at the start of this proof?) $$|f(x)g(x) -LM|=|f(x)g(x)+Lg(x)-Lg(x) -LM|=|g(x)[f(x)-L]+L[g(x)-M]|\leq|g(x)|\,|f(x)-L|+|L|\,|g(x)-M|$$ Let $\epsilon>0$; then there exist $\delta_1, \delta_2$ such that $$|f(x)-L|<\frac{\epsilon}{2g(x)}~~ whenever~~ 0<|x-a|<\delta_1$$ $$|g(x)-M|<\frac{\epsilon}{2L}~~ whenever~~ 0<|x-a|<\delta_2$$ Choose $0<|x-a|<\delta$ where $\delta = \min(\delta_1,\delta_2)$; then we have $$|f(x)g(x) -LM|<\epsilon$$ Is this proof correct? The reason I am concerned is the second step, where I defined the inequalities for the individual expressions: I wrote them in terms of $g(x)$ and $L$. Is this step mathematically correct? Edit: Is this continuation correct? Let $\epsilon>0$; then there exist $\delta_1, \delta_2$ such that $$|f(x)-L|<\frac{\epsilon}{2(1+|M|)}~~ whenever~~ 0<|x-a|<\delta_1$$ $$|g(x)-M|<\frac{\epsilon}{2(1+|L|)}~~ whenever~~ 0<|x-a|<\delta_2$$ Choose $0<|x-a|<\delta$ where $\delta = \min(\delta_1,\delta_2)$; then we have $$|f(x)g(x) -LM|<|g(x)|\frac{\epsilon}{2(1+|M|)}+|L|\frac{\epsilon}{2(1+|L|)}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$$
Your approach is fine but a few details are not correct. You should replace $$|f(x)-L|<\frac{\epsilon}{2g(x)}$$ with something like $$|f(x)-L|<\frac{\epsilon}{2M'},$$ where $M'$ is a positive constant to be decided. Recall that $\lim_{x\rightarrow a}g(x) =M$ implies that $g$ is bounded in a neighbourhood of $a$. Moreover, to avoid problems with the case when $L=0$, replace $$|g(x)-M|<\frac{\epsilon}{2L}$$ for example with $$|g(x)-M|<\frac{\epsilon}{2(1+|L|)}.$$ Are you able to finish the proof correctly now? P.S. In your edited work add the following line: $|g(x)|<{(1+|M|)}$ whenever $0<|x-a|<\delta_3$, and define $\delta = \min(\delta_1,\delta_2,\delta_3)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Limit Definition for Half-Derivative The derivative of a function $f$ is defined as $$\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$$ Let $$d_1(f,h)=\frac{f(x+h)-f(x)}{h}$$ and, in fact, let all $d_n$ be defined by $$\lim_{h\to 0} d_n(f,h)=f^{(n)}(x)$$ In order to obtain $d_2$, we can plug $d_1$ into itself to get $$\frac{\frac{f(x+2h)-f(x+h)}{h}-\frac{f(x+h)-f(x)}{h}}{h}$$ $$\frac{f(x+2h)-f(x+h)-f(x+h)+f(x)}{h^2}$$ $$\frac{f(x+2h)-2f(x+h)+f(x)}{h^2}$$ and, in general, we can use induction to prove that, for natural $n$, $$d_n(f,h)=\frac{1}{h^n}\sum_{k=0}^n (-1)^k \binom{n}{k}f(x+(n-k)h)$$ However, I am interested in finding $d_\frac{1}{2}$. It should satisfy $$d_\frac{1}{2}(d_\frac{1}{2}(f,h),h)=d_1(f,h)=\frac{f(x+h)-f(x)}{h}$$ So that when it is composed with itself as shown, $d_1$ is the result. It can possibly be obtained by figuring out how to extend the expression $$\frac{1}{h^n}\sum_{k=0}^n (-1)^k \binom{n}{k}f(x+(n-k)h)$$ to non-integer $n$. Does anybody have any ideas about how to do this?
One can extend this in a similar manner that the binomial expansion theorem is extended. Note that $\binom nk=0$ if $k>n$ and $n$ is a natural number. Thus, we have the Grunwald-Letnikov derivative, $$f^{(\alpha)}(x)=\lim_{h\to0}\frac1{h^\alpha}\sum_{k=0}^\infty(-1)^k\binom\alpha kf(x+(\alpha-k)h)\tag{$\alpha\ge0$}$$ An interesting point to note is that for $\alpha=-1$, we get $$f^{(-1)}(x)=\lim_{h\to0}h\sum_{k=0}^\infty f(x-(1+k)h)$$ Or as you may better recognize it, $$f^{(-1)}(x)=\lim_{n\to\infty}\frac1n\sum_{k=1}^\infty f\left(x-\frac kn\right)$$ Which is extraordinarily similar to $$\int_{x-1}^xf(t)~\mathrm dt=\lim_{n\to\infty}\frac1n\sum_{k=1}^nf\left(x-\frac kn\right)$$ Which I suppose would mean that a better all-enveloping fractional derivative could be given by $$f^{(\alpha)}(x)=\lim_{h\to0}\frac1{h^\alpha}\sum_{k=0}^{\lfloor1/|h|\rfloor}(-1)^k\binom\alpha kf(x+(\alpha-k)h)\tag{$\alpha\in\mathbb C$}$$ However, for non-integer $\alpha$, this isn't well defined as $h\to0^-$, and so we assume we can replace this with the limit from the positive side: $$f^{(\alpha)}(x)=\lim_{n\to\infty}n^\alpha\sum_{k=0}^{\lfloor n\rfloor}(-1)^k\binom\alpha kf\left(x+\frac{\alpha-k}n\right)\tag{$\alpha\in\mathbb C$}$$ Which now gives us sort of anti-derivatives as well. Mainly, $$\int_0^nf(x)~\mathrm dx=\sum_{k=1}^nf^{(-1)}(k)$$ Of course, one might note there is nothing unique about this particular extension. One could include more parameters: $$_\beta^+\mathbb D_x^\alpha f(x)=\lim_{n\to\infty}n^\alpha\sum_{k=0}^{\beta n}(-1)^k\binom\alpha kf\left(x+\frac{\alpha-k}n\right)$$ And if we used $\frac d{dx}f(x)=\lim_{h\to0^+}\frac{f(x)-f(x-h)}h$, $$_\beta^-\mathbb D_x^\alpha f(x)=\lim_{n\to\infty}n^\alpha\sum_{k=0}^{\beta n}(-1)^k\binom\alpha kf\left(x+\frac{k-\alpha}n\right)$$
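A numerical illustration of the half-derivative (a sketch using the standard backward-difference Grunwald-Letnikov sum with lower terminal $0$, a close relative of the shifted sum above; for $f(t)=t$ the half-derivative is known to be $2\sqrt{x/\pi}$):

```python
import numpy as np
from scipy.special import binom   # accepts a non-integer first argument

def gl_derivative(f, x, alpha, n=2000):
    # h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(x - k*h), truncated at x - n*h = 0
    h = x / n
    k = np.arange(n + 1)
    coeffs = (-1.0)**k * binom(alpha, k)
    return (coeffs * f(x - k*h)).sum() / h**alpha

x = 2.0
print(gl_derivative(lambda t: t, x, 0.5))   # ~1.5958
print(2*np.sqrt(x/np.pi))                   # exact value 1.5957...
```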
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to integrate: $\int_0^{\infty} \frac{1}{x^3+x^2+x+1}dx$ I want to evaluate $\int_0^{\infty} \frac{1}{x^3+x^2+x+1}$. The lecture only provided me with a formula for $\int_{-\infty}^{\infty}dx \frac{A}{B}$ where $A,B$ are polynomials and $B$ does not have real zeros. Unfortunately, in the given case $B$ has a zero at $z=-1$ and is not even. Is there a straight forward way to solve this in terms of complex analysis?
Note that $x^4-1=(x-1)(x^3+x^2+x+1)$, so the integrand equals $\frac{x-1}{x^4-1}$. Hence $$\int_{0}^{+\infty}\frac{x-1}{x^4-1}\,dx = \int_{0}^{1}\frac{1-x}{1-x^4}\,dx +\int_{0}^{1}\frac{x^2-x}{x^4-1}\,dx = \int_{0}^{1}\frac{dx}{1+x^2}=\frac{\pi}{4}$$ by just breaking the integration range as $(0,1)\cup (1,+\infty)$ and applying the substitution $x\mapsto\frac{1}{x}$ on the second "half".
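A numerical confirmation (a sketch with scipy):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1/(x**3 + x**2 + x + 1), 0, np.inf)
print(val, np.pi/4)   # both ~0.7853981...
```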
{ "language": "en", "url": "https://math.stackexchange.com/questions/2359994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find $\lim\limits_{x \rightarrow \infty} \sqrt{x} (e^{-\frac{1}{x}}-1)$ Find $$\lim\limits_{x \rightarrow \infty} \sqrt{x} (e^{-\frac{1}{x}}-1)$$ $$\lim\limits_{x \rightarrow \infty} \sqrt{x} (e^{-\frac{1}{x}}-1) = \lim\limits_{x \rightarrow \infty} \frac{e^{-\frac{1}{x}}-1}{x^{-0.5}} = \lim\limits_{x \rightarrow \infty} \frac{[e^{-x^{-1}}-1]'}{[x^{-0.5}]'} = \lim\limits_{x \rightarrow \infty} \frac{x^{-2}e^{-x^{-1}}}{-0.5x^{-1.5}} = -2 \cdot \lim\limits_{x \rightarrow \infty} \frac{1}{e^{x^{-1}}x^{0.5}} $$ So as $x \rightarrow \infty$, $\frac{1}{e^{x^{-1}}x^{0.5}} \rightarrow \frac{1}{1 \cdot \infty} \rightarrow 0$, hence the limit is $0$. Is this correct? Any input is much appreciated.
Note the fundamental limit formula $$\lim_{t\to 0}\frac{a^{t}-1}{t}=\log a$$ and put $x=1/t,a=1/e$ to get $$\lim_{x\to \infty} x(e^{-1/x}-1)=-1$$ and then dividing by $\sqrt{x} $ we can see that the desired limit is $0$.
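A quick numerical check (a sketch; note that $\sqrt{x}\,(e^{-1/x}-1)\approx -1/\sqrt{x}$ for large $x$):

```python
import math

for x in [1e2, 1e4, 1e6, 1e8]:
    print(x, math.sqrt(x)*(math.exp(-1/x) - 1))   # tends to 0 like -1/sqrt(x)
```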
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why are vectors $\langle a,b\rangle$ and $\langle -b,a\rangle$ perpendicular? If I have the vectors: X = {a, b} Y = {-b, a} How can I explain that these vectors will always be perpendicular? I know I can prove this very easily via the dot product, but I need to explain it in a layman's way.
One way: Show that the triangle built on them satisfies the Pythagorean theorem, which implies that it's a right triangle Another way: Show that the four points $(a,b)$, $(-b,a)$, $(-a,-b)$, and $(b,-a)$ form a rhombus, each of whose vertices are equidistant from the center. This implies that they form a square. A third way: Form the right triangle with vertices $(0,0)$, $(0,b)$, and $(a,b)$. In addition, form the right triangle with vertices $(0,0)$, $(0,a)$, and $(-b,a)$. Observe that these are congruent, and fill in the angles.
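For the first way, the check is one line of algebra: $$\|X\|^2+\|Y\|^2=(a^2+b^2)+(b^2+a^2)=2(a^2+b^2), \qquad \|X-Y\|^2=(a+b)^2+(b-a)^2=2(a^2+b^2),$$ so the triangle with legs $X$ and $Y$ satisfies the Pythagorean relation, and the angle between them is right.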
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 6 }
Find the number of points of differentiability for the following function. If $$f(x)=\begin{cases} \cos x^3&;x\lt0\\ \sin x^3 - |x^3-1|&;x\ge0 \end{cases}$$ then find the number of points where $g(x)=f(|x|) \text { is non differentiable.}$
The question is asking about $f(|x|)$; by symmetry, since $g(x)$ is not differentiable at $x=1$, it is not differentiable at $x=-1$ as well. To show that it is not differentiable at $x=1$: If it is differentiable at $x=1$, then the following limit exists. \begin{align}\lim_{x \rightarrow 1} \frac{f(x)-f(1)}{x-1}&= \lim_{x \rightarrow 1} \frac{\sin x^3 - \sin 1-|x^3-1|}{x-1}\end{align} Since $\sin x^3$ is differentiable everywhere, the limit exists if and only if the following limit exists. $$\lim_{x \rightarrow 1} \frac{|x^3-1|}{x-1}=\lim_{x \rightarrow 1} sign(x-1)|x^2+x+1|=\lim_{x \rightarrow 1}3\,sign(x-1)$$ Since $\lim_{x \rightarrow 1^+} sign(x-1) = 1 \neq -1 = \lim_{x \rightarrow 1^-} sign(x-1)$, the function is not differentiable at $x=1$. Also, we have to verify that the function is differentiable at $x=0$. $$f(|x|) = \begin{cases} \sin x^3-|x^3-1| & x \geq 0 \\ -\sin x^3 -|x^3+1| & x<0\end{cases}$$ When $|x|<1$, $$f(|x|) = \begin{cases} \sin x^3-1+x^3 & 0 < x < 1 \\ -\sin x^3 -x^3-1 & -1<x<0\end{cases}$$ $$\lim_{x \rightarrow 0}\frac{f(|x|)-f(0)}{x}=\lim_{x\rightarrow 0}\frac{sign(x)(\sin x^3+x^3)}{x}=0$$ Hence $g$ is differentiable at $x=0$. Since $g$ is clearly differentiable at every other point, the points of non-differentiability are exactly $x=\pm1$: there are $2$ of them.
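A numerical illustration with one-sided difference quotients (a sketch; the step sizes are arbitrary choices of mine):

```python
import numpy as np

def g(x):
    ax = abs(x)
    return np.sin(ax**3) - abs(ax**3 - 1)

for x0 in [1.0, 0.0]:
    for h in [1e-4, 1e-6]:
        left = (g(x0) - g(x0 - h)) / h
        right = (g(x0 + h) - g(x0)) / h
        # the one-sided slopes disagree at x0 = 1, but both tend to 0 at x0 = 0
        print(x0, h, left, right)
```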
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fractional linear functions of complex analysis - don't understand this section In my notes under "Fractional Linear Transformations" of Complex Numbers, it says the following: Let $M$ be a $2\times 2$ complex matrix $$M=\begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$ We write $T_M$ for the associated fractional linear transformation: $$T_M(z) = \frac{az+b}{cz+d}.$$ Observe now that $T_{\lambda M} = T_M$ if $\lambda \neq 0$, and recall that $\det (\lambda M) = \lambda ^2 \det (M)$. This means that when $\det (M) \neq 0,$ if we set $M'$ = $(\det M)^{-\frac{1}{2}}M$, then $\det M' = 1$ and $T_M = T_M'$. Thus, we may restrict our attention to fractional linear transformations associated to matrices with determinant $1$. I don't see why this equality $T_{\lambda M} = T_M$ holds. I feel like the $\lambda$ shouldn't be a subscript. However, assuming it's true, when it says "we may restrict our attention...." is it saying that if we are ever given a transformation matrix $M$ with non-unit and non-zero determinant, then we can just "unify" it by slamming the constant $(\det M)^{-\frac{1}{2}}$ in front and calling the new transformation $M'$ (will this new transformation actually be the same as the old one??)? I hope someone could help clear this up for me
It's simple! $\;\lambda\begin{pmatrix}a&b\\c&d \end{pmatrix}=\begin{pmatrix}\lambda a&\lambda b\\\lambda c&\lambda d \end{pmatrix}$, so $$T_{\lambda M}(z)=\frac{\lambda az+\lambda b}{\lambda cz+\lambda d}=\frac{az+b}{cz+d}=T_M(z).$$ From the considerations in the post it indeed follows that $$\det M'=\det\bigl((\det M)^{-1/2}M\bigr)=(\det M)^{-1}\det M=1.$$ In particular, taking $\lambda=(\det M)^{-1/2}$ in the first identity shows $T_{M'}=T_M$: the rescaled matrix defines exactly the same transformation as the old one.
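A symbolic confirmation of the cancellation (a sketch with sympy):

```python
import sympy as sp

a, b, c, d, lam, z = sp.symbols('a b c d lam z', nonzero=True)
T = (a*z + b)/(c*z + d)
T_lam = (lam*a*z + lam*b)/(lam*c*z + lam*d)
print(sp.simplify(T - T_lam))   # 0, so T_{lam M} = T_M
```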
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find biholomorphism between these domains I am pretty stuck on constructing a biholomorphism between: $G_1 = \{z \in \mathbb{C}: Re(z) >0, Im(z)>0 \}$ and $G_2 =\{z \in \mathbb{C}:|z|<1, Re(z) >0, Im(z)>0 \}$ I am able to visualize this as some kind of stretching, so first I thought about something like $$f(z)=\frac{z}{|z|}$$ Unfortunately, I can't handle this at all, and from my lecture notes I strongly suspect that this boils down to somehow using Möbius transformations. The conceptual problem I have is that I don't see how to choose the Möbius transformation. E.g. $$g(z) = \frac{z-i}{z+i}$$ for this function I do not see to which angles it maps points. We checked in the lecture that it indeed maps to the unit disk, but did not say anything about angles. Any help appreciated, starting from the basics if possible. I suppose I lack intuition for this kind of problem. Edit: From other questions, I feel like there is a way to "know" what the image will look like just from this 3-point-consideration. I would be glad if someone could explain this to me.
So the map you have given, $g(z)$, is the Cayley transform, which maps the upper half plane to the unit disc. Now observe that if you restrict this map to the first quadrant, which is exactly $G_{1}$, then the image is the lower half of the unit disc. Now you take a map from the lower half-disc to the upper half-disc (just rotate by multiplying with $e^{i\pi}$). So let $g:G_{1}\to D_{1}$ be as above, where $D_{1}$ is the lower half-disc. Then take $g_{2}:D_{1}\to D_{2}$ given by $g_{2}(w)=e^{i\pi}w$, where $D_{2}$ is the upper half-disc. Finally take $g_{3}:D_{2}\to G_{2}$, the principal square-root map, which maps $D_{2}$ onto $G_{2}$. The composition $g_3\circ g_2\circ g$ is the desired biholomorphism. Hope it's fine!
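A numerical sanity check of the composed map (a sketch; numpy's sqrt is the principal branch, and the sample box inside the first quadrant is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.uniform(0.01, 10, 1000) + 1j*rng.uniform(0.01, 10, 1000)  # points of G1
w = np.sqrt(-(z - 1j)/(z + 1j))   # g3(g2(g(z))), since e^{i*pi} g(z) = -g(z)
print((np.abs(w) < 1).all(), (w.real > 0).all(), (w.imag > 0).all())  # all True
```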
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Value of the double sum of product of cosines For $m \neq n$, $a \in (0,1/2)$ \begin{equation} \sum_{n\geq 1}^{\infty} \sum_{m\geq 1}^{\infty}\int_{a \pi}^{\pi/2} \frac{\cos(2mx)\cos(2nx)}{mn} \, dx \end{equation} I know that the value of the integral is \begin{equation} \sum_{n\geq 1}^{\infty} \sum_{m\geq 1}^{\infty} \left(-\frac{\left(n-m\right)\sin\left(2{\pi} a n+2{\pi}a m\right)+\left(n+m\right)\sin\left(2{\pi}a n -2{\pi}a m\right)}{4mn\left(n^2-m^2\right)} \right) \end{equation} Does this double sum have a closed form? If so, what is it?
Since $\sum_{m\geq 1}\frac{\cos(2mx)}{m}$ is pointwise convergent to $-\log(2\sin x)$ on $\left(0,\frac{\pi}{2}\right)$ we have: $$\int_{a\pi}^{\pi/2}\sum_{m,n\geq 1}\frac{\cos(2mx)\cos(2nx)}{mn}\,dx= \int_{a\pi}^{\pi/2}\log^2(2\sin x)\,dx \tag{1}$$ and $$ \int_{a\pi}^{\pi/2}\sum_{m=1}^{+\infty}\frac{\cos^2(2mx)}{m^2}\,dx = \int_{a\pi}^{\pi/2}\left[\frac{\pi^2}{6}-x(\pi-2x)\right]\,dx \tag{2}$$ is an elementary integral. $(1)$, however, depends on $\text{Li}_2$ (the dilogarithm function). In the special case $a=0$ we have $$ \int_{0}^{\pi/2}\log^2(2\sin x)\,dx = \int_{0}^{1}\frac{\log^2(2)+\log^2(x)+2\log(2)\log(x)}{\sqrt{1-x^2}}\,dx \tag{3}$$ and the RHS of $(3)$ can be computed by substituting $x=\sqrt{z}$ and by applying Feynman's trick to Euler's Beta function, leading to: $$ \int_{0}^{\pi/2}\log^2(2\sin x)\,dx =\frac{\pi^3}{24}.\tag{4}$$
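A numerical confirmation of $(4)$ (a sketch; the endpoint singularity of $\log^2(2\sin x)$ at $0$ is integrable and quad copes with it here):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(2*np.sin(x))**2, 0, np.pi/2)
print(val, np.pi**3/24)   # both ~1.29192...
```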
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Formula for Natural logarithm of $\pi$ Does any formula or expansion exist that gives $\ln \pi$ ? The expansion should not just be any formula of $\pi$ with a $\ln$ before it. For example $\ln \pi$ = k + $\sum f(x)$ or something of this type.
One could use this formula to get $$\ln(\pi)=\gamma+\sum_{n=1}^\infty\left[2\ln(n)-2\ln(n-0.5)-\frac1n\right]$$ Using the Euler product of the Riemann zeta function, $$2\ln(\pi)=\ln(6)-\sum_{p\in\mathbb P}\ln(1-p^{-2})$$ where $p$ is prime. Another interesting series: $$\ln(\pi)=\ln(2)+2\lim_{x\to-1^+}\sum_{n=2}^\infty x^n\ln(n)$$
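A numerical check of the first series (a sketch; the hard-coded constant is the Euler-Mascheroni constant $\gamma$, and the tail after $N$ terms is $O(1/N)$ since the summands are $O(1/n^2)$):

```python
import math

N = 10**6
s = 0.5772156649015329   # Euler-Mascheroni constant gamma
for n in range(1, N + 1):
    s += 2*math.log(n) - 2*math.log(n - 0.5) - 1.0/n
print(s, math.log(math.pi))   # both ~1.1447298...
```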
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 0 }
Order of element in field extension Given a field extension $\mathbb{F}_{p^n}$ and some element $\alpha \in \mathbb{F}_{p^n}$ which is not contained within any proper subfield of $\mathbb{F}_{p^n}$, is there a lower bound on the order of $\alpha$? I understand that the nonzero elements of a finite field form a cyclic group generated by some primitive element $\beta \in \mathbb{F}_{p^n}$. However, if we don't know whether $\alpha$ is primitive, what can we say about its order (without actually computing anything)?
Well, for any $m\mid n$, the subfield $\mathbb{F}_{p^m}$ consists of all elements of order dividing $p^m-1$. So the possible orders of elements of $\mathbb{F}_{p^n}$ which are not in any proper subfield are all the factors of $p^n-1$ that are not factors of $p^m-1$ for any proper divisor $m$ of $n$. This gives a simple way to compute the possibilities in any individual case if you're willing to do some integer factoring. I don't see any simple way to get a general explicit lower bound from this, though. Using the Frobenius automorphism, as in Quinn Greicius's answer, you can find a lower bound of $n+1$. This is tight in some cases: for instance, if $p=3$ and $n=4$, then the possibilities are factors of $3^4-1=80$ which are not factors of $3^2-1=8$, and the smallest such factor is $5=n+1$. But it is often not tight: for instance, it never can be if $n+1$ is divisible by $p$, since $p^n-1$ has no factors which are divisible by $p$. Or in the case that $n+1$ is prime, it will be tight iff $p$ is a primitive root mod $n+1$ (since that means exactly that $p^n-1$ is divisible by $n+1$ but $p^m-1$ is not for all smaller $m$).
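The case-by-case computation described here is easy to mechanize (a sketch with sympy; the function name is my own):

```python
from sympy import divisors

def new_element_orders(p, n):
    # orders available to elements of GF(p**n)* lying in no proper subfield
    top = set(divisors(p**n - 1))
    sub = set()
    for m in range(1, n):
        if n % m == 0:
            sub |= set(divisors(p**m - 1))
    return sorted(top - sub)

print(new_element_orders(3, 4))   # [5, 10, 16, 20, 40, 80]; smallest is 5 = n + 1
```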
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Set containing open ball and contained in closed ball has the same boundary I'm having some difficulty trying to figure out how to prove that if $U$ is an open unit ball and $\overline{U}$ is a closed unit ball in $\mathbb{R}^n$ and $U\subseteq A\subseteq \overline{U}$, then the boundary of $A$ is the same as the boundary of $U$ or $\overline{U}$. I would definitely appreciate a hint. My attempt: $\mbox{bd}(\overline{U}) = \overline{U}\setminus U \supseteq \overline{U}\setminus A \supseteq \overline{U}\setminus \overline{U} = \emptyset$, so that $\overline{U}\setminus A \subseteq \mbox{bd}(\overline{U})=\mbox{bd}(U)$, which I hoped would imply that $\mbox{bd}(A)= \mbox{bd}(U)$. Unfortunately, my argument does not appear to be convincing enough.
Say that $U=B(0,1)$. Since $\overline U$ is closed, we have that $\overline A\subset\overline U$. Since $U$ is an open ball, every point of $U$ is an interior point of $A$. Hence, $\partial A\subset \overline U\setminus U=\{x:\,|x|=1\}=\partial U$. Taking $x\in\partial U$, for every $r>0$, $B(x,r)$ intersects both $U\subset A$ and $\mathbb{R}^n\setminus \overline U\subset \mathbb{R}^n\setminus A$ and so $x\in \partial A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2360923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving convergence in $L^p$ Suppose $f \in L^p(\mathbb{R}) , 1 \leq p < \infty$. Prove that $$\lim_{h\rightarrow 0} \int_{\Bbb R} |f(x+h)-f(x)|^p = 0$$ I was thinking about showing that the sequence $\{f(x+\frac{1}{n}) - f(x)\}_n$ converges to zero in $L^p(\mathbb{R})$ so that I could pass the limit under the integral and conclude that the limit is zero, but I am having problems showing this convergence. I'm also not sure what other approach to take/what other convergence theorems I could use.
Let $f=1_{[a,b]}$ be the indicator function of an interval $[a,b] \subseteq \mathbb{R}$. For all $h \in (0,1)$ we have that $(\int_{\mathbb{R}}|1_{[a,b]}(x+h)-1_{[a,b]}(x)|^p)^{1/p}=(\int_{\mathbb{R}}|1_{[a,b]-h}(x)-1_{[a,b]}(x)|^p)^{1/p}=(\int_{([a,b]-h) \triangle [a,b]}1)^{1/p}=m(([a,b]-h) \triangle [a,b])^{1/p} \leqslant m([a-h,a) \cup(b-h,b])^{1/p} \leqslant (2h)^{1/p} \rightarrow 0.$ In the same way you can prove the above for all $h \in (-1,0)$. Now if $f$ is a step function with compact support, then it is easy to see from the first step that $\int_{\mathbb{R}}|f(x+h)-f(x)|^p \rightarrow 0$. Now we know that the set of step functions with compact support is dense in $L^p(\mathbb{R})$. Let $\epsilon >0$ and $f \in L^p(\mathbb{R})$. There exists a step function $s$ with compact support such that $||f(x)-s(x)||_p \leqslant \epsilon$. For all $h \in \mathbb{R}$ we have that $$||f(x+h)-f(x)||_p \leqslant$$ $$||f(x+h)-s(x+h)||_p+||s(x+h)-s(x)||_p+||s(x)-f(x)||_p$$ $$=2||f(x)-s(x)||_p+||s(x+h)-s(x)||_p$$ Thus $$\limsup_{h \rightarrow 0}||f(x+h)-f(x)||_p \leqslant 2 \epsilon +0=2 \epsilon$$ Since $\epsilon>0$ was arbitrary, $\int_{\mathbb{R}}|f(x+h)-f(x)|^p \rightarrow 0$ as $h \rightarrow 0$.
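A discretized check of the indicator computation in the $p=1$ case (a sketch; the grid parameters are arbitrary choices of mine):

```python
import numpy as np

x = np.linspace(-2, 3, 2_000_001)
dx = x[1] - x[0]
f = ((x >= 0) & (x <= 1)).astype(float)       # indicator of [0, 1]
for h in [0.5, 0.1, 0.01]:
    fh = ((x + h >= 0) & (x + h <= 1)).astype(float)   # translated indicator
    print(h, np.abs(fh - f).sum() * dx)       # ~2h, matching the bound
```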
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What practical applications had the number pi in the ancient world, and what was the motivation for calculating it? I know that modern sciences have many, many applications for the number pi, many of them outside of geometry, but I do not understand what practical applications this constant had in the ancient world. What motivated the Greeks, Babylonians and the Egyptians to try to calculate this number?
For all practical applications in ancient daily life, the various rational approximations in vogue were more than enough, especially considering that the instruments used to measure length were prone to larger errors. I think the interest in finding ever more precise values of $\pi$ was actually due to its irrationality, as suggested by the method of exhaustion, and thus to the existence of incommensurable quantities, infinite convergent series, etc. And later, to the question of whether it is transcendental, and so to the impossibility of solving the famous problem of the Quadratura Circuli. There is an interesting summary of the history in the world of pi.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to use density argument to obtain inequality? Let $(X, \|\cdot \|)$ and $(Y,\|\cdot\|)$ be Banach spaces. Assume that $T$ is a linear operator from $X$ to $Y$ ($T:X\to Y$). Assume that $D\subset X$ is dense in $X.$ The operator $T$ satisfies the inequality $\|Tf\|_{Y} \leq \|f\|_{X}$ for all $f\in D.$ Question: Can we expect $\|Tf\|_{Y}\leq \|f\|_{X}$ for all $f\in X$? Edit: This page says: "A common procedure for defining a bounded linear operator between two given Banach spaces is as follows. First, define a linear operator on a dense subset of its domain, such that it is locally bounded. Then, extend the operator by continuity to a continuous linear operator on the whole domain." Can anybody elaborate on this or give a reference for the details?
The answer to your question is no. There are linear operators $T$ that are bounded on a dense subspace, but not bounded. See this example, which has $Y=\mathbb R$ but is not constructive. For your edit: Here, the operator is not given on all of $X$, but only on a dense subset $D$. Therefore we can choose the values $T$ will take at the points of $X\setminus D$. You have to do the following steps (I will still leave out some details): * *define $T f= \lim T f_n$, where $f_n\in D$ are such that $f_n\to f$, for all $f\in X$. *in order to justify 1., show that the limit actually exists (hint: use Cauchy sequences). *in order to justify 1., show that the definition is independent of the chosen sequence $f_n$. *show that the resulting operator $T:X\to Y$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Direct products - Is this an abuse of notation? Let $G=\langle x\rangle\times\langle y\rangle$ where $|x|=8$ and $|y|=4$. Find all pairs $a,b$ in $G$ s.t. $G=\langle a\rangle\times\langle b\rangle$, where $a$ and $b$ are expressed in terms of $x$ and $y$. (Abstract Algebra: Dummit & Foote, Direct products, Ex. 15) Is this an abuse of notation?: here elements of $G$ are ordered pairs in the form of $(x^i,y^j)$. If $a,b\in G$, we can say that $a=(x^{i_1},y^{j_1})$ and $b=(x^{i_2},y^{j_2})$, so $G=\langle(x^{i_1},y^{j_1})\rangle\times\langle(x^{i_2},y^{j_2})\rangle$(!), with elements being ordered pairs of ordered pairs? So I decided to treat elements like $(x^i,y^j)\in G$ as $x^iy^j$, just like the authors always do. So $G=\{1,x,x^2,\dots,y,xy,x^2y,...\}$. But then for $G=\langle a\rangle\times\langle b\rangle$, we have $\langle x\rangle=\langle a\rangle$ and $\langle y\rangle=\langle b\rangle$ (or not?), so $a=x, x^3, x^5,$ or $x^7$ and $b=y$ or $y^3$. Can I also take $a=xy$, for example, because with such use of shorthand notation, we may as well take $a=xy$ and $b=y$, not necessary that $\langle x\rangle=\langle a\rangle$ and $\langle y\rangle=\langle b\rangle$ (?)
You just need to find all pairs of elements $a, b\in G$ such that $|a| \cdot |b| = |G|$ and $\langle a \rangle \cap \langle b \rangle = \{1\}$. Since $|G|=32$ and $G$ obviously does not contain elements of order greater than $8$, the only possibilities are $|a|=4, \, |b|=8$ and $|a|=8, \, |b|=4$. Every element of $G$ is of the form $x^iy^j$, so you can now perform an easy case-by-case analysis and complete the exercise.
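A brute-force count of such pairs (a sketch in additive notation, identifying $G$ with $\mathbb{Z}_8\times\mathbb{Z}_4$; the encoding is my own):

```python
from itertools import product

G = [(i, j) for i in range(8) for j in range(4)]   # Z_8 x Z_4

def cyclic(g):
    # the cyclic subgroup generated by g
    H, h = {(0, 0)}, g
    while h not in H:
        H.add(h)
        h = ((h[0] + g[0]) % 8, (h[1] + g[1]) % 4)
    return H

pairs = [(u, v) for u, v in product(G, G)
         if len(cyclic(u)) * len(cyclic(v)) == 32
         and cyclic(u) & cyclic(v) == {(0, 0)}]
print(len(pairs))   # number of ordered pairs (a, b) with G = <a> x <b>
```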
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does $\frac{\cos\Delta x - 1 }{\Delta x} \to 0$ I'm watching Lecture 3 in MIT single variable calculus. https://www.youtube.com/watch?v=kCPVBl953eY&list=PL590CCC2BC5AF3BC1&index=3 And at one point the instructor does the following: I was under the impression that when evaluating limits we need to avoid having $0/0$ in the denominator. However, in the notes here, it says that $\frac{\cos\Delta x - 1 }{\Delta x} \to 0$ How does this work?
Hint: $$1-\cos (d)=2\sin^2\left(\frac {d}{2}\right),$$ so, if we know that $$|\sin (A)|\le | A |,$$ then $$|\cos (d)-1|\le \frac {d^2}{2}.$$ Hence $$\Bigl |\frac {\cos (\Delta x)-1}{\Delta x}\Bigr |\le \frac {|\Delta x|}{2}\longrightarrow 0. $$
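Numerically (a sketch):

```python
import math

for h in [1e-1, 1e-3, 1e-5]:
    print(h, (math.cos(h) - 1)/h)   # behaves like -h/2, tending to 0
```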
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 2 }
Why dense subsets are convenient to prove theorems Could you please explain the following concept (preferably by examples) about dense subsets: If you want to prove that every point in $A$ has a certain property that is preserved under limits, then it suffices to prove that every point in a dense subset $B$ of $A$ has that property. What are the examples of properties that are preserved under limits? Why is it sufficient to prove such properties for a dense subset $B\subseteq A\subseteq closure(B)$?
Let $B$ be a dense subset of $A$. Suppose property $P$ is preserved under limits and we know that every point in $B$ satisfies property $P$. Now pick an arbitrary point of $A$, say $a$. Then $a=\lim_n b_n$ for some sequence of elements $b_n$ in $B$, each of which has the property, and since our property even holds for the limit, it holds for $a$. Motivation: The best and most relevant examples of this occur when you are working in a space $X$ whose points are functions. So we can talk about several properties like continuous, (insert-mathematician's-name)-integrable, differentiable, bounded, harmonic, analytic, etc. To give but one important example to highlight the importance of such a notion, let me mention that there have been entire generations of mathematicians whose illustrious careers were dedicated to studying the conditions under which the integral of a limit of functions is the limit of the integrals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Laurent series of $\frac{z}{z^2+1}$ I am interested in the Laurent series of $f(z)=\frac{z}{z^2+1}$ in $D_{0,2}(i)$. What I did: We have $\displaystyle f(z)=\frac{1}{2}\frac{1}{z+i}+\frac{1}{2}\frac{1}{z-i}$, so * *$\displaystyle \frac{1}{2}\frac{1}{z+i}=\frac{1}{2i}\frac{1}{1-\left(-\frac{z}{i}\right)}=\frac{1}{2i}\sum_{n=0}^\infty(iz)^n$ *$\displaystyle \frac{1}{2}\frac{1}{z-i}=-\frac{1}{2i}\frac{1}{1-\left(\frac{z}{i}\right)}=-\frac{1}{2i}\sum_{n=0}^\infty(-iz)^n$ My problem is that these series only converge in $D_{0,1}(0)$, so how do I obtain the Laurent series centered at $i$?
It is the right idea to use the geometric series. Try it this way: $$ \frac1{z+i}=\frac1{2i+z-i}=\frac1{2i}\cdot\frac1{1+\frac{z-i}{2i}}=\frac1{2i}\cdot\frac1{1-\left(-\frac{z-i}{2i}\right)}=\frac1{2i}\sum_{n=0}^\infty\left(\frac{i}2\right)^n(z-i)^n. $$ The term $\frac1{z-i}=(z-i)^{-1}$ is already a part of the Laurent series and stays as it is. Together you get $$ f(z)=\frac12(z-i)^{-1}+\sum_{n=0}^\infty\frac1{4i}\left(\frac{i}2\right)^n(z-i)^n. $$
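As a cross-check, sympy can expand directly about the pole (a sketch; I believe series accepts a complex expansion point here):

```python
import sympy as sp

z = sp.symbols('z')
print(sp.series(z/(z**2 + 1), z, sp.I, 3))
# leading terms: 1/(2*(z - I)) - I/4 + (z - I)/8 + ...
```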
{ "language": "en", "url": "https://math.stackexchange.com/questions/2361969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many points $(x, y)$ with integer coordinates satisfy the inequality $x^2+y^2 \leq 25$? I had this question from a previous exam that I couldn't answer. I apologize for any English mistakes or for any stupid questions; I tried to solve it and searched the internet, but I couldn't find answers, or at least ones with explanations. 1- If $$x^2 + y^2 \leq 25,$$ how many INTEGER pairs of $x,y$ satisfy the inequality? *I tried to think of combinatorics but I didn't know how it could help me; I had to use brute force in the end. Thanks for taking the time to read the question. If anyone has tips for my exam or knows any challenging problems, I would be really thankful if he/she could tell me about them! Thanks!
This is not an answer but a comment or a hint (so please don't down-vote it). Consider this figure: [figure: the circle $x^2+y^2=25$ drawn on the integer lattice, with the lattice points inside and on the circle marked; not reproduced here]
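Since the figure is not reproduced, here is the brute-force count it illustrates (a sketch):

```python
count = sum(1 for x in range(-5, 6) for y in range(-5, 6) if x*x + y*y <= 25)
print(count)   # 81 lattice points
```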
{ "language": "en", "url": "https://math.stackexchange.com/questions/2362063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Pre-calculus $\frac 27 = \frac 1a + \frac 1b$ I had this question from a previous exam that I couldn't answer. I apologize for any English mistakes or for any stupid questions; I tried to solve it and searched the internet, but I couldn't find answers. 4- If $\frac 27$ can be written in a unique way in the form $$\frac 1a +\frac 1b$$ $$a,b \in \mathbb Z^+$$ $$a\neq b,$$ what is $a+b$? *It seemed so easy; maybe I am missing something. Aren't there infinitely many values that satisfy this equation? $7(a+b) = 2ab$ Thanks for taking the time to read the question!
The given problem boils down to finding lattice points on a hyperbola. The equation $$ 2ab=7(a+b) \tag{1}$$ is equivalent to $$ (2a-7)(2b-7) = 49\tag{2}$$ and $49=7^2$ cannot be written as a product of integers in too many ways. The factorization $(2a-7)=1$, $(2b-7)=49$ gives the non-trivial solution $\color{red}{(a,b)=(4,28)}$; the factorization $7\times 7$ would force $a=b$, and the negative factorizations violate $a,b\in\mathbb{Z}^+$. Hence $a+b=4+28=32.$
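A brute-force confirmation of uniqueness (a sketch; the search bounds are my own and comfortably contain any solution, since $a<b$ forces $7/2<a<7$):

```python
from fractions import Fraction

target = Fraction(2, 7)
sols = [(a, b) for a in range(1, 50) for b in range(a + 1, 2000)
        if Fraction(1, a) + Fraction(1, b) == target]
print(sols)   # [(4, 28)], so a + b = 32
```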
{ "language": "en", "url": "https://math.stackexchange.com/questions/2362140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Evaluating the determinants of matrices A and B I am stuck on a homework question that asks to evaluate the det(A) and det(B). We are only given the following information: A and B are 3x3 matrices, such that det((2A)⁻¹Bᵀ)=1 and det(4B⁻¹A³)=1/2, how could I solve this?
I'm going to work this out for $n \times n$ matrices, since there is really no more effort involved. The solution depends almost entirely on three facts: $\det(XY) = (\det X)(\det Y), \tag{1}$ and $\det X^T = \det X, \tag{2}$ which hold for any two $n \times n$ matrices $X$, $Y$, and the fact that $\det D$, where $D$ is a diagonal matrix, is the product of its diagonal entries. From (1), we have, for invertible $X$, $(\det X)(\det X^{-1}) = \det (XX^{-1}) = \det(I) = 1, \tag{3}$ whence $\det X^{-1} = (\det X)^{-1}. \tag{4}$ Since $\det ((2A)^{-1} B^T) = 1, \tag{5}$ we have $\det (2^{-1}I) \det A^{-1} \det B^T = \det ((2A)^{-1} B^T) = 1, \tag{6}$ and also $\det (2^{-1}I) = 2^{-n}, \tag{7}$ so using (2) and (4) we find $2^{-n} (\det A)^{-1} \det B = 1, \tag{8}$ or $\det B = 2^n \det A; \tag{9}$ also, $\det (4I) \det B^{-1} \det A^3 = \det (4IB^{-1}A^3) = \det (4B^{-1}A^3) = 2^{-1}, \tag{10}$ and $\det(4I) = 4^n = 2^{2n}, \tag{11}$ so with the aid of (1), (2) and (4) again $2^{2n} (\det B)^{-1} (\det A)^3 = 2^{-1}, \tag{12}$ so $\det B = 2^{2n + 1} (\det A)^3; \tag{13}$ combining (9) and (13): $2^n \det A = 2^{2n + 1} (\det A)^3; \tag{14}$ thus since we have been (tacitly) assuming $\det A \ne 0$, $(\det A)^2 = 2^{-(n + 1)}. \tag{15}$ Note that (15) determines $\det A$ only up to sign; both signs are consistent with (9) and (13). Taking the positive root, $\det A = \sqrt{2^{-(n + 1)}}; \tag{16}$ by (9), $\det B = 2^n\sqrt{2^{-(n + 1)}} = \sqrt{2^{(n - 1)}}. \tag{17}$ With $n = 3$ we have $\det A = \sqrt{2^{-4}} = 2^{-2}, \tag{18}$ and $\det B = \sqrt {2^2} = 2. \tag{19}$ (The negative pair $\det A = -2^{-2}$, $\det B = -2$ satisfies the given conditions equally well.) These results are easily checked.
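A numerical sanity check with a concrete diagonal pair realizing these determinants (a sketch; the matrices are my own choice):

```python
import numpy as np

A = np.diag([0.25, 1.0, 1.0])   # det A = 1/4
B = np.diag([2.0, 1.0, 1.0])    # det B = 2

cond1 = np.linalg.det(np.linalg.inv(2*A) @ B.T)                           # 1.0
cond2 = np.linalg.det(4*np.linalg.inv(B) @ np.linalg.matrix_power(A, 3))  # 0.5
print(cond1, cond2)
```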
{ "language": "en", "url": "https://math.stackexchange.com/questions/2362217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }