Countably additive property of Lebesgue measure I am fairly new to the world of measure theory, and I am working through Billingsley (third edition). On page 23 the field $B_0$ is defined as the field of finite disjoint unions of subintervals of $\Omega$. However, to prove that the Lebesgue measure is a countably additive probability measure on the field $B_0$, it says (page 26) Suppose that $A=\bigcup_{k=1}^\infty A_k$, where $A$ and $A_k$ are $B_0$-sets and the $A_k$ are disjoint. How can the union of infinitely many elements be in a set which is defined as the set of finite unions? Of course, this raises a more fundamental question: if $A=\bigcup_{k=1}^\infty A_k$ is in $B_0$, and we already know that $B_0$ is a field, then isn't $B_0$ also a $\sigma$-algebra, since it is closed under countable unions? The book states that this is not the case (and spends quite a few pages later extending the Lebesgue measure to the Borel sets, a $\sigma$-algebra), but I don't understand why. I am certainly missing some important point here.
The operative phrase here is suppose that. In general $B_0$ will not be closed under taking countable unions; it's not a $\sigma$-algebra and nobody is claiming that it is. But it could happen that there is some sequence of disjoint sets $A_1, A_2, \dots$ from $B_0$, such that their union $A = \bigcup_{k=1}^\infty A_k$ just happens to be another set in $B_0$. When this happens, we want it to be the case that $m(A) = \sum_{k=1}^\infty m(A_k)$. I'm not sure whether "subintervals" here means open or closed or what. Let's say $B_0$ is the field of finite unions of half-open intervals in $\Omega = [0,1)$. In general a countable union of such sets need not be a finite union of half-open intervals. Consider something like $A_1 = [0,1/2)$, $A_2 = [3/4, 7/8)$, $A_3 = [15/16, 31/32)$, and so on. The union $A = \bigcup_{k=1}^\infty A_k$ is definitely not in $B_0$, in this case. But in some other cases it could be. For example, suppose $A_1 = [0,1/2)$, $A_2 = [1/2, 3/4)$, $A_3 = [3/4, 7/8)$, and so on. Each of the sets $A_k$ is in $B_0$, and their union is $[0,1)$ which is also in $B_0$. We can then check that $m([0,1)) = 1 = \sum_{k=1}^\infty 2^{-k} = \sum_{k=1}^\infty m(A_k)$ so countable additivity checks out for this case. The claim is that it holds in all such cases.
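As a quick numerical sanity check (a Python sketch, independent of the argument itself): the lengths $2^{-k}$ of the intervals $A_1=[0,1/2)$, $A_2=[1/2,3/4)$, $\dots$ really do sum to $1 = m([0,1))$.

```python
# Partial sums of the interval lengths m(A_k) = 2^-k approach m([0,1)) = 1.
partial_sums = [sum(2.0 ** -k for k in range(1, n + 1)) for n in (5, 10, 50)]
print(partial_sums)   # [0.96875, 0.9990234375, 0.9999999999999991] -> 1
```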
{ "language": "en", "url": "https://math.stackexchange.com/questions/1514798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pointwise convergence implies $L^p$ Simply, why is it that convergence pointwise, $u_j \rightarrow u$, implies convergence in $L^p$ if $|u_j(x)| \le g(x)$ for some $g$ in $L_+^p$?
Simply, Lebesgue's dominated convergence theorem. The "domination" by $g$ really is needed: a traveling square wave $f_n=\chi_{[n,n+1]}$ converges pointwise to zero but has $L^p$ norm $1$ for every $p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1514956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Riemann like sums for Lebesgue integrable function Let $f(x)$ be a non-negative function on $\mathbb{R}$ such that $\int_{-\infty}^{\infty}f(x) \ dx=1.$ Actually, f(x) is a probability density function for a continuous random variable. Can I justify that $$ \lim_{n \rightarrow \infty} \sum_{m \in\mathbb{Z}} \frac{1}{n} f(\frac{m}{n}+\frac{z}{n})= \int_{-\infty}^{\infty} f(x) \ dx $$ where $z \in [0,1]$ is fixed. I am trying to use Lebesgue Dominated convergence Theorem but which function should I pick up as a dominator to justify the exchange of limits and integrals ?
This is not true even for continuous nonnegative $f.$ For $N=1,2,\dots,$ we define $f$ on the disjoint intervals $[N-1/(N^2e^N),\ N+1/(N^2e^N)]$ to be an isosceles triangular spike of height $e^N$ centered over this interval. Define $f$ to be $0$ everywhere else. Then $f$ is continuous everywhere, and $$\int_{-\infty}^{\infty}f(x) \ dx = \sum_{N=1}^{\infty}1/N^2 < \infty.$$ But for each $N,$ $$\sum_{m \in\mathbb{Z}} \frac{1}{N} f \left( \frac{m}{N}\right) > \frac{1}{N}f \left( \frac{N^2}{N} \right) = \frac{e^N}{N}\to \infty$$ as $N\to \infty.$
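For the skeptical reader, here is a small Python sketch of this counterexample (the window $[0,2N]$ and the sample values of $N$ are arbitrary choices for illustration): the integral stays finite while the Riemann sums at spacing $1/N$ blow up.

```python
import math

def f(x):
    # Triangular spike of height e^c and half-width 1/(c^2 e^c) at each integer c >= 1.
    c = round(x)
    if c < 1:
        return 0.0
    w = 1.0 / (c ** 2 * math.exp(c))
    return math.exp(c) * max(0.0, 1.0 - abs(x - c) / w)

# Each spike is a triangle of area (1/2) * e^c * (2 / (c^2 e^c)) = 1/c^2, so the integral is finite:
print(sum(1.0 / c ** 2 for c in range(1, 100000)))   # ~ pi^2 / 6

for N in (2, 4, 8):   # powers of two, so m/N is exact in floating point
    riemann = sum(f(m / N) for m in range(2 * N * N + 1)) / N   # grid over [0, 2N]
    print(N, riemann, math.exp(N) / N)   # the sum exceeds e^N / N, which blows up
```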
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
proving the existence of a complex root Show that if $x^5+ax^4+ bx^3+cx^2+dx+e$, where $a,b,c,d,e \in {\bf R}$, satisfies $2a^2< 5b$, then the polynomial has at least one non-real root. We have $-a = x_1 + \dots + x_5$ and $b = x_1 x_2 + x_1 x_3 + \dots + x_4 x_5$.
We will prove the slightly stronger statement Claim: Let $$ p(x) = x^n + ax^{n-1} + bx^{n-2} + \cdots + c $$ be a polynomial with real coefficients. If $n=2,3,4,5$ and $2a^2 < 5b$ then $p$ has at least one non-real zero. If $n \geq 6$ then $p$ may have all real zeros regardless of whether $2a^2 < 5b$ or $2a^2 \geq 5b$. For example, the polynomial $$ \left(x - \sqrt{\frac{2}{n(n-1)}}\right)^n = x^n - \sqrt{\frac{2n}{n-1}} x^{n-1} + x^{n-2} + \cdots + \left(-\sqrt{\frac{2}{n(n-1)}}\right)^n $$ has all real zeros with $2a^2 < 5b$. (In fact, when $2a^2<5b$ and $n\le 5$, $p$ has at least two non-real zeros, since non-real zeros come in complex conjugate pairs.) As noted in the question, if $x_1,\ldots,x_n$ are the zeros of $p$ then $$ a = -\sum_{k=1}^{n}x_k \qquad \text{and} \qquad b = \sum_{1 \leq j < k \leq n} x_j x_k, $$ so that $$ \begin{align} 2a^2 &= 2\sum_{k=1}^{n} x_k^2 + 4\sum_{1 \leq j < k \leq n} x_j x_k \\ &= 2\sum_{k=1}^{n} x_k^2 + 4b \end{align} $$ or, rearranging, $$ 2a^2 - 5b = 2\sum_{k=1}^{n} x_k^2 - \sum_{1 \leq j < k \leq n} x_j x_k. $$ To prove your statement it therefore suffices to show that $$ 2\sum_{k=1}^{n} x_k^2 \geq \sum_{1 \leq j < k \leq n} x_j x_k \tag{1} $$ for $x_1,\ldots,x_n \in \mathbb R$ and $n = 5$. It's clearly true if $$ \sum_{1 \leq j < k \leq n} x_j x_k \leq 0, $$ so suppose that $$ \sum_{1 \leq j < k \leq n} x_j x_k > 0 $$ and set $$ y_\ell = \left( \sum_{1 \leq j < k \leq n} x_j x_k \right)^{-1/2} x_\ell, \qquad \ell = 1,\ldots,n. $$ Then $$ \sum_{k=1}^{n} x_k^2 = \left(\sum_{1 \leq j < k \leq n} x_j x_k \right) \sum_{k=1}^{n} y_k^2, $$ so that $(1)$ becomes $$ 2 \sum_{k=1}^{n} y_k^2 \geq 1. \tag{2} $$ We also note that $$ \sum_{1 \leq j < k \leq n} y_j y_k = 1. $$ Our mode of attack to prove the new inequality $(2)$ will be to use Lagrange multipliers to minimize $\sum_{k=1}^{n} y_k^2$ subject to this constraint. To this end, define $$ f(y_1,\ldots,y_n,\lambda) = \sum_{k=1}^{n} y_k^2 + \lambda\left(\sum_{1 \leq j < k \leq n} y_j y_k - 1\right). $$ We calculate $$ f_{y_k}(y_1,\ldots,y_n,\lambda) = 2y_k + \lambda \sum_{j\neq k} y_j = (2-\lambda) y_k + \lambda \sum_{j=1}^{n} y_j, \qquad k=1,\ldots,n, $$ and thus we need to solve the $n+1$ equations $$ \begin{align} 0 &= (2-\lambda) y_k + \lambda \sum_{j=1}^{n} y_j, \qquad k=1,\ldots,n, \\ 1 &= \sum_{1 \leq j < k \leq n} y_j y_k. \end{align} $$ If $\lambda \neq 2$ then $$ y_k = \frac{\lambda}{\lambda-2} \sum_{j=1}^{n} y_j $$ and so $y_1 = y_2 = \cdots = y_n$, and if $y_k \neq 0$ then this yields $$ \lambda = \frac{2}{1-n}. $$ Further, the equation $1 = \sum_{1 \leq j < k \leq n} y_j y_k$ becomes $$ y_k = \sqrt{\frac{2}{n(n-1)}}. $$ We will omit checking that this is a local minimum of the problem. Finally, toward $(2)$ we calculate $$ 2\sum_{k=1}^{n} y_k^2 = \frac{4}{n-1} \geq 1 $$ for $n = 2,3,4,5$, from which the claim, and the result in the question, follows. In particular, taking $x_\ell = y_\ell$ furnishes the counterexample at the end of the claim.
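A numerical spot-check of both halves of the claim (a Python sketch; the random root ranges are arbitrary): for $n=5$, every choice of real roots satisfies $2a^2\ge 5b$, while the repeated root $\sqrt{2/(n(n-1))}$ with $n=6$ gives all real zeros and $2a^2<5b$.

```python
import random

def a_and_b(roots):
    # a = -e1, b = e2 (elementary symmetric functions) for a monic polynomial
    a = -sum(roots)
    b = (sum(roots) ** 2 - sum(r * r for r in roots)) / 2
    return a, b

random.seed(0)
for _ in range(5):                      # n = 5: all-real roots force 2a^2 >= 5b
    a, b = a_and_b([random.uniform(-10, 10) for _ in range(5)])
    print(2 * a * a >= 5 * b)           # True every time

n = 6                                   # n = 6: all real zeros yet 2a^2 < 5b
r = (2 / (n * (n - 1))) ** 0.5
a, b = a_and_b([r] * n)
print(2 * a * a, 5 * b)                 # 2a^2 = 4n/(n-1) = 4.8 < 5.0
```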
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Any finite-dimensional vector space is the dual space of another Any finite-dimensional vector space is the dual space of another? This is from a true/false section in my book and the statement is supposedly true. I can't say I see the reasoning behind this and any hint/direction is greatly appreciated. My intuition says it has something to do with the isomorphic relationship between finite-dimensional vector spaces and their duals. Edit: Here is the question as exactly phrased. Every vector space is the dual of some other space? True or False . . . I don't know whether or not they mean isomorphic, equal to, etc...
For a finite dimensional inner-product space $V$, we can write any linear functional $f \in V^{\ast}$ as $f(w) = \langle v,w\rangle$ for a unique $v \in V$; this is often written as $f = \langle v,-\rangle$. For example, given the basis $\{e_1,\dots,e_n\}$ (the standard basis) we have the dual basis $\{\pi_1,\dots,\pi_n\}$, where, if $w = w_1e_1 + \cdots + w_ne_n$, then $\pi_j(w) = w_j$, so that if $v = v_1e_1 + \cdots + v_ne_n$ our linear functional $\langle v,-\rangle$ is: $v_1\pi_1 +\cdots + v_n\pi_n$ (we can add two linear functionals, and take scalar multiples of them, using the operations of the underlying field $F$: $(f+g)(v) = f(v) + g(v)$ and $(cf)(v) = c(f(v))$). In the expression $f(w) = \langle v,w\rangle$, $f$ is the function, and $w$ is the variable; we might re-write this as $f = v^{\ast}$ ($f$ is clearly a linear functional "derived" from the vector $v$). But if we pull a "switcheroo" (this is a technical term), letting $w$ be the function, and $v$ the variable, we can define an element of $(V^{\ast})^{\ast}$ like so: $w^{\ast\ast}(v^{\ast}) = v^{\ast}(w)$. The mapping $w \mapsto w^{\ast\ast}$ is an ($F$-linear) isomorphism. While describing $V^{\ast}$ forced us to choose a basis and define a dual basis (to squeeze "numbers" (scalars) out of vectors, we usually need coordinates), the correspondence $w \mapsto w^{\ast\ast}$ is basis-free, which is what texts mean by saying it is "canonical". In particular, $V$ is (canonically isomorphic to) the dual space of $V^{\ast}$, which is exactly the statement in your book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does cancelling change this equation? The equation $$2t^2 + t^3 = t^4$$ is satisfied by $t = 0$ But if you cancel a $t^2$ on both sides, making it $$2 + t = t^2$$ $t = 0$ is no longer a solution. What gives? I thought nothing really changed, so the same solutions should apply. Thanks
There is actually no such thing as "cancelling". What you do in those moments is divide both sides of an equation by the same value. This means that cancelling like this: $2t^2 + t^3 = t^4$ to $2t + t^2 = t^3$ is actually a shortcut for: $2t^2 + t^3 = t^4$ to $(2t^2 + t^3)/t = t^4/t$ to $2t + t^2 = t^3$ And you can only do that by assuming $t \neq 0$, so you discard that solution by doing such an operation. If every term had an exponent greater than $2$, then $0$ would survive dividing by $t^2$ as a root, but you should never assume that without checking.
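One can see the lost root directly with a computer algebra system (a small sketch assuming SymPy is available):

```python
from sympy import symbols, solve

t = symbols('t')
print(solve(2 * t**2 + t**3 - t**4, t))   # [-1, 0, 2]: t = 0 is a solution
print(solve(2 + t - t**2, t))             # [-1, 2]: dividing by t^2 discarded t = 0
```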
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 6, "answer_id": 2 }
Prove that the point of intersection of the diagonals of the trapezium lies on the line passing through the mid-points of the parallel sides Prove, by vector method, that the point of intersection of the diagonals of the trapezium lies on the line passing through the mid-points of the parallel sides. My Attempt: Let the trapezium be $OABC$ such that $O$ is the origin and the position vectors of $A,B,C$ are $\vec{a},\vec{b},\vec{c}$. Then the equation of the diagonal $OB$ is $\vec{r}=\vec{0}+\lambda \vec{b}................(1)$ And the equation of the diagonal $AC$ is $\vec{r}=\vec{a}+\mu(\vec{c}-\vec{a}).......(2)$ And the equation of the line joining the mid-points of $OA$, i.e. $\frac{\vec{a}}{2}$, and $BC$, i.e. $\frac{\vec{b}+\vec{c}}{2}$, is $\vec{r}=\frac{\vec{a}}{2}+t(\frac{\vec{b}+\vec{c}}{2}-\frac{\vec{a}}{2}).........(3)$. Here $\lambda,\mu,t$ are scalars. I do not know how to solve $(1)$ and $(2)$ and put into $(3)$ to prove the desired result. Please help me. Thanks.
The trapezium's diagonals intersect at $X.$ We want to show that $B,X,\text{ and }R$ are collinear. Triangle $XOP$ is similar to triangle $XQA.$ So $$\frac{XO}{XQ}=\frac{OP}{QA}\\=\frac1\lambda.$$ Now, $$\vec{BX}=\vec{OX}-\vec{OB}\\=\frac1{\lambda+1}\vec{OQ}-\vec{OB}\\=\frac1{\lambda+1}\left(\mathbf{a}+2\lambda\mathbf{b}\right)-\mathbf{b}\\=\frac1{\lambda+1}\left[\mathbf{a}+\left(\lambda-1\right)\mathbf{b}\right],$$ while $$\vec{BR}=\vec{BO}+\vec{OA}+\vec{AR}\\=-\mathbf{b}+\mathbf{a}+\lambda\mathbf{b}\\=\mathbf{a}+\left(\lambda-1\right)\mathbf{b}.$$ Therefore $$\vec{BR}=(\lambda+1)\vec{BX}.$$ Thus $B,X,\text{ and }R$ are collinear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Solve equations like $3^x+4^x=7^x$ How can I solve something like this? $$3^x+4^x=7^x$$ I know that $x=1$, but I don't know how to find it. Thank you!
Defining $f(x):=\mathrm{log}_7\left(3^x+4^x\right)$, you want to search for fixed points of $f$. But $$ f^\prime(x)=\frac{1}{\ln 7}\cdot \frac{3^x\ln 3+4^x \ln 4}{3^x+4^x}>0 $$ and $$ f^{\prime\prime}(x)=\frac{1}{\ln 7}\cdot \frac{3^x4^x}{(3^x+4^x)^2} \cdot ((\ln 4)^2+(\ln 3)^2-2\ln 3 \ln 4) $$ which is positive too by the arithmetic-geometric mean inequality (indeed the last factor equals $(\ln 4-\ln 3)^2>0$). To conclude: $g(x):=f(x)-x$ is then strictly convex; since $3^x+4^x>3^x$ gives $g(x)>x(\log_7 3-1)\to+\infty$ as $x\to-\infty$, while $3^x+4^x\le 2\cdot 4^x$ for $x\ge 0$ gives $g(x)\le \log_7 2+x(\log_7 4-1)\to-\infty$ as $x\to+\infty$, and a strictly convex function tending to $-\infty$ must be strictly decreasing, $g$ has exactly one zero. As $g(1)=0$, the unique solution is $x=1$.
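A quick numerical look at $g(x)=\log_7(3^x+4^x)-x$ (plain Python; the sample points are arbitrary) shows the single sign change at $x=1$:

```python
import math

def g(x):
    # zeros of g are exactly the solutions of 3^x + 4^x = 7^x
    return math.log(3 ** x + 4 ** x, 7) - x

for x in (-5, 0, 0.5, 1, 1.5, 2, 5):
    print(x, g(x))   # positive before 1, zero at 1, negative after 1
```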
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
on a continuous function dominated by a certain function and taking certain values In a book by Charalambos D. Aliprantis and Owen Burkinshaw, Positive Operators, on page 13, Example 1.16, there is a statement, and I cannot see why it is always true. The statement is as follows: Take $0<f∈C[−1, 1]$, and let $0<c<2π$. Also, for each $n∈{\mathbb{N}}$, let $\displaystyle t_n = \frac{1}{c+2nπ}$ and note that $t_n → 0$. Next pick some $g_n ∈ C[−1, 1]$ with $0≤g_n≤f$ such that $g_n(\text{sin} \ c)=f(\text{sin} \ c)$ and $g_n(\text{sin}(c+t_n))= 0$. Now, my question is: Which theorem (or result) guarantees that, for each $n∈{\mathbb{N}}$, there is a function $g_n$ with these properties? (Is it an application of the Urysohn Lemma? If so, how?) (Here $C[−1, 1]$ denotes the vector space of continuous functions on the interval $[-1,1]$.) Can anybody help me to understand this passage in the aforementioned example?
Let $h_n$ be any non-negative continuous function such that $h_n(\sin(c+t_n))=0$ and $h_n(\sin c)=f(\sin c)$, for instance $$ h_n(x)=\frac{|x-\sin(c+t_n)|}{|\sin c-\sin(c+t_n)|}\,\,f(\sin c). $$ Then take $g_n(x)=\min(f(x),h_n(x))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve $ \left|\frac{x}{x+2}\right|\leq 2 $ I am experiencing a little confusion in answering a problem on Absolute Value inequalities which I just started learning. This is the problem: Solve: $$ \left|\frac{x}{x+2}\right|\leq 2 $$ The answer is given to be $x\leq-4$ or $x \geq-1$ This is my attempt to solve the problem: By dividing, $\left|\frac{x}{x+2}\right|\leq 2$ is equivalent to $\left |1-\frac{2}{x+2}\right|\leq2$ which is also equivalent to $\left |\frac{2}{x+2}-1\right|\leq2$ So, $-2\leq\frac{2}{x+2}-1\leq2$ which is equivalent to $-\frac{1}{2}\leq\frac{1}{x+2}\leq\frac{3}{2}$ Case 1: $x+2>0$. Solving $-\frac{1}{2}\leq\frac{1}{x+2}\leq\frac{3}{2}$, I get $x\geq-4$ and $x\geq-\frac{4}{3}$ which is essentially $x\geq-\frac{4}{3}$. Case 2: $x+2<0$. Solving $-\frac{1}{2}\times(x+2)\geq{1}\geq\frac{3}{2}\times(x+2)$, I get $x\leq-4$ and $x\leq-\frac{4}{3}$ which is essentially $x\leq-4$. So, the solutions are: $x\leq-4$ or $x\geq-\frac{4}{3}$. I couldn't get $x \geq-1$ as a solution. Did I do anything wrong? The book I am using is Schaum's Outlines-Calculus. Another question I would like to ask is that am I using 'and' and 'or' correctly in the above attempt to solve the problem? I have had this problem many times.
In the case $x+2>0$, you do not need to solve $-\frac12\le \frac{1}{x+2}$: it holds automatically, since $x+2>0$ makes $\frac{1}{x+2}$ positive. So only $\frac{1}{x+2}\le\frac32$ matters, giving $x\ge-\frac43$. In the case $x+2<0$, the inequality to solve is still the original $-\frac12 \le \frac{1}{x+2} \le \frac32$; but now $\frac{1}{x+2}<0<\frac32$ holds automatically, so it reduces to just $-\frac12 \le \frac{1}{x+2}$, giving $x\le -4$. Think of it simply.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1515986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Prove that taking the orthogonal projection of a vector is continuous Let $Y$ be a closed subspace of the Hilbert space $X$, and define $T:X\to Y$ as $$Tx = Proj_Y x$$ Then I want to check that $T$ is continuous. So I did the following, I took a converging sequence $(x_n) \in X$ with limit $x$, then I want to prove that $(Tx_n) \to Tx$. To this end I considered that $Tx_i=y_i$ and $Tx=y$, therefore $$||y_n-y||=||y_n-x_n+x_n-y|| <||y_n-x_n||+||x_n-y||<||y^{*}-x_n||+||x_n-y||$$ Now,since the last inequality holds for all $y^{*} \in Y$ we pick this point such that $||y^{*}-x_n||<\frac{\epsilon}{3}$, so we get $$||y^{*}-x_n||+||x_n-y||<\frac{\epsilon}{3}+||x_n-x+x-y||<\frac{\epsilon}{3}+||x_n-x||+||x-y^{**}||$$ and by the same argument as above, but know with $x$ we finally get: $$\frac{\epsilon}{3}+||x_n-x||+||x-y^{**}||<\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon$$ The thing is that I am not sure of my above proof, in the part of choosing those $y^{*}$ and $y^{**}$. I was thinking to use the Pythagoras identity, but I don't know how. Can someone help me to prove correctly the above result please? Thanks a lot in advance. Note Thm. Let $Y$ a closed subspace of the Hilbert space $X$. then, for $x ∈ X$, there exists a unique $y_ 0 ∈ Y$ such that $$||x − y_ 0 || ≤ ||x − y||$$ for all $y ∈ Y$ . This is, $y _0$ is the nearest vector in $Y$ to $x$. We call $y_0$ the orthogonal projection of $x$ in $Y$
Hint (for a complete proof of the result). I suppose that you define $Tx = Proj_Y x=p_Y(x)$ as the point of $Y$ at smallest distance from $x$. By the Hilbert projection theorem, $p_Y(x)$ exists and is unique as $Y$ is supposed to be closed. Now you can prove that a point in $Y$ is equal to $p_Y(x)$ if and only if $$\mathcal{Re} \langle x-p_Y(x),y-p_Y(x) \rangle \le 0 \text{ for all } y \in Y.$$ Based on that, you can prove that for $u,v \in X$, you have $$\Vert p_Y(u)-p_Y(v)\Vert \le \Vert u - v \Vert \text{ for all } u,v \in X \tag{1}.$$ To do so, take $$z=u-v-(p_Y(u)-p_Y(v))=(u-p_Y(u))-(v-p_Y(v))$$ You can write $u-v=p_Y(u)-p_Y(v) + z$ and prove that $$\Vert u-v \Vert^2=\Vert p_Y(u)-p_Y(v) \Vert^2 + \Vert z \Vert^2 + 2 \mathcal{Re} \langle z, p_Y(u)-p_Y(v) \rangle $$ with $\mathcal{Re} \langle z, p_Y(u)-p_Y(v) \rangle \ge 0$ considering the paragraph above. Inequality (1) proves that the projection is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Optimization of a split cone This was a thought that I've been pondering on for several days now, however I can not seem to optimize using calculus correctly in relation to the following statement. Any help would be greatly appreciated. Assume you have 3 dimensional cone with a perfectly circular base. Let the height of the cone be $h$ and the radius $r$. Now, a certain distance $x$ from the base, there is an imaginary ring drawn parallel to that of the base. Assume where all other statements involving "rest of the area" to be the whole surface area of the cone above that of the imaginary ring $x$ from the base. So, when will the area of the surface bound between the base and the imaginary ring be equal to one eighth the rest of the surface area? Thanks, this should put my mind at ease!
The problem itself is not very difficult. I still don't have enough reputation to comment, so I'm going to suggest this as an answer: define surface area as a function of $h$ and use Pythagoras' theorem; also, consider that if, and I'm taking a guess here because it is unclear in your statement, the ring touches the edge of the cone, your original figure gets divided into a "cone segment" below, and a new, smaller cone above. Actual answer: To solve this problem, you don't actually have to use optimization. All you need to do is remember your algebra. First, consider that if the cone has height $H$, and radius $R$, then the cone will have a surface area $S_{t} = \pi R^{2} + \pi R \sqrt{H^{2} + R^{2}}$ Now, if we choose a number $x$ between $0$ and $R$ and truncate the cone, we can deduce that $$ \frac{H}{R}=\frac{h_{1}}{x} $$ Where $h_{1}$ is the height of the upper cone created by the truncation of the original cone, simply because $\tan{\alpha} = \frac{H}{R}$ but also $\tan{\alpha} = \frac{h_{1}}{x}$, hence the above. Now, let us define the surface area $S_{1} = \pi x^{2} + \pi x \sqrt{x^{2} + \left(\frac{xH}{R}\right)^{2}}$, which represents the surface area of the upper cone, while $S_{2}$ represents the surface area of the lower cone segment truncated by $x$. But since $S_{2} = \frac{1}{8}S_{t}$, we have $$ S_{t} = S_{1} + \frac{1}{8}S_{t} $$ $$ \frac{7}{8}S_{t}=S_{1} $$ Noting that $$ S_{1} = \pi x^{2} + \pi x \sqrt{x^{2} + \left(\frac{xH}{R}\right)^{2}} = \frac{x^{2}}{R^{2}}\left(\pi R^{2} + \pi R\sqrt{H^{2}+R^{2}}\right) = \frac{x^{2}}{R^{2}}S_{t}, $$ this reduces to $\frac{x^{2}}{R^{2}} = \frac{7}{8}$, so $$ x=\frac{\sqrt{14}}{4}R $$ for $x$, $R$ and $H$ positive.
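A numeric check of the final step (a Python sketch; the dimensions $R=3$, $H=5$ are arbitrary, and the computation assumes the decomposition $S_t = S_1 + S_2$ with $S_2=\frac18 S_t$ used above): the ratio $S_1/S_t$ comes out to $7/8$ regardless of the cone chosen.

```python
import math

R, H = 3.0, 5.0                                   # arbitrary positive dimensions
St = math.pi * R**2 + math.pi * R * math.sqrt(H**2 + R**2)

x = math.sqrt(14) / 4 * R                         # the claimed cut x = R * sqrt(7/8)
S1 = math.pi * x**2 + math.pi * x * math.sqrt(x**2 + (x * H / R) ** 2)

print(S1 / St)                                    # 0.875 = 7/8, independent of R and H
```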
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible that $A\subseteq A\times B$ for some non empty sets $A,B$? I was wondering if there exist two non empty sets $A,B$ such that $$A\subseteq A\times B.$$ I know that there always exists a subset of $A\times B$ with the same cardinality as $A$, but I'm requesting here that $A$ be a subset of $A\times B$ without using any identification map. At first I thought that this was not possible because $A$ and $B\times A$ are two sets containing different kinds of elements: the second contains pairs like $(a,b)$ with $a\in A, b\in B$; the first just single elements $a\in A$. Moreover, suppose $A\subseteq A\times B$ holds and take $a \in A$. Then $a=(a_1,b_1)$ for some $a_1 \in A, b_1\in B$. For the same reason $a_1=(a_2,b_2)$ and so $a=((a_2,b_2),b_1)$. Following this argument I got some sort of recursive infinite definition for $a$ that made me suspect something is wrong. However if I take $$A=\mathbb{N}^{\mathbb{N}} ;B=\mathbb{N}$$ is it true that $A=A\times B$, or am I missing something? Moreover, if $A\subseteq A\times B$ can be true, are there other examples? edit: I add another strange example: take $A=\bigcup_{i=1}^{\infty} \mathbb{N}^i $ and $B=\mathbb{N}$; then $A \times B \subset A$. This makes me think that maybe there also exists an example for the other inclusion.
An element of $\Bbb N^{\Bbb N}$ is an infinite sequence of natural numbers. But what do we mean by a sequence? $n_0,n_1,\ldots n_k,\ldots$ is really nothing more than a function $f:\Bbb N\to\Bbb N$. For example, $f(k)=n_k$. So, $\Bbb N^{\Bbb N}=\{f:\Bbb N\to\Bbb N\}$. So, what is an element of $A\times\Bbb N$? It is a pair $(f,n)$. These are not elements of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Permutations of bit-sequence (discrete math) How many bit-sequences of length 8 have 1 as their first bit and 00 as the two last bits (e.g. $1011 1100$)? I thought the solution to this problem would be $1 * 2 * 2 * 2 * 2 * 2 * 1 * 1 = 2^5$, but my teacher proposed the following solution: amount that starts with 1: $1 * 2 * 2 * 2 * 2 * 2 * 2 * 2 = 2^7$ amount that ends with 00: $2 * 2 * 2 * 2 * 2 * 2 * 1 * 1 = 2^6$ amount that starts with 1 and ends with 00: $1 * 2 * 2 * 2 * 2 * 2 * 1 * 1 = 2^5$ answer = $2^7 + 2^6 - 2^5 = 160$. Isn't the amount of bit-sequences that start with 1 and end with $00$ the answer to the question "How many bit-sequences of length 8 have 1 as their first bit and $00$ as the two last bits"? Isn't the answer simply $2^5$?
Yes, the answer is simply $2^5$. Your teacher is calculating some other count: the count of those sequences which start with 1 or end with 00, using the inclusion-exclusion principle for counting them. Maybe you misunderstood the problem: you thought there was an and there in the problem statement, but it's actually an or.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Twelve people travelling in three cars - clarification on existing answer Please refer this question Twelve people travel in three cars, with four people in each car. Each car is driven by its owner. Find the number of ways in which the remaining nine people may be allocated to the cars. (The arrangement within the car doesn't matter) Selected answer of this question is ${9\choose3}\times{6\choose3}\times{3\choose3} \times3!$ But I believe this answer includes overcounting also. I think correct answer is ${9\choose3}\times{6\choose3}$ only (reference : explanation provided here for a similar question) Am I correct here or my understanding is wrong and correct answer is still ${9\choose3}\times{6\choose3}\times{3\choose3} \times3!$?
Yes, you are right. The mistake I made was to take extra care of the dissimilarity of the cars, thus causing repetitions. Please check the updated answer, thanks :-)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many bit strings of length 10 either begin with three 0s or end with two 0s? The question: How many bit strings of length 10 either begin with three $0$s or end with two $0$'s? My solution: $0$ $0$ $0$ X X X X X $0$ $0$ = $2^5 = 32$ editing** I noticed the word "or", so I changed the solution to $2^7$ (three $0$'s) + $2^8$ (two $0$'s) - $2^5$ (both) = 416. Is this the correct way to do it?
These are the binary words $000x$ ($2^7$ many $x$) and $y00$ ($2^8$ many $y$) minus $000z00$ ($2^5$ many $z$). Looks good. But I calculate $352$.
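The count is small enough to verify by brute force (plain Python):

```python
from itertools import product

words = (''.join(bits) for bits in product('01', repeat=10))
count = sum(1 for w in words if w.startswith('000') or w.endswith('00'))
print(count)   # 352 = 2^7 + 2^8 - 2^5
```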
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solve $z^3=(sr)^3$ where $r,s,z$ are integers? Let $x,y,z$ be 3 non-zero integers defined as follows: $$(x+y)(x^2-xy+y^2)=z^3$$ Let us assume that $(x+y)$ and $(x^2-xy+y^2)$ are coprime and set $x+y=r^3$ and $x^2-xy+y^2=s^3$. Can one write that $z=rs$ where $r,s$ are 2 integers? I am not seeing why not, but I want to be sure.
Yes. If $z^3 = r^3s^3$ we can take cube roots of both sides to get $z = rs$. It's valid to do this because $n \mapsto n^3$ is a bijection on the reals, so every real number has exactly one real cube root. If we go to complex numbers we have to be more careful, because cube roots have 3 solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1516925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Complex integral computation Evaluate$$ \oint f\left( z \right )dz$$ where C is the circle $$\left | z \right |=3$$ $$f\left ( z \right )=\frac{1}{z^{3}+2z^{2}}$$ Let $$z=3e^{it}$$ $$z^{3}=27e^{i3t}$$ $$dz=3ie^{it}\,dt$$ $$2z^{2}=18e^{i2t}$$ Substituting everything into the initial equation and reducing, we get $$\int_{0}^{2\pi}\frac{i}{9e^{i2t}+6e^{it}}\,dt$$ How should I proceed further? Is it sensible to use the identity $e^{ix}=\cos x+i\sin x$ for the complex exponential?
One can use the Residue Theorem to show that the value of the integral is zero. If one wishes to proceed directly, then we have $$\begin{align} \oint_{|z|=3} \frac{1}{z^3+2z^2}\,dz&=\frac14 \oint_{|z|=3}\left(\frac{1}{z+2}+\frac2{z^2}-\frac{1}{z}\right) \,dz\\\\ &=\frac i4\int_0^{2\pi}\frac{3e^{it}}{3e^{it}+2}\,dt+\frac i2\int_0^{2\pi}\frac{3e^{it}}{9e^{i2t}}\,dt-\frac i4\int_0^{2\pi}\frac{3e^{it}}{3e^{it}}\,dt\\\\ &=-\frac i2\int_0^{2\pi}\frac{1}{3e^{it}+2}\,dt\\\\ &=-\frac i2\int_0^{2\pi}\frac{3\cos t+2}{12\cos t+13}\,dt-\frac 12\int_0^{2\pi}\frac{3\sin t}{12\cos t+13}\,dt\\\\ &=-i\int_0^{\pi}\frac{3\cos t+2}{12\cos t+13}\,dt-\frac 12\int_{-\pi}^{\pi}\frac{3\sin t}{12\cos t+13}\,dt\\\\ &=-i\left.\left(\frac{t}{4}-\frac12 \arctan\left(\frac{\tan(t/2)}{5}\right)\right)\right|_{0}^{\pi}+\frac {1}{8}\left.\log\left(12\cos t +13\right)\right|_{-\pi}^{\pi}\\\\ &=0 \end{align}$$ (Passing to the third line: the middle integral vanishes since $\int_0^{2\pi}e^{-it}\,dt=0$, and writing $\frac{3e^{it}}{3e^{it}+2}=1-\frac{2}{3e^{it}+2}$ the constant part cancels the last integral. The $\sin$ integral vanishes because its integrand is odd on $[-\pi,\pi]$.)
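One can also confirm the value numerically by discretizing the parametrization $z=3e^{it}$ (a NumPy sketch using a simple trapezoidal rule):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 20001)
z = 3.0 * np.exp(1j * t)
integrand = 3j * np.exp(1j * t) / (z**3 + 2 * z**2)   # f(z(t)) * z'(t)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
print(integral)   # ~ 0 + 0j, up to floating-point noise
```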
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Estimates for a product involving primes Is $$\prod_{p\leq z}\bigg(1 + \frac{p}{(p-1)}\frac{\log z}{\log p}\bigg) = O(z)?$$ where $p$ ranges over the primes less than $z$. I believe I can show that it is $O(z\log \log z)$.
Let $f(z)=\exp\left((\log z)^{1/2}\right),$ and consider$$\prod_{p\leq f(z)}\left(1+\frac{\log z}{\log p}\right).$$ This is clearly a lower bound for your product, and this is bounded below by $$\prod_{p\leq f(z)}\left(\frac{\log z}{\log f(z)}\right)=\left(\log z\right)^{\pi(f(z))/2}=\exp\left(\frac{1}{2}\pi(f(z))\log\log z\right).$$ Now, $$\pi(f(z))=\pi\left(e^{\sqrt{\log z}}\right)\gg_{A}(\log z)^{A}$$ for any $A$, and hence the product is $$\geq\exp\left(C_{A}(\log z)^{A}\right)$$ for any $A>0$, where $C_{A}>0$ is a constant depending on $A$. Hence the product cannot be $O(z)$ or even $O(z\log\log z)$, as the above quantity grows faster than $z^{k}$ for any $k>0$. I believe that if you are careful, then you can show that this function grows like $$e^{Cz/\log z\,(1+o(1))}.$$
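To see the scale of the growth, one can evaluate the logarithm of the product for moderate $z$ (a sketch assuming SymPy is available for prime generation; the sample values of $z$ are arbitrary):

```python
import math
from sympy import primerange

for z in (10**2, 10**3, 10**4, 10**5):
    log_product = sum(math.log(1.0 + p / (p - 1) * math.log(z) / math.log(p))
                      for p in primerange(2, z + 1))
    print(z, log_product, math.log(z))   # log(product) dwarfs log z, so no O(z) bound
```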
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
At what points does the function $f(z)=Arg(z)$, $z \in \mathbb C\setminus\{0\}$ have a limit? At what points does the function $f(z)=Arg(z)$, $z \in \mathbb C\setminus\{0\}$, have a limit? I know that $-\pi<Arg(z) \leq\pi$. Then how can I use that property to find points where the limit exists? And where is $f(z)$ continuous?
Note that, $$ \mathrm{Arg}(z)=\mathrm{Im}\,\log z. $$ The function $\log z$ is definable in every simply connected domain $\Omega$ which does not contain $z=0$. Hence, $\mathrm{Arg}(z)$ is continuous (and indeed real analytic) in every such $\Omega$. Conversely, if $\mathrm{Arg}(z)$ is defined, as a continuous function, in a simply connected domain $\Omega$ which does not contain $z=0$, then $\log z$ can be defined there as an analytic function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Adjoint of an operator on an inner product space Find the adjoint of $$\left ( Lf \right )\left ( t \right )=t^{2}\frac{df}{dt}+tf$$ where $$f\left ( 0 \right )=1$$ and$$ f'\left ( 1 \right )=2$$ under $$\left \langle f,g \right \rangle=\int_{0}^{1} f\left (t \right )g\left ( t \right )dt$$ Working out I get $$\int_{0}^{1}\left ( t^{2}f'\left ( t \right )g+tf\left ( t \right )g\left ( t \right ) \right )dt$$ The 'second' integral that is the terms $$tf\left ( t \right )g\left ( t \right )$$ is a concern. Can I have some help?
The first term is given by integration by parts, the second one by noticing that $$(tf(t))g(t) = f(t)(tg(t)).$$ Therefore, $$\langle tf(t),g(t)\rangle = \langle f(t),tg(t)\rangle$$ so that the operator $f(t)\mapsto tf(t)$ is its own adjoint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Collection of all open sets whose closure is contained in a cover is a cover (of a regular space) Let $X$ be a regular space. Let $\mathcal{A}$ be an open cover of $X$. We define $\mathcal{B}$ as the collection of all open sets $U$ such that $\overline{U}$ is contained in an element of $\mathcal{A}$. How to prove that $\mathcal{B}$ covers $X$? (My attempts to bring in regularity failed so far)
For $x\in X$ there is some $V\in\mathcal A$ with $x\in V$ or equivalently $x\notin V^c$. Set $V^c$ is closed so the regularity of $X$ ensures the existence of open sets $U,W$ s.t. $x\in U$ and $V^c\subseteq W$ and $U\cap W=\varnothing$. Set $W^c$ is closed, so $U\subseteq W^c$ implies that $\overline{U}\subseteq W^c\subseteq V\in\mathcal A$. This shows that $U\in\mathcal B$, so we have: $x\in U\in\mathcal B$. This proves that $\mathcal B$ covers $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find complete integral of $(y-x)(qy-px) = (p-q)^{2}$ Find a complete integral for the partial differential equation $(y-x)(qy-px) = (p-q)^{2}$ where $p={ \partial z \over \partial x},q={ \partial z \over \partial y}$. My attempt: The given equation is $f(x,y,z,p,q) = (y-x)(qy-px) - (p-q)^{2}$. Charpit's auxiliary equations are given by $${dp \over 2px-(p+q)y}={dq \over 2qy-(p+q)x}={dx \over {2(p-q)-x^2+xy}}={dy \over {-2(p-q)+xy-y^2}}$$ From here I tried to proceed, but no solvable fraction turned out.
Hint: Let $\begin{cases}u=x+y\\v=x-y\end{cases}$ , Then $\dfrac{\partial z}{\partial x}=\dfrac{\partial z}{\partial u}\dfrac{\partial u}{\partial x}+\dfrac{\partial z}{\partial v}\dfrac{\partial v}{\partial x}=\dfrac{\partial z}{\partial u}+\dfrac{\partial z}{\partial v}$ $\dfrac{\partial z}{\partial y}=\dfrac{\partial z}{\partial u}\dfrac{\partial u}{\partial y}+\dfrac{\partial z}{\partial v}\dfrac{\partial v}{\partial y}=\dfrac{\partial z}{\partial u}-\dfrac{\partial z}{\partial v}$ $\therefore v\left(v\dfrac{\partial z}{\partial u}+u\dfrac{\partial z}{\partial v}\right)=4\left(\dfrac{\partial z}{\partial v}\right)^2$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How can I rearrange a formula containing both cot and cosec? I want to rearrange the following: $$\cot{\theta}-\csc{\theta}=\dfrac{2B\mu}{bmg}$$ to get $\theta$. I am unsure how to reduce a cot and cosec sum. Are there any maths tricks or identities that I am missing?
$$\cot\theta - \csc \theta = k$$ $$\cot\theta \pm \sqrt{\cot^2 \theta+1} = k$$ $$\pm \sqrt{\cot^2 \theta+1} = k-\cot\theta$$ $$\cot^2 \theta+1 = (k-\cot\theta)^2$$ $$\cot^2 \theta+1 = k^2-2k\cot\theta+\cot^2\theta$$ $$1 = k^2-2k\cot\theta$$ $$\cot\theta = \frac{k^2-1}{2k}$$ Since the equation was squared along the way, solutions of $\cot\theta = \frac{k^2-1}{2k}$ are only candidates: check each one in the original equation, as squaring can introduce extraneous roots.
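A numeric illustration of why the check matters (plain Python; $k=0.5$ is an arbitrary test value): of the two angles in a period with $\cot\theta=\frac{k^2-1}{2k}$, only one satisfies the original equation.

```python
import math

k = 0.5
c = (k**2 - 1) / (2 * k)                              # cot(theta) = -0.75
for theta in (math.atan2(1.0, c), math.atan2(1.0, c) - math.pi):
    value = math.cos(theta) / math.sin(theta) - 1.0 / math.sin(theta)
    print(theta, value)                               # only one candidate returns k = 0.5
```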
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Show $(m^2+n^2)(o^2+p^2)(r^2+s^2) \geq 8mnoprs$ Let $m,n,o,p,r,s$ be positive numbers. Show that: $$(m^2+n^2)(o^2+p^2)(r^2+s^2) \geq 8mnoprs$$ I tested an example, let $m,n,o,p,r,s=1,2,3,4,5,6$ respectively. Then: $$(1+4)(9+16)(25+36)=(5)(25)(61) \geq 8123456$$ but I get $7625$ which is not greater than the right side. So am I misunderstanding the problem? Any ideas?
Yes, you misunderstood the problem, for $8mnoprs$ has tacit multiplication signs, it does not mean concatenation. Therefore, your test case is $$(1 + 4)(9 + 16)(25 + 36) = 7625 \geq 8 \times 1 \times 2 \times 3 \times 4 \times 5 \times 6 = 5760.$$ But the first case I would test is the minimal case $m = n = o = p = r = s = 1$. Then $$2 \times 2 \times 2 \geq 8 \times 1 \times 1 \times 1 \times 1 \times 1 \times 1.$$ (The inequality itself follows from AM-GM: $m^2+n^2\ge 2mn$, $o^2+p^2\ge 2op$, $r^2+s^2\ge 2rs$; multiply the three.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find $dy/dx$ if $xe^{9y}+y^4\sin(4x)=e^{8x}$ If $xe^{9y}+y^4\sin(4x)=e^{8x}$ implicitly defines $y$ as a function of $x$ then what is $\displaystyle \frac{dy}{dx}$? So far, I have made the following steps: 1) Get all parts of the equation onto one side 2) Find the derivative of the whole equation 3) This equals $9e^{9y}+y^4(4cos(4x))+4y^3(sin(4x))-8e^{8x}$ 4) Now, I think I am to get 'y' by itself. However, I am not sure how to do this, since there are $y$ variables as exponents, as well as non-exponent $y$ variables. Does anyone know how I may find the derivative of this function? All help is appreciated.
Implicit differentiation means that we need to treat $y$ as a function of $x$ and apply the Chain Rule. You also forgot to apply the Product Rule to the first term. The derivative of the first term is: \begin{align*} \frac{d}{dx}[xe^{9y}] &= x\left[\frac{d}{dx}e^{9y}\right] + \left[\frac{d}{dx}x\right]e^{9y} \\ &= xe^{9y}\left[\frac{d}{dx}9y\right] + \left[1\right]e^{9y} \\ &= xe^{9y} \left[9\frac{dy}{dx}\right] + e^{9y} \\ &= 9xe^{9y}y' + e^{9y} \\ \end{align*} Now do the same thing for the second term, then solve for $y'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does series with factorials converge/diverge: $\sum\limits_{n=1}^\infty \frac{4^n n!n!}{(2n)!}$? $$\sum_{n=1}^\infty {{4^n n!n!}\over{(2n)!}}$$ I tried the ratio test but got that the limit is equal to 1, this tells me nothing of whether the series diverges or converges. if I didn't make any errors when doing the ratio test, it may diverge, but I need help proving that. Is there any other test I could try.
Note that $$\frac{(2n)!}{n!n!}\leq\sum_{k=0}^{2n}\binom{2n}{k}=(1+1)^{2n}=4^n$$ So $$\frac{4^n n!n!}{(2n)!}\geq 1$$ Since the terms do not tend to $0$, the series diverges by the $n$-th term (divergence) test.
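Checking the bound numerically (plain Python; the sample values of $n$ are arbitrary): the terms are at least $1$, and in fact they grow.

```python
from math import comb

for n in (1, 2, 5, 10, 50):
    term = 4**n / comb(2 * n, n)   # equals 4^n * n! * n! / (2n)!
    print(n, term)                 # always >= 1, and increasing (~ sqrt(pi * n))
```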
{ "language": "en", "url": "https://math.stackexchange.com/questions/1517927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Define $g :\ell_2 \to \mathbb R$ by $g(x)= \sum_{n=1}^{\infty} \frac{x_n}n$. Is $g$ continuous? Define $g :\ell_2 \to \mathbb R$ by $$g(x)= \sum_{n=1}^{\infty} \frac{x_n}n $$ Is $g$ continuous? I need to solve this but I could not see how to tackle it? any hints or suggestion?
Hint: $g$ is linear. A linear operator is continuous if and only if it is continuous at $x=0$, if and only if it is bounded. Try to find a constant $C>0$ such that for all $x$: $$ |g(x)|\leq C \|x\| $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate the limit of $(f(2+h)-f(2))/h$ as $h$ approaches $0$ for $f(x) = \sin(x)$. If $f(x)=\sin x$, evaluate $\displaystyle\lim_{h \to 0} \frac{f(2+h) - f(2)}{h}$ to two decimal places. Have tried to answer in many different ways, but always end up getting confused along the way, and am unable to cancel terms. Any help is always appreciated!
Remember that $$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$ Here we see that $f(x) = \sin x$ and $x = 2$. Therefore, all we have to do is find $f'(2)$. If you know that the derivative of the sine function is $\cos x$ then you immediately have the answer as $\cos (2)$ (Remember that this is in Radians... all of Calculus should be done in radians). To two decimal places, $\cos(2) \approx -0.42$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Finding derivative with integral Came to this problem on my test study guide: find $g'(x)$ for $g(x)=\int_\pi^x\sin\theta\,d\theta$. I thought for finding the derivative you just took the antiderivative, which would be $-\cos(\theta)$. How is it $\sin(x)$? Or did I come across another error? At this point in this class I never know anymore.
$$g(x)=\int_\pi^x\sin\theta\,d\theta=-\cos x+\cos\pi=-\cos (x)-1$$then $$g'(x)=[-\cos (x)-1]'=\sin x$$ More directly, the Fundamental Theorem of Calculus says that the derivative of $x\mapsto\int_\pi^x\sin\theta\,d\theta$ is $\sin x$, with no need to evaluate the integral first.
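The same computation as a SymPy sanity check:

```python
from sympy import symbols, sin, integrate, diff, pi

x, theta = symbols('x theta')
g = integrate(sin(theta), (theta, pi, x))   # -cos(x) - 1
print(g, diff(g, x))                        # derivative: sin(x)
```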
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Answering Part of Durrett Exercise 3.4.13 4th edition I am trying to prove part (iii) of Durrett exercise 3.4.13 Suppose $P(X_j=j)=P(X_j=-j)=\frac{1}{2j^\beta}$ and $P(X_j=0)=1-j^{-\beta}$ where $\beta>0$. Show that if $\beta=1$ then $\frac{S_n}{n}\implies \aleph$ where $E(\exp(it\aleph))=\exp\left(-\int\limits_0^1 x^{-1}(1-\cos(xt))dx\right)$ and $\implies$ denotes convergence in distribution. The expectation part leads me to believe that the proof involves characteristic functions somehow, but I don't really know how to proceed. Any help would be greatly appreciated. Thanks!
It is a well-known theorem of Lévy that if $\{X_n\}$ is a collection of random variables and $Y$ is another random variable then $X_n \Rightarrow Y$ iff $\phi_{X_n}(t) \rightarrow \phi_Y(t)$ as $n \rightarrow \infty$ and $\phi_Y$ is continuous at $t = 0$. Moreover, by properties of Fourier transforms, $\phi_{S_n/n}(t) = \prod_{1 \leq j \leq n} \phi_{X_j/n}(t)$. Now, \begin{equation*} \phi_{X_j/n}(t) = \int_{\mathbb{R}} d\lambda e^{it\lambda} \mathbb{P}(\frac{X_j}{n} = \lambda) = 1-\frac{1}{j} + \frac{1}{2j}(e^{it\frac{j}{n}} + e^{-it\frac{j}{n}}) = 1-\frac{1}{j}(1-\cos(tj/n)). \end{equation*} This is clearly real-valued, and positive for all large $n$, so that we can write \begin{equation*} \log\phi_{S_n/n}(t) = \sum_{j = 1}^n \log\left(1-\frac{1}{n}\cdot \frac{n}{j}(1-\cos(tj/n))\right), \end{equation*} so, since $\log(1-x) = -x + O(x^2)$, we have, up to an $O(1/n)$ error term, \begin{equation*} \log \phi_{S_n/n}(t) = -\frac{1}{n}\sum_{j=1}^n \frac{n}{j}(1-\cos(tj/n)) + O\left(\frac{1}{n}\right). \end{equation*} The sum on the right side is a Riemann sum for the integral $\int_0^1 x^{-1}(1-\cos(tx))\,dx$ in your problem, so taking $n \rightarrow \infty$, we get $\phi_{S_n/n}(t) \rightarrow E\left(e^{it\aleph}\right)$, in your notation, the latter of which is continuous at zero.
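A numerical check of the limit (a NumPy sketch; $t=2$ and the grid sizes are arbitrary choices): the log of the finite product matches $-\int_0^1 x^{-1}(1-\cos(tx))\,dx$.

```python
import numpy as np

t, n = 2.0, 20000
j = np.arange(1, n + 1)
log_prod = np.sum(np.log(1.0 - (1.0 - np.cos(t * j / n)) / j))

x = np.linspace(1e-9, 1.0, 200001)          # the integrand -> 0 as x -> 0, so this is safe
y = (1.0 - np.cos(t * x)) / x
integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
print(log_prod, -integral)                  # both ~ -0.847
```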
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that group of even order contains element of order 2 I tried to prove the following claim: If $G$ is a group of even order then $G$ contains an element of order $2$. Please could someone check my proof and tell me if it is correct? My proof: Let $|G|=2n$ for some $n$ and let $\{k_1, \dots, k_{2n}\}$ be the set of all orders of elements of $G$. If there exists any $g\in G$ such that $k_i$ is even, say, $k_i = 2s$, then $g^{s}$ has order $2$. If all $k_i$ are odd pick any $k_i$ and the corresponding element $g$. Then since $k_i$ divides $2n$ we have $2n = sk_i = 2tk_i$ for some $t$. Then $g^{tk_i}$ has order $2$. This concludes the proof.
Consider the set $G-\{e\}$. Its cardinality must be odd because $|G|$ is even. If no element has order $2$ in $G$ then for every $g \in G-\{e\}$ its inverse $g^{-1} \in G-\{e\}$ satisfies $g \neq g^{-1}$, so the pairs $\{g,g^{-1}\}$ partition $G-\{e\}$ into two-element sets. But then we will have an even number of elements in $G-\{e\}$. A contradiction. (As for your proof: the even case is fine, but in the odd case $g^{tk_i}=(g^{k_i})^t=e$ has order $1$, not $2$, so that step does not work.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
how to measure a circle. As we all know we can measure a line with a scale or any instrument, but right now I have studied circles and was wondering if there was any way, instrument or method to measure a circle, i.e. the circumference, without the formula $2\pi r$.
To measure the circumference of a circle without using $C = 2πr $ you need to find a way to make it linear. This can be done by simply wrapping a string around the circumference, cutting it to make it the correct length, then measuring the length of the string.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to compute this finite sum $\sum_{k=1}^n \frac{k}{2^k} + \frac{n}{2^n}$? I do not know how to find the value of this sum: $$\sum_{k=1}^n \frac{k}{2^k} + \frac{n}{2^n}$$ (Yes, the last term is added twice). Of course I've already plugged it to wolfram online, and the answer is $$2-\frac{1}{2^{n-1}}$$ But I do not know how to arrive at this answer. I am not interested in proving the formula inductively :)
Let $$S=\sum_{k=1}^{n}\frac{k}{2^k}$$ or, $$S=\frac{1}{2^1}+\frac{2}{2^2}+\frac{3}{2^3}+\frac{4}{2^4}+\frac{5}{2^5}+....+\frac{n}{2^n}$$ and $$\frac{S}{2}= \frac{1}{2^2}+\frac{2}{2^3}+\frac{3}{2^4}+\frac{4}{2^5}+....+\frac{n-1}{2^n}+\frac{n}{2^{n+1}}$$ Subtracting we get, $$\frac{S}{2}= \frac{1}{2^1}+\frac{1}{2^2}+\frac{1}{2^3}+\frac{1}{2^4}+\frac{1}{2^5}+....+\frac{1}{2^n}-\frac{n}{2^{n+1}}$$ or, $$S= 1+\frac{1}{2^1}+\frac{1}{2^2}+\frac{1}{2^3}+\frac{1}{2^4}+....+\frac{1}{2^{n-1}}-\frac{n}{2^n}=2\left(1-\frac{1}{2^n}\right)-\frac{n}{2^n}$$ Hence $$\sum_{k=1}^n \frac{k}{2^k}+\frac{n}{2^n}=2\left(1-\frac{1}{2^n}\right)-\frac{n}{2^n}+\frac{n}{2^n}=2-\frac{1}{2^{n-1}}.$$
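A quick numerical check of the closed form (plain Python):

```python
def lhs(n):
    return sum(k / 2**k for k in range(1, n + 1)) + n / 2**n

for n in (1, 5, 10, 30):
    print(n, lhs(n), 2 - 1 / 2**(n - 1))   # the two columns agree
```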
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Functional separation of regular open sets of a topological space Let $\langle X,\mathscr{O}\rangle$ be a topological space. We say that disjoint $A,B\in 2^X$ are functionally separated iff there exists a continuous function $f\colon X\to [0,1]$ such that: * *if $x\in A$, then $f(x)=0$ *if $x\in B$, then $f(x)=1$ We define $\langle X,\mathscr{O}\rangle$ to be completely regular iff for a closed set $C\in 2^X$ and a point $x\notin X$, $F$ and $\{x\}$ are functionally separated. Let $\langle\mathrm{r}\mathscr{O},+,\cdot,-,\emptyset,X\rangle$ be the complete boolean algebra of regular open subsets of completely regular space $\langle X,\mathscr{O}\rangle$. In this algebra define: \[ A\ll B\iff \mbox{$A$ and $-B$ are functionally separated.} \] What I am trying to prove are the following two properties of $\ll$: * *if $A\ll C$ and $B\ll D$, then $A\cdot B\ll C\cdot D$ *if $A\ll C$, then $\exists_{B\in 2^X\setminus\{\emptyset\}}\,A\ll B\ll C$ Concerning 1. suppose $f\colon X\to[0,1]$ and $g\colon X\to[0,1]$ are such that: * *$x\in A\rightarrow f(x)=0$ and $x\in B\rightarrow g(x)=0$ *$x\in -C\rightarrow f(x)=1$ and $x\in -D\rightarrow g(x)=1$ Now put $h(x):=\max\{f(x),g(x)\}$. Thus we have: * *$x\in A\cdot B\rightarrow x\in A\wedge x\in B\rightarrow f(x)=0 \wedge g(x)=0\rightarrow h(x)=0$ *$x\in -C+-D\rightarrow x\in\mathrm{Int}\,\mathrm{Cl}\, (-C\cup-D)\rightarrow\: ???$. And I got stuck at the second dot above. Could you please give my any hint how to proceed? Concerning 2. I would appreciate any suggestion.
If $x\in -(C\cdot D)$, $$\begin{split}\Rightarrow & x \in \text{Cl}(-C)\text{ or }x\in \text{Cl}(-D) \\ \Rightarrow & f(x) =1\text{ or }g(x)=1 \\ \Rightarrow &h(x) = 1 \end{split}$$ (since $f$ and $g$ are continuous with range in $[0,1]$, they equal $1$ not only on $-C$ and $-D$ but on the closures of those sets).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
how to find an infinity limit in a fraction I don't understand how to find the limits of this expression when $x\to\infty$ and $x\to-\infty$: $$\left(\frac{3e^{2x}+8e^x-3}{1+e^x}\right)$$ I've searched for hours. How to compute these limits?
$$\lim_{x\to\infty}\frac{3e^{2x}+8e^x-3}{1+e^x}=$$ $$\lim_{x\to\infty}\frac{-3e^{-x}+8+3e^x}{1+e^{-x}}=$$ $$\lim_{x\to\infty}\left(8+3e^x\right)=$$ $$\lim_{x\to\infty}8+\lim_{x\to\infty}\left(3e^x\right)=$$ $$\lim_{x\to\infty}8+3\lim_{x\to\infty}e^x=$$ $$\lim_{x\to\infty}8+3\exp\left(\lim_{x\to\infty}x\right)=$$ $$8+3\exp\left(\lim_{x\to\infty}x\right)=\infty$$ $$\lim_{x\to-\infty}\frac{3e^{2x}+8e^x-3}{1+e^x}=$$ $$\frac{\lim_{x\to-\infty}\left(3e^{2x}+8e^x-3\right)}{\lim_{x\to-\infty}\left(1+e^x\right)}=$$ $$\frac{\lim_{x\to-\infty}\left(3e^{2x}+8e^x-3\right)}{\lim_{x\to-\infty}1+\lim_{x\to-\infty}e^x}=$$ $$\frac{\lim_{x\to-\infty}\left(3e^{2x}+8e^x-3\right)}{1+\exp\left(\lim_{x\to-\infty}x\right)}=$$ $$\frac{\lim_{x\to-\infty}\left(3e^{2x}+8e^x-3\right)}{1}=$$ $$\lim_{x\to-\infty}\left(3e^{2x}+8e^x-3\right)=$$ $$3\left(\lim_{x\to-\infty}e^{2x}\right)+8\left(\lim_{x\to-\infty}e^x\right)+\lim_{x\to-\infty}(-3)=$$ $$3\exp\left(\lim_{x\to-\infty}2x\right)+8\exp\left(\lim_{x\to-\infty}x\right)-3=$$ $$3\exp\left(2\lim_{x\to-\infty}x\right)+8\exp\left(\lim_{x\to-\infty}x\right)-3=$$ $$3\cdot 0+8\cdot 0-3=-3$$
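Numerically (plain Python; the sample points are arbitrary), the two behaviors are easy to see:

```python
import math

def f(x):
    return (3 * math.exp(2 * x) + 8 * math.exp(x) - 3) / (1 + math.exp(x))

print([f(x) for x in (10, 20, 30)])      # grows without bound (limit +infinity)
print([f(x) for x in (-10, -20, -30)])   # approaches -3
```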
{ "language": "en", "url": "https://math.stackexchange.com/questions/1518925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Prove that $\tau=\{ \mathbb{R},\emptyset\} \cup \{(-t,t)|t\in\mathbb{I}^+\}$ is not a topology on $\mathbb{R}$. Prove that $\tau=\{ \mathbb{R},\emptyset\} \cup \{(-t,t) \,\, | \,\, t\in\mathbb{I}^+\}$ is not a topology on $\mathbb{R}$, where $\mathbb{I}^+$ denotes the positive irrational numbers. Any pointers on how to do that?
Assuming that your topology is $\tau = \{ \mathbb R , \emptyset \} \cup \{ (-t,t) : t \in \mathbb Q^c_+ \}$ Let $t_n$ be an increasing sequence of positive irrational numbers converging to a rational number, say $a$. (You should find such a sequence; for instance $t_n = 2 - \frac{\sqrt 2}{n}$ works, increasing to $a=2$.) Then it is easy to see that although $(-t_n \ ,t_n)$ is in your topology for each $n \in \mathbb N$, the union $$\bigcup_{n=1}^\infty (-t_n \ , t_n )$$ equals $(-a,a)$ which, clearly, is not in your topology. So, actually the only thing you have to do is to find such a sequence $t_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Comments on my proof of the transitive property of subsets? Because I am in an advanced Calc III course that is quite proof-based, my university does not require that I take the "introduction to formal proofs" course before proceeding to higher level math classes because they assume the class already knows everything. But as a personal sanity check, I'd like to attempt to prove something simple (the transitive property of subsets) and have you critique my proof, letting me know if I messed up in any way: Theorem: Let $A$, $B$, $C$ be sets. If $A \subseteq B$ and $B \subseteq C$, then $A \subseteq C$. Proof: Since $A \subseteq B$, we know that every element in $A$ is contained in $B$. We also know that because $B \subseteq C$, every element in $B$, which includes every element in $A$, is contained in $C$. Since every element in $A$ is contained in $C$, then by definition $A \subseteq C$. $\qquad\qquad\qquad\qquad\qquad \Box$
When someone says write a formal proof, what they usually mean is you do things strictly by the definitions. By definition, $X \subseteq Y$ if and only if for any $x \in X$, we have $x \in Y$. Let $x \in A$. Since $A \subseteq B$, we have $x \in B$. Since $B \subseteq C$, we have $x \in C$. We showed that for any $x \in A$, we have $x \in C$, so $A \subseteq C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find $n$ if $n!$ ends in exactly $100$ zeroes and $n$ is divisible by $8$. This question was in a school maths challenge. I don't know how to approach this one; any help would be appreciated.
Ending in 100 zeros means that $5^{100}$ divides $n!$. In $\{1,2,\dots,n\}$, factors of $2$ occur more often than factors of $5$, so if $5^{100}$ divides $n!$ then so does $2^{100}$, and hence so does $10^{100}$: the number of trailing zeros of $n!$ equals the exponent of $5$ in $n!$. So ending in exactly 100 zeros means that $\{1,2,\dots,n\}$ contains precisely 100 factors of 5, counted with multiplicity (multiples of 25 count twice, multiples of 125 three times, and so on). In $\{1,\dots,5^m\}$ there are $5^{m-1}$ multiples of $5$, $5^{m-2}$ multiples of $25$, etc., so $5^m!$ ends with $5^{m-1}+5^{m-2}+\cdots+1$ zeros. So $125!$ has 25 multiples of 5, 5 multiples of 25, and 1 multiple of 125, hence ends with exactly 31 zeros. And $375!$ ends with exactly 93 zeros (as there are 75 multiples of 5, 15 multiples of 25, and 3 multiples of 125 up to 375). We need 7 more zeros: that is 6 more multiples of 5, one of which (namely 400) is also a multiple of 25. So $405!$ has exactly 100 zeros (81 multiples of 5, 16 multiples of 25 and 3 multiples of 125). The same holds for $405 \le n \le 409$. As $8\mid n$, $n = 408$.
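The count of trailing zeros is easy to verify with Legendre's formula in Python:

```python
def trailing_zeros(n):
    # exponent of 5 in n! (Legendre's formula) = number of trailing zeros of n!
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

print([(n, trailing_zeros(n)) for n in range(400, 412)])   # exactly 100 for 405..409
print([n for n in range(1, 1000)
       if trailing_zeros(n) == 100 and n % 8 == 0])         # [408]
```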
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
prove polynomial division for any natural number Show that for any natural numbers $a$, $b$, $c~$ we have $~x^2 + x + 1|x^{3a+2} + x^{3b+1} + x^{3c}$. Any hints on what to use?
We have $$x^3-1=(x-1)\left(x^2+x+1\right)$$ This means that $$\begin{align}x^3-1&\equiv0\pmod{x^2+x+1}\\x^3&\equiv1\pmod{x^2+x+1}\end{align}$$ Now, substitute it $$\begin{align}x^{3a+2}+x^{3b+1}+x^{3c}&\equiv x^2\left(x^3\right)^a+x\left(x^3\right)^b+\left(x^3\right)^c\\&\equiv x^2\cdot1^a+x\cdot1^b+1^c\\&\equiv x^2+x+1\\&\equiv0\pmod{x^2+x+1}\end{align}$$
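One can confirm the divisibility for sample exponents with SymPy's polynomial division (the exponent triples below are arbitrary):

```python
from sympy import symbols, div

x = symbols('x')
for a, b, c in [(0, 0, 0), (1, 2, 3), (4, 1, 7)]:
    q, r = div(x**(3*a + 2) + x**(3*b + 1) + x**(3*c), x**2 + x + 1, x)
    print((a, b, c), r)   # remainder 0 in every case
```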
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Use of the symbol $\lneq$ This answer uses kind of a "proper less equal" symbol: $\lneq$ I would have expected that $<$ is sufficient, because it seems to be the same relation. For $\subset$ vs $\subsetneq$ some authors interpret $\subset$ allowing self-inclusion $A \subset A$ to be true, while others do not, so the extra symbol offers less ambiguity for more ink. What is this typically used for? Update: We have $\ngeq$ too. :-)
Yes $<$ and $\lneq$ are the same relation. However, be careful about set inclusion: older texts, and some current mathematicians, use $\subset$ to mean $\subseteq$, and to indicate strict inclusion they might use $\subsetneq$ or $\subsetneqq$. Other authors who use both $\subset$ and $\subseteq$ will use $\subset$ to mean strict inclusion. These days, $\subset$ can be confusing. If you just jump into a book or paper and see $\subset$, it's best to check Section 0 or Chapter 0 to be sure what it means. So $\subset$ and $\subseteq$ aren't necessarily different relations, universally, though they almost certainly are in a text that uses both. To avoid possible confusion, though at the cost of losing the nice analogy with $<$ and $\le$, it's probably best to stick to $\subseteq$ and $\subsetneq$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Let $V$ be a linear space s.t. $\dim V=n$ and $f_1,f_2,\dots,f_m$ be linear functionals ($m<n$). Show that $\exists x\neq 0$ s.t. $f_i(x)=0$ $\forall i$. We can suppose that these $f_i$ are linearly independent. Let $\{e_i\}_{i=1}^n$ be a basis of $V$ and $\{\delta_i\}_{i=1}^n$ be its dual basis. We can also set $f_i=\sum_{k=1}^n\alpha_{ik}\delta_k$. Then, $f_i(e_j)=\alpha_{ij}$ Let $x\in V$; then $x=\sum_{j=1}^n \beta_j e_j$. Therefore, we have that $f_i(x)=\sum_{j=1}^n\beta_j\alpha_{ij}$. And I'm stuck here.
$T = (f_1,\ldots,f_m)$ defined by $T(x) = (f_1(x),\ldots,f_m(x))$ is a linear map from $V$ to $F^m$, where $F$ is the ground field. By rank-nullity, $\dim(\ker(T)) + \dim(\operatorname{im}(T)) = \dim(V)$. We know that the right hand side equals $n$. The image of $T$ has dimension at most $m < n$. So the dimension of $\ker(T)$ cannot be $0$, so $\ker(T)$ contains some vector $x \neq 0$. This $x$ works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to Find the Linearization of $f(x) = 8\cos^2(x)$ at the Point Where $x = \pi/4$ I am looking to find the linearization of $f(x) = 8\cos^2(x)$ at the point where $x = \pi/4$ Now, I know that the general formula for finding linearization is $L(x) = f(x) + f'(x) (x-a)$. So, I have made the following steps to get my answer: 1) Use the above formula to get $L(x) = 8\cos^2(\pi/4)+16\cos(\pi/4)(-\sin(\pi/4))(\pi/4-a)$ 2) Simplify to get $4-8(\pi/4-a)$ However, I am not sure if this is completely correct. Does anyone know whether this is the correct way to find this answer, and if it is not, how to arrive at the true answer?
First find the derivative: $f'(x)=16\cos(x)(-\sin(x))=-16\sin(x)\cos(x)$. Then $f'(\frac{\pi}{4})=-16\cos(\frac{\pi}{4})\sin(\frac{\pi}{4})=-16\frac{\sqrt2}{2}\cdot\frac{\sqrt2}{2}=-16\cdot\frac{2}{4}=-8$. Then find the tangent: $L=f(\frac{\pi}{4})-8(x-\frac{\pi}{4})=4+2\pi-8x$ EDIT: I think your confusion is in the way you interpreted the formula for $L(x)$. It is poorly written and should be written more like this: The tangent to $f(x)$ at $x=a$ is given by $L(x)=f(a)+f'(a)(x-a)$
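As a numerical sanity check (an editor's addition), the linearization should match $f$ closely near $x=\pi/4$; here the error even shrinks like $h^3$ rather than $h^2$, because $f''(\pi/4)=0$.

```python
import math

f = lambda x: 8 * math.cos(x) ** 2
L = lambda x: 4 + 2 * math.pi - 8 * x   # the tangent line found above

a = math.pi / 4
for h in (0.1, 0.01, 0.001):
    print(h, f(a + h) - L(a + h))       # error ~ (16/3) h^3 here
```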
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $\sqrt{x-1}+\sqrt{y-1}\leq \sqrt{xy}$ How can one show that $\sqrt{x-1}+\sqrt{y-1}\leq \sqrt{xy}$ Assuming that : $\sqrt{x-1}+\sqrt{y-1}\leq \sqrt{xy}$ So $(\sqrt{x-1}+\sqrt{y-1})^2\leq xy$ $\sqrt{(x-1)(y-1)} \leq xy-x-y+2$ $ (y-1)(x-1)+3 \leq \sqrt{(x-1)(y-1)}$ Here I'm stuck !
There are some errors in your calculation, e.g. a missing factor 2 in $$ (\sqrt{x-1}+\sqrt{y-1})^2 = x - 1 + y - 1 + 2\sqrt{x-1}\sqrt{y-1} $$ and in the last step the inequality sign is in the wrong direction and the number $3$ is wrong. For $x \ge 1$, $y \ge 1$ you can square the inequality (since both sides are non-negative): $$ \sqrt{\mathstrut x-1}+\sqrt{\mathstrut y-1}\leq \sqrt{\mathstrut xy} \\ \Longleftrightarrow (x-1) + (y-1) + 2 \sqrt{\mathstrut x-1}\sqrt{\mathstrut y-1} \le xy \\ \Longleftrightarrow 0 \le xy - x - y + 2 - 2 \sqrt{\mathstrut (x-1)(y-1)} \\ \Longleftrightarrow 0 \le (x-1)(y-1) - 2 \sqrt{\mathstrut (x-1)(y-1)} + 1 $$ With $t := \sqrt{(x-1)(y-1)}$ the right-hand side is $$ t^2 - 2 t + 1 = (t-1)^2 \ge 0 \, . $$ so that the inequality is true. It follows also that equality holds if and only if $t = 1$, i.e. if $(x-1)(y-1) = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to treat small number within square root guys.I am reading a math book. It has a equation shown as follows, $\sqrt{(1+\Delta^2)}$ And then,since $\Delta$ is very small, it can be written as, $\sqrt{(1+\Delta^2)} = (1+\frac12\Delta^2)$ What is the theory behind this equation?
Square both sides to get $1+\Delta^2\approx 1+\Delta^2+\frac{1}{4}\Delta^4$ which is a close approximation because when $\Delta$ is very small, $\frac{1}{4}\Delta^4$ is close to $0$.
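Numerically (an editor's addition), the two sides agree to within roughly $\Delta^4/8$:

```python
import math

for d in (0.1, 0.01, 0.001):
    exact = math.sqrt(1 + d * d)
    approx = 1 + d * d / 2
    print(d, exact - approx)   # on the order of -d**4 / 8
```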
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Specific conditional entropy $H(X|Y=y)$ is not bounded by $H(X)$? Suppose that $P(Y=y)>0$ so that $$ H(X|Y=y)=-\sum_{x} p(x|y) \log_{2} p(x|y) $$ makes sense. I've assumed for a long time that $H(X|Y=y)\le H(X)$, but then it seems that the wiki article claims that $H(X|Y=y)$ is not necessarily bounded by $H(X)$. It is a standard result that $H(X|Y)\le H(X)$, and it makes perfect sense (heuristically) since knowing some more information (Y) can reduce the uncertainty in $X$. Then how is it possible that knowing additional information that $Y=y$ can actually increase the uncertainty in $X$? Is there any nice example where $H(X|Y=y)>H(X)$?
It's important to distinguish $H(X \mid Y)$ from $H(X \mid Y=y)$. The first is a number, the second is a function of $y$. The first is the average of the second, averaged over $p(y)$: $$H(X \mid Y) = E_y \left[H(X \mid Y=y)\right]$$ Hence, because it's an average, in general we'll have $H(X \mid Y=y)>H(X \mid Y)$ for some values of $y$ and $H(X \mid Y=y)<H(X \mid Y)$ for some others. Then, while it's true that $H(X)\ge H(X \mid Y)$, this does not imply $H(X)>H(X \mid Y=y)$ for all $y$. Example: let $(X,Y)$ take values on $(0,0)$, $(0,1)$, $(1,0)$ with equal probability ($p=1/3$). Then $H(X)=h(1/3) \approx 0.92 <1$. But $H(X \mid Y=0)=h(1/2)=1$ (here $h(\cdot)$ is the binary entropy function).
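Checking the example numerically (an editor's addition):

```python
import math

def h(p):
    # binary entropy in bits
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(h(1 / 3))  # H(X) ≈ 0.918, since P(X = 1) = 1/3
print(h(1 / 2))  # H(X | Y = 0) = 1: given Y = 0, X is 0 or 1 with prob 1/2
```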
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
properties of a norm I am quite bothered by this problem in Royden's 4th ed. Show that $||f||=\int_{a}^{b} {x^2 |f|}$ is a norm given that $f\in L^1$ on $[a,b]$. I want to show that $||f||=0$ iff $f=0$. Am I doing this right? if suppose $||f||=0$ then $||f||=\int_{a}^{b} {x^2 |f|}=0$ that is, $0=||f||=|f| \int_{a}^{b} {x^2 }$ then $|f|=0$ thus $f=0$. am i right to assume that $|f|$ is a constant?
The triangle inequality: $||f+g|| = \int_a^b x^2 |f(x)+g(x)|\,dx \leq \int_a^b x^2 (|f(x)| + |g(x)|)\,dx = \int_a^b x^2 |f(x)|\,dx + \int_a^b x^2 |g(x)|\,dx = ||f|| + ||g||$. Therefore $||f+g|| \leq ||f|| + ||g||$. Positive homogeneity: $||\alpha f||= \int_a^b x^2 |\alpha f(x)|\, dx =|\alpha| \int_a^b x^2 |f(x)|\,dx =|\alpha|\, ||f||$. Therefore $||\alpha f || = |\alpha|\, ||f||$. Nonnegativity: trivial. Definiteness: note that $|f|$ is not a constant in general, so it cannot be pulled out of the integral as in your attempt. Instead, if $||f||=0$ then $x^2|f(x)|=0$ almost everywhere; since $x^2>0$ except possibly at the single point $x=0$, this forces $f=0$ almost everywhere, which is exactly what $f=0$ means in $L^1[a,b]$. Therefore $||f||=\int\limits_a^b x^2 |f(x)|\, dx$ is a norm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1519988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Basic algebra inequality mistake (I probably didn't break algebra) I'm not sure what I have done here. I'm guessing it's something to do with the absolute values but I really have no idea what it is. $$\begin{array}{ccc} y>\frac1y & & y>\frac1y \\ y^2>1 & & 1>\frac1{y^2} \\ |y|>1 & & 1>\frac1{|y|} \\ \frac1{|y|}>1 & & \downarrow \\ \searrow & & \swarrow \\ & 1 < \left|\dfrac1y\right| < 1 & \end{array} $$
The error is on the left. You went from $$|y|>1$$ to $$\left|\frac{1}{y}\right|>1.$$ This is incorrect. You should have divided both sides by $|y|$, leaving $$1>\frac{1}{|y|}.$$ This agrees with the right-hand result. However, something happened with the absolute value earlier. Try the case where $y=-2$: this final inequality holds, but the original inequality $y>\frac1y$ does not. The culprit is the first step in each column: multiplying (or dividing) by $y$ is only valid as written when $y>0$. For $y<0$ it reverses the inequality, giving $y^2<1$, i.e. $|y|<1$, and hence $$1<\frac{1}{|y|}.$$ Squaring loses the distinction between the signs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $f$ is continuously differentiable periodic, then $n\int_{0}^{1} f(x) \sin (2\pi nx) \mathrm dx \to 0 $ If $f: \mathbb R \to \mathbb R$ is continuously differentiable periodic function of period $1$, then $$n\int_{0}^{1} f(x) \sin(2\pi nx)\mathrm dx \to 0 $$ as $ n\to\infty$.
Hint. You may integrate by parts: $$ \begin{align} &n\int_{0}^{1} f(x) \sin(2\pi n x)\: dx\\\\ &=\left. -f(x)\frac{\cos(2\pi n x)}{2\pi}\right|_0^1+\frac1{2\pi}\int_0^1f'(x)\cos(2\pi n x)\:dx\\\\ &=\frac{f(0)-f(1)}{2\pi}+\frac1{2\pi}\int_0^1f'(x)\cos(2\pi n x)\:dx\\\\ &=0+\frac1{2\pi}\int_0^1f'(x)\cos(2\pi n x)\:dx \to 0 \end{align} $$ as $n \to \infty$, using the Riemann-Lebesgue lemma.
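A numerical illustration (an editor's addition), using a smooth $1$-periodic test function; the trapezoidal rule is accurate enough here because the integrand is periodic:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
f = np.exp(np.sin(2 * np.pi * x))          # smooth, period 1

for n in (1, 10, 100):
    I = np.trapz(f * np.sin(2 * np.pi * n * x), x)
    print(n, n * I)                        # tends to 0 as n grows
```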
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
I randomly assign $m$ objects into one of $n$ sets. How do I compute expected value of the number of non-empty sets? To put it into more colorful terms, let's say I have $m$ balls and $n$ boxes. I select one of the $m$ balls and randomly place it into one of the $n$ boxes. I do this with each ball until I have none left. I then take all of the boxes with no balls left in them and set them aside. What is the expected value of the number of boxes left? This question is derived from a math competition I participated in a year ago. I spent weeks trying to figure this one out, but I never quite managed to wrap my head around the solution.
For $i=1,2,\cdots,n$ define $X_i$ as $X_i=1$ if box $i$ ends up with at least one ball and $X_i=0$ if not. Set $$X=X_1+X_2+\cdots+X_n.$$ Notice that $X$ is the total number of boxes which are non-empty. Therefore, $$\mathbb{E}(X)=\mathbb{E}\left(X_1+X_2+\cdots{}+X_n\right)=\mathbb{E}(X_1)+\mathbb{E}(X_2)+\cdots{}+\mathbb{E}(X_n).$$ For all $i$, we have $X_i=0$ if all the $m$ balls end up in one of the other boxes. Hence $$\mathbb{E}(X_i)=P(X_i=1)=1-P(X_i=0)=1-\left(\dfrac{n-1}{n}\right)^m.$$ Summing up gives $$\mathbb{E}(X)=\sum \mathbb{E}(X_i)=n\left(1-\left(\dfrac{n-1}{n}\right)^m\right).$$
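A quick Monte Carlo check of the formula (an editor's addition; the values of $m$ and $n$ are arbitrary):

```python
import random

def simulate(m, n, trials=100_000):
    # drop m balls into n boxes uniformly; count non-empty boxes
    total = 0
    for _ in range(trials):
        total += len({random.randrange(n) for _ in range(m)})
    return total / trials

m, n = 10, 4
print(simulate(m, n))                # empirical average, ≈ 3.775
print(n * (1 - ((n - 1) / n) ** m))  # exact value from the formula
```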
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Simple proof that Automorphisms preserve definable subsets? I've been looking all over for a proof of this result, but I haven't really found anything. Is anyone aware of a particularly simple or elegant one?
Unfortunately, at this basic level in model theory, you're going to have to get your hands dirty with induction on the complexity of terms and formulas, there's no way around it. Fortunately, once you do a proof or two like this, you'll find they become routine. "Automorphisms preserve definable subsets" means that if $\sigma\in \text{Aut}(M)$, $\varphi(\overline{x})$ is a formula, and $\overline{a}$ is a tuple from $M$, then $M\models \varphi(\overline{a})$ if and only if $M\models \varphi(\sigma(\overline{a}))$. It's straightforward to prove this by induction on the complexity of terms and formulas. Step 1: Show by induction on terms that if $t$ is a term, then $\sigma(t^M(\overline{a})) = t^M(\sigma(\overline{a}))$ (it's almost immediate from the fact that $\sigma$ preserves functions and constants). Step 2: Check that $\sigma$ preserves atomic formulas ($\sigma$ preserves equality and relations on elements, and you can use the result of Step 1 to plug in terms). Step 3: Show that $\sigma$ preserves all formulas by induction. The only place you have to do anything is for the existential quantifier: if $b$ is a witness to $\exists x\, \psi(\overline{a},x)$ then $\sigma(b)$ is a witness to $\exists x\, \psi(\sigma(\overline{a}),x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that finite group contains an element of prime order I tried to prove the following claim but it seemed a bit too easy: Let $G$ be a finite group. Then $G$ contains an element of prime order. Please could someone tell me if my proof is correct or if I'm missing something? Let $g$ be any non-identity element of $G$. Let $p$ be any prime factor of $|g|$. Then $g^{|g|\over p}$ has prime order $p$. $\Box$
Careful with the notation: if $|g|$ denotes the order of the element $g$ (which is finite since $G$ is finite, and greater than $1$ since $g\neq e$ — this requires $G$ to be nontrivial so that such a $g$ exists), then your argument does work. Indeed $g^{|g|/p}$ cannot equal $e$: the order of $g^{|g|/p}$ is $|g|/\gcd(|g|,\,|g|/p)=p$ exactly. The pitfall is reading $|g|$ as $|G|$: it is quite possible that $g^{|G|/p}=e$, so the exponent must be built from the element's order, not the group's. The much stronger statement — that $G$ contains an element of order $p$ for every prime $p$ dividing $|G|$ — is commonly called Cauchy's theorem. There is an elementary proof here using the class equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Contraposition of continuity Function f is continuous on a metric space (M,d) if $$\forall x\ \forall \epsilon >0\ \exists \delta >0\ \forall y\ \ d(x,y)< \delta \Rightarrow d(f(x),f(y))<\epsilon $$ I want to find a contrapositive of this. I know that I have to change $\exists$ to $\forall$ and $\forall$ to $\exists$ but I am confused on what happens with " $\Rightarrow$". $$\exists x\ \exists\epsilon>0\ \forall \delta>0\ \exists y\ \text{ such that... } $$ And then I am stuck. Would it be $d(f(x),f(y))\geq\epsilon \text{ but } d(x,y) \geq \delta$ ?
The contrapositive of the inner implication is: for $y \in M$, $d(f(x),f(y)) \geq \varepsilon$ implies $d(x,y) \geq \delta$. The negation of the whole statement is: there exist $x \in M$ and $\varepsilon > 0$ such that for every $\delta > 0$ there is some $y \in M$ with $d(x,y) < \delta$ and $d(f(x), f(y)) \geq \varepsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $x(t)$ is a solution to $ \dot{x}(t)=f(x)$ then $x(t)$approaches infinity or a fixed point, and if $x(t_1)=x(t_2)$ then $x(t)$ is constant This is an easy problem, but I would like to know a formal solution using calculus and not the Picard–Lindelöf theorem, and also without using the (obvious) fact that solutions are monotonic for the second question. Let $x$ be a solution of $\dot x = f(x)$ where $f$ is continuously differentiable. I would like to show that: * *Every solution approaches $\pm \infty$ or a fixed point. *If $x(t_1)=x(t_2)$ for some $t_1 \neq t_2$ then $x(t_1)$ is a constant solution. My attempt, please let me how can I improve it: * *The solution is monotonic so it approaches a limit. If the limit is $\pm \infty$, we are done. Otherwise, it approaches some point. I think we can show that the derivative approaches 0, and then we get that the solution approaches a fixed point. *Using the Rolle theorem we get that there exists $t_1<t_0<t_2$ such that $\dot x(t_0)=0$. We get that $y\equiv x(t_0)$ is a constant solution since $f(y(t))=f(x(t_0))=\dot x(t_0)=0$. At first I wanted to say that since the equation is autonomous then if the derivative is 0 at one point it is always 0, but that's not exactly an appropriate argument, I think. Thanks in advance!
1.) $x(n+1)-x(n)=x'(n+\theta_n)=f(x(n+\theta_n))$, $θ_n\in(0,1)$, gives you the derivative in the limit, since the first difference tends to $0$. 2.) You got two solutions passing through the same point. By local uniqueness, this is impossible if they are not identical. You can get the local uniqueness also from the Gronwall lemma, however this is rather close to Picard-Lindelöf.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove a combinatorics statement? (From a competition problem, number 10B; the original statement includes a picture.) Suppose there are n plates equally spaced around a circular table. Ross wishes to place an identical gift on each of k plates, so that no two neighbouring plates have gifts. Let f(n, k) represent the number of ways in which he can place the gifts. For example f(6, 3) = 2 (the two valid arrangements use alternating plates). Prove that f(n, k) = f(n−1, k) + f(n−2, k−1) for all integers n ≥ 3 and k ≥ 2. Trying combinatorics, and need help with induction. I was thinking of a 'proof' involving combinatorics. Here is an idea. Consider two cases. Case 1: there is a gift on plate one. Then its neighbouring plates cannot have gifts, so the remaining $k-1$ gifts go among the other plates with plate one's two neighbours excluded; the number of arrangements should equal $f(n-2, k-1)$. Likewise the argument works for Case 2, with no gift on plate one. But is this a strong enough proof? How can I strengthen it?
We will fix the plates in their position and number them $1,2,3,\ldots$ in clockwise order. Let $A_{n,k}$ denote the set of valid arrangements of $k$ gifts on $n$ plates. Then $f(n,k) = \vert A_{n,k}\vert$ and we will prove the claim by setting up a bijection (graphically) from $A_{n-2,k-1}\cup A_{n-1,k}$ to $A_{n,k}$, which involves adding plate number $n$ and, if required, $n-1$. In the following table, "$\bullet$" denotes a plate with a gift and "$\circ$" denotes a plate without a gift. The following mapping depends on the state of the first and last plates, which are neighbours of course, of sets $A_{n-2,k-1}$ and $A_{n-1,k}$. $$ \begin{array}{ccccccccc} \text{Plate No.: } & 1 & n-2 & & \quad 1\quad & \quad \color{red}n \quad & \color{red}{n-1} & n-2 & \\ A_{n-2,k-1} & \bullet & \circ & \qquad \mapsto \qquad & \bullet & \color{red}\circ & \color{red}\bullet & \circ & A_{n,k} \\ A_{n-2,k-1} & \circ & \bullet & \mapsto & \circ & \color{red}\bullet & \color{red}\circ & \bullet & A_{n,k} \\ A_{n-2,k-1} & \circ & \circ & \mapsto & \circ & \color{red}\bullet & \color{red}\circ & \circ & A_{n,k} \\ & \\ \text{Plate No.: } & 1 & n-1 & & \quad 1\quad & \quad \color{red}n \quad & n-1 & & \\ A_{n-1,k} & \bullet & \circ & \mapsto & \bullet & \color{red}\circ & \circ & & A_{n,k}\\ A_{n-1,k} & \circ & \bullet & \mapsto & \circ & \color{red}\circ & \bullet & & A_{n,k}\\ A_{n-1,k} & \circ & \circ & \mapsto & \circ & \color{red}\circ & \circ & & A_{n,k}\\ \end{array} $$ Note that for each of the three sets, we have included all members exactly once, hence the bijection is established and the proof is complete.
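A brute-force check of the recurrence (an editor's addition), counting valid placements directly; $n$ starts at $5$ to stay clear of degenerate tiny circles:

```python
from itertools import combinations

def f(n, k):
    # count k-subsets of n plates in a circle with no two adjacent
    return sum(
        1
        for s in combinations(range(n), k)
        if all((a + 1) % n not in s for a in s)
    )

assert f(6, 3) == 2
for n in range(5, 13):
    for k in range(2, n):
        assert f(n, k) == f(n - 1, k) + f(n - 2, k - 1)
print("recurrence verified for 5 <= n <= 12")
```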
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit of a sum of infinite series How do I find the following? $$\lim_{n\to\infty} \frac{\left(\sum_{r=1}^n\sqrt{r}\right)\left(\sum_{r=1}^n\frac1{\sqrt{r}}\right)}{\sum_{r=1}^n r}$$ The lower sum is easy to find. However, I don't think there is an expression for the sums of the individual numerator terms... Nor can I think of a way to get the combined sum. I just need a hint.
A way is to use Abel's summation to get $$\sum_{r\leq n}\sqrt{r}=\sum_{r\leq n}1\cdot\sqrt{r}=n^{3/2}-\frac{1}{2}\int_{1}^{n}\frac{\left\lfloor t\right\rfloor }{\sqrt{t}}dt $$ where $\left\lfloor t\right\rfloor $ is the integer part of $t$, and using $\left\lfloor t\right\rfloor =t+O\left(1\right) $ we have $$\sum_{r\leq n}\sqrt{r}=\frac{2}{3}n^{3/2}+O\left(\sqrt{n}\right) $$ and, in a similar way $$\sum_{r\leq n}\frac{1}{\sqrt{r}}=\sqrt{n}+\frac{1}{2}\int_{1}^{n}\frac{\left\lfloor t\right\rfloor }{t^{3/2}}dt=2\sqrt{n}+O\left(1\right) $$ and for the last sum the well-known identity $$\sum_{r\leq n}r=\frac{n\left(n+1\right)}{2}.$$ Combining the three estimates, the limit equals $$\lim_{n\to\infty}\frac{\frac{2}{3}n^{3/2}\cdot 2\sqrt{n}}{\frac{n(n+1)}{2}}=\frac{8}{3}.$$
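A direct numerical check of the limit value (an editor's addition):

```python
import math

for n in (10**3, 10**4, 10**5, 10**6):
    s1 = sum(math.sqrt(r) for r in range(1, n + 1))
    s2 = sum(1 / math.sqrt(r) for r in range(1, n + 1))
    print(n, s1 * s2 / (n * (n + 1) / 2))   # approaches 8/3 ≈ 2.6667
```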
{ "language": "en", "url": "https://math.stackexchange.com/questions/1520866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Additivity & one-point Continuity of $f \in \mathbb{R}^\mathbb{R}$ imply there is $\alpha \in \mathbb{R}$ s.t. $f(\alpha x) = \alpha x$ I am looking for hints/help concerning a proposition I found self-studying Carothers: Suppose that $f : \mathbb{R} \to \mathbb{R}$ satisfies $f(x+y) = f(x)+f(y)$ for every $x, y \in \mathbb{R}$. If $f$ is continuous at some point $x_0 \in \mathbb{R}$, prove that there is some constant $\alpha \in \mathbb{R}$ such that $f(\alpha x) = \alpha x$ for all $x \in \mathbb{R}$. My initial ideas to prove the result move along the following lines. * *Notice by additivity that $f(0) = 0$. *Take the point $x_0 \in \mathbb{R}$ where $f$ is continuous by assumption. *Obtain that if $|x_0| > \delta_\varepsilon$, then $|f(x_0)|<\varepsilon$ *Take a $\varepsilon >0$ large enough to get a $\delta_{\varepsilon}>0$ such that $|x_0|<\delta_{\varepsilon}$ *etc... The final idea is to construct $\alpha$ as the result of $\tan \theta$, where $\theta$ is the angle between the $x$-axis, and the line that passes through $x_0$ from the origin. Does this all make sense? Thank you for your time.
Hint: $f(x)=f(x+x_0-x_1)-f(x_0-x_1)$. Use that to show that $f(x)$ is continuous at any $x_1$ if it is continuous at $x_0$. Next, let $\alpha = f(1)$ and show that $f(x)=f(1)x$ first for $x$ an integer, then $x$ a rational, and then, by continuity, all the reals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
For matrices $A, B$, show that $BA = 0 \Rightarrow (AB)^2 = 0$ I need to prove the following statement for all matrices A, B: $$A_{n \times n}, B_{n \times n} \text{ over } \mathbb{R}$$ $$BA = 0_{n \times n} \implies (AB)^2 = 0_{n \times n}$$
$(AB)^2=AB.AB=A(BA)B=0$ since $BA=0$
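A concrete pair (an editor's addition) showing the hypotheses are not symmetric: here $BA=0$ while $AB\neq 0$, yet $(AB)^2=0$ as the identity predicts.

```python
import numpy as np

A = np.array([[1, 1], [0, 0]])
B = np.array([[0, 1], [0, 0]])

print(B @ A)              # zero matrix
print(A @ B)              # nonzero
print((A @ B) @ (A @ B))  # zero again, since (AB)^2 = A(BA)B
```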
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of indefinite integral of inverse function Assume that $f$ has an inverse on its domain. a. Let $y = f^{-1}(x)$ and show that $\int f^{-1}(x)dx=\int yf'(y)dy$ b. Use part a to show that $\int f^{-1}(x)dx= yf(y)-\int f(y)\,dy$ c. Use part b to evaluate $\int \ln(x)dx$ d. Use part b to evaluate $\int \sin^{-1}(x)dx$ e. Use part b to evaluate $\int \tan^{-1}(x)dx$ Parts c through e (and perhaps even b) seem fairly straightforward, but I'm stumped on how to prove a. I have a feeling I'm to use integration by parts, but every time I try I hit a dead end. Any help would be appreciated.
$f^{-1}(x)=y \implies f(y)=x$. Differentiating both sides of $f(y)=x$ gives $ f'(y) \cdot \frac{dy}{dx}=1 \implies f'(y) dy=dx$ Now just substitute.
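A symbolic spot-check of part (c) (an editor's addition): with $f(y)=e^y$, the part-b formula gives $\int \ln x\,dx = x\ln x - e^y = x\ln x - x + C$, which SymPy confirms.

```python
from sympy import symbols, integrate, log, simplify

x = symbols('x', positive=True)
print(simplify(integrate(log(x), x)))   # x*log(x) - x
```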
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Applying L'Hopital on $\lim_{x\to1^+}\left( \frac{1}{\ln(x)} - \frac{1}{x - 1} \right)$ I am trying to apply L'Hopital's rule here: $$\lim_{x\to1^+}\left( \frac{1}{\ln(x)} - \frac{1}{x - 1} \right)$$ The indeterminate form here is $\infty - \infty$, so I need to somehow shape this limit to have a quotient instead. My attempt: $$\frac{1}{\ln(x)} - \frac{1}{x - 1} = \frac{\frac{(x-1)-\ln(x)}{\ln(x)}}{(x-1)}$$ This becomes $$\frac{(x-1)-\ln(x)}{\ln(x)\cdot(x-1)}$$ That's $$\frac{0^+-0^+}{0^+\cdot0^+}$$ Hmmmm... Does this become $\frac{0}{0}$? I am not sure. To me, $0^+$ means "some very small number greater than $0$", but two $0^+$ don't necessarily refer to the same number... or do they? Because if they don't, then $0^+ - 0^+ \not= 0$. The same goes for $0^+ \cdot 0^+$, they are numbers greater than $0$ so their product cannot be $0$. What do you think? Or am I overthinking it?
If you make a single fraction, you get $$ \frac{x-1-\ln x}{(x-1)\ln x} $$ Both the numerator and the denominator are continuous at $1$, so this limit is of the form $\frac{0}{0}$. No need to consider signs, at this point. You can avoid long applications of l'Hôpital with a simple observation: $$ \lim_{x\to1^+}\frac{x-1-\ln x}{(x-1)\ln x}= \lim_{t\to0^+}\frac{t-\ln(1+t)}{t\ln(1+t)} $$ with the substitution $x-1=t$. Now you can rewrite the last limit as $$ \lim_{t\to0^+}\frac{t-\ln(1+t)}{t^2}\frac{t}{\ln(1+t)} $$ and you should know that the second fraction has limit $1$; so we have $$ \lim_{t\to0^+}\frac{t-\ln(1+t)}{t^2}= \lim_{t\to0^+}\frac{1-\dfrac{1}{1+t}}{2t}= \lim_{t\to0^+}\frac{t}{2t(1+t)}=\frac{1}{2} $$
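Numerically (an editor's addition), the quantity indeed settles at $1/2$:

```python
import math

for x in (1.1, 1.01, 1.001, 1.0001):
    print(x, 1 / math.log(x) - 1 / (x - 1))   # tends to 0.5
```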
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Deducing equivalence between norms from simple condition Let $||\cdot||,\;||\cdot||'$ be norms on $V$. Suppose for some $a,b>0$ we have: $$||x||<a\Rightarrow ||x||'<1\Rightarrow ||x||<b$$ Show that $||\cdot||,\;||\cdot||'$ are Lipschitz equivalent. I don't think this should be hard, but I am completely stuck. My idea was to first show that the norms are equivalent on the ball $\{x:\;||x||<a\}$ but I am just turning in circles with manipulations. A gentle nudge would be really appreciated.
Hint. For $x \in V$, what is a collinear vector $y$ to $x$ such that $\Vert y \Vert <a$? What happens then to $\Vert y \Vert^\prime$? And vice versa?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that the sum of the squares of three reals is at most 5? $a, b$ and $c$ are three real numbers such that $(\forall x \in [-1,1]): |ax^2+bx+c|\leq 1$. Prove that: 1) $|c|\leq 1$. 2) $-1\leq a+c\leq 1$. 3) Deduce that: $a^2+b^2+c^2\leq 5$. The first and second questions are easy to prove (just take $x=0, -1$ or $1$ ...), but the third one?! I'm sure the answer must be easy since it's a deduction. I need your help to solve it, thank you.
$x=0$ gives us $|c| \le 1$. Similarly set $x = \pm1$ to get $|a \pm b+c| \le 1$. So we have the inequalities: $$-1 \le c \le 1 \iff -1 \le -c \le 1\tag{1}$$ $$-1 \le a+b+c \le 1 \iff -1\le -a-b-c \le 1 \tag{2}$$ $$-1 \le a-b+c \le 1 \iff -1 \le -a+b-c \le 1\tag{3}$$ Note the right side of the above three inequalities can be considered the "negatives" of the left sides. These should be enough for us to prove all sorts of things. Here, $$(2)-(1) \implies -2 \le a+b \le 2\implies (a+b)^2 \le 4$$ $$(3)-(1) \implies -2 \le a-b \le 2\implies (a-b)^2 \le 4$$ Now those two give $a^2+b^2=\frac12((a+b)^2+(a-b)^2) \le 4$. We just need to add to this $|c| \le 1 \implies c^2 \le 1$ to get what we want. The bound is sharp: $f(x)=2x^2-1$ (i.e. $a=2$, $b=0$, $c=-1$, the Chebyshev polynomial $T_2$) satisfies $|f(x)|\le 1$ on $[-1,1]$ and gives $a^2+b^2+c^2=5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
proof that there is a unique isomorphism between two free modules I'm trying to figure out if there's any way to shorten the following proof of Corollary 7 in Dummit & Foote (p. 355) so that I don't have to write down the expressions of any specific elements like $x = r_1a_1 + \dots + r_na_n$. Is it possible to write out the proof without needing to examine specific elements, or is it necessary in order to show that any endomorphism from $F_1$ to $F_1$ that is the identity on $A$ must also be the identity on $F_1$? Corollary 7. Let $F_1$ and $F_2$ be free modules on the same set $A$. Then there is a unique $R$-module isomorphism between $F_1$ and $F_2$ which is the identity on $A$. Proof. We first note by the universal property of $F_1$ that there can exist at most one homomorphism (and hence isomorphism) from $F_1$ to $F_2$ that is the identity on $A$. Since there exist inclusions $A\hookrightarrow F_1$ and $A\hookrightarrow F_2$, it follows from the universal properties of $F_1$ and $F_2$ that there exist unique $R$-module homomorphisms $\Phi_1\colon F_1\to F_2$ and $\Phi_2\colon F_2\to F_1$ that are the identity on $A$. Now define the $R$-module homomorphism $\Phi\colon F_1\to F_1$ by $\Phi = \Phi_2\circ\Phi_1$. It is clear that $\Phi$ is the identity on $A$. But then for all $x = r_1a_1 + \dots + r_na_n\in F_1$, \begin{align*} \Phi(x) & = \Phi(r_1a_1 + \dots + r_na_n) \\ & = r_1\Phi(a_1) + \dots + r_n\Phi(a_n) \\ & = r_1a_1 + \dots + r_na_n \\ & = x. \end{align*} Thus, $\Phi$ is the identity on $F_1$; by symmetry $\Phi_1\circ\Phi_2$ is the identity on $F_2$, so $\Phi_1$ is invertible and hence is an isomorphism.
You can use the universal property applied to $A \hookrightarrow F_1$. The function $\Phi: F_1 \rightarrow F_1$ is the identity on $A$, so it satisfies the universal property for the map $A \hookrightarrow F_1$. In other words $\Phi$ extends $A \hookrightarrow F_1$ to a map $F_1 \rightarrow F_1$. But there is already such a map, and that is the identity $F_1 \rightarrow F_1$. By uniqueness of the universal property, they must be the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are there any situations in which L'Hopital's Rule WILL NOT work? Today was my first day learning L'Hopital's Rule, and I was wondering if there are any situations in which you cannot use this rule, with the exception of when a limit is determinable.
Here's a similar example to Ittay Weiss's, but more nefarious, since in Ittay's example the limit was easily seen to be equal to 1: $$\lim_{x \rightarrow 0}\frac{e^{-\frac{1}{x^2}}}{x}$$ Both the numerator and denominator go to zero. But applying l'Hospital gives: $$\lim_{x \rightarrow 0}\frac{2e^{-\frac{1}{x^2}}}{x^3}$$ And applying l'Hospital again: $$\lim_{x \rightarrow 0}\frac{4e^{-\frac{1}{x^2}}}{3x^5}$$ We see that the situation is getting worse, not better! The way out is to forget about l'Hospital entirely, and try to replace $e^{-1/x^2}$ with something more manageable. We have, for all $x$: $$e^x \geq x$$ And so for all $x > 0$: $$e^{-x} \leq \frac{1}{x}$$ Hence, for all $x \neq 0$: $$e^{-\frac{1}{x^2}} \leq x^2$$ This implies: $$0 \leq \frac{e^{-\frac{1}{x^2}}}{x} \leq \frac{x^2}{x} = x \qquad (x > 0)$$ $$x = \frac{x^2}{x} \leq \frac{e^{-\frac{1}{x^2}}}{x} \leq 0 \qquad (x < 0)$$ And the squeeze theorem tells us that the original limit is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 9, "answer_id": 7 }
Why is 1 not a cluster point of $\frac1n$? I came across this definition of cluster point. Let $S \subset \mathbb R$ be a set. A number $x \in \mathbb R$ is called a cluster point of $S$ if for every $\varepsilon > 0$, the set $(x−\varepsilon, x+\varepsilon)\cap S \setminus \{x\}$ is not empty. For the set $\{1/n:n\in\mathbb N\}$, why is 1 not a cluster point? I came across this proof while googling but still can't understand it http://mathonline.wikidot.com/cluster-points Suppose that $0<c\leq 1$. By one of the Archimedean corollaries, since $\frac1c>0$ there exists a natural number $n_c\in\mathbb N$ such that $n_c−1\leq\frac1c<n_c$ and so $\frac1{n_c}<c\leq\frac1{n_c−1}$. Choose $\delta_0=\min\{|c−\frac1{n_c}|,\,|\frac1{n_c−1}−c|\}$. Then $V_{\delta_0}(c)\cap S\setminus\{c\}=\{x\in\mathbb R:|x−c|<\delta_0\}\cap S\setminus\{c\}=\emptyset$. I take $n_c=10, c=1$, then delta is 0.888888889. I can find an $x=0.5$ such that the inequality is satisfied. Am I doing it wrong?
If we take $\epsilon$ to be $.25$ and $S = \{\frac{1}{n} : n \in \mathbb{N}\}$, then $(1-\epsilon,1+\epsilon) \cap S = (.75,1.25) \cap S = \{1\}$, since the next largest element of $S$ is $\frac12 = 0.5$, which lies outside the interval. Hence $(1-\epsilon,1+\epsilon)\cap S\setminus\{1\} = \emptyset$, so $1$ is not a cluster point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof for Product of Congruences mod P Consider $Z_p$, with $p$ prime. Prove that $[x][x] = [y][y]$ if and only if $[x] =[y]$ or $[x] =[-y]$. I think this question comes down to the fact that $Z_p$ is cyclic but am not sure.
Hint: One direction is easy, and does not require primality. From a number-theoretic point of view, the harder direction boils down to the fact that $p$ divides $x^2-y^2$ if and only if it divides $(x-y)(x+y)$. And if a prime divides a product, it divides (at least) one of the terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1521897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergent sequence on unit sphere Suppose $x_n$ is a bounded sequence in a vector space $V$ with norm $||\cdot||$. Show that if: $$\hat{x}_n=\frac{x_n}{||x_n||}\;\;\text{converges}\Rightarrow x_n\;\text{has a convergent subsequence}$$ Thoughts: Writing the limit on the sphere as $\hat{x}=x/||x||$ I would like to show that for large enough $M$ (bounding $||x_n||$), there are $n_0<n_1<\cdots$ such that $$\frac{1}{M}||x_{n_k}-x||\leq ||\hat{x}_{n_k}-\hat{x}||\quad \forall k>K$$ First, is this even true? If so could can I show it?
Hint: The sequence of numbers $\| x_n\|$ is real and bounded, so it must contain a convergent subsequence by the Bolzano–Weierstrass theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $M_a\neq M_b$. Let $f\in \mathbb Z[x]$ be a non-constant polynomial .Show that as $a$ varies over $\mathbb Z$,the set of divisors of $f(a)$ includes infinitely many different primes. My try: Let $f(x)=a_0+a_1x+a_2x^2+\cdot+\cdot +a_nx^n$, $f(a)=a_0+a_1a+a_2a^2+\cdot\cdot\cdot\cdot +a_na^n$.Suppose that $f(a)$ has the form $f(a)=(a-p_1)^{k_1}(a-p_2)^{k_2}\cdot\cdot\cdot(a-p_m)^{k_m}$ for each $a\in \mathbb Z$. Let $M_a$ be the set of different divisors of $f(a)$. If we can show that $M_a\neq M_b$ for each $a,b\in \mathbb Z$ then as $\mathbb Z$ is countable then $M=\{M_a:a\in \mathbb Z\}$ is infinite and hence the proof. I am stuck to show that $M_a\neq M_b$.Please help.
Hint: Suppose that the set of prime divisors of the $f(a), a\in \mathbb{Z}$ is finite. Then there exists $a\in \mathbb{Z}$ such that $f(a)\not =0$ has the maximum of prime divisors possible. Fix such an $a$, and put $f(a+x)=\sum_{j=0}^n b_jx^j$, with the $b_j\in \mathbb{Z}$ ($b_n\not = 0$, and $b_0=f(a))$. Now put $x=mf(a)^2$, for $m\in \mathbb{Z}$. Show that $f(a+mf(a)^2)$ has as prime divisors all the prime divisors of $f(a)$, with in addition, if $m$ is large, another prime divisor, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Evaluation of the Definite Integral $\int_{\zeta=0}^{2x} \lvert \sin(\zeta)\rvert \mathrm{d}\zeta$ I need help understanding how to evaluate the following integral: $$\int_{\zeta=0}^{2x} \lvert \sin(\zeta)\rvert \mathrm{d}\zeta$$ I am a bit lost so if someone could help out that would be great.
Sketch the graph of $|\sin x|$. Suppose $x>0$. Let $k=\lfloor 2\,x/\pi\rfloor$ be the unique non-negative integer such that $k\,\pi\le2\,x<(k+1)\pi$. Then $$ \int_0^{2x}|\sin\zeta|\,d\zeta=\int_0^{k\pi}|\sin\zeta|\,d\zeta+\int_{k\pi}^{2x}|\sin\zeta|\,d\zeta=k\int_0^\pi\sin\zeta\,d\zeta+\int_0^{2x-k\pi}\sin\zeta\,d\zeta. $$
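A numerical spot-check of the reduction (an editor's addition), at the arbitrary point $x=2.3$:

```python
import math
from scipy.integrate import quad

x = 2.3
k = math.floor(2 * x / math.pi)

direct, _ = quad(lambda z: abs(math.sin(z)), 0, 2 * x)
reduced = 2 * k + (1 - math.cos(2 * x - k * math.pi))
print(direct, reduced)   # both ≈ 2.8878
```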
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find all real numbers $x$ such that $x[x[x[x]]]=88$ where $[\cdot]$ denotes floor function. Question: Find the all real numbers $x$ such that $x[x[x[x]]]=88$ where $[\cdot ]$ denotes floor function. Attempt: $$x[x[x[x]]]=88\implies [x[x[x]]]=\frac{88}{x}.$$ Since left side of equation always gives an integer value, $88$ has to be divisible by $x$. Also, $3^4=81$. So the solution should lie between $3$ and $4$.
So, the given equation basically implies $x^4\approx88$, and we know $\frac{88}x$ is an integer. Now, if $x^4\approx88$, then $x\approx\sqrt[4]{88}=3.063$, and then $\frac{88}x\approx28.732$. So I'd guess either $\frac{88}x=28$ or $\frac{88}x=29$. That is, I'd guess either $x=\frac{88}{28}=\frac{22}7$ or $x=\frac{88}{29}$. Checking $x=\frac{22}7$: $\lfloor x\rfloor=3$, $\lfloor 3x\rfloor=\lfloor\frac{66}7\rfloor=9$, $\lfloor 9x\rfloor=\lfloor\frac{198}7\rfloor=28$, and $28x=88$, so $x=\frac{22}7$ works. Moreover it is the only positive solution, since $x\lfloor x\lfloor x\lfloor x\rfloor\rfloor\rfloor$ is an increasing function for $x>0$.
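Exact-arithmetic verification (an editor's addition):

```python
from fractions import Fraction
from math import floor

x = Fraction(22, 7)
print(x * floor(x * floor(x * floor(x))))   # prints 88
```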
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Geometry question on triangle Let ABC be an isosceles triangle with AB = AC and let Γ denote its circumcircle. A point D is on the arc AB of Γ not containing C and a point E is on the arc AC of Γ not containing B such that AD = CE. How can I prove that BE is parallel to AD?
Since $AD = CE$, the equal chords $AD$ and $CE$ cut off equal arcs: arc $AD$ = arc $CE$ (taking the arcs not containing the other marked points). Since $AB = AC$, also arc $AB$ = arc $AC$ (again the arcs not containing the third vertex). Subtracting, arc $DB$ = arc $AB$ − arc $AD$ = arc $AC$ − arc $CE$ = arc $AE$. Hence the inscribed angles $\angle DAB$ (subtending arc $DB$) and $\angle ABE$ (subtending arc $AE$) are equal. Since $D$ and $E$ lie on opposite sides of the chord $AB$, these are alternate angles for the lines $AD$ and $BE$ with transversal $AB$, so $AD \parallel BE$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Does double negation distribute over disjunction intuitionistically? Does the following equivalence $$\lnot \lnot (A \lor B) \leftrightarrow (\lnot \lnot A \lor \lnot \lnot B)$$ hold in propositional intuitionistic logic? And in propositional minimal logic? (In propositional classical logic this is obvious since $A \leftrightarrow \lnot\lnot A$ is classically provable.) Actually I have a proof that $(\lnot \lnot A \lor \lnot \lnot B) \to \lnot \lnot (A \lor B)$ holds in propositional minimal logic, so I'm interested in the converse implication: $$\lnot \lnot (A \lor B) \to (\lnot \lnot A \lor \lnot \lnot B)$$ If it is minimally or/and intuitionistically provable, I would like a (reference to a) direct proof in natural deduction-style.
$\lnot \lnot (A \lor B) \to (\lnot \lnot A \lor \lnot \lnot B)$ is not intuitionistically acceptable. One way of seeing this is by considering the Heyting algebra whose elements are the open subsets of the unit interval $[0, 1] \subseteq \Bbb{R}$ under the subspace topology, with $A \lor B = A \cup B$, $A \to B = \mathsf{int}(A^c\cup B)$ and $\bot = \emptyset$ (see https://en.wikipedia.org/wiki/Heyting_algebra). In this Heyting algebra, $\lnot\lnot A$ is the interior of the closure of $A$ and $A \to B$ is $\top$ iff $A \subseteq B$. Hence if $A = [0, 1/2)$ and $B = (1/2, 1]$, $\lnot \lnot (A \lor B) = [0, 1]$ while $\lnot \lnot A \lor \lnot \lnot B = [0, 1] \mathop{\backslash} \{1/2\}$ and $\lnot \lnot (A \lor B) \to (\lnot \lnot A \lor \lnot \lnot B)$ is not $\top$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Is my proof for $A\cap B=A\cap C\Longleftrightarrow B=C$ correct? Prove or disprove: for every $3$ sets $A,B,C,: A\cap B=A\cap C\Longleftrightarrow B=C$ Proof: This proof is by case analysis. Case I: Assuming $B=C$, I will prove $A\cap B=A\cap C$ 1) $A\cap B=A\cap C$ (because $B=C$) 2) $A\cap C=A\cap B$ (because $B=C$) 3) Thus, $A\cap B=A\cap C$ Case II: Assuming $A\cap B=A\cap C$, I will prove that $B=C$ 1) If $x\in A$, then $x\in A\cap B$ 2) In accordance with the original assumption, $x\in A\cap B$ 3) Therefore, $x\in B$ 4) Since $x\in A$, then $x\in A\cap C$ 5) Therefore $x\in C$ 6) Since $x\in A$, $x\in B$, and $x\in C$, we can conlusde that $B=C$ I feel that I am missing pertinent information to make this proof complete and correct. It's difficult to take my thoughts and organize them in the form of a proof. I would like constructive criticism to make this proof correct and to later use what I learn from it to improve the proofs I write in the future.
$\textbf{Hint}$: Consider the case $A= \varnothing$. You proved $B=C \Rightarrow A \cap C = A \cap B$ correctly. Just note that step 1 and 2 are equivalent. For proving that $B=C$, you have to prove $B \subseteq C$ and $C \subseteq B$, i.e. take an element in $B$ and prove it is also in $C$ (and conversely). Step 1 in your second case is false. Just because an element is in $A$ does not imply that it is in $B$. $A$ is potentially "bigger" than $A \cap B$ (for example $B=\varnothing$ and $A \neq \varnothing$). It might be a good idea to write out what it means for an element to be in the intersection of two sets: By definition, $x$ is in $A \cap B$ if and only if $x$ is in $A$ $\textbf{and}$ $x$ is in $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Stable subspace, unstable subspace, centre subspace: Of which kind of stability are we talking? Let $x'(t)=Ax(t)$ be a linear ODE system. Then the span of the eigenvectors belonging to eigenvalues with negative real part is called the stable subspace, the subspace spanned by the eigenvectors of eigenvalues with positive real part is called the unstable subspace, and the subspace spanned by the eigenvectors belonging to eigenvalues with zero real part is called the centre subspace. * *What kind of stability is the stable subspace of? *What kind of instability (as negation of what kind of stability) is the unstable subspace of? *Is the centre subspace stable or unstable? Are there situations in which it is stable (and which kind of stability then)?
* *Globally asymptotically stable: solutions with initial values anywhere in this subspace converge to $0$. *Negation of any kind of stability: the solutions with initial values in this subspace are unbounded, eventually leaving every compact set. *Center is (globally) Lyapunov stable, not asymptotically stable. All of this follows by considering the compression of $A$ to the relevant subspace, since the orbits beginning there stay in it. Reference: G. Teschl, ODE and Dynamical Systems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a name for $2^n$ in combinatorics? $2^n$, with $n$ corresponding to the number of elements, is an almost unavoidable calculation involved in counting. At the root of its usefulness there lies - I presume - the identity: $$\sum_{0\leq{k}\leq{n}}\binom nk =2^n$$ that explains its application to the calculation of the power set or $2^S$ or the set of all subsets of $S$. However, it is also used without much need for mathematical theory in day to day calculations, such as determining the different ways that $+$'es and $-$'es can be assigned to a list of numbers, say from $1$ to $3$ (i.e. $n=3$): $1$ can be positive or negative ($2$), and for each branch, $2$ can be positive or negative ($2^2)$, and for each choice, $3$, in turn, can assume positive or negative signs, for a total of $2^3$ ways of allocating signs. In looking up for a name akin to combinations for $\binom nk$, or permutations for $P(n,k)$ for this ubiquitous $2^n$ operation, I have found "number of $k$-combinations for all $k$, with an explanation as to its correspondence to the sum of the $n$-th row of Pascal triangle. Very neat. But is there a mathematical name that directly and succinctly identifies this $2^n$ operation of counting a process for which each node or step splits into a binary choice?
I don't think there is a short name for $2^n$. However, I think that "the number of subsets of a set with $n$ elements" (or the cardinality of the power set, of course) is already nice enough. For instance, it fits very well with your example of +'es and -'es: just select the subset of positive terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the Fourier transform of $1/|x|$? I looked it up in several tables and calculated it in Mathematica and Matlab. Some tables say that the answer is simply $$\frac{1}{|\omega|}$$ and in other table it is $$-\sqrt{\frac{2}{\pi}}\ln|\omega|$$ and in Mathematica and Matlab (mupad) it is $$-\sqrt{\frac{2}{\pi}}\ln|\omega|-2\gamma$$ where $\gamma$ is the Euler–Mascheroni constant. Why are there so many answers? Are they all equivalent in some way or two (or all) of them are wrong?
To be fully rigorous, we should concede that $1/|x|$ is not directly a distribution on $\mathbb R$. But it does arise as an even tempered distribution $u$ such that $x\cdot u=\operatorname{sign}(x)$. Up to a constant, the Fourier transform gives ${d\over dx}\hat{u}=\mathrm{PV}\,{1\over x}$. This equation has at least the solution $\hat{u}=\log|x|$. The associated homogeneous equation has only multiples of $1$ as solutions, so the equation's solutions are $\hat{u}=\log|x|+c$ for arbitrary constants $c$ (together with the overall normalization constant). The constant $c$ can be determined by evaluation against $e^{-\pi x^2}$. This ambiguity is exactly why different tables list answers differing by an additive constant: each corresponds to a different choice of regularization of $1/|x|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1522986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 2 }
Convergence of $\frac{n^{n+1}}{e^n\left(n+1\right)!}$ Determine whether the series $\displaystyle \sum\frac{n^{n+1}}{e^n\left(n+1\right)!}$ is convergent or divergent. I can't use the integral test. I can't use condensation test either, since the series isn't decreasing. Any hints/ideas on how to pursue this exercise?
The series is divergent, I will provide a complete answer below, but before that, I want to make sure you know what the 'Monotone Convergence Theorem' is, as well as one of the definitions of '$e$', as they will be relevant in order to understand the answer that will be posted below. The ratio test yields an inconclusive result, but we can still work to show this series is divergent, albeit it is a bit more tricky than I would have anticipated at first glance of the question. First of all, we define $e$ as follows: $\displaystyle e = \lim_{n\to\infty} (1 + \frac{1}{n})^n$ Secondly, I will provide you with a link to a statement of the Monotone Convergence Theorem as well as a proof, in case you are unfamiliar with it. Now, onto the proof. We will aim to show that $\lim_{n\to\infty} \LARGE\frac{(\frac{1}{n})}{(\frac{n^{n+1}}{e^n(n+1)!})}$ exists and is finite. Once we have shown this, it follows from the definition of the limit to infinity of a sequence, that the following statement is true for all $n$ sufficiently large and some $k>0$ $\large(\frac{1}{n})(\frac{1}{k}) < \frac{n^{n+1}}{e^n(n+1)!}$ Why? I'll explain below: Recall the definition of the existence of a finite limit to infinity of a sequence ${a_n}$, namely, $\lim_{n\to\infty} a_n = L \iff \forall \epsilon >0 , \exists N\in \mathbb{N}$ such that $\forall n>N, |a_n - L| < \epsilon$. Read $\forall$ as 'For all' and $\exists$ as 'There exists'. Thus, suppose $\lim_{n\to\infty} \LARGE\frac{(\frac{1}{n})}{(\frac{n^{n+1}}{e^n(n+1)!})}$ $ = l$ Then $\exists N\in \mathbb{N}$ such that $\forall n>N$ $|\LARGE\frac{(\frac{1}{n})}{(\frac{n^{n+1}}{e^n(n+1)!})}$$ - l| < 1$. Now recall the reverse triangle inequality, namely: $|a|-|b| < |a+b|$ Therefore, we have : $\LARGE|\LARGE\frac{(\frac{1}{n})}{(\frac{n^{n+1}}{e^n(n+1)!})}|$ - $|l|$$ < 1$ Let $1 + |l| = k>0$ Then, rearranging, we have: $|\frac{1}{n}| < k|\frac{n^{n+1}}{e^n(n+1)!}|$ rearranging again and ignoring absolute value signs (as we are dealing with positive sequences) we get: $\large(\frac{1}{n})(\frac{1}{k}) < \frac{n^{n+1}}{e^n(n+1)!}$ for $n $sufficiently large as promised. This is desirable, because we already know that $\sum{\frac{1}{n}}$ is divergent, so the left hand side of the inequality above is divergent, and thus by the comparison test for positive series $\sum{\frac{n^{n+1}}{e^n(n+1)!}}$ is divergent. Now all we need to do is actually show that $\lim_{n\to\infty} \LARGE\frac{(\frac{1}{n})}{(\frac{n^{n+1}}{e^n(n+1)!})}$ exists and is finite. We will do so by the Monotone Convergence Theorem. We will first show that the sequence we are interested in taking the limit of is bounded below. This is easy, as it is always positive for positive integer values of $n$, hence our lower bound is simply $0$. We can also show that the sequence we are interested in is monotonic decreasing for n sufficiently large as follows: First of all, let's simplify the expression $\LARGE\frac{(\frac{1}{n})}{(\frac{n^{n+1}}{e^n(n+1)!})}$. 
This is simply $\frac{e^n(n+1)!}{n^{n+2}} = a_n$. We will show that for $n$ sufficiently large $a_n \geq a_{n+1}$. This amounts to showing that for $n$ sufficiently large $\frac{a_n}{a_{n+1}} \geq 1$. On simplifying $\frac{a_n}{a_{n+1}}$ we see this is nothing but $$\frac{a_n}{a_{n+1}}=\frac{1}{e}\left(1+\frac{1}{n}\right)^n\cdot\frac{(n+1)^3}{n^2(n+2)}.$$ As $n$ grows large, $\frac{1}{e}\left(1+\frac1n\right)^n = e^{\,n\ln(1+1/n)-1} = e^{-\frac{1}{2n}+O(1/n^2)} \approx 1-\frac{1}{2n}$, while $\frac{(n+1)^3}{n^2(n+2)} = 1+\frac1n+O\!\left(\frac1{n^2}\right)$. Their product is $1+\frac{1}{2n}+O\!\left(\frac1{n^2}\right)$, which exceeds $1$ for $n$ sufficiently large, so $\frac{a_n}{a_{n+1}}>1$ eventually. Thus we have shown that for $n$ sufficiently large $a_n = \frac{(1/n)}{\left(\frac{n^{n+1}}{e^n(n+1)!}\right)}$ is monotonic decreasing and bounded below, which means by the Monotone Convergence Theorem it converges to a finite limit. Thus $\sum{\frac{n^{n+1}}{e^n\left(n+1\right)!}}$ is divergent. If you have any questions please feel free to ask.
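A numerical companion (an editor's addition): by Stirling's formula the terms behave like $\frac{1}{\sqrt{2\pi n}}$, which is why the series diverges; computing via logarithms avoids overflow.

```python
import math

def term(n):
    # n^(n+1) / (e^n * (n+1)!), via logs; lgamma(n+2) = log((n+1)!)
    return math.exp((n + 1) * math.log(n) - n - math.lgamma(n + 2))

for n in (10, 100, 1000, 10000):
    print(n, term(n), 1 / math.sqrt(2 * math.pi * n))
```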
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove $ \forall x >0, \quad \sqrt{x +2} - \sqrt{x +1} \neq \sqrt{x +1}-\sqrt{x}$ I would like to prove $$ \forall x >0, \quad \sqrt{x +2} - \sqrt{x +1} \neq \sqrt{x +1}-\sqrt{x}$$ * *I'm interested in more ways of proving it My thoughts: \begin{align} \sqrt{x+2}-\sqrt{x+1}\neq \sqrt{x+1}-\sqrt{x}\\ \frac{x+2-x-1}{\sqrt{x+2}+\sqrt{x+1}}&\neq \frac{x+1-x}{\sqrt{x +1}+\sqrt{x}}\\ \frac{1}{\sqrt{x+2}+\sqrt{x+1}}&\neq \frac{1}{\sqrt{x +1}+\sqrt{x}}\\ \sqrt{x +1}+\sqrt{x} &\neq \sqrt{x+2}+\sqrt{x+1}\\ \sqrt{x} &\neq \sqrt{x+2}\\ \end{align} * *Is my proof correct? *I'm interested in more ways of proving it.
By the mean value theorem (MVT), for each $x >0$ there exists $\zeta_x \in (x, x+1)$ such that $$\frac{1}{2\sqrt{\zeta_x}}\left((x+1)-x\right)=\frac{1}{2\sqrt{\zeta_x}}=\sqrt{x+1}-\sqrt{x}.$$ As $y \mapsto \frac{1}{2\sqrt{y}}$ is strictly decreasing and $\zeta_{x+1}>x+1>\zeta_x$, you have $$\frac{1}{2\sqrt{\zeta_{x+1}}}=\sqrt{x+2} -\sqrt{x+1} < \sqrt {x+1}-\sqrt{x}=\frac{1}{2\sqrt{\zeta_x}}.$$ In particular the two differences are never equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 5 }
Finding the basis with given transition matrix \begin{equation} P = \begin{bmatrix}1 & 1 & 0 \\ 0 & 1 & 3 \\ 3 & 0 & 1 \end{bmatrix} \end{equation} a) P is the transition matrix from what basis B to the standard basis S = {e1, e2, e3} for R3? b) P is the transition matrix from the standard basis S = {e1, e2, e3} to what basis B for R3? My attempt: For a), if PB=S (is this even right?), can we just multiply inverse of P both sides to get B?
Your idea looks correct. Since S is the standard basis, i.e. S is the identity matrix, in a) your basis B are the columns of $P^{-1}S = P^{-1}$. In b) B equals P: $B = PS = P$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Using the Law of the Mean (MVT) to prove the inequality $\log(1+x)<x$ If $x \gt0$, then $\log(1+x) \lt x$. My attempt at the proof thus far... Let $f(x) = x-\log(1+x)$, then $f'(x) = 1-\frac{1}{1+x}$ for some $\mu \in (0,x)$ (since $x>0$) MVT gives us $f(x) = f(a) + f'(\mu)(x-a)$ So plugging in our values we get: $$x-\log(1+x) = 0+(1-\frac{1}{1+\mu})(x-0)$$ which we can reduce to $$\log(1+x)=\frac{x}{1+\mu}$$ Now for $x\leq1$, then $0\lt\mu\lt1$ so $0\lt \frac{1}{1+\mu}\lt 1$, thus $\log(1+x)\lt x$. If $x>1$, then.... So I can see clearly that if $x>1$ is plugged into $\frac{x}{1+\mu}$ then $\log(1+x)<x$, but I am not sure of how to prove this. I would appreciate tips, hints or proof completions.
You're almost there. Have a look at your statement $$\log(1 + x) = \frac{x}{1+\mu}.$$ No matter what $\mu$ is, $1 + \mu$ is bigger than $1$, hence $$\frac{1}{1+\mu} < 1.$$ Multiplying this final inequality through by $x$ gives you the result you want. In words, dividing $x$ by something bigger than $1$ gives you something smaller than $x$.
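A quick numerical confirmation (an editor's addition):

```python
import math

for x in (0.1, 1.0, 10.0, 1000.0):
    print(x, math.log(1 + x), math.log(1 + x) < x)   # always True
```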
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $[0,1]$ isn't homeomorphic to $\mathbb{R}$ Prove that $[0,1]$ isn't homeomorphic to $\mathbb{R}$ My first thought is that there can not be a continuous bijection, $f$, from $[0,1]$ to $\mathbb{R}$ because a continuous function that maps $[0,1]\rightarrow \mathbb{R}$ must be bounded so there can't be a surjection. So that would then be the proof. Though I believe I am incorrect in my thinking because a hint for the assignment says to use the intermediate value theorem. That is "Suppose $f:[a,b]\rightarrow\mathbb{R}$ is continuous. If $f(a)<\delta<f(b)$ or $f(b)<\delta<f(a)$ then $\delta=f(c)$ for some $c\in[a,b]$". Why is my first thought wrong and how is the IVT useful?
Here's a way to use the IVT: Suppose $f: \mathbb{R}\rightarrow [0, 1]$ is a continuous bijection. Let $a\in\mathbb{R}$ be such that $f(a)=0$; now look at $x=a-1$ and $y=a+1$. Since $f$ is injective, we have $f(x), f(y)\not=0$; let $0<c<\min\{f(x), f(y)\}$. By IVT, we have $f(x')=c$ for some $x<x'<a$ and $f(y')=c$ for some $a<y'<y$; but then $x'\not=y'$ but $f(x')=f(y')$, so $f$ is not injective. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
The element that is an associate of everything. Suppose I have an integral domain $R$ containing an element $a \in R$ with the following property: $$(\forall r \in R)\, a \text{ is an associate of } r.$$ Is it true that the ring must be either the zero ring or the ring $\{0,1\}$? "Associates" are defined as follows: For $a,b \in R$, $a$ and $b$ are associates if $a \vert b$ and $b \vert a$. I used the equivalence relation: $$a \thicksim b \stackrel{def}\iff \text{$a$ and $b$ are associates}.$$ This leaves us with $( R\big/\!\sim) = \{ 0 \}$, but that's not the same as saying $R$ is the zero ring... is it?
$R$ must be the zero ring. Here's a proof. By definition, if $a$ and $b$ are associates then each is a unit multiple of the other. In particular for this special $a$ we have that $b$ can be anything. Take $b=0$. Then $a=cb=0$ for some unit $c$, hence $a=0$. This uses the fact that $a$ is a multiple of $b$. To finish the proof that $R$ is the zero ring, note that no matter what $b$ we choose, it must be a multiple of $a$. But $a=0$, hence $b=0$, so $0$ is the only element of $R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that in every sequence of 79 consecutive positive numbers written in decimal system there is a number whose sum of the digits is divisible by 13 Prove that in every sequence of $79$ consecutive positive numbers written in decimal notation there is a number the sum of whose digits is divisible by $13$. I tried to take one by one sets of $79$ consecutive positive numbers. Then I tried to solve with sets,relation,function. But I am not getting any idea how to start solving the question.
For $x\in{\mathbb N}$ denote by $r(x)$ the remainder modulo $13$ of the decimal representation of $x$. If $x$ is not divisible by $10$ then $r(x)=r(x-1)+1$. If $x$ is divisible by $10$, but not by $100$, then $$r(x)=r(x-1)-9+1=r(x-1)+5\ .\tag{1}$$ If $x$ is divisible by $100$, things are more complicated; see below. Consider a run $R:=[a\ ..\ a+78]$ of $79$ consecutive natural numbers, and assume that none of these numbers is divisible by $100$. There is a smallest number $c\leq a+9$ in this run which is divisible by $10$. Assume that $r(c)=1$. Then $(1)$ implies that the $r$-values in the interval $[c\ ..\ c+39]\subset R$ are given by $$[1\ ..\ 10],\quad[2\ ..\ 11],\quad [3\ ..\ 12],\quad[4\ ..\ 13]\tag{2}$$ and cover all remainders modulo $13$. It follows that $R$ covers all remainders modulo $13$, even if $r(c)\ne1$. Note that we would see $r(x)=0$ for an $x<c+39$ if we had not insisted in $r(c)=1$. The run $R$ may contain at most one number $c$ divisible by $100$. Assume that $r(c-1)=12$ and $r(c)=1$. Then the $r$-values in the interval $[c\ ..\ c+39]$ are still given by $(2)$. The $r$-values in the intervall $[c-40\ ..\ c-1]$ are given by $$[0\ ..\ 9],\quad [1\ ..\ 10],\quad [2\ ..\ 11],\quad[3\ ..\ 12]\ ,$$ and cover all remainders modulo $13$. Note that we would see $r(x)=0$ for an $x>c-40$ if we had not insisted in $r(c-1)=12$. It follows that in the worst possible case we don't see the $r$-value $0$ for all $x$ in the interval $[c-39\ ..\ c+38]$ containing $78$ integers. In order to realize this worst possible case we need $c$ divisible by $100$, $r(c)=1$, and $r(c-1)=12$. Assume that $c-1$ has $k\geq2$ trailing nines in its decimal expansion. Then the stated conditions imply $$2=r(c)-r(c-1)=-9k+1\qquad({\rm mod}\>13)\ .$$ The smallest $k\geq2$ fulfilling this is $k=10$. The number $c-1=10^{10}-1=9999999999$ already has $r(c-1)=12$, and it is obvious that $r(c)=1$ in this case. To sum it all up: Any run of $79$ consecutive positive integers contains an $x$ with $r(x)=0$. The first run of $78$ consecutive integers containing no $x$ with $r(x)=0$ is given by $[10^{10}-39\ ..\ 10^{10}+38]$.
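A brute-force scan (an editor's addition) of a modest range; the proof shows the extremal gap of $78$ only appears near $10^{10}$, so this merely confirms that no window of $79$ fails below $10^6$:

```python
def r(x):
    # digit sum of x, reduced mod 13
    return sum(map(int, str(x))) % 13

gap = max_gap = 0
for x in range(1, 10**6):
    gap = 0 if r(x) == 0 else gap + 1
    max_gap = max(max_gap, gap)
print(max_gap)   # longest run with no digit sum divisible by 13; < 79
```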
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
Berkeley Problems in Mathematics 7.5.22 This is the problem: Let $A$ be a real symmetric $n \times n$ matrix with nonnegative entries. Prove that $A$ has an eigenvector with nonnegative entries. I looked at the answer key and don't quite understand it. In the expression containing the max, why should the maximum be the eigenvalue $\lambda_0$? I thought this may be because if $Ax$ is parallel to $x$, then the dot product between $Ax$ and $x$ is maximised; but is it not possible that $\langle Ax,x\rangle$ still attains a large value if $A$ transforms $x$ in a way that scales $x$ by so much that $Ax$ is large, even though they may not be parallel? Solution (as in answer key): Let $\lambda_0$ be the largest eigenvalue of $A$. We have $$\lambda_0 = \max{\{\langle Ax, x\rangle\mid x\in\mathbb{R}^n,\|x\| = 1\}}$$ and the maximum is attained precisely when $x$ is an eigenvector of $A$ with eigenvalue $\lambda_0$. Suppose $v$ is a unit vector for which the maximum is attained, and let $u$ be the vector whose coordinates are the absolute values of the coordinates of $v$. Since the entries of $A$ are nonnegative, we have $$\langle Au,u \rangle \ge \langle Av,v\rangle =\lambda_0$$ implying that $\langle Au,u\rangle = \lambda_0$, so that $u$ is an eigenvector of $A$ for the eigenvalue $\lambda_0$.
$A$ is real symmetric, so it has $n$ independent eigenvectors with real eigenvalues $\lambda_1\geq \cdots \geq\lambda_n$. Let $v_1,v_2,\cdots, v_n$ be the corresponding independent eigenvectors, and we can assume that each has length $1$ (i.e. they form an orthonormal basis). In the max expression, you considered $x$ with $\|x\|=1$. Let $$x=a_1v_1+a_2v_2+\cdots + a_nv_n.$$ Then $$Ax=\lambda_1a_1v_1+\cdots + \lambda_na_nv_n.$$ Therefore, with respect to the above orthonormal basis, noting that $\lambda_i\leq \lambda_1$, we get $$\langle Ax,x\rangle=\lambda_1|a_1|^2+\cdots + \lambda_n|a_n|^2 \leq \lambda_1 (|a_1|^2+\cdots + |a_n|^2)=\lambda_1 \langle x,x\rangle=\lambda_1\|x\|^2=\lambda_1.$$ This implies the maximum value of $\langle Ax,x\rangle$ over unit vectors $x$ is at most $\lambda_1$. The maximum is in fact attained if you take $$x=v_1= \mbox{ a unit eigenvector for the largest eigenvalue } \lambda_1. $$ You can check this with almost the same calculation as above.
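For intuition, here is a small numeric illustration (a sketch with a random matrix of my choosing, assuming `numpy`; for this seed the top eigenvalue is simple, so the checks pass):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((4, 4))
A = (M + M.T) / 2                 # real symmetric, nonnegative entries

w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
lam, v = w[-1], V[:, -1]          # largest eigenvalue and a unit eigenvector

# <Av, v> attains the largest eigenvalue on the unit eigenvector v
assert np.isclose(v @ A @ v, lam)

# the entrywise absolute value of v is a nonnegative eigenvector
u = np.abs(v)
assert np.allclose(A @ u, lam * u)
```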
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Definition: honest distance function What does honest distance function mean? In the context of metric spaces.
This is not a mathematical term but a figure of speech. One says that A is an honest B to assert that A satisfies the definition of B. Usually the reason to emphasize honest is that A was informally called "B" earlier in the text. This is associated with a somewhat conversational style of writing. Specifically for distance functions, this word emphasizes that all the axioms of a metric are satisfied. Three examples I made up: Given a set $E$, define its counting measure $\nu(E) $ as the number of elements of $E$, or $\infty$ if $E$ is infinite. Next, we're going to check that $\nu$ is an honest measure. and Given two nonempty compact sets $A,B$, define the Hausdorff distance $d_H(A,B)$ as the infimum of $\rho$ such that $A\subset B_\rho$ and $B\subset A_\rho$. To show that $d_H$ is an honest distance function, the only nontrivial property to check is the triangle inequality. and finally Given two nonempty compact sets $A,B$, consider the minimal distance $D(A,B) = \min_{a,b} d(a,b)$. Despite its name, $D$ is not an honest distance function: it fails the axioms of positivity and triangle inequality.
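To make the last claim concrete (a small example of my own choosing): on the real line take $A=\{0\}$, $B=\{3\}$, $C=\{1,2\}$. Then $D(A,C)=D(C,B)=1$ while $D(A,B)=3>2$, so the triangle inequality fails; and with $A=[0,1]$, $B=[1,2]$ we get $D(A,B)=0$ even though $A\neq B$, so positivity fails too.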
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Writing a sentence as a proportion I tried to write the following sentence as a proportion: 6 printers is to 24 computers as 2 printers is to 6 computers: ${6\,\,printers\over 2\,\,printers} = {24\,\,computers\over 6\,\,computers} $ But when I try to check if the statement is true, it is not. Therefore, it's not a proportion. So what is the right way to write this?
The ratio given is $6$ printers for $24$ computers, i.e. $$\frac{6}{24}=\frac14,$$ so $1$ printer for every $4$ computers. At that rate, $2$ printers go with $\color{blue}8$ computers, not $6$: $$\frac{24}{6}=\frac{8}{2},\qquad\frac{24}{6}\color{red}\neq\frac{6}{2}.$$ With $6$ computers you would need only $1.5$ printers, so the sentence as stated is not a true proportion.
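A quick way to test any claimed proportion is cross-multiplication: $\frac ab=\frac cd$ exactly when $ad=bc$. Here $6\cdot 6=36\neq 48=24\cdot 2$, confirming that $\frac{6}{24}\neq\frac{2}{6}$.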
{ "language": "en", "url": "https://math.stackexchange.com/questions/1523936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the dimension of a kernel with the basis {[0,0,0]} equal to zero What is the dimension of a kernel with the basis {[0,0,0]}? I'm confused because the definition of the dimension is number of vectors in a basis. So there is 1 vector here which is [0,0,0]. Why does my professor say that the dimension of kernel is zero? He mentioned something about the zero vector space.
A basis is defined as a set of linearly independent vectors that span a space. The kernel in your problem clearly contains only the zero vector. However, the zero vector $v$ is linearly dependent, not independent: there exists a scalar $C \neq 0$ such that $Cv = 0$. So $\{[0,0,0]\}$ cannot be a basis. The only basis of the zero vector space is the empty set, which contains no vectors at all, and that is why the dimension of the kernel is $0$.
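If you want to see this with a computer algebra system, here is a minimal sketch (assuming `sympy` is available): the kernel of an invertible matrix is the zero subspace, and the computed basis is empty.

```python
from sympy import Matrix

A = Matrix([[1, 0, 0],
            [0, 2, 0],
            [0, 0, 3]])

basis = A.nullspace()   # basis of the kernel
print(basis)            # [] -- the empty set of vectors
print(len(basis))       # 0  -- so the kernel has dimension 0
```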
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that if each row of a matrix sums to zero, then it has no inverse. Could anyone help me with this proof without using determinant? I tried two ways. Let $A$ be a matrix. If $A$ has the property that each row sums to zero, then there does not exist any matrix $X$ such that $AX=I$, where $I$ denotes the identity matrix. I then get stuck. The other way was to prove by contradiction, and I failed too.
Hint: You can sum the elements of a row by multiplying that row with a vector of $1$'s. Can you now find a nonzero matrix $X$ (with appropriate columns) such that $AX=O$? If such an $X$ exists, can $A$ have an inverse?
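A quick numeric illustration of the hint (a sketch with a matrix I made up; not part of the hint itself, assuming `numpy`):

```python
import numpy as np

A = np.array([[ 1., -2.,  1.],
              [ 3.,  0., -3.],
              [-1., -1.,  2.]])           # every row sums to zero

ones = np.ones(3)
print(A @ ones)                           # [0. 0. 0.] -- A kills a nonzero vector
print(np.isclose(np.linalg.det(A), 0))    # True: A is singular
```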
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 1 }
Why are the coefficients of the equation of a plane the normal vector of a plane? Why are the coefficients of the equation of a plane the normal vector of the plane? I borrowed the below picture from Paul's Online Calculus 3 notes: http://tutorial.math.lamar.edu/Classes/CalcIII/EqnsOfPlanes.aspx And I think the explanation he provides is great; however, I don't understand how one of the concepts works. If the equation of a plane is $ax+by+cz=d$, how is it that $\overrightarrow n = \langle a,b,c \rangle$? From the picture below I suppose I can see this, since $\overrightarrow r_0$, if continued past the plane, would clearly be perpendicular, but what about $\overrightarrow r$? That one is clearly not perpendicular if extended past the plane? Sorry if what I'm asking is confusing.
Given a point $O=(x_0,y_0,z_0)$ and a vector $\vec n=\langle a,b,c\rangle$, we can describe a plane as the set of points $P=(x,y,z)$ such that $\vec{OP}\cdot \vec n=0$; in other words, the set of points $P$ for which the vector $\vec{OP}$ is perpendicular to $\vec n$. $$a(x-x_0)+b(y-y_0)+c(z-z_0)=0\implies ax+by+cz=ax_0+by_0+cz_0.$$ Letting $d=ax_0+by_0+cz_0$ gives the usual equation of the plane.
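For a concrete check (with numbers I'm choosing here): the plane $x+y+z=1$ has coefficient vector $\vec n=\langle1,1,1\rangle$; the points $P=(1,0,0)$ and $Q=(0,1,0)$ both lie on it, and indeed $$\vec{PQ}\cdot\vec n=\langle-1,1,0\rangle\cdot\langle1,1,1\rangle=-1+1+0=0.$$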
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
How can I solve limit with absolute value $\mathop {\lim }\limits_{x \to -\infty } {{x} \over {9}}|{\sin}{{6} \over {x}}|$ I tried to evaluate this limit by L'Hôpital's rule, but I don't know how the absolute value affects the limit. Can someone give me some advice, please? $$\mathop {\lim }\limits_{x \to -\infty } {{x} \over {9}}|{\sin}{{6} \over {x}}|$$
L'Hôpital is not a magical thing to be used all the time. Thinking is advantageous. If you want to consider $x\to-\infty$, you will only be dealing with $x<0$. So the numbers $6/x$ are going to be negative and close to zero. In that zone, the sine is negative, so $$ \left|\sin\frac6x\right|=-\sin\frac6x $$
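From here the computation can be finished without L'Hôpital at all, using the standard fact $\lim_{t\to0}\frac{\sin t}{t}=1$: $$\lim_{x\to-\infty}\frac x9\left|\sin\frac6x\right|=\lim_{x\to-\infty}\left(-\frac x9\sin\frac6x\right)=-\frac69\lim_{x\to-\infty}\frac{\sin(6/x)}{6/x}=-\frac23.$$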
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Property of a morphism inherited by the fibers Let $T$ be a scheme over an algebraically closed field $k$ and let $f:X\to Y$ be a $k$-morphism over $T$. For which properties P is it true that if $f_t:X_t\to Y_t$ has P for all geometric points $t\in T(k)$, then $f$ has P? Is it true for open immersions, for instance? What about properness, flatness...? (Feel free to add some assumptions, I really have no idea whether I am asking something very strong or not!)
Suppose $X,Y$ are flat and of finite presentation over $T$. Then, let $P$ be a property in $\{$flat, smooth, étale, open immersion, isomorphism, flat and a relative complete intersection morphism$\}$; then $f$ has property $P$ if and only if each $f_t$ has $P$ for all geometric points $t$ in $T$ (here $T$ doesn't have to be a $k$-scheme). This is stated in Deligne/Rapoport on page "DeRa-28" (page number 170), section I, (7.4), and they give references to EGA IV 11.3.10 and 17.9.1. A very special case, $P={}$smooth for curves over Dedekind domains, is given in Silverman's Advanced Topics in the Arithmetic of Elliptic Curves, in the chapter on Néron models, section 2 (or maybe 3, or 4?)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In a group of 4 people, is it possible for each person to have exactly 3 friends? In a group of 4 people, is it possible for each person to have exactly 3 friends? Why? My solution: Let G be a graph with 4 vertices, one vertex representing each person in the group. Join two vertices u and v by an edge if and only if u and v are friends. Then the degree of each vertex equals the number of friends that the corresponding person has. If each person has exactly 3 friends, then each vertex has degree 3. Therefore, the total degree would be 3 · 4 = 12. This is an even number: $12\equiv 0\pmod{2}$. So it is possible. Is this correct?
Easy solution. They're all friends with each other but not themselves. That's all, no graphs.
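(In graph terms, this configuration is the complete graph $K_4$: each of the $4$ vertices is joined to the other $3$, so every vertex has degree exactly $3$.)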
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
inverse derivative shortcut and chain rule I know that when you have the equation y=ln(x), and you need to find the derivative, you can use the shortcut y'= 1/x. My question is why, when using the shortcut, do you have to multiply by the derivative of x? I'm aware it has something to do with the chain rule but I don't understand why. Here's an example of my question: If y=ln(5x), why doesn't y'=1/5x ?
If $y = \ln x$ then $\dfrac{dy}{dx} = \dfrac 1 x$. That's not a "shortcut"; that is the derivative of this function. Recall that $\ln(5x) = \ln 5 + \ln x$, so $\dfrac d {dx}\ln(5x) = \dfrac d{dx} \ln 5 + \dfrac d {dx} \ln x$, and $\dfrac d {dx}\ln 5 = 0$ since $\ln 5$ is a constant, i.e. $\ln 5$ does not change as $x$ changes. The chain rule can be stated in the form $\dfrac{dy}{dx} = \dfrac{dy}{du}\,\dfrac{du}{dx}$. For example, if we have $y=(x^3+8x+5)^{36}$, we can write \begin{align} y = u^{36}, & & u = x^3 + 8x+5 \\[10pt] \frac{dy}{du} = 36u^{35}, & & \frac{du}{dx} = 3x^2 + 8, \end{align} so $$ \frac{dy}{dx} = 36u^{35}\cdot (3x^2+8) = 36(x^3+8x+5)^{35}(3x^2+8). $$ Now apply this in another case: $y=\ln(x^3+8x+5)$. We have \begin{align} y = \ln u, & & u = x^3+8x+5 \\[10pt] \frac{dy}{du} = \frac 1 u, & & \frac{du}{dx} = 3x^2+8 \end{align} so $$ \frac{dy}{dx} = \frac 1 u \cdot (3x^2+8) = \frac 1{x^3+8x+5} \cdot (3x^2+8). $$
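If you want to double-check such derivatives symbolically, here is a minimal sketch (assuming `sympy` is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.diff(sp.log(5*x), x))              # 1/x, not 1/(5*x)
print(sp.diff(sp.log(x**3 + 8*x + 5), x))   # (3*x**2 + 8)/(x**3 + 8*x + 5)
```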
{ "language": "en", "url": "https://math.stackexchange.com/questions/1524845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $e^{At}B = B e^{At}$ if $AB= BA$ Let $A$ and $B$ be two matrices. How can I show that $e^{At}B = B e^{At}$ if $AB= BA$ and then conclude that $(d/dt) e^{At} e^{Bt} = (A+B) e^{At} e^{Bt}$? I have no idea how to do this.
By definition $$e^{At} = \sum_{n=0}^\infty \frac 1{n!} t^n A^n$$ and $AB=BA$ implies $BA^n = A^nB$ for every $n$ (by induction), thus term by term $$B e^{At} = \sum_{n=0}^\infty \frac 1{n!} t^n BA^n = \sum_{n=0}^\infty \frac 1{n!} t^n A^nB = e^{At} B$$ Now apply this lemma for the second part.
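Spelling out how the lemma is used in the second part: by the product rule, $$\frac{d}{dt}\left(e^{At}e^{Bt}\right)=Ae^{At}e^{Bt}+e^{At}Be^{Bt}=Ae^{At}e^{Bt}+Be^{At}e^{Bt}=(A+B)e^{At}e^{Bt},$$ where the middle step uses $e^{At}B=Be^{At}$.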
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can someone help me understand the Euclidean metric? A Euclidean metric is defined as: $g_{euclid} = g_{ij}\, dx^i \otimes dx^j$ with $g_{ij}=\delta_{ij}$, i.e. $g_{euclid} = dx^1dx^1 + \ldots + dx^ndx^n$. Can someone explain the following:

1. Why do we use $dx^i$ instead of $x^i$, which is a coordinate in $R^n$?
2. Why does the tensor product $\otimes$ get turned into multiplication?
3. This does not look like a tensor at all, but rather just a sum of products...

Finally, a metric is just an inner product. This does not look anything like an inner product! An inner product on Euclidean space is defined as $\langle \cdot , \cdot\rangle$; how is that metric thingy related to my inner product?
1. Consider the properties of the $dx^i$ compared to the $x^i$. As linear functionals, the basis one-forms $dx^i$ are...well, linear on their arguments. Inner products are bilinear, but we are using two one-forms in each linearly independent term.

2. Notational convention. My opinion? It's very misleading. I would almost always keep the tensor product explicit there.

3. It's a sum of tensor products, and it defines a symmetric, positive definite bilinear form. Can you see that for any vectors $a,b$, the quantity $g(a,b)$ obeys all the properties of an inner product of $a$ and $b$?
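To tie this to the familiar inner product (a short worked evaluation): feeding two tangent vectors $a=a^i\partial_i$ and $b=b^j\partial_j$ into the metric gives $$g(a,b)=\delta_{ij}\,dx^i\otimes dx^j(a,b)=\delta_{ij}\,dx^i(a)\,dx^j(b)=\delta_{ij}a^ib^j=\sum_i a^ib^i=\langle a,b\rangle,$$ which is exactly the Euclidean dot product.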
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I prove every positive real number has a square root? Does it mean that we can express any positive real number as the square root of something, like $4$ being equal to the square root of $16$?
Hint: Consider the set $\{x\in \Bbb{R}_{\geq 0} \colon x^2<y\}$. This set is nonempty and bounded above, so by the completeness axiom it has a least upper bound, call it $x_0$. If $x_0^2<y$, then by density of the rationals there is some rational number $q$ in $(x_0^2,y)$. Does this make sense? If $x_0^2>y$, what happens? So there's only one choice left.
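One way the cases can be closed out (a sketch, with a particular $\epsilon$ of my choosing): the set is bounded above by $\max(1,y)$, so $x_0$ exists. If $x_0^2<y$, put $\epsilon=\min\left(1,\frac{y-x_0^2}{2x_0+1}\right)>0$; then $$(x_0+\epsilon)^2=x_0^2+\epsilon(2x_0+\epsilon)\le x_0^2+\epsilon(2x_0+1)<y,$$ contradicting that $x_0$ is an upper bound of the set. A symmetric estimate with $x_0-\epsilon$ rules out $x_0^2>y$, leaving $x_0^2=y$.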
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 0 }
What is the method for finding $\int\frac {x^3 + 5x^2 +2x -4}{(x^4-1)}\mathrm{d}x$? $$ \int\frac {x^3 + 5x^2 +2x -4}{(x^4-1)}dx $$ I'm a bit confused about how to integrate this. I thought it called for partial fractions, but I was unsure what to do after that.
Partial Solution Well, assuming you carry out the partial fraction decomposition (as you suspected you should), you get the following integral $$\int\left(\frac{9-x}{2(x^2+1)}+\frac{1}{x-1}+\frac{1}{2 (x+1)}\right) dx$$ $$=\frac{9}{2}\int\frac {dx}{x^2+1} - \frac{1}{2}\int \frac {x}{x^2+1}dx + \int \frac{dx}{x-1}+ \frac{1}{2}\int \frac{dx}{x+1}$$ A.) $\quad \int\frac {dx}{x^2+1} = \arctan x$ B.) $\quad\int \frac {x}{x^2+1}dx = \frac{1}{2}\int \frac {du}{u+1} = \frac{1}{2}\log (u+1) = \frac{1}{2}\log (x^2+1)$, substituting $u=x^2$ C.)$\quad\int \frac{dx}{x-1} = \log|x-1|$ D.)$\quad\int \frac{dx}{x+1} = \log|x+1|$
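Assembling A through D gives the combined antiderivative (worth differentiating back to verify): $$\int\frac{x^3+5x^2+2x-4}{x^4-1}\,dx=\frac92\arctan x-\frac14\log(x^2+1)+\log|x-1|+\frac12\log|x+1|+C.$$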
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the equation of the locus of R if the chord $PQ$ passes through $(0,a)$ The parabola is $x^2=4ay$ Information given: Points P$(2ap, ap^2)$, Q$(2aq,aq^2)$, and R $(2ar, ar^2)$ lie on the parabola $x^2=4ay$. The equation of the tangent at P is $y=px-ap^2$ The equation of the tangent at Q is $y=qx-aq^2$ The equation of the normal at P is $x+py=2ap+ap^3$ The equation of the normal at Q is $x+qy=2aq+aq^3$ Normals of P and Q intersect at point R whose coordinates are $(-apq[p+q], a[p^2+pq+q^2+2])$ Find the equation of the locus of R if the chord PQ passes through $(0,a)$
The line passing through $P$ and $Q$ is $$y-aq^2=\frac{aq^2-ap^2}{2aq-2ap}(x-2aq),$$ i.e. $$y=\frac{p+q}{2}x-apq.$$ If $PQ$ passes through $(0,a)$, then we have $$a=\frac{p+q}{2}\cdot 0-apq\quad\Rightarrow \quad pq=-1.$$ Here note that $$R_x=-apq(p+q)=a(p+q)\quad\Rightarrow \quad p+q=\frac{R_x}{a}$$ and that $$R_y=a(p^2+pq+q^2+2)=a((p+q)^2-pq+2)=a((p+q)^2+3)$$
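Eliminating the parameter (the concluding step): since $p+q=\dfrac{R_x}{a}$, $$R_y=a\left(\left(\frac{R_x}{a}\right)^2+3\right)=\frac{R_x^2}{a}+3a,$$ so the locus of $R$ is the parabola $x^2=a(y-3a)$.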
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $\lim_{x \to \infty} (f(x)-g(x)) = 0$ then $\lim_{x \to \infty} (f^2(x)-g^2(x)) = 0\ $: False? If $\lim_{x \to \infty} (f(x)-g(x)) = 0$, then $\lim_{x \to \infty} (f^2(x)-g^2(x)) = 0$ My teacher said that the above is false, but I can't find any example that shows that it's false! Can someone explain to me why it's false and also give an example?
Take $f(x)=x$ and $g(x)=x+\dfrac{1}{x}$
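Checking that this pair works: $$f(x)-g(x)=-\frac1x\longrightarrow0,\qquad f^2(x)-g^2(x)=x^2-\left(x+\frac1x\right)^2=-2-\frac1{x^2}\longrightarrow-2\neq0.$$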
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Derivative of $e^{e^{e^x}}$ $$\Large e^{e^{e^x}}$$ The derivative of this function needs to be found, but I am not seeing a helpful substitution. Any help/hint would do. Thanks!
Step 1: Think of the function as $e^{f(x)}$, where $f(x)=e^{e^x}$. Step 2: Apply the chain rule. At some point you will need $f'(x)$, which will need the chain rule again.
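Carrying the two applications of the chain rule through (for checking your work): $$\frac{d}{dx}e^{e^{e^x}}=e^{e^{e^x}}\cdot\frac{d}{dx}e^{e^x}=e^{e^{e^x}}\cdot e^{e^x}\cdot e^x.$$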
{ "language": "en", "url": "https://math.stackexchange.com/questions/1525957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }