Integral $\int_{0}^{\infty}e^{-ax}\cos (bx)\operatorname d\!x$ I want to evaluate the following integral via complex analysis: $$\int\limits_{x=0}^{x=\infty}e^{-ax}\cos (bx)\operatorname d\!x \ \ ,\ \ a >0.$$ Which function/contour should I consider?
Let us integrate the function $e^{-Az}$, where $A=\sqrt{a^2+b^2}$, over a circular sector in the first quadrant, centered at the origin and of radius $\mathcal{R}$, with angle $\omega$ satisfying $\cos \omega = a/A$, and therefore $\sin \omega = b/A$. Let this sector be called $\gamma$. Since our integrand is holomorphic on the whole plane, we get: $$ \oint_\gamma \mathrm{d}z\, e^{-Az} = 0. $$ Breaking it into its three pieces we obtain: $$ \int_0^\mathcal{R}\mathrm{d}x\, e^{-Ax}+\int_0^\omega \mathrm{d}\varphi\, i\mathcal{R}e^{i\varphi}e^{-A\mathcal{R}e^{i\varphi}}+\int_{\mathcal{R}}^0 \mathrm{d}r\, e^{i\omega}e^{-Are^{i\omega}}=0. $$ The middle integral is negligible as $\mathcal{R}\to\infty$: its modulus is at most $\omega\mathcal{R}e^{-a\mathcal{R}}\to 0$, since $\cos\varphi\ge\cos\omega=a/A$ for $0\le\varphi\le\omega$. So: $$ \int_0^\infty\mathrm{d}x\,e^{-Ax}=\int_0^\infty\mathrm{d}r\, (\cos\omega+i\sin\omega)e^{-Ar(\cos\omega+i\sin\omega)} $$ $$ \frac{1}{A}=\frac{1}{A}\int_0^\infty\mathrm{d}r\,(a+ib)e^{-r(a+ib)} $$ $$ \int_0^\infty\mathrm{d}r\,(a+ib)e^{-ar} (\cos br - i\sin br) = 1 $$ Now let us call $I_c = \int_0^\infty\mathrm{d}r\,e^{-ar}\cos br$ and $I_s = \int_0^\infty\mathrm{d}r\,e^{-ar}\sin br$; then: $$ aI_c-iaI_s+ibI_c+bI_s=1 $$ and, separating real and imaginary parts and solving: $$ aI_c+bI_s=1;\ \ \ \ -aI_s+bI_c=0 $$ $$ I_c=\frac{a}{a^2+b^2}; \ \ \ \ I_s=\frac{b}{a^2+b^2}. $$ This method relies only on contour integration, as you asked!
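As a sanity check on the two closed forms derived above, here is a quick numerical sketch (the quadrature step count and the truncation point are arbitrary choices, not part of the argument):

```python
import math

def damped_trig_integrals(a, b, n=200000, upper=50.0):
    # Crude trapezoidal approximation of the two improper integrals,
    # truncated at `upper` (the integrand decays like e^{-a x}).
    h = upper / n
    ic = is_ = 0.0
    for k in range(n + 1):
        x = k * h
        w = 0.5 if k in (0, n) else 1.0
        ic += w * math.exp(-a * x) * math.cos(b * x)
        is_ += w * math.exp(-a * x) * math.sin(b * x)
    return ic * h, is_ * h

a, b = 1.3, 2.7
ic, is_ = damped_trig_integrals(a, b)
print(ic, a / (a**2 + b**2))   # the two should agree to several decimals
print(is_, b / (a**2 + b**2))
```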
{ "language": "en", "url": "https://math.stackexchange.com/questions/663131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$T: \mathbb R^n \to \mathbb R^n$, $\langle Tu,v\rangle=\langle u,T^*v\rangle$, is $T^*=T^t$ regardless of inner product? Basic question in linear algebra here. $T$ is a linear transform from $\mathbb R^n$ to $\mathbb R^n$ defined by $T(v)=Av$, $A\in \mathrm{Mat}_n(\mathbb R)$. We are given some inner product $\langle ,\rangle$ of $\mathbb R^n$. Does not have to be the standard one, just some random inner product. let $T^*$ be a linear transform from $\mathbb R^n$ to $\mathbb R^n$ such that for all $u,v \in \mathbb R^n$: $\langle T(u),v\rangle=\langle u,T^*(v)\rangle$ I know that if $\langle ,\rangle$ is the standard inner product of $\mathbb R^n$, then $T^*(v)=A^tv$. My question is, does this hold for all inner products? If it isn't, given some inner product and the transform $T$, how can I find $T^*$? And if it is true, then why?
No, it does not hold for an arbitrary inner product. It is not hard to show that any other inner product $\langle\cdot,\cdot\rangle_\star$ can be represented as $$ \langle x,y\rangle_\star=\langle x,Sy\rangle, $$ where $S$ is a symmetric positive definite matrix and $\langle\cdot,\cdot\rangle$ is the standard inner product. Writing $T^*=A^t$ for the standard adjoint, in general $$ \langle Tx,y\rangle_\star=\langle x,T^*Sy\rangle \ne \langle x,ST^*y\rangle=\langle x,T^*y\rangle_\star, $$ unless $ST^*=T^*S$. The adjoint with respect to $\langle\cdot,\cdot\rangle_\star$ is instead $S^{-1}T^*S=S^{-1}A^tS$.
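A small numerical illustration of this (the specific matrices $S$ and $A$ below are just hypothetical $2\times 2$ examples): with $\langle x,y\rangle_\star = x^t S y$, the map $S^{-1}A^tS$ behaves as the adjoint, while the plain transpose fails.

```python
def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def mat_mul(P, Q):
    return [[sum(P[i][k]*Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def ip(S, x, y):  # <x, y>_S = x^T S y
    Sy = mat_vec(S, y)
    return x[0]*Sy[0] + x[1]*Sy[1]

S  = [[2.0, 1.0], [1.0, 3.0]]   # symmetric positive definite (det = 5 > 0)
A  = [[0.0, 1.0], [0.0, 0.0]]
At = [[0.0, 0.0], [1.0, 0.0]]   # the transpose of A

# adjoint w.r.t. <.,.>_S is S^{-1} A^t S; S^{-1} computed by the 2x2 formula
detS = S[0][0]*S[1][1] - S[0][1]*S[1][0]
Sinv = [[ S[1][1]/detS, -S[0][1]/detS],
        [-S[1][0]/detS,  S[0][0]/detS]]
Astar = mat_mul(Sinv, mat_mul(At, S))

x, y = [1.0, 2.0], [3.0, -1.0]
print(ip(S, mat_vec(A, x), y))       # <Ax, y>_S
print(ip(S, x, mat_vec(Astar, y)))   # <x, A* y>_S  -- matches
print(ip(S, x, mat_vec(At, y)))      # <x, A^t y>_S -- differs
```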
{ "language": "en", "url": "https://math.stackexchange.com/questions/663218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Linear Algebra - check my solution, product of symmetric positive-definite matrices We are given $A,B \in Mat_n(\mathbb R)$ are symmetric positive-definite matrices such that $AB=BA$ Show that $AB$ is positive-definite What I did: First I showed that $AB$ is symmetric, this is easily shown from $(AB)^t=B^tA^t=BA=AB$ Now I'm trying to think, why are the eigenvalues of $AB$ all positive? Just because the eigenvalues of $A$ and $B$ are positive does not imply that $AB$'s eigenvalues are. However, from sylvester's criterion, we know that all the leading principal minors of $A$ and $B$ have a positive determinant, and since the determinant of the product is the product of the determinants, we can infer that every leading principal minor of $AB$ is positive. Proof: $det(AB_{ii})=det(A_{ii})det(B_{ii}) \geq 0$ since $\forall i, det(A_{ii}),det(B_{ii}) \geq 0$ So $AB$ is a symmetric matrix where every leading principal minor has a positive determinant, and so $AB$ is indeed positive-definite. This is kind of a roundabout way of solving it, is there a way of actually showing the eigenvalues of $AB$ are positive? Did my solution even make sense?
$AB=BA$ is an interesting condition. Let $v$ be an eigenvector of $A$ with $Av=\lambda v$. Then $A(Bv)=B(Av)=B(\lambda v)=\lambda(Bv)$. This shows that the eigenspace $V_{\lambda}$ of $A$ is an invariant subspace of $B$. Since $B$ restricted to $V_{\lambda}$ is still symmetric and positive-definite, each $V_{\lambda}$ has a basis of eigenvectors of $B$ with positive eigenvalues $\mu$. Assembling these bases over all eigenspaces of $A$ gives a basis of common eigenvectors, on which $AB$ acts with eigenvalues $\lambda\mu>0$, which is what you need.
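A concrete sanity check of the claim (the particular commuting pair below is just an illustrative choice; both matrices are polynomials in $\begin{pmatrix}0&1\\1&0\end{pmatrix}$, hence they commute):

```python
import random

A = [[2.0, 1.0], [1.0, 2.0]]   # SPD, eigenvalues 3 and 1
B = [[3.0, 1.0], [1.0, 3.0]]   # SPD, eigenvalues 4 and 2; commutes with A

def mul(P, Q):
    return [[sum(P[i][k]*Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

AB, BA = mul(A, B), mul(B, A)
assert AB == BA and AB[0][1] == AB[1][0]   # commuting, and AB is symmetric

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    q = (x[0]*(AB[0][0]*x[0] + AB[0][1]*x[1])
         + x[1]*(AB[1][0]*x[0] + AB[1][1]*x[1]))
    assert q >= 0                           # x^T (AB) x > 0 for x != 0
print("AB =", AB, "is positive definite on all sampled vectors")
```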
{ "language": "en", "url": "https://math.stackexchange.com/questions/663392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Sufficient Statistic Basics If I know the value of a sufficient statistic, but not the sample that generated it, am I right to suspect that the conditional distribution of any other statistic given the sufficient statistic will not depend on the parameter of interest? Formally speaking: Let $\theta$ be the parameter of interest. $T(x)$ is the known sufficient statistic. Now, for any other statistic $\tilde{T}(x)$, we (would; conjecturing) have: $$ f_{\tilde{T}\mid T}(\tilde{t}\mathbb\mid\theta,t)=f_{\tilde{T}\mid T}(\tilde{t}\mid t) $$ Thanks in advance. EDIT: just to add to my line of thought. I am thinking of the new statistic as equivalent to the sample points, since they differ just by a function. So if the if I have a sufficient statistic for the distribution, it will automatically be sufficient to any other statistic.
That is correct, PROVIDED that the statistical model is right. But the sufficient statistic is not where you will find evidence that the model doesn't fit. For example, in estimating the mean and variance of a normally distributed population, the sufficient statistic is the pair whose components are the sum of the observations and the sum of their squares, whereas evidence of non-normality will be found in the residuals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/663517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that a group where $a^2=e$ for all $a$ is commutative Defining a group $(G,*)$ where $a^2=e$, with $e$ denoting the identity element, I am to prove that this group is commutative. To begin doing that, I want to understand what exactly the power of $2$ means in this context. Is it the group operation applied to an element with itself, like a power?
The trick with these types of problems is to evaluate the 'product' of group elements in two different ways. So for this problem, we interpret $(ab)^2$ two different ways, where $a,b \in G$. First, we have this rule in $G$ that an element 'squared' is the identity. So we know that $$ (ab)^2=e $$ But $$ (ab)^2=abab $$ Also note that $$ e=e\cdot e=a^2b^2 $$ So we must have $$ a^2b^2=abab $$ But then that gives us $$ \begin{align} a^{-1}a^2b^{2}b^{-1}&=a^{-1}ababb^{-1}\\ ab&=ba \end{align} $$ since $a,b \in G$ were arbitrary, $G$ is commutative. Later we use the same trick for rings by evaluating $(a+b)^2$ two different ways.
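As a quick illustration of the statement being proved, here is a toy check in one concrete group where every element squares to the identity: bit strings under XOR (here, the integers $0,\dots,7$ with `^`).

```python
# Bit strings under XOR form a group with identity 0 in which a*a = e always.
G = range(8)
e = 0
assert all(a ^ a == e for a in G)                  # a^2 = e for every element
assert all(a ^ b == b ^ a for a in G for b in G)   # hence abelian, as proved
# the key step of the proof, (ab)^2 = e = a^2 b^2, holds as well:
assert all((a ^ b) ^ (a ^ b) == (a ^ a) ^ (b ^ b) for a in G for b in G)
print("XOR group of order 8: every element is an involution and the group is abelian")
```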
{ "language": "en", "url": "https://math.stackexchange.com/questions/663588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Combinatorics - possibly pigeon hole, 100 by 100 matrix with numbers from 1 to 100 We are given a $100$ by $100$ matrix. Each number from $\{1,2,...,100\}$ appears in the matrix exactly a $100$ times. Show there is a column or a row with at least $10$ different numbers. I'd like a small tip how to tackle this problem. I tried of thinking what are the pigeons and what are the holes. obviously there are 200 holes (100 rows and 100 columns). what are the pigeons?
Managed to solve it. First, note that if a number occupies $r$ rows and $c$ columns, its $100$ copies all lie in the intersections of those rows and columns, so $rc \ge 100$ and hence $r+c \ge 2\sqrt{rc} \ge 20$ by AM-GM. The most extreme and compact case is a number filling a $10$ by $10$ block: that's $20$ holes. But we have $100$ different numbers, so there are at least $100 \cdot 20 = 2000$ (number, line) incidences. Since $\frac{2000}{200}=10$, some hole (row or column) contains at least $10$ different numbers.
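To see that the bound $10$ is tight, the "most extreme and compact case" can be built explicitly (a sketch; the block tiling below is just one valid construction):

```python
# Tile the 100x100 grid with 10x10 blocks; block (p, q) is filled with the
# number 10*p + q + 1, so each number appears exactly 100 times and occupies
# 10 rows and 10 columns -- the minimum of r + c subject to r*c >= 100.
N, B = 100, 10
M = [[(i // B) * B + (j // B) + 1 for j in range(N)] for i in range(N)]

row_counts = [len(set(row)) for row in M]
col_counts = [len({M[i][j] for i in range(N)}) for j in range(N)]
print(min(row_counts), max(row_counts))   # 10 10
print(min(col_counts), max(col_counts))   # 10 10
```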
{ "language": "en", "url": "https://math.stackexchange.com/questions/663677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
find $P\{P\{0\}\}$. $P$ represents the power set. I'm assuming that I'm trying to find the power set of a power set? I start from the inner power set, $P\{0\}$. $P\{0\}= \{ 0, \{0\} \}$. Now I do $P\{ 0, \{0\} \}$ which is $\{ 0, \{0\}, \{\{0\}\} \}$. 0 is the empty set. Is this correct? So I'm taking it that P{0}={0, {0}, {0, {0}} }
Almost. You forgot one subset of $\{0,\{0\}\}.$ (Hint: It isn't proper.)
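A short computational check of the hint (here `frozenset()` plays the role of $0=\emptyset$, and the missing subset is the improper one):

```python
from itertools import chain, combinations

def power_set(s):
    # all subsets of s, each returned as a frozenset
    items = list(s)
    subs = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))
    return frozenset(frozenset(t) for t in subs)

zero = frozenset()                       # "0" here denotes the empty set
P_zero = power_set(frozenset({zero}))    # P({0}) = {0, {0}} -- 2 elements
P_P_zero = power_set(P_zero)             # its power set has 2^2 = 4 elements, not 3
print(len(P_zero), len(P_P_zero))        # 2 4
print(P_zero in P_P_zero)                # True: the improper subset {0, {0}} is there
```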
{ "language": "en", "url": "https://math.stackexchange.com/questions/663744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Continuity of $f(x)=x^p$ when $p$ is a real number and $x\in (0,\infty)$ Here is my final answer. Definition: Let $x>0$ be a real number, and let $\alpha$ be a real number. We define the quantity $x^{\alpha}$ by the formula $\lim_{n\rightarrow\infty} x^{q_n}$, where $(q_n)$ is a sequence of rationals which converges to $\alpha$. (I've shown that this definition is well-defined.) Proposition: Let $p$ be a real number. Then the function $f: (0,\infty)\rightarrow \mathbb{R}$ defined by $f(x)=x^p$ is continuous. Proof: Let $x_0\in (0,\infty)$; we have to show $\lim_{x\rightarrow x_0, x\in(0,\infty)} f(x)=f(x_0)$. 1) Suppose $x_0=1$. Claim 1: For all natural numbers $n$, $\lim_{x\rightarrow 1, x\in\mathbb{R}} x^n=1$. For $n=0$, $x^n=1$, which is trivial. Suppose we have proven the assertion for $n\ge 0$. Then $x^{n+1}=x^nx$, and so $$\lim_{x\rightarrow 1, x\in\mathbb{R}} x^{n}x=\lim_{x\rightarrow 1, x\in\mathbb{R}} x^{n}\cdot\lim_{x\rightarrow 1, x\in\mathbb{R}} x=1$$ 2) Now we have to show that $\lim _{x\rightarrow 1; x\in (0, \infty)} x^p = 1$. Let $(x_n)_{n=0}^\infty$ be a sequence of positive real numbers which converges to $1$. We'd like to show that $(x_n^p)\rightarrow 1$. Let $\varepsilon>0$ be arbitrary, and choose some $m\in \mathbb{N}$ with $m> p$. Since $(1+1/k)_{k=1}^\infty$ and $(1-1/k)_{k=1}^\infty$ both converge to $1$, Claim 1 shows that $(1+1/k)^m$ and $(1-1/k)^m$ also converge to $1$. Let $K_\varepsilon$ be a natural number such that both sequences are $\varepsilon$-close to $1$ for all $k\ge K_\varepsilon$. Fix some $k\ge K_\varepsilon$ with $1-1/k > 0$. Since $(x_n)$ converges to $1$, there is some $N_{1/k}$ such that $|x_n-1|\le 1/k$ for all $n\ge N_{1/k}$, i.e., $1-1/k\le x_n\le 1+1/k$. So $(1-1/k)^p\le x_n^p\le (1+1/k)^p$. Also, $1+1/k>1$ and $p<m$, so $(1+1/k)^p<(1+1/k)^m$. Similarly, $1-1/k<1$, so $(1-1/k)^p>(1-1/k)^m$.
Putting all the inequalities together, we have $$(1-1/k)^m< x_n^p< (1+1/k)^m.$$ Since both $(1+1/k)^m$ and $(1-1/k)^m$ are $\varepsilon$-close to $1$, so is $x_n^p$. Thus $(x_n^p)$ converges to $1$, as desired. Since $(x_n)$ was an arbitrary sequence of positive real numbers converging to $1$, the result holds for every such sequence. Therefore $\lim _{x\rightarrow 1; x\in (0, \infty)} x^p = 1$. 3) Let $x_0 \in (0,\infty)\setminus\{1\}$ and let $(x_n)_{n=0}^\infty$ be a sequence of positive real numbers which converges to $x_0$. Using the limit laws we know that $x_n/x_0$ converges to $1$, and so by part 2) we have $(x_n/x_0)^p \rightarrow 1$. Thus \begin{align}\lim_{n\rightarrow \infty}x_n^p&=\lim_{n\rightarrow \infty} x_0^p (x_n/x_0)^p\\ &= x_0^p \lim_{n\rightarrow \infty} (x_n/x_0)^p, \end{align} and since $\lim_{n\rightarrow \infty} (x_n/x_0)^p=1$, we get $\lim_{n\rightarrow \infty}x_n^p=x_0^p$. Since $(x_n)$ was an arbitrary sequence of positive real numbers converging to $x_0$, this implies that $f$ is continuous on $(0,\infty)$, as desired. Thanks to everyone.
The book's hint is spot on: suppose you have proven that $x^p$ is continuous at $1$, and choose some $\alpha>0$ distinct from $1$. Prove that $x^p$ is continuous at $\alpha$ iff $\alpha^{-p}x^p$ is continuous at $\alpha$, and note that this last is equivalent to $$\left(\frac x\alpha\right)^p\to 1$$ as $x\to \alpha$. But $x/\alpha\to 1$ as $x\to \alpha$, and since $t^p$ is continuous at $t=1$, we're done! You do have to prove $(xy)^p=x^py^p$ for real $x,y$ first.
{ "language": "en", "url": "https://math.stackexchange.com/questions/663809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Give me an example of a function Lebesgue integrable over $[a,b]$ that is not bounded in any subinterval of $[a,b]$ Give me an example of a function Lebesgue integrable over $[a,b]$ that is not bounded in any subinterval of $[a,b]$. I'm thinking about this but without progress...
Take $f(x)=0$ for $x\in{\Bbb R}\backslash{\Bbb Q}$ and $f(r/s)=s$ for every irreducible fraction $r/s$. Since $f=0$ almost everywhere, $f$ is Lebesgue integrable over $[a,b]$ with $\int_a^b f=0$; yet every subinterval contains irreducible fractions with arbitrarily large denominators, so $f$ is unbounded on every subinterval.
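A small sketch of why this example works: the denominators (hence the values of $f$) are unbounded on any subinterval, while $f=0$ almost everywhere, so the Lebesgue integral is $0$. The interval and grid sizes below are arbitrary choices:

```python
from fractions import Fraction

def f(q):
    # f(r/s) = s for r/s in lowest terms; Fraction stores fractions reduced
    return q.denominator

def biggest_value(a, b, n):
    # largest value of f over the grid points k/n that fall inside (a, b)
    return max(f(Fraction(k, n)) for k in range(1, n) if a < Fraction(k, n) < b)

a, b = Fraction(1, 3), Fraction(1, 2)   # an arbitrary subinterval
for n in (10, 100, 1000):
    print(n, biggest_value(a, b, n))    # the maximum grows without bound
```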
{ "language": "en", "url": "https://math.stackexchange.com/questions/663914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Prove an equality using combinatorial arguments $$n \cdot {2^{n - 1}} = \sum\limits_{k = 1}^n {k\binom{n}{k}} $$ The left-hand side can describe the number of ways to choose a committee with one chairman. How can the right-hand side fit this story?
It is equivalent because the right-hand side counts the same committees grouped by size: for each committee size $k$ there are $\binom{n}{k}$ ways to choose the members, and then $k$ ways to choose the chairman among them. Summing over $k$ gives $\sum_{k=1}^n k\binom{n}{k}$.
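Both sides of the identity are easy to check by machine, either algebraically or by brute-force counting of (committee, chairman) pairs (the ranges below are arbitrary small choices):

```python
from math import comb
from itertools import combinations

# algebraic check for a range of n
for n in range(1, 15):
    assert n * 2**(n - 1) == sum(k * comb(n, k) for k in range(1, n + 1))

# combinatorial check for one small n: count (committee, chairman) pairs directly
n = 6
pairs = sum(1 for r in range(1, n + 1)
              for committee in combinations(range(n), r)
              for chairman in committee)
print(pairs, n * 2**(n - 1))   # 192 192
```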
{ "language": "en", "url": "https://math.stackexchange.com/questions/663999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Basic analysis - sequence convergence I'm taking a course entitled "Concepts in Real Analysis," and I'm feeling pretty dumb at the moment, because this is obviously quite elementary... The example in question shows $\lim_{n\to\infty} \frac{3n+1}{2n+5}=\frac{3}{2}$, and, setting $\left|\frac{3n+1}{2n+5}-\frac32\right|= \frac{13}{4n+10}$, choosing $N>\frac{13-10\varepsilon}{4\varepsilon}$ and $n\ge N$. Fine. My question is this: I don't understand why this isn't circular reasoning. I can subtract anything whatsoever from $\frac{3n+1}{2n+5}$, and with a little algebra I can have a statement $n> f(\varepsilon)$, even if I already know the limit and deliberately choose a value for $N$ which disagrees with it, and then I could claim that any $n>f(\varepsilon)$ whatsoever satisfies the criteria for convergence. I'm sorry I couldn't make the math prettier, but I'm going crazy here. Can anyone help?
It may be a little subtle, but it doesn't actually work unless you pick the limit. To see this, let's write our limiting value as $$ \left|\frac{3n+1}{2n+5} - \left(\frac{3}{2} + \delta\right)\right| < \epsilon $$ for some $\delta\neq 0$. Simplifying, we have $$ \left| \frac{-13 - (2n+5)\cdot 2\delta}{4n+10} \right| < \epsilon. $$ If $$ \delta<-\frac{13}{2(2n+5)}<0, $$ the number in the absolute value is positive and we have $$ \frac{-13 - (2n+5)\cdot 2\delta}{4n+10} < \epsilon. $$ Rewrite this as $$ n > \frac{-13 - 10\epsilon - 10\delta}{4\epsilon + 4\delta}. $$ What happens if we choose $\epsilon = -\delta > 0$? What can we choose for $N$ in the definition of convergence? On the other hand, if $$ \delta \geq -\frac{13}{2(2n+5)}, $$ then we have $$ \frac{13 + (2n+5)\cdot2\delta}{4n+10} < \epsilon, $$ which can be written as $$ n > \frac{13 - 10\epsilon + 10\delta}{4\epsilon - 4\delta}. $$ What happens for $\epsilon = \delta$? We have to be a little careful with this last case though. If $$ - \frac{13}{2(2n+5)} \leq \delta < 0, $$ then this choice of $\epsilon$ is negative, so doesn't contradict the definition. However, in this case we can fall back on the "for all" $n\geq N$ part of the definition of convergence. Note that $$ \frac{13}{2(2n+5)} $$ gets smaller and smaller as $n$ gets larger, so no such choice of $\delta$ would work for all $n\geq N$, so these won't be possible choices of $\delta$ anyway! To try and have a happy ending, note that when you pick the limit (i.e., $\delta = 0$) then, as you note above, we have the restriction $$ n > \frac{13 - 10\epsilon}{4\epsilon}, $$ and the problem that arises for other choices can't actually happen here!
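A quick numerical illustration of the point: with the true limit $3/2$ the error $13/(4n+10)$ shrinks to $0$, while any shifted candidate "limit" leaves a persistent gap:

```python
def a(n):
    # the sequence from the question
    return (3 * n + 1) / (2 * n + 5)

for n in (10, 100, 10000):
    # distance to the true limit 3/2 vs. distance to a shifted candidate 1.6
    print(n, abs(a(n) - 1.5), abs(a(n) - 1.6))
```

The first column of distances tends to $0$; the second stabilizes near $0.1$, so no $N$ can work once $\epsilon < 0.1$.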
{ "language": "en", "url": "https://math.stackexchange.com/questions/664095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $\mathbb Q \times \mathbb Q $ a denumerable set? How can one show that there is a bijection from $\mathbb N$ to $\mathbb Q \times \mathbb Q $?
Yes, $\mathbb{Q} \times \mathbb{Q}$ is countable (denumerable). Since $\mathbb{Q}$ is countable (this follows from the fact that $\mathbb{N} \times \mathbb{N}$ is countable), taking the cartesian product of two countable sets gives you back a countable set. This link: http://www.physicsforums.com/showthread.php?t=487173 should be helpful. The basic idea is to make a matrix out of the (infinite) list of rational numbers across each row and column so that you have pairs $(p/q,p'/q')$ of rational numbers. Then you can follow a diagonal path through the matrix to demonstrate countability.
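The diagonal path through the matrix can be sketched directly; composing it with any enumeration of $\mathbb Q$ then enumerates $\mathbb Q \times \mathbb Q$. (The generator below is an illustration of the idea, not code from the linked page.)

```python
from itertools import islice

def diagonal_pairs():
    # enumerate N x N along anti-diagonals: (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ...
    d = 0
    while True:
        for i in range(d + 1):
            yield (i, d - i)
        d += 1

first = list(islice(diagonal_pairs(), 6))
print(first)   # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```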
{ "language": "en", "url": "https://math.stackexchange.com/questions/664167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exponents and fractions pre-calculus How would I go about solving this: ${x^4y^7\over x^5y^5}$ when $x = {1\over3}$ and $y = {2\over 9}$? My working out: First I simplify: ${xy^2\over x}$. Then substitute: ${{{1\over3} * {2\over9}}^2\over{1\over3}}$. Further, ${{{1\over3} * {4\over81}}\over{1\over3}}$ and ${{4\over243} \over{1\over3}}$. Since $\frac{a/b}{c/d} = \frac{a}{b} \cdot \frac{d}{c}$: ${4\over243} * {3\over1}$ equals ${12\over243}$. Simplified: ${4\over81}$. The correct answer is ${4\over27}$. Can someone help me employ the proper method in solving this problem? Regards,
The first simplification step should give $$\frac{y^2}{x}.$$ So the answer is $$ \frac{4/81}{1/3}= \frac{4 \cdot 3}{81} = \frac{4}{27}.$$ You just had an extra $x$ in the numerator. Must be an oversight.
{ "language": "en", "url": "https://math.stackexchange.com/questions/664230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Partial derivative of a function with another function inside it? What is $\cfrac {\partial f(x, y, g(x))} {\partial x}$ expanded out? I want to say $\cfrac {\partial f(x, y, g(x))} {\partial g(x)} \times \cfrac {\partial g(x)} {\partial x}$ but I don't think that's quite right.
Write $f = f(x, y, z)$ and define $F(x, y) = f(x, y, g(x))$. Then $$\frac{\partial F}{\partial x} = \partial_{x}f(x, y, g(x)) + \frac{dg}{dx}\, \partial_{z}f(x, y, g(x)).$$ When $x$ moves by $dx$, you are evaluating $f$ at a new point where $x$ AND $z$ have changed: $$(x, y, g(x)) \to (x+dx,\, y,\, g(x+dx)) \approx (x+dx,\, y,\, g(x) + g'(x)\,dx).$$ Generally speaking, $$f(x+dx, y+dy, z+dz) \approx f(x, y, z) + dx\, \partial_{x}f(x, y, z) + dy\, \partial_{y}f(x, y, z) + dz\, \partial_{z}f(x, y, z).$$ So replace $dz$ by $g'(x)\,dx$ (and set $dy=0$) and you will see the answer.
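A finite-difference sanity check of this formula, with hypothetical choices of $f$ and $g$ (the tolerance is loose because the derivatives are approximated numerically):

```python
import math

def f(x, y, z):
    return x * y + math.sin(z) * x**2

def g(x):
    return x**3

def F(x, y):                     # F(x, y) = f(x, y, g(x))
    return f(x, y, g(x))

x0, y0, h = 0.7, 1.3, 1e-6
total = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)   # dF/dx by central differences

# chain-rule prediction: f_x + g'(x) * f_z, with the partials also by differences
f_x = (f(x0 + h, y0, g(x0)) - f(x0 - h, y0, g(x0))) / (2 * h)
f_z = (f(x0, y0, g(x0) + h) - f(x0, y0, g(x0) - h)) / (2 * h)
g_x = 3 * x0**2
print(total, f_x + g_x * f_z)   # the two values agree
```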
{ "language": "en", "url": "https://math.stackexchange.com/questions/664340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove if $A \in Mat_{n,n}(\mathbb F)$ is both symmetric and skew-symmetric then $A=0$ Prove if $A \in Mat_{n,n}(\mathbb F)$ is both symmetric and skew-symmetric then $A=0$ I know $A^T = A = -A \Rightarrow A = -A \Rightarrow A_{i,j} = -A_{i,j}$. Since $\mathbb F$ is a field we have $2A_{i,j} = 0 \Rightarrow 2 = 0 \lor A_{i,j} = 0$. However how can I verify $A_{i,j} = 0$ ? Suppose $\mathbb F = \{[0],[1]\}$. Then $2 = 0$, so I cannot conclude $A_{i,j} = 0$ ?
[from my comment] You're exactly right. This holds in every case but characteristic $2$. To see this, $A=−A\implies2A=0$, and that is true for every matrix in characteristic $2$. All you need is an explicit example to prove that a nonzero $A$ exists, e.g. the identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/664451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Inconsistent inequalities I want to prove that the following two inequalities cannot hold simultaneously $\beta_0^2(\beta_1-1)\geq \beta_1$ and $\beta_1^2(\beta_0-1)\leq \beta_0$, where $1<\beta_0<\beta_1<2$
That is a statement rather than a question... Suppose that they do hold 'simultaneously' and I will use $a:=\beta_0$ and $b:=\beta_1$. Add them together to get $$\begin{align} b+ab^2-b^2& \leq a^2b-a^2+a \\ \Rightarrow a^2b-ab^2-a^2+b^2-b+a&\geq0 \\ \Rightarrow ab(a-b)-(a-b)(a+b)+1(a-b)&\geq 0 \\ \Rightarrow (a-b)(ab-a-b+1)&\geq 0 \\ \Rightarrow (a-b)(a-1)(b-1)&\geq 0 \end{align}$$ which is a contradiction because $a<b\Rightarrow a-b<0$ and $a,b>1\Rightarrow (a-1),(b-1)>0$.
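Both the factorization and the resulting contradiction are easy to spot-check numerically (random sampling is only evidence, of course; the algebraic proof above stands on its own):

```python
import random

random.seed(1)

# the factorization (a-b)(a-1)(b-1) really is an identity
for _ in range(1000):
    a = random.uniform(-5, 5)
    b = random.uniform(-5, 5)
    lhs = a*a*b - a*b*b - a*a + b*b - b + a
    rhs = (a - b) * (a - 1) * (b - 1)
    assert abs(lhs - rhs) < 1e-9

# and on 1 < a < b < 2 the two original inequalities never hold together
for _ in range(100000):
    a = random.uniform(1, 2)
    b = random.uniform(a, 2)
    assert not (a*a*(b - 1) >= b and b*b*(a - 1) <= a)
print("no counterexample found")
```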
{ "language": "en", "url": "https://math.stackexchange.com/questions/664562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $y = x$ is continuous For every $\epsilon > 0$ there exists a $\delta > 0$ such that $|x - c| < \delta$ implies $|f(x) - f(c)| < \epsilon$. Start with $|f(x) - f(c)| < \epsilon$ which gives $|x - c| < \epsilon$. We also know $|x - c| < \delta$ but how can we connect $\epsilon$ and $\delta$?
(For future visitors.) You can prove it as follows. For any $x_0 \in \mathbb R$ we have: $x_n \to x_0 \implies f\left(x_n\right) = x_n \to x_0 = f\left(x_0\right)$ Meaning, if a sequence $x_n$ approaches $x_0$ in any way, then the corresponding sequence $f\left(x_n\right)$ approaches $f\left(x_0\right)$, which is the sequential characterization of continuity. Taking into account that $x_0$ was an arbitrary point of $\mathbb R$, we've proven that $\lim_{x \to x_0} f\left(x\right) = f\left(x_0\right)$ for every $x_0 \in \mathbb R$. $\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/664657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How many strings contain every letter of the alphabet? Given an alphabet of size $n$, how many strings of length $c$ contain every single letter of the alphabet at least once? I first attempted to use a recurrence relation to work it out: $$ T(c) = \left\{ \begin{array}{cr} 0 &\mbox{ if $c<n$} \\ n! &\mbox{ if $c = n$} \\ T(c-1) \cdot n \cdot c &\mbox{ if $c > n$} \end{array} \right. $$ As there's no strings that contain every letter if c < n, and if c = n then it's just all permutations. When c > n you can take any string of size (c-1) that contains all letters (of which there are $T(c-1)$ to choose from), you choose which letter to add (of which there are $n$ choices) and there are $c$ different positions to put it. However, this gives out results that are larger than $n^c$ (the total number of strings), so it can't be right, and I realised it was because you could count some strings multiple times, as you can make them taking different inserting steps. Then I thought about being simpler: you choose n positions in the string, put each letter of the alphabet in one of those positions, then let the rest of the string be anything: $$ {c\choose{n}} \cdot n! \cdot n^{c-n} $$ But again this counts strings multiple times. I've also considered using multinomial coefficients, but as we don't know how many times each letter appears in the string it seems unlikely they would be much help. I've also tried several other methods, some complicated and some simple, but none of them seem to work. How would you go about working out a formula for this? I'm sure there's something simple that I'm missing.
Let $W(c,n)$ denote the number of words of length $c$ from an alphabet of $n$ letters. Then $W(c,n)=n^c$. Out of these, the number of words of the same length that avoid one particular letter is $W(c,n-1)=(n-1)^c$, and the number of ways of choosing which letter is missing is $\binom{n}{1}$. The number of words that avoid two particular letters is $W(c,n-2)=(n-2)^c$, and the number of ways of choosing which two letters are missing is $\binom{n}{2}$... and so on ... Now we use the inclusion-exclusion principle: (subtract the number of words missing one of the letters, then add back the number missing two of the letters, subtract the number missing three of the letters,...) We get: $$W(c,n)-\binom{n}{1}W(c,n-1)+\binom{n}{2}W(c,n-2)-\binom{n}{3}W(c,n-3)+\cdots+(-1)^{n-1}\binom{n}{n-1}W(c,n-(n-1)).$$ This is $$n^c-\binom{n}{1}(n-1)^c+\binom{n}{2}(n-2)^c-\binom{n}{3}(n-3)^c+\cdots+(-1)^{n-1}\binom{n}{n-1}1^c.$$ or $$\sum_{k=0}^{n-1}(-1)^k\binom{n}{k}(n-k)^c.$$ Another way could be: Denote by $S_c^n$ the number of ways to partition the $c$ positions of the word into $n$ nonempty pieces. Then we just need to choose which letter goes to each of the $n$ pieces, which can be done in $n!$ ways. So the number of words we are looking for is $$n!\,S_c^n.$$ The numbers $S_c^n$ are called Stirling numbers of the second kind.
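Both formulas are easy to verify against brute-force enumeration for small parameters (a sketch; the parameter choices below are arbitrary):

```python
from itertools import product
from math import comb

def surjections_ie(c, n):
    # inclusion-exclusion count of length-c words using all n letters
    return sum((-1)**k * comb(n, k) * (n - k)**c for k in range(n))

def surjections_brute(c, n):
    # direct enumeration of all n^c words
    return sum(1 for w in product(range(n), repeat=c) if len(set(w)) == n)

for c, n in [(3, 2), (4, 3), (5, 3), (6, 4)]:
    assert surjections_ie(c, n) == surjections_brute(c, n)
print(surjections_ie(6, 4))   # 1560 words of length 6 using all 4 letters
```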
{ "language": "en", "url": "https://math.stackexchange.com/questions/664726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How many arrangements of the digits 1,2,3, ... ,9 have this property? How many arrangements of the digits 1,2,3, ... ,9 have the property that every digit (except the first) is no more than 3 greater than the previous digit? (For example, the arrangement 214369578 has this property. However, 312548697 does not have the property, since 8 occurs immediately after 4, and 8>4+3.) EDIT: I think this problem should have Catalan numbers involved, since this was part of some homework and other similar questions involved them.
Let $a_n$ denote the number of valid arrangements of $1,2,\dots,n$. Note that for $n\ge 4$ we have $a_{n+1}=4a_n$: the digit $n+1$ can be inserted into a valid arrangement either at the very front or immediately after one of $n-2$, $n-1$, $n$, and every valid arrangement of $1,\dots,n+1$ arises exactly once this way. Since $a_4=4!$ (with only the digits $1,\dots,4$ no jump can exceed $3$), we get $a_9=4! \cdot 4^5=\boxed{24576.}$
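The recurrence $a_{n+1}=4a_n$ with $a_4=4!$ can be checked by brute force for small $n$ (a sketch; $n=9$ itself is also within reach of enumeration, just slower):

```python
from itertools import permutations

def count_valid(n):
    # arrangements of 1..n where each digit is at most 3 more than its predecessor
    return sum(1 for p in permutations(range(1, n + 1))
               if all(p[i + 1] <= p[i] + 3 for i in range(n - 1)))

for n in range(4, 9):
    assert count_valid(n) == 24 * 4**(n - 4)   # a_4 = 4!, then a_{n+1} = 4*a_n
print(count_valid(8))   # 6144
```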
{ "language": "en", "url": "https://math.stackexchange.com/questions/664798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How to calculate the height by number of nodes Imagine that I have something like the following structure and I keep adding more to it, so level 1 has only one node, level 2 has 2, and level $n$ has $n$ nodes. How can I calculate $n$ from the total number of tokens (size)? For example, if I want to add j, knowing that the size is 9, how can I mathematically deduce that it has to go to level 4?

a
b c
d e f
g h i
If you want to add the $n^{th}$ node, it has to go on the $\left\lceil\frac{-1+\sqrt{1+8n}}{2}\right\rceil^{th}$ row. Proof: The last element of the $n^{th}$ row is the $n^{th}$ triangular number, $\frac{n(n+1)}{2}$. Now consider the $k^{th}$ node. For it to be the last element of the $n^{th}$ row we need $k = \frac{n(n+1)}{2}$, as we have just seen. This leads to $n^2+n-2k=0$, whose only positive solution is $n=\frac{-1+\sqrt{1+8k}}{2}$. This only holds when $k$ is the last element of its row; but since $k$ lies on the same row as the smallest triangular number that is $\ge k$, we just need to take the ceiling of the result to cover the general case. EDIT: It seems you want the result from the size of the graph. Just replace $n$ by $n+1$ in the formula above if that's the case.
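A sketch of the formula in code (a floating-point square root is fine at this scale, though an integer square root would be safer for very large $n$):

```python
import math

def level_of_node(n):
    # level of the n-th node (1-indexed) when level k holds k nodes
    return math.ceil((-1 + math.sqrt(1 + 8 * n)) / 2)

def level_for_next(size):
    # where the next node goes, given the current number of nodes
    return level_of_node(size + 1)

print(level_for_next(9))                          # j is the 10th node -> 4
print([level_of_node(n) for n in range(1, 11)])   # [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
```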
{ "language": "en", "url": "https://math.stackexchange.com/questions/664874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the square root of complex number i? The square root of $-1$ is defined as $i$; then what is the square root of the complex number $i$? I would say it should be $j$, as logic suggests, but it's not defined that way in quaternion theory. Am I wrong? EDIT: my question is rather about nomenclature of definitions: while the square root of $-1$ is defined as $i$, why isn't $j$ defined as the square root of $i$, and $k$ as the square root of $j$, and do those numbers have deeper meanings and usage as in quaternion theory?
I think this would be easier to see by writing $i$ in its polar form, $$i=e^{i\pi/2}.$$ This shows us that one square root of $i$ is given by $$i^{1/2}=e^{i\pi/4}=\frac{1+i}{\sqrt 2};$$ the other square root is its negative, $e^{i5\pi/4}$.
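A quick check with Python's `cmath`, just to illustrate the polar-form computation above:

```python
import cmath

root = cmath.exp(1j * cmath.pi / 4)   # e^{i pi/4} = (1 + i)/sqrt(2)
print(root)                            # approximately 0.7071 + 0.7071j
print(root ** 2)                       # approximately 1j

other = -root                          # the second square root, e^{i 5pi/4}
print(other ** 2)                      # also approximately 1j
```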
{ "language": "en", "url": "https://math.stackexchange.com/questions/664962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 3 }
Is there any function that has the series expansion $x+x^{\frac{1}{2}}+x^{\frac{1}{3}}+\cdots$? $$\frac{1}{1-x} = 1+x+x^2+x^3+ \cdots$$ Is there a $f(x)$ that has the series of $n$th roots? $$f(x)= x+x^{\frac{1}{2}}+x^{\frac{1}{3}}+ \cdots$$ Wolfram Alpha seemed to not understand my input.
For $x>0$, $x+x^{\frac{1}{2}}+x^{\frac{1}{3}}+\cdots$ can't converge, since the terms satisfy $x^{\frac{1}{n}}\to1\neq 0$. Nor does it converge for $x<0$, since $x^{\frac{1}{2}}$ is undefined there (over the reals). So the only possibility is the trivial one: $$f:\{0\}\to\mathbb R:0\mapsto 0$$
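The divergence is visible numerically: the terms tend to $1$, so the partial sums grow roughly linearly (the value $x=0.5$ below is an arbitrary choice):

```python
x = 0.5
# the terms x^(1/n) increase toward 1, so they cannot tend to 0
print([round(x ** (1 / n), 4) for n in (1, 2, 3, 10, 100, 1000)])

partial = sum(x ** (1 / n) for n in range(1, 1001))
print(partial)   # close to 1000 -- the partial sums grow without bound
```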
{ "language": "en", "url": "https://math.stackexchange.com/questions/665108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Finding the $\limsup$ of a sequence of sets Is my proof of this equivalence correct? $$\limsup_{n\to\infty} A_n=\bigcap_{i=1}^\infty \bigcup_{j=i}^\infty A_j=\{\text{ elements that belong to infinitely many } A_i \text{'s }\}$$ Pf. Let $B_i=\bigcup_{j\ge i} A_j$. Let $x \in \limsup$, then: ($\rightarrow$) $x\in \limsup \implies x \in B_i$ for all $i\ge 1 \implies x \in $ infinitely many $A_i$'s. ($\leftarrow$) Suppose $x \in$ infinitely many $A_i$'s. Then $x \in A_i \implies x \in B_i$ for all $i\ge 1 \implies x \in \bigcap_{j\ge 1} B_j=\limsup$.
I don't think your proof is correct. Just note that $$ x \in \limsup_{n \to \infty} A_n \quad \iff \quad \forall i \geq 1, \exists j \geq i : x \in A_j \quad \iff \quad x \text{ belongs to infinitely many $A_i$'s} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/665267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ways to induce a topology on power set? In this question, two potential topologies were proposed for the power set of a set $X$ with a topology $\mathcal T$: one comprised of all sets of subsets of $X$ whose union was $\mathcal T$-open, one comprised of all sets of subsets of $X$ whose intersection was $\mathcal T$-open. I proved there that neither such construction need be a topology on $\mathcal P(X)$ in general. (The latter will be such a topology if and only if $\mathcal T$ is discrete. If we know that $\mathcal T$ is $T_1$, then the former will be a topology if and only if $\mathcal T$ is discrete.) This led me to wonder if there are any ways to induce a topology on $\mathcal P(X)$ from a topology on $X$? Some searching shows that one "natural" way to do so is to give $\mathcal P(X)$ the topology of pointwise convergence of indicator functions $X\to\{0,1\}.$ This is certainly very nice, but I'm still curious: Are there any other ways to induce a topology on $\mathcal P(X)$ from any given topology on $X$? Of course, I would like for different topologies on $X$ to give rise to different (though potentially homeomorphic, of course) topologies on $\mathcal P(X).$ (As a bonus question, can anyone can think of any non-$T_1$ topologies for which the first construction described above is a topology, or a proof that no such non-$T_1$ topology can exist? I will gladly upvote any such example/proof and link to it from my answer to the question above.)
Let a set be open iff it is empty or of the form $\mathcal{P}(X)\setminus\big\{\{x\}:x\in C\big\}$ for some closed set $C \subseteq X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/665365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 1 }
Does every prime $p \neq 2, 5$ divide at least one of $\{9, 99, 999, 9999, \dots\}$? I was thinking of decimal expressions for fractions, and figured that a fraction of the form $\frac{1}{p}$ must be expressed as a repeating decimal if $p$ doesn't divide $100$. Thus, $\frac{p}{p}$ in decimal would equal $0.\overline{999\dots}$ for some number of $9$s, thus there must be some amount of $9$s such that $p | 999...$ in order for a decimal representation of $\frac{1}{p}$ to be possible. Furthermore, the question could be rephrased to "an infinite number of" since if it divides $999\dots$ where there are $k$ nines, it also divides when there are $2k, 3k, \dots$ nines. Is this reasoning correct? If so, this is how I thought about proving it: We can reduce the set to $\{1, 11, 111, 1111, \dots\}$ since $p=3$ obviously works. Let $a_k = 111\dots$ where there are $k$ ones. This satisfies the recursion $a_k = 10a_{k-1} + 1$ But I'm unsure what to do past this point (tried looking at modular cases or something but wasn't able to get anywhere). Am I on the right track at all? Is this "conjecture" even correct?
Essentially, you wish to find a $k$ such that $10^k - 1 \equiv 0 \pmod p$. This is equivalent to $10^k \equiv 1 \pmod p$. There are many reasons such a $k$ exists (for $p \ne 2,5$), but I'd argue the cleanest one is this: $\mathbb{Z}_p$ is a field, so $(\mathbb{Z}_p)^\times$ is a group under multiplication. Since the group is finite, the element $10$ has finite order, say $k$, and then $p \mid 10^k - 1$.
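The order argument can be checked directly by brute force; a quick sketch (the function name is mine):

```python
def order_of_10(p):
    """Multiplicative order of 10 mod p (assumes gcd(10, p) == 1)."""
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

for p in [3, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41]:
    k = order_of_10(p)
    assert (10**k - 1) % p == 0      # p divides the repunit of k nines
    print(p, k)
```

For each listed prime, the order $k$ of $10$ gives a number $10^k - 1$ (that is, $k$ nines) divisible by $p$.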
{ "language": "en", "url": "https://math.stackexchange.com/questions/665462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
How is $n^{1.001} + n\log n = \Theta (n^{1.001})$? I am studying for an exam and stumbled across this here: https://cs.stackexchange.com/questions/13451/few-big-o-example (I cant comment there since commenting needs 50 reps and I am a new user. Thought math exchange would help) The chosen answer says for large $n$, $n^{0.001}$ is larger than $\log n$. But that does not make sense. $0.001$ is close to $0$. So anything raised to the power $0.001$ should come out to be slightly more than $1$ right? e.g if n = 1000,000 then $n^{0.001}$ is equal to $1.014$ whereas $\log n$ will be equal to almost 20 if $\log$ has a base of 2. Where am I thinking wrong? Is there any other way of showing this relationship?
For intuition, consider something like $n = 2^{10^6}$. We have $\log_2(n) = 10^6$ and $n^{0.001} = 2^{10^3} \approx 10^{301}$. In general, $\displaystyle \lim_{n\to \infty} \frac{(\log n)^a}{n^b} = 0$ if $b > 0$ (for any fixed $a$).
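This is easy to check numerically, with the caveat that the crossover only happens for genuinely large $n$; a sketch using $n = 2^m$ so the huge numbers never have to be formed explicitly:

```python
# For n = 2**m we have n**0.001 = 2**(0.001*m), while log2(n) = m.
for m in (10**4, 10**5, 10**6):
    n_pow = 2.0 ** (0.001 * m)     # n**0.001
    log_n = m                      # log2(n)
    print(f"m={m}: n**0.001 = {n_pow:.3e}, log2(n) = {log_n}")

# "for large n" matters: at n = 2**10000 the log is still ahead...
assert 2.0 ** (0.001 * 10**4) < 10**4
# ...but at n = 2**1000000 the power has won by roughly 295 orders of magnitude
assert 2.0 ** (0.001 * 10**6) > 10**6
```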
{ "language": "en", "url": "https://math.stackexchange.com/questions/665551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
probability density function for a random variable For a given probability density function $f(x)$, how do I find out the probability density function for say, $Y = x^2$? $$f(x)=\begin{cases}cx&,0<x<2\\2c&,2<x<5\\0&,\text{otherwise}\end{cases}$$
HINT 1: Note that $P(Y < 0) = 0$ (since $Y = X^2 \geq 0$) and for $y >0$, \begin{gather*} P(Y \leq y) = P(X^2 \leq y) = P(-\sqrt{y} \leq X \leq \sqrt{y}) = \dots \end{gather*} HINT 2: For any $z \in \mathbb{R}$, $P(X \leq z) = \int_0^z f(x) dx$. Now evaluate the integral as a piecewise function, from $0$ to $2$ and then from $2$ to $5$, and plug back into HINT 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/665655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A formula for a sequence which has three odds and then three evens, alternately We know that triangular numbers are 1, 3, 6, 10, 15, 21, 28, 36... where we have alternate two odd and two even numbers. This sequence has a simple formula $a_n=n(n+1)/2$. What would be an example of a sequence, described by a similar algebraic formula, which has three odds and then three evens, alternately? Ideally, it would be described by a polynomial of low degree.
A polynomial reduced mod $7$ is periodic in $n$ with period $7$, so it cannot produce this period-$6$ pattern; more generally, the parity of an integer-valued polynomial is periodic with period a power of $2$, so no polynomial alone will do. A simple closed form that does work is $$a_n = \left\lfloor \frac{n+2}{3} \right\rfloor,$$ which for $n \geq 1$ gives $$1,1,1,2,2,2,3,3,3,4,4,4,\ldots,$$ three odd terms, then three even terms, alternately.
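One formula that demonstrably works is $a_n = \lfloor (n+2)/3 \rfloor$; a quick check that it produces the required parity pattern:

```python
terms = [(n + 2) // 3 for n in range(1, 25)]
parities = [t % 2 for t in terms]        # 1 = odd, 0 = even
assert parities == [1, 1, 1, 0, 0, 0] * 4
print(terms)
```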
{ "language": "en", "url": "https://math.stackexchange.com/questions/665722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that the foot of the perpendicular from the focus to any tangent of a parabola lies on the tangent to the vertex Prove that the foot of the perpendicular from the focus to any tangent of a parabola lies on the tangent to the vertex I've been trying to prove this by plugging in the negative reciprocal of the slope of the tangent at a point $(x, y)$ into a line which passes through that point and the axis of symmetry. Then I plug the value of the focus into the result and solve for $x$. However the slope is undefined for any line parallel to the axis of symmetry.
Let $F$ be the focus of the parabola, $HG$ its directrix, with vertex $V$ the midpoint of $FH$. From the definition of parabola it follows that $PF=PG$, where $P$ is any point on the parabola and $G$ its projection on the directrix. The tangent at $P$ is the angle bisector of $\angle FPG$, hence it is perpendicular to the base $GF$ of isosceles triangle $PFG$, and intersects it at its midpoint $M$. But the tangent at $V$ is parallel to the directrix and bisects $FH$, hence it also bisects $FG$ at $M$, as it was to be proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/665837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
modular exponentiation where exponent is 1 (mod m) Suppose I know that $ax + by \equiv 1 \pmod{m}$, why would then, for any $0<s<m$ it would hold that $s^{ax} s^{by} \equiv s^{ax+by} \equiv s \pmod{m}$? I do not understand the last step here. Is it some obvious exponentiation rule I'm overlooking here? Thanks, John.
It's false. $2*5 + 1*7 = 17 \equiv 1 \pmod{4}$, $0<2<4$ and $2^{17} \equiv 0 \pmod{4}$. However, what is true is that for $s$ and $m$ coprime, $s^{\phi(m)} \equiv 1 \pmod{m}$, where $\phi$ is Euler's totient function (http://en.wikipedia.org/wiki/Euler%27s_totient_function), and hence $s^{\phi(m)+1} \equiv s \pmod{m}$. This last congruence also holds for all $s$ when $m$ is squarefree, but not in general: $2^{\phi(4)+1} = 2^3 = 8 \equiv 0 \not\equiv 2 \pmod 4$.
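A numerical check of these claims; note the $s^{\phi(m)+1} \equiv s \pmod m$ congruence for arbitrary $s$ requires $m$ squarefree (the helper `phi` here is a naive implementation, just for illustration):

```python
from math import gcd

def phi(m):
    """Naive Euler totient, fine for small m."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# Euler's theorem: s**phi(m) == 1 (mod m) whenever gcd(s, m) == 1
for m in range(2, 30):
    for s in range(1, m):
        if gcd(s, m) == 1:
            assert pow(s, phi(m), m) == 1

# s**(phi(m)+1) == s (mod m) can fail when gcd(s, m) > 1 ...
assert pow(2, phi(4) + 1, 4) == 0       # 2**3 = 8, which is 0, not 2, mod 4
# ... but it holds for every s when m is squarefree, e.g. m = 10:
assert all(pow(s, phi(10) + 1, 10) == s % 10 for s in range(10))
```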
{ "language": "en", "url": "https://math.stackexchange.com/questions/665939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find a basis for this matrix I have a matrix that only contain variables and zeros, like this: $$ \begin{bmatrix} 0 & -a & -b \\ a & 0 & -c \\ b & c & 0 \\ \end{bmatrix} $$ I usually would find the basis for this by row reduction and then take the columns with leading ones as basis, but how do I do when there is just variables?
The variables are also just numbers, so this isn't much different from doing what you described. (And the process you described finds a basis for the rowspace, which is the interpretation I'll use for my solution. I'm also assuming the matrix has entries in a field.) The only complication is that the size of the basis may change depending on how many variables are zero. In all cases here, it happens that the basis of the rowspace will be either empty or will have two elements. First of all, the determinant is $0$, and so the matrix can't ever have more than two linearly independent rows. If $a=b=c=0$, then the basis is empty. If $a$ isn't zero, then $(0,-a,-b)$ and $(a,0,-c)$ are linearly independent and form a basis. If $a=0$ but one of $b$ or $c$ isn't, then it's obvious that $(b,c,0)$ and $(0,0,x)$ form a basis, where $x$ is whichever of $b,c$ is not zero. It seems also that the question is not very clear on what "basis" we are looking for. Another plausible interpretation is this: Find a basis for the vector space of matrices $\left\{\begin{bmatrix} 0 & -a & -b \\ a & 0 & -c \\ b & c & 0 \\ \end{bmatrix} \mid a,b,c\in F \right\}$ That is fairly easy to do by inspection: $\begin{bmatrix} 0 & -a & -b \\ a & 0 & -c \\ b & c & 0 \\ \end{bmatrix}= \begin{bmatrix} 0 & -a & 0 \\ a & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}+\begin{bmatrix} 0 & 0 & -b \\ 0 & 0 & 0 \\ b & 0 & 0 \\ \end{bmatrix}+\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -c \\ 0 & c & 0 \\ \end{bmatrix}\\ = a\begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}+b\begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \\ \end{bmatrix}+c\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \\ \end{bmatrix}$ So the last three matrices are a good choice of basis.
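Both facts used above (the determinant vanishes identically, and for $a\neq 0$ the third row is a combination of the first two) can be verified with exact rational arithmetic; a sketch, where the coefficients $\lambda=-c/a$ and $\mu=b/a$ are worked out from the first two entries:

```python
from fractions import Fraction
import random

random.seed(0)
for _ in range(100):
    a, b, c = (Fraction(random.randint(-9, 9)) for _ in range(3))
    # cofactor expansion of det [[0,-a,-b],[a,0,-c],[b,c,0]] along row 1
    det = -(-a) * (a*0 - (-c)*b) + (-b) * (a*c - 0*b)
    assert det == 0                       # rank is at most 2 in every case
    if a != 0:
        # (b, c, 0) = (-c/a)*(0,-a,-b) + (b/a)*(a,0,-c)
        lam, mu = -c / a, b / a
        r1, r2, r3 = (0, -a, -b), (a, 0, -c), (b, c, 0)
        assert tuple(lam*x + mu*y for x, y in zip(r1, r2)) == r3
```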
{ "language": "en", "url": "https://math.stackexchange.com/questions/666032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Rolle theorem on infinite interval We have : $f(x)$ is continuous on $[1,\infty)$ and differentiable on $(1,\infty)$ $\lim\limits_{x \to \infty}f(x) = f(1)$ we have to prove that : there is $b\in(1,\infty)$ such that $f'(b) = 0$ I'm sure we have to use Rolle's theorem so, I tried using the Mean Value theorem and using the limit definition at $\infty$ Any ideas how I can use them? Update : after seeing the answers that I've got : I'm having trouble finding $\boldsymbol x_{\boldsymbol 1}\neq\boldsymbol x_{\boldsymbol 2}$ such that $\boldsymbol{f(x}_{\boldsymbol 1}\boldsymbol {)=f(x}_{\boldsymbol 2}\boldsymbol )$ *I need a formal solution
Let $g(x)=f\bigl(\frac 1x\bigr)$ for $x\in (0,1]$ and $g(0)=f(1)$. Then $g$ is continuous on $[0,1]$ (continuity at $0$ is exactly the hypothesis $\lim_{x\to\infty}f(x)=f(1)$) and differentiable in $(0,1)$. Since $g(0)=f(1)=g(1)$, by Rolle's Theorem there exists $c\in (0,1)$ such that $g'(c)=0$, hence $$0=g'(c)=-f'\Bigl(\frac 1c\Bigr)\frac 1{c^2}$$ thus if $b=\frac 1c$, then $f'(b)=0$ and $b\in (1,+\infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/666098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
How to prove the modulus of $\frac{z-w}{z-\bar{w}} < 1$? Given that $\Im{(z)} > 0$ and $\Im{(w)} > 0$, prove that $\left|\displaystyle\frac{z-w}{z-\bar{w}}\right|<1$. Please help me check my answer: $z - w = a + ib$, $z - \bar{w} = a + i(b+2\Im(w))$, $\left|\displaystyle\frac{z-w}{z-\bar{w}}\right| = \displaystyle\frac{|a+ib|}{|a+i(b+2\Im(w))|} = \displaystyle\frac{\sqrt{a^2+b^2}}{\sqrt{a^2+(b+2\Im(w))^2}} < \displaystyle\frac{\sqrt{a^2+b^2}}{\sqrt{a^2+b^2}} = 1$
An example of an almost pretty proof (in my eyes, of course) using all the givens: set $x = z-w$ and $y = z - \overline{w}=x+2i\Im{(w)}$. Then $$|y|^2 - |x|^2 = 4\,\Im(w)\,\Im(z) > 0,$$ and since $|y|^2>|x|^2\ge 0$ we have in particular $y \neq 0$, so $$\left|\frac{z-w}{z - \overline{w}}\right|^2=\frac{|x|^2}{|y|^2}<1.$$
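The key identity $|y|^2-|x|^2=4\,\Im(w)\,\Im(z)>0$ can be sanity-checked numerically (variable names as in the answer):

```python
import random

random.seed(1)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(0.01, 5))
    w = complex(random.uniform(-5, 5), random.uniform(0.01, 5))
    x, y = z - w, z - w.conjugate()
    # the identity driving the inequality:
    assert abs(abs(y)**2 - abs(x)**2 - 4 * w.imag * z.imag) < 1e-9
    assert abs(x / y) < 1
```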
{ "language": "en", "url": "https://math.stackexchange.com/questions/666151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Measure Theory - condition for Integrability A question from my homework: Let $\,f:X\to [0,\infty)\,$ be a Lebesgue measurable function. Show that $\,\int_X f\,d\mu < \infty\,\,$ iff $\,\,\sum\limits_{n=1}^\infty \mu\big(\{x\in X:n\leq f(x)\}\big)<\infty.$ I've managed to solve this, but with a relatively complex and convoluted proof. I feel this should be pretty trivial and that I'm probably missing something. I feel that for $=>$ using Markov's inequality - $ \mu(\{x\in X:n\leq f(x)\})\leq\frac{1}{n}\int_Xf(x)\,d\mu$ should somehow be sufficient. For <= I feel that defining sets such as $En=\{x\in X:n\leq f(x)\leq n+1\}$ and using the fact that $\int_Xf(x)\,d\mu=\sum\limits_{n=0}^\infty \int_{En} f(x)\,d\mu$ should also somehow be sufficient. Does this just seem simple to me and actually isn't? Thanks for the help! Edit: forgot to add that $\mu(X)<\infty$
Set $$ E_n=\{x:f(x)\ge n\}, \,\,\,F_n=\{x\in X: n-1\le f(x)< n\}. $$ Then the $F_n$'s are disjoint, $$ E_n=\bigcup_{j\ge n+1}F_j\quad\text{and}\quad \mu(E_n)=\sum_{j=n+1}^\infty\mu(F_j), $$ and also $\bigcup_{n\ge 1}F_n=X$, hence $\sum_{n=1}^\infty \mu(F_n)=\mu(X)$. Summing the identity for $\mu(E_n)$ over $n\ge 0$ and over $n\ge 1$ gives $$ \sum_{n=0}^\infty \mu(E_n)=\sum_{n=1}^\infty n\,\mu(F_n) \qquad\text{and}\qquad \sum_{n=1}^\infty \mu(E_n)=\sum_{n=1}^\infty (n-1)\,\mu(F_n). $$ But on $F_n$ we have $n-1\le f< n$, so $$ n\,\mu(F_n)\ge\int_{F_n}f\,d\mu \ge (n-1)\,\mu (F_n), $$ which, summed over $n$, implies that $$ \sum_{n=1}^\infty n\,\mu(F_n)\ge\int_{X}f\,d\mu \ge \sum_{n=1}^\infty (n-1)\,\mu (F_n), $$ or equivalently $$ \sum_{n=1}^\infty \mu(E_n)+\mu(X)\ \ge\ \int_{X}f\,d\mu\ \ge\ \sum_{n=1}^\infty \mu(E_n), $$ which proves what has to be proved, since $\mu(X)<\infty$.
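The two-sided bound has a simple discrete analogue that can be tested directly; a sketch with the uniform measure on $N$ points, so $\mu(X)=1$:

```python
import random

random.seed(2)
N = 10_000                      # uniform measure on N points: mu(X) = 1
f = [random.uniform(0, 20) for _ in range(N)]

integral = sum(f) / N                               # the integral of f
tail_sum = sum(sum(1 for v in f if v >= n) / N      # sum of mu({f >= n})
               for n in range(1, 21))

assert tail_sum <= integral <= tail_sum + 1         # + mu(X)
print(integral, tail_sum)
```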
{ "language": "en", "url": "https://math.stackexchange.com/questions/666287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that for all natural $a$, $2008\mid a^{251}-a$. How to show, that for all natural $a$ coprime to 2008 the following occurs: $2008\mid a^{251}-a$? This means that $a^{251} \equiv a \pmod{2008}$, right? It's obvious if $a\mid 2008$. In the other case I'm totally at a loss. I thought about using the Euler totient function but that obviously doesn't apply here, since $2008$ is not prime.
Note that $2008 = 8\cdot 251$. By the Chinese Remainder Theorem it is necessary and sufficient to show $a^{251}\equiv a\pmod 8$ and $a^{251}\equiv a\pmod {251}$. The second is true for all $a$ by Fermat's little theorem. The first is true for odd $a$ or if $8\mid a$, but otherwise it is not!
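A brute-force confirmation of the congruence for every odd $a$ (which covers all $a$ coprime to $2008$, since coprime implies odd), and of its failure otherwise:

```python
# 2008 = 8 * 251.  Check a**251 == a (mod 2008) for every odd a ...
assert all(pow(a, 251, 2008) == a for a in range(1, 2008, 2))
# ... and also when 8 | a:
assert pow(8, 251, 2008) == 8
# ... but not, e.g., for a = 2 (the congruence mod 8 fails):
assert pow(2, 251, 2008) != 2
```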
{ "language": "en", "url": "https://math.stackexchange.com/questions/666351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Definition of Independence in probability and how its affected if one of the events has zero probability Usually its stated that two events are independent if and only if their joint probability equals the product of their probabilities. i.e: $$P(A \cap B) = P(A)P(B)$$ However, I was not sure if that was just a definition or if it had a proof. Usually the way I have seen it made sense is relating it to conditional independence: $$P(A|B) = \frac{P(A \cap B)}{P(B)}$$ If independent then the distribution doesn't change if we are given that B has occurred: $$P(A) = \frac{P(A \cap B)}{P(B)}$$ $$P(A)P(B) = P(A \cap B)$$ And then there is a proof for the statement but my concern is, if one of the two events has zero probability of occurring, then I was not sure what happened. For example, is the definition of independence only valid when P(A) and P(B) are non-zero? (since conditional probabilities don't really exists if the denominator is zero) Or Maybe $P(A \cap B) = P(A)P(B)$ is always true? Basically when does $$P(A \cap B) = P(A)P(B)$$ hold? Always?
As you noticed, if we use the definition "$A$ and $B$ are independent iff $P\left(A|B\right)=P\left(A\right)$", we might encounter some difficulties in case $P(B)=0$.(here is a discussion about $P(A|B)$ in case $P(B)=0$) On the other hand, the definition "$A$ and $B$ are independent iff $P(A\cap B)=P(A)P(B)$" doesn't introduce such problems, and thus it is the preferred definition. Another perk of this definition is that it clearly shows that independence is symmetric. So the latter is the (preferred) definition, and from it you can deduce the former. (My answer is based on wikipedia. Also, here is a question that asked explicitly about the preferred definition.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/666533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Divisibility proof problem I need assistance with the following proof. Let a,b,c,m be integers, with m $\geq$ 1. Let d = (a,m). Prove that m divides ab-ac if and only if $\frac md $ divides b-c. Alright, I know that since d = (a,m) there exists an r and t such that $ar + mt = d$ I figure since we're trying to prove m divides ab-ac iff $\frac md$ divides $b-c$, we look to $\frac md$ dividing $b-c$ Only I get stuck when trying to work with $\frac md$ dividing $b-c$ algebraically. I've tried relating d = (a,m) to $\frac md$ but I'm still stuck.
Let $k=b-c$. We want to show that $m$ divides $ak$ if and only if $\frac{m}{d}$ divides $k$. Let $a=a_1d$ and let $m=m_1 d$. Note that $\gcd(a_1,m_1)=1$. One direction: We show that if $\frac{m}{d}$ divides $k$, then $m$ divides $ak$. By assumption we have $k=\frac{m}{d}q=m_1q$ for some $q$. Thus $ak=(a_1 d)(m_1q)=(a_1q)m$, so $m$ divides $ak$. The other direction: We show that if $m$ divides $ak$, then $\frac{m}{d}$ divides $k$. We have $m_1d$ divides $(a_1d)k$, so $m_1$ divides $a_1k$. But since $d=\gcd(a,m)$, we have $\gcd(a_1,m_1)=1$. Then from $m_1$ divides $a_1k$ we can conclude that $m_1$ divides $k$. Remark: We have used without proof the fact that if $u$ divides $vw$ and $u$ and $v$ are relatively prime, then $u$ divides $w$. That's because it has probably been proved in your course. For a proof, we can use the Bezout argument you began. There are integers $x$ and $y$ such that $ux+vy=1$. Multiply through by $w$. We get $u(xw)+(vw)y=w$. By assumption $u$ divides $vw$, so $u$ divides $u(xw)+(vw)y$, that is, $u$ divides $w$.
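The equivalence being proved ($m \mid ab-ac$ iff $\frac{m}{d} \mid b-c$) is easy to spot-check over many random integers; a sketch:

```python
from math import gcd
import random

random.seed(3)
for _ in range(5000):
    a = random.randint(-50, 50)
    b = random.randint(-50, 50)
    c = random.randint(-50, 50)
    m = random.randint(1, 50)
    d = gcd(a, m)                       # math.gcd is always nonnegative
    lhs = (a*b - a*c) % m == 0          # m | ab - ac
    rhs = (b - c) % (m // d) == 0       # m/d | b - c
    assert lhs == rhs
```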
{ "language": "en", "url": "https://math.stackexchange.com/questions/666630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Relationships between mean and standard deviation when one variable is linear function of another Let $a$ and $b$ be constants and let $y_j = ax_j + b$ for $j=1,2,\ldots,n$. What are the relationships between the means of $y$ and $x$, and the standard deviations of $y$ and $x$? I'm slightly confused with how to approach a theoretical question such as this and was wondering if anyone could help provide me some advice on how to approach this problem. At the moment here is what I'm thinking, but I'm currently working without certainty: We know (1) $x_j = (y_j - b)/a$ (2) The mean of $x$ = mean of $y$ In terms of standard deviation, I'm not sure how they correlate at all right now aside from the fact that you need the mean of $x$ or $y$ in order to calculate the corresponding standard deviation. If someone could help explain this question and help me understand what I'm being asked and how to solve this I would greatly appreciate it! EDIT: So looking at the second portion of the question I am doing the following: SD = sqrt(Sigma(y_i - y)^2/(n-1)) SD(y) = (Sigma(yi - (ax+b)))/(n-1) SD(y) = (Sigma (ax+b) - (ax+b))/(n-1) SD(y) = 1/(n-1) Is the following correct?
The mean of $x$ = mean of $y$ This is not true. The way you should approach this problem is to use the formulas for mean and standard deviation directly: \begin{align*} \text{Mean}(y_1, y_2, \ldots, y_n) &= \frac{y_1 + y_2 + \cdots + y_n}{n} \\ &= \frac{(ax_1 + b) + (ax_2 + b) + \cdots + (ax_n + b)}{n} \\ &= \frac{a(x_1 + x_2 + \cdots + x_n) + nb}{n} \\ &= a \cdot \text{Mean}(x_1, x_2, \ldots, x_n) + b \\ \end{align*} See if you can do a similar algebraic manipulation for standard deviation.
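Carrying out the analogous manipulation for the standard deviation leads to ${\rm SD}(y)=|a|\,{\rm SD}(x)$ (note the absolute value, stated here only so it can be checked); both relationships can be confirmed numerically:

```python
import random
from statistics import mean, stdev

random.seed(4)
x = [random.uniform(-10, 10) for _ in range(500)]
a, b = -3.5, 7.0
y = [a*xi + b for xi in x]

assert abs(mean(y) - (a*mean(x) + b)) < 1e-9
assert abs(stdev(y) - abs(a)*stdev(x)) < 1e-9   # |a| matters, since a < 0 here
```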
{ "language": "en", "url": "https://math.stackexchange.com/questions/666731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does $\sum_{i=0}^{\infty}\frac{a^i}{i!}=e^a$? Here is a standard identity: $$\sum_{i=0}^{\infty}\frac{a^i}{i!}=e^a$$ Why does it hold true?
A brief answer: Let's consider the exponential function $e^x$. One defining property of $e^x$ is that $\frac{d}{dx}e^x = e^x$. Now let's assume that $e^x$ can be written as an infinite sum of the form $\sum_{i=0}^{\infty}a_ix^i$. Since the function equals its own derivative, the sum rule for derivatives gives $\sum_{i=0}^{\infty}a_ix^i = \sum_{i=0}^{\infty}\frac{d}{dx}a_ix^i = \sum_{i=1}^{\infty}ia_{i}x^{i-1}$. Comparing coefficients, $a_i = \frac{a_{i-1}}{i}$; since $a_0 = e^0 = 1$, induction gives $a_i = \frac{1}{i!}$. (Warning: variables change from the previous section) Another way to look at this is to consider the more standard definition of $e$, which is $\lim_{n\to\infty}(1 + \frac{1}{n})^n$. Therefore, $e^a$ can be written as $\lim_{n\to\infty}(1 + \frac{1}{n})^{an}$. Using the binomial theorem, the expression expands to $$\lim_{n\to\infty}(1 + \frac{an}{n} + \frac{an(an - 1)}{2n^2}+ \frac{an(an-1)(an-2)}{6n^3} + \ldots)$$ The lesser powers in the numerators drop out, so the expression becomes $$1 + a + \frac{a^2}{2} + \frac{a^3}{6} + \ldots$$which is $\sum_{i=0}^{\infty}\frac{a^i}{i!}$ in sum notation.
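The partial sums converge quickly to $e^a$, which is easy to confirm numerically using the term recurrence $\frac{a^{i+1}}{(i+1)!} = \frac{a^i}{i!}\cdot\frac{a}{i+1}$; a sketch:

```python
import math

def partial_sum(a, terms):
    total, term = 0.0, 1.0          # term starts at a**0 / 0! = 1
    for i in range(terms):
        total += term
        term *= a / (i + 1)         # a**i/i!  ->  a**(i+1)/(i+1)!
    return total

for a in [-2.0, 0.5, 1.0, 3.0]:
    assert abs(partial_sum(a, 60) - math.exp(a)) < 1e-9
```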
{ "language": "en", "url": "https://math.stackexchange.com/questions/666821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Subgroup of an abelian Group I think I have the proof correct, but my group theory is not that strong yet. If there is anything I am missing I would appreciate you pointing it out. Let $G$ be an abelian group (s.t. $gh = hg$ $\forall g,h\in G$). Show that $H = \{g\in G:g^2=e_G\}$ is a subgroup of $G$ where $e_G$ is the identity element of $G$. Proof: $1.$ Take $g \in H$. We know $g^2 = e_G$. So, we know $e_G^2 = e_G*e_G = e_G \in H$. $2.$ Let $g,h \in H \Rightarrow g^2=e_G, h^2=e_G \Rightarrow (gh)^2=g^2h^2=e_G*e_G=e_G$ $\Rightarrow gh\in H$. So we have closure in H. $3.$ Take $g \in H. g^2=e_G \Rightarrow g*g=e_G \Rightarrow g*(g*g^{-1})=e_G*g^{-1} \Rightarrow g=g^{-1} \Rightarrow g^{-1}\in H.$ So $H$ is a subgroup of $G.$ $\Box$
Looks very good, but it would be best to explicitly establish, in proving closure, that $$(gh)^2 = \underbrace{(gh)(gh) = g^2h^2}_{\large G \;\text{is abelian}}$$ I'm not sure if that's what you meant but you left that detail out of your proof(?), or if you erroneously made an immediate move by distributing the exponent: $(gh)^2 = g^2 h^2$, which is NOT true, in general.
{ "language": "en", "url": "https://math.stackexchange.com/questions/666903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which of the following is true (NBHM-2014) 1. If $f$ is twice continuously differentiable in (a,b) and if for all x $\in (a,b)$ $f''(x) + 2f'(x) + 3f(x) = 0$, then $f$ is infinitely differentiable on (a,b) 2. Let $f \in C[a,b]$ be differentiable in (a,b). If $f(a) = f(b) = 0$, then for any real number $\alpha$, there exists x $\in$ (a,b) such that $f'(x) + \alpha f(x) = 0 $ 3. The function defined below is not differentiable at x = 0 $$ f(x)=\begin{cases} x^2\left|\cos\frac{\pi}{x}\right|, & x \neq 0\\ 0, & x = 0. \end{cases}$$ I think (1) and (2) are true, but I am not sure. I think (3) is false. Thank you for sparing your valuable time in checking my solutions
For (3): $f'(0) = \lim_{h\rightarrow 0} \frac{f(h) - f(0)}{h} = \lim_{h\rightarrow 0} \frac{h^2\left|\cos\frac{\pi}{h}\right|}{h} = 0$, since the quotient is bounded in absolute value by $|h|$. So $f$ is differentiable at $0$, and (3) is indeed false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/667019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
The characteristic of a subdomain is the same as the integral domain in which it is contained? Perhaps I have a misunderstanding of what a subdomain and an integral domain are, but I'm having a hard time figuring this out. I'm asked to show that the characteristic of a subdomain is the same as the characteristic of the integral domain in which it is contained. What was tying me up is: $\mathbb Z_7$ is an integral domain. $\mathbb Z_3$ is also an integral domain, and every element in $\mathbb Z_3$ is contained in $\mathbb Z_7$, so isn't $\mathbb Z_3$ a subdomain of $\mathbb Z_7$? I assume it's probably fairly simple (a misunderstanding of a definition or something), but what am I missing here? (Edit: I, apparently, had a lapse in brain functioning which resulted in a pretty bad misunderstanding of subrings. Once this was fixed, the proof came naturally.)
Notice $1+1+1=0$ in $Z_3$ but $1+1+1\color{Red}{\ne}0$ in $Z_7$. Just because the symbols used to represent the elements of $Z_3$ are a subset of the symbols used to represent the elements of $Z_7$ doesn't mean that one structure sits inside the other structure. Not even remotely close. For a subset to be a subring its operations must be the same as those in the ring it sits inside. If $A\to B$ is any homomorphism of unital rings, we have ${\rm char}(B)\mid{\rm char}(A)$. Can you prove this? Consider the fact that the element $\underbrace{1_B+\cdots+1_B}_{{\rm char}(A)}\in B$ is the image of $\underbrace{1_A+\cdots+1_A}_{{\rm char}(A)}=0_A$. In particular if $B$ is a domain then its characteristic is either zero or a prime number (can you prove this fact?). Now take $A$ to be a subdomain of $B$, so the homomorphism is the inclusion and $1_A=1_B$. If ${\rm char}(B)=p$ then $p\cdot 1_A=p\cdot 1_B=0$, so ${\rm char}(A)$ divides the prime $p$; since $1_A\ne 0_A$, the characteristic of $A$ must also be $p$. I'll let you figure out the characteristic zero case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/667095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Which primepowers can divide $3^k-2$? I tried to get an overview of which prime powers $p^n$ divide $3^k-2$ for some natural k. PARI has a function znlog, but there are some issues : Instead of returning 0, if the discrete logarithm does not exist, an error occurs. So I cannot filter out, for which primes $3^n\equiv2\ (mod\ p)$ has a solution. znlog(2,Mod(3,11)) returns the wrong solution 7, if the order is omitted and returns an error again, if it is added. Of course, I can use brute force to filter out the primes, but I would like to have a better method. This leads to my question : How can I find out efficiently for which prime powers $p^n$ the equation $3^k \equiv 2\ (mod\ p^n)$ has a solution (I need not know, if the solution is unique). It seems that for arbitrarily high powers of 5 and 7, there is a solution. If so, is there a simple explanation why?
To answer your last question: It is well known that if $a$ is a primitive root mod $p^2$ for an odd prime $p$, then $a$ is a primitive root mod $p^n$ for all $n\ge 2$. $3$ is a primitive root mod $5^2$ and mod $7^2$. This explains why $3^k\equiv 2 \bmod p^n$ can be solved for $p=5$ and $p=7$ for all $n$: since $3$ generates the full group of units mod $p^n$, every unit, in particular $2$, is a power of $3$.
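A brute-force discrete logarithm confirms solvability for the first few powers of $5$ and $7$ (the function name is mine; this naive loop is fine for these small moduli, though serious discrete-log work would use something like baby-step giant-step):

```python
def dlog3(target, modulus):
    """Smallest k >= 1 with 3**k == target (mod modulus), or None."""
    power = 3 % modulus
    for k in range(1, modulus):      # the order of 3 is below the modulus
        if power == target % modulus:
            return k
        power = (power * 3) % modulus
    return None

for n in range(1, 6):
    for p in (5, 7):
        k = dlog3(2, p**n)
        assert k is not None and pow(3, k, p**n) == 2
        print(f"3^{k} = 2 (mod {p}^{n})")
```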
{ "language": "en", "url": "https://math.stackexchange.com/questions/667199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can somebody check my proof of this theorem about the derivative? I proved the following theorem and would greatly appreciate it if someone could check my proof: Theorem: Let $f:[a,b]\to \mathbb R$ be differentiable and $\alpha$ such that $f'(a) < \alpha < f'(b)$ (or $f'(a) > \alpha > f'(b)$ ) then there exists $c\in (a,b)$ with $f'(c) = \alpha$ Proof: Define a map $g:[a,b]\to \mathbb R$ , $x \mapsto f(x) - x \alpha$. This map is continuous. Therefore $g$ attains a minimum on $[a,b]$. Let $c$ be in $[a,b]$ with the property that $g(c) = \min_x g(x)$. If $c=a$ (or $c=b$) then $g(a) = f(a) -a\alpha \le g(x) = f(x) - x\alpha$ and therefore for all $x \in [a,b]$: ${f(a)-f(x)\over a - x} \ge \alpha > f'(a) = \lim_{x \to a}{f(a) - f(x) \over a-x}$. As this holds for all $ x\in [a,b]$ it follows that $\lim_{x\to a}{f(a)-f(x)\over a - x} \ge \alpha > f'(a) = \lim_{x \to a}{f(a) - f(x) \over a-x}$ which is a contradiction. Similarly derive a contradiction for $c=b$. Therefore $c \in (a,b)$. But if $g$ attains an extremum on $(a,b)$ then $g'(c)=0$ and hence $f'(c) = \alpha$.
This is Darboux's theorem. One point to be careful about: when you divide by $a-x$, which is negative for $x>a$, the inequality must be flipped; doing so gives $\frac{f(a)-f(x)}{a-x}=\frac{f(x)-f(a)}{x-a}\ge \alpha$, which is indeed what you wrote, so that step is fine. Handle the case $c=b$ the same way (there the difference quotients are $\le \alpha < f'(b)$), and for the second case $f'(a)>\alpha>f'(b)$, take a look at the maximum of $f(x)-x\alpha$ similarly to complete the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/667276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\sqrt{x}$ isn't a Lipschitz function A function $f$ such that $$ |f(x)-f(y)| \leq C|x-y| $$ for all $x$ and $y$, where $C$ is a constant independent of $x$ and $y$, is called a Lipschitz function. Show that $f(x)=\sqrt{x}\hspace{3mm} \forall x \in \mathbb{R_{+}}$ isn't a Lipschitz function. Indeed, there is no such constant $C$ where $$ |\sqrt{x}-\sqrt{y}| \leq C|x-y| \hspace{4mm} \forall x,y \in \mathbb{R_{+}} $$ we have only the inequality $$ |\sqrt{x}-\sqrt{y}|\leq |\sqrt{x}|+|\sqrt{y}| $$ Am I right? Remark for @Vintarel: I plotted it, but I don't know what "Lipschitz" means graphically; what is the big deal in the graph of the square-root function? In Wikipedia they said: "Continuous functions that are not (globally) Lipschitz continuous: The function $f(x) = \sqrt{x}$ defined on $[0, 1]$ is not Lipschitz continuous. This function becomes infinitely steep as $x$ approaches $0$ since its derivative becomes infinite. However, it is uniformly continuous as well as Hölder continuous of class $C^{0,\alpha}$ for $\alpha \leq 1/2$." Reference. 1] Could someone explain this to me by math and not by words, please? 2] What does "Lipschitz" mean graphically?
The square root is monotone, so just assume $x > y$; then you can drop the absolute values, and after dividing by $\sqrt{x}-\sqrt{y}$ the condition simplifies to $1 \leq C(\sqrt{x} + \sqrt{y})$. Since you can make the sum of square roots arbitrarily small (by suitably decreasing $x$ and $y$), as soon as it's smaller than $1/C$ the inequality no longer holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/667346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 6, "answer_id": 3 }
Evaluate using cauchy's integral formula How can we evaluate this expression using cauchy's integral formula $\int_C \frac{e^{\pi Z}}{ ( {Z^2 + 1}) ^2} dZ$ where $C$ is $|Z-i|=1$
Clearly $$ \int_{|z-i|=1} \frac{e^{\pi z}}{ ( {z^2 + 1}) ^2} dz=\int_{|z-i|=1} \frac{\frac{\mathrm{e}^{\pi z}}{(z+i)^2}}{ ( {z-i}) ^2} dz. $$ According to the Cauchy Integral formula $f'(a)=\frac{1}{2\pi i}\int_{|z-a|=r}\frac{f(z)}{(z-a)^2}dz$ we have for $a=i$: \begin{align} \frac{1}{2\pi i}\int_{|z-i|=1} \frac{\frac{\mathrm{e}^{\pi z}}{(z+i)^2}}{ ( {z-i}) ^2} dz &=\left(\frac{\mathrm{e}^{\pi z}}{(z+i)^2}\right)'_{z=i}=\left(\frac{\pi\mathrm{e}^{\pi z}}{(z+i)^2}-2\frac{\mathrm{e}^{\pi z}}{(z+i)^3}\right)_{z=i} \\ &=\frac{\pi\mathrm{e}^{\pi i}}{(i+i)^2}-2\frac{\mathrm{e}^{\pi i}}{(i+i)^3}=\frac{-\pi}{-4}-2\frac{-1}{-8i} =\frac{\pi}{4}+\frac{i}{4}. \end{align} Thus $$ \int_{|z-i|=1} \frac{\frac{\mathrm{e}^{\pi z}}{(z+i)^2}}{ ( {z-i}) ^2} dz=\frac{\pi^2 i}{2}-\frac{\pi}{2}. $$
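The result can be double-checked by numerically integrating over the parametrized contour $z(\theta)=i+e^{i\theta}$; the trapezoidal rule is extremely accurate here because the integrand is analytic and periodic in $\theta$. A sketch:

```python
import cmath, math

def f(z):
    return cmath.exp(math.pi * z) / (z*z + 1)**2

N = 400
total = 0j
for j in range(N):
    t = 2 * math.pi * j / N
    e = cmath.exp(1j * t)
    total += f(1j + e) * 1j * e * (2 * math.pi / N)   # dz = i e^{i t} dt

exact = (math.pi**2 / 2) * 1j - math.pi / 2
assert abs(total - exact) < 1e-8
print(total, exact)
```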
{ "language": "en", "url": "https://math.stackexchange.com/questions/667430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Translations in two dimensions - Group theory I have just started learning Lie Groups and Algebra. Considering a flat 2-d plane, if we want to translate a point from $(x,y)$ to $(x+a,y+b)$ then can we write it as : $$ \left( \begin{array}{ccc} x+a \\ y+b \end{array} \right) = \left( \begin{array}{ccc} x \\ y \end{array} \right) + \left( \begin{array}{ccc} a \\ b \end{array} \right)$$ Now the set of all translations $ T = \left( \begin{array}{ccc} a \\ b \end{array} \right) $ form a two parameter Lie group (I presume) with addition of columns as the composition rule. If that is so, how do I go about finding the generators of this transformation. I know the generators of translation are the linear momenta in the corresponding directions. But I am not able to see this here. PS: In my course I have been taught that the generators are found by calculating the Taylor expansion of the group element about the Identity of the group. For instance, for the $\operatorname{SO}(2)$ group $$ M = \left( \begin{array}{cc} \cos \:\phi & -\sin \:\phi \\ \sin \:\phi & \cos \:\phi \end{array} \right) $$ I obtain the generator by taking $$ \frac{\partial M}{\partial \phi}\Bigg|_{\phi=0} = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right) $$ Now if I exponentiate this, I can obtain back the group element. My question is how do I do this for the translation group. EDIT: This edit is to summarise and get a view of the answers obtained.
Firstly, the vector representation of the translation group (for 2D) would in general have the form : $$ \begin{pmatrix} 1 & 0 & a_x\\ 0 & 1 & a_y \\ 0 & 0 & 1 \end{pmatrix}\ $$ with generators (elements of Lie algebra) $$ T_x =\begin{pmatrix} 0 & 0 & i\\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\ , \;\; T_y = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & i \\ 0 & 0 & 0 \end{pmatrix}\ $$ Secondly, the scalar-field representation of the same is given by the differential operators $$ \exp\left( i\left(a_x\frac{\partial}{\partial x}+ a_y\frac{\partial}{\partial y}\right) \right) $$ with generators $$ T_x^s = i\frac{\partial}{\partial x},\;\;T_y^s = i\frac{\partial}{\partial y} $$ The Lie algebra is two-dimensional and abelian : $ [T_x,T_y] = 0$
As suresh mentioned if the vector is just a two component object then you can't translate it without expanding the vector. However, if you consider the vector to be variable (which are essentially infinite vectors) then it can be translated. To find the differential form of a translation, start with the translation of a 1D dimensional vector, $x$: \begin{align} e ^{ i\epsilon {\cal P} } x & = x + \epsilon \\ \left( 1 + i \epsilon {\cal P} \right) x &= x + \epsilon \\ {\cal P} x & = - i \end{align} Thus we must have $ {\cal P} = - i \frac{ \partial }{ \partial x } $. Now it is easy to extend this to two dimensions: \begin{align} e ^{ i\epsilon _x {\cal P} _x + i \epsilon _y {\cal P} _y } \left( \begin{array}{c} x \\ y \end{array} \right) & = \left( \begin{array}{c} x + \epsilon _x \\ y + \epsilon _y \end{array} \right) \\ i \left( \epsilon _x {\cal P} _x + \epsilon _y {\cal P} _y \right) \left( \begin{array}{c} x \\ y \end{array} \right) &= \left( \begin{array}{c} x + \epsilon _x \\ y + \epsilon _y \end{array} \right) \end{align} where we have two different generators since you have two degrees of freedom in the transformation you gave in your question. This expression requires, \begin{align} & {\cal P} _x = \left( \begin{array}{cc} - i \frac{ \partial }{ \partial x } & 0 \\ 0 & 0 \end{array} \right) \\ & {\cal P} _y = \left( \begin{array}{cc} 0 & 0 \\ 0 & - i \frac{ \partial }{ \partial y } \end{array} \right) \end{align}
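To connect the two pictures concretely, one can exponentiate the matrix generators from the edit above and recover the affine translation matrix in homogeneous coordinates. This is a minimal sketch with a hand-rolled matrix exponential; the sign/factor-of-$i$ conventions follow the summary ($g=e^{-i(a_xT_x+a_yT_y)}$), and the truncated series is actually exact here because the generators are nilpotent:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(M, terms=8):
    # truncated power series exp(M) = sum M^k / k!; exact for nilpotent M
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1
    for k in range(1, terms):
        power = matmul(power, M)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

I = 1j
Tx = [[0, 0, I], [0, 0, 0], [0, 0, 0]]
Ty = [[0, 0, 0], [0, 0, I], [0, 0, 0]]

ax, ay = 2.5, -1.0
# group element g = exp(-i (ax*Tx + ay*Ty)); the argument is nilpotent (squares to zero)
arg = [[-I * (ax * Tx[i][j] + ay * Ty[i][j]) for j in range(3)] for i in range(3)]
g = expm(arg)
```

The result `g` is the expected translation matrix with `ax` and `ay` in the last column, which is exactly the vector representation quoted in the edit.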
{ "language": "en", "url": "https://math.stackexchange.com/questions/667502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Does negative zero exist? In the set of real numbers, there is no negative zero. However, can you please verify if and why this is so? Is zero inherently "neutral"?
My thought on the problem is that negation is just multiplication by $-1$: we have $-a = (-1)\cdot a$ for every $a$, just as $-x$ means negative one times $x$ and $-1$ is negative one times $1$. So "negative zero", which can be expressed as $-0$, is simply $(-1)\cdot 0 = 0$. It exists as an expression, but it is not a new number — it is the same number as $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/667577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 1 }
Mean value theorem for vectors I would like some help with the following proof(this is not homework, it is just something that my professor said was true but I would like to see a proof): If $f:[a,b]\to\mathbb{R}^k$ is continuous and differentiable on $(a,b)$, then there is a $a< d < b$ such that $\|f(b)-f(a)\|\le\|f '(d)\|(b-a)$ Thanks for any help in advance. EDIT: I forgot to mention that my professor said how it could be proven. He said that one could let u be a unit vector in the direction of f(b)-f(a) and go from there.
Assume $f(b) \neq f(a)$ (otherwise the claim is trivial), and let $u$ be the unit vector in the direction of $f(b)-f(a)$, so that $u \cdot (f(b)-f(a)) = \|f(b)-f(a)\|$. Consider $$ f(b) - f(a) = \int_a^b f'(t) dt. $$ Take a dot product with $u$ on both sides, to get $$ \| f(b) - f(a) \| = \left|\int_a^b u \cdot f'(t) \, dt \right|. $$ Now suppose, for contradiction, that $\|f'(d)\| < \|f(b) - f(a) \| / (b - a)$ for every $d \in (a,b)$: $$ \| f(b) - f(a) \| = \left|\int_a^b u \cdot f'(t) \, dt \right| \\ \le \int_a^b \|u\| \|f'(t)\| dt \\ = \int_a^b \|f'(t)\| dt \\ < \int_a^b \|f(b) - f(a) \| / (b - a) \, dt \\ = \|f(b) - f(a) \| $$ That's a contradiction, so there must be some $d \in (a,b)$ with $\|f(b)-f(a)\| \le \|f'(d)\|(b-a)$.
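Since the statement is existential, a quick numeric check only needs one witness $d$: pick a curve, compute $\|f(b)-f(a)\|$, and scan a grid. The curve $f(t)=(t^2,t^3)$ below is an arbitrary illustration:

```python
import math

def f(t):
    # an illustrative curve in R^2
    return (t * t, t ** 3)

def fprime(t):
    return (2 * t, 3 * t * t)

def norm(v):
    return math.sqrt(sum(x * x for x in v))

a, b = 0.0, 1.0
lhs = norm(tuple(fb - fa for fb, fa in zip(f(b), f(a))))   # ||f(b) - f(a)||

# the theorem asserts SOME d in (a, b) works; scan a grid for a witness
found = any(norm(fprime(a + (b - a) * k / 1000)) * (b - a) >= lhs
            for k in range(1, 1000))
```

Here `lhs` is $\sqrt{2}$ and any $d$ close to $1$ works, since $\|f'(d)\|\to\sqrt{13}$ there.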
{ "language": "en", "url": "https://math.stackexchange.com/questions/667645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Inner product question We are given an inner product of $\mathbb R^3$: $f\left(\begin{pmatrix} x_1\\x_2\\x_3\end{pmatrix},\begin{pmatrix} y_1\\y_2\\y_3\end{pmatrix}\right) = 2x_1y_1+x_1y_2+y_2x_1+2x_2y_2+x_2y_3+y_3x_2+x_3y_3$ We are given a linear transformation $T$ such that: $$T\begin{pmatrix} \;\;1\\\;\;0\\-1\end{pmatrix}=a\begin{pmatrix} \;\;1\\\;\;0\\-1\end{pmatrix}$$ $$T\begin{pmatrix}\;\;0\\\;\;1\\-1\end{pmatrix}=b\begin{pmatrix}\;\;0\\\;\;1\\-1\end{pmatrix}$$ $$T\begin{pmatrix}\;\;1\\-1\\\;\;2\end{pmatrix}=c\begin{pmatrix}\;\;1\\-1\\\;\;2\end{pmatrix}$$ Show that $T=T^*$ with respect to $f$ if and only if $a=b$. Meaning, for all $v\in \mathbb R^3$, $f(Tv,v)=f(v,Tv)$ if and only if $a=b$ The intuition says let's do Gram-Schmidt, look at the matrix of $T$ with respect to that orthonormal basis, transpose it, and see they are equal if $a=b$. In practice, that didn't work.
Note that the vectors you're given the effect of T on are a basis for $ \mathbb{R}^3$ of eigenvectors of T. Since everything is linear, you need only prove that $f(Tv,w) = f(v,Tw)$ iff $a = b$ for $v, w$ a pair (that could be equal) of these basis vectors. Note that you do need the equation to hold with the vectors used on each side not equal. Try substituting some of these combinations in and see if you get anywhere. I think (but haven't actually written out the algebra) that it should work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/667769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Analytical solution for nonlinear equation Simple question: Does $\alpha = \frac{x}{\beta} - \left(\frac{x}{\gamma}\right) ^{1/\delta}$ have an analytical solution? ($\alpha,\beta,\gamma,\delta$ are constant) I'm working on big data arrays and either need to solve this equation analytically or spend resources crunching away at least squares, iterations etc. It comes from the Sersic light profile of galaxies which is talked about here (Equation 6). I need to find where the difference between galaxy components is less than a certain value So, I'm trying to solve: $\mu_{bulge} - \mu_{exp-disc} - \rm{limit} = 0$ ($\mu$s are from equation 6) By looking at the profiles, it's obvious that this equation has zero to two real solutions (bulge overlapping the disc
For certain values of the constants, yes. In general, no.
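Since a closed form is generally unavailable, the practical route for large data arrays is a cheap one-dimensional root-finder per equation. Below is a dependency-free bisection sketch for $\alpha = x/\beta - (x/\gamma)^{1/\delta}$; the parameter values are purely illustrative, chosen so the root is exactly $x=4$ (with $\beta=\gamma=1$, $\delta=2$, $\alpha=2$ the equation is $x-\sqrt{x}=2$):

```python
def g(x, alpha, beta, gamma, delta):
    # residual of alpha = x/beta - (x/gamma)^(1/delta), for x > 0
    return x / beta - (x / gamma) ** (1.0 / delta) - alpha

def bisect(h, lo, hi, iters=200):
    # plain bisection; assumes h(lo) and h(hi) bracket a root
    flo = h(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        fm = h(mid)
        if flo * fm <= 0:
            hi = mid
        else:
            lo, flo = mid, fm
    return 0.5 * (lo + hi)

root = bisect(lambda x: g(x, 2.0, 1.0, 1.0, 2.0), 1.0, 10.0)
```

For the zero-to-two-real-solutions situation described in the question, one would bracket each candidate root separately before bisecting.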
{ "language": "en", "url": "https://math.stackexchange.com/questions/667866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to tackle a recurrence that contains the sum of all previous elements? Say I have the following recurrence: $$T(n) = n + T\left(\frac{n}{2}\right) + n + T\left(\frac{n}{4}\right) + n + T\left(\frac{n}{8}\right) + \cdots +n + T\left(\frac{n}{n}\right) $$ where $n = 2^k$, $k \in \mathbb{N} $ and $T(1) = 1$. simplified to: $$T(n) = n \log_2n + \sum_{i=1}^{\log_2n}T\left(\frac{n}{2^i}\right) $$ The Master's theorem is not applicable; neither is the Akra-Bazzi method since $k = \log_2$ is not a constant. What strategy can I use to find a closed form solution? I have a feeling that the closed form is $T(n) = \sum_{i=0}^{\log_2n}\left[j\frac{n}{2^i} \log_2 \left(\frac{n}{2^i} \right)\right] + 1 $ where $j = \max\left(1, 2^{i-1}\right)$ but would like a proof.
I'd just start with $T(1)$ and look for a pattern: $$T(2^1) = 1 \cdot 2^1 + 2^{1-1}T(1)$$ $$T(2^2) = 2\cdot 2^2 + 1\cdot 2^1 + (2^{2-1}) T(1)$$ $$T(2^3) = 3\cdot 2^3 + 2\cdot 2^2 + 2 \cdot 1 \cdot 2^1 + 2^{3-1} T(1)$$ $$T(2^4) = 4\cdot 2^4 + 3\cdot 2^3 + 2 \cdot 2 \cdot 2^2 + 4 \cdot 1 \cdot 2^1 + 2^{4-1} T(1)$$ so the coefficient in front of $k \cdot 2^k$ is $1$ for $k = n$ and $k = n-1$, and $2^{n-k-1}$ for $k \le n-2$. Hence, if $T(1) = 1$, $$T(2^n) = 2^{n-1} + \sum_{k=1}^n k \cdot 2^k + \sum_{k=1}^{n-2} (2^{n-k-1}-1)\, k \cdot 2^k.$$
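The pattern can be checked mechanically against the recurrence. Matching coefficients in the expansions gives $T(2^n) = 2^{n-1} + \sum_{k=1}^{n} k\,2^k + \sum_{k=1}^{n-2}\left(2^{n-k-1}-1\right)k\,2^k$ (note the factor $k$ inside the second sum), and a short script confirms this for the first several exponents:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(k):
    # T evaluated at n = 2^k, straight from the recurrence, with T(1) = T(2^0) = 1
    if k == 0:
        return 1
    return (2 ** k) * k + sum(T(k - i) for i in range(1, k + 1))

def closed_form(k):
    # the conjectured pattern for n = 2^k
    if k == 0:
        return 1
    return (2 ** (k - 1)
            + sum(j * 2 ** j for j in range(1, k + 1))
            + sum((2 ** (k - j - 1) - 1) * j * 2 ** j for j in range(1, k - 1)))
```

For instance `T(4)` (that is, $T(16)$) evaluates to $120$ both ways.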
{ "language": "en", "url": "https://math.stackexchange.com/questions/667929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Scaling of Fractional ideals For fractional ideals of a Dedekind Domain, are each of the elements that generate the ideal (ie. form the basis of the lattice associated with the ideal) always scaled by the same amount? That is to say, scaled by the same element from the field of fractions?
I'm not sure if I understand the question, but a fractional ideal is a fraction times an actual ideal, by the definition, i.e. for a fractional ideal $I$ over a ring $R$ we have some $r\in R$ such that $rI\unlhd R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/668036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that $|ON|\le \sqrt{a^2+b^2}$ Let $M:\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$ be an ellipse, let $A,B$ be two points on $\partial M$, and let $C\in AB$ be the midpoint of $AB$, so that $AC=BC$; let circle $C$ denote the circle with diameter $AB$ (centered at $C$). For any point $N$ on $\partial C$, show that $$|ON|\le\sqrt{a^2+b^2}$$ my try: let $$A(x_{1},y_{1}),B(x_{2},y_{2}),C(\dfrac{x_{1}+x_{2}}{2},\dfrac{y_{1}+y_{2}}{2})$$ then $$\dfrac{x^2_{1}}{a^2}+\dfrac{y^2_{1}}{b^2}=1,\dfrac{x^2_{2}}{a^2}+\dfrac{y^2_{2}}{b^2}=1$$ so the circle $C$ equation is $$\left(x-\dfrac{x_{1}+x_{2}}{2}\right)^2+\left(y-\dfrac{y_{1}+y_{2}}{2}\right)^2=r^2$$ where $$4r^2=(x_{1}-x_{2})^2+(y_{1}-y_{2})^2$$ let $N(c,d)$,then $$\left(c-\dfrac{x_{1}+x_{2}}{2}\right)^2+\left(d-\dfrac{y_{1}+y_{2}}{2}\right)^2=r^2$$ How can I prove $$c^2+d^2\le a^2+b^2?$$ Thank you
Jack has already written quite an answer to this problem, but I couldn't believe that it didn't have more elegant solution. As my sense of beauty didn't leave me at peace, I couldn't help but find some geometrical ideas behind this problem. I must also add that I really enjoyed solving it. As Tian has already noted, it suffices to show that $OC + CA \leqslant \sqrt{a^2 + b^2}$. Let's reflect points $A$, $B$ and $C$ across the point $O$. We get a parallelogram $ABA'B'$ and the desired sum $\color{green}{OC + CA}$ equals to $\frac{1}{4}$ of the perimeter of $ABA'B'$ (since $CC'$ is the middle line of $ABA'B'$). So we want to show that perimeter of parallelogram inscribed in an ellipse is not bigger than $4\sqrt{a^2 + b^2}$. In other words, it would be great if we knew how to maximize the perimeter of such a parallelogram. Fortunately, humanity knows how to solve this problem, namely: Among all the parallelograms inscribed in a given ellipse those of maximal perimeter have the property that any point on the ellipse is the vertex of exactly one. (paraphrasing result of this article) Moreover, according to Lemma from that article, the parallelogram has maximal perimeter iff tangent lines at adjacent vertices are perpendicular to each other. This leads us to the final fact that the maximum perimeter in originally stated problem is $4\sqrt{a^2 + b^2}$ because choosing point $A$ so that it lies on the coordinate axis produces exactly rhombus with a side equal to $\sqrt{a^2 + b^2}$.
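The bound, and the fact that the rhombus configuration attains it, can be sanity-checked numerically. The sketch below uses arbitrary semi-axes $a=3$, $b=2$ and the observation that $\max_N |ON| = |OC| + |CA|$ for $N$ on the circle with center $C$ and radius $|CA|$:

```python
import math
import random

random.seed(0)
a, b = 3.0, 2.0
bound = math.sqrt(a * a + b * b)

def chord_stat(t1, t2):
    # midpoint C of the chord AB, then max over N of |ON| equals |OC| + |CA|
    A = (a * math.cos(t1), b * math.sin(t1))
    B = (a * math.cos(t2), b * math.sin(t2))
    C = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
    oc = math.hypot(C[0], C[1])
    r = math.hypot(A[0] - C[0], A[1] - C[1])
    return oc + r

worst = max(chord_stat(random.uniform(0, 2 * math.pi),
                       random.uniform(0, 2 * math.pi))
            for _ in range(20000))
extreme = chord_stat(0.0, math.pi / 2)   # A on the x-axis, B on the y-axis
```

No random chord exceeds $\sqrt{a^2+b^2}$, while the axis-endpoint chord (a quarter of the rhombus from the argument above) hits the bound exactly.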
{ "language": "en", "url": "https://math.stackexchange.com/questions/668125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Using Induction, prove that $107^n-97^n$ is divisible by $10$ Using Induction, prove that $107^n-97^n$ is divisible by $10$ We need to prove the basis first, so let $ n = 1 $ $107^1-97^1$ $107-97 = 10$ This statement is clearly true when $ n = 1 $ Now let's use $P(k)$ $107^k-97^k$ So far so good... next I have to use $P(k+1)$ and there is one part that is driving me nuts. The induction hypothesis implies that $107^k-97^k = 10 m$ $107^{k+1}-97^{k+1}$ $107^k * 107-97^k * 97$ I know that $107^k = 10m+97^k$ $(10m+97^k) * 107 -97^k * 97$ I am lost at this line. What do I do next? Please explain very clearly because I tried to look everywhere online and there aren't very good explanations for after this step.
Note that $$107^{k+1}-97^{k+1}\\=107^k\cdot 107-97^k\cdot 97\\ =107^k\cdot (10+97)-97^k\cdot 97\\ =107^k\cdot 10+107^k\cdot 97-97^k\cdot 97\\ =107^k\cdot 10+(107^k-97^k)\cdot 97$$ where the first term is divisible by $10$, and the second term is also divisible by $10$ by induction.
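The induction result can also be cross-checked by brute force for many exponents (a trivial sanity check, using Python's exact big integers):

```python
# direct confirmation that 10 divides 107^n - 97^n for a range of n
def divisible_by_ten(n):
    return (107 ** n - 97 ** n) % 10 == 0

checks = all(divisible_by_ten(n) for n in range(1, 300))
```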
{ "language": "en", "url": "https://math.stackexchange.com/questions/668202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What are functions with the property $f(f(x)) = x$ called? Do functions which, when composed with themselves, are equivalent to the identity function (i.e. functions for which $f(f(x)) = x$ in general) have a name and if so, what is it? Additionally, am I correct in saying that such a function has a splinter of two, or is it perhaps splinter of size 2 or something else entirely? Or could I say that such a function has an orbit of size 2?
These are involutions. The orbits of an involution all have size $1$ or $2$. What is a splinter?
{ "language": "en", "url": "https://math.stackexchange.com/questions/668307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation with the big O notation How can I prove the equality below? $$ \frac{1}{1 + O(n^{-1})} = 1 + O({n^{-1}}), $$ where $n \in \mathbb{N}$ and we are considering the situation when $n \to \infty$. It is clearly true, but I don't know which property I have to use to prove it formally. I will appreciate it if someone could give me some clues.
If $f(n) = O(n^{-1})$ then $f(n) \rightarrow 0$, so $|f(n)| < 1$ for all sufficiently large $n$. Using $$\frac{1}{1-z} = \sum_{k=0}^{\infty}z^{k} \ \ \ \ , \ \ |z|<1$$ we have $$\frac{1}{1 + f(n)} = \sum_{k=0}^{\infty}(-1)^{k}f(n)^{k} = 1 + O\left(\frac{1}{n}\right),$$ since the tail $\sum_{k\ge 1}(-1)^{k}f(n)^{k}$ is bounded in absolute value by $\frac{|f(n)|}{1-|f(n)|} = O(n^{-1})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/668408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Openness w.r.t. these two metrics are equivalent. Suppose $(X,d)$ is a metric space. Define $\delta:X\times X\rightarrow[0,\infty)$, as $$\delta(x,y)=\frac{d(x,y)}{1+d(x,y)}.$$ It is easy to show that $\delta$ is a metric as well, but I am having difficulty in showing that if a subset of $X$ is $d$-open , then it is $\delta$-open too. Thanks in advance!
Let us take the $d$-open ball $B=\{x\,:\,d(x,x_0)<R\}$ and show that it is $\delta$-open. Note that the function $f(x)=\frac{x}{1+x}=\frac{1}{1+\frac{1}{x}}$ is strictly increasing. Therefore $d(x,x_0)<R$ is equivalent to $\delta(x,x_0)=f(d(x,x_0))<f(R)$, i.e. $$B=\{x\,:\,\delta(x,x_0)<f(R)\}$$ That is, every $d$-ball is also a $\delta$-ball. For the converse notice that $f:(0,\infty)\rightarrow(0,\infty)$ is bijective. Therefore also every $\delta$-ball is a $d$-ball: $$\{x\,:\,\delta(x,x_0)<R\}=\{x\,:\,d(x,x_0)<f^{-1}(R)\}$$ The topologies generated by $d$ and $\delta$ have the same set of open balls, i.e. they are equal: every $d$-open set is $\delta$-open and vice versa.
{ "language": "en", "url": "https://math.stackexchange.com/questions/668483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Center of Mass via integration for ellipsoid I need some help with the following calculation: I have to calculate the coordinates of the center of mass for the ellipsoid $$\left( \frac{x}{a} \right)^2 + \left( \frac{y}{b} \right)^2 + \left( \frac{z}{c} \right)^2 \le 1, \quad z \ge 0$$ with mass-density $\mu(x,y,z)=z^2$. I wanted to use: $$ \begin{align} x & = a r \sin\theta \cos\varphi \\ y & = b r \sin\theta \cos\varphi \\ z & = c r \cos\theta \end{align} $$ whereas $$ \begin{gather} 0 \le r \le 1, \\ 0 \le \theta \le \pi, \\ 0 \le \varphi \le 2\pi \end{gather} $$ and $$\frac{\partial (x,y,z)}{ \partial (r, \theta, \varphi)} = r^2 \sin\theta.$$ Did I choose the right things so far? 1) $$ \begin{align} M & = \int\limits_E \mu(x,y,z) d(x,y,z) \\ & = \int_0^1 \hspace{-5pt} \int_0^{\pi} \hspace{-5pt} \int_0^{2\pi} c^2 r^2 \cos^2\theta \cdot r^2 \sin(\theta) d(r, \theta, \varphi) \\ & = c^2 \int_0^1 r^4 dr \int_0^\pi \sin\theta \cdot \cos^2\theta d\theta \int_0^{2\pi} d\varphi \\ & = \frac{4\pi c^2}{15}. \end{align} $$ 2) $$x_s \cdot M = \ldots $$ Here I get $\int_0^{2\pi} \cos\varphi \, d \varphi = 0$, so the whole product is zero, so $x_s$ is zero too?? What am I doing wrong?
The mass density is invariant under $x\rightarrow -x$ and $y\rightarrow -y$, so the center of mass must have $x=y=0$. You do still need to find its $z$-coordinate, but since the mass density is only a function of $z$, you can reduce this to a one-dimensional integral. At a given value of $z$, the cross-section is an ellipse with semi-major and semi-minor axes $a\sqrt{1-(z/c)^2}$ and $b\sqrt{1-(z/c)^2}$; this ellipse has area $\pi a b (1-(z/c)^2)$. The mass of a slice of thickness $dz$ at that altitude is therefore $dm=\pi a b (z^2 - z^4/c^2)dz$. The $z$-coordinate of the center of mass is $$ M_z=\frac{\int_{z=0}^{z=c}zdm}{\int_{z=0}^{z=c} dm}=\frac{\int_{0}^{c}(z^3-z^5/c^2)dz}{\int_{0}^{c}(z^2-z^4/c^2)dz}=\frac{\frac{1}{4}c^4-\frac{1}{6c^2}c^6}{\frac{1}{3}c^3-\frac{1}{5c^2}c^5}=\frac{\frac{1}{4}-\frac{1}{6}}{\frac{1}{3}-\frac{1}{5}}c=\frac{5}{8}c. $$
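The $\frac{5}{8}c$ answer is easy to confirm with a crude midpoint-rule integration of the one-dimensional slice integrals above. Standard library only; the semi-axes below are arbitrary, since the result depends only on $c$:

```python
import math

def z_center_of_mass(a, b, c, n=100000):
    # midpoint-rule evaluation of (int z dm) / (int dm),
    # with dm = z^2 * pi*a*b*(1 - (z/c)^2) dz as derived above
    num = den = 0.0
    h = c / n
    for k in range(n):
        z = (k + 0.5) * h
        area = math.pi * a * b * (1.0 - (z / c) ** 2)
        dm = z * z * area * h
        num += z * dm
        den += dm
    return num / den

a, b, c = 2.0, 3.0, 1.5
zc = z_center_of_mass(a, b, c)
```

The computed `zc` matches $5c/8 = 0.9375$ for $c = 1.5$.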
{ "language": "en", "url": "https://math.stackexchange.com/questions/668593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Using "we have" in maths papers I commonly want to use the phrase "we have" when writing mathematics, to mean something like "most readers will know this thing and I am about to use it". My primary question is whether this is too colloquial. My secondary question is what the alternatives are if it is too colloquial. For example, right now I have a sentence "Given a point $P\in X$ we have the residue map ${\text {res}}_P \colon \Omega_{K(X)} \rightarrow k$, as defined in ...". I don't feel saying "there exists the" is quite right. Even if it is grammatically correct, I don't think this conveys the implication that it will be almost certainly familiar to the expected audience. I have seen this question but I feel this is slightly different. If not then my apologies.
I would replace " we have" by ", then" or just ", "
{ "language": "en", "url": "https://math.stackexchange.com/questions/668645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 12, "answer_id": 2 }
Equivalence of Two Lorentz Groups $O(3,1)$ and $O(1,3)$ How can I prove that $O(3,1)$ and $O(1,3)$ are the same group?
The matrices $M$ in $O(3,1)$ and $O(1,3)$ are defined by the condition $$ M G M^T = G $$ for $$ G=G_{3,1} ={\rm diag} (1,1,1,-1)\text{ and } G=G_{1,3} = {\rm diag} (1,-1,-1,-1)$$ respectively. I use the convention where the first argument counts the number of $+1$'s in the metric tensor and the second one counts the negative numbers $-1$ that follow. But these two groups only differ by a permutation of the entries. First, note that it doesn't matter whether we have a "mostly plus" or "mostly minus" metric. If you change the overall sign of the metric via $G\to -G$, $MGM^T = G$ will remain valid. Second, the two groups only differ by having the "different signature coordinate" at the beginning or at the end. But it may be permuted around. If $M$ obeys the first condition of $O(1,3)$, $MG_{1,3}M^T =G_{1,3}$, you may define $$ M' = P M P^{-1} $$ where $P$ is the cyclic permutation matrix $$ P = \pmatrix{0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0}$$ and it is easy to see that $M'$ will obey $$ M' G_{3,1} M^{\prime T} = G_{3,1} $$ simply because $$ M' G_{3,1} M^{\prime T} = PMP^{-1} G_{3,1} P^{-1T} M^T P^T $$ but $P^{-1}=P^T$ and, crucially, $$ P^{-1} G_{3,1} P = -G_{1,3} $$ So all the $P$'s will combine or cancel and one gets it. One should try to get through every step here but the reason why the groups are isomorphic is really trivial: they are the groups of isometries of the "same" spacetime, one that has three spacelike and one timelike dimension, and they only differ by the convention how we order the coordinates and whether we use a mostly-plus or mostly-minus metric. But such changes of the notation don't change the underlying "physics" so the groups of symmetries of the "same" object parameterized differently must be isomorphic.
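A small numerical check of the key identities, spelling the convention out explicitly as $G_{1,3}=\mathrm{diag}(1,-1,-1,-1)$ and $G_{3,1}=\mathrm{diag}(1,1,1,-1)$, with $P$ the cyclic permutation above; the boost parameter is arbitrary:

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(X):
    return [[X[j][i] for j in range(4)] for i in range(4)]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(4) for j in range(4))

def diag(*d):
    return [[d[i] if i == j else 0.0 for j in range(4)] for i in range(4)]

G13 = diag(1, -1, -1, -1)
G31 = diag(1, 1, 1, -1)
negG13 = diag(-1, 1, 1, 1)
P = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]

# the crucial identity: P^{-1} G31 P = -G13 (using P^{-1} = P^T)
conj = matmul(transpose(P), matmul(G31, P))
ok_conj = close(conj, negG13)

# a boost in the (t, x)-plane lies in O(1,3); its conjugate lies in O(3,1)
phi = 0.7
ch, sh = math.cosh(phi), math.sinh(phi)
M = [[ch, sh, 0, 0], [sh, ch, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
Mp = matmul(P, matmul(M, transpose(P)))

ok_13 = close(matmul(M, matmul(G13, transpose(M))), G13)
ok_31 = close(matmul(Mp, matmul(G31, transpose(Mp))), G31)
```

The conjugation $M \mapsto PMP^{-1}$ is the explicit isomorphism: it sends a matrix preserving $G_{1,3}$ to one preserving $G_{3,1}$.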
{ "language": "en", "url": "https://math.stackexchange.com/questions/668792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Multiply (as a Babylonian): 141 times 17 1/5 How do we multiply 141 times 17 1/5 as a Babylonian? I wasn't sure about the space between 17 and 1/5; now I see that 17 1/5 is 17.2 in our notation. Is there a formula with which I can solve this? Any hint or comment would be very much appreciated!
Hint: Note that ${1\over5} = {12\over 60}$, and try to write both numbers in sexagesimal notation. Then multiply in the same way as we multiply numbers in decimal notation. (The Babylonians would have multiplication tables to help them with this.)
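The hint can be carried out mechanically. Here is a small sketch that converts to base-60 digits with exact rational arithmetic; the digit-list format (integer digits most-significant first, then fractional digits) is an arbitrary representational choice:

```python
from fractions import Fraction

def to_sexagesimal(x, frac_places=3):
    # split a non-negative rational into base-60 integer and fractional digits
    x = Fraction(x)
    whole = int(x)
    frac = x - whole
    int_digits = []
    while True:
        whole, d = divmod(whole, 60)
        int_digits.append(d)
        if whole == 0:
            break
    int_digits.reverse()
    frac_digits = []
    for _ in range(frac_places):
        frac *= 60
        d = int(frac)
        frac_digits.append(d)
        frac -= d
    return int_digits, frac_digits

x = Fraction(141)                    # 2,21 in sexagesimal
y = Fraction(17) + Fraction(1, 5)    # 17;12 in sexagesimal
product = x * y                      # 2425 1/5
```

Read off, this says $2{,}21 \times 17{;}12 = 40{,}25{;}12$ in sexagesimal, i.e. $2425\tfrac{1}{5}$.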
{ "language": "en", "url": "https://math.stackexchange.com/questions/668854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The elements of finite order in an abelian group form a subgroup: proof check If G is an abelian group, show that the set of elements of finite order is a subgroup of G. Proof: Let G be an abelian group and H be the set of elements of finite order. (1) nonempty Now e ∈ H, since $a^n$ = e, by definition of order, and |e| = 1 ∈ H. Thus, H is nonempty. (2) Closure Let a, b ∈ H, where |a| = k and |b| = m for all k, m ∈ G. Then $a^k$ = e and $b^m$ = e. So $a^{km} = (a^k)^m = e$ and $b^{km} = (b^k)^m = e$. Then $(ab)^{km} = e$ and $ab$ ∈ H with a finite order at most $km$. Hence, H has closure. (3) Inverse Let a ∈ H with |a| = k for all k ∈ G. Then $(aa^{-1})^k = e$. So $a^k(a^-1)^k = e$. Since $a^k = e$, then $(a^{-1})^k = e$. Thus $|a^{-1}| = k$ and $a^{-1}$ ∈ H. Hence, H has an inverse. Therefore, H is a subgroup of G. I think I messed up on the inverse part of my proof.
Inverse (writing the abelian group additively, so "$a$ has order dividing $k$" means $ka=0$): It is sufficient to show $ka=0$ iff $k(-a)=0$ (and thus $|a|=|-a|$). Since $-(-a)=a$, it is enough to prove one direction, namely that $ka=0$ implies $k(-a)=0$. Assume $ka=0$. Then $k(-a) = k(-a) + ka = k(-a+a) = k\cdot 0 = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/668937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the determinant of a solving matrix I have such ODE: $$\frac{dy}{dt}=\begin{pmatrix} \sin^2t & e^{-t} \\ e^t & \cos^2t \end{pmatrix} y=A(t)y(t)$$ and let $M(t,1)$ be the solving matrix (a matrix whose columns generate a fundamental system of solutions), where $M(1,1)=E$. Find $\det M(-1,1)$. I don't really know how to tackle this problem, so I would really appreciate a solution with a bit of explanation going on, but even little hints might be invaluable. EDIT: I was thinking that maybe I could use the fact that $A(t)=A^T(-t)$, then: $$y'(-1)=A(-1)y(-1)=A^T(1)y(1)$$ $$A(-1)M(-1,1)=A^T(1)M(1,1)=A^T(1)E$$ $$M(-1,1)=A^T(1)EA^{-1}(-1)$$
@Max, read your book. By Liouville's formula, the Wronskian satisfies $W(t)=\det(M(t,1))=\exp\left(\int_{1}^t \operatorname{trace}(A(u))\,du\right)$. Here $\operatorname{trace}(A(t))=\sin^2 t+\cos^2 t=1$, so $W(t)=\exp(t-1)$, and therefore $\det M(-1,1)=W(-1)=e^{-2}$.
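The value predicted by Liouville's formula ($W(t)=e^{t-1}$, hence $W(-1)=e^{-2}$) can be confirmed by integrating the matrix ODE directly. Below is a hand-rolled classic RK4 sketch (the step count is arbitrary) that propagates the fundamental matrix from $t=1$ down to $t=-1$:

```python
import math

def A(t):
    return [[math.sin(t) ** 2, math.exp(-t)],
            [math.exp(t), math.cos(t) ** 2]]

def rhs(t, M):
    # right-hand side A(t) * M of the matrix ODE M' = A(t) M
    a = A(t)
    return [[sum(a[i][k] * M[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comb(pairs):
    # linear combination sum c * X of 2x2 matrices
    return [[sum(c * X[i][j] for c, X in pairs) for j in range(2)]
            for i in range(2)]

def det_M_minus1(n=4000):
    # classic RK4 from t = 1 down to t = -1, starting at M(1) = identity
    t, h = 1.0, -2.0 / n
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        k1 = rhs(t, M)
        k2 = rhs(t + h / 2, comb([(1.0, M), (h / 2, k1)]))
        k3 = rhs(t + h / 2, comb([(1.0, M), (h / 2, k2)]))
        k4 = rhs(t + h, comb([(1.0, M), (h, k3)]))
        M = comb([(1.0, M), (h / 6, k1), (h / 3, k2), (h / 3, k3), (h / 6, k4)])
        t += h
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

The computed determinant agrees with $e^{-2}\approx 0.1353$ even though the individual entries of $M(-1,1)$ have no simple closed form.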
{ "language": "en", "url": "https://math.stackexchange.com/questions/669028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $P(x,y)dx + Q(x,y)dy =0$: interpretation in terms of forms I asked a similar question here which I will formulate more sharply: When we write a differential equation as $P(x,y)dx + Q(x,y)dy = 0$, what is the interpretation in terms of differential forms? (I suppose the language of differential forms is the proper one to understand it.) Suppose we can separate into $\alpha(x)dx + \beta(y)dy = 0$. We integrate to find a relation between $x$ and $y$. What is the interpretation of this action in terms of differential forms? At first I thought we were flowing along the vector field $\alpha(x)dx + \beta(y)dy$, and the relation between $x$ and $y$ describes the flow lines. But then I realized $\alpha(x)dx + \beta(y)dy$ is a covector field, not a vector field, so this interpretation is not correct. When we integrate the right-hand side and get a constant, what is the justification of that in terms of forms?
The interpretation is the following: given a differential 1-form $\omega=Pdx+Qdy$ in the plane, you are asked to find its integral curves, i.e. 1-dim submanifolds of $\mathbb R^2$ whose tangent line at each point is annihilated by the 1-form. For example, the integral curves of $xdx+ydy$ are the concentric circles around the origin. At a point where $\omega\neq 0$ either $P$ or $Q$ does not vanish, hence the same holds in a neighborhood of the point. If say $Q$ does not vanish, then the integral curves are graphs of solutions to the ODE $dy/dx=-P/Q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/669105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
Suppose $f$ is a thrice differentiable function on $\mathbb {R}$. Showing an inequality using Taylor's theorem Suppose $f$ is a thrice differentiable function on $\mathbb {R}$ such that $f'''(x) \gt 0$ for all $x \in \mathbb {R}$. Using Taylor's theorem show that $f(x_2)-f(x_1) \gt (x_2-x_1)f'(\frac{x_1+x_2}{2})$ for all $x_1$ and $x_2$ in $\mathbb {R}$ with $x_2\gt x_1$. Since $f'''(x) \gt 0$ for all $x \in \mathbb {R}$, $f''(x)$ is an increasing function. And in Taylor's expansion I will be ending at $f''(x)$, but I am not sure how to bring in $\frac{x_1+x_2}{2}$.
Using the Taylor expansion to third order, for all $y$ there exists $\zeta$ between $(x_1+x_2)/2$ and $y$ such that $$ f(y) = f \left( \frac{x_1+x_2}2 \right) + f'\left( \frac{x_1+x_2}2 \right)\left(y - \frac{x_1 + x_2}2 \right) \\ + \frac{f''(\frac{x_1+x_2}2)}2 \left( y - \frac{x_1 + x_2}2 \right)^2 + \frac{f'''(\zeta)}6 \left( y - \frac{x_1+x_2}2 \right)^3. \\ $$ It follows that by plugging in $x_2$, $$ f(x_2) - f \left( \frac{x_1+x_2}2 \right) > f' \left( \frac{x_1+x_2}2 \right) \frac {x_2-x_1}2 + \frac{f''(\frac{x_1+x_2}2)}2 \left(\frac{x_2-x_1}2 \right)^2 $$ since $f'''(\zeta) > 0$ and $x_2 > \frac{x_1+x_2}2$. Similarly, by plugging in $x_1$, $$ f(x_1) - f \left( \frac{x_1+x_2}2 \right) < f' \left( \frac{x_1+x_2}2 \right) \frac {x_1-x_2}2 + \frac{f''(\frac{x_1+x_2}2)}2 \left(\frac{x_1-x_2}2 \right)^2, $$ (note that the sign that appears in the cubic term is now negative, hence the reversed inequality) which we can re-arrange as $$ f \left( \frac{x_1+x_2}2 \right) - f(x_1) > f' \left( \frac{x_1+x_2}2 \right) \frac {x_2-x_1}2 - \frac{f''(\frac{x_1+x_2}2)}2 \left(\frac{x_1-x_2}2 \right)^2, $$ By adding up, the quadratic terms cancel out and you get your result. Hope that helps,
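The inequality is easy to spot-check numerically. Here is a small sketch using $f=\exp$ (which has $f'''=\exp>0$ everywhere); the sampling ranges are arbitrary:

```python
import math
import random

random.seed(42)

def check_exp(trials=2000):
    # verify f(x2) - f(x1) > (x2 - x1) * f'((x1 + x2)/2) for f = exp
    for _ in range(trials):
        x1 = random.uniform(-5.0, 5.0)
        x2 = x1 + random.uniform(0.1, 5.0)
        lhs = math.exp(x2) - math.exp(x1)
        rhs = (x2 - x1) * math.exp((x1 + x2) / 2)
        if not lhs > rhs:
            return False
    return True
```

For $f=\exp$ the inequality reduces to $\sinh(d)/d > 1$ with $d=(x_2-x_1)/2$, which the random trials confirm.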
{ "language": "en", "url": "https://math.stackexchange.com/questions/669161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability of the union of $3$ events? I need some clarification for why the probability of the union of three events is equal to the right side in the following: $$P(E\cup F\cup G)=P(E)+P(F)+P(G)-P(E\cap F)-P(E\cap G)-P(F\cap G)+P(E\cap F\cap G)$$ What I don't understand is, why is the last term(intersection of all) added back just once, when it was subtracted three times as it appears from a Venn Diagram? Here on page 3, this is explained but not in enough details that I can understand it: http://www.math.dartmouth.edu/archive/m19w03/public_html/Section6-2.pdf
One of the axioms of probability is that if $A_1, A_2, \dots$ are disjoint, then $$\begin{align} \mathbb{P}\left(\bigcup_{i=1}^{\infty}A_i\right) = \sum\limits_{i=1}^{\infty}\mathbb{P}\left(A_i\right)\text{.}\tag{*} \end{align}$$ It so happens that this is also true if you have a finite number of disjoint events. If you're interested in more detail, consult a measure-theoretic probability textbook. Let's motivate the proof for the probability of the union of three events by using this axiom to prove the probability of the union of two events. Theorem. For two events $A$ and $B$, $\mathbb{P}\left(A \cup B\right) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$. Proof. Write $$A \cup B = \left(A \cap B\right) \cup \left(A \cap B^{c}\right) \cup \left(A^{c} \cap B\right)\text{.}$$ Notice also that $A = \left(A \cap B^{c}\right) \cup\left(A \cap B\right)$ and $B = \left(B \cap A^{c}\right) \cup \left(A \cap B\right)$. Since we have written $A$ and $B$ as disjoint unions of sets, applying (*) in the finite case, we have that $$\begin{align} \mathbb{P}\left(A\right) &= \mathbb{P}\left(A \cap B^{c}\right) + \mathbb{P}\left(A \cap B\right) \\ \mathbb{P}\left(B\right) &= \mathbb{P}\left(B \cap A^{c}\right) + \mathbb{P}\left(A \cap B\right) \\ \end{align}$$ Similarly, since $A \cup B = \left(A \cap B\right) \cup \left(A \cap B^{c}\right) \cup \left(A^{c} \cap B\right)$ is a disjoint union of sets, $$\begin{align} \mathbb{P}\left(A \cup B\right) &= \mathbb{P}\left[ \left(A \cap B\right) \cup \left(A \cap B^{c}\right) \cup \left(A^{c} \cap B\right) \right] \\ &= \overbrace{\mathbb{P}\left(A \cap B\right) + \mathbb{P}\left(A \cap B^{c}\right)}^{\mathbb{P}(A)} + \mathbb{P}\left(A^{c} \cap B\right) \\ &= \mathbb{P}\left(A\right) + \overbrace{\mathbb{P}\left(A^{c} \cap B\right)}^{\mathbb{P}(B)-\mathbb{P}(A \cap B)} \\ &= \mathbb{P}\left(A\right) + \mathbb{P}\left(B\right) - \mathbb{P}\left(A \cap B\right)\text{. 
} \square \end{align}$$ Now, armed with the result that we proved in the previous theorem, we can now prove the result for the probability of the union of three events. Theorem. $\mathbb{P}\left(A \cup B \cup C\right) = \mathbb{P}\left(A\right) + \mathbb{P}\left(B\right) + \mathbb{P}\left(C\right) - \mathbb{P}\left(A \cap B\right) - \mathbb{P}\left(A \cap C\right) - \mathbb{P}\left(B \cap C\right) + \mathbb{P}\left(A \cap B \cap C\right)$ Proof. Since $A \cup B \cup C = \left(A \cup B\right) \cup C$, by the previous theorem, $$\begin{align} \mathbb{P}\left(A \cup B \cup C\right) &= \mathbb{P}((A \cup B)\cup C) \\ &= \overbrace{\mathbb{P}\left(A \cup B\right) + \mathbb{P}\left(C\right) - \mathbb{P}\left[\left(A \cup B\right) \cap C\right]}^{\text{applying the previous theorem to }\mathbb{P}((A \cup B)\cup C)} \\ &= \overbrace{\mathbb{P}\left(A\right) + \mathbb{P}\left(B\right) - \mathbb{P}\left(A \cap B\right)}^{\mathbb{P}\left(A \cup B\right) \text{ from the previous theorem}} + \mathbb{P}\left(C\right) - \mathbb{P}\left[\overbrace{\left(A \cap C\right) \cup \left(B \cap C\right)}^{(A \cup B)\cap C\text{ (distributive property of sets)}}\right] \\ &= \mathbb{P}\left(A\right) + \mathbb{P}\left(B\right) - \mathbb{P}\left(A \cap B\right) + \mathbb{P}\left(C\right) \\ &\qquad- \overbrace{\Big[\mathbb{P}\left(A \cap C\right) + \mathbb{P}\left(B \cap C\right) - \mathbb{P}\left[\left(A \cap C\right) \cap \left(B \cap C\right) \right]\Big]}^{\text{applying the previous theorem to }\mathbb{P}\left(\left(A \cap C\right) \cup \left(B \cap C\right)\right)}\text{,} \end{align}$$ and since $\left(A \cap C\right) \cap \left(B \cap C\right) = A \cap B \cap C$, the result is proven. $\square$
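Inclusion–exclusion identities like this are easy to sanity-check with counting measure on a finite universe (cardinalities are probabilities up to a constant scaling, so checking set sizes checks the identity). A quick brute-force sketch with random events:

```python
import random

random.seed(0)
UNIVERSE = range(30)

def random_event():
    return {x for x in UNIVERSE if random.random() < 0.4}

def inclusion_exclusion_holds(trials=500):
    for _ in range(trials):
        A, B, C = random_event(), random_event(), random_event()
        lhs = len(A | B | C)
        rhs = (len(A) + len(B) + len(C)
               - len(A & B) - len(A & C) - len(B & C)
               + len(A & B & C))
        if lhs != rhs:
            return False
    return True
```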
{ "language": "en", "url": "https://math.stackexchange.com/questions/669249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
Prove a number is even using the Pigeonhole Principle Let n be an odd integer and let f be an [n]-permutation of length n, where [n] is the set of integers 1, 2, 3,...n Show that the number x = (1-f(1))*(2-f(2))*...*(n-f(n)) is even using the pigeonhole principle In this case, I don't understand what this function f is. What is an [n]-permutation of length n? Take f(2) for example. Permutations of [2] would be 1,2 and 2,1. So the way the problem is worded, f(2) must equal 12 or 21. If that's correct, which one? Will this number x still be even regardless of which [n]-permutation f(n) is?
There are $(n+1)/2$ odd numbers $i\in[n]$, and equally many numbers $i$ such that $f(i)$ is odd. Since that makes $n+1$ in all, the pigeonhole principle says that at least one $i$ is counted twice: both $i$ and $f(i)$ are odd. But then $i-f(i)$ is even, and so is the entire product. Here is a proof without the pigeonhole principle, by contradiction. For the product to be odd, all $n$ factors $i-f(i)$ must be odd. But since $n$ is odd, that would make the sum of the factors odd as well. But that sum is $0$, isn't that odd? (Indeed it isn't.)
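As a sanity check on this argument, a short brute force over all permutations of $[n]$ (a verification sketch, not part of the original answer) confirms that the product is always even exactly when $n$ is odd:

```python
from itertools import permutations

def product_always_even(n):
    """Check that (1-f(1))(2-f(2))...(n-f(n)) is even for every permutation f of [n].

    The product is even iff some factor i - f(i) is even.
    """
    return all(
        any((i - fi) % 2 == 0 for i, fi in enumerate(perm, start=1))
        for perm in permutations(range(1, n + 1))
    )

assert product_always_even(3) and product_always_even(5)        # odd n: always even
assert not product_always_even(2) and not product_always_even(4)  # even n: can fail
```

For even $n$ a counterexample is $f=(2,1,4,3,\dots)$, where every factor $i-f(i)$ is $\pm1$.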
{ "language": "en", "url": "https://math.stackexchange.com/questions/669332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove that if $C$ is a convex set containing $B(r)$, then $\sup\{d(y,0)\mid y\in C\}=\infty$ Let $0<p<1$. Define a metric on $l^p$ by $d((a_k)_{k=1}^\infty,(b_k)_{k=1}^{\infty})=\sum_{k=1}^\infty |a_k-b_k|^p$. For any $r>0$, let $B(r)=\{x\in l^p\mid d(x,0)<r\}$. Prove that if $C$ is a convex set containing $B(r)$, then $\sup\{d(y,0)\mid y\in C\}=\infty$. Deduce that $l^p$ is not a locally convex topological vector space. How to prove this question? Thanks.
Hint: denoting by $e_n$ the $n$-th vector of "canonical basis" of $\ell^p$, compute $d(x_N, 0)$ for each $N\in\mathbb N$, where $x_N=\frac 1N \sum_{n=1}^N e_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/669411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that the product of some numbers between perfect squares is $2k^2$ Here's a question I've recently come up with: Prove that for every natural $x$, we can find some number of integers in the interval $[x^2,(x+1)^2]$ so that their product is of the form $2k^2$. I've tried several methods of proving this, but none of them actually worked. I know, for example, that the prime numbers shouldn't be in the product. I was also looking for numbers $x$ such that between $x^2$ and $(x+1)^2$ there actually is a number $2k^2$ for some natural $k$. If we find all of these numbers, then we should prove the case only for the numbers which are not of this form. These $x$s have this property: $x^2<2k^2<(x+1)^2$ leading to $x<k\sqrt 2<x+1$ and $x\frac{\sqrt 2}{2}<k<(x+1)\frac{\sqrt 2}{2}$. This means there should be a natural number between $x\frac{\sqrt 2}{2}$ and $(x+1)\frac{\sqrt 2}{2}$. I've checked some of the numbers that aren't like that with a computer, and they were: $3,6,10,13,17,...$. The thing I noticed was that the difference between two consecutive numbers of that form is either $3$ or $4$. I think this has something to do with the binary representation of $\frac{\sqrt 2}{2}$ but I don't know how to connect it with that. I would appreciate any help :)

This is a community wiki answer to point out that the question was answered in comments by benh: This question is a duplicate of this one; the latter question was answered by Gerry Meyerson who found a proof in this paper of Granville and Selfridge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/669487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
Calculate sum $\sum\limits_{k=0}^{n}k^2{{n}\choose{k}}3^{2k}$. I need to calculate the sum $\sum\limits_{k=0}^{n}k^2{{n}\choose{k}}3^{2k}$. Simple algebra led to this: $\sum\limits_{k=0}^{n}k^2{{n}\choose{k}}3^{2k}=n\sum\limits_{k=0}^{n}k{{n-1}\choose{k-1}}3^{2k}$. But that's still not very helpful. This binomial screws everything up for me; I would like a nice recurrence relation, but don't know what to do with it.
We have $\displaystyle k^2=k(k-1)+k$ So, $\displaystyle k^2 \binom nk=k(k-1)\binom nk+k\binom nk$ Now $\displaystyle k\cdot\binom nk=k\frac{n!}{(n-k)!k!}=kn\frac{(n-1)!}{[n-1-(k-1)]!(k-1)!\cdot k}=n\binom{n-1}{k-1}$ and $\displaystyle k(k-1)\cdot\binom nk=k(k-1)\frac{n!}{(n-k)!k!}=k(k-1)n(n-1)\frac{(n-2)!}{[n-2-(k-2)]! (k-2)!\cdot k(k-1)}=n(n-1)\binom{n-2}{k-2}$ $\displaystyle\implies\sum_{k=0}^n k^2 \binom nk3^{2k}=\sum_{k=0}^n k^2 \binom nk9^k$ $\displaystyle=9n \sum_{k=0}^n\binom{n-1}{k-1}9^{k-1}+n(n-1)9^2\sum_{k=0}^n\binom{n-2}{k-2}9^{k-2}$ Utilize $\binom mr=0$ for $r<0$ or $r>m$ More generally, $\displaystyle \sum_{r=0}^ma_rk^r=b_0+b_1\cdot k+b_2\cdot k(k-1)+\cdots+b_{m-1}\prod_{r=0}^{m-2}(k-r)+b_m\prod_{r=0}^{m-1}(k-r)$ where $b_r$s are arbitrary constants
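To double-check the algebra, here is a quick numerical comparison (a sketch; the closed form on the right comes from summing the two shifted binomial series to $(1+9)^{n-1}$ and $(1+9)^{n-2}$):

```python
from math import comb

def lhs(n):
    return sum(k * k * comb(n, k) * 9**k for k in range(n + 1))

def rhs(n):
    # 9n * sum C(n-1, k-1) 9^(k-1)       -> 9n * 10^(n-1)
    # 81n(n-1) * sum C(n-2, k-2) 9^(k-2) -> 81n(n-1) * 10^(n-2)
    return 9 * n * 10**(n - 1) + 81 * n * (n - 1) * 10**(n - 2)

assert all(lhs(n) == rhs(n) for n in range(2, 12))
```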
{ "language": "en", "url": "https://math.stackexchange.com/questions/669573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Is the function continuous at x=0? Check if the function $f$ is continuous: $$f(x)=\begin{cases} 0 & ,x=0\\ \frac{1}{[\frac{1}{x}]} & ,0<x\leq 1. \end{cases}$$ For $0<x\leq 1$, $f$ is continuous because it is a quotient of continuous functions. How can I check if it is continuous at $x=0$?
$$|f(x) - f(0)| = |\frac 1 {[\frac 1 x]} - 0| = |\frac 1 {[\frac 1 x]}| = \frac 1 {[\frac 1 x]} \le \frac 1 {\frac 1 x} = x = x - 0\;\; (\text {since $x > 0$ and $\frac 1 {[\frac 1 x]} \le \frac 1 {\frac 1 x}$})$$ Given any $\epsilon \gt 0 $ however small, $\exists \ \delta( = \epsilon) \gt 0 $ such that $|f(x) - f(0)| \lt \epsilon$ whenever $x = x - 0 \lt \delta$ $\implies \lim_{x \rightarrow 0^+} f(x) = f(0)\implies f$ is right-continuous at $x = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/669665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does "\" mean in math In a Linear Algebra textbook I am reading, the following is stated: $b\notin \operatorname{span}(A \cup \{a\})\setminus \operatorname{span}(A)$. It does so without explaining what "$\setminus$" means. I apologize if this question does not belong here but I just want to understand what it means. I can close the question if someone just comments on its meaning.
$\setminus$ (\setminus) as its name implies is the set-theoretic difference: $A\setminus B$ is the set of all elements which are in $A$ but not in $B$. ($A-B$ is also used for this.) Be careful to not confuse $\setminus$ with $/$ (quotient).
{ "language": "en", "url": "https://math.stackexchange.com/questions/669767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proof that if $a^n|b^n$ then $a|b$ I can't seem to get a good proof of this; any help? What I thought was: $$b^n = a^nk$$ then, by the Fundamental Theorem of Arithmetic, decompose $b$ as: $$b=p_1^{q_1}p_2^{q_2}...p_m^{q_m}$$ with $p_1,\dots,p_m$ primes and $q_1,\dots,q_m$ positive integers. Then $$b^n=(p_1^{q_1}p_2^{q_2}...p_m^{q_m})^n= p_1^{q_1n}p_2^{q_2n}...p_m^{q_mn}$$ but here I get stuck, and I can't seem to find a satisfactory way to relate $a$ and $b$... Any help will be appreciated
Hint $\ p^{n\alpha}\!\mid p^{n\beta}\! \iff n \alpha \le n\beta \iff \alpha \le \beta \iff p^\alpha\mid p^\beta.\,$ Apply it to prime factorizations of a,b. Simpler: $\, (b/a)^n = k\in\Bbb Z\Rightarrow b/a\in \Bbb Z\,$ by the Rational Root Test applied to $\,x^n - k.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/669831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Summation proof (with binomial coefficents) I am trying to prove that $\sum_{k=2}^n$ $k(k-1) {n \choose k}$=$n(n-1)2^{n-2}$. I was initially trying to use induction, but I think a more simple proof can be done using the fact that $\sum_{k=0}^n {n \choose k}$=$2^n$. This is how I begin to proceed: $\sum_{k=2}^n$ $k(k-1) {n \choose k}$= $\sum_{k=0}^{n-2}$ $(k+2)(k+1) {n \choose k+2}$= $2^{n-2} *\sum_{k=0}^{n-2}$ $(k+2)(k+1)$. First of all, is this correct so far? And second, how would I proceed from here.
By the binomial theorem $$\sum_{k=0}^n {n\choose k} x^k=(1+x)^n$$ Take two derivatives in $x$ and plug in $x=1$.
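A quick numerical check of the resulting identity $\sum_{k=2}^n k(k-1)\binom nk = n(n-1)2^{n-2}$ (a verification sketch only):

```python
from math import comb

def lhs(n):
    return sum(k * (k - 1) * comb(n, k) for k in range(2, n + 1))

assert all(lhs(n) == n * (n - 1) * 2**(n - 2) for n in range(2, 20))
```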
{ "language": "en", "url": "https://math.stackexchange.com/questions/669922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How can the point of inflection change before the vertical asymptote? I have to draw a graph of a function which seems to have an inflection point AFTER the vertical asymptote. i.e. f(x) = $\tan^{-1}\left({\frac{x-1}{x+1}}\right)$ Using the quotient rule, I get... $$f'(x) = \frac{1}{1+\left(\frac{x-1}{x+1}\right)^2}.\frac{(x+1)-(x-1)}{(x+1)^2} $$ Simplifying slightly, I reached... $$f'(x) = \frac{2}{(x+1)^2+\frac{(x+1)^2(x-1)^2}{(x+1)^2}}$$ Would I be right in thinking this can be simplified further to... $$f'(x) = \frac{1}{x^2+1}?$$ As technically they are different functions since the first is not defined for "$x= -1$", but the second is. The problem I came across was when finding the point of inflection. I got the second derivative to be... $$f''(x) =\frac{-2x}{(x^2+1)^2}$$ When making this equal zero to find the points of inflection, I found it to be -2x = 0, hence x = 0. But the issue is the asymptote is at x = -1. The curve is concave up right up before the asymptote, but apparently is still concave up after the asymptote between x = -1 and x = 0. Even checking this on Google's graph widget seems to show an inflection point at x = 0 then an awkward line as it approaches the asymptote from the right. Any ideas on the reason for this or if I've missed something?
Note that your function is equivalent to $\arctan x - \frac{\pi}{4}$ for $x>-1$, and to $\arctan x + \frac{3\pi}{4}$ for $x<-1$: the two branches differ only by the constant $\pi$, so at $x=-1$ the graph has a jump discontinuity rather than a true vertical asymptote. Since adding a constant does not change the second derivative, $f''(x)=-2x/(x^2+1)^2$ on both branches; hence the curve is concave up on all of $(-\infty,-1)\cup(-1,0)$, and the only inflection point is at $x=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/670004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof that rational sequence converges to irrational number Let $a>0$ be a real number and consider the sequence $x_{n+1}=(x_n^2+a)/(2x_n)$. I have already shown that this sequence is monotonically decreasing and thus convergent; now I have to show that $(\lim x_n)^2 = a$ and thus exhibit the existence of a positive square root of $a$ (because we took $x_1 > 0$).
The recursive definition can be solved exactly. Write $a^2$ instead of $a$ and consider the sequence $\frac{x_n+a}{x_n-a}$. Then $$\frac{x_n+a}{x_n-a} = \frac{\tfrac12 x_{n-1}+\tfrac12 a^2/x_{n-1}+a}{\tfrac12 x_{n-1}+\tfrac12 a^2/x_{n-1}-a} = \frac{x_{n-1}^2+2ax_{n-1}+a^2}{x_{n-1}^2-2ax_{n-1}+a^2} = \left(\frac{x_{n-1}+a}{x_{n-1}-a}\right)^2.$$ Thus $$\frac{x_n+a}{x_n-a} = \left(\frac{x_0+a}{x_0-a}\right)^{2^n}.$$ This proves (very) rapid convergence to $+a$ whenever $|x_0-a|<|x_0+a|$ and to $-a$ whenever $|x_0+a|<|x_0-a|$. This holds even when $a$ is complex and complex sequences are considered. You could work out, for instance, the case of a purely imaginary root and real starter $x_0$... PS: This works only in this (Babylonian) case.
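The iteration in the question is the Babylonian (Newton) method for square roots, and the closed form above predicts that the error ratio squares at every step. A small numerical sketch (not from the original answer) illustrating both facts:

```python
def babylonian(x0, a, steps):
    """Iterate x -> (x^2 + a) / (2x), which converges to sqrt(a) for x0 > 0."""
    x = x0
    for _ in range(steps):
        x = (x * x + a) / (2 * x)
    return x

a = 7.0
s = a ** 0.5

# rapid (quadratic) convergence: the number of correct digits roughly doubles per step
assert abs(babylonian(3.0, a, 6) ** 2 - a) < 1e-12

# closed form: (x_n - s)/(x_n + s) = ((x_0 - s)/(x_0 + s))^(2^n)
n = 3
x_n = babylonian(3.0, a, n)
lhs = (x_n - s) / (x_n + s)
rhs = ((3.0 - s) / (3.0 + s)) ** (2 ** n)
assert abs(lhs - rhs) < 1e-12
```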
{ "language": "en", "url": "https://math.stackexchange.com/questions/670083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Question about finding the volume of a Sphere to a certain point I've done a few things but I cant seem to figure out how to solve this. Any help please?
The cross-sectional area at height $y$ is $A(y)=\pi r^2=\pi(2Ry-y^2)$, and the volume is $\int_{y=0}^{y=R/3} A(y)\,dy$. To find $r$ in terms of $y$: $R=\sqrt{(R-y)^2+r^2}$, so $R^2=(R-y)^2+r^2=R^2-2Ry+y^2+r^2$, which gives $r^2=2Ry-y^2$, i.e. $r=\sqrt{2Ry-y^2}$. Then $$\int_0^{R/3}\pi(2Ry-y^2)\,dy=\pi\left[Ry^2-\frac{y^3}{3}\right]_0^{R/3}=\pi\left(R\left(\frac{R}{3}\right)^2-\frac{(R/3)^3}{3}\right)=\frac{8\pi R^3}{81}.$$ I hope this was helpful!!
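Putting the pieces together, the cap of height $h$ has volume $V(h)=\pi(Rh^2-h^3/3)$, which at $h=R/3$ equals $8\pi R^3/81$. A quick numerical cross-check of the integral (a sketch using a simple midpoint rule):

```python
from math import pi

def cap_volume(R, h):
    # antiderivative of pi*(2*R*y - y**2), evaluated from 0 to h
    return pi * (R * h**2 - h**3 / 3)

R = 3.0
h = R / 3
N = 20_000
dy = h / N
numeric = sum(pi * (2 * R * y - y * y) * dy for y in ((i + 0.5) * dy for i in range(N)))

assert abs(numeric - cap_volume(R, h)) < 1e-6
assert abs(cap_volume(R, R / 3) - 8 * pi * R**3 / 81) < 1e-12
```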
{ "language": "en", "url": "https://math.stackexchange.com/questions/670182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Would it be any concern if we find correlation between intercept and other regression coefficients? During a multiple linear regression analysis, I found correlation between intercept (beta-0) and two of the other regression coefficients. Is there any problem or concern in this case? If no, please explain me why.
Such correlations are guaranteed if you have not standardized your predictors to take the value 0 at their means. However, this correlation is not really a mathematical/statistical problem, per se, but it may be easier to interpret the coefficients if you first standardize the variables. Therefore, the short answer is no, such a correlation is not a problem, it's just the interpretation. See this link as well for a good discussion on this issue. The reason the correlation is not a problem, statistically, is that standardizing is a linear transformation (add/multiply by a constant), which should not affect how well the line fits. As a concrete example, a linear regression of, say, reaction rate vs. temperature (for some chemical reaction) should not depend on the choice of temperature units (Kelvin, Celsius, Fahrenheit). Essentially, the units you express a relationship in should not affect the accuracy or validity of your linear regression, since different units of some property (e.g., length, temp, time) are related to each other via a linear transformation, and it would make no sense for a particular, arbitrary set of units to yield more accuracy than another set of units.
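To see this concretely: for a simple regression with design matrix columns $[1, x]$, the coefficient covariance is proportional to $(X^TX)^{-1}$, whose off-diagonal (intercept-slope) entry is $-\sum x_i/\det$; it vanishes exactly when the predictor is centered. A small illustrative sketch (names are mine, not from the original answer):

```python
def intercept_slope_cov(xs):
    """Off-diagonal entry of (X^T X)^{-1} for the design matrix with columns [1, x].

    Under the usual OLS assumptions Cov(beta_hat) = sigma^2 (X^T X)^{-1}, so this
    entry is proportional to the covariance of the intercept and slope estimates.
    """
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(v * v for v in xs)
    det = n * s2 - s1 * s1
    return -s1 / det

xs = [float(v) for v in range(5, 25)]        # predictor with mean far from 0
mean = sum(xs) / len(xs)
centered = [v - mean for v in xs]

assert abs(intercept_slope_cov(xs)) > 1e-4         # uncentered: correlated estimates
assert abs(intercept_slope_cov(centered)) < 1e-12  # centered: correlation vanishes
```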
{ "language": "en", "url": "https://math.stackexchange.com/questions/670254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Condition under which the subring of multiples of an element equals the ring itself? Let $R$ be a commutative ring with identity with $b\in R$. Let $T$ be the subring of all multiples of $b$, $T=\{r\cdot b : r \in R\}$. If $u$ is a unit in $R$ with $u \in T$, prove that $T=R$. Could you give me some suggestions? I really have no clue how to approach this question; I can only show $1\cdot R$ belongs to $T$. I don't even know the general way to prove two rings are equal.
The critical thing to realize here is that if $u$ is a unit, and $u = ab$, then $a$ and $b$ are both units. For if $u$ is a unit, then $uv = 1$ for some $v \in R$, so that $1 = (ab)v = a(bv) = b(av)$, where we have used the commutativity of $R$. So $u \in T$ a unit implies $b \in T$ is a unit is well, since $u = ab$ for some $a \in R$. Now since $b \in T = Rb$ is a unit, for any $s \in R$ take $r = sb^{-1} = s(av)$. Then $s = s1 = sb^{-1}b = rb \in Rb = T$, so in fact $R \subset T$. Hope this helps. Cheerio, and as always, Fiat Lux!!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/670369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Showing that the square root is monotone I've shown the existence of unique square roots of all positive rational numbers, so now I want to prove that the square root is monotone: $0<a<b$ if and only if $\sqrt{a} < \sqrt{b}$
We know that if $p, q, r, s$ are positive and $p < q, r < s$ then $pr < qs$. Let $\sqrt{a} < \sqrt{b}$ using $p = r = \sqrt{a}, q = s = \sqrt{b}$ we get $a < b$. Let $a < b$. Clearly we can't have $\sqrt{a} = \sqrt{b}$ as this will mean $a = b$ (by squaring). Also we can't have $\sqrt{a} > \sqrt{b}$ as by previous part it would mean $a > b$. Hence we must have $\sqrt{a} < \sqrt{b}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/670473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
A somewhat general question about homeomorphisms. I have been asked to prove that $(0,1)$ is homeomorphic to $(0,1)$. Seems easy enough. If we assume the order topology an both, along with an identity mapping $f:x\to x$, we can show that both $f$ and $f^{-1}$ are continuous. Similarly, using the identity mapping $f$ and the order topology, we can show that $(0,1)$ is not homeomorphic to $[0,1]$. However, how can we prove that there exists no toplogy and suitable mapping such that $(0,1)$ can be proven to be homeomorphic to $[0,1]$? Thanks in advance!
Well $(0,1)$ and $[0,1]$ have the same cardinality, so there exists a bijection $\Phi:(0,1)\rightarrow[0,1]$. Now, equip $(0,1)$ with any topology and define $U\subset [0,1]$ to be open if and only if $\Phi^{-1}(U)$ is open. Then it can easily be checked that $\Phi$ is a homeomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/670572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Efficient diagonal update of matrix inverse I am computing $(kI + A)^{-1}$ in an iterative algorithm where $k$ changes in each iteration. $I$ is an $n$-by-$n$ identity matrix, $A$ is an $n$-by-$n$ precomputed symmetric positive-definite matrix. Since $A$ is precomputed I may invert, factor, decompose, or do anything to $A$ before the algorithm starts. $k$ will converge (not monotonically) to the sought output. Now, my question is if there is an efficient way to compute the inverse that does not involve computing the inverse of a full $n$-by-$n$ matrix?
EDIT. 1. Tommy L's method is not better than the naive method. Indeed, the complexity of the calculation of $(kI+A)^{-1}$ is $\approx n^3$ operations (additions and multiplications). Concerning the complexity of $P(D+kI)^{-1}P^{-1}=QP^{-1}$ (when we know $P,D,P^{-1}$): the calculation of $Q$ costs $O(n^2)$ and that of $QP^{-1}$ costs $\approx n^3$ operations as above. Note that one works with a fixed number of digits. 2. When the real $k$ can take many (more than $n$) values, one idea is the following. The problem is equivalent to calculating the resolvent of $A$: $R(x)=(xI-A)^{-1}=\dfrac{\operatorname{Adj}(xI-A)}{\det(xI-A)}=\dfrac{P_0+\cdots+P_{n-1}x^{n-1}}{a_0+\cdots+a_{n-1}x^{n-1}+x^n}$. STEP 1. Using the Leverrier iteration, we can calculate the matrices $P_i$ and the scalars $a_j$: $P_{n-1}:=I$; $a_{n-1}:=-\operatorname{Trace}(A)$; for $k$ from $n-2$ by $-1$ to $0$ do $P_k:=P_{k+1}A+a_{k+1}I$; $a_k:=-\dfrac{1}{n-k}\operatorname{Trace}(P_kA)$ od. During this step, we must calculate the exact values of the $P_i,a_j$ (assuming that $A$ is exactly known), which requires a very large number of digits. The complexity of this calculation is $O(n^4)$ (at least), but it is done only once. STEP 2. We put $x:=-k_1,-k_2,\cdots$. Unfortunately, the time of calculation (with a fixed number of significant digits) of $R(-k)$ is larger than the time of calculation of $(-kI-A)^{-1}$!!
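A stdlib-only sketch of the Leverrier (Faddeev-LeVerrier) step above, verifying on a small illustrative matrix that the computed $P_k$ and $a_j$ really reproduce the resolvent (the matrix and the test value of $x$ are my own choices):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * x for x in row] for row in A]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def leverrier(A):
    """Leverrier recursion from the answer:
    adj(xI - A) = sum_k P[k] x^k,   det(xI - A) = x^n + sum_k a[k] x^k."""
    n = len(A)
    P = [None] * n
    a = [0.0] * n
    P[n - 1] = eye(n)
    a[n - 1] = -trace(A)
    for k in range(n - 2, -1, -1):
        P[k] = mat_add(mat_mul(P[k + 1], A), scal(a[k + 1], eye(n)))
        a[k] = -trace(mat_mul(P[k], A)) / (n - k)
    return P, a

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
n = 3
P, a = leverrier(A)

x = 7.0                                   # one sample value of x = -k
den = x**n + sum(a[k] * x**k for k in range(n))
R = [[sum(P[k][i][j] * x**k for k in range(n)) / den for j in range(n)] for i in range(n)]

# verify (xI - A) @ R = I
M = mat_add(scal(x, eye(n)), scal(-1.0, A))
prod = mat_mul(M, R)
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(n) for j in range(n))
```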
{ "language": "en", "url": "https://math.stackexchange.com/questions/670649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
A tricky Definite Integral What is the value of $$\int_{\pi/4}^{3\pi/4}\frac{1}{1+\sin x}\operatorname{d}x\quad ?$$ The book from which I have seen this has treated it as a problem of indefinite integral and then directly put the values of the limits. I am not sure that this is the correct way. Kindly help.
$$\frac1{1+\sin x}=\frac1{1+\sin x}\cdot\overbrace{\frac{1-\sin x}{1-\sin x}}^1=\frac{1-\sin x}{1-\sin^2x}=\frac{1-\sin x}{\cos^2x}=\frac1{\cos^2x}-\frac{\sin x}{\cos^2x}=$$ $$=\frac{\sin^2x+\cos^2x}{\cos^2x}+\frac{\cos'x}{\cos^2x}=(1+\tan^2x)-\left(\frac1{\cos x}\right)'=\tan'x-\left(\frac1{\cos x}\right)'\iff$$ $$\iff\int_\frac\pi4^\frac{3\pi}4\frac{dx}{1+\sin x}=\left[\tan x+\frac1{\cos x}\right]_\frac\pi4^\frac{3\pi}4=\left[\tan\frac{3\pi}4-\frac1{\cos\frac{3\pi}4}\right]-\left[\tan\frac\pi4-\frac1{\cos\frac\pi4}\right]=$$ $$=\left[-1-\frac1{-1/\sqrt2}\right]-\left[1-\frac1{1/\sqrt2}\right]=-2+2\sqrt2.$$
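A quick numerical sanity check of the value $2\sqrt2-2\approx0.8284$ (a sketch using a midpoint-rule approximation):

```python
from math import sin, sqrt, pi

a, b = pi / 4, 3 * pi / 4
N = 100_000
h = (b - a) / N
approx = sum(h / (1 + sin(a + (i + 0.5) * h)) for i in range(N))

assert abs(approx - (2 * sqrt(2) - 2)) < 1e-8
```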
{ "language": "en", "url": "https://math.stackexchange.com/questions/670709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$K$-theory exact sequence. Let $Y$ be a closed subspace of a compact space $X$. Let $i:Y \to X$ be the inclusion and $r:X \to Y$ a retraction ($r \circ i = \mathrm{Id}_Y$). I have to prove that there exists this short exact sequence $$ 0 \to K(X,Y) \to K(X) \to K(Y) \to 0.$$ Then I have to verify that $K(X) \simeq K(X,Y) \oplus K(Y)$. How can I do it? I think that $K(X,Y) = \tilde{K}(X/Y)$. Thank you very much.
This is purely formal and relies on the fact that $K$ is a contravariant functor from topological spaces to abelian groups (actually to commutative rings but this is not needed here). Since $r\circ i=Id_Y$ we get $i^*\circ r^*=Id_{K(Y)}$ so that $r^*:K(Y)\to K(X)$ is a section of $i^*:K(X)\to K(Y)$ and your exact sequence of abelian groups $ 0 \to K(X,Y) \to K(X) \stackrel {i^*}{\to} K(Y) \to 0$ splits , yielding the required isomorphism $K(X) \simeq K(X,Y) \oplus K(Y)$. By the way, it is indeed true that $K(X,Y) = \tilde{K}(X/Y)$ but this fact is irrelevant to the question at hand.
{ "language": "en", "url": "https://math.stackexchange.com/questions/670813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Given a matrix $A$, show that it is positive. Show that $$A := \begin{bmatrix}7 & 2 & -4\\2 & 4 & -2\\-4 & -2 & 7 \end{bmatrix}$$ is positive definite. Could this be proven by showing that each of the vectors of the standard basis gives a positive result, e.g.: $$\begin{bmatrix}1 & 0 & 0\end{bmatrix} \begin{bmatrix}7 & 2 & -4\\2 & 4 & -2\\-4 & -2 & 7 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 0\end{bmatrix} > 0.$$ The second part of the question asks me to diagonalize the matrix using an orthogonal matrix, which as I understand, is to use elementary matrices on the rows and columns of the matrix to get it to a diagonal form. Would it make a difference if I firstly only dealt with the rows and only afterward applied the same matrices to the columns? Thanks for your time.
No, checking the standard basis does not guarantee positive definiteness of your scalar product. It would work if the standard basis was orthogonal with respect to your bilinear form (this is Sylvester's theorem): but, in our case, this is equivalent to have the matrix being diagonal. By definition, a $n\times n$ matrix is positive definite if its signature is $(n,0,0)$. The first entry in the signature is defined as the number of vectors $v$ in an orthogonal basis (with respect to the form represented by the matrix) such that $\langle v,v\rangle>0$; the second entry is the number of $v$ such that $\langle v,v\rangle<0$; the last is the number of $v$ such that $\langle v,v\rangle=0$. Sylvester theorem guarantees that this definition indeed makes sense: in fact it says that the signature of a matrix is the same, no matter what orthogonal basis we choose. To diagonalize a form, is the same as to find an orthogonal basis. In fact, when you have an orthogonal basis, the matrix associated to your form with respect to that basis is diagonal. So, to solve both part of the exercise, you can orthogonalize the standard basis, getting a new basis, say $\{\,v_1, \dots, v_n\,\}$. Then your diagonal matrix is the matrix that has on the diagonal the values $\langle v_i,v_i\rangle$; and you have positive definiteness if all elements on the diagonal are positive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/670872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
Translating and scaling a line based on two grabbed points Say there is a line segment going from 0 to 10; now imagine that points 7 and 8 are 'grabbed' and translated to respectively 6 and 11. The effect of this would be that the line segment gets scaled and translated. How can I determine the new defining points of the new line segment? Please note, in the above image there is only an x axis, no y axis. I believe this problem should be quite simple, but after trying out different things nothing seems to work. I suppose that I should calculate the scale $s = \frac{g_{1start}-g_{2start}}{g_{1end}-g_{2end}}$ and multiply that with $x_{1start}$ and $x_{2start}$ or something along those lines, but I just can't figure out how to approach this problem correctly (rather than guessing randomly).
Let $$x_{new} = h_1 + (x - g_1) \frac{h_2 - h_1}{g_2 - g_1}.$$ where I've used $g_i$ for your starting points (7, 8) and $h_i$ for your ending points (6, 11). The formula shows how to take a point $x$ in the pre-stretch coordinates and tell its post-stretch coordinates. So $$x_1^{end} = h_1 + (x_1^{start} - g_1) \frac{h_2 - h_1}{g_2 - g_1},$$ for instance, tells you how to transform the left end.
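In code (a small sketch; the function and variable names are mine), the map sends the grabbed points to their targets and carries the original endpoints along:

```python
def stretch(x, g1, g2, h1, h2):
    """Map the pre-stretch coordinate x, given that grabbed points g1, g2
    move to h1, h2."""
    return h1 + (x - g1) * (h2 - h1) / (g2 - g1)

# grabbed points 7 and 8 move to 6 and 11
assert stretch(7, 7, 8, 6, 11) == 6
assert stretch(8, 7, 8, 6, 11) == 11

# so the original segment endpoints 0 and 10 map to -29 and 21
assert stretch(0, 7, 8, 6, 11) == -29.0
assert stretch(10, 7, 8, 6, 11) == 21.0
```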
{ "language": "en", "url": "https://math.stackexchange.com/questions/670967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Unbounded linear functional maps every open ball to $\mathbb{R}$? I can't get my head wrapped around this: Let $X$ be a normed linear space. Let $f:X\rightarrow\mathbb{R}$ be a linear functional on $X$. Prove that $f$ is unbounded if and only if $\forall y\in X$ and $\forall \delta>0$ we have $\{f(x)\,:\,|x-y|<\delta\}=\mathbb{R}$. I have already proved: 1.) $f$ is continuous if and only if $f$ is bounded 2.) $f$ is bounded if and only if $f^{-1}(0)$ is closed 3.) either $f^{-1}(0)$ is closed ($f$ bounded) or $f^{-1}(0)$ is dense in $X$ ($f$ unbounded) I know that all these ideas play off one another in some fashion, but cannot seem to tease out a solution to the above statement. Any help or direction would be appreciated.
Jonathan's suggestion is spot on, but let me give you a more explicit argument. Consider first the unit ball $X_1$ of $X$. As $f$ is unbounded, there exists a sequence $\{x_n\}\subset X_1$ with $|f(x_n)|>n$. By replacing $x_n$ with $-x_n$ if necessary, we can get $f(x_n)>n$. Given $t\in[0,1]$, $tx_n\in X_1$, and $f(tx_n)=tf(x_n)$. This shows that that $f(X_1)$ contains the whole segment $[0,f(x_n)]$, and in particular $[0,n]$. As we can do this for all $n$, we get that $f(X_1)\supset[0,\infty)$. Finally, using that $X_1=-X_1$, we get $f(X_1)=\mathbb R$. Any other ball $B$ in $X$ is of the form $x+\delta X_1$ for some $\delta>0$. Given such $\delta$, for any fixed $t\in\mathbb R$ by the previous paragraph there exists $y\in X_1$ with $f(y)=(t-f(x))/\delta$. Then $f(x+\delta y)=t$. So $f(B)=\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/671043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
The graph of the function is $g(x)=x^3-2x^2+x+1$ and the tangent to the curve is at $x=2$? a) Find the equation of the tangent to the curve at $x=2$ HELP and then b) Determine the angle that this tangent makes with the positive direction of the $x$-axis Please help I really need to know how to do this Please include working Thanks. For part a) I found the gradient by doing $g'(x)$ and subbing in $2$ for $x$ and I got $5$ so, so far I have $y=5x+c$, dunno how to find $c$ though
(a) OK, so we have the equation $$f(x) = x^3 - 2x^2 + x + 1$$ Taking the derivative we get $$f'(x) = 3x^2 - 4x + 1$$ At $x = 2$, $f'(2) = 3\cdot4 - 4\cdot2 + 1 = 5$, meaning that the equation of our tangent line is $$y = 5x + c$$ Also $f(2) = 8 - 2\cdot4 + 2 + 1 = 3$, so the graph passes through the point $(2, 3)$; substituting gives $c = 3 - 5\cdot 2 = -7$. Our tangent line equation is now $$y = 5x - 7$$ (b) Since the line passes through the points $(0, -7)$ and $(7/5, 0)$, we have a right triangle with height $7$ and base $7/5$, so $\tan\theta = 7/(7/5) = 5$ (the slope). You can find $\theta$ from that: $\theta = \arctan 5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/671116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Proving $n^2$ is even whenever $n$ is even via contradiction? I'm trying to understand the basis of contradiction and I feel like I have understood the ground rules of it. For example: Show that the square of an even number is an even number using a contradiction proof. What I have is: Let n represent the number. n is odd if n = 2k + 1, where k is any number n is even if n = 2k, where k is any number We must prove that if n^2 is even, then n is even. How do I proceed on from here?
We prove the contrapositive. In this case, we want to prove $n^2$ even implies $n$ even which is equivalent to the contrapositive $n$ not even implies $n^2$ not even or in other words $n$ odd implies $n^2$ odd. If $n$ is odd, then $n=2k+1$ then \begin{align*} n^2 &= (2k+1)^2 & \text{substituting in } n=2k+1 \\ &= 4k^2+4k+1 & \text{expanding} \\ &= 2(2k^2+2k)+1 \end{align*} which is odd, since it has the form $2M+1$. We can essentially turn this into a proof by contradiction by beginning with "If $n$ is odd and $n^2$ is even...", then writing "...giving a contradiction" at the end. Although this should be regarded as unnecessary. The other direction, i.e., $n$ even implies $n^2$ even can also be shown in a similar way: If $n=2k$, then $n^2=(2k)^2=4k^2=2(2k^2)$ which is even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/671243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What are all possible values of $ x \equiv a^\frac{p-1}{2} \pmod p$? Suppose p is an odd prime and a $\in$ $\mathbb{Z}$ such that $ a \not\equiv 0 \pmod p$. What are all the values of $ x \equiv a^\frac{p-1}{2} \pmod p$ ? This is what I got so far: $ x^2 \equiv a^{p-1} \pmod p$ By Fermat's Little Theorem, $ x^2 \equiv 1 \pmod p$ $ x^2 - 1 \equiv 0 \pmod p$ $ (x - 1)(x+1) \equiv 0 \pmod p$ So $\;p\mid(x-1)$ or $p\mid(x+1)$.
Do you know about quadratic residues? The values of $x$ are $1$ and $-1$: by Euler's criterion, $x \equiv 1 \pmod p$ for the $\frac{p-1}{2}$ values of $a$ (mod $p$) that are quadratic residues, and $x \equiv -1 \pmod p$ for the $\frac{p-1}{2}$ non-residues.
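A quick check of this (Euler's criterion) for a small prime, as a verification sketch:

```python
p = 23  # any odd prime works here
vals = [pow(a, (p - 1) // 2, p) for a in range(1, p)]
vals = [v if v == 1 else v - p for v in vals]   # write the residue p-1 as -1

assert set(vals) == {1, -1}
assert vals.count(1) == (p - 1) // 2    # the quadratic residues
assert vals.count(-1) == (p - 1) // 2   # the non-residues
```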
{ "language": "en", "url": "https://math.stackexchange.com/questions/671285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Finding Limit Points, Interior Points, Isolated Points, the Closure of $ A \subset \mathbb{R}^2$, where $A$ is the graph of the function $f: \mathbb{R} \rightarrow \mathbb{R}$, $f(x)= \sin(1/x)$ if $x$ doesn't equal $0$ and $0$ if $x=0$. (The distance in $\mathbb{R}^2$ is the standard $d_2$.) I am completely unsure how to approach the problem. I believe that there are no limit points because when I take the limit of $\sin(1/x)$ as $x \to \infty$ I get that the function is jumping between 1 and -1. Is this right?
I suggest you graph the function, and check what happens as $x \rightarrow \pm \infty$ (can you explain this behavior?). To find the closure of $A$, find all the points that the graph approaches arbitrarily closely. These are your boundary points, and the closure is the interior of $A$ along with the boundary of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/671457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Each vertex of this tree is either red or blue. How many possible trees are there? The question: Let $X$ denote the set of 'coloured trees' which result when each vertex of the tree is assigned one of the colours red or blue. How many different coloured trees of this kind are there? I am not quite sure where to begin with this question. We are studying Burnside's Lemma and the orbit-stabilizer theorem, so I'm assuming these are necessary to solve the problem. What really throws me off is that there are only 7 vertices, so if each one is red or blue, do I consider all these cases: 4 red, 3 blue; 3 red, 4 blue; 2 red, 5 blue; 5 red, 2 blue; 1 red, 6 blue; 6 red, 1 blue; 7 red, 0 blue; 0 red, 7 blue? The previous problem asked to find the automorphism group $G$ on the set of vertices of this tree, which I obtained. So maybe there is a way to apply $G$ to the trees with red or blue vertices in order to solve these? Any help is appreciated.
It sounds like a tree with $1,2,4$ blue is the same as a tree with $1,2,5$ blue. I would count as follows: First consider the subtree of $2,4,5$. Vertex $2$ has two choices and vertices $4,5$ have three choices, so there are six ways to color the subtree. Now for the whole tree, you have two choices for $1$. You have $6$ ways to choose the colorings of the two subtrees so they match and $\frac 12\cdot 6 \cdot 5=15$ ways to choose two different colorings of the subtrees, for a total of $2(6+15)=42$
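The figure is not shown here, but the answer's description pins down the tree: vertex 1 has children 2 and 3, vertex 2 has leaves 4 and 5, and vertex 3 has leaves 6 and 7. Under that assumed reconstruction, a brute force over all $2^7$ colourings modulo the automorphism group (a verification sketch, not part of the original answer) confirms the count of 42:

```python
from itertools import product

# assumed tree (indices 0..6 for vertices 1..7): 0 -> {1, 2}, 1 -> {3, 4}, 2 -> {5, 6}
identity = (0, 1, 2, 3, 4, 5, 6)
swap_45 = (0, 1, 2, 4, 3, 5, 6)          # swap leaves 4 and 5
swap_67 = (0, 1, 2, 3, 4, 6, 5)          # swap leaves 6 and 7
swap_subtrees = (0, 2, 1, 5, 6, 3, 4)    # swap the two subtrees wholesale

def compose(p, q):
    return tuple(p[q[i]] for i in range(7))

# close the generators into the full automorphism group
group = {identity}
changed = True
while changed:
    changed = False
    for g in list(group):
        for s in (swap_45, swap_67, swap_subtrees):
            h = compose(g, s)
            if h not in group:
                group.add(h)
                changed = True

assert len(group) == 8

def act(g, colouring):
    # new colour at vertex i is the old colour at g[i]
    return tuple(colouring[g[i]] for i in range(7))

# count orbits of red/blue colourings under the group
seen = set()
orbits = 0
for c in product("RB", repeat=7):
    if c not in seen:
        orbits += 1
        seen.update(act(g, c) for g in group)

assert orbits == 42
```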
{ "language": "en", "url": "https://math.stackexchange.com/questions/671555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
embedding of a finite algebraic extension In one of my courses we are proving something (so far, not surprising) and using the fact: if $F$ is a finite algebraic field extension of $K$, there is an embedding of $F$ into $K$. Well, it doesn't seem to me that we can really embed $F$ into $K$, since $F$ is bigger, but can we at least prove there is a homomorphism from $F$ to $K$?
Any homomorphism of fields must be zero or an embedding, as a field has no nontrivial ideals. There is always the natural inclusion $i: K\rightarrow F$ if $K\subseteq F$, but rarely do we have an embedding $F \rightarrow K$. For a simple example, there is no embedding $\Bbb C\rightarrow \Bbb R$, as only one of the two has a root of $x^2+1$, and an embedding preserves roots of this polynomial. There are in fact examples of algebraic extensions $K\subseteq F$ with embeddings $F\rightarrow K$ (e.g. the extension $K(x^p) \subseteq K(x)$ with the embedding $K(x)\rightarrow K(x^p)$, $x \mapsto x^p$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/671640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given an odd integer $a$ , establish that $a^2+(a+2)^2+(a+4)^2+1$ is divisible by $12$? Given an odd integer $a$ , establish that $a^2+(a+2)^2+(a+4)^2+1$ is divisible by $12$? So far I have: $a^2+(a+2)^2+(a+4)^2+1$ $=a^2+a^2+4a+4+a^2+8a+16+1 $ $=3a^2+12a+21$ $=3(a^2+4a+7) $ where do I go from here.. the solution I have is divisible by $3$ not $12$...
If $a$ is odd, then $a = 2b+1$ for some integer $b$. Then $a^2 + 4a + 7 = 4b^2 + 4b + 1 + 8b + 4 + 7 = 4(b^2 + 3b + 3)$, which is evenly divisible by $4$. Combine this with the divisibility by $3$ that you already have, and you're done.
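Putting the two pieces together, $12 \mid 3(a^2+4a+7)$ for every odd $a$, which a quick loop confirms (the helper name is mine):

```python
def s(a):
    # a^2 + (a+2)^2 + (a+4)^2 + 1 for a given integer a.
    return a**2 + (a + 2)**2 + (a + 4)**2 + 1

# Every odd a in [-99, 99], negatives included:
assert all(s(a) % 12 == 0 for a in range(-99, 100, 2))

# The value matches the factored form 3(a^2 + 4a + 7):
assert s(5) == 3 * (25 + 20 + 7)
print("divisible by 12 for all odd a tested")
```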
{ "language": "en", "url": "https://math.stackexchange.com/questions/671733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Find the point of inflection Will there be an inflection point if there is no solution for $x$ when $f ''(x) = 0$? For example, $$ f(x)=\frac{x^2-x+1}{x-1} $$ with domain $\mathbb{R}-\{1\}$ Also, is that when $x$ is smaller than $1$, $f(x)$ is concave down?
There is no inflection point if there is no solution for $x$ when $f''(x) = 0$. For your case, writing $f(x) = x + \dfrac{1}{x-1}$, the second derivative is $f''(x) = \dfrac{2}{(x-1)^3}$, which is never zero. If $x > 1$, then $f''(x) > 0$, so $f$ is concave up there. If $x < 1$, then $f''(x) < 0$, so yes, $f$ is concave down for $x < 1$.
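A quick finite-difference check of this sign pattern (the function names are mine):

```python
def f(x):
    return (x * x - x + 1) / (x - 1)

def f2(x, h=1e-4):
    # Central-difference approximation to the second derivative.
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# Concave down at sampled points left of x = 1, concave up right of it,
# with no sign change (hence no zero of f'') showing up on either side.
assert all(f2(x) < 0 for x in (-5, -1, 0, 0.5, 0.9))
assert all(f2(x) > 0 for x in (1.1, 1.5, 2, 5))
print("f'' < 0 for x < 1 and f'' > 0 for x > 1 at all sampled points")
```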
{ "language": "en", "url": "https://math.stackexchange.com/questions/671815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determining if $W$ is a subspace of $\mathbb R^3$ I'm trying to determine whether $W = \{(x, y, z) \mid x = 3y,\ z = -y\}$ is a subspace of $\mathbb R^3$. If someone can help me understand how to go about doing this that would be great!
As an alternative to performing a subspace test, we can just note that it's a span: we see that \begin{align*} W &= \{(3y,y,-y):y \in \mathbb{R}\} \\ &=\mathrm{span}\{(3,1,-1)\} \end{align*} and spans are subspaces.
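The subspace-test route works too; as a sanity check, here is a small script (the names are mine) verifying that $W$ contains the zero vector and is closed under addition and scalar multiplication:

```python
def in_W(v):
    x, y, z = v
    return x == 3 * y and z == -y

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u = scale(2.0, (3, 1, -1))     # a typical element: (6, 2, -2)
v = scale(-1.5, (3, 1, -1))    # another: (-4.5, -1.5, 1.5)

assert in_W((0, 0, 0))         # zero vector
assert in_W(add(u, v))         # closed under addition
assert in_W(scale(7, u))       # closed under scalar multiplication
print("all subspace conditions hold for the sampled vectors")
```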
{ "language": "en", "url": "https://math.stackexchange.com/questions/671879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
What is the probability to win? Die game You have a die. If you get one pip at any point in the game, you lose. If you get two, ..., six pips, you add the number of pips to a running sum. To win, the sum must become greater than or equal to 100. What is the probability of winning the game?
An alternative approach, which still requires numerical calculation, is to define a system state as the cumulative score. The system starts in state $0$ and there are a couple of absorbing states $L$ (lost) and $W$ (won). This yields 102 possible system states $\{0,1,2,\ldots,99,L,W\}$. Each roll of the die transforms the system state and it is straightforward to create the 102$\times$102 state transition matrix $T = (p_{ij})$, where $p_{ij}$ is the probability of moving from state $i$ to state $j$. Multiplying $T$ by itself $n$ times yields a matrix of the state transitions resulting from $n$ rolls of the die, and the row of this matrix corresponding to state $0$ yields the distribution of possible outcome states after these $n$ rolls. Taking $n\ge50$ ensures that for such outcomes only $L$ and $W$ have non-zero probabilities. (Note that whilst it is possible to lose from the first roll onwards, the definition of $L$ as an absorbing state with a corresponding transition $L \to L$ having a probability of 1 in $T$ means that it is legitimate to include rolls beyond the losing one. Similarly, defining $W$ as an absorbing state means we can consider rolls after the game is won.) In my calculation procedure, I constructed $T$ and from this derived $T^2,T^4,T^8,\ldots,T^{64}$. From the latter, my results indicate that the probability of winning is 0.010197 (to 6 decimal places).
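The same absorbing-chain idea can be run as a backward recursion instead of repeated matrix squaring: let $w_s$ be the probability of eventually winning from cumulative score $s$; then $w_s = \frac{1}{6}\sum_{k=2}^{6} w_{s+k}$ with $w_s = 1$ for $s \ge 100$ (rolling a 1 contributes $0$). A sketch using exact rational arithmetic:

```python
from fractions import Fraction

TARGET = 100

# win[s] = probability of eventually winning from cumulative score s.
# Scores TARGET..TARGET+5 are the absorbing "won" states.
win = [Fraction(0)] * (TARGET + 6)
for s in range(TARGET, TARGET + 6):
    win[s] = Fraction(1)

# Rolling a 1 (probability 1/6) loses and contributes nothing;
# rolling k in 2..6 moves the score from s to s + k.
for s in range(TARGET - 1, -1, -1):
    win[s] = sum(win[s + k] for k in range(2, 7)) / 6

print(float(win[0]))   # probability of winning from a score of 0
```

The printed value can be compared directly against the matrix computation described above.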
{ "language": "en", "url": "https://math.stackexchange.com/questions/672063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Value of $u(0)$ of the Dirichlet problem for the Poisson equation Pick an integer $n\geq 3$, a constant $r>0$ and write $B_r = \{x \in \mathbb{R}^n : |x| <r\}$. Suppose that $u \in C^2(\overline{B}_r)$ satisfies \begin{align} -\Delta u(x)=f(x), & \qquad x\in B_r, \\ u(x) = g(x), & \qquad x\in \partial B_r, \end{align} for some $f \in C^1(\overline{B}_r)$ and $g \in C(\partial B_r)$. I have to show that $$ u(0) = \frac{1}{n \alpha(n) r^{n-1}} \int_{\partial B_r} g(x)\, dS(x) + \frac{1}{n(n-2)\alpha(n)} \int_{B_r} \left( \frac{1}{|x|^{n-2}} - \frac{1}{r^{n-2}} \right) f(x)\, dx, $$ where $\alpha(n)$ is the volume of the unit ball and $n\alpha(n)$ is the surface area of its boundary, the unit sphere. Then I'm given the hint to modify the proof of the mean value theorem, with the final remark to remember that $\vec{\nabla} \cdot (v \vec{w}) = (\vec{\nabla}v)\cdot \vec{w} + v\vec{\nabla}\cdot \vec{w}$. Now the mean value theorem from the book Partial Differential Equations by Evans states that $$ u(x) = \frac{1}{n \alpha(n) r^{n-1}}\int_{\partial B(x,r)} u \, dS = \frac{1}{\alpha(n)r^n}\int_{B(x,r)} u \, dy, $$ which is valid for harmonic functions. Thus in case $f(x)=0$ we see that the statement holds. For the proof of the mean value theorem one defines the function $$ \phi(r) := \frac{1}{n \alpha(n) r^{n-1}}\int_{\partial B(x,r)} u(y)\, dS(y) = \frac{1}{n \alpha(n)}\int_{\partial B(0,1)} u(x+rz)\, dS(z). $$ Then it is shown that $\phi'(r)=0$, and so $$ \phi(r) = \lim_{t\rightarrow 0} \phi(t) = \lim_{t\rightarrow 0} \frac{1}{n \alpha(n) t^{n-1}}\int_{\partial B(x,t)} u(y)\, dS(y) = u(x). $$ The proof that $\phi'(r)$ indeed vanishes is done with Green's formulas: \begin{align} \phi'(r) &= \frac{1}{n \alpha(n)}\int_{\partial B(0,1)} D u(x+rz)\cdot z \, dS(z) \\ &= \frac{1}{n \alpha(n) r^{n-1}}\int_{\partial B(x,r)} D u(y)\cdot \frac{y-x}{r} \, dS(y) \\ &= \frac{1}{n \alpha(n) r^{n-1}}\int_{\partial B(x,r)} \frac{\partial u}{\partial \nu} \, dS(y) \\ & = \frac{1}{n \alpha(n) r^{n-1}}\int_{B(x,r)} \Delta u \, dy =0. \end{align} However for the non-harmonic problem I'm given, the last equation becomes $$ \frac{1}{n \alpha(n) r^{n-1}}\int_{B(x,r)} \Delta u \, dy = -\frac{1}{n \alpha(n) r^{n-1}}\int_{B(x,r)} f(y) \, dy. $$ What I probably should do is change the function $\phi$, but I can't seem to figure out how. Any help will be much appreciated.
I would suggest an alternative approach. Do you know the Green's function for the domain $B_r$, specifically, the value at the center $G(0; y)$? It is $$G(0;y)=\frac{1}{n(n-2)\alpha_n} \left(\frac{1}{r^{n-2}} -\frac{1}{|y|^{n-2}} \right).$$ Now the solution to the Poisson equation $u$ is the sum of the solution of the Laplace equation with inhomogeneous BCs $v$ and the solution of the Poisson equation with homogeneous BCs $w$; furthermore, $v(0)$ is given by the mean value theorem and $w(0)$ is given by the Poisson integral formula $w(0)=-\int G(0;y)f(y)dy$. Final step: $u(0)=v(0)+w(0)$ and you should be done?
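As a sanity check on the signs in this decomposition, one can test the target formula in a special case where everything is explicit. Take $n=3$ with constant $f$ and $g$: then $u(x) = g + f\,(r^2-|x|^2)/(2n)$ solves the problem exactly (since $\Delta |x|^2 = 2n$), so the formula should return $u(0) = g + f r^2/(2n)$. The constants and quadrature below are a sketch:

```python
import math

n, r, f, g = 3, 1.0, 2.5, -0.75
alpha = 4.0 * math.pi / 3.0        # volume of the unit ball in R^3

# Surface term: the average of the constant g over the sphere is g.
surface_term = g

# Volume term: the kernel is radial, so integrate over spherical shells
# of area n * alpha * rho^(n-1) with the midpoint rule.
m = 200_000
volume_term = 0.0
for i in range(m):
    rho = (i + 0.5) * r / m
    kernel = rho ** (2 - n) - r ** (2 - n)
    volume_term += kernel * n * alpha * rho ** (n - 1) * (r / m)
volume_term *= f / (n * (n - 2) * alpha)

u0 = surface_term + volume_term
exact = g + f * r ** 2 / (2 * n)
print(u0, exact)   # the two values should agree to several decimals
```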
{ "language": "en", "url": "https://math.stackexchange.com/questions/672160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Why does maximum likelihood estimation for uniform distribution give maximum of data? I am looking at parameter estimation for the uniform distribution in the context of MLEs. Now, I know the likelihood function of the uniform distribution $U(0,\theta)$, which is $1/\theta^n$, cannot be maximized by setting its derivative with respect to $\theta$ to zero. The conclusion is that the estimate of $\theta$ is $\max(x_i)$, where $x_1,x_2,\ldots,x_n$ is the random sample. I would like a layman's explanation for this.
Your likelihood function is correct, but strictly speaking only for values of $\theta \ge x_{\max}$. As you know, the likelihood function is the product of the individual contributions $P(X_i=x_i|\theta)$ (densities, in the continuous case). $$L = P(X_1=x_1|\theta) \times P(X_2=x_2|\theta) \times \ldots \times P(X_n=x_n|\theta) $$ We can look at three different cases for $\hat\theta$ = the estimate of $\theta$ that maximizes the likelihood. (i) $\hat\theta_1 < \max(x_i)$ Let's say that $x_j=x_{\max}$. This gives $P(X_j=x_{\max}|\theta=\hat\theta_1) = 0$, because $x_{\max}$ is outside the support of the random variable, and so $$L = P(X_1=x_1|\theta=\hat\theta_1) \times \ldots \times P(X_j=x_{\max}|\theta=\hat\theta_1) \times \ldots \times P(X_n=x_n|\theta=\hat\theta_1) = 0 $$ (ii) $\hat\theta_2 > \max(x_i)$ Here all the data is within the support so we have $L=1/\hat\theta_2^n$. Clearly case (ii) is better than (i) because $1/\hat\theta_2^n > 0$. (iii) $\hat\theta_3 = \max(x_i)$ Once again $L=1/\hat\theta_3^n$. But case (iii) is better than case (ii) because $\forall a,b:\; a>b>0 \;\Rightarrow\; 1/a^n < 1/b^n$. So out of the three cases, case (iii) maximizes $L$. (Note that you can use calculus to show that $L$ is monotonically decreasing for $\theta > x_{\max}$ if you wish.)
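A quick numerical illustration of cases (ii)/(iii) (the setup and names here are mine): scan the log-likelihood over a grid of candidate $\theta$ values and observe that it is maximized at the smallest admissible value, i.e. essentially at $\max(x_i)$.

```python
import math
import random

random.seed(0)

theta_true = 5.0
xs = [random.uniform(0, theta_true) for _ in range(50)]
x_max = max(xs)

def log_likelihood(theta):
    # Case (i): some observation lies outside (0, theta] -> likelihood 0.
    if theta < x_max:
        return -math.inf
    # Cases (ii)/(iii): L = 1/theta^n, so log L = -n * log(theta).
    return -len(xs) * math.log(theta)

grid = [0.01 * k for k in range(1, 1001)]   # candidate thetas in (0, 10]
best = max(grid, key=log_likelihood)

# The grid maximizer is the smallest grid point >= max(xs).
print(f"max(xs) = {x_max:.4f}, grid argmax = {best:.4f}")
```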
{ "language": "en", "url": "https://math.stackexchange.com/questions/672266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }