Finding all solutions to $3x + 4y \equiv 1 \pmod 7$ Find all solutions $\pmod 7$ of $3x + 4y \equiv 1 \pmod 7$. I have tried writing out the various equations such as $3x + 4y = 1$, $3x + 4y = 8$, $3x + 4y = 15$, etc., but I do not know how to find the finite set of solutions.
Hint: for any $x,y$ we always have $3x+4y=3(x+4)+4(y-3)$. So by induction: $3x+4y\equiv 1 \mod 7 \implies 3(x\pm 4k)+4(y\mp 3k)\equiv 1 \mod 7$. So, if you know one solution, you know infinitely many. The question is: are there any solutions not generated from your first one? (I.e., can we make that $\implies$ into $\iff$?)
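The hint can be confirmed by brute force (a quick sketch of my own): there are exactly $7$ solution pairs modulo $7$, and the family generated from the single solution $(1,3)$ already covers all of them, so the $\implies$ is in fact an $\iff$.

```python
# All residue pairs (x, y) mod 7 satisfying 3x + 4y ≡ 1 (mod 7).
solutions = {(x, y) for x in range(7) for y in range(7) if (3 * x + 4 * y) % 7 == 1}

# One solution by inspection: (1, 3), since 3*1 + 4*3 = 15 ≡ 1 (mod 7).
# The family from the hint, reduced mod 7:
family = {((1 + 4 * k) % 7, (3 - 3 * k) % 7) for k in range(7)}
```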
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Query related to Eq. 3.471.9 of the book of Gradshteyn and Ryzhik (Table of Integrals, Series, and Products) Equation 3.471.9 of Table of Integrals, Series, and Products (by Gradshteyn and Ryzhik) is written below: $$\int_0^{\infty}x^{v-1}e^{-\frac{\beta}{x}-\gamma x}dx=2\left(\frac{\beta}{\gamma}\right)^{\frac{v}{2}}K_{v}(2\sqrt{\beta \gamma})$$ Although it is mentioned that $\operatorname{Re}(\beta)>0$ and $\operatorname{Re}(\gamma)>0$, nothing is written about $v$, so my question concerns the values of $v$. Is the above equation valid for all possible real values of $v$? And if it is not valid, then how does one evaluate the above integral for general real values of $v$? Many thanks in advance.
The integral will converge as long as $\text{Re}(\beta) > 0$ and $\text{Re}(\gamma) > 0$, regardless of the value of $v$: the factor $e^{-\beta/x}$ kills any power of $x$ as $x \to 0^+$, and $e^{-\gamma x}$ does the same as $x \to \infty$. Both sides are analytic as functions of $v$ for fixed $\beta, \gamma$, so the equation holds for all $v$.
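For what it's worth, the identity can be spot-checked numerically in pure Python (the parameter choices $v=0.3$, $\beta=\gamma=1$ are mine), computing $K_v$ from its standard integral representation $K_v(z)=\int_0^\infty e^{-z\cosh t}\cosh(vt)\,dt$:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

v, beta, gamma = 0.3, 1.0, 1.0

# Left-hand side: the integrand vanishes fast as x -> 0+ (e^{-beta/x})
# and as x -> infinity (e^{-gamma x}), so truncating the range is harmless.
lhs = trapezoid(lambda x: x ** (v - 1) * math.exp(-beta / x - gamma * x),
                1e-9, 60.0, 200000)

# K_v(z) = \int_0^\infty e^{-z cosh t} cosh(v t) dt.
z = 2.0 * math.sqrt(beta * gamma)
kv = trapezoid(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(v * t),
               0.0, 30.0, 100000)

rhs = 2.0 * (beta / gamma) ** (v / 2) * kv
```

With these parameters the two sides agree to well within the quadrature error.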
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to divide $2n$ members of a club into disjoint teams of $2$ members each, when the teams are not labelled? I am doing this and getting $\sum_{i=0}^{2n-2} \frac{(2n-i)!}{2!(2n-i-2)!}$, but the answer given is $\frac{(2n)!}{2^n n!}$. I even tried to get this by simplifying my result, but I am not getting the same thing.
Their answer: imagine lining up everyone in a row and then pairing up adjacent people. There are $(2n)!$ ways to line everyone up, but different line-ups can produce the same teams. To account for this overcounting, divide by $n!$ for the number of ways to arrange the $n$ teams in a row, and divide by $2$ for each pair to account for swapping the order of the members of each pair. In your answer, you probably meant $\prod_{i=0}^{n-1} \binom{2n-2i}{2} = \prod_{i=0}^{n-1}\frac{(2n-2i)(2n-2i-1)}{2} = \frac{(2n)!}{2^n}$, which comes from choosing the first pair, choosing the second pair, and so on. But here, you are accounting for the order in which the teams are chosen. (Recall the problem states the teams are not labeled.) So, you should divide by $n!$. This agrees with their answer.
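The closed form $\frac{(2n)!}{2^n n!}$ gives $1, 3, 15, 105, \ldots$, which can be confirmed by brute-force enumeration for small $n$ (a sketch):

```python
from math import factorial

def pairings(people):
    """All ways to split an even-sized list into unordered pairs."""
    if not people:
        return [[]]
    first, rest = people[0], people[1:]
    result = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

counts = [len(pairings(list(range(2 * n)))) for n in range(1, 5)]
formula = [factorial(2 * n) // (2 ** n * factorial(n)) for n in range(1, 5)]
```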
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
What is $\tan ^{-1} (5i/3)$ What is $\tan ^{-1} (5i/3)$? My progress: Let $\tan x= \dfrac{5i}{3}= \dfrac{\sin x}{\cos x}$. I tried using $\sin x= \dfrac{e^{ix}-e^{-ix}}{2i}, \cos x= \dfrac{e^{ix}+e^{-ix}}{2}$ to show that $\dfrac{e^{ix}-e^{-ix}}{e^{ix}+e^{-ix}}= \dfrac{-5}{3}$, or $e^{2ix}= \dfrac{-1}{4}$, but I'm stuck here.
You are fine. Just solve $e^{2ix}= \dfrac{-1}{4}$, taking into account that $x$ is complex. Write $x = r + i c$ to get $e^{-2c + 2ir}= \dfrac{-1}{4}= \dfrac{1}{4} e^{i \pi}$ and identify $r = \pi /2$ (modulo $\pi$) and $c = \ln 2$. So $$ x = \pi /2 + i \ln 2 $$
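A quick numerical check (a sketch using Python's `cmath`) confirms $\tan(\pi/2 + i\ln 2) = 5i/3$; note that since $e^{2ix}$ determines $x$ only up to multiples of $\pi$, the full solution set is $x = \pi/2 + k\pi + i\ln 2$, $k\in\mathbb{Z}$.

```python
import cmath
import math

# Candidate value from the answer: pi/2 + i ln 2.
x = math.pi / 2 + 1j * math.log(2)
val = cmath.tan(x)   # should equal 5i/3
```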
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
System of two equations with 3 unknowns and parameters Is there a way to solve this for $c_1$, $c_2$, $c_3$ in terms of $a$'s and $b$'s? $$ \begin{cases} a_1c_1+a_2c_2+a_3c_3=0 \\ b_1c_1+b_2c_2+b_3c_3=0 \end{cases} $$
In general, this system (with only two equations but three unknowns) will have an infinite number of solutions. You can choose one of the $c$'s as a free variable and solve the system in terms of the $a$'s, $b$'s and the free variable as a parameter. For example, solving for $c_1$ and $c_2$ (see Wolfram|Alpha) yields: $$c_1 = \frac{c_3 (a_3 b_2 - a_2 b_3)}{a_2 b_1 - a_1 b_2} \quad , \quad c_2 = \frac{c_3 (a_3 b_1 - a_1 b_3)}{a_1 b_2 - a_2 b_1}$$ provided that $a_2 b_1 \ne a_1 b_2$. Based on your comment, I'll add this. We now have an infinite number of solutions of the form: $$(c_1,c_2,c_3) = \left( \frac{c_3 (a_3 b_2 - a_2 b_3)}{a_2 b_1 - a_1 b_2} \;,\; \frac{c_3 (a_3 b_1 - a_1 b_3)}{a_1 b_2 - a_2 b_1}\;,\; c_3 \right)$$ where you can take $c_3 \in \mathbb{R}$ arbitrarily. For a 'nice' solution, choose $c_3 = a_1 b_2 - a_2 b_1$ to get: $$\left(a_2 b_3 - a_3 b_2 \;,\; a_3 b_1 - a_1 b_3 \;,\; a_1 b_2 - a_2 b_1 \right)$$ which is exactly the cross product $(a_1,a_2,a_3)\times(b_1,b_2,b_3)$.
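Since the 'nice' solution is the cross product of $(a_1,a_2,a_3)$ and $(b_1,b_2,b_3)$, it is orthogonal to both coefficient vectors, which is exactly what the system demands. A quick check with random coefficients (a sketch of mine):

```python
import random

random.seed(0)
a = [random.uniform(-5, 5) for _ in range(3)]
b = [random.uniform(-5, 5) for _ in range(3)]

# Cross product a x b -- the 'nice' closed-form solution.
c = [a[1] * b[2] - a[2] * b[1],
     a[2] * b[0] - a[0] * b[2],
     a[0] * b[1] - a[1] * b[0]]

r1 = sum(ai * ci for ai, ci in zip(a, c))   # a1 c1 + a2 c2 + a3 c3
r2 = sum(bi * ci for bi, ci in zip(b, c))   # b1 c1 + b2 c2 + b3 c3
```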
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Intuition behind the Itô formula What is the intuition behind the Itô formula? It looks like it comes out of nowhere to me. I recall that if $f\in \mathcal C^2(\mathbb R)$, then $$f(B_t)-f(B_0)=\int_0^t f'(B_s)dB_s+\frac{1}{2}\int_0^t f''(B_s)ds.$$
From my modest experience in stochastic calculus, I would say that if a stochastic process depends on the Brownian motion $B_s$, the stochastic differential equation for such a process can be intuitively derived by expanding its differential in terms of the Brownian motion, $\textit{i.e.}$ $$df(s,B_s) =\frac{\partial f}{\partial s}ds+ \frac{\partial f}{\partial B_{s}}dB_s + \frac{1}{2}\frac{\partial^2 f}{\partial B_s^2}(dB_s)^2+...$$ I know that this is not very rigorous, but it works. From the definition of Brownian motion one knows that $(dB_s)^2 = ds$, hence keeping terms up to first order in $ds$, $$df(s,B_s) =\left(\frac{\partial f}{\partial s}+\frac{1}{2}\frac{\partial^2 f}{\partial B_s^2}\right)ds+ \frac{\partial f}{\partial B_{s}}dB_s\tag1$$ Your expression is the analogue of $(1)$, but in integral form. Hope this helps
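To see the $\frac12 f''$ correction at work numerically, take $f(x)=x^2$, for which the formula reads $B_T^2 = 2\int_0^T B_s\,dB_s + T$ (with $B_0=0$). The following simulation (my own illustration) checks on one discretized path that the quadratic variation $\sum(\Delta B)^2$ is close to $T$, which is precisely where the extra $ds$ term comes from:

```python
import math
import random

random.seed(42)
T, n = 1.0, 200000
dt = T / n
sqdt = math.sqrt(dt)

B = 0.0
ito_sum = 0.0   # approximates \int_0^T B_s dB_s (left-endpoint sums)
quad_var = 0.0  # approximates the quadratic variation sum of (dB)^2
for _ in range(n):
    dB = random.gauss(0.0, sqdt)
    ito_sum += B * dB
    quad_var += dB * dB
    B += dB

lhs = B * B               # f(B_T) - f(B_0) with f(x) = x^2
rhs = 2.0 * ito_sum + T   # Ito formula: f' = 2x, f'' = 2
```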
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Gelfand duality for the connection between $K$ and $C(K)$ Suppose $K_1,K_2$ are compact topological spaces, and let $\pi$ be a homomorphism $C(K_2)\rightarrow C(K_1)$; then we can find a continuous map $\tau: K_1\rightarrow K_2$ such that $\pi(f)=f\circ\tau$. Likewise, if we have $\tau: K_1\rightarrow K_2$, then the map $\pi$ defined by $\pi(f)=f\circ\tau$ is a homomorphism. This is called Gelfand duality. Thus it is easy to prove that $\tau$ is a homeomorphism iff $\pi$ is an isometric isomorphism. I read some notes about this. They say that every topological property of $K$ is reflected by an algebraic property of $C(K)$, e.g. $K$ is connected iff $C(K)$ contains no nontrivial idempotents. But from Gelfand duality, I can only see how the connection between $C(K_1)$ and $C(K_2)$ reflects the connection between $K_1$ and $K_2$.
Gelfand duality tells you that you have a contravariant functor between the category of compact Hausdorff spaces and the category of commutative unital $C^*$-algebras (which is even an equivalence). The part you stated is only the contravariance, i.e. that your functor reverses the direction of the morphisms (continuous maps for spaces, algebra homomorphisms for algebras). For the connection between e.g. connectedness of $K$ and the absence of nontrivial idempotents in $C(K)$, you have to dig a little deeper and learn how to recover the space if you have the algebra at hand (via defining a topology on the spectrum of the algebra). See https://ncatlab.org/nlab/show/Gelfand+duality and references therein for details. Note that understanding this material will probably take some time, as you have to understand the basics of category theory and need some profound knowledge of $C^*$-algebras and topological spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computation of a series. NOTATIONS. Let $n\in\mathbb{N}$. We define the sets $\mathfrak{M}_{0}:=\emptyset$ and \begin{align} \mathfrak{M}_{n}&:=\left\{m=\left(m_{1},m_{2},\ldots,m_{n}\right)\in\mathbb{N}^{n}\mid1m_{1}+2m_{2}+\ldots+nm_{n}=n\right\}&\forall n\geq1 \end{align} and we use the notations: \begin{align} m!&:=m_{1}!m_{2}!\ldots m_{n}!,&|m|&:=m_{1}+m_{2}+\ldots+m_{n}. \end{align} QUESTION. I want to evaluate or just bound with respect to $n$ the series \begin{align} S_{n}&:=\sum_{m\in\mathfrak{M}_{n}}\frac{\left(n+\left|m\right|\right)!}{m!}\ \prod_{k=1}^{n}\left(k+1\right)^{-m_{k}}. \end{align} My hope is that $S_{n}\leq n!n^{\alpha}$ with $\alpha$ independent of $n$. BACKGROUND. In order to build an analytic extension from a given real-analytic function, I had to use Faà di Bruno's formula for a composition (see for example https://en.wikipedia.org/wiki/Faà_di_Bruno%27s_formula). After some elementary computations, my problem boils down to showing the convergence of \begin{align} \sum_{n=0}^{+\infty}\frac{x^{n+1}}{(n+1)!}\sum_{m\in\mathfrak{M}_{n}}\frac{\left(n+\left|m\right|\right)!}{m!}\ \prod_{k=1}^{n}\left(k+1\right)^{-m_{k}} \end{align} where $x\in\mathbb{C}$ is such that the complex modulus $|x|$ can be taken as small as desired (in particular, we can choose $|x|<\mathrm{e}^{-1}$ to kill any $n^{\alpha}$ term from the bound on $S_{n}$). SOME WORK. It is clear that we have to understand the sets $\mathfrak{M}_{n}$ in order to go on (whence the tag "combinatorics"). So I tried to see what these sets are: * *for $n=2$ : \begin{array}{cc} 2&0\\ 0&1 \end{array} *for $n=3$ : \begin{array}{ccc} 3&0&0\\ 1&1&0\\ 0&0&1 \end{array} *for $n=4$ : \begin{array}{cccc} 4&0&0&0\\ 2&1&0&0\\ 1&0&1&0\\ 0&2&0&0\\ 0&0&0&1\\ \end{array} *for $n=5$ : \begin{array}{ccccc} 5&0&0&0&0\\ 3&1&0&0&0\\ 2&0&1&0&0\\ 1&0&0&1&0\\ 1&2&0&0&0\\ 0&0&0&0&1\\ 0&1&1&0&0\\ \end{array} Above, each line corresponds to a multiindex $m$, and the $k$-th column is the coefficient $m_{k}$. 
We see for example that the cardinality of $\mathfrak{M}_{n}$ becomes strictly greater than $n$ if $n\geq5$. Also, because I wanted to reorder the set of summation in $S_{n}$ into the sets of all multiindices $m$ such that $|m|=j$ for $1\leq j\leq n$, I tried to count, for given $j$, the number of $m$ such that $|m|=j$; when $n=10$, I counted $8$ multiindices $m$ with length $|m|=4$, so this number can be greater than $n/2$. Another remark is that the number of multiindices $m$ such that $|m|=j$ becomes larger when $j$ is "about" $n/2$ - don't ask me what "about" means here, I just tried some examples and observed this phenomenon.
Here is a solution of a related problem, followed by a recommendation for the original problem. It would be much simpler if your sum did not have the factor $(n+\left|m\right|)!$. In that case, we could look at the related sum $$t_n=\sum_{m\in {\mathfrak{M} }_n}\frac{1}{m!}\prod_{k=1}^n(k+1)^{-m_k}.$$ The sum for the $t$'s comes from a product of exponential generating functions. Because of the factor of $(k+1)^{-m_k}$ in $t_n$ and the term $k\,m_k$ in ${\mathfrak{M} }_n$, we must look at the series $$1+\frac{\left(\frac{x^k}{k+1}\right)^1}{1!} +\frac{\left(\frac{x^k}{k+1}\right)^2}{2!} +\frac{\left(\frac{x^k}{k+1}\right)^3}{3!} +\cdots=\exp\left(\frac{x^k}{k+1}\right).$$ From multiplying these exponential generating functions, we get $$t_n=\left[x^n\right]\prod_{k\ge1}\exp\left(\frac{x^k}{k+1}\right).$$ This product turns out to have a nice closed form: \begin{eqnarray*} \prod_{k\ge1}\exp\left(\frac{x^k}{k+1}\right) &=& \exp\left(\sum_{k\ge1}\frac{x^k}{k+1}\right) \\ &=& \exp\left(\frac1x\biggl(\log\bigl(\frac1{1-x}\bigr)-x\biggr)\right) \\ &=& (1-x)^{-1/x}/\mathrm{e} . \end{eqnarray*} The smallest singularity of $(1-x)^{-1/x}$ is at $1$, so a crude approximation would be $$[x^n](1-x)^{-1/x}\approx1^n=1$$ and $$t_n=\left[x^n\right](1-x)^{-1/x}/\mathrm{e}\approx 1/\mathrm{e}.$$ Certainly, a finer analysis of the singularity of $(1-x)^{-1/x}$ would give a better approximation and perhaps produce the power $\alpha$ you're seeking. Now, back to the original problem. It's always the case that $|m|\le n$, hence $(n+|m|)!\le(2n)!$, so a rough bound would be $$S_n\le(2n)!\,t_n\approx(2n)!/\mathrm{e}\,.$$ This bound is worse than the hope you expressed, but perhaps good enough for your eventual purposes, or perhaps a start for finer analysis.
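As a sanity check (my own, in exact rational arithmetic), the partition sum $\sum_{m\in\mathfrak{M}_n}\frac{1}{m!}\prod_{k}(k+1)^{-m_k}$ agrees with the coefficient $[x^n]\exp\left(\sum_{k\ge1}\frac{x^k}{k+1}\right)$ computed by the standard exp-of-series recurrence $j\,c_j=\sum_{k=1}^{j} k\,a_k\,c_{j-k}$:

```python
from fractions import Fraction
from math import factorial

def t_direct(n):
    """Sum of 1/m! * prod_k (k+1)^(-m_k) over all m in M_n (partitions of n)."""
    total = Fraction(0)

    def rec(remaining, max_part, counts):
        nonlocal total
        if remaining == 0:
            term = Fraction(1)
            for k, mk in counts.items():
                term /= factorial(mk) * (k + 1) ** mk
            total += term
            return
        for k in range(min(max_part, remaining), 0, -1):
            counts[k] = counts.get(k, 0) + 1
            rec(remaining - k, k, counts)
            counts[k] -= 1

    rec(n, n, {})
    return total

def t_egf(n):
    """[x^n] exp(sum_{k>=1} x^k/(k+1)) via the recurrence j*c_j = sum k*a_k*c_{j-k}."""
    a = [Fraction(0)] + [Fraction(1, k + 1) for k in range(1, n + 1)]
    c = [Fraction(1)] + [Fraction(0)] * n
    for j in range(1, n + 1):
        c[j] = sum(k * a[k] * c[j - k] for k in range(1, j + 1)) / j
    return c[n]
```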
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
If $p + q = 1$ prove that for any natural $n, m$ the following is true: $(1 - p^n)^m + (1 - q^m)^n \ge 1$ Let $p, q \in \mathbb R$ be positive reals for which $p + q = 1$. How can one prove that for any two natural numbers $n, m$ the following inequality is true? $(1 - p^n)^m + (1 - q^m)^n \ge 1$ I don't have much knowledge about solving inequalities, so I tried to use the Cauchy-Schwarz inequality, the binomial theorem and some other basic techniques, but it led me nowhere. I've been thinking about it for a long time and now I'm completely stuck.
Here is a probabilistic proof. Consider an $n\times m$ array of independent coin tosses, each landing heads with probability $p$ and tails with probability $q = 1-p$. Let $A$ = every one of the $m$ columns (each a string of $n$ tosses) contains at least one tail, so $P(A) = (1-p^n)^m$; and let $B$ = every one of the $n$ rows (each a string of $m$ tosses) contains at least one head, so $P(B) = (1-q^m)^n$. If $A$ fails, some column consists entirely of heads; if $B$ fails, some row consists entirely of tails. Both cannot happen at once, since that row and that column share a cell. Hence $A\cup B$ is the whole sample space, and $$(1-p^n)^m + (1-q^m)^n = P(A)+P(B) \ge P(A\cup B) = 1.$$
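The inequality itself is easy to check numerically on a grid before proving it (an illustrative sketch):

```python
# Grid check of (1 - p^n)^m + (1 - q^m)^n >= 1 with q = 1 - p.
ok = all(
    (1 - p ** n) ** m + (1 - (1 - p) ** m) ** n >= 1 - 1e-12
    for n in range(1, 8)
    for m in range(1, 8)
    for p in [i / 100 for i in range(1, 100)]
)
```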
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Why can we say $0\leq\sin^2(x)\leq 1$? I often see instructors write: $0\leq\sin^2(x)\leq 1$ Why is this valid? Isn't it supposed to be between $-1$ and $1$?
If $x \in \mathbb R$ then $x^2 \ge 0$. If $|x| \le 1$ then $x^2 = |x|^2 = |x||x| \le |x|\cdot 1 = |x| \le 1$. So, as $-1 \le \sin x \le 1$, it follows that $(\sin x)^2 \le 1$. And as $(\sin x)^2 \ge 0$, we get $0 \le \sin^2 x \le 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2049940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Everywhere defined operators must be bounded? I have read in many places that as soon as you have an everywhere defined operator (on a Banach space), it must be automatically bounded, by the Closed Graph Theorem. However, I can't prove this using the Closed Graph Theorem (i.e., I can't prove it would be closed) and I can't find a reference for this. Is this true? Why?
You cannot prove that, as it is not true (with the axiom of choice). The statement which is true by the closed graph theorem is: If $T \colon X \to Y$ is a closed operator defined on a Banach space $X$ into a Banach space $Y$, then $T$ is bounded. Addendum: Let $X$ be an infinite dimensional Banach space and $Y \ne 0$ a Banach space. Then there is an unbounded linear $T \colon X \to Y$. Let (AC!) $B$ be a Hamel basis of $X$, let $B' =\{b_n : n \in \mathbf N\}$ be a countable subset of $B$, and pick $y \in Y$ with $y \ne 0$. Define $T$ by linear extension of $$ T(b) = \begin{cases} n\|b_n\|y & b = b_n \\ 0 & b \in B \setminus B'\end{cases} $$ Then $T$ is linear $X \to Y$, and unbounded due to $$ \|T(b_n)\| = n\|b_n\|\|y\| $$ hence $\|T\| \ge n \|y\|$ for every $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Unconditional expectation derived from conditional binomial, Poisson and exponential. $X\mid Y\sim\text{Binomial}(Y,p)$, $Y\mid Z\sim\text{Poisson}(Z)$, $Z\sim\text{Exponential}(b)$, where $p$ and $b$ are constants. What are $E(X)$ and $\text{Var}(X)$? My way to solve it is to find the joint pdf of $(Y\mid Z)$ and $Z$ so we can have the marginal (unconditional) $Y$, and similarly get the marginal of $X$ to compute $E(X)$ and $\text{Var}(X)$. Is there a simpler way to solve it?
From the formulas $\mathbb{E}(\mathrm{Bin}(n,p)) = np$ and $\mathbb{V}\mathrm{ar}(\mathrm{Bin}(n,p)) = np(1 - p)$, you get $\mathbb{E}(X\mid Y) = Yp$ and $\mathbb{V}\mathrm{ar}(X\mid Y) = Yp(1 - p)$. Hence $\mathbb{E}(X) = \mathbb{E}(Y)p$ and, by the law of total variance, $$\mathbb{V}\mathrm{ar}(X) = \mathbb{E}(\mathbb{V}\mathrm{ar}(X\mid Y)) + \mathbb{V}\mathrm{ar}(\mathbb{E}(X\mid Y)) = p(1-p)\,\mathbb{E}(Y) + p^2\,\mathbb{V}\mathrm{ar}(Y).$$ It suffices to find $\mathbb{E}(Y)$ and $\mathbb{V}\mathrm{ar}(Y)$. Similarly, $\mathbb{E}(Y\mid Z) = Z$, so $\mathbb{E}(Y) = \mathbb{E}(Z) = \dfrac{1}{b}$. To calculate $\mathbb{V}\mathrm{ar}(Y)$ one needs to be a bit careful: $\mathbb{V}\mathrm{ar}(Y) = \mathbb{E}(Y^2) - \mathbb{E}(Y)^2 = \mathbb{E}(Y^2) - \dfrac{1}{b^2}$, and $\mathbb{E}(Y^2) = \mathbb{E}(\mathbb{E}(Y^2\mid Z)) = \int_0^\infty \mathbb{E}(Y^2\mid Z=z)\, be^{-bz}\,dz$; since $Y\mid Z=z \sim \mathrm{Pois}(z)$, it follows that $\mathbb{E}(Y^2\mid Z=z) = z + z^2$, whence $\mathbb{E}(Y^2) = \int_0^\infty (z + z^2)\,be^{-bz}\,dz =\dfrac{1}{b} + \dfrac{2}{b^2}$ and $\mathbb{V}\mathrm{ar}(Y) = \dfrac{1}{b} + \dfrac{1}{b^2}$. Substitute back and you get all you wanted.
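A Monte Carlo sketch (mine; the Poisson sampler is Knuth's multiply-uniforms method, adequate for the small rates an Exponential(1) produces) supports the closed forms: by the law of total variance, with $b=1$ and $p=1/2$ one gets $\mathbb{E}(X)=1/2$ and $\mathbb{V}\mathrm{ar}(X)=p(1-p)\mathbb{E}(Y)+p^2\mathbb{V}\mathrm{ar}(Y)=3/4$.

```python
import math
import random

random.seed(7)
b, p, N = 1.0, 0.5, 200000

def poisson(lam):
    """Knuth's method: count uniforms until their product drops below e^(-lam)."""
    threshold = math.exp(-lam)
    k, prod = 0, random.random()
    while prod > threshold:
        k += 1
        prod *= random.random()
    return k

xs = []
for _ in range(N):
    z = random.expovariate(b)                              # Z ~ Exponential(rate b)
    y = poisson(z)                                         # Y | Z ~ Poisson(Z)
    xs.append(sum(random.random() < p for _ in range(y)))  # X | Y ~ Binomial(Y, p)

mean = sum(xs) / N
var = sum((v - mean) ** 2 for v in xs) / N

expected_mean = p / b                                          # = 0.5
expected_var = p * (1 - p) / b + p ** 2 * (1 / b + 1 / b ** 2)  # = 0.75
```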
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Dividing an infinite power series by another infinite power series Let's say I have two power series $\,\mathrm{F}\left(x\right) = \sum_{n = 0}^{\infty}\,a_{n}\,x^{n}$ and $\,\mathrm{G}\left(x\right) = \sum_{n = 0}^{\infty}\,b_{n}\,x^{n}$. If I define the function $\displaystyle{\,\mathrm{H}\left(x\right) = \frac{\mathrm{F}\left(x\right)}{\mathrm{G}\left(x\right)} = \frac{\sum_{n = 0}^{\infty}\, a_{n}\,x^{n}}{\sum_{n = 0}^{\infty}\, b_{n}\, x^{n}}}$, is there a general way to expand $\,\mathrm{H}$ such that $\,\mathrm{H}\left(x\right) = \sum_{n=0}^{\infty}\,c_{n}\,x^{n}$? I guess what I'm asking is whether there is a way to get the first few $c_{n}$ coefficients. I'm dealing with a physics problem in which I have two such functions $\,\mathrm{F}$, $\,\mathrm{G}$ and I'd like to get the first few terms in the power series $\,\mathrm{H}$.
The standard way (in other words, there is nothing original in what I am doing here) to get $H(x)$ is to write $H(x)G(x) = F(x)$ and derive a recurrence for the $c_n$. $\begin{array}\\ H(x)G(x) &=\sum_{i=0}^{\infty} c_{i} x^{i} \sum_{j=0}^{\infty} b_{j} x^{j}\\ &=\sum_{i=0}^{\infty} \sum_{j=0}^{\infty} c_{i}b_{j} x^{i+j}\\ &=\sum_{n=0}^{\infty} \sum_{i=0}^{n} c_{i}b_{n-i} x^{n}\\ &=\sum_{n=0}^{\infty} x^{n} \sum_{i=0}^{n} c_{i}b_{n-i} \\ \end{array} $ Since $H(x)G(x) = F(x) = \sum_{n=0}^{\infty} a_{n} x^{n} $, equating coefficients of $x^n$, we get $a_n =\sum_{i=0}^{n} c_{i}b_{n-i} $. If $n=0$, this is $a_0 = c_0b_0$ so, assuming that $b_0 \ne 0$, $c_0 =\dfrac{a_0}{b_0} $. For $n > 0$, again assuming that $b_0 \ne 0$, $a_n =\sum_{i=0}^{n} c_{i}b_{n-i} =c_nb_0+\sum_{i=0}^{n-1} c_{i}b_{n-i} $ so $c_n =\dfrac{a_n-\sum_{i=0}^{n-1} c_{i}b_{n-i}}{b_0} $. This is the standard recurrence for dividing power series.
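Here is the recurrence in exact arithmetic (a sketch; the $\sin/\cos \to \tan$ example is my choice):

```python
from fractions import Fraction

def divide_series(a, b, N):
    """Coefficients c_0..c_N of F/G via c_n = (a_n - sum_{i<n} c_i b_{n-i}) / b_0."""
    assert b[0] != 0
    a = a + [Fraction(0)] * (N + 1 - len(a))
    b = b + [Fraction(0)] * (N + 1 - len(b))
    c = []
    for n in range(N + 1):
        s = sum(c[i] * b[n - i] for i in range(n))
        c.append((a[n] - s) / b[0])
    return c

F = Fraction
# sin(x)/cos(x) should reproduce the tangent series: x + x^3/3 + 2x^5/15 + ...
sin_c = [F(0), F(1), F(0), F(-1, 6), F(0), F(1, 120), F(0), F(-1, 5040)]
cos_c = [F(1), F(0), F(-1, 2), F(0), F(1, 24), F(0), F(-1, 720), F(0)]
tan_c = divide_series(sin_c, cos_c, 7)
```

The recovered coefficients $x + x^3/3 + 2x^5/15 + 17x^7/315$ match the known tangent series.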
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 2 }
For the following language, prove, without using Rice's Theorem, whether it is in D, SD but not D, or not in SD. The following language is: $L = \{<M>|¬L(M)∈D \}$ Let's say there is a language regTM $= \{<M>|L(M) $ is regular$ \}$. I know that regTM is undecidable, and therefore I am led to believe that $L$ is also undecidable. How would I prove this, without using Rice's Theorem?
Why think about regular languages at all? Hint: Can $L$ be enumerated? If it can, can you construct a TM that decides whether a program $P$ halts on an input $X$ or not? You can easily simulate $P$ on $X$, and if it halts you would observe it; so, given $P$ and $X$, can you computably construct a program $Q$ that accepts an undecidable language iff $P$ does not halt on $X$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Spectrum for a bounded linear operator and its adjoint on a Banach space are the same. I have to show that the spectrum of a bounded linear operator and that of its adjoint on a Banach space are the same. The spectrum is defined as $$ \sigma(T)=\{\lambda\in \mathbb{K}\ :\ T-\lambda I \ \text{is not invertible}\}. $$ I have to show $\sigma(T)=\sigma(T^*)$. Let $\lambda \notin \sigma(T)$; then $ (T-\lambda I ) $ is invertible and bounded. This implies $(T-\lambda I)^*$ is also invertible, since $$ (T^*-\lambda I)^{-1}=[(T-\lambda I)^{-1}]^*\implies T^*-\lambda I \ \text{is invertible}. $$ So $\lambda\notin \sigma(T^*).$ I am unable to prove the other part. Can anyone help me please? Thanks.
$T-\lambda I$ is invertible if and only if $(T-\lambda I)^*=T^*-\lambda I$ is invertible. This is because, for every bounded linear operator $A$, invertibility of $A$ and of $A^*$ are equivalent; that invertibility of $A$ implies invertibility of $A^*$ follows by taking the adjoints of $AA^{-1}=I$ and $A^{-1}A=I$, and the converse holds as well, which gives the other inclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Must an automorphism of the group of real numbers under multiplication preserve sign? Suppose we have an automorphism $\phi$ of the group $(\mathbb{R}^{\#},\,\cdot)$. I need to show that $\phi$ preserves the sign of the numbers, i.e. that $\phi(\mathbb{R}^+)=\mathbb{R}^+$ and $\phi(\mathbb{R}^-)=\mathbb{R}^-$. I've had a bit of success: I determined fairly painlessly that $\phi(r)>0\iff\phi(r^{-1})>0$ for any $r\in\mathbb{R}^\#$. But I'm having difficulty making any other sort of progress. My guess is that there's something trivial that I'm missing that I'm currently just too blind to see, but either way, I'm a bit frustrated with my stagnancy here. Any help or hints or suggestions would really help. For reference, the only things I really know about automorphisms are that they're bijective and that they satisfy $\phi(a) \cdot \phi(b)=\phi(a \cdot b)$ for all $a,b\in\mathbb{R}^\#$.
Can you think of a property, in terms of multiplication only, that positive numbers have but negative ones don't? (HINT: think about $x^2$ . . .)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Composition of functions is continuous? Let $f$ and $g$ be two functions defined from $[0,1]$ to $[0,1]$ with $f$ strictly increasing. Then * *if $g$ is continuous, is $f\circ g$ continuous? *if $f$ is continuous, is $f\circ g$ continuous? *if $f$ and $f\circ g$ are continuous, is $g$ continuous? Here, $f\circ g$ denotes the composition of $f$ and $g$. I think the answer to the third is yes, using the fact that the preimage of an open set under a continuous map is open. Any ideas? Thanks.
In plain English: * *if $g$ is continuous, is $f\circ g$ continuous? Not necessarily. We know that $f$ is strictly increasing, but that does not imply that it is continuous. Counter-example: take $g$ to be the identity and $f$ any strictly increasing, non-continuous function, e.g. $f(x)=x/2$ for $x\le 1/2$ and $f(x)=(x+1)/2$ for $x>1/2$; then $f\circ g=f$ is not continuous. In other words, stating that the input to $f$ "changes smoothly" (i.e. $g$ is continuous) states nothing whatever about the output of $f$. *if $f$ is continuous, is $f\circ g$ continuous? Not necessarily. $g$ could be any arbitrary function; it may not be continuous. Describing a function as "continuous" states that if the input changes smoothly, the output changes smoothly. If the output of $g$ (i.e. the input of $f$) jumps around arbitrarily, the composition may jump as well (e.g. take $f$ to be the identity and $g$ any discontinuous function). *if $f$ and $f\circ g$ are continuous, is $g$ continuous? Yes: since $f$ is strictly increasing and continuous on $[0,1]$, it is a homeomorphism onto its image, so $g=f^{-1}\circ(f\circ g)$ is continuous. Note that the hypothesis that $f$ is strictly increasing, given at the beginning of the question, is necessary here. If this restriction is omitted, the following is a counter-example: $g(x) = \begin{cases} 0.2 & x \le 0.5 \\ 0.8 & x \gt 0.5 \end{cases}$ and $f(x) = 4x^2 - 4x +1$. Note the following attributes of these definitions: * *$f$ is continuous. *$f \circ g$ is continuous (it is constant, since $f(0.2)=f(0.8)=0.36$). *$g$ is not continuous.
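The counter-example in item 3 can be checked directly (a small sketch): $f(0.2)=f(0.8)=0.36$, so $f\circ g$ is constant even though $g$ jumps.

```python
# g jumps at 1/2, f is the continuous parabola (2x - 1)^2,
# and f(g(x)) is constant, hence continuous.
def g(x):
    return 0.2 if x <= 0.5 else 0.8

def f(x):
    return 4 * x ** 2 - 4 * x + 1

values = [f(g(i / 100)) for i in range(101)]
```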
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
I need help with understanding expected values Consider a discrete random variable $X$ that takes on the values $x_1,...,x_n$, where for every $x_i$ there's a probability $p(x_i)=p_i$. I am attempting to understand why the expectation $E[X]$ is defined the way it is. $$E[X]=x_1p_1+...+x_np_n$$ I cannot figure out why it is defined this way and I have already checked my book and Wikipedia. I don't understand how this "weighted average" thing is applicable here. $E[X]$ is supposed to be the average value $X$ takes if some experiment is performed many times, correct? Since $p_1+...+p_n=1$, $E[X]$ can be expressed as $$E[X]=\frac{x_1p_1+...+x_np_n}{p_1+...+p_n}$$ but that still does not tell me anything. I would appreciate it if someone could help me out here.
I will try to give you some more insight: suppose you have a random variable $X$ which takes two values (say $4$ and $8$) with a chance of 50% each: $P(X=4) = P(X=8) =.5 = p_1 = p_2$. You can imagine that $X$ is linked to a fair coin, where every head corresponds to $4$ and every tail corresponds to $8$. The expected value is a number which represents the average outcome of the random variable $X$. So assume you have several observations of $X$ (i.e. you throw a coin, say, 1000 times and note the outcome). Since both outcomes are equally likely, the arithmetic mean (or average) of these 1000 experiments would be close to 6: $$ Ave_{1000} = \frac{\#occurrences~ of~ 4}{1000}\cdot 4 +\frac{\#occurrences~ of~ 8}{1000}\cdot 8 \approx \frac{500}{1000} \cdot 4 + \frac{500}{1000}\cdot 8 =6 $$ The expected value can be obtained by taking the number of repetitions to infinity: by doing so, the relative frequency of fours would converge to $.5$ (since probabilities can be defined as limits of relative frequencies) and so would the relative frequency of eights. Hence, $$ Ave_{\infty} := E[X] = \frac 12 4 + \frac 12 8 = p_1 4 + p_2 8 = 6. $$ So the expected value is the arithmetic mean of an experiment repeated infinitely often. In general, if your experiment $X$ has $n$ different outcomes denoted by $(x_1, \dots, x_n)$ occurring with probabilities $(p_1, \dots, p_n)$, then $p_i$ will be close to the relative frequency of outcome $x_i$ if the number of repetitions is large. If you have say 1000 repetitions, then the arithmetic mean (or average) is given by $$ Ave_{1000} = \sum_{i=1}^n \frac{\#occurrences~ of~ x_i}{1000} \cdot x_i \approx \sum_{i=1}^n p_i \cdot x_i. $$ Again, by taking the number of repetitions to infinity, the relative frequencies converge to the probabilities and $Ave_N$ converges to a number which is known as the expected value. I hope this was somewhat helpful.
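The convergence of the average to $E[X]=6$ is easy to watch in a simulation (a sketch of mine):

```python
import random

random.seed(1)
N = 100000
# A fair coin: heads -> 4, tails -> 8.
outcomes = [4 if random.random() < 0.5 else 8 for _ in range(N)]
average = sum(outcomes) / N   # should be close to E[X] = 0.5*4 + 0.5*8 = 6
```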
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Transform expression from $2^{-n}$ to $x \times 10^{-m}$ I'm trying to understand how floating-point numbers are stored in the computer. While trying to understand it I ran into the following kind of transformation: $2^{-n} \approx x \times 10^{-m}$ For example: $2^{-1074} \approx 5 \times 10^{-324}$ Can you please explain to me how this transformation from the base-2 expression to the base-10 expression is performed? Thanks in advance
You want to convert $a\times 2^b$ to $c\times 10^d$. Obviously, you noticed that $d=\lfloor \log_{10} (a\times 2^b)\rfloor$. So now you need $c$, which is given by $c=a\times 10^{-d}\times 2^{b}$ and which you can evaluate easily using logarithms. Hope it helps. (Note: the floor function used to find the exponent $d$ gives the greatest integer not exceeding its argument.)
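In Python this recipe looks as follows for the question's example $2^{-1074}$ (a sketch; exact rational arithmetic sidesteps overflow when forming $2^{b}\times 10^{-d}$):

```python
import math
from fractions import Fraction

b_exp = -1074                                   # convert 2**(-1074) to c * 10**d
d = math.floor(b_exp * math.log10(2))           # d = floor(log10(2**b))
c = float(Fraction(2) ** b_exp * Fraction(10) ** -d)  # exact product, rounded once
```

This reproduces $2^{-1074}\approx 4.94\times 10^{-324}$, which rounds to the quoted $5\times 10^{-324}$.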
{ "language": "en", "url": "https://math.stackexchange.com/questions/2050903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the minimum-variance unbiased estimator for given $\tau(\theta)$ Let $X = (X_1, \dots, X_n)$ be a sample from the distribution $U (0,\theta)$. Prove that $T(X) = X_{(n)}$ is a complete and sufficient statistic for $\theta$ and find the minimum-variance unbiased estimator $T^*(X)$ of a differentiable function $\tau(\theta)$. The proof of sufficiency can be very easily carried out using the factorization criterion; I have done it. Next we need to prove completeness. By the definition we need to deduce $\mathbb{P}(g(T) = 0) = 1$ from $$\mathbb{E}_{\theta}g(T(X)) = \displaystyle\int\limits_{0}^{\theta}g(x)\frac{n x^{n-1}}{\theta^n}dx = 0, \quad \forall \theta > 0.$$ It can easily be done if $g$ is continuous: $\displaystyle\int\limits_{0}^\theta g(x)x^{n-1}dx = 0$ for all $\theta>0$ gives, by differentiation, $g(\theta)\theta^{n-1} = 0$, so $g(\theta) = 0$. This works for continuous functions, but how to prove it for all $g$ such that $\mathbb{E}_{\theta}g(T(X))$ exists? And next I need to find the minimum-variance unbiased estimator $T^*(X)$ of a differentiable function $\tau(\theta)$. It seems like this is connected with the first question and can be done using something like the Lehmann–Scheffé theorem, but I do not know how to do it exactly. Great thanks for the help!
You have $$E_{\theta}[g(T)]=\int_0^\theta g(x)\frac{nx^{n-1}}{\theta^n}\,dx,\quad\theta>0,$$ for every measurable function $g:(0,\infty)\rightarrow\mathbb{R}$ such that $g(x)x^{n-1}$ is Lebesgue integrable on $(0,\theta)$ for all $\theta>0$. If $E_\theta[g(T)]=0$ for all $\theta>0$, then, multiplying by $\theta^n$, $$\int_0^\theta g(x)\,nx^{n-1}\,dx=0\quad\text{for all }\theta>0.$$ By the Fundamental Theorem of Calculus for the Lebesgue integral, differentiating in $\theta$ gives $$n\,g(\theta)\theta^{n-1}=0,\quad\text{a.e. }\theta>0,$$ and therefore $g=0$ for almost every $\theta>0$. Then $P(g(T)=0)=1$, as wanted. If you want the minimum variance unbiased estimator of $\tau(\theta)$, you need first of all an unbiased estimator of $\tau(\theta)$, say $W=W(X)$. For a general $\tau$, I do not know if one can be found. But, for example, if $\tau(\theta)=\theta$, then you could take $W=(n+1)/n \,T$ as proved in the accepted answer here. As $T$ is sufficient and complete for $\theta$, by the Lehmann-Scheffé Theorem $T^*=E[W|T]$ is the minimum variance unbiased estimator of $\tau(\theta)$. For instance, if $\tau(\theta)=\theta$ again, $T^*=(n+1)/n\,E[T|T]=(n+1)/n\,T$.
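For the concrete case $\tau(\theta)=\theta$, a quick simulation (my sketch) supports the unbiasedness of $(n+1)/n\,X_{(n)}$:

```python
import random

random.seed(3)
theta, n, N = 2.0, 5, 100000

total = 0.0
for _ in range(N):
    sample_max = max(random.uniform(0, theta) for _ in range(n))
    total += (n + 1) / n * sample_max   # the estimator (n+1)/n * X_(n)
estimate = total / N                    # should be close to theta
```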
{ "language": "en", "url": "https://math.stackexchange.com/questions/2051015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to draw arrowheads? I am trying to draw an arrow from $\begin{bmatrix}x_1\\y_1\end{bmatrix}$ to $\begin{bmatrix}x_2\\y_2\end{bmatrix}$. Here is my work: if I can draw an arrow rotating, then I can draw an arrow pointing in any direction. Here is the Java code (full code); this section runs in an infinite loop. x1 = 200; y1=200; x2 = 200+150*cos(angle); y2 = 200-150*sin(angle); a=20; phi = (float)Math.atan2(y2-y1, x2-x1); line(x1, y1, x2, y2); triangle(x2, y2, x2+a*(float)Math.cos(phi+2.88f), // 165 deg = 2.88 rad y2+a*(float)Math.sin(phi+2.88f), x2+a*(float)Math.cos(phi+3.4f), // 195 deg = 3.4 rad y2+a*(float)Math.sin(phi+3.4f) ); angle+=.01; But when I run this code, the arrowhead is flipped for the left half-plane, like this. Please help me figure out where I am wrong. Why does the arrowhead flip in the left half-plane, i.e. when $x_2<x_1$?
Most likely your atan2() function isn't returning what you expect in that region. You may need to add $\pi$ to the result for those values (i.e., when $x_2<x_1$) to get the branch of $\tan^{-1}$ that you desire.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2051149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is $R^n$ projective as a $M_n(R)$-module? It is clear that $R^n$ is not free over $M_n(R)$. But is it projective? I suspect that it should be projective because we can probably come up with a projective basis, but I'm not sure how to find the basis. Moreover, ideally if $R^n$ were projective over $M_n(R)$, we would get a big class of projective modules that are not free which would be interesting, I guess.
Yes: $R^n$ is a finitely generated projective generator of $\textrm{Mod-}R$, so it is a finitely generated projective generator as a module over its endomorphism ring, which is the ring of matrices $M_n(R)$. This is quite easy in general. Let $P_R$ be a finitely generated projective generator of $\mathrm{Mod\text{-}}R$ and let $S=\operatorname{End}(P_R)$. Then $P$ is a left $S$-module. Let's prove it is a finitely generated projective generator. Consider a (split) epimorphism $R^n\to P$. By applying $\operatorname{Hom}_R(-,P)$, we get the split monomorphism $$ \operatorname{Hom}_R(P,P)\to\operatorname{Hom}_R(R^n,P) $$ The domain is isomorphic to $S$ as a left module, the codomain is isomorphic to $P^n$ as $S$-modules. Thus $S$ is a direct summand of ${}_SP^n$ and so ${}_SP^n$ is a generator of $S\textrm{-Mod}$, which implies ${}_SP$ is a generator as well. Since $P_R$ is a generator, there is a split epimorphism $P^n\to R$. Then, applying $\operatorname{Hom}_R(-,P)$, we get a split monomorphism $\operatorname{Hom}_R(R,P)\to\operatorname{Hom}_R(P^n,P)$. The domain is isomorphic to ${}_SP$ and the codomain is isomorphic to ${}_SS^n$. Thus ${}_SP$ is a direct summand of ${}_SS^n$ and therefore it is finitely generated projective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2051237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
$A$ is $n\times n$ complex matrix with $A^2=A$. then need to show that Rank$A$=Trace$A$ So, I have that $A^n=A$ for every natural number $n$ which gives me that determinant of $A$ is either $0$ or the $(n-1)^{th}$ roots of unity, but I don't know how to take it from here.What do I do now? Thanks in advance!
For $x\in\Bbb C^n$ we have $x=(x-Ax)+Ax$, and since $A^2=A$ gives $A(x-Ax)=Ax-A^2x=0$, we have $x-Ax\in\ker A$ and $Ax\in \operatorname{Im}(A)$. Moreover $\operatorname{Im}(A)\cap\ker(A)=\{0\}$: if $x=Ay$ and $Ax=0$, then $x=Ay=A^2y=Ax=0$. Hence $$\Bbb C^n=\operatorname{Im}(A)\oplus \ker(A)$$ Let $p=\operatorname{rank}(A)$, let $(e_1,\ldots,e_p)$ be a basis of $\operatorname{Im}(A)$ and $(e_{p+1},\ldots,e_n)$ a basis of $\ker A$, so that $\mathcal B=(e_1,\ldots,e_n)$ is a basis of $\Bbb C^n$. It is easy to prove that $x\in \operatorname{Im}(A)\iff Ax=x$, so we see that $A$ is similar to this matrix (relative to the basis $\mathcal B$) $$D=\operatorname{diag}(\underbrace{1,\ldots,1}_{p\,\text{times}},0,\ldots,0)$$ hence $$\operatorname{Tr}(A)=\operatorname{Tr}(D)=p=\operatorname{rank}(A)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2051469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Graphical interpretation of composition of functions We know function and its inverse are mirror image about the line y=x and also that their composition is identity function (y=x again) So I was wondering if there is a link? I tried to look up for graphical interpretation of composition of functions but I couldn't fund any.
If $(x,f(x))$ is a point on the graph of the invertible function $y=f(x)$, then $(f(x),f^{-1}(f(x)))$ is a point on the graph of its inverse $f^{-1}$. Since $f^{-1}(f(x))=x$, every point $(x,f(x))$ on the graph of $f$ has a symmetric point $(f(x),x)$, with respect to the line $y=x$, on the graph of the inverse function. The figure illustrates this fact for the functions $f(x)=e^x$ and $f^{-1}(x)=\ln x$. The two symmetric points on the graphs are $E=(\ln 2,\,e^{\ln 2})=(\ln 2,\,2)$ and $F=(2,\ln 2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2051672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difference between conditional probability and Bayes rule I know the Bayes rule is derived from the conditional probability. But intuitively, what the difference? The equation looks the same to me. The nominator is the joint probability and the denominator is the probability of the given outcome. This is the conditional probability: P(A∣B)=P(A∩B)/P(B) This is the Bayes' rule: P(A∣B)=P(B|A)*P(A)/P(B). Isn't "P(B|A)*P(A)" and "P(A∩B)" the same? When A and B are independent, there is no need to use the Bayes rule, right? What's the difference intuitively between conditional probability and bayes rule?
Bayes' theorem uses inverse, or posterior, probability, and it also uses the total probability of an event: in practice the denominator $P(B)$ is expanded by the law of total probability, $P(B)=\sum_i P(B\mid A_i)P(A_i)$ over a partition $\{A_i\}$. You are considering only two events here; looking at this more general case may shed light on the difference.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2051826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Alternative way of calculating the multiplicative inverse Suppose I want to find the inverse of $7^{-1} \equiv 1 \mod 19 $, is there a quick way to do instead of extended euclidean algorithm? (Assume the numbers are not large). Thank you
Yeah, the way most people do it in competitive programming is as follows: Suppose you have a number $k$ that is coprime to $n$ and you want to find the inverse of $k$ mod $n$. Then it is just $k^{\varphi(n)-1}$. You can calculate this really fast using exponentiation by squaring.
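As a quick check of this recipe in Python (a sketch of my own; the function name is made up), applied to the example $7^{-1} \bmod 19$:

```python
import math

def mod_inverse(k, n):
    # Inverse of k mod n via Euler's theorem: k^(phi(n) - 1) mod n,
    # assuming gcd(k, n) = 1.  Three-argument pow does fast
    # exponentiation by squaring.
    phi = sum(1 for j in range(1, n) if math.gcd(j, n) == 1)  # Euler's totient
    return pow(k, phi - 1, n)

inv = mod_inverse(7, 19)
print(inv, (7 * inv) % 19)  # -> 11 1
```

Since $19$ is prime, $\varphi(19)=18$, and indeed $7\cdot 11 = 77 \equiv 1 \pmod{19}$.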
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
In given figure, prove In the given figure, two chords $AB$ and $CD$ intersect at right angles at $X$. Prove that $\text{arc } AD - \text{arc } CA =\text{arc }BD - \text{arc } BC$ My Attempt: Join $AC$ and $AD$ $1. \quad \angle CAB+\angle ACD=\angle AXD$ $2. \quad \angle DAB +\angle ADC=\angle AXC$. $3. \quad \angle AXC=90^{\circ}$ What should I do next? Please complete the proof.
$$\angle AOD=2\angle ACD$$ $$\angle AOC=2\angle ADC$$ $$\angle BOD=2\angle BAD$$ $$\angle BOC=2\angle BAC$$ $$\angle ACD+\angle BAC=\angle BAD+\angle ADC=90^{\circ}$$ $$\therefore \quad \angle ACD-\angle ADC=\angle BAD-\angle BAC$$ $$\therefore \quad \angle AOD-\angle AOC=\angle BOD-\angle BOC$$ $$\therefore \quad \operatorname{arc} AD-\operatorname{arc} AC = \operatorname{arc} BD-\operatorname{arc} BC$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
An interesting differential equations problem Suppose we have four ants, initially at rest, at the four corners of a square centered at the origin. They start walking clockwise, each ant walking directly toward the one in front of him. Suppose also that each ant walks with unit velocity, derive a differential equation that describes the trajectories. Thought: Here is the situation of the problem After some time $t$, the ants are now at points $E,F,G,H$. If we denote by $\mathbf{r(t)}$ the position of ant at $A$, we know $\mathbf{r(0)} = (1,1)$ and after some $t$, at point $E$, we have $\mathbf{r(t)} = (x(t),y(t)) = E $. We are given that $$ \frac{ y - 1 }{x - 1 } = 1 $$ Also, using the arclength formula, we know the path of ant $A$ is $$ \int\limits_0^t \sqrt{ (x')^2 + (y')^2 } dt $$ Am I on the right track?
Here is my try. Let us look for a symmetric solution, so I will assume the ants are at $re^{i\theta}= z, iz, -z, -iz$. The differential equation satisfied by the ant at $z$ is $$ \frac{d}{dt}\left(re^{i\theta}\right) = e^{i (\theta + 3\pi/4)} .$$ This can be written as $$\frac{d r}{dt} = -\frac 1 {\sqrt 2}, \ \frac{d \theta}{d t} = \frac 1{r\sqrt 2} \mbox{ with the initial conditions } r = \sqrt 2, \theta = \pi/4, $$ which has solution $$ r = \sqrt 2 - \frac t {\sqrt 2}, \quad \theta = \pi/4 + \ln\left( \frac 2{ 2-t}\right), \quad 0 \le t < 2.$$
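As a numerical sanity check of this closed form (a sketch of my own, using a simple Euler step): simulate four ants, each moving with unit speed directly toward the next, and compare their common distance from the origin with $r(t)=\sqrt 2 - t/\sqrt 2$.

```python
import math

# Four ants at the corners of the square [-1,1]^2; each one walks with
# unit speed directly toward the next ant in the cycle.
ants = [complex(1, 1), complex(1, -1), complex(-1, -1), complex(-1, 1)]
dt, t = 1e-4, 0.0
while t < 1.0:
    new = []
    for i, z in enumerate(ants):
        d = ants[(i + 1) % 4] - z
        new.append(z + dt * d / abs(d))  # unit-speed step toward the target
    ants, t = new, t + dt

r_sim = abs(ants[0])                        # simulated distance from center
r_exact = math.sqrt(2) - t / math.sqrt(2)   # closed form r(t)
print(r_sim, r_exact)                       # the two agree closely
```

By symmetry all four distances stay equal, and the simulated radius tracks the closed form up to the Euler discretization error.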
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 0 }
Derivative of a definite integral. I'm studying about Fourier Transform and the author wrote this: $f(b) = \displaystyle \int_{0}^{\infty} e^{-ax^2}cos(bx)dx$ (b is the variable here) Therefore: $f'(b) = \displaystyle - \int_{0}^{\infty} xe^{-ax^2}sin(bx)dx$ Well, I cannot see this straightforward, and I am having troubles trying to see this. Can anyone help me?
Use Leibniz's formula for differentiation under the integral sign: $$\frac{\partial f(a,b)}{\partial b}=\frac{\partial }{\partial b}\int_{0}^{\infty}\exp{(-ax^2)}\cos{(bx)}\,dx=\int_{0}^{\infty}\exp{(-ax^2)}\frac{\partial \cos{(bx)}}{\partial b}\,dx$$ The rest is obvious. Hope it helps
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of a sequence including infinite product. $\lim\limits_{n \to\infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)$ I need to find the limit of the following sequence: $$\lim\limits_{n \to\infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)$$
PRIMER: In THIS ANSWER, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the logarithm function satisfies the inequalities $$\bbox[5px,border:2px solid #C0A000]{\frac{x-1}{x}\le \log(x)\le x-1} \tag 1$$ for $x>0$. Note that we have $$\begin{align} \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)&=\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)\tag 2 \end{align}$$ Applying the right-hand side inequality in $(1)$ to $(2)$ reveals $$\begin{align} \sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)&\le \sum_{k=1}^n \frac{k}{n^2}\\\\ &=\frac{n(n+1)}{2n^2} \\\\ &=\frac12 +\frac{1}{2n}\tag 3 \end{align}$$ Applying the left-hand side inequality in $(1)$ to $(2)$ reveals $$\begin{align} \sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)&\ge \sum_{k=1}^n \frac{k}{k+n^2}\\\\ &\ge \sum_{k=1}^n \frac{k}{n+n^2}\\\\ &=\frac{n(n+1)}{2(n^2+n)} \\\\ &=\frac12 \tag 4 \end{align}$$ Putting $(2)-(4)$ together yields $$\frac12 \le \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)\le \frac12+\frac{1}{2n} \tag 5$$ whereby application of the squeeze theorem to $(5)$ gives $$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty} \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)=\frac12}$$ Hence, we find that $$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)=\sqrt e}$$ And we are done!
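As a numerical sanity check of the boxed limit (a sketch of my own):

```python
import math

def partial_product(n):
    # prod_{k=1}^{n} (1 + k/n^2)
    p = 1.0
    for k in range(1, n + 1):
        p *= 1.0 + k / n**2
    return p

approx = partial_product(10**5)
print(approx, math.sqrt(math.e))  # the two agree to several decimal places
```

The squeeze in the answer predicts the partial product lies between $\sqrt e$ and $\sqrt e\, e^{1/(2n)}$, so for $n=10^5$ the gap is of order $10^{-5}$ or smaller.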
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
Isomorphisms between quotient groups Is it true that if $\mathbb{Z} \cong A/B$ with $A, B$ abelian groups then $A \cong \mathbb{Z} \times B$? I think it must be true, but can't show it.
If $A/B\simeq \mathbf Z$, it is isomorphic to a direct summand of $A$. Indeed, consider the commutative triangle: \begin{align} p: A \longrightarrow & A/B\\ s\nwarrow\;&\enspace\downarrow \varphi \\ &\enspace\,\mathbf Z \end{align} where $p$ is the canonical map, $\varphi$ the given isomorphism and $ s$ is the homomorphism defined by $s(1)=$ an inverse image of $1$ by $\varphi\circ p$. By construction, we have $\;(\varphi\circ p)\circ s=\operatorname{id}_\mathbf Z$ and $s$ is injective. Note any $x\in A$ can be written as $$x=s(\varphi\circ p(x))+\bigl(x-s(\varphi\circ p(x))\bigr),$$ and $\;x-s(\varphi\circ p(x))\in\ker p=B$, since $$\;p\bigl(x-s(\varphi\circ p(x))\bigr)=p(x)-(p\circ s)(\varphi\circ p(x))=p(x)-p(x)=0. $$ Further, $\operatorname{Im} s\cap B=\{0\}$: indeed if $x\in B$ and $x=s(n)$, then $n=(\varphi\circ p)(s(n))=\varphi(p(x))=0$, so that $x=s(0)=0$. Thus $$A=\operatorname{Im} s\oplus B\simeq\mathbf Z\oplus B. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove using mean value theorem. $|\arctan(\frac{a}{4})^{4}-\arctan(\frac{b}{4})^{4}|\leq \pi^{3}\cdot|a-b|$ I started with $f(x)=\arctan(\frac{x}{4})^{4}$. The function is continuous on $[a,b]$ and differentiable in (a,b) So there exists a $c\in (a,b)$ such that $f'(c)=\frac{\arctan(\frac{b}{4})^{4}-\arctan(\frac{a}{4})^{4}}{b-a}$. From this I have $$|\frac{-1}{1+c^{2}}|= \frac{1}{1+c^{2}}=\frac{|\arctan(\frac{b}{4})^{4}-\arctan(\frac{a}{4})^{4}|}{|b-a|}$$ and $a<c<b$. How to get the inequality with $\frac{1}{1+c^{2}}$from it?
Let $f(x) = x^4$, and suppose $|x| \le M$; then $|f'(x)| \le 4 M^3$ for $|x| \le M$. Using the mean value theorem we have $|f(x)-f(y)| \le 4 M^3 |x-y|$. Let $M = {\pi \over 2}$ and note that $|\arctan x| \le M$ and $|\arctan' x| \le 1$, hence $|(\arctan x)^4 - (\arctan y)^4| \le {1 \over 2} \pi^3 |\arctan x -\arctan y | \le {1 \over 2} \pi^3 |x-y|$. Now let $x = {a \over 4}, y = {b \over 4}$ to get $$|(\arctan {a\over 4})^4 - (\arctan {b \over 4})^4| \le {1 \over 2} \pi^3 \left|{a \over 4} -{b \over 4}\right| = { \pi^3 \over 8} |a-b| \le \pi^3 |a-b|.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $ (\frac{1+\sqrt5}{2})^{n}+(\frac{1+\sqrt5}{2})^{n-1} = (\frac{1+\sqrt5}{2})^{n+1}$. I have to get from $ (\frac{1+\sqrt5}{2})^{n}+(\frac{1+\sqrt5}{2})^{n-1}$ to $(\frac{1+\sqrt5}{2})^{n+1}$ however I do not know how to get there since i do not know what to do with the exponents. (not sure if I used the right tag)
HINT: Let $\varphi=\frac12\left(1+\sqrt5\right)$; you have to show that $\varphi^n+\varphi^{n-1}=\varphi^{n+1}$. Divide through by $\varphi^{n-1}$ to see that all you really need to show is that $\varphi+1=\varphi^2$. That’s a matter of fairly straightforward arithmetic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2052999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing that a matrix is invertible by factorisation Let $P_5$ be the vector space of polynomials of degree $\leq$ 5 over $Q$. $P_5 = {a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5: a_i \in Q}$ and let $D: P_5 \longrightarrow P_5$ be the linear map $D(\alpha) = \frac{d \alpha}{dx}$. By factorising the expression $D^6 - Id$ show that $D^4+D^2+Id: P_5 \longrightarrow P_5$ is invertible and write down its inverse. I am not quite sure how I can show that $D^4+D^2+Id$ is invertible but I tried factorising the given expression in the following way: $$ \begin{align*} D^6 - Id &= D^6 - Id^6\\ &= (D-Id) (D^5 + D^4Id + D^3Id^2 + D^2Id^3 + DId^4 + Id^5)\\ &= (D-Id)(D^5 + D^4 + D^3 + D^2 + D + Id)\\ &= (D-Id) (D^3(D^2+D+Id) + D^2 + D+Id)\\ &= (D-Id) (D+Id) (D^4+D^2+Id)\\ &= (D^2 - Id) (D^4 + D^2 + Id)\end{align*} $$ and now I am not sure what to do next Thank you in advance!
In a ring $R$, $u \in R$ is invertible if $u v = v u = 1$, for some $v \in R$. Here $R$ is the ring (under composition) of $\mathbb{Q}$-linear operators on $P_5$, $u = D^4 + D^2$, and $1_R = Id$. Can you see what $D^6$ does to $P_5$? If so, you will see that you have found the $v$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
$n \cdot \text{lcm}(a,b,2016) = ab-2016$ Find the largest even number $n$ such that there exist positive integers $a,b$ with $n \cdot \text{lcm}(a,b,2016) = ab-2016$. I tried using the fact that $\text{lcm}(a,b,2016) \geq a,\text{lcm}(a,b,2016) \geq b,$ and $\text{lcm}(a,b,2016) \geq 2016$, but didn't see how to use this to solve the question. How should we approach it?
Clearly 2016 divides $ab$. Let $m=\mathrm{lcm}(a,b,2016)$; then $m\ge2016$ $\implies\ ab-2016=nm\ge2016n$ $\implies\ n\le\dfrac{ab}{2016}-1$ Hence $n=\dfrac{ab}{2016}-1$ to be as large as possible. Then $nm=ab-2016$ $\implies$ $m=2016$ So $a,b$ divide $m$ and $a,b\le m=2016$ $\implies$ $\dfrac{ab}{2016}\le2016$. Moreover, writing $2016=as$, we have $s\mid b$ (since $2016\mid ab$) and $\dfrac{ab}{2016}=\dfrac{b}{s}$, which divides $\dfrac{2016}{s}$ and hence divides $2016$. Also $n$ is even, so $\dfrac{ab}{2016}$ must be odd. The largest odd divisor of 2016 is 63. Hence we can take $\dfrac{ab}{2016}=63$ (e.g. $a=2016,b=63$) and so the largest even value of $n$ is $$\boxed{n=62}$$
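The exhibited extremal pair is easy to verify directly in Python (a minimal check of my own, using `math.lcm`, which accepts several arguments since Python 3.9):

```python
from math import lcm

a, b = 2016, 63
m = lcm(a, b, 2016)        # lcm(2016, 63, 2016) = 2016
n = (a * b - 2016) // m    # n * m = a*b - 2016
print(m, n)                # -> 2016 62
```

So $n=62$ is even and satisfies $n\cdot\mathrm{lcm}(a,b,2016)=ab-2016$ with $a=2016$, $b=63$.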
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
deriving relation for circle in complex plane Prove that $|z-z_1|²+|z-z_2|²=k$ will represent a circle if $|z_1-z_2|²\leq2k$ I tried using the concept of family of circles, but it didn't help me
Writing $z=x+yi$, $z_1=a_1+b_1i$, $z_2=a_2+b_2i$, the equation becomes $$(x-a_1)^2+(y-b_1)^2+(x-a_2)^2+(y-b_2)^2=k.$$ Expanding and dividing by $2$, $$x^2+y^2-(a_1+a_2)x-(b_1+b_2)y+\frac{|z_1|^2+|z_2|^2}{2}=\frac k2,$$ which is the equation of a circle centered at $\left(\frac{a_1+a_2}2,\frac{b_1+b_2}2\right)$ with $$r^2=\frac{|z_1+z_2|^2}{4}-\frac{|z_1|^2+|z_2|^2}{2}+\frac k2=\frac{2k-|z_1-z_2|^2}{4},$$ using the parallelogram identity $|z_1+z_2|^2+|z_1-z_2|^2=2|z_1|^2+2|z_2|^2$. This is a genuine circle precisely when $r^2\ge 0$, i.e. when $|z_1-z_2|^2\le 2k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to recognize that a Poisson Process that is split has two separate Poisson Process through viewing it as a filtration? I read in a paper that if we have a Poisson Process with rate $\lambda$, and that each arrival is separated into two separate categories, $A$ and $B$ with probability $p$ and $1-p$, then each of the two categories are themselves an independent Poisson Process with rate $\lambda p$ and $\lambda (1-p)$. However, the paper I read said that this is easily seen without proof as each of the two processes are just a filtration of the overall process. I am not sure why this is the case. Does anyone know?
Comment: It seems you are asking for intuitive arguments. Here are three relevant ones: 1) Suppose a radioactive source emits particles into a counter according to a Poisson process with rate $\lambda.$ Now a piece of lead foil is placed between the source and the counter that randomly absorbs half of the particles. Does it make sense to you that the counter now observes a Poisson process with rate $\frac{1}{2}\lambda\, ?$ 2) Suppose you have two small clay-like pieces of radioactive ore emitting particles into a counter at rates $p\lambda$ and $(1-p)\lambda$ respectively. Now you smash the two pieces into one. Does it make sense to you that you now have one larger piece of ore emitting particles into the counter at rate $\lambda\, ?$ 3) In a certain city stroke patients enter the local hospital ER at a Poisson rate $\lambda.$ When they arrive it turns out that strokes are randomly of two different kinds: A with probability $p$ and B with probability $(1-p)$. Upon diagnosis, type A stroke patients are sent to one department and type B patients to another. Does it make sense to you that the department receiving type A strokes observes admissions at Poisson rate $p\lambda\,?$
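The splitting property is also easy to see in simulation (a sketch of my own): generate a rate-$\lambda$ Poisson process via exponential inter-arrival times, independently mark each arrival type A with probability $p$, and compare the empirical rate of the A-stream with $p\lambda$.

```python
import random

random.seed(0)
lam, p, T = 2.0, 0.3, 200000.0

# Simulate a rate-lam Poisson process on [0, T] and thin it:
# each arrival is type A with probability p, independently.
t, count_A = 0.0, 0
while True:
    t += random.expovariate(lam)  # exponential inter-arrival time
    if t > T:
        break
    if random.random() < p:
        count_A += 1

rate_A = count_A / T
print(rate_A, p * lam)  # empirical rate of the A-stream vs p * lam
```

With $p\lambda T = 120000$ expected type-A arrivals, the empirical rate should be within a fraction of a percent of $p\lambda = 0.6$.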
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can an integral domain have an element that has no square root but has a square root in the field of fractions? A lemma states: Let $R$ be a UFD and $F=\operatorname{Frac}(R)$. Let $d\in R$, then equation $a^2=d$ has a root in $R$ iff it has a root in $F$. So I want to ask, is there a counterexample for this if $R$ is not a UFD? I only know some non-UFD integral domains like ring of integers for some values.
You already have a very good example, but since you mentioned rings of integers, and to put this in a more general context: A (full) ring of algebraic integers (a maximal order) can never work as a counter-example. The point is that those are (basically by definition) integrally closed. This means that every element of the fraction field of $R$ that is a root of a monic polynomial over $R$ is already in $R$. Since your equation corresponds to a particular type of monic polynomial having a root, the assertion holds for this one in particular. However, if you take subrings of rings of algebraic integers (non-maximal orders) you can get examples. For example, in $\mathbb{Z}[2\sqrt{2}]$ the equation $X^2 = 2$ has no solution. But $\sqrt{2}$ is of course in the quotient field. Let me end with an abstract argument for UFDs having the property you recalled: A UFD is integrally closed and thus it is also two-root closed (this is a somewhat common name for the property you give). Put differently, when looking for counter-examples you need to avoid integrally closed domains, since they all still have the property mentioned in your lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
General solution to $(\sqrt{3}-1)\cos x+(\sqrt{3}+1)\sin x=2$ $(\sqrt{3}-1)\cos x+(\sqrt{3}+1)\sin x=2$ is said to have a general solution of $x=2n\pi\pm\frac{\pi}{4}+\frac{\pi}{12}$. My Approach: Considering the equation as $$ a\cos x+b\sin x=\sqrt{a^2+b^2}\Big(\frac{a}{\sqrt{a^2+b^2}}\cos x+\frac{b}{\sqrt{a^2+b^2}}\sin x\Big)=\sqrt{a^2+b^2}\big(\sin y.\cos x+\cos y.\sin x\big)=\sqrt{a^2+b^2}.\sin(y+x)=2 $$ $\frac{a}{\sqrt{a^2+b^2}}=\sin y$ and $\frac{b}{\sqrt{a^2+b^2}}=\cos y$. $$ {\sqrt{a^2+b^2}}=\sqrt{8}=2\sqrt{2}\\\tan y=a/b=\frac{\sqrt{3}-1}{\sqrt{3}+1}=\frac{\frac{\sqrt{3}}{2}.\frac{1}{\sqrt{2}}-\frac{1}{2}.\frac{1}{\sqrt{2}}}{\frac{\sqrt{3}}{2}.\frac{1}{\sqrt{2}}+\frac{1}{2}.\frac{1}{\sqrt{2}}}=\frac{\sin(\pi/3-\pi/4)}{\sin(\pi/3+\pi/4)}=\frac{\sin(\pi/3-\pi/4)}{\cos(\pi/3-\pi/4)}=\tan(\pi/3-\pi/4)\implies y=\pi/3-\pi/4=\pi/12 $$ Substituting for $y$, $$ 2\sqrt{2}.\sin(\frac{\pi}{12}+x)=2\implies \sin(\frac{\pi}{12}+x)=\frac{1}{\sqrt{2}}=\sin{\frac{\pi}{4}}\\\implies \frac{\pi}{12}+x=n\pi+(-1)^n\frac{\pi}{4}\implies x=n\pi+(-1)^n\frac{\pi}{4}-\frac{\pi}{12} $$ What's going wrong with the approach ?
The standard approach for $a\cos \theta +b\sin \theta =c$: here $(\sqrt{3}-1)\cos \theta +(\sqrt{3}+1)\sin \theta =2$. Let $(\sqrt{3}-1) = r\cos \alpha$ and $(\sqrt{3}+1) =r\sin \alpha$. Then $r\cos \alpha \cos \theta + r\sin \alpha \sin \theta =2 \Rightarrow r\cos(\theta-\alpha) =2 \Rightarrow \cos(\theta-\alpha) =\frac{2}{r}$. Now, $r =\sqrt{(\sqrt{3}-1)^2 +(\sqrt{3}+1)^2} = \sqrt{8} =2\sqrt{2}$. Thus, $\cos(\theta-\alpha) =\frac{1}{\sqrt{2}} = \cos \frac{\pi}{4}$. Also, $\tan \alpha =\frac{\sqrt{3}+1}{\sqrt{3}-1} = \tan(\frac{\pi}{2}-\frac{\pi}{3} +\frac{\pi}{4}) \Rightarrow \alpha =\frac{5\pi}{12}$. Thus $(\theta-\alpha) =2n\pi \pm \frac{\pi}{4}$, giving $\theta = 2n\pi \pm \frac{\pi}{4} +\frac{5\pi}{12}$, i.e. $\theta\in\{\frac{\pi}{6},\,\frac{2\pi}{3}\}+2n\pi$. Note that nothing is going wrong with your approach: $n\pi+(-1)^n\frac{\pi}{4}-\frac{\pi}{12}$ describes exactly the same set (take $n$ even and odd separately), and the quoted answer's $\frac{\pi}{12}$ should read $\frac{5\pi}{12}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Nonsingularity and inverse of a matrix. I am having some trouble solving this question: Let $A,B,C,D$ be $n\times n$ matrices. Suppose $C$ and $A-BC^{-1}B^T$ are nonsingular. Show that the matrix mentioned is nonsingular and find its inverse \begin{bmatrix}A&B\\B^T&C\end{bmatrix} For nonsingularity $\det(M)$ is not $0$. Any help is appreciated :) Edit: I found that M is nonsingular, can anyone help me finding the inverse?
Suppose $A, B, C$, and $D$ are matrices of dimension $n × n$, $n × m$, $m × n$, and $m × m$, respectively. In general, when $D$ is invertible, we have the identity $${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(D)\det(A-BD^{-1}C).}$$ In your case the bottom-right block is $C$ and the off-diagonal blocks are $B$ and $B^{T}$, so $$\det\begin{pmatrix}A&B\\B^{T}&C\end{pmatrix}=\det(C)\det(A-BC^{-1}B^{T})\ne 0,$$ since both $C$ and $A-BC^{-1}B^{T}$ are nonsingular by hypothesis; hence the matrix is nonsingular. See more on the Wikipedia article here.
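For the inverse asked about in the question, the standard block (Schur-complement) formula — an assumption I'm adding, not stated in the answer above — with $S=A-BC^{-1}B^{T}$ is $$\begin{pmatrix}A&B\\B^{T}&C\end{pmatrix}^{-1}=\begin{pmatrix}S^{-1}&-S^{-1}BC^{-1}\\-C^{-1}B^{T}S^{-1}&\ C^{-1}+C^{-1}B^{T}S^{-1}BC^{-1}\end{pmatrix}.$$ A minimal Python sanity check of my own, in the simplest case where every block is $1\times 1$ (so $B^{T}=B$ and all products commute):

```python
# M = [[a, b], [b, c]] with scalar "blocks"; c != 0 plays the role of C.
a, b, c = 5.0, 2.0, 3.0
s = a - b * (1 / c) * b          # Schur complement S = A - B C^{-1} B^T
inv = [[1 / s,                  -(1 / s) * b * (1 / c)],
       [-(1 / c) * b * (1 / s),  1 / c + (1 / c) * b * (1 / s) * b * (1 / c)]]

# Multiply M by the candidate inverse; the result should be the identity.
M = [[a, b], [b, c]]
prod = [[sum(M[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # approximately [[1, 0], [0, 1]]
```

The same formula works verbatim when the entries are genuine matrix blocks, with the scalar reciprocals replaced by matrix inverses.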
{ "language": "en", "url": "https://math.stackexchange.com/questions/2053932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of recursive sequence defined by $a_0=0$, $a_{n+1}=\frac12\left(a_n+\sqrt{a_n^2+\frac{1}{4^n}}\right)$ Given the following sequence: $a_0=0$, $$a_{n+1}=\frac12\left(a_n+\sqrt{a_n^2+\frac{1}{4^n}}\right),\ \forall n\ge 0.$$ Find $\lim\limits_{n\to\infty}a_n$.
Hint:Take $a_n = \dfrac{\tan \theta_n}{2^n}$
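Following the hint (a sketch of my own): with $a_n=\tan(\theta_n)/2^n$ the recursion becomes $\theta_{n+1}=\theta_n/2+\pi/4$ with $\theta_0=0$, so $\theta_n\to\pi/2$ and the limit works out to $2/\pi$. Iterating the recursion numerically is consistent with this:

```python
import math

# a_0 = 0;  a_{n+1} = (a_n + sqrt(a_n^2 + 4^{-n})) / 2
a = 0.0
for n in range(60):
    a = 0.5 * (a + math.sqrt(a * a + 4.0 ** (-n)))
print(a, 2 / math.pi)  # both approximately 0.6366
```

The convergence is very fast, since the gap behaves like a constant times $4^{-n}$.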
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Which version of Rolle's theorem is correct? #According to my textbook: Rolle's theorem states that if a function $f$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$ such that $f(a) = f(b)$, then $f′(x) = 0$ for some $x$ with $a ≤ x ≤ b$. #According to Wikipedia: If a real-valued function $f$ is continuous on a proper closed interval $[a, b]$, differentiable on the open interval $(a, b)$, and $f(a) = f(b)$, then there exists at least one $c$ in the open interval (a, b) such that $f'(c)=0$. So one definition says that $c$ should belong in closed interval $[a,b]$ but the other says that $c$ should be in open interval $(a,b)$. Which definition is correct ? Why?
These are theorems, not definitions, and both of them are correct. Notice that if Wikipedia is correct, then your textbook is automatically correct as well: if there exists $c\in (a,b)$ such that $f'(c)=0$, then there also exists $x\in [a,b]$ such that $f'(x)=0$, since you can take $x=c$ (since $(a,b)$ is a subset of $[a,b]$). On the other hand, you can't (in any obvious way) deduce Wikipedia's statement from your textbook's, so Wikipedia's statement is stronger: it tells you more information. So you could say Wikipedia's statement is more useful or more powerful, and is "correct" in that you might as well use it instead of your textbook's version. As for which one is "correct" in the sense of being the "standard" statement of Rolle's theorem, I would say the Wikipedia version is probably more standard. But mathematical theorems quite often do not have universally accepted "standard" versions and instead have several different versions that are closely related but may be slightly different and all tend to be referred to with the same name. It's not like there's some committee of mathematicians who gets together and declares "this is the statement we will call Rolle's theorem"; everyone just refers to theorems independently and so there ends up being some minor variation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
$u_n \in L^1$ $u_n \to u$ in $L^1$ and $\int_0^1 u_n \ dt = 1$ $\implies$ $\int_0^1 u \ dt = 1$ Consider a sequence $u_n \in L^1([0,1])$. Suppose that $u_n \to u$ in $L^1([0,1])$. If $$\int_0^1 u_n \ dt = 1$$ $\forall n \in \mathbb{N}$ then why $$\int_0^1 u \ dt = 1?$$ I'm sure this follows from a very basic result, but I'm not able to figure it out right now.
Note that $u_n \to u$ means that $(u_n - u) \to 0$ in $L^1$, which is to say that $\int|u_n - u| \to 0$. We want to show that $\int u_n - \int u \to 0$. Note, however, that $$ \left| \int u_n - \int u\right| = \left| \int (u_n - u)\right| \leq \int |u_n - u| $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
9 points in a unit square with pairwise distances greater than $\frac12$ Consider a square with sides of length 1. Can we find 9 points in it such that the distance between every pair of points is strictly greater than $\frac12$? We allow the points to be on the boundary of the square. The question arose in a conversation, and we are pretty confident that there is no such arrangement, and moreover the only configuration with pairwise distances at least half is the one where we take the vertices, the midpoints and the center. Our thoughts However, proving this seems nontrivial. An equivalent formulation is to look at the $\frac14$ blowup of the square and ask whether we can fit 9 circles with radius $\frac14$ in it or not.  Moreover, the pigeonhole principle seems intuitively to be too weak for proving this, because if the shapes will be disjoint then each will have area around $\frac18$ and for, say, squares diameter half enforces area exactly $\frac18$, but we can't fit 8 such squares without gaps.
Assume there is a way for picking such $9$ points $P_1,\ldots,P_9$. Consider nine circles $\Gamma_i$ centered at $P_i$, with radius $\frac{1}{4}$. By our assumptions, these circles are disjoint and their union is contained in a square with side length $\frac{3}{2}=1+\frac{1}{4}+\frac{1}{4}$. The sum of the areas of our circles, $\frac{9\pi}{16}$, is less than the area of the augmented square, $\frac{9}{4}$, so the impossibility of such circle packing is not completely obvious. However, the optimal circle-packing-in-a-square is known for small numbers of circles, and that proves the impossibility of our arrangement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof that any plane perpendicular to x,y plane intersection with the bivariate normal distribution has the shape of a normal distribution. The bivariate normal distribution is given by the equation: $$f(x,y)=\frac{\exp\left(-\frac{1}{2(1-\rho)^2}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2-2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right)+\left(\frac{y-\mu_2}{\sigma_2}\right)^2\right]\right)}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}$$ for $-\infty<x<\infty$ and $-\infty<y<\infty$, where $\sigma_1>0$, $\sigma_2>0$, and $-1<p<1$. The normal distribution is given by the equation: $$f(x)=\frac{e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}}{\sigma\sqrt{2\pi}}$$ for $-\infty<x<\infty$ where $\sigma>0$. I am trying to show that if I take any plane that is perpendicular to the $x,y$ plane, it's intersection with the bivariate normal distribution is equivlent to $c\cdot f(x)$, where $f(x)$ is the normal distribution and $c$ is a real constant. This can also be written as showing the following statement to be true. $$\frac{\exp\left(-\frac{1}{2(1-\rho)^2}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2-2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{(mx+b)-\mu_2}{\sigma_2}\right)+\left(\frac{(mx+b)-\mu_2}{\sigma_2}\right)^2\right]\right)}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}=c\cdot \frac{e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}}{\sigma\sqrt{2\pi}}$$ or $$f(x,mx+b)=c\cdot f(x)$$ Where $m$ and $b$ are any real number. This does leave out all the planes parallel to the $x$ axis, although those will be trivial to show separately, after showing that $f(y,x)$ is also a bivariate normal distribution. I tried simplifying this but have made no progress so far, It's quite likely that I am going about this in completely the wrong way.
As you can see from another answer, it is possible to carry through your initial idea to get a proof. The one detail that I questioned was how we establish that the coefficient of $x^2$ in $$ \frac{1}{(1-\rho)^2}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2-2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{mx+b-\mu_2}{\sigma_2}\right)+\left(\frac{mx+b-\mu_2}{\sigma_2}\right)^2\right] $$ is positive (using $m$ and $b$ as defined in the question). In fact, that coefficient is $$ \alpha = \frac{1}{(1-\rho)^2} \left(\frac{1}{\sigma_1^2}-2\rho\frac{m}{\sigma_1\sigma_2}+\frac{m^2}{\sigma_2^2}\right) . $$ But $$ \frac{1}{\sigma_1^2}-2\rho\frac{m}{\sigma_1\sigma_2}+\frac{m^2}{\sigma_2^2} = \left(\frac{1}{\sigma_1}-\rho\frac{m}{\sigma_2}\right)^2 + (1 - \rho^2)\frac{m^2}{\sigma_2^2}, $$ and since $-1<\rho<1$ it follows that $1-\rho^2>0$ and also that $\alpha > 0.$ Note that for any $\rho$ such that $\lvert\rho\rvert>1$ it is possible to choose the other parameters so that $\alpha<0,$ and of course the entire polynomial is undefined for $\lvert\rho\rvert=1,$ so the fact that $-1<\rho<1$ is a necessary condition for this proof to go through. An alternative is a suitable substitution of variables for both $x$ and $y$ that transform the plane so that the distribution over the new variables is standard normal with covariance zero over the transformed variables and such that the equation of the line in the transformed variables has the form $y'=k.$ Such a substitution certainly exists, but it might involve translation (to eliminate the means), rotation (to eliminate the covariance), scaling by unequal factors along the transformed axes (to eliminate the variances), and then a final rotation to make the perpendicular plane in the question intersect the transformed coordinate plane in a "horizontal" line. 
This method is conceptually simple, and avoids the concern about signs that I had with the other approach, but you may find that actually working through the math of the other approach is easier for you.
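The completing-the-square identity used above for the sign of $\alpha$ is easy to spot-check numerically; the sketch below samples only the admissible range $|\rho|<1$ (variable names are mine):

```python
import random

random.seed(0)
max_err = 0.0
min_lhs = float("inf")
for _ in range(1000):
    s1 = random.uniform(0.1, 5.0)
    s2 = random.uniform(0.1, 5.0)
    rho = random.uniform(-0.99, 0.99)   # only |rho| < 1 is sampled
    m = random.uniform(-5.0, 5.0)
    # the quadratic coefficient (without the 1/(1-rho)^2 prefactor)
    lhs = 1/s1**2 - 2*rho*m/(s1*s2) + m**2/s2**2
    # its completed-square form from the answer
    rhs = (1/s1 - rho*m/s2)**2 + (1 - rho**2) * m**2 / s2**2
    max_err = max(max_err, abs(lhs - rhs))
    min_lhs = min(min_lhs, lhs)
```

Every sampled value of the coefficient stays strictly positive, as the argument requires.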
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$AB − BA = A$, $A$ is a nonzero matrix then $B$ is not nilpotent. Let $A,B$ $n\times n$ matrices such that $$AB-BA=A$$ and $A$ is a nonzero matrix. Prove that $B$ is not nilpotent. I know why $A$ is nilpotent, but how can I prove $B$ is not nilpotent?
Here's one way to see it: define the linear transformation $$ \Phi_B(X) = XB - BX $$ clearly, $A$ is an eigenvector of this transformation associated with $\lambda = 1$. Now, the eigenvalues of $\Phi_B$ are necessarily of the form $\lambda - \mu$ where $\lambda,\mu$ are eigenvalues of $B$ (can be seen via vectorization). Conclude that since $\Phi_B$ has a non-zero eigenvalue, $B$ has a non-zero eigenvalue, which means it is not nilpotent.
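A concrete sanity check may help. The pair $A$, $B$ below is a hypothetical example, not taken from the question, chosen so that $AB-BA=A$ holds; $B$ is idempotent rather than nilpotent:

```python
def matmul(X, Y):
    """Product of 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [0, 0]]   # nilpotent, as the question already notes
B = [[0, 0], [0, 1]]   # eigenvalues 0 and 1

commutator = matsub(matmul(A, B), matmul(B, A))   # equals A

# B is not nilpotent: here B is idempotent, so B^k = B for all k >= 1
Bk = B
for _ in range(10):
    Bk = matmul(Bk, B)
```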
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Negative Binomial (pascal) Distribution A PC has two color options, white and gray. A customer demands the white PC with a probability of 0.3. A seller of these PCs has three of each color in stock, although this is not known to the customers. Customers arrive and independently order these PCs. Find the probability that all of whites are ordered before all of the grays. My attempt If all whites are ordered before grays it means that the 6th order will always be gray Therefore I found the probability that the 6th demand would be gray X=number of trials R=number of R-th success Formula Combination of $$\binom{X-1}{R-1} \cdot p^R \cdot (1-p)^{X-R}$$ Applying formula: $$\binom{5}{2} \cdot 0.7^3\cdot 0.3^3=0.0926.$$ However the answer in the answer key is $0.16308.$ Can someone tell me where I went wrong, Thanks!
Let's distinguish some cases. * *First case: The white PC's are bought in the first 3 sales. The probability for this to happen is: $$p_1 = 0.3^3 = 0.027.$$ *Second case: The white PC's are bought in the first 4 sales. Notice that the last bought PC is white (otherwise, the "experiment" stops at the first 3 sales). The probability for this to happen is: $$ p_2 = \binom{3}{2} \cdot 0.3^3 \cdot 0.7 = 0.0567.$$ *Third case: The white PC's are bought in the first 5 sales. Notice again that the last bought PC must be white. Thus: $$ p_3 = \binom{4}{2} \cdot 0.3^3 \cdot 0.7^2 = 0.07938.$$ There is no other case. If we add up all the probabilities (because the events are mutually exclusive) we have that: $$p_1 + p_2 + p_3 = 0.16308.$$
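The case analysis above can be recomputed in a few lines; the binomial cross-check at the end (at least $3$ whites among the first $5$ orders) is an equivalent reformulation I add only for verification, not part of the argument above:

```python
from math import comb

p = 0.3   # probability a customer orders white

# the three mutually exclusive cases
p1 = p**3
p2 = comb(3, 2) * p**3 * (1 - p)
p3 = comb(4, 2) * p**3 * (1 - p)**2
total = p1 + p2 + p3

# cross-check: whites run out first iff at least 3 of the
# first 5 orders are white
alt = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(3, 6))
```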
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
68-95-99.7 Rule and Normal Distribution Question. I need help solving this question. It is on my final exam review. I was given the answer but i do not know how to solve it. 1) Not everyone pays the same price for the same model of a car. Suppose the price of a car is normally distributed. The mean price of a particular model of a new car is 22,000 and the standard deviation is 750. Use both the 68-95-99.7 Rule and Normal Distribution to find the percentages of buyers who paid more than 22,750? ANS: P(x>22,750) = 0.16 Normal Distribution P(x>22,750) = 0.1587
In addition to my comment above, you could standardize by obtaining the Z-Score and find the probability that way (where Mean = 0, Standard Deviation = 1) $$Z = \frac{X-\mu }{\sigma }$$ $$Z =\frac{22750-22000}{750}$$ $$Z=1$$ $$P(Z\geq1) = .1587$$ $$and$$ $$NormalCDF(22750,1000000,22000,750) = .1587$$
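Both numbers can be reproduced with the standard-normal tail probability written via the error function; this is a sketch, with the calculator's `NormalCDF` replaced by `math.erf`:

```python
from math import erf, sqrt

mu, sigma, x = 22000.0, 750.0, 22750.0

z = (x - mu) / sigma                         # standardize
p_exact = 0.5 * (1.0 - erf(z / sqrt(2.0)))   # P(Z >= z) for standard normal
p_rule = (1.0 - 0.68) / 2.0                  # 68-95-99.7 shortcut
```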
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
On solvable octic trinomials like $x^8-5x-5=0$ Solvable quintic trinomials $$x^5+ax+b=0$$ have been completely parameterized. Finding $6$th-deg versions is relatively easy to do such as, $$x^6+3x+3=0$$ which factors over $\sqrt{-3}$. No $7$th-deg are known, but surprisingly there are octic ones, such as the simple, $$x^8-5x-5=0$$ which factors over $\sqrt{5}$. And the not-so-simple ones, $$x^8-11(4x+3)=0\\x^8+16(4x+7)=0\\x^8 + 5\cdot23^2(12 x+43) =0$$ which factors over a quartic extension (and needs the cube root of unity). Q: Any other octic examples, if possible parametric? $\color{green}{Update:}$ Klajok in his answer below has found a family for the class of octic trinomials that factor over a quadratic extension. However, another class needs a quartic extension. For example, $$x^8-44x-33=0\tag1$$ which factors into four quadratics, $$x^2 + v x - (2v^3 - 7v^2 + 5v + 33)/13=0$$ and where $v$ is any root of $v^4 + 22v + 22=0$. More generally, eliminating $v$ between $$x^2 + v x + (pv^3 +qv^2 + rv + s)=0$$ $$v^4+av^2+bv+c=0$$ easily done by the resultant function of Mathematica will result in an irreducible but solvable octic and judicious choice of rational coefficients will yield a trinomial. However, it is not known if this second class of trinomials like $(1)$ has a parametric family as well.
A result of Harris [1] is that every monic palindromic polynomial of degree-8 can be factored into two monic palindromic polynomials of degree-4. $$ \begin{align} f(x) & = x^8 + ax^7 + bx^6 + cx^5 + dx^4 + cx^3 + bx^2 + ax + 1 \\ & = (x^4 + px^3 + qx^2 + px + 1)(x^4 + rx^3 + sx^2 + rx + 1) \\ & = x^8+x^7 (p+r)+x^6 (pr+q+s)+x^5 (p(s+1)+qr+r)+x^4 (2pr+qs+2)+x^3 (p(s+1)+qr+r)+ x^2 (pr+q+s)+x (p+r)+1 \end{align} $$ Equating the coefficients: $$ \begin{align} a &= p+r \\ b &= pr + q + s \\ c &= p(s+1) + qr + r \\ d &= 2pr + qs + 2 \end{align} $$ For the subset of degree-8 monic palindromic polynomials of the form $$ \begin{align} f(x) & = x^8 + 0x^7 + 0x^6 + 0x^5 + dx^4 + 0x^3 + 0x^2 + 0x + 1 \\ & = x^8 + dx^4 + 1 \\ \text{we have } 0 &= p+r \\ 0 &= pr + q + s \\ 0 &= p(s+1) + qr + r \\ d &= 2pr + qs + 2 \end{align} $$ We have the parametric solutions: $$ r = 0 ∧ s = -q ∧ p = 0 ∧ d = 2 - q^2, \\ r = \sqrt{2} \sqrt{q} ∧ s = q ∧ p = -\sqrt{2} \sqrt{q} ∧ d = q^2 - 4 q + 2, \\ r = -\sqrt{2} \sqrt{q} ∧ s = q ∧ p = \sqrt{2} \sqrt{q} ∧ d = q^2 - 4 q + 2. \\ $$ Note that all of the above are parametrized on $d$. On similar lines, we can factor the monic quartic palindromic polynomials into two monic quadratic palindromic polynomials each and then use the quadratic formula to get the roots. References [1]: J. R. Harris, "96.31 Palindromic Polynomials," The Mathematical Gazette, vol. 96, no. 536, p. 266–69, 2012. https://doi.org/10.1017/S0025557200004526
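As a numerical spot-check of the parametric solutions, one can pick the branch $p=-\sqrt{2}\sqrt{q}$, $r=\sqrt{2}\sqrt{q}$, $s=q$, $d=q^2-4q+2$ and compare both sides at random points:

```python
import random

random.seed(0)

def quartic(x, p, q):
    """x^4 + p x^3 + q x^2 + p x + 1, a monic palindromic quartic."""
    return x**4 + p*x**3 + q*x**2 + p*x + 1

max_err = 0.0
for _ in range(200):
    q = random.uniform(0.0, 5.0)
    r = (2.0 * q) ** 0.5
    p, s = -r, q
    d = q*q - 4.0*q + 2.0
    x = random.uniform(-2.0, 2.0)
    lhs = x**8 + d * x**4 + 1.0
    rhs = quartic(x, p, q) * quartic(x, r, s)
    max_err = max(max_err, abs(lhs - rhs))
```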
{ "language": "en", "url": "https://math.stackexchange.com/questions/2054945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 1, "answer_id": 0 }
A question about jointly continuous random variables. I have this question and I get the answer for it. But I really want to know how E(Y)=E(E(Y|X)) Here is the question and answer. Question: An observation X is taken uniformly from (0, 1). Then, let Y be an observation taken uniformly on (X, 1). Find E[Y]. My work: $$E[Y] = E[E[Y\mid X]] ~=~ E[(X+1)/2] = (1/2)(1/2+1) = 3/4$$
For jointly continuous random variables , $X,Y$ with joint, marginal, and conditional density functions: $f_{X,Y}(x,y), f_X(x), f_Y(y), f_{Y\mid X}(y\mid x), f_{X\mid Y}(x\mid y)$ . $$\begin{align}\mathsf E(\mathsf E(Y\mid X)) &= \int_\Bbb R f_X(x)\left(\int_\Bbb R y~f_{Y\mid X}(y\mid x)\operatorname d y\right)\operatorname d x \\[1ex] & = \iint_{\Bbb R^2} y\, f_{X,Y}(x,y)\operatorname d\, (x,y) \\[1ex] &= \int_\Bbb R y\, f_Y(y)\left(\int_\Bbb R f_{X\mid Y}(x\mid y)\operatorname d x\right)\operatorname dy \\[1ex] &=\int_\Bbb R y~f_Y(y)\operatorname d y \\[1ex] & = \mathsf E(Y) \\[2ex]\Box\qquad\quad\qquad \end{align}$$ So in particular, you have been given that $f_X(x)=\mathbf 1_{x\in[0;1]}$ and $f_{Y\mid X}(y\mid x) = \frac 1{1-x}\mathbf 1_{y\in[x;1], x\in[0;1]}$, then you could proceed as: $$\begin{align}\mathsf E(Y)~&=~ \iint_{\Bbb R^2} y f_{Y\mid X}(y\mid x)f_X(x)\operatorname d y\operatorname d x \\ &=~ \int_0^1\int_x^1 \frac y{1-x}\operatorname d y\operatorname d x \end{align}$$ However, to avoid the work of integration we instead use that $U\sim\mathcal U[a;b] \implies \mathsf E(U)=\frac{b+a}2$ , the above Law of Iterated Expectation (or Tower Property) and the Linearity of Expectation. $$\begin{align}\mathsf E(Y) ~&=~ \mathsf E(\mathsf E(Y\mid X)) \\[1ex]~&=~\mathsf E(\frac{1+X}{2}) \\[1ex]~&=~ \tfrac 12+\tfrac 12\mathsf E(X) \\[1ex]~&=~ \tfrac 34\\[1ex]\blacksquare\qquad&\end{align}$$
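A quick Monte Carlo check of $\mathsf E(Y)=3/4$, mirroring the two-stage sampling described in the question (the sample size and seed are arbitrary choices):

```python
import random

random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    x = random.random()                  # X ~ Uniform(0, 1)
    y = x + (1.0 - x) * random.random()  # Y | X = x  ~  Uniform(x, 1)
    total += y
estimate = total / n
```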
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding a maximum likelihood estimator when derivative of log-likelihood is invalid I need to find the maximum likelihood for $\theta$ given the following: $X_1, ..., X_n$ are sampled i.i.d from a population with the following density: $$ f(x | \theta) = \begin{cases} e^{-(x-\theta)} & x \geq \theta \\ 0 & \text{otherwise} \end{cases} \tag*{where $\theta > 0$} $$ I begin by writing the likelihood... $$ L(\theta; x_1, ..., x_n) = \prod_{i=1}^{n} e^{-(x_i-\theta)} = e^{n\theta - \sum_{i=1}^{n} x_i}\prod^n_{j=1}\mathbb{1}_{[\theta,\infty)}(x_j) $$ and the log likelihood... $$ \ell(\theta; x_1, ..., x_n) = \log e^{n\theta - \sum_{i=1}^{n} x_i} = n\theta - \sum_{i=1}^{n} x_i $$ and setting the derivative of the log likelihood to zero... \begin{align*} 0 &= \frac{\partial}{\partial \theta} \ell(\theta; x_1, ..., x_n) \\ 0 &= \frac{\partial}{\partial \theta} \big(n\theta - \sum_{i=1}^{n} x_i\big) \\ 0 &= n \ \ \ \ \text{(?)} \end{align*} That I where I get confused, given that the standard procedure for finding the MLE estimator does not seem to give a valid expression. Where am I going wrong? What is the appropriate method for finding the MLE estimator in this situation? It's clear that $L(\theta; x_1, ..., x_n) = 0$ where $\theta > \min\{x_1, ..., x_n\}$, but I'm not sure if/how this fact is useful.
As you say, your expressions for the likelihood and log-likelihood are only valid when $\theta$ is less than or equal to all the observed $x_i$; otherwise the likelihood is $0$ and the log-likelihood $-\infty$ Meanwhile, as your derivative suggests, your expressions for the likelihood and log-likelihood are strictly increasing functions of $\theta$ when they are valid, so you want $\theta$ to be as large as possible So the maximum likelihood and maximum log-likelihood both occur when $\displaystyle \theta = \min_i x_i$
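To see the boundary maximum concretely, here is a small sketch with made-up data; the log-likelihood climbs right up to $\min_i x_i$ and is $-\infty$ beyond it:

```python
xs = [2.3, 2.7, 2.05, 3.1, 2.4]   # hypothetical observations, min = 2.05

def log_likelihood(theta):
    # n*theta - sum(x_i) when theta <= min(x_i); likelihood is 0 otherwise
    if theta > min(xs):
        return float("-inf")
    return len(xs) * theta - sum(xs)

grid = [i / 1000 for i in range(0, 3001)]   # candidate thetas in [0, 3]
mle = max(grid, key=log_likelihood)         # largest feasible theta wins
```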
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all $Hom(\mathbb Z/m ,A) $, A- finite Abelian group My attempt: I want to prove that $Hom(\mathbb Z/m, A) \simeq A$ Let's build map $k$: $f \to f(1)$. It's a group homomorphism: $fg \to g(f(1)) = g(a) = g(1+1+..+1) = g(1)a=ba=f(1)g(1) $ $k$ is injective and surjective hence bijective.
A less confusing way to put the question is as follows: * *Question: Let $m \in \Bbb Z$ and $C_m$ the cyclic group of order $m$, $A$ a finite abelian group, describe the group of homomorphisms $f : C_m \rightarrow A$ with group composition defined by $(f*g)(x) = f(x)+g(x)$. *Answer: Let $a \in C_m$ be a generator of $C_m$ and $f$ any homomorphism, then $f$ is completely determined by the value $f(a)$. But since $a^m = e$ and $f(e)=0$ one must have that the (additive) order $f(a)$ is a divisor of $m$ so $\operatorname{Hom}(C_m,A) \cong G$, The subgroup of A consisting of elements with an order a divisor of $m$. *Examples: With $A = \Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_6 \times \Bbb Z_{30}$ and $m = 6$ we have $G = \Bbb Z_2 \times \Bbb Z_2 \times \Bbb Z_6 \times \Bbb Z_6$. On the other hand for $m = 5$ we have $G = \Bbb Z_5$. *Exercise: Calculate $G$ in the cases $m = 2,3,4$.
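The two examples can be checked by counting: in $\Bbb Z_n$ there are $\gcd(m,n)$ elements of order dividing $m$, and the counts multiply over the factors. A small sketch (the helper name is mine):

```python
from math import gcd

def count_homs(m, factors):
    """|Hom(C_m, A)| for A = Z_{n_1} x ... x Z_{n_r}: in Z_n there are
    gcd(m, n) elements of order dividing m, and counts multiply."""
    result = 1
    for n in factors:
        result *= gcd(m, n)
    return result

A = [2, 2, 6, 30]            # Z_2 x Z_2 x Z_6 x Z_30
homs_m6 = count_homs(6, A)   # |Z_2 x Z_2 x Z_6 x Z_6| = 144
homs_m5 = count_homs(5, A)   # |Z_5| = 5
```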
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why not learn the multi-variate chain rule in Calculus I? I am wondering why we don't learn the multi-variate chain rule in Calculus I? I know the name implies it is more suitable for multi-variable Calculus, but after learning it, I've found it very useful. Notably, one does not need to remember product rule or quotient rule or regular chain rule, and I don't think you would have to learn about logarithmic differentiation either. So with all these advantages, why don't we teach it?
I used to think this, too, until I taught Calculus I. If you, as a math student and enthusiast, like to see the product rule, etc., as special cases of the multivariate chain rule, then that is good for you and deepens your understanding. However, my experience has been that reasoning from the general to the specific doesn't always sink in to the novice learner. If the multivariate chain rule is mumbo-jumbo, nothing derived from it is understandable either. The median student in Calculus I struggles with the concept of function, has trouble working with more than two variables, and can't keep straight whether $\frac{1}{x}$ is the derivative of $\ln x$ or the other way around. I'm not trying to bash Calculus I students; only to recognize that they are in a different place mathematically than we are now, or even than we were when we first learned Calculus I. To reach them, we have to understand where their frontiers are and what is just beyond them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Let $a,b,c$ be the length of sides of a triangle then prove that $a^2b(a-b)+b^2c(b-c)+c^2a(c-a)\ge0$ Let $a,b,c$ be the length of sides of a triangle then prove that: $a^2b(a-b)+b^2c(b-c)+c^2a(c-a)\ge0$ Please help me!!!
Let $c=\max\{a,b,c\}$, $a=x+u$, $b=x+v$ and $c=x+u+v$, where $x>0$ and $u\geq0$, $v\geq0$. Hence, $\sum\limits_{cyc}(a^3b-a^2b^2)=(u^2-uv+v^2)x^2+(u^3+2u^2v-uv^2+v^3)x+2u^3v\geq0$. Done!
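The expansion and its nonnegativity can be spot-checked numerically over random admissible $(x,u,v)$:

```python
import random

random.seed(0)

def lhs(a, b, c):
    return a**2*b*(a - b) + b**2*c*(b - c) + c**2*a*(c - a)

def rhs(x, u, v):
    # the expansion claimed above
    return ((u**2 - u*v + v**2) * x**2
            + (u**3 + 2*u**2*v - u*v**2 + v**3) * x
            + 2 * u**3 * v)

max_err = 0.0
min_rhs = float("inf")
for _ in range(1000):
    x = random.uniform(0.001, 10.0)   # x > 0
    u = random.uniform(0.0, 10.0)     # u >= 0
    v = random.uniform(0.0, 10.0)     # v >= 0
    a, b, c = x + u, x + v, x + u + v
    max_err = max(max_err, abs(lhs(a, b, c) - rhs(x, u, v)))
    min_rhs = min(min_rhs, rhs(x, u, v))
```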
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Use Mathematical Induction to prove equation? Use mathematical induction to prove each of the following statements. let $$g(n) = 1^3 + 2^3 + 3^3 + ... + n^3$$ Show that the function $$g(n)= \frac{n^2(n+1)^2}{4}$$ for all n in N so the base case is just g(1) right? so the answer for the base case is 1, because 4/4 = 1 then for g(2) is it replace all of the n's with n + 1 and see if there is a concrete answer?
First, show that this is true for $n=1$: $\sum\limits_{k=1}^{1}k^3=\frac{1^2(1+1)^2}{4}$ Second, assume that this is true for $n$: $\sum\limits_{k=1}^{n}k^3=\frac{n^2(n+1)^2}{4}$ Third, prove that this is true for $n+1$: $\sum\limits_{k=1}^{n+1}k^3=$ $\color\red{\sum\limits_{k=1}^{n}k^3}+(n+1)^3=$ $\color\red{\frac{n^2(n+1)^2}{4}}+(n+1)^3=$ $\frac{(n+1)^2(n+1+1)^2}{4}$ Please note that the assumption is used only in the part marked red.
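Before (or after) the induction, the closed form is easy to spot-check for small $n$:

```python
# sum of the first n cubes equals n^2 (n+1)^2 / 4 (exact integer arithmetic)
identity_ok = all(
    sum(k**3 for k in range(1, n + 1)) == n**2 * (n + 1)**2 // 4
    for n in range(1, 51))
```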
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
How can I show that $4^{1536} - 9^{4824}$ can be divided by $35$ without remainder? How can I show that $4^{1536} - 9^{4824}$ can be divided by $35$ without remainder? I'm not even sure how to begin solving this, any hints are welcomed! $$(4^{1536} - 9^{4824}) \pmod{35} = 0$$
Euler's theorem implies, since $\varphi(35)=24$, that $$4^{24}\equiv 9^{24}\equiv 1\pmod {35}$$ Since $1536$ and $4824$ are multiples of $24$, the conclusion follows: $$4^{1536}-9^{4824}=(4^{24})^{64}-(9^{24})^{201}\equiv1^{64}-1^{201}\equiv 0\pmod{35}$$
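Python's built-in three-argument `pow` confirms both congruences instantly:

```python
phi = (5 - 1) * (7 - 1)     # Euler's totient of 35 = 5 * 7
exponents_ok = (1536 % phi == 0) and (4824 % phi == 0)

a = pow(4, 1536, 35)        # fast modular exponentiation
b = pow(9, 4824, 35)
```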
{ "language": "en", "url": "https://math.stackexchange.com/questions/2055833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Convergence and absolute convergence of $\sum_{n=1}^{\infty} = {(-1)^n \over n + (-1)^{n-1}}$ I am trying to conclude about the convergence and absolute convergence of $$\sum_{n=1}^{\infty} = {(-1)^n \over n + (-1)^{n-1}}$$ For absolute convergence, we can note that $$\lvert a_n \rvert = {1 \over n + (-1)^{n-1}}$$ $$\sum_{n=1}^{\infty} = {1 \over n + (-1)^{n-1}} = {1 \over 2} + 1 + {1 \over 4} + {1 \over 3} + {1 \over 6} + {1 \over 5} + \dots$$ We can see that this is a harmonic series with the terms rearranged. The sequence of partial sums will be strictly monotonic and for even numbers the terms will be equal to the terms of the sequence of partial sums of the harmonic series. This means that the series doesn't converge, so we have no absolute convergence. Now, how can we conclude about convergence? ${1 \over n + (-1)^{n-1}}$ is not monotonic, so the tests I have covered so far (Leibniz, Dirichlet and Abel) are not applicable.
If we just study the $2N$-th partial sum $$\sum_{n=1}^{2N}\frac{(-1)^n}{n+(-1)^{n-1}} = \sum_{k=1}^{N}\frac{1}{2k-1}-\sum_{k=1}^{N}\frac{1}{2k}=\sum_{k=1}^{N}\frac{1}{2k(2k-1)} $$ we trivially have that our series is conditionally convergent and $$ \sum_{n\geq 1}\frac{(-1)^n}{n+(-1)^{n-1}} = \color{blue}{\log 2}.$$
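Numerically, the telescoped partial sums do settle at $\log 2$ (the cutoff $N$ is an arbitrary choice):

```python
from math import log

N = 100_000
partial = sum(1.0 / (2*k * (2*k - 1)) for k in range(1, N + 1))
err = abs(partial - log(2.0))   # tail is roughly 1/(4N)
```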
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why does $A^2=0$ imply that the column space is a subset of the null space So we have a matrix n by n matrix $A$ such that $A^2 =0$. This means that $A^2 = [Aa_1 \space Aa_2 \space \dots Aa_n] = 0$, so $Aa_1 = \dots = Aa_n = 0$. But why does this imply that col(A) $\subset$ null(A)? The column space is the space spanned by linear combinations of the columns of $A$. I don't see how $Aa_1 \dots Aa_n$ are all possible linear combinations of the columns of $A$.
Here's an intuitive explanation that perhaps you can make precise. A matrix is a linear function on a vector space, and $A^2$ represents composing that function with itself. Now the column space of this matrix is essentially the image of the function, as it is the span of vectors you can get out. Now we apply $A$ once and get some linear subspace of the ambient vector space. Suppose we feed this into $A$ itself, and we get $0$. This means that everything that came out of $A$ the first time belongs to the null-space of $A$, as $A$ has taken them to $0$ on the second application.
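A concrete illustration with a hypothetical $2\times 2$ matrix satisfying $A^2=0$:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, -4], [1, -2]]      # a hypothetical matrix with A^2 = 0
A2 = matmul(A, A)

# feeding each column of A back into A gives the zero vector,
# i.e. col(A) is contained in null(A)
cols_killed = all(
    sum(A[i][k] * A[k][j] for k in range(2)) == 0
    for j in range(2) for i in range(2))
```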
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Curious limits with tanh and sin These two limits can be easily solved by using De l'Hopital Rule multiple times (I think), but I suspect that there could be an easier way... Is there? \begin{gather} \lim_{x\to 0} \frac{\tanh^2 x - \sin^2 x}{x^4} \\ \lim_{x\to 0} \frac{\sinh^2 x - \tan^2 x}{x^4} \end{gather} Thanks for your attention!
From the standard Taylor series expansions, as $x \to 0$, $$ \begin{align} \sin x&=x-\frac{x^3}{6}+\frac{x^5}{120}+O(x^6) \\\tanh x&=x-\frac{x^3}{3}+\frac{2 x^5}{15}+O(x^6) \end{align} $$ ones gets $$ \begin{align} \left(\sin x\right)^2&=x^2-\frac{x^4}{3}+O(x^6) \\\left(\tanh x\right)^2&=x^2-\frac{2 x^4}{3}+O(x^6) \end{align} $$ giving, as $x \to 0$, $$ \frac{\left(\tanh x\right)^2-\left(\sin x\right)^2}{x^4}=\frac{-\frac{x^4}{3}+O(x^6)}{x^4}=-\frac13+O(x^2). $$
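The limit $-1/3$ is visible numerically as $x$ shrinks:

```python
from math import tanh, sin

def ratio(x):
    return (tanh(x)**2 - sin(x)**2) / x**4

r1 = ratio(0.1)     # deviation from -1/3 is O(x^2)
r2 = ratio(0.001)
```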
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
$u$ upper semicontinuous iff for all $K \subseteq U$ compact and $g \in C(K)$, $u - g$ attains its maximum on $K$? As the question title suggests, how do I see that $u$ is upper semicontinuous if and only if, for all $K \subseteq U$ compact and $g \in C(K)$, the difference $u - g$ attains its maximum on $K$?
Here's the first half to get you started... A function $u$ on $U$ is upper semicontinuous provided that for all $x \in U$, and for all sequences $(x_n)_{n=1}^{\infty}\subset U$ such that $x_n \rightarrow x$, $$\limsup u(x_n) \leq u(x)$$ I make the assumption that you're considering real-valued functions on a metric space. ($\implies$) First we show that $u$ is bounded above on $K$. Assume $u$ is not bounded above. Then there exists a sequence $(x_n) \subset K$ such that $u(x_{n+1})>u(x_n)+1$ for each $n$. As $K$ is compact, there exists a convergent subsequence $(x_{n_{k}}) \rightarrow x \in K$, and as $u$ is upper semicontinuous, $\limsup u(x_{n_k}) \leq u(x)$. But this would imply $u(x)=\infty$, so $u$ is bounded above. As $u(K)$ is bounded, it has a least upper bound $M$, meaning we can construct a sequence $(x_n)$ such that $u(x_n) > M - 1/n$. Again, as $K$ is compact we take a convergent subsequence $x_{n_k}\rightarrow x\in K$ and note that for each $k$ we have $$M-1/n_k<u(x_{n_k})\leq M$$ so $\limsup u(x_{n_k}) = M$, and upper semicontinuity gives $M = \limsup u(x_{n_k}) \leq u(x) \leq M$, hence $u(x)=M$. As $g$ is a continuous function we have $\lim g(x_{n_k}) = g(x)$, so $\limsup (u-g)(x_n) \leq (u-g)(x)$ and $u-g$ is upper semicontinuous. We can then apply the above argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that if lim f(x) = L1 and lim g(x) = L2, then lim (f(x))^(g(x)) = L1^L2 I am trying to prove that if $$ \lim_{x \to c} (f(x)) = L_1 \\ \lim_{x \to c} (g(x)) = L_2 \\ L_1, L_2 \geq 0 $$ Then $$ \lim_{x \to c} f(x)^{g(x)} = (L_1)^{L_2} $$ I am doing this for fun, and my prof said that it shouldn't be too hard, but all I got so far is $$ \forall \epsilon >0 \ \exists \delta > 0 : \text{if}\ \ |x-c|<\delta\ \ \ \text{then}\ |P(x)-L|<\epsilon \\ |f(x)^{g(x)} - (L_1)^{L_2}| < \epsilon $$ I have no idea how to proceed. Can someone help me out? I started by defining h(x) as $$(f(x))^{(g(x))}$$ but I couldn't go anywhere with that without basically defining the limit of h(x) as x approaches c to be L1^L2
It is often convenient to write $0^0=1,$ for example, in "Let $p(x)=\sum_{j=0}^n a_jx^j$ " it is assumed that $a_0x^0=a_0$ when $x=0.$ But if $L_1=L_2=0$ then $f(x)^{g(x)}$ can converge to any non-negative value, or fail to converge. Examples: Let $c=0:$ (1). Let $f_1(x)=1/e^{1/|x|}$ for $x\ne 0$ and $g_1(x)=|x|.$ Then $f_1(x)^{g_1(x)}=1/e$ for all $x\ne 0.$ (2). Let $f_2(x)=g_2(x)=|x|$ for $x\ne 0.$ Put $|x|=1/y.$ Then $y\to \infty$ as $x\to 0,$ and $f_2(x)^{g_2(x)}=1/(y^{1/y})=\exp (-(\log y)/y).$ Now $(\log y)/y \to 0$ as $y\to \infty$ so $f_2(x)^{g_2(x)}\to 1$ as $x\to 0.$ (3). From examples (1) and (2), let $f_3(x)=f_1(x)$ when $1/x\in \mathbb Z$ and $f_3(x)=f_2(x)$ when $1/x \not \in \mathbb Z.$ Let $g_3(x)=g_1(x)=g_2(x).$ Then $f_3(x)^{g_3(x)}$ does not converge as $x\to 0$. The main result is valid for $L_1>0.$ Because $\log f(x)^{g(x)}=g(x)\log f(x)$ whenever $|x-c|$ is small enough, and $\log f(x)$ will converge to $\log L_1.$ So $\log f(x)^{g(x)}$ will converge to $L_2\log L_1.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Factor $64x^9 - 125y^6$ I'm trying to help my daughter with Algebra 2. This is a homework assignment. I've done a fair amount of searching but I just can't figure this out. Mathway, an online tool gave the answer: $(4x^3 - 5y^2) (16x^6 + 20x^3y^2 + 25y^4)$ I can pattern match a bit but can't figure out what's going on. Would someone be willing to provide a link or explain what sort of procedure to follow to factor this expression?
Way to do this: $a^3-b^3=(a-b)(a^2+ab+b^2)$ $64x^9 - 125y^6 = (4x^3)^3 - (5y^2)^3$ $= (4x^3 - 5y^2) [(4x^3)^2 + 4x^3 * 5y^2 + (5y^2)^2]$ $= (4x^3 - 5y^2) (16x^6 + 20x^3y^2 + 25y^4)$
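The factorization can be sanity-checked by plugging in random values for $x$ and $y$:

```python
import random

random.seed(0)
max_err = 0.0
for _ in range(100):
    x = random.uniform(-2.0, 2.0)
    y = random.uniform(-2.0, 2.0)
    left = 64 * x**9 - 125 * y**6
    right = (4*x**3 - 5*y**2) * (16*x**6 + 20*x**3*y**2 + 25*y**4)
    max_err = max(max_err, abs(left - right))
```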
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to find Generator Matrix from a given Parity Check Matrix? I'm given a Parity Check Matrix \begin{bmatrix}0&1&1&1&1&0&0\\1&0&1&1&0&1&0\\ 1&1&0&1&0&0&1\end{bmatrix} and I have to find the Generator Matrix of it. I spent many days trying to solve it but I can't.
This parity-check matrix is in the standard form $$[P^T|I_{n-k}]$$ The generator matrix is hence given by $$G=[I_k|P]=\begin{bmatrix} 1 &0& 0& 0& 0& 1& 1\\ 0& 1& 0& 0& 1& 0& 1\\0& 0& 1& 0& 1& 1& 0\\ 0& 0& 0& 1& 1& 1& 1 \end{bmatrix}$$ You can verify that $GH^T=0$
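The closing check $GH^T=0$ over $\mathrm{GF}(2)$ takes only a few lines:

```python
H = [[0,1,1,1,1,0,0],
     [1,0,1,1,0,1,0],
     [1,1,0,1,0,0,1]]

G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]

# G * H^T over GF(2): every entry should be 0
GHt = [[sum(g[j] * h[j] for j in range(7)) % 2 for h in H] for g in G]
all_zero = all(v == 0 for row in GHt for v in row)
```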
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that $a^2 \equiv a \mod (1+i)$ for all $a \in \mathbb{Z}[i]$. This is a problem statement from a section on quotient rings in Abstract Algebra, so I assume it requires the use of the FIT for rings. When looking around for similar problems, I was only able to find examples with number theory, which isn't really what I'm looking for. My main question here is: what does $\mathbb{Z}[i]$ mean? I'm familiar with the notation $\mathbb{Z}(i) = \{a+bi : a,b\in\mathbb{Z}\}$ and $\mathbb{Z}[X]$ (a polynomial ring), but never have I seen $\mathbb{Z}[i]$. I think that I'll be able to figure this out knowing that, but I still would very much appreciate a hint on how to start. I assume you can just show that $a^2 - a \equiv a(a - 1) \equiv 0 \mod (1 + i)$, but I'm not really sure where to go from there.
Note that $ 2 = (1+i)(1-i) \equiv 0 \pmod{1+i} $ and any $ a + bi \in \mathbf Z[i] $ with $ a, b \in \mathbf Z $ is congruent to an integer modulo $ 1 + i $, thus any element of $ \mathbf Z[i] $ is congruent to either $ 0 $ or $ 1 $ modulo $ 1+i $.
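A brute-force check over a grid of Gaussian integers (helper names are mine); it uses $(a+bi)/(1+i)=\frac{(a+b)+(b-a)i}{2}$:

```python
def sq_minus_self(a, b):
    """Real and imaginary parts of (a+bi)^2 - (a+bi)."""
    return a*a - b*b - a, 2*a*b - b

def divisible_by_one_plus_i(re, im):
    # (re + im*i)/(1+i) = ((re+im) + (im-re)i)/2 is a Gaussian
    # integer exactly when both numerators are even
    return (re + im) % 2 == 0 and (im - re) % 2 == 0

ok = all(divisible_by_one_plus_i(*sq_minus_self(a, b))
         for a in range(-20, 21) for b in range(-20, 21))
```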
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
$∃x¬(\varphi ∨ \psi) → ∃x(¬\varphi ∨ ¬\psi)$ and $∃y(\varphi ∧ \psi) → (∀x$ $\varphi ∧ ∀y$ $\psi)$ For each of the following formulas, indicate whether or not it is a theorem of first-order logic, whatever the formulas $\varphi$ and $\psi$. Justify by exhibiting a natural deduction of the corresponding formula, or by indicating a language $L$, formulas $\varphi$ and $\psi$ of $L$, and a structure $A=(A,.^A)$ for $L$ such that $A$ is not a model of the corresponding formula. * *$∃x¬(\varphi ∨ \psi) → ∃x(¬\varphi ∨ ¬\psi)$ *$∃y(\varphi ∧ \psi) → (∀y$ $\varphi ∧ ∀y$ $\psi)$
Hint 1st) Consider that $\lnot (\varphi \lor \psi)$ is equivalent to $\lnot \varphi \land \lnot \psi$. 2nd) Consider : "there exists a number that is $=0$ and $\ge 0$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Big-O of recursive function Let $f:\mathbb{Z}_+ \to \mathbb{Z}_+$ be the function defined by $f(k)=3f(k-1)+2$ for any $k \in \mathbb{Z}_+$. Prove that $f(n)$ is $O(6^n)$. How do I prove it with mathematical induction?
Let $g(k) = f(k)+1$, then $g(k) = 3 g(k-1)$ is a geometric sequence, hence $g(k)=3^k g(0)$ and $f(k) = 3^k (f(0)+1)-1$.
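Since the question's $f$ is defined on $\mathbb Z_+$, one can anchor the same substitution at $f(1)$ instead, giving $f(k)=3^{k-1}(f(1)+1)-1$; a quick check of this closed form and of an explicit $O(6^n)$ witness (the seed value below is arbitrary):

```python
def f(k, f1):
    """Iterate f(k) = 3*f(k-1) + 2 down from f(1) = f1."""
    val = f1
    for _ in range(k - 1):
        val = 3 * val + 2
    return val

f1 = 1
closed_ok = all(f(k, f1) == 3**(k - 1) * (f1 + 1) - 1 for k in range(1, 20))
# f(k) <= (f1 + 1) * 3^k <= (f1 + 1) * 6^k, a concrete big-O witness
bound_ok = all(f(k, f1) <= (f1 + 1) * 6**k for k in range(1, 20))
```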
{ "language": "en", "url": "https://math.stackexchange.com/questions/2056993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does there exists a finite abelian group $G$ containing exactly $60$ elements of order $2$? Suppose there exists such a group. Then Lagrange's theorem assures that the group is of even order. But I conclude from this and this that such a group has odd number of elements of order $2$. Giving us contradiction. Hence there does not exist a finite abelian group $G$ containing exactly $60$ elements of order $2$. More strongly there does not exist a finite group $G$ containing even number of elements of order $2$. Is my understanding correct?
Clearly, the order of $G$ cannot be odd, by Lagrange's theorem. Suppose the order of $G$ is $2n$. Since the identity always has order $1$, i.e. $|e|=1$, we are left with $2n-1$ elements, an odd number. The elements which are not their own inverses pair off with their inverses, so they are even in number. Since $2n-1$ is odd, at least one element must be its own inverse; in fact, the self-inverse non-identity elements, which are exactly the elements of order $2$, must be odd in number. But we are given $60$ elements of order $2$, an even number, which is absurd.
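For products of cyclic groups the parity claim can be verified by brute force (the helper is mine; every even-order example tried gives an odd count):

```python
from itertools import product

def order_two_count(factors):
    """Elements of order 2 in Z_{n_1} x ... x Z_{n_r}, by brute force."""
    count = 0
    for elem in product(*(range(n) for n in factors)):
        if any(elem) and all((2 * e) % n == 0 for e, n in zip(elem, factors)):
            count += 1
    return count

groups = [[2], [4], [2, 2], [2, 4], [2, 2, 2], [6, 10], [2, 2, 6]]
counts = [order_two_count(g) for g in groups]
```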
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Limit of a series with upper bound in the summand? I have constructed a model of drug dosing, and to find the maximum quantity of drug in the body after an infinite number of doses, I believe I must compute this limit: $$\lim_{n \to \infty} D \displaystyle\sum_{i=1}^{n} e^{-(n-i)k\Delta t}$$ where $D, k, \text{and } \Delta t$ are constants. (The fixed dose amount, rate constant of idealized metabolism, and fixed interval between doses in my model.) I'm aware it can be reworked into the classic indeterminate forms $\frac{\infty} {\infty}$ and $0\times\infty$. I've played around with it a lot, and at one point concluded it equals $D/{(1-e^{-k\Delta t})}$, but I think there was a flaw in my derivation. In general, how does one evaluate the following expression? $$\lim_{n \to \infty} \displaystyle\sum_{i=a}^{n} f(n-i)$$ (Assuming the summand function is of a form that will allow convergence, such as my $e^{-c(n-i)}$.) Thank you for your wisdom!
Your sum is equivalent to $$\lim_{n\to \infty} De^{-nk\Delta t}\sum_{i=1}^{n} (e^{k\Delta t})^i$$ Which is a geometric series. $$\lim_{n\to \infty} De^{-nk\Delta t}\sum_{i=1}^{n} (e^{k\Delta t})^i=\lim_{n\to \infty} De^{-nk\Delta t} \frac{e^{k\Delta t}(e^{k\Delta t n}-1)}{e^{k\Delta t}-1}$$ Carry out the multiplication to get $$\lim_{n\to \infty} D\frac{e^{k\Delta t}(1-e^{-nk\Delta t})}{e^{k\Delta t}-1}$$ Since $$\lim_{x\to \infty}e^{-x}=0$$ You get $$D\frac{e^{k\Delta t}}{e^{k\Delta t}-1}$$ Note that $\frac{e^{k\Delta t}}{e^{k\Delta t}-1}=\frac{1}{1-e^{-k\Delta t}}$, so the value $D/(1-e^{-k\Delta t})$ you originally derived was in fact correct.
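Numerically the partial sums converge to the closed form; the constants below are arbitrary illustrations:

```python
from math import exp

D, k, dt = 100.0, 0.2, 6.0   # illustrative constants only

def partial(n):
    return D * sum(exp(-(n - i) * k * dt) for i in range(1, n + 1))

closed = D * exp(k * dt) / (exp(k * dt) - 1.0)
equiv = D / (1.0 - exp(-k * dt))   # the form guessed in the question
gap = abs(partial(200) - closed)
```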
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Alternatives to the politician theorem A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R and S are invited to a birthday party. We are told that: * *each pair of guests have exactly one common friend *A and G only have one common friend: C *A and G are friends *I and C only have one common friend: A How many friends does the most popular (in terms of friends) guest have ? Invoking the politician/friendship theorem: it is straightforward that A has the most friends (18), as he is the common friend to all the guests. My question is: can anyone think of an alternative solution to this problem ?
I had never encountered this surprising theorem - thanks for posting. In order to give a concrete example, the question adds some extra conditions. Under these $A$ has at least three friends: $G$ (by the second condition) and $I$ & $C$ (by the third condition). Therefore it is $A$ who is the politician. Note that this implication was invalid until Alex Ravsky added the second condition. Also the first condition ($C$ the common friend of $A$ & $G$) is redundant. Shorn of these conditions, the ask is for alternative proofs from the one in "The Book in which God keeps the most elegant proof of each mathematical theorem". That's a fair question but a tall ask. Erdős & spectral analysis have both proved very powerful in graph theory, and we are asked to turn our backs on both... Here's a different approach based on triangles. It doesn't lead to the full result yet, but I will keep playing with it, and maybe it will inspire someone else. With no assumptions about regularity for now, for any vertex $A$ with degree $d_A$, there are $C(d_A,2)$ unordered pairs of adjacent vertices. We are going to triage each of the $\sum _{A \in G} C(d_A,2)$ cases. Suppose $B$ & $C$ are two vertices adjacent to $A$. There are two possibilities: (1) If $B$ & $C$ are non-adjacent, then as $A$ is the "sole mutual friend" of $B$ & $C$, there is a one-to-one-correspondence to the set of non-edges in $G$, of which there are $C(n,2) - m$ items. (2) If $B$ & $C$ are adjacent then, with $A$, the three vertices form a triangle $ K_3$. Because each vertex is the "sole mutual friend" of the other two, each edge in $G$ belongs to exactly one such triangle. And each such triangle is discovered exactly three times in our examination, depending on which of the three vertices is discovered first. So if $m$ is the number of edges in $G$, there are hence $3 \times m/3$ = $m$ items to be counted here. Adding these two populations: $m + C(n,2) - m = C(n,2)$ So $\sum_{A \in G} C(d_A,2) = C(n,2)$. 
Call the displayed identity above Equation (*). If we require also that $G$ is $k$-regular, then we derive $n \times C(k,2) = C(n,2)$, i.e. $n = k^2 - k + 1$, which is indeed an equation from The Book, so we are on the right track. Equation (*) is satisfied happily by all the actual (windmill) solutions: if one "politician" vertex $A$ has degree $2p$ and the other $2p$ vertices have degree $2$, for any $p \ge 0$, then $n = 2p+1$, and $C(2p,2) + 2p \times C(2,2) = 2p\times (2p-1)/2 + 2p \times 1 = p(2p-1)+2p = p(2p+1) = C(n,2)$.
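As a numerical sanity check (an editorial addition, not part of the original answer), the sketch below builds the windmill graphs in Python and confirms both the friendship condition and the counting identity $\sum_{A} C(d_A,2) = C(n,2)$ on them:

```python
from itertools import combinations
from math import comb

def windmill(p):
    """Windmill graph: p triangles sharing one central vertex 0.
    Outer vertices 1..2p come in pairs (2i-1, 2i); each pair is
    joined to each other and to the centre."""
    n = 2 * p + 1
    edges = set()
    for i in range(1, p + 1):
        a, b = 2 * i - 1, 2 * i
        edges |= {(0, a), (0, b), (a, b)}
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

results = []
for p in range(1, 6):
    adj = windmill(p)
    n = len(adj)
    # friendship condition: every pair of vertices has exactly one common neighbour
    friendship = all(len(adj[u] & adj[v]) == 1
                     for u, v in combinations(range(n), 2))
    # the counting identity: sum over vertices of C(deg, 2) equals C(n, 2)
    identity = sum(comb(len(adj[v]), 2) for v in range(n)) == comb(n, 2)
    results.append((p, friendship, identity))
print(results)
```

The triangle argument above shows the identity holds for every graph satisfying the friendship condition, not just the windmills checked here.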
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Isomorphic groups but not isomorphic rings Provide an example of two rings that have the same characteristic, are isomorphic as groups but are not isomorphic as rings. I'm confused about how to begin. I know that having the same characteristic means that the smallest positive number of times an element must be added to itself to reach the zero element is the same in both rings.
The group isomorphism refers to the additive structure. Let $R$ be any ring. We can define two ring structures on the set $R\times R$: the addition is the same, so the two additive groups are not only isomorphic, but identical: $$ (a,b)+(c,d)=(a+c,b+d) $$ We can define two different multiplications: $$ (a,b)\cdot(c,d)=(ac,bd) $$ and $$ (a,b)*(c,d)=(ac,ad+bc) $$ It's not difficult to show that $(R\times R,+,\cdot)$ and $(R\times R,+,*)$ are rings. Can you find the characteristic of them and a case where the two rings are not isomorphic?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 6 }
Find all positive integer solutions M Find all positive integer solutions M, where $x,y,z$ are non-negative integers, from the equations $x + y + z = 94$ $4x + 10y + 19z = M$ Attempt: I multiplied equation 1 by 4, and subtracted it from equation 2 to get $6y + 15z = M-376$ I know $M-376$ has to be a multiple of 3, and $376 \le M \le 1786$ if we set $(y,z)$ to be $(0,0)$ and $(0,94)$ respectively, but I don't know what to incorporate next. Any clues?
Let $S$ be the set of all combinations of the form $2y+5z$ such that $y$ and $z$ are non-negative integers with $y+z\leq 94$. You want the set $3S+376$, so we just need to determine $S$. We shall prove that $S$ contains $0$ and every integer in the range $\{1,2,\dots , 94\times 5\}$ except a select few. Which numbers congruent to $1\bmod 5$ can be made? The smallest is clearly $6$ while the largest is clearly $6+ 91\times 5$ (so we are missing $1$ and $93\times5+1$). Which numbers congruent to $2\bmod 5$ can be made? The smallest is clearly $2$ and the largest is $2+93\times 5$ (so none is missing). Which numbers congruent to $3\bmod 5$ can be made? The smallest is clearly $8$ while the largest is $8+90\times 5$ (so we are missing $3$, $92\times 5+3$ and $93\times 5+3$). Which numbers congruent to $4\bmod 5$ can be made? The smallest is clearly $4$ and the largest is $4+92\times 5$ (so we are missing $93\times 5+4$). It is also clear that, within each residue class, all of the numbers in between can be made, and that all multiples of $5$ up to $94\times 5$ can be made. Therefore $$S=\{0,1,2,3,\dots , 94\times 5\} \setminus \{1,3,92\times 5+3,93\times5+1,93\times 5 +3,93\times 5 + 4\}$$
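A brute-force check (editorial addition, not part of the original answer) confirms the claimed unattainable values, and also that $0 \in S$ (take $y=z=0$, giving $M=376$):

```python
# enumerate every value 2y + 5z with y, z >= 0 and y + z <= 94
attainable = set()
for y in range(95):
    for z in range(95 - y):
        attainable.add(2 * y + 5 * z)

full_range = set(range(1, 94 * 5 + 1))          # {1, ..., 470}
missing = sorted(full_range - attainable)

# the six unattainable values claimed in the answer
claimed = sorted({1, 3, 92 * 5 + 3, 93 * 5 + 1, 93 * 5 + 3, 93 * 5 + 4})

# translate back to M = 3s + 376
M_values = sorted(3 * s + 376 for s in attainable)
print(missing == claimed, min(M_values), max(M_values))
```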
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A normal subgroup with index relatively prime to its order This is an exercise from Lang's Algebra. The theorem and my work on it are below: Let $G$ be a finite group and $N$ be a normal subgroup such that $N$ and $G/N$ have relatively prime orders. I need to show the following two statements: i) Let $H$ be a subgroup of $G$ having the same order as $G/N$. Prove that $G =HN$. ii) Let $g$ be an automorphism of $G$. Prove that $g(N)=N$. For the first one, I do the following: It can be shown that $HN$ is a subgroup of $G$. By our assumption, I can say that $|H \times N |= |H||N|=|G|.$ Then I define the following map: $\varphi: H \times N \rightarrow G$ where $\varphi(h,n) = hn$. I tried to show that $\varphi$ is an isomorphism but I could not find a way to show it is injective or surjective. Either one is enough to finish the proof. For the second one, I defined two maps $f_1$ and $f_2$ such that: $f_1 : G \rightarrow G / N$ with $f_1(g) = gN$ and $f_2:G / N \rightarrow G$ with $f_2(gN)=g$. By assumption, I have $g: G \rightarrow G$ which is an isomorphism. We have to show that $g(N)=N$. (*) By the first isomorphism theorem, I can say that $f_2(f_1(x))=g(x)$. Now, $g(N)=f_2(f_1(N))=f_2(N)=N$. Is this part, in particular the step (*), correct?
For $i)$: Since $|H|=|G/N|$, $|H|$ and $|N|$ are relatively prime by assumption, which means that if $x\in H\cap N$, then $x=e$, for the order of $x$ must divide both $|H|$ and $|N|$. Because $$|HN|=\frac{|H||N|}{|H\cap N|}$$ we see that $|HN|=|H||N|=|G|$. For $ii)$ what you've written is not correct. For one, $f_2$ isn't even well-defined because there are many elements in $gN$ and it's unclear how we ought to choose one. Instead try this: Because $g$ is an automorphism, $|g(N)|=|N|$ and $g(N)$ is also a subgroup, so it is sufficient to show that $N$ is the unique subgroup of order $|N|$. To see this, note that if $H$ is any other subgroup of order $|N|$, then (using that $N$ is normal, so $HN$ is a subgroup) $$|HN|=\frac{|H||N|}{|H\cap N|}=\frac{|N|^2}{|H\cap N|}$$ but $|HN|$ divides $|G|$ by Lagrange's theorem so $$\frac{|G|}{|N|}\frac{|H\cap N|}{|N|}$$ is an integer and since $\frac{|G|}{|N|}$ and $|N|$ are relatively prime, it follows that $|H\cap N|=|N|$ so $H=N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Confirming the Existence of PDE Solutions in Sobolev Spaces So far I have only the most basic understanding of Sobolev spaces, like the existence of a unique weak solution, or what it means for the PDE: $$\Delta u = -f ,\qquad \text{where } f \text{ is a Schwartz function}$$ However, how do I think about a PDE when $f$ only satisfies a weaker condition, like $f \in L^2$? I think I got lost in all the different kinds of Sobolev spaces, and what is allowed in each space. So, just take $f \in L^2$. Can you kindly show how you examine the existence of a unique solution to the Laplace equation above? Thanks ....
I assume that $\Omega$ is bounded and has smooth boundary and that you have zero Dirichlet boundary conditions. If $f\in L^2(\Omega)$ (in fact one can take weaker conditions $f\in H^{-1}(\Omega)$ here) then consider the weak formulation of the Poisson equation: $$\int \nabla u \cdot \nabla v = \int f v, \forall v\in H^1_0(\Omega).$$ Using the Lax-Milgram theorem, one can deduce existence and uniqueness of solutions $u \in H^1_0(\Omega)$ satisfying the weak formulation. In my opinion, it is also instructive to establish uniqueness by hand since it requires several ubiquitous estimates. Note that $$\|\nabla u \|^2_{L^2(\Omega)}=\int \nabla u \cdot \nabla u = \int f u \leq \|f\|_{L^2(\Omega)}\|u\|_{L^2(\Omega)}\leq C_\epsilon\|f\|_{L^2(\Omega)}^2+\epsilon \|u\|_{L^2(\Omega)}^2.$$ Using the Poincare inequality for $u$: $\|u\|_{L^2(\Omega)}\leq C\|\nabla u\|_{L^2(\Omega)}$ and taking $\epsilon$ sufficiently small we have $$\|\nabla u\|_{L^2(\Omega)} \leq C \|f\|_{L^2(\Omega)}.$$ So, assuming there are two weak solutions $u,v\in H^1_0(\Omega)$, consider $w=u-v$ which solves $\Delta w = 0$ with $w=0$ on $\partial \Omega$. Our above estimate establishes that $w\equiv 0$ and so $u=v$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2057996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Graph $\text{Im}\left(\frac{1}{z}\right)=1$ I used the identity $z=x+iy$ which resulted in $\text{Im}\left(\frac{1}{x+iy}\right)$. Multiplying by the conjugate, I found that this was equal to $\text{Im}\left(\frac{x-iy}{x^2+y^2}\right)$, which by splitting this fraction into two terms is $\frac{-y}{x^2+y^2}=1$. Multiplying, I found $-y=x^2+y^2$, which I think is the equation of a circle. Can somebody tell me how to graph this equation in the complex plane?
You're almost there. We have $$x^2+y^2=-y\implies x^2+(y+1/2)^2=\frac14$$ In the complex plane, $x=\text{Re}(z)$ and $y=\text{Im}(z)$. Therefore, this is a circle with center $(0,-1/2)$ and radius $1/2$
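A quick numeric check (editorial addition, not part of the original answer) that points of this circle, other than the origin where $1/z$ is undefined, really satisfy $\text{Im}(1/z)=1$:

```python
import cmath
import math

center = complex(0, -0.5)   # circle with center (0, -1/2) ...
radius = 0.5                # ... and radius 1/2

deviations = []
for k in range(200):
    theta = 2 * math.pi * k / 200
    z = center + radius * cmath.exp(1j * theta)
    if abs(z) < 1e-6:       # skip the origin (theta = pi/2), where 1/z blows up
        continue
    deviations.append(abs((1 / z).imag - 1))

max_dev = max(deviations)
print(max_dev)
```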
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Equation with matrix and determinant Given $$A={\begin{bmatrix} \det(A) & \det(A)+a \\ \det(A) +b & \det(A)+c \end{bmatrix}}$$ where $a,b,c$ are given and $A$ is unknown, is it possible to use some clever tricks concerning determinants for this case? (instead of direct calculations).
Let $\text{det}A=d$. Then we are given $$A={\begin{bmatrix} d & d+a \\ d +b & d+c \end{bmatrix}}$$ Using row and column operations (which leave the determinant unchanged), we get $$ d=|A|=\begin{vmatrix} d & d+a \\ d +b & d+c \end{vmatrix} = \begin{vmatrix} d & d+a \\ b & c-a \end{vmatrix} = \begin{vmatrix} d & a \\ b & c-a-b \end{vmatrix} $$ Thus $$d=d(c-a-b)-ab.$$ Now solve for $d$.
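Since $d = d(c-a-b) - ab$ is linear in $d$, it gives $d = \frac{ab}{c-a-b-1}$ whenever $c - a - b \neq 1$. A small verification sketch (editorial addition, using exact rational arithmetic):

```python
from fractions import Fraction

def solve_d(a, b, c):
    """Solve d = d*(c - a - b) - a*b for d (requires c - a - b != 1)."""
    return Fraction(a * b, c - a - b - 1)

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

checks = []
for (a, b, c) in [(1, 2, 7), (3, -1, 2), (0, 5, 9), (4, 4, 100)]:
    d = solve_d(a, b, c)
    A = [[d, d + a], [d + b, d + c]]
    checks.append(det2(A) == d)   # det A really equals the entry d
print(checks)
```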
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is a line a closed subset of $\mathbb R^2$? I'm studying topology and I have a doubt in the following exercise. I'd appreciate some help. Let $m\neq0$ and $c$ be real numbers. Prove that the line $L=\{\langle x,y\rangle:y=mx+c\}$ is a closed subset of $\mathbb R^2$. I found similar questions here, but with answers involving continuity of functions. This exercise is in chapter $2$ of Topology without tears, in which continuity is presented in the fifth chapter. Then, my attempt was to prove that $\mathbb R^2 \backslash L$ is open by setting, at each point $p\in L$, two open rectangles with a vertex at $p$: one above $L$ and the other below. As $\mathbb R^2 \backslash L$ is the union of these rectangles $(?)$, and every one of them is an open subset of $\mathbb R^2$, we are done. Is my guess right? Thank you!
If you know that the real line is a closed subspace of $\Bbb{R}^2$ (indeed, it's a complete metric subspace, which is even stronger) the result follows easily because the real line can be mapped to any other line in $\Bbb{R}^2$ by a composition of a rotation and a translation. Since rotations and translations are isometries (i.e. homeomorphisms) of $\Bbb{R}^2$, they preserve closed sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
In the derivation of the formula for volume of a solid of revolution, how does $Δx$ "become" $\mathrm dx$? I am currently learning about the formula for the volume of a solid of revolution formed by the rotation about the $x$-axis through $2\pi$ radians. I believe this is called the "disk" method. Referring to the section of this website "Volumes for Solid of Revolution", I am able to fully understand how one would eventually arrive at the formula: $$ V \approx \sum_{i = 1}^n A(x_i^*)\Delta x. $$ I also understand the next portion, which states that the exact volume is then: $$ V = \lim_{n\to \infty}\sum_{i= 1}^n A(x_i^*)\Delta x. $$ Where I have doubt is what it is next equated with, i.e. this equation: $$ V = \lim_{n\to \infty}\sum_{i= 1}^n A(x_i^*)\Delta x = \int_a^b{\rm d}x\, A(x). $$ I understand fully how we can use integration here. What I do not understand is how "$\Delta x$" is now replaced by $\mathrm dx$. Don't we include $\mathrm dx$ to show that we are "integrating with respect to $x$", not to represent any sort of length? And yet, $\Delta x$ was in fact supposed to represent an extremely small length. How am I supposed to understand "the transformation" of $\Delta x$ to $dx$?
$\def\d{\mathrm{d}}\def\peq{\mathrel{\phantom{=}}{}}$The identity (Note that $Δx = \dfrac{b - a}{n}$)$$ \lim_{n → ∞} \sum_{k = 1}^n A(x_{n, k}) · \frac{b - a}{n} = \int_a^b A(x) \,\d x $$ is the corollary of the definition of Riemann integral. For any $n \geqslant 1$, the notation $Δx$ means a small length, whereas the notation $\d x$ means an infinitesimal length. In fact, the “derivation” given on that site is just intuition. To prove it rigorously, denote by $D$ the solid and define the section set$$ S(x) = \{(y, z) \in \mathbb{R}^2 \mid (x, y, z) \in D\}. \quad \forall a \leqslant x \leqslant b $$ Note that for any $a \leqslant x \leqslant b$,$$ (x, y, z) \in D \Longleftrightarrow (y, z) \in S(x). \quad \forall (y, z) \in \mathbb{R}^2 $$ By the definition of area and volume,\begin{align*} V &= \iiint\limits_D \d x\d y\d z = \iiint\limits_{\mathbb{R}^3} I_D(x, y, z) \,\d x\d y\d z\\ &= \int_a^b \d x \iint\limits_{\mathbb{R}^2} I_D(x, y, z) \,\d y\d z = \int_a^b \d x \iint\limits_{\mathbb{R}^2} I_{S(x)}(y, z) \,\d y\d z\\ &= \int_a^b \d x \iint\limits_{S(x)} \d y\d z = \int_a^b A(x) \,\d x. \end{align*} Here $I_B$ is the indicator function.
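A numeric illustration of the limit (editorial addition, not part of the original answer): for the unit sphere generated by rotating $f(x)=\sqrt{1-x^2}$ about the $x$-axis, the Riemann sums of $A(x)=\pi(1-x^2)$ approach the exact volume $4\pi/3$ as $n$ grows — the "$\Delta x$" sums converge to the "$\mathrm{d}x$" integral:

```python
import math

def A(x):
    # cross-sectional disk area when rotating f(x) = sqrt(1 - x^2)
    return math.pi * (1 - x * x)

a, b = -1.0, 1.0
exact = 4 * math.pi / 3

errors = []
for n in [10, 100, 1000, 10000]:
    dx = (b - a) / n
    # right-endpoint Riemann sum: sum of A(x_i) * dx
    riemann = sum(A(a + i * dx) for i in range(1, n + 1)) * dx
    errors.append(abs(riemann - exact))
print(errors)
```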
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Integration with Half space Gaussian I have a problem to solve, and there is a step I don't know how to do. The half-space Gaussian integral is given: $$\int_{0}^\infty \exp(-ax^2)dx = \frac{1}2 \sqrt{\frac{\pi}{a}}$$ I have to calculate $$\int_{0}^\infty \exp \left(-ax^2 - \frac{b}{x^2} \right)dx$$ where $a$ and $b$ are real and positive. Does someone have an idea? :) Thanks for your answers, Mathieu
Note that in THIS ANSWER, I presented a solution to a more general version of the integral of interest herein. Let $I(a,b)$ be the integral given by for $a>0$ and $b>0$. $$I(a,b)=\int_0^\infty e^{-\left(ax^2+\frac{b}{x^2}\right)}\,dx \tag 1$$ Enforcing the substitution $x\to \sqrt[4]{b/a}x$ into $(1)$ reveals $$\begin{align} I(a,b)&=\sqrt[4]{\frac{b}{a}}\int_0^\infty e^{-\sqrt{ab}\left(x^2+\frac{1}{x^2}\right)}\,dx \tag 2 \end{align}$$ Next, noting that $x^2+\frac{1}{x^2}=\left(x-\frac1x\right)^2+2$, we can write $(2)$ as $$\begin{align} I(a,b)&=\sqrt[4]{\frac{b}{a}}e^{-2\sqrt{ab}}\int_0^\infty e^{-\sqrt{ab}\left(x-\frac{1}{x}\right)^2}\,dx\tag 3 \end{align}$$ Enforcing the substitution $x\to 1/x$ in $(3)$ yields $$I(a,b)=\sqrt[4]{\frac{b}{a}} e^{-2\sqrt{ab}}\int_0^\infty e^{-\sqrt{ab}\left(x-\frac{1}{x}\right)^2}\,\frac{1}{x^2}\,dx\tag4$$ Adding $(3)$ and $(4)$, we obtain $$\begin{align} I(a,b)&=\frac12\sqrt[4]{\frac{b}{a}} e^{-2\sqrt{ab}}\int_{-\infty}^\infty e^{-\sqrt{ab}\left(x-\frac{1}{x}\right)^2}\,d\left(x-\frac{1}{x}\right)\\\\ &=\frac12\sqrt[4]{\frac{b}{a}} e^{-2\sqrt{ab}}\int_{-\infty}^\infty e^{-\sqrt{ab}\,x^2}\,dx\\\\ &=\frac1{2\sqrt a} e^{-2\sqrt{ab}}\int_{-\infty}^\infty e^{-x^2}\,dx\\\\ &=\bbox[5px,border:2px solid #C0A000]{\frac{\sqrt{\pi}}{2\sqrt a}e^{-2\sqrt{ab}}} \end{align}$$
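The boxed closed form can be cross-checked numerically (editorial addition, not part of the original answer); a crude midpoint rule suffices because the integrand vanishes extremely fast at both ends of $(0,\infty)$:

```python
import math

def I_numeric(a, b, lo=1e-6, hi=12.0, n=200_000):
    """Midpoint-rule approximation of the integral over (0, inf)."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        total += math.exp(-(a * x * x + b / (x * x)))
    return total * h

def I_closed(a, b):
    # the boxed result: sqrt(pi)/(2 sqrt(a)) * exp(-2 sqrt(ab))
    return math.sqrt(math.pi) / (2 * math.sqrt(a)) * math.exp(-2 * math.sqrt(a * b))

errs = [abs(I_numeric(a, b) - I_closed(a, b))
        for (a, b) in [(1.0, 1.0), (2.0, 0.5), (0.5, 3.0)]]
print(errs)
```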
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
An Inequality $\left(\sum\limits_{k=1}^n a_k^{1/2}\right)^2\le\left(\sum\limits_{k=1}^n a_k^{1/3}\right)^3$ Why is $\left(\sum\limits_{k=1}^n a_k^{1/2}\right)^2\le\left(\sum\limits_{k=1}^n a_k^{1/3}\right)^3$ with $a_k$ nonnegative? Writing $\left(\sum\limits_{k=1}^n a_k^{1/2}\right)^2=\left(\sum\limits_{k=1}^n a_ka_k^{-1/2}\right)^2$ and $\left(\sum\limits_{k=1}^n a_k^{1/3}\right)^3=\left(\sum\limits_{k=1}^n a_ka_k^{-2/3}\right)^3$, and assuming $\sum\limits_{k=1}^n a_k=1$, the inequality is equivalent to $\left(\sum\limits_{k=1}^n a_ka_k^{-1/2}\right)^{-2}\ge\left(\sum\limits_{k=1}^n a_ka_k^{-2/3}\right)^{-3}$ This would be exactly the power mean inequality if the exponent on the RHS were $-\frac 32$ instead of $-3$; but if $\sum\limits_{k=1}^n a_k=1$ then $\sum\limits_{k=1}^n a_k^{1/3}\ge1$, hence $\left(\sum\limits_{k=1}^n a_ka_k^{-2/3}\right)^{-\frac32}\ge \left(\sum\limits_{k=1}^n a_ka_k^{-2/3}\right)^{-3}$, so we're done. Is there a way to solve this more directly?
For simplicity, let $n=2$. It is easy to generalize to the case of $n>2$. Let $$ a_1=r^2b_1^4, a_2=r^2b_2^4$$ such that $$ b_1^2+b_2^2=1,b_1,b_2\ge0. $$ Then the inequality becomes $$ b_1^{\frac43}+b_2^{\frac43}\ge 1 $$ which is easy to prove. In fact, noting $0\le b_1,b_2\le1$, one has $$ b_1^{\frac43}+b_2^{\frac43}\ge b_1^2+b_2^2=1. $$
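A randomized numeric check of the original inequality (editorial addition, not part of the original answer); the tiny tolerance only absorbs floating-point rounding in the near-equality cases:

```python
import random

random.seed(0)

def lhs_rhs(a):
    L = sum(x ** 0.5 for x in a) ** 2        # (sum a_k^{1/2})^2
    R = sum(x ** (1 / 3) for x in a) ** 3    # (sum a_k^{1/3})^3
    return L, R

holds = []
for _ in range(1000):
    n = random.randint(1, 8)
    a = [random.uniform(0, 100) for _ in range(n)]
    L, R = lhs_rhs(a)
    holds.append(L <= R + 1e-9 * max(1.0, R))
print(all(holds))
```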
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Continuity of a function with real parameter Let $f:\mathbb{R^3}$ $\rightarrow$ $\mathbb{R}$ be defined as: $$f(x,y,z)=\begin{cases} \left(x^2+y^2+z^2\right)^p \exp\left(\frac{1}{\sqrt{x^2+y^2+z^2}}\right)& ,\,\text{if }\quad(x,y,z) \ne (0,0,0)\quad \\ 0 &,\,\text{o.w} \end{cases}$$ where $\,p\in \mathbb{R}$. Is this function continuous?
$f$ is continuous in $\mathbb{R}^3\setminus\{(0,0,0)\}$, but not in the point $(0,0,0)$ since the limit of $f(x,y,z)$ when $(x,y,z)\to(0,0,0)$ does not exist: To see it recall that $$ e^{t}=1+t+\frac{t^2}{2!}+\frac{t^3}{3!}+\frac{t^4}{4!}+\cdots, $$ so for $t>0$ we have $$ e^{t}\geq\frac{t^{2p+2}}{(2p+2)!}. $$ Plugging $t=\sqrt{\frac{1}{x^2+y^2+z^2}}$ and translating in terms of $f$ this estimate gives $$ f(x,y,z)\geq\frac{(x^2+y^2+z^2)^p}{(x^2+y^2+z^2)^{p+1}(2p+2)!}=\frac{1}{(x^2+y^2+z^2)(2p+2)!}\overset{(x,y,z)\to0}{\to}\infty. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2058939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
computing flux integral I was just given this question: compute the flux of $$F(x,y)=\langle x+2y,3x+4y\rangle $$ out of the unit circle $C$. I am not sure how to solve this; usually a flux computation would involve a $z$-component. Please help!
The flux of $F(x,y)$ across $C$ is given by $\int_C F(x,y)\cdot n\,ds$, where $n$ is the outward normal vector to $C$. Using the planar divergence theorem, you could also calculate this integral as: $$\int_C F(x,y)\cdot n\,ds=\iint_D\nabla\cdot F(x,y)\,dA$$ Where $D$ is enclosed by $C$ (in this case $D$ is the unit disk).
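For this particular $F$, $\nabla\cdot F = 1 + 4 = 5$, so the divergence theorem predicts a flux of $5\pi$. A numeric check of the line integral (editorial addition, not part of the original answer):

```python
import math

def F(x, y):
    return (x + 2 * y, 3 * x + 4 * y)

# line integral of F . n ds around the unit circle, midpoint rule in theta
n = 1000
flux = 0.0
for k in range(n):
    t = 2 * math.pi * (k + 0.5) / n
    x, y = math.cos(t), math.sin(t)   # point on the circle
    nx, ny = x, y                     # outward unit normal; ds = dt on the unit circle
    Fx, Fy = F(x, y)
    flux += (Fx * nx + Fy * ny) * (2 * math.pi / n)

divergence_prediction = 5 * math.pi   # div F = 5, area of the unit disk = pi
print(flux, divergence_prediction)
```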
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove that, if G is a bipartite graph with an odd number of vertices, then G is non-Hamiltonian Continuing with my studies in Introduction to Graph Theory 5th Edition by Robin J Wilson, one of the exercises asked to prove that, if $G$ is a bipartite graph with an odd number of vertices, then $G$ is non-Hamiltonian. This is what I've come up with. Is it strong enough? Let graph $G$ be a bipartite graph with an odd number of vertices and suppose $G$ is Hamiltonian, meaning that there is a directed cycle that includes every vertex of $G$ (Wilson 48). As such, there would exist a cycle in $G$ of odd length, namely the Hamiltonian cycle itself, which passes through each of the odd number of vertices exactly once. However, by Theorem 2.1, a graph $G$ is bipartite if and only if every cycle of $G$ has even length (Wilson 33). By this contradiction, if $G$ is a bipartite graph with an odd number of vertices, then $G$ is non-Hamiltonian. As an example, the picture below has 13 vertices so it must be non-Hamiltonian.
Yes, your proof is quite correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Probability distribution for y=min(x,a) Let's say a random variable $x$ has the pdf $$(x-\theta)^k \exp(-\lambda(x-\theta)), \qquad \theta \le x \le w.$$ How can we find the distribution of $y=\min(x,a)$, for $\theta \le a \le w$? Thank you for helping!
$\theta$ is just a shift parameter, either for $x$ and $a$ and $w$ and thus also for $y$, and we can get rid of it, leaving $$ \left\{ \begin{gathered} 0 \leqslant a \leqslant w\quad p(a) = 1/w\quad \left( {\text{?}\;\text{supposed}} \right) \hfill \\ 0 \leqslant x \leqslant w\quad p(x) = x^{\,k} e^{\, - \,\lambda \,x} /\int_{x = 0}^{\,w} {x^{\,k} e^{\, - \,\lambda \,x} dx} = x^{\,k} e^{\, - \,\lambda \,x} /C\quad \left( {\text{?}\;\text{supposed}} \right) \hfill \\ 0 \leqslant y = \min (a,x) \leqslant w \hfill \\ \end{gathered} \right. $$ where it is understood that all the parameters are net of $\theta$. Then $$ \begin{gathered} p(y)dy = P\left( {y \leqslant a \leqslant y + dy} \right)P\left( {y \leqslant x \leqslant w} \right) + P\left( {y \leqslant x \leqslant y + dy} \right)P\left( {y \leqslant a \leqslant w} \right) = \hfill \\ = \frac{1} {{C\;C'}}\left( {\frac{1} {w}\left( {\int_{x = y}^{\,w} {x^{\,k} e^{\, - \,\lambda \,x} dx} } \right) + \frac{{w - y}} {w}y^{\,k} e^{\, - \,\lambda \,y} } \right)dy \hfill \\ \end{gathered} $$ where $C'$ and thus $CC'$ shall be such as to normalize $p(y)$, i.e.: $$ C\;C'\;:\quad \int_{y = 0}^{\,w} {p(y)dy} = 1 $$ So, actually, it doesn't matter to normalize $p(x)$ at the beginning.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do any major theorems of complex analysis that require holomorphic functions fail if the function is only holomorphic up to removable singularities? Or is "holomorphic on $\Omega$" universally (i.e., in practice and standard texts, and including for purposes of Ph.D. quals) understood to mean "at worst possessing a holomorphic extension to $\Omega$"? To be sure, I can think of extremely trivial, sub-pedantic examples of statements that would require a strictly holomorphic function like "let $f$ be entire, then $f$ is defined everywhere." Yes, this "theorem" fails for $f(z) = z/z$ at the origin because of the hole, but it hardly gives useful information (if it forms the cornerstone of your current research, however, please accept my sincere apologies). What I'm more concerned about is assuming that it's OK to have the "weaker" definition somewhere that the "stronger" definition is actually called for, e.g., when applying one of the major grad-level-course theorems. I can't think of any specific instances in which the distinction is relevant, but it crossed my mind as I tried to prove that the convergence of the Taylor series of $f$, holomorphic at $z=c$, on a disc $D(c, r)$ implies the holomorphicity of $f$ on that disc. That implication is supposed to be true, but it is not true if removable singularities cause a function to not be holomorphic; indeed, all we have to do is replace $f(z)$ with $f(z)\frac{z-w}{z-w}$ for some $w$ in $D(c, r)\backslash\{c\}$ for a counterexample. I just don't want to lose generality in an argument by taking $f$ with removable singularities up to its analytic continuation.
A bit less trivial (but still trivial) example is Picard's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Set of all finite subsets of the plane How could I determine the cardinality of the set of all finite subsets of the plane? I believe I am correct in saying that this set is equivalent to the power set of $\Bbb R^{2}$ minus all infinite sets in that set. Is it correct to say that I can map any set of size $k$ to $\Bbb R^{2k}$, and then get a countable union of sets of cardinality $c$...which is $c$? That just sounds a little incomplete because how do we know this is a $countable$ union of said sets?
The set is $$\mathcal{P}_f\left(\mathbb{R}^2\right)=\bigcup_{n=0}^{\infty}P_n\left(\mathbb{R}^2\right)$$ where $P_n\left(\mathbb{R}^2\right)=\{S\subset\mathbb{R}^2\,;\,|S|=n\}$, which injects into ${\left(\mathbb{R}^2\right)}^n$ (choose an ordering of each $n$-element set). Now, clearly, the cardinality of $P_1\left(\mathbb{R}^2\right)$ is $\left|\mathbb{R}^2\right|=|\mathbb{R}|$. For $n> 1$, the cardinality of $P_n\left(\mathbb{R}^2\right)$ is also $|\mathbb{R}|$, because of the injection into ${\left(\mathbb{R}^2\right)}^n$, whose cardinality is $|\mathbb{R}|$. At this point it suffices to use this theorem on cardinality of infinite unions to deduce that $\left|\mathcal{P}_f\left(\mathbb{R}^2\right)\right|=|\mathbb{R}|$. Notice that basically the same argument yields that if $A$ is any infinite set and $\mathcal{P}_f(A)$ is the set of finite subsets of $A$, then $|\mathcal{P}_f(A)|=|A|$. Here we used the handy fact that for any infinite cardinal $\kappa$ it holds that $\kappa\times\kappa=\kappa$ (which implies that $\left|A^n\right|=|A|$ for all $n \in \mathbb{N}$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Every cyclic subspace contains an eigenvector The question is: Let $X$ be a non-null vector. Then there exists an eigenvector $Y$ of $A$ belonging to the span of $\{X, AX, A^{2} X, ... \}$. I have tried to the best of my ability to solve it, but I don't find any right way to proceed. Please help me. I want to add a solution of my own question which I found just now in a pdf. Here it is: Let $k$ be the least positive integer such that $X, AX, A^{2} X, ... , A^{k} X$ are linearly dependent. Now let us consider a relation $\sum_{i=0}^{k} c_{i} A^{i} X = 0$. Then we must have $c_k \neq 0$. Now let us consider a polynomial $g(t) = \sum_{i=0}^{k} c_{i} t^{i}$. Let $\beta_{1}, \beta_{2}, ... , \beta_{k}$ be the roots of the polynomial $g(t)$. Then $g(t) = c_{k} \prod_{i=1}^{k} (t - \beta_{i})$. Hence, $\sum_{i=0}^{k} c_{i} A^{i} = g(A) = c_{k} \prod_{i=1}^{k} (A - \beta_{i} I)$. Taking $Y = (\prod_{i=2}^{k} (A - \beta_{i} I)) X$, it is easy to see that $Y \neq 0$ by the minimality of $k$ and $(A - \beta_{1} I) Y = 0$. Hence the result follows. But at the end it is not clear to me why $Y$ is in the span of $\{X, A X, A^{2} X, ... \}$. Please suggest to me the trick behind it. Thank you in advance.
Why $Y\ne 0$: Let $h(t)=\prod_{i=2}^k(t-\beta_i).$ Then $\deg (h)<k$ and $h(t)$ is not identically $0$ so by the minimality of $k$ we have $0\ne h(A)(X).$ But $h(A)(X)=Y.$
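A concrete instance of the whole construction (a made-up example, editorial addition, not from the thread): take $A=\begin{pmatrix}2&1\\0&3\end{pmatrix}$ and $X=(0,1)^T$; then $k=2$, $g(t)=t^2-5t+6=(t-2)(t-3)$, and $Y=(A-3I)X$ is an eigenvector lying in $\mathrm{span}\{X,AX\}$:

```python
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1],
     [0, 3]]
X = [0, 1]

AX = matvec(A, X)        # [1, 3]
AAX = matvec(A, AX)      # [5, 9]

# X, AX, A^2 X satisfy  6*X - 5*AX + A^2 X = 0,  i.e.  g(t) = t^2 - 5t + 6
relation = [6 * X[i] - 5 * AX[i] + AAX[i] for i in range(2)]

# Y = (A - 3I) X lies in span{X, AX} and is an eigenvector for the root 2
Y = [AX[i] - 3 * X[i] for i in range(2)]
AY = matvec(A, Y)
print(relation, Y, AY)   # AY should equal 2*Y
```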
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
bromwich inverse laplace of $\frac{1}{\sqrt{s+1}}$ I want to use the Bromwich integral to evaluate the inverse Laplace transform of $\frac{1}{\sqrt{s+1}}$. The complex function $\frac{e^{st}}{\sqrt{s+1}}$ has a singularity (a branch point) at $s=-1$. I cannot find a good contour to evaluate. Am I right when I say the contour should enclose the poles but not cross any branch lines?
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\bbox[15px,#ffd]{\int_{\pars{-1}^{\large +} - \infty\ic} ^{\pars{-1}^{\large +} + \infty\ic}{1 \over \root{s + 1}}\,\expo{ts}{\dd s \over 2\pi\ic}} = \expo{-t}\int_{0^{\large +} - \infty\ic} ^{0^{\large +} + \infty\ic}s^{-1/2}\expo{ts}{\dd s \over 2\pi\ic} \\[5mm] = &\ -\expo{-t}\int_{-\infty}^{0} \bracks{\pars{-s}\expo{\ic\pi}}^{-1/2}\expo{ts} {\dd s \over 2\pi\ic} -\expo{-t}\int_{0}^{-\infty} \bracks{\pars{-s}\expo{-\ic\pi}}^{-1/2}\expo{ts} {\dd s \over 2\pi\ic} \\[5mm] = &\ \ic\expo{-t}\int_{0}^{\infty}s^{-1/2}\expo{-ts} {\dd s \over 2\pi\ic} + \ic\expo{-t}\int_{0}^{\infty}s^{-1/2}\expo{-ts} {\dd s \over 2\pi\ic} \\[5mm] = &\ {\expo{-t} \over \pi}\int_{0}^{\infty}s^{-1/2}\expo{-ts}\dd s = {\expo{-t}t^{-1/2} \over \pi}\int_{0}^{\infty}s^{-1/2}\expo{-s}\dd s = {\expo{-t}t^{-1/2} \over \pi}\ \overbrace{\Gamma\pars{1 \over 2}}^{\ds{\root{\pi}}} \\[5mm] = &\ \bbx{\expo{-t} \over \root{\pi t}} \end{align}
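The result can be cross-checked numerically (editorial addition, not part of the original answer) by confirming that the Laplace transform of $e^{-t}/\sqrt{\pi t}$ is $1/\sqrt{s+1}$; the substitution $t=u^2$ removes the integrable singularity at $t=0$:

```python
import math

def laplace_of_f(s, n=100_000, hi=10.0):
    """Numerically compute L{ e^{-t} / sqrt(pi t) }(s).
    With t = u^2 the integral becomes (2/sqrt(pi)) * int_0^inf exp(-(s+1) u^2) du,
    approximated here by the midpoint rule on [0, hi]."""
    h = hi / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        total += math.exp(-(s + 1) * u * u)
    return 2.0 / math.sqrt(math.pi) * total * h

errs = []
for s in [0.5, 1.0, 2.0, 5.0]:
    errs.append(abs(laplace_of_f(s) - 1.0 / math.sqrt(s + 1)))
print(errs)
```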
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $\frac{\sqrt{31+\sqrt{31+\sqrt{31+ \cdots}}}}{\sqrt{1+\sqrt{1+ \sqrt{1+ \cdots}}}}=a-\sqrt b$, find the value of $a+b$ $\frac{\sqrt{31+\sqrt{31+\sqrt{31+ \cdots}}}}{\sqrt{1+\sqrt{1+ \sqrt{1+ \cdots}}}}=a-\sqrt b$ where $a,b$ are natural numbers. Find the value of $a+b$. I am not able to proceed with solving this question as I have no idea as to how I can calculate $\frac{\sqrt{31+\sqrt{31+\sqrt{31+ \cdots}}}}{\sqrt{1+\sqrt{1+ \sqrt{1+ \cdots}}}}$. A small hint would do.
Let $s = \sqrt{31+\sqrt{31+\sqrt{31+\cdots}}}$. Then $s = \sqrt{31+s}$, so $s^2 = s+31$ and, by the quadratic formula, $s = \frac{5\sqrt5 + 1}{2}$. Let $k = \sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}$. Then $k = \sqrt{1+k}$, so $k^2 = k+1$ and, by the quadratic formula, $k = \frac{\sqrt5 + 1}{2}$. Hence $$\frac{s}{k} = \frac{5\sqrt5 + 1}{\sqrt5 + 1} = \frac{(5\sqrt5 + 1)(\sqrt5 - 1)}{4} = \frac{24-4\sqrt5}{4} = 6-\sqrt5,$$ multiplying numerator and denominator by the conjugate $\sqrt5 - 1$. Since $6 - \sqrt5 = a - \sqrt b$, we have $a = 6$, $b = 5$, so $a + b = 11$. Thus, the answer is $11$.
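The fixed-point iterations converge quickly, so the whole computation is easy to verify numerically (editorial addition, not part of the original answer):

```python
import math

def nested_sqrt_limit(c, iterations=200):
    """Iterate s -> sqrt(c + s); converges to the positive root of s^2 = s + c."""
    s = 0.0
    for _ in range(iterations):
        s = math.sqrt(c + s)
    return s

s = nested_sqrt_limit(31)      # -> (1 + 5*sqrt(5)) / 2
k = nested_sqrt_limit(1)       # -> golden ratio (1 + sqrt(5)) / 2

ratio = s / k
target = 6 - math.sqrt(5)      # a - sqrt(b) with a = 6, b = 5, so a + b = 11
print(ratio, target)
```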
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proof verification: almost linear map is injective iff the kernel only contains zero. $f:V\to W$ is an $\mathbb{R}$-$\mathbb{C}$-linear map between an $\mathbb{R}$-vector space $V$ and a $\mathbb{C}$-vector space $W$ if $$ f(av+bw)=af(v)+bf(w):=(a+i0)f(v)+(b+i0)f(w) $$ for any $a,b\in\mathbb{R}$ and $v,w\in V$. Theorem: An $\mathbb{R}$-$\mathbb{C}$-linear map is injective iff the kernel only contains zero. Proof: Assume that $f$ is injective and note that \begin{align*} f(\mathbf{0})= f(0\cdot \mathbf{0}) = 0f(\mathbf{0}):=(0+i0)f(\mathbf{0}) = \mathbf{0}, \end{align*} proving that $\mathbf{0}\in \mathrm{ker}(f):=\{v\in V: f(v)=\mathbf{0}\}$, and by injectivity of $f$ we have that $\mathrm{ker}(f)$ must be a singleton, hence $\mathrm{ker}(f)=\{\mathbf{0}\}$. Conversely assume that $\mathrm{ker}(f)=\{\mathbf{0}\}$ and consider any two elements $v_1,v_2\in V$ such that $f(v_1)=f(v_2)$. Then $f(v_1-v_2)=f(v_1)-f(v_2)=\mathbf{0}$, implying that $v_1-v_2\in \mathrm{ker}(f)$, but this must entail that $v_1-v_2=\mathbf{0}\iff v_1=v_2$, since $\mathrm{ker}(f)=\{\mathbf{0}\}$. QED. This proof is a straightforward replication of the truly linear case, but I can't find any problems with it in this case. Am I correct that I can extend the well-known theorem to what I defined as $\mathbb{R}$-$\mathbb{C}$-linear maps above?
Your proof is fine. Here is another reason it should work out the same: The vectors in a complex vector space also form a real vector space under vector addition and scalar multiplication by reals. Your "almost linear" maps are then linear maps to this corresponding real vector space. Since you already have this result for linear maps between real vector spaces, it still holds here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2059894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
All fractions which can be written simultaneously in the forms $\frac{7k-5}{5k-3}$ and $\frac{6l-1}{4l-3}$ Find all fractions which can be written simultaneously in the forms $\frac{7k-5}{5k-3}$ and $\frac{6l-1}{4l-3}$ for some integers $k,l$. Please check my answer and tell me whether it is correct or not: $$\frac{43}{31},\frac{31}{27},1,\frac{55}{39},\frac{5}{3},\frac{61}{43},\frac{19}{13},\frac{13}{9}$$
Suppose a number $p$ can be written in both forms: $$p= \frac{6l-1}{4l-3} =\frac{7k-5}{5k-3}.$$ Cross-multiplying, $$\implies kl+8k+l=6$$ $$\implies(k+1)l=6-8k\implies l=\frac{-2(4k-3)}{k+1}=-8+\frac{14}{k+1},$$ so $k+1$ must divide $14$. This gives the following integer solutions: $(k,l)=(-15,-9),(-8,-10),(-3,-15),(-2,-22),(0,6),(1,-1),(6,-6),(13,-7)$. Each of these pairs gives one such fraction. I shall let you conclude now.
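As a quick sanity check (illustration, not part of the argument), one can enumerate the divisor cases in Python: since $l=-8+\frac{14}{k+1}$, only the eight divisors of $14$ can appear as $k+1$, and each yields one fraction.

```python
from fractions import Fraction

solutions = []
for d in (1, -1, 2, -2, 7, -7, 14, -14):   # divisors of 14
    k = d - 1
    l = -8 + 14 // d                        # exact since d | 14
    assert k * l + 8 * k + l == 6           # pair satisfies the key relation
    solutions.append(Fraction(7 * k - 5, 5 * k - 3))

print(sorted(set(solutions)))
```

The resulting list can then be compared against the fractions proposed in the question.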
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Prove that $P(\bigcup_{i=1}^{\infty} A_i) = 1$ Let $A_i$ $(i=1,2,...)$ be independent events with $\sum_{i=1}^{\infty}P(A_i) = \infty.$ Prove that: $P(\bigcup_{i=1}^{\infty} A_i) = 1 $ Can someone please help me out with this question?
By virtue of the non-trivial part of the Borel–Cantelli lemma, one is allowed to conclude that the event $\displaystyle \mathrm{A} = \{A_n, \mathrm{i.o.}\} = \bigcap_{k = 1}^\infty \bigcup_{n = k}^\infty A_n$ has total mass equal to one. Notice that $\mathrm{A}$ lies within the union of the $A_n$, and the exercise follows.
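A small numerical illustration (not a proof, and with the assumed choice $P(A_i)=\frac{1}{i+1}$, for which $\sum_i P(A_i)$ diverges): by independence, the probability that none of $A_1,\dots,A_N$ occurs is $\prod_{i=1}^N\bigl(1-P(A_i)\bigr)$, which here telescopes to $\frac{1}{N+1}\to 0$, so $P(\bigcup A_i)\to 1$.

```python
# P(no A_i occurs for i <= n) with the assumed p_i = 1/(i+1);
# the product telescopes to 1/(n+1), which tends to 0.
def prob_none(n):
    p = 1.0
    for i in range(1, n + 1):
        p *= 1 - 1 / (i + 1)
    return p

print(prob_none(9), prob_none(999))
```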
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Proving critical point of a system is a center We have $$ \begin{cases} \dot{x} = - y - y^3 \\ \dot{y} = x \end{cases} $$ where $x,y \in \mathbb{R}$. Show that the critical point for the linear system is a $\mathbf{center}$. Prove that the type of the critical point is the same for the $\mathbf{nonlinear}$ system. TRY: Notice $( \dot{x}, \dot{y} ) = (0,0)$ iff $x = 0 $, $y + y^3 =0 \iff y(1+y^2) = 0$. Thus, the only critical point is $(0,0)$. Let's linearize the system. The Jacobian is $$ J(x,y) = \left( \begin{matrix} 0 & -1 - 3y^2 \\ 1 & 0 \end{matrix} \right) $$ We have $$ J(0,0) = \left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right) $$ And the eigenvalues of this matrix are $\lambda = \pm i $, showing that indeed the critical point is a $\mathbf{center}$. But I'm stuck on showing that the same is true for the nonlinear system. How do we show the existence of closed orbits near the critical point?
You can use the following Lyapunov function candidate $$V(x,y)=\tfrac12 x^2+\tfrac12 y^2+\tfrac14 y^4,$$ which is positive definite and satisfies $V(0,0)=0$. The derivative along trajectories is $$\dot{V}=x\dot{x}+y\dot{y}+y^3\dot{y}=x[-y-y^3]+yx+y^3[x]\equiv0.$$ Since $\dot{V}\equiv0$, every trajectory stays on a level curve of $V$; near the origin these level curves are closed, so the origin is a center.
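A numerical sketch of this conclusion (illustration only, with an assumed initial condition and step size): since $\dot V\equiv 0$, the value of $V$ should be conserved along any trajectory, which a short fixed-step RK4 integration confirms.

```python
# Integrate x' = -y - y^3, y' = x with classical RK4 and check that
# V(x, y) = x^2/2 + y^2/2 + y^4/4 stays (nearly) constant.
def rhs(x, y):
    return -y - y**3, x

def rk4_step(x, y, h):
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + h/2*k1x, y + h/2*k1y)
    k3x, k3y = rhs(x + h/2*k2x, y + h/2*k2y)
    k4x, k4y = rhs(x + h*k3x, y + h*k3y)
    return (x + h/6*(k1x + 2*k2x + 2*k3x + k4x),
            y + h/6*(k1y + 2*k2y + 2*k3y + k4y))

def V(x, y):
    return 0.5*x**2 + 0.5*y**2 + 0.25*y**4

x, y = 1.0, 0.5          # assumed initial condition
v0 = V(x, y)
for _ in range(10000):   # integrate to t = 100 with h = 0.01
    x, y = rk4_step(x, y, 0.01)
print(abs(V(x, y) - v0))
```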
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Having trouble understanding summation identity I'm having trouble understanding how the right side of this summation is equal to the left side. Any information would be helpful! $$\sum_{k=0}^{n} {r \choose k}{s \choose n-k}={r+s \choose n}$$
This is known as Vandermonde's Identity. It says that for each way to choose $n$ items from a set of $r+s$ items, for some $k$ we must choose $k$ from the set of $r$ and $n-k$ from the set of $s$. This translates to $$ \binom{r+s}{n}=\sum_{k=0}^n\binom{r}{k}\binom{s}{n-k}\tag{1} $$ Another way to look at this is to look at $$ (1+x)^{r+s}=(1+x)^r(1+x)^s\tag{2} $$ Then, using the Binomial Theorem, we get $$ \begin{align} \sum_{n=0}^{r+s}\binom{r+s}{n}x^n &=\sum_{i=0}^r\binom{r}{i}x^i\sum_{j=0}^s\binom{s}{j}x^j\\ &=\sum_{n=0}^{r+s}\sum_{k=0}^n\binom{r}{k}\binom{s}{n-k}x^kx^{n-k}\\ &=\sum_{n=0}^{r+s}\sum_{k=0}^n\binom{r}{k}\binom{s}{n-k}x^n\tag{3} \end{align} $$ Comparing coefficients of $x^n$ in $(3)$, we get $(1)$.
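The identity is easy to spot-check numerically (this uses `math.comb`, available in Python 3.8+, which conveniently returns $0$ when $k>n$):

```python
from math import comb

def vandermonde_lhs(r, s, n):
    # sum over the split: k items from the r-set, n-k items from the s-set
    return sum(comb(r, k) * comb(s, n - k) for k in range(n + 1))

checks = [(vandermonde_lhs(r, s, n), comb(r + s, n))
          for r in range(6) for s in range(6) for n in range(r + s + 1)]
print(all(lhs == rhs for lhs, rhs in checks))
```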
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Simplifying $\frac { \sqrt2 + \sqrt 6}{\sqrt2 + \sqrt3}$? Is there anything else you can do to reduce it to something "nicer" other than multiplying it by $\dfrac {\sqrt3 - \sqrt2}{\sqrt3 - \sqrt2}$ and get $\sqrt 6 -2 + \sqrt {18} - \sqrt {12}$? The reason I think there's a nicer form is because the previous problem in the book was to simplify $\sqrt{3+ 2 \sqrt 2} - \sqrt{3 - 2 \sqrt 2}$, which simplifies nicely to $\sqrt{( \sqrt 2+1)^2} - \sqrt{ ( \sqrt 2 - 1)^2} = 2.$
Well, you could note that $$\sqrt6-2+\sqrt{18}-\sqrt{12}=-2+\sqrt6(1-\sqrt2+\sqrt3)$$ But beyond that, it doesn't look any better.
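A quick floating-point check (illustration only) that both the rationalized form and the factored form agree with the original expression:

```python
from math import sqrt, isclose

original = (sqrt(2) + sqrt(6)) / (sqrt(2) + sqrt(3))
rationalized = sqrt(6) - 2 + sqrt(18) - sqrt(12)
factored = -2 + sqrt(6) * (1 - sqrt(2) + sqrt(3))
print(original, rationalized, factored)
```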
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simple Dice Probability Question This problem is throwing me off because it seems extremely simple, but the answer given is not what I get. The book has the answer as 5/36, but wouldn't it be 6/36 which would reduce to 1/6? Because you could have the combinations 4/6, 5/5, 5/6, 6/4, 6/5, 6/6 which is six total combinations out of the total 36 possible ones... Am I missing something, or is this a misprint? What is the probability of rolling a pair of dice and getting a sum of 10 or more?
The textbook must be wrong then. There are $6$ combinations: $(4,6),\ (5,5),\ (5,6),\ (6,4),\ (6,5),\ (6,6)$, so the probability is $6/36 = 1/6$.
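Brute-force enumeration of the sample space confirms the count:

```python
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
favorable = [(a, b) for a, b in outcomes if a + b >= 10]
print(len(favorable), len(outcomes))  # 6 of 36, i.e. 1/6
```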
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
prove the integral function is continuous Let $f: [a,b]\times [c,d]\to \mathbb{R}, (x,\alpha)\to f(x,\alpha)$ is a 2 variable continuous function on $[a,b]\times [c,d]$, then $h(\alpha )=\int _a^b f(x, \alpha )dx$ is a continuous function on $[c,d]$. My attempt: I've tried to prove this using the definition of 2-var continuous (that is, for any $(x, \alpha)$, $\epsilon >0$, $\exists \delta$, all $(y, \beta)$ satisfies $\sqrt{(y-x)^2+(\beta-\alpha)^2}<\delta$ yields $|f(y, \beta)-f(x, \alpha)|<\epsilon$) but since $\epsilon$ depends on $x$ so I can't go directly from here.Thanks for any help.
Hint: since $[a,b]\times[c,d]$ is compact, $f$ is uniformly continuous on it, so the $\delta$ in your definition can be chosen independently of the point $(x,\alpha)$. Then, whenever $|\alpha-\beta|<\delta$, $$|h(\alpha)-h(\beta)|\le\int_a^b|f(x,\alpha)-f(x,\beta)|\,dx\le(b-a)\,\epsilon,$$ which is exactly the continuity of $h$ on $[c,d]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2060964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculate the derivative of $g(x)=x^2\cos(1/x^2)$. Does the derivative of this function exist at $x=0$? Calculate the derivative of $g(x)=x^2\cos(1/x^2)$. Does the derivative of this function exist at $x=0$? I calculated the derivative to be $2x\cos(1/x^2)+\frac{2\sin(1/x^2)}{x}$. I'm tempted to say that the derivative does not exist at zero because it is a vertical line, but I'm not sure this is the case.
$g'$ does not have a limit at $0$, but that does not mean $g$ is not differentiable at $0$: $$\frac{g(x)-g(0)}{x}=x\cos\frac{1}{x^2}\xrightarrow[x\to0]{}0$$ so $g'(0)=0$. All you can say is that $g'$ is not continuous at $0$.
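The difference quotient can also be seen shrinking numerically (illustration only): $|g(x)/x| = |x\cos(1/x^2)| \le |x|$, so it is squeezed to $0$.

```python
from math import cos

def g(x):
    # extend g continuously by g(0) = 0
    return x**2 * cos(1 / x**2) if x != 0 else 0.0

# the difference quotient (g(x) - g(0))/x is bounded by |x|
samples = [1e-1, 1e-3, 1e-5]
quotients = [abs(g(x) / x) for x in samples]
print(quotients)
```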
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\sqrt{3-x}-\sqrt{-x^2+8x+10}=1$ Consider the equation $$ \sqrt{3-x}-\sqrt{-x^2+8x+10}=1. $$ I have solved it in a dumb way by solving the equation of degree four. So, the only real solution is $x = -1$. Can you please suggest, maybe there are better or easier ways to solve it?
You did it in the right way. You will end up with an $x^4$ term, and you must solve from there to get $-1$. The above answer is right that it gives you a range of values, but if you want to solve directly, you must do it the way you have said above.
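A numeric check (illustration only) that $x=-1$ satisfies the equation, and that the left-hand side crosses $1$ only once on the domain $[4-\sqrt{26},\,3]$:

```python
def lhs(x):
    from math import sqrt
    return sqrt(3 - x) - sqrt(-x**2 + 8 * x + 10)

print(lhs(-1))  # 1.0
```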
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Random placement of rooks on a chessboard $8$ rooks are placed randomly on an $8\times 8$ chess board. What is the probability of having exactly one rook each row and each column? I guess there is no meaningful order here?
Hint. Eight (indistinguishable) rooks can be placed on an $8\times 8$ chess board in $\binom{64}{8}$ ways (we have to select $8$ positions among $8^2=64$). Having exactly one rook in each row and each column can be done in $8!$ ways (for each of the $8$ columns we choose a different row).
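The hint evaluates directly (requires Python 3.8+ for `math.comb`):

```python
from math import comb, factorial

total = comb(64, 8)        # placements of 8 indistinguishable rooks
favorable = factorial(8)   # exactly one rook per row and per column
probability = favorable / total
print(favorable, total, probability)
```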
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there a math symbol for $f(x) > g(x)$ except when $x=0$, in which case $f(x)=g(x)$? Title says it all, except that $x \in R^n$. For want of a better choice, I've specified the amsmath symbol $\gtrdot$ for this relation, but probably this actually means something quite different that google hasn't revealed?? If no such symbol exists, and my choice means something different, does anybody have an alternative suggestion? Thanks very much!
You could define a Boolean function: $$D^+(f, g) = \delta^{f(0)}_{g(0)}\,[\, f(x) > g(x)\ \forall x \neq 0 \,]$$ Presumably everyone reading your paper knows what the Kronecker delta and the Iverson bracket are. It's much easier to look for a prior definition of a function than it is to look for a prior use of a symbol, as you yourself have already experienced. For example, if $f(x) = 3x^2$ and $g(x) = \frac{x^2}{3}$, then $D^+(f, g) = 1$. But with the prime counting function $\pi_1(x)$ and the semiprime counting function $\pi_2(x)$, we'd have $D^+(\pi_1, \pi_2) = 0$ because we also have $\pi_1 = \pi_2$ for $-2 < x < 2$. Then, later on, to use Bob's example, you could write $D^+(\alpha, \beta) = 1$.
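A sampled, illustrative implementation of the proposed indicator (the quantifier over all real $x$ is of course not computable, so the finite grid `xs` here is an assumption, not part of the definition):

```python
def d_plus(f, g, xs):
    # 1 if f(0) == g(0) and f(x) > g(x) for every sampled x != 0, else 0
    return int(f(0) == g(0) and all(f(x) > g(x) for x in xs if x != 0))

xs = [i / 10 for i in range(-30, 31)]
print(d_plus(lambda x: 3 * x**2, lambda x: x**2 / 3, xs))  # 1
```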
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }