Borsuk–Ulam Theorem for $n=2$ Show that the Borsuk–Ulam Theorem for $n=2$ is equivalent to the following statement: for any cover $A_1, A_2,$ and $A_3$ of $S^2$ with each $A_i$ closed, there is at least one $A_i$ containing a pair of antipodal points. For one direction, the function $f:S^2\rightarrow \mathbb R^2$ where $f(x)=(d(x,A_1),d(x,A_2))$ is enough. For the other direction, I don't even know how to start the proof. Can you help me?
For any continuous function $f: S^2 \to \mathbb R^2$, consider the function $g(x) = f(-x) - f(x)$. Note that $g(x)$ is odd with respect to reflection through the origin. Now cover $\mathbb R^2$ by three closed sets $B_i$ such that the only pair of points related by reflection through the origin that is contained in each set is the origin (paired with itself). Then consider the inverse images of these closed sets sitting in $S^2$. You should be able to get the rest from there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/950654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that a chord can't be outside a circle How can I prove that a chord can't lie outside the circle itself? Is there a way to prove that you can't draw a chord outside the circle?
Let $P_1 = (x_1,y_1)$ and $P_2 = (x_2,y_2)$ be points on the unit circle, and let $P=(x,y)$ be a point on the chord $P_1P_2$, not equal to $P_1$ or $P_2$. So we can write $P=tP_1 + (1-t)P_2$ for some $t \in (0,1)$. So we have $x = tx_1 + (1-t)x_2$ and $y = ty_1 + (1-t)y_2$, giving $$\begin{align} x^2+y^2 & = t^2(x_1^2+y_1^2) + (1-t)^2(x_2^2+y_2^2) + 2t(1-t)(x_1x_2+y_1y_2) \\ & = t^2+(1-t)^2 + 2t(1-t)\,P_1\cdot P_2\\ & < t^2+(1-t)^2 + 2t(1-t) \;(\mathrm{because}\;2t(1-t) > 0 \;\mathrm{and}\;P_1\cdot P_2 < 1)\\ & = 1 \end{align}$$
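A hypothetical numerical spot check of this inequality (not part of the original answer): sample random chords of the unit circle and verify that interior chord points have squared norm strictly less than 1.

```python
import math
import random

random.seed(0)

def chord_point_radius2(a1, a2, t):
    """Squared distance from the origin of the point t*P1 + (1-t)*P2,
    where P1, P2 are the points on the unit circle at angles a1, a2."""
    p1 = (math.cos(a1), math.sin(a1))
    p2 = (math.cos(a2), math.sin(a2))
    x = t * p1[0] + (1 - t) * p2[0]
    y = t * p1[1] + (1 - t) * p2[1]
    return x * x + y * y

for _ in range(1000):
    a1 = random.uniform(0, 2 * math.pi)
    a2 = a1 + random.uniform(0.5, 5.5)   # keep the endpoints well separated
    t = random.uniform(0.01, 0.99)       # strictly between the endpoints
    assert chord_point_radius2(a1, a2, t) < 1.0
```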
{ "language": "en", "url": "https://math.stackexchange.com/questions/950750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to multiply and reduce ideals in quadratic number ring. I am studying quadratic number rings and I have a problem with multiplying and reducing ideals, for example: Let $w=\sqrt{-14}$. Let $a=(5+w,2+w)$, $b=(4+w,2-w)$ be ideals in $\mathbb Z[w]$. Now, allegedly, the product of ideals $a$ and $b$ in $\mathbb Z[w]$ is $(6,3w)$. Please explain clearly, how to get to $(6,3w)$.
You reduce an ideal by adding a multiple of one generator to another, with the goal of simplifying things (making the numbers smaller and removing appearances of $w$). First, reduce $a$ and $b$: $a = (5+w,2+w) = (3,2+w) \\ b = (4+w,2-w) = (6,2-w)$ Then, multiply every generator of $a$ with every generator of $b$, and remove the $w^2$ terms (using $w^2=-14$): $ab = (18, 6-3w,12+6w,4-w^2) = (18,6-3w,12+6w,18)$ Then reduce: $ab = (18,6-3w,12+6w) = (18,6-3w,24) = (18,6-3w,6) = (6,3w)$
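For anyone who wants to double-check this mechanically, here is a hypothetical Python sketch. It verifies that every product of a generator of $a$ with a generator of $b$ lies in $(6,3w)$; a small computation (assumed here) shows $(6,3w)$ consists exactly of the elements $x+yw$ with $6\mid x$ and $3\mid y$. The reverse inclusion is what the chain of reductions in the answer establishes.

```python
# Elements of Z[w], w = sqrt(-14), as pairs (x, y) meaning x + y*w.
def mul(p, q):
    """Multiply x1 + y1*w by x2 + y2*w, using w**2 = -14."""
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 - 14 * y1 * y2, x1 * y2 + y1 * x2)

def in_ideal_6_3w(p):
    # (6, 3w) consists exactly of the x + y*w with 6 | x and 3 | y.
    x, y = p
    return x % 6 == 0 and y % 3 == 0

a_gens = [(5, 1), (2, 1)]   # a = (5+w, 2+w)
b_gens = [(4, 1), (2, -1)]  # b = (4+w, 2-w)

products = [mul(g, h) for g in a_gens for h in b_gens]
print(products)   # [(6, 9), (24, -3), (-6, 6), (18, 0)]
assert all(in_ideal_6_3w(p) for p in products)
```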
{ "language": "en", "url": "https://math.stackexchange.com/questions/950869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
How to prove that a function is constant if $f\left( x \right) + a\int_{x - 1}^x {f\left( t \right)\,dt}$ is constant Suppose that $f(x)$ is a bounded continuous function on $\mathbb{R}$, and that there exists a positive number $a$ such that $$f\left( x \right) + a\int_{x - 1}^x {f\left( t \right)\,dt}$$ is constant. Can anybody show that $f$ is necessarily constant?
Below, we will see that the function $f$ is actually the restriction of an entire function to $\Bbb{R}$, i.e. the sum of a convergent power-series with infinite radius of convergence. Once this is shown, the (sadly rather downvoted) answer of @Leucippus actually becomes a valid argument. For this, let $K := \Vert f \Vert_\sup$. By assumption, $K < \infty$. We will show by induction on $n \in \Bbb{N}_0$ that $\Vert f^{(n)} \Vert_\sup \leq K \cdot |2a|^n$ for all $n \in \Bbb{N}_0$. For $n=0$ this is trivial. As also noted in the other posts, continuity of $f$ implies that $x \mapsto \int_{x-1}^x f(t) dt$ is continuously differentiable, which implies that $f$ is continuously differentiable with $f'(x) = -a \cdot (f(x) - f(x-1))$. By induction, $$f^{(n)}(x) = -a \cdot [f^{(n-1)}(x) - f^{(n-1)}(x-1)].$$ Indeed, for $n=1$, this is what we just noted. In the induction step, we get $$ f^{(n+1)}(x) = \frac{d}{dx} (-a) \cdot [f^{(n-1)}(x) - f^{(n-1)}(x-1)] = (-a) \cdot [f^{(n)}(x) - f^{(n)}(x-1)]. $$ By induction hypothesis, this yields $$ |f^{(n+1)}(x)| \leq |a| \cdot [|f^{(n)}(x)| + |f^{(n)}(x-1)|] \leq 2|a| \cdot \Vert f^{(n)} \Vert_\sup \leq K \cdot |2a|^{n+1}. $$ But this implies $$ \sum_{n=0}^\infty \left|\frac{f^{(n)}(0) \cdot x^n}{n!} \right| \leq K \cdot \sum_{n=0}^\infty \frac{(2|ax|)^n}{n!} = K \cdot \exp(2|ax|) < \infty, $$ so that the power series $$g(x) := \sum_{n=0}^\infty \frac{f^{(n)}(0) \cdot x^n}{n!}$$ around $0$ of $f$ converges absolutely on all of $\Bbb{R}$. It remains to show $f = g$. To this end, note that the Lagrange form of the remainder for Taylor's formula (see http://en.wikipedia.org/wiki/Taylor's_theorem#Explicit_formulae_for_the_remainder) yields $$ |f(x) - g(x)| \xleftarrow[k \to \infty]{} \left| f(x) - \sum_{n=0}^k \frac{f^{(n)}(0) x^n}{n!} \right| = \left| \frac{f^{(k+1)}(\xi_L)}{(k+1)!} \cdot |x|^{k+1}\right| \leq K \cdot \frac{(2|xa|)^{k+1}}{(k+1)!} \xrightarrow[k\to\infty]{} 0, $$ because even $\sum_k \frac{(2|xa|)^{k+1}}{(k+1)!} \leq \exp(2|xa|) < \infty$. 
Hence, $f = g$ is the restriction of an entire function to $\Bbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/950961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 2 }
Need to prove that the equation has only 1 solution. I have been trying to solve the following equation: $5^x+7^x=12^x.$ Obviously, $x=1$ is a solution, but how do I prove that there are no other solutions?
We'll show it has no solutions for $x > 1$; hopefully you can use the idea to deal with the other case. Your observation (that $1$ is a solution) will be crucial. Write $x = 1 + y$ with $y > 0$; then $$ 5^x + 7^x = 5 \cdot 5^y + 7 \cdot 7^y < (5+7)\cdot 7^y < 12 \cdot 12^y = 12^x.$$ The inequalities are strict because $y > 0$.
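A quick numerical sketch (hypothetical, just a sanity check) of the sign pattern this argument predicts: $5^x+7^x-12^x$ is zero at $x=1$, negative for $x>1$, and positive for $x<1$.

```python
def f(x):
    """Sign of f(x) = 5^x + 7^x - 12^x decides where the equation can hold."""
    return 5 ** x + 7 ** x - 12 ** x

assert f(1) == 0
# For x > 1 the answer's chain of inequalities gives 5^x + 7^x < 12^x;
# an analogous argument with reversed inequalities handles x < 1.
assert all(f(1 + y) < 0 for y in [0.01, 0.5, 1, 3, 10])
assert all(f(1 - y) > 0 for y in [0.01, 0.5, 1, 3, 10])
```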
{ "language": "en", "url": "https://math.stackexchange.com/questions/951088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convergence of $\sum \frac{(2n)!}{n!n!}\frac{1}{4^n}$ Does the series $$\sum \frac{(2n)!}{n!n!}\frac{1}{4^n}$$ converge? My attempt: since the ratio test is inconclusive, my idea is to use the Stirling approximation for $n!$: $$\frac{(2n)!}{n!n!4^n} \sim \frac{1}{4^n} \frac{\sqrt{4\pi n}(\frac{2n}{e})^{2n}}{\sqrt{2 n \pi} \sqrt{2n \pi} (\frac{n}{e})^{2n}} =\frac{2^{2n}}{4^n \sqrt{n \pi}} = \frac{1}{\sqrt{n \pi}}$$ The series of the second term diverges. Is it correct to conclude that the series diverges? Other ideas are welcome! Thanks
A much simpler way: $$ \frac{a_{n+1}}{a_n}=\frac{(2n+2)(2n+1)}{4(n+1)(n+1)}=\frac{2n+1}{2n+2}\ge \sqrt{\frac{n}{n+1}}, $$ since $$ \left(\frac{2n+1}{2n+2}\right)^2=\left(1-\frac{1}{2n+2}\right)^2\ge 1-\frac{2}{2n+2} =\frac{n}{n+1}, $$ and hence $$ a_n=a_1\prod_{k=2}^n\frac{a_{k}}{a_{k-1}}\ge a_1\prod_{k=2}^n\sqrt{\frac{k-1}{k}} =\frac{a_1}{\sqrt{n}}, $$ and hence $$ \sum_{n=1}^\infty a_n=\infty. $$ Note that $$ \frac{a_{n+1}}{a_n}=\frac{(2n+2)(2n+1)}{4(n+1)(n+1)}=\frac{2n+1}{2n+2}=1-\frac{1}{2n+2}=1-\frac{1}{2n}+{\mathcal O}\left(\frac{1}{n^2}\right). $$ The divergence also follows from Gauss's test (or the closely related Raabe's test), since $n\left(1-\frac{a_{n+1}}{a_n}\right)\to\frac12<1$.
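A hypothetical numerical check of the lower bound $a_n \ge a_1/\sqrt n$ (here $a_1 = 1/2$), and of the OP's Stirling estimate $a_n \sim 1/\sqrt{\pi n}$:

```python
import math

def a(n):
    """a_n = (2n)! / (n! n! 4^n), computed exactly via math.comb."""
    return math.comb(2 * n, n) / 4 ** n

# The answer's lower bound a_n >= a_1 / sqrt(n), with a_1 = 1/2:
assert all(a(n) >= 0.5 / math.sqrt(n) for n in range(1, 500))

# The Stirling estimate a_n ~ 1/sqrt(pi n), tracked in floats with the
# recursion a_{n+1} = a_n * (2n+1) / (2n+2) from the answer:
an = 0.5
for n in range(1, 10 ** 5):
    an *= (2 * n + 1) / (2 * n + 2)
assert abs(an * math.sqrt(math.pi * 10 ** 5) - 1) < 1e-3
```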
{ "language": "en", "url": "https://math.stackexchange.com/questions/951171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
help with strange Double Integral: $\iint_E {x\sin(y) \over y}\ \rm{dx\ dy}$ I'm having trouble with this double integral: $$ \iint_E {x\sin(y) \over y}\ \rm{dx\ dy},\ \ \ \ E=\Big\{(x,y) \in \mathbb{R^2} \mid 0<y\le x\ \ \ \land\ \ \ x^2+y^2 \le \pi y\Big\} $$ I've tried using polar coordinates, but after I made the domain normal I realized that the integrand got a lot more complicated. Then I tried another transform, $u=y/x, v=y$, with even worse results. I'm looking mainly for a tip on how to tackle this, and I'd also like to know the reasoning behind the tip. Thanks in advance!
$$ \iint_E {x\sin(y) \over y}\ \rm{dx\ dy},\ \ \ \ E=\Big\{(x,y) \in \mathbb{R^2} \mid 0<y\le x\ \ \ \land\ \ \ x^2+y^2 \le \pi y\Big\}\\ \\ \begin{cases} x^2+y^2\le\pi y \iff x^2\le\pi y -y^2 \iff \vert x\vert \le \sqrt{\pi y -y^2}\\ 0<y\le x \end{cases} \implies \\ \implies 0<y\le x \le \sqrt{\pi y -y^2} \implies \begin{cases} \sqrt{\pi y -y^2} \ge y \\ \pi y -y^2\ge 0\\ y >0 \end{cases} \iff 0 < y \le \pi/2 $$ so now we have: $$ 0 < y \le \pi/2 \ \ \ \land \ \ \ y\le x \le \sqrt{\pi y -y^2} $$ which gives us: $$ \iint_E {x\sin(y) \over y}\ \rm{dx\ dy} = \int_{0}^{\pi/2} {\sin{y} \over y} \ \Big[ \int_{y}^{\sqrt{\pi y -y^2}} x\ \rm{dx} \Big] \rm{dy} = \cdots = {\pi-2 \over 2} $$ You should be able to solve the single-variable integrals yourself; also keep in mind that the integral is improper as $y$ approaches $0$.
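After the inner integration, the remaining integrand is $\frac{\sin y}{y}\cdot\frac{(\pi y - y^2) - y^2}{2} = \frac{\sin(y)(\pi-2y)}{2}$, which is perfectly tame at $y=0$. A hypothetical numerical check of the stated value $(\pi-2)/2$ using Simpson's rule:

```python
import math

def integrand(y):
    # (sin y / y) * ((pi*y - y**2) - y**2) / 2 simplifies to this,
    # removing the apparent singularity at y = 0.
    return math.sin(y) * (math.pi - 2 * y) / 2

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum(4 * f(a + i * h) for i in range(1, n, 2))
    s += sum(2 * f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

val = simpson(integrand, 0.0, math.pi / 2)
assert abs(val - (math.pi - 2) / 2) < 1e-8
```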
{ "language": "en", "url": "https://math.stackexchange.com/questions/951256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Given 2 colors, how to calculate the mix amount between them? As an input I have two colors, let's say red (RGB = 1,0,0) and magenta (RGB = 1,0,1). Now I have an image which includes additive mixes between these two colors, for example purple (RGB = 0.5,0,1). I want to calculate the mix amount between these two colors, where 0 is 100% the first color (red) and 1 is 100% the second color (magenta). In the example with purple, the mix would be 0.5. Of course the input colors may be arbitrarily complicated, but it is always ensured that they are indeed mixable. I know how to calculate this for grayscale colors, but not for arbitrary input colors. A visualized input and output would look like this, when the output is used as the alpha channel for the second color:
Things became much easier for me when I thought of the RGB components as XYZ coordinates of a 3-dimensional point; the distances between those points are the key to success! To get the mix amount between two colors, you need some basic vector math:

* We have the background color A and foreground color B. C is the mixed color. All 3 have R, G and B components.
* Compute the squared distance between A and B, using the formula for the squared distance between two 3-dimensional vectors: $$ \left \| AB \right \|^2 = (A_{r}-B_{r})^2+(A_{g}-B_{g})^2+(A_{b}-B_{b})^2 $$
* Compute the squared distance between A and C (the mixed color). For C it is ensured that it is a point on the segment between A and B (and if it isn't, we don't care, as the provided A and B are wrong and it's not our fault): $$ \left \| AC \right \|^2 = (A_{r}-C_{r})^2+(A_{g}-C_{g})^2+(A_{b}-C_{b})^2 $$
* Divide the squared distance from A to B by the squared distance from A to C. Taking the square root of the result gives the "alpha denominator" (you'll see shortly why): $$ d = \sqrt{\frac{\left \| AB \right \|^2}{\left \| AC \right \|^2}} $$
* The alpha value is then 1 divided by the denominator: $$ a = \frac{1}{d} $$

The resulting $a$ is exactly 0 if the color is completely the background color A, and 1 if the color is completely the foreground color B. Anything in between is background color A + foreground color B × $a$. I'm sorry if any of my math formatting is incorrect; I'm new to LaTeX and not really a mathematician :)
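A hypothetical Python translation of the steps above. Note that $a = 1/d = \lVert AC\rVert / \lVert AB\rVert$; since the OP's purple example is not exactly on the red-to-magenta segment, the midpoint $(1, 0, 0.5)$ of that segment is used here as the 50% example.

```python
import math

def mix_amount(a, b, c):
    """Fraction t in [0, 1] with c ≈ a + t*(b - a), assuming c lies on the
    segment between colors a and b (RGB triples, a != b)."""
    ab2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))   # ||AB||^2
    ac2 = sum((ai - ci) ** 2 for ai, ci in zip(a, c))   # ||AC||^2
    return math.sqrt(ac2 / ab2)                          # = 1/d in the answer

red = (1.0, 0.0, 0.0)
magenta = (1.0, 0.0, 1.0)
halfway = (1.0, 0.0, 0.5)   # midpoint of the red-to-magenta segment

assert mix_amount(red, magenta, red) == 0.0
assert abs(mix_amount(red, magenta, halfway) - 0.5) < 1e-12
assert mix_amount(red, magenta, magenta) == 1.0
```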
{ "language": "en", "url": "https://math.stackexchange.com/questions/951395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show independence of number of heads and tails I am independently studying Larry Wasserman's "All of Statistics". Chapter 2, exercise 11 is this: Let $N \sim \mathrm{Poisson}(\lambda)$ and suppose we toss a coin $N$ times. Let $X$ and $Y$ be the number of heads and tails. Show that $X$ and $Y$ are independent. First, I don't understand the significance of the Poisson distribution of $N$ here. Secondly, I am guessing that I'm supposed to show: $$f_{X,Y}(x,y) = f_X(x)f_Y(y)$$ I know that $f_X(x) = {n \choose x} p^x(1-p)^{n-x}$ because the number of heads would follow a binomial distribution. I don't understand how to express $f_Y(y)$ or $f_{X,Y}(x,y)$.
The problem can also be solved without the use of expectations, which at this point of the book (chapter 2 [1]) are yet to be introduced (they only appear in chapter 3 [1]). As you correctly identified, we are going to be seeking $f_{X}(x)$, $f_{Y}(y)$ and $f_{X, Y}(x, y)$, to show that $$f_{X}(x)f_{Y}(y) = f_{X, Y}(x, y)$$ for all $x$, $y$. Should there be any doubt as to why this is equivalent to showing that $X$ and $Y$ are independent, one can consult theorem 2.30, p. 35 in [1]. From the definition of the probability mass function, $f_X(x) = \Pr(X = x)$, $f_Y(y) = \Pr(Y = y)$ and $f_{X,Y}(x, y) = \Pr(X=x, Y=y)$, so we can focus our attention on finding these three probabilities, to establish: $$\Pr(X = x)\Pr(Y = y) = \Pr(X=x, Y=y)$$ We are going to rely on a few key tricks:

1. The law of total probability (Theorem 1.16, p. 12 [1]): $$P(B)=\sum_{i=1}^{k}\Pr(B \mid A_{i})P(A_{i})$$ where $A_{1}, \ldots A_{k}$ is a partition of the sample space, and $B$ is any event.
2. The well-known series $\sum_{k=0}^{k=+\infty}\frac{x^k}{k!}=e^{x}$.
3. $\Pr(X = x, N = n) = \Pr(X = x, Y = n - x)$, which comes from the fact that $X + Y = N$.
4. Not really a trick, but from the definition of conditional probability, $\Pr(A \mid B)\Pr(B) = \Pr(AB)$.

Using trick 1 we can write down: $$\Pr(X = x) = \Pr(X = x \mid N < x)\Pr(N < x) + \sum_{k=0}^{k=+\infty}\Pr(X=x \mid N = x + k)\Pr(N = x + k)$$ The first part of the sum is 0, since $N$ can never be less than $x$. Let us compute the second part. 
Using the definitions of Binomial and Poisson distributions, we can write the following: $$\Pr(X=x \mid N = x + k) = {x + k \choose x} p^x(1-p)^k$$ $$\Pr(N=x + k) = e^{-λ} \frac {λ^{x + k}}{(x + k)!}$$ This, after some simplifications, gives us $$e^{-λ}\frac{λ^x}{x!}p^x\sum_{k = 0}^{k = +\infty}\frac{(λ(1-p))^{k}}{k!}$$ The time has come to apply trick 2, which yields: $$\Pr(X = x) = p^{x}\frac{λ^{x}}{x!}e^{-λp}$$ By symmetry, we can argue that: $$\Pr(Y = y) = (1-p)^{y}\frac{λ^{y}}{y!}e^{-λ(1-p)}$$ We're half-way done, but the "crux" of the problem lies in the computation of $\Pr(X = x, Y = y)$. We can apply trick 3. $$\Pr(X = x, Y = n - x) = \Pr(X = x, N = n)$$ To compute the right side, we can use trick 4. The left side is almost what we desire, except we will have to substitute $y = n - x$ on both sides after we're done with all the computations. So $$\Pr(X = x, Y = n - x) = \Pr(X = x \mid N = n)\Pr(N = n)$$ $$\Pr(X = x, Y = n - x) = \mathrm{Binomial}(n, p)(x)\mathrm{Poisson}(λ)(n)$$ After some computation we get $$\Pr(X = x, Y = n - x) = \frac{p^{x}(1-p)^{n-x}e^{-λ}λ^{x + n-x}}{x!(n-x)!}$$ Applying the substitution, we have $$\Pr(X = x, Y = y) = \frac{p^{x}(1-p)^{y}e^{-λ}λ^{x + y}}{x!y!}$$ Which, combined with our previous result, gives $\Pr(X = x)\Pr(Y = y) = \Pr(X=x, Y=y)$, as required. As other answers have pointed out, this is vaguely amusing, since having $N$ fixed makes $X$ and $Y$ as dependent as variables can possibly be: knowing $X$ determines $Y$ exactly, but somehow, someway setting $N \sim \mathrm{Poisson}(λ)$ adds just about enough randomness to $N$ that learning $X$ doesn't tell us anything about $Y$ that we wouldn't have known before we learned $X$ ($X$ and $Y$ are independent). The bizarre interplay between $\mathrm{Poisson}$ and $\mathrm{Binomial}$ continues, for additional practice see for example a very similar exercise 2.16, p. 45 [1] [1] Larry Wasserman, All of Statistics, 2004, corrected second printing, ISBN 0-387-40272-1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/951512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Showing that $\text{ord}(a+b) = \text{min}(\text{ord}(a),\text{ord}(b))$ in a DVR This is Problem 2.29 from Fulton's Algebraic Curves. First a bit of background, because I don't know how standard his terminology is. For a discrete valuation ring $R$ with maximal ideal $\mathfrak{m} = \langle t \rangle$, any element $z$ of $K := \text{Frac}(R)$ can be written uniquely as $z = u t^n$ where $u \in R^*$ and $n \in \mathbb{Z}$. In this case, we define the order of $z$ to be $\text{ord}(z) =n$. Let $R$ be a discrete valuation ring with quotient field $K$. If $a,b \in K$ and $\text{ord}(a) < \text{ord}(b)$, show that $\text{ord}(a+b) = \text{ord}(a)$. So far, I've got the following. Suppose $a = ut^m, \; b = vt^n$, where $m < n$. Then $$a+b = (u + v t^{n-m})t^m$$ and since the exercise tells me that the order of $a+b$ should be $m$, I'm inclined to believe that it remains for me to show that $u + v t^{n-m}$ is a unit in $R$. However, I can't seem to find a combination of $u,v,u^{-1},v^{-1}$ and $t$ that is inverse to $u + v t^{n-m}$. Am I attacking this problem incorrectly?
$u + v t^{n-m}\notin\mathfrak m$ (it is the unit $u$ plus the element $v t^{n-m}\in\mathfrak m$, since $n-m>0$), so it is invertible: in a local ring, every element outside the maximal ideal is a unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/951619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $xH=yH$, then $xy^{-1} \in H$. Use a counterexample to disprove the following statements:

1. If $xH=yH$, then $xy^{-1} \in H$.
2. If $xy^{-1} \in H$, then $xH=yH$.

I was thinking for the first statement: $$xH=yH \rightarrow y^{-1}x \in H$$ But we do not know whether the group is commutative. Is this correct? Would it be the same for the second statement?
I don't know any group theory. Here's how I figured out a counterexample to your first statement, just knowing the basic definitions, and following my nose. First, I figured out that $xH=yH$ means $y^{-1}x\in H$. So I'm looking for an example where $y^{-1}x\in H$ while $xy^{-1}\notin H$. So $x$ and $y$ should not commute. Trying to keep things simple, I took $y=y^{-1}=(1\ 2)$. Next I needed a permutation that doesn't commute with $(1\ 2)$, so I took $x=(1\ 3)$ and hoped for the best. [. . .] That didn't pan out, so next I tried $x=(1\ 2\ 3)$. I found that $y^{-1}x=(1\ 2)(1\ 2\ 3)=(2\ 3)$ and $xy^{-1}=(1\ 2\ 3)(1\ 2)=(1\ 3)$. Now all I needed was a group $H$ containing $(2\ 3)$ but not $(1\ 3)$. For that I took the smallest group containing $(2\ 3)$, so $H=\langle(2\ 3)\rangle=\{(1),(2\ 3)\}$. So now I had my counterexample: $H=\{(1),(2\ 3)\}$, $x=(1\ 2\ 3)$, $y=(1\ 2)$. Finally, I double-checked: $xH=(1\ 2\ 3)\{(1),(2\ 3)\}=\{(1\ 2\ 3),(1\ 2)\}$, and $yH=(1\ 2)\{(1),(2\ 3)\}=\{(1\ 2),(1\ 2\ 3)\}=\{(1\ 2\ 3),(1\ 2)\}=xH$, and $xy^{-1}=(1\ 3)\notin H$, so everything is fine! As for the second statement, I'm too lazy to start all over again, so I'll try using the same $x$ and $y$ as before, with a new $H$. I want $xy^{-1}\in H$ while $xH\ne yH$. Since $xy^{-1}=(1\ 3)$, I guess I'll take $H=\langle(1\ 3)\rangle=\{(1),(1\ 3)\}$. Is $xH=yH$? Well, $x\in xH$; is $x\in yH$? Let's see, $yH=(1\ 2)\{(1),(1\ 3)\}=\{(1\ 2),(1\ 3\ 2)\}$, so $x\notin yH$ and $xH\ne yH$. Well, that's how I did it. Doesn't seem all that hard. Where did you get stuck?
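For anyone who wants to check these computations by machine, here is a hypothetical Python sketch using the same right-to-left composition convention as the answer (so $fg$ means "apply $g$ first, then $f$"):

```python
# Permutations of {1, 2, 3} as dicts; compose(f, g) applies g first, then f.
def compose(f, g):
    return {i: f[g[i]] for i in (1, 2, 3)}

def inverse(f):
    return {v: k for k, v in f.items()}

e    = {1: 1, 2: 2, 3: 3}   # identity (1)
t12  = {1: 2, 2: 1, 3: 3}   # (1 2)
t13  = {1: 3, 2: 2, 3: 1}   # (1 3)
t23  = {1: 1, 2: 3, 3: 2}   # (2 3)
c123 = {1: 2, 2: 3, 3: 1}   # (1 2 3)

def coset(g, H):
    """Left coset gH as a canonically sorted list."""
    return sorted(tuple(sorted(compose(g, h).items())) for h in H)

x, y = c123, t12

# First statement fails: xH = yH, yet x y^{-1} = (1 3) is not in H.
H1 = [e, t23]
assert coset(x, H1) == coset(y, H1)
assert compose(x, inverse(y)) == t13
assert t13 not in H1

# Second statement fails: x y^{-1} is in H, yet xH != yH.
H2 = [e, t13]
assert compose(x, inverse(y)) in H2
assert coset(x, H2) != coset(y, H2)
```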
{ "language": "en", "url": "https://math.stackexchange.com/questions/951672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Predicate logic proof problem Where the domain of the variables is the real numbers, determine the truth value of the following: $$ \forall x \exists y(y^2-x<200) $$ I don't understand how to formally prove this. Since $y^2\geq 0$, it would stand to reason that some $x< 0$ could disprove this statement. I tried to prove it true by using one case where $x=3$ and $y=3$, so $9-3<200$, which makes it true. How should I tackle a problem like this? Thanks
One way to understand the statement $$ \forall x \exists y(y^2-x<200) $$ is to think of it as a game. One player takes the $\forall$ quantifiers and one takes the $\exists$ quantifiers. The $\forall$ player is trying to make the statement at the end (the $y^2-x<200$) false, and the $\exists$ player is trying to make it true. Each quantifier, $\exists$ or $\forall$, is one move in the game. In this game, the $\forall$ player moves first, and picks a value for $x$. Then the $\exists$ player moves second, and picks a value for $y$. The entire statement, including the quantifiers, is true if the $\exists$ player can always make $y^2-x < 200$. Suppose the $\forall$ player picks $x=17$. Then the $\exists$ player can win by picking $y=10$, since $10^2 - 17 < 200$. Suppose the $\forall$ player picks $x=2$. Can the $\exists$ player pick a $y$ that still makes $y^2-x < 200$? Can the $\forall$ player make a move that leaves the $\exists$ player without a winning reply? That's the question you are supposed to answer. Here's my favorite example: The statement $$\forall x\exists y (\text{$y$ is the mother of $x$})$$ is true, because whoever $\forall$ picks in move 1, $\exists$ can win by picking that person's mother. Suppose $\forall$ picks $x$ equal to Angela Merkel; then $\exists$ wins by picking $y$ equal to Angela Merkel's mother. But the statement $$\exists y\forall x (\text{$y$ is the mother of $x$})$$ is false, because now $\exists$ must go first, and she has no winning move. Suppose she picks $y$ equal to Angela Merkel's mother. Then $\forall$ can win by picking $x$ equal to someone who is not Angela Merkel or one of Merkel's siblings; say George W. Bush. Of course if $\exists$ picks Barbara Bush in her move, $\forall$ must not pick George W. Bush; he should pick someone else, like say Sisqo. You asked in comments for an example with an implication. Let's consider $$\forall x\exists y (\text{$x$ is even} \to \text {xy = 2}).$$ The $\forall$ player moves first. 
If he picks an odd number for $x$ he loses immediately, because then regardless of what $\exists$ does, the implication $\text{$x$ is even} \to xy=2$ is vacuously true (the antecedent is false, so the entire implication is true), and the $\forall$ player loses when the final statement is true. So if $\forall$ wants to win he had better pick an even number! Suppose he picks $x=2$. Then $\exists$ can answer with $y=1$. And suppose he picks $x=4$; then $\exists$ can answer with $\frac12$ (if that is allowed; it may be clear from context, or it may be stated explicitly what moves are allowed.) But if $\forall$ picks $x=0$, he wins no matter what $\exists$ does, and so the quantified statement is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/951780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Throw X dice, sum faces $1,2,3,4$ minus Y, then add $5,6$ formula I am trying to program something to find the probability of every possible sum when I throw $X$ dice: sum all the faces showing $1,2,3,4$, subtract $Y$ from that sum (with a minimum of $0$, no negatives), then add the sum of the faces showing $5,6$. Example: 3 dice, minus 7: $1,3,6$ would give $6$; $4,2,3$ would give $2$; $3,5,6$ would give $11$. I need a simple formula to get the probability of all possible results. No fancy symbols please (or else explain what they mean; I am not good with statistics terms and even less with English statistics terms :)) Thanks.
The simplest formula is a generating function, as I showed you yesterday. A generating function is a polynomial that can represent many things, but essentially the coefficients of the polynomial represent the frequency of the value of its exponent. For example, to represent a simple die that goes from 1 to 6, the generating function will be $$f(x)=\sum_{k=1}^{6}x^k=x+x^2+x^3+x^4+x^5+x^6$$ For a 10-sided die with sides $s=\{0,0,0,1,1,1,2,2,3,4\}$, its generating function will be $$f(x)=3x^0+3x^1+2x^2+x^3+x^4$$ Now that you know how to set up generating functions for dice, you can multiply polynomials to get the frequencies of the sums of throwing any number of dice (identical or different). If you impose some special rule in a game, you can multiply just some powers of a die and not others. For example, throwing 2 standard dice of 6 sides: $$f(x)=(x+x^2+x^3+x^4+x^5+x^6)^2=\\=x^{12}+2 x^{11}+3 x^{10}+4 x^9+5 x^8+6 x^7+5 x^6+4 x^5+3 x^4+2 x^3+x^2$$ Your question is really far from clear, but I think I understand. This formula $$f(x)=x^{-Y}\sum_{k=1}^{X}(x+x^2+x^3+x^4)^k$$ gives all the possible sums of the sides $\{1,2,3,4\}$ of X dice, after subtracting the value Y. So we need to discard from this polynomial the elements with negative exponents. I don't know a clean way to do this in mathematics, but you can program it. A purely mathematical approach to the frequencies of these sums, just for positive sums, needs the following formula. Sorry, fancy symbols for you, man :). We have that the frequency for a sum $S\in \{1,..,4k\}$ will be $$\nu(S,k,4)=\sum_{j=0}^{\lfloor(S-k)/4\rfloor}(-1)^j\binom{k}{j}\binom{S-4j-1}{k-1}$$ (Here $\lfloor x\rfloor$ is the floor function and $\binom{n}{k}$ is the binomial coefficient. You need to know some "fancy symbols", sorry. And if the upper limit of a summation is less than its starting point, you get an empty sum.) 
From here, our polynomial with just positive exponents will be $$f^*(x)=x^{-Y}\sum_{k=1}^{X}\sum_{S=1}^{4k}\nu(S,k,4)\cdot x^S\cdot\delta_{|S{-}Y|,S{-}Y}$$ where $\delta$ is the Kronecker delta, which discards all terms with $S-Y<0$. This $f^*(x)$ isn't the complete distribution of sums, because we need to add all the combinations with the 5s and 6s. This is done by completing the above formula with the other dice that complete the throw and show the values 5 and 6: $$f(x)=\sum_{k=1}^{X}\left((x^5+x^6)^{X-k}\left(x^{-Y}\sum_{S=1}^{4k}\nu(S,k,4)\cdot x^S\cdot\delta_{|S{-}Y|,S{-}Y}\right)^{\delta_{|4k{-}Y|{,}4k{-}Y}}\right)$$ The Kronecker delta as an exponent prevents the zero when $4k-Y\leq 0$, using the standard convention that $0^0=1$ (not to be confused with the limit form $0^0$, which is indeterminate). I don't know a simpler way to express this in mathematical terms. In programming languages it is easy to set up any of these tasks using conditions and loops. I tried not to use any "fancy symbols", but I think a summation, a binomial coefficient, the floor function and the Kronecker delta are simple enough to understand with basic arithmetic knowledge.
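Since the OP ultimately wants to program this, here is a hypothetical brute-force sketch in Python: instead of the generating-function formula, it enumerates all $6^X$ outcomes directly (fine for small $X$) and builds the exact distribution, checking the OP's three worked examples along the way.

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def score(roll, Y):
    """Sum faces 1-4, subtract Y (clamped at 0), then add faces 5-6."""
    low = sum(f for f in roll if f <= 4)
    high = sum(f for f in roll if f >= 5)
    return max(low - Y, 0) + high

def distribution(X, Y):
    """Exact probability of each possible score for X six-sided dice."""
    counts = Counter(score(roll, Y) for roll in product(range(1, 7), repeat=X))
    total = 6 ** X
    return {s: Fraction(c, total) for s, c in sorted(counts.items())}

# The OP's three worked examples with 3 dice and Y = 7:
assert score((1, 3, 6), 7) == 6
assert score((4, 2, 3), 7) == 2
assert score((3, 5, 6), 7) == 11

dist = distribution(3, 7)
assert sum(dist.values()) == 1   # probabilities sum to 1 exactly
```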
{ "language": "en", "url": "https://math.stackexchange.com/questions/951902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given $a+b+c=4$ find $\max(ab+ac+bc)$ $a+b+c = 4$. What is the maximum value of $ab+ac+bc$? Could this be solved by a simple application of Jensen's inequality? If so, I am unsure what to choose for $f(x)$. If $ab+ac+bc$ is treated as a function of $a$ there seems no easy way to express $bc$ in terms of $a$. EDIT: The context of the question is maximising the surface area of a rectangular prism. Also I might have misinterpreted the question, because it says "the sum of the length of the edges (side lengths are a,b,c) is 4", and gives the options $\frac1{3}, \frac{2}{3}, 1, \frac{4}{3}$. Otherwise, how would this be done?
Jensen's Inequality gives $$ \left[\frac13(a+b+c)\right]^2\le\frac13(a^2+b^2+c^2) $$ and we know that $$ \begin{align} ab+bc+ca &=\frac12\left[(a+b+c)^2-(a^2+b^2+c^2)\right]\\ &\le\frac12\left[(a+b+c)^2-\frac13(a+b+c)^2\right]\\ &=\frac13(a+b+c)^2 \end{align} $$ and equality can be achieved when $a=b=c$. Therefore, if $a+b+c=4$, the maximum of $ab+bc+ca$ is $\frac{16}{3}$ which can be achieved if $a=b=c=\frac43$.
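A hypothetical numerical sanity check that random points on the plane $a+b+c=4$ never beat the symmetric point $a=b=c=\frac43$:

```python
import random

random.seed(1)

def s(a, b, c):
    return a * b + b * c + c * a

best = s(4 / 3, 4 / 3, 4 / 3)
assert abs(best - 16 / 3) < 1e-12

# Random points with a + b + c = 4 never exceed the claimed maximum 16/3:
for _ in range(10000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    c = 4 - a - b
    assert s(a, b, c) <= best + 1e-9
```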
{ "language": "en", "url": "https://math.stackexchange.com/questions/951997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Differential equation multivariable solution I don't understand how I would solve the following problem: Where does the $F(t,y) = -5$ come from? I tried solving it normally; do I need to create a multivariable function that satisfies $F(t,y) = -5$? Guidance would be appreciated, thanks.
This DE is separable. Rewrite as $$(9y^2-30e^{6y})\,dy=\frac{25t^4}{1+t^{10}}\,dt$$ and integrate. On the left, we get $3y^3-5e^{6y}$. On the right, make the substitution $t^5=u$. After a short while, we arrive at $5\arctan(t^5)+C$. Use the initial condition to find $C$. Since $y=0$ when $t=0$, we get $C=-5$. At this point, we more or less have to stop, for we cannot solve explicitly for $y$ in terms of $t$. (We can solve explicitly for $t$ in terms of $y$.) So we end up with the equation $F(y,t)=-5$, where $$F(y,t)=3y^3-5e^{6y}-5\arctan(t^5).$$
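The problem statement itself (an image) did not survive extraction, but the separated form above pins the ODE down as $dy/dt = \frac{25t^4}{(1+t^{10})(9y^2-30e^{6y})}$ with $y(0)=0$, which is an assumption this sketch relies on. A hypothetical numerical check that $F(y,t)$ stays at $-5$ along the solution:

```python
import math

def F(y, t):
    return 3 * y ** 3 - 5 * math.exp(6 * y) - 5 * math.atan(t ** 5)

# The initial condition y(0) = 0 pins down the constant:
assert F(0.0, 0.0) == -5.0

def dydt(t, y):
    # From the separated form; the factor 9y^2 - 30 e^{6y} stays negative.
    return 25 * t ** 4 / ((1 + t ** 10) * (9 * y ** 2 - 30 * math.exp(6 * y)))

# Integrate with classical RK4 up to t = 1 and check F(y, t) stays at -5.
t, y, h = 0.0, 0.0, 1e-3
for _ in range(1000):
    k1 = dydt(t, y)
    k2 = dydt(t + h / 2, y + h * k1 / 2)
    k3 = dydt(t + h / 2, y + h * k2 / 2)
    k4 = dydt(t + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += h
assert abs(F(y, t) + 5) < 1e-9
```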
{ "language": "en", "url": "https://math.stackexchange.com/questions/952104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
eigenvalues of the sum of a diagonal matrix and a skew-symmetric matrix Suppose $A$ is a skew-symmetric matrix (i.e., $A+A^{\top}=0$) and $D$ is a diagonal matrix. Under what conditions is $A+D$ a Hurwitz stable matrix?
For any $x\in\mathbb{R}^n$, $x^T(A+D)x=x^TDx$. If $x^T(A+D)x<0$ for all nonzero $x$, all eigenvalues of $A+D$ have negative real parts. Consequently, if the diagonal of $D$ is negative, $A+D$ is Hurwitz stable. To see that for a real matrix $B$, $x^TBx<0$ for all nonzero $x$ implies the negativity of the real part of the spectrum of $B$, consider an eigenvalue $\lambda=\alpha+i\beta$ and the associated eigenvector $x=u+iv$, where $\alpha,\beta\in\mathbb{R}$ and $u,v\in\mathbb{R}^n$. We have $$ Bx=\lambda x\quad\Leftrightarrow\quad B(u+iv)=(\alpha+i\beta)(u+iv)\quad\Leftrightarrow\quad Bu=\alpha u-\beta v, \quad Bv=\beta u+\alpha v. $$ Hence $$ u^TBu+v^TBv=\alpha u^Tu-\beta u^Tv+\beta v^Tu+\alpha v^Tv=\alpha (u^Tu+v^Tv)=\alpha\|x\|_2^2. $$ Since $u^TBu+v^TBv$ is negative (at least one of the vectors $u$ or $v$ is nonzero), we have that $\alpha<0$.
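A small concrete check of this conclusion (a hypothetical $2\times 2$ example in pure Python): take $A$ skew-symmetric and $D$ with negative diagonal entries, and confirm both eigenvalues of $A+D$ have negative real part.

```python
import cmath

def eig2(m):
    """Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the characteristic polynomial."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[0.0, 3.0], [-3.0, 0.0]]    # skew-symmetric
D = [[-1.0, 0.0], [0.0, -2.0]]   # negative diagonal
M = [[A[i][j] + D[i][j] for j in range(2)] for i in range(2)]

# Both eigenvalues have negative real part, so A + D is Hurwitz stable.
assert all(l.real < 0 for l in eig2(M))
```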
{ "language": "en", "url": "https://math.stackexchange.com/questions/952233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
High-school level question concerning circles and arcs This question seems unsolvable to me. Any ideas/hints will be much appreciated. $AB$ is a chord which is cut by the chords $CD$ and $EC$ in the circle. Given: $\text{arc}\,AC +\text{arc}\,BE=\text{arc}\,AD+\text{arc}\,BC$ and $S_{CFG}=S_{CGH}$. I need to show: $AB \perp HG$. I realized that $CG$ is a median in $\triangle{HFG}$, so I'm trying to prove $CF=CG$ or $CG=GH$, which would suffice to say that $\triangle{HFG}$ is a right triangle, but I am not able to find a way.
Since $$\text{arc}AC+\text{arc}BE=\text{arc}AD+\text{arc}BC,$$ we have $$\angle{CBA}+\angle{BCE}=\angle{ACD}+\angle{BAC},$$ i.e. $$\angle{CBG}+\angle{BCG}=\angle{ACF}+\angle{FAC}.$$ Considering $\triangle CBG$ and $\triangle CAF$, we can say $$\angle{CGF}=\angle{CFG}\Rightarrow CF=CG.$$ Then, as you wrote, since $CF=CH$ (because $CG$ is a median, so $C$ is the midpoint of $FH$), we get $CG=CF=CH$: the median to $FH$ equals half of $FH$, so $\triangle HFG$ is a right triangle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/952324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Show the minimum value for v Really struggling with this question... If $$v=\frac{(LV_1-V_1x+V_2x)R}{2Lrx-2rx^2+LR},$$ prove that the minimum value of $v$ (for $x>0$) occurs at $$x=\frac{L}{V_2-V_1}\cdot \left[-V_1 \pm \sqrt{V_1V_2-\frac{R}{2rL}(V_1-V_2)^2}\right].$$ How do I do this? I have tried differentiating the equation and setting the derivative to 0, but I end up with a different formula than the one given. Thanks in advance...
Suppose that $L,r,R,V_1,V_2$ are constants. If we set $$a=-2r,b=2Lr,c=LR,d=R(V_2-V_1),e=RLV_1,$$ we have $$v=\frac{dx+e}{ax^2+bx+c}\Rightarrow \frac{dv}{dx}=\frac{-dax^2-2eax+dc-eb}{(ax^2+bx+c)^2}.$$ Hence, we have $$-dax^2-2eax+dc-eb=0$$$$\iff -R(V_2-V_1)(-2r)x^2-2RLV_1(-2r)x+R(V_2-V_1)LR-RLV_1\times 2Lr=0$$ $$\iff (V_2-V_1)x^2+2LV_1x+\frac{(V_2-V_1)LR}{2r}-L^2V_1=0$$ $$\iff x=\frac{-LV_1\pm\sqrt{D}}{V_2-V_1}\tag1$$ where $$D=L^2V_1^2-(V_2-V_1)\left(\frac{(V_2-V_1)LR}{2r}-L^2V_1\right)$$ $$=L^2\left(V_1^2-\frac{(V_1-V_2)^2R}{2rL}+V_1(V_2-V_1)\right)$$ $$=L^2\left(V_1V_2-\frac{R}{2rL}(V_1-V_2)^2\right).$$ Hence, from $(1)$, we have what you wrote. P.S. Suppose that $L\gt 0$. Since $$D=L^2\left(V_1V_2-\frac{R}{2rL}(V_1-V_2)^2\right),$$ we have $$\sqrt D=\sqrt{L^2\left(V_1V_2-\frac{R}{2rL}(V_1-V_2)^2\right)}=L\sqrt{V_1V_2-\frac{R}{2rL}(V_1-V_2)^2}.$$ Hence, from $(1)$, we have $$\begin{align}x&=\frac{-LV_1\pm\sqrt{D}}{V_2-V_1}\\&=\frac{-LV_1\pm L\sqrt{V_1V_2-\frac{R}{2rL}(V_1-V_2)^2}}{V_2-V_1}\\&=\frac{L}{V_2-V_1}\left(-V_1\pm \sqrt{V_1V_2-\frac{R}{2rL}(V_1-V_2)^2}\right).\end{align}$$
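As a sanity check of the final formula, here is a quick numerical experiment. The particular constants are arbitrary values of mine (chosen so that the quantity under the square root is positive), not from the question; the derivative of $v$ should vanish at the computed critical point, and that point should indeed give a minimum:

```python
import math

# Arbitrary sample constants for the check
L, r, R, V1, V2 = 2.0, 1.0, 1.0, 1.0, 3.0

def v(x):
    return ((L*V1 - V1*x + V2*x) * R) / (2*L*r*x - 2*r*x**2 + L*R)

disc = V1*V2 - R/(2*r*L) * (V1 - V2)**2
x_crit = L/(V2 - V1) * (-V1 + math.sqrt(disc))   # the root with '+', which is positive here

# Central-difference derivative of v at the critical point
h = 1e-6
dv = (v(x_crit + h) - v(x_crit - h)) / (2*h)
```

With these constants `x_crit` works out to $\sqrt2-1\approx0.414$, the derivative there is numerically zero, and nearby values of $v$ are larger.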
{ "language": "en", "url": "https://math.stackexchange.com/questions/952414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A cute limit $\lim_{m\to\infty}\left(\left(\sum_{n=1}^{m}\frac{1}{n}\sum_{k=1}^{n-1}\frac{(-1)^k}{k}\right)+\log(2)H_m\right)$ I'm sure that for many of you this is a limit pretty easy to compute, but my concern here is a bit different, and I'd like to know if I can nicely compute it without using special functions. Do you have in mind such ways? $$\lim_{m\to\infty}\left(\left(\sum_{n=1}^{m}\frac{1}{n}\sum_{k=1}^{n-1}\frac{(-1)^k}{k}\right)+\log(2)H_m\right)$$
$$ \begin{align} \lim_{m\to\infty}\left(\sum_{n=1}^m\sum_{k=1}^{n-1}\frac{(-1)^k}{nk}+H_m\log(2)\right) &=\lim_{m\to\infty}\sum_{n=1}^m\sum_{k=n}^\infty\frac{(-1)^{k-1}}{nk}\\ &=\sum_{n=1}^\infty\sum_{k=n}^\infty\frac{(-1)^{k-1}}{nk}\\ &=\sum_{k=1}^\infty\sum_{n=1}^k\frac{(-1)^{k-1}}{nk}\\ &=\sum_{k=1}^\infty(-1)^{k-1}\frac{H_k}k\\ &=\frac12\zeta(2)-\frac12\log(2)^2 \end{align} $$ the last step is from this answer where only series manipulations are used, no special functions.
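A quick numerical check of the closed form $\frac12\zeta(2)-\frac12\log^2 2$ is easy to sketch; convergence is fast because the tail terms behave roughly like $(-1)^{n-1}/(2n^2)$:

```python
import math

m = 4000
total = 0.0
inner = 0.0          # running value of sum_{k=1}^{n-1} (-1)^k / k
H = 0.0              # harmonic number H_n, built up alongside
for n in range(1, m + 1):
    total += inner / n
    H += 1.0 / n
    inner += (-1.0)**n / n   # extend the inner sum to k = n for the next step

value = total + math.log(2) * H
closed_form = math.pi**2 / 12 - math.log(2)**2 / 2   # (1/2)zeta(2) - (1/2)log(2)^2
```

The partial value agrees with the closed form to several decimal places already at $m=4000$.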
{ "language": "en", "url": "https://math.stackexchange.com/questions/952535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Show $\frac{1}{|x-y|^{p}}-\frac{1}{|x|^{p}}\in L^{2}(\mathbb{R})$ for fixed real y This is homework so no answers please Show $\frac{1}{|x-y|^{p}}-\frac{1}{|x|^{p}}\in L^{2}(\mathbb{R})$ for fixed real y where $|p|< \frac{1}{2}$. Any strategies? Attempt: I will type as I go, I am just curious to see strategies suggested. I tried the usual ones i.e. Holder, Jensens, Minkowski, linear interpolation. Thanks
The result is false for $p=1/2$, since $1/|x|$ is not integrable at $0$. If $p<1/2$, you only have to worry about integrability at $\infty$. Use the Mean Value Theorem. You will find the condition $p>-1/2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/952622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Olympiad problem: Erdos-Selfridge The following problem is a special case of Erdos-Selfridge theorem: http://projecteuclid.org/euclid.ijm/1256050816 Problem: Prove that for any positive integer $n$, the product $(n+1)(n+2)...(n+10)$ is not a perfect square. It appeared as an olympiad problem, so there is a solution using just olympiad techniques (i.e. "elementary" maths), and it shouldn't be too long either because in an olympiad you only have limited time. However, after trying this for almost a day I still couldn't get anything interesting. I've tried simpler versions but it was no use. I would appreciate any help! EDIT: I couldn't find a solution anywhere, I tried the AoPS contest section. It's from the 2002 Bosnia Herzegovina Team Selection Test, in case anyone wants to try looking.
The product is equal to $2^5\times(\text{odd number})$. $5$ is not even, thus the product cannot be a square. Best regards
{ "language": "en", "url": "https://math.stackexchange.com/questions/952800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Showing that $-\ln{X} \sim \exp{\alpha}$ for $X \sim Beta(\alpha, 1)$ The CDF for $X \sim Beta(\alpha,1)$ is given by: $$F(x) = \frac{\int_{0}^{x}t^{\alpha-1}dt}{\int_{0}^{1} t^{\alpha-1}dt}$$ I am given to understand that $-\ln{X} \sim \exp(\alpha)$ if $\alpha > 0$. How do I go about showing this? In all my attempts, I really don't know what to do with the presence of the natural logarithm term. EDIT: If this is not so trivial, it might help to look at the Kumaraswamy Distribution.
Let $g(x) = -\log x$, hence $Y = g(X) = -\log X$, so that $X = g^{-1}(Y) = e^{-Y}$. Then $$F_Y(y) = \Pr[Y \le y] = \Pr[-\log X \le y] = \Pr[X \ge e^{-y}] = 1 - F_X(e^{-y}).$$ Then differentiation of the CDF gives $$f_Y(y) = f_X(e^{-y})e^{-y}$$ by the chain rule. Now if $X \sim \mathrm{Beta}(\alpha,1)$, then $f_X(x) \propto x^{\alpha - 1}$, hence we have $$f_Y(y) \propto e^{-y(\alpha-1)} e^{-y} = e^{-\alpha y}$$ which is clearly exponential with rate parameter $\alpha$. There is no need to explicitly compute $F_X$; it suffices to know the PDF of $X$. Indeed, this approach can be used even when $X \sim \mathrm{Beta}(\alpha,\beta)$ in general, although the resulting distribution for $Y$ is not generally exponential. I leave it to the reader as an exercise to determine whether such a distribution has a name.
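One can also confirm the conclusion by simulation: sample $X\sim\mathrm{Beta}(\alpha,1)$ by the inverse transform ($F(x)=x^\alpha$ on $[0,1]$, so $X=U^{1/\alpha}$ for uniform $U$) and check that $Y=-\log X$ behaves like an $\mathrm{Exp}(\alpha)$ variable. The sample size and the value of $\alpha$ below are arbitrary choices:

```python
import math
import random

random.seed(42)
alpha = 2.0
N = 200_000

# X = U^(1/alpha) has CDF x^alpha on [0, 1], i.e. X ~ Beta(alpha, 1);
# then Y = -log X = -(1/alpha) log U is exactly Exp(alpha).
samples = [-math.log(random.random() ** (1.0 / alpha)) for _ in range(N)]

mean_y = sum(samples) / N                        # Exp(alpha) has mean 1/alpha
tail = sum(1 for s in samples if s > 0.5) / N    # P(Y > t) should be exp(-alpha*t)
```

Both the sample mean and the tail probability match the exponential predictions to Monte Carlo accuracy.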
{ "language": "en", "url": "https://math.stackexchange.com/questions/952888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving an irrational number in the cantor set I'm trying to prove that $0.2020020002\ldots_3 \in \Bbb Q^c\cap C$ where $C$ denotes the Cantor set. I'm trying to get a contradiction assuming $0.2020020002\ldots_3 \in \Bbb Q$ (without using the fact that every rational number is periodic or terminating). Suppose $$\sum_{k=1}^{\infty} \frac{2}{3^{\frac{k(k+1)}{2}}}=\frac{p}{q}.$$ Choose $n$ such that $3^n \gt q+1$. Then $$ q \times3^n\times \sum_{k=1}^{\infty} \frac{2}{3^{\frac{k(k+1)}{2}}}= p \times 3^n=integer.$$ I'm stuck after this step. How can I show that this is an integer between $0$ and $1$? Any help is greatly appreciated.
Suppose $$ \sum_{k=1}^{\infty}\frac{2}{3^{\frac{k(k+1)}{2}}} \text{ is rational.}$$ Then there exists $ p,q\in \mathbb{Z}^{+} $ such that $\gcd(p,q)=1$ and $$ \sum_{k=1}^{\infty}\frac{2}{3^{\frac{k(k+1)}{2}}}=\frac{p}{q} .$$ Then you need to choose $ n\in \mathbb{N} $ such that $ n=\frac{k_{0}(k_{0}+1)}{2} $ for some $ k_{0}\in \mathbb{N} $ with $ 3^{k_{0}}>q+1 $. Put $$ x=q3^{n}\left( \frac{p}{q}-\sum_{k=1}^{k_{0}}\frac{2}{3^{\frac{k(k+1)}{2}}}\right). $$ Then $$x=3^{n}p-q3^{n}\sum_{k=1}^{k_{0}}\frac{2}{3^{\frac{k(k+1)}{2}}}. $$ Hence $x$ is an integer. Observe that $$x=q3^{n}\sum_{k=k_{0}+1}^{\infty}\frac{2}{3^{\frac{k(k+1)}{2}}}>0. $$ Therefore $x$ is a positive integer. Also observe that $$x=q3^{n}\sum_{k=k_{0}+1}^{\infty}\frac{2}{3^{\frac{k(k+1)}{2}}}<q3^{n}\sum_{k=k_{0}+1}^{\infty}\frac{2}{3^{n+k}}<q\sum_{k=k_{0}+1}^{\infty}\frac{2}{3^{k}}=\frac{q}{3^{k_{0}}}<\frac{q+1}{3^{k_{0}}}<1. $$ That is $x$ is a positive integer with $ x<1 $. So we have a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/952999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why do polynomial terms with an even exponent bounce off the x-axis? Like if it's $f(x) = (x-5)^2(x+6)$ Why, at $x=5$, does the graph reflect off the x-axis?
Well... at $x=5$, it is clear that $$f(5)=(5-5)^2(5+6)=0^2\cdot 11=0$$ So from here, you need to observe that if you pick some number greater than $5$ for $x$ (we can write this as $x=5+x_0$, where $x_0 >0$) then we get $$f(5+x_0) = ((5+x_0)-5)^2((5+x_0)+6)=(x_0)^2(11+x_0)>0$$ The $>$ above is true because $(x_0)^2$ is always positive and $11+ x_0$ is positive because $x_0>0$. But this is true for any $x_0 >0$ and it only gets bigger for larger $x_0$, so this is why it looks the way it does. The same thing happens just to the left of $5$: $f(5-x_0)=(x_0)^2(11-x_0)>0$ for small $x_0>0$, so $f$ is positive on both sides of $x=5$. The graph touches the $x$-axis there without changing sign — that is the "bounce." At an odd-power factor the sign does flip, which is why the graph crosses at $x=-6$.
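The sign check is easy to do by machine too — evaluating $f$ on both sides of each root shows no sign change at the double root $x=5$ and a sign change at the simple root $x=-6$:

```python
def f(x):
    return (x - 5)**2 * (x + 6)

near_5  = (f(5 - 0.001), f(5 + 0.001))    # both sides of the double root
near_m6 = (f(-6 - 0.001), f(-6 + 0.001))  # both sides of the simple root
```

Both values near $5$ are positive (a bounce), while the values near $-6$ have opposite signs (a crossing).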
{ "language": "en", "url": "https://math.stackexchange.com/questions/953128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
ker$(A^n) =$ ker$(A^m), \forall m > n$ If $A$ is a square matrix and ker$(A^n) =$ ker$(A^{n+1})$, then ker$(A^n) =$ ker$(A^m), \forall m > n$. I'm trying to prove that this is correct, but I'm having trouble figuring out what the relation between the ker$(A^n)$ and ker$(A^{n+1})$ is precisely. I know "ker" represents all solutions to the homogenous equation, but I'm not sure how to think about something like ker$(A^n)$ where $A$ is raised to a power. I can think of something like ker$(A_1\times ... \times A_n)$ = ker$(A_1\times ... \times A_{n+1})$, but is there some way I can transform this equation to make it easier to understand how to prove the above?
You can think of the problem in the following way: If we know that $A^{n+1}x = A^n \cdot Ax = 0 \implies A^n x = 0$ (for all $x$), show that $A^m x = 0 \implies A^n x = 0$ for all $m>n$ (for all $x$). In fact, it's enough to show the following: If we know that $A^{n+1}x = A^n \cdot Ax = 0 \implies A^n x = 0$ (for all $x$), show that $A^m x = 0 \implies A^{m-1} x = 0$ for all $m>n$ (for all $x$). Now, note that $$ A^m x = A^{n+1}(A^{m-n-1} x) = 0 $$ and apply the hypothesis.
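Here is the statement in action on a small concrete example (my own choice of matrix, not from the question): a nilpotent $3\times3$ Jordan block padded with an invertible entry, so the kernel grows until $n=3$ and then stabilizes forever:

```python
import numpy as np

# A 4x4 matrix whose kernel grows for a while and then stabilizes:
# a nilpotent Jordan block of size 3 plus an invertible 1x1 block.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 2.]])

ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(A, k)) for k in range(1, 7)]
kernel_dims = [4 - r for r in ranks]   # dim ker A^k = 4 - rank(A^k)
```

The kernel dimensions come out $1,2,3,3,3,3$: once two consecutive kernels agree (here at $n=3$), all later ones agree as well.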
{ "language": "en", "url": "https://math.stackexchange.com/questions/953228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Show that there exists a real function $f$ such that $x\to a, |f(x)|\to |L|$ but the limit of $f(x)$ does not exist. Show that there exists a real function $f$ such that $x\to a, |f(x)|\to |L|$ but the limit of $f(x)$ does not exist. Just before this problem I was asked to prove the theorem: If $L=\lim_{x\to a}f(x)$ exists, then $|f(x)|\to|L|$ as $x\to a$. I was able to prove that just fine which is causing the confusion here. I was under the impression that in order to have $\lim_{x\to a}f(x)=L, f(x)$ must approach L whenever $x$ approaches $a$. Based on that, how do we create an example where the limit does not exist?
The classic sign function also works: $$f(x) =\left\{\begin{matrix} 1 & x>0 \\ 0 & x =0 \\ -1 & x<0 \end{matrix}\right..$$ $\lim_{x \to 0} f(x)$ does not exist, but $\lim_{x \to 0} | f(x) | = 1.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/953441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If the dot product between two vectors is $0$, are the two linearly independent? If we have vectors $V$ and $W$ in $\mathbb{R^n}$ and their dot product is $0$, are the two vectors linearly independent? I can expand $V_1 \cdot V_2 = 0 \Rightarrow v_1w_1+...+v_nw_n = 0$, but I don't understand how this relates to linear independence.
More generally, yes: any orthogonal set of nonzero vectors is linearly independent. So if we consider $\mathbb{R}^n$ together with the inner product being the dot product, then a set of nonzero vectors with pairwise dot products equal to zero is linearly independent: Consider the set $A = \{x_1,x_2,...,x_n\}$. We need to prove that $A$ is a linearly independent set, that is, if you take a linear combination of $A$ equal to zero then the only solution is the trivial one. So take $a_1,\dots,a_n \in \mathbb{R}$ with $a_1x_1 + ... + a_nx_n = 0$. Take the dot product of both sides with $x_1$; everything vanishes except the first term, since by assumption the pairwise dot products are zero, and you are left with $a_1(x_1\cdot x_1) = a_1\|x_1\|^2 = 0$, so $a_1 = 0$ (here $x_1\neq 0$ is essential). Proceeding similarly with $x_2,\dots,x_n$ you find that all the constants are zero. Note that the nonzero assumption matters: the zero vector has dot product $0$ with everything, yet any set containing it is linearly dependent.
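A quick check with NumPy (the vectors are arbitrary examples of mine): pairwise-orthogonal nonzero vectors have full rank, i.e. they are independent, while throwing in the zero vector — which is orthogonal to everything — destroys independence:

```python
import numpy as np

# Three pairwise-orthogonal nonzero vectors in R^4
v1 = np.array([1., 1., 0., 0.])
v2 = np.array([1., -1., 0., 0.])
v3 = np.array([0., 0., 3., 4.])

vs = [v1, v2, v3]
dots = [float(vs[i] @ vs[j]) for i in range(3) for j in range(i + 1, 3)]

rank = np.linalg.matrix_rank(np.stack(vs))                       # should be 3
rank_with_zero = np.linalg.matrix_rank(np.stack([v1, np.zeros(4)]))  # should be 1
```

All pairwise dot products are zero, the three vectors span a $3$-dimensional space, and the set $\{v_1, 0\}$ has rank $1$, so it is dependent.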
{ "language": "en", "url": "https://math.stackexchange.com/questions/953512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Calc III - Parameterization Given x(t) = (2t,t^2,t^3/3), I am asked to "find equations for the osculating planes at time t = 0 and t = 1, and find a parameterization of the line formed by the intersection of these planes." I have already solved the vector-valued functions for x. I know this is vague and I'm asking for a lot, but can someone please explain how to solve the problem and also what parameterization is? Thank you!
First find the unit tangent vector $${\bf T} = \frac{{\bf x'}}{\|{\bf x'}\|}$$ Then the unit normal (or just normal for ease): $${\bf N} = \frac{{\bf T'}}{\|{\bf T'}\|}$$ And you may need the binormal which is just $${\bf B} = {\bf T} \times {\bf N} $$ Then your point is ${\bf x}(0)$ and your vector is ${\bf B}(0)$. Then plug into the plane equation: $$a(x-x_0) + b(y-y_0) + c(z-z_0)=0$$ Example: $x(t) = \langle \cos(t), \sin(t), t \rangle$ I get $${\bf T} = \langle -\frac{1}{\sqrt{2}}\sin(t), \frac{1}{\sqrt{2}}\cos(t), \frac{1}{\sqrt{2}} \rangle$$ Then the unit normal: $${\bf N} = \langle -\cos(t), -\sin(t), 0 \rangle$$ And the binormal is $${\bf B}(0) = {\bf T}(0) \times {\bf N}(0) = \langle 0, \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \rangle \times \langle -1, 0, 0 \rangle = \langle 0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \rangle $$ I get the point $x(0) = \langle \cos(0), \sin(0), 0 \rangle = \langle 1, 0, 0 \rangle $ So the plane is $$0(x-1) - \frac{1}{\sqrt{2}}(y-0) + \frac{1}{\sqrt{2}}(z-0)=0$$
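Applying this recipe to the curve from the question, $x(t)=(2t,\,t^2,\,t^3/3)$, it is convenient to use the standard shortcut that the osculating plane's normal is parallel to $x'\times x''$ (the unnormalized direction of ${\bf B}$). A sketch — the parameterization of the intersection line at the end is my own working, not from the question:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def xp(t):    # x'(t) for x(t) = (2t, t^2, t^3/3)
    return (2.0, 2.0*t, t*t)

def xpp(t):   # x''(t)
    return (0.0, 2.0, 2.0*t)

n0 = cross(xp(0), xpp(0))   # normal of the osculating plane at t = 0
n1 = cross(xp(1), xpp(1))   # normal at t = 1

def plane0(p):   # osculating plane at t = 0 through x(0) = (0, 0, 0)
    return n0[0]*p[0] + n0[1]*p[1] + n0[2]*p[2]

def plane1(p):   # osculating plane at t = 1 through x(1) = (2, 1, 1/3)
    return n1[0]*(p[0] - 2) + n1[1]*(p[1] - 1) + n1[2]*(p[2] - 1.0/3.0)

def on_line(s):  # parameterization of the intersection of the two planes
    return (2.0/3.0 + 2.0*s, s, 0.0)
```

The normals come out $(0,0,4)$ and $(2,-4,4)$, so the planes are $z=0$ and $2(x-2)-4(y-1)+4(z-\tfrac13)=0$, and every point of the parameterized line lies on both.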
{ "language": "en", "url": "https://math.stackexchange.com/questions/953591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Graph Theory inequality I've been trying to prove the following inequality If G is r-regular graph and $\kappa (G)=1 $, then $\lambda (G)\leq \left \lfloor \frac{r}{2} \right \rfloor$ I've tried manipulating the Whitney inequality, but it doesn't seem to help.
Let $G=(V,E)$ be the $r$-regular graph. Since $\kappa(G)=1$, there exist three vertices $u$, $v$ and $x$ such that, after removing $x$ from $G$, $u$ and $v$ are not connected. Call $G'$ the graph obtained by removing $x$ from $G$, and define $$ G_u=\{ y \in V \;|\; y \mbox{ is connected to } u \mbox{ in } G' \}$$ $$ G_v=\{ y \in V \;|\; y \mbox{ is connected to } v \mbox{ in } G' \}.$$ $G_u$ and $G_v$ are not connected in $G'$. Now define $$ E_u=\{(x,y) \in E | y \in G_u \} $$ $$ E_v=\{(x,y) \in E | y \in G_v \}. $$ Observe that by removing one of these two sets the graph obtained is disconnected. Moreover, $$ |E_u|+|E_v|=d(x)=r.$$ Hence one of them has cardinality $\leq \lfloor\frac{r}{2} \rfloor$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/953679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Automorphism groups of vertex transitive graphs Does there exist a finite nonoriented graph whose automorphism group is transitive but not generously transitive (that is, it is not true that each pair $(x,y)$ of vertices can be interchanged by some automorphism)? I know, for example, that Cayley graphs are always generously vertex-transitive.
Cayley graphs for abelian groups have generously transitive automorphism groups. In general a Cayley graph for a non-abelian group will not be generously transitive. In particular if $G$ is not abelian and $X$ is a so-called graphical regular representation (abbreviated GRR) for $G$, then its automorphism group is not generously transitive. The key property of a GRR is that the stabilizer of a vertex is trivial. Constructing GRRs is not trivial, but it is expected that most Cayley graphs for groups that are not abelian or generalized dicyclic will be GRRs - so choosing a connection set at random usually works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/953856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Integration without substitution How do I integrate this without substitutions or partial fraction decomposition? $(3x^2+2)/[x^6(x^2+1)]$ I've got to $2/[x^6(x^2+1)]$, but after this I haven't been able to eliminate the $2$.
You can use $\displaystyle\frac{3x^2+2}{x^6(x^2+1)}=\frac{2(x^2+1)}{x^6(x^2+1)}+\frac{x^2}{x^6(x^2+1)}=\frac{2}{x^6}+\frac{1}{x^4(x^2+1)}$ $\displaystyle=\frac{2}{x^6}+\left(\frac{x^2+1}{x^4(x^2+1)}-\frac{x^2}{x^4(x^2+1)}\right)=\frac{2}{x^6}+\frac{1}{x^4}-\frac{1}{x^2(x^2+1)}$ $\displaystyle=\frac{2}{x^6}+\frac{1}{x^4}-\left(\frac{x^2+1}{x^2(x^2+1)}-\frac{x^2}{x^2(x^2+1)}\right)=\frac{2}{x^6}+\frac{1}{x^4}-\frac{1}{x^2}+\frac{1}{x^2+1}.$ [Notice that this gives the partial fraction decomposition of this particular function.]
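The resulting decomposition is easy to spot-check numerically at a few sampled points (a quick sketch):

```python
import random

random.seed(1)

def lhs(x):
    return (3*x**2 + 2) / (x**6 * (x**2 + 1))

def rhs(x):
    # the decomposition obtained above
    return 2/x**6 + 1/x**4 - 1/x**2 + 1/(x**2 + 1)

max_err = max(abs(lhs(x) - rhs(x))
              for x in (random.uniform(0.5, 3.0) for _ in range(100)))
```

The two sides agree to floating-point accuracy on all sampled points.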
{ "language": "en", "url": "https://math.stackexchange.com/questions/953936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
In the theorem is it necessary for ring $R$ to be commutative? According to the statement of theorem that a commutative ring $R$ with prime characteristic $p$ satisfies $$\begin{align} (a+b)^{p^n} = a^{p^n} + b^{p^n} \end{align}$$ $$\begin{align} (a-b)^{p^n} = a^{p^n} - b^{p^n} \end{align}$$ Where a,b $\in$ $R$, n $\geq 0$ and n $\in \mathbb{Z}$ I have thoroughly checked the proof many times but I don't understand why the condition of commutativity is important.
The commutative property is key in proving (for example, by induction) the Binomial Formula for commutative rings $A$ with $1$: $$ (a+b)^n = \sum_{k=0}^n {n \choose k} a^k b^{n-k} $$ for all $a,b\in A$. In prime characteristic $p$, we can then use $p | {p^n \choose k}$, $k<p^n$ and $k>0$.
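The prime-characteristic identity is easy to check exhaustively in $\mathbb{Z}/p\mathbb{Z}$, a commutative ring of prime characteristic $p$ — and, for contrast, the analogous identity with exponent $4$ fails in $\mathbb{Z}/4\mathbb{Z}$, whose characteristic is not prime:

```python
p = 5  # a prime; work in the commutative ring Z/pZ

ok = all((a + b)**p % p == (a**p + b**p) % p
         for a in range(p) for b in range(p))

# Contrast: characteristic 4 is not prime, and the identity with exponent 4
# fails in Z/4Z (e.g. a = b = 1 gives 2^4 = 0 but 1 + 1 = 2 mod 4).
fail_example = any((a + b)**4 % 4 != (a**4 + b**4) % 4
                   for a in range(4) for b in range(4))
```

This is exactly the "freshman's dream" consequence of $p \mid {p \choose k}$ for $0<k<p$.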
{ "language": "en", "url": "https://math.stackexchange.com/questions/954052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof of the product rule for the divergence How can I prove that $\nabla \cdot (fv) = \nabla f \cdot v + f\nabla \cdot v,$ where $v$ is a vector field and $f$ a scalar valued function? Many thanks for the help!
$$ \begin{align} \nabla \cdot (fv) &= \sum \partial_i (f v_i) \\ &= \sum (f \partial_i v_i + v_i \partial_i f) \\ &= f \sum \partial_i v_i + \sum v_i \partial_i f \\ &= f \nabla \cdot v + v \cdot \nabla f \end{align} $$
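The identity can also be verified symbolically for concrete fields with SymPy, which applies exactly the componentwise product rule used above; the particular $f$ and $v$ below are arbitrary choices:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

f = x**2 * sp.sin(y)                      # an arbitrary scalar field
v = sp.Matrix([y*z, x*z, sp.exp(x*y)])    # an arbitrary vector field

div = lambda w: sum(sp.diff(w[i], coords[i]) for i in range(3))
grad_f = sp.Matrix([sp.diff(f, c) for c in coords])

# div(f v) - (grad f . v + f div v) should be identically zero
residual = sp.simplify(div(f * v) - (grad_f.dot(v) + f * div(v)))
```

The residual simplifies to zero, as the componentwise computation above predicts.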
{ "language": "en", "url": "https://math.stackexchange.com/questions/954158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
What does the notation for a group ${A_n}^*$ mean? I was on a website that catalouges lattices. I was looking through the alternating group lattices and the dihedral lattices and there were two kinds for each. For example, there was $A_2$ and ${A_2}^*$ as well as $D_3$ and ${D_3}^*$. I can't seem to find any info on what these other groups are. What does that $^*$ indicate?
'*' indicates dual or weighted lattices. See this pdf. But, to complete this answer: $A_n\;^*$ denotes the dual lattice, the lattice of vectors having integral inner products with all vectors in $A_n$: $A_n\;^*=\{x\in \mathbb{R}^n \mid (x,r)\in \mathbb{Z} \text{ for all } r\in A_n\}$. Moreover, $(A_n\;^*)^*=A_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/954294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$M^{-1}$ has at most $n^2-2n$ coefficients equal to zero Let $M\in GL_n(\mathbb{R})$ such that all its coefficients are non zero. How can one show that $M^{-1}$ has at most $n^2-2n$ coefficients equal to zero ? I have no idea how to tackle that problem, I've tried drawing some contradiction if $M^{-1}$ had $n^2+1-2n$ zero coefficients but couldn't find any. Notations : $GL_n(\mathbb{K})$ : Set of invertible matrices in $\mathcal{M}_n(\mathbb{K})$
Equivalently you need to prove that $M^{-1}$ has at least $2n$ nonzero entries. Let's in fact prove that every row of $M^{-1}$ has at least two nonzero entries. Suppose that the $i$th row of $M^{-1}$ has less than two nonzero entries; of course it cannot be a zero row (otherwise $M^{-1}$ would be singular), so it has exactly one nonzero entry. But then, in the equality $M^{-1}M = I$, we reach a contradiction, because in the $i$th row you would get: $$M^{-1} M = \begin{pmatrix} \dots &&&&&& \dots \\ 0 & \dots & 0 & a_{i,j} & 0 & \dots & 0 \\ \dots &&&&&& \dots \end{pmatrix} \times \begin{pmatrix} \vdots & b_{j,1} & \vdots \\ \vdots & \vdots & \vdots \\ \vdots & b_{j,n} & \vdots \end{pmatrix} = \begin{pmatrix} \dots && \dots \\ a_{i,j}b_{j,1} & \dots & a_{i,j}b_{j,n} \\ \dots && \dots \end{pmatrix}$$ And since all the entries of $M$ are nonzero, the $i$th row of $I$ would only have nonzero entries, which is absurd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/954377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Recurrence Relation from Old Exam I see this challenging recurrence relation that has a solution of $T^2(n)=\theta (n^2)$. anyone could solve it for me? how get it? $$T(n) = \begin{cases} n,\quad &\text{ if n=1 or n=0 }\\ \sqrt{1/2[T^2(n-1)+T^2(n-2)]+}n ,\quad &\text{ if n>1} \end{cases}$$
Hint: Let $a(n)=T^2(n)$. Then the recurrence relation becomes $$a(n)={1\over2}a(n-1)+{1\over2}a(n-2)+n$$ with $a(0)=0$ and $a(1)=1$. Now try finding $c$ and $C$ for which there's an inductive proof that $cn^2\lt a(n)\lt Cn^2$ for sufficiently large $n$. Added later (at OP's request): I'll do one inequality, which hopefully will show you how to do the other. We'd like to find a constant $C$ such that $a(n-1)\lt C(n-1)^2$ and $a(n-2)\lt C(n-2)^2$ together imply $a(n)\lt Cn^2$. If we just plug the assumed inequalities into what we know, we have $$\begin{align} a(n)&={1\over2}a(n-1)+{1\over2}a(n-2)+n\\ &\lt {1\over2}C(n-1)^2+{1\over2}C(n-2)^2+n\\ &={1\over2}C\big(n^2-2n+1+n^2-4n+4\big)+n\\ &=Cn^2+(1-3C)n+{5\over2}C \end{align}$$ Keep in mind that this is all "scratch" work: We're not writing up a proof at this point, nor are we trying to find the exact value of $C$ that gives the inequality, we're just trying to spot a value that will make things work. The key is in the coefficient $(1-3C)$ in the last line. If $C$ is big, that coefficient is negative, which will make the term $(1-3C)n$ hugely negative if $n$ is "sufficiently" large. So let's pick a convenient value and see what happens. Let's try $C=2$. If you redo the derivation above with $C=2$, you wind up with $$a(n)\lt 2n^2-5n+5\lt 2n^2\quad\text{if }n\ge2$$ since $5-5n$ is negative for $n\ge2$. Note that any $C$ greater than ${1\over3}$ will make $1-3C$ negative, so you don't have to choose $C=2$. It just happens to be convenient. Something similar should happen when you look at the other inequality, $cn^2\lt a(n)$. Give it a try!
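With $a(n)=T^2(n)$ as above, the recurrence is cheap to iterate, and the numbers visibly obey sandwich bounds of the form $cn^2<a(n)<Cn^2$. (In fact $a(n)/n^2\to 1/3$ — that limiting ratio comes from trying a particular solution $\alpha n^2+\beta n$, which forces $\alpha=1/3$; a side observation of mine, not needed for the $\Theta$ bound.)

```python
a = [0.0, 1.0]                      # a(0) = 0, a(1) = 1
for n in range(2, 201):
    a.append(0.5 * a[n-1] + 0.5 * a[n-2] + n)

ratio = a[200] / 200**2             # heads toward 1/3
```

The iterates stay between $0.3\,n^2$ and $2\,n^2$ (matching the hint's $C=2$) for every $n$ from $10$ to $200$.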
{ "language": "en", "url": "https://math.stackexchange.com/questions/954442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
2nd order odes with non constant coeff. I am trying to solve these two DE: $ y''+(2x)/(1+x^2)y'+1/(1+x^2)^2y=0 $ and $ xy''-y'-4x^3y=0 $ and I am looking for methods on how to find the solutions. Should I go with the series method or is there a simpler way?
Hint: (2) You have that $ y_1 = \sinh x^2 $ is a solution of your DE. Write $$\begin{align}y_2 &= v\ \sinh x^2 \\ y'_2 &= v'\ \sinh x^2 + \ v\ (2x\ \cosh x^2) \\ y''_2 & = v''\sinh x^2 + 2v' (2x\ \cosh x^2) + v(2\ \cosh x^2+ 4x^2\ \sinh x^2) \ \end{align}$$ Plugging in the differential equation we have $$xv''\sinh x^2 + [4x^2\cosh x^2-\sinh x^2]v' = 0$$ Then set $w=v'$ and separate variables: $$\frac{dw}{w} = \left(\frac{1}{x} - 4x\coth x^2\right)\ dx$$ You take it from here. Integrating and then exponentiating (valid since $\ln$ is one-to-one) you will find $w = v' = \frac{x}{\sinh^2(x^2)}$, taking the constant of integration to be $1$. Then $v = -\frac{1}{2}\coth(x^2)$, so $y_2 = v\ \sinh x^2 = -\frac{1}{2}\cosh x^2$. The general solution is $$y(x) = c_1\ \sinh x^2 + c_2\ \cosh x^2$$
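Both claimed solutions of the second equation are quick to verify by plugging in their exact derivatives at a few sample points:

```python
import math

def residual(y, yp, ypp, x):
    # left-hand side of  x*y'' - y' - 4*x^3*y = 0
    return x * ypp - yp - 4 * x**3 * y

max_res = 0.0
for x in (0.3, 0.7, 1.1, 1.6):
    s, c = math.sinh(x*x), math.cosh(x*x)
    # y1 = sinh(x^2):  y1' = 2x cosh(x^2),  y1'' = 2 cosh(x^2) + 4x^2 sinh(x^2)
    r1 = residual(s, 2*x*c, 2*c + 4*x*x*s, x)
    # y2 = cosh(x^2):  y2' = 2x sinh(x^2),  y2'' = 2 sinh(x^2) + 4x^2 cosh(x^2)
    r2 = residual(c, 2*x*s, 2*s + 4*x*x*c, x)
    max_res = max(max_res, abs(r1), abs(r2))
```

Both residuals vanish to rounding error, confirming $\sinh x^2$ and $\cosh x^2$ as solutions.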
{ "language": "en", "url": "https://math.stackexchange.com/questions/954536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
volume of unit ball in high dimension with maximum norm If $z=(x,y)$, $x\in X, y\in Y$, space $X, Y$ could have any distance $d_x, d_y$, and space $z\in Z$ has maximum distance: $$d(z,z')=\max\{d_x(x,x'),d_y(y,y')\}.$$ If the volume of unit ball in $X, Y$ are $c_x, c_y$, the volume of unit ball in $Z$ is $$c_z=c_xc_y.$$ Why this is true? I am lost when different distances involves together. Could any body explain to me the general definition for "volume"(for a unit ball) in product space, like $Z=(X, Y)$? Thank you very much!
The reason is that the unit ball in $Z$ is the Cartesian product of the unit balls of $X$ and $Y$. Indeed, if $z=(x,y)$, then $$ d(z,0)\le 1 \iff d(x,0)\le 1 \text{ and } d(y,0)\le 1 $$ And the volume (I assume Lebesgue measure) of the product of two sets is the product of their volumes; this comes directly from the construction of the measure as the product of one-dimensional Lebesgue measures.
{ "language": "en", "url": "https://math.stackexchange.com/questions/954665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lang Fiber Products STATEMENT: Let $\mathcal{C}$ be a category.A product in $\mathcal{C}_z$ is called the fiber product of $f$ and $g$ in $\mathcal{C}$ and is denoted by $X\times_zY$, together with its natural morphisms on $X,Y$, over $Z$, which are sometimes not denoted by anything. QUESTION: What is $X\times_zY$. Is it just the standard cartesian product of $X$ and $Y$ ?
If $Z$ is a final object then the fibered product coincides with the usual product, however this is not true in general. Note that the definition of the fibered product also includes ''projections'' $\pi_X: X\times_Z Y\to X$ and $\pi_Y: X\times_Z Y\to Y$, as well as two morphisms $\alpha:X\to Z$ and $\beta:Y\to Z$ such that $\alpha \circ \pi_X = \beta\circ \pi_Y$. The fibered product is also required to be universal and is thus defined uniquely up to unique isomorphism using the standard construction. In the category Set for example we have $X\times_Z Y = \{(x,y)\in X\times Y \,|\, \alpha(x)=\beta(y)\}$. Just thought I should add that the fibered product needn't look like a restricted Cartesian product. Consider the case of the category of open sets in a given topology where the morphisms are just the inclusion maps. In this case the fibered product of two sets is their intersection.
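In the category $\mathbf{Set}$ the construction is concrete enough to write down directly; here is a small sketch with arbitrary choices of mine for $X$, $Y$, $Z$, $\alpha$, $\beta$:

```python
# Fiber product in Set: X x_Z Y = {(x, y) : alpha(x) == beta(y)}
X = range(10)
Y = range(10)
Z = range(5)

alpha = lambda x: x % 5        # morphism X -> Z
beta  = lambda y: (2 * y) % 5  # morphism Y -> Z

fiber_product = [(x, y) for x in X for y in Y if alpha(x) == beta(y)]

# The square commutes by construction: alpha after the first projection
# equals beta after the second projection on every element.
commutes = all(alpha(x) == beta(y) for (x, y) in fiber_product)
```

Since $2$ is invertible mod $5$, each of the ten values of $x$ pairs with exactly two values of $y$, giving $20$ elements — a proper subset of the $100$-element Cartesian product.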
{ "language": "en", "url": "https://math.stackexchange.com/questions/954760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $f(x)^{-1}$ used to denote the inverse of a function, and not its reciprocal? Function notation says that any operations applied to a variable inside the parenthesis are applied to the variable before it enters the function, and anything applied to the function as a whole is applied to the entire function or to the result of the function. So you could say that... If $f(x)=x^2$, then $f(3x-2)=(3x-2)^2$, and $3f(x)-3=3x^{2}-3$. I understand that $f^{-1}(x)$ is used to denote the inverse of a function so... $$f^{-1}(x)=\pm\sqrt{x}.$$ But putting something to the $-1$ power gives you its reciprocal like... $$x^{-1}=\frac{x^{-1}}{1}=\frac{1}{x^1}= \frac{1}{can}$$ So would it not make sense that... $$f^{-1}(x)=\frac{1}{x^2}.$$ My question is why $f^{-1}(x)$ is considered the inverse of $f(x)$? Was this notation chosen randomly, or is there some logical explanation for why it is that way?
See: the inverse of $a$, i.e. $a^{-1}$ (where $a$ is any element of some group), means that $a*a^{-1}=e$ where $e$ is the identity w.r.t. the binary operation $*$. I guess you haven't seen group theory yet, i.e. abstract algebra. But the only thing to notice here is that for functions the binary operation is not multiplication but composition, denoted by $\circ$, and the identity is not $1$; rather, the identity is the identity function $e$ s.t. $e(x)=x$ for all $x$. So the inverse of $f$ here means $f^{-1}$ s.t. $f\circ f^{-1}=e$. So if $f(x)=x^2$, what will you compose $x^2$ with to get $e$? Substitute $f^{-1}(x)=\pm\sqrt{x}$ instead of $x$ in $f(x)=x^2$; you will see $f\circ f^{-1}(x)=(\pm\sqrt{x})^2=x$, i.e. $f\circ f^{-1}=e$, the identity function. Just make sure you understand composition of functions; that is the binary operation here, not multiplication. $2^{-1}=1/2$ because the operation is multiplication in $\mathbb{R}$. To your wonder, $2^{-1}=-2$ in $\mathbb{Z}$, as the operation there is addition and the identity is $0$; and it can even be $2$ itself if we consider $2$ as an element of another group, say $\{1,2\}$, with the operation multiplication modulo $3$ (the remainder of $2\times 2$ when divided by $3$ is $1$, so $2^{-1}=2$) — but don't worry about that now. I hope I didn't confuse you. Just worry about composition of functions for now, and understand that the operation on functions is composition, not multiplication.
{ "language": "en", "url": "https://math.stackexchange.com/questions/954849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Exact Differential Equation? I tried to solve this equation so far, since the partial derivative respect to $x$ and $y$ are not exact, I have to find the $u(x)$ to make them exact $e^x \cos(y)dx+\sin(y)dy=0$ Partial derivative of $y = e^x\sin(y)$ partial derivative of $x = -\sin(y)$ Now i have to find the $u(x)$ to make them exact $u(x)(e^x\sin(y))=u'(x)(-\sin(y))+u(x)(-\sin(y))$ $u(x)\sin(y)(e^x+1)=u'(x)-\sin(y)$ $u(x)(e^x+1) = \frac{du}{dx} - 1$ $(e^x + 2)dx = \frac 1 u du $ $e^x + 2x = \ln(u)$ $u = e^{e^x + 2x}$ this $u(x)$ is too strange to make them exact... If anyone can help to understand why it doesn't work.....
You don't need to make this equation exact; it's separable. First subtract $\sin(y)\,dy$ to get $e^x\cos y\,dx=-\sin y\, dy$. Then divide both sides by $\cos y$ to get $e^x\,dx=-\tan y\,dy$. Proceed to integrate both sides.
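Integrating the separated form gives $e^x=\ln|\cos y|+C$, so $e^x-\ln(\cos y)$ is constant along solutions (on a region where $\cos y>0$ and $\sin y\neq 0$). A quick RK4 integration confirms the invariant; the initial condition and step size below are arbitrary choices:

```python
import math

def yprime(x, y):
    # from e^x cos(y) dx + sin(y) dy = 0
    return -math.exp(x) * math.cos(y) / math.sin(y)

def invariant(x, y):
    # integrating e^x dx = -tan(y) dy gives e^x - ln(cos y) = C
    return math.exp(x) - math.log(math.cos(y))

x, y = 0.0, 1.0
E0 = invariant(x, y)
h = 1e-4
for _ in range(3000):          # integrate out to x = 0.3 with classical RK4
    k1 = yprime(x, y)
    k2 = yprime(x + h/2, y + h/2 * k1)
    k3 = yprime(x + h/2, y + h/2 * k2)
    k4 = yprime(x + h, y + h * k3)
    y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    x += h

drift = abs(invariant(x, y) - E0)
```

The invariant drifts by far less than the integration tolerance, so the implicit solution $e^x=\ln|\cos y|+C$ checks out numerically.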
{ "language": "en", "url": "https://math.stackexchange.com/questions/955035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Use the law of logarithms to expand an expression $$\log\sqrt [ 3 ]{ \frac { x+2 }{ x^{ 4 }(x^{ 2 }+4) } } $$ How is this answer incorrect? $$\frac { 1 }{ 3 } [\log(x+2)-(4\log x+\log(x^ 2+4))]$$
With this expansion the domain changes, and that is why the answer is marked incorrect. In the original expression you only need $\frac{x+2}{x^4(x^2+4)}>0$ (the cube root is defined for any real argument), i.e. $x>-2$ and $x\neq 0$, so negative values of $x$ between $-2$ and $0$ are allowed. But the expanded form contains $4\log x$, which requires $x>0$. So the two expressions agree only for $x>0$; writing $\log x^4 = 4\log|x|$ instead keeps the domain unchanged.
{ "language": "en", "url": "https://math.stackexchange.com/questions/955144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find the value of $\sqrt{10\sqrt{10\sqrt{10...}}}$ I found a question that asked to find the limiting value of $$10\sqrt{10\sqrt{10\sqrt{10\sqrt{10\sqrt{...}}}}}$$If you make the substitution $x=10\sqrt{10\sqrt{10\sqrt{10\sqrt{10\sqrt{...}}}}}$ it simplifies to $x=10\sqrt{x}$ which has solutions $x=0,100$. I don't understand how $x=0$ is a possible solution, I know that squaring equations can introduce new, invalid solutions to equations and so you should check the solutions in the original (unsquared) equation, but doing that here doesn't lead to any non-real solutions or contradictions. I was wondering if anyone knows how $x=0$ turns out as a valid solution, is there an algebraic or geometric interpretation? Or is it just a "special case" equation? A similar question says to find the limiting value of $\sqrt{6+5\sqrt{6+5\sqrt{6+5\sqrt{...}}}}$, and making a similar substitution for $x$ leads to $$x=\sqrt{6+5x}$$ $$x^2=6+5x$$ which has solutions $x=-1,6$. In this case though, you could substitute $x=-1$ into the first equation, leading to the contradiction $-1=1$ so you could satisfactorily exclude it. Is there any similar reasoning for the first question? I know this might be a stupid question but I'm genuinely curious :)
Denote the given expression by $x$; then \begin{align} x&=10\sqrt{10\sqrt{10\sqrt{10\sqrt{10\sqrt{\cdots}}}}}\\ &=10\cdot10^{\large\frac{1}{2}}\cdot10^{\large\frac{1}{4}}\cdot10^{\large\frac{1}{8}}\cdot10^{\large\frac{1}{16}}\cdots\\ &=10^{\large1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots}\\ &=10^{\large y} \end{align} where the exponent $y$ is an infinite geometric series whose value is \begin{align} y &=1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+\cdots\\ &=\frac{1}{1-\frac{1}{2}}\\ &=2 \end{align} Therefore \begin{equation} x=10^{\large 2}=100 \end{equation}
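The limiting value can also be checked numerically: iterating $x_{k+1} = 10\sqrt{x_k}$ (each step wraps one more layer of the radical) converges to the fixed point $100$. A minimal sketch in Python; the starting value and iteration count are arbitrary choices:

```python
x = 10.0                  # innermost value; any positive start works
for _ in range(60):
    x = 10 * x**0.5       # wrap one more layer: x -> 10*sqrt(x)
print(x)                  # converges to 100
```

In log-space the iteration is $y_{k+1} = 1 + y_k/2$ with $y = \log_{10}x$, whose fixed point is $2$; the error halves at every step, which is why so few iterations suffice, and which also shows the other fixed point $0$ is repelling.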
{ "language": "en", "url": "https://math.stackexchange.com/questions/955190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 7, "answer_id": 4 }
Prove that $ \int_0^{\pi} \frac{(\cos x)^2}{1 + \cos x \sin x} \,\mathrm{d}x =\int_0^{\pi} \frac{(\sin x)^2}{1 + \cos x \sin x} \,\mathrm{d}x $ In a related question the following integral was evaluated $$ \int_0^{\pi} \frac{(\cos x)^2}{1 + \cos x \sin x} \,\mathrm{d}x =\int_0^{\pi} \frac{\mathrm{d}x/2}{1 + \cos x \sin x} =\int_0^{2\pi} \frac{\mathrm{d}x/2}{2 + \sin x} \,\mathrm{d}x =\int_{-\infty}^\infty \frac{\mathrm{d}x/2}{1+x+x^2} $$ I noticed something interesting, namely that $$ \begin{align*} \int_0^{\pi} \frac{(\cos x)^2}{1 + \cos x \sin x} \,\mathrm{d}x & = \int_0^{\pi} \frac{(\sin x)^2}{1 + \cos x \sin x} \,\mathrm{d}x \\ & = \int_0^{\pi} \frac{(\cos x)^2}{1 - \cos x \sin x} \,\mathrm{d}x = \int_0^{\pi} \frac{(\sin x)^2}{1 - \cos x \sin x} \,\mathrm{d}x \end{align*} $$ The same trivially holds if the upper limits are changed to $\pi/2$ as well ($x \mapsto \pi/2 -u$). But I had problems proving the first equality. Does anyone have some quick hints?
$$ \frac{\cos(\pi/2-x)^2}{1+\cos(\pi/2-x)\sin(\pi/2-x)} = \frac{\sin(x)^2}{1+\sin(x)\cos(x)} $$
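The symmetry can also be sanity-checked numerically; a quick midpoint-rule comparison of the two integrals (the grid size is an arbitrary choice):

```python
import math

def midpoint(f, a, b, n=100000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f1 = lambda x: math.cos(x)**2 / (1 + math.cos(x) * math.sin(x))
f2 = lambda x: math.sin(x)**2 / (1 + math.cos(x) * math.sin(x))
I1 = midpoint(f1, 0, math.pi)
I2 = midpoint(f2, 0, math.pi)
print(I1, I2)   # the two values agree
```

Note the denominator $1+\sin x\cos x$ stays in $[1/2, 3/2]$, so both integrands are smooth on $[0,\pi]$.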
{ "language": "en", "url": "https://math.stackexchange.com/questions/955294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 4 }
Spaces where all compact subsets are closed All compact subsets of a Hausdorff space are closed and there are T$_1$ spaces (also T$_1$ sober spaces) with non-closed compact subspaces. So I looking for something in between. Is there a characterization of the class of spaces where all compact subsets are closed? Or at least, is there a name for them?
To my knowledge there is no characterization of this class of spaces. I believe the reason is that these spaces need not be Hausdorff, and many people have not studied them, since one usually works with classes of spaces that are at least Hausdorff. The following paragraphs may be useful for you; they are copied from page 221 of the Encyclopedia of General Topology: A result taught in a first course in topology is that a compact subspace of a Hausdorff space is closed. A Hausdorff space with the property of being closed in every Hausdorff space containing it as a subspace is called H-closed (short for Hausdorff-closed). H-closed spaces were introduced in 1924 by Alexandroff and Urysohn. They produced an example of an H-closed space that is not compact, showed that a regular H-closed space is compact, characterized a Hausdorff space as H-closed precisely when every open cover has a finite subfamily whose union is dense, and posed the question of which Hausdorff spaces can be densely embedded in an H-closed space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/955382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
Any shorter way to solve trigonometric problem? If $10 \sin^4\theta + 15 \cos^4 \theta=6$, then find the value of $27 \csc^2 \theta + 8\sec^2 \theta$ I know the normal method to solve this problem, in which we need to multiply the L.H.S. of $10 \sin^4\theta + 15 \cos^4 \theta=6$ by $(\sin^2\theta + \cos^2\theta)^2$ and then simplify. It is the kind of simplification in which we can commit silly mistakes. So, is there any shorter way to solve this trigonometric problem?
How can $20\sin^2\theta + 15\cos^2\theta = 6$? $20\sin^2\theta + 15 \cos^2\theta = 15 + 5\sin^2\theta$, implying that $5 \sin^2\theta = -9$, but $\sin^2\theta$ should be nonnegative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/955698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Mean and Variance of X with possible outcomes I roll a six-sided die until I get a 6. Then I roll it some more until I get an even number. Let X be the total number of rolls. So here are some possible outcomes with the resulting value of X: 24 1 2 6 1 5 4 : X = 8 3 6 4 : X = 3 3 4 6 3 1 1 2 : X = 7 1 5 4 6 6 : X = 5 Find the mean and variance of X. I believe the best way to approach this is to write X as the sum of two (or more) random variables. But I am just a little confused on where to begin.
Expressing $X$ as a sum of two simple random variables is indeed the best approach. Let $U$ be the number of tosses until we get a $6$, and $V$ the number of additional tosses until we get an even number. Then $X=U+V$. Note that $U$ and $V$ each have geometric distribution (with different parameters), and are independent. The mean of $X$ is the sum of the means of $U$ and of $V$, and (by independence) the variance of $X$ is the sum of the variances of $U$ and $V$.
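Concretely, with the parametrization where a geometric variable counts the number of trials up to and including the first success, $U$ has $p=1/6$ and $V$ has $p=1/2$; a quick computation of the resulting mean and variance (a sketch, assuming that parametrization):

```python
p_u, p_v = 1/6, 1/2                 # success probabilities for U and V
mean_u, mean_v = 1/p_u, 1/p_v       # E[Geom(p)] = 1/p
var_u = (1 - p_u) / p_u**2          # Var[Geom(p)] = (1-p)/p^2
var_v = (1 - p_v) / p_v**2
mean_x = mean_u + mean_v            # E[X] = E[U] + E[V]
var_x = var_u + var_v               # independence: Var[X] = Var[U] + Var[V]
print(mean_x, var_x)                # 8.0 and 32.0
```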
{ "language": "en", "url": "https://math.stackexchange.com/questions/955796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do we set $u_n=r^n$ to solve recurrence relations? This is something I have never found a convincing answer to; maybe I haven't looked in the right places. When solving a linear difference equation we usually put $u_n=r^n$, solve the resulting polynomial, and then use these to form a complete solution. This is how to get $F_n=\frac{1}{\sqrt{5}}(\phi^n-(-\phi)^{-n})$ from $F_{n+1}=F_n+F_{n-1}$ and $F_0=0, F_1=1$ for example. I can see that this works, but I don't understand where it comes from or why it does indeed give the full solution. I have read things about eigenfunctions and other things but they didn't explain the mechanics behind this very clearly. Could someone help me understand this?
* *if $u_n=f(n)$ satisfies a linear recurrence relation (without looking at the starting condition(s)) then $u_n=cf(n)$ also satisfies the recurrence relation. *if $u_n=f_1(n)$ and $u_n=f_2(n)$ both satisfy a linear recurrence relation (again: without the starting condition(s)) then $u_n=f_1(n)+f_2(n)$ also satisfies the recurrence relation. *so if $u_n=f_1(n), \ldots, u_n=f_k(n)$ all satisfy the recurrence, then $u_n=c_1f_1(n)+\cdots+c_kf_k(n)$ also satisfies the recurrence relation *Now substituting $u_n=r^n$ into a recurrence of order $k$ gives a polynomial equation of degree $k$. If this polynomial has $k$ different roots $r_1,\ldots,r_k$ we have $k$ different functions $f_1(n)=r_1^n,\ldots,f_k(n)=r_k^n$ satisfying the recurrence. The linear combination of them has $k$ coefficients $c_1,\ldots,c_k$ to be set, just enough to make the linear combination fulfill the starting condition(s). So this shows why the Ansatz $u_n=r^n$ works. How was this Ansatz originally guessed? Well, for a recurrence relation of order 1, $u_{n+1}=r u_n$, you naturally get $u_n=u_0 r^n$.
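As a sanity check of the method, the Fibonacci closed form quoted in the question can be compared against the recurrence directly (a minimal sketch):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2          # root of r^2 = r + 1, the characteristic equation

def fib_closed(n):
    # F_n = (phi^n - (-phi)^(-n)) / sqrt(5)
    return (phi**n - (-phi)**(-n)) / sqrt(5)

a, b = 0, 1                      # F_0, F_1
for n in range(1, 25):
    assert abs(fib_closed(n) - b) < 1e-6
    a, b = b, a + b
print("closed form matches the recurrence")
```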
{ "language": "en", "url": "https://math.stackexchange.com/questions/955875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Prove that the limit, $\displaystyle\lim_{x \to 1^+} \frac{x-3}{3 - x - 2x^2}$, exists. For each of the following, use definitions (rather than limit theorems) to prove that the limit exists. Identify the limit in each case. d)$\displaystyle\lim_{x \to 1^+} \frac{x-3}{3 - x - 2x^2}$ Proof: We need to prove the limit exists as $x$ approaches $1$ from the right. Let $M$ be a real number. Then we need to show $f(x) > M$, without loss of generality, suppose $M > 0$. Then as $x \to 1^+$, $x-3 \to -2$ and observe the denominator approaches $x \to 0$ as $x \to 1^+$. Also note the parabola has roots $\frac{-1}{2}$ and $1$. Therefore, let $0 < \delta < 1$ such that$ 1 < x < 1 + \delta$ thus, $0 < 3 - x - 2x^2 < \frac{1}{M}$ That is $\frac{1}{3 - x - 2x^2} > M > 0$ then $\frac{x-3}{3 - x - 2x^2} < M$ for all $1 < x < 1 + \delta$ Can someone please help me finish? If it needs work, please help me. Thank you very much.
To prove that the limit doesn't exist, it suffices to show that for any $M > 0$, there exists a $\delta$ such that for $0 < x - 1 < \delta$, $|f(x)| > M$. (Then, $f$ is unbounded as $x \rightarrow 1^{+}$.) One way to reverse engineer a proof is find an expression in terms of $\delta$ that will bound $|f(x)|$ as $\delta \rightarrow 0$; using that expression, you can then set $M$ equal to your bound, and deduce what $\delta$ would have to be so that for $0 < x - 1 < \delta$, $|f(x)| > M$. In this process, it is convenient to make certain simplifying assumptions (without loss of generality) to make deducing a bound easier. For instance, you might find it helpful to deduce a bounding expression in terms of $\delta$ that holds when $0 < \delta < 1/4$. (Or, instead of 1/4, pick a convenient upper bound that helps you derive an upper bound; I found 1/4 was convenient for this purpose.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/955969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If two groups are generated by the same number of generators, and the generators have the same orders, are the groups isomorphic? If, say, for two groups G and H, G = < a1, a2, ... , a_n > and H = < b1, b2, ... , b_n >, such that |a_i| = |b_i| for all i from 1 to n, is G isomorphic to H? If so, what is the proof of this? I know that for n = 1, then H and G are a cyclic group and are isomorphic to Z/|a|Z. Does this extend to larger generating sets?
No; for n=2, we get two distinct groups, where $1$ is the identity $$G=<a,b | a^2 = 1, aba^{-1}b = 1, b^2=1>$$ $$H=<a,b | a^2 = 1, b^2 = 1>$$ Notice that $G$ is finite - it is $\mathbb{Z_2\times Z_2}$. However, $H$ can be described as the set of words which alternate $a$ and $b$ (with the appropriate multiplication operator) - which is infinite.
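The difference can be made concrete with a small word-reduction sketch: in $H$ the only relations are $a^2=b^2=1$, so reduced alternating words never collapse, while in $G$ the extra commutation relation forces $(ab)^2=1$. The string encoding below is a hypothetical choice for illustration, not part of the original argument:

```python
def reduce_word(w):
    # repeatedly cancel adjacent equal letters, using only a^2 = b^2 = 1
    # (a stack handles nested cancellations, e.g. 'abba' -> '')
    out = []
    for c in w:
        if out and out[-1] == c:
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

# in H, the alternating words ab, abab, ababab, ... are all already reduced
# and pairwise distinct, so H is infinite
words = {reduce_word('ab' * k) for k in range(1, 10)}
print(len(words))   # 9 distinct elements
```

In $G$, by contrast, $ab=ba$ gives $(ab)^2 = a^2b^2 = 1$, so the element $ab$ has order 2 and the whole group has only four elements.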
{ "language": "en", "url": "https://math.stackexchange.com/questions/956050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Are there other accumulation functions that holds $a(n-t)={a(n) \over a(t)}$? This might be a beginner's question regarding accumulation methods and their functions, but so far I have learned that compound interest satisfy $$a(n-t)={a(n) \over a(t)}$$ Which allows nice results such as $$s_{\bar{n}\rceil} = (1+i)^n a_{\bar{n}\rceil}$$ I also understand that the first property does not hold for simple interest. But out of curiosity, are there accumulation functions that are not compound interests that holds the same property? If so, does it show up in real life actuarial problems?
You can rearrange your equation by multiplying through to get $a(n-t) a(t) = a(n)$. Let $n = s+t$ and you get $a(s) a(t) = a(s+t)$. So you want to find a function satisfying $a(s) a(t) = a(s+t)$ for all choices of $s$ and $t$. Now, you can rewrite this to get $a(s) a(s) = a(2s)$, and then $a(s) a(2s) = a(3s)$ so $a(3s) = a(s)^3$, and in general $a(ms) = a(s)^m$ for any real number $s$ and integer $m$. If you let $s = 1/m$, then you have $a(1) = a(1/m)^m$ and so $a(1/m) = a(1)^{1/m}$. Therefore, for any integers $p$ and $q$, you have $a(p/q) = a(1/q)^p = (a(1)^{1/q})^p = a(1)^{p/q}$. Thus if we know $a(1)$, we know $a(p/q)$ for any rational number $p/q$. If you assume that the function $a$ is continuous then you can prove that any function satisfying your initial rule satisfies $a(x) = a(1)^x$ -- which, to go back to your initial question, is just the rule for compound interest.
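The distinction between the two accumulation functions is easy to check numerically; the rate and times below are arbitrary illustrative values:

```python
def compound(t, i=0.05):   # a(t) = (1+i)^t
    return (1 + i) ** t

def simple(t, i=0.05):     # a(t) = 1 + i*t
    return 1 + i * t

n, t = 7.0, 3.0
print(compound(n - t), compound(n) / compound(t))   # equal
print(simple(n - t), simple(n) / simple(t))         # not equal
```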
{ "language": "en", "url": "https://math.stackexchange.com/questions/956119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Sum the series (real analysis) $$\sum_{n=1}^\infty {1 \over n(n+1)(n+2)(n+3)(n+4)}$$ I tried to sum the above term as they way I can solve the term $\sum_{n=1}^\infty {1 \over (n+3)}$ by transforming into ${3\over n(n+3)} ={1\over n}-{1\over(n+3)}$ but I got stuck while trying to transform $12\over n(n+1)(n+2)(n+3)(n+4)$ into something solvable.
By working it out, by various methods, it is seen that \begin{align} \frac{1}{n(n+1)(n+2)(n+3)(n+4)} = \frac{1}{4!} \, \sum_{r=0}^{4} (-1)^{r} \binom{4}{r} \, \frac{1}{n+r} \end{align} Now, \begin{align} S &= \sum_{n=1}^{\infty} \frac{1}{n(n+1)(n+2)(n+3)(n+4)} \\ &= \frac{1}{4!} \, \sum_{r=0}^{4} (-1)^{r} \binom{4}{r} \, \sum_{n=1}^{\infty} \frac{1}{n+r} \\ &= \frac{1}{4!} \left[ \zeta(1) - 4 \left( \zeta(1) - 1\right) + 6 \left( \zeta(1) - 1 - \frac{1}{2} \right) - 4 \left( \zeta(1) - 1 - \frac{1}{2} - \frac{1}{3} \right) + \left(\zeta(1) - 1 - \frac{1}{2} - \frac{1}{3} - \frac{1}{4} \right) \right] \\ &= \frac{1}{4!} \left[ 4 - 6 \left( 1 + \frac{1}{2} \right) + 4 \left( 1 + \frac{1}{2} + \frac{1}{3} \right) - \left( 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} \right) \right] \\ S &= \frac{1}{4 \cdot 4!} \end{align}
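The value $1/(4\cdot 4!) = 1/96$ can be confirmed with exact rational arithmetic. The series telescopes, and the remainder after $N$ terms is $\frac{1}{4(N+1)(N+2)(N+3)(N+4)}$ (a sketch using that telescoped remainder):

```python
from fractions import Fraction

N = 500
s = sum(Fraction(1, n*(n+1)*(n+2)*(n+3)*(n+4)) for n in range(1, N + 1))
tail = Fraction(1, 4*(N+1)*(N+2)*(N+3)*(N+4))   # telescoped remainder
print(s + tail)        # 1/96 exactly
```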
{ "language": "en", "url": "https://math.stackexchange.com/questions/956221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What do limits of functions of the form $te^t$ have to do with l'Hopital's rule? I have an improper function that I have to integrate from some number to infinity. Once integration is done, the function is of the form $te^t$. What I'm wondering is what does this have to do with l'Hopital's rule? From reading my book, I see the following: We know that $e^t \to 0$ as $t \to -\infty$, and by l'Hopital's rule we have $$\lim_{t\to-\infty} te^t = \lim_{t\to-\infty} \frac{t}{e^{-t}} = \lim_{t\to-\infty} \frac{1}{-e^{-t}} = \lim_{t\to-\infty} -e^t = 0.$$ I know what l'Hopital's rule is The limit of a quotient of functions is equal to the limit of the quotient of their derivatives. I don't understand what this has to do with the explanation above. Can somebody help me understand this please?
The invocation of l'Hôpital's rule is $$ \lim_{t\to-\infty} \frac{t}{e^{-t}} = \lim_{t\to-\infty} \frac{1}{-e^{-t}}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/956315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to evaluate the limit $\lim_{x\to 0} ((1+2x)^{1/3 }-1)/ x$ without using the l'Hospital's rule? $$\lim_{x\to 0} \frac{(1+2x)^{\frac{1}{3}}-1}{x}.$$ Please do not use the l'hospital's rule as I am trying to solve this limit without using that rule... to no avail...
Hint: Put $2x+1 = y^{3}$. And note that as $x \to 0 \implies y \to 1$. Then the limit becomes \begin{align*} \lim_{x \to 0} \frac{(2x+1)^{1/3}-1}{x} &= \lim_{y \to 1} \frac{y-1}{\frac{y^{3}-1}{2}}\\ &= \lim_{y \to 1} \frac{2}{y^{3}-1} \times (y-1)\\ &= \lim_{y \to 1} \frac{2}{y^{2}+y+1} = \frac{2}{3} \end{align*}
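A quick numerical check of the limit value $2/3$ (the sample point is an arbitrary small $x$):

```python
f = lambda x: ((1 + 2*x)**(1/3) - 1) / x
print(f(1e-6))   # close to 2/3
```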
{ "language": "en", "url": "https://math.stackexchange.com/questions/956371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
Strong induction on a summation of recursive functions (Catalan numbers) I've been stuck on how to proceed with this problem. All that's left is to prove this with strong induction: $$\forall n \in \mathbb{N}, S(n) = \sum_{i=0}^{n-1} S(i)*S(n - 1 - i)$$ Some cases: S(0) = 1, S(1) = 1, S(2) = 2, S(3) = 5, S(4) = 14. As I understand it, I should assume that $\forall k \in \mathbb{N}, k < n \implies S(k) = ...$. But then since $n-1 < n$, it must be true for $n-1$, so we'll have... $$S(n-1) = \sum_{i=0}^{(n-1) - 1} S(i)*S((n-1) - 1 - i)$$ I can't figure out how to get from that to the conclusion though. Any guidance would be appreciated.
You neglect to mention that $S(n)$ is the $n$th Catalan number. You're probably trying to prove some formula, say $S(n) = \varphi(n)$. For given $n$, you already know (this is the strong induction hypothesis) that $S(k) = \varphi(k)$ for $k < n$. Therefore $$ S(n) = \sum_{i=0}^{n-1} S(i) S(n-1-i) = \sum_{i=0}^{n-1} \varphi(i) \varphi(n-1-i). $$ It remains to show that the right-hand side equals $\varphi(n)$, and then you can deduce that $S(n) = \varphi(n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/956446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conjunctive Normal Form Is a statement of the form $\phi \vee \psi \vee \xi$ considered to be in its conjuntive normal form (CNF), given that $\phi \vee \psi$ is considered to be in CNF? Example: While converting $\phi \wedge \psi \rightarrow \xi$ to its CNF, we get $\neg(\phi \wedge \psi) \vee \xi$ which gives $(\neg \phi) \vee \neg(\psi) \vee \xi$. Is this statement in its CNF?
Here's a definition of a clause adapted from Merrie Bergmann's An Introduction to Many-Valued and Fuzzy Logic p. 20: * *A literal (a letter or negation of a letter) is a clause. *If P and Q are clauses, then (P $\lor$ Q) is a clause. Definition of conjunctive normal form. * *Every clause is in conjunctive normal form. *If P and Q are in conjunctive normal form, then (P$\land$Q) is in conjunctive normal form. So, even though ϕ∨ψ∨ξ is not in conjunctive normal form (note the parentheses), ((ϕ∨ψ)∨ξ) is in conjunctive normal form and (ϕ∨(ψ∨ξ)) is also in conjunctive normal form. Demonstration: Suppose that ϕ, ψ, and ξ are literals. Since ϕ and ψ are literals, and literals are clauses, by definition of a clause and detachment, (ϕ∨ψ) is a clause. Since ξ is a literal, ξ is a clause. Thus, by definition of a clause and detachment, ((ϕ∨ψ)∨ξ) is a clause. Since every clause is in conjunctive normal form, ((ϕ∨ψ)∨ξ) is in conjunctive normal form. One can similarly show that (ϕ∨(ψ∨ξ)) is a clause by building it up from its literals using part 2. of the above definition of a clause, and then invoke the definition of conjunctive normal form to infer that (ϕ∨(ψ∨ξ)) is in conjunctive normal form.
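The two inductive definitions translate directly into a recursive checker; a minimal sketch (the nested-tuple encoding of formulas is a hypothetical choice, not from the source):

```python
# formulas as nested tuples:
#   ('lit', name), ('neg', literal), ('or', A, B), ('and', A, B)
def is_literal(t):
    return t[0] == 'lit' or (t[0] == 'neg' and t[1][0] == 'lit')

def is_clause(t):
    # clause: a literal, or a disjunction of two clauses
    return is_literal(t) or (t[0] == 'or' and is_clause(t[1]) and is_clause(t[2]))

def is_cnf(t):
    # CNF: a clause, or a conjunction of two CNF formulas
    return is_clause(t) or (t[0] == 'and' and is_cnf(t[1]) and is_cnf(t[2]))

phi, psi, xi = ('lit', 'p'), ('lit', 'q'), ('lit', 'r')
print(is_cnf(('or', ('or', phi, psi), xi)))   # ((φ∨ψ)∨ξ): True
print(is_cnf(('or', phi, ('or', psi, xi))))   # (φ∨(ψ∨ξ)): True
```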
{ "language": "en", "url": "https://math.stackexchange.com/questions/956522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Task "Inversion" (geometry with many circles) The incircle $\omega$ of triangle $ABC$, with center $I$, touches $AB, BC, CA$ at points $C_{1}, A_{1}, B_{1}$. The circumcircle of triangle $AB_{1}C_{1}$ intersects the circumcircle of $ABC$ a second time at point $K$. Point $M$ is the midpoint of $BC$, and $L$ is the midpoint of $B_{1}C_{1}$. The circumcircle of triangle $KA_{1}M$ intersects $\omega$ a second time at point $T$. Prove that the circumcircles of triangles $KLT$ and $LIM$ are tangent.
HINT for a less tour-de-force approach (impressive though MvG, or his software, may be) The obvious idea is to invert (as the question hints). The first thing to try is clearly the incircle. Denote the inverse point of $X$ as $X'$. So the line $B_1C_1$ becomes the circle $AB_1C_1$ and hence $L'=A$. Like $A'=L$, $K'$ lies on the line $B_1C_1$ and the inverse of the circumcircle, which is the circumcircle of the three midpoints of the sides of $A_1B_1C_1$. $T$ lies on the incircle, so $T'=T$. Thus the inverse of the circle $KLT$ is the circle $K'AT$ and the inverse of the circle $LIM$ is the line $AM'$. So we need to show that $AM'$ is tangent to the circle $K'AT$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/956600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
PDF of an unknown probability after a successful test. Let $X$ be a random variable having the possible outcomes $\{0, 1\}$ representing failure or success, with unknown probability. We test $X$ just one time and the outcome is success. Let $X(1) = p \in [0, 1]$, ie, $p$ is the probability that another test of $X$ will be successful. Is it possible to find a probability density function for $p$? If so, how could this distribution be described? Intuitively it would seem that $P(p = 0) = 0$, and $P(p > \frac{1}{2}) > P(p < \frac{1}{2})$, for example. This question is very similar to the unanswered Unknown Probabilities, but that question was asked over a year ago and takes the problem in a slightly different direction.
Okay, I think I have an idea here. Let P(1) denote the probability of the first outcome being a success. If $p$ is given then $P(1)=p$, but since $p$ is unknown and different values of $p$ are mutually exclusive events with probabilities $f(p)dp$: $$ P(1) = \int p{f(p) dp} = \int {pd(F(p))}\\ = pF(p) - \int{F(p)dp} $$ Let $g(p)= \int Fdp$, and $P(1)=a$, constant.Then $g'-g/p=a/p$. I multiply the homogenous equation by an integration factor $M(p)=1/p$ to get $$ g_h'-\frac{1}{p}g_h=0 \\ \frac{1}{p}g_h'-\frac{1}{p^2}g_h=0 \\ \frac {d}{dp} \left( \frac{g_h}{p}\right) = 0 \\ g_h = cp $$ The obvious partial solution is $g_p(p)=a$ so $g= g_h+g_p = cp+a$. This is where I'm stuck. $F(p)=g'(p)=c$, but this doesn't make sense. I think my initial assertion that $P(1) = \int p{f(p) dp}$ is correct, but I can't spot the error in the math.
{ "language": "en", "url": "https://math.stackexchange.com/questions/956699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How did we find the solution? In my lecture notes, I read that "We know that $$x^2 \equiv 2 \pmod {7^3}$$ has as solution $$x \equiv 108 \pmod {7^3}$$" How did we find this solution? Any help would be appreciated!
Check this proof of Hensel's Lemma, which is basically the justification/proof of why what André did works, to go over the following: $$x^2=2\pmod 7\iff f(x):=x^2-2=0\pmod 7\iff x=\pm 3\pmod 7$$ Choose now one of the roots, say $\;r=3\;$ , and let us "lift" it to a solution $\;\pmod{7^2}\;$ (check this is possible since $\;f'(3)=6\neq 0\pmod 7\;$) $$s:=3+\left(-\frac{7}{7}\cdot 6^{-1}\right)\cdot 7=3-(-1)^{-1}\cdot7=3+7=10$$ and indeed $$f(10)=10^2-2=98=0\pmod{49=7^2}$$ Again: $$s=10+\left(-\frac{98}7\cdot20^{-1}\right)\cdot7=10+14\cdot22\cdot7=2,166=108\pmod{7^3=343}$$ and indeed $$f(108)=108^2-2=11,662=0\pmod{7^3}$$ Of course, you can repeat the above with every root in each stage.
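The lifting steps above can be automated; a minimal sketch of one Hensel step for $f(x)=x^2-2$ (it uses `pow(..., -1, p)` for the modular inverse, which requires Python 3.8+):

```python
def lift(r, p, k):
    # given r with r^2 ≡ 2 (mod p^k) and f'(r) = 2r invertible mod p,
    # return a root of x^2 - 2 mod p^(k+1)
    f_r = r*r - 2
    pk = p**k
    t = (-(f_r // pk) * pow(2*r, -1, p)) % p
    return r + t * pk

r = 3                          # 3^2 ≡ 2 (mod 7)
r = lift(r, 7, 1)              # root mod 7^2
r = lift(r, 7, 2)              # root mod 7^3
print(r, (r*r - 2) % 7**3)     # 108 0
```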
{ "language": "en", "url": "https://math.stackexchange.com/questions/956895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
How to compute the induced matrix norm $\| \cdot \|_{2,\infty}$ The induced norm of the matrix $A$ as a map from $(\mathbb R^n , \| \cdot \|_p)$ to $(\mathbb R^n, \| \cdot \|_q)$ is given by $$ \| A \|_{p,q} = \sup_{x\in\mathbb{R}^n\setminus \{0\}} \frac{\|Ax\|_q}{\|x\|_p}.$$ I would like to compute this norm for $p,q\in\{1,2,\infty\}$. Thanks to Tom's answer to my previous question, the results for most of the cases can be found in the paper On the Calculation of the $l_2\to l_1$ Induced Matrix Norm. The only case missing is $\| \cdot \|_{2,\infty}$. Is this still an open problem? If it is, then is there any result on finding a tight and easy-to-compute upper bound on the norm? Thank you in advance.
You have, for $x$ with $\|x\|_2=1$ and using Cauchy-Schwarz, $$ \|Ax\|_\infty=\max_k\,\left|\sum_{j=1}^nA_{kj}x_j\right|\leq\max_k\left(\sum_{j=1}^n|A_{kj}|^2\right)^{1/2}=\max_k \|R_k(A)\|_2, $$ where $R_k(A)$ is the $k^{\rm th}$ row of $A$. Now suppose that we fix $k_0$ such that $\|R_{k_0}(A)\|_2$ is maximum among the rows. Let $x_0=R_{k_0}(A)/\|R_{k_0}(A)\|_2$. Then $$ \|Ax_0\|_\infty=\max_k\left|\sum_{j=1}^nA_{kj}\frac{A_{k_0j}}{\|R_{k_0}(A)\|_2}\right| \geq\left|\sum_{j=1}^nA_{k_0j}\frac{A_{k_0j}}{\|R_{k_0}(A)\|_2}\right| =\frac{\|R_{k_0}(A)\|_2^2}{\|R_{k_0}(A)\|_2}=\|R_{k_0}(A)\|_2, $$ where the inequality comes from keeping only the term $k=k_0$. Combined with the upper bound above, this gives $\|Ax_0\|_\infty=\|R_{k_0}(A)\|_2$. So $$ \|A\|_{2,\infty}=\max_k\|R_k(A)\|_2. $$
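The formula is easy to verify numerically on a small example (the matrix below is an arbitrary choice); the bound is attained at the normalized row of largest 2-norm:

```python
import math
import random

A = [[1.0, -2.0, 0.5],
     [3.0,  1.0, -1.0],
     [0.0,  2.0,  2.0]]

row_norms = [math.sqrt(sum(a*a for a in row)) for row in A]
bound = max(row_norms)                      # claimed value of ||A||_{2,inf}

def inf_norm_Ax(x):
    return max(abs(sum(a*xi for a, xi in zip(row, x))) for row in A)

# equality at x0 = R_{k0}(A) / ||R_{k0}(A)||_2
k0 = row_norms.index(bound)
x0 = [a / bound for a in A[k0]]
print(inf_norm_Ax(x0), bound)               # equal (up to rounding)

# and no random unit vector exceeds the bound
random.seed(0)
for _ in range(1000):
    v = [random.gauss(0, 1) for _ in range(3)]
    nv = math.sqrt(sum(c*c for c in v))
    assert inf_norm_Ax([c / nv for c in v]) <= bound + 1e-12
```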
{ "language": "en", "url": "https://math.stackexchange.com/questions/956988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Cantor's Theorem (surjection vs bijection) Let me state Cantor's Theorem first: Given any set $A$, there does not exist a function $f:A \rightarrow P(A)$ that is surjective. I understand the proof of this theorem, but I'm wondering why it's enough to show that the function $f$ is not surjective, rather than showing it is not bijective? Can't you map one element of $A$ to multiple elements of $P(A)$ such that you cover the whole set? Shouldn't it be that you can't produce a bijective function?
A surjective map $f: A \rightarrow B$ maps each element of $A$ to exactly one element of $B$, in such a way that every element of $B$ is mapped to by at least one element of $A$. So the Cantor proof does show that there is no surjective mapping: a function (surjective or otherwise) is not allowed to map one element of $A$ to multiple elements of $P(A)$, so the covering strategy you describe is not available.
{ "language": "en", "url": "https://math.stackexchange.com/questions/957068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding the integer solutions of an equation $ 3\sqrt {x + y} + 2\sqrt {8 - x} + \sqrt {6 - y} = 14 $ . I already solved this using the Cauchy–Schwarz inequality and got $x=4$ and $y=5$. But I'm sure there is a prettier, simpler solution to this and I was wondering if anyone could suggest one.
Using the Cauchy-Schwarz inequality is a very clever move: Letting $${\bf a}:=(3,2,1), \quad{\bf b}:=\bigl( \sqrt{x+y},\>\sqrt{8-x},\>\sqrt{6-y}\bigr)\tag{1}$$ we get $$3\sqrt{x+y}+2\sqrt{8-x}+\sqrt{6-y}={\bf a}\cdot{\bf b}\leq|{\bf a}|\>|{\bf b}|=\sqrt{14}\>\sqrt{14}\ ,$$ with equality iff ${\bf b}=\lambda{\bf a}$ for some $\lambda\geq0$ (note that ${\bf a}$ has all components $>0$, and that the components of ${\bf b}$ are $\geq0$). This implies $$x+y=9\lambda^2,\quad 8-x=4\lambda^2,\quad 6-y=\lambda^2\ ,$$ which immediately implies $\lambda=1$ (add the three equations: $14=14\lambda^2$), and from $(1)$ we then get $x=4$, $y=5$. The above argument shows that $(4,5)$ is the only real solution (whether integer or not) of the given equation. Using some algebraic elimination procedure would have led to repeated squarings of the given equation, and only after a lot of computation would one arrive at the simple outcome you have observed at little cost.
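A brute-force scan over integer candidates confirms that $(4,5)$ is the only integer solution (a sketch; the search window is an arbitrary finite box respecting the domain constraints):

```python
import math

sols = []
for x in range(-20, 9):           # need 8 - x >= 0, i.e. x <= 8
    for y in range(-20, 7):       # need 6 - y >= 0, i.e. y <= 6
        if x + y >= 0:            # need x + y >= 0
            v = 3*math.sqrt(x + y) + 2*math.sqrt(8 - x) + math.sqrt(6 - y)
            if abs(v - 14) < 1e-9:
                sols.append((x, y))
print(sols)   # [(4, 5)]
```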
{ "language": "en", "url": "https://math.stackexchange.com/questions/957197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prime ideals in $k[x,y]/(xy-1)$. Let $k$ be a field. Let $f$ be the injective ring homomorphism $$ f:k[x] \rightarrow k[x,y]/(xy-1)$$ obtained as the composition of the inclusion $k[x] \subset k[x,y]$ and the natural projection map $ \pi:k[x,y] \rightarrow k[x,y]/(xy-1)$. Prove that there isn't any prime ideal in the ring $k[x,y]/(xy-1)$ whose contraction in $k[x]$ is the prime ideal $(x)$. Is there any prime ideal in $k[x,y]/(xy-1)$ whose contraction is $(x-1)$? Thanks! :)
HINT: $k[x,y]/(xy-1)$ is naturally isomorphic to the ring of fractions $k[x][\frac{1}{x}] = S^{-1}\ k[x]$, where $S= \{1,x,x^2, \ldots\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/957302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Integrate cos (lnx) dx Integrating by parts: I'm having a hard time choosing the $u$, $du$, $v$ and $dv$... I gave it a shot. $u = \ln x \implies du = 1/x \ dx$ $v= \ ?$ $dv = \cos \ dx$
I think you're better off using $u$-substitution here. Setting $\ln x = u$, you get $du = \frac{dx}{x} = \frac{dx}{e^u}$, and so, $dx = e^u\,du$. Then, your integral becomes $$\int \cos\left( \ln x \right) \, dx = \int e^u\cos u \, du,$$ which you can evaluate by using integration by parts twice.
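Carrying the two integrations by parts through gives (up to a constant) $F(x)=\frac{x}{2}\bigl(\sin(\ln x)+\cos(\ln x)\bigr)$; a quick finite-difference check that $F'(x)=\cos(\ln x)$ (a sketch, with arbitrary sample points):

```python
import math

def F(x):
    # candidate antiderivative of cos(ln x)
    return 0.5 * x * (math.sin(math.log(x)) + math.cos(math.log(x)))

h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(numeric - math.cos(math.log(x))) < 1e-6
print("F'(x) = cos(ln x) verified at the sample points")
```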
{ "language": "en", "url": "https://math.stackexchange.com/questions/957390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Integrate $(1+7x)^{1/3}$ from $0$ to $1$ Integrate $(1+7x)^{1/3}$ from $0$ to $1$ So using substitution, I'm able to get to $$\frac17 \frac34 u ^{4/3}$$ Pretty sure that part is right, but I'm getting stuck after that.
It looks like you did the integration correctly. Now you just have to evaluate your answer at the upper and lower limits. The problem is that the upper and lower limits 1 and 0 are for the variable $x$. You did a variable change so that your integral became in terms of $u$. This means you have to evaluate your answer at the upper and lower limits of the variable $u$. But how do you find the upper and lower limits of the variable $u$? Well, to solve the problem you had set $u = 7x + 1$. And the lower limit for the variable $x$ was $0$. Plugging in this value of $x$ into your equation $u = 7x + 1$ will give you the new lower limit of $u$. Similarly, to find the upper limit of $u$, we know the upper limit of $x$ was $1$, so plugging this into $u = 7x + 1$ will give you the new upper limit. So, if you did the work correctly, your new upper and lower limits should be 8 and 1. So you must evaluate your answer at $u = 8$, and then subtract what you get when you evaluate it at $u = 1$, and that is your final answer.
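Putting the pieces together: with the limits converted to $u$, the value is $\frac{1}{7}\cdot\frac{3}{4}\bigl(8^{4/3}-1^{4/3}\bigr)=\frac{3}{28}\cdot 15=\frac{45}{28}$; a numerical cross-check (a sketch, with an arbitrary grid size):

```python
exact = (3/28) * (8**(4/3) - 1)   # (1/7)(3/4) u^(4/3) evaluated from u=1 to u=8

# midpoint Riemann sum of (1+7x)^(1/3) on [0, 1]
n = 100000
h = 1.0 / n
riemann = sum((1 + 7*(k + 0.5)*h)**(1/3) for k in range(n)) * h
print(exact, riemann)   # both close to 45/28 ≈ 1.6071
```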
{ "language": "en", "url": "https://math.stackexchange.com/questions/957453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Amateur Math and a Linear Recurrence Relation I haven't received a formal education on this topic but a little googling told me this is what I am trying to find. I would like to put $$ a_n = 6 a_{n-1} - a_{n-2} $$ $$ a_1 =1, a_2 = 6 $$ into its explicit form. So far I have only confused myself with billions of 6's. I would like someone to respond with the following: * *Is this relation homogenous? (not sure if subtraction counts) *Is there a basic way I can solve this type of recurrence? (I don't need the long answer, just enough to satisfy curiosity) I am not necessarily looking for only a solution to this problem but one would be appreciated nonetheless. Still, I am more interested in the background and would better appreciate that than simply some formula.
As user164587 answered, the closed form is $$a_n = A\alpha_1^n + B\alpha_2^n$$ where $\alpha_1$ and $\alpha_2$ are the roots of the characteristic equation $u^2 - 6u + 1 = 0$, that is to say $\alpha_1=3-2 \sqrt{2}$, $\alpha_2=3+2 \sqrt{2}$. Solving for $A$ and $B$, you would find that $$A=-\frac{{a_2}-{a_1} {\alpha_2}}{{\alpha_1} ({\alpha_2}-{\alpha_1})}$$ $$B=-\frac{{-a_2}+{a_1} {\alpha_1}}{{\alpha_2} ({\alpha_2}-{\alpha_1})}$$ Using the values of $a_1,a_2,\alpha_1,\alpha_2$, you then arrive at $$a_n= \frac{\left(3+2 \sqrt{2}\right)^n-\left(3-2 \sqrt{2}\right)^n}{4 \sqrt{2}}$$ which is a simple expression.
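The closed form can be checked against the recurrence directly (a minimal sketch):

```python
from math import sqrt

def closed(n):
    r1, r2 = 3 + 2*sqrt(2), 3 - 2*sqrt(2)   # roots of u^2 - 6u + 1 = 0
    return (r1**n - r2**n) / (4*sqrt(2))

seq = [1, 6]                                 # a_1, a_2
while len(seq) < 15:
    seq.append(6*seq[-1] - seq[-2])

for n, a in enumerate(seq, start=1):
    assert round(closed(n)) == a
print("closed form matches a_1 .. a_15")
```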
{ "language": "en", "url": "https://math.stackexchange.com/questions/957505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Sums with squares of binomial coefficients multiplied by a polynomial It has long been known that \begin{align} \sum_{n=0}^{m} \binom{m}{n}^{2} = \binom{2m}{m}. \end{align} What is being asked here are the closed forms for the binomial series \begin{align} S_{1} &= \sum_{n=0}^{m} \left( n^{2} - \frac{m \, n}{2} - \frac{m}{8} \right) \binom{m}{n}^{2} \\ S_{2} &= \sum_{n=0}^{m} n(n+1) \binom{m}{n}^{2} \\ S_{3} &= \sum_{n=0}^{m} (n+2)^{2} \binom{m}{n}^{2}. \end{align}
HINT: As $\displaystyle n\binom mn=m\binom{m-1}{n-1},$ $\displaystyle\sum_{n=0}^mn\binom mn^2=m\sum_{n=0}^m\binom mn\binom{m-1}{n-1}=m\binom{2m-1}m$ comparing the coefficient of $x^{m}$ in $(1+x)^m(1+x)^{m-1}=(1+x)^{2m-1}$ (using $\binom{m-1}{n-1}=\binom{m-1}{m-n}$)
{ "language": "en", "url": "https://math.stackexchange.com/questions/957605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Chain Rule for Second Partial Derivatives I am trying to understand this: Let $g:\mathbb{R}\rightarrow\mathbb{R}$ and $f(r,s) = g(r^2s)$, where $r=r(x,y) = x^2 + y^2$ and $s = s(x) = 3/x$. What is (with Chain Rule) $$ \frac{\partial^2f}{\partial y\partial x}$$ g is a function of one variable so is the first part just: $$ \frac{\partial f}{\partial x} = \frac{dg}{dr}\frac{\partial r}{\partial x} + \frac{dg}{ds}\frac{\partial s}{\partial x} $$
$f$ is a function of two variables ($r$ and $s$), so the chain rule would give $$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial r} \frac{\partial r}{\partial x} + \frac{\partial f}{\partial s} \frac{\partial s}{\partial x}$$ To compute for example $\partial f / \partial r$, remember that g is a function of one variable (say $u=r^2s$) so $$\frac{\partial f}{\partial r}=\frac{\partial g}{\partial r}=\frac{d g}{d u}\frac{\partial u}{\partial r}=g'(r^2s)\cdot 2rs$$ So overall you would have that $$\frac{\partial f}{\partial x} = \frac{d g}{d u}\frac{\partial u}{\partial r} \frac{\partial r}{\partial x} + \frac{d g}{d u}\frac{\partial u}{\partial s} \frac{\partial s}{\partial x} =g'(r^2s)\cdot 2rs\cdot 2x +g'(r^2s)\cdot r^2\cdot \left(-\frac{3}{x^2}\right)$$ If you want the answer in terms of $r$ and $s$ you can substitute $x$ with $3/s$ to get $$\frac{\partial f}{\partial x}=g'(r^2s)\cdot (12r-r^2s^2/3)$$ Now think of the above as a new function of $r$ and $s$ (say $h(r,s)$) and try to find its partial derivative with respect to $y$ in the same way.
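One can verify the resulting formula numerically; here $g$ is chosen arbitrarily as $g(u)=\sin u$ (any smooth one-variable function would do), and a central difference approximates $\partial f/\partial x$:

```python
import math

# Numerical check of the chain rule with the arbitrary test function g(u) = sin(u):
# df/dx = g'(r^2 s) * (2*r*s*2x - 3*r^2/x^2), where r = x^2 + y^2 and s = 3/x.
def f(x, y):
    r, s = x * x + y * y, 3.0 / x
    return math.sin(r ** 2 * s)

def df_dx(x, y):
    r, s = x * x + y * y, 3.0 / x
    return math.cos(r ** 2 * s) * (2 * r * s * 2 * x - 3 * r ** 2 / x ** 2)

x, y, h = 1.3, 0.7, 1e-6
numeric = (f(x + h, y) - f(x - h, y)) / (2 * h)
assert abs(numeric - df_dx(x, y)) < 1e-4
```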
{ "language": "en", "url": "https://math.stackexchange.com/questions/957694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relative distance in Codes I'm studying coding theory. In my lecture we saw that Hadamard codes have an optimal relative distance of $1/2$. The relative distance of a code $C$ with minimum distance $d(C)$ and block length $n$ is defined by: $$\dfrac{d(C)}{n}.$$ According to this, the minimum distance for a Hadamard code with block length $n=2^k$ is $\dfrac{n}{2} = 2^{k-1}$. Is there any code $C$ with block length $n$ and minimum distance $d(C)>n/2$? How can I prove it?
It depends what kind of code you want. In its simplest form, a $q$-ary block code of length $n$ is just a subset of $\mathbf{F}_q^n$. Hence it is trivial to construct a block code of length $n$ with minimal distance $n$: $\{0^n,1^n\}$. This code is also linear, and even cyclic.
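Both distances mentioned here can be confirmed by brute force (a sketch of my own; the Hadamard code below is the standard $\langle x,y\rangle \bmod 2$ construction from the question):

```python
from itertools import product

# Hadamard code: the codeword for a message x in {0,1}^k lists <x,y> mod 2 over all
# y in {0,1}^k, so the block length is n = 2^k and the minimum distance is n/2.
def hadamard_code(k):
    ys = list(product((0, 1), repeat=k))
    return [tuple(sum(a * b for a, b in zip(x, y)) % 2 for y in ys)
            for x in product((0, 1), repeat=k)]

def min_distance(code):
    return min(sum(a != b for a, b in zip(u, v))
               for i, u in enumerate(code) for v in code[i + 1:])

assert min_distance(hadamard_code(3)) == 2 ** 3 // 2    # d = n/2 = 4
assert min_distance([(0,) * 7, (1,) * 7]) == 7          # the trivial code {0^n, 1^n}: d = n
```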
{ "language": "en", "url": "https://math.stackexchange.com/questions/957792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove by contradiction that $(x-y)^3+(y-z)^3+(z-x)^3 = 30$ has no integer solutions By factorizing it I found that $(x-y)(y-z)(z-x) = 10$
Just as Karvens comments: Let $a=x-y$, $b=y-z$ and $c=z-x=-(a+b)$. Clearly, $a,b,c \in \Bbb Z$. So $$30=a^3+b^3-(a+b)^3=(a+b)(a^2-ab+b^2)-(a+b)^3=-3ab(a+b)$$ and hence $$10=-ab(a+b).$$ Since $a$ and $b$ both divide $10$, the only candidates are $a,b\in\{\pm 1,\pm 2,\pm 5,\pm 10\}$, and checking each of these shows that none satisfies the equation.
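A brute-force check (a hypothetical script, not part of the argument) confirms that $-ab(a+b)=10$ has no integer solutions; since $a$ and $b$ must each divide $10$, a bounded search is exhaustive:

```python
# With a = x - y and b = y - z, the equation forces -a*b*(a+b) == 10, and both a and b
# must divide 10, so searching |a|, |b| <= 10 is exhaustive.
solutions = [(a, b) for a in range(-10, 11) for b in range(-10, 11)
             if -a * b * (a + b) == 10]
assert solutions == []
```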
{ "language": "en", "url": "https://math.stackexchange.com/questions/957875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Choice Problem: choose 5 days in a month, consecutive days are forbidden I'm "walking" through the book "A walk through combinatorics" and stumbled on an example I don't understand. Example 3.19. A medical student has to work in a hospital for five days in January. However, he is not allowed to work two consecutive days in the hospital. In how many different ways can he choose the five days he will work in the hospital? Solution. The difficulty here is to make sure that we do not choose two consecutive days. This can be assured by the following trick. Let $a_1, a_2, a_3, a_4, a_5$ be the dates of the five days of January that the student will spend in the hospital, in increasing order. Note that the requirement that there are no two consecutive numbers among the $a_i$, and $1 \le a_i \le 31$ for all $i$ is equivalent to the requirement that $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$. In other words, there is an obvious bijection between the set of 5-element subsets of [31] containing no two consecutive elements and the set of 5-element subsets of [27]. *** Instead of choosing the numbers $a_i$, we can choose the numbers $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$, that is, we can simply choose a five-element subset of [27], and we know that there are $\binom{27}{5}$ ways to do that. What I don't understand here is $1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$:

* Why do the subtracted numbers increment with every other $a_i$?
* Why 27?

And the very last sentence (***) is unclear to me.

* Why is there no talk about "non-consecutive"? Why is choosing 5 elements of 27 equivalent to choosing 5 non-consecutive elements out of 31?

I miss the connection. I'd be very thankful if you could help me to understand this example!
The prohibition on consecutive days means the constraints are really $$1 \le a_1$$ $$a_1+1 \lt a_2$$ $$a_2+1 \lt a_3$$ $$a_3+1 \lt a_4$$ $$a_4+1 \lt a_5$$ $$a_5 \le 31.$$ Rewrite these so that the right hand side of each line is the same as the left hand side of the next line as $$1 \le a_1$$ $$a_1 \lt a_2-1$$ $$a_2-1 \lt a_3-2$$ $$a_3-2 \lt a_4-3$$ $$a_4-3 \lt a_5-4$$ $$a_5 -4 \le 27$$ and you can now combine these as a single line $$1 \le a_1 < a_2 - 1 < a_3 - 2 < a_4 - 3 < a_5 - 4 \le 27$$ Added for extended question: Having done that, you could let $b_1=a_1$, $b_2=a_2-1$, $b_3=a_3-2$, $b_4=a_4-3$, and $b_5=a_5-4$, so $$1 \le b_1 < b_2 < b_3 < b_4 < b_5 \le 27$$ and so the number of integer solutions for the $b_i$ is obviously the same as the number of ways of choosing $5$ distinct integers from $27$, i.e. ${27 \choose 5}$. Having chosen the $b_i$ in order from smallest to largest, you can get back to the $a_i$ (the working days) by adding $0$ to the smallest, $1$ to the next, $2$ to the middle one, $3$ to the next, and $4$ to the largest. It is obvious that this will give five dates with at least one day's gap between them.
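The bijection is also easy to confirm by direct enumeration (a quick check of my own, not from the thread): counting the valid subsets of $[31]$ one by one gives exactly $\binom{27}{5}$.

```python
from itertools import combinations
from math import comb

# Directly count 5-element subsets of {1,...,31} with no two consecutive elements.
direct = sum(1 for days in combinations(range(1, 32), 5)
             if all(b - a >= 2 for a, b in zip(days, days[1:])))
assert direct == comb(27, 5)   # the bijection predicts C(27,5) = 80730
```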
{ "language": "en", "url": "https://math.stackexchange.com/questions/957940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 0 }
Series Expansion $\frac{1}{1-x}\log\frac{1}{1-x}=\sum\limits_{n=1}^{\infty}\left(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}\right)x^n$ How do I prove that, if $|x|<1$, then $$\frac{1}{1-x}\log\frac{1}{1-x}=\sum_{n=1}^{\infty}\left(1+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}\right)x^n$$
$$\frac1{1-x}\log\frac1{1-x}=\sum_{i\geqslant0}x^i\cdot\sum_{k\geqslant1}\frac{x^k}k=\sum_{i\geqslant0,k\geqslant1}\frac{x^{k+i}}k=\sum_{n\geqslant1}\left(\sum_{k=1}^{n}\frac1k\right)x^n$$ since, for fixed $n=k+i$, the inner index runs over $k=1,\dots,n$.
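A quick exact check (my own script) that the Cauchy product of the two series produces the harmonic numbers as coefficients:

```python
from fractions import Fraction

# Multiply the series for 1/(1-x) and log(1/(1-x)) exactly and compare the
# coefficients with the harmonic numbers H_n = 1 + 1/2 + ... + 1/n.
N = 10
log_coeffs = [Fraction(0)] + [Fraction(1, k) for k in range(1, N + 1)]  # log 1/(1-x)
geom = [Fraction(1)] * (N + 1)                                          # 1/(1-x)
product = [sum(log_coeffs[k] * geom[n - k] for k in range(n + 1))
           for n in range(N + 1)]
harmonic = [sum(Fraction(1, k) for k in range(1, n + 1)) for n in range(N + 1)]
assert product == harmonic
```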
{ "language": "en", "url": "https://math.stackexchange.com/questions/958003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving a function $f(x + T)=k\;f(x)$ satisfies $f(x)=a^x g(x)$ for periodical $g$ I need to prove the following: If a function $\,f$ satisfies $$f(x+T)=k\;f(x), \forall x \in \mathbb R$$ for some $k \in \mathbb N$ and $T > 0$, prove that $\,f$ can be written as $f(x)=a^xg(x)$ where $g$ is a periodical function with period $T$. Prove reverse statement/reversal. I would need some tips/hints how to begin, since I have no idea how to start.
If you do the reverse statement, then it is suggestive that $a=k^{1/T}$. So let us set $a=k^{1/T}$ and consider $g(x)\equiv f(x)/a^x$. To check the periodicity of $g$: $$ g(x+T)=\frac{f(x+T)}{a^{x+T}}=\frac{kf(x)}{a^{x+T}}=\frac{a^Tf(x)}{a^{x+T}}=\frac{f(x)}{a^x}=g(x). $$
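A small numerical illustration (my own, with arbitrary choices $T=0.5$, $k=3$ and an arbitrary periodic factor) of why $a=k^{1/T}$ is the right base:

```python
import math

# Take any T-periodic h and set f(x) = k**(x/T) * h(x); then f(x+T) = k*f(x),
# and with a = k**(1/T) the quotient g(x) = f(x)/a**x is T-periodic.
T, k = 0.5, 3
a = k ** (1 / T)

def f(x):
    return k ** (x / T) * (2 + math.sin(2 * math.pi * x / T))

def g(x):
    return f(x) / a ** x

for x in (0.1, 0.7, -1.3):
    assert abs(f(x + T) - k * f(x)) < 1e-9 * abs(f(x))   # f(x+T) = k f(x)
    assert abs(g(x + T) - g(x)) < 1e-9                   # g has period T
```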
{ "language": "en", "url": "https://math.stackexchange.com/questions/958210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Distance between powers of 2 and 3 As we know $3^1-2^1 = 1$ and of course $3^2-2^3 = 1$. The question is that whether set $$ \{\ (m,n)\in \mathbb{N}\quad |\quad |3^m-2^n| = 1 \} $$ is finite or infinite.
Note first that if $3^{2n}-1=2^r$ then $(3^n+1)(3^n-1)=2^r$. The two factors in brackets are then both powers of $2$, and they differ by $2$, so they must be $2$ and $4$; this is only possible if $n=1$. Now suppose that $3^n-1=2^r$ and $n$ is odd. Now $3^n\equiv -1$ mod $4$, so $3^n-1$ is not divisible by $4$, forcing $r=1$ and hence $n=1$. Now suppose $3^n=2^{2r}-1=(2^r+1)(2^r-1)$. The two factors differ by $2$ and cannot therefore both be divisible by $3$. Only $r=1$ is possible. The final case is $3^n=2^{k}-1$ where $k$ is odd. Now the right hand side is $\equiv -2 \equiv 1$ mod $3$, hence not divisible by $3$, so only $k=1$ is possible, and $n=0$ (if permitted). __ The previous version of this answer was overcomplicated - trying to do things in a hurry.
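An exhaustive search over small exponents (a throwaway check, not part of the proof) agrees with the case analysis:

```python
# For m, n >= 1 the only solutions of |3^m - 2^n| = 1 found in this range are
# (1,1), (1,2), (2,3), matching the argument above.
hits = sorted((m, n) for m in range(1, 50) for n in range(1, 50)
              if abs(3 ** m - 2 ** n) == 1)
assert hits == [(1, 1), (1, 2), (2, 3)]
```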
{ "language": "en", "url": "https://math.stackexchange.com/questions/958304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find the number of squares formed by given lattice points? Let us say that we are N integer coordinates (x, y) - what would our approach be if we were supposed to find the number of squares we could make from those given n points? Additionally, if we were to figure out how many, minimally, more points should we add that we manage to form at least ONE square from the points, how would we go about that?
I don't have the reputation to comment, so I am writing this as an answer. This problem seems to be taken from http://www.codechef.com/OCT14/problems/CHEFSQUA, a live contest! Please do remove it. Please respect the code of honor. Moreover, you are complaining of getting no help.
{ "language": "en", "url": "https://math.stackexchange.com/questions/958381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Expected value of a non-negative random variable How do I prove that $\int_0^\infty Pr(Y\geq y) dy = E[Y]$ if $Y$ is a non-negative random variable?
Assume we have a continuous random variable with probability density function $f_Y$. Then $\begin{align} \int_0^\infty \Pr(Y \geqslant y) \operatorname d y & = \int_0^\infty \int_y^\infty f_Y(z)\operatorname d z\operatorname d y \\[1ex] & = \int_0^\infty \int_0^z f_Y(z)\operatorname d y\operatorname d z \\[1ex] & = \int_0^\infty f_Y(z)\int_0^z 1\operatorname d y\;\operatorname d z \\[1ex] & = \int_0^\infty z f_Y(z)\operatorname d z \\[1ex] & = \mathsf E[Y] \end{align}$ The interchange of the order of integration in the second step is justified by Tonelli's theorem, since the integrand is non-negative.
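A numerical illustration (my own sketch) with an exponential distribution, where both sides are known in closed form: the tail integral of $P(Y\ge y)=e^{-\lambda y}$ should recover $E[Y]=1/\lambda$.

```python
import math

# For Y ~ Exponential(lam): P(Y >= y) = exp(-lam*y); integrate the tail by the
# midpoint rule on [0, 20] and compare with E[Y] = 1/lam.
lam = 2.5
n_steps, upper = 200_000, 20.0
dy = upper / n_steps
tail_integral = sum(math.exp(-lam * (i + 0.5) * dy) * dy for i in range(n_steps))
assert abs(tail_integral - 1 / lam) < 1e-6
```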
{ "language": "en", "url": "https://math.stackexchange.com/questions/958472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Showing equivalence of two binomial expressions I wish to show that $\sum_{k=0}^n {n\choose k}(\alpha + k)^k (\beta + n - k)^{(n-k)} = \sum_{k=0}^n {n\choose k}(\gamma + k)^k (\delta + n - k)^{(n-k)}$ given that $\alpha + \beta = \gamma + \delta$. Both sides appear to be the n-th coefficient in the product of some exponential generating series, namely $F(y,x) = \sum_{n=0}^{\infty} (y+n)^n / n!$, and that $F(y,x) = \exp(y\times T(x))$ for some series $T(x)$. It seems to be exponential because we could then have $F(a+b,x) = F(a,x) F(b,x)$. I have thought to use the Lagrange inversion theorem in a manner similar to some proofs of Abel's binomial theorem. Namely, if $T(x)$ is the solution to $T(x) = x \phi (T(x))$ for some invertible $\phi(x)$ in $\mathbb{Z} [[x]]$, then for $f(y,x) = exp(yx)$ we would have: $[x^n]f(yT(x)) = \frac{1}{n} [x^{n-1}] f'(x) \phi(x)^n = (y+n)^n / n!$. The issue is then in showing such a $\phi(x)$ exists. Unfortunately, my attempts using this approach have not been successful. Could I have some hints or suggestions on how to proceed? Regards, Garnet
We can prove this using a close relative of the labelled tree function that is known from combinatorics. This will provide a closed form of the exponential generating function of the four terms that are involved. The species of labelled trees has the specification $$\mathcal{T} = \mathcal{Z} \times \mathfrak{P}(\mathcal{T})$$ which gives the functional equation $$T(z) = z \exp T(z).$$ Now consider the function $$Q_\rho(z) = \frac{\exp(\rho T(z))}{1-T(z)}$$ with $\rho$ a real parameter. Extracting coefficients via Lagrange inversion we have $$Q_n = n! \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} \frac{\exp(\rho T(z))}{1-T(z)} dz.$$ Put $T(z)=w$ so that $z=w/\exp(w) = w\exp(-w)$ and $dz = \left(\exp(-w) - w\exp(-w)\right) dw$ to get $$n! \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{\exp(w(n+1))}{w^{n+1}} \frac{\exp(\rho w)}{1-w} (\exp(-w) - w\exp(-w)) dw \\ = n! \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{\exp(wn)\exp(w\rho)}{w^{n+1}} \frac{1}{1-w} (1 - w) dw \\ = n! \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{\exp(w(\rho+n))}{w^{n+1}} dw.$$ But we have $$n! [w^n] \exp(w(\rho+n)) = n! \times \frac{(\rho+n)^n}{n!} = (\rho+n)^n$$ which means that $Q_\rho(z)$ is the exponential generating function of $(\rho+n)^n.$ The equality that we seek to prove is a convolution of two exponential generating functions on the left and on the right and to verify it we must show that $$Q_\alpha(z) Q_\beta(z) = Q_\gamma(z) Q_\delta(z).$$ But this is simply $$\frac{\exp(\alpha T(z))}{1-T(z)} \frac{\exp(\beta T(z))}{1-T(z)} = \frac{\exp(\gamma T(z))}{1-T(z)} \frac{\exp(\delta T(z))}{1-T(z)}$$ which is $$\frac{\exp((\alpha+\beta) T(z))}{(1-T(z))^2} = \frac{\exp((\gamma+\delta) T(z))}{(1-T(z))^2}$$ which holds since $\alpha+\beta = \gamma+\delta.$ The labelled tree function recently appeared at this MSE link.
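Numerically, the claim that the sum depends only on $\alpha+\beta$ (which the generating-function argument establishes) can be spot-checked with integer parameters (my own script):

```python
from math import comb

# S(alpha, beta, n) = sum_k C(n,k)*(alpha+k)^k*(beta+n-k)^(n-k) should depend
# only on alpha + beta. Note 0**0 == 1 in Python, as needed for the k = 0 term.
def S(alpha, beta, n):
    return sum(comb(n, k) * (alpha + k) ** k * (beta + n - k) ** (n - k)
               for k in range(n + 1))

for n in range(8):
    assert S(2, 5, n) == S(3, 4, n) == S(0, 7, n)
```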
{ "language": "en", "url": "https://math.stackexchange.com/questions/958518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many nonnegative integer solutions are there to the pair of equations $x_1+x_2+…+x_6=20$ and $x_1+x_2+x_3=7$? How many nonnegative integer solutions are there to the pair of equations \begin{align}x_1+x_2+\dots +x_6&=20 \\ x_1+x_2+x_3&=7\end{align} How do you find non-negative integer solutions?
You are correct. You can also think of it in terms of permutations. The number of non-negative integer solutions of $x_1+x_2+x_3=7$ is the number of permutations of a multiset with seven $1$'s, and two $+$'s. This is $$\frac{9!}{7!\ 2!}.$$ Similarly, the number of non-negative integer solutions of $x_4+x_5+x_6=13$ is the number of permutations of thirteen $1$'s, and two $+$'s. This is $$\frac{15!}{13!\ 2!}.$$ This is why the first number in your combination is what the variables sum to, and the second is one less than the number of variables, since you're permuting the $+$'s.
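A brute-force enumeration (a quick check of my own) confirms the two stars-and-bars counts and their product:

```python
from itertools import product
from math import comb

# Count nonnegative solutions of x1+x2+x3 = 7 and x4+x5+x6 = 13 directly.
first = sum(1 for t in product(range(8), repeat=3) if sum(t) == 7)
second = sum(1 for t in product(range(14), repeat=3) if sum(t) == 13)
assert first == comb(9, 2)                           # 36
assert second == comb(15, 2)                         # 105
assert first * second == comb(9, 2) * comb(15, 2)    # 3780 solutions in total
```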
{ "language": "en", "url": "https://math.stackexchange.com/questions/958580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculation of $\int \frac1{\tan \frac{x}{2}+1}dx$ Calculation of $\displaystyle \int\frac{1}{\tan \frac{x}{2}+1}dx$ $\bf{My\; Try}::$ Let $\displaystyle I = \displaystyle \int\frac{1}{\tan \frac{x}{2}+1}dx$, Now let $\displaystyle \tan \frac{x}{2}=t\;,$ Then $\displaystyle dx=\frac{2}{1+t^2}dt$ So $\displaystyle I = 2\int\frac{1}{(1+t)\cdot (1+t^2)}dt$ Now Using Partial fraction, $\displaystyle \frac{1}{(1+t)\cdot (1+t^2)} = \frac{A}{1+t}+\frac{Bt+C}{1+t^2}\Rightarrow 1=A(1+t^2)+(Bt+C)(1+t)$ Now put $(1+t)=0\Rightarrow t=-1\;,$ We get $\displaystyle 1=2A\Rightarrow A = \frac{1}{2}.$ Now Put $(1+t^2)=0\Rightarrow t^2=-1\;,$ We Get $1=Bt^2+(B+C)t+C$ So $\displaystyle 1=\left(-B+C\right)+(B+C)t$. So Solving the equations $B+C=0$ and $-B+C=1$ gives $\displaystyle B=-\frac{1}{2}$ and $\displaystyle C=\frac{1}{2}$ So $\displaystyle I = 2\int\frac{1}{(1+t)\cdot (1+t^2)}dt = \int\frac{1}{1+t}dt+\int\frac{-t+1}{1+t^2}dt$ So $\displaystyle I = \int\frac{1}{1+t}dt-\frac{1}{2}\int\frac{2t}{1+t^2}dt+\int \frac{1}{1+t^2}dt$ So $\displaystyle I = \ln \left|1+t\right|-\frac{1}{2}\ln \left|1+t^2\right|+\tan^{-1}(t)+\mathcal{C}$ So $\displaystyle I = \ln \left|1+\tan \frac{x}{2}\right|-\frac{1}{2}\ln \left|1+\tan^2 \frac{x}{2}\right|+\frac{x}{2}+\mathcal{C}$ Can we solve it without using partial fraction? If yes then please explain to me. Thanks
Hint: Multiply the denominator and numerator of the fraction by $\cos x/2(\sin x/2-\cos x/2)$. At the bottom you get $-\cos x$, on the top you get $-\cos^2 x/2+1/2\sin x$. What can you do with $\cos^2 x/2$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/958679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
How to derive the closed form of this recurrence? For the recurrence, $T(n) = 3T(n-1)-2$, where $T(0)= 5$, I found the closed form to be $4\cdot 3^n +1$(with help of Wolfram Alpha). Now I am trying to figure it out for myself. So far, I have worked out: $T(n-1) = 3T(n-2)-2, T(n-2) = 3T(n-3)-2, T(n-3) = 3T(n-4)-2$ leading me to: $T(n)=81\cdot T(n-4)-54-18-6-2 $ etc. I have noticed the constants follow a Geometric Series whose sum is given by $1-3^n$. I am having trouble putting this together to end up with the final closed form.
By making use of the generating function method the difference equation $T_{n} = 3 T_{n-1} - 2$, $T_{0} = 5$, can be seen as follows \begin{align} T(x) &= \sum_{n=0}^{\infty} T_{n} x^{n} = 3 \sum_{n=0}^{\infty} T_{n-1} x^{n} - 2 \sum_{n=0}^{\infty} x^{n} \\ T(x) &= 3 \left( T_{-1} + \sum_{n=1}^{\infty} T_{n-1} x^{n} \right) - \frac{2}{1-x} \\ &= 3 \left( T_{-1} + x T(x) \right) - \frac{2}{1-x} \end{align} which yields \begin{align} (1-3x) T(x) = 3 T_{-1} - \frac{2}{1-x}. \end{align} Since $T_{-1} = \frac{7}{3}$ then \begin{align} T(x) &= \frac{5 - 7x}{(1-x) (1-3x)} = \frac{1}{1-x} + \frac{4}{1-3x} \\ &= \sum_{n=0}^{\infty} \left( 4 \cdot 3^{n} + 1 \right) x^{n} \end{align} and provides $T_{n} = 4 \cdot 3^{n} + 1$.
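It is worth confirming the closed form by iterating the recurrence directly (a throwaway check, not part of the derivation):

```python
# Iterate T_n = 3*T_{n-1} - 2 from T_0 = 5 and compare with 4*3^n + 1.
T = 5
for n in range(1, 20):
    T = 3 * T - 2
    assert T == 4 * 3 ** n + 1
```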
{ "language": "en", "url": "https://math.stackexchange.com/questions/958781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
If $\,f^{7} $ is holomorphic, then $f$ is also holomorphic. I need some help with this problem: Let $ \Omega $ be a complex domain, i.e., a connected and open non-empty subset of $ \mathbb{C} $. If $ f: \Omega \to \mathbb{C} $ is a continuous function and $ f^{7} $ is holomorphic on $ \Omega $, then $ f $ is also holomorphic on $ \Omega $. Thanks in advance.
This is a topological variant of Christian’s elegant solution. Let $ g \stackrel{\text{df}}{=} f^{7} $. As $ g $ is holomorphic on $ \Omega $, either (i) it is $ 0 $ everywhere on $ \Omega $ or (ii) its set of roots is an isolated subset of $ \Omega $. If Case (i) occurs, then we are done. Hence, suppose that we are in Case (ii). Let $ \mathcal{Z} $ denote the (closed) set of roots of $ g $. For any point $ p \in \Omega \setminus \mathcal{Z} \neq \varnothing $, there exist

* a (sufficiently small) open disk $ D $ centered at $ p $ and contained in $ \Omega \setminus \mathcal{Z} $, and
* a holomorphic function $ h: D \to \mathbb{C} $ such that $ g|_{D} = e^{h} $.

Now, for each $ z \in D $, we must have $$ f(z) \in \left\{ e^{2 \pi i k / 7} ~ e^{h(z) / 7} ~ \Big| ~ k \in \{ 0,\ldots,6 \} \right\}. $$ This is because the right-hand side is precisely the set of all complex $ 7 $-th roots of $ g(z) $. This means that we have a function $ k: D \to \{ 0,\ldots,6 \} $ such that $$ \forall z \in D: \quad f(z) = e^{2 \pi i \cdot k(z) / 7} ~ e^{h(z) / 7}. $$ We claim that $ k $ is locally constant on $ D $. Assume the contrary. Then there exists a $ q \in D $ such that $ k $ is not constant on any open neighborhood of $ q $. We can thus find a sequence $ (z_{n})_{n \in \mathbb{N}} $ in $ D \setminus \{ q \} $ converging to $ q $ such that $ k(z_{n}) \neq k(q) $ for all $ n \in \mathbb{N} $. By the Pigeonhole Principle, we can extract a subsequence $ (z_{n_{j}})_{j \in \mathbb{N}} $ such that all the $ k(z_{n_{j}}) $’s are equal to a single integer $ m \in \{ 0,\ldots,6 \} $. 
Then \begin{align} 0 & = \lim_{j \to \infty} \left[ f(q) - f(z_{n_{j}}) \right] \quad (\text{As $ f $ is continuous.}) \\ & = \lim_{j \to \infty} \left[ e^{2 \pi i \cdot k(q) / 7} ~ e^{h(q) / 7} - e^{2 \pi i \cdot k(z_{n_{j}}) / 7} ~ e^{h(z_{n_{j}}) / 7} \right] \\ & = e^{2 \pi i \cdot k(q) / 7} ~ e^{h(q) / 7} - e^{2 \pi i m / 7} ~ e^{h(q) / 7} \quad (\text{As $ h $ is continuous.}) \\ & = \left[ e^{2 \pi i \cdot k(q) / 7} - e^{2 \pi i m / 7} \right] e^{h(q) / 7} \\ & \neq 0. \quad (\text{As $ m \neq k(q) $.}) \end{align} We have a contradiction, so $ k $ must be locally constant on $ D $. As $ D $ is connected, $ k $ must be constant on $ D $. Consequently, $ f $ is a holomorphic function on $ D $. As $ p \in \Omega \setminus \mathcal{Z} $ is arbitrary, $ f $ is holomorphic on $ \Omega \setminus \mathcal{Z} $. (Note: Although holomorphicity has global consequences, it is a local property.) Finally, $ f $ is holomorphic on all of $ \Omega $ as it is continuous at each (isolated) root.
{ "language": "en", "url": "https://math.stackexchange.com/questions/958866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 4 }
Spectral radius of a normal element in a Banach algebra I know that if $A$ is a C*-algebra, then $||x||=||x||_{sp}$ for every normal element $x\in A$. I like to find an example of a normal element $x$ in a involutive Banach algebra with $||x||\neq ||x||_{sp}$. Thanks for your regard.
Take $\ell_1(\mathbb{Z})$ with the $*$-operation given by $\delta_n^* = \delta_{-n}$ (together with complex conjugation of the coefficients). Since the algebra is commutative, every element is normal. Consider the element $x = \delta_0 + \delta_1 - \delta_{-1}$. It has norm $3$, but its Gelfand transform is $\hat{x}(\theta) = 1 + 2i\sin\theta$, so its spectral radius is $\max_\theta |\hat{x}(\theta)| = \sqrt{5} < 3$. Instead of $\mathbb{Z}$ you can take your favourite finite, non-trivial group to have a finite-dimensional example.
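One way to see such a gap numerically (a throwaway script, not from the thread) is Gelfand's formula $r(x)=\lim_n \|x^n\|^{1/n}$ applied in $\ell_1(\mathbb{Z})$, modelling elements as coefficient dictionaries and multiplication as convolution; for $x=\delta_0+\delta_1-\delta_{-1}$ (normal, since the algebra is commutative) the estimate stays well below $\|x\|_1 = 3$:

```python
# Estimate the spectral radius in l^1(Z) via ||x^n||_1^(1/n). For
# x = delta_0 + delta_1 - delta_{-1}, ||x||_1 = 3 while the Gelfand transform
# is 1 + 2i*sin(theta), so the spectral radius is sqrt(5).
def convolve(f, g):
    out = {}
    for m, a in f.items():
        for n, b in g.items():
            out[m + n] = out.get(m + n, 0) + a * b
    return out

x = {0: 1, 1: 1, -1: -1}
p = dict(x)
for _ in range(29):            # p = x^30
    p = convolve(p, x)
estimate = sum(abs(c) for c in p.values()) ** (1 / 30)
assert 5 ** 0.5 - 1e-9 <= estimate < 2.5   # strictly below ||x||_1 = 3
```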
{ "language": "en", "url": "https://math.stackexchange.com/questions/958947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I dropped mathematics and want to self-teach it to myself starting from the basics I will make this question as objective based as possible. My curriculum teaches maths in a way that does not work well with me. Further, I also do not like the textbooks used (It was voted the worst in the state). The only way to succeed in this course was to get a tutor, objectively speaking. The highest non-tutor mark was about 60%~, majority severely failed the exam. I dropped the course as a whole. In lieu of this, I still have a passion for mathematics. Currently, I just begin Calculus, but I am missing a solid development in all pre-calculus topics. By solid development, I mean I have no intuitive way of grasping things - just stuff and methods rote learned from formulas provided. I need a good textbook, and that is my question here, that provides a solid, rigorous, theory intensive approach to mathematics from all pre-calculus topics to beyond. I don't mind if its multiple book recommendations in order, I am willing to study them hard. Much thanks. I'm sure someone on this site has exactly what I'm looking for and am hoping they can share it with me.
Calculus and Analytic Geometry by George B. Thomas and Ross L. Finney. This is a great book for self learning, it balances clear explanations with enough rigor and will help you gain some intuition from the basics all the way to the multivariate integral theorems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/959106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
unreduced suspension Is the definition $SX=\frac{(X\times [a,b])}{(X\times\{a\}\cup X\times \{b\})}$ of the unreduced suspension the standard definition? If I consider $X=$ point, the suspension of $X$ is a circle. But I saw another definition of the unreduced suspension such that the suspension of a point should be an interval. Regards.
I would have said that the suspension was $X \times [-1, 1]$ modulo the relation that $$ (x, a) \sim (x', a') $$ if and only if

* $a = a' = 1$, or
* $a = a' = -1$, or
* $a = a'$ and $x = x'$.

Wikipedia seems to agree with me. It looks as if your author was a little glib, and failed to mention that the "bottom" and "top" sets of equivalent points were not supposed to be made equivalent to each other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/959183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
$G$-representations, $W \otimes V^* \to \text{Hom}(V,W)$ Let $V$ and $W$ be finite-dimensional vector spaces. I know how to construct an explicit isomorphism of vector spaces $W \otimes V^* \to \text{Hom}(V,W)$ and show that it's an isomorphism. But if I supposed that $V$ and $W$ are $G$-representations of some group $G$, how do I show that the isomorphism above is an isomorphism of $G$-representations?
Let $\rho_1: G \to GL(V)$ be a $G$-representation and $\rho_2: G \to GL(W)$ as well. We can make $V^*$ a $G$-representation $\rho_1^*: G \to GL(V^*)$ given by$$(\rho_1^*(g)(\phi))(v) = \phi(g^{-1}v).$$Then $W \otimes V^*$ is a $G$-representation $\rho_T: G \to GL(W \otimes V^*)$ by $$\rho_T(g)(w \otimes \phi) = (\rho_2(g)(w)) \otimes (\rho_1^*(g)(\phi)).$$In addition, we have $\text{Hom}(V, W)$ is a $G$-representation $\rho_H: G \to GL(\text{Hom}(V, W))$ given by $($for $T \in \text{Hom}(V, W)$$)$$$(\rho_H(g)(T))(v) = \rho_2(g)(T(\rho_1(g^{-1})(v))).$$As all the $\rho$'s are getting unwieldy at this point, we will just write the $G$ action as $g \cdot$ and hope everything is clear from context. So we can now see that the map $L$ from above is $G$-linear. For$$L(g \cdot (w \otimes \phi))(v) = L((g \cdot w) \otimes (g \cdot \phi))(v) = (g \cdot \phi)(v)g \cdot w$$and $$(g \cdot L(w \otimes \phi))(v) = g \cdot (L(w \otimes \phi)(g^{-1} \cdot v)) = g \cdot (\phi(g^{-1} \cdot v)w) = \phi(g^{-1} \cdot v)g \cdot w.$$And those are equal from how one defines the representation $V^*$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/959301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
stuck on logarithm of derivative of sum $\frac{\partial\mathrm{log}(a+b)}{\partial a}$ I need to evaluate an expression similar to the following: $\frac{\partial\mathrm{log}(a+b)}{\partial a}$ At this point I don't know how to proceed. $b$ is a constant so there should be some way to eliminate it. How would you proceed in this case? Actually, the original expression is much more complicated, and related to multiclass logistic regression, but I wanted to spare you tedious details.
Suppose you know $\frac{d}{dx} e^x=e^x$. Define $f(x)=e^x$ and let $g(x) = \ln (x)$ be its inverse function. In particular this means we assume $e^{\ln(x)}=x$ for all $x \in (0, \infty)$ and $\ln(e^x) = x$ for all $x \in (-\infty, \infty)$. The existence of $g$ is no trouble as the exponential function is everywhere injective. Ok, assume $x>0$ and let $y=\ln(x)$ then $e^y = e^{\ln(x)}=x$. Differentiate the equation $e^y=x$: $$ \frac{d}{dx} e^y = \frac{d}{dx} x \ \ \Rightarrow \ \ e^y \frac{dy}{dx}=1$$ where we have used the chain-rule and the observation that $y$ is a function of $x$. Finally, solve for $\frac{dy}{dx}$ (which is what we're after here) $$ \frac{dy}{dx} = \frac{1}{e^y} = \frac{1}{x} \ \ \Rightarrow \ \ \frac{d}{dx} \ln (x) = \frac{1}{x}.$$ Now, to solve your problem I merely apply this to $\ln(a+b)$ thinking of $a$ as $x$. $$ \frac{\partial}{\partial a} \ln (a+b) = \left[\frac{d}{dx} \ln (x+b) \right]_{x=a} = \left[\frac{1}{x+b} \right]_{x=a} = \frac{1}{a+b}.$$
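A quick numerical confirmation (my own one-liner, with arbitrary sample values) via a central difference:

```python
import math

# Central-difference check that d/da log(a + b) = 1/(a + b) for a fixed b.
a, b, h = 1.7, 3.2, 1e-6
numeric = (math.log(a + h + b) - math.log(a - h + b)) / (2 * h)
assert abs(numeric - 1 / (a + b)) < 1e-8
```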
{ "language": "en", "url": "https://math.stackexchange.com/questions/959410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing $\cos{\pi \frac{mx}{R}}\sin{ \pi\frac{nx}{R}}$ are orthogonal in $L^{2}([0,R])$? I'm trying to show that $\sin \left(\frac{n\pi}{R}x\right)$ and $\cos \left(\frac{m\pi}{R}x\right)$ are orthogonal using the trigonometric identity that $2\cos{mx}\sin{nx} = \sin((m+n)x) + \sin((m-n)x)$ on $L^{2}([0, R])$. I could conclude this is true of the region of interest were $2\pi$-periodic, but since $\sin \left(\frac{n\pi}{R}x\right)$ and $\cos \left(\frac{m\pi}{R}x\right)$ have period $2R$ and the interval of interest is half that length, how can I conclude that $$\int_{0}^{R}\cos \left(\frac{m\pi}{R}x\right)\sin \left(\frac{n\pi}{R}x\right)dx = \frac{1}{2} \left( -\frac{R \cos{\frac{\pi (m+n)}{R}x}}{m+n} \big\vert_{0}^{R} - \frac{R\cos{\frac{ \pi (m-n)}{R}}}{m-n} \big\vert_{0}^{R} \right) = 0 $$ Summary: since both functions of interest have period $2R$ and are being integrated over a region of length $R$, how can the integral be shown to be zero?
Indeed, this is not true. For instance, taking $R=\pi$ for simplicity, consider $n=1$ and $m=0$. Then the integral is $\int_0^\pi \sin( x)\,dx$, which is the integral of a positive integrand and is definitely not 0 (in fact it is 2).
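The same counterexample can be checked numerically (a sketch of my own, using a simple midpoint rule):

```python
import math

# The m = 0, n = 1 case: the integral over [0, R] of cos(0)*sin(pi*x/R) equals
# 2R/pi, not 0. With R = pi the exact value is 2.
R = math.pi
N = 10_000
dx = R / N
val = sum(math.sin(math.pi * (i + 0.5) * dx / R) * dx for i in range(N))
assert abs(val - 2 * R / math.pi) < 1e-6
```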
{ "language": "en", "url": "https://math.stackexchange.com/questions/959547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hard inequality $ (xy+yz+zx)\left(\frac{1}{(x+y)^2}+\frac{1}{(y+z)^2}+\frac{1}{(z+x)^2}\right)\ge\frac{9}{4} $ I need to prove or disprove the following inequality: $$ (xy+yz+zx)\left(\frac{1}{(x+y)^2}+\frac{1}{(y+z)^2}+\frac{1}{(z+x)^2}\right)\ge\frac{9}{4} $$ For $x,y,z \in \mathbb R^+$. I found no counter examples, so I think it should be true. I tried Cauchy-Schwarz, but I didn't get anything useful. Is it possible to prove this inequality without using brute force methods like Bunching and Schur? This inequality was in the Iran MO in 1996.
Pursuing a link from a comment above, here's the Iran 1996 solution attributed to Ji Chen. Reproduced here to save click- & scrolling ... The difference $$4(xy+yz+zx)\cdot\left[\sum_\text{cyc}{(x+y)^2(y+z)^2}\right]-9\prod_\text{cyc}{(x+y)^2}$$ which is equivalent to the given inequality, is presented as the sum of squares $$=\;\sum_{\text{cyc}}{xy(x-y)^2\left(4x^2 + 7xy + 4y^2\right)} \:+\:\frac{xyz}{x+y+z}\sum_{\text{cyc}}{(y-z)^2\left(2yz + (y+z-x)^2\right)}$$ whence non-negativity is clear.
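The sum-of-squares identity can be verified in exact arithmetic at random points (a script of my own; the cyclic sums follow the pattern $(x,y,z)\to(y,z,x)\to(z,x,y)$):

```python
from fractions import Fraction
import random

# Check Ji Chen's SOS identity with exact rational arithmetic.
def lhs(x, y, z):
    s = (x+y)**2*(y+z)**2 + (y+z)**2*(z+x)**2 + (z+x)**2*(x+y)**2
    return 4*(x*y + y*z + z*x)*s - 9*((x+y)*(y+z)*(z+x))**2

def rhs(x, y, z):
    first = sum(a*b*(a-b)**2*(4*a*a + 7*a*b + 4*b*b)
                for a, b in ((x, y), (y, z), (z, x)))
    second = Fraction(x*y*z, x+y+z) * sum(
        (b - c)**2*(2*b*c + (b + c - a)**2)
        for a, b, c in ((x, y, z), (y, z, x), (z, x, y)))
    return first + second

random.seed(1)
for _ in range(20):
    x, y, z = (random.randint(1, 40) for _ in range(3))
    assert lhs(x, y, z) == rhs(x, y, z)
```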
{ "language": "en", "url": "https://math.stackexchange.com/questions/959688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Expression for the Fourier transform of $f(x) = \frac{1}{1 +\|x\|^2}$ I'm having troubles with the Fourier transform of $f(x) = \frac{1}{1 +\|x\|^2} \in L^2(\mathbb{R}^{n})$. For the case $n=1$ I got $\hat{f}(\xi) = \pi e^{-2\pi |\xi|}$ using residues. Does the general case have a nice expression? How is that expression obtained?
It's not in $L^2$ for $n \ge 4$. For $n \le 3$, assume wlog $\xi$ is in the direction of one of the coordinate axes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/959791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a geometrical definition of a tangent line? Calculus books often give the "secant through two points coming closer together" description to give some intuition for tangent lines. They then say that the tangent line is what the curve "looks like" at that point, or that it's the "best approximation" to the curve at that point, and just take it for granted that (1) it's obvious what that means, and (2) that it's visually obvious that such statements are true. To be fair, it's true that (1) I can sort of see what they mean, and (2) yes, I do have some visual intuition that something like that is correct. But I can't put into words what exactly a tangent line is, all I have is either the formal definition or this unsatisfying vague sense that a tangent "just touches the curve". Is there a purely geometrical definition of a tangent line to a curve? Something without coordinates or functions, like an ancient Greek might have stated it. As an example, "A line that passes through the curve but does not cut it" is exactly the kind of thing I want, but of course it doesn't work for all curves at all points.
The best geometrical definition of the tangent line to a curve is the one given by Leibniz in his 1684 article providing the foundations of his infinitesimal calculus. The definition of the tangent line is the line through two infinitely close points on the curve.

{ "language": "en", "url": "https://math.stackexchange.com/questions/959885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 9, "answer_id": 8 }
Integral of $\int \frac{\cos \left(x\right)}{\sin ^2\left(x\right)+\sin \left(x\right)}dx$ What is the integral of $\int \frac{\cos \left(x\right)}{\sin ^2\left(x\right)+\sin \left(x\right)}dx$ ? I understand one can substitute $u=\tan \left(\frac{x}{2}\right)$ and one can get (1) $\int \frac{\frac{1-u^2}{1+u^2}}{\left(\frac{2u}{1+u^2}\right)^2+\frac{2u}{1+u^2}}\frac{2}{1+u^2}du$ but somehow this simplifies to (2) $\int \frac{1}{u}-\frac{2}{u+1}du$ to get: (3) $\ln \left(\tan \left(\frac{x}{2}\right)\right)-2\ln \left(\tan \left(\frac{x}{2}\right)+1\right)+C$ as the final answer. But how does one simplify (1) to (2)?
Consider the integral \begin{align} I = \int \frac{\cos(x) \, dx}{\sin^{2}(x) + \sin(x) } . \end{align} Method 1 Make the substitution $u = \tan\left(\frac{x}{2}\right)$ for which \begin{align} \cos(x) &= \frac{1-u^{2}}{1+u^{2}} \\ \sin(x) &= \frac{2u}{1+u^{2}} \\ dx &= \frac{2 \, du}{1+u^{2}} \end{align} and the integral becomes \begin{align} I &= \int \frac{ \frac{1-u^{2}}{1+u^{2}} \cdot \frac{2 \, du}{1+u^{2}} }{ \left( \frac{2u}{1+u^{2}} \right)^{2} + \frac{2u}{1+u^{2}} } \\ &= \int \frac{2(1-u^{2})}{(1+u^{2})^{2}} \frac{ (1+u^{2})^{2} }{2u} \, \frac{du}{2u+ 1 + u^{2} } \\ &= \int \frac{(1-u^{2}) \, du}{ u (1+u)^{2} } = \int \frac{(1-u) \, du}{u(1+u)} \\ &= \int \left( \frac{1}{u} - \frac{2}{1+u} \right) \, du\\ &= \ln(u) - 2 \ln(1+u) + c_{1}. \end{align} This leads to \begin{align} \int \frac{\cos(x) \, dx}{\sin^{2}(x) + \sin(x) } = \ln \left( \frac{\tan\left(\frac{x}{2}\right)}{ \left( 1 + \tan\left(\frac{x}{2}\right) \right)^{2} } \right) + c_{1} \end{align} Method 2 Make the substitution $u = \sin(x)$ to obtain \begin{align} I &= \int \frac{du}{u^{2} + u} = \int \left( \frac{1}{u} - \frac{1}{1+u} \right) \, du \\ &= \ln(u) - \ln(1+u) + c_{2} \end{align} for which \begin{align} \int \frac{\cos(x) \, dx}{\sin^{2}(x) + \sin(x) } = \ln\left( \frac{\sin(x)}{1+\sin(x) } \right) + c_{2} \end{align}
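Both antiderivatives can be spot-checked numerically: differentiating each should recover the integrand, and the two results should differ by a constant (in fact $\ln 2$, since $\frac{\sin x}{1+\sin x} = \frac{2\tan(x/2)}{(1+\tan(x/2))^2}$). A quick sketch at a few points of $(0,\pi)$, where everything is defined:

```python
import math

def integrand(x):
    return math.cos(x) / (math.sin(x)**2 + math.sin(x))

def F1(x):   # antiderivative from Method 1
    t = math.tan(x / 2)
    return math.log(t) - 2 * math.log(1 + t)

def F2(x):   # antiderivative from Method 2
    s = math.sin(x)
    return math.log(s) - math.log(1 + s)

# Central-difference derivative check, plus the constant-difference check
h = 1e-6
pts = [0.3, 0.8, 1.2, 2.0]
deriv_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
                for F in (F1, F2) for x in pts)
const_err = max(abs((F2(x) - F1(x)) - math.log(2)) for x in pts)
```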
{ "language": "en", "url": "https://math.stackexchange.com/questions/959992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Standard Uniform Distibution with Random Variable Could someone help explain how to solve the following problem: From my understanding, this problem states that we have a function, Uniform(0, 1), that will generate a random value from 0 to 1 with uniform distribution. What I don't understand is how this translates into the random variable X or the given probability mass function.
One approach is to consider the cumulative distribution function $F(x)=P(X \le x)$. So taking a cumulative sum of the probabilities in your table, it might look like

    x        3     4     5
    P(X=x)   0.40  0.15  0.45
    P(X<=x)  0.40  0.55  1.00

Then look at your standard uniform random variable $U$ and

* if $0 \le U \le 0.4$ then set $X=3$
* if $0.4 \lt U \le 0.55$ then set $X=4$
* if $0.55 \lt U \le 1$ then set $X=5$
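The recipe above is inverse-transform sampling for a discrete distribution. A small Python sketch (the sample size and seed are arbitrary) that implements the three cases and checks the empirical frequencies against the table:

```python
import random

def sample_x(u):
    """Map a Uniform(0,1) draw u to X via the inverse CDF table."""
    if u <= 0.40:
        return 3
    elif u <= 0.55:
        return 4
    return 5

random.seed(0)
n = 200_000
counts = {3: 0, 4: 0, 5: 0}
for _ in range(n):
    counts[sample_x(random.random())] += 1

# should come out near 0.40, 0.15, 0.45
freqs = {k: counts[k] / n for k in counts}
```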
{ "language": "en", "url": "https://math.stackexchange.com/questions/960123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why not use two vectors to define a plane instead of a point and a normal vector? In multivariable calculus, it seems planes are defined by a point and a vector normal to the plane. That makes sense, but whereas a single vector clearly isn't enough to define a plane, aren't two non-collinear vectors enough? What I'm thinking is that since we need the cross product of two vectors to find our normal vector in the first place, why not just use those two vectors to define our plane? After all, don't two non-collinear vectors define a basis in $\mathbb{R}^2$?
Remember, vectors don't have starting or ending positions, just directions. So take the vectors <1,0,0> and <0,1,0>. These vectors will define a plane that only goes in the x-y direction, but the problem is, they will work for any z-coordinate. So you need some starting point to anchor your plane.
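To make the answer concrete: the two direction vectors fix only the plane's orientation (via their cross product), while the anchor point picks out one plane from the whole family of parallel ones. A minimal pure-Python sketch, with made-up vectors and anchor point:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

v1 = (1.0, 0.0, 0.0)      # two non-collinear direction vectors
v2 = (0.0, 1.0, 0.0)
p0 = (2.0, -1.0, 5.0)     # the anchor point

n = cross(v1, v2)         # normal vector, here (0.0, 0.0, 1.0)

def on_plane(p, eps=1e-9):
    """p lies on the plane iff p - p0 is perpendicular to the normal n."""
    return abs(sum(n[i] * (p[i] - p0[i]) for i in range(3))) < eps
```

With the same `v1`, `v2` but a different `p0` you get a parallel plane, which is exactly why the vectors alone are not enough.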
{ "language": "en", "url": "https://math.stackexchange.com/questions/960261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
The condition that a ring is a principal ideal domain If $R$ is a nonzero commutative ring with identity and every submodule of every free $R$-module is free, then $R$ is a principal ideal domain. What I don't know is how to show that every ideal is free. Once an ideal is free: for nonzero $u,v\in I$, the relation $uv-vu=0$ is a nontrivial dependence, so a basis of the ideal can have only one element; that is, the ideal is principal. Any help?
Consider $R$ as $R$-module, then this is free, with basis $\lbrace 1 \rbrace $. Suppose $I \subseteq R $ is an ideal, $I \neq 0 $; then $I$ is a submodule, whence it is free. If $u, v \in I $ they are linearly dependent over $R$, because $vu -uv = 0 $. This implies that if $B= \lbrace u_1, u_2, \ldots \rbrace $ is a basis for $I$ as $R$-module, then $|B| = 1 $, so $I = < w > $ and I is a principal ideal. For the domain part, if $a \in R $ , $a \neq 0 $, suppose $ba = 0 $ with $b \neq 0 $. Then consider $I = <a> $. By the previous part $I$ has a basis $\lbrace w \rbrace $. We have $w = k a $ for $ k \in R $. Then $$ bw = bka = k (ba) = 0$$ But $w $ is linearly independent and $b \neq 0 $. This is a contradiction and so $ba \neq 0 $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/960336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integration and Laplace-Stieltjes of a multiplied Weibull and Exponential distribution Function I am having trouble integrating the product of a Weibull and an exponential distribution function. The expression is as follows: $$ Y(t) = \int_0^t e^{-\lambda T}e^{-(T/\mu)^z}dT\,. $$ Then, I need to take the Laplace-Stieltjes transform of $Y$ as follows: $$ W = \int_0^\infty se^{-st}Y(t)dt\, =\int_0^\infty se^{-st} \int_0^t e^{-\lambda T}e^{-(T/\mu)^z}dT\ dt\ $$ I think you can read the expression and help with the question. In the above problem the parameter $z$ is restricted to $$0 < z < 1.$$ For example, consider $z = 0.5$; is there any solution or approximation for this problem under this condition?
A method of solving is shown below:
{ "language": "en", "url": "https://math.stackexchange.com/questions/960455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that for any $n$, we have $\sum_{i=1}^n \frac{a_i}i \ge a_n$. Consider the sequence $(a_n)_{n \ge1}$ of real numbers such that $$a_{m+n}\le a_m+a_n,\ \forall m,n \ge 1.$$ Prove that for any $n$, we have $$\sum_{i=1}^n \frac{a_i}i \ge a_n .$$
This is Problem 2 from the Asian Pacific Mathematical Olympiad (APMO) 1999. I took the following proof essentially from AoPS: We will prove this by strong induction. Note that the inequality holds for $ n=1$. Assume that the inequality holds for $ n=1,2,\ldots,k$, that is, $$ a_1\ge a_1,\quad a_1+\frac{a_2}{2}\ge a_2,\quad a_1+\frac{a_2}2+\frac{a_3}3\ge a_3,\quad \dotsc,\quad a_1+\frac{a_2}2+\frac{a_3}3+\cdots+\frac{a_k}k\ge a_k. $$ Sum them up: $$ ka_1+(k-1)\frac{a_2}2+\cdots+\frac{a_k}{k}\ge a_1+a_2+\cdots+a_k.$$ Add $ a_1+\ldots+a_k$ to both sides: $$ (k+1)\left(a_1+\frac{a_2}2+\cdots+\frac{a_k}k\right)\ge (a_1+a_k)+(a_2+a_{k-1})+\cdots+(a_k+a_1)\ge ka_{k+1}.$$ Divide both sides by $ k+1$: $$ a_1+\frac{a_2}2+\cdots+\frac{a_k}k\ge\frac{ka_{k+1}}{k+1},$$ i.e. $$a_1 + \frac{a_2}{2} + \frac{a_3}{3} + \cdots + \frac{a_{k+1}}{k+1} \geq a_{k+1}$$ which proves the induction step.
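The statement can also be sanity-checked numerically with a concrete subadditive sequence, say $a_n=\sqrt n$ (subadditive since $\sqrt{m+n}\le\sqrt m+\sqrt n$); the range of $n$ tested below is arbitrary:

```python
import math

# a[0] is a_1, ..., a[199] is a_200
a = [math.sqrt(n) for n in range(1, 201)]

def weighted_sum(n):
    """sum_{i=1}^{n} a_i / i"""
    return sum(a[i - 1] / i for i in range(1, n + 1))

# the claimed inequality: weighted_sum(n) >= a_n for every n
ok = all(weighted_sum(n) >= a[n - 1] for n in range(1, 201))
```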
{ "language": "en", "url": "https://math.stackexchange.com/questions/960602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Why does this loop yield $\pi/8$? Was doing some benchmarks for the mpmath python library on my pc with randomly generated tests. I found that one of those tests was returning a multiple of $\pi$ consistently. I report it here:

    from mpmath import *
    mp.dps = 50
    print("correct pi: ", pi)
    den = mpf("3")
    a = 0
    res = 0
    for n in range(1000):
        res = res + 1/den
        a = a + 32
        den = den + a
    print("result: ", res*8)

Result:

    correct pi:  3.1415926535897932384626433832795028841971693993751
    result:      3.1410926536210432286970258295579986968615976706527

The convergence rate is very low. 10000 terms only give accuracy to the 4th-5th decimal digit. Is there somewhere a formula for $\pi$ that involves the number 32?
From your program one can infer that we must have $$ den_n=f(n)=32 T(n)+3 $$ where $T(n)=1+2+...+n=\frac{n(n+1)}{2}$ is a so-called Triangular Number. Then you form the sum $$ \begin{align} res_n&=\sum_{i=0}^n\frac{1}{f(i)}\\ &=\sum_{i=0}^n\frac{1}{32 T(i)+3}\\ &=\sum_{i=0}^n\frac{1}{16i(i+1)+3}\\ \end{align} $$ So the question remains: is this series approximating $\pi/8$ and why ... Others have elaborated on that :)
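To finish the "why": $16i(i+1)+3=(4i+1)(4i+3)$, so partial fractions give $\frac{1}{(4i+1)(4i+3)}=\frac12\left(\frac1{4i+1}-\frac1{4i+3}\right)$, and grouping Leibniz's series $1-\frac13+\frac15-\cdots=\frac\pi4$ into consecutive pairs shows the sum converges to $\frac12\cdot\frac\pi4=\frac\pi8$. A short sketch, with the term count chosen to match the question's observed slow (roughly $1/n$) convergence:

```python
import math

def partial_sum(n):
    # 16*i*(i+1) + 3 == (4*i + 1)*(4*i + 3)
    return sum(1.0 / ((4 * i + 1) * (4 * i + 3)) for i in range(n))

approx = 8 * partial_sum(100_000)
err = abs(approx - math.pi)   # error shrinks like 1/n, hence the slow convergence
```

Each partial sum stops after a "minus" term of the Leibniz series, so the approximation sits below $\pi$, matching the printed result 3.14109... .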
{ "language": "en", "url": "https://math.stackexchange.com/questions/960681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Embedding of linear order into $\mathcal{P}(\omega)/\mathrm{fin}$ I am trying to prove the following problem (in Kunen): Assume $\mathrm{MA}(\kappa)$ and let $(X,<)$ be a total order with $|X|\le\kappa$; then there are $a_x\subseteq \omega$ such that if $x<y$ then $a_x\subset^* a_y$. ($a\subset^* b$ if $|a-b|<\omega$ and $|b-a|=\omega$.) Hint. Let $P$ be a set of pairs $(p,n)$, where $p$ is a partial function from $X$ to $\mathcal{P}(n)$ and the domain of $p$ is finite. Define $(p,n)\le (q,m)$ iff $\operatorname{dom}p\supseteq \operatorname{dom}q$, $n\ge m$, $p(x)\cap m=q(x)$ for each $x\in \operatorname{dom} q$ and $$\forall x,y\in \operatorname{dom}q:x<y\implies p(x)-p(y)\subseteq m.$$ Use the $\Delta$-system lemma to prove that $(P,\le)$ has the c.c.c. I proved that $P$ has the c.c.c. and that if $G$ is a filter which intersects the dense sets $$D_x=\{(p,n):x\in\operatorname{dom}p\}$$ over $P$ then $a_x:= \bigcup\{p(x)\mid\exists n<\omega: (p,n)\in G\text{ and }x\in\operatorname{dom}p\}$ satisfies $|a_x-a_y|<\omega$ if $x<y$. But I don't know how to prove $|a_y-a_x|=\omega$. I think some further dense sets are needed, but which ones? Thanks for any help.
If $x<y$, $(q,m)\in P$, $x,y\in \operatorname{dom}(q)$, and $N<\omega$, define $p$ by $\operatorname{dom}(p)=\operatorname{dom}(q)$, $p(z)=q(z)$ if $z<y$, and $p(z)=q(z)\cup\{m+N\}$ otherwise. Then $(p,m+N+1)\leq (q,m)$, and there is an element $\geq N$ in $p(y)\setminus p(x)$. As all elements of $G$ are pairwise compatible, genericity with respect to these dense sets (one for each such $x<y$ and $N$) together with what we just showed implies $|a_y\setminus a_x|=\omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/960767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Spivak Sine confusion (possible error) Quote from Spivak: "Let us consider the function $f(x) = \sin(1/x)$." The goal is to show it is false that $f(x)\to 0$ as $x \to 0$. He says we have to show that "we simply have to find one $a > 0$ for which the condition $|f(x)| < a$ cannot be guaranteed, no matter how small we require $|x|$ to be. In fact $a = 1/2$ will do. It is impossible to ensure that $|f(x)| < 1/2$ no matter how small we require $|x|$ to be." (Spivak 93). Okay, major issue here. $\sin(1/x) < 1/2$ $ \implies 1/x < \arcsin(1/2)$ $ \implies 1/x < \pi/6$ $ \implies 6/\pi < x$ I just proved $\sin(1/x)$ can be less than $1/2$. What is Spivak talking about?!?
He is saying that there are arbitrarily small values of $x$ for which the condition $|f(x)| < 1/2$ is not satisfied. Your computation only exhibits some $x$ with $\sin(1/x) < 1/2$; it does not show the inequality holds for all $x$ near $0$, which is what would be needed. (Also, $\sin t < 1/2$ does not imply $t < \arcsin(1/2)$, because $\sin$ is not monotone; for instance $t=\pi$ gives $\sin t = 0 < 1/2$.)
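Concrete witnesses: along $x_n = 1/(\pi/2 + 2\pi n)$ we have $f(x_n)=\sin(1/x_n)=1$, and $x_n\to 0$, so every neighborhood of $0$ contains points where $|f(x)|\ge 1/2$. A minimal sketch:

```python
import math

# points x_n -> 0 at which sin(1/x_n) = 1 exactly
xs = [1.0 / (math.pi / 2 + 2 * math.pi * n) for n in range(1, 6)]
vals = [math.sin(1.0 / x) for x in xs]
```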
{ "language": "en", "url": "https://math.stackexchange.com/questions/960860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
What does X-2 mean given continuous probability distribution X? I have the continuous probability distribution X: $f(x) = 2x e^{-x^2}, \, x \geq 0$ and zero everywhere else. One of my homework problems is to find the probability distribution of $X-2$, $-2X$, and $X^2$, but intuitively it doesn't make much sense to me. For example if I consider $X-2$: $f(x) = 2xe^{-x^2} - 2, \, x \geq 0$ and $-2$ everywhere else. This doesn't make sense and isn't a probability distribution. Neither is: $f(x) = 2xe^{-x^2} - 2, \, x \geq 0$ and $0$ everywhere else. A little bit of input would be highly appreciated.
We approach the problem through the cumulative distribution functions, even though it is less efficient than the method of transformations. 1) Let $Y=X-2$. We want the density function of $Y$. First we find an expression for the cumulative distribution function of $Y$, that is, $\Pr(Y\le y)$. We have $$F_Y(y)=\Pr(Y\le y)=\Pr(X-2\le y)=\Pr(X\le 2+y).$$ For $y\le -2$, we have $\Pr(X\le 2+y)=0$, so $F_Y(y)=0$, and therefore the density function $f_Y(y)$ is $0$. For $y\gt -2$, we have $$F_Y(y)=\int_0^{2+y}2xe^{-x^2}\,dx.$$ Now we have two options: (i) Calculate $F_Y(y)$, and differentiate to find $f_Y(y)$ or (ii) Use the Fundamental Theorem of Calculus to differentiate the above integral. That is easier, and gives $$f_Y(y)=2(2+y)e^{-(2+y)^2}$$ for $y\gt -2$. 2) Let $Y=-2X$. We have $F_Y(y)=\Pr(Y\le y)=\Pr(-2X\le y)=\Pr(X\ge -\frac{y}{2})$. Now work much as in 1). 3) Let $Y=X^2$. For $y\le 0$, we have $F_Y(y)=0$, so the density function is $0$. For $y\gt 0$, we have $$F_Y(y)=\Pr(X^2\le y)=\Pr(X\le \sqrt{y})=\int_0^{\sqrt{y}} 2xe^{-x^2}\,dx.$$ Now calculate the integral, and differentiate, or differentiate under the integral sign. Remark: In the title, you ask what $X-2$ means. It is easier to explain with a different function. Imagine an experiment in which we take a person at random, and measure her height. Let the random variable $X$ be the person's height in metres. Let $Y=100X$. Then $Y$ is a random variable, and measures the person's height in cm. Suppose the measurement was made when the person was wearing shoes with $2$ cm thick soles. Let $Z=100X-2$. Then the random variable $Z$ is the person's bare foot height in cm.
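As a check on part 3): carrying out the integral gives $F_Y(y)=1-e^{-y}$, so $Y=X^2$ is standard exponential. A simulation sketch, sampling $X$ by inverse transform from $F_X(x)=1-e^{-x^2}$ (a step not taken in the answer; sample size and seed are arbitrary):

```python
import math
import random

random.seed(1)
n = 200_000

# sample X with density 2x e^{-x^2}: F_X(x) = 1 - e^{-x^2}, so X = sqrt(-ln(1-U))
xsamples = [math.sqrt(-math.log(1.0 - random.random())) for _ in range(n)]

# part 3) says Y = X^2 has CDF F_Y(y) = 1 - e^{-y}; compare at y = 1
empirical = sum(1 for x in xsamples if x * x <= 1.0) / n
theoretical = 1.0 - math.exp(-1.0)   # about 0.6321
```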
{ "language": "en", "url": "https://math.stackexchange.com/questions/961035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }