Let $X\sim\text{Geometric}(1/3)$, and let $Y=|X-5|$. Find the range and PMF of $Y$. Here is my trial: If $x=0$, $P(Y=|0-5|)=P(Y=5)=\left(\frac{2}{3}\right)^5 \frac{1}{3}$. If $x=1$, $P(Y=|1-5|)=P(Y=4)=\left(\frac{2}{3}\right)^4 \frac{1}{3}$. If $x=2$, $P(Y=|2-5|)=P(Y=3)=\left(\frac{2}{3}\right)^3 \frac{1}{3}$. If $x=3$, $P(Y=|3-5|)=P(Y=2)=\left(\frac{2}{3}\right)^2 \frac{1}{3}$. If $x=4$, $P(Y=|4-5|)=P(Y=1)=\left(\frac{2}{3}\right)^1 \frac{1}{3}$. If $x=5$, $P(Y=|5-5|)=P(Y=0)= \frac{1}{3}$. Range of $Y=\{0,1,2,3,4,5\}$ and the PMF is $P(Y=y)=\left(\frac{2}{3}\right)^y \frac{1}{3}$ for $y=0,1,2,3,4,5$. Am I correct? If not, please correct me.
The range of $Y$ is $\{0,1,2,\ldots\}$; it is not a finite set. $$P(Y=k) \;=\; \left(\frac{2}{3}\right)^{k-1} \cdot \frac{1}{3}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2165379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does this basic probability question's solution make sense? I am just having some trouble understanding why it is valid to answer the following question in a certain way. Suppose we have iid Bernoulli trials with $p=0.7$. We do trials until we have 4 successes or 4 failures; if we get 4 successes first, we win, else we lose. What is the probability that we win? Now what I don't understand is this: on a work page I had seen, the professor had written that we can solve it by noting that it is binomial with 7 trials, and we win if we have 4 or more successes. But I don't understand why this is valid, because how can we have more than 4 successes? I.e., the game is over at that point. I would think it would be $$P[S=4, F=0]+P[S=4, F=1] + P[S=4, F=2]+ P[S=4, F=3]$$ But mostly I am wondering about that solution and how it is valid to say they are equivalent even though it does not make sense to have more than 4 successes or failures. Thanks
"But I don't understand why this is valid, because how can we have more than 4 successes? The game is over at that point." A winner will be declared when one player has four successes before the other does. The players could continue playing a whole seven games without affecting their probability of winning. However, imagining they do so may make it easier to calculate. In seven games exactly one player will have four or more successes while the other must have three or fewer. That means that one player will have reached four successes before the other does so. The events "have at least four successes in seven games" and "have four successes before the other player" are the same event and hence have the same probability: $$\binom 74 p^4(1-p)^3+\binom 75 p^5(1-p)^2+\binom 76 p^6(1-p)+p^7$$ "I would think it would be $= P[S=4,F=0]+P[S=4,F=1]+P[S=4,F=2]+P[S=4,F=3]$." Sure, but with the additional criterion that the last game played must be the fourth success. That is: $$\binom 33 p^4+\binom43 p^4(1-p)+\binom 53p^4(1-p)^2 +\binom63 p^4(1-p)^3$$ These two expressions are actually equal. Both evaluate to $p^4 (35-84p+70p^2-20 p^3)$.
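For a quick sanity check, here is a small Python sketch of my own (not part of the original answer; it uses only the standard library) evaluating both expressions and the closed form at $p=0.7$:

```python
# Evaluate both sums at p = 0.7; they should match the closed form
# p^4 (35 - 84 p + 70 p^2 - 20 p^3).
from math import comb

p = 0.7
binomial_form = sum(comb(7, k) * p**k * (1 - p)**(7 - k) for k in range(4, 8))
stopped_form = sum(comb(3 + f, 3) * p**4 * (1 - p)**f for f in range(4))
closed_form = p**4 * (35 - 84*p + 70*p**2 - 20*p**3)
print(binomial_form, stopped_form, closed_form)  # all approximately 0.873964
```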
{ "language": "en", "url": "https://math.stackexchange.com/questions/2165475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving that $R^2\to 1$ as the degree of a polynomial $k\to \infty$ for a least squares regression. Recently, I thought of the following interesting problem. Given a set of data, I noticed that as the degree of a polynomial increases, in general the $R^2$ value tends to increase too. I will define the $R^2$ value as the following: For a polynomial $p_k(x)=a_0+a_1 x+\cdots+a_k x^k$, the $R^2$ value for a set of points $(x_i,y_i)$ is: $$R^2\equiv 1-\sum_{i=1}^n \left[y_i-(a_0+a_1\cdot x_i +\cdots + a_k\cdot {x_i}^k)\right]^2 \tag{1}$$ where $n$ is the number of points. Below I demonstrate the increase of $R^2$ with the following set of data I've made up: $$\begin{array}{c|c}x&y\\\hline0&-1\\0.5&-0.5\\1.4&-0.9\\2.1&0.2\\2.5&0.7\\3.1&1.7\\4.3&2.3\\5.2&1.5\\5.6&3.5\end{array}$$ Here is an animated GIF I have created showing this: I realized that the set of data must be many-to-one or one-to-one for the $R^2$ value to tend to $1$, otherwise the interpolating polynomial will not be able to pass through all the points, since the polynomial is a function. Therefore, I've conjectured the following, and would like to prove it: Let $p_k(x)$ be a least squares fitting polynomial of degree $k$. Consider a discrete many-to-one or one-to-one relationship between $x$ and $y$ with finite values of $y$. Then $$\lim_{k\to \infty} R^2=1$$ for all sets of data satisfying the above conditions. I figured that the $R^2$ value will never decrease as $k$ increases. I know that given $n$ points $(x_i,y_i)$, the following yields the coefficients $a_0, a_1,\cdots,a_k$: $$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\y_n \end{bmatrix}=\begin{bmatrix} 1 & x_1 & {x_1}^2 & \cdots & {x_1}^k \\ 1 & x_2 & {x_2}^2 & \cdots & {x_2}^k \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & {x_n}^2 & \cdots & {x_n}^k \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_k \end{bmatrix} \tag{2}$$ Now, I was thinking that I could combine $(1)$ and $(2)$ to prove it, but I am unsure how to do so. I also noticed that there may be a problem with this, because I know that one condition for a matrix to be invertible is that the matrix must be square. Therefore, I think we may be restricting ourselves to the specific cases where $n=k$ when proving it (which loses the generality of the proof). I was wondering whether proving this may be related to power series, which do this for continuous functions $\forall x \in \mathbb{R}$. If so, I think the approach to proving this may be significantly easier. If this is not a true conjecture or if some clarification is required, please let me know in the comments. Thanks in advance.
A polynomial of degree $n-1$ can perfectly fit $n$ points $(x_i,y_i)$ by choosing the right coefficients. Therefore, for a big enough degree of the polynomial, the sum vanishes and thus $R^2$ equals one.
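As a quick illustration, here is a small Python sketch of my own (assuming numpy, and using the data set from the question) showing the residual sum of squares shrinking toward zero as the degree approaches $n-1=8$:

```python
import numpy as np

x = np.array([0, 0.5, 1.4, 2.1, 2.5, 3.1, 4.3, 5.2, 5.6])
y = np.array([-1, -0.5, -0.9, 0.2, 0.7, 1.7, 2.3, 1.5, 3.5])

for k in (1, 3, 5, 8):
    coeffs = np.polyfit(x, y, k)            # least squares fit of degree k
    residuals = y - np.polyval(coeffs, x)
    print(k, np.sum(residuals**2))          # -> 0 as k -> n - 1 = 8
```

With the question's definition $R^2 = 1 - \sum_i(\cdots)^2$, this sum going to $0$ is exactly $R^2\to 1$.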
{ "language": "en", "url": "https://math.stackexchange.com/questions/2165600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why Use Arbitrary Unions and Finite Intersections in Topology? Why is the definition of a topological space stated in terms of finite intersections and arbitrary unions? What if we change the conditions to arbitrary intersections and finite unions?
General topology arose in large part as a generalization of real analysis. On the real line with the usual topology, an arbitrary union or finite intersection of open intervals is open. We cannot push to even countably infinite intersections, however, since $$ \bigcap_{n=1}^\infty \left(-\frac{1}{n}, \frac{1}{n} \right) = \{0\} $$ gives us an example of a countably infinite collection of open intervals whose intersection is not open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2165829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Any counterexample for $P(A|C)=P(A),P(B|C)=P(B)$ but $P(A\cap B|C)\neq P(A\cap B)$? Let $A,B,C$ be three events. What would be an example showing that $P(A|C)=P(A)$ and $P(B|C)=P(B)$ do not imply $P(A\cap B|C)= P(A\cap B)$?
Consider a die with $8$ faces, with the numbers $1$ to $8$ on them respectively. We roll that die and denote by $X$ the resulting face. Let $A=\{X=1\}\cup\{X=2\}$, $B = \{X=2\}\cup\{X = 3\}$, $C=\{X~\text{is even}\}$. Then we definitely have $$P(A|C)=P(X=2\vert C)=\frac14=P(A)$$ and $$P(B|C)=P(X=2\vert C)=\frac14=P(B).$$ But $$P(A\cap B|C) = P(X = 2\vert C) = \frac14\ne \frac18 = P(X = 2) = P(A\cap B) $$
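The counterexample is small enough to verify by direct enumeration; a Python sketch of my own:

```python
from fractions import Fraction

A, B = {1, 2}, {2, 3}
C = {2, 4, 6, 8}                      # even faces of a fair 8-sided die

def P(E):
    return Fraction(len(E), 8)

def P_given(E, F):
    return Fraction(len(E & F), len(F))

print(P_given(A, C), P(A))            # 1/4 1/4
print(P_given(B, C), P(B))            # 1/4 1/4
print(P_given(A & B, C), P(A & B))    # 1/4 1/8 -- not equal
```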
{ "language": "en", "url": "https://math.stackexchange.com/questions/2165927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
On the sum: $\sum\limits_{n=0}^{\infty}\left[\,\sum\limits_{k=1}^{a}\frac{1}{an+k}-\sum\limits_{k=1}^{b}\frac{1}{bn+k}\,\right]$ How to prove: $$ \sum_{n=0}^{\infty}\left[\,\sum_{k=1}^{a}\frac{1}{a\,n+k}-\sum_{k=1}^{b}\frac{1}{b\,n+k}\,\right]=\log\left(\frac{a}{b}\right)\qquad\colon\,a\,,b\in\mathbb{N}^{+}\tag{1}$$ Obviously, we cannot split the sum: $$ \left(\sum_{n=0}^{\infty}\,\sum_{k=1}^{a}\frac{1}{a\,n+k}\right)-\left(\sum_{n=0}^{\infty}\,\sum_{k=1}^{b}\frac{1}{b\,n+k}\right) = \left(\sum_{n=1}^{\infty}\frac1n\right)-\left(\sum_{n=1}^{\infty}\frac1n\right)=\infty-\infty $$ Any ideas? Thanks in advance.
For starters, let us give an integral representation to the general term of the outer sum: $$ \sum_{k=1}^{a}\frac{1}{an+k}-\sum_{k=1}^{b}\frac{1}{bn+k} = \int_{0}^{1}\left(x^{an}\sum_{k=1}^{a}x^{k-1}-x^{bn}\sum_{k=1}^{b}x^{k-1}\right)\,dx \tag{1}$$ turning the whole sum into: $$ \lim_{N\to +\infty}\sum_{n= 0}^{N}\int_{0}^{1}\left[\left(x^{an}-x^{bn}\right)-\left(x^{a(n+1)}-x^{b(n+1)}\right)\right]\,\frac{dx}{1-x} \tag{2}$$ If now we exploit $$ \int_{0}^{1}\frac{x^A-x^B}{1-x}\,dx = H_B-H_A \tag{3}$$ the whole sum turns into $$ \lim_{N\to +\infty} \left(H_{aN}-H_{bN}\right) = \log\frac{a}{b} \tag{4}$$ due to $H_n=\log(n)+\gamma+O\left(\frac{1}{n}\right).$ It is interesting to point out that we are not allowed to exchange $\sum_{n\geq 0}$ and $\int_{0}^{1}$ in $(2)$, since the hypotheses of the dominated convergence theorem are not met.
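A numerical sanity check of $(4)$, a Python sketch of my own for $a=3$, $b=2$:

```python
from math import log

a, b, N = 3, 2, 100000
# partial sum of the outer series up to N-1; it should approach log(a/b)
s = sum(sum(1/(a*n + k) for k in range(1, a + 1))
        - sum(1/(b*n + k) for k in range(1, b + 1))
        for n in range(N))
print(s, log(a/b))   # both approximately 0.405465
```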
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How to show that $\lim\limits_{x \to 0} \frac{3x-\sin 3x}{x^3}=9/2$? $\lim\limits_{x \to 0} \frac{3x-\sin 3x}{x^3}$ I need to prove that this limit equals $\frac{9}{2}$. Can someone give me a step-by-step solution? EDIT: I am sorry. The $x$ goes to $0$, not $1$.
Using l'Hôpital's rule: $$\lim_{x\to0}\frac{3x-\sin(3x)}{x^3}=\lim_{x\to0}\frac{3-3\cos(3x)}{3x^2}=\lim_{x\to0}\frac{9\sin(3x)}{6x}=\lim_{x\to0}\frac{27\cos(3x)}{6}=\frac{9}{2}$$
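For readers who want to double-check the algebra, a one-line symbolic computation (my own sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((3*x - sp.sin(3*x)) / x**3, x, 0))   # 9/2
```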
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 1 }
Solving Trigonometric Derivatives I have a function $F(θ) = \sin^{−1} \sqrt{\sin(11θ)}$ I derived the following answer using basic trigonometric and quotient rules. $\dfrac{11\csc \left((11\theta)^{1/2}\right)\cos \left(11\theta \right)}{2\sqrt{1-\sin \left(11\theta \right)}}$ My answer however is wrong. Can anyone outline how to go about getting the correct solution to a problem such as this?
We have $\sin F(x)= (\sin 11x)^{1/2},$ so by the Chain Rule, $$\cos F(x)\cdot F'(x)=\tfrac{1}{2}(\sin 11x)^{1/2-1}\cdot(11\cos 11 x)=\frac {11\cos 11x}{2\sqrt {\sin 11x}}.\tag{1}$$ Assuming that we are using the principal (usual) branch of $\sin^{-1}$, which takes values in $[-\pi /2,\pi /2],$ we have $F(x)\in [-\pi /2,\pi /2]$, so $\cos F(x)\geq 0$. So $$\cos F(x)=\sqrt {1-\sin^2F(x)}=\sqrt {1-\sin 11x},$$ which we can substitute into $(1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Error of Taylor Polynomial a) Construct the Taylor polynomials of degree 4 at $x_0 = 0$ for the following functions: $$f(x) = \frac{1}{2+x}$$$$ f(x)=\sin\left(\frac{x}{3}\right)$$ b) Find a bound for the error terms for $x\in [-1,1].$ I have the solution to the first part as $$\frac{1}{2+x} \approx \frac{1}{2}-\frac{x}{4} +\frac{x^2}{8} -\frac{x^3}{16}+\frac{x^4}{32} $$ and $$\sin\left(\frac{x}{3}\right) \approx \frac{x}{3}-\frac{x^3}{162}$$ but I am completely stumped at finding the error bounds in part b.
Recall the remainder term is bounded by $\displaystyle{Mr^{n+1}\over (n+1)!}$ where $M=\displaystyle\sup_{a\le x\le b}|f^{(n+1)}(x)|$ and $r={b-a\over 2}$ is the radius of the interval in question. In your case you know the derivatives well enough to see that $$|f^{(n+1)}(x)|=\begin{cases}\displaystyle{(n+1)!\over (2+x)^{n+2}} && f(x) = (x+2)^{-1} \\ 3^{-n-1}|g(x)| && f(x) = \sin(x/3)\end{cases}$$ where $g(x)$ is something bounded by $1$, but would be a sine or cosine depending on $n$ and I'm too lazy to write that out. The maximum of the first is just $(n+1)!$, since the smallest the denominator can be is $(2+(-1))^{n+2}=1$. For the second we just bound $|g|$ by $1$ for simplicity. This gives rise to the error bounds $$B_1= 1$$ $$B_2={1\over 3^{n+1}(n+1)!}$$
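To see how tight these bounds are, here is a small Python sketch of my own (assuming numpy) comparing the actual maximum errors on $[-1,1]$ with $B_1$ and $B_2$ for $n=4$:

```python
import numpy as np
from math import factorial

n = 4
xs = np.linspace(-1, 1, 2001)

p1 = 0.5 - xs/4 + xs**2/8 - xs**3/16 + xs**4/32  # degree-4 Taylor of 1/(2+x)
err1 = np.max(np.abs(1/(2 + xs) - p1))

p2 = xs/3 - xs**3/162                            # degree-4 Taylor of sin(x/3)
err2 = np.max(np.abs(np.sin(xs/3) - p2))

print(err1, 1.0)                                 # ~0.031   <= B1 = 1
print(err2, 1/(3**(n + 1) * factorial(n + 1)))   # ~3.42e-5 <= B2 ~ 3.43e-5
```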
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Translate linear motion to polar coordinates I'm a math noob, so even though I want to understand a general solution to my problem, I don't even know how to articulate it except as an example. So: The setup: Suppose I want to draw a segment from (1,2) to (0,1) on the Cartesian plane. Suppose I get to do so by tracing a vector along said segment. That is, I have a stepper motor at (0,0) that rotates about the z axis, and a pen parallel to the z axis, tip facing the xy plane, attached to the tip of a rod that can extend linearly from the axle of the stepper motor. Effectively, I can represent arbitrary vectors from the polar origin (ignoring length limitations; let's assume I can manage $\sqrt{5}$). My problem is thus: If I run the stepper motor at constant velocity, this represents constant angular velocity. This is not a problem, except I would need to know how to compensate for this by moving the pen tip (the vector magnitude) such that the end result is a smooth motion along the segment. I have no idea where to begin. I would even appreciate answers that point me toward being able to ask this question more succinctly.
The transformation from Cartesian coordinates to polar coordinates is $$ x = r\cos\theta, \quad y=r\sin\theta. $$ Since the equation of the line on which your two points lie is $y=x+1$, its polar equation is $$ r\sin\theta=r\cos\theta+1, $$ which you can solve for $r$ to obtain $$ r = \frac1{\sin\theta - \cos\theta}. $$ That tells you what you want the radial position of the pen to be, given the current angle of the stepper motor. If the constant angular velocity, which is $\frac{d\theta}{dt}$, is equal to $a$ (make sure you use radians per second, not degrees per second!), then the required velocity of your pen tip in the radial direction should then be $$ \frac{dr}{dt} = \frac{dr}{d\theta} \frac{d\theta}{dt} = \frac{\sin\theta+\cos\theta}{\sin2\theta-1} \cdot a. $$
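Put together as code, a Python sketch of my own (the names and the angular velocity value are illustrative, not from the answer):

```python
import numpy as np

a = 0.1  # constant angular velocity in rad/s (assumed value)

def r_of_theta(theta):
    # radial pen position so that the tip stays on the line y = x + 1
    return 1.0 / (np.sin(theta) - np.cos(theta))

def dr_dt(theta):
    # required radial speed: dr/dtheta * dtheta/dt
    return (np.sin(theta) + np.cos(theta)) / (np.sin(2*theta) - 1) * a

# sweep from the angle of (1, 2) to the angle of (0, 1)
for theta in np.linspace(np.arctan2(2, 1), np.pi/2, 5):
    r = r_of_theta(theta)
    print(r * np.cos(theta), r * np.sin(theta))  # y stays equal to x + 1
```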
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding $\lim_{z\to i} \frac{\arctan(1+z^2)^2}{\sin^2(1+z^2)}$ without using Taylor expansions I'm stuck trying to compute this limit: $$\lim_{z\to i} \frac{\arctan(1+z^2)^2}{\sin^2(1+z^2)}$$ I tried to use the logarithm form of $\arctan$ and the exponential form of $\sin$ but the formula got too complicated.
$$\lim_{z\to i} \frac{\arctan(1+z^2)^2}{\sin^2(1+z^2)}=\lim_{w\to 0} \frac{\dfrac{\arctan w^2}{w^2}}{\dfrac{\sin^2 w}{w^2}}=1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the equation of the tangent to the curve Find the equation of the tangent to the curve: $$y = e^x +1$$ At the point: $$(1, e+1)$$ My process: $$\text{Gradient: } y' = e^x$$ Tangent: $$y-(e+1) = e^x(x-1)$$ $$y= xe^x-e^x+e+1$$ I don't understand where my mistake is. I found the derivative (gradient) and then put it into the gradient formula to find the equation of the tangent, but I am still wrong and I don't know where.
The gradient of the tangent is the value of $y'$ at the given point (a number), not the function $e^x$ itself. At $(1, e+1)$, $$y' = e^{1} = e$$ Then, $$y - (e + 1) = e(x - 1)$$ $$y = ex + 1$$ I think it is a careless mistake.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Triangle Inequality Issue Suppose that $\left| a \right| \leq 1$ and $\left| b \right| \leq 1$. Is there a nice way, other than a proof by cases, to show that $$\left| \left| a \right|^n - \left| b \right|^n \right| \leq 1?$$ I'm obviously aware of the triangle inequality $\left| \left| a \right| - \left| b \right| \right| \leq \left| a - b \right|$, but this doesn't help me out.
From $|a|^n \le 1$ we get $|a|^n \le 1+|b|^n$, hence $|a|^n-|b|^n \le 1$. In the same way we show $|b|^n-|a|^n \le 1$. Together these give $\left|\,|a|^n-|b|^n\,\right| \le 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Laplace transform of Bessel function of order zero I'm trying to prove that the Laplace transform of the function $$ J_0(a\sqrt{x^2+2bx}) $$ is $$ \frac{1}{\sqrt{p^2+a^2}} \exp\left\{bp- b\sqrt{p^2+a^2} \right\} $$ as asserted in eqworld. This formula can also be found in the book "Tables of Integral Transforms", page 207. My first attempt was to insert $a\sqrt{x^2+2bx}$ in the series representation of the Bessel function $$ J_0(x) = \sum_{m=0}^\infty \frac{(-1)^m}{(m!)^2}\left(\frac{x}{2}\right)^{2m} $$ and then integrate term by term. However, the Laplace transform of the function $(x^2+2bx)^m$ is not trivial. My second attempt was to look at a change of variable in the differential equation of $J_0$ $$ t^2J_0''(t)+tJ_0'(t)+t^2J_0(t)=0 $$ but I was not able to find it. Thanks for any clue.
By rescaling $x$, the problem is equivalent to finding the Laplace transform of $f_c(x)=J_0(\sqrt{x^2+2cx})$ or the inverse Laplace transform of $$ g_c(p)=\frac{1}{\sqrt{p^2+1}}\,\exp\left(\frac{-c}{p+\sqrt{p^2+1}}\right). \tag{1}$$ It is useful to recall that $\mathcal{L}(J_n(x))(p) = \frac{1}{\sqrt{1+p^2}(p+\sqrt{1+p^2})^n}$. It follows that: $$ \mathcal{L}^{-1}(g_c(p))(x) = \sum_{n\geq 0}\frac{(-c)^n J_n(x)}{n!}=\sum_{n\geq 0}\frac{(-c)^n}{n! 2\pi i^n}\int_{0}^{2\pi}e^{ix\cos\theta}e^{in\theta}\,d\theta \tag{2}$$ and $$\begin{eqnarray*} \mathcal{L}^{-1}(g_c(p))(x) &=& \frac{1}{2\pi}\int_{0}^{2\pi}\exp\left(ix\cos\theta+ice^{i\theta}\right)\,d\theta\\&=&\frac{1}{2\pi}\int_{0}^{2\pi}\exp\left(i(x+c)\cos\theta-c\sin\theta\right)\,d\theta\\&=&I_0\left(\sqrt{c^2-(x+c)^2}\right)=I_0\left(\sqrt{-x(2c+x)}\right). \end{eqnarray*}\tag{3}$$ The last term equals $\color{red}{J_0(\sqrt{x^2+2cx})}$ as wanted. A particular thing in Mathematics won't ever cease to amaze me: $a=b$ and $b=a$ are the same thing on a semantic level; however, proving $a\mapsto\ldots\mapsto b$ may be extremely hard while proving $b\mapsto\ldots\mapsto a$ is almost trivial. I do not know for sure which part of this phenomenon is due to the fact that we are used to having the "initial term" on the left and the "final term" on the right of a blank page (in many Asian cultures it is likely the opposite; it might be interesting to investigate "the other side" of this issue, too), and which part is due to the fact that we heavily rely on causal links (even when they are $\Leftrightarrow$) in order to achieve a hierarchical, tree-like structure in our minds. In this particular case, I found it easier to compute an inverse Laplace transform rather than a direct Laplace transform, and I think we should be trained to "think backwards" more often. That also reminds me of an interesting question I heard once: if both we and Mathematics are made in the image of God, why do we find it so difficult to prove many things? Why is mathematical truth not evident to our eyes?
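The transform pair can also be checked numerically; a Python sketch of my own (assuming scipy), for one arbitrary choice of $a$, $b$, $p$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a, b, p = 2.0, 0.7, 1.5
# direct Laplace transform by numerical integration
lhs, _ = quad(lambda x: np.exp(-p*x) * j0(a*np.sqrt(x**2 + 2*b*x)),
              0, np.inf, limit=200)
# the claimed closed form
rhs = np.exp(b*p - b*np.sqrt(p**2 + a**2)) / np.sqrt(p**2 + a**2)
print(lhs, rhs)   # agree to several digits
```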
{ "language": "en", "url": "https://math.stackexchange.com/questions/2166925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Well Ordering Principle, Zorn's Lemma, Induction, etc. I know there are plenty of proofs online that show induction implies well-ordering, well-ordering implies induction, Zorn's lemma implies induction, etc., and what I seem to be getting is that in most courses professors introduce these by assuming one and proving the others from it. But is there an accessible proof of one that does not assume the others and is self-contained? I don't seem to be finding any. I hate the idea that you more or less have to assume one because the proof of any one of them without using the others is too inaccessible to undergraduate students. Thanks!
All three are variations of the Axiom of Choice: https://en.wikipedia.org/wiki/Axiom_of_choice There is no ultimate proof because it is an independent axiom.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How can we prove this inequality involving e? $$ \frac{1}{1+|x|} \le \frac{e^x - 1}{x} \le1 + |x|(e - 2) $$ where $$x \in [-1,0)\cup(0,1]$$ How can we prove this inequality? My text used it to prove, via the sandwich theorem, that the limit of the middle function as $x→0$ is $1$. I have no idea how to begin. I know I can find the limit using L'Hospital, but I still can't figure out where this inequality came from.
It sounds awkward to use these inequalities to prove the limit you want. The limit $\lim_{x\to 0} (e^x-1)/x=1$ follows by L'Hospital's rule as you said, or by using the fact that $(e^x-1)/x=1+\sum_{n\ge 1} x^n/(n+1)!$ and that the power series $\sum_{n\ge 1} x^n/(n+1)!$ is absolutely convergent and hence the sum converges to $0$ as $x\to 0$. If you are not convinced of that at sight you may use $|\sum_{n\ge 1} x^n/(n+1)!|\le |x|\sum_{n\ge 0} |x|^n/(n+2)!\le |x|e^{|x|}$. Now to prove the inequality $(e^x-1)/x\le 1+|x|(e-2)$ here are some hints. First note that the case $x\in [-1,0[$ follows from the case $x\in ]0,1]$, since for $x<0$ we have $(e^x-1)/x=(e^{-|x|}-1)/(-|x|)=(1-e^{-|x|})/|x|=(e^{|x|}-1)/(|x|e^{|x|})\le (e^{|x|}-1)/|x|$, since $e^u\ge 1$ if $u\ge 0$. Now let's concentrate on the case $x\in I=]0,1]$. In that case we may write $(e^x-1)/x=1+x\sum_{n\ge 0} x^n/(n+2)!\le 1+x\sum_{n\ge 0}1/(n+2)!=1+x(e-2)$. I didn't try, but I guess similar arguments may handle the other inequality. I hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Induction - request for help in proving lemma Could anyone help with proving the following lemma, please? Let: $n\in \mathbb{N}$, $Z_{n}^{*}:=\{k\in\mathbb{N}: k\in\{1,\dots,n\} \wedge \gcd(k,n)=1\}$. Then: $\forall n\in \mathbb{N} \ \forall p \in \mathbb{P}: |Z_{p^{n}}^{*}|=p^{n}-p^{n-1}$ I tried to prove this by induction with respect to $n$, but I got stuck at the general case. I know how induction works, but I can't see how to handle the main step...
You may be overthinking it. $\gcd(k,p^{n}) = 1$ holds if and only if $k$ is not divisible by $p$. So you need to count the number of elements in $\mathbb Z_{p^n}$ that are not divisible by $p$. Ask yourself: (1) How many elements are there in $\mathbb Z_{p^n}$ in total? (2) What fraction of these elements are divisible by $p$ and what fraction are not?
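If it helps to see the pattern, here is a brute-force Python check of my own:

```python
from math import gcd

for p in (2, 3, 5):
    for n in (1, 2, 3):
        m = p**n
        count = sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)
        print(p, n, count, m - m // p)   # the last two columns agree
```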
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conjecture about the digits of $\pi$ Consider irrational numbers between 1 and 9. Let's call a specific one $a$. Let $n>0$ be an integer. In decimal, consider the first $n$ digits of $a$; call that string $A(a,n)$. Now consider the next $n$ digits of $a$ after the first $n$; call that string $B(a,n)$. Define: $a$ has the repeat-digits property (rdp) iff $$A(a,n) = B(a,n) $$ for some $n$; in that case $\mathrm{rdp}(a) =$ true. Now it is tempting to think statistically about this. What is the probability that $\mathrm{rdp}(a) =$ true? Probably that probability is equal to $$ 10^{-1} + 10^{-2} + 10^{-3} + \cdots = 1/9 $$ or close to it. Is that correct? However, I wonder about actual proofs rather than statistical reasoning. So I make a conjecture: There exists NO $n$ such that $$ A(\pi,n) = B(\pi,n)$$ Now I picked the number $\pi$ because we know a lot about its digits (unlike, say, $\zeta(5)$, the Euler gamma constant, etc.; in fact we are not even sure those are irrational!). For instance, we can compute the 100000th digit in base 16 without needing to store or compute all the previous ones. See https://en.m.wikipedia.org/wiki/Bailey–Borwein–Plouffe_formula So how to handle this? Or is this one of the simplest undecidable problems? Or the simplest example of computational irreducibility? (See Wolfram's book A New Kind of Science.) Is there any hope of solving these things besides brute-force search and luck? Is this THE example of the ultimate halting problem?
I believe your conjecture is open, but here are some partial results: Let $a$ be defined to have the infinitely-often-repeating-digits property (IORDP) if there are infinitely many values of $n$ such that $A(a,n) = B(a,n)$. If $a$ has IORDP, it has irrationality measure at least $2$. That's because in the repeat-digits cases we can match the first $2 \cdot n$ digits of $a$ with a fraction whose denominator only has $n$ digits (all $9$'s). This isn't very helpful since every irrational number has irrationality measure at least $2$. (However, irrationality measure is defined as an infimum so maybe it is possible to strengthen this to a strict inequality?) But, we can define the infinitely-often-twice-repeating-digits property (IOTRDP) to mean that there are infinitely many $n$ with $A(a,n) = B(a,n) = C(a,n)$ where $C$ is the third sequence of $n$ digits. If $a$ has IOTRDP, then it has irrationality measure at least $3$. So one thing we can say is that algebraic numbers like $\sqrt{2}$ do not have IOTRDP. And since $\pi$ is known to have an irrationality measure less than $8$, we can be sure that only finitely often is it the case that the first $n$ digits of $\pi$ repeat $8$ times. But this is not enough to establish that any particular number of repetitions of the initial digit sequence of $\pi$ never occurs. Also, there is a relationship between irrationality measure and the runtime analysis of digit-extraction algorithms like BBP. Although the expected number of summands for BBP to extract the $n^{th}$ nybble is $n+O(1)$, we can't rule out the existence of cases where it requires more than $7 \cdot n$ terms, i.e. the first $n$ nybbles of $\pi$ are followed by more than $6 \cdot n$ $0$'s or $f$'s. This doesn't affect the asymptotic runtime of BBP since it's a constant factor but it means maybe there are digits that take more than seven times as long as typical ones to extract.
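Empirically the conjecture is easy to probe for small $n$; a Python sketch of my own, assuming mpmath:

```python
from mpmath import mp, nstr

mp.dps = 2100                                  # working precision
digits = nstr(mp.pi, 2050).replace('.', '')[:2000]

for n in range(1, 1000):
    if digits[:n] == digits[n:2*n]:            # A(pi, n) == B(pi, n)?
        print("repeat at n =", n)
        break
else:
    print("no repeat for n < 1000")
```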
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability of winning after adding coin condition Assume that we have two games $X, Y$ with probability of winning $P(X)$ and $P(Y)$ respectively. Now I create a new game $Z$: I throw a coin and if I get heads I play $X$. Otherwise I play $Y$. What is the probability of winning in $Z$? Do we need more info on $X, Y$ in order to solve this? This might be a newbie question, I'm just so bad at probability.
Let $P(Z)$ be the probability of winning game $Z$. Then we have that $P(Z) = \frac{1}{2}P(X) + \frac{1}{2}P(Y)$. If you draw a probability tree you can visualise why this is true. If you flip a coin, you either get heads or tails, each with probability $\frac{1}{2}$. Then if you get heads, the probability of winning is $P(X)$, and if you get tails the probability of winning is $P(Y)$. Since the coin flip is independent of the outcomes of the games, we can find the probability by multiplying along the branches of the probability tree and summing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the image and a basis of the image (matrix) What's the image of the matrix? What's the basis of the image? $M=\begin{pmatrix} -1 & 1 & 1\\ -2 & -3 & 6\\ 0 & -1 & 1 \end{pmatrix}$ First I transposed the matrix: $M^{T}=\begin{pmatrix} -1 & -2 & 0\\ 1 & -3 & -1\\ 1 & 6 & 1 \end{pmatrix}$ Now we use Gauss and get zero lines. Take the first line and add it to the second: $M^{T}=\begin{pmatrix} -1 & -2 & 0\\ 0 & -5 & -1\\ 1 & 6 & 1 \end{pmatrix}$ Take the first line and add it to the third: $M^{T}=\begin{pmatrix} -1 & -2 & 0\\ 0 & -5 & -1\\ 0 & 4 & 1 \end{pmatrix}$ Multiply the second line by $4$, multiply the third line by $5$, then add the second line to the third: $M^{T}=\begin{pmatrix} -1 & -2 & 0\\ 0 & -20 & -4\\ 0 & 0 & 1 \end{pmatrix}$ Transpose back: $M=\begin{pmatrix} -1 & 0 & 0\\ -2 & -20 & 0\\ 0 & -4 & 1 \end{pmatrix}$ The image of the matrix is $\text{Im(M)}= \text{span} \left ( \left\{ \begin{pmatrix} -1\\ -2\\ 0 \end{pmatrix}, \begin{pmatrix} 0\\ -20\\ -4 \end{pmatrix},\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} \right\} \right)$ The basis of the image is $\left\{ \begin{pmatrix} -1\\ -2\\ 0 \end{pmatrix}, \begin{pmatrix} 0\\ -20\\ -4 \end{pmatrix},\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} \right\}$ Please tell me if I did everything correctly; it's very important for me to know, as I would do it like that in the exam :) I hope it's correct, and please also tell me if the notation is.
The image of a matrix is the same as its column space. To find the column space, you first find the row echelon form of the given matrix (do not transpose it). The definition of row-echelon form is: * *Rows with all zeros are below any nonzero rows *The leading entry in each nonzero row is a one *All entries below each leading "1" are zero With the matrix in row-echelon form, a basis of the image (and column space) consists of the columns of the original matrix that correspond to the pivot columns (those containing a leading 1). It is also useful to note that $\dim\operatorname{im}(M) = \dim\operatorname{col}(M) = \operatorname{rank} M$.
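The whole procedure, applied to the matrix from the question; a Python sketch of my own, assuming sympy:

```python
from sympy import Matrix

M = Matrix([[-1, 1, 1], [-2, -3, 6], [0, -1, 1]])
R, pivots = M.rref()                 # reduced row echelon form, pivot columns
print(pivots)                        # (0, 1, 2): this M happens to have full rank
print([M.col(j) for j in pivots])    # these columns of M form a basis of im(M)
```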
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$\arcsin x- \arccos x= \pi/6$, difficult conclusion I do not understand a conclusion in the following equation. To understand my problem, please read through the example steps and my questions below. Solve: $\arcsin x - \arccos x = \frac{\pi}{6}$ $\arcsin x = \arccos x + \frac{\pi}{6}$ Substitute $u$ for $\arccos x$ $\arcsin x = u + \frac{\pi}{6}$ $\sin(u + \frac{\pi}{6})=x$ Use sum identity for $\sin (A + B)$ $\sin(u + \frac{\pi}{6}) = \sin u \cos\frac{\pi}{6}+\cos u\sin\frac{\pi}{6}$ $\sin u \cos \frac{\pi}{6} + \cos u\sin\frac{\pi}{6} =x$ I understand the problem up to this point. The following conclusion I do not understand: $\sin u=\pm\sqrt{(1 - x^2)}$ How does this example come to this conclusion? What identities may have been used? I have pondered this equation for a while and I'm still flummoxed. All advice is greatly appreciated. Note: This was a textbook example problem
Hint: if $u=\arccos (x)$ then $\cos u=x$ and $\sin u=\pm\sqrt{1-\cos^2u}=\pm \sqrt{1-x^2}$. Substitute and square your equation, and you get $ \frac{3}{4}(1-x^2)=\frac{1}{4}x^2, $ which can be solved (taking care to discard improper solutions).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is the Axiom of Choice not needed when the collection of sets is finite? According to Wikipedia: Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin. In many cases such a selection can be made without invoking the axiom of choice; this is in particular the case if the number of bins is finite... How do we know that we can make a selection when the number of bins is finite? How do we even know that we can make a selection from a single bin of finite elements? Then it gives an example: To give an informal example, for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate selection, but for an infinite collection of pairs of socks (assumed to have no distinguishing features), such a selection can be obtained only by invoking the axiom of choice. But how can we even make a selection out of a single pair of socks if they don't have any distinguishing features? Is there another axiom being assumed here?
This is a rider to the excellent answers given by Clive and Dustan. To address the case of a single pair of socks: assume you are given an unordered pair of socks $\{x, y\}$ such that the socks $x$ and $y$ have no distinguishing features. Then $x$ and $y$ are the same sock. In this case, you should take the sock back to the store and claim Leibniz's law of the identity of indiscernibles as an unchallengeable reason for a refund.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 9, "answer_id": 4 }
Prove ideal of matrix ring is that set Given the set of matrices: $M=\left\{ \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \mid a,b \in \Bbb Z \right\}$. It is easily seen that $M$ is a commutative subring of $M_{2,2}(\Bbb Z)$. If $K$ is an ideal of $M$, prove that there exist $m,n \in \Bbb Z$ such that $K=\left\{ \begin{pmatrix} ma & 0 \\ 0 & nb \end{pmatrix} \mid a,b \in \Bbb Z \right\}$ My idea is: if $\begin{pmatrix} m & 0 \\ 0 & n \end{pmatrix}\in K$ then $\left\{ \begin{pmatrix} am & 0 \\ 0 & bn \end{pmatrix} \mid a,b \in \Bbb Z \right\}\subseteq K$. I've tried to find $m$ with $\mid m \mid < \mid a \mid$ for every $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \in K$, and $n$ with $\mid n \mid < \mid b \mid$ for every $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \in K$. But I'm not sure about the existence of both $m, n$ with $\mid m \mid < \mid a \mid$ and $\mid n \mid < \mid b \mid$. Besides, I know that $M\cong\Bbb Z\times \Bbb Z$, and that $A$ is a subgroup of $\Bbb Z$ if and only if $A=m\Bbb Z$. I have trouble proving it; is it possible?
Let $A = \left\{\begin{pmatrix} a & 0 \\ 0 & 0\end{pmatrix} |\; a \in \mathbb{Z}\right\}$. Clearly $A$ is an ideal of $M$, and $A$ is a ring isomorphic to $\mathbb{Z}$. Since $K$ is an ideal of $M$, we know that $K\cap A$ is also an ideal of $M$. Moreover, we see that $K\cap A$ is an ideal of $A$, and therefore $K\cap A$ isomorphic to an ideal of $\mathbb{Z}$ via the mapping $\begin{pmatrix}a & 0 \\ 0 & 0 \end{pmatrix}\mapsto a$. Since the ideals of $\mathbb{Z}$ are of the form $m\mathbb{Z}$ for some $m\in \mathbb{Z}$, it follows that $K\cap A = \left\{\begin{pmatrix}ma & 0 \\ 0 & 0\end{pmatrix}|\; a\in \mathbb{Z}\right\}$. In other words, the upper left entries of elements in $K$ are all of the form $ma$ for some $a\in\mathbb{Z}$. A similar argument shows that the lower right entries are all of the form $nb$ for some $b\in \mathbb{Z}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Convergence of the following sum Does the following sum converge? $$\sum_{n=1}^{\infty}\frac{\sin^2(n)}{n}$$ I tried the ratio test and got that $\rho=0$ which means that the series converges absolutely. However, Mathematica and Wolfram Alpha do not give a result when trying to find its convergence. Am I wrong?
\begin{align} \sum_{n = 1}^{N}\frac{\sin^{2}(n)}{n} & = \frac{1}{2}\sum_{n = 1}^{N}\frac{1}{n} - \frac{1}{2}\,\Re\sum_{n = 1}^{N}\frac{\exp(2\mathrm{i}n)}{n} \end{align} But $$\frac{1}{2}\,\Re\sum_{n = 1}^{\infty}\frac{\exp(2\mathrm{i}n)}{n} = -\,\frac{1}{2}\,\Re\ln\left(1 - \exp(2\mathrm{i})\right) = -\,\frac{1}{2} \ln\sqrt{\left[1 - \cos(2)\right]^{2} + \sin^{2}(2)} = -\,\frac{1}{4}\ln\left(2\left[1 - \cos(2)\right]\right) = -\,\frac{1}{4}\ln\left(4\sin^{2}(1)\right) = -\,\frac{1}{2}\,\ln\left(2\sin(1)\right)$$ So, $$\sum_{n = 1}^{N}\frac{\sin^{2}(n)}{n} \sim \frac{1}{2}\,H_{N} + \frac{1}{2}\,\ln\left(2\sin(1)\right) \qquad\mbox{as}\ N \to \infty$$ In particular the partial sums grow like $\frac{1}{2}\ln N$, so the series diverges.
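Numerically, the asymptotics are easy to observe; a Python sketch of my own, using numpy:

```python
import numpy as np

N = 10**6
n = np.arange(1, N + 1)
partial = np.sum(np.sin(n)**2 / n)   # partial sum of the series
H_N = np.sum(1.0 / n)                # harmonic number H_N
print(partial - 0.5*H_N)             # ~0.2603
print(0.5*np.log(2*np.sin(1)))       # ~0.2603, the predicted constant
```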
{ "language": "en", "url": "https://math.stackexchange.com/questions/2167941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A team of 11 players with at least 3 bowlers and 1 wicket keeper is to be formed from 16 players; 4 are bowlers; 2 are wicket keepers. Out of 16 players in a cricket team, 4 are bowlers and 2 are wicket keepers. A team of 11 players with at least 3 bowlers and 1 wicket keeper is to be formed. Find the number of ways the team can be selected. My solution: Choosing 3 bowlers out of 4: $\binom43$ Choosing 1 wicket keeper out of 2: $\binom21$ Choosing the remaining 7 players out of 12: $\binom{12}{7}$ Hence, total number of selections $=\binom43\binom21\binom{12}{7}=6336$ Given solution: Number of bowlers=4 Number of wicket keepers=2 Total required selection=$\binom43\binom21\binom{10}{7} +\binom44\binom21\binom{10}{6}+\binom43\binom22\binom{10}{6}+\binom44\binom22\binom{10}{5}=2472$ I feel my solution is just a simpler version of what the book gives. But why then is my answer coming out different, and where have I gone wrong in my approach?
Your solution counts some teams more than once: if a team has four bowlers or two wicket keepers, you count it four times or twice, respectively. The book, however, counts each team exactly once: each term in its sum covers one of the four possibilities for how many bowlers and wicket keepers the team contains.
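Both counts are small enough to confirm by brute force; a Python sketch of my own, with hypothetical player labels:

```python
from itertools import combinations

# 4 bowlers, 2 wicket keepers, 10 other players
players = ['B1', 'B2', 'B3', 'B4', 'W1', 'W2'] + [f'P{i}' for i in range(10)]
count = sum(1 for team in combinations(players, 11)
            if sum(p[0] == 'B' for p in team) >= 3
            and sum(p[0] == 'W' for p in team) >= 1)
print(count)   # 2472, the book's answer
```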
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Product of functions is $0$ but none of the functions is identically $0$? Given that $f(x) \cdot g(x) = 0$ for all $x$, is it true that at least one of the functions is $0$ for all $x$? The correct answer is that this doesn't necessarily hold. Can you give such an example? I have a feeling this has something to do with piecewise functions.
One continuous example: $f(x)=|x|+x, g(x)=|x|-x$. For any $x$, at least one of the functions must be $0$, but there is nothing stopping them from "sharing" the $x$-axis between them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 5 }
One implication in $f'(x) \ge 0 \iff f \ \text{is monotonically increasing}$ I am trying to understand why $f'(x) \ge 0 \iff f \ \text{is monotonically increasing}$ with the usual set of assumptions. To do this I am trying to prove the two implications. It is relatively easy to get why $\impliedby$ holds since an increasing $f$ implies$${f(x_0+h)-f(x_0) \over h} \ge 0$$ no matter what the $h$ is. The second implication $\implies$ proves to be more tricky. I am able to show it rewriting the mean value theorem as $$f(b)=f'(\xi)(b-a)+f(a)$$ and concluding that for $a,b$ satisfying $a<b$ we do get $f(b)\ge f(a)$. Is there an easier way to see $\implies$ without using the mean value theorem?
Yes, by the fundamental theorem of calculus. Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be differentiable with $f'(x) \geq 0$ for all $x \in \mathbb{R}$. Taking $x \in \mathbb{R}$ and $h \in \mathbb{R}_{\geq 0}$, \begin{align*} f(x+h) - f(x) = \int_{x}^{x+h} f'(t) dt \geq 0 \end{align*} Rearranging, $f(x + h) \geq f(x)$. Edit: this assumes integrability of $f'(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the axis of symmetry of a parabola $-{b\over 2a}$ and not ${b\over 2a}$? I'm working on a lesson plan for my students regarding completing the square for a parabola, and I've done the following: $$\begin{align}ax^2+bx+c &= a\left(x^2+{b\over a}x\right) + c \\ & = a\left(x^2+{b\over a}x+{b^2\over 4a^2}-{b^2\over 4a^2}\right)+c \\& = a\left(x^2+{b\over a}x+{b^2\over4a^2}\right) - a{b^2\over4a^2} + c \tag{$*$}\\ &= a\left(x^2-2hx+h^2\right)-ah^2+c \tag{Let $h=-{b\over 2a}$}\\ &=a(x-h)^2 + c-ah^2 \\ &= a(x-h)^2+k.\tag{Let $k=c-ah^2$} \end{align}$$ This gives us the standard form of a parabola we all know and love to pick out the vertex of the parabola, but how am I going to justify the selection of $\displaystyle h = -{b\over2a}$ over $\displaystyle h={b\over 2a}$? The selection is to give us the axis of symmetry $\displaystyle x=-{b\over2a}$, of course, but how do I explain why the negative gives us the axis of symmetry? I found a demonstration here, but for my purposes it seems like a chicken-and-egg situation: I haven't discussed the quadratic formula yet, which I will be proving the quadratic formula as part of the lecture, but I need to discuss completing the square first. So it seems kind of silly to use the quadratic formula to prove a result that will be used to prove the quadratic formula. I know that by definition, the axis of symmetry is the line where any two $x$ values of the quadratic equation $ax^2+bx+c$ have the same $y$ value. But I'm stuck with trying to find these $x$-values otherwise. Do I need to reorganize my lesson plan?
First show that the parabola $y=ax^2$ is symmetrical about the $y-$axis ($x=0$). Displacing it by $c$ vertically gives $y=ax^2+c$, which is also symmetrical about $x=0$. Shifting it horizontally by $h$ (to the right when $h>0$) gives $y=a(x-h)^2+c$, which is symmetrical about $x-h=0$, or $x=h$. From the equation given $$ax^2+bx+c=a\left(x+\frac b{2a}\right)^2+c-\frac {b^2}{4a}$$ which is symmetrical about $x+\dfrac b{2a}=0$ or $x=-\dfrac b{2a}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Can a multiple of $15$ and a multiple of $21$ differ by $1$? I know a solution to this question having to do with the fact that $\gcd(15, 21) = 3$, so the answer is no. But I can't figure out the reasoning behind this. Any help would be really appreciated!
No: you can prove that $\text{gcd}(a, a +b) = \text{gcd}(a,b)$. Therefore, we have that $\text{gcd}(a,a+1) = \text{gcd}(a,1) =1$. If we now consider multiples of $15$ and $21$, say $k \cdot 15, n \cdot 21$ with $k, n \in \mathbb{Z}$, such that $n \cdot 21 = k \cdot 15 + 1$, then we find that $3$ divides $\text{gcd}(k \cdot 15, n \cdot 21) = \text{gcd}(k \cdot 15, k \cdot 15 + 1) = 1$, which is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Is a surjection from the natural numbers enough to show that a set is countable? We have all seen how Cantor showed that the rational numbers are countable with his zig-zag method, but I want to show the same thing without the zig-zag, so here is my approach; does it work? We can list ALL the rational numbers. $\frac{1}{1}, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots $ $\frac{2}{1}, \frac{2}{2}, \frac{2}{3}, \frac{2}{4}, \dots $ $\frac{3}{1}, \frac{3}{2}, \frac{3}{3}, \frac{3}{4}, \dots $ $\frac{4}{1}, \frac{4}{2}, \frac{4}{3}, \frac{4}{4}, \dots $ $\dots$ Then we will pair the first row with some unique natural numbers. We will take the first prime number two and append fours to it. $24, 244, 2444, 24444, \dots$ Then the next row will use the next prime number three. $34, 344, 3444, 34444, \dots$ Then the next row will use the next prime number five. $54, 544, 5444, 54444, \dots$ Then the next row will use the next prime number seven. $74, 744, 7444, 74444, \dots$ And so on. Every rational number will be assigned at least one unique natural number, so we know that the cardinality of the rational numbers can't be greater than the cardinality of the natural numbers.
There are two common definitions of countability. One is more properly called "countably infinite" where $X$ is countably infinite if it can be put in bijection with $\mathbb{N}$. The other, weaker definition of countability is exactly what you said, i.e. that we can map $\mathbb{N}$ onto $X$. So the latter would encompass the former, i.e. a countably infinite set trivially can have $\mathbb{N}$ onto it. But it also holds for finite sets. A set is countably infinite if and only if it's countable and not finite. In a fairly intuitive way, if $f : \mathbb{N} \to X$ is a surjection, and $X$ is infinite, then we can construct a bijection $g : \mathbb{N} \to X$ "induced by" $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Prove that $\cos 20^{\circ} + \cos 100^{\circ} + \cos {140^{\circ}} = 0$ Assume $A = \cos 20^{\circ} + \cos 100^{\circ} + \cos 140^{\circ}$. Prove that the value of $A$ is zero. My try: $A = 2\cos 60^{\circ} \cos 40^{\circ} + \cos 140^{\circ}$, and I'm stuck here.
$\cos 20^{\circ} + \cos 100^{\circ} = \cos (60^{\circ}-40^{\circ}) + \cos (60^{\circ}+40^{\circ}) = \cos 40^{\circ}$ by using the addition formulae for $\cos$. Then $\cos 140^{\circ} = \cos (180^{\circ} - 40^{\circ}) = -\cos (40^{\circ})$. So $$\cos 20^{\circ} + \cos 100^{\circ} + \cos 140^{\circ} = \cos 40^{\circ} - \cos 40^{\circ} = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is $\mathbb{R}$ not a linear subspace of $\mathbb{R}^3$? I thought that $\mathbb{R}$ would just be any 1-dimensional line in $\mathbb{R}^3$ and that as long as you multiply or add two vectors on that line together, you'd still be in $\mathbb{R}$.
Every one-dimensional subspace of $\mathbb R^3$ is isomorphic to $\mathbb R$ as $\mathbb R$ vector spaces. For many people, isomorphism is enough to call that copy "$\mathbb R$", but one has to remember that it isn't special and that there are many other such copies. Someone might object to saying that $\mathbb R$ is a subspace if they want to say that it isn't even a subset: $\mathbb R^3$ consists of triples of real numbers while $\mathbb R$ are "1-tuples" of real numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2168812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
second statement of GAGA GAGA theorem 2 states that: If $\mathscr{F}$ and $\mathscr{G}$ are two coherent algebraic sheaves on $X$, every analytic homomorphism of $\mathscr{F}^h$ into $\mathscr{G}^h$ comes from a unique algebraic homomorphism of $\mathscr{F}$ into $\mathscr{G}$. What are the homomorphisms between $\mathscr{F}$ and $\mathscr{G}$? Can we just take global sections of the sheaf $\operatorname{Hom}_{\mathcal{O}}(\mathscr{F},\mathscr{G})$? See the main statement section of this article: https://en.wikipedia.org/wiki/Algebraic_geometry_and_analytic_geometry
If $X$ is a scheme over $\mathbb{C}$ then an algebraic homomorphism between $\mathcal{F}$ and $\mathcal{G}$ is an $\mathcal{O}_{X}$-linear homomorphism of the sheaves $\mathcal{F}$ and $\mathcal{G}$, which is given by a compatible collection of homomorphisms between $\mathcal{F}(U)$ and $\mathcal{G}(U)$ (or, if you wish, a natural transformation of the functors $\mathcal{F}$ and $\mathcal{G}$). Now there is a way of passing to the analytification of the sheaves (passing from $X$ to the complex-valued points, holomorphic functions, etc.). The GAGA theorem asks whether every homomorphism of the analytifications comes from an algebraic one, and in the case of sufficiently nice (coherent) sheaves answers this question with "yes". N.B.: GAGA is standard terminology in algebraic geometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\int_{0}^{1}f(x)\cdot g(x)dx=\int_{0}^{1}f(x)dx\cdot \int_{0}^{1}g(x)dx$ Let $f:[0,1]\rightarrow \mathbb{R} $ be a continuous function such that $\int_{0}^{1}f(x)\cdot g(x)dx=\int_{0}^{1}f(x)dx\cdot \int_{0}^{1}g(x)dx$ for any $g:[0,1]\rightarrow \mathbb{R}$ that is continuous and not differentiable. Prove that $f$ is constant. I have no idea what to do.
As @RobertIsrael pointed out, the non-differentiable continuous functions are dense in $C[0,1]$ with respect to the uniform norm, and therefore $$\int_0^1 f(x) g(x) \, dx = \int_0^1 f(x) \, dx \int_0^1 g(x) \, dx$$ holds for any continuous function $g$. If we choose $f=g$, then we find $$\int_0^1 f(x)^2 \, dx = \left( \int_0^1 f(x) \, dx \right)^2.$$ This means that $$\int_0^1 ( f(x) - a)^2 \, dx = 0$$ for $a:= \int_0^1 f(y) \, dy$. Since the integrand $x \mapsto (f(x)-a)^2$ is continuous and non-negative, this implies $$f(x)-a=0 \qquad \text{for all $x \in [0,1]$},$$ i.e. $$f(x) = a = \int_0^1 f(y) \, dy \qquad \text{for all $x \in [0,1]$}.$$ Remark: More generally, if $f$ is a continuous function and $V$ strictly convex, then $$\int_0^1 V(f(x)) \, dx = V \left( \int_0^1 f(x) \, dx \right)$$ if and only if $f$ is constant, see this question; here, we have proved this statement for $V(x) = x^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that $x<2^x$ How can we prove that given that $x\leq 2$, then $2^x>x$? I think that this seems to be intuitively correct but I don't know how to prove it. Can it be proven without calculus?
Note: Hazem Orabi's solution is really the one I prefer personally, but since I had fun messing with a more convoluted and ultimately inconclusive approach I'd just like to share it. Solution: Let's do this for rational numbers $x = p/q$ and then just assume it extends properly to real numbers. Let $p,q$ be relatively prime and let's just note that $$0 < x < 2 \Rightarrow p < 2q$$ Now the number $2^{x} = 2^{p/q}$ is the solution to the equation $y^q = 2^p$ and the idea that $2^x > x$ can be rephrased as the implication $$p < 2q \quad \text{and} \quad y^q = 2^p \quad \Rightarrow \quad qy > p \quad \text{(to be proven)}$$ The reason I phrase it like this is that we're just dealing with integral powers, the results of which are hopefully more intuitive (easier to prove) than rational or irrational powers. Let us start by observing $$(qy)^q = q^q y^q = q^q 2^p$$ and then divide our attention between two cases, $q < p$ and $p < q$ (the remaining case $p = q = 1$, i.e. $x = 1$, is immediate since $2^1 > 1$). Case 1. $p < q$. In this case we use $q^q > p^q$: $$(qy)^q = q^q 2^p > p^q 2^p > p^q $$ and therefore $$(qy)^q > p^q \Leftrightarrow qy > p$$ Case 2. $q < p$. In this case we use $2^p > 2^q$: $$(qy)^q = q^q 2^p > q^q 2^q = (2q)^q > p^q $$ (where $2q > p$ was invoked in the final step) and therefore again $$(qy)^q > p^q \Leftrightarrow qy > p$$ (The principle invoked over and over again is that integral powers preserve order, which, while not entirely trivial, is at least natural.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Closed form expression for modulo function? I wonder if there is a closed-form expression that returns the values of the modulo function for integers ($n \bmod m$)? I mean, the modulo operation is not really analytic, since one chops off the number after division. But maybe there is a continuous periodic function that is equal to mod at integer input values?
Remark that for real variables $\quad\displaystyle{n \bmod m=n-\lfloor \frac nm\rfloor\times m}$ So for $m$ fixed the function $f:\mathbb R\to \mathbb R$ with $f(x)=x\bmod m$ is piecewise continuous, since $x-m\lfloor\frac xm\rfloor$ is periodic and continuous inside each period interval. This is the blue curve below ($m=5$). You should not be surprised by that: we use it every day in trigonometry when we talk about angles $\theta$ modulo $2\pi$; the angle $\theta$ is a real value, and so is $2\pi$. The other function, for $n$ fixed, $f:\mathbb R\to \mathbb R$ with $f(x)=n\bmod x$, is not so nice. It is also piecewise continuous but not periodic anymore, and the behaviour near $0$ is dreadful. This is the red curve ($n=5$). Now you can visualize how to make $f$ (the blue curve) completely continuous: instead of the vertical jumps (which are discontinuities, drawn as bars by the plotting application), use a segment to join $y=4$ to the next $y=0$; then the function becomes piecewise linear and continuous. $\begin{cases} x\in[mk,mk+(m-1)]\qquad\qquad f(x)=x-m\lfloor\frac xm\rfloor\\ x\in[mk+(m-1),m(k+1)[\quad\ \ f(x)=(m-1)(m-x+m\lfloor\frac xm\rfloor) \end{cases}$ Should be OK, but it doesn't really respect the modulo in the interval $[mk+m-1,mk+m]$. Instead it is possible to convolute $f$ with a smooth function in the interval $[mk-\varepsilon,mk+\varepsilon]$ to get a completely $C^\infty$ function with $\lim\limits_{x\to km^-}f(x)=m$ and $\lim\limits_{x\to km^+}f(x)=0$. (A bit abusive language when talking about limits, but you understand the principle.)
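Here is the construction as a Python sketch of my own, for $m=5$:

```python
import math

m = 5

def real_mod(x):
    # x mod m for real x: piecewise continuous, jumps at multiples of m
    return x - math.floor(x / m) * m

def continuous_mod(x):
    # piecewise-linear version: identical on [mk, mk+m-1], then a straight
    # ramp from m-1 back down to 0 on [mk+m-1, mk+m]
    t = real_mod(x)
    return t if t <= m - 1 else (m - 1) * (m - t)
```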
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Why is a dot product called a dot product? I have just started learning about vectors at school, and some of the applications of vectors are still a bit confusing to me. I'm hoping that finding out the etymology behind the word dot product can help me better understand what a dot product is. In other words, it's one thing to be able to follow the dot product formula, but another to actually know what a dot product is and why it's called a dot product instead of something more self-descriptive. Any help is appreciated.
The dot product of two vectors will return a scalar. Algebraically, it is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. The name "dot product" is derived from the centered dot " · " (used to denote multiplication) that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar (rather than a vector). For more information : https://en.wikipedia.org/wiki/Dot_product
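To make the two definitions concrete, a small Python sketch of my own (assuming numpy):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 1.0])

algebraic = np.dot(a, b)                     # sum of entrywise products
angle = np.arctan2(a[1], a[0]) - np.arctan2(b[1], b[0])
geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(angle)
print(algebraic, geometric)                  # both 10.0
```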
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can every possible 'imaginary number' be expressed in terms of $i$? The domain of the function $f(x)=\sqrt{x}$ can be extended to all real numbers by introducing a new number, $i=\sqrt{-1}$. Can this be done for any function, say $\arcsin{x}$, or $\log{x}$? What is $\arcsin{2}$? Or $\log(-3)$?
Euler's formula tells us that $e^{ix} = \cos(x)+i\sin(x)$. Therefore, (in a sense) $\ln (-3) = \ln3+\pi i$, since $$e^{\ln{3}+\pi i} = e^{\ln 3}e^{\pi i} = 3(\cos \pi + i\sin \pi) = -3$$ (note: $\ln 3+k\pi i$ would have worked for any odd integer $k$) Similarly, if we let $z = \frac{\pi}{2}-i\ln(2+\sqrt 3)$, then $$\sin(z) = \sin(\pi/2)\cos(i\ln(2+\sqrt 3))+\cos(\pi/2)\sin(i\ln(2+\sqrt 3)) = \cos(i\ln(2+\sqrt 3))$$ Since $e^{ix} = \cos(x)+i\sin(x)$, we have that $\cos(x) = \frac{1}{2}(e^{ix}+e^{-ix})$, and so $$\sin(z) = \cos(i\ln(2+\sqrt 3)) = \frac{1}{2}\left(\frac{1}{2+\sqrt 3}+(2+\sqrt 3) \right) = 2$$ However, it is not always the case that functions can be "extended" in this manner. As said in a comment, for example, there is no reasonable way to define $\ln 0$ as a complex number. An issue that I've glossed over is that it seems that there are multiple ways to "choose" a value for $\ln (-3)$ and similar expressions. This is similar to the issue that there are multiple solutions to, say, $\sin(x) = 1/2$, and so $\sin^{-1}(1/2)$ is defined by making a reasonable "choice" from among the solutions to the equation.
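Python's cmath module computes with exactly these principal-value extensions, so the values above can be checked directly (a sketch of my own):

```python
import cmath

print(cmath.exp(cmath.log(3) + cmath.pi * 1j))     # (-3+0j): ln(-3) = ln 3 + pi i
z = cmath.pi/2 - 1j * cmath.log(2 + cmath.sqrt(3))
print(cmath.sin(z))                                # (2+0j)
print(cmath.asin(2))                               # principal value of arcsin(2)
```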
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Prove or disprove that closure of union of infinitely many sets is equivalent to union of infinitely many closures. Let $\overline{A},\overline{B}$ denote the closure of a set $A,B\subseteq \mathbb{R}$ respectively. Prove or disprove that \begin{align*} \bigcup_{n=1}^{\infty}\overline{A_{n}} = \overline{\bigcup_{n=1}^{\infty} A_n}\end{align*} Firstly, I can prove that $\overline{A\cup B}=\overline{A}\cup \overline{B}$. I then can use this fact to prove that the main statement is true by Mathematical Induction. However, I can also find a counter-example by letting $A_{n}=[\frac{1}{n},1].$ So, I think the method by Mathematical Induction must be wrong. But I could not see how.
Induction is a little trickier than that; I actually made the same mistake when I first learned it. Induction says that a statement is true for all $n\in \Bbb N$, but it says nothing about an infinite number of steps. For example $A\cap B$ is open for open $A$ and $B$, so by induction any finite intersection of open sets is open. Same thing with your theorem: you have shown that the closure of any finite union is the union of the closures.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2169951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding the remainder of an awful division If $x=(11+11) \times (12+12) \times (13+13) \times\cdots \times (18+18) \times (19+19)$, what will be the remainder if $x$ is divided by $100$? I tried simplifying the expression like this: $$\frac{2(11)\times2(12)\times2(13)\cdots\times2(19)}{100}$$ $$\frac{2^9(19!\div10!)}{2^2\times5^2}$$ $$\frac{2^7(19!\div10!)}{5^2} \equiv x \pmod{10}$$ Now, am I going in the right direction? I'm completely clueless. I'm only a 7th grader, so I've very limited knowledge of mathematics. A little help solving this will be really nice.
$$ x=(20+2) \times (20+4) \times (20+6) \times\cdots \times (20+16) \times (20+18) $$ After the multiplication is expanded, we get many terms. But the remainder is driven by the last two terms, namely $$ S = S_1 + S_2 = \left(2\cdot4\cdot\dots\cdot18\right) + \left(20\cdot2\cdot4\cdot\dots\cdot18\left(\frac12+\dots+\frac1{18}\right)\right), $$ as the others are divisible by $20^2$. In fact, the second term is $$ S_2 = \left(20\cdot2\cdot4\cdot\dots\cdot18\left(\frac12+\dots+\frac18+\frac1{12}+\dots+\frac1{18}\right)\right) + \left(20\cdot2\cdot4\cdot\dots\cdot18\cdot\frac1{10}\right), $$ where the first term is divisible by $20\times10$ and does not influence the remainder. And the other is just $2S_1$. So we should find $$S_1 \bmod 100 = 2^9\cdot 9!\bmod 100 = 512\cdot 9!\bmod 100 = 12 \cdot 362880\bmod 100 = $$$$= 12 \cdot 80\bmod 100 = 60.$$ Then $$S\bmod 100 = 3S_1 \bmod 100 = 80.$$ The result is verified by Python :)
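For instance, a minimal version of that Python check (the variable names are mine):

x = 1
for k in range(11, 20):   # the factors (11+11), (12+12), ..., (19+19)
    x *= 2 * k
print(x % 100)            # 80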
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Digits of irrationals I've been studying floating point arithmetic and I've read somewhere that numbers with infinitely many decimal digits and no recurring block are irrational. But since we can't know all the digits of such a number, then how did we come to the conclusion that its digits never recur? Does it have anything to do with formulae used to compute the $n$-th digit of a number? (This is a question simply out of curiosity.)
The simplest (and most ancient) way to know if a number $a$ is irrational is to explicitly show that it cannot be expressed as a quotient $\frac{n}{m}$ of two integers $n,m$. But there are numbers, such as the number $\pi+e$, for which we don't know if they are rational or irrational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Real Analysis question comparing functions I am trying to compare the functions $\lim f'_n$ and $(\lim f_n)'$ with the following sequence, $$f_n(x)=\frac{x^n}{n}$$ for $x\in [0,1]$. So I have already found $\lim f_n'$ by calculating the derivative with respect to $x$, and I got the following: $$f_n'(x) = x^{n-1}$$ So $f_n'(x)$ converges pointwise to $0$ if $x$ is in $[0,1)$ and to $1$ if $x=1$. I believe I have done this part correctly, and now I am trying to compare it to $(\lim f_n)'$, but I am not quite sure how to calculate that.
Note that $$\lim_{n\to \infty}f_n(x)=\lim_{n\to \infty}\frac{x^n}{n}=0$$ for all $x\in [0,1]$. What is the derivative of $0$? The point of the exercise is to show that the derivative of the limit of a sequence of functions (i.e, $(\lim_{n\to \infty}f_n(x))'$) is not always equal to the limit of the derivative of that sequence of functions (i.e., $\lim_{n\to \infty} f_n'(x)$). There are sufficient conditions (uniform convergence of the sequence of derivatives, $f_n'(x)$) for which they will be equal, but those are not met here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove: $\text{$n$ is even} \iff n^n\equiv 1\mod{(n+1)}$ Prove: $\text{$n$ is even} \iff n^n\equiv 1\mod{(n+1)}$ where $n\in\mathbb{N}$. First, to prove $n^n\equiv 1\mod{(n+1)}\implies\text{$n$ is even}$, I supposed $n^n\equiv 1\mod{(n+1)}$ is true. It goes like this: The supposed proposition could be rewritten in the form of: $$\forall k\in\mathbb{Z}:n^n=1+k(n+1)\tag{1}$$ Assume $n$ is odd, then, $n=2p+1$, so $n^n$ is odd too. Hence, $n^n=2q+1$ where $p,q\in\mathbb{N}$. Applying this to $(1)$ we get: \begin{align} \forall k&:2q+1=1+k(2p+2)\\ \forall k&:2q=2k(p+1)\\ \forall k&:q=k(p+1)\\ \end{align} Assume $k=-1$, then $q=-p-1\implies q+p=-1$, and because the sum of two naturals can never be negative, this conclusion is false, so we've got a contradiction. So $n$ must be even. The problem is the other way around, to prove $\text{$n$ is even} \implies n^n\equiv 1\mod{(n+1)}$. I don't even know where to start. I tried assuming $n$ is even, then $n^n$ too, but I don't know when to insert the modulo operator. Any hint or solution would be fine.
The reverse direction is false for the counterexample of $n=1$. It happens to be true however for all other values of $n\geq 2$. Note that $1^1\equiv 1\pmod{1+1}$ however $1$ is not even. To prove the reverse direction is true for all $n\geq 2$ we can do the same as when we prove the forward direction except this time we begin by assuming $n$ is odd. Begin by noticing that $n\equiv -1\pmod{n+1}$, so we have $n^n \equiv (-1)^{2k+1}\equiv -1\pmod{n+1}$ which is not equivalent to $1\pmod{n+1}$ (except in the case that $0\equiv 2\pmod{n+1}$, i.e. when $n=1$)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$\lim_{x\to0}\,(a^x+b^x-c^x)^{\frac1x}$ Given $a>b>c>0$, calculate $\displaystyle\,\,\lim_{x\to0}\,(a^x+b^x-c^x)^{\frac1x}\,$. I tried doing some algebraic manipulations and squeezing it, but couldn't get much further.
$$\lim_{x\to0}(a^x+b^x-c^x)^\frac{1}{x}=[1^\infty]=\exp\lim_{x\to 0}\frac{a^x+b^x-c^x-1}{x}$$ Now $$\frac{a^x+b^x-c^x-1}{x}=\frac{a^x-c^x}{x}+\frac{b^x-1}{x}=c^x\cdot\frac{\left(\frac{a}{c}\right)^x-1}{x}+\frac{b^x-1}{x}$$ so $$\exp \lim_{x\to 0}\left(c^x\cdot\frac{\left(\frac{a}{c}\right)^x-1}{x}+\frac{b^x-1}{x} \right)=\exp\left(\ln\frac{a}{c}+\ln b\right)=\exp\left( \ln \frac{ab}{c} \right)=\frac{ab}{c}$$
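A quick numeric sanity check of this limit (a Python sketch; the values $a>b>c>0$ are chosen arbitrarily):

a, b, c = 5.0, 3.0, 2.0
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, (a**x + b**x - c**x) ** (1 / x))   # the values approach a*b/c = 7.5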
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
What does "the sum of every third element in the $n$-th row of Pascal's triangle" mean? I am looking at the following problem. I don't want to know how it's done, I would just like to see the problem reworded in less confusing terms if possible:
Let ${n\choose k}$ be the $k^{th}$ entry of the $n^{th}$ row of the triangle. (starting at $k=0$) $S_{n,0} = {n\choose 0}+{n\choose 3}+{n\choose 6}+\cdots\\ S_{n,1} = {n\choose 1}+{n\choose 4}+{n\choose 7}+\cdots\\ S_{n,2} = {n\choose 2}+{n\choose 5}+{n\choose 8}+\cdots$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Group theory - composition vs. multiplication In my professor's lecture notes, I've noticed that sometimes the $\circ$ symbol is used when operations are combined, but at other times multiplication is referred to, for example in the definition of a homomorphic relationship: $$\phi:G\rightarrow H,\,\ \ \ g,h\in G, \ \ \ \phi(g), \phi(h)\in H$$ $$\phi(g\star h) = \phi(g)\cdot \phi(h)$$ Is there any difference between the two? To me it seems like the multiplication of $g$ and $h$ (e.g. in a symmetry group) simply represents two successive operations applied to an object, which is essentially the definition of function composition. EDIT: I was perhaps a bit unclear here; my question was not directed at the difference between $\star$ and $\cdot$, it was the difference between composition and the group operation. I'll leave it as it is now, so that BobaFret and Foobaz John's answers make sense. Here's a similar question, but that question deals with why we refer to the operation as multiplication, not whether it is analogous to composition (as far as I can tell).
The notation distinguishes between the binary operation for each group. $\star$ is the operation in $G$ and $\cdot$ is the operation in $H$. It stresses that $g \star h$ is in $G$ while $\phi(g) \cdot \phi (h)$ is in $H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Two different expansions of $\frac{z}{1-z}$ This is exercise 21 of Chapter 1 from Stein and Shakarchi's Complex Analysis. Show that for $|z|<1$ one has $$\frac{z}{1-z^2}+\frac{z^2}{1-z^4}+\cdots +\frac{z^{2^n}}{1-z^{2^{n+1}}}+\cdots =\frac{z}{1-z}$$and $$\frac{z}{1+z}+\frac{2z^2}{1+z^2}+\cdots +\frac{2^k z^{2^k}}{1+z^{2^k}}+\cdots =\frac{z}{1-z}.$$ Justify any change in the order of summation. [Hint: Use the dyadic expansion of an integer and the fact that $2^{k+1}-1=1+2+2^2+\cdots +2^k$.] I don't really know how to work this through. I know that $\frac{z}{1-z}=\sum_{n=1}^\infty z^n$ and each $n$ can be represented as a dyadic expansion, but I don't know how to progress from here. Any hints, solutions, or suggestions would be appreciated.
Since minimalrho has explained how to proceed with the given hint, I'll give an alternative method. The $k$th summand of the first series can be written $$\frac{z^{2^k}}{1 - z^{2^{k}}} - \frac{z^{2^{k+1}}}{1-z^{2^{k+1}}}$$ and the $k$th summand of the second series can be written $$\frac{2^kz^{2^k}}{1 - z^{2^k}} - \frac{2^{k+1}z^{2^{k+1}}}{1-z^{2^{k+1}}}$$ Hence, the $N$th partial sums of the two series telescope to $$\frac{z}{1 - z} - \frac{z^{2^{N+1}}}{1 - z^{2^{N+1}}}\quad \text{and}\quad \frac{z}{1 - z} - \frac{2^{N+1}z^{2^{N+1}}}{1 - z^{2^{N+1}}}$$ respectively. Using the condition $\lvert z\rvert < 1$, argue that $z^{2^{N+1}}/(1 - z^{2^{N+1}})$ and $2^{N+1}z^{2^{N+1}}/(1 - z^{2^{N+1}})$ tend to $0$ as $N\to \infty$. Then the results follow.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2170972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
How do I add the terms in the binomial expansion of $(100+2)^6$? So, I stumbled upon the following question. Using binomial theorem compute $102^6$. Now, I broke the number into 100+2. Then, applying binomial theorem $\binom {6} {0}$$100^6(1)$+$\binom {6} {1}$$100^5(2)$+.... I stumbled upon this step. How did they add the humongous numbers? I am really confused. Kindly help me clear my query.
That number is $1\; 12 \; 6\; 16 \; 24\;192\; 64$ (spaces added for emphasis; reading the digits together gives $102^6=1126162419264$). Notice a relationship with the coefficients of the powers of $10$?
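If you want to see the individual contributions explicitly, here is a short Python sketch of the expansion (the names are illustrative):

from math import comb

terms = [comb(6, k) * 100 ** (6 - k) * 2**k for k in range(7)]
print(terms)               # the seven binomial contributions: 10^12, 12*10^10, 60*10^8, ...
print(sum(terms), 102**6)  # 1126162419264, both ways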
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Show that if $ab \equiv ac$ mod $n$ and $d=(a,n)$, then $b \equiv c$ mod $\frac{n}{d}$ What I know so far: We know by the definition of congruence that $n$ divides $ab-ac$. So, there exists an integer $k$ such that $a(b-c)=kn$, and since $d=(a,n)$ we know that $a=ds$ and $n=dt$ for some integers $s$ and $t$. Then substituting for $a$ and $n$, we see that $ds(b-c)=k(dt)$.
Since $\gcd(a,n)=d$, we get $$\gcd\bigg(\frac{a}{d},\frac{n}{d}\bigg)=1.$$ Write $$r=\frac{a}{d}\quad\text{and}\quad s=\frac{n}{d}.$$ Then $$\gcd(r,s)=1\quad\text{and}\quad \frac{a}{n}=\frac{r}{s}.$$ Also, we get $$n\big|(ab-ac).$$ Hence, $$\frac{ab-ac}{n}\in\Bbb Z.$$ But, $$\begin{align} \frac{ab-ac}{n}&=\frac{a(b-c)}{n}\\ &=\frac{r(b-c)}{s}. \end{align}$$ Hence, $$\frac{r(b-c)}{s}\in\Bbb Z.$$ Thus, $$s\big|r(b-c).$$ Since $\gcd(r,s)=1$, using Euclid's Lemma, we get $$s\big|(b-c).$$ Hence, $$b \equiv c \mod s.$$ Thus, $$b \equiv c \mod{\frac{n}{d}}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluating the right limit of $\left(1+\sin\frac{2\pi}{x}\right)\sqrt{x}$ at $0$ I have a limit as $$\lim_{x\rightarrow 0^{+}} \left(1+\sin\left(\frac{2\pi}{x}\right)\right)\sqrt{x}.$$ I am planning to use the Squeeze Theorem, so I say that $-1 \leq \sin(x) \leq 1 \implies -1 \leq \sin(\frac{2\pi}{x}) \leq 1 \implies 0 \leq 1 + \sin\left(\frac{2\pi}{x}\right) \leq 2$ $\implies 0 \leq \left(1 + \sin\left(\frac{2\pi}{x}\right)\right)\sqrt{x} \leq 2\sqrt{x}$ I use the theorem, so I get $\lim_{x\rightarrow 0^{+}} \left(1+\sin\left(\frac{2\pi}{x}\right)\right)\sqrt{x}=0$. Is there any problem? Also, how can I find the limit $\lim_{x \rightarrow 0^{+}}\sin\left(\cfrac{2\pi}{x}\right)$, so that I can then evaluate the whole limit without using the Squeeze Theorem?
Is there any problem? Your derivation is fine. Using the squeeze theorem is a good idea since $$ \lim_{x \rightarrow 0^{+}}\sin\left(\cfrac{2\pi}{x}\right) $$does not exist. Observe that $$ \lim_{n\rightarrow \infty}\sin\left(\cfrac{2\pi}{1/n}\right)=\lim_{n\rightarrow \infty}\sin(2n\pi)=0 $$ and that $$ \lim_{n\rightarrow \infty}\sin\left(\cfrac{2\pi}{1/(n+\frac14)}\right)=\lim_{n\rightarrow \infty}\sin \frac{\pi}2=1\ne0 $$ proving $$ \lim_{x \rightarrow 0^{+}}\sin\left(\cfrac{2\pi}{x}\right) $$ can't exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
True or false: Every real homogeneous linear system of equations which has more than one solution has infinitely many solutions This is a task from a test exam you can find here (in German): http://docdro.id/QRtdXkM Is the following statement true or false? Every real homogeneous linear system of equations that has more than one solution has infinitely many solutions. I think the statement is true because a linear system of equations can only have either one solution, no solution, or infinitely many solutions. This statement clearly says "more than one solution $\rightarrow$ infinitely many solutions", which is true. Is it really correct like that, or is there some special case which can make this statement false?
Indeed, this is even true for non-homogeneous linear systems. Consider the system $Ax=b$, and assume $x_0$ and $x_1$ are solutions. Then for any $x_\lambda = (1-\lambda)x_0+\lambda x_1$ you get $$Ax_\lambda = A((1-\lambda)x_0 + \lambda x_1) = (1-\lambda)A x_0 + \lambda A x_1 = (1-\lambda) b + \lambda b = b$$ Therefore $x_\lambda$ is also a solution, thus you get infinitely many (indeed even uncountably many) solutions. The homogeneous system is just the special case for $b=0$. Since $x=0$ is always a solution of a homogeneous linear system, for those you can even write the condition as: If any real homogeneous linear system of equations has a non-zero solution, it has infinitely many.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Help with summation I have been working on the following summation, which is a part of some bigger problem. $\sum_{i=1}^{n-2} \frac{i}{2(n-2)}$ Now I am stuck, because none of the formulas that I know seems suitable. I have tried to solve it in many ways, but I keep getting a wrong answer compared to what the professor obtained.
Hint: If you know this formula: $$\sum_{k=1}^ak=\frac{a(a+1)}2$$ Then you know this: $$\sum_{i=1}^{n-2}\frac i{2(n-2)}=\frac1{2(n-2)}\sum_{i=1}^{n-2}i=\frac1{2(n-2)}\frac{(n-2)(n-1)}2=\frac{n-1}4$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dimension of tracefree matrix subspace. Let $P\in M_n(\Bbb R)$ be an invertible matrix. Find the dimension of the following subspace: $$L = \{ X \in M_n(\Bbb R)| tr(PX)=0\}$$ Don't know where to start. Any help?
The map $f:X \mapsto \operatorname{tr}(PX)$ is linear from $M_n(\mathbb{R})$ to $\mathbb{R}$. Your subspace $L$ is the kernel of $f$. Since the image of $f$ has dimension 1 (take for example $X=P^{-1}$ to see that $f\neq0$), you deduce that $\mathrm{dim}(L)=\mathrm{dim}(M_n(\mathbb{R}))-\mathrm{dim}(\mathbb{R})=n^2-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Notation for a placeholder function that maps to other functions I'm trying to write a very general equation to calculate some $Value$ which relies on a context-dependent function $g$. How do I concisely communicate that $g$ maps to different functions under different contexts? So far I have: \begin{gather} \text{Value}(a, b, c) = g(a,b) + 10\\ \text{where } g \mapsto\\ A(i,j) = \begin{cases} 1, & \text{if some condition}\\ 0, & \text{otherwise} \end{cases}\\ B(i,j) = \begin{cases} 1, & \text{if some condition}\\ 0, & \text{otherwise} \end{cases}\\ C(i,j) = \begin{cases} 1, & \text{if some condition}\\ 0, & \text{otherwise} \end{cases} \end{gather}
The Iverson brackets could be useful in this case. Let $P$ be a proposition. We write \begin{align*} [[P]]= \begin{cases} 1&\qquad \text{if $P$ is true}\\ 0&\qquad \text{if $P$ is false} \end{cases} \end{align*} This way we can write \begin{align*} g(a,b)=A(a,b)[[X(a,b)]]+B(a,b)[[Y(a,b)]]+C(a,b)[[Z(a,b)]] \end{align*}
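If it helps to see the same idea operationally, here is a small Python sketch (the conditions $X,Y,Z$ and values $A,B,C$ below are placeholders of my own, standing in for the asker's context-dependent pieces):

def iverson(p: bool) -> int:
    return 1 if p else 0

# Placeholder pieces, not part of the original question
A = B = C = lambda a, b: a + b
X = lambda a, b: a > b
Y = lambda a, b: a == b
Z = lambda a, b: a < b

def g(a, b):
    return (A(a, b) * iverson(X(a, b))
            + B(a, b) * iverson(Y(a, b))
            + C(a, b) * iverson(Z(a, b)))

print(g(3, 1))  # 4: only the X branch is active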
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniqueness of the remainder and quotient in an Euclidean domain Let $R$ be a Euclidean ring with Euclidean norm $N$. Let $a,b\in R\setminus\{0\}$ and let $q,r\in R$ such that $a=bq+r$ with $r=0$ or $N(r)<N(b)$. Prove that $r$ and $q$ are unique if and only if $N(a+b)\le\max\{N(a),N(b)\}$. I do not see how to relate the two. If I suppose the second: let $q'$ and $r'$ be such that $a=bq'+r'$ with $r'=0$ or $N(r')<N(b)$. It is sufficient to prove that $r'=r$, because in this case $a=bq'+r'=bq+r$ implies $bq'=bq$ and then $q'=q$. Since $bq'+r'=bq+r$, we get $r'-r=b(q-q')$. By the properties of the Euclidean norm: $$ N(r')-N(r)\le N(r)<N(b) \quad\text{and}\quad N(r-r')=N(b(q-q'))\ge N(b) $$ I got other inequalities but I do not think they are any good. For the converse I have no idea.
We can assume that $N$ satisfies the property $N(a)\le N(ab)$ for all $a,b\in R\setminus \{0\}$ (for details check this). Using the above property it's easy to prove that $N(x)=N(x')$ if $x$ and $x'$ are associates. $\Longrightarrow$) We're going to prove it by contrapositive. Let's suppose that $N(a+b)>\max\{N(a),N(b)\}$, then $a+b\neq 0$ and so we can divide $a$ by $a+b$. We have $$a=(a+b)0+a$$ with $N(a)<N(a+b)$ and $$a=(a+b)1+(-b)$$ with $N(-b)=N(b)<N(a+b)$. Therefore the quotient and the remainder are not unique. $\Longleftarrow$) As you noted, it's enough to prove that $q=q'$ or $r=r'$. If $r=0$ or $r'=0$ the result follows immediately. So, from now on we assume that $r,r'\neq 0$ and we're going to prove that $q=q'$. If $r=r'$ we're done. Otherwise, let's suppose that $q\neq q'$. Since $b(q-q')=r'-r$ we have $$N(b)\le N(b(q-q'))=N(r'-r)=N(r-r')\le \max\{N(r),N(-r')\}<N(b).$$ So we got $N(b)<N(b)$, a contradiction. Hence, it must be $q=q'$ and therefore $r=r'$. --- The above result is used to characterize the Euclidean domains having unique quotient and remainder. Check for example these papers: i) A Characterization of Polynomial Domains Over a Field by Tong-Shieng Rhai ii) Uniqueness in the Division Algorithm by M. A. Jodeit, Jr.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What are the chances? There is a class with 20 students. We pick 4 lucky students who can go to the cinema. What are the chances that Andrew can go to the cinema, but his best friend John can't go with him?
The probability that Andrew will be picked out is (as for every student) $\frac4{20}=\frac1{5}$. Under the condition that Andrew is picked out the probability that John will not be picked out is $\frac{16}{19}$. So:$$\Pr(\text{Andrew lucky}\wedge\text{John unlucky})=$$$$\Pr(\text{Andrew lucky})\Pr(\text{John unlucky}\mid\text{Andrew lucky})=\frac15\frac{16}{19}=\frac{16}{95}$$
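A quick Monte Carlo confirmation of $\frac{16}{95}\approx0.168$ (a Python sketch; student 0 plays Andrew and student 1 plays John):

import random

trials = 200_000
hits = 0
for _ in range(trials):
    picked = set(random.sample(range(20), 4))
    if 0 in picked and 1 not in picked:
        hits += 1
print(hits / trials, 16 / 95)  # both close to 0.168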
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Research papers in algebraic number theory for undergraduates I am an undergraduate. I am interested in algebraic number theory. My Background: (1) I have read the first 5 chapters of the book Number Fields by Daniel A Marcus. (2) I have read the first and third chapter of Koblitz's book on $p$-adic numbers. (3) I have read the first two chapters of Janusz's Algebraic Number Fields and also some parts of the fifth chapter on class field theory. Very often what happens with me is, I learn something properly only when I have to use it frequently while studying something more advanced. (For example, I got a better understanding of metric spaces when I took a course on functional analysis.) Keeping this in mind, I think it will be a good idea to read some papers in algebraic number theory. This will help me to consolidate whatever I have learnt so far. Also, it will help me to learn some new material. Looking at my background, can you point me to some research papers which are accessible to undergraduates ?
I think that the most accessible research-level number theory for an undergraduate with your background is where number theory intersects combinatorics. Two topics that professors I know work on, using number theory in their combinatorics research, are the arithmetic properties of finite fields and questions about Latin squares. There have been some recent breakthroughs in studying the combinatorial structure of the generalized game of Set too. Although these combinatorial problems don't look particularly algebraic, there is a lot of algebraic machinery (particularly Galois theory and group theory) going on. This isn't "algebraic number theory" in the mainstream sense of the field, but it's an algebraic approach to number theory that is going to be accessible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2171989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that $x^p\equiv 1$ (mod $p$) has only one solution. I know that said solution is $x\equiv 1$ (mod $p$). However, I'm having difficulty proving this result. So far, I've tried $x^p\equiv 1$ (mod $p$) $ \Rightarrow $ $p\mid (x^p-1) \Rightarrow p\mid(x-1)(x^{p-1} + x^{p-2} + \cdots + x + 1)$. From here, it's clear that the objective is to somehow show that $p\mid(x^{p-1} + x^{p-2} + \cdots + x + 1)$ also yields $x\equiv 1$ (mod $p$), but I've been unsuccessful in showing this after $(x^{p-1} + x^{p-2} + \cdots + x)\equiv-1$ (mod $p$).
$(x-1)^p \equiv (x^p - 1) \mod p$ by the binomial theorem (all the middle binomial coefficients $\binom pk$ for $0<k<p$ are divisible by $p$). So, if $p$ divides $(x^p - 1)$, it certainly divides $(x-1)^p$. Therefore, since $p$ is prime, Euclid's lemma gives that $p$ divides $x-1$. Hence, you get the desired result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the tangent sheaf $\mathscr{T}_{\mathbf{P}^2}$ a direct sum of line bundles? It is well known (theorem of Grothendieck) that every vector bundle on $\mathbf{P}^1$ is a direct sum of line bundles. What about $\mathbf{P}^2$? I figure the answer must be no, but is the tangent sheaf $\mathscr{T}_{\mathbf{P}^2}$ a counterexample? By the Euler sequence it shares most invariants with a direct sum of line bundles.
The total Chern class of $T_{\mathbb P^2}$ is $1 + 3h + 3h^2$, which is not a product of linear polynomials in $h$, so it cannot be the sum of two line bundles.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the number of generators of the cyclic group $\mathbb{Z}_{p^r}$ Let $p$ be a prime number. Find the number of generators of the cyclic group $\mathbb{Z}_{p^r}$, where $r \in \mathbb{Z} \geq 1$ I'm trying to understand the question and am experimenting with $p=5$ and $r=1,2,3$. When $r=1$ it generates $\mathbb{Z_5}$, where every non-zero element is a generator of the group. When $r=2$ it generates $\mathbb{Z_{10}}$. All the elements relatively prime to $10$ are $1,3,7,$ and $9$, also $4$ generators. When $r=3$ it generates $\mathbb{Z_{15}}$. All of the elements relatively prime to $15$ are $1,2,4,7,8,11,13$, and $14$, which are $8$ generators. So I'm trying to figure out how to find the number of relatively prime elements for the general group $\mathbb{Z}_{p^r}$
You have the right idea, but remember it's $p^r$, not $pr$. So for instance, for $p=5$ and $r=2$, you get $\mathbb{Z}_{25}$, not $\mathbb{Z}_{10}$. This also makes the question easier to answer: you just have to count how many integers between $1$ and $p^r$ are relatively prime to $p^r$. An integer is relatively prime to $p^r$ iff it is not divisible by $p$ (why?). To count such integers, you may find it easier to first count the integers between $1$ and $p^r$ that are divisible by $p$. I'll let you finish from here.
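Once you have your count, you can sanity-check it by brute force (a Python sketch; the test cases are arbitrary):

from math import gcd

for p, r in [(5, 1), (5, 2), (5, 3), (3, 4)]:
    n = p**r
    brute = sum(1 for k in range(1, n) if gcd(k, n) == 1)
    print(p, r, brute)  # compare with the formula you derive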
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Poisson distribution formula How is the Poisson distribution formula obtained? According to the theory, it represents, for instance, the number of cars that go through some fixed route during a certain time. But again, how is the formula of the Poisson distribution derived?
Suppose: * *The probability of an event occurring in a time interval of length $\Delta t$ is $\lambda \Delta t$, where $\Delta t$ is a small parameter (held fixed for the moment). *Events in disjoint intervals are independent *Only one event can occur in each interval. The first two assumptions are natural; the third is a bit artificial, and I merely make it to simplify the calculations. With some modification to the other assumptions, this third assumption can be dropped. In this case the number of events occurring in a time interval of length $n \Delta t$ is Binomial(n,$\lambda \Delta t$). In particular, the probability that there are exactly $k$ events is $\frac{n!}{k! (n-k)!} \lambda^k (\Delta t)^k (1-\lambda \Delta t)^{n-k}$ Now we want to fix $k$ and send $n \to \infty$ with $\Delta t = 1/n$. In the process we will get the number of events in a time interval of length $1$ in a Poisson process with intensity $\lambda$. In other words we will get $P(X=k)$ when $X$ is Poisson($\lambda$) distributed. Thus the issue is to calculate $$L=\lim_{n \to \infty} \frac{n!}{k! (n-k)!} \lambda^k n^{-k} (1-\lambda/n)^{n-k}.$$ We will write this as four factors: $$L=\lim_{n \to \infty} \frac{\lambda^k}{k!} \frac{n!}{(n-k)! n^k} (1-\lambda/n)^{-k} (1-\lambda/n)^n.$$ The first factor is already independent of $n$. The second and third factors will converge to $1$. The last factor will converge to $e^{-\lambda}$. So we will recover the familiar formula for the Poisson distribution.
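One can watch this convergence numerically (a Python sketch; $\lambda$ and $k$ are chosen arbitrarily):

from math import comb, exp, factorial

lam, k = 3.0, 2
for n in [10, 100, 1000, 10000]:
    p = lam / n                                         # Delta t = 1/n
    print(n, comb(n, k) * p**k * (1 - p) ** (n - k))
print("limit:", exp(-lam) * lam**k / factorial(k))      # the Poisson probability at k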
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Summation, capital pi properties I know that for summation, $$\sum_{i = 1}^{x} a_{i} + \sum_{i = x+1}^{n} a_{i} = \sum_{i = 1}^{n} a_{i}$$ Does the same concept apply for capital pi? Such as: $$\prod_{i = 1}^{x} a_{i} \cdot\prod_{i = x+1}^{n} a_{i} = \prod_{i = 1}^{n} a_{i}$$ I am trying to show that $$\prod_{i = 1}^{0} a_{i} = 1$$
The notation $\displaystyle\prod_{i=n}^m a_i$ can also be written as $\displaystyle\prod_{i\in\{n,..,m\}}a_i$. This notation can be generalised as $\displaystyle \prod_{i\in A}a_i$ for any set of indices, $A\subseteq\Bbb N^+$. We do want the following to hold for any disjoint sets of indices, $A, B$. $$\prod_{i\in A\cup B} a_i = (\prod_{i\in A} a_i)\cdot(\prod_{i\in B} a_i)$$ Since the empty set is disjoint from any set, this means we must have: $$\prod_{i\in A\cup\emptyset} a_i = (\prod_{i\in A} a_i)\cdot(\prod_{i\in \emptyset} a_i)$$ Because $A\cup\emptyset = A$, then clearly the empty product needs to equal $1$. $$\prod_{i\in\{\}} a_i = 1$$ PS: by the same reasoning, the empty sum equals $0$. $$\sum_{i\in\{\}} a_i = 0$$
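Incidentally, Python's built-ins follow exactly these conventions (math.prod requires Python 3.8+):

import math

print(math.prod([]))  # 1, the empty product
print(sum([]))        # 0, the empty sum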
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that linear subset is not a subspace of the Vector space $V$ I am given the following $V = \mathbb R^4$ $W = \{(w,x,y,z)\in \mathbb R^4|w+2x-4y+2 = 0\}$ I have to prove or disprove that $W$ is a subspace of $V$. Now, my linear algebra is fairly weak as I haven't taken it in almost 4 years, but for a subspace to exist I believe that: 1) The $0$ vector must be in $W$ 2) $W$ must be closed under vector addition 3) $W$ must be closed under scalar multiplication I don't think the first condition holds, because the zero vector does not satisfy the defining equation, so there is no way I can get the zero vector. Is that correct or am I doing something very wrong?
No, it is not a subspace. This is because of your first point: "$0\in W$" must hold. But $(0,0,0,0)\notin W$ because $$0+2\times 0-4\times 0+2\ne 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
How can I solve for mod to get a value? I want to solve the following part involving mod: 1 = -5(19) (mod 96) Apparently, this mod in brackets (mod 96) here is different from the mod that I know, e.g. it's not the remainder value that you get by dividing. What kind of mod is it, and how can I solve it step by step to get the result, which is 77? Update: Okay, so I am trying to find what 5 inverse mod 96 is. By following the Euclidean algorithm approach, here's what I am doing: Step 1: Find the GCD of 5 and 96: 96 = 5(19) + 1, which becomes 1 = 96 - 5(19) when rearranged. Step 2: Take mod 96 on both sides, so I am left with: 1 = -5(19) (mod 96) That's where I need to solve it to get the inverse of 5 (mod 96).
$1 \equiv -5(19) \mod 96$ is equivalent to $1 \equiv -95 \mod 96$ by multiplying out the brackets. For your question it's equivalent to going round a clock with $96$ hours on it - each time you reach the $96$th hour it goes back to $0$ and starts again. This can also be extended to negative numbers by adding multiples of $96$. Hence: $$-5(19) = -95 \equiv -95 + 96 = 1 \pmod{96},$$ which confirms the congruence. In particular, $1 = 96 - 5(19)$ shows that $5^{-1} \equiv -19 \equiv 77 \pmod{96}$, which is where the value $77$ comes from.
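In Python (3.8 or later) you can confirm this directly:

print(pow(5, -1, 96))  # 77
print(-19 % 96)        # 77, since 1 = 96 - 5*19 gives 5^(-1) ≡ -19 (mod 96)
print(5 * 77 % 96)     # 1, confirming the inverse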
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is every $T_1$ topological space homeomorphic to a subspace of a separable $T_1$ topological space (with the same cardinality)? Let $X$ be a $T_1$ topological space; then is it true that $X$ is homeomorphic to a subspace of a separable $T_1$ topological space with the same cardinality as $X$? If not, then what if we drop the same-cardinality requirement? I know that the statement is true if we don't want the separable space to be $T_1$, and I also know that the statement is false if we want the separable space to be $T_2$ because of a cardinality restriction (a separable $T_2$ space has cardinality at most $2^{c}$). But I can't figure out the $T_1$ case. Please help. Thanks in advance
We can define $\overline X = X \sqcup\{1,2,3,\ldots\}$ with the topology generated by these two flavours of open sets. *Sets of the form $U \sqcup \{n,n+1,\ldots\}$ for some $n \in \mathbb N$ and open $U \subset X$. *Sets of the form $\varnothing \sqcup W$ for any subset $W \subset \{1,2,3,\ldots\}$ Observe that, by construction, every nonempty open set of $\overline X$ contains some element of $\{1,2,3,\ldots\}$. Therefore $\{1,2,3,\ldots\}$ is dense in $\overline X$. The singletons of the subspace $X \subset \overline X$ being closed follows from the fact that $X$ is $T_1$. We can separate any two elements $m,n \in \{1,2,3,\ldots\}$ using sets of the form $\varnothing \sqcup \{n\}$ and $\varnothing \sqcup \{m\}$. We can separate any element $n \in \{1,2,3,\ldots\}$ and $x \in X$ using the sets $\varnothing \sqcup \{n\}$ and $U \sqcup \{n+1,n+2, \ldots\}$, where $U$ is any open subset of $X$ containing $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Propositional logic: Proof question (p∧q)→r⊢(p→q)→r Am I correct to assume that there is no proof for $$(p∧q)→r ⊢ (p→q)→r$$ I've spent hours trying to figure it out; by now I suspect there might have been a mistake in the exercise. I have been able to prove $$(p∧q)→r ⊢ p→(q→r)$$ (using Fitch notation), so it seems unlikely to me that $$(p∧q)→r ⊢ (p→q)→r$$ is valid as well. I'm quite new to propositional logic, so I just wanted to ask whether my reasoning is sound!
There is no proof, because the tableau of $\lnot\bigl(((p\land q)\to r)\to ((p\to q)\to r)\bigr)$ is not closed. Indeed, the valuation $p=q=r=\text{false}$ makes the premise $(p\land q)\to r$ true while the conclusion $(p\to q)\to r$ is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2172977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Closed Bounded Intervals and Uniform Continuity For this Theorem, the proof in my course notes goes like this: Suppose, for a contradiction, that $f: [a,b] \to \mathbf R$ is continuous but not uniformly continuous on $[a,b]$. Choose $\epsilon \gt 0$ so that for all $\delta \gt 0$ there exist $x,y \in [a,b]$ such that $|x - y| \le \delta$ but $|f(x) - f(y)| \gt \epsilon$. Thus for each $k \in \mathbb Z^{+}$ choose $x_{k},y_{k}\in[a,b]$ with $|x_{k}-y_{k}| \le \frac{1}{k}$ and $|f(x_{k}) -f(y_{k})|\gt \epsilon$ By the Bolzano Weierstrass Theorem, we can choose a convergent subsequence $(y_{k_{j}})_{k_{j}}$ of $(y_{k})_k$. Let $c = \lim_{j \to \infty}y_{k_{j}}$. For all $j$ we have $|x_{k_{j}} - y_{k_{j}}| \le \frac{1}{k_{j}}$, hence by the squeeze theorem we have $x_{k_{j}} \to c$. Since $f$ is continuous at $c$ and $x_{k_{j}} \to c$, $y_{k_{j}} \to c$, we have $f(x_{k_{j}}) \to f(c) ,f(y_{k_{j}}) \to f(c)$ so that $|f(x_{k_{j}}) - f(y_{k_{j}})| \le \epsilon$ giving a contradiction. I just don't understand why this proof only holds for a closed interval. Where did we use some special properties of the closed interval in the above proof? Can someone help me? Thanks!
Please see Paramanand Singh's answer for a more detailed and correct exposition. Necessary for the application of the Bolzano-Weierstraß Theorem is that the sequence in question is bounded. But this is guaranteed only because $[a,b]$ is a bounded interval, making this condition necessary too (otherwise there might be unbounded sequences, ruining the proof). So, the fact that $f$ is defined on a closed bounded interval enables us to use the Bolzano-Weierstraß Theorem here. EDIT: More importantly, the declared limit of the given subsequences might not be within the interval if it's not closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Find the minimum value of $f=x^2+y^2+x(1-y)+y(1-x)$, where $x,y$ are integers. Let $x$ and $y$ be integers such that $-2\le x\le3$ and $-8\le y\le4$. Find the minimum value of $f=x^2+y^2+x(1-y)+y(1-x)\tag{1}$ From $(1)$, I get $f=(x-y)^2+x+y$ and I don't know what I should do next. I tried using the derivative of $f$, but I get no critical point (the minimum is on the boundary $x=-2$, $x=3$, $y=-8$ or $y=4$). I found the minimum on $x=-2$ but at $y=-2.5$, which isn't an integer; the expression for $f$ on $x=-2$ is $(y+2.5)^2-4.25$. So is there no conclusion when $x,y$ are integers, or can I conclude that $y=-2,-3$ minimize $f$? (I think this approach is suitable for real numbers.) I tried substituting all the possible cases and got that the minimum is $-4$, when $(x,y)$ = $(-2,-2)$, $(-2,-3)$. All help would be appreciated.
This is one way to solve it: you can write a short MATLAB program to tabulate $f$ and find the minimum.
a = zeros(6,13);            % rows index x = -2..3, columns index y = -8..4
for x = -2:3
  for y = -8:4
    a(x+3, y+9) = (x-y)^2 + x + y;   % f(x,y)
  end
end
max(a(:))   % 116
min(a(:))   % -4
If you run this, you will have $f(x,y)=(x-y)^2+x+y$ tabulated over the whole range; as you can see, $\max f(x,y)=116$ and $\min f(x,y)=-4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
$f$ is an analytic mapping of the unit disk into itself such that $f(a) = 0$. Show $|f(z)| \leq |\frac{z-a}{1-\bar{a}z}|$ $f$ is an analytic mapping of the unit disk into itself such that $f(a) = 0$. Show $|f(z)| \leq |\frac{z-a}{1-\bar{a}z}|$ I considered $F(z) =f(\phi_{-a}(z))$ where $\phi_a(z) = \frac{z-a}{1-\bar{a}z}$ maps the unit disk to the unit disk and $\phi_{-a}$ is the inverse of $\phi_a$. I get that $F$ is analytic, mapping the unit disk to the unit disk, and $F(0) = 0$, so by the Schwarz lemma I get $|F(z)|\leq |z|$ and $|F'(0)|\leq1$. And then I get stuck. I am not sure what I can do to get the final inequality. Or maybe I am thinking about the problem in a wrong way.
You're almost there. Now, let $\xi=\phi_{-a}(z)$, so that $z=\phi_a(\xi)$. We get $F(z)=f(\phi_{-a}(z))=f(\xi)$, so the inequality $|F(z)|\le|z|$ becomes, in these terms, $$|f(\xi)|\le|\phi_{a}(\xi)|=\left|\frac{\xi-a}{1-\bar a\xi}\right|$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Complication in summation How do I deal with this summation? $$\sum_{j=n}^{2n-1} \frac{1}{n+j}$$ Do I just substitute all $j$'s with $n$? That seems to be too easy. Edit: This is part of a larger problem, which is to find the limit of the summation as $n$ tends to infinity.
Hint #1: In general, simplifying a sum of unit fractions into a closed form is not easy. Hint #2: You cannot simply substitute $j=n$: the summand $\frac{1}{n+j}$ changes with $j$, and for fixed $n$ there are exactly $n$ terms, running from $j=n$ up to $j=2n-1$. For the limit, try re-indexing the sum as $\sum_{k=2n}^{3n-1}\frac1k$ and recognizing it as a Riemann sum.
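If you want to build intuition first, a quick numerical experiment helps (a Python sketch):

for n in [10, 100, 10000, 10**6]:
    print(n, sum(1 / (n + j) for j in range(n, 2 * n)))
# the printed values stabilize, which suggests what the limit should be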
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to simplify ${(1+2i)}^6$? How to simplify ${(1+2i)}^6$ using De Moivre's formula? I have found that $r=\sqrt 5$ and $\tan x=2$ but I can't find the exact value of $x$.
$$|1+2i|=\sqrt5\;,\;\;\arg(1+2i)=\arctan 2\implies (1+2i)^6=5^3e^{6\arctan 2\cdot i} $$ and now: $$\begin{cases}&5^3=125\\{}\\ &e^{6\arctan2\cdot i}=0.936+0.352\,i\end{cases}\implies(1+2i)^6=125(0.936+0.352\,i)=117+44i$$ But this looks weird and, anyway, it is much simpler to first calculate the third power and then square...
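Either route is easy to confirm with complex arithmetic in Python (exact here, since all the intermediate products are small integers):

print((1 + 2j) ** 6)         # (117+44j)
print(((1 + 2j) ** 3) ** 2)  # the same, cubing first and then squaring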
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Are the following convergent or divergent? Are the following convergent or divergent? Justify $1.\sum_{n=2}^\infty (-1)^n\frac{1}{\ln(n)}$ $2. \left(\frac{1}{\ln(n)}\right)^\infty _{n=2}$ $3.\sum_{n=1}^\infty 2^{-n/4}\cos(\pi n/100)$ My thoughts are: $1.$ is convergent because for $\sum_{n=2}^\infty \frac{1}{\ln(n)}$, $ \lvert\frac{1}{\ln(n)}\rvert$ is decreasing as $n\to\infty$, so by the Alternating series test, $\sum_{n=2}^\infty (-1)^n\frac{1}{\ln(n)}$ converges. $2.$ is divergent because $n>\ln(n) \implies \frac{1}{n}<\frac{1}{\ln(n)}. $ Because $ \left(\frac{1}{n}\right)^\infty _{n=2}$ diverges and $\frac{1}{n}<\frac{1}{\ln(n)}$. So, by the comparison test, $ \left(\frac{1}{\ln(n)}\right)^\infty _{n=2}$ diverges. Is what I've written so far true? Most of my trouble comes from $3.$ I think it converges to $0$, but I'm not sure under what tests it does. Thanks a lot!
For the first - don't only state that $|\frac{1}{\ln(n)}|$ is decreasing - the fact that $\lim_{n\to\infty}|\frac{1}{\ln(n)}|=0$ is certainly relevant. The alternating series test states: if $|a_n|$ decreases monotonically, all $a_n$ are positive or all negative, and $\lim_{n\to\infty}a_n=0$, then the alternating series $a_0-a_1+a_2-\cdots$ converges. It is not enough to say $|a_n|$ is decreasing, nor is it to say that $|a_n|$ is monotonically decreasing and bounded. Take for example $a_n=1+\frac{1}{n}$, which is monotonically decreasing, and certainly bounded. But the alternating series $a_0-a_1+a_2-a_3+\cdots$ definitely doesn't converge - since each $a_n$ is about 1 for large $n$, this series keeps bouncing back and forth (this is obviously not a formal proof that it doesn't converge; I hope I made my point clear though). For the second, probably the sequence is meant rather than the series; $(a_n)_{2}^{\infty}$ is common notation for the sequence $(a_2,a_3,a_4,...)$, and as such, the question is, does the sequence $(\frac{1}{\ln(2)},\frac{1}{\ln(3)},\frac{1}{\ln(4)},...)$ converge? As you noted, $\ln(n)$ goes to infinity as $n$ approaches infinity - thus, $\frac{1}{\ln(n)}$ approaches $0$ as $n$ approaches infinity, and therefore the sequence converges. You only need the comparison test for the third one - simply note that $|\cos(\pi n/100)|\leq 1$. Can you now bound $\sum_{n=1}^{\infty}2^{-n/4}\cos(\pi n/100)$ by something you do know converges? As you might know, $|a+b|\leq |a|+|b|$. We can use this fact on the above series to note $$\left|\sum_{n=1}^{\infty}2^{-n/4}\cos(\pi n/100)\right|\leq\sum_{n=1}^{\infty}\left|2^{-n/4}\cos(\pi n/100)\right|$$ and we know that $|2^{-n/4}\cos(\pi n/100)|=|2^{-n/4}||\cos(\pi n/100)|\leq 2^{-n/4}$ and thus, $$\left|\sum_{n=1}^{\infty}2^{-n/4}\cos(\pi n/100)\right|\leq\sum_{n=1}^{\infty}2^{-n/4}$$ Now the only thing left for you to do is find a bound or value for $\sum_{n=1}^{\infty}2^{-n/4}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Gamma distribution and pdf Let $X \sim \mathsf{Gamma}(2,3)$ and $Y = 2X.$ Find the pdf of $Y$ at $y=13.5.$ Attempt: $f_X(x)= 2\cdot\frac{1}{9\,\Gamma(2)}\cdot x\cdot e^{-x/3}.$ Do I have to integrate now?
$X\sim \text{Gamma}(2,3)\implies f_X(x)=\frac{1}{9\Gamma(2)}xe^{-x/3}$ for $x>0$. The method of CDF yields (for $y>0$): $$\begin{align}F_Y(y)&=P(Y\leq y)\\&=P\left(X\leq\frac{y}{2}\right)\\&=\int_0^{y/2}\frac{1}{9}xe^{-x/3}dx\\&=\frac{-1}{3}\left[(x+3)e^{-x/3}\right]^{y/2}_0\\&=1-\frac{y+6}{6}e^{-y/6}\end{align}$$ Using the fact that $f_Y(y)=\frac{d}{dy}F_Y(y)$, we see that: $$f_Y(y)=\begin{cases}\frac{1}{36}ye^{-y/6}\qquad y>0\\0\qquad\qquad\text{otherwise}\end{cases}$$ Thus, $Y\sim\text{Gamma}(2,6)$. In particular, $f_Y(13.5)=\frac{13.5}{36}e^{-2.25}\approx 0.0395$. Hope this helps.
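As a cross-check, the formula agrees with a library implementation (a Python sketch, assuming scipy is available):

from math import exp
from scipy.stats import gamma

y = 13.5
print(y / 36 * exp(-y / 6))        # ~0.0395, from the pdf above
print(gamma.pdf(y, a=2, scale=6))  # the same value, from Gamma(2, 6)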
{ "language": "en", "url": "https://math.stackexchange.com/questions/2173781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Derivative of $ \sin^x(x) $ I was trying to find the derivative of $\sin^x(x)$. I followed two methods, getting two different answers, and after comparing with Wolfram Alpha, I found which one was correct and which was wrong; however, I am unable to reconcile why the wrong one is incorrect. The method which leads to the answer from Wolfram Alpha is as follows: $ y = \sin^x(x) $ $ \implies \ln(y) = x\ln(\sin(x)) $ Taking the derivative with respect to $x$ on both sides, applying the chain rule for the first term on the LHS: $ \implies \frac{d}{dx}\ln(y) = x.\frac{1}{\sin(x)}.\cos(x) + \ln(\sin(x)) $ $ \implies \frac{d}{dy}\ln(y).\frac{dy}{dx} = x.\cot(x) +\ln(\sin(x)) $ $ \implies \frac{1}{y}.\frac{dy}{dx} = x.\cot(x) + \ln(\sin(x)) $ $ \implies \frac{dy}{dx} = y.[x.\cot(x) + \ln(\sin(x))] $ $ \implies \frac{dy}{dx} = \sin^x(x).[x.\cot(x) + \ln(\sin(x))]$ Which is the same as I get from Wolfram: https://www.wolframalpha.com/input/?i=derivative+of+sinx%5Ex Now the second method I tried, which leads to a partial answer: $ y = \sin^x(x) $ $ \frac{dy}{dx} = \frac{d}{dx}\sin^x(x) $ $ \frac{dy}{dx} = \frac{d}{dx}\sin(x).\sin^{x-1}(x) $ $ \frac{dy}{dx} =\sin(x).\frac{d}{dx}\sin^{x-1}(x) + \sin^{x-1}(x)\cos(x) $ $ \frac{dy}{dx} =\sin(x).\frac{d}{dx}\sin(x).\sin^{x-2}(x) + \sin^{x-1}(x)\cos(x)$ $ \frac{dy}{dx} =\sin(x).[\sin(x).\frac{d}{dx}\sin^{x-2} + \cos(x).\sin^{x-2}(x)] + \sin^{x-1}(x)\cos(x)$ $ \frac{dy}{dx} = \sin^2(x).\frac{d}{dx}\sin^{x-2} + 2.\sin^{x-1}(x).\cos(x) $ Repeating this process $ \frac{dy}{dx} = \sin^3(x).\frac{d}{dx}\sin^{x-3} + 3.\sin^{x-1}(x).\cos(x) $ Again $ \frac{dy}{dx} = \sin^4(x).\frac{d}{dx}\sin^{x-4} + 4.\sin^{x-1}(x).\cos(x) $ until $ \frac{dy}{dx} = \sin^{x-1}(x).\frac{d}{dx}\sin^{x-(x-1)} + (x-1).\sin^{x-1}(x).\cos(x) $ $ \frac{dy}{dx} = \sin^{x-1}(x).\frac{d}{dx}\sin(x) + (x-1).\sin^{x-1}(x).\cos(x) $ $ \frac{dy}{dx} = \sin^{x-1}(x).\cos(x) + (x-1).\sin^{x-1}(x).\cos(x) $ $ \frac{dy}{dx} = x.\sin^{x-1}(x).\cos(x) $ When I compare this with the previous result: $ \frac{dy}{dx} = \sin^x(x).[x.\cot(x) + \ln(\sin(x))] $ $ \frac{dy}{dx} = x.\sin^{x-1}(x).\cos(x) + \sin^x(x)\ln(\sin(x)) $ There is this extra term $ \sin^x(x)\ln(\sin(x)) $ Now looking at both terms: $ x.\sin^{x-1}(x).\cos(x) $ and $ \sin^x(x)\ln(\sin(x)) $ I see the first one is something I would get from considering y to be a polynomial in x and finding the derivative, while the second one is something I would get if I considered y to be an exponential in x and solved that. Perhaps I am forgetting some basic calculus here, I hope someone can help me reconcile the reasoning here.
You can use the following trick: take every instance of the variable ($x$) that appears in the expression, temporarily consider the others as constants, and differentiate. The desired derivative is the sum of all contributions. $$\sin^x(x)$$ yields $$(\sin^xc)'=\log\sin c\cdot\sin^xc\text{ (derivative of an exponential)},$$ $$(\sin^cx)'=c\sin^{c-1}x\cdot\cos x\text{ (derivative of power and chain rule)}.$$ Then by summation, $$(\sin^xx)'=\log\sin x\cdot\sin^xx+x\sin^{x-1}x\cdot\cos x.$$ The problem with your second approach lies in the fact that you assume $x$ to be a constant integer so that the summation terminates. This doesn't hold and what you get is the term with the constant exponent.
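A symbolic check of this rule (a Python/SymPy sketch):

import sympy as sp

x = sp.symbols("x")
print(sp.diff(sp.sin(x) ** x, x))
# (x*cos(x)/sin(x) + log(sin(x)))*sin(x)**x, i.e. the sum of the two contributions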
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does there exist a number with $r$ repeats which forms a perfect square? Take a number $x=\overline{a_1a_2a_3\cdots a_n}$. Its repeated form is like $\overline{\underbrace{a_1a_2a_3\cdots a_n}_{\text{x}}\underbrace{a_1a_2a_3\cdots a_n}_{\text{x}}}$ And $\exists$ $s$ such that the repeated form is a perfect square, that is, $\overline{\underbrace{a_1a_2a_3\cdots a_n}_{\text{x}}\underbrace{a_1a_2a_3\cdots a_n}_{\text{x}}}=s^2$. In fact, I have done it and that's a pretty common problem. Still, here's my solution: Solution for $r=2$ (just a super gist): Let the number be $n$ with $m$ digits. Its repeated form is $n(10^m+1)$ and we have to show that there exists $k$ such that $(10^m+1)n=k^2$. Since $n$ has $m$ digits and $10^m+1$ has $m+1$ digits, $n<10^m+1$. If GCD$(10^m+1,n)=1$, we will easily get a contradiction, as $10^m+1$ has to be individually a square and that's not possible. Similarly, we will encounter a few more cases (I won't prove them as they are just easy and simple parity checking) and finally, we can claim that $10^m+1=a^2b$ and $n=bc^2$ where $a,b,c > 1$. We can check by Wolfram Alpha or simply make a program (C++, Python perhaps) and ensure that in the case of $10^{11}+1$ we get such $a,b$ where $a=11, b=826446281$. Solve it to get $4\leq c\leq10$. Let's show that it holds for infinitely many. Just take any $\text{odd }x$ and it will hold for $10^{mx}+1$, as $10^m+1\mid 10^{mx}+1$ since $x$ is odd. The value of $c$ will change. That's it!! But what if the repeat is done $r$ times, that is, the repeated number is $\overline{\underbrace{a_1a_2a_3\cdots a_n}_{\text{1st set}}\underbrace{a_1a_2a_3\cdots a_n}_{\text{2nd set}}\cdots \underbrace{a_1a_2a_3\cdots a_n}_{r\text{-th set}}}$ In that case, does there exist a perfect square?
Let $r\ge2$ be the number of repetitions. We look for $n$-digit numbers $a$ such that $$ \frac{10^{rn}-1}{10^n-1}\,a=\square. $$ Let $s$ be the square free part of $(10^{rn}-1)/(10^n-1)$, that is $$ \frac{10^{rn}-1}{10^n-1}=s\,m^2 $$ and $s$ has no square divisors (other than $1$.) Then we must have $$ a=s\,k^2\quad\text{for some integer }k. $$ In particular, $s$ must have at most $n$ digits. Doing a search when $r=2$, I found solutions with $11$, $21$, $33$, $39$, $55$ and $63$ digits. For instance $$ \frac{10^{2\times63}-1}{10^{63}-1}\times20408163265306122448979591836734693877551020408163265306122449=\\ (142857142857142857142857142857142857142857142857142857142857143)^2. $$ Edit: that number has $62$ digits. To get $63$ digits multiply it by $9$, $16$, $25$ and $36$. I have found no solutions for $3\le r\le15$.
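For reference, a rough Python sketch of the $r=2$ search described above (it assumes sympy is available for factoring; the helper name is mine):

from math import isqrt
from sympy import factorint

def squarefree_part(m):
    # product of the primes dividing m to an odd power
    s = 1
    for p, e in factorint(m).items():
        if e % 2 == 1:
            s *= p
    return s

for n in range(2, 13):
    s = squarefree_part(10**n + 1)   # (10^{2n}-1)/(10^n-1) = 10^n + 1 when r = 2
    k = 1
    while s * k * k < 10**n:
        a = s * k * k
        if a >= 10 ** (n - 1):       # a must have exactly n digits
            v = (10**n + 1) * a      # the 2n-digit repeated number "aa"
            assert isqrt(v) ** 2 == v  # a perfect square by construction
            print(n, a)
        k += 1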
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof by induction of divisibility Prove by induction that $$x_n=10^{(3n+2)} + 4(-1)^n\text{ is divisible by 52, when } n\in \mathbb N.$$ So far I did it like this: $$\text{for } n=0:$$ $$10^2+4=104$$ $$104=2\cdot 52$$ $$\text{Let's assume that:}$$ $$x_n=10^{(3n+2)} + 4(-1)^n=52k$$ $$\text{equivalently}$$ $$4(-1)^n=52k-10^{3n+2}$$ $$\text{for } n+1:$$ $$\text{after some transformations I get something like:}$$ $$52k=10^{3n+3}$$ But I'm sure that I did the last step wrong. Actually, I don't know when the proof is done; if you would help me I would be thankful.
For $n+1$ you have: $$10^{(3n+2)+3}+4(-1)^{n+1}=10^3\cdot10^{3n+2}+4(-1)^n\cdot(-1)=\\ =10^3\cdot[52k-4(-1)^n]-4(-1)^n=10^3\cdot52k-(-1)^n(4004)$$ but $4004=52\cdot 77$, then $$10^{(3n+2)+3}+4(-1)^{n+1}=52[10^3k-77(-1)^n]$$
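A one-line numerical spot check in Python:

print(all((10 ** (3 * n + 2) + 4 * (-1) ** n) % 52 == 0 for n in range(30)))  # True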
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Does zero divide zero I wanted to know: is zero divisible by zero? I've read that division by zero is not allowed in mathematics, but for instance in Apostol's Introduction to analytic number theory, it states that only $0$ divides $0$, and I've seen problems of the form $0\mid f(x)$, asking for the possible values of $x$ (integer values). This might be similar to some other topics, but they examine the problem in the set of real numbers (an analytic viewpoint), whereas mine is concerned with elementary number theory, which concerns integers.
This depends on your context. In the context of fields, like the rational numbers or the real numbers, $0$ does not divide anything, since division is given by multiplying by the multiplicative inverse (which exists from the axioms). However, in the context of the natural numbers we define the divisibility relation as follows: $$n\mid m\iff\exists k:k\cdot n=m.$$ In that case, every number divides $0$, including $0$ itself. (Note that $\exists k$ ranges over the natural numbers here!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
If the atlas of a topological manifold contains only one chart If I have an atlas $\mathcal{A}$ for a topological $m$-manifold $\mathcal{M}$ which consists of only one chart, a homeomorphism $\phi:U \subseteq \mathbb{R}^m \to \mathcal{M}$, is it then true that $\mathcal{M}$ is necessarily smooth? I think so, because there are no transition functions - hence all transition functions (vacuously) are smooth.
It doesn't make sense to ask whether a topological manifold is "smooth" or not. Smoothness is not a property that topological manifolds may or may not possess; it's an additional structure that must be added -- either a maximal atlas of smoothly compatible charts, or an equivalence class of such atlases, depending on your definition. What you can say is that if $M$ is a topological manifold that has an atlas consisting of only one chart, then it does have a smooth structure -- in fact, there's a unique smooth structure for which that chart is a smooth chart. The proof is essentially the argument you sketched -- since there are no transition functions, any atlas with only one chart is automatically smoothly compatible. This is discussed at some length in Chapter 1 of my book Introduction to Smooth Manifolds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
if $\sum_0^\infty a_n x^n = (\sum_0^\infty x^n )(\sum_0^\infty x^{2n})$ what is $a_n$? if $$\sum_0^\infty a_n x^n = (\sum_0^\infty x^n )(\sum_0^\infty x^{2n})$$ what is $a_n$? Here is my approach: let $b_n= 1$ and $c_n= x^n$. Then by forming/relating to the Cauchy product we can conclude that the product is equal to: $$\sum_0^\infty a_n x^n$$ where $a_n = \sum_{k=0}^{n} b_k c_{n-k} = \sum x^{n-k}= x^n+x^{n-1}+...+x^0 = \frac{1-x^{n+1}}{1-x}$ What do you think about my approach? I feel like there is something wrong with it. If not, could you provide a different approach? By the way, depending on the way I solve it, I keep finding different answers. I also ended up with $a_n=\frac{3+2n+(-1)^n}{4}$ without using the Cauchy product.
Using the geometric summation formula $\displaystyle\sum x^n=\dfrac1{1-x}$, $$A=\frac1{1-x}\frac1{1-x^2}=\frac1{4(1+x)}+\frac1{4(1-x)}+\frac1{2(1-x)^2}\\ =\frac1{4(1+x)}+\frac1{4(1-x)}+\frac12\frac d{dx}\frac x{1-x}$$ so that $$a_n=\frac14(-1)^n+\frac14+\frac12(n+1).$$ This is a generating function approach.
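A symbolic verification of these coefficients (a Python/SymPy sketch):

import sympy as sp

x = sp.symbols("x")
series = sp.series(1 / ((1 - x) * (1 - x**2)), x, 0, 8).removeO()
for n in range(8):
    a_n = sp.Rational((-1) ** n, 4) + sp.Rational(1, 4) + sp.Rational(n + 1, 2)
    assert series.coeff(x, n) == a_n
print("coefficients match")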
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Show that $\varphi:\mathbb{R}\to Gl_2 (\mathbb{R})$ defined by $\varphi(a)=\begin{pmatrix} 1 & a \\ 0 & 1\end{pmatrix}$ is not an isomorphism $\varphi:\mathbb{R}\to Gl_2 (\mathbb{R})$ defined by the matrix $\varphi(a)=\begin{pmatrix} 1 & a \\ 0 & 1\end{pmatrix}$ An isomorphism is a homomorphism that is also bijective. $\varphi$ is a homomorphism, so in order to show it is not an isomorphism I must show it is not one-to-one or onto, or that it does not have a two-sided inverse. I am not sure how to show these things. Any help?
Take $A:=\begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}$; then $A\in GL_2(\Bbb R)$ but $\varphi(a)\neq A$ for every $a\in\Bbb R$, thus $\varphi$ is NOT surjective, and in particular it cannot be an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2174939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Inner Product on the space $P_2$ I would just like some confirmation on a problem and a small hint when starting the next. The initial question is: Show that $⟨p,q⟩=p(0)q(0)+p(1)q(1)+p(2)q(2)$ defines an inner product on the space $P_2$ of polynomials of degree at most 2. Here's my solution. $$\langle p,q \rangle = \langle (p(0),p(1),p(2)), (q(0),q(1),q(2))\rangle = \sum_{k=0}^2 p(k)q(k) $$ For a polynomial of degree at most 2, this becomes $$ \sum_{k=0}^2 p(k)q(k) = p(0)q(0)+p(1)q(1)+p(2)q(2) $$ Therefore, this defines the inner product on the space $P_2$. The next problem is: Apply the Gram-Schmidt algorithm to the basis $\{1,x,x^2\}$ for $P_2$ to construct a basis $\{p_0,p_1,p_2 \}$ that is orthogonal with respect to the inner product of the previous problem. I'm not sure how to approach this problem. If I could get a hint on how to set this up with respect to the inner product of the previous problem, I could figure the rest out easily enough. Thank you.
How to perform the Gram-Schmidt algorithm: Our new orthonormal basis will be $\{b_1, b_2, b_3\}$. But first let's just find an orthogonal basis $\{c_1, c_2, c_3\}$ and then we'll normalize it. Start with $c_1 = 1$. OK, the first one was easy. Now for $c_2$, we want to take the second basis vector in $\{1, x, x^2\}$ and subtract out its projection onto $c_1$: $$\begin{align}c_2 &= x - \operatorname{proj}_{1}(x) \\ &= x - \frac{\langle 1,x\rangle}{\langle 1,1\rangle}1 \\ &= x - \frac{(1)(0) + (1)(1) + (1)(2)}{(1)(1)+(1)(1)+(1)(1)}1 \\ &= x - 1\end{align}$$ For $c_3$, we want to take $x^2$ and subtract out its projections onto $c_1$ and $c_2$: $$\begin{align} c_3 &= x^2 - \operatorname{proj}_{x-1}(x^2) - \operatorname{proj}_{1}(x^2) \\ &= x^2 - \frac{\langle x-1,x^2\rangle}{\langle x-1,x-1\rangle}(x-1) - \frac{\langle 1,x^2\rangle}{\langle 1,1\rangle}1 \\ &= x^2 - \frac{(-1)(0) + (0)(1) + (1)(4)}{(-1)(-1) + (0)(0) + (1)(1)}(x-1) - \frac{(1)(0) + (1)(1) + (1)(4)}{(1)(1) + (1)(1) + (1)(1)}1\\ &= x^2 -2(x-1)-\frac 53 \\ &= x^2 - 2x +\frac 13 \end{align}$$ Now just normalize each of these by dividing each $c_i$ by $\sqrt{\langle c_i,c_i\rangle}$ to get your $b_i$'s and you're done.
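If you want to double-check the arithmetic before normalizing, here is a minimal Python sketch using exact rational arithmetic (the helper name `ip` is just illustrative):

```python
from fractions import Fraction as F

def ip(p, q):
    # the inner product from the previous problem: <p,q> = p(0)q(0) + p(1)q(1) + p(2)q(2)
    return sum(p(t) * q(t) for t in (F(0), F(1), F(2)))

c1 = lambda x: F(1)
c2 = lambda x: x - 1
c3 = lambda x: x * x - 2 * x + F(1, 3)

assert ip(c1, c2) == 0 and ip(c1, c3) == 0 and ip(c2, c3) == 0
```

All three pairwise inner products vanish, so $\{1,\;x-1,\;x^2-2x+\frac13\}$ is indeed orthogonal.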
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bessel function at large order I'm trying to expand a modified Bessel function such that $$K_n \left(\sqrt{n} \left(a_0 + a_1 \frac{1}{n} + a_2 \frac{1}{n^2} + \ldots \right)\right) = A(n) \left( b_0 + b_1 \frac{1}{n} + b_2 \frac{1}{n^2} + \ldots \right) $$ in the large-$n$ limit, where $A(n)$ is some function of $n$ that contains the diverging part. Is such an expansion possible? How would one obtain the coefficients $b_0, b_1, \ldots$ in terms of $a_0, a_1, \ldots$? I would appreciate any hint or reference. Thank you.
Unless I misunderstand, there's something missing here. It looks to me like $K_n(\sqrt{n})$ blows up rather rapidly as $n \to \infty$. For example, for $n=10$ I get approximately $1413.798936$, for $n = 20$ approximately $4.796177691 \times 10^9$.
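These values are straightforward to reproduce numerically; a one-off sketch, assuming the `mpmath` library is available:

```python
import mpmath as mp

print(mp.besselk(10, mp.sqrt(10)))   # roughly 1.41e3
print(mp.besselk(20, mp.sqrt(20)))   # roughly 4.80e9
```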
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can $H_n(A) \cong H_n(X)$, where $(X,A)$ is a simply-connected $CW$ pair, always be induced by the inclusion map? Let $(X,A)$ be a simply connected $CW$ pair such that $H_n(A)\cong H_n(X)$ for some $n$. I wonder if the isomorphism can be induced by inclusion $i:A\hookrightarrow X$ in this case. Remark: Note that if this is always true, then by Hatcher's corollary 4.33, i:$A \hookrightarrow X$ is a homotopy equivalence, and thus by his corollary 0.20, $A$ is a deformation retract of $X$. This will generalize whitehead's theorem. So I feel like it isn't true. But I can't come up with an counterexample.
No. For instance, let $X=D^3\vee S^2$ and let $A=\partial D^3\subset D^3\subset X$. Then $A$ and $X$ have isomorphic homology: $A$ is homeomorphic to $S^2$, and $X$ is homotopy equivalent to $S^2$. But the inclusion $A\to X$ is nullhomotopic, and in particular induces the $0$ map on $H_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of a trigonometric inequality in $(0,\pi/2)$ I want to show $$f(x)=-512\sin\frac{4x}7+1048\sin\frac{6x}7-800\sin\frac{8x}7+216\sin\frac{10x}7>0\quad x\in(0,\pi/2)$$ This trigonometric inequality has been verified by Mathematica using the Plot command. However, I cannot give a rigorous proof of it. Any suggestion, idea, or comment is welcome, thanks!
Let $\cos\frac{2x}{7}=t$. Expanding $\cos\frac{4x}7$, $\cos\frac{6x}7$, $\cos\frac{8x}7$, $\cos\frac{10x}7$ via the Chebyshev polynomials in $t$ gives $$f'(x)=\frac{128}{7}(t-1)^2(270t^3+140t^2-131t-34).$$ For $x\in(0,\pi/2)$ we have $\frac{2x}{7}\in(0,\frac\pi7)$, so $t=\cos\frac{2x}{7}>\cos\frac{\pi}{7}\approx0.90$, and the cubic $270t^3+140t^2-131t-34$ is positive there (it is increasing for $t>\frac12$ and already positive at $t=0.9$, where it evaluates to about $158$). Hence $f'(x)>0$ and $f(x)>f(0)=0$.
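A quick numeric check of the factored derivative and of the sign claim, written as an illustrative Python sketch and nothing more:

```python
import math, random

def fprime(x):
    return (-512 * (4/7) * math.cos(4*x/7) + 1048 * (6/7) * math.cos(6*x/7)
            - 800 * (8/7) * math.cos(8*x/7) + 216 * (10/7) * math.cos(10*x/7))

def factored(x):
    t = math.cos(2*x/7)
    return (128/7) * (t - 1)**2 * (270*t**3 + 140*t**2 - 131*t - 34)

for _ in range(10_000):
    x = random.uniform(0.01, math.pi / 2)
    assert abs(fprime(x) - factored(x)) < 1e-8   # same function
    assert fprime(x) > 0                         # positive on (0, pi/2)
```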
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove floor function inequality $\sum\limits_{k=1}^{n}\frac{\{kx\}}{\lfloor kx\rfloor }<\sum\limits_{k=1}^{n}\frac{1}{2k-1}$ for $x>1$ Let $x>1$ be a real number. Show that for any positive integer $n$ $$\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor }<\sum_{k=1}^{n}\dfrac{1}{2k-1}\tag{1}$$ where $\{x\}=x-\lfloor x\rfloor$. My attempt: I tried to use induction to prove this inequality. It is clear for $n=1$, because $\{x\}<1\le \lfloor x\rfloor$. Now assume that the statement holds for $n$, in other words: $$\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor }<\sum_{k=1}^{n}\dfrac{1}{2k-1}$$ Consider the case $n+1$. We have $$\sum_{k=1}^{n+1}\dfrac{\{kx\}}{\lfloor kx\rfloor }=\sum_{k=1}^{n}\dfrac{\{kx\}}{\lfloor kx\rfloor }+\dfrac{\{(n+1)x\}}{\lfloor (n+1)x\rfloor}<\sum_{k=1}^{n}\dfrac{1}{2k-1}+\dfrac{\{(n+1)x\}}{\lfloor (n+1)x\rfloor}$$ It suffices to prove that $$\dfrac{\{(n+1)x\}}{\lfloor (n+1)x\rfloor}<\dfrac{1}{2n+1}\tag{2}$$ But David gave an example showing $(2)$ is wrong, so how can one prove $(1)$?
Some thoughts: Looking at several plots indicates that $$f_n(x):=\sum_{k=1}^n{\{kx\}\over\lfloor kx\rfloor}$$ is largest immediately to the left of $x=2$. Now for $x=2-\epsilon$ with $0<\epsilon\ll1$ (small enough that $n\epsilon<1$) one has $$\lfloor kx\rfloor=2k-1,\quad\{kx\}=1-k\epsilon\qquad(1\le k\le n)$$ and therefore $$f_n(x)=\sum_{k=1}^n{1-k\epsilon\over2k-1}<\sum_{k=1}^n{1\over2k-1}\ .$$ A plot of $f_{250}$ (figure omitted) shows the same behavior.
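The same evidence can be produced without a plot; a few illustrative lines of Python showing that $f_n$ stays below $\sum_{k\le n}\frac1{2k-1}$, with the gap smallest just left of $x=2$:

```python
import math

def f(n, x):
    return sum((k*x - math.floor(k*x)) / math.floor(k*x) for k in range(1, n + 1))

n = 250
bound = sum(1 / (2*k - 1) for k in range(1, n + 1))
for x in (1.3, 1.7, 1.9999, 2.0001, 3.5):
    print(x, bound - f(n, x))   # positive in every case; smallest just below x = 2
```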
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 0 }
Answering questions for higher dimensions This isn't a question about how to visualize higher dimensions, or how to intuit them, or how unintuitive they are. Rather, it's a hypothetical question about the kinds of questions that might be easier to answer (not necessarily to prove, but to conjecture) if we could visualize $n$ dimensions as easily as we can 2 or 3. As an example, it's not hard to imagine we would probably have a better idea about kissing numbers if we could picture higher dimensions as easily as the Euclidean plane. What other kinds of (unsolved) problems might lend themselves to analysis more easily if we could intuit $n$ dimensions?
When I asked this question, multiple commenters were surprised to learn that there don't exist three mutually orthogonal segments with lattice-point coordinates and length $\sqrt2$ in 3D space. Multidimensional geometry in general tends to be unintuitive because of how used to 2D we are. In the same vein of thought, manifolds would be a lot easier to study. A bucket-load of problems in graph theory would likely be easier, as it would grant us the ability to display extremely complex graphs without overlapping any edges. You really only need 3D to do this, but many people find that hard to conceptualize.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Real life situation for an implicit function What could be an example of a real-life situation in which an implicit function may arise? In real life, while plotting one value against another, wouldn't it be the case that the function would not be implicitly defined? This relates to the understanding of why the function is always differentiated with respect to $x$. Books and tonnes of sites just let us plug in the rule, but I haven't found a single resource that states why the implicit function is differentiated with respect to $x$. I don't know if it is just a convention.
You have a circular curve of which you want to know the diameter. You have a measuring wheel to get the arc length and a meter to get the chord. (This is a true real-life situation in road management.) With $a$ the arc length, $c$ the chord and $d$ the diameter, geometry tells you that $$\sin\frac ad=\frac cd.$$ Denoting the unknown $y:=a/d$ and the independent variable $x:=c/a$, we get the beautiful implicit equation $$\sin y=xy.$$ Note: we can express the solution as $$y=\text{sinc}^{-1}x$$ where $\text{sinc}$ denotes the cardinal sine function, but its inverse is not accepted as a valid closed-form expression. $x$ usually denotes an independent variable and $y$ the dependent one. This is why you are interested in $dy/dx$ rather than in $dx/dy$, even though these two derivatives are just the reciprocals of each other.
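For what it's worth, the implicit equation is easy to solve numerically; a sketch assuming SciPy is available (the helper name and the measurement values are made up for illustration):

```python
import numpy as np
from scipy.optimize import brentq

def y_of_x(x):
    # sin(y) = x*y has exactly one root in (0, pi) when 0 < x < 1
    return brentq(lambda y: np.sin(y) - x * y, 1e-12, np.pi)

arc, chord = 10.0, 9.5        # field measurements
y = y_of_x(chord / arc)
print("diameter:", arc / y)   # roughly 18 for these numbers
```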
{ "language": "en", "url": "https://math.stackexchange.com/questions/2175943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Integral of $\frac1x$ When integrating $\frac1x$, you get $\ln|x|+c$. Working under that assumption, the derivative of $\ln|ax|+c$ gives you the same answer as the derivative of $\ln|x|+c$, namely $\frac1x$. Given this, after differentiating and then integrating, both $\ln|2x|+c$ and $\ln|100x|+c$ lead to $\frac1x$ and then back to $\ln|x|+c$: even though the two original functions appear different, the resulting antiderivative is identical. Is there a way to tell the original functions apart after differentiating and then integrating? Are the antiderivatives found after this process incorrect?
$$\ln|ax|+C = \ln|a| + \ln|x| + C = \ln|x| + C$$ Note that $\ln|a|$ is constant, so we can simply wrap it into $C$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Compute $\int_{0}^{\infty} \frac{x^\alpha}{x^2+x+1}dx$ So I am asked to integrate: $\int_{0}^{\infty} \frac{x^{\alpha}}{x^{2}+x+1}dx$, where $-1<\alpha<1$ As this is done in the Complex Analysis course I tried to consider the function $f(z)=\frac{z^{\alpha}}{z^{2}+z+1}$ and to integrate it over the contour: $\gamma_{1}$: real line from 0 to R, which is basically our integral $\gamma_{2}$: the quarter of the circle of radius R, with $\theta$ from 0 to $\pi/2$. $\gamma_{3}$: line from $iR$ to $0.$ Now my argument is that the integral over this contour is $0$ as the are no singularities of $f$ inside the curve. I managed to prove that the integral over the quarter of the circle tends to $0$ as $R$ tends to infinity but I do not know how i can compute the integral over the imaginary line.
In order that $\frac{x^{\alpha}}{x^2+x+1}$ is integrable over $\mathbb{R}^+$ we need $-1<\text{Re}(\alpha)<1$. With such assumption $$ I(\alpha)=\int_{0}^{+\infty}\frac{x^{\alpha}}{x^2+x+1}\,dx = \int_{0}^{+\infty}\frac{x^{\alpha+1}-x^{\alpha}}{x^3-1}\,dx \tag{0}$$ can be written (by splitting the integration range and applying the sub $x\mapsto\frac{1}{x}$ in the second "half") as $$ I(\alpha)=\int_{0}^{1}\frac{x^{\alpha}-x^{\alpha+1}}{1-x^3}\,dx+\int_{0}^{1}\frac{x^{-\alpha}-x^{1-\alpha}}{1-x^3}\,dx =\int_{0}^{1}(x^{\alpha}+x^{-\alpha})\frac{\!1-x}{\,1-x^3}\,dx\tag{1}$$ By Taylor series and termwise integration, $$ I(\alpha)=\sum_{k\geq 0}\left(\frac{1}{3k+\alpha+1}-\frac{1}{3k+\alpha+2}+\frac{1}{3k-\alpha+1}-\frac{1}{3k-\alpha+2}\right)\tag{2}$$ or: $$ I(\alpha)=2\sum_{k\geq 0}\left(\frac{3k+1}{(3k+1)^2-\alpha^2}-\frac{3k+2}{(3k+2)^2-\alpha^2}\right)=2\sum_{n\geq 0}\frac{n\,\chi(n)}{n^2-\alpha^2}\tag{3}$$ with $\chi$ being the non-primitive Dirichlet character $\!\!\pmod{3}$. It also follows: $$ I(\alpha)=\frac{d}{d\alpha}\log\prod_{k\geq 0}\left(\frac{(3k+1+\alpha)(3k+2-\alpha)}{(3k+1-\alpha)(3k+2+\alpha)}\right)=\frac{d}{d\alpha}\log\frac{\sin\left(\frac{\pi}{3}(\alpha+1)\right)}{\cos\left(\frac{\pi}{6}(2\alpha+1)\right)} \tag{4}$$ by the $\Gamma$ reflection formula. By simplifying: $$ I(\alpha)=\int_{0}^{+\infty}\frac{x^\alpha\,dx}{x^2+x+1} = \color{red}{\frac{2\pi}{\sqrt{3}\left(1+2\cos\frac{2\pi\alpha}{3}\right)}}.\tag{5}$$
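The closed form in $(5)$ is also easy to corroborate by quadrature; an illustrative sketch assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

def I_num(a):
    return quad(lambda x: x**a / (x**2 + x + 1), 0, np.inf)[0]

def I_closed(a):
    return 2 * np.pi / (np.sqrt(3) * (1 + 2 * np.cos(2 * np.pi * a / 3)))

for a in (-0.5, 0.0, 0.3, 0.7):
    assert abs(I_num(a) - I_closed(a)) < 1e-6
```

At $\alpha=0$ both sides reduce to $\frac{2\pi}{3\sqrt3}$, which matches the elementary arctangent evaluation of $\int_0^\infty\frac{dx}{x^2+x+1}$.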
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solve the initial value problem $9y''(t)-2y'(t)+y(t)=t\cdot e^{-t/4}; y(0)=0, y'(0)=1$ using Laplace transformations Solve the initial value problem $9y''(t)-2y'(t)+y(t)=t\cdot e^{-t/4}; y(0)=0, y'(0)=1$ using Laplace transformations I rearranged for $\bar y(s)=\dfrac{1}{(s+1/4)^2(9s^2-2s+1)}+\dfrac{9}{(9s^2-2s+1)}$ For the first term I solved using partial fractions and obtained: $\bar f_1(s)=\dfrac{1664}{1089(s+1/4)}+\dfrac{16}{33(s+1/4)^2}-\dfrac{1664s}{121(9s^2-2s+1)}+\dfrac{2380}{1089(9s^2-2s+1)}$ When I tried finding the inverse Laplace of these, the first two terms were fine, but when I reached the 3rd and 4th terms the amount of work required felt like too much... have I made a mistake anywhere? For the second term, I obtained: $f_2(t)=e^{t/9}\cdot \dfrac{9}{2\sqrt2} \cdot \sin \left(\dfrac{2\sqrt2}{9}t\right)$ edit1: fixed partial fraction decomposition
Noting $$ \dfrac{s}{9s^2-2s+1}=\frac19\frac{s}{(s-\frac19)^2+(\frac{2\sqrt2}{9})^2},\qquad\dfrac{1}{9s^2-2s+1}=\frac19\frac{1}{(s-\frac19)^2+(\frac{2\sqrt2}{9})^2},$$ and writing $s=(s-\frac19)+\frac19$ in the numerator (the leftover constant $\frac19\cdot\frac19=\frac1{81}$ forces the sine coefficient $\frac1{18\sqrt2}$, since $\frac1{18\sqrt2}\cdot\frac{2\sqrt2}{9}=\frac1{81}$), one has \begin{eqnarray} &&L^{-1}(\dfrac{s}{9s^2-2s+1})\\ &=&\frac19L^{-1}(\frac{s}{(s-\frac19)^2+(\frac{2\sqrt2}{9})^2})\\ &=&\frac19L^{-1}(\frac{s-\frac19}{(s-\frac19)^2+(\frac{2\sqrt2}{9})^2})+\frac1{18\sqrt2}L^{-1}(\frac{\frac{2\sqrt2}{9}}{(s-\frac19)^2+(\frac{2\sqrt2}{9})^2})\\ &=&\frac19e^{\frac19t}\cos(\frac{2\sqrt2}{9}t)+\frac1{18\sqrt2}e^{\frac19t}\sin(\frac{2\sqrt2}{9}t) \end{eqnarray} and \begin{eqnarray} &&L^{-1}(\dfrac{1}{9s^2-2s+1})\\ &=&\frac1{2\sqrt2}L^{-1}(\frac{\frac{2\sqrt2}{9}}{(s-\frac19)^2+(\frac{2\sqrt2}{9})^2})\\ &=&\frac1{2\sqrt2}e^{\frac19t}\sin(\frac{2\sqrt2}{9}t) \end{eqnarray}
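Both inverse transforms can be confirmed symbolically by taking the forward transform of the claimed time-domain functions; a sketch assuming SymPy is installed:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
w = 2 * sp.sqrt(2) / 9

g = sp.exp(t/9) * (sp.cos(w*t)/9 + sp.sin(w*t)/(18*sp.sqrt(2)))
h = sp.exp(t/9) * sp.sin(w*t) / (2*sp.sqrt(2))

print(sp.simplify(sp.laplace_transform(g, t, s, noconds=True) - s/(9*s**2 - 2*s + 1)))  # 0
print(sp.simplify(sp.laplace_transform(h, t, s, noconds=True) - 1/(9*s**2 - 2*s + 1)))  # 0
```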
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all possible positive integers $n$ such that $3^{n-1} + 5^{n-1} \mid 3^n + 5^n $. Proof explanation Question: Find all possible positive integers $n$ such that $3^{n-1} + 5^{n-1} \mid 3^n + 5^n $. Solution: Let's suppose that $3^{n-1}+5^{n-1}\mid 3^n+5^n$, so there is some positive integer $k$ such that $3^n+5^n=k(3^{n-1}+5^{n-1})$. Now, if $k\ge 5$ we have $$k(3^{n-1}+5^{n-1})\ge 5(3^{n-1}+5^{n-1})=5\cdot 3^{n-1}+5\cdot 5^{n-1}>3^n+5^n.$$ This means that $k\le 4$. On the other hand, $3^n+5^n=3\cdot 3^{n-1}+5\cdot 5^{n-1}>3(3^{n-1}+5^{n-1})$, then $k\ge 4$ and thus we deduce that $k=4$. In this case we have $3^n+5^n=4(3^{n-1}+5^{n-1})$, which gives us the equation $5^{n-1}=3^{n-1}$, but if $n>1$ this equation is impossible. Hence $n=1$ The above solution is from this answer. Could someone please explain a few things: * *why was the value of $k\le4$ and $k\ge4$ picked? where did they get 4? *why does $3\cdot 3^{n-1}+5\cdot 5^{n-1}>3(3^{n-1}+5^{n-1})$ mean that $k\ge4$ ? *Why does $5\cdot 3^{n-1}+5\cdot 5^{n-1}>3^n+5^n.$ mean that $k\le4$ ?
Perhaps an easier way to look at it: the equation $$3^n+5^n=k(3^{n-1}+5^{n-1})$$ can be rewritten $$5^{n-1}(5-k)=3^{n-1}(k-3)\ .$$ Therefore $5-k$ and $k-3$ must have the same sign. They can't both be negative as then $k$ would be simultaneously less than $3$ and greater than $5$, and they clearly can't both be zero, so they must both be positive. Hence $$k<5\quad\hbox{and}\quad k>3\ ,$$ so $k=4$. This makes our second equation $$5^{n-1}=3^{n-1}\ ,$$ and this is true if and only if $n-1=0$.
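A brute-force scan (illustrative Python, trivially adaptable) agrees that $n=1$ is the only solution in any range you care to test:

```python
sols = [n for n in range(1, 200)
        if (3**n + 5**n) % (3**(n - 1) + 5**(n - 1)) == 0]
print(sols)   # [1]
```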
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit of an indeterminate form with a quadratic expression under square root The problem is: $$ \lim_{x\to0}\frac{\sqrt{1+x+x^2}-1}{x} $$ So far, I've tried substituting $t=\sqrt{1+x+x^2}$, but when $x\to0$, $t\to 1$, which doesn't seem to help. I have also tried to rationalize the numerator, and applied l'Hôpital's rule. I simply can't figure out this limit. Any help would be greatly appreciated! Thanks in advance.
The binomial expansion for a square root is simple: $$\sqrt{a+b}=(a+b)^{1/2}=a^{1/2}+\frac12b\cdot a^{-1/2}+\mathcal O(b^2\cdot a^{-3/2})$$ Thus, $$\sqrt{1+x+x^2}=1^{1/2}+\frac12(x+x^2)1^{-1/2}+\mathcal O(x^2)$$ $$\begin{align}\frac{\sqrt{1+x+x^2}-1}x&=\frac{1+\frac12(x+x^2)-1+\mathcal O(x^2)}x\\&=\frac{\frac12(x+x^2)+\mathcal O(x^2)}x\\&=\frac12(1+x)+\mathcal O(x)\\&\to\frac12(1+0)=\frac12\end{align}$$
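For an independent confirmation of the value, a one-line SymPy check:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((sp.sqrt(1 + x + x**2) - 1) / x, x, 0))   # 1/2
```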
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How can I 'prove' the derivative of this function? Consider the function $$ f: (-1, 1) \rightarrow \mathbb{R}, \hspace{15px} f(x) = \sum_{n=1}^{\infty}(-1)^{n+1} \cdot \frac{x^n}{n} $$ I am required to show that the derivative of this function is $$ f'(x) = \frac{1}{1+x} $$ I have attempted to do this using the elementary definition of a limit, as follows: $$ f'(x) = \lim_{a \rightarrow x} \left( \frac{f(a)-f(x)}{a-x} \right) = \lim_{a \rightarrow x} \left( \frac{ \sum_{n=1}^{\infty} \left[ (-1)^{n+1} \cdot \frac{a^n}{n} \right]- \sum_{n=1}^{\infty}\left[ (-1)^{n+1} \cdot \frac{x^n}{n} \right]}{a-x} \right) \\ = \lim_{a \rightarrow x} \left( \frac{ \sum_{n=1}^{\infty} \left[ (-1)^{n+1} \cdot \frac{a^n - x^n}{n} \right]}{a-x} \right) $$ but I am unsure of how to proceed from here (assuming my approach is correct). Can someone help me to show this?
HINT: if you know that $$\int_0^x\frac{\mathrm dt}{1+t}=\ln(1+x),\quad x\in (-1,1)$$ then the question is equivalent to proving that $$\sum_{k=1}^\infty(-1)^{k+1}\frac{x^k}k=\ln(1+x),\quad x\in(-1,1)$$
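Numerically the partial sums do track $\ln(1+x)$ throughout $(-1,1)$; a quick illustrative check:

```python
import math

def partial_sum(x, N):
    return sum((-1)**(k + 1) * x**k / k for k in range(1, N + 1))

for x in (-0.9, -0.5, 0.3, 0.9):
    assert abs(partial_sum(x, 5000) - math.log(1 + x)) < 1e-6
```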
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Are the fundamental groups of $X$ and $X/A$ isomorphic when $A$ is contractible? Let $X$ be a topological space, $A$ a contractible subspace of $X$, and $f : X \rightarrow X/A$ the quotient map. I want to say that $f$ always induces an isomorphism between fundamental groups, or at least that it does under certain assumptions (e.g., when $X$ is a CW complex), but I do not see how to show this explicitly except when $X$ is a finite graph and $A$ is a tree in $X$. Is there a way to prove this in general, or at least in a more general case than finite graphs?
There is a result which states: If the map $i:(A,x_0)\hookrightarrow (X,x_0)$ is a cofibration, and $A$ is contractible then the projection $p:(X,x_0)\to (X/A,\ast)$ is a homotopy equivalence. In particular, if $(X,A,x_0)$ is a relative CW-complex, then the inclusion $i:(A,x_0)\hookrightarrow (X,x_0)$ is a cofibration. Since homotopy equivalences induce isomorphisms on fundamental groups, the induced map $$p_*:\pi_1(X,x_0)\to\pi_1(X/A,\ast)$$ is an isomorphism in this case. To show this, let $$H:A\times I\to A,\quad H(a,0)=id_A(a)=a,\quad H(a,1)=x_0$$ be a homotopy giving $A\simeq\ast$. Since $i$ is a cofibration, this can be extended to a homotopy $K:X\times I\to X$ satisfying $$K(x,0)=id_X(x)=x,\quad K(a,1) = H(a,1)=x_0.$$ By the universal property of quotients, the map $K|_{X\times\{1\}}:X\to X$ factors through a map $k:X/A\to X$ such that $K|_{X\times\{1\}}=k\circ p$. Hence, $k\circ p\simeq id_X$. Next, as $p\circ K|_{A\times I}=\ast$, $K$ factors through a homotopy $\overline{K}:X/A\times I\to X/A$ such that $\overline{K}\circ(p\times id_I)=p\circ K$. It's not hard to verify that $\overline{K}$ gives $id_{X/A}\simeq p\circ k$. For a reference, everything presented above is in Switzer's Algebraic Topology, chapter 6.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Prove that (X,d) is complete Let $X=\mathbb{R}$ and $d(x,y)=\min(1, |x-y|)$. I am trying to prove that this metric space is complete. I know that means that every Cauchy sequence converges. When $|x-y| < 1$, this is just the standard metric on $\mathbb{R}$, which I know is complete. I'm a bit confused about the case $d(x,y)=1$, though. Would this mean that such Cauchy sequences would have to be constant?
First, some intuition: Convergence is a property that only cares about "really big" values of $N$. If you change the behavior of the beginning of a sequence, that's never going to change the convergence behavior. Similarly, it only cares about "really small" values of $\epsilon$. Your metric only differs from the usual metric for "largish" (specifically, not arbitrarily small) values of $\epsilon$, so it shouldn't have any effect. Exactly the same sequences will converge, and they'll converge to exactly the same values as in the usual metric. Now to prove this fact: Take some Cauchy sequence, $a_k$. We wish to prove that this sequence converges. It follows from the definition of $d$ that $d(x,y)\leq |x-y|$ holds for all $x,y$. Note also that $a_k$ is Cauchy for the usual metric as well: for $\epsilon<1$, $d(x,y)<\epsilon$ forces $|x-y|=d(x,y)<\epsilon$. Considering $a_k$ in the metric space with the usual notion of distance, we therefore know it has some limit, $L$, since $\mathbb{R}$ is complete. In order to show that $a_k\to L$ under the new metric, we need to show that $$\forall\epsilon>0,\exists N\text{ such that }n>N\Rightarrow d(a_n,L)<\epsilon$$ We already know that $$\forall\epsilon>0,\exists N\text{ such that }n>N\Rightarrow |a_n-L|<\epsilon$$ and so we can just invoke the fact that $d(x,y)\leq |x-y|$ to find out that $$\forall\epsilon>0,\exists N\text{ such that }n>N\Rightarrow d(a_n,L)\leq|a_n-L|<\epsilon$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does factoring out the eigenvector result in the identity scaled by the eigenvalue? In deriving a method to find the eigenvalues (and corresponding eigenvectors) of a linear mapping, we start with the definition: $$Av = \lambda v$$ $$Av - \lambda v = 0$$ $$(A - \lambda I)v = 0.$$ I am confused about how to arrive at the third step. Whenever I've seen this, the author seems to take this fact for granted and not explain its significance. My question is $\textbf{twofold}$, and perhaps the second will follow easily from an answer to the first. 1) What definitions lead to the result on the third line? 2) What is the geometric intuition for what happens to the linear mapping when we subtract the eigenvalue from the $i$th entry of each $i$th column vector (that is, from each diagonal entry)? Thanks in advance.
Say $V$ is a vector space over a field $\mathbb F$ and $A, B : V \to V$ are linear maps. Then, for any $\lambda,\mu \in \mathbb F$, the linear map $\lambda A + \mu B : V \to V$ is defined by $$(\lambda A + \mu B) (v) = \lambda A(v) + \mu B(v)$$ for all $v \in V$. In particular, taking $B=I$ (the identity map) and $\mu=-\lambda$ gives $Av-\lambda v=Av-\lambda Iv=(A-\lambda I)v$, which is exactly the step from the second line to the third.
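Concretely, on an eigenvector the map $A-\lambda I$ produces the zero vector, which is what the third line asserts; a small NumPy sketch with illustrative values:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)        # eigenvalues 3 and 1
v = V[:, 0]

print(np.allclose(A @ v, lam[0] * v))                 # True: A v = lambda v
print(np.allclose((A - lam[0] * np.eye(2)) @ v, 0))   # True: (A - lambda I) v = 0
```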
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Prove the following: For $n \ge 4$, $n^2 \le 2^n$ I have been asked to prove the following: for $n \ge 4$, $n^2 \le 2^n$. I will argue by induction on the statement $P(n)$: for $n \ge 4$, $n^2 \le 2^n$. First, consider the base case $P(4)$: $16 \le 16$, which is true, so we assume $P(n)$ holds and consider $P(n+1)$. $2^{n+1} = 2^n \cdot 2$. We know that $2^n \cdot 2 \ge 2^n$, and from our induction hypothesis we also know that $n^2 \le 2^n$, so we know that $2^n \cdot 2 \ge 2^n \ge n^2$. This is the part that makes me a bit uncomfortable and I think is wrong: $(n+1)^2 \ge 2^n$ by our induction hypothesis and the definition of inequality. Then, also by the definition of inequality: $2^n \cdot 2 \ge (n+1)^2 \ge 2^n \ge n^2$. Thus, we have completed our induction step and shown that $2^{n+1} \ge (n+1)^2$. I am worried about my jump that $(n+1)^2 \ge 2^n$. Do I need some kind of subproof here to prove this relation? Any help would be appreciated!
Hint For $n\ge 4$ you know that $(n+1)/n = 1 + 1/n \le 5/4$, so $(n+1)^2/n^2 \le 25/16 \le 2$, and hence $(n+1)^2 \le 2n^2 \le 2\cdot 2^n = 2^{n+1}$ by the induction hypothesis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2176978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Integral inequality possibly related to probability theory Let $f$ be Riemann integrable such that $\int_a^b f(t) \ dt = 1$ and $f \geq 0$ on $[a,b]$. If $\sigma \in C^2$ is convex, show $$\sigma\left(\int_a^b tf(t) \ dt\right)\leq\int_a^b f(t)\sigma(t) \ dt $$ and, in addition, discuss when equality holds given the tighter condition $f>0$. I'm assuming this is related to probability theory, since $f$ meets the conditions to be a probability density function. I was able to prove this using Jensen's inequality; however, my professor insists that isn't used. I tried a few other approaches, like fixing one variable and differentiating with respect to the other, but this doesn't seem to lead anywhere. Please help!
Of course it's just Jensen's inequality. Since you don't want to use that, here is a direct proof by integrating by parts several times. By integration by parts, $$\int_a^b t f(t) \, \mathrm{d}t = b - \int_a^b F(t) \, \mathrm{d}t$$ where $F(t) = \int_a^t f(s) \, \mathrm{d}s.$ Consider the function $$g(x) = \sigma \Big( x - \int_a^x F(t) \, \mathrm{d}t \Big).$$ It is differentiable with $$g'(x) = (1 - F(x)) \sigma'\Big(x - \int_a^x F(t) \, \mathrm{d}t\Big).$$ Since $\sigma'$ is monotone increasing, $F(t) \ge 0$ (so the argument of $\sigma'$ is at most $x$), and $1-F(x)\ge 0$, we can bound $$g'(x) \le (1 - F(x)) \sigma'(x)$$ and integrating that inequality over $[a,b]$ gives $$\sigma \Big( \int_a^b t f(t) \, \mathrm{d}t \Big) \le \sigma(b) - \int_a^b F(x) \sigma'(x) \, \mathrm{d}x = \int_a^b f(x) \sigma(x) \, \mathrm{d}x,$$ using integration by parts again.
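The inequality itself is easy to spot-check numerically; an illustrative sketch (assuming SciPy) with the density $f(t)=2t$ on $[0,1]$ and the convex function $\sigma(t)=e^t$:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
f = lambda t: 2.0 * t      # f >= 0 and integrates to 1 on [0, 1]
sigma = np.exp             # convex and C^2

mean = quad(lambda t: t * f(t), a, b)[0]            # 2/3
lhs = sigma(mean)                                   # e^(2/3), about 1.948
rhs = quad(lambda t: f(t) * sigma(t), a, b)[0]      # exactly 2 here
assert lhs <= rhs
```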
{ "language": "en", "url": "https://math.stackexchange.com/questions/2177151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $\{ x_n \}_{n=1}^{\infty}$ such that $x_{n+1}=x_n-x_n^3$ and $0<x_1<1$ Let $\{ x_n \}_{n=1}^{\infty}$ be such that $0<x_1<1$ and $x_{n+1}=x_n-x_n^3$ for all $n\in \mathbb{N}$. Prove: 1. $\lim_{n \rightarrow \infty} x_n=0$; 2. Calculate $\lim_{n \rightarrow \infty} nx_n^2$; 3. Let $f$ be differentiable in $\mathbb{R}$ such that $f(n)=x_n$ for all $n\in \mathbb{N}$. Prove that if $\lim_{x \rightarrow \infty} f'(x)$ exists, then it equals $0$. I proved the first, but I am struggling with the next two. My intuition tells me that $\lim_{n \rightarrow \infty} nx_n^2=0$, and I tried to squeeze it but got stuck. For the third, I tried a proof by contradiction, and I managed to rule out the case where the limit is greater than $0$, but couldn't get further. Any help appreciated.
$\quad$ $\bullet$ For the first one, we recall that for $t \in ]0,1[$ we have $t > t^3 > 0$. So from $x_1 \in ]0,1[$ and $x_{n+1} = x_n - x_n^3$ for all $n \geq 1$, we can prove that $ \forall n \in \mathbb{N}, x_n > 0$ and $$x_1 > x_2 > x_3 > \ldots > 0\ .$$ Hence $\{x_n\}_{n \in \mathbb{N}}$ converges. Suppose that $\lim\limits_{n \to + \infty} x_n = x$; then $x = x - x^3$, and then $x = 0$. $\quad$ $\bullet$ For the second and third one, you can proceed as in Sangchul Lee's answer.
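For part 2 (deferred above), one can at least guess the answer by simulation before proving $\lim_{n\to\infty} nx_n^2=\frac12$ (for instance via Stolz–Cesàro applied to $1/x_n^2$); an illustrative Python run:

```python
x = 0.5
for n in range(1, 2_000_001):
    x -= x**3
    if n % 500_000 == 0:
        print(n, n * x * x)   # creeps up toward 0.5
```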
{ "language": "en", "url": "https://math.stackexchange.com/questions/2177262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the number of trailing zeroes in the given integer Problem Statement:- The number of zeroes at the end of the integer $$100!-101!+\ldots-109!+110!$$ I am having a bit of trouble thinking about how to proceed. A little push in the right direction would be appreciated. And if you are posting a full solution, do use the spoiler tag, as sometimes I can't stop myself from seeing the whole solution and lose the chance of thinking it through by myself with just a push in the right direction from you guys. Also, I don't know whether I am using the right tag; feel free to correct it if it's wrong.
The number of zero digits at the end of $n!$ can be found by looking at the factors, and decomposing them into $2$ and $5$ factors. You need one $2$ and one $5$ to produce a trailing zero. How to combine this with that alternating sum, I have no idea yet. BTW: I explained it to my friend Ruby and she tells me the sum is 15739381947081460468710896569033260448048487750802968746988405111340773775128510600810783940010370922688077274739713895911222137779156961431310006359162880000000000000000000000000
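Rather than trusting a hand transcription, the whole computation fits in a few lines of Python (exact big-integer arithmetic makes it painless):

```python
import math

total = sum((-1)**k * math.factorial(100 + k) for k in range(11))  # 100! - 101! + ... + 110!
digits = str(abs(total))
print(len(digits) - len(digits.rstrip('0')))   # the number of trailing zeros
```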
{ "language": "en", "url": "https://math.stackexchange.com/questions/2177408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }