Find the limit of the vector function $\lim_{t\to\infty} \Big(te^{-t},\frac{t^3+t}{2t^3-1},t\sin(\frac{1}{t})\Big)$ a) $\lim_{t\to\infty} te^{-t} = \infty \times 0$ $\lim_{t\to\infty} 1\cdot e^{-t}+t\cdot(-e^{-t}) = 0+(0\times\infty)$ = undefined, and repeating l'Hospital's rule will render the same result over and over. b) $\lim_{t\to\infty}\frac{t^3+t}{2t^3-1} = \frac{\infty}{\infty}$ $\lim_{t\to\infty}6=6$ (simplifying using l'Hospital's rule) c) $\lim_{t\to\infty} t\sin\Big(\frac{1}{t}\Big) = \infty \times 0$ $\lim_{t\to\infty} 1\cdot\sin\Big(\frac{1}{t}\Big)+\cos\Big(\frac{1}{t}\Big)\frac{-t}{t^2}= \lim_{t\to\infty} \sin\Big(\frac{1}{\infty}\Big)+\cos\Big(\frac{1}{\infty}\Big)\frac{-1}{\infty} = 0+(1\times0)=0$ $\lim_{t\to\infty}\Big(te^{-t},\frac{t^3+t}{2t^3-1},t\sin(\frac{1}{t})\Big) = \Big(idk,6,0\Big)$ Are b and c correct? I am not sure what to do about a, and this is an even-numbered exercise in my book, so I don't know what the answer is.
(a) $$\lim_{t\to \infty}te^{-t}=\lim_{t\to \infty}\frac{t}{e^t}=\lim_{t\to \infty}\frac{1}{e^t}=0$$ (b) $$\lim_{t\to \infty}\frac{t^3+t}{2t^3-1}=\lim_{t\to \infty}\frac{1+1/t^2}{2-1/t^3}=\frac{1}{2}$$ (c) $$\lim_{t\to \infty}t\sin \frac{1}{t}=\lim_{t\to \infty}\frac{\sin\frac{1}{t}}{\frac{1}{t}}=\lim_{\frac{1}{t}\to 0}\frac{\sin\frac{1}{t}}{\frac{1}{t}}=1$$ (a) Using L'Hopital's rule in the second step. (b) No need to use L'Hopital's rule, but if you want, it will be $$\lim_{t\to \infty}\frac{t^3+t}{2t^3-1}=\lim_{t\to \infty}\frac{3t^2+1}{6t^2}=\lim_{t\to\infty}\frac{6t}{12t}=\lim_{t\to\infty}\frac{6}{12}=\frac{1}{2}$$ (c) The last step is well known. But if you want L'Hopital's rule, it is $$\lim_{x\to 0}\frac{\sin x}{x}=\lim_{x\to 0}\frac{\cos x}{1}=1$$ where $x=1/t$.
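As a quick sanity check (my addition, not part of the original argument; it assumes Python with sympy is available), all three component limits can be confirmed symbolically:

import sympy as sp

t = sp.symbols('t', positive=True)
print(sp.limit(t * sp.exp(-t), t, sp.oo))             # 0
print(sp.limit((t**3 + t) / (2*t**3 - 1), t, sp.oo))  # 1/2
print(sp.limit(t * sp.sin(1/t), t, sp.oo))            # 1

So the vector limit is $\Big(0,\frac{1}{2},1\Big)$.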
{ "language": "en", "url": "https://math.stackexchange.com/questions/1847362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the coefficient of $x^{50}$ in $\frac{(x-3)}{(x^2-3x+2)}$ First, the given answer is: $$-2 + (\frac{1}{2})^{51}$$ I have tried solving the problem as such: $$[x^{50}]\frac{(x-3)}{(x^2-3x+2)} = [x^{50}]\frac{2}{x-1} + [x^{50}]\frac{-1}{x-2}$$ $$ = 2[x^{50}](x-1)^{-1} - [x^{50}](x-2)^{-1}$$ $$=2\binom{-1}{50}-\binom{-1}{50} = \binom{-1}{50} = \binom{50}{50} = 1$$ which is different from the correct answer. Can anyone tell me what I'm doing wrong here? Edit: As Did mentions in the comments, $$2[x^{50}](x-1)^{-1} = -2\binom{-1}{50} = -2 \neq 2\binom{-1}{50}$$ Also, $$- [x^{50}](x-2)^{-1} = [x^{50}]\frac{1}{2-x} = \frac{1}{2}[x^{50}]\frac{1}{1-\frac{x}{2}}$$ $$ = \frac{1}{2}[x^{50}](1-\frac{x}{2})^{-1} = \frac{1}{2}\binom{-1}{50}(\frac{-1}{2})^{50}$$ $$=(\frac{1}{2})^{51}$$ Both added together gives the correct answer, which is the correct solution using the original method.
The beginning looks good, but I do not see how you justify the last line. I would use the geometric series instead: $$\begin{align*}\frac{x-3}{x^2-3x+2} &= \frac{2}{x-1} - \frac{1}{x-2}\\ &= -2\frac{1}{1-x} + \frac{1}{2}\frac{1}{1-\frac 12 x} \\ & = -2 \sum_{n=0}^\infty x^n + \frac{1}{2}\sum_{n=0}^\infty \frac{x^n}{2^n} \end{align*}$$ From that we can easily see that $$[x^{50}]\frac{x-3}{x^2-3x+2} = -2+\frac{1}{2^{51}}$$
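If you want to double-check the coefficient by machine, here is a sketch (my addition; it assumes sympy is available):

import sympy as sp

x = sp.symbols('x')
f = (x - 3) / (x**2 - 3*x + 2)

# Coefficient of x^50 in the Taylor expansion at 0.
coeff = sp.series(f, x, 0, 51).removeO().coeff(x, 50)
print(sp.simplify(coeff - (-2 + sp.Rational(1, 2)**51)))  # 0, as expected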
{ "language": "en", "url": "https://math.stackexchange.com/questions/1847571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Numerical Value for $\lim \limits_{n \to \infty}\frac{x^n}{1+x^n}$ Let $$f (x) := \lim \limits_{n \to \infty}\frac{x^n}{1+x^n}$$ Determine the numerical value of $f(x)$ for all real numbers $x \ne -1$. For what values of $x$ is $f$ continuous? I honestly do not know how to find the numerical value. I don't even know what this means, as the teacher's notes do not seem to cover this topic directly. For the second part the answer seems too obvious, so I feel like my thinking is off. $f$ is continuous for all values of $x$ except for $x=-1$ and where $n$ is not odd. Any assistance on this is greatly appreciated.
In general you have for $f(x)=\frac{x^n+P(x)}{x^n+Q(x)}$ that $\lim \limits_{n \to \infty}f(x)=1$ for $|x|>1$, if $P$ and $Q$ are polynomials both of degree less than $n$ (divide numerator and denominator by $x^n$ to see this; for $|x|<1$ one instead has $x^n \to 0$, so your $f$ tends to $0$ there). With your function $f(x)=\frac{x^n}{x^n+1}$ at fixed $n$, you have no discontinuity when $n$ is even (because the denominator never is zero) and just one discontinuity at $x=-1$ when $n$ is odd (because the denominator becomes zero).
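A quick numerical illustration of the pointwise limit (my addition; plain Python, with a few sample points of my choosing):

n = 201  # odd, so negative x stays in the domain
for x in [-1.1, -0.5, 0.5, 1.1]:
    print(x, x**n / (1 + x**n))
# |x| > 1 gives values near 1, while |x| < 1 gives values near 0.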
{ "language": "en", "url": "https://math.stackexchange.com/questions/1847637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Probability of getting 2 heads and 2 tails If a fair coin is tossed 4 times, what is the probability that two heads and two tails will result? My calculation: the number of ways of getting exactly 2 heads and 2 tails will be $6$ out of $8$. E.g. $$HHTT,THHT,TTHH,HTTH,HTHT,THTH,HHHT,TTTH$$
There are $2^4=16$ possible outcomes: $HHHH$ $\ \ $ $HHHT$ $\ \ $ $HHTH$ $\ \ $ $HTHH$ $\ \ $ $THHH$ $\ \ $ $\color{red}{HHTT}$ $\ \ $ $\color{red}{HTHT}$ $\ \ $ $\color{red}{THHT}$ $\ \ $ $\color{red}{HTTH}$ $\ \ $ $\color{red}{THTH}$ $\ \ $ $\color{red}{TTHH}$ $\ \ $ $HTTT$ $\ \ $ $THTT$ $\ \ $ $TTHT$ $\ \ $ $TTTH$ $\ \ $ $TTTT$ $\ \ $ 6 of them have 2 tails and 2 heads. Thus the probability of getting 2 heads and 2 tails is $\frac {\text{No. of favorable outcomes}}{\text{No. of possible outcomes}}=\frac{6}{16}=\frac38$
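The count can also be verified by brute force (my addition; standard-library Python):

from itertools import product

outcomes = list(product('HT', repeat=4))
favorable = [o for o in outcomes if o.count('H') == 2]
print(len(favorable), len(outcomes))   # 6 16
print(len(favorable) / len(outcomes))  # 0.375 = 3/8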
{ "language": "en", "url": "https://math.stackexchange.com/questions/1847751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Suppose $\left\{ x_{n}\right\} $ is convergent. Prove that if $c\in\mathbb{R} $, then $\left\{ cx_{n}\right\} $ also converges. Good morning! I wrote a proof of the following exercise but I don't know if it is fine: Suppose $\left\{ x_{n}\right\} $ is convergent. Prove that if $c\in\mathbb{R} $, then $\left\{ cx_{n}\right\} $ also converges. Proof: If $\left\{ x_{n}\right\} $ converges, then by definition $$\lim_{n\rightarrow\infty}x_{n}=x.$$ For $\frac{\epsilon}{\mid c\mid}>0$, let $N\in\mathbb{N}$ be such that if $n>N$, then $\mid x_{n}-x\mid<\frac{\epsilon}{\mid c\mid} $. Now, we have this: $$|cx_{n}-cx|=|c(x_{n}-x)|=|c||x_{n}-x|<|c|\frac{\epsilon}{|c|}=\epsilon\Rightarrow c\lim_{n\rightarrow\infty}x_{n}=cx$$ and $\left\{ cx_{n}\right\} $ converges. Please review this...
That looks fine. You may want to comment that this only works for $c\neq0$ (for obvious reasons).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1847847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving the $i$-th Fibonacci number by induction: can an inductive step be used for two sequential values? I am working through the beginning of Introduction to Algorithms, and came across the problem Prove by induction that the $i$-th Fibonacci number satisfies the equality $$ F_{i} = \frac{\phi^{i} - \hat{\phi}^{i}}{\sqrt{5}}$$ where $\phi$ and $\hat{\phi}$ are the golden ratio and its conjugate, respectively. Now I know there are plenty of answers online regarding this proof, and I have already come to understand a few ways to approach it, but I am simply curious about the approach I originally took to solving the problem and whether it is valid or not. I am fairly convinced it is invalid, but I want to double check and wonder if there is some mechanism in the proof I can change to validate it. My approach: First, I proved (trivially) that both $\phi$ and $\hat{\phi}$ satisfy the equation \begin{equation} x^{2} = x+1 \tag{1} \end{equation} Then after trivially proving the base cases for the inductive proof, for the inductive step we assume $$ F_{k} = \frac{\phi^{k} - \hat{\phi}^{k}}{\sqrt{5}}$$ for some $k \in \mathbb{Z}^{+}$. Then for $k+1$, we have \begin{align} \frac{\phi^{k+1} - \hat{\phi}^{k+1}}{\sqrt{5}} &= \frac{\phi^{k-1}\phi^{2} - \hat{\phi}^{k-1}\hat{\phi}^{2}}{\sqrt{5} } \\ &= \frac{\phi^{k-1}(\phi + 1) - \hat{\phi}^{k-1}(\hat{\phi} + 1)}{\sqrt{5}} && \text{by (1)} \\ &= \frac{\phi^{k} - \hat{\phi}^{k}}{\sqrt{5}} + \frac{\phi^{k-1} - \hat{\phi}^{k-1}}{\sqrt{5}} \\ &= F_{k} + F_{k-1} \\ &= F_{k+1} && \text{by definition} \end{align} This looks downright incorrect to me because it implies that the inductive step holds for $k-1$, which is not permitted in inductive proofs, correct? If so, are there any measures I can take to validate this proof? I've already worked out a solution going the other way with the recurrence relation; I'm just curious how close this might be (I haven't touched inductive proofs in a while).
For $k\ge 1$, let $A_k$ be the assertion that $F_k$ and $F_{k-1}$ both satisfy the condition. You have shown that if $A_k$ holds, then the condition is satisfied at $k+1$, and therefore that $A_{k+1}$ holds. So you have proved that $A_n$ holds for all $n$, and therefore that $F_n$ satisfies the condition for all $n$. For another approach that is more generally useful, please see strong induction aka complete induction.
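A quick numerical check of the closed form against the recurrence (my addition; plain Python, using the convention $F_0=0$, $F_1=1$):

phi = (1 + 5**0.5) / 2
phihat = (1 - 5**0.5) / 2

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

for i, F in enumerate(fib):
    closed = (phi**i - phihat**i) / 5**0.5
    assert abs(closed - F) < 1e-9, (i, closed, F)
print("Binet's formula matches F_0 .. F_19")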
{ "language": "en", "url": "https://math.stackexchange.com/questions/1847928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Different ways of solving the limit $\lim \limits_{ x\rightarrow 0 }{ { x }^{ x } } $ I know how to solve this problem by using L'Hospital's rule $$\lim \limits_{ x\rightarrow 0 }{ { x }^{ x } } =\lim\limits _{ x\rightarrow 0 }{ { e }^{ x\ln { x } } } =\lim\limits _{ x\rightarrow 0 }{ { e }^{ \frac { \ln { x } }{ \frac { 1 }{ x } } } } =\lim\limits _{ x\rightarrow 0 }{ { e }^{ \frac { \frac { 1 }{ x } }{ -\frac { 1 }{ { x }^{ 2 } } } } } ={ e }^{ 0 }=1,$$ what other ways can you suggest or show? Thanks.
As pointed out by Bernard, that is the same as showing that $$ \lim_{x\to 0^+} x\log x = 0 \tag{1}$$ or, by setting $x=e^{-t}$, $$ \lim_{t\to +\infty} t e^{-t} = 0\tag{2} $$ that follows by squeezing: for any $t>0$, $e^t>1+t+\frac{t^2}{2}$, hence: $$ 0\leq \lim_{t\to +\infty} te^{-t} \leq \lim_{t\to +\infty}\frac{t}{1+t+\frac{t^2}{2}}=0.\tag{3}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Integral problem help Given the definite integral $ f(t)=\int_{0}^{t} \frac{x^2+13x+36}{1+\cos^{2}x} dx $ At what value of $t$ does the local max of $f(t)$ occur? What I did is replace the $x$ variable with $t$... so $f'(t)=\frac{t^2+13t+36}{1+\cos^{2}t}=0$, because local extrema occur where the derivative is $0$. So based on my knowledge, $t = -4$ and $-9$ when $f'(t)=0$, and I used a point before $-4$, between $-4$ and $-9$, and after $-9$ and plugged it back into $f(t)$. Did I do it right? Thanks
Note that the first derivative is positive up to $-9$, then negative up to $-4$, then positive. So there is a local max at $-9$ and a local min at $-4$. I do not know of any way of evaluating the function at the local max and local min except numerically, say using Simpson's Rule.
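For the numerical evaluation, something like the following sketch works (my addition; it assumes scipy is available, with quad standing in for Simpson's Rule):

from math import cos
from scipy.integrate import quad

integrand = lambda x: (x**2 + 13*x + 36) / (1 + cos(x)**2)

for t in (-9, -4):
    val, err = quad(integrand, 0, t)
    print(t, val)
# f(-9), the local max, comes out larger than f(-4), the local min.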
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can I identify $X/A$ with the one-point compactification of $X-A$, where $X\supset A$ is a topological space? Intuitively, $A$ collapses to a single point which may represent the infinity point of the one-point compactification of $X-A$. Definitely, we should assume $X$ is locally compact Hausdorff. For example, if $X=S^n$ and $A=\{pt\}$, a single point, then $X/A \cong X = S^n$ while $X-A \cong \mathbb R^n$, whose one-point compactification is exactly $S^n$. However, I am not sure if this is right for general topological spaces. And, I am trying to find out the exact conditions to guarantee this identification. Please help. Thanks.
(Partial answer) Edit: Corrected earlier omission: it is also necessary that $A$ be closed. As G. Sassatelli points out, the definition of an open set in $X / A$ and in the compactification of $(X \setminus A)$ are different. A partial answer to your question is to determine under what conditions the canonical bijection between $X / A$ and the compactification of $(X \setminus A)$ is a homeomorphism. First we consider the topology restricted to $(X \setminus A)$. $X \setminus A$ is open in the one-point compactification of the space $X \setminus A$, so it must also be open in $X / A$. It follows then immediately that $\boldsymbol{A}$ must be closed. This guarantees that the topology on $X \setminus A$ has the same open sets as in $X$, so that open sets in $X \setminus A$ are the same as in the original topology of $X$ in both spaces. From here, the additional condition we want is that the open sets coincide for sets containing the extra point: for any set $V \subseteq (X \setminus A)$, $V \cup \{[A]\}$ is open in $X/A$ if and only if $V \cup \{\infty\}$ is open in the compactification of $(X \setminus A)$. Unpacking these definitions, we want $$ V \cup A \text{ is open} \iff X \setminus (V \cup A) \text{ is compact} $$ The $\Longleftarrow$ direction is true if $X$ is Hausdorff: if $X \setminus (V \cup A)$ is compact, then it is closed, so its complement $V \cup A$ is open. The $\Longrightarrow$ direction says: the complement of an open set containing $A$ is compact. Or in other words, a closed set disjoint from $A$ must be compact. In summary: * If $X$ is Hausdorff, then a necessary and sufficient condition is that (i) $A$ is closed, and (ii) every closed set disjoint from $A$ is compact. (Perhaps others can reduce this condition further to something else well-known.) * In particular, if $X$ is compact Hausdorff and $A$ is closed, then the two are always equivalent. This covers the case you discussed, of $S^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the range of $\lambda$? Suppose $a, b, c$ are the sides of a triangle and no two of them are equal. Let $\lambda \in \mathbb{R}$. If the roots of the equation $x^2 + 2(a + b + c)x + 3\lambda(ab + bc + ca) = 0$ are real, then what is the range of $\lambda$? I got that $$\lambda \leq\frac{ (a + b + c)^2} {3(ab + bc + ca)}$$ After that, what should I do?
For a triangle with sides $a,b,c$ by triangle inequality, we have $$|a-b|<c$$ Squaring both sides we get, $$(a-b)^2<c^2\tag{1}$$ Similarly, $$(b-c)^2<a^2\tag{2}$$ And $$(c-a)^2<b^2\tag{3}$$ Adding $(1),(2)$ and $(3)$, we get $$a^2+b^2+c^2 <2(ab+bc+ca) \Longleftrightarrow (a+b+c)^2 <4(ab+bc+ca)\tag{4}$$ From $(4)$, we get $\lambda <\dfrac{4}{3}$ To see why the upper bound is the supremum consider a degenerate triangle with sides $(a,a,c)$,(where $c \rightarrow 0$) and permutations. For example, when $(a,b,c)=(1,1, 0.01)$, then $\lambda \leq \dfrac{4.0401}{3.0600} \approx 1.321 $
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Justify why there can't be a strict local extremum at $\|x\|$ = $\sqrt{1 \over 2}$ for $f(x) = \|x\|^4 - \|x\|^2$ Given $f: \Bbb R^2 \to \Bbb R$ defined by $$f(x) = \|x\|^4 - \|x\|^2$$ with $x := (x_1, x_2)$, justify why there can't be a strict local extremum at $\|x\|$ = $\sqrt{1 \over 2}$. Approach Well, I would guess that there can't be a strict local extremum if I could justify that the gradient of the function isn't $0$ at the given point. But in this thread I received answers on where the gradient of the function is $0$. Rodrigo de Azevedo expressed his solution in a way that might be helpful in this case: he found that the gradient of the function vanishes at $\|x\|_2 = \sqrt{1 \over 2}$. Am I supposed to go ahead from there and argue why the gradient can't be identical at the point given in the exercise?
The given function is $$ f(x) = \Vert x \Vert^4 - \Vert x \Vert^2 = \left(x_1^2 + x_2^2 \right)^2 - (x_1^2 + x_2^2) = x_1^4 + 2x_1^2x_2^2 + x_2^4 -x_1^2 -x_2^2 $$ We find that $$ \nabla f(x) = \left[ \begin{array}{c} \frac{\partial f}{\partial x_1} \\[2mm] \frac{\partial f}{\partial x_2} \\[2mm] \end{array} \right] = \left[ \begin{array}{c} 4x_1^3 + 4x_1 x_2^2 - 2x_1 \\[2mm] 4x_2^3 + 4x_1^2 x_2 - 2x_2\\[2mm] \end{array} \right] $$ The critical points of $f$ are obtained by solving $$ \nabla f(x) = 0 $$ or equivalently, the system of equations $$ \left. \begin{array}{ccc} 4x_1^3 + 4x_1 x_2^2 - 2x_1 & = & 0 \\[2mm] 4x_2^3 + 4x_1^2 x_2 - 2x_2 & = & 0 \\[2mm] \end{array} \right. \tag{1} $$ Clearly, $(x_1, x_2) = (0, 0)$ is a solution of (1). Hence, the origin is a critical point of the function $f(x)$. Next, we suppose that $x_1 \neq 0$ and $x_2 \neq 0$. Dividing the first equation of (1) by $x_1$ and the second equation of (1) by $x_2$, we get: $$ 4x_1^2 + 4x_2^2 - 2 = 0\\ 4x_2^2 + 4x_1^2 - 2 = 0 $$ or equivalently, the equation $$ x_1^2 + x_2^2 = {1 \over 2} $$ (The cases where exactly one coordinate vanishes lead to points of the same circle.) Thus, we conclude that the critical points of $f$ consist of the origin and all points on the circle with centre at the origin and radius $\sqrt{1 \over 2}$, which can also be represented compactly as $$ \Vert \mathbf{x} \Vert = \sqrt{1 \over 2} $$ Calculating the Hessian of $f$, we obtain $$ Hf(x) = \left[ \begin{array}{cc} 12 x_1^2 + 4 x_2^2 -2 & 8 x_1 x_2 \\[2mm] 8 x_1 x_2 & 12 x_2^2 + 4 x_1^2 - 2 \\[2mm] \end{array} \right] $$ Along the circle $\Vert \mathbf{x} \Vert = \sqrt{1 \over 2}$ or $ 4 x_1^2 + 4 x_2^2 - 2 = 0$, we get $$ Hf(x) = \left[ \begin{array}{cc} 8 x_1^2 & 8 x_1 x_2 \\[2mm] 8 x_1 x_2 & 8 x_2^2 \\[2mm] \end{array} \right] $$ If we calculate the leading principal minors of $Hf(x)$, we get $$ \Delta_1 = 8 x_1^2 \geq 0, \ \ \Delta_2 = 0 $$ Using Sylvester's test, we conclude that $Hf(x)$ is not positive definite. However, since all principal minors are nonnegative, $Hf(x)$ is positive semi-definite. Hence, any point on the circle $C$ defined by $x_1^2 + x_2^2 = {1 \over 2}$ is a local minimizer for $f$ but not a strict local minimizer. This is because all points on $C$ have the same value of $f$ and the definition of strict local minimizer requires that in a small neighborhood $N$ of a point $\xi \in C$, $f(\xi)$ is smaller than $f(x)$ for $x \neq \xi, x \in N$. Obviously, any small neighborhood $N$ of $\xi \in C$ will include points of $C$.
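As a sanity check (my addition; it assumes sympy is available), the critical-point and Hessian computations can be reproduced symbolically:

import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
f = (x1**2 + x2**2)**2 - (x1**2 + x2**2)
grad = [sp.diff(f, v) for v in (x1, x2)]

# Every point of the circle x1^2 + x2^2 = 1/2 is critical:
circle = {x1: sp.sqrt(sp.Rational(1, 2)) * sp.cos(t),
          x2: sp.sqrt(sp.Rational(1, 2)) * sp.sin(t)}
print([sp.simplify(g.subs(circle)) for g in grad])  # [0, 0]

# The Hessian is singular along the circle:
H = sp.hessian(f, (x1, x2))
print(sp.simplify(H.det().subs(x2**2, sp.Rational(1, 2) - x1**2)))  # 0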
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Series with digammas (Inspired by a comment in answer https://math.stackexchange.com/a/699264/442.) corrected Let $\Psi(x) = \Gamma'(x)/\Gamma(x)$ be the digamma function. Show $$ \sum_{n=1}^\infty (-1)^n\left(\Psi\left(\frac{n+1}{2}\right) -\Psi\left(\frac{n}{2}\right)\right) = -1 $$ As noted, it agrees to many decimals. But care may be required since the convergence is only conditional. added Both solutions are good. But could use explanation for exchange of summation. Note that $$ 2\sum_{n=1}^\infty\sum_{m=0}^\infty \left(\frac{(-1)^n}{2m+n} -\frac{(-1)^n}{2m+n+1}\right) = 2\sum_{m=0}^\infty\sum_{n=1}^\infty \left(\frac{(-1)^n}{2m+n} -\frac{(-1)^n}{2m+n+1}\right) = -1 $$ is correct. But "Fubini" justification fails, since $$ \sum_{m=0}^\infty\sum_{n=1}^\infty \left|\frac{(-1)^n}{2m+n} -\frac{(-1)^n}{2m+n+1}\right| = \infty $$ Similarly in Random Variable's solution, the exchange $\sum_{n=1}^\infty \int_0^1 = \int_0^1 \sum_{n=1}^\infty$, although correct, cannot be justified by Fubini.
Using the integral representation $$\psi(s+1) = -\gamma +\int_{0}^{1} \frac{1-x^{s}}{1-x} \, dx ,$$ we get $$ \begin{align} \sum_{n=1}^{\infty} (-1)^{n} \left(\psi \left(\frac{n}{2} \right)- \psi \left(\frac{n+1}{2}\right) \right) &= \sum_{n=1}^{\infty} (-1)^{n} \int_{0}^{1} \frac{x^{(n+1)/2-1} - x^{n/2-1}}{1-x} \, dx \\ &= \int_{0}^{1} \frac{1}{1-x} \sum_{n=1}^{\infty} (-1)^{n} \left(x^{n/2-1/2} - x^{n/2-1} \right) \, dx \\ &= \int_{0}^{1} \frac{1}{1-x} \left(- \frac{1}{1+\sqrt{x}}+ \frac{1}{(1+\sqrt{x})\sqrt{x}} \right) \, dx \\ &= 2 \int_{0}^{1} \frac{u}{1-u^{2}} \left(- \frac{1}{1+u} + \frac{1}{(1+u)u} \right) \, du \\&= 2\int_{0}^{1} \frac{1-u}{(1-u^{2})(1+u)} \, du\\ &= 2 \int_{0}^{1} \frac{1}{(1+u)^{2}} \, du \\ &=1. \end{align}$$ EDIT: Daniel Fischer was kind enough to point out to me that we can use the dominated convergence theorem to justify interchanging the order of integration and summation. Specifically, we may write $$ \begin{align} \sum_{n=1}^{\infty} (-1)^{n} \int_{0}^{1} \frac{x^{(n+1)/2-1} - x^{n/2-1}}{1-x} \, dx &= \lim_{N \to \infty}\sum_{n=1}^{N} (-1)^{n} \int_{0}^{1} \frac{x^{(n+1)/2-1} - x^{n/2-1}}{1-x} \, dx \\ &=\lim_{N \to \infty}\sum_{n=1}^{N} (-1)^{n+1} \int_{0}^{1} \frac{x^{n/2-1}} {1+\sqrt{x}} \, dx \\ &= \lim_{N \to \infty} \int_{0}^{1} \sum_{n=1}^{N} (-1)^{n+1} \frac{x^{n/2}} {x(1+\sqrt{x})} \, dx. \end{align}$$ But for $x \in [0,1]$, $$\left|\sum_{n=1}^{N} (-1)^{n+1} \frac{x^{n/2}} {x(1+\sqrt{x})} \right| \le \frac{\sqrt{x}}{x(1+\sqrt{x})},$$ which is integrable on $[0,1]$. This, combined with the fact that $$\sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^{n/2}} {x(1+\sqrt{x})} $$ converges pointwise on $[0, 1)$, allows us to move the limit inside the integral.
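The value can also be spot-checked numerically (my addition; it assumes numpy and scipy are available); the partial sums approach $1$, though only at the slow rate expected of a conditionally convergent series:

import numpy as np
from scipy.special import psi  # the digamma function

N = 10**6
n = np.arange(1, N + 1)
terms = (-1.0)**n * (psi(n / 2.0) - psi((n + 1) / 2.0))
print(terms.sum())  # approximately 1, with error on the order of 1/N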
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Fundamental group of a compact space with compact universal covering space I have this problem for a Riemannian manifold, but think that it is just a topological problem. I know that this is probably a silly question, but it has been a while since I studied general topology and algebraic topology. Let $X$ be a compact topological space and assume that its universal covering space $\tilde{X}$ is also compact. How can I prove that the fundamental group of $X$ is finite? Thanks!
Consider the covering map $\pi \colon \tilde{X} \rightarrow X$. Above any $p \in X$, the fiber $\pi^{-1}(p)$ is a discrete closed subset of a compact space $\tilde{X}$ and so must be finite. By the general theory of covering spaces, if we fix some $\tilde{p} \in \tilde{X}$ with $\pi(\tilde{p}) = p$ then we obtain a bijection between $\pi_1(X,p)$ and $\pi^{-1}(p)$ given by sending (the homotopy class of) a based loop $\gamma \colon [0,1] \rightarrow X$ at $p$ to $\tilde{\gamma}(1)$ where $\tilde{\gamma} \colon [0,1] \rightarrow \tilde{X}$ is the unique lift of $\gamma$ to $\tilde{X}$ satisfying $\tilde{\gamma}(0) = \tilde{p}$. Thus, $\pi_1(X,p)$ is finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Question on the inequality of sequences Given two sequences $(a_n)_{n \geq 0}$, $(b_n)_{n \geq 0}$ satisfying $a_n,b_n >0$ for all $n$ and $\sum_{n}a_n \gtrsim \sum_{n}b_n$. My question is: for a positive sequence $(c_n)_{n \geq 0}$, do we have the following inequality? $\sum_{n}a_nc_n \gtrsim \sum_{n}b_nc_n$
The answer is negative. Consider the three sequences: $$ (a_n)= (1,0,0\ldots) \quad (b_n)=\left( 0, \frac12, 0\ldots \right),\quad (c_n)=\left(\frac{1}{4}, 1, \ast, \ast \ldots\right).$$ (Here $\ast$ means any positive number). You have that $$ \sum_n a_n=1>\frac12=\sum_n b_n, $$ but $$ \sum_n c_na_n=\frac14<\frac12=\sum_nc_nb_n.$$ P.S.: I noticed that you require $a_n, b_n >0$, so strictly speaking the present example is ruled out. But it can be fixed easily, replacing the zeroes with any convergent series.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1848890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If you take the reciprocal in an inequality, would it change the $>/< $ signs? Example:$$-16<\frac{1}{x}-\frac{1}{4}<16$$ In the example above, if you take the reciprocal of $$\frac{1}{x}-\frac{1}{4} = \frac{x}{1}-\frac{4}{1}$$ would that flip the $<$ to $>$ or not? In other words, if you take the reciprocal of $$-16<\frac{1}{x}-\frac{1}{4}<16$$ would it be like this: $$\frac{1}{-16}>\frac{x}{1}-\frac{4}{1}>\frac{1}{16}$$
It depends if $x$ and $y$ are the same sign. Case 1: $0 < x < y$ then $0 < x(1/y) < y(1/y)$ and $0 < x/y < 1$ and $0 < x/y(1/x) < 1 (1/x)$ so $0 < 1/y < 1/x$. If both positive, flip. Case 2: $x < 0 < y$ then $x/y < 0 < 1$. Then as $x < 0$ we flip when we do $x/y*(1/x) > 0 > 1*(1/x)$ so $ 1/y > 0 > 1/x$ so $1/x < 0 < 1/y$. Don't flip. Case 3: $x < y < 0$ then $x/y > 1 > 0$ and $1/y < 1/x < 0$. Flip if they are the same sign. But FOR THE LOVE OF GOD!!!!!!! the reciprocal of $1/x - 1/4$ is !!!!!!!NOT!!!!!! $x/1 - 4/1$!!!!!!!! It is $\frac{1}{1/x - 1/4} = \frac{4x}{4 - x}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 0 }
System of two quadratic equations in two variables with two parameters leads to quintic polynomial Actually, it's two closely related systems. Let $a,b \in \mathbb{Q}$ be the parameters. The first system has the form: $$(1+a y)x^2-2(a+y)x+(1+a y)=0 \\ (1-b x)y^2-2(b-x)y+(1-b x)=0$$ One of the solutions can be written as: $$x=\frac{\sqrt[5]{(a+1)^2(b+1)}-\sqrt[5]{(a-1)^2(b-1)}}{\sqrt[5]{(a+1)^2(b+1)}+\sqrt[5]{(a-1)^2(b-1)}}$$ $$y=\frac{\sqrt[5]{(a-1)(b+1)^2}-\sqrt[5]{(a+1)(b-1)^2}}{\sqrt[5]{(a-1)(b+1)^2}+\sqrt[5]{(a+1)(b-1)^2}}$$ The second system has the form: $$(1-a y)x^2+2(a+y)x-(1-a y)=0 \\ (1-b x)y^2-2(b+x)y-(1-b x)=0$$ One of the solutions can be written as: $$x=\tan \left(\frac{1}{5} \arctan \frac{1-a^2-2ab}{(1-a^2)b+2a} \right)$$ $$y=\tan \left(\frac{1}{5} \arctan \frac{1-b^2+2ab}{(1-b^2)a-2b} \right)$$ In both cases $x,y$ are in general algebraic integers of order $5$, meaning their minimal polynomial with integer coefficients is quintic (putting aside the degenerate cases). Is there an intuitive explanation about the reason why a system of two coupled quadratic equations leads to two separate quintic equations for each variable? Is $5$ the maximal degree of a polynomial such a system can lead to? Just a note - I derived both solutions myself, so there is no need to show how these systems can be solved. Unless you really want to, but that's not what I'm asking. By changing the signs we can make a lot of related systems, but most of them have rational solutions, so are not very interesting. Edit Actually, as @egred pointed out, these are not quadratic equations, but cubic equations (or at least cubic forms, because we have terms like $ayx^2$ and $bxy^2$ which are of course, cubic).
You're wrong in considering the two systems as consisting of quadratic equations. The curve represented by $$ (1+a y)x^2-2(a+y)x+(1+a y)=0 $$ is, in general, cubic (unless $a=0$). For $a\ne0$ and $b\ne0$, the first system has, properly counting multiplicities, nine solutions and it's very possible that one of the solutions consists of numbers having degree $5$ over the rationals. To make a simpler example, Menaechmus found the duplication of the cube by intersecting two conic sections; in modern terms, he intersected $$ \begin{cases} xy=2 \\[4px] y=x^2 \end{cases} $$ finding $x^3=2$. Similarly, the two cubics \begin{cases} x^2y=2 \\[4px] y=x^3 \end{cases} intersect in a point where $x^5=2$. Not the same form as your curves, but that's the idea.
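To watch the degree-5 eliminant appear by machine (my addition; it assumes sympy, and the parameter values $a=1/3$, $b=1/5$ are arbitrary samples):

import sympy as sp

x, y = sp.symbols('x y')

# Menaechmus-style toy example: eliminating y leaves a quintic in x.
print(sp.resultant(x**2*y - 2, y - x**3, y))  # -x**5 + 2, i.e. x^5 = 2

# The same elimination for the first system, at sample parameter values.
a, b = sp.Rational(1, 3), sp.Rational(1, 5)
f1 = (1 + a*y)*x**2 - 2*(a + y)*x + (1 + a*y)
f2 = (1 - b*x)*y**2 - 2*(b - x)*y + (1 - b*x)
print(sp.Poly(sp.resultant(f1, f2, y), x).degree())  # 5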
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to turn the reflection about $y=x$ into a rotation. If we reflect $(x,y)$ about $y=x$ then we get $(y,x)$. And because $x^2+y^2=y^2+x^2$ this can also be represented by a rotation. Using this we get: $$(x,y)•(y,x)=2xy=(x^2+y^2)\cos (\theta)$$ Hence $\theta=\arccos (\frac{2xy}{x^2+y^2})$ So using complex numbers we rotate $(x,y)$ clockwise/counterclockwise an angle of $\arccos (\frac{2xy}{x^2+y^2})$ depending on which side of $y=x$ the point is, i.e. depending on whether or not $y > x$. My question: How can we know the angle is $\arccos (\frac{2xy}{x^2+y^2})$ without using the already known result that $(x,y)$ rotated about $x=y$ is $(y,x)$?
If $\theta$ is the angle between the x-axis and the line from $0$ to $(x,y)$, then $\theta = \arccos(\frac{x}{\sqrt{x^2+y^2}})$. We reflect about a line with angle $\frac{\pi}{4}$, so the angle between $(x,y)$ and the line $x=y$ is $\frac{\pi}{4} - \theta$. The angle between $(x,y)$ and the reflected point will be double the previous angle, or $\frac{\pi}{2} -2 \theta$. Compute the cosine of $\frac{\pi}{2} - 2\theta$ and you will get $\frac{2xy}{x^2+y^2}$.
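A quick numeric spot check of the identity (my addition; standard-library Python, with an arbitrary sample point):

from math import atan2, cos, pi

x, y = 3.0, 1.0
theta = atan2(y, x)  # angle of (x, y) measured from the x-axis
print(cos(pi/2 - 2*theta), 2*x*y / (x**2 + y**2))  # both 0.6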
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
About the existence of the diagonal set of Cantor The classic proof of Cantor's theorem starts with the assumption that the set $$B=\{x\in A:x\notin f(x)\}$$ exists, where $f: A\to\mathcal P(A)$ is a bijective function. I understand the proof but I don't understand the assumptions from which this proof starts. To be clear, why can such a set $B$ be constructed? How can you justify this assumption? To me the proof of Cantor's theorem is far from clear or complete if there is no explanation of why $B$ must exist. Then, can someone explain to me or justify, via other theorems if possible, why $B$ must exist? Thank you in advance. P.S.: can someone explain to me the downvotes on the question?
Another example, from the infinite set of natural numbers: \begin{eqnarray} S=\mathbb{N}\\ P\left(S\right)=\{U : U\subseteq S\}\\ f\left(x\right)=\{x\} \end{eqnarray} Then $f$ is 1-1. Let $B=\{x\in S;x\notin f\left(x\right)\}$. Since $x\in f(x)$ for every $x$, we have $x\notin B$ for all $x$; hence $B=\emptyset \in P\left(S\right)$, which exists and again is not in the image of $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Constructing a map of degree 2 $f:T^2\rightarrow S^2$ I know the definition of degree and homology type stuff. But I don't know what a map $T^2\rightarrow S^2$ should actually look like. We never work with explicit examples in my class and I just have no idea what to write. Should I map toroidal coordinates to toroidal coordinates? If not, then what? How do I build this map explicitly? I know this is a stupid question but please help, I'm having trouble with the formalism and I need somebody to directly spell it out for me. Please show me what to do. I don't know how to easily describe maps between spaces like this or how to tell intuitively what I should do to make it have a particular degree. I get what this is doing algebraically to the homology groups but I don't know what it is doing to the actual set.
You want a map $f: S^1 \times S^1 \to S^2$ of degree two. It suffices to provide a map of degree one, since the map $(z,w) \mapsto (z, w^2)$ is of degree two from the torus to the torus, and degree is functorial. For a map of degree one, consider the smash product map $$\gamma: S ^1 \times S^1 \to S^1 \wedge S^1 \cong S^2.$$ To see that this has degree one, you can use the fact that cellular maps are determined by their degrees when collapsing subcomplexes. With the "standard" CW complex structure on $S^1 \times S^1$, $\gamma$ becomes the identity map on the non-collapsed $2$-cell.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is this operator called? If $x \cdot 2 = x + x$ and $x \cdot 3 = x + x + x$ and $x^2 = x \cdot x$ and $x^3 = x \cdot x \cdot x$ Is there an operator $\oplus$ such that: $x \oplus 2 = x^x$ and $x \oplus 3 = {x^{x^x}}$? Also, is there a name for such a set of operators Ops where... Ops(1) is addition Ops(2) is multiplication Ops(3) is exponentiation Ops(4) is $\oplus$ ...and so on Also, is there a branch of math that actually deals with such questions? Have these questions already been answered like 2000 years ago?
It is known as tetration, and it is normally written as $^na$ where $n$ is the height of the power tower. It is the fourth hyperoperation. The zeroth hyperoperation is the successor function, the first is the zeroth hyperoperation iterated, and so on. A more general way to define the $n$th hyperoperation, using the notation $H_n(a,b)$ where $n$ is the index of the hyperoperation, is ${\displaystyle H_{n}(a,b)={\begin{cases}b+1&{\text{if }}n=0\\a&{\text{if }}n=1{\text{ and }}b=0\\0&{\text{if }}n=2{\text{ and }}b=0\\1&{\text{if }}n\geq 3{\text{ and }}b=0\\H_{n-1}(a,H_{n}(a,b-1))&{\text{otherwise}}\end{cases}}}$ Some notations for hyperoperations are (for $H_n(a,b)$): * Square bracket notation: $a[n]b$ * Box notation: $a{\,{\begin{array}{|c|}\hline {\!n\!}\\\hline \end{array}}\,}b$ * Nambiar's notation: $a\otimes ^{n-1}b$ * Knuth's up arrow notation: $a\uparrow^{n-2}b$ * Goodstein's notation: $G(a,b,n)$ * Conway's chained arrow notation: $a\rightarrow b\rightarrow (n-2)$ * Bowers' exploding array function: $\{a,b,n,1\}$ * Original Ackermann function: ${\begin{matrix}\phi (a,b,n-1)\ {\text{ for }}1\leq n\leq 3\\\phi (a,b-1,n-1)\ {\text{ for }}n\geq 4\end{matrix}}$
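Here is a direct (and deliberately naive) transcription of the recursion above (my addition; plain Python, suitable for small inputs only):

def H(n, a, b):
    if n == 0:
        return b + 1
    if n == 1 and b == 0:
        return a
    if n == 2 and b == 0:
        return 0
    if n >= 3 and b == 0:
        return 1
    return H(n - 1, a, H(n, a, b - 1))

print(H(1, 3, 4))  # 7   (addition)
print(H(2, 3, 4))  # 12  (multiplication)
print(H(3, 3, 4))  # 81  (exponentiation)
print(H(4, 2, 3))  # 16  (tetration: 2^(2^2))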
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 1 }
Evaluate $\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right)$ (possible textbook mistake - James Stewart 7th) I was working on a few problems from James Stewart's Calculus book (seventh edition) and I found the following: Find $$\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right)$$ Since there's a $|x|$ on the limit and knowing that $|x| = -x$ for any value less than zero, we have $$\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right) = \lim_{x \to 0^-} \frac{2}{x}$$ So far so good. Continuing, $$\lim_{x \to 0^-} \left( \frac{1}{x} - \frac{1}{|x|} \right) = \lim_{x \to 0^-} \frac{2}{x} = - \infty$$ since the denominator becomes smaller and smaller. When checking the textbook's answer I found that it states the limit does not exist because the denominator approaches $0$ while the numerator does not. Am I missing something or should the limit really be $-\infty$?
bru, I think that the problem is just about terminology. Your derivation is correct, but it is likely that what Stewart is claiming is (I guess) that a limit that goes to $-\infty$ or to $+\infty$ on only one side (as in this example, where the limit is only from the left), is "non existing". Otherwise, the statement "the limit does not exist because the denominator approaches $0$ while the numerator does not" would be simply nonsense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
If $x + y \sim x$ in a commutative monoid, does this imply $y \sim 0$? Let $(M,0,+)$ be a commutative monoid. A congruence relation is an equivalence relation, such that $$ a \sim b, c \sim d \quad \mbox{implies} \quad a + c \sim b + d. $$ for all $a,b,c,d \in M$. Fix some $x,y \in M$. Does $x + y \sim x$ imply $y \sim 0$? Do you know a counter-example?
Minimal counterexample: $(M, \oplus)$ with $M = \{0, 1, 2 \}$, $x \oplus y = \min \{ x+y , 2\}$ and $1 \sim 2$. Then $1 \sim (1 \oplus 1)$ but $1 \not\sim 0$.
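The counterexample is small enough to verify exhaustively (my addition; plain Python):

M = [0, 1, 2]
op = lambda a, b: min(a + b, 2)
sim = lambda a, b: a == b or {a, b} == {1, 2}  # congruence classes {0}, {1,2}

# ~ is a congruence: a~b and c~d imply (a+c) ~ (b+d).
assert all(sim(op(a, c), op(b, d))
           for a in M for b in M for c in M for d in M
           if sim(a, b) and sim(c, d))

print(sim(op(1, 1), 1), sim(1, 0))  # True False: 1+1 ~ 1, yet 1 is not ~ 0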
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are subgroups defined based on group homomorphisms rather than on the law of composition? I am getting reacquainted with algebra after some time. I thought I had understood it the last time, but apparently not. The question vexing me is: why are subgroups (like the kernel) of a group defined on the basis of a homomorphism rather than its internal law of composition? Is every subgroup of a group a kernel of some homomorphism? Is it possible that a subgroup which is a kernel of some homomorphism is not a subgroup when it is evaluated on another homomorphism? Please forgive the naiveté. It's difficult to grasp things when one is doing self-study. Thanks, Sachin
No, a subgroup is defined on the basis of its internal law of composition, and those subgroups which arise as kernels of homomorphisms are exactly the normal subgroups. And there are a lot of subgroups which are not normal: for example, the group $A_5$ is simple; this means that there are no non-trivial proper normal subgroups of $A_5.$ Finally, yes, it is possible that the kernel of one homomorphism might not be the kernel of another homomorphism; e.g. $C_2\triangleleft C_4,$ where $C_n$ means the cyclic group of order $n,$ is the kernel of the homomorphism $f:C_4\rightarrow C_4$ defined as $f(x):=x^2.$ But $C_2$ is not the kernel of the homomorphism $g:C_4\rightarrow C_4$ defined as $g(x):=x^3.$ If you still have confusions, feel free to point them out! Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Differential Forms on the Riemann Sphere I am struggling with the following exercise of Rick Miranda's "Algebraic Curves and Riemann Surfaces" (page 111): Let $X$ be the Riemann Sphere with local coordinate $z$ in one chart and $w=1/z$ in another chart. Let $\omega$ be a meromorphic $1$-form on $X$. Show that if $\omega=f(z)\,dz$ in the coordinate $z$, then $f$ must be a rational function of $z$. I have unfortunately no idea how I should begin the proof (since I am new to this topic). Can someone give me a hint? Edit: The transition map is $T:z\rightarrow 1/z$. We have $\omega=f(z)dz$ in the coordinate $z$. Then we know that $\omega$ transforms into $\omega_2=f_2(w)\,dw$ as follows: $f_2(w)=f(T(w))T'(w)=f(1/w)(-1/w^2)$, but how does this help?
EDIT: By multiplying by an appropriate polynomial, we may assume that $\omega$ has poles (at most) at $0$ and $\infty$. On $\Bbb C-\{0\}$ you now have holomorphic functions $f$ and $g$ (your $f_2$) with $$z^2f(z)=-g(1/z).$$ Since $f$ and $g$ have at worst poles at $0$, this equation tells us that each of their Laurent series has only finitely many nonzero terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Is the subgroup of homotopically trivial isometries a closed subgroup of the isometry group? Let $(M,g)$ be a connected Riemannian manifold. Then according to the Steenrod-Myers-Theorem, the isometry group $\text{Isom}(M,g)$ of $(M,g)$ is a compact lie group with the compact-open topology. Is the subgroup $G$ of isometries which are homotopically trivial (i.e. homotopic to the identity) a closed subgroup of $\text{Isom}(M,g)$? Background: We can choose a basepoint $x \in M$ and consider the group homomorphism $$ \varphi \colon \text{Isom}(M,g) \to \text{Out}(\pi_1(M,x)) $$ which maps an isometry $f$ to the class of the automorphism $$ [\gamma] \mapsto [P \;*\; (f \circ \gamma) \;*\; \overline{P}] $$ of $\pi_1(M,x)$, where $P$ is any path in $M$ from $x$ to $f(x)$ and $*$ is concatenation of paths. Now if $M$ is aspherical, then $G$ is the kernel of $\varphi$ and it's plausible that with the right choice of a topology on the outer automorphism group, $\varphi$ is a continuous map and thus $G$ is closed.
Let $I$ denote the isometry group of $(M,g)$. Being a Lie group, it is locally path connected. It follows that the subgroup $G< I$ you are interested in is open. Now, it is a nice exercise to work out that each open subgroup of an arbitrary topological group is also closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1849953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Infinitely Concatenated Sine and Cosine Using a graphing calculator, if one concatenates sine and cosine repeatedly, i.e. $$y=\sin(\cos(\sin(\cos(x))))$$ the graph appears to approach a horizontal line, suggesting that at infinite concatenation, there is a single value of the function for all $x$. Is this correct? If so is this value known?
This is an example of an attractive fixed point. A fixed point of the function $x\mapsto \sin(\cos(x))$ is a number $x_0$ satisfying $x_0 = \sin(\cos(x_0))$, i.e. the number you put in is the same number that you get out. An attractive fixed point is one for which, if $x$ is sufficiently close to $x_0$, then $\sin(\cos(x))$ is even closer to $x_0$, and may be made as close as desired by iterating the process a large enough number of times. In this case, "sufficiently" close appears to include all numbers in the domain. The Banach fixed-point theorem is applicable here. The derivative of $x\mapsto\sin(\cos(x))$ is bounded in absolute value by a number less than $1$ on the image of the map (an interval contained in $[-\sin 1,\sin 1]$); from that it follows from the mean value theorem that this is a contraction mapping there. And the space is complete, i.e. closed under limits of Cauchy sequences. Hence the theorem is applicable. The theorem says that every function to which it is applicable has exactly one attractive fixed point. In fact if you draw the graph of $y=\sin(\cos(x))$ and that of $y=x$, you see that they intersect exactly once, and the $x$-coordinate of the intersection point is then the one fixed point.
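Iterating the map from a few arbitrary starting points illustrates the attraction (my addition; standard-library Python; the printed value is approximate):

from math import sin, cos

for x0 in (-10.0, 0.0, 1.0, 100.0):
    x = x0
    for _ in range(100):
        x = sin(cos(x))
    print(x0, x)  # every start converges to the same value, about 0.6948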
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
In this proof for the rationals being countably infinite, is the enumeration strategy important? During some set theory exercises I came across the proof that the rationals are countably infinite. It used what appears to be a common proof where all the pairs are listed and a zig-zagging path is taken through them, starting with $(1,1), (1,2), (2,1)...$ and so forth. Of course I am tasked with proving all kinds of related things and my first snag is the following: Is the path taken, or maybe better put, is the enumeration strategy important? I'm guessing it is: if you would just list all the pairs $(1,i)$ for $i = 1, 2, 3, \dots$ first, you would never reach $(2,1), (2,2)...(3,1)...$ and such. From a computational perspective, this makes sense to me. Intuitively, not so much, since we are already dealing with infinities. But the way my textbook does it, you have a diagonal of pairs, starting with $(1,1)$, that grows and as such all the rows get even "exposure". Is this reasoning correct? Note, my textbook doesn't count 0 as a natural number.
For the traditional bijection with the positive integers you only get one free trip to infinity. The bijection you proposed requires infinitely many trips to infinity. In set theory ordered mappings beyond infinity are associated with "ordinal" numbers. Ordinal $\omega$ maps to {1 .. $\infty$}. The sequence {2 .. $\infty$, 1} would be $\omega+1$. The sequence {2, 4, 6, 8, .. $\infty$, 1, 3, 5, 7, .. $\infty$} makes two trips to infinity and is mapped to $\omega + \omega = \omega\cdot 2$. The mapping you described is represented by $\omega \cdot\omega = \omega^2$. If you know that $\omega^2$ is countable then you've proved the rationals are countable. But that requires knowledge beyond a typical countability proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Miller's Test Running Time Can Miller's Test be replaced with the bound below, in hopes that it would make a faster general-purpose primality test (compared to ECPP)? If $n$ is an $a$-SPRP for all primes $a$ $<$ ($\log_2 n$)/$2$, then $n$ is prime. I checked the bounds for the first few prime bases $a$, and the tests are deterministic for: If $n$ $<$ $2^4$ is a $2$-SPRP, then $n$ is prime. If $n$ $<$ $2^6$ is a $2, 3$-SPRP, then $n$ is prime. If $n$ $<$ $2^{10}$ is a $2, 3, 5$-SPRP, then $n$ is prime. If $n$ $<$ $2^{14}$ is a $2, 3, 5, 7$-SPRP, then $n$ is prime. $.........$ I did this up to $2^{74}$ to test prime bases $2$ to $37$ and all results are deterministic, but I am running short on a proof. $1.$ Can anyone find a counterexample or show the bound above is accurate? $2.$ Is the running time of the test above faster than ECPP?
A lot of work has been done on the Miller-Rabin primality test to examine how large a number can be (deterministically) assessed with a few given bases, see wikipedia chapter here. According to that, testing with the prime bases up to $37$ is good to $ 318665857834031151167461 \approx 2^{78}$.
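For reference, here is a sketch of the resulting deterministic test for $n$ below the cited bound (my addition; plain Python; it is not tuned for speed, so it says nothing about the ECPP comparison):

def is_sprp(n, a):
    # Is n a strong probable prime to base a?
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime(n):
    if n < 2:
        return False
    bases = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n in bases:
        return True
    if any(n % p == 0 for p in bases):
        return False
    assert n < 318665857834031151167461, "bases only proven below this bound"
    return all(is_sprp(n, a) for a in bases)

print(is_prime(2**61 - 1))  # True (a Mersenne prime)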
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Differential Geometry for General Relativity I'm going to start self-studying General Relativity from Sean Carroll's Spacetime and Geometry: An Introduction to General Relativity. I'd like to have a textbook on Differential Geometry/Calculus on Manifolds for me on the side. I do like mathematical rigor, and I'd like a textbook whose focus caters to my need. Having said that, I don't want an exhaustive mathematics textbook (although I'd appreciate one) that'll hinder me from going back to the physics in a timely manner. I looked for example at Lee's textbook but it seemed too advanced. I have done courses on Single and Multivariable Calculus, Linear Algebra, Analysis I and II and Topology but I'm not sure what book would be the most useful for me given that I have a knack of seeing all results formally. P.S: I'm a student of physics with a mathematical leaning.
Check out Barrett O'Neill's book on semi-Riemannian geometry. This book is written exactly for your purposes: it discusses manifolds with symmetric nonsingular metrics, and in particular spacetime metrics. There are even chapters on cosmology and the Schwarzschild metric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Suppose that $(s_n)$ converges to $s$, $(t_n)$ converges to $t$, and $s_n \leq t_n \: \forall \: n$. Prove that $s \leq t$. I'm stuck with the proof of the following: Suppose that $(s_n)$ converges to $s$, $(t_n)$ converges to $t$, and $s_n \leq t_n \: \forall \: n$. Prove that $s \leq t$. I've tried starting with $s_n \leq t_n \: \forall \: n$ and the definitions of each limit (i.e. $|s_n - s| \leq \epsilon \: \forall \: n > N_1$), but I'm not really getting very far. Any help is appreciated!
Since $\{s_n\}$ converges to $s$ and $\{t_n\}$ converges to $t$, $\{t_n - s_n\}$ converges to $t - s$. Since $s_n \leq t_n$ for all $n$, each term $t_n - s_n$ is nonnegative. It thus suffices to show that a sequence of nonnegative terms cannot converge to a negative limit (use proof by contradiction).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Laurent series of $\frac{1}{z^2(z-1)}$ when $0<\lvert z\rvert<1$ $\frac{1}{z^2(z-1)} = -\left(\frac{1}{z}+\frac{1}{z^2}+\frac{1}{1-z}\right)$. I know that $\frac{1}{1-z}=\sum\limits_{n=0}^\infty z^n$, but what about the other two terms, should they be left as they are, since we can already think of $\frac{1}{z}$ and $\frac{1}{z^2}$ as Laurent series where $b_1 = 1$ in the first case, and the rest of $b_n=0$, and $b_2 = 1$ in the 2nd case, with all other $b_n=0$? Please let me know if there is a better way to find Laurent series.
Your approach is right, at the end you just add the three series to obtain the Laurent series of $\frac{1}{z^2(z-1)}$ : $$-(\sum\limits_{n=0}^\infty z^n+\frac{1}{z}+\frac{1}{z^2})=-\sum\limits_{n=-2}^\infty z^n$$
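sympy reproduces the same expansion, if you want a machine check (my addition; it assumes sympy is available):

import sympy as sp

z = sp.symbols('z')
print(sp.series(1 / (z**2 * (z - 1)), z, 0, 4))
# -1/z**2 - 1/z - 1 - z - z**2 - z**3 + O(z**4), up to the printed ordering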
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Classifying singularities of $\frac{\sin(\pi z)}{z^4+1}$ If $f(z)=\frac{\sin(\pi z)}{z^4+1}$, we have the four roots of $z^4=-1$, which are isolated singularities of $f$: $$z=-(-1)^{1/4},z=(-1)^{1/4}, z=-(-1)^{3/4}, z=(-1)^{3/4}.$$ Do we need to find the Laurent series about all four of the singularities in order to classify them, or is there a better method?
No need to compute the Laurent series. All the singularities are zeroes of order $1$ of the denominator, and the numerator does not vanish at them. This implies that they are poles of order one. You can see it as follows. Call $z_i$, $1\le i\le4$ the singularities. Then $$ f(z)=\frac{1}{z-z_1}\,\frac{\sin(\pi\,z)}{(z-z_2)(z-z_3)(z-z_4)}, $$ and $$ \lim_{z\to z_1}(z-z_1)f(z)=\frac{\sin(\pi\,z_1)}{(z_1-z_2)(z_1-z_3)(z_1-z_4)}\ne0. $$ Similarly for the other singularities.
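A quick symbolic check at one of the four points (my addition; it assumes sympy; $z_1=e^{i\pi/4}$ is one root of $z^4=-1$):

import sympy as sp

z = sp.symbols('z')
z1 = sp.exp(sp.I * sp.pi / 4)

# At a simple zero of the denominator the residue is num/denom':
res = (sp.sin(sp.pi * z) / sp.diff(z**4 + 1, z)).subs(z, z1)
print(sp.N(res))  # a nonzero complex number, so z1 is a pole of order one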
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Method for calculating the integral of $e^{-2ix\pi\psi}/(1+x^2)$ I am seeking the method for calculating the following integral $$\int_{-\infty}^\infty\frac{e^{-2ix\pi\psi}}{1+x^2} dx $$ Ideas I have are: 1) substitution (however, which one?) 2) integration by parts The integral comes from the Fourier transform of $$\frac{1}{1+x^2}$$
$$\int_{-\infty}^\infty\frac{e^{-2ix\pi\psi}}{1+x^2} dx =\int_{-\infty}^\infty\frac{\cos(2\pi\psi\,x)}{1+x^2}dx-i\int_{-\infty}^\infty\frac{\sin(2\pi\psi\,x)}{1+x^2}dx$$ Please check this question $$\color{red}{I(\lambda)=\int_{-\infty}^{\infty}{\cos(\lambda x)\over x^2+1}dx=\frac{\pi}{e^{|\lambda|}}}$$ (the absolute value covers negative $\lambda$, since $I$ is even in $\lambda$) and $$\color{red}{J(\lambda)=\int_{-\infty}^{\infty}{\sin(\lambda x)\over x^2+1}dx=0}$$ Now set $\lambda=2\pi\psi$
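A numerical confirmation for one sample value of $\psi$ (my addition; it assumes scipy and numpy; the cutoff $\pm 300$ is an arbitrary truncation of the integration range):

import numpy as np
from scipy.integrate import quad

psi_val = 0.3
lam = 2 * np.pi * psi_val
val, err = quad(lambda x: np.cos(lam * x) / (1 + x**2), -300, 300, limit=2000)
print(val, np.pi * np.exp(-lam))  # both approximately 0.4770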
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is defined the inner product $g_p$ on $T_p \mathbb{R}^n/\Gamma$ at the point $p$? In the book Eigenvalues in Riemannian Geometry of Isaac Chavel page $28$, I have some questions related to the resolution of the spectrum of the tori. The lattice acts on $\mathbb R^n$ by $$γ(x)=x+γ$$ for $x \in\mathbb R^n$, $γ ∈ Γ$; the action is properly discontinuous, and determines the Riemann covering $p:ℝ^n→ℝ^n/Γ$. Precision : The lattice $\Gamma$ is defined to be $$\Gamma = \left\{\sum_{j=1}^n \alpha^j v_j : \alpha^j \in \mathbb{Z}, j=1,\dots, n\right\}.$$ How is defined the inner product $g_p$ on $T_p \mathbb{R}^n/\Gamma$ at the point $p$? Thanks!
The metric on $R^n/\Gamma$ is induced by the scalar product of $R^n$. More generally, let $(X,g)$ be a manifold $X$ endowed with a differentiable metric $g$, $G$ a subgroup of isometries which acts properly and freely on $X$, $X/G$ is a manifold and $g$ induces a metric on $X/G$ as follows: Let $p:X\rightarrow X/G$ the projection (it is a covering), $x\in X/G, u,v\in T_xX/G$ let $y\in X$ such that $p(y)=x$, since $dp_y:T_yX\rightarrow T_xX/G$ is an isomophism, there exists $u',v'\in T_yX$ such that $dp_y(u')=u, dp_y(v')=v$. Write $g'_x(u,v)=g_y(u',v')$. The definition does not depend on the choices. If $p(z)=x,$ there exists $h\in G$ such that $h(y)=z$, $dp_z(dh_y(u'))=u, dp_z(dh_y(v'))=v$. We have $g_z(dh_y(u')),dh_y(v'))=g_{h(y)}(dh_y(u')),dh_y(v'))=g_y(u',v')$ since $g$ preserves the metric. In the example here, $G$ is a group of translations thus preserves the Euclidean metric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove by induction that $3^{2n+3}+40n-27$ is divisible by 64 for all $n$ in the natural numbers I cannot complete the third step of induction for this one. The assumption is $3^{2n+3}+40n-27=64k$, and when substituting $n+1$ I obtain $3^{2n+5}+40n+13$, which I need to show is also a multiple of $64$. I've tried factoring the expression, dividing, etc. but I cannot advance from here.
Let $A_n = 3^{2n+3}+40n-27$; then $A_n = 11A_{n-1} - 19A_{n-2} + 9A_{n-3}$. From this it's clear that if 64 divides $A_n$ for three consecutive values of $n$, then it holds for the next. So by induction it's enough to check it for $n = -1,0,1$, which is easy enough to do by hand.
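Both the divisibility claim and the recurrence are easy to spot-check by machine (my addition; plain Python):

A = lambda n: 3**(2*n + 3) + 40*n - 27

for n in range(0, 200):
    assert A(n) % 64 == 0
for n in range(3, 50):
    assert A(n) == 11*A(n-1) - 19*A(n-2) + 9*A(n-3)
print("A(n) is divisible by 64 and satisfies the recurrence")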
{ "language": "en", "url": "https://math.stackexchange.com/questions/1850968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Math symbols on vectors, e.g. item in, item not in, for all I have a vector $s = \langle 1,2,3\rangle$ and I want to perform various operations on it like these ones: check if an item $x$ exists in $s$, $x$ not in $s$, and a "for all $i$ in $s$" where $i$ is the item. What are the correct math symbols for doing this on a vector? This is perhaps an unconventional way of utilizing vectors. Note that set notation can't be used for my application as I am highly dependent on an ordered $s$.
Viewing a (coordinate) vector as a map from an index set to the set of reals (or whatever), "$x$ is a component of $s$" is the same as saying that $x$ is in the image of that map ... And unless you are also doing something really vector-ish with them, I'd prefer to call $s$ a finite sequence of reals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1851354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is the intermediate value property equivalent to the Darboux property? I always thought that a function $f:\mathbb{R} \to \mathbb{R}$ has the intermediate value property (IVP) iff it maps every interval to an interval (Darboux property): Proof: Let $f$ have the Darboux property and let $a<b$ and $f(a) < f(b)$. Then $f([a,b])$ is an interval and so contains $[f(a),f(b)]$. If $u \in [f(a),f(b)]$ then of course $u \in f([a,b])$ and thus there exists some $k \in [a,b]$ such that $f(k) = u$, i.e. $f$ has the IVP. Now let $f$ have the IVP and let $a < b$, $x,y \in f([a,b])$ and $z \in \mathbb{R}$ with $x<z<y$. We claim $z \in f([a,b])$. Indeed we have $x = f(x')$ and $y = f(y')$ for some $x',y' \in [a,b]$. W.L.O.G. assume that $x' < y'$. Then by the IVP there is some $c\in [x', y']$ such that $f(c) = z$, i.e. $z \in f([a,b])$ and therefore $f([a,b])$ is an interval. But on this blog, in problem 5, the author says that they are not equivalent: "This [Darboux property] is slightly different from continuity and intermediate value property. Continuity implies Darboux and Darboux implies Intermediate value property." Have I missed something, and if yes, where does the proof given above break?
That is correct according to the definition of intermediate value property saying that for all $a<b$ in the domain, for all $u$ between $f(a)$ and $f(b)$, there exists $k\in(a,b)$ such that $f(k) = u$. The two properties are equivalent, and your proof of that is correct. The blog's author Beni Bogoşel clarified in a comment that he was using a different definition of intermediate value property, meaning that the entire image is an interval. The ambiguity is understandable given that the Intermediate Value Theorem is often stated in terms of a particular interval: $f:[a,b]\to \mathbb R$ continuous implies $f([a,b])$ contains the interval $[\min\{f(a),f(b)\},\max\{f(a),f(b)\}]$. (In this case, because the restriction of a continuous function is also continuous, this theorem automatically implies the stronger intermediate value property for continuous functions on intervals.) The author acknowledged that the convention is not universal, and he may edit to clarify.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1851413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Decreasing sequence numbers with first digit $9$ Find the sum of all positive integers whose digits (in base ten) form a strictly decreasing sequence with first digit $9$. The method I thought of for solving this was very computational and it depended on a lot of casework. Is there a nicer way to solve this question? Note that there are $\displaystyle\sum_{n=0}^{9} \binom{9}{n} = 2^9$ such numbers.
Here is a simple way to compute it with Haskell. The idea is to take all subsequences of "876543210", prepend "9", parse each as an integer and sum them all: Prelude> (sum $ map (read.("9"++)) $ Data.List.subsequences "876543210")::Integer 23259261861
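For a purely combinatorial route (my own sketch, not part of the answer above): each such number is $9$ followed by the elements of a subset $T\subseteq\{8,7,\dots,0\}$ written in decreasing order. The leading $9$ sits at place value $10^{|T|}$, and a digit $d$ sits at place value $10^j$ exactly when $j$ of the $d$ smaller digits are chosen, the $8-d$ larger digits being free. Summing the contributions gives $$9\sum_{k=0}^{9}\binom{9}{k}10^k+\sum_{d=1}^{8}d\,2^{8-d}\sum_{j=0}^{d}\binom{d}{j}10^j=9\cdot 11^9+\sum_{d=1}^{8}d\cdot 11^d\cdot 2^{8-d}=23259261861,$$ in agreement with the program output.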
{ "language": "en", "url": "https://math.stackexchange.com/questions/1851478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Complexity of LUP decomposition of tri-diagonal matrix to solve an equation? Doing the LU decomposition of a tri-diagonal matrix and then solving the equation by forward substitution followed by backward substitution is done in O(n) time. http://www.cfm.brown.edu/people/gk/chap6/node13.html But what are the number of operations and the time complexity of the LUP decomposition of a tri-diagonal matrix?
LU factorization with partial pivoting for banded matrices of size $n\times n$ with bandwidth $w$ (number of subdiagonals plus number of superdiagonals) requires $O(w^2n)$ flops, and the triangular solves require $O(wn)$ flops. Thus, solving linear equations with a tridiagonal matrix using LU factorization with partial pivoting can also be done in $O(n)$ floating point operations.
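To make the $O(n)$ claim concrete for the no-pivoting tridiagonal case, here is a minimal sketch of the Thomas algorithm (entirely my own illustration; function and variable names are mine, and it assumes the matrix needs no pivoting, e.g. is diagonally dominant):

    def thomas_solve(a, b, c, d):
        # a: subdiagonal (a[0] unused), b: diagonal, c: superdiagonal
        # (last entry unused), d: right-hand side; returns x with A x = d.
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination: O(n)
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution: O(n)
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

Both passes touch each row a constant number of times, which is where the $O(n)$ count comes from.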
{ "language": "en", "url": "https://math.stackexchange.com/questions/1851549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A rectangle with perimeter of 100 has area of at least 500; within what bounds must the length of the rectangle lie? Problem: The problem states that there is a rectangle that has a perimeter of $100$ and an area of at least $500$, and it asks for the bounds of the length, which can be given in interval notation or with the <> (greater than or less than) signs. My steps and thought process: So I set up a few equations: 1) $2x+2y=100$ which becomes $x+y=50$ 2) $xy \geq 500$ 3) I then made $y = 50-x$ so that I can substitute it into equation #2 to get: $50x-x^2 \geq 500$ which eventually got me $0 \geq x^2-50x+500$ and this is where I got stuck.
You are correct that $x + y = 50$ and that $xy \geq 500$. We can solve the inequality by completing the square. \begin{align*} xy & \geq 500\\ x(50 - x) & \geq 500\\ 50x - x^2 & \geq 500\\ 0 & \geq x^2 - 50x + 500\\ 0 & \geq (x^2 - 50x) + 500\\ 0 & \geq (x^2 - 50x + 625) - 625 + 500\\ 0 & \geq (x - 25)^2 - 125\\ 125 & \geq (x - 25)^2\\ \sqrt{125} & \geq |x - 25|\\ 5\sqrt{5} & \geq |x - 25| \end{align*} Hence, \begin{align*} -5\sqrt{5} & \leq x - 25 \leq 5\sqrt{5}\\ 25 - 5\sqrt{5} & \leq x \leq 25 + 5\sqrt{5} \end{align*} Alternatively, note that since the perimeter of the rectangle is $100$ units, the average side length is $100/4 = 25$ units. Hence, we can express the lengths of adjacent sides as $25 + k$ and $25 - k$. Hence, the area is $$A(k) = (25 + k)(25 - k) = 625 - k^2$$ The requirement that the area must be at least $500$ square units means \begin{align*} 625 - k^2 & \geq 500\\ 125 & \geq k^2\\ 5\sqrt{5} & \geq |k|\\ 5\sqrt{5} & \geq k \geq -5\sqrt{5} \end{align*} Thus, for the area of the rectangle to be at least $500$ square units, the length of the longer side of the rectangle must be at most $25 + 5\sqrt{5}$ units and the length of the shorter side of the rectangle must be at least $25 - 5\sqrt{5}$ units. Also, note that the maximum area of $625$ square units occurs when $k = 0$, that is, when the rectangle is a square.
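In decimal terms (my own addendum to the answer): $5\sqrt5\approx 11.18$, so the side length must lie in the interval $$\left[\,25-5\sqrt5,\ 25+5\sqrt5\,\right]\approx[13.82,\ 36.18].$$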
{ "language": "en", "url": "https://math.stackexchange.com/questions/1851791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A question on infinite abelian group Let $G$ be an infinite abelian group such that every proper non-trivial subgroup of $G$ is infinite cyclic ; then is $G$ cyclic ? ( The only characterization I know for infinite abelian groups to be cyclic is that every non-trivial subgroup has finite index . But I am not getting anywhere with the above statement . Please help . Thanks in advance ) NOTE : Here is what I have thought for finitely generated infinite abelian ; if $G$ is infinite and finitely generated , then $G \cong \mathbb Z_{n_1} \times ... \times \mathbb Z_{n_k} \times \mathbb Z^r$ , where $r \ne 0$ and $n_1|n_2|...|n_k ; n_1>1$ ; now if $k=0$ then for $r=1$ , $G$ is cyclic ; if $k=0 ; r>1$ then since for $\mathbb Z^r$ has a non-cyclic infinite non-trivial subgroup (a copy of ) $\mathbb Z \times 2\mathbb Z$ violating the hypothesis . So then let $k \ne 0$ ; then a copy of $\mathbb Z_{n_1} \times ... \times \mathbb Z_{n_k} \times 2\mathbb Z$ is contained in $G$ which is non-trivial , infinite and non-cyclic , violating the hypothesis again . So the only possibility is $G \cong \mathbb Z$ i.e. $G$ is cyclic ; am I correct here ?
The answer is yes. Here is an elementary argument. Suppose firstly that for each $n\in\mathbb N$ we have $nG=G$. Since $G$ is clearly torsion free, it follows that $G$ is a $\mathbb Q$-vector space, immediately violating the hypothesis. Thus for some $n\in\mathbb N$ we have $nG<G$, so by hypothesis $nG\cong\mathbb Z$. However $G\cong nG$ since it is torsion free: the map $G\rightarrow G$, $x\mapsto nx$ is injective and has image $nG$. The group $\mathbb Z\!\left [\frac12\right]$ is not a counterexample: there are plenty of proper non-cyclic subgroups, for example $3\mathbb Z\!\left [\frac12\right]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1851875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why there is no value for $x$ if $|x| = -1$? According to the definition of absolute value, negative values are forbidden. But what if I tried to solve an equation and the final result came out like this: $|x|=-1$ One can say there is no value for $x$, or this result is forbidden. That reminds me of similar thinking in the past, when mathematicians did not accept the square root of $-1$, saying that it was forbidden. Now the question is: is it possible for the mathematical community to accept this term like they accepted the imaginary number? For example, they may give it a sign like $j$ and call it an unreal absolute number; then a complex number could be expanded like this: $x = 5 +3i+2j$, where $j$ is an unreal absolute number with $|x|=-1$ Another example: if $|x| = -5$, then $x=5j$ The above examples are just simple thoughts on how complex numbers might be expanded. You may ask me: what is the use of this strange new term, or what are the benefits of it? I am sure this question was raised before in the past when mathematicians decided to accept $\sqrt{-1}$ as an imaginary number. After that they learned the importance of the imaginary number.
The beauty of math is that you can define everything. The question is: what properties you want this "j" to satisfy? For example, I guess that you want the absolute value $|\cdot|$ to satisfy the triangle inequality. Note that $$ 0=|0|=|j+(-j)|\leq|j|+|-j|=-1-1=-2 $$ a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Formal solution without handwaving about Jordan normal form Let $A$ be a $7\times 7$ matrix over $\mathbb C$ with minimal polynomial $(t-2)^3$. I need to prove $\dim \ker (A-2)\geq 3$. The handwavy argument I have is that $\deg m$ is the size of the greatest Jordan block while $\dim \ker (A-2)$ is the number of blocks, and since $2\cdot 3<7$, the dimension must be at least $3$. However, I realized I don't know how to formally prove this, i.e without taking the sentences I said as facts.
I wouldn't call your argument handwavy, but we can replace the use of the Jordan normal form by the underlying calculations: Lemma: Let $V$ be a finite-dimensional vector space and $f, g \colon V \to V$ be two endomorphisms. Then $\dim \ker (fg) \leq \dim \ker f + \dim \ker g$. Proof: We have $g( \ker(fg) ) \subseteq \ker f$ and thus by the rank–nullity formula that $$ \dim \ker (fg) = \dim \operatorname{im} g|_{\ker(fg)} + \dim \ker g|_{\ker(fg)} \leq \dim \ker f + \dim \ker g. $$ Thus we get the following by induction: Corollary: If $V$ is a finite-dimensional vector space and $f_1, \dotsc, f_n \colon V \to V$ are endomorphisms, then $$ \dim \ker(f_1 \dotsm f_n) \leq \dim \ker f_1 + \dotsb + \dim \ker f_n. $$ We have in particular that $\dim \ker f^k \leq k \dim \ker f$ for every endomorphism $f \colon V \to V$ and every $k \in \mathbb{N}$. If we apply this corollary to the endomorphism $f \colon \mathbb{C}^7 \to \mathbb{C}^7$, $x \mapsto (A-2) x$ we find that $$ 7 = \dim \mathbb{C}^7 = \dim \ker f^3 \leq 3 \dim \ker f = 3 \dim \ker (A-2), $$ and thus $\dim \ker (A-2) \geq 3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is $77!$ divisible by $77^7$? Can $77!$ be divided by $77^7$? Attempt: Yes, because $77=11\times 7$ and $77^7=11^7\times 7^7$, so all I need is that the prime factorization of $77!$ contains $\color{green}{11^7}\times\color{blue} {7^7}$, and it does. $$77!=77\times...\times66\times...\times55\times...\times44\times...\times33\times...\times22\times...\times11\times...$$ and all these $\uparrow$ numbers are multiples of $11$, and there are at least $7$ of them, so $77!$ contains for sure $\color{green}{11^7}$ And $77!$ also contains $\color{blue} {7^7}:$ $$...\times7\times...\times14\times...\times21\times...\times28\times...\times35\times...42\times...49\times...=77!$$ I have a feeling that my professor is looking for another solution.
If $p$ is a prime number, the largest number $n$ such that $p^n \mid N!$ is $\displaystyle n = \sum_{i=1}^\infty \left \lfloor \dfrac{N}{p^i}\right \rfloor$. Note that this is really a finite series since, from some point on, all of the $\left \lfloor \dfrac{N}{p^i}\right \rfloor$ are going to be $0$. There is also a shortcut to computing $\left \lfloor \dfrac{N}{p^{i+1}}\right \rfloor$ because it can be shown that $$\left \lfloor \dfrac{N}{p^{i+1}}\right \rfloor = \left \lfloor \dfrac{\left \lfloor \dfrac{N} {p^i} \right \rfloor}{p}\right \rfloor$$ For $77!$, we get $\qquad \left \lfloor \dfrac{77}{11}\right \rfloor = 7$ $\qquad \left \lfloor \dfrac{7}{11}\right \rfloor = 0$ So $11^7 \mid 77!$ and $11^8 \not \mid 77!$ Since $7 < 11$, it follows immediately that $7^7 \mid 77!$. But we can also compute $\qquad \left \lfloor \dfrac{77}{7}\right \rfloor = 11$ $\qquad \left \lfloor \dfrac{11}{7}\right \rfloor = 1$ $\qquad \left \lfloor \dfrac{1}{7}\right \rfloor = 0$ So $7^{12} \mid 77!$ and $7^{13} \not \mid 77!$ It follows that $77^7 = 7^{7} 11^7 \mid 77!$. Added 3/9/2018 The numbers are small enough that we can show this directly Multiples of powers of $7$ between $1$ and $77$ \begin{array}{|r|ccccccccccc|} \hline \text{multiple} & 7 & 14 & 21 & 28 & 35 & 42 & 49 & 56 & 63 & 70 & 77 \\ \hline \text{power} & 7 & 7 & 7 & 7 & 7 & 7 & 7^2 & 7 & 7 & 7 & 7 \\ \hline \end{array} So $7^{12} \mid 77!$. Multiples of powers of $11$ between $1$ and $77$ \begin{array}{|r|ccccccc|} \hline \text{multiple} & 11 & 22 & 33 & 44 & 55 & 66 & 77\\ \hline \text{power} & 11 & 11 & 11 & 11 & 11 & 11 & 11 \\ \hline \end{array} So $11^7 \mid 77!$. Hence $77^7 \mid 7^{12}11^7 \mid 77!$.
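The formula is easy to mechanize; here is a tiny sketch of mine (the function name is my own) that reproduces the exponents above:

    def legendre(n, p):
        # exponent of the prime p in n!, via repeated floor division
        e = 0
        while n:
            n //= p
            e += n
        return e

    print(legendre(77, 7), legendre(77, 11))   # prints: 12 7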
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 1 }
How can I find all solutions of the ODE $(y')^{2}+y^{2}=4$? I want to find all solutions of this ordinary differential equation: $$ (y')^{2}+y^{2}=4 $$ but I don't know how. Is it impossible to solve by the series method or the Laplace transform?
$$y'(x)^2+y(x)^2=4\Longleftrightarrow$$ $$y'(x)=\pm\sqrt{4-y(x)^2}\Longleftrightarrow$$ $$\frac{y'(x)}{\sqrt{4-y(x)^2}}=\pm1\Longleftrightarrow$$ $$\int\frac{y'(x)}{\sqrt{4-y(x)^2}}\space\text{d}x=\int\pm1\space\text{d}x\Longleftrightarrow$$ $$\int\frac{y'(x)}{\sqrt{4-y(x)^2}}\space\text{d}x=\text{C}\pm x\Longleftrightarrow$$ For the integral on the LHS: Substitute $u=y(x)$ and $\text{d}u=y'(x)\space\text{d}x$: $$\int\frac{1}{\sqrt{4-u^2}}\space\text{d}u=\text{C}\pm x\Longleftrightarrow$$ $$\frac{1}{2}\int\frac{1}{\sqrt{1-\frac{u^2}{4}}}\space\text{d}u=\text{C}\pm x\Longleftrightarrow$$ Substitute $s=\frac{u}{2}$ and $\text{d}s=\frac{1}{2}\space\text{d}u$: $$\int\frac{1}{\sqrt{1-s^2}}\space\text{d}s=\text{C}\pm x\Longleftrightarrow$$ Now, use: $$\int\frac{1}{\sqrt{1-x^2}}\space\text{d}x=\arcsin(x)+\text{C}$$ $$\arcsin\left(s\right)=\text{C}\pm x\Longleftrightarrow\arcsin\left(\frac{u}{2}\right)=\text{C}\pm x\Longleftrightarrow\arcsin\left(\frac{y(x)}{2}\right)=\text{C}\pm x$$ Hence $y(x)=2\sin\left(\text{C}\pm x\right)$. One caveat: dividing by $\sqrt{4-y(x)^2}$ assumed $y\neq\pm2$, and the constant functions $y\equiv 2$ and $y\equiv -2$ also satisfy the equation, so they must be included among the solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
What is the general formula for these matrix coefficients? Question I devised an interesting math puzzle for myself but couldn't deduce any solution: Given: $$AB=BA=A+B$$ $$ (AB)^n = \sum_{j=1}^n a_j A^j + b_j B^j$$ It's obvious $a_j=b_j$ but what is the general formula for any given $n$? $$a_j=b_j = ?$$ For Example $$ (AB)^2= A^2 + 2A + 2B + B^2 $$ or: $$ (AB)^3= A^3 + 3A^2 + 6A + 6B + 3B^2 + B^3$$
Let's take a more general approach by evaluating $A^m B^n$. Let $$A^m B^n=\sum_{i=1}^m {f_{i}(m,n)A^i}+\sum_{j=1}^n {g_{j}(m,n)B^j}$$ It can be seen that $f_{1}(1,n)=g_{i}(1,n)=1$ for $1\le i\le n$ By considering $A^{m+1}B^n$, one can show that: $$f_{1}(m+1,n)=\sum_{i=1}^n {g_{i}(m,n)}$$ $$g_{i}( m+1,n)=\sum_{j=i}^n {g_{j}(m,n)}$$ Also notice that $f_{i}(m,n)=f_{1}(m-i+1,n)$ Then by induction, $$f_{i}(m,n)=\binom{m+n-i-1}{m-i}, g_{i}(m,n)=\binom{m+n-i-1}{m-1}.$$ The answer can then be deduced by setting $m=n$.
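As a numerical sanity check of these binomial coefficients (entirely my own sketch: the concrete pair $B=A(A-I)^{-1}$ is one convenient way to satisfy $AB=BA=A+B$, and all names below are mine):

    import numpy as np
    from math import comb

    A = np.diag([2.0, 3.0, 5.0])
    B = A @ np.linalg.inv(A - np.eye(3))       # then AB = BA = A + B
    assert np.allclose(A @ B, A + B) and np.allclose(A @ B, B @ A)

    def rhs(m, n):
        # sum of f_i(m,n) A^i + g_j(m,n) B^j with the coefficients above
        s = sum(comb(m + n - i - 1, m - i) * np.linalg.matrix_power(A, i)
                for i in range(1, m + 1))
        return s + sum(comb(m + n - j - 1, m - 1) * np.linalg.matrix_power(B, j)
                       for j in range(1, n + 1))

    for m in range(1, 6):
        for n in range(1, 6):
            lhs = np.linalg.matrix_power(A, m) @ np.linalg.matrix_power(B, n)
            assert np.allclose(lhs, rhs(m, n))

In particular, setting $m=n=3$ gives $(AB)^3 = A^3+3A^2+6A+6B+3B^2+B^3$.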
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Faster way to find Taylor series I'm trying to figure out if there is a better way to teach the following Taylor series problem. I can do the problem myself, but my solution doesn't seem very nice! Let's say I want to find the first $n$ terms (small $n$ - say 3 or 4) in the Taylor series for $$ f(z) = \frac{1}{1+z^2} $$ around $z_0 = 2$ (or more generally around any $z_0\neq 0$, to make it interesting!) Obviously, two methods that come to mind are 1) computing the derivatives $f^{(n)}(z_0)$, which quickly turns into a bit of a mess, and 2) making a change of variables $w = z-z_0$, then computing the power series expansion for $$ g(w) = \frac{1}{1+(w+z_0)^2} $$ and trying to simplify it, which also turns into a bit of a mess. Neither approach seems particularly rapid or elegant. Any thoughts?
For this particular problem, try a different substitution: $x=z^2$. Then $$ \frac1{1+x} = \sum (-1)^nx^n$$ so $$ \frac1{1+z^2} = \sum (-1)^nz^{2n}$$ The problem of finding a closed form is not always easy. If you can find a closed form for the coefficient of $z^k$ in $$ \frac{1}{(1-z)(1-z^2)(1-z^3)(1-z^4)\cdots} $$ tell me about it so I can steal your result, publish it, and become famous. (LOL - this will be a closed form for the partition number of $k$)
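If one just wants the first few terms about $z_0=2$ to check hand work, a CAS will do it; a one-line sketch of mine:

    import sympy as sp

    z = sp.symbols('z')
    print(sp.series(1/(1 + z**2), z, 2, 3))
    # first terms: 1/5 - 4*(z - 2)/25 + 11*(z - 2)**2/125 + O((z - 2)**3)

(The printed form may differ slightly between SymPy versions, but the coefficients $\tfrac15$, $-\tfrac4{25}$, $\tfrac{11}{125}$ match a direct computation of $f(2)$, $f'(2)$, $f''(2)/2$.)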
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
How are proofs formatted when the answer is a counterexample? Suppose it is asked: Prove or find a counterexample: the sum of two integers is odd The fact that 1 + 1 = 2 is a counterexample that disproves that statement. What is the proper format in which to write this? I will provide my attempt. Theorem: the sum of two integers is odd Proof: 1 + 1 = 2 => the sum of two integers may be even I understand the math, but I'm not sure how to properly present a counterexample relative to an initial theorem. Was this an acceptable presentation of a proof?
Cite the counterexample. Since $1 + 1 = 2$, the sum of two arbitrary integers is not always odd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 15, "answer_id": 0 }
Exponentiation with negative base and properties I was working on some exponentiation, mostly with rational bases and exponents. And I got stuck with something that looks so simple: $(-2)^{\frac{1}{2}}$ I know this must be $\sqrt{-2}$, and therefore must be an imaginary number. However, when I applied some properties I got something unexpected, and I don't know what I did wrong: $(-2)^{\frac{1}{2}}=(-2)^{2\cdot\frac{1}{4}}=[(-2)^2]^\frac{1}{4}=4^{\frac{1}{4}}=\sqrt2$ I know the above is wrong, but I don't know exactly what is wrong. My initial suspicion was the step from the 2nd to the 3rd expression, so I checked the property ($x^{mn}=(x^m)^n$), and I realized that it is only true for integer exponents. I dug a little more, and found the following passage on Wikipedia: "Care needs to be taken when applying the power identities with negative nth roots. For instance, $−27=(−27)^{((2/3)⋅(3/2))}=((−27)^{2/3})^{3/2}=9^{3/2}=27$ is clearly wrong. The problem here occurs in taking the positive square root rather than the negative one at the last step, but in general the same sorts of problems occur as described for complex numbers in the section § Failure of power and logarithm identities." But could anyone clarify the explanation here? If we simply follow the order of operations, don't we really get 27?
Note that $4^{\frac {1}{4}}$ has four complex values: $\sqrt {2},-\sqrt {2},i\sqrt {2},-i\sqrt {2}$ (the intended value $i\sqrt2=\sqrt{-2}$ is among them). When you square a number or an equation, you enlarge the set of solutions, and taking the real positive fourth root afterwards then picks out the wrong one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The map $f:\mathbb{Z}_3 \to \mathbb{Z}_6$ given by $f(x + 3\mathbb{Z}) = x + 6\mathbb{Z}$ is not well-defined By naming an equivalence class in the domain that is assigned at least two different values, prove that the following is not a well-defined function. $$f : \Bbb Z_{3} \to \Bbb Z_{6} \;\;\;\text{ given by } f(\overline x) = [x] $$ In this case we represent an element of the domain as $\bar x$ and use the notation $[x]$ for equivalence classes in the co-domain. $f(\overline0) = [0]$: in $\Bbb Z_{3}$ (numbers $3x+0$), $\overline 0 = \{ ...,-6,-3,0,3,6,... \}$; in $\Bbb Z_{6}$ (numbers $6x+0$), $\overline0 =\{ ...,-12,-6,0,6,12,...\}$. $f(\overline1) = [1]$: $(3x+1)$ gives $\overline 1 = \{ ...,-5,-2,1,4,7,... \}$, and $(6x+1)$ gives $\overline1 =\{...,-11,-5,1,7,13,...\}$. $f(\overline2) = [2]$: $\overline 2 = \{ ...,-4,-1,2,5,8,... \}$ and $\overline 2 = \{ ...,-10,-4,2,8,14,...\}$. $f(\overline3) = [3]$: $\overline 3 = \{ ...,-9,-3,3,9,15,... \}$. $f(\overline4) = [4]$: $\overline 4 = \{ ...,-8,-2,4,10,16,... \}$. $f(\overline5) = [5]$: $\overline 5 = \{ ...,-7,-1,5,11,17,... \}$. $f(\overline6) = [6]$. So my main question for this problem is how to find out that this is not a function. From the information I have gathered here I still cannot see why this is not a function; any help on showing how this is not a function would be much appreciated. The set of equivalence classes for the relation $\cong_{m}$ is denoted $\Bbb Z_{m}$. The $3x+0$ and $6x+0$ are just showing how I got $\overline 0$.
It is easy to show that $\mathbb Z_3=\left\{\begin{array}{l} \{...,-6,-3,0,3,6,...\},\\ \{...,-5,-2,1,4,7,...\},\\ \{...,-4,-1,2,5,8,...\}\end{array}\right\}$ and that $\mathbb Z_6=\left\{\begin{array}{l} \{...,-12,-6,0,6,12,...\},\\ \{...,-11,-5,1,7,13,...\},\\ \{...,-10,-4,2,8,14,...\},\\ \{...,-9,-3,3,9,15,...\},\\ \{...,-8,-2,4,10,16,...\},\\ \{...,-7,-1,5,11,17,...\}\end{array}\right\}$, because $\mathbb Z_3$ and $\mathbb Z_6$ are sets of equivalence classes. Let the relation $f$ be represented by a diagram (diagram omitted). Here, the relation lines show that $f(\bar1)=[1]$, $f(\bar2)=[2]$, and $f(\bar3)=[3]$. But consider that $\bar0\cong_3\bar3$, which means they are the same element of $\mathbb Z_3$ (remember, $\mathbb Z_3$ is a set of equivalence classes). The rule then assigns this single class two values: the representative $0$ gives $f(\bar0)=[0]$, while the representative $3$ of the same class gives $f(\bar3)=[3]\neq[0]$. Putting this into the diagram (omitted) shows that $f$ is clearly not a function, because a function maps each element of its domain to exactly $1$ element of its co-domain. $Q.E.D.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 2 }
Proof: A function is convex iff it is convex when restricted to any line. Let $f\colon \Bbb R^n\to \Bbb R$ be a function. Then, $f$ is convex if and only if the function $g\colon\mathbb{R} \to \mathbb{R}$ defined as $g(t) \triangleq f(x+tv)$, with domain $$ \operatorname{dom}(g)=\{t\mid x+tv \in \operatorname{dom}(f), x\in \operatorname{dom}(f), v\in \mathbb{R}^n\}, $$ is convex in $t$. It says that a function is convex if and only if it is convex when restricted to any line that intersects its domain. My question is how to prove it. Reference: Stephen Boyd, Convex Optimization, Cambridge University Press, page 67.
The "$\Rightarrow$" part is easy. The other direction can be proven by contradiction: Assume that $f$ is not convex. Then, $\operatorname{dom}(f)$ is not convex or there exist $x,y \in \operatorname{dom}(f)$ and $\lambda \in (0,1)$ with $f( \lambda \, x + (1-\lambda) \, y ) > \lambda \, f(x) + (1-\lambda) \, f(y)$. * *If $\operatorname{dom}(f)$ is not convex, there exist $x,y \in \operatorname{dom}(f)$ and $\lambda \in (0,1)$ with $\lambda \, x + (1-\lambda) \, y \not\in \operatorname{dom}(f)$. Now, set $v = y - x$ and use the $g$ as in the statement of the theorem. Then, it is easy to check that $\operatorname{dom}(g)$ is not convex, since it contains $0$ and $1$, but not $1-\lambda$. *In the other case, you can define $g$ in the same way and get $g(1-\lambda) > \lambda \, g(0) + (1-\lambda) \, g(1)$. In both cases, we obtained a contradiction to the convexity of $g$.
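To see the reduction in action numerically (a purely illustrative sketch of mine, with an arbitrary convex $f$ and my own names): restrict to a random line and test the one-dimensional convexity inequality:

    import numpy as np

    def f(p):                          # an example convex function on R^2
        return float(p @ p)

    rng = np.random.default_rng(0)
    for _ in range(1000):
        x, v = rng.normal(size=2), rng.normal(size=2)
        t1, t2, lam = rng.normal(), rng.normal(), rng.uniform()
        g = lambda t: f(x + t * v)     # restriction of f to the line x + tv
        assert g(lam*t1 + (1-lam)*t2) <= lam*g(t1) + (1-lam)*g(t2) + 1e-9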
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Show that $[L:K]=1 \Leftrightarrow L=K$ Let $L/K$ be a field extension. I want to show that $$[L:K]=1 \Leftrightarrow L=K$$ I have done the following: For the direction $\Rightarrow \ : $ Since $[L:K]=1=\text{dim}_KL$ we have that there exists $a\in L$ with $\langle a\rangle$ a $K$-basis of $L$. So, let $\ell\in L$; then $$\ell=ak, k\in K$$ To get the desired result, can we just take $a=1$? Could you give me a hint for the other direction?
To get the desired result, can we just take $a = 1$? Yes, you are using that in a one-dimensional vector space, any non-zero vector gives a basis. Can you give me a hint for the other direction? You have to show that $K$ is one-dimensional as a vector space over itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1852968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Determine number of real roots of an incomplete polynomial Let's say that I have an incomplete quartic equation with real coefficients, which is $$x^4 - 3x^3 + ... - 10 = 0$$ and I am also given 2 complex roots, $a + 2i$ and $1 + bi$, where $a$ and $b$ are real numbers. The problem asks for the sum of the real roots, but firstly I don't know how to determine if the equation even has a real root. I can't use the Rule of Signs because obviously the polynomial is incomplete. Although I can make a (heavy) assumption that $1 + bi$ is the conjugate of $a + 2i$ or not, that would still give me $0$ or $2$ real roots. How can I know if it has a real root or not? EDIT: I got the second term wrong, it should be $-3x^3$ and not $-3x^2$! I have edited the original equation. Sorry!
Edit : The OP changed $-3x^2$ to $-3x^3$. Let $\alpha,\beta,\gamma,\omega$ be the four roots. Then, by Vieta's formulas, $$\alpha+\beta+\gamma+\omega=3$$ $$\alpha\beta\gamma\omega=-10$$ Case 1 : If $(a,b)=(1,-2)$, then we may suppose that $\alpha=1+2i,\beta=1-2i$, so $$\gamma+\omega=1,\quad \gamma\omega=-2$$ So, $\gamma,\omega$ are the solutions of $x^2-x-2=(x-2)(x+1)=0$, i.e. $$\gamma,\omega=2,-1$$ giving that the sum of the real roots is $\color{red}{1}$. Case 2 : If $(a,b)\not=(1,-2)$, then there are no real roots, since the four roots are $a\pm 2i,1\pm bi$ where $b\not=0$. However, as Tobias Kildetoft comments, this does not happen because there are no real $a,b$ such that $(a^2+2^2)(1^2+b^2)=-10$.
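For reference (my own reconstruction, not part of the answer above): the full quartic consistent with the data is $$(x^2-2x+5)(x^2-x-2)=x^4-3x^3+5x^2-x-10,$$ whose roots are $1\pm 2i,\ 2,\ -1$; this confirms both Vieta relations ($3$ and $-10$) and the sum of real roots $1$.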
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Where is the fault in my proof? I had some spare time, so I was just doing random equations, then accidentally came up with a proof that showed that i was -1. I know this is wrong, but I can't find where I went wrong. Could someone point out where a mathematical error was made? $$(-1)^{2.5}=-1\\ (-1)^{5/2}=-1\\ (\sqrt{-1})^5=-1\\ i^5=-1\\ i=-1$$
Your mistake is the step "$(-1)^{5/2} = -1$". It actually holds that $(-1)^{5/2} = i$ (taking principal values), since by Euler's identity $$(-1)^{5/2} = {e^{i\pi}}^{5/2} = e^{5/2 i\pi} = i.$$ Furthermore, you shouldn't write $\sqrt{-1} = i$, because the square root is not defined for negative values, and you can get all sorts of wrong proofs by using the rules for square roots in combination with this notation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
quadratic variation of brownian motion doesn't converge almost surely I just came across the following remark: If $(B_t)_{t\geq0}$ is a one dimensional Brownian motion and if we have a subdivision $0=t_0^n<...<t_{k_n}^n=t$ such that $\sup_{1\leq i\leq k_n}(t_i^n-t_{i-1}^n)$ converges to $0$ as $n$ converges to $\infty$, then $\lim_{n\to\infty}\sum_{i=1}^{k_n}(B_{t_i^n}-B_{t_{i-1}^n})^2=t$ in $L^2$. If $\sum_{n\geq 1}\sum_{i=1}^{k_n} (t_i^n-t_{i-1}^n)^2<\infty$, then we also have almost sure convergence. Now I'm trying to find a subdivision so that $\lim_{n\to\infty}\sum_{i=1}^{k_n}(B_{t_i^n}-B_{t_{i-1}^n})^2$ does not converge almost surely, and I'm kinda stuck. My idea was the following: First note that the subdivision needs to fulfill $\sum_{n\geq 1}\sum_{i=1}^{k_n} (t_i^n-t_{i-1}^n)^2=\infty$. Next I thought about using Borel-Cantelli, which says: If $\sum_{n=1}^{\infty} \mathbb{P}(E_n)=\infty$ and if the events $E_n$ are independent, we have that $\mathbb{P}(\limsup E_n)=1$. Now if we can create/formulate events $E_n$ s.t. $\mathbb{P}(E_n)=\sum_{i=1}^{k_n}(t_i^n-t_{i-1}^n)^2$ and such that $\limsup E_n$ captures the behaviour of $\lim_{n\to\infty}\sum_{i=1}^{k_n} (B_{t_i^n}-B_{t_{i-1}^n})^2$, then Borel-Cantelli would tell us that $\sum_{i=1}^{k_n} (B_{t_i^n}-B_{t_{i-1}^n})^2$ does not converge almost surely. My problem is that I can't see how $E_n$ needs to be defined in this case. (And I'm not sure whether this is a good approach...) Can anyone help me? Thanks in advance!
Consider the sequence of partitions $$\pi(n) = \bigcup_{i=0}^n \left\{\frac in t \right\},\ n\geqslant1 $$ of $[0,t]$, that is, $t_i^n = \frac int$ for $0\leqslant i\leqslant n$. Then $$\sup_{1\leqslant i\leqslant n}\left(t_i^n-t_{i-1}^n\right) = \frac1n\stackrel{n\to\infty}\longrightarrow0, $$ but $$\sum_{n=1}^\infty\frac1n=\infty. $$ From here we may use the second Borel-Cantelli lemma to show that $\sum_{i=1}^n \left(B_{\frac itn}-B_{\frac{i-1}tn}\right)^2$ does not converge to $t$ almost surely. However, this is somewhat of a moot point, considering that $L^2$ convergence implies convergence in probability, which in turn implies a.s. convergence along a subsequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Sensitivity analysis in linear programming Could someone please explain in detailed steps how to apply a sensitivity analysis to such a problem: $$maximize \ \ 2x_1 + 3x_2 \\ s.t. \ \ 4x_1+3x_2≤600 \\ 2x_1+2x_2≤320 \\ 3x_1+7x_2≤840 \\ x_i≥0$$ The goal is to determine the boundaries of $x_2$.
I am not particularly keen to do your homework for you "in detail". But here is a starting point. The red line is a contour of $2x+3y$. The blue line is $4x+3y=600$; the orange line is $x+y=160$; the green line is $3x+7y=840$. The other lines are $x\ge0,y\ge0$. The first question to settle is obviously where the allowed area is. It is in fact the area "below" the green, orange and blue lines but in the first quadrant. So the allowed area is the irregular pentagonal area shown in the plot. Ignoring sensitivities we would maximise $2x+3y$ at the intersection of the green and orange lines, because we want the red line to be as far "upwards" as possible (whilst keeping its slope fixed). You say nothing in the question about what kind of sensitivity analysis you want. There are a whole range of things you can change. You can "change the RHS", meaning that you shift the constraint lines, whilst keeping their slopes unchanged. Here you can see that the blue line is a "non-binding" constraint - shifting it will not change the optimisation for unless you shift it further to the left than the intersection of the green and orange lines. On the other hand, the green and orange constraints are "binding" - any shift however small will shift the optimisation. Another sensitivity is to "change the objective function coefficients". In other words you change the coefficients 2, 3 in $2x+3y$. That has the effect of changing the slope of the red line and hence the optimisation. Finally, of course, you can change the coefficients in the constraints, thus changing the slopes of the blue, green, orange lines. Again small changes to the blue line will make no difference, but small changes to the orange and green lines will.
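If you want to experiment with these sensitivities numerically, an LP solver makes it painless; a minimal sketch of mine (SciPy's linprog minimizes, hence the sign flip):

    from scipy.optimize import linprog

    res = linprog(c=[-2, -3],                      # maximize 2x1 + 3x2
                  A_ub=[[4, 3], [2, 2], [3, 7]],
                  b_ub=[600, 320, 840],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                         # (70, 90), objective 410

Re-running after perturbing entries of b_ub or c shows directly which constraints are binding: small changes to the first (blue) constraint leave the optimum untouched, while the other two move it.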
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Trigonometric identities: $ \frac{1+\cos(a)}{1-\cos(a)} + \frac{1-\cos(a)}{1+\cos(a)} = 2+4\cot^2(a)$ I don't really know how to begin, so if I'm missing some information please let me know what it is and I'll fill you guys in :). This is the question I can't solve: $$ \frac{1+\cos(a)}{1-\cos(a)} + \frac{1-\cos(a)}{1+\cos(a)} = 2+4\cot^2(a) $$ I need to prove their trigonometric identities. I have the $5$ basic set of rules, I could write them all here but I suppose it's not needed, if it is please let me know since it's not gonna be simple to type. I have over $40$ questions like these and I just couldn't seem to understand how to prove them equal, my best was $4 \cot^2(a) = 2 + 4 \cot^2(a)$ Thanks for everything!
Put $t=\tan (\frac a2)$.You have $$\frac{1+\cos(a)}{1-\cos(a)}=\frac{1+\frac{1-t^2}{1+t^2}}{1-\frac{1-t^2}{1+t^2}}=\frac{1}{t^2}$$ Besides $\cot(a)=\frac{1-t^2}{2t}$ so we have to prove $$\frac{1}{t^2}+t^2=2+4\left(\frac{1-t^2}{2t}\right)^2$$ Immediate calculation gives $$\frac{1+t^4}{t^2}=\frac{1+t^4}{t^2}$$ which is obviously true.
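For comparison, here is a more direct route (my own sketch) that avoids the half-angle substitution: put both fractions over the common denominator $(1-\cos a)(1+\cos a)=\sin^2 a$, so $$\frac{(1+\cos a)^2+(1-\cos a)^2}{\sin^2 a}=\frac{2+2\cos^2 a}{\sin^2 a}=\frac{2}{\sin^2 a}+2\cot^2 a=2\left(1+\cot^2 a\right)+2\cot^2 a=2+4\cot^2 a.$$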
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
$x(a^{1/x}-1)$ is decreasing Prove that $f(x)=x(a^{1/x}-1)$ is decreasing on the positive $x$ axis for $a\geq 0$. My Try: I wanted to prove the first derivative is negative. $\displaystyle f'(x)=-\frac{1}{x}a^{1/x}\ln a+a^{1/x}-1$. But it was very difficult to show this is negative. Any suggestion please.
Using simple algebra it is possible to prove that $g(x) = f(1/x) = \dfrac{a^{x} - 1}{x}$ is strictly increasing for $x > 0, a > 0, a \neq 1$ and $x$ being rational. The extension to irrational values of $x$ is easily done by considering sequences of rationals converging to $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Hypergeometric Random Variable Expectation In a binomial experiment we know that every trial is is independent and that the probability of success, $p$ is the same in every trial. This also means that the expected value of any individual trial is $p$. So if we have a sample of size $n$, by the linearity property of the expectation, the expected value of the same is just $n \cdot p$. This is all intuitive. When the population size is finite and when we don't replace the items after every trial, we can't use the binomial distribution to get the probability of $k$ successes in a sample of size $n$ where the population is of size $N$ and the number of successes is $R$ simply because the probability of obtaining a success after every trial changes as the $R$ or/and $N$ change(s). So far so good. Yet when they calculate the expected value of the hypergeometric random variable, it is $(n \cdot R/N)$. This seems to me as the same as saying the probability of obtaining a success in every trial is the same ($R/N$) which is not intuitive at all because I should at least be expecting to see $N$ reducing by $1$ after every trial. I know that there's a flaw in my thinking. Can someone help point it out ? Edit: I think I'm going to give up on understanding why the expected value of the hypergeometric random variable (HRV) is at it is. None of the answers have alleviated my confusion. I don't think I've made my confusion clear enough. My problem is I'm going about the process of finding the expected value of the HRV in the same way as that of the binomial random variable (BRV). In the BRV's case, if the sample is of size $n$ and we consider each item in the sample as random variable of its own, then $X = X_1+X_2+\cdots+X_n$. To find the $E[X]$, we simply add the $X_i$. Since an item is returned after it is checked, the probability of success does not change. In the case of the HRV, I should expect the probability of success to change because it is not returned back to the population. However, this doesn't seem to be the case. This is my problem.
As others have pointed out, the probability of a red ball at each of your $n$ draw actually is $R/N$. They are just correlated. You can also compute this expectation directly from the identity $$ \sum_{r=0}^n r\binom{R}{r}\binom{N-R}{n-r} = R\binom{N-1}{n-1} $$ To see this, the rhs counts the number of ways to pick a red ball and then $n-1$ other balls of any colour. The lhs counts the number of ways to pick $r$ red balls, $n-r$ white balls, and then pick a red ball. These are the same. Since $$ N\binom{N-1}{n-1} = n\binom{N}{n} $$ by a similar argument, the expectation you want is $$ \frac{R\binom{N-1}{n-1}}{\binom{N}{n}} = n\frac{R}{N} $$
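A quick Monte Carlo check of $E[X]=nR/N$ (my own sketch; the parameter values are arbitrary):

    import random

    N, R, n, trials = 20, 8, 5, 200_000
    population = [1] * R + [0] * (N - R)           # 1 marks a "success"
    mean = sum(sum(random.sample(population, n))   # sample without replacement
               for _ in range(trials)) / trials
    print(mean, n * R / N)                         # both close to 2.0

The per-draw success probabilities are identical (each equal to $R/N$) even though the draws are correlated, which is exactly why linearity of expectation still applies.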
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 4 }
How to solve system of differential equations? I would like to solve a system of differential equations \begin{align*} &x''(t) = -a_0(a_1 - bz'(t))\cos(wt), &&x(t_o)= 0, &&x'(t_o)=0\\ &z''(t)= -a_0 bx'(t)\cos(wt), &&z(t_o) =0, &&z'(t_o)= 0 \end{align*} It reduces to a third order equation $z'''(t) = a(1-cz'(t))\cos^2(wt)-\tan(wt) dz''(t), z(t_o)=0,z'(t_o)= 0$ I tried Mathematica and MATLAB but they do not want to return an analytical solution. This is a free electron in an electromagnetic field. If the field is complex, Mathematica finds some solution, but the answer is also complex. Is it possible to get a real Z for the cos(wt) field out of it? eqns = {z'''[t] == a*Exp[I*t*2*w]*(1 - z'[t]*c) + z''[t]*(I*w), z[t0] == 0, z'[t0] == 0}; soln = Real[DSolve[eqns, z, t][[1]]]
$$x''(t) = -a_0(a_1 - bz'(t))\cos(wt),\ \ \ \ x(t_o)= 0,\ \ \ \ \ x'(t_o)=0$$ Let $x'(t)=y(t)$, $z'(t)=\beta(t)$; then we get $$y'(t)=-a_0(a_1 - b\beta(t))\cos(wt)$$ and the second equation converts to $$\beta'(t)= -a_0b y(t)\cos(wt) $$ I think it can be solved now. Solving, and referring to results from the work of @okrzysik (Solution of a system of linear odes), $$\beta(t) = A \cos\left(\frac{c \sin(\omega t)}{\omega}\right) + B \sin\left(\frac{c \sin(\omega t)}{\omega}\right) + \frac{d}{c^2} $$ and $$y'(t)=(-a_0a_1\cos(wt)+a_0b A \cos\left(\frac{c \sin(\omega t)}{\omega}\right) \cos(wt)+ a_0Bb \sin\left(\frac{c \sin(\omega t)}{\omega}\right)\cos(wt) +a_0 \frac{bd}{c^2}\cos(wt))$$ On integrating (substituting $u=\frac{c\sin(wt)}{w}$, $\text{d}u=c\cos(wt)\,\text{d}t$ for the middle terms), $$y(t)=\frac{1}{w}(a_0 \frac{bd}{c^2}-a_0a_1)\sin(wt)+\frac{a_0bA}{c}\sin\left(\frac{c\sin(wt)}{w}\right)-\frac{a_0Bb}{c}\cos\left(\frac{c\sin(wt)}{w}\right)+K$$ Now $z(t),x(t)$ can be calculated from $\beta(t),y(t)$: $$z(t) = A \int \cos\left(\frac{c \sin(\omega t)}{\omega}\right) dt+ B \int\sin\left(\frac{c \sin(\omega t)}{\omega}\right)dt +\int \frac{d}{c^2} dt$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$x^2-x+1$ has a root $\!\bmod p\,$ for infinitely many primes $p$ Prove that the equation $$x^2 - x + 1 = p(x+y)$$ has integral solutions for infinitely many primes $p$. First, we prove that there is a solution for at least one prime, $p$. Now, $x(x-1) + 1$ is always odd so there is no solution for $p=2$. We prove there is a solution for $p=3$. If $p=3$, $y = (x-2)^2/3-1$. We get integral solutions whenever $x = 3m +2$, where $m$ is any integer. We provide a proof by contradiction which is similar to Euclid's proof of there being infinitely many primes. Let us assume that it is true for only finitely many primes, and name the largest prime for which the equation is true $P$. We set $$x = 2\cdot3\cdot5\cdots P,$$ the product of all primes up to $P$. Then, the term $x(x-1) + 1$ is either prime or composite. If it is prime, then we set $p = x(x-1) + 1, y = 1 - x$ and get a solution. If it is composite, we write $x(x-1) + 1 = p\times q$, where $p$ is any prime factor of $x(x-1)+1$, and $q$ is an integer, $q = (x(x-1)+1)/p$. Now, $x(x-1) + 1$ is not divisible by any prime up to $P$ since it leaves a remainder of $1$ with all of them. So, $p > P$. We set $y=q-x$ for a solution. In either case, we get a solution for a prime $p > P$, which means there's no largest prime for which this equation has solutions. This contradicts the assumption that there are solutions for only finitely many primes. I feel like I'm missing some step. Is this correct?
This answer uses Quadratic Reciprocity (QR). $$x^2 - x + 1 = p(x+y)$$ $$\iff x^2-x+1-px=py$$ for a fixed $x\in\mathbb Z$ and prime $p$ has a solution $y\in\mathbb Z$ if and only if $p\mid x^2-x+1-px$, i.e. if and only if $p\mid x^2-x+1$. Your problem is equivalent to proving that $p\mid x^2-x+1$ has a solution $x\in\mathbb Z$ for infinitely many primes $p$. Let $p\ge 5$. $$\exists x\in\mathbb Z\left(x^2-x+1\equiv 0\pmod{p}\right)$$ $$\stackrel{\cdot 4}\iff \exists x\in\mathbb Z\left((2x-1)^2\equiv -3\pmod{p}\right)$$ $$\stackrel{(*)}\iff \left(\frac{-3}{p}\right)=1$$ $$\stackrel{(\text{QR})}\iff p\equiv 1\pmod{3}$$ $(*)$: I'll explain a few things. Firstly, I'm using the Legendre Symbol. Secondly, the equivalence holds because if $t^2\equiv -3\pmod{p}$ for some $t\in\mathbb Z$ (i.e. if $-3$ is a quadratic residue mod $p$), then the congruence $t\equiv 2x-1\pmod{p}$ has the solution $x\equiv 2^{-1}(t+1)\pmod{p}$. Therefore your equation has a solution $(x,y)\in\mathbb Z^2$ if and only if either $p=3$ or $p\equiv 1\pmod{3}$ ($p=3$ gives a solution $(x,y)=(2,-1)$ and you can find that $p=2$ gives no solutions, because $x^2-x=x(x-1)$ is always even). There are infinitely many primes $p\equiv 1\pmod{3}$. It follows from the more general Dirichlet's Theorem or from a simpler proof that only uses QR, e.g. the one given in the last page of this paper.
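A small empirical confirmation of the $p\equiv 1\pmod 3$ criterion (my own sketch):

    # primes p <= 60 for which x^2 - x + 1 has a root mod p
    for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]:
        if any((x * x - x + 1) % p == 0 for x in range(p)):
            print(p, p % 3)     # prints 3, plus exactly the primes = 1 mod 3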
{ "language": "en", "url": "https://math.stackexchange.com/questions/1853846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Finding percentage of a dollar amount I'm working with a website that can be used to pay contractors on my behalf, instead of requiring them to submit to me their W9 for taxes. The website takes $2.75\%$ in processing fees. If I'm paying someone $\$22$ per hour, and the website requires $2.75\%$, I believe that would be $\$0.60$ of each hour that would be paid to the website. That would mean if I still wanted to pay the developer $\$22$/hr including the fees, I would effectively be paying him $\$21.40$ per hour. My problem is with checking my math. I was trying to figure out how to take the $\$21.40$ and multiply it some value to reach the $\$22$, but I don't know how to do that. What value times $\$21.40$ equals $\$22$? [I also could not figure out why the dollar sign caused the post to lose its formatting so surrounded it in preformatted tags.]
If the processing fee is $2.75\%$ of the amount processed, and you want to have $\$22$ after the fee is taken out, then you have the following equation: $$x-x\times2.75\%=22,$$ where $x$ is the initial amount (i.e. before the processing fee is taken). Read the equation as: * *From the initial amount $x$ *take out $2.75\%$ of the initial amount $x$, *and that should be equal to $22$. Factorization of the left-hand side (further referred to as LHS) gives $$x\left(1-2.75\%\right)=22$$ (multiply out to check); then notice that a percent is exactly one hundredth of the unity: $$x\left(1-2.75\frac1{100}\right)=22;$$ now rewrite the unity as $100/100$ and multiply the $2.75$ by the fraction, which in this case just moves $2.75$ into the numerator: $$x\left(\frac{100}{100}-\frac{2.75}{100}\right)=22;$$ denominators are now equal, so we can bring the numerators over one fraction bar: $$x\left(\frac{100-2.75}{100}\right)=22;$$ perform the subtraction in the numerator: $$x\frac{97.25}{100}=22;$$ divide both sides by the fraction: $$x\frac{97.25}{100}/\frac{97.25}{100}=22/\frac{97.25}{100};$$ that gets rid of the fraction on the LHS: $$x=22/\frac{97.25}{100};$$ division by a fraction is equivalent to multiplication by the same fraction but with numerator and denominator swapped: $$x=22\frac{100}{97.25};$$ multiply the integer $22$ by the fraction, which brings it into the numerator as a multiple: $$x=\frac{22\times100}{97.25};$$ perform the multiplication: $$x=\frac{2200}{97.25};$$ we have arrived at the desired answer; the fraction may be further simplified, or a decimal approximation up to four decimals after the decimal point may be obtained by division on a calculator: $$x\approx22.6221;$$ round the value up (need to explain why?), which gives $$22\text{ dollars and }63\text{ cents.}$$ P. S. Notice that this StackExchange site employs MathJax to enable $\LaTeX$ typesetting of mathematical formulae. For a basic tutorial and reference on the language, please refer to this link.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Real roots of $z^2+\alpha z + \beta=0$ Question:- If the equation $z^2+\alpha z + \beta=0$ has a real root, prove that $$(\alpha\bar{\beta}-\beta\bar{\alpha})(\bar{\alpha}-\alpha)=(\beta-\bar{\beta})^2$$ I tried goofing around with the discriminant but was unable to come up with anything good. Just a hint towards a solution might work.
Let the roots be $-x,-y$ with $x$ real. Then $\alpha=x+y$ and $\beta=xy$, hence $$ \alpha\bar{\beta}-\beta\bar{\alpha}=(x+y)x\bar{y}-xy(x+\bar{y})=x^2(\bar{y}-y) \\ (\bar{\alpha}-\alpha)=\bar{y}-y \\ (\beta-\bar{\beta})^2 =x^2 (y-\bar{y})^2 \\ $$ It is easy to see that the product of the first two is the last.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Concept of Trigonometric identities The value of the expression $$\dfrac{\sin x}{ \cos 3x} + \dfrac{\sin 3x}{ \cos 9x} + \dfrac{\sin 9x}{ \cos 27x}$$ in terms of $\tan x$ is My Approach: If I take the L.C.M. of this as $\cos 3x \cos 9x \cos 27x$ and multiply the numerators accordingly, then it gets very lengthy. Even if I use the identity for $\sin 3x$, I am still not getting an appropriate answer. Please suggest some nice and short way of doing this question.
$$ {\sin x \over \cos 3x}$$ $$= {\sin x \over \cos x(1 - 4\sin^2 x)}$$ $$= {\tan x \over (1 - 4\sin^2 x)}$$ Now you can get $$\sin^2 x = {\tan^2 x \over 1 + \tan^2 x}$$ from $$ \cot^2 x + 1= { 1\over \sin^2 x}$$ $$\therefore {\sin x \over \cos 3x} = {\tan x \over (1 - 4\sin^2 x)} = {\tan x (1 + \tan^2 x) \over (1 - 3\tan^2 x)}$$
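As for a "nice and short way" (my own observation, offered as a supplement): since $$\tan 3\theta-\tan\theta=\frac{\sin(3\theta-\theta)}{\cos 3\theta\cos\theta}=\frac{2\sin\theta\cos\theta}{\cos 3\theta\cos\theta}=\frac{2\sin\theta}{\cos 3\theta},$$ each term equals $\frac12(\tan 3\theta-\tan\theta)$ with $\theta=x,3x,9x$, and the sum telescopes: $$\frac{\sin x}{\cos 3x}+\frac{\sin 3x}{\cos 9x}+\frac{\sin 9x}{\cos 27x}=\frac{\tan 27x-\tan x}{2}.$$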
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What does it mean for a pdf to have this property? What does it mean for a probability density function $f(x)$ to have the following property? $$1+\int_{x=0}^{\infty}x^2 \left(\frac{f'(x)^2}{f(x)}-f''(x)\right)dx>0$$ I have tried a lot to simplify this condition and see what it means (in terms of moments of $f(x)$, etc), but no luck yet. Do you have any idea?
That can be written as $$ \int_{0}^{+\infty} x^2 \cdot\frac{d^2}{dx^2}\log(f(x))\cdot f(x)\,dx < 1 $$ that is a constraint that depends on minimizing a Kullback-Leibler divergence. It essentially gives that your distribution has to be close to a normal distribution (in the KL sense).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
General solution for $\frac{\mathrm{d}^2 y}{\mathrm{d} x^2} = y$? Start with $$\frac{\mathrm{d}^2 y}{\mathrm{d} x^2} = y$$ then $$\frac{1}{\mathrm{d} x} \, \mathrm{d} \left(\frac{\mathrm{d} y }{\mathrm{d} x}\right) = y$$ $$\frac{\mathrm{d} y}{\mathrm{d} x} \, \mathrm{d} \left(\frac{\mathrm{d} y }{\mathrm{d} x}\right) = y \, \mathrm{d} y$$ $$\frac{\mathrm{d} y}{\mathrm{d} x} = \sqrt{y^2 + c}$$ $$\int \frac{1}{\sqrt{y^2+c}} \mathrm{d} y = x + c_1$$ $$ \ln\left(\sqrt{y^2 +c_0} + y\right) = x + c_1$$ $$ \sqrt{y^2 +c_0} + y = e^{x + c_1}$$ This does look trigonometric and exponential as it should but I don't know how to proceed to simplify it. I actually know the actual solution but I want to prove it from first principles. This means I don't want to use guess and check methods because they can't handle solving problems in general and I don't want to assume the result in order to prove it.
For linear equations with constant coefficients, the "guess-and-check" method, which amounts to assuming $y=e^{\lambda x}$ and solving for $\lambda$, actually does generalize to all possibilities (provided that you can solve the necessary equation for $\lambda$, and with appropriate adjustment for duplicate roots). One can also prove using general linear ODE theory that it doesn't miss any solutions. That said, if you want to do it without this, one way is operator factorization: $y''-y=0$ is the same as $(D^2-1)y=0$ which is the same as $(D+1)(D-1)y=0$. You can do this by finding the general solution $z$ to $(D+1)y=0$ (i.e. $y'+y=0$) and then solving $(D-1)y=z$. Each of these can be solved using the integrating factor technique. (Of course in a way the integrating factor technique is itself guess-and-check...)
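Carrying out the factorization route explicitly (a sketch of the computation, nothing beyond what the answer describes): $(D+1)z=0$ gives $z=C_1e^{-x}$; then $(D-1)y=z$ reads $y'-y=C_1e^{-x}$, and multiplying by the integrating factor $e^{-x}$ gives $(e^{-x}y)'=C_1e^{-2x}$, so $e^{-x}y=-\tfrac{C_1}{2}e^{-2x}+C_2$, i.e. $$y=Ae^{-x}+Be^{x},\qquad A=-\tfrac{C_1}{2},\ B=C_2,$$ with no solutions lost along the way.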
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Writing domains: $∈$ or $⊆$? Usually when we write domains for functions (e.g. $f(x)=x^2$) in set notation, we would write something like this: $$D=\{x∈ℝ\}$$ This means that all values of x are part of the set of real numbers. However, would it not be more appropriate to write $$D=\{x⊆ℝ\}$$ or $$D=\{x⊂ℝ\}$$ Because the set of $x$ is a subset of the set of real numbers? Why do we write the domain the first way rather than the second or third ways?
When a set $D$ is specified or described by use of brace brackets, it means that the members of $D$ are all those and only those things that satisfy the conditions written between the brackets. So $D=\{x\in \mathbb R\}$ means that for any $x,$ we have $ x\in D\iff x\in \mathbb R$. Of course that means $D=\mathbb R$. And if we write $D=\{x\subset \mathbb R\}$ that means that $D$ is the set of all subsets of $\mathbb R.$ To say that every member of $D$ is a real number, write $D\subset \mathbb R$, which says that any member of $D$ is a member of $\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Sorting rows then sorting columns preserves the sorting of rows From Peter Winkler's book: Given a matrix, prove that after first sorting each row, then sorting each column, each row remains sorted. For example: starting with $$\begin{bmatrix} 1 & -3 & 2 \\ 0 & 1 & -5 \\ 4 & -1 & 1 \end{bmatrix}$$ Sorting each row individually and in ascending order gives $$\begin{bmatrix} -3 & 1 & 2 \\ -5 & 0 & 1 \\ -1 & 1 & 4 \end{bmatrix}$$ Then sorting each column individually in ascending order gives $$\begin{bmatrix} -5 & 0 & 1 \\ -3 & 1 & 2 \\ -1 & 1 & 4 \end{bmatrix}$$ And notice the rows are still individually sorted, in ascending order. I was trying to find a 'nice' proof that does not involve messy index comparisons... but I cannot find one!
Let's say that after sorting, the new matrix is $A$ with $m$ rows and $n$ columns. We have to prove that \begin{equation} A_{ij}\leq A_{ik},\, \forall j \leq k \end{equation} where $A_{xy}$ is the element in row $x$ and column $y$. There are at least $i$ elements in column $k$ (namely those in rows $1,\dots,i$, including $A_{ik}$ itself) that are less than or equal to $A_{ik}$. Before the columns were sorted, each of those entries shared a row with an entry of column $j$ that was at most as large, because the rows were sorted and $j\leq k$. Hence column $j$ contains at least $i$ elements that are less than or equal to $A_{ik}$, and so once column $j$ is sorted, \begin{equation} A_{ij} \leq A_{ik} \end{equation}
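An empirical check, not a proof (my own sketch): row-sort, then column-sort, and verify that every row is still sorted.

    import random

    for _ in range(1000):
        m, n = random.randint(1, 6), random.randint(1, 6)
        M = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]
        M = [sorted(row) for row in M]              # sort each row
        cols = [sorted(col) for col in zip(*M)]     # sort each column
        M = [list(row) for row in zip(*cols)]       # transpose back
        assert all(r[j] <= r[j + 1] for r in M for j in range(n - 1))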
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Subgroup of Order $n^2-1$ in Symmetric Group $S_n$ when $n=5, 11, 71$ $G$ is a subgroup of the symmetric group $S_n$ acting on $n$ objects, where $n=5, 11, 71$, and the order of $G$ is $n^2-1$. Question: * *Does $G$ (as defined above for $n=5, 11, 71$) exist? *How can I perform such a calculation? Is there an online system/website I can use? In this case, some introductory information will help.
This is only a partial attempt at tackling a more general underlying question. Note that if $n \ge 5$ is odd, then $n^{2} - 1$ divides $n!$. In fact $n^{2} - 1 = (n - 1) (n +1)$. Clearly $n-1$ divides $n!$. As to $n+1$, since $n$ is odd, $n+1$ is even, so $$ n+1 = 2 \cdot \frac{n+1}{2}. $$ Clearly also $2$ and $\frac{n+1}{2}$ divide $n!$. Now if $n \ge 5$, we have that $$ n - 1, 2, \frac{n+1}{2} $$ are distinct. Update 1 Direct computation with GAP shows that $S_{7}$ has three conjugacy classes of subgroups of order $48$. Update 2 Direct computation with GAP shows that $S_{9}$ has seven conjugacy classes of subgroups of order $80$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1854953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to find the union & intersection of two lines by their equations? I will try to be as clear as possible concerning my confusion, and I will use several examples. Case number 1. Assume two equations (in Cartesian form) of two planes: $2x+2y-5z+2=0$ and $x-y+z=0$ Now, we need to find their vectors. For the first one, we get: {1,-1,0}, {0,5/2,1} and {1,0,2/5}. For the second equation, we get: {1,1,0}, {0,1,1}, and {1,0,-1}. Now, I have a hard time understanding how to figure out their intersection and their union. Case 2: Assume one of the previous planes, $x-y+z=0$, and the line $x-y=0$. How do I find the intersection and union of these two?
First of all: $Ax+By+Cz+D=0$ is the equation of a plane. Case 1: The intersection of two planes is a line. To find the equation of that line you have to solve the system of equations: $$ 2x+2y-5z+2=0\\ x-y+z=0 \Rightarrow x=y-z \\ $$ If we substitute the second equation into the first we get $$ 2(y-z)+2y-5z+2=0 \Rightarrow 4y-7z+2=0 \Rightarrow y=\frac{7z-2}{4} $$ and for x $$ x=\frac{7z-2}{4}-z=\frac{3z-2}{4} $$ Now we have a parametric equation of the intersection: $$ x=-\frac{1}{2}+\frac{3}{4}z\\ y=-\frac{1}{2}+\frac{7}{4}z\\ z=0+1z $$ This is a line equation, which can be written in another form: $$ \frac{x+\frac{1}{2}}{\frac{3}{4}}=\frac{y+\frac{1}{2}}{\frac{7}{4}}=\frac{z}{1} $$ If we multiply the denominators by 4 we get the shorter equation $$ \frac{x+\frac{1}{2}}{3}=\frac{y+\frac{1}{2}}{7}=\frac{z}{4} $$ Case 2: $x-y=0$ is not a line equation - it is the plane equation $x-y+0z=0$. So this case is also an intersection of two planes, like case 1. Solution: $$ x-y+z=0 \Rightarrow x=y-z\\ x-y=0 \Rightarrow y-z-y=0 \Rightarrow z=0 $$ Using $x=y-z$ we have $x=y$. So, the intersection is a line whose equation is $$ x=0+1y\\ y=0+1y\\ z=0+0y $$ or in another form $$ \frac{x-0}{1}=\frac{y-0}{1}=\frac{z-0}{0} $$
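As a cross-check (my own remark): the direction vector of the intersection line is the cross product of the two normal vectors, $$(2,2,-5)\times(1,-1,1)=(-3,-7,-4)\parallel(3,7,4),$$ which matches the denominators in the symmetric form above.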
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sierpinski triangle formula: How to account for the $0^{th}$ power? The formula to count Sierpinski triangles is $3^{k-1}$. It works as long as you don't consider the case $k=0$. But how can you write a more precise formula that takes $k=0$ into account, which would give $3^{-1}$? Just to note, I did figure out the equation myself as I learned it to write a program, although the equation is available online. I am doing this purely for fun and out of curiosity; it is not a homework question. Add-on: I tried to draw a tree to find the relation, but the nodes only start to show a pattern from level 2.
As I understand it, you want a formula to count the number $n$ of triangles that remain at level $k$ in the standard trema construction of the Sierpinski triangle. If we say that level one is the initial triangle, then that leads to a sequence of images (omitted here) showing $1$, then $3$, then $9$ triangles, and so on. We can then clearly see your formula: $n=3^{k-1}$. (Recall that $3^0=1$, so that $k=1$ yields the correct result.) This depends, however, on where you choose to start counting. I would personally prefer to call the initial triangle level zero but, really, this is somewhat arbitrary. One other comment: I'm not sure why you're using an unbalanced binary tree to model this situation. It seems to me that a balanced ternary tree (where each node has three children) would be more natural.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving $b(5px - 3x) = a(qx - 4)$ for $x$, and stating any restrictions on the variables I am a high school student in Algebra II and while I normally have no trouble with problems dealing with algebraic equations, I simply cannot muster the answer to this question. Solve for $x$: $$b(5px - 3x) = a(qx - 4)$$ State any restrictions on the variables. I'll show you the process of deriving the solution that I have been taking, and if you could identify where I have been erroneous that would be greatly appreciated. \begin{align*} b(5px - 3x) &= a(qx - 4)\\ 5bpx - 3bx &= aqx - 4a \\ 5bpx &= aqx - 4a + 3bc \\ 5bpx - aqx &= -4a + 3bc\\ x(5bp - aq) &= -4a + 3bc\\ x &= \frac{-4a + 3bc}{5bp - aq}&&& (5bp ≠ aq) \end{align*}
Between the second and third lines, it looks like you just made a typo. You added $3bx$ to both sides, but then you wrote $3bc$ on the other side. The equation in the third line should have been $$ 5bpx = aqx - 4a + 3bx$$
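For completeness, a quick SymPy check of the corrected algebra (my addition; the closed form in the comment is what the computer algebra returns, not something stated above):

```python
# Solve b(5px - 3x) = a(qx - 4) for x symbolically.
from sympy import symbols, solve

x, a, b, p, q = symbols('x a b p q')
sol = solve(b*(5*p*x - 3*x) - a*(q*x - 4), x)
print(sol)  # an expression equivalent to x = -4a/(5bp - 3b - aq),
            # valid under the restriction 5bp - 3b - aq != 0
```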
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fundamental solution of linear system of ODEs I struggle to understand what the fundamental solution is supposed to be. Specifically it's about a linear system of homogeneous ODEs with constant coefficients of the form: $\dot{\textbf{F}}=\textbf{AF}$ where $\textbf{F},\dot{\textbf{F}}:\mathbb{R} \to \mathbb{R^n}, \textbf{A} \in \mathbb{R} ^{n \times n}$. In the lecture we were told $\Phi(t) = Exp(\textbf{A}t) : \mathbb{R} \to \mathbb{R} ^{n \times n} $ is the fundamental solution of the system, since symbolically $\Phi'(t)=\textbf{A}\Phi(t)$. With the definition I see this makes sense, but why is the solution a matrix instead of a vector? I really don't understand the relationship between the fundamental solution and $\textbf{F}$, nor what the fundamental solution even is. (Calculus in Engineering is sometimes a bit short with explanations...)
This is a subtle point of terminology. The solution to the first-order, linear, homogeneous system $\dot{f}(t) = A\,f(t)$ where $f,\dot{f} : \mathbb{R} \rightarrow\mathbb{R}^{n\times1}$ and $A\in\mathbb{R}^{n\times n}$ has the solution $$ f(t) = \sum_{k=1}^{n} c_k f_k(t) $$ where $c_k \in \mathbb{R}$ and $f_k: \mathbb{R} \rightarrow\mathbb{R}^{n\times1}$ $\forall k$. In other words, the solution to the above linear system of dimension $n$ is a linear combination of $n$ independent solutions (ideally). We can re-write the summation in linear algebra notation as $$ f(t) = \Phi(t)\,c $$ where $$ \Phi(t) = [f_1,f_2,\ldots,f_n]\quad\mbox{and}\quad c=[c_1,c_2,\ldots,c_n]^{\intercal}. $$ Therefore, just as $y = \exp(x)$ may be called a fundamental solution to $\dot{y} = y$ as it encompasses all of the solution's behavior up to a multiplicative scalar determined by initial conditions, the matrix of solutions $\Phi(t)$ is sometimes called the fundamental solution to the linear system $\dot{f}(t) = A\,f(t)$ as it encompasses all of the solutions' behaviors up to a multiplicative vector determined by initial conditions.
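If a numerical illustration helps, here is a short Python sketch (my addition) using SciPy's matrix exponential; the matrix $A$ and vector $c$ below are arbitrary examples:

```python
# Columns of Phi(t) = expm(A t) solve F' = A F; any solution is Phi(t) @ c.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
c = np.array([1.0, -1.0])          # fixed by the initial condition F(0) = c

Phi = lambda s: expm(A * s)
F = lambda s: Phi(s) @ c

# Finite-difference check that F'(t) = A F(t) at t = 0.5.
t, h = 0.5, 1e-6
lhs = (F(t + h) - F(t - h)) / (2 * h)
print(np.allclose(lhs, A @ F(t), atol=1e-4))  # True
```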
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Double integration over a general region $\iint x^2 +2y$ bounded by $y=x$, $y=x^3$, $x \geq 0$. This is either a type I or type II region; since the bounds are already nicely given for a type I, I integrated it as a type I. Finding the bounds: $x^3=x \to x^3-x=0 \to x(x^{2}-1)= 0 \to x=0, x=\pm1$. Since $-1\lt 0$ my bounds for $x$ are $[0,1]$, and since $x \gt x^3$ on this interval, $y=x$ is my upper bound for $dy$. $$\int_{0}^{1} \int_{x^3}^{x} (x^2+2y)\, dy\,dx$$ Evaluating the inner integral: $$x^2+y^2 \Big\vert_{y=x^3}^{y=x} = (2x^2) - (-x^2+x^6)$$ $$\int_{0}^{1} 3x^2+x^6 \,dx= x^{3}+\frac{1}{7}x^7 \Big\vert_{0}^{1} \to 1 -\frac{1}{7} =\frac{6}{7}$$
Note that $x^3\lt x$ in the interval $(0,1)$. (A picture always helps in this kind of problem.) So $y$ travels from $y=x^3$ to $y=x$. One can see without checking details that the answer $-\frac{4}{21}$ cannot be right. Your integrand is positive in the region, so the answer must be positive.
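For what it's worth, a symbolic evaluation settles the arithmetic (my addition; the value $23/84$ comes from this computation, not from the thread):

```python
# Exact evaluation of the double integral over the correct region.
from sympy import symbols, integrate

x, y = symbols('x y')
val = integrate(x**2 + 2*y, (y, x**3, x), (x, 0, 1))
print(val)  # 23/84, positive, consistent with the sign argument above
```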
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
compute marginal I have tried to solve this exercise: Let $X$ and $Y$ be random variables with joint probability density function given by: $f(x,y)=\frac{1}{8}(x^2-y^2)e^{-x}$ if $x>0$, $|y|<x$. Calculate $E(X\mid Y=1)$. So, is the marginal $f_Y(y)$ equal to $\int_y^\infty \frac{1}{8}(x^2-y^2)e^{-x} dx +\int_{-y}^\infty \frac{1}{8}(x^2-y^2)e^{-x} dx$? Is this correct?
$$ \text{What you need for the marginal is } \begin{cases} \displaystyle \int_y^\infty & \text{if } y\ge 0, \\[10pt] \displaystyle \int_{-y}^\infty & \text{if } y<0, \end{cases} $$ a case split, not the sum of the two integrals. Or you can just write it as $\displaystyle \int_{|y|}^\infty\!\!.~~$ At any rate, for $f_Y(1)$ you'd have $\displaystyle\int_1^\infty$.
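A quick numerical cross-check (my addition; the target value $7/2$ is my own computation, not stated in the thread):

```python
# E(X | Y=1) = int_1^inf x f(x,1) dx / int_1^inf f(x,1) dx.
from scipy.integrate import quad
import numpy as np

f = lambda x: (x**2 - 1) * np.exp(-x) / 8   # joint density along y = 1
num, _ = quad(lambda x: x * f(x), 1, np.inf)
den, _ = quad(f, 1, np.inf)
print(num / den)  # 3.5, i.e. E(X | Y=1) = 7/2
```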
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why the probability is $0$ but possible Suppose we pick a random number from the natural numbers. What is the probability that the number is $1$? When we state the probability we say it is $0$, but we normally say zero for impossible events, and this event is possible. I know that every number divided by infinity is zero, but we might still happen to pick $1$; what about that?
There are two issues here: Firstly, you haven't specified what probability distribution on the natural numbers we should assume. You probably mean one which is in some sense uniform: $Pr(0)=Pr(1)=Pr(2)=\cdots$ to infinity. However there is no such distribution, as explained here. The other issue is that "possible" isn't a very well-defined mathematical term. When we talk about probability formally, we usually talk about a space of outcomes $X$ (e.g. in this case that would be the natural numbers) and a probability measure $P$ which takes subsets of $X$ and gives the probability of the outcome being in that subset (there's also something else called a $\sigma$-algebra, but there's no need to add that extra confusion here). Probabilists then do not distinguish between probability measures that differ only on sets of zero probability. For example, the $P$ which uniformly picks a number in the interval $[0,1]$ and the $P'$ which uniformly picks a number in $[0,1]\setminus\{0.42\}$ are considered the same distribution, since in either case the outcome has a zero probability of being in the set $A=\{0.42\}$. However, in the first case you would say $A$ is "possible," while in the second case you would not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which one is bigger $\sqrt[1023]{1024}$ or $\sqrt[1024]{1023}$ Which one is bigger: $\sqrt[1023]{1024}$ or $\sqrt[1024]{1023}$? I am really stuck with this one. My friend says that it can be solved by $AM-GM$, but I didn't succeed. Any hints?
Raise both numbers to the power of $1023\cdot 1024$ to get $1024^{1024}$ and $1023^{1023}$. Which one looks bigger now? Alternatively, pick your favourite from among the two numbers $\sqrt[1023]{1023}$ or $\sqrt[1024]{1024}$, and compare each of the original two numbers to the one you picked.
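A one-line numeric confirmation, if you want one (my addition):

```python
# x^(1/n) > y^(1/m)  iff  log(x)/n > log(y)/m, since exp is increasing.
from math import log
print(log(1024)/1023, log(1023)/1024)  # the first is larger, so 1024^(1/1023) wins
```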
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Martingale Convergence Theorem I have a question regarding the MCT which I am stuck on. The question goes like this: Let $X_0 = 1$ and assume that $X_n$ is distributed uniformly on $(0,X_{n-1})$, and let $Y_n = 2^nX_n$. The questions are: a) Show that $\left( Y_n\right)$ converges to $0$ a.s. b) Is $Y_n$ uniformly integrable? (No!) Regarding the first part: I know that $Y_n$ is a non-negative martingale and therefore it converges a.s., with $E(Y_\infty)\le 1$ by Fatou's lemma. I want to show that $Y_n$ can be written as a product of i.i.d. random variables which are $U(0,2)$, and then I will finish my proof. How can I show this? After I solve (a), (b) will be very easy, since if $(Y_n)$ were UI then $E(Y_{\infty})=E(Y_0)=1$, but it is not! Thanks for the help
As you say, we can write $Y_n = U_1 \cdots U_n$ where $U_i$ are iid $U(0,2)$. That means $\ln Y_n = \sum_{i=1}^n \ln U_i$. Compute $E[\ln U_i]$ and note that it is negative. So by the strong law of large numbers, $\frac{1}{n} \ln Y_n \to E[\ln U_i] < 0$ a.s. This implies $\ln Y_n \to -\infty$ a.s. which is to say $Y_n \to 0$ a.s. More martingale-y solution: Since $Y_n$ is a nonnegative martingale, it converges almost surely to some random variable $Y_\infty$. Clearly $Y_\infty \ge 0$ and by Fatou's lemma $E[Y_\infty] \le 1$. Now let $U$ be a $U(0,2)$ random variable independent of everything in sight. Clearly $U Y_n$ has the same law as $Y_{n+1}$, so passing to the limit, $U Y_\infty$ has the same law as $Y_\infty$. By Jensen's inequality, $E[\sqrt{Y_\infty}] \le \sqrt{E[Y_\infty]} < \infty$. So we may write $E[\sqrt{Y_\infty}] = E[\sqrt{U Y_\infty}] = E[\sqrt{U}] E[\sqrt{Y_\infty}]$. Since $E[\sqrt{U}] \ne 1$ (Jensen or direct computation), we must have $E[\sqrt{Y_\infty}]=0$, which is to say $Y_\infty = 0$ a.s.
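A small simulation makes the dichotomy vivid (my sketch; the sample sizes are arbitrary):

```python
# Y_n = U_1 ... U_n with U_i ~ U(0,2): E[Y_n] = 1 for every n,
# yet typical paths collapse to 0 because E[ln U] = ln 2 - 1 < 0.
import numpy as np

rng = np.random.default_rng(0)
n, paths = 200, 10_000
U = rng.uniform(0, 2, size=(paths, n))
Y = U.prod(axis=1)

print(np.median(Y))         # astronomically small, illustrating Y_n -> 0 a.s.
print(np.mean(np.log(U)))   # approx ln 2 - 1 = -0.3069
```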
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
On the Liouville-Arnold theorem A system is completely integrable (in the Liouville sense) if there exist $n$ Poisson-commuting first integrals. The Liouville-Arnold theorem, however, requires additional topological conditions to find a transformation which leads to action-angle coordinates; in these variables, the Hamilton-Jacobi equation associated to the system is completely separable, so that it is solvable by quadratures. What I would like to understand is whether the additional requirement of the Liouville-Arnold theorem (the existence of a compact level set of the first integrals on which the first integrals are mutually independent) means, in practice, that a problem with an unbounded orbit is not treatable with this technique (for example the Kepler problem with a parabolic trajectory). If so, is there a general approach to systems that have $n$ first integrals but do not fulfill the other requirements of the Arnold-Liouville theorem? Are they still integrable in some way?
Let $M= \{ (p,q) \in \mathbb{R}^{n} \times \mathbb{R}^n \}$ ($p$ denotes the position variables and $q$ the corresponding momentum variables). Assume that $f_1, \cdots, f_n$ are $n$ commuting first integrals; then $M_{z_1, \cdots, z_n} := \{ (p,q) \in M \; : \; f_1(p,q)=z_1, \cdots , f_n(p,q)=z_n \} $ with $z_i \in \mathbb{R}$ is a Lagrangian submanifold. Observe that if the compactness and connectedness condition is satisfied then there exist action-angle variables, which means that the motion lies on an $n$-dimensional torus (a compact object). The compactness condition is equivalent to requiring that no position variable $p_k$ and no momentum variable $q_j$ can become unbounded for fixed $z_i$. Consequently, if the compactness condition is not satisfied there is no way you can expect to find action-angle variables, since action-angle variables imply that the motion lies on a torus, which is a compact object.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1855956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Prove that the derivative of $x^w$ is $w x^{w-1}$ for real $w$ Can anyone give a proof of the derivative of this type of function? Specifically showing that $\dfrac{d(x^w)}{dx} = wx^{w-1}$ for a real $w$? I tried to use the Taylor series expansion for $(x+dx)^w$ and got the correct result. However, the proof of the Taylor series requires knowledge of the derivative of these functions. So this is essentially circular reasoning. I know that the same series is also given by the binomial expansion, but that's not entirely satisfactory either, because where's the proof that the binomial expansion works for all reals (isn't it only apparent for integers)? So far all of the arguments I've come across involve circular reasoning. I was thinking of showing that the binomial expansion is true for all reals using some form of proof by induction e.g. something like this. http://www.math.ucsd.edu/~benchow/BinomialTheorem.pdf I'm really not sure.
I wrote an answer to a question that was closed as a duplicate of this one. I thought I would add a different answer to this question. Integer Case For integer $n\ge0$, the Binomial Theorem says $$ (x+h)^n=\sum_{k=0}^n\binom{n}{k}x^{n-k}h^k\tag1 $$ So $$ \frac{(x+h)^n-x^n}h=\sum_{k=1}^n\binom{n}{k}x^{n-k}h^{k-1}\tag2 $$ Therefore, taking the limit, we get $$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x}x^n &=\binom{n}{1}x^{n-1}\\[3pt] &=nx^{n-1}\tag3 \end{align} $$ Inverting Suppose that $$ x=y^m\tag4 $$ Taking the derivative of $(4)$ using $(3)$ and substituting $y=x^{\frac1m}$ yields $$ 1=my^{m-1}\frac{\mathrm{d}y}{\mathrm{d}x}=mx^{1-\frac1m}\frac{\mathrm{d}}{\mathrm{d}x}x^{\frac1m}\tag5 $$ Therefore, $$ \frac{\mathrm{d}}{\mathrm{d}x}x^{\frac1m}=\frac1mx^{\frac1m-1}\tag6 $$ Rational Case Applying the Chain Rule with $(3)$ and $(6)$ gives $$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x}x^{\frac nm} &=n\left(x^{\frac1m}\right)^{n-1}\frac1mx^{\frac1m-1}\\[3pt] &=\frac nmx^{\frac nm-1}\tag7 \end{align} $$ Real Case When a sequence of functions converges pointwise and their derivatives converge uniformly, the derivative of the limit equals the limit of the derivatives (see this question). Therefore, the full case for non-negative exponents follows by continuity. Negative Case Applying the Chain Rule with $(7)$ and $\frac{\mathrm{d}}{\mathrm{d}x}\frac1x=-\frac1{x^2}$, we get $$ \begin{align} \frac{\mathrm{d}}{\mathrm{d}x}x^{-s} &=-\frac1{(x^s)^2}sx^{s-1}\\ &=-sx^{-s-1}\tag8 \end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 10, "answer_id": 2 }
Prove: if $f(0)=0$ and $f'(0)=0$ then $f''(0)\geq 0$ Let $f$ be nonnegative and twice differentiable on the interval $[-1,1]$. Prove: if $f(0)=0$ and $f'(0)=0$ then $f''(0)\geq 0$. (1) Are all the assumptions on $f$ necessary for the result to hold? (2) What can be said if $f''(0)= 0$? Looking at the Taylor polynomial and Lagrange remainder we get: $$f(x)=f(0)+f'(0)x+\frac{f''(c)x^2}{2}$$ $$f(x)=\frac{f''(c)x^2}{2}$$ Because the function is nonnegative and $\frac{x^2}{2}\geq 0$, we get $f''(c)\geq 0$. For (1), all the data seems needed, but I cannot find a valid reason. For (2), can we conclude that the function is the null function?
Your proof is enough when $f''$ is continuous. Here's a way without the continuity assumption. This Taylor expansion $f(x) = f(0) + xf'(0) + \frac{x^2}{2}f''(0) + o(x^2)$ yields here: $$f(x) = \frac{x^2}{2}f''(0) + o(x^2)$$ Either $f''(0)=0$ and we're good, otherwise the previous equality rewrites as $\displaystyle \lim_{x\to 0}\frac{2f(x)}{f''(0) x^2}= 1$ Consequently, $f(x)$ and $f''(0) x^2$ must share the same sign on a neighborhood of $0$. Since $f\geq 0$ and $x^2\geq 0$ that implies $f''(0)\geq 0$ When $f''(0)=0$, $f(x)=o(x^2)$. I don't see anything more you could say.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
A simple question about the Hamming weight of a square Let we define the Hamming weight $H(n)$ of $n\in\mathbb{N}^*$ as the number of $1$s in the binary representation of $n$. Two questions: * *Is it possible that $H(n^2)<H(n)$ ? *If so, is there an absolute upper bound for $H(n)-H(n^2)$? It is interesting to point out that, quite non-trivially, the answers to the same questions for polynomials $\in\mathbb{R}[x]$, with the Hamming weigth being replaced by the number of non-zero coefficients, are yes and no, but I am inclined to believe that the situation for the Hamming weigth is radically different, due to the non-negativity of coefficients. What are your thoughts about it?
Let $n_k=2^{2k-1}-2^k-1$ (for $k\ge 2$). We have $$H(2^{2k-1}-2^k-1)=2k-2,$$ because we flip one of the $2k-1$ ones of $2^{2k-1}-1$ to a zero. On the other hand $$ n_k^2=2^{4k-2}-2^{3k}+2^{k+1}+1. $$ Here the integer $m_k=2^{4k-2}-2^{3k}=2^{3k}(2^{k-2}-1)$ has Hamming weight $k-2$, so $H(n_k^2)=k$. Therefore $$H(n_k)-H(n_k^2)=k-2,$$ and the answers are: (1) Yes. (2) No.
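The construction is easy to verify by machine; a short Python check (my addition):

```python
# Hamming weight via the binary representation.
H = lambda n: bin(n).count('1')

for k in range(2, 10):
    n = 2**(2*k - 1) - 2**k - 1
    assert H(n) == 2*k - 2 and H(n * n) == k
    print(k, H(n) - H(n * n))   # prints k - 2, which is unbounded
```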
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to find the coordinate vector x with respect to the basis B for R^3? Find the coordinate vector of $x = \begin{bmatrix}-5\\-2\\0\end{bmatrix}$ with respect to the basis $B = \{ \begin{bmatrix}1\\5\\2\end{bmatrix}, \begin{bmatrix}0\\1\\-4\end{bmatrix}, \begin{bmatrix}0\\0\\1\end{bmatrix} \}$ for $\mathbb{R}^3 $ $[x]_B = ?$ So I think I have an idea, but i'm not quite sure.. Should I just put this in matrix form and then put it in RREF form?
What worked for me: form the augmented matrix whose columns are the basis vectors of $B$, with the vector $x$ appended at the end, and row reduce. The last column of the reduced matrix gives $[x]_B = \begin{bmatrix}-5\\23\\102\end{bmatrix}$
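Equivalently, one can let a linear solver do the row reduction (my addition):

```python
# Solve B c = x, where the columns of B are the basis vectors.
import numpy as np

B = np.array([[1, 0, 0],
              [5, 1, 0],
              [2, -4, 1]], dtype=float)   # basis vectors as columns
x = np.array([-5, -2, 0], dtype=float)
print(np.linalg.solve(B, x))  # [ -5.  23. 102.]
```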
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Roulette and Discrete Distribution A roulette wheel has 38 numbers. Eighteen of the numbers are black, eighteen are red, and two are green. When the wheel is spun, the ball is equally likely to land on any of the 38 numbers. Each spin of the wheel is independent of all other spins of the wheel. One roulette bet is a bet on black - that the ball will stop on one of the black numbers. The payoff for winning a bet on black is 2 dollars for every 1 dollar bet. That is, if you win, you get the dollar ante back and an additional dollar, for a net gain of 1 dollar; if you lose, you get nothing back, for a net loss of 1 dollar. Each 1 dollar bet thus results in the gain or loss of 1 dollar. Suppose one repeatedly places 1 dollar bets on black, and plays until either winning 7 dollars more than he has lost, or losing 7 dollars more than he has won. What is the chance that one places exactly 9 bets before stopping? I supposed 9 bets consist of eight wins (losses) and one loss (win). I realized that $${}_{9}C_8 \times (\frac{18}{38})^8 \times\frac{20}{38}+ {}_{9}C_1 \times (\frac{20}{38})^8 \times \frac{18}{38}$$ doesn't work, because some of those sequences would already have reached $\pm 7$ and stopped before the ninth bet. Any help is appreciated.
"What is the chance that one places exactly 9 bets before stopping?" The last two bets have to both be wins or both be loses otherwise stopping would happen after 7 bets, not 9 bets. That makes 7 bet sequence results possible for a final, 7 bet ahead, win and 7 bet sequence results possible for a final, 7 bet behind, loss. They are: Lwwwwwwww, Wllllllll, wLwwwwwww, lWlllllll, wwLwwwwww, llWllllll, wwwLwwwww, lllWlllll, wwwwLwwww, llllWllll, wwwwwLwww, lllllWlll, wwwwwwLww, llllllWll, Let p be the probability "that one repeatedly places 1 dollar bets on black, and plays until either winning 7 dollars more than he has lost, or losing 7 dollars more than he has won." and "places exactly 9 bets before stopping?" Here is the equation that gives the answer p: p=((7 choose 6)(18^8)(20)+(7 choose 1)(20^8)(18))/38^9 p=(7(9^8)(10)+7(10^8)(9))/19^9 answer: p=490172130/16983563041=0.02886156036967 The chance that one places exactly 9 bets before stopping is about 2.89 percent or about 1 in 34.65.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The product of five consecutive positive integers cannot be the square of an integer Prove that the product of five consecutive positive integers cannot be the square of an integer. I don't understand the book's argument below for why $24r-1$ and $24r+5$ can't be one of the five consecutive numbers. Are they saying that since $24-1$ and $24+5$ aren't perfect squares it can't be so? Also, the argument after that about how $24r+4$ is divisible by $6r+1$ and thus is a perfect square is unclear. Book's solution: (image of the book's solution omitted)
$24r-1$ and $24r+5$ are also divisible neither by $2$ nor by $3$. So they must also be coprime to the remaining four numbers, and thus must be squares. But this is impossible, because we already know that $24r+1$ is a square, and two non-zero squares can't differ by $2$ or $4$. For the second part: $6r+1$ is coprime to $24r,24r+1,24r+2$, and $24r+3$. So it must be a square. Hence $24r+4=4(6r+1)$ is a square. But then the two perfect squares $24r+1$ and $24r+4$ differ by $3$, and the only two squares differing by $3$ are $1$ and $4$. This forces $r=0$, which contradicts $r=k(3k\pm 1)/2$.
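A brute-force check over a modest range is consistent with the statement (my addition; of course this is evidence, not a proof):

```python
# No product n(n+1)(n+2)(n+3)(n+4) is a perfect square for small n.
from math import isqrt

for n in range(1, 10_000):
    prod = n * (n + 1) * (n + 2) * (n + 3) * (n + 4)
    assert isqrt(prod) ** 2 != prod
print("no squares found")
```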
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that $2^{2a+1}+2^a+1$ is not a perfect square given $a\ge5$ I am attempting to solve the following problem: Prove that $2^{2a+1}+2^a+1$ is not a perfect square for every integer $a\ge5$. I found that the expression is a perfect square for $a=0$ and $4$. But so far I cannot coherently prove that there are no other values of $a$ for which the expression is a perfect square. Any help would be very much appreciated.
I will assume that $a \ge 1$ and show that the only solution to $2^{2a+1}+2^a+1 = n^2$ is $a=4, n=23$. This is very non-elegant but I think that it is correct. I just kept charging forward, hoping that the cases would terminate. Fortunately, it seems that they have. If $2^{2a+1}+2^a+1 = n^2$, then $2^{2a+1}+2^a = n^2-1$ or $2^a(2^{a+1}+1) = (n+1)(n-1)$. $n$ must be odd, so let $n = 2^uv+1$ where $v $ is odd and $u \ge 1$. Then $2^a(2^{a+1}+1) = (2^uv+1+1)(2^uv+1-1) = 2^u v(2^u v+2) = 2^{u+1} v(2^{u-1} v+1) $. If $u \ge 2$, then $a = u+1$ and $2^{a+1}+1 =v(2^{u-1} v+1) $ or $2^{u+2}+1 =v(2^{u-1} v+1) =v^22^{u-1} +v $. If $v \ge 3$, the right side is too large, so $v = 1$. But this can not hold, so $u = 1$. Therefore $2^a(2^{a+1}+1) = 2^{2} v( v+1) $ so that $a \ge 3$. Let $v = 2^rs-1$ where $s$ is odd and $r \ge 1$. Then $2^{a-2}(2^{a+1}+1) = v( 2^rs) $ so $a-2 = r$ and $2^{a+1}+1 = vs \implies 2^{r+3}+1 = vs = (2^rs-1)s = 2^rs^2-s $. Therefore $s+1 =2^rs^2-2^{r+3} =2^r(s^2-8)$, and for $s \ge 3$ (so that $s^2-8>0$; the case $s=1$ is checked separately below) this gives $s+1 \ge 2(s^2-8) \implies 2s^2-s \le 17$, so $s = 1$ or $3$. If $s = 1$, then $2^{r+3}+1 =2^r-1 $ which can not be. If $s = 3$ then $2^{r+3}+1 =9\cdot 2^r-3 \implies 4 =9\cdot 2^r-2^{r+3} =2^r \implies r = 2, v = 11, a = 4$ and $2^9+2^4+1 =512+16+1 =529 =23^2 $.
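A brute-force scan agrees with this case analysis (my addition; evidence only, not part of the proof):

```python
# Which exponents a make 2^(2a+1) + 2^a + 1 a perfect square?
from math import isqrt

hits = [a for a in range(0, 200)
        if isqrt(2**(2*a + 1) + 2**a + 1) ** 2 == 2**(2*a + 1) + 2**a + 1]
print(hits)  # [0, 4]
```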
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How do I know that this system of equations has infinitely many solutions? \begin{cases} 2x + 4y - 2z = 0\\ \\3x + 5y = 1 \end{cases} My book is using this as an example of a system of equations that has infinitely many solutions, but I want to know how we can know that just from looking at the equations?
One can write your system $A x = b$ as an augmented matrix and bring it into row echelon form $$ \left[ \begin{array}{rrr|r} 2 & 4 & -2 & 0 \\ 3 & 5 & 0 & 1 \end{array} \right] \to \left[ \begin{array}{rrr|r} 1 & 2 & -1 & 0 \\ 3 & 5 & 0 & 1 \end{array} \right] \to \left[ \begin{array}{rrr|r} 1 & 2 & -1 & 0 \\ 0 & -1 & 3 & 1 \end{array} \right] \to \left[ \begin{array}{rrr|r} 1 & 2 & -1 & 0 \\ 0 & 1 & -3 & -1 \end{array} \right] \to \left[ \begin{array}{rrr|r} 1 & 0 & 5 & 2 \\ 0 & 1 & -3 & -1 \end{array} \right] $$ This translates back into $$ x + 5z = 2 \\ y - 3z = -1 $$ or $x = (2-5z, -1+3z, z)^T$, where $z \in \mathbb{R}$. So there are infinitely many solutions. From a geometric point of view, each equation defines an affine plane in $\mathbb{R}^3$, that is, a plane not necessarily including the origin. $$ (2, 4, -2) \cdot (x,y,z) = 0 \\ (3, 5, 0) \cdot (x, y, z) = 1 $$ The first plane has normal vector $(2,4,-2)^T$ and includes the origin; the second plane has normal vector $(3,5,0)$ and is $1/\sqrt{3^2 + 5^2}$ away from the origin. The solution set of the system is the intersection of those two planes, and only the cases empty intersection, intersection is a line, or intersection is a plane can happen. Here the intersection is a line. (Figure omitted: it showed the two intersecting planes, their intersection line, and the point $P$ corresponding to the solution with $z = 0$.)
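The same parametric solution can be read off from SymPy (my addition):

```python
# Underdetermined system: linsolve expresses the solution in terms of z.
from sympy import symbols, linsolve

x, y, z = symbols('x y z')
print(linsolve([2*x + 4*y - 2*z, 3*x + 5*y - 1], x, y, z))
# {(2 - 5*z, 3*z - 1, z)} : one free parameter, hence infinitely many solutions
```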
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Show that $\inf \{ m+n\omega \,:\, m+n\omega>0,\ m,n\in\mathbb{Z}\}= 0$, where $\omega>0 $ is irrational. Let $\omega\in\mathbb {R}$ be an irrational positive number. Set $$A=\{m+n\omega \,:\, m+n\omega>0,\ m,n\in\mathbb{Z}\}.$$ Show that $\inf{A}=0.$ How should I start? I don't see how to approach this problem.
Fix $\epsilon>0$. Fix an integer $N>1/\epsilon$. Consider the fractional parts of the numbers $k\omega$, $0\le k\le N$. These are commonly denoted $$ \{k\omega\}:=k\omega-\lfloor k\omega\rfloor. $$ Because $\omega$ is irrational, the numbers $\{k\omega\}$, $k=1,2,\ldots,N$, all lie in $(0,1)$ and are distinct. Because there are $N+1$ fractional parts (counting $k=0$) and only $N$ subintervals $[j/N,(j+1)/N)$, $j=0,\ldots,N-1$, some two of them, say $\{k\omega\}$ and $\{\ell\omega\}$ with $k\ne\ell$, lie in the same subinterval and hence are within $1/N<\epsilon$ of each other (this is the pigeonhole principle). If $k>\ell$ (resp. if $k<\ell$) then $0<\{t\omega\}<\epsilon$ for $t=k-\ell$ (resp. $t=\ell-k$). This means that $$0<n\omega+m<\epsilon$$ for $n=t$ and $m=-\lfloor t\omega\rfloor$. Making this CW, for this is surely a duplicate, but I don't have time to look for one.
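A concrete illustration for $\omega=\sqrt2$ (my addition; how small the minimum gets depends on the continued-fraction convergents of $\sqrt2$):

```python
# Scan small n for tiny positive fractional parts of n*sqrt(2);
# then m = -floor(n*omega) gives 0 < m + n*omega = frac < epsilon.
import math

omega = math.sqrt(2)
frac, n = min(((n * omega) % 1, n) for n in range(1, 10**5))
print(n, frac)   # frac is on the order of 1e-5 in this range
```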
{ "language": "en", "url": "https://math.stackexchange.com/questions/1856956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Union of two vector spaces Can someone tell me how to take the union of two vector spaces? It might be a simple question, but I forgot how to do it. Let's say I was given two vector spaces $W$ and $U$: $$W = \operatorname{span}\{ (1,2), (1,1) \}$$ $$U = \operatorname{span}\{ (3,4), (2,2) \}$$ What is $W \cup U$?
If these are both subspaces of $\mathbb{R}^2$, then in fact $U = W = \mathbb{R}^2$: each spanning pair is linearly independent (neither vector is a scalar multiple of the other), so each pair spans all of $\mathbb{R}^2$. Hence $W \cup U = \mathbb{R}^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do you rigorously explain the fact that $u \in L^p$ can be non defined over sets of measure 0? In all the definitions of $L^p(\Omega)$ spaces I have been given, these are defined to be the set of functions $f: \Omega \to \mathbb{R}$ whose norm $||\cdot||_{L^p}$ is finite. We define it as the quotient by the equivalence relation $f\sim g$ if and only if $f=g$ almost everywhere. Now, the books I am dealing with say that $f$ can be undefined on sets of measure $0$. But the definition of a function explicitly says $f(x)$ is defined for every $x \in \Omega$. Do we really relax this last condition to "defined almost everywhere", or do the books want to express that we cannot make sure what the value on a specific set of measure $0$ is, as it can always be redefined (but it is defined)?
It doesn't matter - the two versions of the definition give isometrically isomorphic spaces. Allowing functions to be undefined on a set of measure zero can be convenient, for example allowing us to refer to $f(x)=|x|^{-1/2}$ as an element of $L^1([-1,1])$ without having to define $f(0)$. Or allowing us to define $f=\lim f_n$ when the limit only exists almost everywhere, etc. It's so clear that it doesn't matter that people do use the second version, or write as though they were using it, without ever worrying about giving a precise statement of the second version of the definition. If I wanted to state that definition precisely I'd probably start like so: If $\mu$ is a measure on $X$ then an almost function on $X$ is a function $f:X\setminus E\to\Bbb C$ for some set $E$ with $\mu(E)=0$. Then if you feel like it you can give definitions of the sum and product of two almost functions, the integral of an almost function, what it means for two almost functions to be equal almost everywhere, etc., finally defining $L^p$ as a certain space of equivalence classes of almost functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Constructing $\mathbb{R}$ from $\mathbb{Z}$? I have been told that the real number line $\mathbb{R}$ can be constructed from the cartesian product $\mathbb{Z} \times [0,1)$. How exactly is that true? Surely, the cartesian product $\mathbb{Z} \times [0,1)$ would give a set of ordered pairs of numbers? How is this equivalent to $\mathbb{R}$?
I have been told that the real number line $\mathbb{R}$ can be constructed from the cartesian product $\mathbb{Z} \times [0,1)$. "constructed from" is perhaps relatively vaguely defined, but I assume you mean "has the same cardinality as" or, equivalently, "a bijection exists to". $$ f((n, r)) = n + r $$ is exactly such a bijection. How exactly is that true? Surely, the cartesian product $\mathbb{Z} \times [0,1)$ would give a set of ordered pairs of numbers? How is this equivalent to $\mathbb{R}$? "Equivalent" makes no sense in this case. One could define equivalence in many distinct ways. When you say "constructed from" I assume you are searching for a bijection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
$C_c^\infty(\Omega)\subseteq L^p(\Omega)$ for any open $\Omega$? Let $d\in\mathbb N$ and $\Omega\subseteq\mathbb R^d$. Can we show that $$C_c^\infty(\Omega)\subseteq L^p(\Omega)\tag 1$$ for all $p\in [1,\infty]$? It's clear that $(1)$ holds if $\Omega$ has finite Lebesgue measure. And it's clear that $(1)$ holds for $p=\infty$.
Let $f\in C^\infty_c(\Omega)$. Then $f$ is supported in a compact set $K$ and $|f|$ attains a maximum $C$ in this $K$. Thus $$\int_{\Omega} |f|^p dx = \int_K |f|^p dx \le \int_K C^p dx = \text{Vol}(K) C^p.$$ Thus $f\in L^p$ for all $p$. Indeed $C^\infty_c(\Omega)$ is dense in $L^p$ for all $1\le p <\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Using cross product prove that if $\vec{u} \times \vec{v} = \vec{0}$ and $\vec{u} \cdot \vec{v} = 0$ then $\vec{v} = \vec{0}$ I am asked to elaborate on the following proof: Let $\vec{u} \neq \vec{0}$. Prove that if $\vec{u} \times \vec{v} = \vec{0}$ and $\vec{u} \cdot \vec{v} = 0$ then $\vec{v} = \vec{0}$. My attempt was to say that $$ u . v = 0 \Rightarrow (u_1, u_2, u_3) \cdot (v_1, v_2, v_3) = 0 \Rightarrow u_1 v_1 + u_2 v_2 + u_3 v_3 = 0 $$ and that $$ \begin{vmatrix} \vec{i} & \vec{j} & \vec{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = 0 \Rightarrow (u_2 v_3 - v_2 u_3 , u_3 v_1 - u_1 v_3 , u_1 v_2 - u_2 v_1) = (0,0,0) $$ but it seems like the wrong way to go. How can I achieve that proof?
Hint: both $\sin$ and $\cos$ can't be $0$ at the same time. Note the formula for dot and cross product in terms of angles and magnitudes. $$a\cdot b=\|a\|\|b\|\cos \theta$$ $$\|a\times b\|=\|a\|\|b\| \sin \theta $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the basis and dimension of a subspace of the vector space of 2 by 2 matrices I am trying to find the dimension and basis for the subspace spanned by: $$ \begin{bmatrix} 1&-5\\ -4&2 \end{bmatrix}, \begin{bmatrix} 1&1\\ -1&5 \end{bmatrix}, \begin{bmatrix} 2&-4\\ -5&7 \end{bmatrix}, \begin{bmatrix} 1&-7\\ -5&1 \end{bmatrix} $$ in the vector space $M_{2,2}$. I don't really care about the answer, I am just hoping for an efficient algorithm for solving problems like this for matrices. I am not sure how to account for interdependence within the matrices. My instinct as of now is to find the maximum restriction imposed by the matrices. It is clear that the $1$ in position $a_{1,1}$ in each matrix will allow me to get any number in that position, so one vector in the basis will be: $$ \begin{bmatrix} 1&0\\ 0&0 \end{bmatrix} $$ But depending on which of the matrices I scale, I have restrictions on the other entries. So I don't think I can include that matrix in my basis. It just occurred as I was writing this that I could maybe just think about these as $4$ by $1$ vectors and proceed as usual. Is there any danger in doing so?
Inputs $$ \alpha = \left( \begin{array}{rr} 1 & -5 \\ -4 & 2 \end{array} \right), \qquad \beta = \left( \begin{array}{rr} 1 & 1 \\ -1 & 5 \end{array} \right), \qquad \gamma = \left( \begin{array}{rr} 2 & -4 \\ -5 & 7 \end{array} \right), \qquad \delta = \left( \begin{array}{rr} 1 & -7 \\ -5 & 1 \end{array} \right) $$ Find a basis for the span of these matrices. Matrix of row vectors As noted by @Bernard, compose a matrix of row vectors. Flatten the matrices in this manner $$ \left( \begin{array}{rr} 1 & -5 \\ -4 & 2 \end{array} \right) \quad \Rightarrow \quad \left( \begin{array}{crrc} 1 & -5 & -4 & 2 \end{array} \right) $$ Compose the matrix $$ \mathbf{A} = \left( \begin{array}{crrr} 1 & -5 & -4 & 2 \\\hline 1 & 1 & -1 & 5 \\\hline 2 & -4 & -5 & 7 \\\hline 1 & -7 & -5 & 1 \end{array} \right) $$ Row reduction Column 1 $$ \left( \begin{array}{rccc} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ -2 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \\ \end{array} \right) % \left( \begin{array}{crrc} 1 & -5 & -4 & 2 \\ 1 & 1 & -1 & 5 \\ 2 & -4 & -5 & 7 \\ 1 & -7 & -5 & 1 \\ \end{array} \right) % = % \left( \begin{array}{crrr} \boxed{1} & -5 & -4 & 2 \\ 0 & 6 & 3 & 3 \\ 0 & 6 & 3 & 3 \\ 0 & -2 & -1 & -1 \\ \end{array} \right) % $$ Column 2 $$ \left( \begin{array}{cccc} 1 & \frac{5}{6} & 0 & 0 \\ 0 & \frac{1}{6} & 0 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & \frac{1}{3} & 0 & 1 \\ \end{array} \right) % \left( \begin{array}{crrr} \boxed{1} & -5 & -4 & 2 \\ 0 & 6 & 3 & 3 \\ 0 & 6 & 3 & 3 \\ 0 & -2 & -1 & -1 \\ \end{array} \right) % = % \left( \begin{array}{cccc} \boxed{1} & 0 & -\frac{3}{2} & \frac{9}{2} \\ 0 & \boxed{1} & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) % $$ The fundamental rows are marked by the unit pivots. Solution The basis is $$ \mathcal{B} = \left\{ \alpha, \, \beta \right\} = \left\{ \left( \begin{array}{rr} 1 & -5 \\ -4 & 2 \end{array} \right), \ \left( \begin{array}{rr} 1 & 1 \\ -1 & 5 \end{array} \right) \right\} $$
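A rank computation confirms that only two of the four flattened rows are independent (my addition):

```python
# The span of the four matrices has dimension equal to the rank of A.
import numpy as np

A = np.array([[1, -5, -4, 2],
              [1, 1, -1, 5],
              [2, -4, -5, 7],
              [1, -7, -5, 1]], dtype=float)
print(np.linalg.matrix_rank(A))  # 2, so {alpha, beta} is a basis
```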
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How would you work out these combinations? * *If there are 16 different ice-cream flavours, how many combinations are there for a two scoop? *If there are still 16 different ice-cream flavours, how many combinations are there for a three scoop? How would you work out the above combinations? I found it just sitting in my notes app and I don't recall I ever found an answer. My thoughts at the moment are 16 * (no. of scoops) but I am still lost since of course the flavours can appear in any order. (As my SE profile will demonstrate, I'm not much of a mathematician!) So what would an equation be for the following as applicable to both Problem 1 and Problem 2? A. Working out the number of combinations including duplicate scoops (e.g. chocolate-chocolate-vanilla) B. Working out the number of combinations where a flavour only appears once in each possible combination (e.g. chocolate-vanilla-strawberry and then vanilla-chocolate-strawberry). Any help would be much appreciated!
1) Working out the number of combinations including duplicate scoops (e.g. chocolate-chocolate-vanilla). Consider the case where there is only one scoop of ice cream. There are 16 flavors (choices), and thus 16 "combinations." The next case is 2 scoops. One way to think about this problem is to consider how many choices you have per scoop. There are 16 choices for the first scoop and 16 choices for the second scoop since duplicates are allowed. This works out to $16^2 = 256$. It should be clear how you can extend this to more scoops. 2) Working out the number of combinations where a flavour only appears once in each possible combination (e.g. chocolate-vanilla-strawberry and then vanilla-chocolate-strawberry). Now try the case of 3 scoops. There are 16 choices for the first scoop, but 15 choices for the second scoop since duplicates are not allowed. For the third scoop there are 14 choices, which works out to $16\cdot15\cdot14=3360$. Note that with this way of combining flavors, the order is important, since chocolate-vanilla-strawberry and vanilla-chocolate-strawberry are both counted. EDIT (for cases where duplicates are allowed but order does not matter): 1. If there are 16 different ice-cream flavours, how many combinations are there for a two scoop? There's a decent explanation from another post. The formula is indeed $$ \binom{n+k-1}{k} $$ where $n$ is the number of flavors and $k$ is the number of scoops. This is called a combination with repetition. Note that using this formula gives a different answer than I originally provided, since combinations do not consider the order in which objects are counted: $$ \binom{16+2-1}{2}=\binom {17}2=\frac{17\cdot16}{1\cdot2}=136 $$ 2. If there are still 16 different ice-cream flavours, how many combinations are there for a three scoop? Using the formula above it can be worked out similarly: $$ \binom{16+3-1}{3}=\binom {18}3=\frac{18\cdot17\cdot16}{1\cdot2\cdot3}=816 $$
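All of these counts are one-liners in Python, if you want to check them (my addition):

```python
# Verify the combination-with-repetition counts and the ordered count.
from math import comb

n = 16
print(comb(n + 2 - 1, 2))  # 136 unordered two-scoop combos, duplicates allowed
print(comb(n + 3 - 1, 3))  # 816 for three scoops
print(16 * 15 * 14)        # 3360 ordered selections without repeats
```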
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$A \subseteq \mathbb R^n $ s.t. for every continuous function $f : A \to \mathbb R$ , $f(A)$ is closed in $\mathbb R$ , is $A$ closed $\mathbb R^n$? Let $A \subseteq \mathbb R^n $ such that for every continuous function $f : A \to \mathbb R$ , $f(A)$ is closed in $\mathbb R$ ; then I know that $A$ is bounded ; my question is , is $A$ closed in $\mathbb R^n$ ? ( If we changed the co-domain from $\mathbb R$ to $\mathbb R^n$ , the answer would be trivially yes , but I don't know what happens when the co-domain is real line ) . Please help . Thanks in advance
Assume $A$ is not closed. Then there is some $x_0 $ in the closure of $A$ but not in $A$. This implies that the continuous function $f(x) := |x - x_0| $ takes only positive values on $A$, while the closure of the image contains $0$; this contradicts the closedness of $f(A)$. Hence, $A$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Negation of definition of continuity This should be a very easy question but it might just be that I'm confusing myself. So we have the definition of a function $f$ on $S$ being continuous at $x_0$: For any $\epsilon$>0, there exists $\delta>0$ such that: whenever $|x-x_0|<\delta$, we have $|f(x)-f(x_0)|<\epsilon$ And I assume the negation is There exists $\epsilon$>0 such that for all $\delta>0$, $|x-x_0|<\delta$ yet $|f(x)-f(x_0)|\ge \epsilon$. Now I want to show that the function $f(x)=\sin(\frac{1}{x})$ together with $f(0)=0$ cannot be made into a continuous function at $x=0$. So I need to show that there exists $\epsilon>0$ such that for all $\delta>0$, $|x|<\delta$ yet $|f(x)|\ge\epsilon$. Let $\epsilon = \frac{1}{2}$. Then no matter what $\delta$ we choose, let $|x|<\frac{1}{2}$. It is certainly possible that $|f(x)|\ge \frac{1}{2}$, because, well, $\frac{1}{x}$ can really take on arbitrarily large value as $x$ is small. Now, what confuses me is that, as $x$ gets small, $f(x)$ can certainly be greater than $\frac{1}{2}$ for infinitely many times, but it will be less than that infinitely many times, too. But I suppose it doesn't really matter. So I think there's something wrong with my negation but I couldn't figure out where. Update: The correct version can be found here. Watch for Lemma 4.6
The negation is: there exists $\epsilon >0$ such that for any $\delta>0$ we can find an $x$ such that $|x-x_0|<\delta$ and $|f(x)-f(x_0)| \ge \epsilon$. And you have just proved this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1857945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Integral over infinity of $F(x) = e^{ix}$ How do I calculate the integral of the following function over the whole real line? $F(x) = e^{ix}$ ($i$ imaginary) $$ \int\limits_{-\infty}^\infty e^{ix} \, dx $$
$$\int_0^{\infty}e^{ix}\,dx=\left.\frac{e^{ix}}{i}\right|_0^{\infty}=\left.-ie^{ix}\right|_0^{\infty}=i\left(1-\lim_{x\to\infty}e^{ix}\right)$$ and $\lim_{x\to\infty}e^{ix}$ does not exist, since $e^{ix}$ keeps oscillating on the unit circle; so the integral does not converge (and the same happens on $(-\infty,0]$). Are you sure the sign in the exponential is correct?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Chain rule, time derivative and change of variables A simple calculus question. If I apply the chain rule to a composite function: $$\frac{d}{dt}f(x(t))=\frac{\partial}{\partial x}f(x(t))\frac{dx}{dt}$$ Now, if I change variables, and define: $$x=x_1+\lambda x_2$$ I can say: \begin{equation} \frac{d}{dt}f(x_1(t),x_2(t))=\frac{\partial}{\partial x_1}f(x_1(t),x_2(t))\frac{dx_1}{dt}+\frac{\partial}{\partial x_2}f(x_1(t),x_2(t))\frac{dx_2}{dt} \label{eq:1} \end{equation} but also $$\frac{\partial}{\partial x}=\frac{\partial}{\partial x_1}+\lambda ^{-1}\frac{\partial}{\partial x_2}$$ so that, from the equation at the top $$\frac{d}{dt}f(x(t))=\left[\frac{\partial}{\partial x_1}+\lambda ^{-1}\frac{\partial}{\partial x_2}\right]f(x_1(t)+\lambda x_2(t))\left(\frac{dx_1}{dt}+\lambda\frac{dx_2}{dt}\right)$$ How does this relate to the third equation?
\begin{equation} \frac{df(x)}{dt}=\frac{df(x)}{dx}\frac{dx}{dt}=\frac{df(x)}{dx}\left(\frac{\partial x}{\partial x_1}\frac{dx_1}{dt}+\frac{\partial x}{\partial x_2}\frac{dx_2}{dt}\right)=\frac{df(x)}{dx}\left(\frac{dx_1}{dt}+\lambda\frac{dx_2}{dt}\right) \label{eq:1} \end{equation} since $\partial x/\partial x_1=1$ and $\partial x/\partial x_2=\lambda$. This agrees with your second displayed equation, because $\frac{\partial f}{\partial x_1}=\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial x_2}=\lambda\frac{\partial f}{\partial x}$. Note that the proposed identity $\frac{\partial}{\partial x}=\frac{\partial}{\partial x_1}+\lambda^{-1}\frac{\partial}{\partial x_2}$ cannot be right as stated: applied to $f(x_1+\lambda x_2)$ it returns $2\,\partial f/\partial x$, i.e. it double-counts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $f(n) =\displaystyle\sum_{r=1}^{n}\Biggl(r^n\Bigg(\binom{n}{r}-\binom{n}{r-1}\Bigg) + (2r+1)\binom{n}{r}\Biggr)$, then what is $f(30)$? Please give me hints on how to solve it. I tried 2-3 methods but it doesn't go beyond two steps. I am out of ideas now. Thank you
We may simply deal with each piece separately: $$ \sum_{r=1}^{n}(2r+1)\binom{n}{r}=\left.\frac{d}{dx}\sum_{r=1}^{n}\binom{n}{r}x^{2r+1}\right|_{x=1}=\left.\frac{d}{dx}\left(x\cdot\left(1+x^2\right)^n-x\right)\right|_{x=1}=2^n(n+1)-1.$$ On the other hand, by summation by parts: $$ T_n=\sum_{r=1}^{n}\left(\binom{n}{r}-\binom{n}{r-1}\right)r^n \\= n^n\left(\binom{n}{n}-\binom{n}{0}\right)-\sum_{r=1}^{n-1}\left(\binom{n}{r}-1\right)((r+1)^n-r^n) $$ so, since $\binom{n}{n}=\binom{n}{0}=1$ and the telescoping sum $\sum_{r=1}^{n-1}((r+1)^n-r^n)$ equals $n^n-1$: $$ T_n = n^n-1-\sum_{r=1}^{n-1}\binom{n}{r}((r+1)^n-r^n) $$ and you may compute the last sum through Dobinski's formula.
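If you just want the numerical value of $f(30)$, exact integer arithmetic gives it directly (my addition):

```python
# Direct evaluation of the original sum with exact integers.
from math import comb

def f(n):
    return sum(r**n * (comb(n, r) - comb(n, r - 1)) + (2*r + 1) * comb(n, r)
               for r in range(1, n + 1))

print(f(30))  # the exact value of f(30)
```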
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }