How can I solve this differential equation (not an obvious 2nd order ODE) I am not sure how to manipulate this 2nd order ODE to find the general solution $$u_r u^{'}+\dfrac{u_r}{r}u=\dfrac{\nu}{r}\left(\dfrac{d}{dr}\left(r u^{'}\right) - \dfrac{u}{r}\right)$$ where $$u_r=-\dfrac{r_0 v_w}{r}$$ and $r_0$, $v_w$, $\nu$ are constants. I know I can rewrite the left-hand side as $$\dfrac{u_r}{r}\dfrac{d}{dr}(ru)$$ but I am not sure how I can manipulate the right-hand side. Any ideas? Note: $u=u(r)$ only.
The use of the abbreviation $u_r$ is somewhat misleading. Instead, I would suggest proceeding as follows. Let us denote the constant $r_0 v_w$ by $\mu$. Then your equation can be rewritten as follows: $$ -\frac{\mu}{r^2}\left(ru'+u\right) = \frac{\nu}{r}\left(ru''+u'-\frac{u}{r}\right). $$ After simple manipulations, we get $$ u'' + \left(1+\frac{\mu}{\nu}\right) \frac{1}{r} u' + \left(\frac{\mu}{\nu}-1\right) \frac{1}{r^2} u = 0. $$ This is the so-called Euler Differential Equation, cf. http://mathworld.wolfram.com/EulerDifferentialEquation.html The general solution (which depends on $\mu$ and $\nu$) of this equation is known.
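As a numerical sanity check (my own sketch, not part of the original answer), one can plug power-law trial solutions $u = r^k$ into the original equation; for the Euler equation above the indicial roots work out to $k = -1$ and $k = 1 - \mu/\nu$:

```python
# Check that u = r^k solves  -(mu/r^2)(r u' + u) = (nu/r)(r u'' + u' - u/r)
# for the indicial roots k = -1 and k = 1 - mu/nu.
# mu and nu below are arbitrary positive test values, not given data.
mu, nu = 3.0, 2.0

def residual(k, r):
    u   = r ** k
    up  = k * r ** (k - 1)            # u'
    upp = k * (k - 1) * r ** (k - 2)  # u''
    lhs = -(mu / r ** 2) * (r * up + u)
    rhs = (nu / r) * (r * upp + up - u / r)
    return lhs - rhs

for k in (-1.0, 1.0 - mu / nu):
    for r in (0.5, 1.0, 2.0, 7.3):
        assert abs(residual(k, r)) < 1e-9
```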
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
how to find a Borel function $f$ such that $a(t) = f(b(t))$ Let $b(t)$ be a non-increasing, right-continuous and positive function on $(0,\infty)$ (clearly, it is Borel) and $a(t)$ be a measurable function on $(0,\infty)$. Moreover, $a(t)$ is a constant on an interval when $b(t)$ is a constant on this interval. Can we find a Borel function $f:\sigma(b(t))\rightarrow R$ such that $a(t) = f(b(t))$?
In general, no: if $L$ is the Cantor function, let $b=2-L$ and $a=2+1_{S}$, where $1_S$ is the indicator function of a non-Borel subset $S$ of the Cantor set $C$. Then $a$ is non-Borel and constant on all the intervals where $b$ is constant (i.e. on $\Bbb R\setminus C$), but the RHS you require is clearly a Borel function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $|z| = 1$ if and only if $\bar{z} = \frac{1}{z}$. Maybe a very stupid question but I am stuck. Show that $|z| = 1$ if and only if $\bar{z} = \frac{1}{z}$. Is it enough to simply multiply, i.e. $z\bar{z} = \frac{1\times z}{z} = 1$? Somehow I feel this is not correct. I know that if $z = \pm 1$ or $z = \pm i$ then $|z| = 1$. Am I supposed to draw the circle $|z| = 1$? But what does $\frac{1}{z}$ represent? If someone could give me a hint.
Take $z$ in the unit circle. $\bar{z}$ is the reflection of $z$ with respect to the real axis. Therefore, $\bar{z}$ has modulus $1$ and argument the negative of the argument of $z$. Since we multiply complex numbers by multiplying their modulus and adding their arguments, we have the $z\bar{z}$ has modulus $1$ and argument $0$ and so $z\bar{z}=1$.
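A quick numeric illustration of the same point (not a substitute for the argument), using a few arbitrary angles:

```python
import cmath

for t in (0.0, 0.7, 2.5, -1.2):
    z = cmath.exp(1j * t)                      # a point with |z| = 1
    assert abs(z * z.conjugate() - 1) < 1e-12  # z * conj(z) = 1
    assert abs(z.conjugate() - 1 / z) < 1e-12  # so conj(z) = 1/z
```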
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
Calculate the sum of the series $\sum_{n=1}^\infty \frac 1{n(n+1)^2}$ I have already proven that $$\int_0^1 \ln(1-x)\ln (x)\mathrm d x=\sum_{n=1}^\infty \frac 1{n(n+1)^2}.$$ So now I want to calculate $$S:=\sum_{n=1}^\infty \frac 1{n(n+1)^2}.$$ I think I should use the fact that $$\sum_{n=1}^\infty \frac 1{n^2}=\frac{\pi^2}6$$ and I know using Wolfram Alpha that $S=2-\frac{\pi^2}6$. So finally I will get $$\int_0^1 \ln(1-x)\ln (x)\mathrm d x=2-\frac{\pi^2}6.$$ But how to calculate $S$ ?
We have that $$\sum_{n=1}^{m}\left(\frac{1}{n}- \frac{1}{n+1}\right)=\frac{m}{m+1}=1-\frac{1}{m+1},$$ so this sum tends to $1$ as $m\rightarrow \infty$. Hence $$\sum_{n=1}^{\infty }\frac{1}{n(n+1)^2} = \sum_{n=1}^{\infty }\left(\frac{1}{n}- \frac{1}{n+1}\right) -\sum_{n=1}^{\infty } \frac{1}{(n+1)^2}=1 -\sum_{n=2}^{\infty } \frac{1}{n^2}$$ $$=1-\left(\frac{\pi^2}{6}-1\right)=2-\frac{\pi^2}{6}$$
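A quick numeric check of the result (illustration only):

```python
import math

# partial sums of 1/(n(n+1)^2) should approach 2 - pi^2/6 = 0.35506...
s = sum(1.0 / (n * (n + 1) ** 2) for n in range(1, 200001))
assert abs(s - (2 - math.pi ** 2 / 6)) < 1e-5
```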
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$x + y = 7$ and $x^3 - y^3 = 37$, find $xy$ $x+y=7$ $x^3-y^3 = 37$ Find $xy$ I have tackled this one but I am stuck, when I get to this point: $$(x-y)((x+y)^2 -xy) = 37$$ I would be glad if you could give me any suggestions.
Since $(x^3-y^3)^2$ is symmetric in $x$ and $y$, you can express it as a polynomial in terms of $x+y$ and $xy$ : $(x^3-y^3)^2 = x^6 - 2x^3y^3 + y^6 = (x+y)^6 - (6x^5y+15x^4y^2+22x^3y^3+15x^2y^4+6xy^5) \\ = (x+y)^6 - 6xy(x+y)^4 + (9x^4y^2+14x^3y^3+9x^2y^4) \\ = (x+y)^6 - 6xy(x+y)^4 + 9(xy)^2(x+y)^2 - 4(xy)^3$ Plugging the values $x+y = 7$ and $x^3-y^3 = 37$ gives you the equation $4(xy)^3 - 441(xy)^2 +14406(xy)-116280 = 0$ The only rational root of this cubic polynomial is $12$, so this factors as $(xy-12)(4(xy)^2-393(xy)+9690) = 0$ And now you can check that the second factor doesn't have real roots. Finally, getting a value for $xy$ (here, $12$) determines the values of $x$ and $y$ completely, because $x = (x+y)/2 + (x^3-y^3)/(2((x+y)^2-xy))= 7/2 + 37/(2\cdot(7^2-12)) = 4$ and then $y = (x+y)-x = 7-4 = 3$
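The arithmetic above is easy to verify in code (a verification sketch, not part of the derivation):

```python
# the cubic in p = xy has p = 12 as a root ...
p = 12
assert 4 * p ** 3 - 441 * p ** 2 + 14406 * p - 116280 == 0
# ... the remaining quadratic factor 4p^2 - 393p + 9690 has negative
# discriminant, hence no real roots ...
assert 393 ** 2 - 4 * 4 * 9690 < 0
# ... and (x, y) = (4, 3) satisfies both original equations:
x, y = 4, 3
assert x + y == 7 and x ** 3 - y ** 3 == 37 and x * y == 12
```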
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Prove that if $f : X \to Y$ is a function between non-empty finite sets such that $|X| \lt |Y|$, then $f$ is not a surjection. Theorem 11.1.6: Suppose that $f: X \to Y$ is a function between non-empty finite sets such that $|X| \lt |Y|$. Then $f$ is not a surjection, i.e. there exists an element of $Y$ which is not a value of the function. This is a theorem from the book "Introduction to Mathematical Reasoning" by P.J.Eccles which I would like to prove. The author provides the following hints: "This can be proved by similar methods to the pigeonhole principle. Alternatively it can be deduced from the pigeonhole principle by observing that from a surjection $X \to Y$ it is possible to construct an injection $Y \to X$." The latter approach is given as an exercise further on in the book, so I am most interested in the first. The author proved the pigeonhole principle as follows: Theorem 11.1.2 (Pigeonhole principle): Suppose that $f: X \to Y$ is a function between non-empty finite sets such that $|X| \gt |Y|$. Then $f$ is not an injection, i.e. there exist distinct elements $x_1$ and $x_2 \in X$ such that $f(x_1) = f(x_2)$. $Proof$ This is the contrapositive of Corollary 11.1.1 and so follows from that result. $\tag*{$\blacksquare$}$ And the relevant corollary is Corollary 11.1.1: Suppose that $X$ and $Y$ are non-empty finite sets. If there exists an injection $f: X \to Y$ then $|X| \le |Y|$. For the proof of Theorem 11.1.6 I decided to make use of the following: Ex.11.1: Suppose that ${\mathbb N_n} \to X$ is a surjection. Then $X$ is a finite set and $|X| \le n$. $Proof\ (of\ Theorem\ 11.1.6)$ Let $X$ be a non-empty finite set such that $|X| = n$. Then there exists a bijection $$g: \mathbb{N_n} \to X$$ Suppose there exists a surjection $f : X \to Y$. We can then define $$ h = f \circ g : \mathbb{N_n} \to X \to Y$$ which is a surjection, given that it is a composite of surjections. 
Then there exists a surjection $$\mathbb {N_n} \to Y$$ By Ex.11.1, $n \ge |Y|$, or $|X| \ge |Y|$. This is the contrapositive of what we wished to prove, and so we are done. $\tag*{$\blacksquare$}$ QUESTION Is the above proof correct? Specifically, is it really the contrapositive of the wanted? And if so, was it deduced correctly? Thank you
This proof is correct. Normally people would phrase this argument in a more informal manner, along the lines of: "If $f: X \to Y$ is surjective, and $X$ has $n$ elements, then $Y$ has $\leq n$ elements." But in the context of your particular text and course, your argument looks good.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Classifying permutations in terms of their cycle notation Is there a standard way of referring to permutations in terms of their cycle notation? For example: Does the set of all permutations in $S_4$ that can be expressed as the composition of two disjoint two-cycles, $\left\{ (12)(34), (13)(24), (14)(23)\right\}$, have a name?
The sets you're looking for are the conjugacy classes of $S_n$. It's a good exercise to convince oneself that "$a$ and $b$ are conjugates in $S_n$" is equivalent to "$a$ and $b$ have the same cycle type".
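For a small computational illustration (my own sketch), one can compute the conjugacy class of $(12)(34)$ in $S_4$ by brute force and see that it is exactly the set in the question:

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

p0 = (1, 0, 3, 2)                  # (12)(34), written 0-indexed
conj_class = {compose(g, compose(p0, inverse(g)))
              for g in permutations(range(4))}

assert conj_class == {(1, 0, 3, 2),   # (12)(34)
                      (2, 3, 0, 1),   # (13)(24)
                      (3, 2, 1, 0)}   # (14)(23)
```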
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove the following series converges Let $u, v, w$ be real numbers such that $u+v+w=0$. Suppose that $\{b_k\colon k=0,1,2,\dots\}$ is a sequence of real numbers such that $\lim_{k\to\infty} b_k=0$. For $k=0,1,2,\dots$ define $a_{3k}=u b_k,$ $a_{3k+1}= v b_k,$ $a_{3k+2} = w b_k.$ Prove that the series $ \sum_{n=0}^{\infty} a_n$ converges. I'm having a tough time with this one; I will appreciate any ideas.
Let us set $c_{3k}=u,\ c_{3k+1}=v,\ c_{3k+2}=w$. For the Dirichlet test, we need the partial sums $\sum_{k=0}^N c_k$ to be bounded and $\lim_{n\to\infty}b_n=0$. Since $u+v+w=0$, each complete block of three terms sums to zero, so $$\sum_{k=0}^N c_k=\sum_{k=0}^{M}c_k$$ for some $M\in\{0,1,2\}$ (namely $M = N \bmod 3$); hence the partial sums are bounded. Since we already have $\lim_{n\to\infty}b_n=0$, it follows that $$\sum_{k=0}^\infty c_kb_k$$ converges.
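A numeric illustration with one concrete choice (mine, not from the problem): $u=v=1$, $w=-2$ and $b_k=1/(k+1)$. The partial sums settle down even though $\sum b_k$ itself diverges:

```python
u, v, w = 1.0, 1.0, -2.0           # u + v + w = 0

def b(k):
    return 1.0 / (k + 1)           # b_k -> 0

def a(n):
    k, r = divmod(n, 3)
    return (u, v, w)[r] * b(k)

S, tail = 0.0, []
for n in range(30000):
    S += a(n)
    tail.append(S)
# late partial sums are all tiny (for this choice the sum is 0):
assert max(abs(x) for x in tail[-100:]) < 1e-3
```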
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Statistical significance in the interval The statistical significance of an observed result is not always the same as the material significance - the practical importance - of that result. I am curious as to why this would be the case in some circumstances. And is there an example when a statistically significant result is also materially significant, and vice versa?
Statistically significant, not of practical importance. I once consulted on a study to determine whether a new machine and a standard lab test gave the same results for hemoglobin (Hgb) in the blood. Hgb was determined for about three dozen newborn babies using both methods for each. The purpose of the study was to see if the new machine (quicker, easier) could be used instead of the lab method. The machine gave slightly higher values for most of the babies, to the extent that a paired t test indicated a highly significant difference. But that did not mean that the machine couldn't be used. (a) The difference, although almost certainly 'real', was so small as not to be clinically important. (b) A slight adjustment to the machine could make it give values that matched the lab results. (No one knows or cares which was really more accurate.) Statistically significant, and of practical importance. Often, with some prior knowledge of the variances involved, it is possible to design a study so that statistical significance matches practical importance. This is done by specifying the size of the difference that is of practical importance, and making sure that the sample size is just right that the power of the test for that difference is reasonably high (say 95%). Thus the study is 'tuned' in advance so statistical significance and practical importance are quite likely to coincide.
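A sketch of that 'tuning' computation under a normal approximation (my own illustration with made-up numbers, not the actual study): pick the sample size so a paired-difference test at level $\alpha$ has the desired power against the practically important difference $\delta$.

```python
from statistics import NormalDist
import math

def sample_size(delta, sigma, alpha=0.05, power=0.95):
    """Smallest n with power >= `power` to detect a mean difference
    `delta` (one-sample / paired z-approximation, noise sd `sigma`)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# e.g. to detect a (hypothetical) clinically relevant difference of 0.5
# units when the paired differences have sd 1.0:
assert sample_size(delta=0.5, sigma=1.0) == 52
```

With these inputs a significant result is, by construction, likely to be a practically important one as well.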
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\cos^2(z)+\sin^2(z)=1$ Can anybody show me how to prove that $\cos^2(z)+\sin^2(z)=1$ for $z\in \mathbb{C}$? I can prove it for the case where $z$ is real, but can't seem to find a way to prove it for complex numbers in general.
The very same proof that works for real values works for complex values: both $\sin z$ and $\cos z$ admit convergent powerseries on the whole complex plane, which you can differentiate term by term, giving that $$(\cos z) ' = -\sin z$$ $$(\sin z)' = \cos z$$ If you consider $f(z) = \cos^2 z +\sin^2 z-1$ the above gives $f'(z) =0$: $$f'(z) = -2\cos z\sin z+2\sin z\cos z=0$$ Thus $f$ must be constant, and because $f(0)=0$, it is constantly zero.
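Numeric spot-check at genuinely complex arguments (illustration only; the argument above is the proof):

```python
import cmath

for z in (1 + 2j, -0.5 + 3j, 2.7 - 1.3j):
    assert abs(cmath.sin(z) ** 2 + cmath.cos(z) ** 2 - 1) < 1e-9
```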
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Uniqueness for a linear transformation given nullspace and range Find the representation in a canonical base for: $$\mathcal{T}: \mathbf{R^3} \rightarrow \mathbf{R^3}$$ Such that :$$ \quad\mathcal{N}(\mathcal{T}) = \quad \langle\left(\begin{matrix} 1\\0\\-1 \end{matrix}\right)\rangle$$ and the image of $\mathcal{T}$ is defined by the equation:$$x +2y -z=0$$ Is $\mathcal{T}$ unique? The only thing I've figured out is:$$Im(\mathcal{T}) = \quad \langle\left(\begin{matrix} -2\\1\\0 \end{matrix}\right), \left(\begin{matrix} 1\\0\\1 \end{matrix}\right)\rangle $$ and for any vector $\mathbf{x}$ in $\mathbf{R^3}$:$$\mathcal{T}(\mathbf{x})= \alpha\left(\begin{matrix} -2\\1\\0 \end{matrix}\right) + \beta\left(\begin{matrix} 1\\0\\1 \end{matrix}\right)$$ I have absolutely no idea on how to proceed, any ideas...
Suppose we have a linear map $T:U\rightarrow V$ and we have a basis for $U$. If I tell you where $T$ takes each basis vector of $U$ then you know $T$ exactly, and therefore these conditions define $T$ uniquely. Now suppose instead of telling you where specific basis vectors go, I just tell you a vector in $U$ gets mapped to $(-2,1,0)$ and some other vector gets mapped to $(1,0,1)$ and another vector gets sent to $(0,0,0)$; without specifying more information you do not know $T$ exactly. For example if we use the standard basis, you could say $e_1$ goes to $(-2,1,0)$ and $e_2$ goes to $(1,0,1)$, or you could say $e_2$ goes to $(-2,1,0)$ and $e_1$ goes to $(1,0,1)$. Both of these maps would satisfy the conditions I gave. Is it clear how the conditions I gave are the same as the ones you were given above? The key is to remember these are linear transformations, so $T(c_1e_1+c_2e_2)=c_1T(e_1)+c_2T(e_2)$. Hope that helps!
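To make the non-uniqueness concrete, here are two explicit matrices (my own example, following the idea above) that both have kernel $\langle(1,0,-1)\rangle$ and image contained in the plane $x+2y-z=0$, yet are different maps:

```python
# columns are T(e1), T(e2), T(e3); forcing T(e1 - e3) = 0 puts
# (1, 0, -1) in the kernel, and rank 2 makes that the whole kernel
# (the first two columns are independent, so they span the plane).
T1 = [[-2, 1, -2],
      [ 1, 0,  1],
      [ 0, 1,  0]]
T2 = [[ 1, -2,  1],     # same data, roles of e1 and e2 swapped
      [ 0,  1,  0],
      [ 1,  0,  1]]

def apply(T, v):
    return [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]

for T in (T1, T2):
    assert apply(T, [1, 0, -1]) == [0, 0, 0]      # kernel condition
    for j in range(3):                            # columns in the plane
        col = [T[i][j] for i in range(3)]
        assert col[0] + 2 * col[1] - col[2] == 0
assert T1 != T2                                   # so T is not unique
```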
{ "language": "en", "url": "https://math.stackexchange.com/questions/1986963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove or disprove this lemma? Lemma: For any real-valued continuous function $ f(x,y) $, where $ x \in R^m $ and $ y \in R^n $, there are smooth scalar functions $ A(x)\geq 0 $, $ B(y)\geq 0 $, $ C(x)\geq 1 $ and $ D(y)\geq 1 $ such that $ \left| f(x,y) \right| \leq A(x)+B(y) $ and $ \left| f(x,y) \right| \leq C(x)D(y) $.
Let $M(N) = \max \{ |f(x,y)|: \max(\|x\|,\|y\|) \le N+1 \}$ (which is finite because $\{(x,y): \max(\|x\|,\|y\|) \le N+1\}$ is compact). There are smooth increasing functions $a$ and $b$ such that $a(n) > M(n)$ and $b(n) > M(n)$ for each nonnegative integer $n$. Take $A(x) = a(\|x\|)$ and $B(y) = b(\|y\|)$ (these may not be smooth at $0$, so adjust them slightly). Then for any $(x,y)$, if $\|x\| \le \|y\|$ with $N \le \|y\| \le N+1$ we have $|f(x,y)| \le M(N) \le b(N) \le B(y) \le A(x) + B(y)$, and similarly if $\|y\| \le \|x\|$. For the second case, note that $|f(x,y)| \le C(x) D(y)$ if $\log |f(x,y)| \le \log C(x) + \log D(y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $a^2$ when divided by $5$ cannot have a remainder of $3$. For any natural number $a$, prove that $a^2$ when divided by $5$ cannot have a remainder of $3$. (Hint: What are the possible values of the remainder when $a$ is divided by $5$? Using the hint, I found that the possible values of the remainder are $1, 2, 3, 4$, and $0$, and $3$ is only a remainder when the last digit of $a$ is $3$ or $8$. But I'm not sure how to explain this in a proof, and I don't know how to extend it to $a^2$. Any help would be much appreciated!
if $a \equiv 0 \mod 5$, then $a^2 \equiv 0 \mod 5$. if $a \equiv 1 \mod 5$, then $a^2 \equiv 1 \mod 5$. if $a \equiv 2 \mod 5$, then $a^2 \equiv 4 \mod 5$. if $a \equiv -2 \mod 5$, then $a^2 \equiv 4 \mod 5$. if $a \equiv -1 \mod 5$, then $a^2 \equiv 1 \mod 5$.
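An exhaustive check of this case analysis in code:

```python
# squares mod 5 only ever give remainders 0, 1 or 4 -- never 3 (or 2)
assert {a * a % 5 for a in range(1000)} == {0, 1, 4}
```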
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
When are $\beta_1, \beta_2, \ldots, \beta_n$ linearly independent? Given $n$ linearly independent vectors, $\alpha_1, \alpha_2, \ldots, \alpha_n$. Now, let $$\beta_1 = \alpha_1 + \alpha_2 + \cdots + \alpha_m$$ $$\beta_2 = \alpha_2 + \alpha_3 + \cdots + \alpha_{m+1}$$ $$\ldots$$ $$\beta_{n-m} = \alpha_{n-m} + \alpha_{n-m+1} + \cdots + \alpha_n$$ $$\beta_{n-m+1} = \alpha_{n-m+1} + \alpha_{n-m+2} + \cdots + \alpha_{n+1}$$ $$\ldots$$ $$\beta_{n-1} = \alpha_{n-1} + \alpha_{n} + \cdots + \alpha_{m-2}$$ $$\beta_{n} = \alpha_{n} + \alpha_{1} + \cdots + \alpha_{m-1}$$ where $1 \lt m \lt n$ and indices are taken mod $n$. For example, if $n=3$ and $m=2$, then $$\beta_1 = \alpha_1 + \alpha_2$$ $$\beta_2 = \alpha_2 + \alpha_3$$ $$\beta_3 = \alpha_3 + \alpha_1$$ The question is, which condition must $n$ and $m$ meet for $\beta_1, \beta_2, \ldots, \beta_n$ to be linearly independent? A guess is that $n$ and $m$ must be relatively prime, but I can neither prove nor disprove it.
Isn't the implied matrix a circulant matrix? Then, because of this determinant formula, your problem (for the case of linearly dependent vectors) appears to ask, for given $m < n$ When is it the case that there is a $j \mid n$ such that $\omega_{j}$ is a root of $f_{m} = x^{m-1} + \dots + x + 1$, or equivalently $\phi_{j}$ divides $f_{m}$? Here $\omega_{j}$ is a primitive $j$-th root of unity, and $\phi_{j}$ is the $j$-th cyclotomic polynomial. If $\phi_{j}$ divides $f_{m}$, then $j > 1$ (as $f_{m}(\omega_{1}) = f_{m}(1) = m \ne 0$), and $\phi_{j}$ divides $x^{m} - 1 = (x-1) f_{m}$, so $j \mid m$. Conversely, if $j > 1$ and $j \mid m$, then $\phi_{j}$ divides $x^{j} - 1$ which divides $x^{m} - 1 = (x - 1) f_{m}$, so that $\phi_{j}$ divides $f_{m}$, as $j > 1$. Therefore the condition for linear dependence appears indeed to be that $\gcd(n, m) \ne 1$. Addendum. The key point here is really $$ \gcd(x^{n} - 1, x^{k} - 1) = x^{\gcd(n, k)} - 1. $$
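One can also test the $\gcd(n,m)$ criterion computationally (a sketch; the matrix below encodes the $\beta$'s, with row $i$ having ones in the $m$ cyclically consecutive positions starting at $i$):

```python
from fractions import Fraction
from math import gcd

def circulant_rank(n, m):
    """Exact rank (over Q) of the n x n circulant 0/1 matrix whose
    i-th row has ones in columns i, i+1, ..., i+m-1 (mod n)."""
    A = [[Fraction(1 if (j - i) % n < m else 0) for j in range(n)]
         for i in range(n)]
    rank = 0
    for col in range(n):
        pivot = next((r for r in range(rank, n) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        piv = A[rank][col]
        A[rank] = [x / piv for x in A[rank]]
        for r in range(n):
            if r != rank and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

# the beta's are independent exactly when gcd(n, m) = 1:
for n in range(2, 9):
    for m in range(2, n):
        assert (circulant_rank(n, m) == n) == (gcd(n, m) == 1)
```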
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Prove: $ \frac{1}{\sin 2x} + \frac{1}{\sin 4x } + \cdots + \frac{1 }{\sin 2^n x} = \cot x - \cot 2^n x $ Prove $$ \frac{1}{\sin 2x} + \frac{1}{\sin 4x } + \cdots + \frac{1 }{\sin 2^n x} = \cot x - \cot 2^n x $$ where $n \in \mathbb{N}$ and $x$ not a multiple of $\frac{ \pi }{2^k} $ for any $k \in \mathbb{N}$. My try. If $n=2$, we have $$\begin{align} \frac{1}{\sin 2x} + \frac{ 1}{\sin 4 x} &= \frac{1}{\sin 2x} + \frac{1}{2 \sin 2x \cos 2x } \\[6pt] &= \frac{ 2 \cos 2x + 1 }{2 \sin 2x \cos 2x} \\[6pt] &= \frac{2 \cos^2 x - 2 \sin^2 x + \cos^2 x + \sin^2 x}{2 \sin 2x \cos 2x} \\[6pt] &= \frac{3 \cos^2 x - \sin^2 x}{2 \sin 2x \cos 2x} \end{align}$$ but here I got stuck. Am I on the right track? My goal is to ultimately use induction.
$$\csc2A+\cot2A=\dfrac{1+\cos2A}{\sin2A}=\cot A$$ $$\iff\csc2A=\cot A-\cot 2A$$ Put $2A=2x,4x,8x,\cdots,2^nx$ and add See also: Telescoping series
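A numeric spot-check of the resulting identity (with $x$ chosen away from the excluded multiples of $\pi/2^k$):

```python
import math

x, n = 0.3, 6
lhs = sum(1.0 / math.sin(2 ** k * x) for k in range(1, n + 1))
rhs = 1.0 / math.tan(x) - 1.0 / math.tan(2 ** n * x)
assert abs(lhs - rhs) < 1e-9
```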
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Periodic functions and properties of integration This is from a book in Fourier series. I don't understand how the integral can be split as in equality 2 and 3. Which properties of integration justify that? $F(x)$ and $f(x)$ are two functions with the periodicity $2l$. The integral $F(x)=\int_{-l}^x f(t) \, \mathrm dt$ is equal to zero at $x=-l$ and at $x=l$. \begin{align*} F(x+2l)= \int_{-l}^{x+2l} f(t) \,\mathrm dt &= \underbrace{ \int_{-l}^x f(t) \, \mathrm dt+ \int_x^{x+2l}f(t)\, \mathrm dt }_\text{How?}\\ &= \underbrace{ \int_{-l}^x f(t)\, \mathrm dt + \int_0^{2l} f(t) \, dt}_\text{How?} \\ &=\int_{-l}^x f(t) \, \mathrm dt= F(x). \end{align*}
Take $f(x)=\cos x$ as a concrete example, so that $l=\pi$, and say $x=\pi/6$. The first equality is just additivity of the integral over adjacent intervals: $[-\pi,\,x+2\pi]$ splits into $[-\pi,\,x]$ and $[x,\,x+2\pi]$. The second equality uses periodicity: the integral of a $2l$-periodic function over any interval of length $2l$ is the same, because shifting the variable of integration by one period does not change the integrand. Concretely, split $[x,\,x+2\pi]=[\pi/6,\,13\pi/6]$ as $[\pi/6,\,2\pi]\cup[2\pi,\,13\pi/6]$; by periodicity the piece over $[2\pi,\,13\pi/6]$ equals the piece over $[0,\,\pi/6]$, and gluing the two pieces together gives exactly $\int_0^{2\pi} f(t)\,\mathrm dt$.
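Both properties are easy to check numerically. Here is a sketch with $f(t)=1+\cos t$ (my own choice, so the period integral is nonzero), using the exact antiderivative $t+\sin t$:

```python
import math

F = lambda t: t + math.sin(t)        # antiderivative of f(t) = 1 + cos t
integral = lambda a, b: F(b) - F(a)

period = 2 * math.pi                 # here 2l = 2*pi
ref = integral(0, period)            # integral over one period (= 2*pi)
for x in (-math.pi, 0.4, 2.9, 10.0):
    # the integral over ANY interval of length 2l is the same:
    assert abs(integral(x, x + period) - ref) < 1e-12
```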
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proj as a quotient of an affine variety by $\mathbb C^*$ I wonder how good is the quotient of $\operatorname{Spec} A$ by $\mathbb C^*$ given by $\operatorname{Proj} A$. First, an action of the group $\mathbb C^*$ on an complex affine variety $\operatorname{Spec} A$ is the same as $\mathbb Z$-grading on its function ring $A$. Suppose also that there are only positive gradings, so it is $\mathbb N$-grading. Then there is a rational map $\operatorname{Spec} A \dashrightarrow \operatorname{Proj} A$ which is defined on the open subset $(\operatorname{Spec} A)_{ss}$ whose complement is given by the irrelevant ideal $\sum_{i>0} A_i$. Second, it is well-known that for a reductive group $G$ the variety $\operatorname{Spec} B^G$ classifies closed $G$-orbits in $\operatorname{Spec} B$. But $\operatorname{Proj} A$ is locally given by $\operatorname{Spec} (A_f)_0$ for a homogeneous element $f \in A_k$, so I wonder whether it classifies the closed orbits in $(\operatorname{Spec} A)_{ss}$? As the group is $\mathbb C^*$, every orbit is either one point or one-dimensional. Maybe $(\operatorname{Spec} A)_{ss}$ contains only one-dimensional orbits, so all of them are closed? If nothing this simple holds, is there a simple counter-example, or a good reference?
Let's write $X:=\operatorname{Spec}(A)\subseteq\Bbb A^n\newcommand{\sms}{\mathrm{ss}}$ where I also assume that the coordinates of $\Bbb A^n$ are such that $\Bbb C^\times$ acts by a character on it (we can always choose coordinates in this way). Since you insisted that the induced grading vanishes in negative degrees, this implies that $t\in\Bbb C^\times$ acts on a point $x=(x_1,\ldots,x_n)\in X$ as $t.(x_1,\ldots,x_n)=(t^{k_1}x_1,\ldots,t^{k_n}x_n)$ for certain natural numbers $k_i\ge 0$. From this you can easily construct examples where $0$ is not in the closure of $\Bbb C^\times.x$ (i.e. $x\in X_\sms$), but the orbit of $x$ is not closed: Choose $n=2$, $X=\Bbb A^2$, $k_1=0$, $k_2=1$, then $x=(1,1)$ is semistable because $t.(1,1)=(1,t)$ and zero is not in the closure of these points, but the point $y=(1,0)$ is in the closure of the orbit, but not the orbit itself. Hence, going from $X$ to $X_\sms$ does not guarantee in general that you have only closed orbits left, even for an action of $\Bbb C^\times$. I also hope that the first paragraph might make things easier to work with for you, because the situation is still quite controllable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Very stuck with flow on a circle dynamical system problem I want to find the bifurcation points and be able to draw and classify the phase portrait as the parameter u varies for the following $$\theta^{\bullet}=u+\sin(\theta)+\cos(2\theta)$$ (where $\theta^{\bullet}=\frac{d}{dt}\theta$) but I am having trouble. This is my work and thoughts so far. I know that fixed points occur when $\theta^{\bullet}=0$, that is, when $u+\sin(\theta)+\cos(2\theta)=0$. I also know $\cos(2\theta)=\cos^{2}(\theta)-\sin^{2}(\theta)$, so $u+\sin(\theta)+\cos^{2}(\theta)-\sin^{2}(\theta)=0$, and the most I can simplify in terms of u is $$u=\frac{-\cos^{2}(\theta)}{\sin(\theta)(1-\sin(\theta))}$$ for $\theta$ not $0$ or $\pi/2$. And then I am stuck, so can anyone help me with this? Should I be trying to solve for the parameter? Or should I be trying some other method? Other thoughts: Since I can't seem to solve directly, is it possible I just need to try many different parameter values like u very negative, $u=-1$, $u=0$, $u=1$ and greater? Maybe that could help but it doesn't seem systematic to me so I am not confident in that method. Thanks
Consider the Jacobian of your system defined by $$\dot{\theta} = f(\theta) = u+\sin(\theta)+\cos(2\theta)$$ which we can rewrite in terms of $\sin(\theta)$ and $\cos(\theta)$ as $$f(\theta) = u+\sin(\theta)+\cos^2(\theta)-\sin^2(\theta)$$ The Jacobian is $$J(\theta) = \cos(\theta)-2\sin(2\theta) = \cos(\theta)-4\sin(\theta)\cos(\theta)$$ The system loses structural stability when this becomes $0$ at one of its equilibria (i.e. that equilibrium becomes nonhyperbolic). Let's figure out where $(f(\theta), J(\theta)) = 0$. We note that $J(\theta) = 0$ when $\sin(\theta) = \frac{1}{4}$ or $\cos(\theta) = 0$. In the first case, $f(\theta) = u+\frac{9}{8}$, so $u = -\frac{9}{8}$ is a value of $u$ at which one or more equilibria are nonhyperbolic. In the second case, $f(\theta) = u$ or $u-2$, so $u = 0$ and $u = 2$ are two more values of $u$ where one or more equilibria are nonhyperbolic. I recommend you plot some solution curves around these values of $u$ and see precisely what is happening to the equilibria there.
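One way to see what happens at those parameter values (a numerical sketch; the particular $u$ samples are my own choices) is to count equilibria on either side of $u=-9/8$, $0$ and $2$ by counting sign changes of $f$ on a fine grid over $[0,2\pi)$:

```python
import math

def num_equilibria(u, N=20000):
    f = lambda th: u + math.sin(th) + math.cos(2 * th)
    vals = [f(2 * math.pi * i / N) for i in range(N)]
    # count transversal zero crossings, including the wrap-around pair
    return sum(1 for i in range(N) if vals[i] * vals[(i + 1) % N] < 0)

assert num_equilibria(-1.2) == 0   # below u = -9/8: no equilibria
assert num_equilibria(-0.5) == 4   # between -9/8 and 0: four
assert num_equilibria(1.0) == 2    # between 0 and 2: two
assert num_equilibria(2.5) == 0    # above 2: none again
```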
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does the Discrete Fourier Transform and the Fast Fourier Transform give different results? I have a function $$ f(t) = \begin{cases} a(1-a|t|) &\text{if } |t|<\dfrac{1}{a} \\[6pt] 0 &\text{if } |t|>\dfrac{1}{a} \end{cases} $$ Here is what the function looks like for $a=2$: I wrote two scripts to evaluate the Fourier Transform of this function, $g(\omega)=\mathcal{F}[f(t)]$. One is simply a numerical integration of the standard Fourier Transform, and the other is an implementation of the Discrete Fourier Transform. Here are the results of running those two scripts, with the DFT on the top: Since these two look largely the same, minus some differences in normalization, I'm convinced my implementations are working properly. However, if I run an FFT on $f(t)$, and plot the result vs. frequency, it looks completely different than what I get from either other method of computation: Why is this? Shouldn't they look exactly the same? I mean the FFT is literally the DFT, just implemented in a cleverly optimized way. What am I doing wrong?
The FFT returns the spectrum on the frequency range $[0,\,2f_\text{max}]$, where $f_\text{max}=1/(2\,dt)$. It also returns it in an order where the negative frequencies come after the positive ones. To resolve the issue, use 'fftshift' after using 'fft'; you will then see the symmetric spectrum you expect, since your original signal is real.
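A minimal NumPy sketch of that fix (assuming NumPy; MATLAB's `fftshift` behaves the same way). `fftfreq` gives the frequency axis in the FFT's native order, and `fftshift` reorders both axis and spectrum so zero frequency sits in the middle:

```python
import numpy as np

a, dt, N = 2.0, 0.01, 1024
t = (np.arange(N) - N // 2) * dt                      # centered time grid
f_t = np.where(np.abs(t) < 1 / a, a * (1 - a * np.abs(t)), 0.0)

spectrum = np.fft.fftshift(np.fft.fft(f_t))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dt))

assert freqs[N // 2] == 0.0                  # zero frequency now centered
mag = np.abs(spectrum)
# for a real signal the magnitude spectrum is symmetric about 0:
assert np.allclose(mag[1:], mag[1:][::-1])
```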
{ "language": "en", "url": "https://math.stackexchange.com/questions/1987907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $X_{1}, X_{2},..$ are identically distributed then $M_{n}/n \rightarrow 0$ in probability. If $X_{1}, X_{2},..$ are identically distributed then $M_{n}/n \rightarrow_{p} 0$ where $M_{n}=max\{|X_{1}|,...,|X_{n}|\}$ and $E(|X_{1}|)$ is finite. $M_{n}/n \rightarrow_{p} 0$ by definition means given any $\epsilon$, $P(M_{n}/n > \epsilon) \rightarrow 0$ as $n \rightarrow \infty$. Can anyone give any hint?
$M_n>n\varepsilon$ if and only if $|X_i|>n\varepsilon$ for some $1\leq i\leq n$. Therefore $$ \{M_n>n\varepsilon\}=\bigcup_{i=1}^n\{|X_i|>n\varepsilon\}$$ hence $$ \mathbb{P}(M_n>n\varepsilon)\leq \sum_{i=1}^n\mathbb{P}(|X_i|>n\varepsilon)=n\mathbb{P}(|X_1|>n\varepsilon) $$ Therefore it is enough to show that $$ \lim_{n\to\infty}n\mathbb{P}(|X_1|>n\varepsilon)=0 $$ for any $\varepsilon>0$, and this follows from the dominated convergence theorem, since $\mathbb{E}[|X_1|]<\infty$.
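For a concrete distribution (my own choice) the last limit is visible directly: if $X_1\sim\mathrm{Exp}(1)$ then $n\,\mathbb{P}(|X_1|>n\varepsilon)=n e^{-n\varepsilon}\to 0$:

```python
import math

eps = 0.1
vals = [n * math.exp(-n * eps) for n in (10, 100, 1000)]
assert vals[0] > vals[1] > vals[2]     # decreasing along this sequence
assert vals[2] < 1e-40                 # essentially zero already
```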
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the alternating series $\sum_{n=1}^{\infty }(-1)^n \frac {n^2 - 1}{2n^2 + 3}$ divergent? I tried every test for convergence and really came up with nothing. Without answering the problem directly, is it possible to determine the divergence or convergence for this series? $$\sum_{n=1}^{\infty }(-1)^n \frac {n^2 - 1}{2n^2 + 3}$$ If it can be determined could someone give a little hint as to what direction to take? Thank you!
Let $$a_n=(-1)^n\frac{n^2-1}{2n^2+3}$$ Then $$\forall n\in \mathbb N^* \;\; |a_n|=\frac{1-\frac{1}{n^2}}{2+\frac{3}{n^2}}$$ which yields $$\lim_{n\to +\infty}|a_n|=\frac{1}{2}$$ so $a_n \not\to 0$; thus, the series $\sum a_n$ is divergent by the term test.
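The term test is easy to see numerically (illustration only):

```python
def abs_a(n):
    return (n * n - 1) / (2 * n * n + 3)   # |a_n|

assert abs(abs_a(10 ** 6) - 0.5) < 1e-6    # |a_n| -> 1/2
assert abs_a(10 ** 6) > 0.4                # so a_n does not tend to 0
```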
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solving $\frac{3}{2} e^y y^2 + y = 0$ Trying to solve the following Eq, but none of the usual methods seem to work: $$3/2 \exp{(y)}y^2+y=0$$ Any help is appreciated.
If $ay^2e^y+y=0$, then $ye^y=-1/a$. By the definition of the Lambert W function (see https://en.m.wikipedia.org/wiki/Lambert_W_function, for example) $y=W^{-1}(-1/a)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Open superset of $\mathbb{Q}$ Let $S$ be an open set such that $\mathbb{Q}\subset S$. We can also define a set $T=\mathbb{R}\setminus S$. I have been trying to prove or disprove whether $T$ could be uncountable. I suspect $T$ has to be at most countable, is my intuition correct?
Your intuition is reasonable, but incorrect. HINT: the rationals are countable, so we can list them as $\mathbb{Q}=\{q_1, q_2, q_3, . . .\}$ (obviously this listing isn't "in order," in any sense, but that's fine). Now, let $U_n=(q_n-2^{-n}, q_n+2^{-n})$. Each $U_n$ is open, so the union $V=\bigcup U_n$ is open, and clearly $V$ contains $\mathbb{Q}$. Do you see how to show that $\mathbb{R}\setminus V$ is uncountable? If you are unfamiliar with Lebesgue measure, the above hint will be very difficult. An alternate approach is via the Baire category theorem: let $C$ be the Cantor set, and show that there is some real $r$ such that $C+r=\{x+r: x\in C\}$ contains no rational numbers. Then the complement of $C+r$ is open, contains the rationals, and has uncountable complement. HINT: To use BCT, show that for each rational $q$, the set $B_q=\{r: q\in C+r\}$ is nowhere dense. Then, since there are only countably many rationals, BCT implies that $\bigcup_{q\in\mathbb{Q}} B_q$ is meager, and hence $\not=\mathbb{R}$ . . .
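A crude numeric illustration of the first hint, restricted to $[0,1]$ and with the widths shrunk to $4^{-n}$ so the total length is at most $2/3$ (my own variation, to make the leftover set visible on a grid):

```python
from fractions import Fraction

# enumerate distinct rationals in [0, 1] by increasing denominator
seen, rationals = set(), []
for q in range(1, 31):
    for p in range(q + 1):
        r = Fraction(p, q)
        if r not in seen:
            seen.add(r)
            rationals.append(float(r))

# open interval of length 2 * 4^-(n+1) around the n-th rational;
# total length <= sum 2 * 4^-(n+1) = 2/3 < 1
intervals = [(r - 4.0 ** -(n + 1), r + 4.0 ** -(n + 1))
             for n, r in enumerate(rationals)]

grid = [i / 10000 for i in range(10001)]
uncovered = [x for x in grid
             if not any(lo < x < hi for lo, hi in intervals)]
# an open set containing these rationals that still misses grid points:
assert len(uncovered) > 0
```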
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Condition for pointwise convergence of $({f_n}')_{n\in\mathbb{N}}$ to $f'$ if $({f_n})_{n\in\mathbb{N}}$ converges pointwise to $f$ Problem Let $(f_n)_{n\in\mathbb{N}}$ be a sequence of differentiable functions such that $f_n:A(\subseteq \mathbb{R})\to \mathbb{R}$ for all $n\in\mathbb{N}$. Let $(f_n)_{n\in\mathbb{N}}$ converges pointwise to a differentiable function $f$. If $({f_n}')_{n\in\mathbb{N}}$ also converges pointwise to $g$ and $g$ satisfies the intermediate value property prove or disprove that $f'=g$. For the example given here (see page no. 63, Example 5.17) $g$ doesn't satisfy the intermediate value property and hence doesn't meet the condition of the problem. Can anyone help me in proving or disproving it?
Hint: The function $h(x) =\int_0^x \sin (1/t)\,dt$ is differentiable everywhere, with $h'(x) = \sin (1/x)$ for $x\ne 0$ by the FTC, and $h'(0)=0$ because of all the oscillation at $0.$ (You have to do some work to see this last bit.) Now you have mentioned an example of a sequence $f_n$ converging to $0$ uniformly on $\mathbb R,$ such that $f_n'$ converges pointwise to $g = \chi_{\{0\}}.$ For your problem here, consider the functions $f_n + h.$
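The "work" behind $h'(0)=0$ can be probed numerically. This is a sketch of mine, not part of the hint: substituting $u=1/t$ gives $h(x)=\int_{1/x}^{\infty}\sin(u)/u^2\,du$, and two integrations by parts bound this by $|h(x)|\le 2x^2$, so $h(x)/x\to 0$. The code sums the tail over half-periods of $\sin$, where the pieces alternate in sign, and checks the quadratic bound.

```python
import math

def simpson(g, lo, hi, n=32):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3

def h_val(x, tol=1e-8):
    """h(x) = ∫_0^x sin(1/t) dt, computed as ∫_{1/x}^∞ sin(u)/u² du (u = 1/t).
    The tail is summed over half-periods of sin; the pieces alternate in sign,
    so the truncation error is below the last piece."""
    A = 1.0 / x
    g = lambda u: math.sin(u) / (u * u)
    k = math.ceil(A / math.pi)
    total = simpson(g, A, k * math.pi)
    while True:
        piece = simpson(g, k * math.pi, (k + 1) * math.pi)
        total += piece
        if abs(piece) < tol:
            break
        k += 1
    return total

# integration by parts gives |h(x)| <= 2 x^2, consistent with h'(0) = 0
bound_ok = {x: abs(h_val(x)) <= 2 * x * x for x in (0.2, 0.1, 0.05)}
```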
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing an analytic function on the unit disk is nonzero in a certain neighborhood Suppose $f(z)$ is analytic for $|z|\le 1$ and $f(0) = a_0 \ne 0$. If $M = \max_{|z|=1} |f(z)|$, then show $f(z)\ne 0$ for all $z$ with $|z| < \frac{|a_0|}{|a_0|+M} =:r$. I know we can write $f(z) = a_0 + z^kg(z)$, some $k\ge 1$ and $g$ analytic and $g(0)\ne 0$. From here, I've tried various techniques, like contradiction by assuming $f$ has a root in the disk $\{|z| < r\}$, or trying to use Rouche's Theorem on the disk by examining $|f(z)-a_0|$, but I haven't really gotten anywhere. Any hints would be greatly appreciated.
The map $g(z):=f(z)/M$ is an analytic map from the unit disc to itself. Then by the Schwarz-Pick Theorem, for $|z|< 1$, $$\left|\frac{g(z)-g(0)}{1-\overline{g(z)}g(0)}\right|\leq |z|.$$ If $f(z)=0$ and $|z|<r<1$ then $g(z)=0$ and we obtain $$\frac{|a_0|}{M}=|g(0)|\leq |z| <r= \frac{|a_0|}{|a_0|+M}$$ which is a contradiction: since $f(0) = a_0 \ne 0$ and $M>0$, we have $\frac{|a_0|}{|a_0|+M}<\frac{|a_0|}{M}$.
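A concrete instance (my own illustration, not part of the proof): take $f(z)=0.2+z^2$, so $a_0=0.2$, $M=\max_{|z|=1}|f(z)|=1.2$, and $r=|a_0|/(|a_0|+M)=1/7$. The zeros of $f$ sit at $\pm i\sqrt{0.2}$, of modulus $\approx 0.447$, safely outside the disc $|z|<r$, and a grid scan confirms $f$ stays bounded away from $0$ there.

```python
import cmath

def f(z):
    return 0.2 + z * z          # analytic on the closed disc, f(0) = a0 = 0.2

N = 720
M = max(abs(f(cmath.exp(2j * cmath.pi * k / N))) for k in range(N))
a0 = abs(f(0))
r = a0 / (a0 + M)               # the zero-free radius from the problem

# scan a polar grid of the closed disc |z| <= r for the minimum of |f|
min_abs = min(abs(f(rho * r / 20 * cmath.exp(2j * cmath.pi * k / 90)))
              for rho in range(21) for k in range(90))
nearest_zero = 0.2 ** 0.5       # modulus of the actual zeros ±i√0.2
```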
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
4-digit password with unique digits not in ascending or descending order I need to calculate how many possible passwords there are if each password is 4 digits long, using the digits 0-9. All digits in the password must be unique, and the digits must not be all in increasing order or all in decreasing order. For example “3569” is not allowed, because the digits are in increasing order, while “1374” is allowed. I know that a four digit password could be anything between 0000 to 9999, hence there are 10,000 combinations. But I am now stuck figuring out how to calculate the number of all possible passwords that have unique digits and are neither all increasing nor all decreasing. I have tried to calculate the possible number of passwords if every digit may only be used once: $$P(n,r)=\frac{10!}{(10−4)!}=\frac{10⋅9⋅8⋅7⋅6⋅5⋅4⋅3⋅2⋅1}{6⋅5⋅4⋅3⋅2⋅1}=5040$$ But I am not quite sure if this is the answer to the question. If not, how should I calculate such a question?
As you have already worked out, there are $^{10}P_4=5040$ passwords that repeat no digit. From this number we are to subtract those passwords whose digits are all increasing or all decreasing. All such passwords can be generated by picking four digits out of the ten without regards to order – there are $\binom{10}4=210$ ways to do so – and then arranging them in increasing or decreasing order as required. Since we have two choices of order, we subtract $210\cdot2=420$ passwords. Hence there are $5040-420=4620$ passwords with unique digits that are not all increasing or all decreasing.
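The computation above is small enough to confirm by brute force; the sketch below enumerates all $P(10,4)=5040$ orderings directly and classifies the forbidden ones.

```python
from itertools import permutations

increasing = decreasing = count = 0
for p in permutations(range(10), 4):
    if list(p) == sorted(p):                       # all digits increasing
        increasing += 1
    elif list(p) == sorted(p, reverse=True):       # all digits decreasing
        decreasing += 1
    else:
        count += 1                                  # a valid password
```

Since the digits are distinct, "sorted" here means strictly increasing, matching the $\binom{10}{4}=210$ monotone selections of each kind.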
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
What's so discrete about discrete topology? I am a beginner at topology. I recently learned about discrete topology. But the definition of discrete topology doesn't convey anything 'discrete' to me. Is it just whimsically named, like the top and bottom quarks in physics, or does discreteness have anything to do with discrete topology? P.S. As I've not found anything like this on this site, I hope this is not a duplicate.
Do you understand what's so isolated about an "isolated" point? Well, a discrete space is a space in which every point is isolated. This seems to agree well with the dictionary definition of "discrete": "consisting of distinct or unconnected elements."
{ "language": "en", "url": "https://math.stackexchange.com/questions/1988978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Non Linear PDE Using Charpit's Method. Using Charpit's method, how can I solve this: $$u^{2}(p^{2}+q^{2})=x^{2}+y^{2}$$ where $$ du=p\,dx+q\,dy \ , \quad p =u_{x}\ ,\quad q=u_{y}$$ So we have $$f(x,y,p,q,u)=u^{2}(p^{2}+q^{2})-x^{2}-y^{2}=0$$ and $$\frac{dx}{-2pU}=\frac{dy}{-2qU}=\frac{dU}{-2U(p^{2}+q^{2})}=\frac{dp}{-2x+p(p^2+q^2)}=\frac{dq}{-2y+q(p^2+q^2)}$$ the solution is $$u^2=b+x\sqrt{(x^2+a^2)}+ a\ln(x+\sqrt{(x^2+a^2)})+y\sqrt{(y^2-a^2)}+ a\ln(y+\sqrt{(y^2-a^2)})$$ I don't know how they came up with that solution! Thank you.
Use the change of variable $$ X=x^2,\quad Y=y^2,\quad U=u^2\\ P=\frac{\partial U}{\partial X},\quad Q=\frac{\partial U}{\partial Y} $$ Note that $$ P=\frac{\partial U}{\partial u}\frac{\partial u}{\partial x}\frac{\partial x}{\partial X}=\frac{up}{x}\\ Q=\frac{\partial U}{\partial u}\frac{\partial u}{\partial y}\frac{\partial y}{\partial Y}=\frac{uq}{y} $$ Our PDE in terms of the new variables is $$\begin{align} & P^2X+Q^2Y=X+Y\\\implies& X(P^2-1)+Y(Q^2-1)=0 \end{align}$$ General method for solving $$f(x,y,u,p,q)=f_1(x,p)+f_2(y,q)=0$$ In this case Charpit's auxiliary equations become $$ \frac{dp}{\frac{\partial f_1}{\partial x}}=\frac{dq}{\frac{\partial f_2}{\partial y}}=\frac{du}{-p\frac{\partial f_1}{\partial p}-q\frac{\partial f_2}{\partial q}}=\frac{dx}{-\frac{\partial f_1}{\partial p}}=\frac{dy}{-\frac{\partial f_2}{\partial q}} $$ Cross-multiplying the 1st and 4th ratios we get $$\begin{align} &\frac{\partial f_1}{\partial x}dx+\frac{\partial f_1}{\partial p}dp=0 \implies df_1=0\\ \therefore\;&f_1(x,p)=a \end{align}$$ where $a$ is a constant. And, $$f_2(y,q)=-f_1(x,p)=-a$$ We can write $p,q$ as $$p=F_1(x,a),\quad q=F_2(y,a)$$ Finally, integrating we get the required solution $$u=\int F_1(x,a)\,dx+\int F_2(y,a)\,dy$$ Current problem $$\begin{align} F_1(X,a)=&\sqrt{1+\frac{a}{X}},\quad F_2(Y,a)=\sqrt{1-\frac{a}{Y}}\\ \implies U=&\int F_1(X,a)\,dX+\int F_2(Y,a)\,dY\\ \implies u^2=& 2\int\sqrt{x^2+a}\,dx+2\int\sqrt{y^2-a}\,dy\\ =& b+x\sqrt{x^2+a}+a\ln|x+\sqrt{x^2+a}|+y\sqrt{y^2-a}-a\ln|y+\sqrt{y^2-a}| \end{align}$$ (the logarithms carry the coefficient $a$, since $2\int\sqrt{x^2+a}\,dx=x\sqrt{x^2+a}+a\ln|x+\sqrt{x^2+a}|$). Here we have assumed $a>0$ and $b$ is an arbitrary constant.
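As a sanity check of mine (not part of the original answer): the closed form $u^2=b+x\sqrt{x^2+a}+a\ln(x+\sqrt{x^2+a})+y\sqrt{y^2-a}-a\ln(y+\sqrt{y^2-a})$, with coefficient $a$ on the logarithms (which is what $2\int\sqrt{x^2+a}\,dx$ produces), can be verified numerically against the PDE $u^2(p^2+q^2)=x^2+y^2$ via central finite differences, at sample values of the constants.

```python
import math

a, b = 1.0, 5.0          # sample separation constant and additive constant

def U(x, y):
    """u^2 from the separated solution; needs y*y > a and U > 0."""
    sx = math.sqrt(x * x + a)
    sy = math.sqrt(y * y - a)
    return (b + x * sx + a * math.log(x + sx)
              + y * sy - a * math.log(y + sy))

def u(x, y):
    return math.sqrt(U(x, y))

x0, y0, h = 2.0, 3.0, 1e-5
p = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)   # u_x by central difference
q = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)   # u_y
pde_residual = abs(u(x0, y0) ** 2 * (p * p + q * q) - (x0 * x0 + y0 * y0))
```

Analytically $U_x=2\sqrt{x^2+a}$ and $U_y=2\sqrt{y^2-a}$, so $u^2(p^2+q^2)=(x^2+a)+(y^2-a)=x^2+y^2$ exactly, and the numerical residual should be tiny.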
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integration of $\cos(x)/(5+\sin(x)^2)$ Is the following integration correct? Consider the integral $\int_{-\pi}^\pi \frac{\cos(x)}{5+\sin(x)^2} dx$. Substitute $y = \sin(x)$ then we have $\frac{dy}{dx} = \cos(x)$ and hence $dy = \cos(x) dx$ and we get $$\int_{-\pi}^\pi \frac{\cos(x)}{5+\sin(x)^2}dx = \int_{\sin(-\pi)}^{\sin(\pi)} \frac{1}{5+y^2}dy = \int_{0}^{0} \frac{1}{5+y^2}dy = 0.$$ The substitution seems a bit odd, but the result $0$ is correct. Thanks in advance :)
Hint: split the integral into two pieces $-\pi$ to $0$ and $0$ to $\pi$. Note the point symmetry of the integrand for the points $P_1=(-\pi/2,0)$ and $P_2=(\pi/2,0)$. You can achieve this by substitution $x=u\pm \pi/2$ and $dx = du$. You should see that you will get odd functions after the substitutions, which both evaluate to $0$. An alternative approach: Use the substitution $\sin(x)=\sqrt{5}u \implies \cos(x)dx=\sqrt{5}du$: $$\int_{-\pi}^\pi \frac{\cos(x)}{5+\sin(x)^2}dx=\frac{1}{\sqrt{5}}\int\frac{du}{1+u^2}=\frac{1}{\sqrt{5}}\arctan(u)=\frac{1}{\sqrt{5}}\arctan \left(\frac{\sin(x)}{\sqrt{5}}\right)\biggr|_{-\pi}^{\pi}=0$$
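Both routes are easy to confirm numerically; the sketch below (my own check, not part of the hint) computes the integral with Simpson's rule and also evaluates the antiderivative $F(x)=\frac{1}{\sqrt5}\arctan\!\big(\frac{\sin x}{\sqrt5}\big)$, which is differentiable on all of $[-\pi,\pi]$, at the endpoints.

```python
import math

def f(x):
    return math.cos(x) / (5 + math.sin(x) ** 2)

# composite Simpson's rule on [-pi, pi]
n = 400
a, b = -math.pi, math.pi
h = (b - a) / n
s = f(a) + f(b)
for k in range(1, n):
    s += (4 if k % 2 else 2) * f(a + k * h)
integral = s * h / 3

def F(x):
    """Antiderivative (1/sqrt 5) * arctan(sin x / sqrt 5), valid everywhere."""
    return math.atan(math.sin(x) / math.sqrt(5)) / math.sqrt(5)

ftc_value = F(b) - F(a)
```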
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Linear System of Equations I've been given: $$ \begin{array}{rcrcrcr} x & + &2y &+& z &=& 3\\ x & +& y &+& 2z &=& 2\\ & & y&-& z &=& 1 \end{array} $$ Let $S$ be the set of all triples that are solutions to the system above and $$T = {(x,y,z)}={(1-3t, 1+t, t)} $$ I need to show that $S=T$. So, I tried solving for the system of equations first which got me nowhere because solving the first two equations gives us the third equation. So that was a bust. Also, I'm not sure what to do with T?
(1) First of all, do you know how to solve a linear system by Gaussian elimination? (2) To show $T=S$, you could show $T\subset S$ and $S\subset T$. One of the directions is trivial; can you see why? (3) Note that $$ T = \{(1-3t, 1+t, t) \, | \, t \in \mathbb{R}\}=\{u+tv\mid t\in\mathbb{R}\} $$ where $u=(1,1,0)$ and $v=(-3,1,1)$. Can you see why you should expect the solution space of $Ax=0$, where $A$ is the coefficient matrix of your linear system, to be of dimension $1$? (4) Do you know how to systematically solve $Ax=0$?
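For the easy direction $T\subseteq S$ one can even check mechanically that the parametrization satisfies all three equations for any $t$; a quick sketch:

```python
def point(t):
    """The parametrized triple (1 - 3t, 1 + t, t) from T."""
    return (1 - 3 * t, 1 + t, t)

def satisfies(x, y, z):
    """All three equations of the system, up to floating-point tolerance."""
    return (abs(x + 2 * y + z - 3) < 1e-12
            and abs(x + y + 2 * z - 2) < 1e-12
            and abs(y - z - 1) < 1e-12)

all_ok = all(satisfies(*point(t)) for t in (-3.5, -1.0, 0.0, 0.25, 2.0, 10.0))
```

Symbolically the check is immediate: $(1-3t)+2(1+t)+t=3$, $(1-3t)+(1+t)+2t=2$, and $(1+t)-t=1$, identically in $t$.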
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Number of different possibilities including repeats so in one of my math classes, we're learning about counting and using this operation called "n choose k" $\binom{n}{k}=\frac{n!}{k!(n-k)!}$, however this operation does not account for the possibilities of repeats, and answers with repeating variables. An analogy would be that you want to buy a pizza with the following conditions: * *up to 3 toppings on each pizza *7 toppings to choose from The pizza toppings must be unique—double or triple toppings are not allowed. The arrangement of the toppings on each pizza does not matter; e.g., tomatoes on top of pepperoni is the same as pepperoni on top of tomatoes. What is the total number of possibilities for a pizza order in this deal? And we learned that the answer would be $x=\binom{7}{0} + \binom{7}{1} + \binom{7}{2}+\binom{7}{3}=64$. However, how would you find how many pizzas you could order with double or triple toppings. So pretty much using this "choose" function, but allowing for those doubles and triples to come up in the count. I tried this manually, by writing down all the possibilities and I found that the "choose" did not account for a pretty significant amount of doubles and triples. Ex: $\binom{7}{3}=35$, and when I drew out all the possibilities for lets say the 3 toppings were A, B, C, D, E, F, G. I found there to be an extra 49 pizzas which could be accounted for (if there are repeated toppings, like AA, BB etc...). Another Ex: For $\binom{7}{2}=21$, if we draw the two by two table you can see that the diagonal is the only place where two of the same variables meet, and this "choose" function does not account for them. So essentially my question is, how do you account for these without doing what I did, by manually drawing out all the tables, and physically finding all the possibilities which can have 2 or 3 repeated values? Like is there a mathematical way of finding this? 
And to conclude from what I found: the $49$ extra three-topping pizzas already include the $7$ all-same ones (AAA, BBB, ..., GGG), and the two-topping doubles (AA, BB, ..., GG) add another $7$, so in total there would be $64+49+7=120$ ways to get a pizza under the relaxed condition.
So for example, if you want to compute the number of ways to choose $3$ toppings out of $7$, not necessarily distinct, where the order in which they are put on the pizza does not matter, here is how you could do it: (1) Choose three indices $i_1<i_2<i_3$ from $\{1,2,...,9\}$, since $7+3-1=9$. (2) Map $i_2\mapsto i_2-1$ and $i_3\mapsto i_3-2$. Then $i_1\leq i_2\leq i_3$ are chosen from $\{1,2,...,7\}$. This gives you $$ \binom{9}{3}=84 $$ instead of $$ \binom73=35 $$ thus accounting for the remaining $84-35=49$ other possibilities where some toppings are equal.
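The index-shifting map in the answer is exactly a bijection between strict triples from $\{1,\dots,9\}$ and weak (nondecreasing) triples from $\{1,\dots,7\}$; this sketch verifies it exhaustively.

```python
from itertools import combinations, combinations_with_replacement
from math import comb

strict = list(combinations(range(1, 10), 3))                    # i1 < i2 < i3 in {1..9}
relaxed = list(combinations_with_replacement(range(1, 8), 3))   # i1 <= i2 <= i3 in {1..7}

# the map (i1, i2, i3) -> (i1, i2 - 1, i3 - 2) from the answer
image = {(i1, i2 - 1, i3 - 2) for (i1, i2, i3) in strict}
```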
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do I show this without using Euler's formula: $e^{i\frac{\pi}{2}}-i=e^{i\pi}+1$? I would like to show the following without using Euler's formula: $$e^{i\frac{\pi}{2}}-i=e^{i\pi}+1.$$ Note: I multiplied both sides by $i$ but I didn't succeed. Edit: I edited the question because I meant Euler's formula, and the last display is the question in the title. Thank you for any help.
Without defining what $e^z, z \in \mathbb{C}$ is, none of $e^{i\pi}$, $e^{i\frac{\pi}{2}}$ makes sense. And, inevitably, this leads to Euler's formula. Now, depending on the level of the course, either (1) easy: define $e^{i\phi}=\cos{\phi}+i\sin{\phi}$, like this one, page 7, with some reasoning why this definition works, or (2) more difficult: define $e^z$ using power series and prove the relation. Also, using geometry and vector representations of complex numbers: $e^{i\pi}$ is the exponential (and polar) form of the vector $(-1, 0)$, and $(-1, 0) + (1, 0) = (0,0)$; $e^{i\frac{\pi}{2}}$ is the exponential (and polar) form of the vector $(0, 1)$ and $i=(0,1)$, so $(0, 1)-(0, 1)=(0,0)$. One may ask why "inevitably": it's because power series are used to define even more "exotic" exponentials, like $e^A$, where $A$ is a matrix.
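The power-series route in point (2) is easy to make concrete; the sketch below evaluates partial sums of $\sum_k z^k/k!$ at $z=i\pi$ and $z=i\pi/2$, after which both sides of the identity in the question are (numerically) zero.

```python
import math

def exp_series(z, terms=60):
    """Partial sum of the power series for e^z, valid for complex z."""
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)      # next term z^(k+1)/(k+1)!
    return total

e_i_pi = exp_series(1j * math.pi)            # should be close to -1
e_i_half_pi = exp_series(1j * math.pi / 2)   # should be close to  i
```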
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Let f be an arbitrary, twice differentiable function for which $f''\neq0$ Let $f$ be an arbitrary, twice differentiable function for which $f''\neq0$. The function $u(x,y)=f(x^2+axy+y^2)$ satisfies the equation $U_{xx}-U_{yy}=0$; find the constant $a$. My attempt: $f_{xx}(x^2+axy+y^2)(2x+ay)^2+2f_x(x^2+axy+y^2)-f_{yy}(x^2+axy+y^2)(2y+xa)^2-2f_y(x^2+axy+y^2)=0$
By the chain rule, $$U_{xx} = 2f'(x^2+axy+y^2) + (2x+ay)^2f''(x^2+axy+y^2)$$ and $$U_{yy} = 2f'(x^2+axy+y^2) + (2y+ax)^2f''(x^2+axy+y^2).$$ Since $U_{xx}-U_{yy}=0$, and $f''(x) \neq 0, \forall x \in D_f$, we have $$((2x+ay)^2-(2y+ax)^2)f''(x^2+axy+y^2) = 0 \Rightarrow 4x^2+4axy+a^2y^2 - 4y^2 -4axy - a^2x^2 = 0.$$ In particular, taking $x=1$ and $y=0$, we have $$4-a^2= 0$$ therefore the answer, if it exists, must be $2$ or $-2$; checking on the original equation, we see that both $a = 2$ and $a = -2$ are solutions.
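A numerical spot check of mine (with the arbitrary choice $f=\exp$, which has $f''\neq 0$): central second differences show the wave-equation residual $U_{xx}-U_{yy}$ vanishes for $a=\pm 2$ but not for, say, $a=1$.

```python
import math

def u(x, y, a, f=math.exp):
    """u(x, y) = f(x^2 + a x y + y^2) with a sample f having f'' != 0."""
    return f(x * x + a * x * y + y * y)

def wave_residual(a, x=0.3, y=0.1, h=1e-4):
    """U_xx - U_yy approximated by central second differences."""
    uxx = (u(x + h, y, a) - 2 * u(x, y, a) + u(x - h, y, a)) / (h * h)
    uyy = (u(x, y + h, a) - 2 * u(x, y, a) + u(x, y - h, a)) / (h * h)
    return uxx - uyy

r_plus2 = abs(wave_residual(2.0))    # should be ~0
r_minus2 = abs(wave_residual(-2.0))  # should be ~0
r_one = abs(wave_residual(1.0))      # should be clearly nonzero
```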
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Elementary proof for a generalized version of Pascal's hexagon theorem? I saw a generalization version of Pascal's theorem on a book, which is actually proved by deep algebraic geometry theorems (e.g., Cayley-Bacharach-Chasles theorem, or Bézout's theorem). I just wonder whether we can prove it by elementary/synthetic methods (e.g., perspective points, or invariance of cross-ratios and properties of second order point-rows)? Given six points $A,B,C,D,E,F$ on a conic, we take another point $G$ and draw two conics, one (denoted as $a$) constructed from $A,B,G,E,F$ and another (denoted as $b$) from $B,C,D,G,E$. Two conics $a$ and $b$ intersect on four points $B,G,E,H$. Suppose $AF\cap CD=O$. Prove: $G$, $O$ and $H$ are co-linear.
I don't know about a synthetic proof, but this theorem is the radical axis theorem, seen from a different projective point of view (i.e., gauged differently). Extend this configuration from the real projective plane to the complex one. Then by applying a complex projective transformation, send the two points $B$ and $E$ to the two complex points at infinity $[1:i:0]$ and $[1:-i:0]$ respectively. Then the three conics in the picture get transformed into three conics from the pencil defined by $[1:i:0]$ and $[1:-i:0]$, which means they become three circles when we restrict ourselves back to the real projective plane. The three lines determined by pairwise intersecting conics are mapped to the radical axes of the three corresponding pairs of circles. By the radical axis "theorem" the three radical axes are concurrent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1989954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving the following integral $\int_1^c\frac{1}{x} \cos(a_nx)\mathrm{d}x\ $ Solve the following integral $$\int_1^c\frac{1}{x} \cos(a_nx)\mathrm{d}x$$ Where $$a_{n}=\frac{(2n-1)\pi}{2c}$$ What I tried: I tried using integration by parts and I got an expression of the form $$\int_1^c\frac{2}{(a_n)^2x^3} \cos(a_nx)\mathrm{d}x +\frac{1}{ca_n}\sin(a_nc)-\frac{1}{a_n}\sin(a_n)$$ which complicates the expression rather than solving it. I then tried using wolfram alpha to solve the following integral but what it says is that the following integral cannot be solve by using the usual method found in calculus and it went on to solve using methods found in Complex Analysis (Which I dont quite get) and its final answer is in complex form. Could anyone explain to me how to solve the following integral and get the answer in the real form and not the complex form. Thanks The answer found in my answer key gives $$\frac{(-1)^{n+1}}{a_n}$$ Thus this the final answer that I must arrive at.
As said in comments and answers, the problem and the answer look rather strange. $$\int \frac{\cos(\alpha x)}x \,dx=\text{Ci}(\alpha x)$$ where appears the cosine integral function which cannot be expressed using any elementary function. $$\int_1^c \frac{\cos(\alpha x)}x \,dx=\text{Ci}(c \alpha )-\text{Ci}(\alpha )\qquad (\Re(c)\geq 0\lor c\notin \mathbb{R})$$ Replacing $\alpha$ by $a_n$ gives $$I_n=\int_1^c \frac{\cos(a_n x)}x \,dx=\text{Ci}\left(\frac{(2 n-1) \pi }{2}\right) -\text{Ci}\left(\frac{(2 n-1) \pi }{2 c}\right)$$ For $c=2$, I put the numerical values in the following table $$\left( \begin{array}{ccc} n & I_n & \frac{4 (-1)^{n+1}}{\pi (2 n-1)}\\ 1 & +0.286652 & +1.273240 \\ 2 & -0.529005 & -0.424413 \\ 3 & +0.252214 & +0.254648 \\ 4 & +0.052774 & -0.181891 \\ 5 & -0.013756 & +0.141471 \\ 6 & -0.146271 & -0.115749 \\ 7 & +0.110343 & +0.097942 \\ 8 & +0.021827 & -0.084883 \\ 9 & -0.011210 & +0.074896 \\ 10 & -0.083508 & -0.067013 \end{array} \right)$$ If you make a scatter plot of the results, you will notice that there is no clear relation between them.
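The table is easy to reproduce without the cosine integral function; this sketch of mine computes $I_1=\int_1^2 \cos(a_1 x)/x\,dx$ directly with Simpson's rule for $c=2$, matching the tabulated $0.286652$ and confirming the mismatch with the answer key's $4/\pi\approx 1.273240$.

```python
import math

def a_n(n, c):
    return (2 * n - 1) * math.pi / (2 * c)

def I(n, c, m=2000):
    """Composite Simpson approximation of ∫_1^c cos(a_n x)/x dx (m even)."""
    alpha = a_n(n, c)
    g = lambda x: math.cos(alpha * x) / x
    h = (c - 1) / m
    s = g(1.0) + g(c)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * g(1 + k * h)
    return s * h / 3

I1 = I(1, 2.0)        # tabulated value: 0.286652
key1 = 4 / math.pi    # answer key's claim for n = 1: about 1.273240
```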
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The idea behind the sum of powers of 2 I know that the sum of powers of $2$ is $2^{n+1}-1$, and I know the proof by mathematical induction. But does anyone know how $2^{n+1}-1$ comes up in the first place? For example, the sum of the first $n$ numbers is $\frac{n(n+1)}{2}$. The idea is that we replicate the set and put it in a rectangle, hence we can do the trick. What is the logic behind the sum-of-powers-of-$2$ formula?
Another geometric interpretation. Start with a line segment $AB$ and extend it into a line. Use a compass to mark off a point $C$ by setting the center at $B$ and passing the radius through $A$; then the lengths satisfy $AB=BC$. Create a point $D$ in a similar way. We have thus constructed a length double that of the original line segment, and this doubling can continue as much as you like. Now ignore the first line segment $AB$. The series of segments now starts with $BD$, which has double the length of $AB$. In general, the $n$th segment of the original series becomes the $(n-1)$th segment after removal of $AB$, and each is double the length of its corresponding section. So by subtracting a part we've doubled what remains, but with fewer sections to account for. This gives the formula.
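The doubling picture above says arithmetically that removing the unit segment from a sum of $n+1$ doublings leaves twice the sum of $n$ doublings; the identity $\sum_{k=0}^{n}2^k=2^{n+1}-1$ follows, and a one-line check confirms it.

```python
# verify 1 + 2 + 4 + ... + 2^n == 2^(n+1) - 1 for a range of n
ok = all(sum(2 ** k for k in range(n + 1)) == 2 ** (n + 1) - 1
         for n in range(20))
```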
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 11, "answer_id": 7 }
A Problem in Pigeonhole Principles. A computer network consists of six computers. Each computer is directly connected to at least one of the other computers. Show that there are at least two computers in the network that are directly connected to the same number of other computers. I want to solve this problem with pigeonhole principle, but I can't do it. Please give me a hint or answer.
Note: the most computers any one computer can connect to is 5. There just aren't any other computers to connect to. And since each computer is directly connected to at least one other, the possible connection counts are $1,2,3,4,5$. So you have 5 possible numbers of connections, and 6 computers, so... pigeonhole!
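The claim is small enough to verify exhaustively: there are only $2^{15}$ graphs on six computers, and none with minimum degree at least $1$ has six distinct degrees. A brute-force sketch:

```python
from itertools import combinations

edges = list(combinations(range(6), 2))   # the 15 possible direct connections

def violates(mask):
    """True if this graph has min degree >= 1 yet all six degrees distinct."""
    deg = [0] * 6
    for i, (u, v) in enumerate(edges):
        if mask >> i & 1:
            deg[u] += 1
            deg[v] += 1
    return min(deg) >= 1 and len(set(deg)) == 6

counterexamples = sum(violates(m) for m in range(1 << 15))
```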
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simplifying Factorials in Mathematical Induction I'm trying to understand an example for proof by mathematical induction from "Book of Proof (PDF)" (pg 158). I understand the basis step, but I'm not following the simplification in the inductive step, which I've copied below. Suppose $\sum_{i=0}^{k}i.i!=(k+1)!-1$. Then: $\sum_{i=0}^{k+1}i.i!=(\sum_{i=0}^{k}i.i!)+(k+1)(k+1)!$ = $((k+1)!-1)+(k+1)(k+1)!$ = $(k+1)!+(k+1)(k+1)!-1$ = $(1+(k+1))(k+1)!-1$ = $(k+2)(k+1)!-1$ = $(k+2)!-1$ = $((k+1)+1)!-1$ From lines 3 to 4 and from 5 to 6 it looks like $(k+1)!$ simplifies to $1$, but how?
Nothing simplifies to $1$ here; the step you are missing is factoring out $(k+1)!$. Suppose $\sum_{i=0}^{k}i\cdot i!=(k+1)!-1$. Then $\sum_{i=0}^{k+1}i\cdot i!=\left(\sum_{i=0}^{k}i\cdot i!\right)+(k+1)(k+1)! = ((k+1)!-1)+(k+1)(k+1)! = (k+1)!+(k+1)(k+1)!-1$. Now factor $(k+1)!$ out of the first two terms to get $(1+(k+1))(k+1)!-1 = (k+2)(k+1)!-1$. Finally, since $(k+2)(k+1)! = (k+2)!$, this equals $(k+2)!-1 = ((k+1)+1)!-1$, which completes the inductive step.
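A quick sanity check of the identity $\sum_{i=0}^{n} i\cdot i! = (n+1)!-1$ over a range of $n$:

```python
from math import factorial

# check sum_{i=0}^{n} i * i! == (n+1)! - 1 for n = 0, 1, ..., 11
ok = all(sum(i * factorial(i) for i in range(n + 1)) == factorial(n + 1) - 1
         for n in range(12))
```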
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show $x^4+x^2+x+1$ is irreducible in $\mathbb{F}_5[x]$ Show $x^4+x^2+x+1$ is irreducible in $\mathbb{F}_5[x]$. I need to make use of Eisenstein's criterion, but I'm not sure how.
Hint: First check that $x^4+x^2+x+1$ has no roots in $\mathbb{F}_5$, so it has no linear factors. Then the only remaining possibility is a product of two (WLOG monic) quadratics: write $x^4+x^2+x+1=(x^2+ax+c)(x^2+bx+d)$ and solve for $a,b,c,d\in \mathbb{F}_5$, showing no solution exists.
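The two checks in the hint are finite, so they can be done by brute force; this sketch evaluates the polynomial at all five field elements and expands all $5^4$ monic quadratic pairs (monic is no loss of generality over a field, since unit factors can be absorbed).

```python
target = [1, 0, 1, 1, 1]   # coefficients of x^4 + 0x^3 + x^2 + x + 1, high to low

def f(t):
    return (t ** 4 + t ** 2 + t + 1) % 5

has_root = any(f(t) == 0 for t in range(5))

def quad_product(a, c, b, d):
    """(x^2 + a x + c)(x^2 + b x + d) over F_5, coefficients high to low."""
    return [1, (a + b) % 5, (c + d + a * b) % 5, (a * d + b * c) % 5, (c * d) % 5]

splits = [(a, c, b, d)
          for a in range(5) for c in range(5)
          for b in range(5) for d in range(5)
          if quad_product(a, c, b, d) == target]
```

An empty `splits` together with `has_root` being false establishes irreducibility over $\mathbb{F}_5$.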
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Pointwise convergence vs norm convergence in $X^*$ Let $X$ be a normed space, $X^*$ its dual space and $\left\|{T}\right\|=\sup\{|T(x)|:\left\|x\right\|=1\}$, the usual norm in $X^*$. Let $(T_n)\subseteq X^*$. It is true that if $T_n\to 0$ in $X^*$ then $T_n$ converges pointwise to zero, because for every $x\in X$ we have $|T_n(x)|\le \left\|{T_n}\right\|\left\|{x}\right\|$. The converse is not true. However, in all counterexamples I've seen we have $\dim X=\infty$. Could it be true if $\dim X<\infty$? If not, could anyone help me to find a counterexample even in this case? Thank you.
When $X $ is infinite-dimensional, the topology of pointwise convergence in $X^*$ can never agree with the topology of norm-convergence. Because the former is weak$^*$-convergence. And in the weak$^*$ topology, the unit ball is compact. In the norm topology, though, the unit ball is compact precisely when $X $ is finite-dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Closed Convex Set and convex combination with parameter $\frac{1}{2}$ I'm trying to show that given $C$, a closed set: C is convex $\Leftrightarrow \forall \{x,y\} \subseteq C: \frac{1}{2}x + \frac{1}{2}y \in C$ But i'm stuck at the $\leftarrow$ part. I'm trying to prove by contradiction: $C$ is closed, $\forall \{x,y\} \subseteq C: \frac{1}{2}x + \frac{1}{2}y \in C$ and $C$ is not convex. If $C$ is not convex, $\exists \lambda \in [0,1], x,y \in C: \lambda x+(1-\lambda)y \notin C$ But can't see how to proceed. An alternative approach that i thought was saying that if i take the average point between $x$ and $y$, infinitely(since $\forall \{x,y\} \subseteq C: \frac{1}{2}x + \frac{1}{2}y \in C$) then the whole segment between $x$ and $y$ belongs to $C$, hence $C$ is convex. But i'm not sure this is true(i think that maybe there are gaps between $x$ and $y$). What's the best approach? Thanks in advance.
Hint: Let $(x,y) \in C^2$, $\tau \in [0,1]$. We want to show that $x + \tau(y-x) \in C$. Let $\epsilon > 0$. Given that $x + j\cdot 2^{-n}(y-x) \in C$ for all $n$ and $j \in \{0,\dots, 2^n\}$ (why ?), we're hoping to find $j_\epsilon$ and $n_\epsilon$ such that: \begin{align*} || x + \tau(y-x) - (x + j_\epsilon \cdot 2^{-n_\epsilon}(y-x))|| = ||(\tau - j_\epsilon \cdot 2^{-n_\epsilon})(y - x) || < \epsilon. \end{align*} Such $j_\epsilon$ and $n_\epsilon$ exist as the bisection method reminds us, therefore given any $\epsilon >0$,$ B(x + \tau(y-x), \epsilon) \bigcap C \neq \emptyset$, yielding that $x + \tau(y-x) \in \text{cl}(C) = C$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Differential equation $f''(x)+2 x f(x)f'(x) = 0$ I am trying to solve, $ f''(x)+2 x f(x)f'(x) = 0$ with boundary conditions $f(-\infty)=1$ and $f(\infty)=0$. I have found that for instance $f(x) = 3/2 x^{-2}$ but obviously it does not satisfy the proper boundary conditions. Any ideas for a solution?
Hint: This belongs to a generalized Emden–Fowler equation. $f''(x)+2xf(x)f'(x)=0$ $\dfrac{d^2f}{dx^2}=-2xf\dfrac{df}{dx}$ $\therefore\dfrac{d^2x}{df^2}=2fx\left(\dfrac{dx}{df}\right)^2$ Follow the method in http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=377: Let $\begin{cases}y=\dfrac{df}{dx}\\t=f^2\end{cases}$ , Then $\dfrac{d^2t}{dy^2}=2y^{-1}t^{-\frac{1}{2}}\left(\dfrac{dt}{dy}\right)^3$ $\therefore\dfrac{d^2y}{dt^2}=-2t^{-\frac{1}{2}}y^{-1}$ Which reduces to an Emden–Fowler equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Proof of invertibility of $I+S$ such that $S^* = -S$ We're given a linear transformation $S:V \rightarrow V$ on a finite dimensional inner product space $V$ such that its adjoint is negative $S$ (i.e. $S^* = -S$). We are able to prove that $(v+Sv,v+Sv)=(v,v) + (Sv,Sv)$ and that the kernel of $(I+S) = \{\mathbf{0}\}$. Using these facts how can we deduce that the linear transformation $I+S$ is invertible? Any help would be much appreciated!
If $T \colon V \rightarrow V$ is a linear map on a finite dimensional vector space with trivial kernel, then $\dim \ker T + \dim \operatorname{Im} (T) = \dim V$ implies that $\dim \operatorname{Im}(T) = \dim V$, and so $T$ is also onto and invertible. In your case, since you have shown that $I + S$ has trivial kernel, it follows that $I + S$ is invertible.
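In the real case, $S^*=-S$ means $S$ is skew-symmetric, and for a $3\times 3$ skew-symmetric matrix one can check directly that $\det(I+S)=1+a^2+b^2+c^2>0$, so $I+S$ is invertible; a small sketch with sample entries:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a, b, c = 1.5, -2.0, 0.7
I_plus_S = [[1.0,  a,   b],
            [-a,  1.0,  c],
            [-b,  -c,  1.0]]
det_val = det3(I_plus_S)
```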
{ "language": "en", "url": "https://math.stackexchange.com/questions/1990943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If there is $a>0$ s.t. $f|[0,a]$ and $f|[a, +\infty)$ are uniformly continuous, $f:[0, +\infty) \rightarrow \mathbb{R}$ also is Given $f:[0, +\infty) \rightarrow \mathbb{R}$, suppose that exists $a>0$ such that $f|[0,a]$ and $f|[a, +\infty)$ are uniformly continuous. Prove that $f$ is uniformly continuous. Any hints on how to go about this? I've tried using the definition, but this is not like proving if $f$ and $g$ are u.c., then $f+g$ is as well. The best result I know is that if $f$ and $g$ are uniformly continuous and the codomain of $f$ is the same as the domain of $g$, $g \circ f$ is also uniformly continuous. I'm not looking for a particular proof of it all, just a potential proposition that could help me see it.
The following Lemma is well-known and easy to prove (see, for instance, “General topology” by Ryszard Engelking (Heldermann Verlag, Berlin, 1989)). Let $X$ and $Y$ be topological spaces, $\mathcal F$ be a (locally) finite cover of the space $X$ by its closed subspaces. Given a family $\{f_F:F\to Y \mid F\in\mathcal F\}$ of continuous functions such that for each $F,F'\in\mathcal F$ and each point $x\in F\cap F'$ we have $f_F(x)=f_{F'}(x)$, the function $f:X\to Y$ such that $f(x)=f_F(x)$ for any point $x\in X$ and any set $F\in\mathcal F$ with $x\in F$ is continuous. Applying the Lemma to the finite closed cover $\mathcal F=\{[0,a],[a,+\infty)\}$ of the space $X=[0,+\infty)$ we see that the function $f$ is continuous. In particular, the restriction $f|[0,a+1]$ of the function $f$ to the subspace $[0, a+1]$ of the space $[0,+\infty)$ is continuous. Since the set $[0, a+1]$ is compact, the function $f|[0,a+1]$ is uniformly continuous. Now let $\varepsilon>0$ be an arbitrary number. Uniform continuity of the functions $f|[0,a+1]$ and $f|[a,+\infty)$ implies that there exist numbers $0<\delta_1, \delta_2<1$ such that if $0\le x,y\le a+1$ and $|x-y|<\delta_1$ then $|f(x)-f(y)|<\varepsilon$, and if $a\le x,y$ and $|x-y|<\delta_2$ then $|f(x)-f(y)|<\varepsilon$. If we put $\delta=\min\{\delta_1,\delta_2\}$ then we have $|f(x)-f(y)|<\varepsilon$ for each $0\le x,y$ such that $|x-y|<\delta$ (indeed, if $\min\{x,y\}<a$ then, since $\delta<1$, both points lie in $[0,a+1]$; otherwise both lie in $[a,+\infty)$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Inclusion exclusion: How many bit strings of length eight do not contain six consecutive $0$'s? Q. How many bit strings of length eight do not contain six consecutive $0$s? I solved this problem with my hand. $$256 - ( 1 + 2 + 2 + 1 + 2 ) = 248$$ I calculated all possible events. Is my answer right? And can this problem be solved by the inclusion-exclusion principle?
I think your answer is wrong but the idea is good. The number to subtract can indeed be found using the inclusion-exclusion principle, for 3 sets. Can you explain why you subtracted those numbers? EDIT: Apologies - I was mistaken. See N. F. Taussig's answer and comment.
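The retraction can be confirmed both exhaustively and by inclusion-exclusion over the three possible starting positions of a six-zero run; a short sketch:

```python
# exhaustive count of length-8 bit strings with no six consecutive zeros
good = sum(1 for v in range(256) if '000000' not in format(v, '08b'))

# inclusion-exclusion: A_i = "a run of six zeros starts at position i", i = 1, 2, 3
singles = 3 * 2 ** 2       # |A_1| + |A_2| + |A_3|: two free bits each
pairs = 2 + 2 + 1          # |A_1∩A_2| and |A_2∩A_3|: one free bit; |A_1∩A_3|: none
triple = 1                 # |A_1∩A_2∩A_3|: the all-zero string
bad = singles - pairs + triple
```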
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Cholesky decomposition vs. $\mathbf L\mathbf D\mathbf L^\top$ decomposition In different books and on Wikipedia, you see frequent mentions of the Cholesky decomposition, while the $\mathbf L\mathbf D\mathbf L^\top$ decomposition is seldom mentioned. Why so? As far as I understand, the $\mathbf L\mathbf D\mathbf L^\top$ decomposition can be applied to a broader range of matrices (we don't need the matrix to be positive-definite) and at the same computational price.
A frequent reason for preferring $\mathbf L\mathbf D\mathbf L^\top$ over Cholesky is that the latter requires the evaluation of square roots, while the former does not. In symbolics, where one might want to keep the entries radical-free, or in numerics, where the square root operation on a particular machine might be slower than division, this can be a deciding factor. As for positive-definiteness: it is true that $\mathbf L\mathbf D\mathbf L^\top$ can work on matrices that aren't positive definite, but it is no longer guaranteed to be numerically stable. Also, without an extra operation like pivoting, $\mathbf L\mathbf D\mathbf L^\top$ will fail on matrices such as $$\begin{pmatrix}0&1\\1&0\end{pmatrix}$$ or in general, matrices with a singular leading submatrix. (This got too long for a comment.)
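To make the "no square roots" point concrete, here is a textbook unpivoted $\mathbf L\mathbf D\mathbf L^\top$ sketch of mine (a teaching illustration, not production code; it assumes the leading submatrices are nonsingular), applied to a small symmetric positive-definite example:

```python
def ldlt(A):
    """Unpivoted LDL^T factorization of a symmetric matrix; no square roots.
    Returns (L, D) with L unit lower triangular and D the list of pivots."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k]
                                     for k in range(j))) / D[j]
    return L, D

A = [[4.0, 2.0, 2.0],
     [2.0, 3.0, 1.0],
     [2.0, 1.0, 3.0]]
L, D = ldlt(A)

# reconstruct L D L^T and measure the worst entrywise error
n = len(A)
recon = [[sum(L[i][k] * D[k] * L[j][k] for k in range(n)) for j in range(n)]
         for i in range(n)]
err = max(abs(recon[i][j] - A[i][j]) for i in range(n) for j in range(n))
```

For a positive-definite input all pivots in `D` come out positive; for an indefinite input the unpivoted loop can divide by zero, which is exactly the stability caveat in the answer.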
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding if two sets are equal I have a formula $(A\cap C) \cup ( B \cap C) = C - ( A \cap B)$ and I have to prove whether this equation is correct. I transformed it into $(\forall x\in U: x \in A \wedge x\in C) \vee ( x\in B\wedge x\in C) = x \in C \wedge ( x \notin A \wedge x\notin B)$. How could I further transform the equation in order to find out whether it's true or not?
Instead of transforming the equation, just go after the English meaning: those in A and C or in B and C are those in C but not in both A and B. This is obviously not true: consider A = B.
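The suggested counterexample is easy to check mechanically; a small Python sketch (my addition, with arbitrary sample sets):

```python
# Counterexample with A = B, as the answer suggests.
A = B = {1, 2}
C = {1, 2, 3}

lhs = (A & C) | (B & C)  # (A ∩ C) ∪ (B ∩ C)
rhs = C - (A & B)        # C − (A ∩ B)
print(lhs, rhs)  # {1, 2} {3}
```

The two sides even have empty intersection here, so the proposed identity fails badly.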
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
real values of $a$ for which the range of function $ f(x) = \frac{x-1}{1-x^2-a}$ does not contain value from $\left[-1,1\right]$ All real values of $a$ for which the range of function $\displaystyle f(x) = \frac{x-1}{1-x^2-a}$ does not contain any value from $\left[-1,1\right]$ $\bf{My\; Try::}$ Let $\displaystyle y = \frac{x-1}{1-x^2-a}\Rightarrow y-x^2y-ay=x-1$ So $$x^2y+x+y(1-a)-1=0$$ Now for real values $y<-1$ or $y>1$ the equation must have real roots in $x$. So $$1-4y^2(1-a)\geq 0\Rightarrow 1\geq 4y^2(1-a)\Rightarrow y^2\leq \frac{1}{4(1-a)}\;, a\neq 1$$ Now how can I solve it after that? Help required, thanks
Given $$f(x)=\dfrac{x-1}{1-x^2-a}$$ for some $a$ notice that if for $x\ne0$ you multiply both the numerator and denominator by $\dfrac{1}{x}$ you obtain $$ f(x)=\dfrac{1-\frac{1}{x}}{\frac{1}{x}-x-\frac{a}{x}}\text{ for }x\ne0 $$ Now notice what happens to the value of $f(x)$ for large values of $x$. Regardless of the size of $a$ one may make both $\frac{1}{x}$ and $\frac{a}{x}$ as small as one pleases, so small that they are negligible. Thus for sufficiently large positive or negative values of $x$ the graph of $f(x)$ approaches the graph of $y=-\frac{1}{x}$ which has the $x$ axis as a horizontal asymptote. Thus, the $x$-axis is also the horizontal asymptote of $f(x)$. So regardless of the value of the variable $a$, the range of $f$ must always contain numbers in the interval $[-1,1]$. Once you get to calculus there is an even easier way to show that the graph of $f$ approaches the $x$-axis as $x$ approaches either $+\infty$ or $-\infty$.
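A quick numerical illustration of this asymptotic argument (my addition; the sample values of $a$ and $x$ are arbitrary): for any $a$, the values of $f$ at large $x$ are already far inside $[-1,1]$:

```python
def f(x, a):
    return (x - 1) / (1 - x**2 - a)

# For every a, |f(x)| behaves like 1/x for large x, hence lands in [-1, 1].
for a in (-100.0, 0.0, 0.5, 100.0):
    print(a, f(10**6, a))  # all outputs are of size about 1e-6
```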
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Simple Proof of Obvious Fact. Have you any idea of how to prove the following as simply as possible? Let $M$ be a family of $n$ finite sets: $$M = \{X_1, X_2, ..., X_n\}$$ Then $$\bigl(\forall k = 1, 2, ..., n : |X_{i_{1}}\cup X_{i_{2}}\cup\cdots\cup X_{i_{k}}|\geq k\bigr)\Rightarrow\bigl(\exists \chi_1,\chi_2,...\chi_n : \chi_i\neq\chi_j,\,\chi_i\in X_i\bigr)$$ In other words: if the union of any $k$ sets has at least $k$ elements, then there exist $n$ different elements which represent the whole family $M$.
Your obvious fact is a harder part of a generalization of Hall’s Marriage Theorem, proved in 1935 and it probably has no very simple proof (at least I was not told about it in my graph theory lecture course).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Permutation Groups - Rewrite as disjoint cycles / transpositions Why is this true? "(1243)(3521): Written as a product of disjoint cycles, we get (354)." Shouldn't (1243)(3521) = (354)(21)? Where did the (21) go? Thanks!!
The cycle $(3521)$ sends $3$ to $5$, and the cycle $(1243)$ does nothing to $5$, so the product sends $3$ to $5$. The cycle $(3521)$ sends $5$ to $2$, which $(1243)$ sends to $4$, so the product sends $5$ to $4$. The cycle $(3521)$ leaves $4$ alone, and $(1243)$ sends it to $3$, so the product sends $4$ to $3$. This establishes that the product has a cycle $(354)$. Now what about $1$ and $2$? The cycle $(3521)$ sends $1$ to $3$, and $(1243)$ sends $3$ right back to $1$, so the product leaves $1$ fixed. The cycle $(3521)$ sends $2$ to $1$, and $(1243)$ sends $1$ right back to $2$, so the product leaves $2$ fixed. Thus, the product is $(1)(2)(354)=(354)$, since we don’t normally write out the $1$-cycles (fixed points) explicitly.
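The element chase above can be automated; here is a small sketch (my addition) that composes the two cycles right-to-left, applying $(3521)$ first exactly as in the answer:

```python
def cycle_map(cycle, n=5):
    # dict sending each of 1..n to its image under the given cycle
    m = {x: x for x in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

first = cycle_map((3, 5, 2, 1))   # (3521), applied first
second = cycle_map((1, 2, 4, 3))  # (1243), applied second
prod = {x: second[first[x]] for x in range(1, 6)}
print(prod)  # {1: 1, 2: 2, 3: 5, 4: 3, 5: 4}, i.e. the cycle (354)
```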
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prob. 14, Chap. 3, in Baby Rudin: The arithmetic mean of a complex sequence Here's Prob. 14, Chap. 3 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition: If $\left\{ s_n \right\}$ is a complex sequence, define its arithmetic mean $\sigma_n$ by $$ \sigma_n = \frac{s_0 + s_1 + \cdots + s_n}{n+1} \ \ \ (n = 0, 1, 2, \ldots). $$ (a) If $\lim s_n = s$, prove that $\lim \ \sigma_n = s$. (b) Construct a sequence $\left\{ s_n \right\}$ which does not converge, although $\lim \ \sigma_n = 0$. (c) Can it happen that $s_n > 0$ for all $n$ and that $\lim \sup s_n = \infty$, although $\lim \ \sigma_n = 0$? (d) Put $a_n = s_n - s_{n-1}$, for $n \geq 1$. Show that $$s_n - \sigma_n = \frac{1}{n+1} \sum_{k=1}^n k a_k.$$ Assume that $\lim \left( n a_n \right) = 0$ and that $\left\{ \sigma_n \right\}$ converges. Prove that $\left\{ s_n \right\}$ converges. [This gives a converse of (a), but under the aditional assumption that $n a_n \to 0$. ] (e) Derive the last condition from a weaker hypothesis: Assume $M < \infty$, $\left\vert n a_n \right\vert \leq M$ for all $n$, and $\lim \ \sigma_n = \sigma$. Prove that $\lim s_n = \sigma$, by completing the following outline: If $m < n$, then $$s_n - \sigma_n = \frac{m+1}{n-m} \left( \sigma_n - \sigma_m \right) + \frac{1}{n-m} \sum_{i=m+1}^n \left( s_n - s_i \right).$$ For these $i$, $$ \left\vert s_n - s_i \right\vert \leq \frac{(n-i)M}{i+1} \leq \frac{(n-m-1)M}{m+2}.$$ Fix $\varepsilon > 0$ and associate with each $n$ the integer $m$ that satisfies $$ m \leq \frac{n-\varepsilon}{1+ \varepsilon} < m +1.$$ Then $\frac{m+1}{n-m} \leq \frac{1}{\varepsilon}$ and $\left\vert s_n - s_i \right\vert < M \varepsilon$. Hence $$ \lim_{n\to\infty} \sup \left\vert s_n - \sigma \right\vert \leq M \varepsilon.$$ Since $\varepsilon$ was arbitrary, $\lim s_n = \sigma$. 
My effort: Part (a): If $\lim s_n = s$, then we can find a natural number $N$ such that $n > N$ implies that $$ \left\vert s_n - s \right\vert < 1.$$ So, for $n > N$, we have \begin{align} \left\vert \sigma_n - s \right\vert &= \left\vert \frac{ s_0 + s_1 + \cdots + s_n}{n +1} - s \right\vert \\ &\leq \frac{1}{n+1} \left( \left\vert s_0 - s \right\vert + \cdots + \left\vert s_N - s \right\vert + \cdots + \left\vert s_n - s \right\vert \right) \\ &\leq \frac{1}{n+1} \left( (N+1) \max \left\{ \left\vert s_0 - s \right\vert, \ldots, \left\vert s_N - s \right\vert \right\} + \left\vert s_{N+1} - s \right\vert + \cdots + \left\vert s_n - s \right\vert \right) \\ &< \frac{1}{n+1} \left( (N+1) \max \left\{ \left\vert s_0 - s \right\vert, \ldots, \left\vert s_N - s \right\vert \right\} + (n-N) \right) \end{align} Let $\varepsilon > 0$ be given. What next? Part (b): Let $s_n = (-1)^{n+1}$ for $n = 0, 1, 2, \ldots$. Then $$ \sigma_n = \begin{cases} -\frac{1}{n+1} \ \mbox{ if } n \mbox{ is even}; \\ 0 \ \mbox{ if } n \mbox{ is odd}. \end{cases} $$ Then $\left\{ s_n \right\}$ fails to converge, but $\lim \ \sigma_n = 0$. Is this example correct? Part (c): My feeling is the answer is no, but I cannot establish this rigorously. How to? Part (d): If we put $a_n = s_n - s_{n-1}$, for $n \geq 1$, then $s_n = s_0 + a_1 + \cdots + a_n$, and so \begin{align} s_n - \sigma_n &= s_n - \frac{ s_0 + s_1 + \cdots + s_n}{n+1} \\ &= \frac{ (n+1) s_n - s_0 - s_1 - \cdots - s_n }{n+1} \\ &= \frac{ (n+1) \left( s_0 + a_1 + \cdots + a_n \right) - s_0 - \left( s_0 + a_1 \right) - \left( s_0 + a_1 + a_2 \right) - \cdots - \left( s_0 + a_1 + \cdots + a_n \right) }{n+1} \\ &= \frac{ a_1 + 2 a_2 + \cdots + n a_n }{n+1}. \end{align} Now we assume that $\lim_{n \to \infty} n a_n = 0$ and that $\left\{ \sigma_n \right\}$ converges. How to show that $\left\{ s_n \right\}$ converges? 
Part (e): If $m < n$, then \begin{align} & \frac{ m+1 }{ n-m } \left( \sigma_n - \sigma_m \right) + \frac{1}{n-m} \sum_{i = m+1}^n \left( s_n - s_i \right) \\ &= \frac{ m+1 }{ n-m } \left( \frac{ s_0 + s_1 + \cdots + s_n}{n+1} - \frac{ s_0 + s_1 + \cdots + s_m}{m+1} \right) + \frac{1}{n-m} \sum_{i = m+1}^n \left( s_n - s_i \right) \\ &= s_n + \frac{1}{n-m} \left[ (m+1) \frac{ s_0 + \cdots + s_n}{n+1} - \left( s_0 + \cdots + s_n \right) \right] \\ &= s_n - \sigma_n. \end{align} How to proceed from here?
For (c): Consider $s_{2^m} = m, m = 1,2, \dots, s_n = 0$ for all other $n.$ (We don't have $s_n >0$ for all $n,$ but it will give you the idea.)
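A numeric sketch of this idea (my addition): the terms $s_{2^m}=m$ are unbounded, yet the arithmetic mean up to $N=2^{20}$ is already tiny:

```python
# s_k = m when k = 2^m, and 0 otherwise.
N = 2**20
s = [0.0] * (N + 1)
m = 1
while 2**m <= N:
    s[2**m] = float(m)
    m += 1

sigma = sum(s) / (N + 1)  # the Cesàro mean at n = N
print(max(s), sigma)  # max grows like log2(N); sigma is about 2e-4
```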
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$f(1+x) = f(x)$ for all real $x$, $f$ is a polynomial and $f(5) = 11$. What is $f\large(\frac{15}{2}\large)$? I was looking through a GRE math subject test practice test (here) and in particular I was confused regarding this question I chose E because I thought that (since the problem didn't specify) it could be an infinite polynomial that becomes the Taylor expansion of $f(x) = A \sin (x\pi)+11$, where $A$ can be anything. The correct answer is apparently C. Is there a reason (other than my probably unwarranted assumption) that the answer HAS to be C?
$$f(x) = a_0 + a_1x^1 + \cdots + a_nx^n$$ $$f(x+1) = a_0 + a_1(x+1)^1 + \cdots + a_n(x+1)^n$$ It can be shown by induction that $$f(x) = f(x+1) \to a_1=a_2=\cdots=a_n=0$$ Note that an infinite series $$f(x) = a_0 + a_1x^1 + \cdots$$ is not a polynomial: a polynomial has only finitely many terms, so the Taylor series of $A \sin (x\pi)+11$ does not qualify.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1991889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Basis for a topology on $\mathbb{R}$ Let $B = \{[a, b] \mid \forall\, a, b \in \mathbb{R}, a < b\}$. Is $B$ a basis for some topology on $\mathbb{R}$? Is it true that the set of all closed subsets in $\mathbb{R}$ generates a basis for some topology on $\mathbb{R}$? I know from previous assignments that the lower limit topology with half open intervals generates a topology but I doubt that the closed intervals would do the same. However I am not sure how to prove this.
Let $X$ be a set and $\mathcal B$ some family of subsets of $X$. Then there exists a unique topology with basis $\mathcal B$ if and only if: $(1)$ $\bigcup\mathcal B = X$ $(2)$ $(\forall B_1,B_2\in\mathcal B)(\forall x\in B_1\cap B_2)(\exists C_x\in\mathcal B)\ x\in C_x\ \wedge\ C_x\subseteq B_1\cap B_2$ Proof. One direction is trivial since any open set is a union of elements of the basis. Assume that $X$ is a set and $\mathcal B$ is a family of subsets of $X$ satisfying $(1)$ and $(2)$. Define $\tau = \{ \bigcup S\mid S\subseteq \mathcal B\}$. Then, obviously $\emptyset$ and $X$ are in $\tau$ and $\tau$ is closed under arbitrary unions. To see that it is closed under finite intersections, it is enough to prove that for any $B_1, B_2\in\mathcal B$ we have $B_1\cap B_2\in\tau$. But this is a direct consequence of $(2)$, since $B_1\cap B_2=\bigcup_{x\in B_1\cap B_2} C_x$. I will leave the uniqueness to you (either show that two topologies with the same basis are the same or that $\tau$ defined above is the intersection of all topologies on $X$ containing $\mathcal B$).$\tag*{$\square$}$ Your family $B$ satisfies $(1)$, but not $(2)$. Counterexample is already given in the answer by Aweygan, and I will not repeat it. I will just add that any counterexample is of that form, since the other cases are $[a,b]\cap[c,d] = [c,b]$ and $[a,b]\cap[c,d] = \emptyset$ neither of which contradicts $(2)$. What this means is that all you need to change in the definition of $B$ is "$a < b$" to "$a\leq b$" to get a basis for a topology. But that would mean that $B$ contains all singletons and thus the topology generated by $B$ would be the discrete topology on $X$, as noted by Alex Kruckman in the comments (actually, one only needs singletons to generate the discrete topology, other segments are superfluous).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Nilpotent and Idempotent Elements of Ring $\mathbb{Z}_6 \times \mathbb{Z}_2$ I'd just like to make sure I understand nilpotent and idempotent elements of rings properly. A nilpotent element is an element $b \in R$ s.t. for some $n$, $b^n = 0$. An idempotent element is an element $b \in R$ s.t. $b^2 = b$. So, for the ring $\mathbb{Z}_6 \times \mathbb{Z}_2$, its nilpotent and idempotent elements would be the pairs of nilpotent elements of $\mathbb{Z}_6$ and $\mathbb{Z}_2$, and similarly for idempotent elements, right? From what I see, the nilpotent elements of $\mathbb{Z}_6$ are only $0$, and the nilpotent elements of $\mathbb{Z}_2$ are $0$ and $2$. So, does this means the nilpotent elements of $\mathbb{Z}_6$ $\times$ $\mathbb{Z}_2$ are $(0, 0)$ and $(0, 2)$? Also, from my calculations, the idempotent elements of $\mathbb{Z}_6$ are $0, 1, 3, 4$, while the idempotent elements of $\mathbb{Z}_2$ are $0$ and $1$. Similar to nilpotent elements, does this mean the idempotent elements of $\mathbb{Z}_6 \times \mathbb{Z}_2$ are the combinations of those elements? Thanks for your help. EDIT: I forgot to see (0, 0) = (0, 2) for the nilpotent elements.
Yes, you are right and this is quite easy to prove since we have: $$(a, b)^n=(a^n, b^n)$$ Thus, $(a, b)^n=(0, 0)$ if and only if $a^n=0$ and $b^n=0$, so only pairs of nilpotent elements are nilpotent. Also, if we have two nilpotent elements $a$ and $b$ such that $a^m=0$ and $b^n=0$, then $(a, b)^{\max(m, n)}=0$ and thus all pairs of nilpotent elements are nilpotent. Also, $(a, b)^2=(a, b)$ if and only if $a^2=a$ and $b^2=b$, so only and all pairs of idempotent elements are idempotent.
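A brute-force enumeration of the ring (my addition) confirms these counts:

```python
from itertools import product

ring = list(product(range(6), range(2)))  # elements of Z_6 x Z_2

def mul(x, y):
    # componentwise multiplication mod 6 and mod 2
    return ((x[0] * y[0]) % 6, (x[1] * y[1]) % 2)

def power(x, n):
    r = (1, 1)
    for _ in range(n):
        r = mul(r, x)
    return r

nilpotent = [x for x in ring if any(power(x, n) == (0, 0) for n in range(1, 13))]
idempotent = [x for x in ring if mul(x, x) == x]
print(nilpotent)   # [(0, 0)]
print(idempotent)  # the 8 pairs from {0, 1, 3, 4} x {0, 1}
```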
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relation between left null space, row space and cokernel, coimage I know the null space and the column space of a matrix $A$ can be identified with the kernel and the image of the associated homomorphism $L_A: x \mapsto Ax$, once a basis is chosen. I was reading this Wikipedia article that calls the left null space and the row space of $A$ respectively cokernel and coimage. This seems to imply that (using my notation) $\text{coker } L_A = \ker L^t_A = \ker L_{A^t}$ which in turn implies $\text{coker }A$ is the orthogonal complement of $\text{im } A$. However the cokernel of the homomorphism $L: V \rightarrow W$ is defined here as $W/\text{im } L$, the set of equivalence classes of vectors of $W$ modulo $\text{im } L$. This means $\text{coker }L$ and $\ker L^t$ have different types entirely and that relation can't possibly be true. Is this true maybe in some other sense? What is the actual relation?
Suppose the matrix $A$ is $m\times n$ and has rank $k$. Thus $A$ defines a linear map $L_A\colon \mathbb{R}^n\to \mathbb{R}^m$. The null space has dimension $n-k$ and the image has dimension $k$. The cokernel $\mathbb{R}^{m}/\operatorname{im}L_A$ has dimension $m-k$. The coimage $\mathbb{R}^n/\ker L_A$ has dimension $n-(n-k)=k$. The matrix $A$ also defines a linear map $R_A\colon \mathbb{R}_m\to \mathbb{R}_n$ (spaces of row vectors) by $y\mapsto yA$. The image of this linear map has dimension $k$ and the kernel has dimension $m-k$. Hey! The dimensions agree! Maybe we can find a “canonical” isomorphism $$ f\colon \ker R_A\to \operatorname{coker}L_A $$ In other words, we'd like to associate to each vector $y\in\ker R_A$ a unique element in $\mathbb{R}^m/\operatorname{im}L_A$. Let $y\in\ker R_A$, so $y^T\in \mathbb{R}^m$. The natural choice would be defining $f(y)=y^T+\operatorname{im}L_A$. This is clearly a linear map. Is it injective? Suppose $y\in\ker f$. This means $y^T=Ax$ for some $x\in \mathbb{R}^n$. Also $yA=0$ by definition, so $$ 0=A^Ty^T=A^TAx $$ and so $x^TA^TAx=0$, that is, $\|Ax\|^2=0$; hence $y^T=Ax=0$. Yes! The map $f$ is injective. The dimensions coincide, so $f$ is indeed an isomorphism. Similarly you can find an isomorphism between $\mathbb{R}^n/\ker L_A$ and $\operatorname{im}R_A$. Note however that these isomorphisms require that $x^TA^TAx=0$ implies $Ax=0$. This is not true over general fields.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$f(x)=x/1$ is not injective? In chapter 3 of Atiyah and Macdonalds Intro to Comm Algebra there is the statement "We also have a ring homomorphism $f:A\rightarrow S^{-1}A$ defined by $f(x)=x/1$. This is not in general injective." (Note: $A$ denotes an integral domain and $S$ a multiplicatively closed subset of $A$.) How could this possibly not be injective? If $f(x)=f(y)$ then $x/1=y/1$ so $1x=1y$ hence $x=y$. What am I missing here?
EDIT: The answer below the line assumes $A$ is just a ring. In case $A$ is an integral domain, that answer of course does not apply, and your reasoning is right . . . . . . assuming $S$ doesn't contain $0$. If $0\in S$, then $S^{-1}A$ is the trivial ring, and $f$ is extremely not injective. The localization is a bit more complicated than you're giving it credit for. The general definition of the localization $S^{-1}A$ is as follows: elements are equivalence classes of pairs $(r, s)$ with $r\in A$ and $s\in S$ (thought of as meaning "${r\over s}$"), under the equivalence relation $$(r_1, s_1)\sim (r_2, s_2)\iff \mbox{ there is some $t\in S$ such that }t(r_1s_2-r_2s_1)=0.$$ Now suppose $a, b\in A$ and $s\in S$ are such that $s(a-b)=0$. Then $(a, 1)\sim (b, 1)$! But this can happen even if $a\not=b$ - e.g. working in $\mathbb{Z}/6\mathbb{Z}$, let $S=\{1, 2, 4\}$, $s=2$, $a=4$, $b=1$. It sounds like you're thinking of the much simpler relation $$(r_1, s_1)\approx (r_2, s_2)\iff r_1s_2=r_2s_1.$$ Why don't we use that instead? Well, it's not necessarily transitive! If $(r_1, s_1)\approx (r_2, s_2)\approx (r_3, s_3)$, then we know that $r_1s_2=r_2s_1$ and $r_2s_3=r_3s_2$ and we want to conclude that $r_1s_3=r_3s_1$. But we can't do this in general. E.g. in $\mathbb{Z}/12\mathbb{Z}$, we have $$(3, 2)\approx (6, 4)\approx (6, 6)$$ but $$(3, 2)\not\approx (6, 6)$$ (since $3\cdot 6=18=6\not=0=6\cdot 2$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are partial differential operators commutative? I am trying to convert $\Delta=\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ to polar coordinates. If anyone has any references on how to do that I would appreciate it. In my evaluation, I am messing with a lot of partials. Although, I am not sure whether partials are commutative, and Google has not been helping me. Enough of a prompt; is this true? $$ \frac{\partial u}{\partial x}\frac{\partial y}{\partial r}=\frac{\partial u}{\partial r}\frac{\partial y}{\partial x} $$ In the previous equation I just switched the denominators.
What you're looking for is the "Polar Laplacian". Googling "Polar Laplacian Derivation" leads to many papers such as this one that go through step-by-step with how to derive it. If after reading this you're still interested in a challenge, try deriving the Laplacian for Spherical or Cylindrical coordinates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How do you "simplify" the sigma sign when it is raised to a power? How do you simplify the following expression: $$\left(\sum^{n}_{k=1}k \right)^2$$ I am supposed to show that $$\left(\sum^{n}_{k=1}k \right)^2 = \sum^{n}_{k=1}k^{3} $$ The problem is I do not really know how to manipulate the sigma sign. I know that I (probably) need to use induction somehow, but the main question is how do you "simplify" the sigma sign when it is raised to a power. Due to the problem itself I know that (most likely); $$\left(\sum^{n}_{k=1}k \right)^2 = \sum^{n}_{k=1}k^{3} $$ so is it possible to simply manipulate the LHS so that it looks like the RHS?
$$\begin{align} \sum^{n}_{k=1}k&=\frac{n(n+1)}{2}\\ \left(\sum^{n}_{k=1}k\right)^2&=\left(\frac{n(n+1)}{2}\right)^2\\ \left(\frac{n(n+1)}{2}\right)^2&=\frac{n^2(n+1)^2}{4}\\ \sum^{n}_{k=1}k^3&=\frac{n^2(n+1)^2}{4}\\ \therefore \left(\sum^{n}_{k=1}k\right)^2&=\sum^{n}_{k=1}k^3 \end{align} $$
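A quick programmatic check of the identity for small $n$ (my addition):

```python
# Verify (1 + 2 + ... + n)^2 == 1^3 + 2^3 + ... + n^3 for n = 1..100.
for n in range(1, 101):
    lhs = sum(range(1, n + 1)) ** 2
    rhs = sum(k**3 for k in range(1, n + 1))
    assert lhs == rhs
print("verified for n = 1..100")
```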
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
How many #8 digits can you build? How many different 8-digit numbers can be formed using two 1s, two 2s, two 3s, and two 4s such that no two adjacent digits are the same? So we have $0 \to 9$, i.e. $10$ options for the digits, but with the constraints we have something different. Cases: $1$ _ _ $2$ _ _ $3$ _ _ $4$ _ _ We can do recursion in the form $A(N), B(N), \dots, D(N)$ as to how many sequences can be built from the $1, 2, \dots, 4$ for $N$ letters. We have $T(n) = A(N-1) + B(N-1) + C(N-1) + D(N-1)$ We are after $T(8)$. We have that for instance $A(n-1) = B(n-2) + C(n-2) + D(n-2) = [A(n-3) + C(n-3) + D(n-3)] + [A(n-3) + B(n-3) + D(n-3)] + [A(n-3) + B(n-3) + C(n-3)] = 3A(n-3) + 2B(n-3) + 2C(n-3) + 2D(n-3)$. But this gets long fast.
Use the inclusion-exclusion principle. There are $\frac{8!}{2^4}$ 8-digit numbers that can be formed using two 1s, two 2s, two 3s, and two 4s. i) How many of these 8-digit numbers have two adjacent 1s? There are $7\cdot \frac{6!}{2^3}$ such numbers. ii) How many of these 8-digit numbers have two adjacent 1s and two adjacent 2s in this order? There are $15\cdot \frac{4!}{2^2}$ such numbers. iii) How many of these 8-digit numbers have two adjacent 1s, two adjacent 2s and two adjacent 3s in this order? There are $10\cdot \frac{2!}{2^1}$ such numbers. iv) How many of these 8-digit numbers have two adjacent 1s, two adjacent 2s, two adjacent 3s and two adjacent 4s in this order? There is only $1$ such number. Hence the final result is (why?): $$\frac{8!}{2^4}-4\cdot7\cdot \frac{6!}{2^3} +(4\cdot 3)\cdot 15\cdot \frac{4!}{2^2} -(4\cdot 3\cdot 2)\cdot 10\cdot \frac{2!}{2^1} +(4!)\cdot 1\cdot \frac{0!}{2^0}=864.$$
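A brute-force count over all distinct arrangements (my addition) agrees with the inclusion-exclusion total of $864$:

```python
from itertools import permutations

digits = (1, 1, 2, 2, 3, 3, 4, 4)
numbers = set(permutations(digits))  # 8!/2^4 = 2520 distinct arrangements

def no_adjacent_equal(p):
    return all(a != b for a, b in zip(p, p[1:]))

count = sum(1 for p in numbers if no_adjacent_equal(p))
print(len(numbers), count)  # 2520 864
```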
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Analytical solution of quadratic programming when $H$ is not p.d.? Considering the unconstrained quadratic programming: $$\min_x \frac{1}{2}x^\top Hx+c^\top x$$ if $H$ is not positive definite, can we still calculate the analytical solution?
We consider the function $$ f \colon \def\R{\mathbf R}\R^n \to \R, \quad x \mapsto \frac 12 x^tHx + c^t x $$ for $H\in \def\M{\mathrm{Mat}}\M_n(\R)$, $c \in \R^n$. Taking derivatives gives \begin{align*} f'(x)h &= \frac 12 x^t(H+H^t)h + c^t h\\ f''(x)[h,k] &= \frac 12 h^t(H+H^t)k \end{align*} So we see that the critical points of $f$ are the solutions of $$ \frac 12(H + H^t)x = -c$$ which are minima iff $\frac 12(H+H^t)$ is positive semi-definite.
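A tiny numeric sketch of this (my addition; the $2\times 2$ matrix and vector are made up): for a nonsymmetric $H$, solve $\frac 12(H+H^t)x=-c$ and check the result is a minimizer:

```python
# Minimize f(x) = 1/2 x^T H x + c^T x for a nonsymmetric 2x2 H.
H = [[2.0, 1.0],
     [0.0, 2.0]]
c = [1.0, 1.0]
S = [[(H[i][j] + H[j][i]) / 2 for j in range(2)] for i in range(2)]  # (H + H^T)/2

# Solve the 2x2 system S x = -c by Cramer's rule.
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
x = [(-c[0] * S[1][1] + c[1] * S[0][1]) / det,
     (-c[1] * S[0][0] + c[0] * S[1][0]) / det]

def f(v):
    quad = sum(v[i] * H[i][j] * v[j] for i in range(2) for j in range(2)) / 2
    return quad + sum(c[i] * v[i] for i in range(2))

print(x, f(x))  # [-0.4, -0.4] -0.4
```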
{ "language": "en", "url": "https://math.stackexchange.com/questions/1992986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
First-order logic and NBG I know that NBG set theory can be viewed as an interpretation of the first-order logic with equality. However, I do not really understand how to obtain this interpretation. As I see it: I call variables "classes" and add the class membership relation $\in$. I call a class $x$ a "set" if $\exists y \ (x \in y)$. But what do I do with constants? Also, there is an empty set and the universal class, and I do not know how to represent them in the form $\emptyset = \dots$ and $V = \dots$
It's not that NBG is an interpretation of FOL, it's that NBG is a theory in FOL, in the same way that e.g. the theory of rings is a theory in FOL. The language of NBG has a single binary relation, "$\in$," and no constants or functions. So there's no need to interpret those (any more than we would demand that a ring understand what "$<$" means, say). As to $\emptyset$ and $V$, those aren't constant symbols, but rather definable elements - e.g. NBG proves that there is a unique $x$ satisfying $\forall y(y\not\in x)$, and a unique $z$ satisfying $\forall w(\exists u(w\in u)\implies w\in z)$. We refer to these as "$\emptyset$" and "$V$," but this is done outside the formal language.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inverses of bijective functions. Whenever I see a $1-1$ and onto function from $\mathbb{R}$ to $\mathbb{R}$, I wonder how I would even go about finding the inverse. Is there a general way of finding the inverse of a bijective function $f :\mathbb{R}\to\mathbb{R}$? Or is it really possible that the inverses of all bijective functions can be written down as formulas?
There are an uncountable number of bijections on $\mathbb{R}$, and only a countable set of "formulas" consisting of finite strings of mathematical symbols from some (finite) alphabet. Therefore, by the pigeonhole principle, there are bijections on $\mathbb{R}$ where no formula (with any set of finite mathematical symbols) exists. Thus, there can be no general algorithm for finding inverses of bijections on $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can't find the integral $\int\frac{dx}{\sqrt x\sin(x)}$ $\int\frac{dx}{\sqrt x\sin(x)}$ I've been stuck on this problem for about 40 minutes. Any clue how to solve this? The actual problem is an ODE problem, it is given $x^2y''+xy'+(x^2-1/4)y=0$ and one of the solutions is $y_1(x)=\sin(x)/\sqrt{x}$. So I need to find $v=\int{\frac{e^{-\int{p(x)\,dx}}}{\sin(x)/\sqrt{x}}}\,dx$.
The solution you have suggests that you let $$ y(x)=v(x)/\sqrt{x}. $$ Doing so, your differential equation will turn into $$ x^{3/2}(v''(x)+v(x))=0. $$ I'm sure you can take it from there. (I don't see where you got your integral from, so I cannot help you with that.)
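As a sanity check (my addition, not part of the answer), one can verify numerically with central differences that $y_1(x)=\sin(x)/\sqrt{x}$ really solves $x^2y''+xy'+(x^2-1/4)y=0$:

```python
import math

def y(x):
    return math.sin(x) / math.sqrt(x)

def residual(x, h=1e-4):
    # central-difference approximations of y' and y''
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**2 * ypp + x * yp + (x**2 - 0.25) * y(x)

print(residual(2.0), residual(5.0))  # both ~0 up to finite-difference error
```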
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $M$ is a local martingale and $τ_k^n$ is a stopping time, then $t↦\sum_{k∈ℕ}M_{τ_{k-1}^n}\left(M_{τ_k^n∧t}-M_{τ_{k-1}^n∧t}\right)$ is a martingale Let * *$T>0$ *$(\Omega,\mathcal A,\operatorname P)$ be a probability space *$(\mathcal F_t)_{t\in[0,\:T]}$ be a filtration on $(\Omega,\mathcal A)$ *$M$ be an almost surely continuous local martingale on $(\Omega,\mathcal A,\mathcal F,\operatorname P)$ with $M_0=0$ almost surely and $$N:=\left\{\omega\in\Omega:M(\omega)\text{ is not continuous}\right\}\cup\left\{M_0\ne0\right\}$$ *$n\in\mathbb N$, $\tau_0^n:=0$ and $$\tau_k^n:=\inf\left\{t\in(\tau_{k-1}^n,T]:\left|M_t-M_{\tau_{k-1}^n}\right|=\frac1{2^n}\right\}\wedge T\;\;\;\text{for }k\in\mathbb N$$ with $\inf\emptyset:=\infty$ We can show that $\tau_k^n$ is an $\mathcal F$-stopping time for all $k\in\mathbb N$ with $$\tau_k^n\uparrow T\;\;\;\text{for }k\to\infty\tag1$$ on $\Omega\setminus N$. Now, let $$X_t:=\left\{\begin{array}{{{\displaystyle}}l}\displaystyle\sum_{k\in\mathbb N}M_{\tau_{k-1}^n}\left(M_{\tau_k^n\:\wedge\:t}-M_{\tau_{k-1}^n\:\wedge\:t}\right)&&\text{on }\Omega\setminus N\\0&&\text{on }N\end{array}\right\}\;\;\;\text{for }t\in[0,T]\;.$$ We can show that the sum in the definition of $X(\omega)$ has only finitely many terms, for all $\omega\in\Omega\setminus N$. How can we show that $X$ is a martingale on $(\Omega,\mathcal A,\mathcal F,\operatorname P)$? You may assume that $M$ is bounded (and hence a martingale), if it's not possible to show the statement otherwise.
Show that each term in the sum is a martingale. To wit, a stopped local martingale is a local martingale, and since $M_{\tau_k^n\wedge t}$ is a.s. bounded (by $k2^{-n}$), each $M_{\tau_k^n\wedge t}$ is a martingale, as is the difference $M_{\tau_k^n\wedge t}-M_{\tau_{k-1}^n\wedge t}$. As this difference vanishes on $[0,\tau_{k-1}^n]$, when you multiply it by a bounded $\mathcal F_{\tau_{k-1}^n}$-measurable random variable, it is still a martingale.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is mathematical logic? What does mathematical logic mean? In the book Analysis 1 by Terence Tao, it says: The purpose of this appendix is to give a quick introduction to mathematical logic, which is the language one uses to conduct rigourous mathematical proofs. Checking Wikipedia: Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. This seems like a completely different definition. Why, for example, is set theory considered part of logic?
Mathematical logic is a strange beast. It is a perfectly ordinary branch of mathematics whose goal is ... to study mathematics itself. Thus, the different branches of mathematical logic are devoted to the study of some basic building blocks of mathematical practice : language, model, proof, computation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 0 }
Compute the limit for $x \to \infty$ of a certain rational function Compute the following limit: $$ \lim_{x \to \infty} \frac{9x^8 + 5x^2 − 6}{ 3x^8 + 2x^4 + 1} $$ The answer may be A. 2 B. 3 C. 5 D. 7
Hint: Divide numerator and denominator by $x^8$ to get: $$\lim_{x \to \infty}\frac{9+5/x^{6}-6/x^8}{3+2/x^4+1/x^8}$$
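Numerically the ratio indeed tends to $9/3=3$ (a quick check, my addition):

```python
def f(x):
    return (9 * x**8 + 5 * x**2 - 6) / (3 * x**8 + 2 * x**4 + 1)

for x in (10.0, 100.0, 1000.0):
    print(x, f(x))  # values approach 3, so the answer is B
```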
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transitive subgroup of symmetric group I'm working on the following question, and honestly have no idea how to begin. Any hints would be greatly appreciated! Let $H$ be a subgroup of $S_n$, the symmetric group on the set $\{1,2,\dots, n\}$. Show that if $H$ is transitive and if $H$ is generated by some set of transpositions, then $H=S_n$.
Proof by induction on $n$. Case $n\le 2$ is trivial. Let $X$ be the set of transpositions in $H$, $X_1$ the set of transpositions in $X$ fixing 1 and $K$ the subgroup generated by $X_1$. We have to prove that $K$ is transitive on $\{2,\dots,n\}$. If true, the induction hypothesis implies $K$ is the full symmetric group on $\{2,\dots,n\}$, and since $H$ properly contains $K$ which is a maximal subgroup in $S_n$, it's immediate to conclude. So let's prove this transitivity claim on $K$. Otherwise $K$ has at least two orbits $I,J$ in $\{2,\dots,n\}$. Since $H$ does not stabilize $I$ and is generated by transpositions, it contains a transposition $t=(u,v)$ with $u\notin I$ and $v\in I$. If $u\neq 1$, we deduce that $t\in X_1\subset K$ and deduce that $u$ is in the same $K$-orbit as $v$, a contradiction. So $u=1$ and $H$ contains the transposition $(1,v)$, $v\in I$. Similarly, $H$ contains a transposition $(1,w)$ with $w\in J$. Hence $H$ contains $(1,v)(1,w)(1,v)=(v,w)$. This is again a contradiction since this transposition would have to belong to $K$ and contradict that $v,w$ are in distinct $K$-orbits. So $K$ is transitive on $\{2,\dots,n\}$. This finishes the proof.
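For small $n$ the statement can also be checked by brute force (my sketch; the generating sets are arbitrary examples). A set of transpositions generates a transitive subgroup exactly when the corresponding edges connect all the points, and then the whole of $S_n$ is obtained:

```python
def compose(p, q):
    # apply q first, then p; permutations are tuples indexed from 0
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens, n):
    # closure of the generating set under composition (transpositions
    # are involutions, so inverses come for free)
    identity = tuple(range(n))
    group = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(s, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

def transposition(i, j, n):
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

n = 4
connected = [transposition(0, 1, n), transposition(1, 2, n), transposition(2, 3, n)]
split = [transposition(0, 1, n), transposition(2, 3, n)]
print(len(generate(connected, n)))  # 24 = |S_4|: the action is transitive
print(len(generate(split, n)))      # 4: not transitive, so not all of S_4
```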
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Arithmetic or Geometric sequence? Given a sequence: $$1, \frac12, \frac13, \frac14, \frac15,...$$ Its explicit formula can be given as: $a(n) = \frac1n$ where $n \ge 1$. I actually want to know is it a geometric sequence or an arithmetic one? I tried finding common ratio and the common difference in this sequence to see if it's either one of them but was not successful. My common ratio ($r$) and common difference ($d$) were some horrible values.
This is not a geometric sequence. We have $a_1=1, a_2=\frac12, a_3=\frac13$. If this were a geometric sequence, then it would be necessary that $\frac{a_2}{a_1}=\frac{a_3}{a_2}$ $$\frac{a_2}{a_1}=\frac{1}{2}$$ $$\frac{a_3}{a_2}=\frac{2}{3}$$ The two numbers are different, hence it is not a geometric sequence. Similarly, you can verify that $$a_2-a_1 \neq a_3-a_2$$ To prove that something is a geometric sequence, you have to show that $\frac{a_{n+1}}{a_n}$ is a constant. To prove that something is an arithmetic sequence, you have to show that $a_{n+1}-a_n$ is a constant. For this problem, $$\frac{a_{n+1}}{a_n}=\frac{1/(n+1)}{1/n}=\frac{n}{n+1}=\frac{1}{1+1/n}$$ which is dependent on $n$, while $$a_{n+1}-a_n = \frac{1}{n+1}-\frac{1}{n}=-\frac{1}{n(n+1)}$$ which is again dependent on $n$.
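A short numerical check (my own addition) confirming that for $a_n = 1/n$ neither consecutive ratios nor consecutive differences are constant:

```python
seq = [1/n for n in range(1, 9)]
ratios = [seq[i+1] / seq[i] for i in range(len(seq) - 1)]
diffs = [seq[i+1] - seq[i] for i in range(len(seq) - 1)]

# a geometric sequence would give one distinct ratio; an arithmetic one, one distinct difference
assert len({round(r, 12) for r in ratios}) > 1
assert len({round(d, 12) for d in diffs}) > 1
```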
{ "language": "en", "url": "https://math.stackexchange.com/questions/1993989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Repeated Differences of the series 1^n 2^n 3^n 4^n… I kind of stumbled across this pattern recently and cant make any sense of it. Attached is my work explaining what I did. If there is already an answer to this please post the link. Thanks!
It's more natural to start the sequence at $0$: for example, $$0 \quad 1 \quad 4 \quad 9$$ $$1 \quad 3 \quad 5$$ $$2 \quad 2$$ $$0$$ from which you can read off the leftmost side that the top (the sequence $n^2$, $n \in \mathbb{N}$) is actually $$\color{red}{0} \cdot \binom{n}{0} + \color{red}{1} \cdot \binom{n}{1} + \color{red}{2} \cdot \binom{n}{2} = n + 2 \cdot \frac{n(n-1)}{2}.$$ This idea goes by the name Binomial transform. The fact that you will end up with the last coefficient $k!$ when studying the sequence $1^k,2^k,3^k$ is because the coefficient of $n^k$ in $\binom{n}{k}$ itself is exactly $1/k!$; and you need the result to be $1 \cdot n^k.$
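The claim that the $k$-th repeated difference of $n^k$ is the constant $k!$ can be checked directly; here is a quick sketch of mine:

```python
import math

def forward_diff(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

for k in range(1, 7):
    seq = [n**k for n in range(k + 4)]   # 0^k, 1^k, 2^k, ...
    for _ in range(k):
        seq = forward_diff(seq)
    assert all(v == math.factorial(k) for v in seq)  # the constant row equals k!
```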
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does globally stability implies existence of Lyapunov function? I recently had a chance to ask a professor this question and he mentioned that this is true for ordinary differential equations (but not for delay differential equation). He did not remember the reference, I could not find a reference for this, so can anyone please point me to the right source?
You will find what you need in this paper and the references of Karafyllis: Converse Lyapunov–Krasovskii theorems for systems described by neutral functional differential equations in Hale's form Pierdomenico Pepe & Iasson Karafyllis
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pointwise convergence of $X_n$ vs $X_nI_{\{|X_n|\leq c_n\}}$ and of $\sum X_n$ vs $\sum X_nI_{\{|X_n|\leq c_n\}}$ Let $\{X_n,n\geq 1\}$ be a sequence of random variables, and $\{c_n,n\geq1\}$ a positive sequence. Let also $\sum_n P(|X_n|> c_n)<\infty$. Prove: * *If $Y_n=X_nI_{\{|X_n|\leq c_n\}}$ and $P\left( {\mathop {\lim }\limits_n {Y_n} = X} \right) = 1$, then $$P\left( {\mathop {\lim }\limits_n {X_n}} =X\right) = 1$$ *$$P\left( {\sum\limits_n {{X_n}} } \,\text{converges}\,\right) = P\left( {\sum\limits_n {{X_n}{I_{\left\{ {\left| {{X_n}} \right| \le {c_n}} \right\}}}} } \,\text{converges}\,\right)$$ I have tried to prove it by the following method. First, by Borel-Cantelli Theorem, we have $$P\left( {\mathop {\lim \sup }\limits_n {\mkern 1mu} \left| {{X_n}} \right| > {c_n}} \right) = 0.$$ And$$0 \le P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} \left| {{X_n}} \right| > {c_n}} \right) \le P\left( {\mathop {\lim \sup }\limits_n {\mkern 1mu} \left| {{X_n}} \right| > {c_n}} \right) = 0,$$then $$P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} \left| {{X_n}} \right| > {c_n}} \right) = 0.$$ Since $P\left( {\mathop {\lim }\limits_n {Y_n} = X} \right) = 1$,we obtain $$P\left( {\mathop {\lim }\limits_n \left| {{X_n}} \right| \le {c_n}} \right) = 1.$$ Hence $$P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} {{X_n}} } \right) = P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} \left| {{X_n}} \right| > {c_n}} \right) + P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} \left| {{X_n}} \right| \le {c_n}} \right) = 1.$$ Then $$1 = P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} {X_n}} \right) \le P\left( {\mathop {\lim }\limits_n {\mkern 1mu} {X_n} = X} \right) \le 1,$$ which means $$P\left( {\mathop {\lim }\limits_n {X_n}} =X\right) = 1.$$ It seems something wrong! Thank everyone for good ideas!
By Hypothesis, $\sum_n P(\{X_n\neq Y_n\})<\infty$. By Borel-Cantelli Lemma, $P\left( {\mathop {\lim \inf }\limits_n {\mkern 1mu} {{\{X_n=Y_n\}}}} \right) = 1.$ Put $A={\mathop {\lim \inf }\limits_n {\mkern 1mu} {{\{X_n=Y_n\}}}}$ For every $\omega \in A, X_n(\omega)=Y_n(\omega)$ for all large n. This shows, for every $\omega \in A,$ $$\mathop {\lim}\limits_n X_n(\omega)=\mathop {\lim}\limits_n Y_n(\omega)$$ $$\sum_n X_n(\omega)\;converges \Leftrightarrow \sum_n Y_n(\omega)\;converges$$ Now, the required results follow easily.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
double integral wrt minimum How can one calculate the following integral? $$ \int_0^1{\int_0^1 xy \quad d[min(x,y)]} $$ I have no idea how to hande the $d(min(x,y))$. Does anyone have an idea for this problem?
Certainly this is a (Lebesgue-)Stieltjes integral. Nevertheless, fortunately $C_1=xy$ and $C_2=\min(x,y)$ are copulas and for copulas the following holds (partial derivatives exist a.e. of course): $$\iint_{[0,1]^2}C_1(x,y)\,\mathrm{d}C_2(x,y)=\frac{1}{2}-\iint_{[0,1]^2}\frac{\partial}{\partial x}C_1(x,y)\frac{\partial}{\partial y}C_2(x,y)\,\mathrm{d}(x,y).$$ Therefore $\iint_{[0,1]^2}xy\,\mathrm{d}\min(x,y)=\frac{1}{2}-\iint_{T}y\,\mathrm{d}(x,y),$ where $T=\{(x,y)\in[0,1]^2: ~0\leq x\leq 1, ~0\leq y<x\}$. Finally $\iint_{[0,1]^2}xy\,\mathrm{d}\min(x,y)=\frac{1}{2}-\frac{1}{6}=\frac{1}{3}$.
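To double-check the final numbers (my own verification, via a crude midpoint rule): the integral $\iint_T y\,\mathrm{d}(x,y)$ over the triangle $0\le y<x\le 1$ should be $1/6$, giving $1/2 - 1/6 = 1/3$:

```python
N = 400                      # grid resolution for a midpoint rule
h = 1.0 / N
inner = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        if y < x:            # the triangle T
            inner += y * h * h

assert abs(inner - 1/6) < 5e-3
assert abs(0.5 - inner - 1/3) < 5e-3
```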
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $p$ be an odd prime. Suppose that $a$ is an odd integer and also $a$ is a primitive root mod $p$. Show that $a$ is also a primitive root mod $2p$. Let $p$ be an odd prime. Suppose that $a$ is an odd integer and also $a$ is a primitive root modulo $p$. Show that a is also a primitive root modulo $2p$. Any hints will be appreciated. Thanks very much.
A way is like this (you may have to justify the points a bit): * *There are $p-1$ invertible classes mod $2p$, the same number as for $p$. *$a$ is invertible mod $2p$. *The multiplicative order of $a$ mod $p$ is $p-1$. *The multiplicative order of $a$ mod $2p$ is at least as large as the one mod $p$. From this you can conclude directly.
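The argument can be spot-checked computationally; this sketch of mine finds every odd primitive root modulo small primes $p$ and verifies it has order $p-1=\varphi(2p)$ modulo $2p$:

```python
from math import gcd

def mult_order(a, m):
    # multiplicative order of a modulo m (requires gcd(a, m) = 1)
    assert gcd(a, m) == 1
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

for p in (5, 7, 11, 13, 17, 19):
    for a in range(3, 2 * p, 2):           # odd candidates
        if a % p == 0:
            continue
        if mult_order(a, p) == p - 1:      # a is an odd primitive root mod p
            assert mult_order(a, 2 * p) == p - 1   # so also a primitive root mod 2p
```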
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that the restriction $f\vert_A$ is continuous Let $S\subset \mathbb{R}$ and $A\subset S$. Let $f:S\rightarrow \mathbb{R}$ be a continuous function. Then the restriction $f\vert_A$ is continuous.
By definition of the subspace topology, if $(X,\tau)$ is a topological space then $\{U\cap A | U\in\tau\}$ are the open sets of $A\subseteq X$. But then if $O$ is an open subset of $\Bbb R$ we have that $$f\big|_A^{-1}(O) = f^{-1}(O)\cap A$$ but since $f$ is continuous, this is the intersection of open sets, hence is itself open, showing the restriction is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Describe the set whose points satisfy the following relation: $|z^2 - 1| < 1$ There is a hint which states to use polar coordinates, but I feel like that complicates the problem more. As far as trying it myself, I get lost very early on. If we take $z = r(\cos{\theta} + i\sin{\theta})$, then we have $|r^2(\cos{2\theta} + i\sin{2\theta}) - 1| < 1$ But I have no idea how to find the modulus of this point with that extra $-1$ in there.
If you use $re^{i\theta}$ as the polar representation than you get $$|r^2e^{2i\theta}-1|<1\Rightarrow |(re^{i\theta}+1)(re^{i\theta}-1)|<1$$ which might be enlightening....
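Working the hint out slightly further (my own addition): expanding the modulus gives $|z^2-1|^2 = r^4 - 2r^2\cos 2\theta + 1$, so the condition $|z^2-1|<1$ becomes $r^2 < 2\cos 2\theta$, the interior of a lemniscate. A quick numerical check of the identity:

```python
import cmath, math, random

random.seed(0)
for _ in range(1000):
    r = random.uniform(0.01, 2.0)
    th = random.uniform(0.0, 2 * math.pi)
    z = cmath.rect(r, th)
    # |z^2 - 1|^2 = r^4 - 2 r^2 cos(2θ) + 1, hence |z^2 - 1| < 1  <=>  r^2 < 2 cos(2θ)
    lhs = abs(z * z - 1) ** 2
    rhs = r**4 - 2 * r**2 * math.cos(2 * th) + 1
    assert abs(lhs - rhs) < 1e-9
```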
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Compact subset of the space of all bounded sequences of real numbers Let $X$ be the metric space of bounded sequences of real numbers $ x = (x_n) $ with the metric $ d(x,y) = \sup_n |x_n - y_n| $. Show that the set $$ Y = \{ x = (x_n) \in X \mid |x_n| \leq c_n = \text{ const for all } n \} $$ is compact in $X$ if and only if $ c_n \to 0 $ as $ n \to \infty$. I tried to prove that $Y$ is compact by showing that $Y$ is complete and totally bounded. For completeness I think I can prove it without the hypothesis $ c_n \to 0 $ as $n \to \infty$ by using the same technique as the proof here, but I'm stuck as to how once can prove that $Y$ is totally bounded. Or is there a better way to prove that $Y$ is compact? Any help is appreciated. Thanks. Edit: we were only supposed to use the definition of a convergent sequence, so I guess we must prove this either by considering open covers or by showing that an infinite subset must have a limit point.
We can show total boundedness in the following way: let $\varepsilon$ be a positive number. Choose $n_0$ such that if $n\geqslant n_0$, then $\left|c_n\right|\lt\varepsilon$. The set $S=\prod_{j=1}^{n_0-1} [-c_j,c_j]$ is compact for the metric $d_{n_0}(x,y) :=\max_{1\leqslant j\leqslant n_0-1}\left|x_j-y_j\right|$. Choose $F\subseteq S$ a finite subset such that $\sup_{x\in S} d_{n_0}\left(x,F\right)\lt \varepsilon$. For each $y\in F$, define $\widetilde y :=(y_1,\dots,y_{n_0-1},0,\dots, 0,\dots )$. Then $\widetilde F:=\left\{\widetilde y, y\in F\right\}$ is finite and the collection of balls centered at the points of $\widetilde F$ and with radius $\varepsilon$ covers $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove the sum of deviations from the mean is minimum? I am trying to prove that the sum of deviations from the mean is minimum. I saw this prove in a book where $\bar{X}$ is the mean and $Y\neq \bar{X}$. What I am confused about is where does the zero in the last line come from?
The middle line should in fact be $\displaystyle \sum_{i=1}^n (X_i-Y)^2 = \sum_{i=1}^n (X_i-\overline{X})^2 + 2(\overline{X}-Y)\sum_{i=1}^n (X_i-\overline{X}) + \sum_{i=1}^n (\overline{X}-Y)^2$ and you then have $\displaystyle\sum_{i=1}^n (X_i-\overline{X})=\left(\sum_{i=1}^n X_i\right) - n \left(\frac1n\sum_{i=1}^n X_i\right)=0$
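Numerically (my own sketch), the corrected middle line implies $\sum_i (X_i - Y)^2 = \sum_i (X_i-\overline{X})^2 + n(\overline{X} - Y)^2$, which is minimized exactly at $Y=\overline{X}$:

```python
xs = [2.0, 3.5, 7.0, 1.0, 4.25]
n = len(xs)
xbar = sum(xs) / n

def sse(y):
    return sum((x - y)**2 for x in xs)

# the cross term vanishes: sse(y) = sse(xbar) + n*(xbar - y)^2 >= sse(xbar)
for y in (-1.0, 0.0, 2.0, xbar + 0.7):
    assert abs(sse(y) - (sse(xbar) + n * (xbar - y)**2)) < 1e-9
    assert sse(xbar) <= sse(y)
```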
{ "language": "en", "url": "https://math.stackexchange.com/questions/1994979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do the limits exist? Determine whether the following limits exist and determine them in case of convergence: 1.) $$ \lim_{n\to\infty}\frac{\arctan(\exp(-n))}{\exp(n^2)}-\frac{\ln(n+1)}{n}. $$ 2.) $$ \lim_{x\downarrow 0}\frac{\tan(x^2)-\ln(\ln(x+1))}{\ln(1/x)} $$ Here's what I tried. 1.) Considering the first summand, $$ \frac{\arctan(\exp(-n))}{\exp(n^2)}\to 0\text{ as }n\to\infty $$ since the nominator tends to $\pi/2$ as $n\to\infty$ and the denominator tends to $\infty$ as $n\to\infty$. For the second summand, I apply L'Hospital, getting $$ \lim_{n\to\infty}\frac{\ln(n+1)}{n}=\lim_{n\to\infty}\frac{1}{n+1}=0. $$ Both together, I get that that the searched limit exists and is 0. 2.) I am not that sure with this. I again consider two summands, namely $$ \frac{\tan(x^2)}{\ln(1/x)}\text{ and }\frac{\ln(\ln(x+1))}{\ln(1/x)}. $$ Since $\tan(x^2)\to 0$ as $x\downarrow 0$ and $\ln(1/x)\to\infty$ as $x\downarrow 0$, the first summand should tend to $0$ as $x\downarrow 0$. Considering the second summand, $$ \ln(\ln(x+1))\to -\infty\text{ as }x\downarrow 0, $$ and $$ \ln(1/x)=+\infty\text{ as }x\downarrow 0. $$ Hence, I am applying L'Hospital. $$ \frac{d}{dx}(\ln(\ln(x+1)))=\frac{1}{(x+1)(\ln(x+1))}\to +\infty\text{ as }x\downarrow 0 $$ $$ \frac{d}{dx}(\ln(1/x))=-\frac{1}{x}\to -\infty\text{ as }x\downarrow 0 $$ So, I have to apply again L'Hospital, giving me $$ \frac{d}{dx}\left(\frac{1}{(x+1)(\ln(x+1))}\right)=\frac{-1}{(x+1)^2\cdot\ln(x+1)}-\frac{1}{(x+1)^2\cdot (\ln(x+1))^2}\to -\infty, x\downarrow 0 $$ and $$ \frac{d}{dx}\left(-\frac{1}{x}\right)=\frac{1}{x^2}\to +\infty, x\downarrow 0. $$ Hm, another application of L'Hospital?
You are right about the first limit, with a slight note that $\arctan(\exp(-n))$ does not tend to $\pi/2$ as $n \to +\infty$ but rather to zero. I think you missed the minus in the exponent. In the second limit, yes, you can use a second iteration of l'Hôpital's rule. That's a pretty common situation. One just needs to make sure they check the assumptions anew at each step, which you did. So go ahead taking a derivative of the new numerator and denominator and you'll find the numerical answer from there easily.
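For 2.), one more observation saves the second round of l'Hôpital (my own continuation of the computation): the derivative ratio you obtained equals $\frac{1/((x+1)\ln(x+1))}{-1/x} = \frac{-x}{(x+1)\ln(x+1)}$, and since $\ln(1+x)/x \to 1$, this tends to $-1$. So the second summand tends to $-1$ and the whole limit in 2.) is $0-(-1)=1$. A quick numerical check:

```python
import math

# second limit: numerically the expression approaches 1
x = 1e-8
val = (math.tan(x * x) - math.log(math.log1p(x))) / math.log(1 / x)
assert abs(val - 1.0) < 1e-6

# first limit: the surviving piece ln(n+1)/n still goes to 0, just slowly
n = 10**7
assert 0 < math.log(n + 1) / n < 1e-5
```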
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closest point on a plane to a surface. And vice versa. Find the point on $z=1-2x^2-y^2$ closest to $2x+3y+z=12$ using Lagrange multipliers. Point on surface closest to a plane using Lagrange multipliers Although the methods used in the answers are helpful and do work, my professor told me that the way he wants us to do this problem is to recognize that the distance will be minimized when the normal vector to one point on the surface is parallel to the normal vector of the plane. To find the normal vector to the surface at a point $(x,y,z)$ we write our function implicitly and take the gradient of the new function (gradient is parallel to level curve): $$G(x,y,z,)=z+2x^2+y^2=1$$ $$\nabla G=\langle4x,2y,1\rangle$$ The normal vector of the plane is, $$\langle 2,3,1 \rangle$$ Hence we have, $$\langle 4x,2y,1 \rangle=\lambda \langle 2,3,1 \rangle$$ And that, $$z+2x^2+y^2=1$$ Question $1$: The part I don't understand is the claim that the distance will be minimized when the normals are parallel. And how is this using Lagrange multipliers, May someone please explain. Question $2$ Find the point on $2x+3y+z=12$ closest to $z=1-2x^2-y^2$ using Lagrange multipliers. I suppose we have use the fact that the normal of the plane is $\langle 2,3,1 \rangle$ with the fact that the point closest to the plane on the surface is $(\frac{1}{2},\frac{3}{2},-\frac{7}{4})$ to come up with the equation of the line that goes through our two closest points in terms of $t$ and then substitute values of $x$, $y$, $z$ into our plane equation to come up with the point on the plane. But again, I don't see where Lagrange multipliers comes into play.
Answering "I don't understand the claim that the distance will be minimized when the normals are parallel": on a purely intuitive basis (not pretending to be rigorous), consider a point on the given surface, and the tangent plane to the surface at that point. In a small domain around the point, the surface is "approximated" by the tangent plane $t$. If this is not parallel to the other given plane $p$, then there will be a point in that small domain which is closer to $p$ than the original one. Move to that point and repeat the operation: you will end up (if such a point exists) at a point on the surface where the tangent plane is parallel to $p$.
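As a complementary numerical check (my own): the candidate point $(\frac12,\frac32,-\frac74)$ from the question does satisfy both the surface equation and the parallel-normals condition, which is exactly the Lagrange condition $\nabla G = \lambda\langle 2,3,1\rangle$ with $\lambda = 1$ here:

```python
x, y = 0.5, 1.5
z = 1 - 2 * x**2 - y**2             # the point lies on the surface
assert abs(z - (-1.75)) < 1e-12

grad_G = (4 * x, 2 * y, 1.0)        # normal to the surface at (x, y, z)
n_plane = (2.0, 3.0, 1.0)           # normal to 2x + 3y + z = 12
lam = grad_G[2] / n_plane[2]        # candidate multiplier
assert all(abs(g - lam * p) < 1e-12 for g, p in zip(grad_G, n_plane))
```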
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove using Induction | Tricky one! I actually need some help. I want to prove using simple induction that Q.1) $2^n > n^3$ for all $n \geq 10 $ I tried solving it like this... Base Step: $n = 10$: $2^{10} > 10^3 = 1024 > 1000$ So, that's true and fine. Inductive step: Suppose $2^n > n^3$ is true for some $n$. So it means that it should also be true for $n+1$ So, $2^{n+1} > (n+1)^3$ $2^{n+1} > n^3 + 3n^2 + 3n + 1$ L.H.S: $2^{n+1} = $? Now here I'm stuck in further expanding this prove. I want to solve it further and make the $2^{n+1}$ with a power on top of it that I can use to compare with $n^3 + 3n^2 + 3n + 1$. Kindly, tell me as easy as possible as I'm not expert in it. Thanks
Hint: From the induction hypothesis, you deduce that $$2^{n+1}=2\cdot 2^n>2n^3,$$ hence by transitivity, it's enough to show that $2n^3\ge (n+1)^3$, or $\Bigl(1+\dfrac 1n\Bigr)^3\le2$. Observe that $$\Bigl(1+\dfrac 1n\Bigr)^3=1+\frac3n+\frac3{n^2}+\frac1{n^3}\le 1+\frac9n\quad\text{(why?)}$$
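Both the inequality itself and the key estimate in the hint (each of $\frac3n,\frac3{n^2},\frac1{n^3}$ is at most $\frac3n$, and $1+\frac9n\le 2$ once $n\ge 9$) are easy to confirm by brute force; a sanity check of mine:

```python
# the statement itself, for a range of n
for n in range(10, 200):
    assert 2**n > n**3

# the hint's key step: (1 + 1/n)^3 <= 1 + 9/n <= 2 for n >= 10
for n in range(10, 200):
    assert (1 + 1/n)**3 <= 1 + 9/n <= 2
```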
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If $\lambda = i$ is an eigenvalue of $A \in \mathbb R^{n \times n}$, explain why the rank of $A^3 + A$ is less than $n$ $A$ is a $n \times n$ matrix with real entries and $\lambda = i$ is an eigenvalue of $A$. Explain why the rank of $A^3 + A$ is less than $n$. I understand completely how to get eigenvalues, as well as information regarding the dimension of null space, rank, etc., but have a really hard time understanding how I am supposed to relate the two, which I can only assume I have to do to answer this question.
Let $\vec{x}$ be an eigenvector corresponding to the eigenvalue $\lambda=i$. By definition, this means that $A\vec{x}=\lambda\vec{x}=i\vec{x}$. Consider the product $A^3\vec{x}$ and note that we can move the scalars $i$ from each product to the front, yielding $A^3\vec{x}=A(A(A\vec{x}))=iA(A\vec{x})=i^2A\vec{x}=i^3\vec{x}=-i\vec{x}.$ So now $(A^3+A)\vec{x}=A^3\vec{x}+A\vec{x}=-i\vec{x}+i\vec{x}=\vec{0}$. Eigenvectors are nonzero by definition, so $\vec{x}$ is a nonzero vector in the nullspace of $A^3+A$. Since $A^3+A$ has a nontrivial nullspace, it must have rank strictly less than $n$.
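A concrete example (mine): the rotation matrix $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ has eigenvalues $\pm i$, and indeed $A^3+A$ is the zero matrix, so its rank drops all the way to $0 < n = 2$:

```python
A = [[0.0, -1.0], [1.0, 0.0]]   # 90-degree rotation, eigenvalues ±i

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A3 = matmul(A, matmul(A, A))
M = [[A3[i][j] + A[i][j] for j in range(2)] for i in range(2)]

# A^3 + A = 0, hence rank(A^3 + A) = 0 < 2
assert all(abs(M[i][j]) < 1e-12 for i in range(2) for j in range(2))
```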
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Linearly independent set of vectors in a normed space Lemma: Let $\left \{ v_{1},\cdot \cdot \cdot ,v_{n} \right \}$ be a linearly independent set of vectors in a normed space $V,\left \| \cdot \right \|$. Then, there exists a constant $c>0$ such that for any scalars $\alpha_{1},\cdot \cdot \cdot ,\alpha_{n}$: $\left \| \alpha^{1}v_{1}+\cdot \cdot \cdot +\alpha^{n}v_{n} \right \|\leq c\left ( \left | \alpha^{1} \right |+\cdot \cdot \cdot + \left |\alpha^{n} \right | \right )$. If $s=0$, the proof is trivial. So suppose $s\neq 0$ and define $\beta^{i}:= \frac{\alpha^{i}}{s}$. With some manipulation we get $\left \| \sum_{i}^{n} \beta^{i}v_{i}\right \| \leq c$ Suppose this statement is false. Then there exists a sequence $\left ( \beta_{m} \right )=\left ( b_{m}^{1}v_{1}+\cdot \cdot \cdot +\beta_{m}^{n}v_{n} \right )$ in $V $ with $\sum_{i=1}^{n}\left | \beta_{m}^{i} \right |=1$ such that $\lim_{m\rightarrow \infty}\left \| \beta_{m} \right \| = 0$ I'll like to know why the limit of $\beta_{m}$ tends to $0$. Any help is appreciated.
$$||\alpha^1v_1+...+\alpha^nv_n||\le ||\alpha^1v_1||+...+||\alpha^nv_n||=|\alpha^1| ||v_1||+...+|\alpha^n| ||v_n||\le max(||v_i||)(|\alpha^1|+...|\alpha^n|)$$ Any $c\ge max(||v_i||)$ is ok
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Suppose $t>0$ and $0<\delta<1$. Prove: $\int_{0}^{t} (t-s)^{-1/2}s^{-1+\delta/2}ds=t^{(-1+\delta)/2}$. Suppose $t>0$ and $0<\delta<1$. Prove: $\int_{0}^{t} (t-s)^{-1/2}s^{-1+\delta/2}ds=t^{(-1+\delta)/2}$. I tried dilation but it didn't work. I appreciate if anyone can give some hints. Thank you.
Using the same steps as DonAntonio in his answer,$$\int_0 ^t \frac{s^{\frac{\delta }{2}-1}}{\sqrt{t-s}}\,ds=\sqrt{\pi }\frac{\Gamma \left(\frac{\delta }{2}\right) }{\Gamma \left(\frac{\delta +1}{2}\right)}t^{\frac{\delta -1}{2}}$$ If $\delta$ is close to $0$, Taylor series give $$\sqrt{\pi }\frac{\Gamma \left(\frac{\delta }{2}\right) }{\Gamma \left(\frac{\delta +1}{2}\right)}=\frac{2}{\delta }+\log (4)+ \left(\log ^2(2)-\frac{\pi ^2}{12}\right)\delta+O\left(\delta ^2\right)$$ If $\delta$ is close to $1$, Taylor series give $$\sqrt{\pi }\frac{\Gamma \left(\frac{\delta }{2}\right) }{\Gamma \left(\frac{\delta +1}{2}\right)}=\pi -\pi \log (2)(\delta -1)+O\left((\delta -1)^2\right)$$
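A numeric spot-check of the constant (my own, using a crude midpoint rule): for $\delta=1$ the closed form reduces to $\sqrt\pi\,\Gamma(1/2)/\Gamma(1)\cdot t^0=\pi$, independent of $t$:

```python
import math

t, delta = 2.0, 1.0
expected = (math.sqrt(math.pi) * math.gamma(delta / 2)
            / math.gamma((delta + 1) / 2)) * t ** ((delta - 1) / 2)   # = pi here

# midpoint rule; both endpoint singularities are integrable
N = 500_000
h = t / N
total = sum((t - s) ** -0.5 * s ** (delta / 2 - 1) * h
            for s in ((i + 0.5) * h for i in range(N)))

assert abs(expected - math.pi) < 1e-12
assert abs(total - expected) < 1e-2
```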
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculate exponential limit involving trigonometric functions Calculate the following limit: $$\lim_{x \rightarrow 0} \left( \frac{\tan x}{x} \right) ^ \frac{1}{\sin^2 x}$$ I know the result must be $\sqrt[3]{e}$ but I don't know how to get it. I've tried rewriting the limit as follows: $$\lim_{x \rightarrow 0} e ^ {\ln {\left( \frac{\tan x}{x} \right) ^ \frac{1}{\sin^2 x}}} = \lim_{x \rightarrow 0} e ^ {\frac{1}{\sin^2 x} \ln {\left( \frac{\tan x}{x} \right)}}$$ From this point, I applied l'Hospital's rule but got $1$ instead of $\sqrt[3]{e}$. Thank you!
You have an indeterminate form of kind $1^{\infty}.$ You can solve this type of problem via: $$\lim_{x\to a} f(x)^{g(x)}=e^{\lim_{x\to a} (f(x)-1)g(x)}.$$ Edit: If $\lim_{x\to a}f(x)=1$ then it is $$\lim_{x\to a} f(x)^{g(x)}=\lim_{x\to a} (1+f(x)-1)^{\dfrac{f(x)-1}{f(x)-1}g(x)}=\lim_{x\to a} \left( (1+f(x)-1)^{\dfrac{1}{f(x)-1}}\right)^{(f(x)-1)g(x)}.$$ Now, it is $$\lim_{x\to a} \left( (1+f(x)-1)^{\dfrac{1}{f(x)-1}}\right)=e.$$ So, if $\lim_{x\to a} (f(x)-1)g(x)$ exists, we have $$\lim_{x\to a} f(x)^{g(x)}=e^{\lim_{x\to a} (f(x)-1)g(x)}.$$ End of the edit. In this case, $$\begin{align}\lim_{x\to 0}\left(\dfrac{\tan x}{x}-1\right)\dfrac{1}{\sin^2 x} & \\ &= \lim_{x\to 0}\dfrac{\tan x-x}{x\sin^2 x} \\ &=\lim_{x\to 0}\dfrac{\dfrac{1}{\cos^2 x}-1}{\sin^2 x+2x\sin x\cos x} \\& =\lim_{x\to 0}\dfrac{\sin^2 x}{\cos^2 x(\sin^2 x+2x\sin x\cos x)}\\&=\lim_{x\to 0}\dfrac{\sin x}{\cos^2 x(\sin x+2x\cos x) }\\& =\lim_{x\to 0}\dfrac{\cos x}{-2\cos x\sin x(\sin x+2x\cos x)+\cos^2x(3\cos x-2x\sin x) }\\&=\dfrac 13.\end{align}$$ Hence the limit is $e^{1/3}=\sqrt[3]{e}$.
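A quick numerical confirmation (my addition) that the exponent tends to $1/3$ and the limit to $e^{1/3}$:

```python
import math

x = 1e-3
# exponent (f(x)-1)g(x) after taking logs: log(tan x / x) / sin^2 x -> 1/3
exponent = math.log(math.tan(x) / x) / math.sin(x) ** 2
assert abs(exponent - 1/3) < 1e-4

val = (math.tan(x) / x) ** (1 / math.sin(x) ** 2)
assert abs(val - math.exp(1/3)) < 1e-3
```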
{ "language": "en", "url": "https://math.stackexchange.com/questions/1995940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
When is $E[f(X)]=0$ for even functions Let $X$ be standard normal and $f$ a function that satisfies * *$f(0)=0$ *$f$ is even *$(x-{\rm sign}(x) \cdot a) \cdot x \le f(x) \le (x+{\rm sign}(x) \cdot a) \cdot x$, for all $x$, and some fixed $a>0$. Moreover, these bounds are asymptotcily tight. I am either trying to find and example of $f(x)$ such that \begin{align} E[f(X)]=0 \end{align} or that to show that this can not happen. Thanks.
If $a < \sqrt{\dfrac{\pi}{2}}$, then for any $f$ satisfying the given conditions, we have: $\mathbb{E}[f(X)]$ $= \displaystyle\int_{-\infty}^{\infty}\dfrac{1}{\sqrt{2\pi}}e^{-x^2/2}f(x)\,dx$ $= \displaystyle\int_{0}^{\infty}\sqrt{\dfrac{2}{\pi}}e^{-x^2/2}f(x)\,dx$ $\ge\displaystyle\int_{0}^{\infty}\sqrt{\dfrac{2}{\pi}}e^{-x^2/2}(x^2-ax)\,dx$ $= 1 - a\sqrt{\dfrac{2}{\pi}} > 0$. So, no such function exists if $a < \sqrt{\dfrac{\pi}{2}}$. If $a \ge \sqrt{\dfrac{\pi}{2}}$, then we can pick $f(x) = \begin{cases}x^2-\sqrt{\dfrac{\pi}{2}}x & \text{if} \ x \ge 0 \\ x^2+\sqrt{\dfrac{\pi}{2}}x & \text{if} \ x < 0\end{cases}$. This $f$ satisfies the given conditions, and $\mathbb{E}[f(X)]$ $= \displaystyle\int_{-\infty}^{\infty}\dfrac{1}{\sqrt{2\pi}}e^{-x^2/2}f(x)\,dx$ $= \displaystyle\int_{0}^{\infty}\sqrt{\dfrac{2}{\pi}}e^{-x^2/2}f(x)\,dx$ $= \displaystyle\int_{0}^{\infty}\sqrt{\dfrac{2}{\pi}}e^{-x^2/2}\left(x^2-\sqrt{\dfrac{\pi}{2}}x\right)\,dx = 0$.
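The borderline case $a=\sqrt{\pi/2}$ can be checked numerically (a sketch of mine, truncating the Gaussian tail at 12, where $e^{-x^2/2}$ is negligible):

```python
import math

a = math.sqrt(math.pi / 2)
N = 100_000
h = 12.0 / N

# midpoint rule for  ∫_0^∞ sqrt(2/π) e^{-x²/2} (x² - a·x) dx  =  1 - a·sqrt(2/π)  =  0
I = sum(math.sqrt(2 / math.pi) * math.exp(-s * s / 2) * (s * s - a * s) * h
        for s in ((i + 0.5) * h for i in range(N)))
assert abs(I) < 1e-4
```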
{ "language": "en", "url": "https://math.stackexchange.com/questions/1996165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Trace of product of three Pauli matrices Consider the four $2\times 2$ matrices $\{\sigma_\mu\}$, with $\mu = 0,1,2,3$, which are defined as follows $$ \sigma_0 =\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) $$ $$ \sigma_1 =\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right) $$ $$ \sigma_2 =\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right) $$ $$ \sigma_3 =\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right) $$ i.e. the identity matrix and the three Pauli matrices. For the trace of the product of any two matrices $\sigma_\mu$ one has the identity $\text{tr}(\sigma_\mu \sigma_\nu)= 2 \delta_{\mu \nu}$. I was wondering if a similar identity can be derived for the product of three sigma matrices, $$ \text{tr}(\sigma_\mu \sigma_\nu \sigma_\lambda)= \;? $$
First I note a few things about the defined matrices: 1) Commutation: $[\sigma_\mu,\sigma_\nu] = 2 i \epsilon_{0 \mu \nu \rho} \sigma_\rho$ 2) Anti-commutation: $\{ \sigma_\mu ,\sigma_\nu \} = 2 \delta_{\{\mu \nu} \sigma_{0\}} - 4 \delta_{\mu 0} \delta_{\nu 0} \sigma_0$ where $a_{ \{b c} d_{i\}} = a_{bc} d_i + a_{ib} d_{c} + a_{ci}d_b$. This gives: $\sigma_\mu \sigma_\nu = i \epsilon_{0 \mu \nu \rho} \sigma_\rho + \delta_{\{\mu \nu} \sigma_{0\}}-2\delta_{\mu 0} \delta_{\nu 0} \sigma_0$ Thus: \begin{eqnarray} \text{tr}(\sigma_\mu \sigma_\nu \sigma_\rho) &=& \text{tr}(i \epsilon_{0 \mu \nu \alpha} \sigma_\alpha\sigma_\rho + \delta_{\{\mu \nu} \sigma_{0\}}\sigma_\rho-2\delta_{\mu 0} \delta_{\nu 0} \sigma_\rho)\\ &=& 2 i \epsilon_{0\mu\nu\rho}+2\delta_{\{\mu \nu}\delta_{0\}\rho}-4\delta_{\mu0}\delta_{\nu 0} \delta_{\rho 0 } \end{eqnarray} where I have used the trace identity sketched out in the problem statement. We can check this expression by confirming that if one of the three indices, say $\mu$, equals $0$, then we get back the original identity. We can also look at what happens if $\mu ,\nu,\rho \ne0$: in this case we get $2i\epsilon_{0\mu \nu \rho}$, which matches the other answer.
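The resulting formula can be cross-checked by brute force; here is my own verification of a few representative cases with plain Python complex arithmetic:

```python
I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr3(A, B, C):
    M = mm(A, mm(B, C))
    return M[0][0] + M[1][1]

assert tr3(s1, s2, s3) == 2j     # the 2i·ε term, all indices distinct and nonzero
assert tr3(s2, s1, s3) == -2j    # an odd permutation flips the sign
assert tr3(I2, s1, s1) == 2      # one index equal to 0: back to tr(σ_1 σ_1) = 2δ_{11}
assert tr3(s1, s1, s1) == 0      # tr σ_1 = 0
assert tr3(I2, I2, I2) == 2      # the μ = ν = ρ = 0 case: 2·3 − 4 = 2
```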
{ "language": "en", "url": "https://math.stackexchange.com/questions/1996297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Best of seven with different home/away win rates? I'm having some issues with a question from my intro probability course and I'm hoping you can help. Here's the question: In the World Series in baseball and in the playoffs in the National Basketball Association and the National Hockey Association, the winner is determined by the best of seven games. Most teams have better records at home. Assume that the two teams are evenly matched and each has a 60% chance of winning at home and a 40% chance of winning away. In principle, both teams should have an equal chance of winning a seven game series. Determine which pattern of home games is closer to giving each team a 50% chance of winning. Consider the two common patterns: (a) two home, three away, two home; and (b) two home, two away, one home, one away, one home. Obviously, If the probabilities stayed the same I could use the binomial distribution and add the probabilities of winning after 4, 5, 6 and 7 games. since they're not, I'm not quite sure how to proceed. Any hints?
The ordering of the games doesn't matter with this stripped down model (with no "momentum" factor, say). To see, this, note that we can always assume that all seven games are played. Even though the series is probably decided before game $7$, playing the series out correctly determines the winner by simple majority. Thus the winner is determined by counting the games $Home$ wins at home and those $Home$ wins away (let's say team $Home$ has the home team advantage here). $$p(Home)=\,\sum'\binom 4a \binom 3b.6^a.6^{4-a}4^b.6^{3-b}=\sum' \binom4a\binom 3b\,.6^{3+a-b}.4^{4+b-a}$$ Where the sum is taken over pairs $a,b$ with $0≤a≤4,\,0≤b≤3,\,a+b≥4$ Here, of course, "a" denotes the number of home games $Home$ wins, and $b$ denotes the number of away games $Home$ wins. That sum is easily evaluated (with mechanical assistance) and we get $$p(Home)=\fbox {0.532032}$$ Note: For modeling, it is more interesting to introduce some path dependence. That is, assume that a team gets a probability boost for having won the prior game. Of course then the order matters a great deal (though in this case it would be tempting to simply evaluate all $2^7$ strings individually. there are only $128$ of them, after all).
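The stated value, and the claim that the ordering is irrelevant, are easy to verify by direct recursion over series states (my own sketch, with early stopping, which the argument above shows is equivalent to playing all seven games):

```python
def p_series(pattern, w=0, l=0, i=0):
    # probability that the home-advantage team wins the series from this state
    if w == 4:
        return 1.0
    if l == 4:
        return 0.0
    p = 0.6 if pattern[i] == 'H' else 0.4
    return (p * p_series(pattern, w + 1, l, i + 1)
            + (1 - p) * p_series(pattern, w, l + 1, i + 1))

# (a) two home, three away, two home   and   (b) 2 home, 2 away, 1 home, 1 away, 1 home
pa = p_series("HHAAAHH")
pb = p_series("HHAAHAH")
assert abs(pa - 0.532032) < 1e-9
assert abs(pb - 0.532032) < 1e-9
assert abs(pa - pb) < 1e-12
```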
{ "language": "en", "url": "https://math.stackexchange.com/questions/1996544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that $\prod_{k=1}^{n-1}(1+\frac1k)^k = \frac{n^n}{n!}$ for all $n \in \Bbb N \ge 2$ I've tried to prove this for a while now, but I can't get it: $\prod_{k=1}^{n-1}(1+\frac1k)^k = \frac{n^n}{n!}$ for all $n \in \Bbb N \ge 2$ Solution: $\prod_{k=1}^{(n+1)-1}(1+\frac1k)^k=\frac{(n+1)^{(n+1)}}{(n+1)!}$ $\left(1+\frac1n\right)^n\cdot\prod_{k=1}^{n-1}\left(\left(1+\frac1k\right)^k\right)= \frac{(n+1)^n\cdot(n+1)}{n!\cdot(n+1)}$ $\frac{n^n}{n!}\left(1+\frac1n\right)^n= \frac{(n+1)^n}{n!}$ $n^n(1+\frac{1}{n})^n=(n+1)^n$ $(n+1)^n=(n+1)^n$
The following telescopic product in disguise: $$ \prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right) = n \tag{1}$$ leads to $$ \prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^k = \frac{n^n}{\prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^{n-k}}=\frac{n^n}{n\cdot(n-1)\cdot\ldots\cdot 1}=\frac{n^n}{n!}.\tag{2}$$ In the opposite direction, we may notice that $\prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^k = \frac{n^n}{n!}$ holds for $n=1$ and $$\prod_{k=1}^{n}\left(1+\frac{1}{k}\right)^k/\prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^k=\frac{(n+1)^n}{n^n}=\frac{(n+1)^{n+1}}{(n+1)!}/\frac{n^n}{n!}.\tag{3}$$
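The identity, together with the telescoping product (1), can be verified exactly with rational arithmetic (my own check):

```python
from fractions import Fraction
import math

for n in range(2, 12):
    prod = Fraction(1)
    for k in range(1, n):
        prod *= (1 + Fraction(1, k)) ** k
    assert prod == Fraction(n**n, math.factorial(n))

    # the telescoping product (1): ∏_{k=1}^{n-1} (1 + 1/k) = n
    assert math.prod(1 + Fraction(1, k) for k in range(1, n)) == n
```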
{ "language": "en", "url": "https://math.stackexchange.com/questions/1996735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why $F_2[X]/(X^2+X+1)$ has $4$ elements and what are those? I don't understand the three claims that some $F_a[X]/(p(x))$ has some $n$ elements in the following text (from Adkins' Algebra): For example Why $F_2[X]/(X^2+X+1)$ has $4$ elements and what are those? I think that since $(X^2+X+1)$ is an ideal so $F_2[X]={\{X^2+X+1}\}F_2[X]=(X^2+X+1)$ so $F_2[X]/(X^2+X+1)$ is singleton? Same as the other two: Why $F_3[X]/(X^2+1)$ (or $F_2[X]/(X^3+X+1)$) has $9$ (or $8$) elements and what are those? Text is elementry itself but here it doesn't explain them well. Simple detailed explanation would be much apprecaited.
Hint: $R=\mathbb{F}_2[X]/(X^2+X+1)$ means that $X^2+X+1=0$ in $R$. Therefore, any time you see a power of $X$, you can reduce it to a lower power using $X^2=-X-1=X+1$ (we can drop the negatives since $-1=1$ in characteristic $2$). For example, $$ X^3\equiv X(X+1)=X^2+X\equiv (X+1)+X=2X+1=1. $$
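Concretely, the four elements are the residues of the polynomials of degree less than $2$: namely $0$, $1$, $X$, and $X+1$. A small sketch (assuming nothing beyond the relation $X^2=X+1$ above) that multiplies these residues and checks every nonzero one is invertible, so the quotient is a field with $4$ elements:

```python
# Represent a + b*X over F_2 as the pair (a, b); reduction uses X^2 = X + 1.
elems = [(a, b) for a in (0, 1) for b in (0, 1)]  # 0, 1, X, X+1

def mul(p, q):
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, and X^2 = X + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

assert len(elems) == 4
# every nonzero residue has a multiplicative inverse, so the quotient is a field
for p in elems:
    if p != (0, 0):
        assert any(mul(p, q) == (1, 0) for q in elems)
```

For instance `mul((0, 1), (1, 1))` returns `(1, 0)`, reflecting that $X(X+1)=X^2+X=(X+1)+X=1$.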
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
$G$ a connected topological group, U open, then $\mathcal{U} = \cup_{i \in \mathbb{N}}\; U^i$ is a subgroup of $G$. Let $G$ a connected topological group, and $U$ an open subset of $G$ that contains the identity element of $G$. I want to solve the following problem: If $U^n$ is the set of the products $u_1 u_2 ...u_n$, with $u_i$ in $U$, show that $\mathcal{U} = \bigcup_{i \in \mathbb{N}}\; U^i$ is an open subgroup of $G$, and hence $\mathcal{U}=G$ . The only part that is giving me problem is to prove that $\mathcal{U}$ is a subgroup. A hint given to the problem is to consider first an open set $V$, such that if $g$ in $V$ then $g^{-1}$ is in $V$. Obviously with $V$ in place of $U$, the property of subgroup is of direct comprobation. Any help?
Choose an open set $V\subseteq U$ containing the identity such that if $g\in V$ then $g^{-1}\in V$ (for instance, $V=U\cap U^{-1}$). As you say, you can then easily show that $\mathcal{V}=\bigcup_{i\in\mathbb{N}} V^i$ is an open subgroup of $G$. Thus $\mathcal{V}=G$. But $\mathcal{V}\subseteq\mathcal{U}$ since $V\subseteq U$, and so you must also have $\mathcal{U}=G$. (Note that without the assumption that $G$ is connected, it is not necessarily true that $\mathcal{U}$ is a subgroup. For instance, take $G=\mathbb{Z}$ and $U=\{0,1\}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does a unitary in the Calkin algebra always lift to an (co-)isometry? Cf. the the title, consider a separable infinite-dimensional Hilbert space H, and the short exact sequence $$0 \to \mathcal{K}(H) \to \mathcal{B}(H) \to \mathcal{Q}(H) \to 0,$$where $\mathcal{K}(H)$ is the compact operators and $\mathcal{Q}(H)$ is the quotient, also known as the Calkin algebra. Let $u$ be a unitary in $\mathcal{Q}(H)$. This means that $u = T + \mathcal{K}$, for a $T \in \mathcal{B}(H)$ where the differences $TT^* - I$ and $T^*T - I$ are compact. How can one show that $u$ lifts to an isometry, or a co-isometry? That is, why is there an isometry (or co-isometry) $S \in \mathcal{B}(H)$ such that $\pi (S) = u$, where $\pi$ denotes the quotient mapping? (Equivalently $S - T$ is compact) This is an exercise in Rørdam's book on K-theory. In the previous part of the exercise one shows that whenever $E$ and $F$ are projections in $\mathcal{B}(H)$ with $Rank(E) \leq Rank(F)$ there exists a partial isometry $V$ with $V^*V = E$ and $VV^* \leq F$. I am assuming that one should use this somehow, possibly together with some results about the index map $\delta_1$, but I am really stuck on this one. Any help is appreciated!
If $u \in Q(H)$ is unitary, let $T_1 \in B(H)$ be such that $\pi(T_1) = u$, and let $n$ denote the Fredholm index of $T_1$. Then consider $$ T = T_1R^{-n} $$ where $R$ denotes the left or right shift (which exists on your separable infinite-dimensional Hilbert space), chosen so that $T$ has index zero. Now there exists $S_1 \in GL(B(H))$ such that $$ T-S_1 \in K(H). $$ Hence, $S:=S_1R^n$ satisfies $$ T_1 -S\in K(H) $$ and is an isometry or co-isometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
If $A,B,$ and $C$ are sets, then $A\times(B-C)$ = $(A \times B)$ $-$ $(A \times C)$. If $A,B,$ and $C$ are sets, then $A\times(B-C)$ = $(A \times B)$ $-$ $(A \times C)$. Proof. Observe the following sequence of equalities. $$\begin{align} A\times(B-C) &= \{(x,y) : (x \in A) \wedge (y \in (B-C))\} \, (\text{Definition of Cartesian Product}) \\ &=\{(x,y) : (x \in A) \wedge \big((y \in B) \wedge (y\notin C)\big)\} \, (\text{Definition of } -) \\ &=\{(x,y) : (x \in A) \wedge (x \in A) \wedge \big((y \in B) \wedge (y\notin C)\big)\} \, (P=P \wedge P) \\ &=\{(x,y) : \big((x \in A) \wedge (y \in B)\big) \wedge \big((x \in A) \wedge (y\notin C)\big)\} \, (\text{Rearrange}) \\ &=\{(x,y) : \big((x \in A) \wedge (y \in B)\big) \wedge \big((x \in A) \wedge (y\notin C)\big)\} \, (\text{Definition of }\cap) \\ \end{align}$$ I'm stuck on the last part -- $(x \in A) \wedge (y\notin C)$ translates to $(A-C)$ but I need it to be $(A \times C)$. I can't quite figure out how to reach that.
First, when writing a proof in logic, don't chain the statements with the equality symbol; use the implication symbol instead, since implication is what is defined for logical propositions. So, I will first show that $(A \times B) -(A\times C) \subseteq A \times (B-C)$. Let $(x,y)\in ((A \times B) -(A\times C)) \implies (x,y)\in (A \times B) \land (x,y) \not \in (A \times C)$ $\implies (x\in A \land y\in B) \land \lnot (x\in A \land y\in C) \implies (x\in A \land y\in B) \land (x\not \in A \lor y \not \in C)$ $\implies (x \in A \land y \in B \land x\not \in A) \lor (x \in A \land y \in B \land x\not \in C) $ Since the first disjunct contains the contradiction $x\in A \land x\not\in A$, it is false, so: $\implies (x \in A \land y \in B \land x\not \in C) \implies x\in A \land y \in(B-C)$ $\implies (x,y)\in(A \times (B-C))$ Now I will show that $A \times (B-C) \subseteq (A \times B) -(A\times C)$. Let $(x,y)\in (A \times (B-C)) \implies x\in A \land y\in (B-C) \implies x\in A \land (y\in B \land y \not \in C)$ $\implies (x\in A \land y\in B \land y \not \in C) \lor (x \in A \land y \in B \land x\not \in A)$ (we may add a false disjunct) $\implies (x\in A \land y\in B) \land (x\not \in A \lor y \not \in C)$ $\implies (x\in A \land y\in B) \land \lnot (x\in A \land y\in C) \implies (x,y)\in (A \times B) \land (x,y) \not \in (A \times C)$ $\implies (x,y)\in ((A \times B) -(A\times C))$ Then $(A \times B) -(A\times C) = A \times (B-C)$. $\blacksquare$
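Beyond the element-chasing proof, the identity can also be spot-checked on small concrete sets; the sets below are arbitrary examples, and this sketch is a sanity check, not a proof:

```python
def cartesian(X, Y):
    # the Cartesian product X x Y as a set of ordered pairs
    return {(x, y) for x in X for y in Y}

A, B, C = {1, 2}, {2, 3, 4}, {3, 5}
# A x (B - C) == (A x B) - (A x C)
assert cartesian(A, B - C) == cartesian(A, B) - cartesian(A, C)
```

Python's built-in `set` type supports the difference operator `-` directly, which makes both sides of the identity one-liners.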
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Interesting Topic for Proofs Class I am teaching one of those university level classes where students learn enough proof theory to do their higher undergraduate courses. In such a class, you need things for the students to prove. What seems to be the norm in textbooks is a little bit of number theory (divisibility, odd/even numbers, etc.), some combinatory reasoning, and modular integers. My question concerns finding a subject of a more geometric flavor. It could be a subject from algebra, topology, or analysis, just as long as it is less discrete as the other subjects. Graph theory is taught in some books, but that boils down to combinatorics. I understand if this question is removed for not meeting the standards of this site. I'm just hoping that it is interesting enough to the community to remain. Thanks.
Bezier curves (quadratic, and then cubic; no need to go further) and spline curves provide a wealth of interesting subjects. The associated proofs are often short and rarely difficult, and they mix geometry (barycenters, 2D or 3D), analysis (parameterized curves), and linear algebra (systems solving, matrix exponential, description of "Bezier patches" in 3D). Success is guaranteed under the condition that the students can program some of the techniques they have seen (I have tested this for several years with Matlab programming). There are many more subjects. I will hopefully come back later.
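As an illustration of the kind of programming exercise meant here (a sketch; the control points are arbitrary), a quadratic Bézier curve can be evaluated with de Casteljau's algorithm, i.e. repeated linear interpolation between control points:

```python
def bezier2(p0, p1, p2, t):
    # de Casteljau: two rounds of linear interpolation between control points
    lerp = lambda a, b, s: tuple((1 - s) * x + s * y for x, y in zip(a, b))
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

# endpoints are interpolated; the midpoint mixes all three control points
assert bezier2((0, 0), (1, 2), (2, 0), 0.0) == (0.0, 0.0)
assert bezier2((0, 0), (1, 2), (2, 0), 1.0) == (2.0, 0.0)
assert bezier2((0, 0), (1, 2), (2, 0), 0.5) == (1.0, 1.0)
```

A natural student proof alongside this code: expanding the two interpolation rounds recovers the Bernstein form $(1-t)^2P_0 + 2(1-t)tP_1 + t^2P_2$.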
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Calculate this limit : $\lim_{x\rightarrow +\infty}\left[x\left(4\arctan\left(\frac{x+1}{x}\right)-\pi\right)\right]$ Calculate the limit $$\lim_{x\rightarrow +\infty}\left[x\left(4\arctan\left(\frac{x+1}{x}\right)-\pi\right)\right]$$ Neither L'Hospital's rule nor Taylor expansions are allowed.
Herein, we present an approach that relies on only (1) a set of inequalities for the arctangent function, obtained using only elementary geometry, and (2) the squeeze theorem. To that end, we begin with the following primer. PRIMER: I showed in THIS ANSWER, using only elementary inequalities from geometry, that the arctangent function satisfies the inequalities $$\bbox[5px,border:2px solid #C0A000]{\frac{x}{\sqrt{1+x^2}} \le \arctan(x) \le x} \tag 1$$ for $x\ge 0$. Note that we can write $$\arctan\left(\frac{x+1}{x}\right)=\pi/4+\arctan\left(\frac{1}{2x+1}\right)$$ Therefore, we see that $$4\arctan\left(\frac{x+1}{x}\right)-\pi= \arctan\left(\frac4{2x+1}\right) \tag 2$$ Combining $(1)$ and $(2)$ reveals $$\frac{\frac{4x}{2x+1}}{\sqrt{1+\left(\frac{1}{2x+1}\right)^2}} \le x\,\left(4\arctan\left(\frac{x+1}{x}\right)-\pi\right) \le \frac{4x}{2x+1}$$ whereupon application of the squeeze theorem yields the coveted limit $$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to \infty}x\,\left(4\arctan\left(\frac{x+1}{x}\right)-\pi\right)=2}$$
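As a numerical cross-check of the boxed result (not a substitute for the squeeze argument), one can evaluate the expression at large $x$ and watch it approach $2$; a minimal sketch:

```python
import math

def f(x):
    # x * (4*arctan((x+1)/x) - pi)
    return x * (4 * math.atan((x + 1) / x) - math.pi)

# the limit is 2; the deviation is of order 1/x
for x in (1e2, 1e4, 1e6):
    assert abs(f(x) - 2) < 4 / x
```

The tolerance $4/x$ reflects that $4\arctan\big(\tfrac1{2x+1}\big)\approx \tfrac{4}{2x+1}$, so $f(x)-2$ shrinks like $-1/x$.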
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Calculating range of parameters for parametrization of hemisphere. I have a hemisphere $x^2+y^2+z^2=1\ ;z\geqslant 0.$ I want to represent it by the vector representation $$\vec r(u,v)=\sin u \cos v\ \hat i+\sin u\sin v \ \hat j+\cos u \ \hat k$$ I am having a very silly problem of figuring out the ranges of $u\ \&\ v$. For $u$ , I can guess that since $\cos u\geqslant 0\ ;\ u\in [-\pi/2,\pi/2]$ . but I am not getting how to calculate the range of $v$ . Could someone help?
Correction: take $u\in[0, \pi/2]$; then $v$ simply ranges over $[0,2\pi]$. With $u\in[0,\pi/2]$ we have $\sin u\ge 0$ and $\cos u\ge 0$, and as $v$ runs through $[0,2\pi]$ the point $(\sin u\cos v,\ \sin u\sin v)$ sweeps every direction in the $xy$-plane. Allowing $u\in[-\pi/2,\pi/2]$ would cover each point of the hemisphere twice, since $\vec r(-u, v+\pi)=\vec r(u,v)$.
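A quick numerical sketch checking that with $u\in[0,\pi/2]$ and $v\in[0,2\pi]$ the parametrization stays on the unit sphere with $z\ge 0$:

```python
import math

def r(u, v):
    # the parametrization r(u, v) = (sin u cos v, sin u sin v, cos u)
    return (math.sin(u) * math.cos(v),
            math.sin(u) * math.sin(v),
            math.cos(u))

# sample the stated parameter ranges
for i in range(6):
    u = (math.pi / 2) * i / 5
    for j in range(9):
        v = 2 * math.pi * j / 8
        x, y, z = r(u, v)
        assert abs(x * x + y * y + z * z - 1) < 1e-12  # on the unit sphere
        assert z >= 0                                   # upper hemisphere
```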
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that field trace in $GF(2^k)$ maps half of the elements to 0 and the other half to 1. I'm reading a proof about the solutions of the equation $a = z^2 + z$ in $GF(2^k)$, that is, the finite field with $2^k$ elements. At some point it uses that the trace function defined as $Tr(a) = a + a^2 + \cdots + a^{2^{k-1}}$ maps half of the elements of $GF(2^k)$ to 0 and half of the elements to 1. I'm trying to prove it. My approach: The solutions of the equation are proved to be of the form $\theta$ and $\theta + 1$. As the trace is linear we have $Tr(\theta + 1) = Tr(\theta) + Tr(1)$. If k is odd then $Tr(1) = 1 + 1^2 + \cdots + 1^{2^{k-1}} = 1$ so it is natural to define from the set $$A = \{ a \in GF(2^k):Tr(a) = 0\}$$ to $$B = \{ a \in GF(2^k):Tr(a) = 1\}$$ the mapping $f(a) = a+1$. This mapping appears to me to be a bijection. What happens if k is even?
The trace is a linear form, so it is a map $$Tr: GF(2^k)\to GF(2).$$ But then linear algebra (rank-nullity) tells us that the null space of this surjective (see below) map is a vector subspace of $GF(2^k)$ of dimension $k-1$, hence has $2^{k-1}$ elements. By definition the null space consists of the things that map to $0$, and its cardinality is exactly half of all the elements, so the rest must go to $1$. The map is surjective because $Tr$ is the sum of the Galois conjugates, and you know the Galois group is cyclic and generated by $x\mapsto x^2$, so anything of trace $0$ satisfies $x+x^2+x^4+x^8+\ldots +x^{2^{k-1}}=0$. But this is a polynomial equation of degree $2^{k-1}$, so only $2^{k-1}$ things can possibly satisfy it; hence something has trace $1$.
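For a concrete sanity check, one can brute-force the smallest nontrivial odd case, $GF(2^3)=\mathbb{F}_2[x]/(x^3+x+1)$, representing field elements as bitmasks; a sketch (the modulus is just one convenient choice of irreducible polynomial):

```python
K, MOD = 3, 0b1011  # GF(2^3) with modulus x^3 + x + 1

def mul(a, b):
    # carry-less multiplication with reduction mod the irreducible polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << K):
            a ^= MOD
        b >>= 1
    return r

def tr(a):
    # Tr(a) = a + a^2 + a^4: sum of the Frobenius conjugates
    s, p = 0, a
    for _ in range(K):
        s ^= p
        p = mul(p, p)
    return s

traces = [tr(a) for a in range(1 << K)]
assert all(t in (0, 1) for t in traces)         # trace lands in GF(2)
assert traces.count(0) == traces.count(1) == 4  # exactly half map to 0
```

Changing `K` and `MOD` to another irreducible polynomial (e.g. `K, MOD = 4, 0b10011` for $x^4+x+1$) checks the even case the same way.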
{ "language": "en", "url": "https://math.stackexchange.com/questions/1997975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Pumping lemma to prove the set of all words such that the sum of the number of 0s and 1s occurring in it is an even number is not regular In the alphabet {0, 1, 2}, how can I prove using the pumping lemma that there is not a regular expression that can describe the set of all words such that the sum of the number of 0s and 1s occurring in it is an even number, e.g. 222 and 02010 would be two such words, 2122 and 0201021 would not.
You can’t, because this language is regular. The easiest way to see this is to design a finite state automaton $M$ that recognizes the language. $M$ needs only two states, $q_0$ and $q_1$; $q_0$ will be the initial state and the only acceptor state. $M$ should change state whenever it reads a $0$ or a $1$, and it should remain in the same state whenever it reads a $2$. It’s not hard to check that it will be in state $q_0$ if and only if the total number of zeroes and ones that it has read is even. There are standard techniques for deriving a corresponding regular expression from $M$.
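The two-state automaton $M$ described above takes only a few lines to simulate; here is a sketch checked against the examples from the question:

```python
def accepts(word):
    state = 0  # q0; tracks the parity of the number of 0s and 1s read so far
    for c in word:
        if c in "01":
            state ^= 1  # reading 0 or 1 flips the state; reading 2 does not
    return state == 0    # accept iff we end in q0

# examples from the question
assert accepts("222") and accepts("02010")
assert not accepts("2122") and not accepts("0201021")
```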
{ "language": "en", "url": "https://math.stackexchange.com/questions/1998128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Damped pendulum equation Given the equation of a damped pendulum: $$\frac{d^2\theta}{dt^2}+\frac{1}{2}\left(\frac{d\theta}{dt}\right)^2+\sin\theta=0$$ with the pendulum starting with $0$ velocity, apparently we can derive: $$\frac{dt}{d\theta}=\frac{1}{\sqrt{\sqrt2\left[\cos\left(\frac{\pi}{4}+\theta\right)-e^{-(\theta+\phi)}\cos\left(\frac{\pi}{4}-\phi\right)\right]}}$$ where $\phi$ is the initial angle from the vertical. How can we derive that? Obviously $\frac{dt}{d\theta}$ is the reciprical of $\frac{d\theta}{dt}$, but I don't see how to deal with the second derivative. I've found a similar derivation at https://en.wikipedia.org/wiki/Pendulum_(mathematics), where the formula $${\frac {d\theta }{dt}}={\sqrt {{\frac {2g}{\ell }}(\cos \theta -\cos \theta _{0})}}$$ is derived in the "Energy derivation of Eq. 1" section. However, that uses a conservation of energy argument which is not applicable for a damped pendulum. So how can I derive that equation?
HINT...Write $$\dot{\theta}=x$$ Then you have $$x\frac{dx}{d\theta}+\frac 12x^2+\sin\theta=0$$ This is a Bernoulli Differential Equation
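Carrying the hint through (a sketch; the sign conventions here assume the motion starts at rest at angle $-\phi$, which is what produces the $e^{-(\theta+\phi)}$ in the target formula): substituting $v = x^2$, so that $v' = 2xx'$, turns the Bernoulli equation into a linear one,

```latex
\frac{dv}{d\theta} + v = -2\sin\theta,
\qquad
v = Ce^{-\theta} + \cos\theta - \sin\theta
  = Ce^{-\theta} + \sqrt{2}\,\cos\!\left(\theta + \tfrac{\pi}{4}\right).
```

Imposing zero initial velocity, $v(-\phi)=0$, gives $C=-\sqrt{2}\,e^{-\phi}\cos\!\left(\tfrac{\pi}{4}-\phi\right)$, hence $v(\theta)=\sqrt{2}\left[\cos\!\left(\tfrac{\pi}{4}+\theta\right)-e^{-(\theta+\phi)}\cos\!\left(\tfrac{\pi}{4}-\phi\right)\right]$, and since $v=\left(\frac{d\theta}{dt}\right)^2$, taking the reciprocal square root yields $\frac{dt}{d\theta}=\frac{1}{\sqrt{v}}$, the quoted formula.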
{ "language": "en", "url": "https://math.stackexchange.com/questions/1998244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Potential Energy of a string (Wave Equation PDEs) This may turn out to be an elementary question, but I'm having trouble understanding where this actually comes from. In my PDEs course, we're learning about the wave equation. We're given the following problem on our current homework set. Consider a taut string $0 ≤ x ≤ L$ with constant density $\rho$ and constant tension $T$ whose vertical displacement at time $t$ and position $x$ is described by the function $u(x, t)$. (a) Write down the integral representing the total kinetic energy in the segment $a ≤ x ≤ b$. (b) Write a formula for the total potential energy in the segment $a ≤ x ≤ b$ assuming that the potential energy is proportional to the difference between the length of the stretched string on the interval $[a, b]$ and the equilibrium length of the string on the interval $[a, b]$, and explain using dimensional analysis why the constant of proportionality is the tension. (c) Approximate the square root in your answer to (b) using Taylor’s theorem. (d) Derive the wave equation from conservation of energy. My question is regarding part (b). I've found a source online that says that $$ dl = \sqrt{dx^2 + dy^2} - dx \approx \frac{1}{2} (\frac{\partial y}{\partial x})^2 dx $$ I understand that the change in length of the string is represented by its magnitude minus the change in position, i.e. $$ dl = \sqrt{dx^2 + dy^2} - dx $$ But, I'm having trouble showing its equivalence to $$ \frac{1}{2} (\frac{\partial y}{\partial x})^2 dx $$ My attempt at it is $$ \sqrt{dx^2 + dy^2} - dx = \sqrt { 1 + (\frac{dy}{dx})^2 } dx -dx \neq \frac{1}{2} (\frac{\partial y}{\partial x})^2 dx $$ Any help towards the correct answer would be appreciated. I understand why you multiply by $T$ to get the potential energy, it's just showing the work up until that point. Thank you.
Just use the approximation $$ (1 + x)^p \approx 1 + px $$ for $x \ll 1$. From this $$ \sqrt{dx^2 + dy^2} - dx = dx\sqrt{1 + \left(\frac{\partial y}{\partial x} \right)^2} - dx \approx dx\left[ 1 + \frac{1}{2}\left(\frac{\partial y}{\partial x} \right)^2\right] - dx = \frac{1}{2}\left(\frac{\partial y}{\partial x} \right)^2dx $$
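As a numerical sketch of how good this approximation is for small slopes (the next Taylor term shows the error is of order $(\partial y/\partial x)^4$):

```python
def exact_dl(slope):
    # sqrt(1 + slope^2) - 1, the exact stretch per unit dx
    return (1 + slope ** 2) ** 0.5 - 1

def approx_dl(slope):
    # the Taylor approximation (1/2) * slope^2
    return 0.5 * slope ** 2

# the next Taylor term is -slope^4 / 8, so the error stays below slope^4
for s in (0.2, 0.1, 0.05, 0.01):
    assert abs(exact_dl(s) - approx_dl(s)) < s ** 4
```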
{ "language": "en", "url": "https://math.stackexchange.com/questions/1998367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the truth value of $(F \iff T) \iff (T \iff F)$ True? I think $(F \iff T) \iff (T \iff F)$ is true, but would like some confirmation.
Yes, it is indeed true. Remember that "$A\iff B$" is true if and only if $A$ and $B$ have the same truth value - both true, or both false. So both $F\iff T$ and $T\iff F$ are false, and so in turn "$(F\iff T)\iff (T\iff F)$" is true. Here's a fun fact (you can check it for a few small cases, and then prove it by induction on the complexity of the formula): suppose I have a formula built out of $\iff$s (like yours). Then it's true if and only if an even number of the components are "false" (note that zero is even); so e.g. $$(T\iff F)\iff ((F\iff T)\iff (T\iff F))$$ is false ($3$ Fs), but $$((T\iff F)\iff (F\iff F))\iff ((F\iff T)\iff (T\iff T))$$ is true ($4$ Fs).
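The fun fact can also be brute-force checked for small formulas; a sketch (using left-associated chains, which loses no generality since $\iff$ is associative as a truth function, so the bracketing does not matter):

```python
from functools import reduce
from itertools import product

def chain_iff(values):
    # (((v1 <-> v2) <-> v3) <-> ...); on booleans, p <-> q is just p == q
    return reduce(lambda p, q: p == q, values)

# the chain is true exactly when the number of False components is even
for n in range(1, 7):
    for vals in product([True, False], repeat=n):
        assert chain_iff(vals) == (vals.count(False) % 2 == 0)
```

For instance the question's formula corresponds to `chain_iff([False, True, True, False])`, which has two `False`s and is therefore `True`.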
{ "language": "en", "url": "https://math.stackexchange.com/questions/1998527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }