What is the value of $x$ in an arithmetic progression involving logarithms? $\log 2,\log 2^{x-1}$, and $\log 2^{x+3}$ are $3$ consecutive terms of an arithmetic progression; find (i) the value of $x$.
Using $\log(a^b)=b\log a$ and dividing by $\log2$ we see that $1,x-1,x+3$ must be an arithmetic progression. But it's clear that $x-5,x-1,x+3$ is an arithmetic progression. So $x\dots$
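Carrying the hint to its conclusion (comparing the two progressions gives $x-5=1$, i.e. $x=6$, a value the answer deliberately leaves to the reader), a quick numerical check confirms the three terms then form an arithmetic progression:

```python
import math

# Conclusion of the hint (not stated explicitly in the answer): x - 5 = 1, so x = 6.
x = 6
terms = [math.log(2), math.log(2 ** (x - 1)), math.log(2 ** (x + 3))]
diffs = [terms[1] - terms[0], terms[2] - terms[1]]

# Both gaps equal 4 log 2, so the terms are consecutive members of an AP.
assert abs(diffs[0] - diffs[1]) < 1e-12
print(diffs[0] / math.log(2))  # common difference measured in units of log 2
```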
{ "language": "en", "url": "https://math.stackexchange.com/questions/131185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
graph theory connectivity This cut induced confuses me... I don't really understand what it is saying. I am not understanding what connectivity is in graph theory. I thought connectivity is when you have a tree, because all the vertices are connected, but the above mentions something weird like components. Could someone please explain what they are and what connectivity really is?
Yes, a tree is, in particular, a connected graph - one in which every pair of vertices can be connected by exactly one simple path (i.e. a path with no repeated vertices). A connected graph is something more general - it is simply one in which every pair of vertices can be connected by at least one (simple) path.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Probability theory. 100 items and 3 controllers 100 items are checked by 3 controllers. What is the probability that each of them will check more than 25 items? Here is the full quotation of the problem from the workbook: "A set of 100 articles is randomly allocated for testing between the three controllers. Find the probability that each controller gets to test at least 25 articles."
My try: $$\mathbb{P}(N_1\geq 25, N_2\geq 25, N_3\geq 25)=\frac{1}{3^{100}}\sum_{N_1\geq 25, N_2\geq 25, N_3\geq 25|N_1+N_2+N_3=100}\binom {100}{N_1,N_2,N_3}=$$ $$\frac{100!}{3^{100}}\sum_{N_1}\sum_{N_2}\sum_{N_3}\frac{1}{N_1!N_2!N_3!}=\frac{100!}{3^{100}}\sum_{N_1}\sum_{N_2}\frac{1}{N_1!N_2!(100-N_1-N_2)!}=$$ $$\frac{100!}{3^{100}}\sum_{N_1=25}^{50}\sum_{N_2=25}^{50-(N_1-25)}\frac{1}{N_1!N_2!(100-N_1-N_2)!}$$ You also may be interested in this and this
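The triple sum is finite, so as a sanity check it can be evaluated exactly in integer arithmetic:

```python
from math import factorial
from fractions import Fraction

# P(N1 >= 25, N2 >= 25, N3 >= 25) for (N1, N2, N3) ~ Multinomial(100; 1/3, 1/3, 1/3).
count = 0
for n1 in range(25, 51):            # n1 <= 50, otherwise n2 + n3 < 50
    for n2 in range(25, 76 - n1):   # n3 = 100 - n1 - n2 must also be >= 25
        n3 = 100 - n1 - n2
        count += factorial(100) // (factorial(n1) * factorial(n2) * factorial(n3))

prob = Fraction(count, 3 ** 100)
print(float(prob))  # the exact probability, printed as a float
```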
{ "language": "en", "url": "https://math.stackexchange.com/questions/131301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Currying for dependent functions Currying and uncurrying is defined between functions in $Z^{X \times Y}$ (the first set) and $\left( Z^Y \right)^X$ (the second set). But what if $Y$ is not a constant but is dependent on $X$? The first set would become $Z^{\sum_{i\in X}Y_i}$. What would be a proper expression for the second set in the case of dependent $Y$?
It seems that I found a solution myself: the first set: $Z^{\sum_{i \in X} Y_i}$ the second set: $\prod_{i \in X} Z^{Y_i}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/131358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $2^{2n} = O(2^n)$? Is $2^{2n} = O(2^n)$? My solution is: $2^n 2^n \leq C_{1}2^n$ $2^n \leq C_{1}$, TRUE. Is this correct?
$x^n=o(y^n)$ iff $x\lt y$, as $(\frac{x}{y})^n\rightarrow 0$. Here $2^{2n}=4^n$, so we are comparing $x=4$ with $y=2$, and $4\gt 2$: the answer is no, $2^{2n}$ is not $O(2^n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the expected value of a dice sum If a fair dodecahedron is rolled until a number at least $k$ ($k$ fixed between 2 and 12) is obtained, and $X$ is the sum of all the numbers that appear up to and including the last roll, what is $E(X)$?
Let $n=12$ denote the number of faces. If the first roll is $i\geqslant k$, $X=i$. If the first roll is $i\lt k$, $X=i+X'$ where $X'$ is distributed like $X$. Hence, $$ \mathrm E(X)=\frac1n\sum_{i\geqslant k}i+\frac1n\sum_{i\lt k}\left(i+\mathrm E(X)\right)=\frac1n\sum_{i=1}^ni+\frac1n\mathrm E(X)\sum_{i=1}^{k-1}1, $$ that is, $$ n\mathrm E(X)=\frac{n(n+1)}2+(k-1)\mathrm E(X), $$ hence $$ \mathrm E(X)=\frac{n(n+1)}{2(n-k+1)}=\frac{78}{13-k}. $$
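A quick Monte Carlo simulation (a sanity check, with $n=12$ as in the answer) agrees with the closed form $E(X)=78/(13-k)$:

```python
import random

def expected_sum(k, n=12, trials=100_000, seed=1):
    # Roll a fair n-sided die until a value >= k appears; X is the sum of all rolls.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s = 0
        while True:
            r = rng.randint(1, n)
            s += r
            if r >= k:
                break
        total += s
    return total / trials

for k in (2, 7, 12):
    print(k, expected_sum(k), 78 / (13 - k))  # simulated vs exact
```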
{ "language": "en", "url": "https://math.stackexchange.com/questions/131472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A condition in the definition of geometric quotient I am reading the first several pages of GIT by Mumford, and I need some clarification of one requirement in the definition of geometric quotient (c.f. Definition 0.4, GIT): Suppose a group scheme $G/S$ acts on scheme $X/S$ by $\sigma$, where $G,X$ are schemes over $S$. If a pair $(Y, \phi)$ consisting of a scheme $Y$ over $S$ and an $S-$morphism $\phi: X \to Y$ is a geometric quotient, then one requirement is " the fundamental sheaf $\mathcal{O}_Y$ is the subsheaf of $\phi_*(\mathcal{O}_X)$ consisting of invariant functions" i.e. If $f \in \Gamma(U,\phi_*(\mathcal{O}_X))= \Gamma(\phi^{-1}(U),\mathcal{O}_X)$, then $f \in \Gamma(U, \mathcal{O}_Y)$ if and only if: $$\begin{matrix} G \times \phi^{-1}(U)&\stackrel{\sigma}{\longrightarrow}&\phi^{-1}(U)\\ \downarrow{p_2}&&\downarrow{F}\\ \phi^{-1}(U)&\stackrel{F}{\longrightarrow}&\mathbb{A}^1 \end{matrix} $$ commutes (where $F$ is the morphism defined by $f$, and $\mathbb{A}^1 = \operatorname{Spec}(\mathbb{Z}[x])$) My questions is how to make sense of this $F$ ?
A regular function $f \in \mathcal O_X(U)$ defines a morphism $F: U \rightarrow \mathbb A^1$. In the classical case of a variety over an algebraically closed field, the affine line is identified with the base field $k$ this map is just evaluation. More generally, if $f \in \mathcal O_X(U)$ is an element of the structure sheaf over an open affine subset $U$, then there is a natural ring homomorphism $\mathbb Z[X] \rightarrow \mathcal O_X(U)$ mapping $X$ to $f$. Taking Spec gives your morphism $F: U \rightarrow \mathbb A^1$. In your case your regular function is $f \circ \phi$ defined on $\phi^{-1}U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing $2x=\left( 2n+1\right) \pi \left( 1-\cos x\right) $ has $2n+3$ roots when $n\in \mathbb{Z}_+$ I am struggling to show that the equation $$2x=\left( 2n+1\right) \pi \left( 1-\cos x\right) $$ where $n$ is a positive integer, has $2n+3$ roots and no more, and also whether it is possible to indicate their locations (even roughly). I am unsure how to proceed with this one due to the presence of both $\cos x$ and $2x$, and would be grateful if someone could give me a hint. Cheers
Hint: Substituting $x=2u$ and using $1-\cos 2u=2\sin^{2}u$, the equation transforms to $u=(2n+1)\frac{\pi}{2}\sin^{2}u$, which has the same number of roots. It is easy to see that there are no roots for $u\lt 0$ or $u>(2n+1)\frac{\pi}2$. Also, $(2n+1)\frac{\pi}{2}\sin^{2}u=0$ at $u= k\pi$.
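Numerically, one can count the roots of the transformed equation $u=(2n+1)\frac{\pi}{2}\sin^2 u$ (equivalent to the original via $x=2u$) by looking for sign changes on a fine grid; the count matches $2n+3$ for small $n$:

```python
import numpy as np

def count_roots(n, pts=200_001):
    # Roots of u = (2n+1)(pi/2) sin^2(u), all of which lie in [0, (2n+1) pi/2];
    # count transversal crossings via sign changes of the difference on a fine grid.
    c = (2 * n + 1) * np.pi / 2
    u = np.linspace(-1.0, c + 1.0, pts)
    g = c * np.sin(u) ** 2 - u
    return int(np.count_nonzero(np.sign(g[:-1]) * np.sign(g[1:]) < 0))

for n in (1, 2, 3):
    print(n, count_roots(n), 2 * n + 3)  # counted roots vs the predicted 2n+3
```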
{ "language": "en", "url": "https://math.stackexchange.com/questions/131611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Positive series problem: $\sum\limits_{n\geq1}a_n=+\infty$ implies $\sum_{n\geq1}\frac{a_n}{1+a_n}=+\infty$ Let $\sum\limits_{n\geq1}a_n$ be a positive series, and $\sum\limits_{n\geq1}a_n=+\infty$, prove that: $$\sum_{n\geq1}\frac{a_n}{1+a_n}=+\infty.$$
Suppose $\sum{a_n\over 1+{a_n}}$ converges. Then $\frac{a_n}{1+a_n}\to 0$. It is easy to see that: $$\lim_{n\to\infty}a_n=0\iff \lim_{n\to\infty}\frac{a_n}{1+a_n}=0.$$ (let $b_n=\frac{a_n}{1+a_n}$; then $a_n=\frac{b_n}{1-b_n}$) So we have $$\lim_{n\to\infty}\frac{\frac{a_n}{1+a_n}}{a_n}=1,$$ and by the limit comparison test for positive series, the series $\sum a_n$ converges, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 5 }
Surface Element in Spherical Coordinates In spherical polars, $$x=r\cos(\phi)\sin(\theta)$$ $$y=r\sin(\phi)\sin(\theta)$$ $$z=r\cos(\theta)$$ I want to work out an integral over the surface of a sphere, i.e. $r$ constant. I'm able to derive through scale factors, i.e. $\delta(s)^2=h_1^2\delta(\theta)^2+h_2^2\delta(\phi)^2$ (note $\delta(r)=0$), that: $$h_1=r,\qquad h_2=r\sin(\theta)$$ $$dA=h_1h_2\,d\theta\, d\phi=r^2\sin(\theta)\,d\theta\, d\phi$$ I'm just wondering is there an "easier" way to do this (eg. Jacobian determinant when I'm varying all 3 variables). I know you can supposedly visualize a change of area on the surface of the sphere, but I'm not particularly good at doing that sadly.
There is yet another way to look at it using the notion of the solid angle. Then the area element has a particularly simple form: $$dA=r^2d\Omega$$ where $d\Omega=\sin(\theta)\,d\theta\,d\phi$ is the solid-angle element.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 6, "answer_id": 1 }
Deriving the exponential distribution from a shift property of its expectation (equivalent to memorylessness). Suppose $X$ is a continuous, nonnegative random variable with distribution function $F$ and probability density function $f$. If for $a>0,\ E(X|X>a)=a+E(X)$, find the distribution $F$ of $X$.
Hopefully there is a more elegant solution, but let us say $\mu=\mathbb{E}[X]$, and start with the definition of conditional expectation: $$ \eqalign{ \mathbb{E}[X|X>a] &=& \int_0^{\infty}x\,f_{X|X>a}(x)\,dx \\ \mu + a &=& \int_{a}^{\infty}x\,\frac{f(x)}{1-F(a)}\,dx \\ \left(\mu+a\right)\,\left(1-F(a)\right) &=& \int_{a}^{\infty}x\,f(x)\,dx \,. } $$ Differentiating with respect to $a$, we find $$ \eqalign{ 1-F(a)-\left(\mu+a\right)f(a) &=& -a\,f(a) \\\\ 1-F(a)-\mu f(a) &=& 0 \\\\ F(a) + \mu F\,'(a) &=& 1 } $$ which is an ordinary differential equation, solvable by standard methods, e.g., by multiplying by the integrating factor: $$ \eqalign{ F(x) + \mu F\,'(x) &=& 1 \qquad\text{for}\qquad x\ge0 \\\\ F\,e^{x/\mu} + \mu F\,'\,e^{x/\mu} &=& e^{x/\mu} \\\\ \left( \mu\,F\,e^{x/\mu} \right)' &=& e^{x/\mu} \\\\ \mu\,F(x)\,e^{x/\mu} &=& \int e^{x/\mu} dx = \mu \, e^{x/\mu} + c \\\\ \mu\,F(x) &=& \mu + c \, e^{-x/\mu} } $$ At $x=0$, since $X$ is continuous and nonnegative, it must be the case that $F(0)=0$, from which it follows that $c=\mu F(0)-\mu=-\mu$, giving us the CDF $$ F(x) = 1 - e^{-x/\mu} = 1 - e^{-\lambda x} $$ and the exponential density $$ f(x) = \frac1\mu\,e^{-x/\mu} = \lambda \, e^{-\lambda x} $$ where the mean parameter $\mu$ and (decay) rate parameter $\lambda$ are reciprocally related, i.e., $\lambda\mu=1$. EDIT: There is indeed now a more elegant solution, thanks to Didier.
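As a quick sanity check of the conclusion: for $X\sim\text{Exp}(\lambda)$ the conditional mean $E[X\mid X>a]$ should come out to $a+1/\lambda$ (the values of $\lambda$ and $a$ below are arbitrary choices):

```python
import random

# Check E[X | X > a] = a + E[X] for an exponential law by simulation.
rng = random.Random(0)
lam, a, trials = 0.5, 2.0, 500_000
samples = [rng.expovariate(lam) for _ in range(trials)]
tail = [x for x in samples if x > a]
cond_mean = sum(tail) / len(tail)
print(cond_mean, a + 1 / lam)  # simulated conditional mean vs a + 1/lambda
```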
{ "language": "en", "url": "https://math.stackexchange.com/questions/131807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Greatest Common Divisor implies Least Common Multiple? Let $a$ and $b$ be elements of an integral domain with unity $1\neq 0$. Show that $a$ and $b$ have a least common multiple if $a$ and $b$ have a highest common factor. More generally there is a problem of showing that if any finite non-empty non-zero subset of the ring has a highest common factor, then any finite non-empty non-zero subset of the ring has a least common multiple. (Actually the converse of the preceding sentence is also true.)
It's true that $\rm\,lcm(a,b)\:$ exists $\Rightarrow$ $\rm\:gcd(a,b)\:$ exists - see the Theorem below. But the converse fails, e.g. as here, in $\Bbb Q[x^2,x^3]$ we have $\,\gcd(x^2,x^3)=1\,$ but ${\rm lcm}(x^2,x^3)$ does not exist. Or, for simple well-known counterexamples in quadratic number fields see the paper linked below. Theorem $\rm\;\; (a,b) = ab/[a,b] \;\;$ if $\;\rm\ [a,b] \;$ exists. Proof: $\rm\quad\quad d\:|\:a,b \;\iff\; a,b\:|\:ab/d \;\iff\; [a,b]\:|\:ab/d \;\iff\; d\:|\:ab/[a,b] \quad\;\;$ QED For further discussion see this post and see also Khurana, On GCD and LCM domains, and for basic properties of gcd and lcm (in cancellative commutative monoids) see also Section 1.6, Factorization in Commutative Monoids, in Robert Gilmer, Commutative Semigroup Rings, 1984.
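In $\Bbb Z$, where gcd and lcm both always exist, the Theorem's identity $(a,b)=ab/[a,b]$ can be spot-checked mechanically (the lcm below is found by brute force, so the check is not circular):

```python
from math import gcd

def naive_lcm(a, b):
    # smallest positive common multiple, by brute force (independent of gcd)
    k = 1
    while (a * k) % b:
        k += 1
    return a * k

# verify (a,b) = ab/[a,b] on a small range
for a in range(1, 60):
    for b in range(1, 60):
        assert gcd(a, b) == a * b // naive_lcm(a, b)
print("(a,b) = ab/[a,b] holds for all 1 <= a, b < 60")
```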
{ "language": "en", "url": "https://math.stackexchange.com/questions/131867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
When to make a substitution in ODE The setting is on evolving hypersurfaces. So for each time $t$, $\Gamma(t)$ is a hypersurface given by the zero level set of the function $\phi(x,t)$. Consider a ball; then the hypersurface has $\phi(x,t) = x_1^2 + ... + x_n^2 - R(t)^2$ as the level set. So for each time $t$, $\Gamma(t)$ is a ball of radius $R(t)$. Associated with each surface $\Gamma(t)$ is a normal velocity $V(x,t)$ and a mean curvature $H(x,t)$. Suppose the surface evolves by the rule $V = -H$. After some calculations, let's say that $V = \dot{R}$ and $H = -\sqrt{x_1^2 + ... + x_n^2}$. * One can then solve this ODE $V=-H$ to get $R(t)$. * But because the mean curvature $H$ is defined only on points on the surface, can't we rewrite $H = -\sqrt{x_1^2 + ... + x_n^2}$ as $H = -R$? Because if $x_1^2 + ... + x_n^2 - R^2(t) \neq 0$, then $H(x,t)$ is meaningless. But if I make this substitution and then solve the ODE, I get a different answer, obviously. What's the right thing to do?
Considering that * $H$ is a function defined on a time-dependent manifold, and * $R$ is a function of time, it is only possible to express $H$ as a function of $R$ if every manifold $\Gamma(t)$ has constant mean curvature. But this is exactly the case for spheres. The conclusion is: yes, it is possible to write $H(t)=1/R(t)$. One should keep in mind that this single-variable function $H$ is not the same as $H(x,t)$, but they are related by $$H(x,t)=H(t) \quad \text{when } \ x\in \Gamma(t)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/132077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to expand a non-differentiable function as a power series? If a function has derivatives of all orders at $0$ and $\lim_{n\to \infty}(f(x)-\sum_{i=0}^{n} a_{i}x^i)=0$ for every $x \in (-r,r)$, then it can be expanded as the power series $\sum a_{n}x^n$. My question is: if this function is not differentiable at $0$, how can it be expanded as $\sum a_{n}x^n$ satisfying $\lim_{n\to \infty}(f(x)-\sum_{i=0}^{n} a_{i}x^i)=0$ for every $x \in (-r,r)$? Is such an expansion unique?
If $f$ is continuous in a neighborhood, then you can use the Stone-Weierstrass theorem and write it as a sum of powers $x^n$, but you lose the nice interpretation of a Taylor series. Also if $f$ is only locally integrable, then you can do the same thing. But note that you always will have to change the notion of convergence, e.g. to uniform or $L^p$-convergence. As pointed out in the comments, this does not really answer your question, since you require the coefficients not to change (I overlooked that on first reading), but perhaps it will be sufficient for whatever application you have in mind.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Continuous Function Greater than 0 Let $f$ be continuous on $[a, b]$. Suppose $f(x) > 0$ for all $x \in [a, b]$. I'm trying to show that there exists an $\alpha > 0$ such that $f(x) > \alpha$ for all $x \in [a, b]$. I tried to prove this by contradiction. Assume that for every $\alpha > 0$, there exists an $x \in [a, b]$ such that $f(x) \leq \alpha$. Then I let $\alpha_n = \frac{1}{n} > 0$. Then there exists an $x_n \in [a, b]$ such that $f(x_n) \leq \alpha_n$. But note that $\alpha_n \to 0$ as $n \to \infty$. This implies that there is an $x_n$ such that $f(x_n) \leq 0$, which is a contradiction. Could someone give me feedback on my proof?
It's not correct. Where do you get this $x_n$? But, by the compactness of $[a,b]$, your argument can be salvaged. There is a subsequence of $(x_n)$ that converges to an $x\in[a,b]$; and by the continuity of $f$, we would have $f(x)\le0$. Or, arguing directly, you could consider the minimum value of $f$ on $[a,b]$ (which exists, since $[a,b]$ is closed and bounded).
{ "language": "en", "url": "https://math.stackexchange.com/questions/132206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Given $\sum |a_n|^2$ converges and $a_n \neq -1$, show that $\prod (1+a_n)$ converges to a non-zero limit implies $\sum a_n$ converges. I have been working on this problem for a while and cannot seem to make any progress without coming up with something wrong or hitting a dead end. Here is what I have so far: $ \prod (1+a_n) \lt \infty \implies \sum a_n \lt \infty $: Similarly we ignore finitely many terms until $|a_n| \leq 1/2$ and we use the Taylor series for the product. We have that $\prod (1+ a_n)$ converging implies that $\sum \log (1+a_n)$ converges to a nonzero limit since none of the factors are 0 as $a_n \neq -1$. We have that \begin{eqnarray} |\sum \log(1+a_n) | =| \sum (a_n-a_n^2/2 +\ldots) | \\ \geq | \sum (a_n-|a_n^2/2 +\ldots|) | \geq \left| \sum a_n-|a_n|^2-|a_n|^3-\ldots \right| \\ = \left|\sum a_n-|a_n|^2(1+|a_n|+|a_n|^2+|a_n|^3+\ldots \right| \end{eqnarray} by the triangle inequality. Thus $\infty > |\sum \log(1+a_n) | \geq \left|\sum a_n-2|a_n|^2 \right|$ from the previous part. Thus $\left|\sum a_n-2|a_n|^2 \right|$ is convergent, and since $\sum|a_n|^2$ is absolutely convergent we can split the series (I don't really know if this is even true) and we have that the partial sums $|\sum a_n|$ are bounded. Any help would be appreciated! Also any good references for getting better at this kind of stuff would be great!!
For every $|a|\lt\frac12$, $0\lt a-\log(1+a)\lt a^2$. Since $\sum\limits_n|a_n|^2$ converges, $|a_n|\lt\frac12$ for every $n$ large enough. Hence $\sum\limits_n\left(a_n-\log(1+a_n)\right)$ converges absolutely as soon as every $\log(1+a_n)$ exists. This implies that $\sum\limits_na_n$ and $\sum\limits_n\log(1+a_n)$ both converge or both diverge. In particular, if $\sum\limits_n\log(1+a_n)$ converges, then $\sum\limits_na_n$ converges.
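The key inequality can be sanity-checked numerically on a grid covering $0<|a|<\frac12$:

```python
import math

# Grid check of 0 < a - log(1+a) < a^2 for 0 < |a| < 1/2.
for i in range(-499, 500):
    a = i / 1000
    if a == 0:
        continue  # equality holds at a = 0
    d = a - math.log1p(a)  # log1p(a) = log(1+a), accurate for small a
    assert 0 < d < a * a, a
print("inequality verified on the sampled grid")
```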
{ "language": "en", "url": "https://math.stackexchange.com/questions/132242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Taylor series of an entire function which is not a polynomial I have an entire function which is not a polynomial. Is there a way to use the Casorati-Weierstrass theorem to prove there exists a point $z_0$ such that every coefficient of the Taylor series at $z_0$ is not zero?
The set of points where the $n$-th Taylor coefficient is zero is the set $D_n = \{w \in \mathbb{C}\,:\,f^{(n)}(w) = 0\}$. This is a closed discrete set, hence it is countable (because if it were not discrete the identity theorem would imply that $f^{(n)} \equiv 0$, hence $f$ would be a polynomial of degree at most $n-1$). Thus, the set of points where at least one Taylor coefficient is zero is the countable set $D = \bigcup_{n=0}^\infty D_n$. Since $\mathbb{C}$ is uncountable, $\mathbb{C} \smallsetminus D$ is non-empty.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Question about immersions of $\mathbb{R}P^n$ into $\mathbb{R}^{n+1}$ I am currently reading a paper which takes for granted the following geometric fact: if $\mathbb{R}P^n$ can be immersed in $\mathbb{R}^{n+1}$ then for some $k$, $n=2^k-1$ or $n=2^k-2$. My initial thought was that this has something to do with Stiefel-Whitney numbers, but I can't see how that would work. Thoughts?
If $i:\mathbb P^n\hookrightarrow \mathbb R^{n+1}$ is an immersion, you have the relation on $\mathbb P^n$, involving the normal line bundle $N$: $$ i^* T_{\mathbb R^{n+1}}=T_{\mathbb P^n}\oplus N $$ From this you deduce for the total Stiefel-Whitney classes $$1=w(T_{\mathbb P^n})\cdot w(N)\in H^*(\mathbb P^n,\mathbb F_2) \quad (*)$$ Since $N$ is a line bundle, we have $w_i(N)=0$ for $i\gt 1$ and a little calculation then shows that this is only possible if for some $k$ we have $n=2^k-1$ or $n=2^k-2$. Edit Since I have some free time now, here is the "little calculation": Recall that $H^*(\mathbb P^n,\mathbb F_2)=\mathbb F_2[H]/\langle H^{n+1}\rangle= \mathbb F_2[h]$ and that $w(T_{\mathbb P^n})=(1+h)^{n+1}$. Since $N$ has rank $1$, its total Stiefel-Whitney class is $w(N)=1$ or $w(N)=1+h$ , so we dichotomize: First case: $ w(N)=1$ Then from $(*)$ we get $w(T_{\mathbb P^n})=(1+h)^{n+1}=1$, hence (from arithmetic modulo $2$ : see below) $n+1=2^k$ Second case: $ w(N)=1+h$ Then from $(*)$ we get $w(T_{\mathbb P^n})=(1+h)^{n+1}\cdot (1+h)=(1+h)^{n+2}=1 $ and again from arithmetic modulo 2 we get $n+2=2^k$. Arithmetic modulo 2 fact: If $N=2^rs$ with $s$ odd then $$(1+h)^{N}=(1+h^{2^r})^s=1\in \mathbb F_2[h] \iff 2^r\geq n+1$$ This follows from the binomial formula so dear to Professor Moriarty.
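The "little calculation" can be automated: in $\mathbb F_2[h]/\langle h^{n+1}\rangle$, $(1+h)^N=1$ exactly when every binomial coefficient $\binom{N}{i}$ with $1\le i\le n$ is even, and scanning small $n$ recovers exactly the values $2^k-1$ and $2^k-2$:

```python
from math import comb

def is_one(N, n):
    # (1+h)^N = 1 in F_2[h]/(h^(n+1)) iff binom(N, i) is even for 1 <= i <= n
    return all(comb(N, i) % 2 == 0 for i in range(1, n + 1))

# n survives the obstruction if either case of the dichotomy succeeds:
# w(N) = 1 needs (1+h)^(n+1) = 1; w(N) = 1+h needs (1+h)^(n+2) = 1.
candidates = [n for n in range(1, 20) if is_one(n + 1, n) or is_one(n + 2, n)]
print(candidates)  # exactly the n of the form 2^k - 1 or 2^k - 2
```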
{ "language": "en", "url": "https://math.stackexchange.com/questions/132376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What does it mean for two random variables to have bivariate normal distribution? The following is Sheldon Ross's definition: We say that the random variables $X,Y$ have a bivariate normal distribution if, for some constants $\mu_x,\mu_y,\sigma_x>0,\sigma_y>0, -1\lt\rho \lt 1$, their joint density function is given, for all $-\infty \lt x,y \lt \infty$, by $$f(x,y)=\frac{\exp\left(-\frac1{2(1-\rho^2)}\left(\left(\frac{x-\mu_x}{\sigma_x}\right)^2+\left(\frac{y-\mu_y}{\sigma_y}\right)^2-2\rho\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}\right)\right)}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}$$ Is there a combinatorial/intuitive meaning of this definition?
I don't have a combinatorial meaning, but you can think of it as follows. $(X,Y)$ is the result of applying an affine transformation to a pair $(W,Z)$ of independent standard normal random variables. Many such transformations exist, and one in particular is $$\begin{align*} X &= \mu_x + \sigma_x W\\ Y &= \mu_y + \rho \sigma_y W + \sqrt{1-\rho^2} \sigma_y Z \end{align*}$$ See for example this set of slides. The contours of the joint density (points at equal height above the $x$-$y$ plane) are ellipses centered at $(\mu_x,\mu_y)$.
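Simulating that particular affine transformation shows the sample correlation of $(X,Y)$ landing at $\rho$ (the parameter values below are arbitrary choices):

```python
import random, math

# Sample (X, Y) via the affine construction and check corr(X, Y) comes out to rho.
rng = random.Random(42)
mu_x, mu_y, s_x, s_y, rho = 1.0, -2.0, 2.0, 0.5, 0.7
n = 200_000
xs, ys = [], []
for _ in range(n):
    w, z = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(mu_x + s_x * w)
    ys.append(mu_y + rho * s_y * w + math.sqrt(1 - rho ** 2) * s_y * z)

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
corr = cov / math.sqrt(vx * vy)
print(corr)  # close to rho = 0.7
```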
{ "language": "en", "url": "https://math.stackexchange.com/questions/132434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
solving matrix equation I want to solve the following matrix equation. Could anyone give me a hand? Thanks. Given an $n \times n$ matrix $\mathbf A$ (diagonally dominant), I need to solve an $n \times n$ symmetric matrix $\mathbf X$ such that $\mathbf A\mathbf X+\mathbf X\mathbf A^\top =\mathbf I$, where $\mathbf I$ is an $n \times n$ identity matrix.
If you are just going to use it for purposes of computation etc. then it seems to me that the simplest way is to consider this as $A'X'=B'$, where $A'$ is $n^2 \times n^2$, $X'$ is $n^2 \times 1$, $B'$ is $n^2 \times 1$. $X'$ and $B'$ are basically 'flattened' versions of $X$ and $I$, respectively. If the general element of $X$ is $x_{ij}$ (indexing from $1$), then $x_{ij}$ becomes the $((i-1) \times n + j)$th element of $X'$. Similarly for getting $B'$ from $I$. The $((i-1) \times n + j), ((k-1) \times n + l)$th element of $A'$ is the coefficient with which $x_{kl}$ contributes to the $(i,j)$th entry in the expansion of the LHS here. So e.g. $a'_{11}$ is $2 \times a_{11}$.
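The flattening can be written compactly with Kronecker products: with column-stacking vec, $\mathrm{vec}(AX)=(I\otimes A)\,\mathrm{vec}(X)$ and $\mathrm{vec}(XA^\top)=(A\otimes I)\,\mathrm{vec}(X)$. A NumPy sketch (a dedicated Lyapunov/Sylvester solver would normally be used instead, since this costs $O(n^6)$):

```python
import numpy as np

# Solve A X + X A^T = I by vectorization (column-stacking vec).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)  # a diagonally dominant example

M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
vecX = np.linalg.solve(M, np.eye(n).flatten(order="F"))
X = vecX.reshape((n, n), order="F")

assert np.allclose(A @ X + X @ A.T, np.eye(n))  # solves the equation
assert np.allclose(X, X.T)                      # and is symmetric, as required
print(X)
```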
{ "language": "en", "url": "https://math.stackexchange.com/questions/132490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Two linearly independent eigenvectors with eigenvalue zero What is the only $2\times 2$ matrix that only has eigenvalue zero but does have two linearly independent eigenvectors? I know there is only one such matrix, but I'm not sure how to find it.
Let $A$ be any such matrix. Let $\beta=[\mathbf{v}_1,\mathbf{v}_2]$ be a basis made up of eigenvectors of $A$. If $P$ is the matrix that has $\beta$ in the columns, then $P^{-1}AP$ is diagonal, with the eigenvalues of $A$ in the diagonal. But such a matrix is $$\left(\begin{array}{cc} 0&0\\0&0\end{array}\right).$$ So $P^{-1}AP=0$. Multiplying on the left by $P$ and on the right by $P^{-1}$, we get $A = P0P^{-1} = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What does $2^x$ really mean when $x$ is not an integer? We all know that $2^5$ means $2\times 2\times 2\times 2\times 2 = 32$, but what does $2^\pi$ mean? How is it possible to calculate that without using a calculator? I am really curious about this, so please let me know what you think.
Euler's identity is another use of exponents outside of the integers, since complex numbers appear in the exponent there. Euler's formula explains how to evaluate such expressions, which does help in some cases to evaluate the function.
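The standard way to make sense of an irrational exponent is to define $2^x := e^{x\ln 2}$, which agrees with the limit of rational powers $2^3, 2^{3.1}, 2^{3.14},\dots$:

```python
import math

# 2^pi as a limit of truncated-decimal exponents, and via 2^x := exp(x ln 2).
approximations = [2 ** round(math.pi, d) for d in range(8)]  # 2^3, 2^3.1, 2^3.14, ...
print(approximations)
print(math.exp(math.pi * math.log(2)))  # agrees with 2**math.pi

assert approximations[0] == 8.0
assert abs(2 ** math.pi - math.exp(math.pi * math.log(2))) < 1e-9
```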
{ "language": "en", "url": "https://math.stackexchange.com/questions/132703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "218", "answer_count": 8, "answer_id": 3 }
Existence of a sequence. While reading about Weierstrass' Theorem and holomorphic functions, I came across a statement that said: "Let $U$ be any connected open subset of $\mathbb{C}$ and let $\{z_j\} \subset U$ be a sequence of points that has no accumulation point in $U$ but that accumulates at every boundary point of $U$." I was curious as to why such a sequence exists. How would I be able to construct such a sequence?
Let $\partial U$ be the boundary of $U$ in $\mathbb C$, which we assume to be non-empty. Let $T = \{t_0, t_1, \dotsc \}$ be a countable dense subset of $\partial U$, and let $v$ be a sequence with values in $T$ such that for all $i$ the sequence $v$ takes the value $t_i$ infinitely many times. Let $n$ be an integer. The ball $B(v_n, 2^{-n})$ meets $U$, since $\partial U$ is a subset of $\overline U$. Let $u_n$ be an element of $B(v_n, 2^{-n})\cap U$. The sequence $u$ we just defined satisfies your property. * For all $n$, the point $t_n$ is an accumulation point of $u$. Indeed, by the definition of $v$, there is an extraction (subsequence) $\phi$ such that $v_{\phi(k)} = t_n$ for all $k$. But then $|t_n - u_{\phi(k)}| \lt 2^{-\phi(k)}$, hence $u_{\phi(k)} \to t_n$. * The sequence $u$ has no accumulation points in $U$. Since the distance from $u_k$ to $\partial U$ tends to zero, all accumulation points are in $\partial U$, which is disjoint from $U$. * The set of accumulation points is a closed set which contains $T$, and which is disjoint from $U$; thus it is $\partial U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
give a counterexample of monoid If $G$ is a monoid, $e$ is its identity, if $ab=e$ and $ac=e$, can you give me a counterexample such that $b\neq c$? If not, please prove $b=c$. Thanks a lot.
Yes, there is a simpler example. Let $G$ be the free monoid over $A =\{a,b,c\}$ satisfying the relations $ab = ac =e$. Intuitively, $G$ is the set of all finite strings over $A$ that contain neither the subword $ab$ nor $ac$. Or, think of $b$ and $c$ as different but both of which cancel a single trailing $a$ when multiplied on the right. If you are concerned that we must show $b \neq c$, then do it this way instead. Forget the part about relations and just define $G$ as strings over $A=\{a,b,c\}$, that is: $G \subset A^*$, $G = \{x \in A^*:\text{neither }ab\text{ nor }ac\text{ is a subword of }x\}$ or $G = A^* - A^*abA^* - A^*acA^*$. Multiplication in $G$ is ordinary string concatenation, except for two special cases: $xa \cdot by = xy$ and $xa \cdot cy = xy$ for $x, y \in G$. You can easily verify that $G$ with this multiplication is a monoid and meets the criteria, and there is no reason to worry that $b=c$. In particular, the strings "$b$" and "$c$" are both in the free monoid $A^*$ and nothing in the subtraction of sets removes them, nor do any of the multiplication rules, so they are still in $G$. So therefore the monoid elements $b$ and $c$ in $G$ are distinct. addendum -- This $G$ is the "simplest" possible in the sense of being universal. That is, for any morphism $f$ and monoid $M$ satisfying the criteria $f(ab)=f(ac)=f(e)=e$, with $f:A^* \rightarrow M$, there is a morphism $g:G \rightarrow M$ with $g(x)=f(x) \text{ for all } x \in A$. $g$ simply acts properly on $A$ and then extends to $A^*$. That cannot be said of Barry's monoid of functions $S \rightarrow S$, because it has extra structure, namely $ba=ca=e$ and others. So even if you use just the closure of $\{a,b,c,e\}$ under function composition (rather than all functions $S \rightarrow S$), you'd have extra structure in $S$ versus the free case, which prevents it from being universal with respect to $G$.
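The normal-form multiplication is easy to implement (a sketch following the two special cases above, with the cancellation applied repeatedly at the junction), which makes the claims checkable:

```python
# Elements of G: strings over {a, b, c} with no "ab" or "ac" subword.
# Product: concatenate, cancelling a trailing 'a' against a leading 'b' or 'c'.
def mult(x: str, y: str) -> str:
    while x and y and x[-1] == "a" and y[0] in "bc":
        x, y = x[:-1], y[1:]
    return x + y

e = ""                          # identity: the empty string
assert mult("a", "b") == e      # ab = e
assert mult("a", "c") == e      # ac = e
assert mult("b", "a") == "ba"   # no relation fires: ba != e
assert mult("ca", "b") == "c"   # ca . b = c
assert "b" != "c"               # the two right inverses of a are distinct
print("monoid relations verified")
```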
{ "language": "en", "url": "https://math.stackexchange.com/questions/132787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
What is the best state-of-the-art numerical integral algorithm? I'm trying to implement a numerical integrator that should have the minimum relative error and not be slow. So I was looking for the best accepted state-of-the-art algorithm to do so, but there seem to be so many approaches that I could not understand which one I should choose. So I'm turning to you for a recommendation. Thank you for your attention.
As @J.M noted, there are many methods, each suited for a certain purpose. If you don't know what the function is in advance, then for low-dimensional ($d < 3$) integrals an adaptive Gauss–Kronrod quadrature rule is probably the fastest. In higher dimensions, you can really only use Monte Carlo methods. If you know a priori that the function has a very large range, you can use a weighted Monte Carlo approach, in which you select more points in the "large" regions than in the "smaller" ones.
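For the low-dimensional case, adaptivity is the point. A toy pure-Python adaptive Simpson integrator illustrates the idea (a sketch only; in practice one would reach for a library routine such as SciPy's QUADPACK-based `quad`, which implements adaptive Gauss–Kronrod):

```python
import math

def adaptive_simpson(f, a, b, eps=1e-10):
    """Recursively subdivide until the local Simpson error estimate is below eps."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    def recurse(lo, hi, whole, eps):
        mid = 0.5 * (lo + hi)
        left, right = simpson(lo, mid), simpson(mid, hi)
        if abs(left + right - whole) <= 15.0 * eps:  # classical Richardson estimate
            return left + right + (left + right - whole) / 15.0
        return recurse(lo, mid, left, eps / 2) + recurse(mid, hi, right, eps / 2)

    return recurse(a, b, simpson(a, b), eps)

print(adaptive_simpson(math.sin, 0.0, math.pi))  # close to the exact value 2
```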
{ "language": "en", "url": "https://math.stackexchange.com/questions/132920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Splitting field of $x^6+x^3+1$ over $\mathbb{Q}$ I am trying to find the splitting field of $x^6+x^3+1$ over $\mathbb{Q}$. Finding the roots of the polynomial is easy (substituting $x^3=t$, finding the two roots of the polynomial in $t$, and then taking a 3rd root of each one). The roots can be seen here [if there is a more elegant way of finding the roots it would be nice to hear]. Is it true that the splitting field is $\mathbb{Q}((-1)^\frac{1}{9})$? I think so from the way the roots look, but I am unsure. Also, I am having trouble finding the minimal polynomial of $(-1)^\frac{1}{9}$; it seems that it would be a polynomial of degree 9, but of course the degree can't be more than 6... can someone please help with this?
This polynomial is $\Phi_9(x)$, the ninth cyclotomic polynomial, whose roots are precisely the primitive ninth roots of unity. Writing $\zeta=e^{2\pi i/9}$, the roots are $\zeta^k$ for $k\in\{1,2,4,5,7,8\}$, so they all lie in $\mathbb{Q}(\zeta)$, which is therefore the splitting field; it has degree $\varphi(9)=6$ over $\mathbb{Q}$, with $\mathbb{Q}$-basis $\{1,\zeta,\zeta^2,\zeta^3,\zeta^4,\zeta^5\}$. So you have your splitting field.
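A quick numeric confirmation that the roots of $x^6+x^3+1$ are exactly the primitive ninth roots of unity:

```python
import cmath

def p(z):
    return z ** 6 + z ** 3 + 1

for k in range(1, 9):
    z = cmath.exp(2j * cmath.pi * k / 9)  # a ninth root of unity
    if k % 3 != 0:
        assert abs(p(z)) < 1e-9           # primitive: a root of Phi_9
    else:
        assert abs(p(z) - 3) < 1e-9       # k = 3, 6: z^3 = 1, so p(z) = 3
print("roots of x^6 + x^3 + 1 are the primitive 9th roots of unity")
```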
{ "language": "en", "url": "https://math.stackexchange.com/questions/133079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
rotating a matrix Given a rectangular matrix $A$, what is the general form to rotate the matrix about the center term, e.g. such that $$\pmatrix{a_{0,0} & a_{0,1} & a_{0,2} \\ a_{1,0} & a_{1,1} & a_{1,2} \\ a_{2,0} & a_{2,1} & a_{2,2}}\longrightarrow\pmatrix{a_{0,2} & a_{1,2} & a_{2,2} \\ a_{0,1} & a_{1,1} & a_{2,1} \\ a_{0,0} & a_{1,0} & a_{2,0}} $$ and possibly the reverse case as well.
$$\pmatrix{0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0}A^T = \pmatrix{a_{0,2} & a_{1,2} & a_{2,2} \\ a_{0,1} & a_{1,1} & a_{2,1} \\ a_{0,0} & a_{1,0} & a_{2,0}} =A_r $$ Edit: reverse is, $$A_r^T \pmatrix{0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0} = \pmatrix{a_{0,0} & a_{0,1} & a_{0,2} \\ a_{1,0} & a_{1,1} & a_{1,2} \\ a_{2,0} & a_{2,1} & a_{2,2}} = A $$
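In code, the formula $PA^T$ (antidiagonal permutation matrix times the transpose) is just "transpose, then reverse the row order". A small sketch with illustrative helper names:

```python
def rotate_ccw(A):
    """Rotate a list-of-rows matrix 90 degrees counterclockwise:
    the same map as multiplying A's transpose by the antidiagonal
    permutation matrix in the formula above."""
    return [list(col) for col in zip(*A)][::-1]

def rotate_cw(A):
    """The reverse rotation: reverse the rows, then transpose."""
    return [list(col) for col in zip(*A[::-1])]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

assert rotate_ccw(A) == [[3, 6, 9], [2, 5, 8], [1, 4, 7]]
assert rotate_cw(rotate_ccw(A)) == A
```

The first assertion matches the matrix on the right-hand side of the formula, and the second one checks the "reverse case".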
{ "language": "en", "url": "https://math.stackexchange.com/questions/133156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Explaining Horizontal Shifting and Scaling I always find myself wanting for a clear explanation (to a college algebra student) for the fact that horizontal transformations of graphs work in the opposite way that one might expect. For example, $f(x+1)$ is a horizontal shift to the left (a shift toward the negative side of the $x$-axis), whereas a cursory glance would cause one to suspect that adding a positive amount should shift in the positive direction. Similarly, $f(2x)$ causes the graph to shrink horizontally, not expand. I generally explain this by saying $x$ is getting a "head start". For example, suppose $f(x)$ has a root at $x = 5$. The graph of $f(x+1)$ is getting a unit for free, and so we only need $x = 4$ to get the same output before as before (i.e. a root). Thus, the root that used to be at $x=5$ is now at $x=4$, which is a shift to the left. My explanation seems to help some students and mystify others. I was hoping someone else in the community had an enlightening way to explain these phenomena. Again, I emphasize that the purpose is to strengthen the student's intuition; a rigorous algebraic approach is not what I'm looking for.
The map $f$ already assigns $\color{Purple}{\sigma x\mapsto f(\sigma x)}$. In order to construct the assignment $\color{DarkBlue}x\mapsto \color{DarkOrange}{f(\sigma x)}$, we must apply the inverse $\sigma^{-1}$ in order to put $f(\sigma x)$ (originally located at $\sigma x$) "back" to $x$, that is $$(\sigma^{-1},\,\mathrm{Id}):\big(\sigma x,f(\sigma x)\big)\mapsto \big(x,f(\sigma x)\big).$$ The above is, of course, too algebraic. So here's a colorful depiction of what's going on: [figure: the graph of $f$, with the value above $\sigma x$ pulled back over $x$] As we can see, "putting it back over the $x$-value" does the inverse of the transform to the actual graph (squiggly line) of $f$.
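The "head start" framing from the question can also be checked numerically; here is a tiny sketch with a hypothetical $f$ having a root at $x=5$:

```python
f = lambda x: x - 5          # a function with a root at x = 5
g = lambda x: f(x + 1)       # the graph of f(x + 1)

assert f(5) == 0             # the original root
assert g(4) == 0             # the root has moved LEFT, to x = 4

h = lambda x: f(2 * x)       # horizontal scaling f(2x)
assert h(2.5) == 0           # the root at 5 is squeezed to 5/2
```

The input $x=4$ plus the "free" unit already reaches $5$, so the feature sits one unit to the left; likewise $x=5/2$ doubled reaches $5$, so the graph shrinks.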
{ "language": "en", "url": "https://math.stackexchange.com/questions/133185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 18, "answer_id": 7 }
Is $A_n$ characteristic in $S_n$? The title is the question. Is $A_n$ characteristic in $S_n$? If $\phi \in \operatorname{Aut}(S_n)$, Then $[S_n : \phi(A_n)]$ (The index of $\phi(A_n)$) is 2. Maybe the only subgroup of $S_n$ of index 2 is $A_n$? Thanks in advance.
Let $H$ be a subgroup of $S_n$. Then either $H \subseteq A_n$ or $[H:H\cap A_n]=2$ (that is either $H$ is all even or half-even and half-odd). This fact can be proven by noticing that left multiplication by an odd permutation (if there is one in $H$) sends evens to odds and odds to evens bijectively (so there must be an equal number of both evens and odds if there are any odd elements to begin with). Therefore, the only subgroup of index $2$ in $S_n$ is $A_n$ [If $H$ is all even and index $2$, it must be all of $A_n$. If $H$ is half-even and half-odd, then $A_n$ has a normal subgroup: $H \cap A_n$ of index 2 in $A_n$, but no such subgroup exists by inspection for $n=1,2,3,4$ and simplicity of $A_n$ for $n \geq 5$]. Thus $A_n$ is characteristic (being the unique subgroup of $S_n$ of order $n!/2$ it must be sent to itself by any automorphism).
{ "language": "en", "url": "https://math.stackexchange.com/questions/133233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Rank of projective module We define the rank of free module as the number of elements on the basis of free module. It may be infinity. How do we define the rank of projective module?
The rank of a projective module $M$ over $R$ is the function $\mathrm{rk} : \mathrm{Spec}(R) \to \mathrm{Card}$, $\mathfrak{p} \mapsto \mathrm{dim}_{\mathrm{Quot}(R/\mathfrak{p})}(M \otimes_R \mathrm{Quot}(R/\mathfrak{p}))$. This is the dimension of the fiber of $\tilde{M}$ at $\mathfrak{p}$. One can show that if $M$ is finitely generated, then this rank function is locally constant (without any finiteness condition this may fail). In fact, then $\tilde{M}$ is locally free of finite rank. In particular, if $\mathrm{Spec}(R)$ is connected ($\Leftrightarrow$ $R$ has only the trivial idempotents $0,1$), this function is constant. Then you have just one rank $\mathrm{rk}(M)\in \mathbb{N}$. In particular, when $R$ is an integral domain, we have $\mathrm{rk}(M) = \mathrm{dim}_K(M \otimes_R K)$, where $K=\mathrm{Quot}(R)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Why this element in this tensor product is not zero? $R=k[[x,y]]/(xy)$, $k$ a field. This ring is local with maximal ideal $m=(x,y)R$. Then the book proves that $x\otimes y\in m\otimes m$ is not zero, but I don't understand what's going on, if the tensor product is $R$-linear, then $x\otimes y=1\otimes xy=1\otimes 0=0$, where is the mistake? And also the book proves that this element is torsion: $(x+y)(x\otimes y)=(x+y)x\otimes y=(x+y)\otimes(xy)=(x+y)\otimes0=0$ why $(x+y)x\otimes y=(x+y)\otimes(xy)$?
For your first question, I suppose your tensor product is over $R$. It is enough to show that $x\otimes y$ is non-zero in $(m/m^2)\otimes_R (m/m^2)$. As both sides in the latter tensor product are $k$-vector spaces, this tensor product is identified with the tensor product over $k$. Now $v_1\otimes v_2\ne 0$ in a tensor product of vector spaces $V_1\otimes_k V_2$ if $v_1, v_2\ne 0$ (complete them to respective bases $\{e_i\}_i, \{f_j\}_j$ of $V_1, V_2$, and use the fact that the $e_i\otimes f_j$ form a basis of $V_1\otimes V_2$, or use the isomorphism $V_1\otimes V_2\to L(V_1^{\vee}, V_2)$, $v_1\otimes v_2\mapsto \{\phi \mapsto \phi(v_1)v_2\}$). In your case, as the classes of $x,y$ in $m/m^2$ are non-zero (they even form a basis), $x\otimes y\ne 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Where is complex Log continuous Write down the biggest subset $D$ of $\mathbb C$ on which ${\rm Log(z)}$ is a continuous function. Explain why ${\rm Log(z)}$ is not continuous at points outside $D.$ Anyone know the answer to this?
I'll tell you one possible answer, and you tell me why (I'm assuming the principal branch): Answer: $D$ is the complex plane minus the non-positive reals. Hint: Follow things in a semi-circle and observe the "jump" in angles.
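A quick numerical illustration of that jump (variable names illustrative): approaching $-1$ from just above and just below the negative real axis, the imaginary part of the principal logarithm flips between $+\pi$ and $-\pi$.

```python
import cmath

eps = 1e-9
above = cmath.log(complex(-1.0, +eps))   # just above the negative real axis
below = cmath.log(complex(-1.0, -eps))   # just below it

# The argument jumps from +pi to -pi across the cut, so Log cannot
# be continuous at any point of the non-positive real axis.
assert abs(above.imag - cmath.pi) < 1e-6
assert abs(below.imag + cmath.pi) < 1e-6
```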
{ "language": "en", "url": "https://math.stackexchange.com/questions/133464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
When is $X^n-a$ is irreducible over F? Let $F$ be a field, let $\omega$ be a primitive $n$th root of unity in an algebraic closure of $F$. If $a$ in $F$ is not an $m$th power in $F(\omega)$ for any $m\gt 1$ that divides $n$, how to show that $x^n -a$ is irreducible over $F$?
Below is a classical result: Theorem $\ $ Suppose $\,c\in F\,$ a field, and $\,0 < n\in\mathbb Z$. $\ \ \ x^n - c\ $ is irreducible over $F\! \iff c \not\in F^p\,$ for all primes $\,p\mid n,\,$ and $\ c\not\in -4F^4$ when $\, 4\mid n$ A proof is in many Field Theory textbooks, e.g. Karpilovsky, Topics in Field Theory, Theorem 8.1.6.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 2, "answer_id": 1 }
Question about the independence definition. Why does the independence definition require that every subfamily of the events $A_1,A_2,\ldots,A_n$ satisfies $P(A_{i_1}\cap \cdots \cap A_{i_k})=\prod_j P(A_{i_j})$, where $i_1 < i_2 < \cdots < i_k$ and $k \leq n$? My doubt arose from this: Suppose $A_1,A_2$ and $A_3$ are such that $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$. Then $$P(A_1\cap A_2)=P(A_1\cap A_2 \cap A_3) + P(A_1\cap A_2 \cap A_3^c)$$ $$=P(A_1)P(A_2)(P(A_3)+P(A_3^c))=P(A_1)P(A_2).$$ So it seems to me that if $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$ then $P(A_i\cap A_j)=P(A_i)P(A_j)$, i.e., the biggest collection's independence implies the smaller ones. Why am I wrong? The calculations seem right to me; maybe my conclusions from them are wrong?
When a tetrahedral die is rolled, the outcome is the face on the bottom of the die when it comes to rest. Suppose the four faces are marked $2,3,5,30$, and these numerical outcomes occur with probabilities $\frac{11}{24}, \frac{7}{24}, \frac{5}{24}$ and $\frac{1}{24}$ respectively. Let $A$, $B$, and $C$ denote the events that the outcome is a multiple of $2$, $3$, and $5$ respectively. Then, $$\begin{align*} P(A) &= P\{2,30\} = \frac{1}{2}\\ P(B) &= P\{3,30\} = \frac{1}{3}\\ P(C) &= P\{5,30\} = \frac{1}{4}\\ P(ABC) &= P\{30\} = \frac{1}{24} = P(A)P(B)P(C) \end{align*}$$ but $AB = AC = BC = ABC$ and thus $$P(AB) \neq P(A)P(B), \quad P(AC) \neq P(A)P(C), \quad P(BC) \neq P(B)P(C)$$ On the other hand, if it is a fair die, then $P(A)=P(B)=P(C) = \frac{1}{2}$ and since $P(ABC) = \frac{1}{4}$, we have that $$P(AB) = P(A)P(B), \quad P(AC) = P(A)P(C), \quad P(BC) = P(B)P(C)$$ but $$P(ABC) \neq P(A)P(B)P(C).$$
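The die example can be checked with exact rational arithmetic (the helper names below are illustrative):

```python
from fractions import Fraction as F

prob = {2: F(11, 24), 3: F(7, 24), 5: F(5, 24), 30: F(1, 24)}

def P(event):
    """Probability of a set of outcomes."""
    return sum(prob[x] for x in event)

A = {x for x in prob if x % 2 == 0}   # multiples of 2 -> {2, 30}
B = {x for x in prob if x % 3 == 0}   # multiples of 3 -> {3, 30}
C = {x for x in prob if x % 5 == 0}   # multiples of 5 -> {5, 30}

assert (P(A), P(B), P(C)) == (F(1, 2), F(1, 3), F(1, 4))
assert P(A & B & C) == P(A) * P(B) * P(C)   # the triple product holds...
assert P(A & B) != P(A) * P(B)              # ...but pairwise independence fails
```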
{ "language": "en", "url": "https://math.stackexchange.com/questions/133646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Problem of finding a subgroup without Sylow's theorem. Let $G$ be a group of order $p^n$, where $p$ is prime and $n \geq 3$. By Sylow's theorem, we know that $G$ has a subgroup of order $p^2$. But I wonder how to prove this without Sylow's theorem.
Let's try a direct approach assuming the weaker condition that $G$ is finite with $p^2\mid |G|$ (i.e., $G$ is not necessarily a $p$-group): By Cauchy, there exists an element $x \in G$ of order $p$. Case 1: $p$ divides the index of the normalizer $\mathop{N}_G(\langle x \rangle)$ in $G$. $\langle x \rangle$ acts by right multiplication on the set of cosets $\mathop{N}_G(\langle x \rangle)\backslash G = \{\mathop{N}_G(\langle x \rangle)g : g\in G\}$. All orbits of $\langle x \rangle$ have order $1$ or $p$, implying that the number of fixed points of $\langle x \rangle$ equals the index of $\mathop{N}_G(\langle x \rangle)$ in $G$ modulo $p$, which is $0$ modulo $p$ in this case. As $\mathop{N}_G(\langle x \rangle)$ is a fixed point of $\langle x \rangle$, there has to be another fixed point, say $\mathop{N}_G(\langle x \rangle)g = \mathop{N}_G(\langle x \rangle)gx$, which is equivalent to $gxg^{-1} = x^{g^{-1}} \in \mathop{N}_G(\langle x \rangle)$. As $g\not\in\mathop{N}_G(\langle x \rangle)$, $x^{g^{-1}}$ is an element of order $p$ not contained in $\langle x \rangle$ that normalizes $\langle x \rangle$. Hence $\langle x, x^{g^{-1}} \rangle$ is a subgroup of order $p^2$ of $G$. Case 2: The index of the normalizer $\mathop{N}_G(\langle x \rangle)$ in $G$ is coprime to $p$. Then the quotient group $\mathop{N}_G(\langle x \rangle)/\langle x \rangle$ has order divisible by $p$. By Cauchy, it contains a subgroup of order $p$, whose preimage has order $p^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
limits of the sequence $n/(n+1)$ Given the problem: Determine the limit of the sequence $\{x_n\}^ \infty_{ n=1}$ $$x_n = \frac{n}{n+1}$$ The solution to this is: step1: $\lim\limits_{n \rightarrow \infty} x_n = \lim\limits_{n \rightarrow \infty} \frac{n}{n + 1}$ step2: $=\lim\limits_{n \rightarrow \infty} \frac{1}{1+\frac{1}{n}}$ step3: $=\frac{1}{1 + \lim\limits_{n \rightarrow \infty} \frac{1}{n}}$ step4: $=\frac{1}{1 + 0}$ step5: $=1$ I get how you go from step 2 to 5, but I don't understand how you go from step 1 to 2. Again, I'm stuck on the basic high-school math. Please help
This is just algebraic manipulation from step 1 to step 2. Since $n \neq 0$, we can do the following. I will write it out in full detail so that you are clear on the steps involved. $$\begin{eqnarray*} \frac{n}{n+1} &=& n \left(\frac{1}{n+1}\right)\\ &=& (n^{-1})^{-1} \left(\frac{1}{n+1}\right)\\ &=& \left(\frac{1}{n}\right)^{-1} \left(\frac{1}{n+1}\right) \\ &=&\frac{1}{\left(\frac{1}{n}\right)}\left(\frac{1}{n+1}\right)\\ &=& \frac{1}{ \left(\frac{1}{n}\right)(n+1)}\\ &=& \frac{1}{\left(\frac{n+1}{n}\right)} \\ &=& \frac{1}{\left( \frac{n}{n} + \frac{1}{n} \right)}\\ &=& \frac{1}{ \left( 1 + \frac{1}{n} \right)} \end{eqnarray*}$$ as desired.
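If it helps, the step-1-to-step-2 identity and the limit itself can both be checked numerically:

```python
x = lambda n: n / (n + 1)

# Step 1 -> step 2: n/(n+1) equals 1/(1 + 1/n) for every n >= 1
assert all(abs(x(n) - 1 / (1 + 1 / n)) < 1e-12 for n in range(1, 1000))

# ...and the terms approach 1 as n grows:
assert abs(x(10**9) - 1) < 1e-8
```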
{ "language": "en", "url": "https://math.stackexchange.com/questions/133796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Integral related to $\sum\limits_{n=1}^\infty\sin^n(x)\cos^n(x)$ Playing around in Mathematica, I found the following: $$\int_0^\pi\sum_{n=1}^\infty\sin^n(x)\cos^n(x)\ dx=0.48600607\ldots =\Gamma(1/3)\Gamma(2/3)-\pi.$$ I'm curious... how could one derive this?
I think you are making this much more difficult than it has to be. Since $|\sin(x)\cos(x)|< 1$ you have that $$\sum_{n=1}^{\infty}(\sin(x)\cos(x))^n=\frac{\sin(x)\cos(x)}{1-\sin(x)\cos(x)}=\frac{\sin(2x)}{2-\sin(2x)}$$ And you can just find through normal calc that $$\int \frac{\sin(2x)}{2-\sin(2x)}\mathrm dx=-\left(x+\frac{2}{\sqrt{3}}\arctan\left(\frac{1-2\tan(x)}{\sqrt{3}}\right)\right)$$
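As a sanity check, the definite integral from the question can be computed with a simple composite Simpson rule (stdlib only; helper names are illustrative) and compared against $\Gamma(1/3)\Gamma(2/3)-\pi = \frac{2\pi}{\sqrt{3}}-\pi$, using the reflection formula $\Gamma(1/3)\Gamma(2/3)=\pi/\sin(\pi/3)$:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

f = lambda t: math.sin(2 * t) / (2 - math.sin(2 * t))
value = simpson(f, 0.0, math.pi)

# Gamma(1/3) * Gamma(2/3) = pi / sin(pi/3) = 2*pi/sqrt(3) by reflection
closed_form = 2 * math.pi / math.sqrt(3) - math.pi
assert abs(value - closed_form) < 1e-8          # both are 0.48600607...
```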
{ "language": "en", "url": "https://math.stackexchange.com/questions/133858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Why use absolute value for the Cauchy–Schwarz Inequality? I see the Cauchy–Schwarz Inequality written as follows: $$|\langle u,v\rangle| \leq \lVert u\rVert \cdot\lVert v\rVert.$$ Why is the absolute value of $\langle u,v\rangle$ specified? Surely it is apparent that if the right-hand side is greater than or equal to, for example, $5$, then it will be greater than or equal to $-5$?
What if the inner product is a complex number which can happen if $u$ and $v$ are vectors of complex numbers? For real vectors, the Cauchy-Schwarz Inequality is better written as $$-||u||\cdot ||v|| \leq \langle u, v \rangle \leq ||u||\cdot ||v||$$ where, if $||v|| > 0$, then equality holds in the right inequality if $u = \lambda v$ with $\lambda \geq 0$ and in the left inequality if $u = \lambda v$ with $\lambda \leq 0$.
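A tiny concrete case of a genuinely complex inner product (the standard Hermitian dot product on $\mathbb{C}^2$; helper names illustrative): here $\langle u,v\rangle$ is purely imaginary, so "$\langle u,v\rangle \leq \lVert u\rVert\,\lVert v\rVert$" would not even make sense without the modulus.

```python
import math

def inner(u, v):
    """Hermitian inner product <u, v> = sum of u_i * conj(v_i)."""
    return sum(a * b.conjugate() for a, b in zip(u, v))

def norm(u):
    return math.sqrt(inner(u, u).real)

u = [1 + 0j, 1j]
v = [1j, 0j]

z = inner(u, v)
assert z == -1j                        # a complex number, not comparable to reals
assert abs(z) <= norm(u) * norm(v)     # Cauchy-Schwarz with the modulus
```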
{ "language": "en", "url": "https://math.stackexchange.com/questions/133945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
two sides and angle between them triangle question. is it possible to find the third side of a triangle if you know the lengths of the other two and the angle between the known sides? the triangle is not equilateral. we're using the kinect camera and we can find the distance from the camera to the start of a line and the distance to the end of a line, we could calculate the angle between the two lines knowing the maximum vertical and horizontal angle but would it be possible to calculate the length of the line on the ground? the problem is that the camera won't be exactly above the line so the triangle we get wouldn't be equilateral.
How about the law of cosines? Consider a triangle $\triangle ABC$ with the sides $\overline{AB}=c$ and $\overline{AC}=b$ known, and the angle between them, $\alpha$, also known. Then the law of cosines tells you that $$a^2=b^2+c^2-2bc\,\cos \alpha$$
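For the camera application in the question, the formula is a one-liner (function name illustrative; the angle must be in radians):

```python
import math

def third_side(b, c, alpha):
    """Length of the side opposite the known angle alpha (radians),
    given the two adjacent side lengths b and c (law of cosines)."""
    return math.sqrt(b * b + c * c - 2 * b * c * math.cos(alpha))

# Right angle: the formula reduces to the Pythagorean theorem.
assert abs(third_side(3, 4, math.pi / 2) - 5) < 1e-12
```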
{ "language": "en", "url": "https://math.stackexchange.com/questions/134012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
A, B subgroups of G, B/A abelian. Show that BN/AN is abelian. I'm going through a past paper on Group theory and I wondered if someone could help with a solution to this. Let $A$, $B$ be subgroups of $G$ such that $A\triangleleft B$ and $B/A$ is abelian. Also $N\triangleleft G$. I proved that $AN\triangleleft BN$ and now I need to show that $BN/AN$ is abelian. Thanks
By the Isomorphism Theorems, since $A\subset B$ so $BA=B$, we have: $$\frac{BN}{AN} = \frac{B(AN)}{AN} \cong \frac{B}{B\cap AN}.$$ Since $A\leq AN$, then $A = B\cap A\leq B\cap AN$. Hence $$\frac{B}{B\cap AN}$$ is a quotient of $B/A$, hence abelian since $B/A$ is abelian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Metric Space Open Sets. Let $(X, \rho)$ be a metric space. I've shown $\sigma(s,t) = \frac{\rho(s,t)}{1 + \rho(s,t)}$ is also a metric on $X$. I'm having trouble showing that the open sets defined by the metric $\rho$ are the same as the open sets defined by $\sigma$. I know I must show that an open ball in the $\rho$ metric is an open set in the $\sigma$ metric, and that an open ball in the $\sigma$ metric is an open set in the $\rho$ metric. Any hints or advice?
Hint: note that $\sigma=\frac{\rho}{1+\rho}=1-\frac{1}{1+\rho}$ is strictly increasing on $[0;+\infty)$, therefore, $\sigma<\epsilon$ implies $\rho<\delta$ (for some $\delta$) and vice versa.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Intersection of two subfields of the Rational Function Field in characteristic $0$ Let $K=F(x)$ be the rational function field over a field $F$ of characteristic $0$, let $L_1=F(x^2)$, and $L_2=F(x^2+x)$. How to show that $L_1\cap L_2 = F$?
Hint $\:$ Exploit parity: $\rm\:h(x) = f(x^2+x)\in F[x]$ is even $\rm (h(-x) = h(x))$ $\rm\:\Rightarrow\: f\in F\:$ is constant, since otherwise its highest degree term yields an odd term, namely $$\rm\:f_n (x^2+x)^n\! + f_{n-1} (x^2+x)^{n-1}+\cdots\: =\ f_n x^{2n}\! + n\:f_n\:x^{2n-1}\! + g(x),\ \ deg\ g\: \le\: 2n\!-\!2$$ The odd monomial $\rm\:x^{2n-1}$ has nonzero coefficient $\rm\:n, f_n\ne 0\:$ $\Rightarrow$ $\rm\:n\:f_n \ne 0\:$ by $\rm\:char\ F = 0,\:$ hence $\rm\:f(x)\:$ is not even. Ditto for rational functions: if $\rm\: h(x) = f(x^2+x)/g(x^2+x)\:$ is even then $\rm\:h(-x) = h(x)\:$ $\Rightarrow$ $\rm\:f(x^2+x)g(x^2-x) = f(x^2-x)g(x^2+x)\:$ is even, so $\rm\in F,\:$ so $\rm\:f,g\in F.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/134258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
inequality with modulus of complex number Let $ \displaystyle{ z_1, z_2 \in \mathbb{C} }$ where $ z_1, z_2 \neq 0$ Prove that: $\displaystyle |z_1 +z_2| \geq \frac{1}{2} \left( |z_1|+|z_2| \right) \left|\frac{z_1}{|z_1|} + \frac{z_2}{|z_2|}\right| $. P.S I think that I have to use the inequality $ Re(z_1z_2) \leq |z_1||z_2| $ but I don't know how.
Write $z_1=r_1e^{i\theta_1}$ and $z_2=r_2e^{i\theta_2}$. Since $z_1, z_2\neq 0$, we have $r_1, r_2>0$. Then $$\tag{1}\left(\frac{1}{2} \left( |z_1|+|z_2| \right) \left|\frac{z_1}{|z_1|} + \frac{z_2}{|z_2|}\right|\right)^2 =\left(\frac{1}{2}(r_1+r_2)|e^{i\theta_1}+e^{i\theta_2}|\right)^2=\frac{1}{4}(r_1+r_2)^2|e^{i\theta_1}+e^{i\theta_2}|^2$$ $$=\frac{1}{4}(r_1+r_2)^2(2+e^{i(\theta_1-\theta_2)}+e^{i(\theta_2-\theta_1)})$$ since $$\tag{2} |e^{i\theta_1}+e^{i\theta_2}|^2=(e^{i\theta_1}+e^{i\theta_2})(e^{-i\theta_1}+e^{-i\theta_2})=2+e^{i(\theta_1-\theta_2)}+e^{i(\theta_2-\theta_1)}.$$ Note also that $$\tag{3}|z_1+z_2|^2=|r_1e^{i\theta_1}+r_2e^{i\theta_2}|^2= r_1^2+r_2^2+r_1r_2e^{i(\theta_1-\theta_2)}+r_1r_2e^{i(\theta_2-\theta_1)}.$$ Subtracting $(1)$ from $(3)$, we obtain $$|z_1+z_2|^2-\left(\frac{1}{2} \left( |z_1|+|z_2| \right) \left|\frac{z_1}{|z_1|} + \frac{z_2}{|z_2|}\right|\right)^2=\frac{1}{2}(r_1-r_2)^2-\frac{1}{4}(r_1-r_2)^2\big(e^{i(\theta_1-\theta_2)}+e^{i(\theta_2-\theta_1)}\big)$$ $$=\frac{1}{4}(r_1-r_2)^2(2-e^{i(\theta_1-\theta_2)}-e^{i(\theta_2-\theta_1)}) =\frac{1}{4}(r_1-r_2)^2|e^{i\theta_1}-e^{i\theta_2}|^2\geq 0,$$ where the last equality follows from a calculation similar to $(2)$. So this implies that $$\displaystyle |z_1 +z_2| \geq \frac{1}{2} \left( |z_1|+|z_2| \right) \left|\frac{z_1}{|z_1|} + \frac{z_2}{|z_2|}\right|.$$
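The inequality can also be spot-checked numerically on random nonzero complex numbers (illustrative helper names):

```python
import random

def lhs(z1, z2):
    return abs(z1 + z2)

def rhs(z1, z2):
    # (|z1| + |z2|) / 2 times the modulus of the sum of the unit vectors
    return 0.5 * (abs(z1) + abs(z2)) * abs(z1 / abs(z1) + z2 / abs(z2))

random.seed(0)
for _ in range(1000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    if z1 == 0 or z2 == 0:
        continue
    assert lhs(z1, z2) >= rhs(z1, z2) - 1e-12   # small slack for rounding
```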
{ "language": "en", "url": "https://math.stackexchange.com/questions/134333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Irreducibility of polynomials This is a very basic question, but one that has frustrated me somewhat. I'm dealing with polynomials and trying to see if they are irreducible or not. Now, I can apply Eisenstein's Criterion and deduce for some prime p if a polynomial over Z is irreducible over Q or not and I can sort of deal with basic polynomials that we can factorise easily. However I am looking at the polynomial $t^3 - 2$. I cannot seem to factor this down, but a review book is asking for us to factorise into irreducibles over a) $\mathbb{Z}$, b) $\mathbb{Q}$, c) $\mathbb{R}$, d) $\mathbb{C}$, e) $\mathbb{Z}_3$, f) $\mathbb{Z}_5$, so obviously it must be reducible in one of these. Am I wrong in thinking that this is irreducible over all? (I tried many times to factorise it into any sort of irreducibles but the coefficients never match up so I don't know what I am doing wrong). I would really appreciate if someone could explain this to me, in a very simple way. Thank you.
If you find a root $t=a$ then $t-a$ is a factor of the original polynomial. This means (equating coefficients of $t^3$): $$t^3-2 = (t-a)(t^2+bt+c)$$ Equating coefficients we get that $ac=2$ (constant term) and $-a+b=0$ (quadratic term), so you can compute $b$ and $c$ and complete the factorisation. You know the linear term will work out because you have checked that $a$ is a root, but this can be used to check your arithmetic.
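For the finite-field parts of the exercise, the answer's root-finding strategy can be run by brute force, and it shows exactly where $t^3-2$ does factor (sketch with illustrative names):

```python
def roots_mod(coeffs, p):
    """Roots in Z_p of a polynomial given by coefficients from the
    highest degree down, found by brute force with Horner evaluation."""
    def ev(t):
        acc = 0
        for c in coeffs:
            acc = (acc * t + c) % p
        return acc
    return [t for t in range(p) if ev(t) == 0]

# t^3 - 2 over Z_3 and Z_5:
assert roots_mod([1, 0, 0, -2], 3) == [2]   # t^3 - 2 = (t + 1)^3 in Z_3
assert roots_mod([1, 0, 0, -2], 5) == [3]   # 3^3 = 27 = 2 (mod 5), so t - 3 divides it
```

Over $\mathbb{Z}_3$ the single root $t=2$ is triple (characteristic-3 freshman's dream), while over $\mathbb{Z}_5$ dividing out $t-3$ leaves an irreducible quadratic.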
{ "language": "en", "url": "https://math.stackexchange.com/questions/134408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Are groups algebras over an operad? I'm trying to understand a little bit about operads. I think I understand that monoids are algebras over the associative operad in sets, but can groups be realised as algebras over some operad? In other words, can we require the existence of inverses in the structure of the operad? Similarly one could ask the same question about (skew-)fields.
As far as I know, the answer is "no". The point is that the axioms of an operad must contain no repeated variables (think of the associativity or commutativity law, which are written $(xy)z = x(yz)$ and $xy = yx$, or the Jacobi identity $[[x,y],z] + [[y,z],x] + [[z,x],y] = 0$). On the other hand, the axioms for a group include the axiom $x x^{-1} = 1$, which involves the same variable $x$ twice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
solving a recurrence Given the general recurrence equation $ a_{n+1}-a_{n}=f(n)a_{n+2}$ (1), is it possible to find a function $ g(x)$ so that $ g(x)= \sum_{n=0}^{\infty}a_{n}x^{n}$, where the $ a_{n}$ are the solutions of the recurrence (1)? In the case $ f(n)=\mathrm{const}$ I know how to get it, but for a non-constant function $f(n)$ I have no idea; for $ f(n)$ a polynomial, I guess that $ g(x)$ will satisfy a differential equation.
Indeed, when $f$ is a polynomial you will get a differential equation. Define $$g(x)=\sum_{n=0}^\infty a_n x^n.$$ Then $$h(x):=\frac{g(x)-a_0}{x}-g(x)=\sum_{n=0}^\infty (a_{n+1}-a_n)x^n=\sum_{n=0}^\infty a_{n+2}f(n) x^n. \tag{1}$$ Now the falling factorials form a $\Bbb Q$-basis for the vector space of rational-coefficient polynomials, thus we can write $f(n)=\sum_k c_k (n)_k$ and obtain $$h(x) = \sum_k c_k\sum_{n=0}^\infty a_{n+2} (n)_k x^n = \left(\sum_k c_k x^k \frac{d^k}{dx^k}\right)\underbrace{\sum_{n=0}^\infty a_{n+2}x^n}_{\ell(x)}. \tag{2}$$ Note that $$\ell(x)=\frac{g(x)-a_0-a_1x}{x^2}.$$ Combining $(1)$ and $(2)$ gives the desired differential equation. Similar algebra works when $f$ is a combination of powers and exponentials as well. I'm not sure if there's a general solution to the problem, though...
{ "language": "en", "url": "https://math.stackexchange.com/questions/134723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Taking the derivative of $y = \dfrac{x}{2} + \dfrac {1}{4} \sin(2x)$ Again a simple problem that I can't seem to get the derivative of I have $\frac{x}{2} + \frac{1}{4}\sin(2x)$ I am getting $\frac{x^2}{4} + \frac{4\sin(2x)}{16}$ This is all very wrong, and I do not know why.
Jordan, The derivative of your function is $\frac{1}{2} + \frac{\cos 2x}{2}$. Now note that $\cos 2x = \cos^2 x -\sin^2 x = \cos^2 x -(1 -\cos^2 x) =2\cos^2x -1$. Rearranging, you get $$\cos^2 x =\frac{\cos 2x}{2} + \frac{1}{2}.$$ $$ \begin{align*} \cos 2x = \cos(x+x) & =\cos x \cos x -\sin x \sin x \\ & = \cos^2x -\sin^2x\\ & = \cos^2x -(1-\cos^2x)\qquad\text{because}~\cos^2x + \sin^2 x =1.\\ & = \cos^2x-1+ \cos^2x\\ & = 2\cos^2x-1 \end{align*} $$ So, you have $\cos2x = 2\cos^2x -1$, which is the same as $\cos 2x + 1 = 2\cos^2x$. Divide both sides by $2$ to get what you want.
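A quick numerical check that the derivative really simplifies to $\cos^2 x$ (central differences; names illustrative):

```python
import math

y = lambda x: x / 2 + math.sin(2 * x) / 4

def num_deriv(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# y'(x) = 1/2 + cos(2x)/2 = cos(x)^2 at every sampled point:
for x in [0.0, 0.7, 1.3, 2.9]:
    assert abs(num_deriv(y, x) - math.cos(x) ** 2) < 1e-8
```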
{ "language": "en", "url": "https://math.stackexchange.com/questions/134855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Do "Parabolic Trigonometric Functions" exist? The parametric equation $$\begin{align*} x(t) &= \cos t\\ y(t) &= \sin t \end{align*}$$ traces the unit circle centered at the origin ($x^2+y^2=1$). Similarly, $$\begin{align*} x(t) &= \cosh t\\ y(t) &= \sinh t \end{align*}$$ draws the right part of a regular hyperbola ($x^2-y^2=1$). The hyperbolic trigonometric functions are very similar to the standard trigonometric function. Do similar functions exist that trace parabolas (because it is another conic section) when set up as parametric equations like the above functions? If so, are they also similar to the standard and hyperbolic trigonometric functions?
The problem of generalized forms of trigonometry has been touched on in the past by several authors; E. Ferrari (Rome University) proposed different forms and proved the link with elliptic functions. Dattoli, Migliorati and Ricci used Ferrari's approach to study the parabolic trigonometric functions and the relevant link with Chebyshev polynomials. The relevant papers have appeared on arXiv: * *The Parabolic-Trigonometric Functions *The parabolic trigonometric functions and the Chebyshev radicals
{ "language": "en", "url": "https://math.stackexchange.com/questions/134906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71", "answer_count": 9, "answer_id": 0 }
Morphism between projective schemes induced by surjection of graded rings Ravi Vakil 9.2.B is "Suppose that $S \rightarrow R$ is a surjection of graded rings. Show that the induced morphism $\text{Proj }R \rightarrow \text{Proj }S$ is a closed embedding." I don't even see how to prove that the morphism is affine. The only ways I can think of to do this are to either classify the affine subspaces of Proj S, or to prove that when closed morphisms are glued, one gets a closed morphism. Are either of those possible, and how can this problem be done?
Almost 7 years late! Here is my try. Hallo Thorsten! I call our maps $f \colon \operatorname{Proj} B \to \operatorname{Proj} A$ and $\varphi \colon A \to B$. Surjectivity implies that we actually have a well-defined map $\operatorname{Proj} B \to \operatorname{Proj} A$ and a morphism of schemes in this way. Being a closed immersion is affine-local on the target. Therefore we can consider some cover of open affines $\bigcup_{j \in J} V_j = \operatorname{Proj} A$ and then check that for each $j \in J$ we have a closed immersion $f \mid_{f^{-1}(V_j)} \colon f^{-1}(V_j) \hookrightarrow V_j$. This is described in Vakil's notes as an exercise. We have that the collection over all homogeneous $g \in A$ of $D(g) = \{\,p \in \operatorname{Proj} A \mid g \notin p \,\}$ cover $\operatorname{Proj} A$. As $\varphi$ is surjective, we have $f^{-1} (D(g)) = D(\varphi(g))$. We now have \begin{align*} f \mid_{D(\varphi(g))} \colon D(\varphi(g)) & \hookrightarrow D(g) \\ p & \mapsto \varphi^{-1} (p) \, . \end{align*} These sets are all open affines! For any graded ring $R$, we have for any homogeneous $h \in R$ the identification $D(h) = \operatorname{Spec}(R_h)_0 = \operatorname{Spec}\{\, \frac{x}{h^n} \mid n \in \mathbb N, \, \deg x = \deg h \cdot n \,\}$. (Sometimes, $(R_h)_0$ is confusingly written as $R_{(h)}$.) Our map can then be seen as \begin{align*} f \mid_{\operatorname{Spec} (B_{\varphi(g)})_0} \colon \operatorname{Spec} (B_{\varphi(g)})_0 & \hookrightarrow \operatorname{Spec} (A_g)_0 \\ p & \mapsto \varphi^{-1} (p) \; , \end{align*} which corresponds to the surjective ring homomorphism \begin{align*} \varphi (D(g)) \colon (A_g)_0 & \to (B_{\varphi(g)})_0 \\ \frac{x}{g^n} & \mapsto \frac{\varphi(x)}{\varphi(g)^n} \; , \end{align*} which means that $f \mid_{f^{-1}(D(g))} \colon f^{-1}(D(g)) \hookrightarrow D(g)$ is a closed immersion, concluding that $f$ is a closed immersion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
How to show that if a matrix A is diagonalizable, then a similar matrix B is also diagonalizable? So a matrix $B$ is similar to $A$ if for some invertible $S$, $B=S^{-1}AS$. My idea was to start with saying that if $A$ is diagonalizable, that means $A={X_A}^{-1}\Lambda_A X_A$, where $X$ is the eigenvector matrix of $A$, and $\Lambda$ is the eigenvalue matrix of $A$. And I basically want to show that $B={X_B}^{-1}\Lambda_B X_B$. This would mean $B$ is diagonalizable right? I am given that similar matrices have the same eigenvalues, and if $x$ is an eigenvector of $B$, then $Sx$ is an eigenvector of $A$. That is, $Bx=\lambda x \implies A(Sx)=\lambda(Sx)$. Can someone enlighten me please? Much appreciated.
Suppose $A$ diagonalisable, $B$ similar to $A$. Then we have that there is an invertible matrix $S$ such that $S^{-1}AS = B$. It follows that $SBS^{-1} =A$. Since $X_A$ is the eigenvector matrix for $A$ (if that's what you call it), it follows that $$\begin{eqnarray*} \Lambda_A &=& X_A A X_A^{-1} \\ &=& (X_A)SBS^{-1}(X_A^{-1})\\ &=& (X_AS)B(X_AS)^{-1} \end{eqnarray*}$$ is a diagonal matrix. Hence $B$ is diagonalisable, diagonalised by the matrix $(X_AS)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/135020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Projection matrix and Eigenvalue Would like to have some guidance. $P$ is the projection matrix onto $U$, and $v \neq 0$, $v \in \mathbb{R}^2$. I need to show that if $v$ is an element of $U$ then $v$ is an eigenvector of $P$ with eigenvalue $1$. I know that for a projection matrix the eigenvalues are $1$ or $0$... but why in this case only $1$?
I think you mean that if $v \in U$ then $v$ is an eigenvector of $P$ (you said $A$) with eigenvalue 1. All you need here is the fact that $P$ is, by definition, the projection onto $U$. So what happens to a $v$ in $U$ under the projection onto $U$ by $P$? It projects it to itself. I.e., if $v \neq 0$ and $v \in U$, then $Pv = v$, so $v$ is an eigenvector with eigenvalue $1$ — and that is why only $1$ occurs for vectors in $U$.
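A concrete toy check (a sketch; $U$ is taken to be the $x$-axis in $\mathbb{R}^2$ just for illustration):

```python
# Sketch: P projects R^2 onto U = the x-axis; any nonzero v in U is fixed by
# P (eigenvalue 1), while vectors orthogonal to U are sent to 0 (eigenvalue 0).
P = [[1.0, 0.0],
     [0.0, 0.0]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [3.0, 0.0]       # nonzero vector in U
Pv = apply(P, v)     # equals v, so v is an eigenvector with eigenvalue 1
w = [0.0, 2.0]       # vector orthogonal to U
Pw = apply(P, w)     # equals the zero vector, eigenvalue 0
```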
{ "language": "en", "url": "https://math.stackexchange.com/questions/135065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
integration of fractions I am trying to integrate the following expression $$ \int\frac 1{(x^2-1)\cdot (x+2)}\,dx$$ I can represent $(x^2-1)=(x-1)(x+1)$, so it would be converted into the following form $$\int\frac1{(x^2-1)(x+2)}\,dx=\int \frac1{(x-1)(x+1)(x+2)}\,dx$$ or, equivalently, $$\int \frac1{(x-1)(x^2+3x+2)}\,dx$$ The last one we can decompose into the form $$ \frac1{(x-1)(x^2+3x+2)}=\frac A{x-1}+\frac{Cx+D}{x^2+3x+2}$$ Am I right? Or did I miss some term?
I think you can decompose it like this: $$ \frac{1}{(x^2-1)\cdot(x+2)}=\frac{a}{x-1}+\frac{b}{x+1}+\frac{c}{x+2} $$ Thus we can solve the following equations: $$ a+b+c=0\\3a+b=0\\2a-2b-c=1 $$ getting $a=1/6,b=-1/2,c=1/3$. Therefore, $$ \int\frac{dx}{(x^2-1)\cdot(x+2)}\\=\int\frac{1}{6}\cdot\frac{dx}{x-1}-\int\frac{1}{2}\cdot\frac{dx}{x+1}+\int\frac{1}{3}\cdot\frac{dx}{x+2}\\=\frac{1}{6}\cdot \log|x-1|-\frac{1}{2}\cdot \log|x+1|+\frac{1}{3}\cdot \log|x+2|+C. $$
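A quick numeric check of the decomposition (a sketch; the sample points are arbitrary, chosen away from the poles $1, -1, -2$):

```python
# Sketch: spot-check 1/((x^2-1)(x+2)) against the partial fractions
# a/(x-1) + b/(x+1) + c/(x+2) with a = 1/6, b = -1/2, c = 1/3.
def lhs(x):
    return 1.0 / ((x ** 2 - 1) * (x + 2))

def rhs(x):
    return (1 / 6) / (x - 1) - (1 / 2) / (x + 1) + (1 / 3) / (x + 2)

errs = [abs(lhs(x) - rhs(x)) for x in (2.0, 3.5, -0.5, 10.0)]
```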
{ "language": "en", "url": "https://math.stackexchange.com/questions/135155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing $f(x)=x^4$ is not uniformly continuous I am looking at uniform continuity (for my exam) at the moment and I'm fine with showing that a function is uniformly continuous but I'm having a bit more trouble showing that it is not uniformly continuous, for example: show that $x^4$ is not uniformly continuous on $\mathbb{R}$, so my solution would be something like: Assume that it is uniformly continuous; then: $$\forall\epsilon>0\ \exists\delta>0:\forall{x,y}\in\mathbb{R}\ \mbox{if}\ |x-y|<\delta\ \mbox{then}\ |x^4-y^4|<\epsilon$$ Take $x=\frac{\delta}{2}+\frac{1}{\delta}$ and $y=\frac{1}{\delta}$; then we have that $|x-y|=|\frac{\delta}{2}+\frac{1}{\delta}-\frac{1}{\delta}|=|\frac{\delta}{2}|<\delta$, however $$|f(x)-f(y)|=\frac{\delta^4}{16}+\frac{\delta^2}{2}+\frac{3}{2}+\frac{2}{\delta^2}$$ Every term on the right is positive, so $|f(x)-f(y)|>\frac{3}{2}$ for every $\delta>0$; hence there exists no $\delta$ for $\epsilon < \frac{3}{2}$ and we have a contradiction. So I was wondering if this was ok (I think it's fine) but also if this was the general way to go about showing that some function is not uniformly continuous? Or if there were any other ways of doing this that are not from the definition? Thanks very much for any help
To show that it is not uniformly continuous on the whole line, there are two usual (and similar) ways to do it: * *Show that for every $\delta > 0$ there exist $x$ and $y$ such that $|x-y|<\delta$ and $|f(x)-f(y)|$ is greater than some positive constant (usually this is even arbitrarily large). *Fix the $\varepsilon$ and show that for $|f(x)-f(y)|<\varepsilon$ we need $\delta = 0$. First way: Fix $\delta > 0$, set $y = x+\delta$ and check $$\lim_{x\to\infty}|x^4 - (x+\delta)^4| = \lim_{x\to\infty} 4x^3\delta + o(x^3) = +\infty.$$ Second way: Fix $\epsilon > 0$, thus $$|x^4-y^4| < \epsilon $$ $$|(x-y)(x+y)(x^2+y^2)| < \epsilon $$ $$|x-y|\cdot|x+y|\cdot|x^2+y^2| < \epsilon $$ $$|x-y| < \frac{\epsilon}{|x+y|\cdot|x^2+y^2|} $$ but this describes a necessary condition, so $\delta$ has to be at least as small as the right side, i.e. $$|x-y| < \delta \leq \frac{\epsilon}{|x+y|\cdot|x^2+y^2|} $$ so if either of $x$ or $y$ tends to infinity then $\delta$ tends to $0$. Hope that helps ;-) Edit: after explanation and calculation fixes, I don't disagree with your proof.
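A tiny numeric illustration of the first way (a sketch; the sample points and the threshold are arbitrary): keep $|x-y| = \delta/2 < \delta$ fixed and watch $|x^4-y^4|$ blow up as $x$ grows.

```python
# Sketch: fix delta, keep |x - y| = delta/2 < delta, and watch |x^4 - y^4|
# grow without bound as x increases (so no single delta can work).
delta = 0.01
gaps = []
for x in (1.0, 10.0, 100.0, 1000.0):
    y = x + delta / 2
    gaps.append(abs(x ** 4 - y ** 4))
# gaps is increasing and eventually exceeds any fixed epsilon
```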
{ "language": "en", "url": "https://math.stackexchange.com/questions/135234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
Find values which make a matrix singular Find all the values of c for which the following matrix is singular: $$\begin{bmatrix} 1 & c & c \\ c & c & c \\ 2 & c & 3 \end{bmatrix}$$ Anyone know how to solve this?
You could also apply Gaussian Elimination to get: $$ \begin{pmatrix} 1 & c & c \\ 0 & c - c^2 & c - c^2 \\ 0 & 0 & -c + 3\\ \end{pmatrix} $$ This matrix is singular if any element on the diagonal is zero, i.e., if: $$ c - c^2 = 0 \text{ or } -c+3 = 0 $$ which is equivalent to computing the determinant. (Strictly speaking, the last elimination step divides by $c-c^2$; but when $c=0$ the second row of the original matrix is zero and when $c=1$ its first two rows are equal, so the matrix is singular in those cases anyway.)
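A quick numeric cross-check (a sketch; scanning small integer values of $c$): the determinant expands to $c^3-4c^2+3c=c(c-1)(c-3)$, which vanishes exactly at $c\in\{0,1,3\}$.

```python
# Sketch: cofactor expansion of det([[1,c,c],[c,c,c],[2,c,3]]) and a scan
# over integer c; the determinant equals c^3 - 4c^2 + 3c = c(c-1)(c-3).
def det(c):
    m = [[1, c, c], [c, c, c], [2, c, 3]]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

roots = [c for c in range(-5, 6) if det(c) == 0]   # expect [0, 1, 3]
```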
{ "language": "en", "url": "https://math.stackexchange.com/questions/135341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Learning Math On my own I'm a physics major and I'm pretty sure that I'm interested in theoretical physics research. However, I haven't been able to take math courses in a systematic way (as a math major would) due to scheduling difficulties. Therefore, I would like to fill the gaps in my math education by reading books on my own. Are there any suggestions on what topics I should concentrate on and what books are the best, considering that my main objective is to improve my physics insight. (I am very familiar with rigorous proofs and stuff.) Thanks!
A short (but hopefully helpful) answer is to consult the MIT OpenCourseWare website. See what you find interesting, and you may borrow the corresponding book from the library to read it. A good thing is that one can learn math entirely on one's own with a computer. For physics you need to learn how to do experiments, etc. A topic you might be interested in is differential geometry and geometric topology, if your interest is "theoretical physics research". Similarly you may read functional analysis, PDE, etc.... I also recommend this page, which I have known since high school and have never finished learning from...
{ "language": "en", "url": "https://math.stackexchange.com/questions/135452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find all connected 2-sheeted covering spaces of $S^1 \lor S^1$ This is exercise 1.3.10 in Hatcher's book "Algebraic Topology". Find all the connected 2-sheeted and 3-sheeted covering spaces of $X=S^1 \lor S^1$, up to isomorphism of covering spaces without basepoints. I need some start-help with this. I know there is a bijection between the subgroups of index $n$ of $\pi_1(X) \approx \mathbb{Z} *\mathbb{Z}$ and the n-sheeted covering spaces, but I don't see how this can help me find the covering spaces (preferably draw them). From the pictures earlier in the book, it seems like all the solutions are wedge products of circles (perhaps with some orientations?). So the question is: How should I think when I approach this problem? Should I think geometrically, group-theoretically, a combination of both? Small hints are appreciated. NOTE: This is for an assignment, so please don't give away the solution. I'd like small hints or some rules on how to approach problems like this one. Thanks!
As far as I know, one way to do this is to look at the actions of the group on the set $\{1,2\}$: each generator $a$ can act either trivially, as $(1)$, or by the transposition $(12)$. Each such action gives a covering space, and ranging over all the choices will give you all possible covering spaces, connected and disconnected. I hope that is correct and helpful for you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/135497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Using the definition of a concave function prove that $f(x)=4-x^2$ is concave (do not use derivative). Let $D=[-2,2]$ and $f:D\rightarrow \mathbb{R}$ be $f(x)=4-x^2$. Sketch this function.Using the definition of a concave function prove that it is concave (do not use derivative). Attempt: $f(x)=4-x^2$ is a down-facing parabola with origin at $(0,4)$. I know that. But what $D=[-2,2]$ is given for. Is it domain or a point? Then, how do I prove that $f(x)$ is concave using the definition of a concave function? I got the inequality which should hold for $f(x)$ to be concave: For two distinct non-negative values of $x (u$ and $v$) $f(u)=4-u^2$ and $f(v)=4-v^2$ Condition of a concave function: $ \lambda(4-u^2)+(1-\lambda)(4-v^2)\leq4-[(\lambda u+(1-\lambda)v]^2$ I do not know what to do next.
To be concave $f(x)=4-x^2$ should satisfy the condition for a concave function: For two distinct values of x (u and v) such that $f(u)=4-u^2$ and $f(v)=4-v^2$ the following inequality should be true: $$\lambda f(u)+(1-\lambda)f(v)\leq f(\lambda u+(1-\lambda)v), \;\text{for}\; 0<\lambda<1$$ which turns into the inequality below: $$ \lambda(4-u^2)+(1-\lambda)(4-v^2)\leq4-[\lambda u+(1-\lambda)v]^2 $$ To show that the above inequality is true, first, I expanded it and made it look like the expression Joe posted (remember that $0<\lambda<1$ by definition of a concave function). After expanding the LHS I got: $4\lambda -\lambda u^2 +4-4\lambda -v^2+\lambda v^2\leq4-[\lambda u+(1-\lambda)v]^2$ Then, I canceled out the terms $4\lambda$ and $4$ and subtracted the RHS from the LHS: $-\lambda u^2-v^2+\lambda v^2+(\lambda u+v-\lambda v)^2\leq0$ After expanding the expression in parentheses the following terms cancel out: $v^2,\lambda v^2$. Rearranging the remaining terms I got: $\lambda^2 u^2-2\lambda^2 uv+\lambda^2 v^2\leq\lambda u^2-2\lambda uv+\lambda v^2$ Which turns into the expression Joe posted: $(\lambda u-\lambda v)^2\leq (\sqrt{\lambda}u-\sqrt{\lambda}v)^2$ Then, I factored out $\lambda 's$ and subtracted the RHS from the LHS: $\lambda ^2(u-v)^2-\lambda (u-v)^2\leq0$ Finally I factored out $\lambda (u-v)^2$ and got: $\lambda (u-v)^2(\lambda - 1)\leq0$ which is definitely true because by definition $0<\lambda<1$, which makes the LHS strictly negative: $\lambda (u-v)^2(\lambda-1)<0$ I fiddled around with the terms of the inequality that should be proved for $f(x)=4-x^2$ to be concave and turned it into a form that clearly shows it is true for $0<\lambda<1$, which is part of the condition for a concave function.
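A numeric sanity test of the defining inequality (a sketch; the sample values of $\lambda$, $u$, $v$ on $D=[-2,2]$ are arbitrary):

```python
# Sketch: grid check of lam*f(u) + (1-lam)*f(v) <= f(lam*u + (1-lam)*v)
# for f(x) = 4 - x^2 on D = [-2, 2]; the 1e-12 slack absorbs float noise.
def f(x):
    return 4 - x ** 2

ok = all(
    lam * f(u) + (1 - lam) * f(v) <= f(lam * u + (1 - lam) * v) + 1e-12
    for lam in (0.1, 0.3, 0.5, 0.7, 0.9)
    for u in (-2.0, -1.0, 0.0, 1.5, 2.0)
    for v in (-2.0, -0.5, 0.5, 1.0, 2.0)
)
```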
{ "language": "en", "url": "https://math.stackexchange.com/questions/135553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Subparacompact Spaces I've been trying to learn more about subparacompact spaces. While reading an article, I noticed a theorem that was stated without proof. It said: "A countably compact, subparacompact space $X$ is compact." I am not seeing why this is true. Can anyone offer any suggestions?
After giving it more thought I was able to come up with a solution. Suppose $X$ is countably compact and subparacompact, and let $\alpha$ be an open cover of $X$. Since $X$ is subparacompact, every open cover of $X$ has a $\sigma$-discrete closed refinement. I claim that in any countably compact space, any locally finite discrete collection of subsets must be finite. Otherwise, suppose not. Then, there is a locally finite discrete collection $\{D_1, D_2, \ldots , D_n , \dots \}$ that is infinite. Now, pick a point $x_i$ in each $D_i$. Then, this sequence will have no cluster point, which is a contradiction since every sequence in a countably compact space has a cluster point. Therefore, if we have a discrete refinement of $\alpha$, it will be a countable collection of sets. Since we have a $\sigma$-discrete refinement, it will be a countable union of countable sets, which is countable. Thus, every open cover of $X$ has a countable refinement, which implies that $X$ is Lindelof. Thus, $X$ is compact since it is countably compact and Lindelof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/135618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluate the $\sin$, $\cos$ and $\tan$ without using calculator? Evaluate the $\sin$, $\cos$ and $\tan$ without using a calculator. For $150$ degrees the right answers are $\frac{1}{2}$, $-\frac{\sqrt{3}}{2}$ and $-\frac{1}{\sqrt{3}}$. For $-315$ degrees the right answers are $\frac{1}{\sqrt{2}}$, $\frac{1}{\sqrt{2}}$ and $1$.
Two more possibilities: * *Use a table of trigonometric values. *Use a ruler and a protractor in drawing the line values of the trigonometric functions. (See, for example, this.)
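For self-checking against a table, here is a short script (a sketch using Python's `math` module) comparing the two angles with the exact values quoted in the question:

```python
import math

# Sketch: compare math-module values for 150 deg and -315 deg with the
# exact values 1/2, -sqrt(3)/2, -1/sqrt(3) and 1/sqrt(2), 1/sqrt(2), 1.
vals_150 = (math.sin(math.radians(150)),
            math.cos(math.radians(150)),
            math.tan(math.radians(150)))
vals_m315 = (math.sin(math.radians(-315)),
             math.cos(math.radians(-315)),
             math.tan(math.radians(-315)))
exact_150 = (1 / 2, -math.sqrt(3) / 2, -1 / math.sqrt(3))
exact_m315 = (1 / math.sqrt(2), 1 / math.sqrt(2), 1.0)
```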
{ "language": "en", "url": "https://math.stackexchange.com/questions/135698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Average distance between two points in a circular disk How can I find an average distance between two points lying inside a circular disk of a certain radius? I wonder if there is any other way except of using a Monte Carlo method?
With the probability density you can find the average of the distance, or the distance squared, or the variance of the distance, or whatever you want. The probability density for the distance $l$ between points in a circle of radius $r$ is given by Ricardo García-Pelayo 2005 J. Phys. A: Math. Gen. 38 3475 as $$p(l)=\frac{4l}{\pi r^2}\arccos \frac{l}{2r}-\frac{2l^2}{\pi r^4}\sqrt{r^2-\frac{l^2}{4}}$$ Then the average distance is given by $$\int_0^{2r}lp(l)dl=\frac{128r}{45\pi}.$$ The average of the distance squared is given by $\int_0^{2r}l^2p(l)dl=r^2.$ Thus the standard deviation is given by $$\sqrt{\int_0^{2r}(l-\frac{128r}{45\pi})^2p(l)dl}=\sqrt{r^2-\left( \frac{128r}{45\pi} \right)^2}=\frac{r\sqrt{2025\pi^2-2^{14}}}{45\pi}$$
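For comparison, here is a quick Monte Carlo check (a sketch; the sample size and seed are arbitrary) of the closed-form mean $\frac{128r}{45\pi}$ for the unit disk:

```python
import math
import random

# Sketch: Monte Carlo estimate of the mean distance between two independent
# uniform points in the unit disk; closed form predicts 128/(45*pi) ~ 0.9054.
random.seed(0)                       # fixed seed for reproducibility

def disk_point():
    # rejection sampling from the bounding square
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

n = 100_000
total = 0.0
for _ in range(n):
    x1, y1 = disk_point()
    x2, y2 = disk_point()
    total += math.hypot(x1 - x2, y1 - y2)
estimate = total / n
exact = 128 / (45 * math.pi)
```

For a disk of radius $r$, both the estimate and the exact mean simply scale by $r$.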
{ "language": "en", "url": "https://math.stackexchange.com/questions/135766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 0 }
Huge Linear equation system Do you know any repository of huge matrices of a linear system?, Or to tell a problem in which a lot of linear equations are needed? I want to solve huge linear equations systems but have not had any luck when trying to find huge matrices.
Let $n\geq 1$ be an integer, and fix $n+1$ points in $\mathbb{R}^2$, say $(x_1,y_1)$, ... , $(x_n,y_n)$, $(x_{n+1},y_{n+1})$, such that $x_1<x_2<\ldots<x_{n+1}$. Problem: find the coefficients of the unique polynomial $p(x)\in \mathbb{R}[x]$ of degree at most $n$ that interpolates all $n+1$ points, i.e., find coefficients $a_0,\ldots,a_n\in\mathbb{R}$ such that the polynomial $$p(x)=a_0+a_1x+\cdots+a_nx^n$$ satisfies $p(x_k)=y_k$ for all $k=1,\ldots,n+1$.
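Here is a minimal pure-Python sketch (made-up sample points; a genuinely huge system would of course use an optimized solver) that sets up the Vandermonde matrix for this problem and solves it by Gaussian elimination with partial pivoting:

```python
# Sketch: interpolation as a linear system.  Row k of the Vandermonde matrix
# is (1, x_k, x_k^2, ..., x_k^n); solving M a = y gives the coefficients.
def solve(M, b):
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]                # partial pivoting
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                     # back substitution
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Sample points lying on p(x) = 1 + 2x - x^2 + 3x^3 (made up for the test).
pts = [(-1.0, -5.0), (0.0, 1.0), (1.0, 5.0), (2.0, 25.0)]
M = [[x ** k for k in range(len(pts))] for x, _ in pts]
y = [yk for _, yk in pts]
coeffs = solve(M, y)             # should recover [1, 2, -1, 3]
```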
{ "language": "en", "url": "https://math.stackexchange.com/questions/135887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What am I understanding wrong about how matrix-norm works? I'm learning how norm works today. I think I understood how a vector norm works and now I'm trying to understand how the matrix-norm works. I can't understand why the $p=1$ is the "maximum absolute column sum of the matrix". So, here's the definition of the matrix-norm: I wanted to use a very simple example, the same the professor gave me today $$ A = \begin{pmatrix} 1 & 10 & 3 \\ -5 & -1 & 0 \\ 3i & 2 & 0 \end{pmatrix} $$ So, from the definition I thought that I could pick any $x \in K^3$, so I pick $x=(1, 1, 1) \Rightarrow ||x||_{p=1}=3$. Doing $||Ax||$ gets me $(1, 10, 3)$, and the norm from that is $1+10+3=14$ divided per $||x||$, $14/3$. And this makes no sense at all! I should only be able to get 3 results $(9, 13, 3)$. How do I get to these results? From the way I understood I could get unlimited results. Many thanks in advance!
The norm is defined to be the maximum possible value you can get as you range over all $x$. For this particular $x$, you get (EDIT: $25/3$) (taking the $p = 1$ norm), but there are $x$ out there such that you get larger, so there's still only one norm of $A$. If you take the standard basis, this maximum must be attained at one of the basis elements, which is why you only need to check three numbers. So the $p = 1$ norm of your matrix is $13$.
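A quick check of the "maximum absolute column sum" rule on the example matrix (a sketch; note $|3i|=3$, so the complex entry contributes its modulus):

```python
# Sketch: the induced p=1 norm equals the maximum absolute column sum;
# for this matrix the column sums are 9, 13, 3, so the norm is 13.
A = [[1, 10, 3],
     [-5, -1, 0],
     [3j, 2, 0]]
col_sums = [sum(abs(A[r][c]) for r in range(3)) for c in range(3)]
norm_1 = max(col_sums)
```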
{ "language": "en", "url": "https://math.stackexchange.com/questions/135945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Apostol Calculus, Volume I, Chapter 8.7, Exercises 9 & 10 (p. 320) I'm actually only having trouble with #10, but #10 relies on the statement and results of #9, so I'll include both questions for completeness. #9. In a tank are $100$ gallons of brine containing $50$ pounds of dissolved salt. Water runs into the tank at the rate of $3$ gallons per minute, and the concentration is kept uniform by stirring. How much salt is in the tank at the end of one hour if the mixture runs out at a rate of $2$ gallons per minute? #10. Refer to Exercise 9. Suppose the bottom of the tank is covered with a mixture of salt and insoluble material. Assume that the salt dissolves at a rate proportional to the difference between the concentration of the solution and that of a saturated solution (3 pounds of salt per gallon), and that if the water were fresh 1 pound of salt would dissolve per minute. How much salt will be in solution at the end of one hour? First, I define $y = f(t) =$ number of pounds of salt in solution at time $t$. To write an equation for $y'$ we want to determine the amount of salt being added by the dissolving substance and the amount being removed by the solution running off. The amount added by the dissolving substance is given by $$ k \left(3 - \dfrac{f(t)}{100+t}\right)$$ Since we are told that the rate is proportional to the difference between $3$ pounds of salt per gallon and the current concentration of salt. Further, we are told that if the concentration of salt is $0$ then the rate is $1$ pound per minute; hence, $3k = 1 \implies k = \frac{1}{3}$. 
Then, from #9 we have the rate at which salt is exiting the solution given by $$-2 \left(\dfrac{f(t)}{100+t}\right)$$ Putting this together we have $$y' = -2\left(\dfrac{f(t)}{100+t}\right) + 1 - \dfrac{f(t)}{3(100+t)}$$ Thus, giving us the first-order linear differential equation: $$y' + \dfrac{7}{3(100+t)}y = 1$$ From this we have the unique solution $y = f(t)$ with $f(0) = 50$ given by $$\begin{align*} y &= 50 \dfrac{100^{7/3}}{(100+t)^{7/3}} + \dfrac{1}{(100+t)^{7/3}} \int_0^t (100+x)^{7/3} dx\\ &= 50 \dfrac{100^{7/3}}{(100+t)^{7/3}} + \dfrac{3t}{10}\\ \implies f(60) &= 34.7 \text{ pounds of salt} \end{align*}$$ Unfortunately, the solution Apostol gives is $54.7$ pounds of salt. I cannot seem to find the error that is causing me to be off. The answers have an obvious similarity, but I don't know if that is just coincidence or if it means there is a small(ish) error in there somewhere. Thanks for your help.
Your calculation in the third to last displayed equation is off. You should have $$\eqalign{ {1\over (100+t)^{7/3}}\int_0^t(100+x)^{7/3}\,dx&= \color{maroon}{1\over (100+t)^{7/3}}\color{darkgreen}{ {3\over10}(100+x)^{10/3}}\Bigl|_0^t\cr &={3\over10 (100+t)^{7/3}}\Bigl[ (100+t)^{10/3} - 100^{10/3}\Bigr]\cr &= {3\over10}(100+t)- {3\cdot 100^{10/3}\over10 (100+t)^{7/3}}. } $$ I suspect you tried to cancel the maroon and darkgreen terms, which is not valid. Wolfram returns $54.6796$ as the value of your $f$, with this correction, at $t=60$, then. I did not check your work up to this point.
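As a sanity check on the corrected solution, one can integrate the ODE $y' = 1 - \frac{7y}{3(100+t)}$, $y(0)=50$, numerically (a sketch with classical RK4; the step size is arbitrary) and compare with $54.6796$:

```python
# Sketch: classical RK4 for y' = 1 - 7y/(3(100+t)), y(0) = 50, up to t = 60.
def rhs(t, y):
    return 1.0 - 7.0 * y / (3.0 * (100.0 + t))

h, steps = 0.01, 6000            # 6000 * 0.01 = 60 minutes
t, y = 0.0, 50.0
for i in range(steps):
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h / 2 * k1)
    k3 = rhs(t + h / 2, y + h / 2 * k2)
    k4 = rhs(t + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t = (i + 1) * h
# y should be close to 48 + 20*(5/8)**(7/3) = 54.6796...
```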
{ "language": "en", "url": "https://math.stackexchange.com/questions/136011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Dividing by possibly zero value when proving an identity? In my trigonometry course, we're currently proving identities. I'm wonder if I can divide by something that could be zero while proving it. For example, $\sin{x}$, is it still a valid proof if I divide by it? edit: Sorry, what I really meant to ask was if it's alright to divide by a variable when DISproving an identity. I would assume it's alright if the disproof stands, right?
Good job noticing this issue. It is not valid to divide by a quantity that could be zero. Fortunately, there are two easy ways to deal with the situation. * *Split the proof into two cases: one where the quantity is zero, and one where it is not zero. The first case is usually pretty easy, and division is valid in the second case *Instead of dividing, try factoring instead. e.g. to solve the equation $$x \sin x = \sin x,$$ rather than dividing by $\sin x$, instead do $$x \sin x - \sin x = 0,$$ $$(x-1) \sin x = 0$$ and now you can invoke the usual theorem that says this is equivalent to "$\sin x = 0$ or $x-1 = 0$" and continue from there. Of course, this isn't too much different from option #1. In some cases, there are other things you can do. For example, you might be able to prove an identity under the assumption that a certain quantity is nonzero, and then use some other means to prove it in the remaining case. As a contrived example, suppose I had to assume $x \neq 0$ to prove $$ 2 \sin x \cos x = \sin 2x.$$ I could finish the proof by observing both sides are continuous: $$ 2 \sin 0 \cos 0 = \lim_{x \to 0} 2 \sin x \cos x = \lim_{x \to 0} \sin 2x = \sin (2 \cdot 0). $$ This is a good trick for a variety of situations. (of course, in this particular case, it would be easier to just plug in $x=0$ to verify the identity holds) (penartur's answer has other examples) Also, I should point out that this isn't just an abstract concern, or a pedantic point to counter silly "proofs" that 2=1. Sometimes, those extra cases really matter. For example, in multi-variable calculus, a lot of people have trouble with Lagrange multipliers, since the answer you want quite frequently comes from the "assume the quantity is zero" case, rather than from "divide by the quantity because I'm going to assume it's nonzero" case.
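A small numeric check of the factoring approach in the example (a sketch): every $x$ with $\sin x = 0$ (i.e. $x = k\pi$), together with $x = 1$, satisfies $x\sin x = \sin x$.

```python
import math

# Sketch: solutions of x*sin(x) = sin(x) obtained by factoring
# (x - 1)*sin(x) = 0, i.e. x = 1 or x = k*pi; check residuals at a few.
cands = [1.0] + [k * math.pi for k in range(-3, 4)]
resids = [abs(x * math.sin(x) - math.sin(x)) for x in cands]
```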
{ "language": "en", "url": "https://math.stackexchange.com/questions/136147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Coordinates translation in space First of all sorry if the title is somewhat opaque, the problem I am trying to solve is already hard to explain properly in my first language. So, let's consider we have a plane, rectangle target in a three dimensional space. From two different points of views (the images of two cameras for example), we have the coordinates of each corner of that rectangle. Knowing these coordinates, how could we translate the coordinates of any point on one of the images to the ones it would have on the other image ? I made a (very) simple drawing that might help understanding what I'm looking for: http://i.imgur.com/x3QgZ.jpg In this situation the first observer is right in front of the target, the other one is slightly shifted to the right. We know all coordinates but the ones of the red dot on the second point of view, which we are looking for.
Your problem is a common one in the fields of computer vision and mapping, and the transformation between the different points of view is known as the "Essential Matrix". You can find more about it here: Essential Matrix on Wikipedia. (Since your target is planar, the point-to-point map between the two images is in fact a homography, which can be estimated directly from the four corner correspondences.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/136203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Poincaré-Hopf theorem using Stokes The wiki entry on the Poincaré-Hopf theorem claims that it "relies heavily on integral, and, in particular, Stokes' theorem". However, in the sketch of proof given there which is more or less the one in Milnor's Topology from the Differentiable Viewpoint there is no integration. Does the proof become easier by using Stokes' theorem? Is there a good reference?
In step 3 of the sketch you read "the degree of the Gauss map". Now one has to look at the definition of the degree of a map $f:M\to N$. That map induces a linear map in de Rham cohomology $f^*:H^m(N)\to H^m(M)$, $m=\dim(M)=\dim(N)$. Stokes' theorem says that the integral of $m$-forms induces an isomorphism $\int:H^m\to \mathbb{R}$, and this transforms $f^*$ (by change of variables in the integral) into a linear map $\mathbb{R}\to \mathbb{R}$. Such a linear map is multiplication by a scalar $d$: this is the degree of $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove this equality $ t(1-t)^{-1}=\sum_{k\geq0} 2^k t^{2^k}(1+t^{2^k})^{-1}$? Prove the equality $\quad t(1-t)^{-1}=\sum_{k\geq0} 2^k t^{2^k}(1+t^{2^k})^{-1}$. I have just tried to use the Taylor expansion of the left-hand side to prove it, but I failed. I don't know how the $k$ and $2^k$ on the right occur. This homework appears shortly after the Jacobi identity in the book Advanced Combinatorics (page 118, Ex. 10 (2)). Any hints about the proof? Thank you in advance.
Hint $\ $ Let $\rm\:N\to\infty\ $ in $\rm\displaystyle\ \sum_{K\!\:=\!\:0}^{N-1}\!\ \frac{2^{\:\!K}\:\! t^{2^{\:\!K}}}{t^{2^K}\!+1} + \frac{t}{t-1}\: =\ \frac{2^{\:\!N}\:\! t^{2^{\:\!N}}}{t^{2^{\:\!N}}\!-1}\ =\: c\ t^{2^{\:\!N}} + \:\cdots\ $ by telescopy. Since $\rm\: t^{2^{\:\!N}} \to 0\:$ as $\rm\:N\to \infty,\:$ the desired formal power series equality follows. See here for more on convergence of formal power series (beware many make errors here). Remark $\ $ The telescopic proof is simply $\rm 2^{\:\!N}$ times below, for $\rm\:x = t^{2^N}$ $$\rm \frac{2\:x^2}{x^2-1} - \frac{x}{x-1}\: =\: \frac{x}{x+1}$$
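A quick numeric check of the identity (a sketch; the value $t=1/2$ is arbitrary, any $|t|<1$ works), showing the partial sums converge to $t/(1-t)$:

```python
# Sketch: partial sums of sum_k 2^k t^(2^k) / (1 + t^(2^k)) versus t/(1-t);
# by the telescoping above, the error after K terms is 2^K t^(2^K)/(1-t^(2^K)).
def partial(t, K):
    return sum(2 ** k * t ** (2 ** k) / (1 + t ** (2 ** k)) for k in range(K))

t = 0.5                       # any |t| < 1 works
target = t / (1 - t)          # = 1 here
errs = [abs(partial(t, K) - target) for K in (2, 4, 6, 8)]
# convergence is double-exponentially fast in K
```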
{ "language": "en", "url": "https://math.stackexchange.com/questions/136333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
What is so interesting about the zeroes of the Riemann $\zeta$ function? The Riemann $\zeta$ function plays a significant role in number theory and is defined by $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} \qquad \text{ for } \sigma > 1 \text{ and } s= \sigma + it$$ The Riemann hypothesis asserts that all the non-trivial zeroes of the $\zeta$ function lie on the line $\text{Re}(s) = \frac{1}{2}$. My question is: Why are we interested in the zeroes of the $\zeta$ function? Does it give any information about something? What is the use of writing $$\zeta(s) = \prod_{p} \biggl(1-\frac{1}{p^s}\biggr)^{-1}$$
The Hilbert-Pólya operator which would prove the Riemann Hypothesis is the Wu-Sprung generalized model with potential $$ f^{-1} (x)=\frac{4}{\sqrt{4x+1} } +\frac{1}{4\pi } \int\nolimits_{-\sqrt{x} }^{\sqrt{x}}\frac{dr}{\sqrt{x-r^2} } \left( \frac{\Gamma '}{\Gamma } \left( \frac{1}{4} +\frac{ir}{2} \right) -\ln \pi \right) -\sum\limits_{n=1}^\infty \frac{\Lambda (n)}{\sqrt{n} } J_0 \left( \sqrt{x} \ln n\right) $$ with boundary conditions $$ y(0)=0=y(\infty) $$ and $ Hy= -\frac{d^{2}y}{dx^{2}}+f(x)y(x)=E_{n}y(x)$, with $E_{n}=\gamma_{n}^{2}$. However, mathematicians do not like it. If we take the half derivative then we find the distributional Riemann-Weil formula for the zeros $$ \begin{array}{l} \sum\limits_{n=0}^{\infty }\delta \left( x-\gamma _{n} \right) + \sum\limits_{n=0}^{\infty }\delta \left( x+\gamma _{n} \right) =\frac{1}{2\pi } \frac{\zeta '}{\zeta } \left( \frac{1}{2} +ix\right) +\frac{1}{2\pi } \frac{\zeta '}{\zeta } \left( \frac{1}{2} -ix\right) -\frac{\ln \pi }{2\pi } \\[10pt] {} +\frac{\Gamma '}{\Gamma } \left( \frac{1}{4} +i\frac{x}{2} \right) \frac{1}{4\pi } +\frac{\Gamma '}{\Gamma } \left( \frac{1}{4} -i\frac{x}{2} \right) \frac{1}{4\pi } +\frac{1}{\pi } \delta \left( x-\frac{i}{2} \right) +\frac{1}{\pi } \delta \left( x+\frac{i}{2} \right) \end{array} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/136417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72", "answer_count": 5, "answer_id": 3 }
Find the area of a surface of revolution I'm a calculus II student and I'm completely stuck on one question: Find the area of the surface generated by revolving the right-hand loop of the lemniscate $r^2 = \cos2 θ$ about the vertical line through the origin (y-axis). Can anyone help me out? Thanks in advance
I stole the formula from a website for surfaces of revolution that was linked in the comment above: http://tutorial.math.lamar.edu/Classes/CalcII/PolarSurfaceArea.aspx They prove it more generally for parametric surfaces. I am not sure what you are allowed to assume in your calculus two course; I was unsuccessful in getting a correct formula from a direct polar slicing argument. In what follows, I am going to be sloppy about whether I write $r$ as a variable or as a function $r(\theta)$ of $\theta$. Since the right half of the lemniscate is traced out between $-\pi/4$ and $\pi/4$, the integral you want is $$ 2\pi\int_{-\pi/4}^{\pi/4} r(\theta)\cos\theta\sqrt{r(\theta)^2 + r'(\theta)^2}d\theta $$ We have $$ r^2 = \cos(2\theta) $$ and so $$ 2rr' = -2\sin(2\theta), $$ so \begin{align*} (r')^2 &= \frac{\sin^2(2\theta)}{r^2}\\ &=\frac{\sin^2(2\theta)}{\cos(2\theta)}. \end{align*} The integrand then simplifies nicely: $$ r^2 + (r')^2 = \cos(2\theta)+\frac{\sin^2(2\theta)}{\cos(2\theta)} = \frac{1}{\cos(2\theta)}, $$ so $$ r\cos\theta\sqrt{r^2+(r')^2} = \sqrt{\cos(2\theta)}\,\cos\theta\,\frac{1}{\sqrt{\cos(2\theta)}} = \cos\theta. $$ The integral (without the $2\pi$) is therefore $\int_{-\pi/4}^{\pi/4}\cos\theta\,d\theta = \sqrt{2}\approx 1.4142$, which seems geometrically reasonable to me, and the surface area is $2\pi\sqrt{2}$.
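A numeric sanity check (a sketch using Simpson's rule; the tiny endpoint offset avoids dividing by $\cos(2\theta)=0$): since $r^2+(r')^2=\cos 2\theta+\frac{\sin^2 2\theta}{\cos 2\theta}=\frac{1}{\cos 2\theta}$, the integrand collapses to $\cos\theta$, so the inner integral should come out to $\sqrt{2}$.

```python
import math

# Sketch: Simpson's rule on the inner integrand; endpoints nudged inward
# because cos(2*theta) -> 0 there (the singularity in (r')^2 is integrable).
def integrand(th):
    r2 = math.cos(2 * th)                 # r^2 = cos(2 theta)
    rp2 = math.sin(2 * th) ** 2 / r2      # (r')^2 = sin^2(2 theta)/cos(2 theta)
    return math.sqrt(r2) * math.cos(th) * math.sqrt(r2 + rp2)

a = -math.pi / 4 + 1e-9
b = math.pi / 4 - 1e-9
n = 2000                                  # even number of subintervals
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
integral = s * h / 3                      # should be ~ sqrt(2) = 1.41421...
area = 2 * math.pi * integral             # surface area ~ 2*pi*sqrt(2)
```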
{ "language": "en", "url": "https://math.stackexchange.com/questions/136486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Show that $ a,b,c, \sqrt{a}+ \sqrt{b}+\sqrt{c} \in\mathbb Q \implies \sqrt{a},\sqrt{b},\sqrt{c} \in\mathbb Q $ Assume that $a,b,c, \sqrt{a}+ \sqrt{b}+\sqrt{c} \in\mathbb Q$ are rational; prove that $\sqrt{a},\sqrt{b},\sqrt{c} \in\mathbb Q$ are rational. I know this can be proved as below; I would like to know whether there is an easier way. $\sqrt a + \sqrt b + \sqrt c = p \in \mathbb Q$, $\sqrt a + \sqrt b = p- \sqrt c$, $a+b+2\sqrt a \sqrt b = p^2+c-2p\sqrt c$, $2\sqrt a\sqrt b=p^2+c-a-b-2p\sqrt c$, $4ab=(p^2+c-a-b)^2+4p^2c-4p(p^2+c-a-b)\sqrt c$, $\sqrt c=\frac{(p^2+c-a-b)^2+4p^2c-4ab}{4p(p^2+c-a-b)}\in\mathbb Q$.
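A quick numeric check of the final formula (a sketch; the values $a=4$, $b=9$, $c=25$ are arbitrary perfect squares so that $p$ is rational): squaring $2\sqrt{ab}=p^2+c-a-b-2p\sqrt{c}$ and solving for $\sqrt{c}$ gives the expression tested below.

```python
import math

# Sketch with arbitrary perfect squares a=4, b=9, c=25 (so p is rational):
# sqrt(c) = ((p^2+c-a-b)^2 + 4 p^2 c - 4ab) / (4 p (p^2+c-a-b)).
a, b, c = 4, 9, 25
p = math.sqrt(a) + math.sqrt(b) + math.sqrt(c)    # = 10
t = p ** 2 + c - a - b                            # = 112
sqrt_c = (t ** 2 + 4 * p ** 2 * c - 4 * a * b) / (4 * p * t)   # should be 5
```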
Maybe not easier, but quite elegant : Suppose that $a,b,c$ are all non zero. Let $K=\mathbb{Q}(\sqrt{a},\sqrt{b},\sqrt{c})$ and $n = [K: \mathbb{Q}]$. Then since $Tr_{K/\mathbb{Q}}(\sqrt{a}) = Tr_{\mathbb{Q}(\sqrt{a})/\mathbb{Q}} \circ Tr_{K/\mathbb{Q}(\sqrt{a})} (\sqrt{a})$, we have $$ Tr_{K/\mathbb{Q}}(\sqrt{a}) = \begin{cases} 0,& \text{if } \sqrt{a} \notin \mathbb{Q} \\ n\sqrt{a}, &\text{if } \sqrt{a} \in \mathbb{Q}, \end{cases}$$ and same for $\sqrt{b}$ and $\sqrt{c}$. By hypothesis $\sqrt{a} + \sqrt{b} +\sqrt{c} \in \mathbb{Q}$, so $$ Tr_{K/\mathbb{Q}}(\sqrt{a}) + Tr_{K/\mathbb{Q}}(\sqrt{b}) + Tr_{K/\mathbb{Q}}(\sqrt{c}) = n\sqrt{a} + n \sqrt{b} + n \sqrt{c}.$$ It is easy to conclude that $\sqrt{a},\sqrt{b},\sqrt{c} \in \mathbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
$\lim\limits_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite How do I prove that $ \displaystyle\lim_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite?
In light of sdcvvc' answer, this answer may be a bit much; but you can generalize the following argument to show that if $(a_n)$ is a sequence of positive numbers and if $\lim\limits_{n\rightarrow\infty}{a_{n+1}\over a_n}=\infty$, then $\lim\limits_{n\rightarrow\infty}{\root n\of{a_n}}=\infty$. (More generally, one can show that $\limsup\limits_{n\rightarrow\infty}{\root n\of{a_n}} \le \limsup\limits_{n\rightarrow\infty}{a_{n+1}\over a_n} $ and that $ \liminf\limits_{n\rightarrow\infty}{a_{n+1}\over a_n} \le \liminf\limits_{n\rightarrow\infty}{\root n\of{a_n}} $. ) Let $a_n=n!$. One can show by induction that $a_{n+k}\ge n^k a_n$ for all positive integers $n$ and $k$. Now fix a positive integer $N$ and let $n$ be a positive integer with $n\ge N$. Then $$\tag{1} a_n =a_{N+(n-N)} \ge N^{n-N} a_N=N^n\cdot {a_N\over N^N},\qquad\qquad(n\ge N). $$ Taking the $n^{\rm th}$ roots of the left and right hand sides of $(1)$ gives $$\tag{2} \root n\of{a_n}\ge N\cdot{\root n\of {a_N}\over (N^N)^{1/n}}, \qquad\qquad(n\ge N). $$ Now, as $n\rightarrow\infty$, the righthand side of $(2)$ tends to $N$. From this it follows that $\liminf\limits_{n\rightarrow\infty} \root n\of{a_n}\ge N$. But, as $N$ was arbitrary, we must then have $\lim\limits_{n\rightarrow\infty} \root n\of{a_n}=\infty$.
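A quick numeric illustration (a sketch; `math.lgamma` avoids overflowing $n!$) of how $\sqrt[n]{n!}$ grows — roughly like $n/e$ by Stirling, consistent with the limit being infinite:

```python
import math

# Sketch: (n!)^(1/n) via lgamma (log of the Gamma function, lgamma(n+1) =
# ln n!); the values grow roughly like n/e, so the limit is +infinity.
vals = {n: math.exp(math.lgamma(n + 1) / n) for n in (10, 100, 1000, 10000)}
ratio = vals[10000] * math.e / 10000      # close to 1, reflecting n!^(1/n) ~ n/e
```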
{ "language": "en", "url": "https://math.stackexchange.com/questions/136626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 12, "answer_id": 5 }
How to explain integrals and derivatives to a 10-year-old kid? I have a sister who is interested in learning "what I do". I'm a 17-year-old math-loving person, but I don't know how to explain integrals and derivatives with some kind of analogy. I just want to explain them well, so that she will remember in the future what I said to her.
Explain derivatives using the speedometer! Ask her how one could find the speed of something, and if she answers with the average speed, ask her how one could calculate the instantaneous speed. And boom, there come your derivatives.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
references for singularities: does quotient singularities imply gorenstein? Is there a good place where to learn about singularities of algebraic varieties? OK, this question is terribly vague, so I'll ask what I'm currently interested in: if X is a smooth variety and G is a finite group, then is $X/G$ Gorenstein? If true, what would a reference be?
Here is a related question on mathoverflow. It sounds like the relevant part of 'Gorenstein' here is that the canonical divisor is Cartier. Or, in other words, the canonical sheaf is invertible. This is not necessarily true for quotients of smooth varieties. Here is a simple example of an isolated quotient singularity which is non-Gorenstein: Let $X \cong \mathbb{C}^3 = \mathrm{Spec}\,\mathbb{C}[z_1,z_2,z_3]$ and let $G \cong \mathbb{Z}_2$ act on $X$ as $(z_1, z_2, z_3) \to (-z_1, -z_2,-z_3)$. Then $X/G = \mathrm{Spec}\,\mathbb{C}[z_1^2,z_2^2,z_3^2,z_1 z_2, z_1 z_3, z_2 z_3]$, and this is not Gorenstein. There are (at least) a couple of ways to see this: * *If you know toric geometry, it is a simple exercise to show that $K_{X/G}$ is not Cartier. *Roughly, the stalk of $\omega_X$ at the origin is generated by $dz_1\wedge dz_2\wedge dz_3$, and this is not preserved by $G$, so $\omega_{X/G\setminus\{0\}}$ cannot be extended to a line bundle at the origin. In more detail, $\omega_{X/G}$ has three independent sections in a neighbourhood of the origin: $z_1dz_1\wedge dz_2\wedge dz_3~,~ z_2dz_1\wedge dz_2\wedge dz_3~,~ z_3dz_1\wedge dz_2\wedge dz_3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Cardinality of the power set of the set of all primes Please show me what I am doing wrong... Given the set $P$ of all primes I can construct the set $Q$ being the power set of P. Now let $q$ be an element in $Q$. ($q = \{p_1,p_2,p_3,\ldots\}$ where every $p_n$ is an element in $P$.) Now I can map every $q$ to a number $k$, where $k$ is equal to the product of all elements of $q$. ($k = p_1p_2p_3\ldots$) (for an empty set $q$, $k$ may be equal to one) Let the set $K$ consist of all possible values of $k$. Now because of the uniqueness of the prime factorization I can also map every number $k$ in $K$ to a $q$ in $Q$. (letting $k=1$ map to $q=\{\}$) Thus there exists a bijection between $Q$ and $K$. But $K$ is a subset of the natural numbers which are countable, and $Q$, being the power set of $P$, needs to be uncountably infinite (by Cantor's theorem), since $P$ is countably infinite. This is a contradiction since there cannot exist a bijection between two sets of different cardinality. What am I overlooking?
Some of the elements of $Q$ are infinite sets, and for these your proposed $k$ is not well-defined. The subset $Q'$ of $Q$ where every element of $Q'$ is a finite set of primes is indeed countable, and your argument is valid for this subset.
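A small Python sketch of the repaired correspondence (my own illustration): finite sets of primes map bijectively to squarefree positive integers via products, and trial division recovers the set. The inverse assumes its input really is a product of distinct primes.

```python
from functools import reduce

def set_to_k(q):
    """Map a finite set of primes to the product of its elements (empty set -> 1)."""
    return reduce(lambda a, b: a * b, q, 1)

def k_to_set(k):
    """Recover the set of primes from a squarefree product by trial division."""
    primes, d = set(), 2
    while d * d <= k:
        if k % d == 0:
            primes.add(d)
            k //= d
        else:
            d += 1
    if k > 1:
        primes.add(k)
    return primes

q = {2, 5, 11}
print(set_to_k(q), k_to_set(110))  # 110 and {2, 5, 11}: the maps invert each other
```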
{ "language": "en", "url": "https://math.stackexchange.com/questions/136799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do people interchange $\int$ and $\sum$ so easily? One of the things I have found curious in many texts is how, in certain cases, they interchange the $\sum$ operator with $\int$. What are the "terms" for such a swap? I understand that integration in the early days was seen as an approximation of the area under a curve, using the very definitions of multiplication and area with very small increments, where the number of samples goes to infinity. Beyond the original question: is this also the reason why we keep the right-hand $dx$ (or any other infinitesimal variable), just to remind us of the origin, because it "multiplies against the function", hence giving area? Or is there more to it? Hints, answers, references to books... I'd appreciate anything you can give me.
In elementary analysis, a Riemann/Darboux integral is defined (among other equivalent definitions) as a suitable limit of a (finite) sum. Whence the folklore according to which "an integral is essentially a series". This is rather false, but you know, in elementary analysis/calculus you can almost say whatever you wish. The $\mathrm{d}x$ is clearly a deformation of $\Delta x$ in Riemann sums. Nowadays, it denotes the measure for which the integral is defined. If the integral is just a Riemann integral, some authors suggest to write $\int_a^bf$ instead of $\int_a^bf(x)\, \mathrm{d}x$. They are right, since the Riemann integral depend on $a$, $b$, and the function $f$. The variable of integration is a dummy one. Finally, remember that $\int$ is a calligraphic deformation of an "S", while $\sum$ is the greek "S". Hence many pioneers used to kind of confuse $\sum$ and $\int$ in their manuscripts. But, honestly, contemporary textbooks should not swap the two signs, since we live in 2012 and Cauchy died many years ago ;-)
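To make the "suitable limit of a (finite) sum" concrete, here is a minimal numerical sketch (my own illustration): left Riemann sums of $x^2$ on $[0,1]$ approach the integral $1/3$ as the partition is refined.

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum: sum of f(x_k) * dx over n uniform subintervals."""
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

# As n grows, the sum approaches the integral of x^2 on [0,1], which is 1/3.
for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
```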
{ "language": "en", "url": "https://math.stackexchange.com/questions/136833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 0 }
If $m$ and $n$ are positive integers, then $(F_m,F_n)=F_{(m,n)}$. Edit: The $F$'s are Fibonacci numbers. I need an idea on how to show the following: If $m$ and $n$ are positive integers, then $(F_m,F_n)=F_{(m,n)}$. I believe that using the fact that $F_{m+n}=F_mF_{n+1}+F_nF_{m-1}$ could come in handy. Moreover, Euclid's algorithm may as well be needed. But I am not certain, as there may be better methods to achieve this. Thanks in advance.
As noted in the comments by sdcvvc, this answer to an earlier question completely answers this question as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find $\int{\frac{x}{\sqrt{x^2+1}}dx}$? I started simplifying $$\int{\dfrac{x}{\sqrt{x^2+1}}dx}$$ and I always end up with $$\int{x(x^2+1)^{-1/2}dx}.$$ But I don't know how to proceed from there.
Substitute $x^2 + 1 = y$, so that $x\,dx = \frac{1}{2}\,dy$. The integral then becomes $\int \frac{1}{2\sqrt{y}}\,dy$, which should be simple to evaluate.
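As a quick numerical cross-check of the substitution (the antiderivative it produces is $\sqrt{x^2+1}$, up to a constant):

```python
import math

def integrand(x):
    return x / math.sqrt(x * x + 1)

def antiderivative(x):
    # from y = x^2 + 1: integral of 1/(2*sqrt(y)) dy = sqrt(y) = sqrt(x^2 + 1)
    return math.sqrt(x * x + 1)

# Midpoint rule on [0, 2] vs. the closed form sqrt(5) - 1.
a, b, n = 0.0, 2.0, 20000
h = (b - a) / n
numeric = sum(integrand(a + (k + 0.5) * h) for k in range(n)) * h
exact = antiderivative(b) - antiderivative(a)
print(numeric, exact)  # both close to 1.2360679...
```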
{ "language": "en", "url": "https://math.stackexchange.com/questions/136960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Is $\bigwedge(V)$ self-injective? For a vector space $V$, is the grassman algebra $\bigwedge(V)$ always an injective module over itself? Is there a proof, or even just a brief explanation?
If $V$ is finite dimensional, then yes, owing to the existence of a special bilinear form from the algebra into the base field. If $V$ is $n$-dimensional, the form sends $(a,b)$ to the coefficient of the grade-$n$ part of $ab$. You can find this in a paper entitled Annihilators of Principal Ideals in the Exterior Algebra by Koc and Esin. Algebras with such a functional are Frobenius algebras, which are self-injective on both sides. I don't immediately know the answer if $V$ is infinite dimensional, but I'll think about it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Notation for extracting the index of an element from a set OK, I edited the question. Sorry for the wrong terms. What is the correct notation for a function that maps an element of a specific sequence/list/$n$-tuple to its index? I have read about index sets, but they only attach an index to a set, not to an element. Is there a way to write the notation for the index of an element, as is done with index sets?
As Arturo points out in the comments, this question is only meaningful for a list (or sequence), and not a set, since sets have no intrinsic order. A list can be considered as a function $F: \mathbb{N} \rightarrow S$. The notation you are seeking is simply $F^{-1}$, the inverse of $F$. For example, $F(x) = x^2$ corresponds to the list $(0, 1, 4, 9, 16, 25, 36, ...)$, and since $F(6) = 36$, we have $F^{-1}(36) = 6$ mapping the element 36 back to position 6.
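The example in the answer, as a runnable Python sketch (`list.index` plays the role of $F^{-1}$, and is well-defined only because the list has no repeated values):

```python
squares = [n * n for n in range(7)]   # the list F: [0, 1, 4, 9, 16, 25, 36]

def F_inverse(value):
    # position of `value` in the list, i.e. F^{-1}(value)
    return squares.index(value)

print(F_inverse(36))  # -> 6, since F(6) = 36
```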
{ "language": "en", "url": "https://math.stackexchange.com/questions/137106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
About connected Lie Groups How can I prove that a connected Lie group is generated by any neighborhood of the identity? The result is almost trivial for $\mathbb{R}^n$; I tried using the open subgroup generated by this neighborhood.
An open subgroup $H$ of a topological group $G$ is closed because $$ G \smallsetminus H = \bigcup_{g \notin H} gH $$ is open as union of the open sets $gH$. Now take your neighborhood $U$ of the identity, let $H = \bigcup_{n \in \mathbb{Z}} U^{n}$ and check that $H$ is an open (hence closed) subgroup of $G$. By connectedness $G = H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
compactness property I am a new user on Math Stack Exchange. I don't know how to solve part of this problem, so I hope that one of the users can give me a hand. Let $f$ be a continuous function from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$ with the following properties: if $A\subset \mathbb{R}^{n}$ is open, then $f(A)$ is open; if $B\subset \mathbb{R}^{m}$ is compact, then $f^{-1}(B)$ is compact. I want to prove that $f(\mathbb{R}^{n})$ is closed.
I think that the most instructive proof is that of Proposition 5.3 of this file. Indeed, it shows that proper maps between locally compact topological spaces are always closed. This is a generalization of this discussion. I find it interesting since "standard" proofs tend to use sequences, and one might believe that everything might be lost without a metric. By the way, a very nice corollary of the proposed exercise is that non-constant maps with the two properties are always surjective, since $f(\mathbb{R}^n)$ is connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
How to find perpendicular vector to another vector? How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please? I have a solution to this when I have $3\mathbf{i}+4\mathbf{j}$, but could not solve if I have $3$ components... When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.
Another way to find a vector $\vec{v}$ for a given $\vec{u}$ such that $$ \vec{u}\cdot\vec{v}=0 $$ is to use an antisymmetric matrix $A$ ($A^\top=-A$) defined as follow $$ A_{ij}u_iu_j=0\qquad(\text{sum over }ij). $$ In two dimension $A$ is $$ A=\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix}. $$ In three dimension $A$ is $$ A=\begin{pmatrix} 0&1&1\\ -1&0&1\\ -1&-1&0\\ \end{pmatrix}. $$ In 2D only one such vector $\vec{v}=A\vec{u}$ exist, while in 3D you can apply the same matrix to the sum $\vec{u}+\vec{v}$ finding a vector perpendicular to the plane given by the other two vectors. 2D The matrix $A$ can be calculated as follow $$ A_{ij}u_iu_j=A_{11}u_1^2+(A_{12}+A_{21})u_1u_2+A_{22}u_2^2. $$ One way is to set $A_{11}=0=A_{22}$ and $A_{21}=-A_{12}$. 3D Again $$ A_{ij}u_iu_j=A_{11}u_1^2+(A_{12}+A_{21})u_1u_2+A_{22}u_2^2+(A_{13}+A_{31})u_1u_3+(A_{23}+A_{32})u_2u_3+A_{33}u_3^2, $$ and setting $A_{11}=A_{22}=A_{33}=0$ and $A_{21}=-A_{12}$, $A_{31}=-A_{13}$ and $A_{23}=-A_{32}$.
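Here is the 3D construction above as a short Python sketch, applied to the question's vector $3\mathbf{i}+4\mathbf{j}-2\mathbf{k}$. One caveat worth noting (my addition): $A$ has a one-dimensional kernel spanned by $(1,-1,1)$, so the output degenerates to the zero vector for those inputs.

```python
def perp_3d(u):
    # multiply u by the antisymmetric matrix A = [[0,1,1],[-1,0,1],[-1,-1,0]]
    x, y, z = u
    return (y + z, -x + z, -x - y)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = (3, 4, -2)
v = perp_3d(u)
print(v, dot(u, v))  # the dot product is 0, so v is perpendicular to u
```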
{ "language": "en", "url": "https://math.stackexchange.com/questions/137362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78", "answer_count": 18, "answer_id": 12 }
Use the principal branch of $\log z$ to evaluate $\int_{-1}^{1} \log z\, dz$. My attempt: parametrize along the upper unit semicircle $z=e^{i\theta}$, so the path from $-1$ to $1$ corresponds to $\theta$ running from $\pi$ to $0$ (which makes the exponential terms go away), and then integrate by parts: $$\int_{-1}^{1} \log z\, dz = \int_{\pi}^{0} \log(e^{i\theta})\,ie^{i\theta}\, d\theta = -2+i\pi.$$ Is this result correct? Could someone please show me another method of solving this problem?
I would say $$ \int_{-1}^1 \log(z)\,dz = \int_0^1\log(x)\,dx+\int_{-1}^0[\log(-x)+i\pi]\,dx $$ and then do some real integrals.
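For what it's worth, a numerical cross-check of the value $-2+i\pi$ (my own sketch): midpoint-rule quadrature along $[-1,1]$, where `cmath.log` is the principal branch and the midpoints never hit the integrable singularity at $0$.

```python
import cmath

N = 200000
h = 2.0 / N
total = sum(cmath.log(-1.0 + (k + 0.5) * h) * h for k in range(N))
print(total)  # close to -2 + pi*1j
```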
{ "language": "en", "url": "https://math.stackexchange.com/questions/137420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Taylor's formula Taylor's Formula Write taylor's formula for $F(x,y)= \sin(x)\sin(y)$ using $a=0$, $b=0$, and $n=2$. $$\sin(h)\sin(k)=hk−\frac 16h(h^2+3k^2)\cos\theta h\sin\theta k−\frac 16 k(3h^2+k^2)\sin\theta h\cos\theta k$$ That's the answer, I don't get how to get it though. I don't understand how to use the taylor's formula to find this answer. I've tried and failed, I have no idea where $h$,$k$, and $\theta$ comes from. Is anyone able to help me out?
So the formula you want to apply reads \[ F(a+h, b+k) = \sum_{i+j \le n} \frac 1{i!j!}\cdot \frac{\partial^{i+j}F}{\partial x^i\partial y^j}(a,b)h^ik^j + \sum_{i+j = n+1} \frac 1{i!j!} \cdot \frac{\partial^{n+1}F}{\partial x^i\partial y^j}(a+\theta h, b + \theta k)h^ik^j \] and tells you how $F$ behaves in a neighbourhood of $(a,b)$. The corresponding theorem says that for each $(h,k)$ there is some $\theta \in [0,1]$ such that this formula holds. Here we have \begin{align*} F(x,y) &= \sin x \sin y\\ F(0,0) &= 0\\ \frac{\partial F}{\partial x}(x,y) &= \cos x \sin y\\ \frac{\partial F}{\partial x}(0,0) &= 0\\ \frac{\partial F}{\partial y}(x,y) &= \sin x \cos y\\ \frac{\partial F}{\partial y}(0,0) &= 0\\ \frac{\partial^2 F}{\partial x^2}(x,y) &= -\sin x \sin y\\ \frac{\partial^2 F}{\partial x^2}(0,0) &= 0\\ \frac{\partial^2 F}{\partial x\partial y}(x,y) &= \cos x \cos y\\ \frac{\partial^2 F}{\partial x\partial y}(0,0) &= 1\\ \frac{\partial^2 F}{\partial y^2}(x,y) &= -\sin x \sin y\\ \frac{\partial^2 F}{\partial y^2}(0,0) &= 0 \end{align*} So in the first sum above all but one term are zero; the non-vanishing term is \[ \frac 1{1!1!}\cdot \frac{\partial^2 F}{\partial x\partial y}(0,0)hk = hk.\] Now the third partial derivatives: \begin{align*} \frac{\partial^3 F}{\partial x^3}(x,y) &= -\cos x \sin y\\ \frac{\partial^3 F}{\partial x^2\partial y}(x,y) &= -\sin x \cos y\\ \frac{\partial^3 F}{\partial x\partial y^2}(x,y) &= -\cos x \sin y\\ \frac{\partial^3 F}{\partial y^3}(x,y) &= -\sin x \cos y \end{align*} Plugging in, we get \begin{align*} \sum_{i+j = n+1} &\frac 1{i!j!} \cdot \frac{\partial^3F}{\partial x^i\partial y^j}(a+\theta h, b + \theta k)h^ik^j \\ &= -\frac 16\cdot\cos\theta h\sin\theta k\cdot h^3 - \frac 12\cdot \sin\theta h \cos \theta k \cdot h^2k - \frac 12\cdot\cos\theta h \sin \theta k\cdot hk^2 - \frac 16 \cdot \sin\theta h \cos \theta k \cdot k^3\\ &= -\frac 16 h(h^2 + 3k^2)\cos\theta h\sin \theta k -\frac 16 k(3h^2 +k^2) \sin\theta h\cos\theta k, \end{align*} as we wanted.
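A quick numerical look at the leading term (my own check, independent of the remainder bookkeeping above): near the origin, $\sin h\sin k$ is approximated by $hk$ with a cubic-order error.

```python
import math

for h, k in [(0.1, 0.2), (0.01, 0.03), (0.001, 0.002)]:
    exact = math.sin(h) * math.sin(k)
    approx = h * k
    print(h, k, exact, approx, abs(exact - approx))
# The error shrinks like the third-order remainder term.
```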
{ "language": "en", "url": "https://math.stackexchange.com/questions/137569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find equation of a plane that passes through point and contains the intersection line of 2 other planes Find equation of a plane that passes through point P $(-1,4,2)$ that contains the intersection line of the planes $$\begin{align*} 4x-y+z-2&=0\\ 2x+y-2z-3&=0 \end{align*}$$ Attempt: I found the the direction vector of the intersection line by taking the cross product of vectors normal to the known planes. I got $\langle 1,10,6\rangle$. Now, I need to find a vector normal to the plane I am looking for. To do that I need one more point on that plane. So how do I proceed?
Find any point that belongs to the line by looking for the intersections of this line with the coordinate planes. E.g., put $z=0$ and find $x$ and $y$ from the system $$ 4x-y=2,\quad 2x+y=3. $$
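Setting $z=0$ in the two plane equations gives $4x-y=2$ and $2x+y=3$ (adding them: $6x=5$). A quick numeric sketch of the whole computation (my own illustration): the line direction comes from crossing the two normals, and the sought plane's normal from crossing the direction with a second in-plane vector.

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

Q = (5/6, 4/3, 0.0)                       # point on the line (z = 0 solution)
P = (-1.0, 4.0, 2.0)                      # the given point
d = cross((4, -1, 1), (2, 1, -2))         # line direction: (1, 10, 6)
w = tuple(q - p for q, p in zip(Q, P))    # second in-plane vector
n = cross(d, w)                           # normal of the sought plane
print(d, n)                               # n is proportional to (-4, 13, -21)
```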
{ "language": "en", "url": "https://math.stackexchange.com/questions/137629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Limit inferior taken on the norm of a sequence Let $E$ a normed vector space and let $(x_n)$ be a sequence in $E$. Suppose that $x_n$ converges weakly (i.e. wrt the weak topology) to $x$. Why is it that from the inequality $$ |f(x_n)| \leq \|f\| \|x_n\|, $$ passing to the limit we obtain $$ |f(x)| \leq \|f\| \lim\inf\|x_n\| $$ ? Particularly, why can't we simply write $\lim \|x_n\|$ ?
You do not know that $\lim\limits_{n\rightarrow\infty} \Vert x_n\Vert$ exists. For instance, the sequence $(e_1, 2e_2, e_3, 2e_4,\ldots)$ converges weakly to $0$ in $\ell_2$. But, as the $\Vert x_n\Vert$ are reals, $\liminf\limits_{n\rightarrow\infty}\Vert x_n\Vert$ exists, and you can find a subsequence $\Vert x_{n_k}\Vert$ converging to its value. Then since $(x_{n_k})$ converges weakly to $x$ $$\tag{1} |f(x)|= \lim\limits_{k\rightarrow\infty}|f(x_{n_k})| \le \lim\limits_{k\rightarrow\infty}(\,\Vert x\Vert\Vert x_{n_k}\Vert\,) =\Vert x\Vert \liminf\limits_{n\rightarrow\infty}\Vert x_{n }\Vert. $$ Here we are just using the result for real numbers: Suppose $a_n\le b_n$ for each $n$. Then if $a_n\rightarrow a$ and if $b_n\rightarrow b$, it follows that $a\le b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Primitive roots as roots of equations. Take $g$ to be a primitive root $\pmod p$, and $n \in \{0, 1,\ldots,p-2\}$ write down a necessary sufficient condition for $x=g^n$ to be a root of $x^5\equiv 1\pmod p$ . This should depend on $n$ and $p$ only, not $g$. How many such roots $x$ of this equation are there? This answer may only depend on $p$. At a guess for the first part I'd say as $g^{5n} \equiv g^{p-1}$ it implies for $x$ to be a root $5n \equiv p-1 \pmod p$. No idea if this is right and not sure what to do for second part. Thanks for any help.
$g^{m} \equiv 1 \mod p$ if and only if $\mathrm{ord}_p(g)$ divides $m$. Since $g$ is primitive root, we get that $p-1=\mathrm{ord}_p(g)$ has to divide $5n$. Can you finish the problem now?
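A quick computational check of where this leads (my own sketch): the number of solutions of $x^5\equiv 1\pmod p$ comes out as $\gcd(5,p-1)$, i.e. five roots when $5\mid p-1$ and only $x=1$ otherwise.

```python
from math import gcd

def fifth_roots_of_unity(p):
    # brute-force the solutions of x^5 = 1 (mod p)
    return [x for x in range(1, p) if pow(x, 5, p) == 1]

for p in (7, 11, 13, 31):
    print(p, fifth_roots_of_unity(p), gcd(5, p - 1))
```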
{ "language": "en", "url": "https://math.stackexchange.com/questions/137769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A group where every two elements different than 1 are conjugate has order 1 or 2. I need help showing this: Let G be a finite group such that for every $x$, $y$ in G, $x\neq 1$ and $y\neq 1$, we have that $x$ and $y$ are conjugates. Under those conditions, G must have order 1 or 2. This is under the topic "actions of groups on sets", but I couldn't figure out a way to start it. Since every element is conjugate, then G must have only one conjugation class, which is itself, but how can this information help?
Here's another proof, not quite as elegant, but with a different flavor that I feel is also worth seeing. In this proof I use a few of the theorems you may have seen in the "group actions" section of the book: Since conjugate elements have the same order, all nonidentity elements have the same order. Thus only one prime number, $p$, divides the order of the group, since for every prime dividing $|G|$ we have an element whose order is that prime (This follows from Cauchy's Theorem and the Sylow Theorems, which I expect you'll encounter soon). But we know $p$-groups have nontrivial centers (by the class equation, one of the most important elementary results using group actions), and elements of the center are their own conjugacy classes. Since the order of the center is at least $p$ and the center has at most one nonidentity element, $p=2$, and $|Z(G)|=2$. If $|G|$ is 4 or more, then we have elements not in the center, which cannot be conjugate to elements of the center. Thus $|G|=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Symmetric and exterior power of representation Does there exist some simple formulas for the characters $$\chi_{\Lambda^{k}V}~~~~\text{and}~~~\chi_{\text{Sym}^{k}V},$$ where $V$ is a representation of some finite group? Thanks.
I ran into this question when trying to find a reference for some formulas that I think should be true. I will leave them here, in case they are of some help to someone. Here are the formulas, and I comment on the derivation below: $$ \sum_{n=0}^{\infty}t^n\chi_{S^n\alpha}(g)=\exp\left\{\sum_{p=1}^{\infty}t^p\frac{\chi_\alpha(g^p)}{p}\right\},\\ \sum_{n=0}^{\infty}t^n\chi_{\Lambda^n\alpha}(g)=\exp\left\{-\sum_{p=1}^{\infty}(-t)^p\frac{\chi_\alpha(g^p)}{p}\right\}. $$ And in particular $$ \sum_{n=0}^{\infty}t^n\chi_{S^n\alpha}=\frac{1}{\sum_{n=0}^{\infty}(-t)^n\chi_{\Lambda^n\alpha}}. $$ The latter formula also can be seen as a direct consequence of the facts described in anon's answer and the relation between generating functions for complete homogeneous symmetric polynomials and for elementary symmetric polynomials. As for the first two formulas, the key observation is given in Qiaochu's answer $$ \chi_{\Lambda^n \alpha}(g)=\frac{1}{n!}\sum_{\pi\in S_n}\mathrm{sgn}{\pi} \,\mathrm{tr}_\pi \alpha(g), $$ where $\mathrm{tr}_\pi(A)$ is defined in Qiaochu's answer and in anon's answer above. Slight modification gives $$ \chi_{S^n \alpha}(g)=\frac{1}{n!}\sum_{\pi\in S_n}\mathrm{tr}_\pi \alpha(g). $$ Now these can be more or less straightforwardly made into the generating functions above by the use of Polya enumeration (Qiaochu gives the relevant corollary, $Z_G$ is defined here). One just has to express the sign of the permutation in terms of the cycle type.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 1 }
Does convergence in $L^p$ imply convergence almost everywhere? If I know $\| f_n - f \|_{L^p(\mathbb{R})} \to 0$ as $n \to \infty$, do I know that $\lim_{n \to \infty}f_n(x) = f(x)$ for almost every $x$?
Let $\rho=\chi_{[0,1]}$ be the characteristic function of the interval $[0,1]$. Then take the "dancing" sequence $$ f_n(x) = \rho(2^mx-k) $$ where $n=2^m+k$ with $0\leq k<2^m$. This sequence converges to $0$ in $L^p$, but for any $x\in(0,1)$ the sequence $f_n(x)$ does not converge. However, it is a general fact that one can always extract a subsequence converging almost everywhere to $f$.
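A small Python illustration of the dancing sequence (my own sketch): the $p$-th power of the $L^p$ norm of $f_n$ is just the length $2^{-m}$ of the supporting interval, which tends to $0$, while at a fixed point such as $x=1/3$ the values keep hitting both $0$ and $1$.

```python
def f(n, x):
    # f_n is the indicator of [k/2^m, (k+1)/2^m], where n = 2^m + k, 0 <= k < 2^m
    m = n.bit_length() - 1
    k = n - 2**m
    return 1.0 if k / 2**m <= x <= (k + 1) / 2**m else 0.0

def lp_norm_p(n):
    # integral of f_n^p = length of the supporting interval
    m = n.bit_length() - 1
    return 2.0 ** (-m)

print([lp_norm_p(n) for n in (1, 2, 4, 8, 16, 32)])  # 1, 0.5, 0.25, ... -> 0
vals = [f(n, 1/3) for n in range(1, 65)]
print(min(vals), max(vals))  # 0.0 and 1.0: no pointwise limit at x = 1/3
```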
{ "language": "en", "url": "https://math.stackexchange.com/questions/138043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "110", "answer_count": 3, "answer_id": 0 }
Factoring over a finite field Consider $f=x^4-2\in \mathbb{F}_3[x]$, the field with three elements. I want to find the Galois group of this polynomial. Is there an easy or slick way to factor such a polynomial over a finite field?
Recall that over $\mathbb{F}_q$, the polynomial $x^{q^n} - x$ is precisely the product of all irreducible polynomials of degree dividing $n$. The following then gives a straightforward algorithm to determine the degrees of the irreducible factors of a polynomial $f(x)$ over $\mathbb{F}_q$: * *Initialize $g(x) := \frac{f(x)}{\gcd(f(x), f'(x))}$ (this removes repeated factors) and $n := 1$. *Compute $\gcd(g(x), x^{q^n} - x)$ via the Euclidean algorithm. This is the product of all irreducible factors of $f$ of degree $n$. *Set $g(x) := \frac{g(x)}{\gcd(g(x), x^{q^n} - x)}$ and $n := n+1$. *Repeat. In this case by inspection $f$ has no linear factors so we only have to check for quadratic factors, hence we only have to compute $\gcd(x^4 - 2, x^9 - x)$. But again by inspection, $$(x^4 - 2)(x^4 + 2) = x^8 - 1$$ so in fact $x^4 - 2$ must be a product of two (distinct) irreducible quadratics.
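To complement the gcd-based method, here is a brute-force Python check over $\mathbb{F}_3$ (my own toy verification, not the algorithm above): $x^4-2=x^4+1$ has no roots mod $3$, and matching coefficients finds its factorization into monic quadratics.

```python
p = 3
f = [1, 0, 0, 0, (-2) % p]  # x^4 - 2 over F_3, coefficients from high to low

def poly_eval(poly, x, q):
    # Horner evaluation mod q
    r = 0
    for c in poly:
        r = (r * x + c) % q
    return r

roots = [x for x in range(p) if poly_eval(f, x, p) == 0]
print(roots)  # [] -> no linear factors

# (x^2 + a x + b)(x^2 + c x + d) = x^4 - 2 requires, mod 3:
# a + c = 0, b + d + a c = 0, a d + b c = 0, b d = -2
factors = [(a, b, c, d)
           for a in range(p) for b in range(p)
           for c in range(p) for d in range(p)
           if (a + c) % p == 0 and (b + d + a*c) % p == 0
           and (a*d + b*c) % p == 0 and (b*d) % p == (-2) % p]
print(factors)
```

The two tuples found are $(1,2,2,2)$ and its swap, i.e. $x^4-2=(x^2+x+2)(x^2+2x+2)$ over $\mathbb{F}_3$, consistent with the product of two distinct irreducible quadratics asserted above.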
{ "language": "en", "url": "https://math.stackexchange.com/questions/138175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Probability of 2 Events StackExchange has been amazing in the past, and I want to thank the collective hive-mind in advance. I have a pretty basic probability problem that I have no idea how to solve. I'm not looking for the answer, but rather, how to tackle it. "Suppose that an experiment leads to events A and B with the following probabilities: $P(A) = 0.6$ and $P(B) = 0.7$. Show that $P(A \cap B) \geq 0.3$." I suppose we know the events cannot be mutually exclusive, since $P(A) + P(B) > 1$. If the events are independent, then $P(A \cap B) = P(A)*P(B) = 0.42$, which I suppose is a (lower or upper?) bound for the joint probability. Am I on the right track? Assuming that $A$ and $B$ are not completely independent, but also not mutually exclusive, what would my next step be? Thanks, Adam
Hint: you need to remember that $$ P(A\cup B)=P(A)+P(B)-P(A\cap B)\leq1 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/138280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simple applications of complex numbers I've been helping a high school student with his complex number homework (algebra, de Moivre's formula, etc.), and we came across the question of the "usefulness" of "imaginary" numbers - If there not real, what are they good for? Now, the answer is quite obvious to any math/physics/engineering major, but I'm looking for a simple application that doesn't involve to much. The only example I've found so far is the formula for cubic roots applied to $x^3-x=0$, which leads to the real solutions by using $i$. Ideally I'd like an even simpler example I can use as motivation. Any ideas?
Here you might find some lucid and illustrative discussions within its first chapters. Basically, this book intends exactly to make complex numbers friendly.^^
{ "language": "en", "url": "https://math.stackexchange.com/questions/138325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 4 }
Formal second-order statements of Archimedean and completeness properties I am trying to translate the following statements from English to second-order logic, and I want to know if I got them right. I have a language for an ordered field $(F,+,\cdot,0,1,\leq)$, i.e., I don't have a predicate symbol for some equivalent of the natural numbers. Lowercase letters denote first-order variables; uppercase denote second-order. Archimedean property: For every element $x$ of the field, there exists some natural number $y$ such that $y > x$. $$\exists X \; \forall x \; \exists y \; \forall z \; [X0 \land (Xz \rightarrow Xz+1) \land Xy \land x \leq y \land x \neq y]$$ Dedekind completeness: Every nonempty subset of the field that is bounded above has a least upper bound in the field. $$\forall X \; \exists Y \; \exists z \; \forall x,y \; [(Xx \land Yy) \rightarrow (x \leq y \land Yz \land z \leq y)]$$ This is not homework, by the bye, but a hint will do if something is wrong. Thanks.
The problem with your attempt at the Archimedean property is that it guarantees only that $X$ contains the $F$-analogue of $\Bbb N$, not that $X$ is that set. $X$ could be any inductive subset of $F$ containing $0$, including, as Chris Eagle points out, $F$ itself. You want to add something to ensure that $X$ is the smallest inductive set containing $0$. An easy way to do this is to mimic the induction axiom of Peano’s axioms: $$\exists X\Big(X0\,\land\forall x(Xx\to Xx+1)\land\forall Y\big(Y0\,\land\forall y(Yy\to Yy+1)\big)\to\forall x(Xx\to Yy)\Big)$$ (For greater clarity I’ve left the quantifiers in their natural locations instead of pulling them out to the front.) Intuitively this says that $X$ contains $0$ and is inductive, and if $Y$ is any inductive subset of $F$ that contains $0$, then $X\subseteq Y$. In other words, $X$ is the smallest inductive subset of $F$ containing $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Calculating limits of a function of 2 or 3 variables I have to calculate these two limits, and have no idea where to start from. Your guidance for how should I start working with it can help me a lot. 1) $\lim\limits_{(x,y,z)\rightarrow (0,0,0)} (1+xyz)^{(x^2+y^2+z^2)^{-1}}$ 2)$\lim\limits_{(x,y)\rightarrow (0,0)} \dfrac{4y^2+3xy^2+2x^2}{x^2+2y^2}$ Thanks in advance.
Hint: * *Write $$ \lim_{(x,y,z)\to (0,0,0)} (1+xyz)^{(x^2+y^2+z^2)^{-1}} = \lim_{(x,y,z)\to (0,0,0)} \left[(1+xyz)^{\frac{1}{xyz}}\right]^{\frac{xyz}{x^2+y^2+z^2}} $$ The limit of the base behaves like $\lim\limits_{t\to 0}(1+t)^{\frac{1}{t}}$, while for $\lim\limits_{(x,y,z)\to (0,0,0)} {\frac{xyz}{x^2+y^2+z^2}}$, the top has degree three, while the bottom has only degree two, hence the top should be approaching $0$ in a higher order than the bottom, and the limit $\lim\limits_{(x,y,z)\to (0,0,0)} {\frac{xyz}{x^2+y^2+z^2}}$ should be $0$(this step is left for you to argue in a more mathematical way), and the whole limit should be 1. EDIT: you mention you haven't learned polar/spherical coordinates(that would be the easier way), here we could make use of AMGM inequality to get: $$ \lim\limits_{(x,y,z)\to (0,0,0)} {\frac{|xyz|}{x^2+y^2+z^2}} \leq \lim\limits_{(x,y,z)\to (0,0,0)}{\frac{|xyz|}{3\sqrt[3]{x^2 y^2 z^2}}} $$ and evaluating the limit for the right hand side is not that difficult. *Write $$ \lim\limits_{(x,y)\to (0,0)} \dfrac{4y^2+3xy^2+2x^2}{x^2+2y^2} = 2 + \lim\limits_{(x,y)\to (0,0)}\frac{3xy^2}{x^2+2y^2} $$ like Didier said in the comment, now the argument is kinda similar, the top again is a one degree higher polynomial than the bottom(use AMGM inequality again for the bottom).
{ "language": "en", "url": "https://math.stackexchange.com/questions/138469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing a series is a solution to a differential equation I am attempting to show that the series $y(x)\sum_{n=0}^{\infty} a_{n}x^n$ is a solution to the differential equation $(1-x)^2y''-2y=0$ provided that $(n+2)a_{n+2}-2na_{n+1}+(n-2)a_n=0$ So i have: $$y=\sum_{n=0}^{\infty} a_{n}x^n$$ $$y'=\sum_{n=0}^{\infty}na_{n}x^{n-1}$$ $$y''=\sum_{n=0}^{\infty}a_{n}n(n-1)x^{n-2}$$ then substituting these into the differential equation I get: $$(1-2x+x^2)\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-2}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ $$\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-2}-2\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-1}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ relabeling the indexes: $$\sum_{n=-2}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=-1}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ and then cancelling the $n=-2$ and $n=-1$ terms: $$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=0}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ but this doesn't give me what I want (I don't think) as I have $n^2$ terms as I would need $(n^2+3n+2)a_{n+2}-(2n^2+n)a_{n+1}+(n^2-n-2)a_{n}=0$ I'm not sure where I have gone wrong? Thanks very much for any help
You are right till the last step. You have $$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=0}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ which gives us $$\sum_{n=0}^{\infty} \left((n+2)(n+1) a_{n+2} - 2 n (n+1) a_{n+1} + (n^2-n-2)a_n \right)x^n = 0$$ and not $$\sum_{n=0}^{\infty} \left((n+2)(n+1) a_{n+2} - (2n^2+n) a_{n+1} + (n^2-n-2)a_n \right)x^n = 0$$ as you have written. Hence, setting the coefficients of $x^n$ to zero, we get that $$\left((n+2)(n+1) a_{n+2} - 2 n (n+1) a_{n+1} + (n^2-n-2)a_n \right)=0$$ Factorizing $n+1$ out, we get what you need i.e. $$\left((n+2)(n+1) a_{n+2} - 2 n (n+1) a_{n+1} + (n^2-n-2)a_n \right)= (n+1) \left((n+2) a_{n+2} - 2 n a_{n+1} + (n-2)a_n \right)$$ Hence, we get that $$(n+2) a_{n+2} - 2 n a_{n+1} + (n-2)a_n = 0$$
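Not part of the original answer, but the recurrence is easy to sanity-check (a Python sketch using the standard library's exact rational arithmetic). One closed-form solution of the ODE is $y=(1-x)^2$: then $y''=2$, so $(1-x)^2\cdot 2 - 2(1-x)^2 = 0$. Its coefficients $a_0=1$, $a_1=-2$, $a_2=1$ (and $0$ thereafter) should therefore satisfy the recurrence:

```python
from fractions import Fraction

# Run the recurrence (n+2) a_{n+2} = 2n a_{n+1} - (n-2) a_n forward
# from given a_0, a_1, using exact rational arithmetic.
def coeffs(a0, a1, n_terms):
    a = [Fraction(a0), Fraction(a1)]
    for n in range(n_terms - 2):
        a.append((2 * n * a[n + 1] - (n - 2) * a[n]) / (n + 2))
    return a

# y = (1-x)^2 has coefficients 1, -2, 1, 0, 0, ...; the recurrence
# should reproduce exactly these from the seeds a_0 = 1, a_1 = -2.
print(coeffs(1, -2, 10))
```

The recurrence indeed returns $1, -2, 1$ followed by zeros, as it must.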
{ "language": "en", "url": "https://math.stackexchange.com/questions/138520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
A coarser topology has more compact sets A coarser topology has more compact sets. Is this claim easy to verify? And if so, how can we prove it?
More accurately, a coarser topology has no fewer compact sets. Consider the discrete and indiscrete topologies on $\{0,1\}$, for instance: the latter is strictly coarser than the former, but they both have exactly the same compact sets, namely, $\varnothing,\{0\},\{1\}$, and $\{0,1\}$. The claim is trivial to verify when it’s stated properly: Proposition: Let $\tau$ and $\tau'$ be topologies on a set $X$ such that $\tau'\subseteq\tau$. If $K\subseteq X$ is $\tau$-compact, then $K$ is also $\tau'$-compact. In other words, $\langle X,\tau'\rangle$ has all of the compact sets of $\langle X,\tau\rangle$ and possibly more besides. To prove this, suppose that $K\subseteq X$ is $\tau$-compact, and let $\mathscr{U}$ be a cover of $K$ by $\tau'$-open sets. Then $\mathscr{U}\subseteq\tau$, so $\mathscr{U}$ is also a $\tau$-open cover of $K$, and as such it has a finite subcover $\mathscr{V}$. Thus, $K$ is $\tau'$-compact. $\dashv$
{ "language": "en", "url": "https://math.stackexchange.com/questions/138591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Squaring an arbitrary summation? I'm trying to find a recurrence relation for the coefficients of the Maclaurin series for $\tan(x)$ by substituting $y=\sum_{k=0}^{\infty}C_{2k+1}x^{2k+1}$ into the differential equation $y'=1+y^2$. This is because $\tan(x)$ is the solution to the initial value problem for the aforementioned DE with the initial condition $y(0)=0$; this is also where the form $\sum_{k=0}^{\infty}C_{2k+1}x^{2k+1}$ comes from (the fact that $\tan(x)$ is an odd function and that $y(0)=0$, which implies $C_0=0$). But I have no clue how to work "around" the expression $y^2=\big(\sum_{k=0}^{\infty}C_{2k+1}x^{2k+1}\big)^2$. How can I find a recurrence relation with an infinite squared summation? Any help is appreciated, thank you.
Take the Cauchy product: $$\begin{align*} \left(\sum_{k\ge 0}C_{2k+1}x^{2k+1}\right)^2&=x^2\left(\sum_{k\ge 0}C_{2k+1}x^{2k}\right)^2\\\\ &=x^2\sum_{k\ge 0}D_{2k}x^{2k}\;, \end{align*}$$ where $$D_{2k}=\sum_{i=0}^kC_{2i+1}C_{2(k-i)+1}\;.$$ Thus, the differential equation becomes $$\begin{align*} \sum_{k\ge 0}C_{2k+1}(2k+1)x^{2k}&=1+x^2\sum_{k\ge 0}\sum_{i=0}^kC_{2i+1}C_{2(k-i)+1}x^{2k}\\ &=1+\sum_{k\ge 1}\sum_{i=0}^{k-1}C_{2i+1}C_{2(k-i)-1}x^{2k}\;, \end{align*}$$ and we have $C_1=1$ and $$C_{2k+1}=\frac1{2k+1}\sum_{i=0}^{k-1}C_{2i+1}C_{2(k-i)-1}$$ for $k\ge 1$. E.g., $$\begin{align*} C_3&=\frac13 C_1^2=\frac13\;,\\ C_5&=\frac15(2C_1C_3)=\frac2{15}\;,\text{ and}\\ C_7&=\frac17(2C_1C_5+C_3^2)=\frac{17}{315}\;. \end{align*}$$
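As a sanity check (not part of the original answer; a Python sketch using the standard library), running the recurrence forward reproduces the well-known Maclaurin coefficients of $\tan x$:

```python
from fractions import Fraction

# C[k] holds C_{2k+1}.  Run the recurrence
#   C_{2k+1} = 1/(2k+1) * sum_{i=0}^{k-1} C_{2i+1} C_{2(k-i)-1}
# forward from C_1 = 1, in exact rational arithmetic.
def tan_coeffs(n_terms):
    C = [Fraction(1)]
    for k in range(1, n_terms):
        s = sum(C[i] * C[k - 1 - i] for i in range(k))
        C.append(s / (2 * k + 1))
    return C

# Expect 1, 1/3, 2/15, 17/315, 62/2835: the coefficients of
# x, x^3, x^5, x^7, x^9 in the Maclaurin series of tan(x).
print(tan_coeffs(5))
```

So $\tan x = x + \frac13 x^3 + \frac{2}{15}x^5 + \frac{17}{315}x^7 + \frac{62}{2835}x^9 + \cdots$, matching the hand computation above.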
{ "language": "en", "url": "https://math.stackexchange.com/questions/138657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Complex Numbers and polar form I am given the following information: $$x[n]= s^n,\qquad n=0,\pm 1,\pm 2,\ldots$$ where $s=\sigma + j\omega = re^{i\theta}$ is a complex number in general. I was wondering how the following is concluded (proof?): For $\sigma = 0$ then $x[n] = r^n$
Note that if $u=re^{i\theta}$ is a complex number, then $$u^n = (re^{i\theta})^n = r^n(e^{i\theta})^n = r^n e^{in\theta}.$$ So if $\theta=0$, then $u^n = r^n$. (This is sometimes known as de Moivre's formula.) In particular, if $s = re^{i\theta}$ and $\theta=0$, then $s^n = r^n$. Added. If $\sigma=0$, then $s=j\omega$ is purely imaginary, so $r=|\omega|$, and $\theta=\pi/2$ if $\omega\gt0$ and $\theta=-\pi/2$ if $\omega\lt 0$. If $n$ is a multiple of $4$, then $s^n = r^n$; if $n=4k+2$, then $s^n=-r^n$; if $n=4k+1$, then $s^n=\mathrm{sgn}(\omega)\,ir^n$; and if $n=4k+3$, then $s^n=-\mathrm{sgn}(\omega)\,ir^n$.
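A quick numerical check of the four cases in the purely imaginary situation (not part of the original answer; plain Python, where the literal `1j` is the imaginary unit):

```python
import cmath

# For sigma = 0 we have s = omega * 1j (purely imaginary), r = |omega|.
# Verify the four residue classes of n mod 4 described above.
for omega in (2.0, -2.0):
    s = omega * 1j
    r = abs(omega)
    sgn = 1.0 if omega > 0 else -1.0
    for k in range(3):
        assert cmath.isclose(s ** (4 * k), r ** (4 * k))                      # n = 4k
        assert cmath.isclose(s ** (4 * k + 2), -(r ** (4 * k + 2)))           # n = 4k+2
        assert cmath.isclose(s ** (4 * k + 1), sgn * 1j * r ** (4 * k + 1))   # n = 4k+1
        assert cmath.isclose(s ** (4 * k + 3), -sgn * 1j * r ** (4 * k + 3))  # n = 4k+3
print("all four residue classes check out")
```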
{ "language": "en", "url": "https://math.stackexchange.com/questions/138710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what is the formula to calculate the permutations I am new to permutations. I have a problem for which I am not able to find the proper formula. Problem: There are X boxes in which balls need to be placed. The balls are of two colors - BLUE and RED. We have unlimited balls of both colors. We need to find the number of ways in which the balls can be placed in the boxes such that no two BLUE balls ever sit next to each other. Solution: So far I have reached the formula below: = (Total number of arrangements - Arrangements with BLUE balls sitting together) = (2^X - Arrangements with BLUE balls sitting together) I am stuck on "Arrangements with BLUE balls sitting together". What should be the right formula for this?
Denote by $b(n)$ the number of arrangements on $n$ boxes such that at least two blue balls are neighbours (I'll call this a "bad" arrangement). I'm going for a recurrence relation. Assume that we know $b(k)$ for $k < n$. Now consider the first box. If it contains a red ball, we have $b(n-1)$ combinations which complete this to a bad arrangement. If it contains a blue ball and the second box also contains a blue ball, the arrangement is bad no matter what the other boxes contain, so we have $2^{n-2}$ arrangements in this case. If the second box contains a red ball, we have $b(n-2)$ combinations which yield a bad arrangement. So $b(n) = b(n-1) + b(n-2) + 2^{n-2}$. The initial values are $b(1) = 0$ and $b(2) = 1$. You should be able to find a closed form expression using methods described e.g. here, but right now I don't have the time to do it myself. ;)
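The recurrence is easy to cross-check by brute force (a Python sketch, not from the original answer; `b_brute` simply enumerates all $2^n$ colourings):

```python
from itertools import product

# b(n) from the recurrence b(n) = b(n-1) + b(n-2) + 2^(n-2), b(1)=0, b(2)=1.
def b(n):
    vals = [0, 0]  # b(0) = b(1) = 0
    for m in range(2, n + 1):
        vals.append(vals[m - 1] + vals[m - 2] + 2 ** (m - 2))
    return vals[n]

# Brute force: enumerate all 2^n colourings, count those with adjacent "BB".
def b_brute(n):
    return sum(1 for boxes in product("BR", repeat=n)
               if any(boxes[i] == boxes[i + 1] == "B" for i in range(n - 1)))

for n in range(1, 13):
    assert b(n) == b_brute(n)
print([b(n) for n in range(1, 8)])  # [0, 1, 3, 8, 19, 43, 94]
```

Incidentally, the complement $2^n - b(n)$, i.e. the number of valid arrangements, comes out as $2, 3, 5, 8, 13, \dots$: the Fibonacci pattern familiar from counting binary strings with no two adjacent ones, which is consistent with the closed form one would get from the recurrence.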
{ "language": "en", "url": "https://math.stackexchange.com/questions/138788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }