Find eigenvalues and eigenvectors of the operator $A$ The question is: Find the eigenvalues and eigenvectors of the operator $A$ on $\Bbb{R}^3$ given by $A\mathbf{x}=|\mathbf{a}|^2 \mathbf{x}- (\mathbf{a} \cdot \mathbf{x}) \mathbf{a}$, where $\mathbf{a}$ is a given constant vector. How do you know without any calculations that $A$ must have an orthonormal eigenbasis?
I have seen examples similar to this question. I'm wondering whether there is a systematic way to solve this kind of question. Someone showed me that you first set $\mathbf{x}=\lambda\mathbf{a}$. What is the reasoning behind that, and how does it help solve the problem?
|
Notice that $A$ is a symmetric matrix, because $A_{ij}=(Ae_{j}\cdot e_{i})=|a|^{2}(e_{j}\cdot e_{i})-(a\cdot e_{j})(a\cdot e_{i})=(Ae_{i}\cdot e_{j})=A_{ji}$. So it is diagonalizable and has an orthonormal eigenbasis.
Intuition behind the eigenvectors: notice that $A'x:=(a\cdot x)a$ is a matrix projecting every vector along the direction of $a$. Modify it a little and define $A''x:=\frac{1}{|a|^{2}}(a\cdot x)a$. Then $A''$ is an orthogonal projection, and $x-\frac{1}{|a|^{2}}(a\cdot x)a$ is the component of $x$ orthogonal to the direction of $a$.
(Geometrically) it is clear that $a$ is an eigenvector of $A''$ corresponding to the eigenvalue $1$. The other eigenvalue of $A''$ is $0$, and its eigenvectors are the vectors orthogonal to $a$; call them $a_{1}^{\bot}$ and $a_{2}^{\bot}$.
Now if we look at $I-A''$, it is the projection onto the space generated by $a_{1}^{\bot},a_{2}^{\bot}$. In fact, $(I-A'')a_{i}^{\bot}=a_{i}^{\bot}$ for $i=1,2$ and $(I-A'')a=0$; i.e., the $a_{i}^{\bot}$ are eigenvectors of $I-A''$ corresponding to the eigenvalue $1$, and $a$ is an eigenvector of $I-A''$ corresponding to the eigenvalue $0$.
In our case $A=|a|^{2}(I-A'')$. Using the above intuition we find the eigenvalues of $A$ are $|a|^{2}$ (with multiplicity $2$) and $0$, and the corresponding eigenvectors are $a_{1}^{\bot}, a_{2}^{\bot}$ and $a$ respectively.
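Not part of the original answer, but a quick numeric sanity check of the claimed eigenstructure; the sample vectors below are arbitrary choices:

```python
# Sanity check of A x = |a|^2 x - (a.x) a, using a = (1, 2, 2), so |a|^2 = 9.
# Pure Python, no external libraries.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def apply_A(a, x):
    n2 = dot(a, a)                       # |a|^2
    ax = dot(a, x)                       # a . x
    return tuple(n2 * xi - ax * ai for xi, ai in zip(x, a))

a = (1, 2, 2)

# a itself lies in the kernel: A a = |a|^2 a - |a|^2 a = 0.
assert apply_A(a, a) == (0, 0, 0)

# A vector orthogonal to a is scaled by the eigenvalue |a|^2 = 9.
x_perp = (2, -1, 0)                      # a . x_perp = 2 - 2 + 0 = 0
assert dot(a, x_perp) == 0
assert apply_A(a, x_perp) == tuple(9 * xi for xi in x_perp)
```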
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
How to do this limit: $\lim\limits_{n\to \infty}\large\frac{\sum_{k=1}^n k^p}{n^{p+1}}$? $$\large\lim_{n\to \infty}\large\frac{\sum_{k=1}^n k^p}{n^{p+1}}$$
I'm stuck here because the sum is like this: $1^p+2^p+3^p+4^p+\cdots+n^p$.
Any ideas?
|
Hint: Recall that if $f$ is integrable on $[a,b]$, then:
$$
\int_a^b f(x)~dx = \lim_{n\to \infty} \dfrac{b-a}{n}\sum_{k=1}^n f \left(a + k \left(\dfrac{b-a}{n}\right) \right)
$$
Can you rewrite the given sum in the above form? What might be an appropriate choice for $a$, $b$, and $f(x)$?
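To see the hint in action numerically (my addition, not the answerer's): with $a=0$, $b=1$, $f(x)=x^p$, the Riemann sum $\frac1n\sum_{k=1}^n (k/n)^p$ is exactly $\sum_{k=1}^n k^p / n^{p+1}$, and it approaches $\int_0^1 x^p\,dx = \frac{1}{p+1}$:

```python
# Numeric check that sum_{k=1}^n k^p / n^{p+1} -> 1/(p+1),
# i.e. the Riemann sum of f(x) = x^p on [0, 1].

def ratio(p, n):
    return sum(k ** p for k in range(1, n + 1)) / n ** (p + 1)

for p in (1, 2, 3):
    approx = ratio(p, 100_000)
    assert abs(approx - 1 / (p + 1)) < 1e-3, (p, approx)
```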
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Multiplication in symmetric product space STATEMENT: Let $V=\mathbb{R}^2$ and let $S_2(V)$ be the symmetric product of $V$. Take $Y:=\left\{x\cdot y: x,y\in V\right\}\subset S_2(V)$.
QUESTION: What is multiplication in the symmetric product space?
|
The symmetric power $S^p(V)$ of a vector space $V$ is defined as the quotient $V^{\otimes p} / (v_1 \otimes \dotsc \otimes v_p = v_{\sigma(1)} \otimes \dotsc \otimes v_{\sigma(p)} : v_i \in V, \sigma \in \Sigma_p)$. The natural isomorphism $V^{\otimes p} \otimes V^{\otimes q} \to V^{\otimes (p+q)}$ descends to a linear map $S^p(V) \otimes S^q(V) \to S^{p+q}(V)$. It maps $[v_1 \otimes \dotsc \otimes v_p] \otimes [w_1 \otimes \dotsc \otimes w_q]$ to $[v_1 \otimes \dotsc \otimes v_p \otimes w_1 \otimes \dotsc \otimes w_q]$. Usually one writes $[v_1 \otimes \dotsc \otimes v_p]$ in $S^p(V)$ as $v_1 \cdot \dotsc \cdot v_p$, so that $S^p(V) \otimes S^q(V) \to S^{p+q}(V)$ is really just a multiplication. In your question, $p=q=1$. Hence, $S^2(V) = V^{\otimes 2}/(v \otimes w = w \otimes v)$, where $[v \otimes w]$ is denoted by $v \cdot w$ (which equals $w \cdot v$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sum of convergent and non-convergent series, does it converge? And how to prove? The series $\sum a_n$ is convergent and $\sum b_n$ is not convergent. Will the series $\sum (a_n + b_n)$ converge? I think it will not converge, but how do I show it?
I believe I have to use the definition:
$|a_n - A| < \epsilon$
$|b_n - B| \geq \epsilon$
Then choose $n > N$ for both, and try to arrive at an inequality showing $|a_n + b_n - A - B| \geq \epsilon$.
|
It does not converge, because $$b_n=(b_n+a_n)-a_n,$$ so if $\sum (b_n+a_n)$ converged, then $\sum b_n$ would converge as well (being the difference of two convergent series), a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Basic probabilities: one poisson then 2 binomial (Wasserman 2.14 - 11) I am self-refreshing some stats concepts by reading "All of Statistics" (Wasserman 2004) and am puzzled by the following problem (section 2.14, exercise 11):
Let N ~ Poisson($\lambda$) and suppose we toss a coin N times and let $p$ be the probability of heads. Let $X$ and $Y$ be the number of heads and tails. Show that $X$ and $Y$ are independent.
It seems obvious that $X$ and $Y$ are dependent when conditioned by $N$ since they must sum up to N, so $f(X,Y|N) \ne f_X(X|N) f_Y(Y|N)$ but the question is about showing that $f_{XY}(X,Y) = f_X(X) f_Y(Y)$.
I tried obtaining $f(X,Y)$ by first expressing $f(X,Y,N) = f(N)f(X|N)f(Y|X,N)$ and then marginalising over $N$; the first and second factors are clearly a Poisson and a binomial respectively, but the last one appears to be $1$ for $Y=N-X$ and $0$ otherwise (and I'm stuck there ^__^).
Any help or comment welcome! Thanks!
|
Let us first partition the event $(Y=y,X=x)$ into disjoint events indexed by $N$:
$$\begin{align}
\mathbb{P}(X=x,Y=y)&=\sum_n \mathbb{P}(X=x,Y=y,N=n),\\
&=\sum_n \mathbb{P}(X=x,Y=y|N=n)\mathbb{P}(N=n),\\
&=\sum_n \mathbb{P}(Y=y|X=x,N=n)\mathbb{P}(X=x|N=n)\mathbb{P}(N=n).
\end{align}$$
Up to this point this is exactly what you did. Now recall that you wish to calculate the probability that $X=x$ and $Y=y$; that is, $x$ and $y$ are fixed. Since $y$ is fixed, the factor $\mathbb{P}(Y=y|N=n,X=x)$ kills every term of the summation with $n\neq x+y$ (given $x$ and $y$, there is exactly one value of $n$ for which this term doesn't vanish, namely $n=x+y$). This yields:
$$\begin{align}
\mathbb{P}(X=x,Y=y)&=\mathbb{P}(X=x|N=x+y)\mathbb{P}(N=x+y),\\
&=\frac{\lambda^{x+y}}{x! y!}e^{-\lambda}p^x (1-p)^{y}.
\end{align}$$
Calculating the marginals through $f_X(x)=\sum_{n=x}^\infty f(x|n)f(n)$ (and the analogous formula for $Y$) and multiplying them together produces the density stated above (I checked it), thus $X$ and $Y$ are independent.
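A quick numeric confirmation of the factorization (my addition, not the answerer's): the joint density above equals the product of Poisson marginals with means $\lambda p$ and $\lambda(1-p)$. The parameter values below are arbitrary:

```python
# Check numerically that P(X=x, Y=y) = e^{-lam} lam^{x+y} p^x (1-p)^y / (x! y!)
# factorizes into Poisson(lam*p) and Poisson(lam*(1-p)) marginals.
from math import exp, factorial

lam, p = 3.0, 0.4

def joint(x, y):
    return exp(-lam) * lam ** (x + y) * p ** x * (1 - p) ** y \
        / (factorial(x) * factorial(y))

def poisson(mu, k):
    return exp(-mu) * mu ** k / factorial(k)

for x in range(6):
    for y in range(6):
        assert abs(joint(x, y)
                   - poisson(lam * p, x) * poisson(lam * (1 - p), y)) < 1e-12
```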
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A converse of Schur's lemma Suppose $\rho: G \rightarrow GL(V)$ is a representation, and suppose that whenever a linear operator $T: V \rightarrow V$ satisfies $T\circ \rho_g= \rho_g\circ T$ for all $g\in G$, it follows that $T=k\cdot \mathrm{Id}$ for some scalar $k$ (i.e., every $G$-intertwining operator is a homothety). Prove that $\rho$ is irreducible.
Attempt: Suppose $W$ is a vector subspace of $V$ such that $\rho_g (W) \subseteq W$ for all $g\in G$; I want to prove that $W=\{0\}$ or $V$. The only theorem I have learnt for proving irreducibility is $\langle \chi,\chi \rangle=1$, but I don't think it is useful here. And I have no idea how to make use of the condition "the only $G$-intertwining linear operators are homotheties". Please help.
|
Hint: If there is a non-trivial proper subspace, find a $G$-complement. Think about the projection map to one of the factors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Sum of all triangle numbers Does anyone know the sum of all triangle numbers? I.e.
$1+3+6+10+15+21+\cdots$
I've tried everything, but it might help you if I tell you one useful discovery I've made:
I know that the alternating sum of triangle numbers, $1-3+6-10+\cdots$, is equal to $1/8$, and that to change
$1+3+6+\cdots$ into $1-3+6-\cdots$ you would subtract $6+20+42+70+\cdots$, which is every other triangular number (not the hexagonals) multiplied by two.
$1/8$ plus this value is $1+3+6+10+\cdots$
A final note: I tried to split the triangle numbers into hexagonals and that series, and then I got the squares of the odd numbers. Using Dirichlet lambda functions this gave me $0$, but I don't think this could be right. A number of other sums gave me $-1/24$ and $3/8$, but I have no idea.
|
The $r$-th triangular number is
$$T_r=\frac {r(r+1)}2=\binom {r+1}2$$
i.e. $1, 3,6, 10, ...$ for $r=1, 2, 3, 4, ...$.
The sum of the first $n$ triangular numbers is
$$S_n=\sum_{r=1}^n T_r=\color{blue}{\sum_{r=1}^n \binom {r+1}2=\binom {n+2}3}=\frac {(n+2)(n+1)n}6$$
i.e. $1, 4, 10, 20, ...$ for $n=1, 2, 3, 4...$. This is also known as the $n$-th tetrahedral number.
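A small script (my addition) confirming the closed form for the partial sums:

```python
# Check that the sum of the first n triangular numbers is the
# tetrahedral number C(n+2, 3) = (n+2)(n+1)n / 6.
from math import comb

def triangular(r):
    return r * (r + 1) // 2

for n in range(1, 50):
    partial = sum(triangular(r) for r in range(1, n + 1))
    assert partial == comb(n + 2, 3)
```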
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1034994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 3
}
|
Find flux using usual method and divergence theorem (results don't match) I'm trying to calculate the total flux through a pyramid formed by a plane and the coordinate planes. I can't come to the same answer using the usual method and the divergence theorem.
The plane is
$2x+y+z-2=0$
The vector field is
$\vec F=(-x+7z)\vec k$
Using the divergence theorem:
$\operatorname{div}\vec F = \frac{\partial}{\partial z}(-x+7z) = 7 $
$\Phi=\int_{0}^{1}\int_{0}^{2-2x}\int_{0}^{2-2x-y}\operatorname{div}\vec F\,dz\,dy\,dx = 7\int_{0}^{1}\int_{0}^{2-2x}\int_{0}^{2-2x-y}dz\,dy\,dx = -\frac{70}{3}$
Using the usual method:
Since the field has only a $\vec k$-component, the total flux will be the sum of the fluxes through the plane $2x+y+z-2=0$ and through the $xy$-plane (the fluxes through the $zy$ and $zx$ planes will be zero).
Flux through $2x+y+z-2=0$:
$\int_{0}^{1}\int_{0}^{2-2x}(-x+7z)\vec k\cdot (\frac{2\vec i + \vec j + \vec k}{\sqrt 6})\sqrt 6dydx = \frac{13}{3}$,
where $(\frac{2\vec i + \vec j + \vec k}{\sqrt 6})$ is a normal unit vector and $(-x+7z)\vec k\cdot (\frac{2\vec i + \vec j + \vec k}{\sqrt 6})$ is a dot product.
Flux through the $xy$ plane:
The unit normal vector will be $-\vec k$. Since we are in the $xy$ plane, the vector field will be $(-x+7\cdot 0)\vec k = -x\vec k$. The integral is then:
$\int_{0}^{1}(-x\vec k\cdot -\vec k)(2-2x)dx = \frac{1}{3}$
The total flux: $\frac{13}{3} + \frac{1}{3} = \frac{14}{3}$
$\frac{14}{3} \neq -\frac{70}{3}$
What did I do wrong?
|
Since the divergence of the given vector field is constant, we can just multiply the volume of the pyramid by the divergence. The pyramid has base area $1$ (in the $xy$-plane), because the base is a right triangle with legs $1$ and $2$. The height of the pyramid is $2$, so the pyramid volume is $\frac{1}{3}\cdot 1 \cdot 2=\frac{2}{3}$. Multiplying by the divergence $7$ gives the answer $\frac{14}{3}$.
Let's check this by doing the integral in your question:
\begin{align}
\Phi&=\int_{0}^{1}\int_{0}^{2-2x}\int_{0}^{2-2x-y}dzdydx\\
&=\int_{0}^{1}\int_{0}^{2-2x}\left[\int_{0}^{2-2x-y}dz\right]dydx\\
&=\int_{0}^{1}\left[\int_{0}^{2-2x}(2-2x-y)dy\right]dx\\
&=\int_{0}^{1}\left[2y-2xy-\frac{1}{2}y^2\right]_0^{2-2x}dx\\
&=2\int_{0}^{1}(x-1)^2dx\\
&=\frac{2}{3}\left[(x-1)^3\right]_0^1\\
&=\frac{2}{3}
\end{align}
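The volume $\frac23$ (and hence the flux $7\cdot\frac23=\frac{14}{3}$) can also be double-checked with a crude Riemann sum; this is my addition, and the grid size is an arbitrary choice:

```python
# Approximate the volume of the pyramid 0 <= x <= 1, 0 <= y <= 2-2x,
# 0 <= z <= 2-2x-y by a midpoint Riemann sum; the exact value is 2/3.
N = 200
h = 1.0 / N
volume = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(2 * N):
        y = (j + 0.5) * h
        z_top = 2 - 2 * x - y
        if z_top > 0:
            volume += z_top * h * h   # column of height z_top over an h x h cell

assert abs(volume - 2 / 3) < 1e-3
flux = 7 * volume                      # divergence is constant, equal to 7
assert abs(flux - 14 / 3) < 1e-2
```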
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Primes of the form $an^2+bn+c$? Wondering if this has been proven or disproven. Given:
$a,b,c$ integers,
$a$, $b$, and $c$ coprime,
$a+b$ and $c$ not both even,
$b^2-4ac$ not a perfect square,
are there infinitely many primes of the form $an^2 + bn + c$?
|
Take $a = 1$, $b = 0$, $c = 1$. It is currently open whether there are infinitely many primes of the form $x^2 + 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is this theory complete? I have a language $L=\{P\}$ with equality, where $P$ is a binary predicate symbol. The relevant formulas are:
$\varphi \equiv \forall x \forall y (\neg P(x,x) \land (P(x,y) \to P(y,x)))$,
$\psi \equiv \forall x \exists y \exists z(y\ne z \land P(x, y) \land P(x,z) \land \forall v (P(x,v) \to (v=y \lor v =z )))$
and for every $n \in N^+$
$\xi_n \equiv \exists x_1 \exists x_2 ... \exists x_n
(\bigwedge \limits _{i=1}^{n-1}P(x_i, x_{i+1}) \land \bigwedge \limits _{i=1 }^{n}\bigwedge \limits _{j=i+1}^{n} x_i \ne x_j)$
I'm trying to prove that the theory $T=\{\varphi,\psi\} \cup \{\xi_n \mid n \in N^+\}$ is consistent and complete.
But still with no success. Could somebody suggest a line of proof?
|
This theory axiomatizes the class of $2$-regular graphs (every vertex has exactly two neighbours) in which, for every positive integer $n$, there exists a path of length $n$. Needless to say, such a graph has to be infinite to satisfy the last condition.
To prove that the theory is consistent, it suffices to come up with a model. An example of a model is the two-way infinite path on $\mathbb Z$, where $P(x,y)$ holds iff $|x-y|=1$.
The theory is not complete, however. Take the structure on $\mathbb Z$ just described and add three new points $a,b,c$ that form a triangle for the predicate $P$ and are disconnected from the rest of the structure. This is again a $2$-regular graph that satisfies all the $\xi_n$; thus it is a new model, and it is not elementarily equivalent to the former one (the sentence asserting the existence of a triangle holds in one model but not the other).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Converting expressions to polynomial form My question is from Apostol's Vol. 1 One-variable calculus with introduction to linear algebra textbook.
Page 57. Exercise 12. Show that the following are polynomials by converting them to the form $\sum_{k=0}^{m}a_kx^k$ for a suitable $m$. In each case $n$ is a positive integer.
$a)$ $(1+x)^{2n}.$
$b)$ $\frac{1-x^{n+1}}{1-x}, x\not=1.$
$c)$ $\prod_{k=0}^{n}(1+x^{2^k}).$
The attempt at a solution: part a) is pretty easy, I guess; it is an instance of the binomial theorem, so the answer would be $(1+x)^{2n}=\sum_{k=0}^{2n}\binom{2n}{k}x^k.$ The answer to part b) would be the following: $$\frac{1-x^{n+1}}{1-x}=\frac{(1-x)(1+x+x^2+\cdots+x^n)}{1-x}=1+x+x^2+\cdots+x^n=\sum_{k=0}^{n}x^k,$$ thanks to @DiegoMath's hint.
As for part c), we have $$\prod_{k=0}^{n}(1+x^{2^k})=(1+x)(1+x^2)(1+x^4)(1+x^8)\cdots(1+x^{2^n})$$ and I have trouble "converting" this to sum which would be of a form of polynomial.
|
I think I understand all parts of this problem and I have the solutions, so since no one has posted an answer, I will; I hope that's OK.
The answer to part $a)$ I already listed, but here it is anyway.
$a)$ Using the binomial theorem, we have $$(1+x)^{2n}=\sum_{k=0}^{2n}\binom{2n}{k}x^k.$$
For $b)$, @DiegoMath's hint helped me, and the answer would be:
$b)$ $$\frac{1-x^{n+1}}{1-x}=\frac{(1-x)(1+x+x^2+\cdots+x^n)}{1-x}=1+x+x^2+\cdots+x^n=\sum_{k=0}^{n}x^k.$$
$c)$ We have $$\prod_{k=0}^{n}(1+x^{2^k})=(1+x)(1+x^2)(1+x^4)(1+x^8)\cdots(1+x^{2^n}),$$ and $$(1+x)(1+x^2)\cdots(1+x^{2^n})=1+x+x^2+\cdots+x^{2^{n+1}-1}$$ (every exponent $0\le k\le 2^{n+1}-1$ arises exactly once, from the binary representation of $k$), therefore $$\prod_{k=0}^{n}(1+x^{2^k})=\sum_{k=0}^{2^{n+1}-1}x^k.$$
These are my solutions; please post if you have better ones or if you think my reasoning is wrong.
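A quick machine check of the part $c)$ identity for small $n$ (my addition), multiplying polynomials represented as coefficient lists:

```python
# Verify prod_{k=0}^{n} (1 + x^{2^k}) = 1 + x + ... + x^{2^{n+1}-1}
# for small n, using coefficient-list polynomial multiplication.

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def product_poly(n):
    result = [1]
    for k in range(n + 1):
        factor = [0] * (2 ** k + 1)
        factor[0] = factor[2 ** k] = 1    # coefficients of 1 + x^{2^k}
        result = poly_mul(result, factor)
    return result

for n in range(5):
    # All 2^{n+1} coefficients should equal 1.
    assert product_poly(n) == [1] * 2 ** (n + 1)
```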
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
show that the solution is a local martingale iff it has zero drift Most financial maths textbook state the following:
Given an $n$-dimensional Ito-process defined by
\begin{equation}
X_t = X_0 + \int_0^{t} \alpha_s \,d W_s + \int_0^{t} \beta_s \,d s,
\end{equation}
where $(\alpha_t)_{t \geq0}$ is a predictable process that is valued in the space of $n \times d$ matrices and $(W_t)_{t \geq 0}$ is a $d$-dimensional Brownian motion,
\begin{equation}
(X_t) \text{ is a local martingale } \quad \Longleftrightarrow \text{ It has zero drift.}
\end{equation}
Can anyone show me a reference for the proof of this statement or at least give a hint of how to construct this proof? (I know how to prove the ($\Leftarrow$) direction, but I am not so sure about the other one.)
|
Suppose that $(X_t)_{t \geq 0}$ is a local martingale. Since
$$X_0 + \int_0^t \alpha_s dW_s$$
is also a local martingale, this means that
$$M_t := X_t - \left( X_0 + \int_0^t \alpha_s \, dW_s \right) = \int_0^t \beta_s \, ds$$
is a local martingale. Moreover, $(M_t)_{t \geq 0}$ is of bounded variation and has continuous sample paths. It is widely known that any continuous local martingale of bounded variation is constant, see e.g. Brownian Motion - An Introduction to Stochastic Processes by René Schilling and Lothar Partzsch, Proposition A.22.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Every ideal is contained in a prime ideal that is disjoint from a given multiplicative set Let $R$ be a ring, $I\subset R$ an ideal, and $S\subset R$ a set for which the following hold:
1) $1\in S$
2) $a,b \in S\Rightarrow a\cdot b\in S$
3) $I\cap S=\emptyset$
Show that there exists a prime ideal $P$ in $R$ containing $I$ with $P\cap S =\emptyset$
Any ideas?
|
It's a standard argument in commutative algebra. Consider the set of all ideals that contain $I$ and avoid $S$, ordered by inclusion. By Zorn's lemma there are maximal elements (since the union of a chain is an upper bound), and you can prove that an ideal maximal with respect to inclusion among those avoiding $S$ and containing $I$ turns out to be prime.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Arithmetic progressions. "Consider a $4$-term arithmetic sequence. The difference is $4$, and the product of all four terms is $585$. Write the progression."
My way of finding the progression seems like it will take too long, but here it is, anyway:
$$a_1\cdot a_2\cdot a_3\cdot a_4=585$$
$$a_1\cdot (a_1+4)\cdot (a_1+8)\cdot (a_1+12)=585$$
and after some operations
$$a^4 +24a^3+176a^2+384a-585=0 $$
Is there a faster, less frustrating way of solving this? Thanks in advance.
|
Another way to do it is to factor $585$, which turns out to be
$$3^2\cdot 5\cdot 13$$
And using the factorization, find four factors which each differ by $4$.
$$1\times 5\times 9\times 13$$
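A brute-force confirmation of this (my addition; the search bound is an arbitrary choice):

```python
# Find all 4-term integer arithmetic progressions with common
# difference 4 whose terms multiply to 585.
solutions = []
for a1 in range(-50, 51):
    terms = [a1 + 4 * k for k in range(4)]
    prod = 1
    for t in terms:
        prod *= t
    if prod == 585:
        solutions.append(terms)

assert [1, 5, 9, 13] in solutions
```

(It also finds the mirrored progression $-13,-9,-5,-1$, whose four negative terms likewise multiply to $585$.)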
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
If $\gcd(a,b) = 1,$ then why is the set of invertible elements of $\mathbb Z_{ab}$ isomorphic to that of $\mathbb Z_a\times \mathbb Z_b$? If $\gcd(a,b) = 1,$ then why is the set of invertible elements of $\mathbb Z_{ab}$ isomorphic to that of $\mathbb Z_a\times \mathbb Z_b$?
I know the proof that, as rings, $\mathbb Z_{ab}$ is isomorphic to $\mathbb Z_a\times \mathbb Z_b.$ Does this extend to the sets of their invertible elements (a.k.a. the elements relatively prime to the modulus)? If so, why? Is this equivalent to the assertion that $\gcd(a,bc) = 1$ iff $\gcd(a,b) = 1$ and $\gcd(a,c) = 1?$
Note: I selected the answer as the one that does not utilize the Chinese Remainder Theorem, since I was using this to prove the Chinese Remainder Theorem. Thank you for the help!
|
$\gcd(a, bc) = 1 \Rightarrow$ there exists no prime $p$ such that $p\mid a$ and $p\mid bc \Rightarrow$ there is no prime $r$ with $r\mid a$ and $r\mid b$, and no prime $s$ with $s\mid a$ and $s\mid c \Rightarrow \gcd(a, b) = 1$ and $\gcd(a, c) = 1.$
On the other hand, suppose $\gcd(a, b) = 1$ and $\gcd(a, c) = 1,$ and let $p$ be a prime such that $p\mid a$ and $p\mid bc.$ Then $p\mid b$ or $p\mid c,$ which in either case contradicts one of the two assumptions; hence $\gcd(a, bc) = 1.$
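A small exhaustive check of the equivalence (my addition; the range is an arbitrary choice):

```python
# Check that gcd(a, b*c) == 1  iff  gcd(a, b) == 1 and gcd(a, c) == 1
# over a small range of positive integers.
from math import gcd

for a in range(1, 30):
    for b in range(1, 30):
        for c in range(1, 30):
            lhs = gcd(a, b * c) == 1
            rhs = gcd(a, b) == 1 and gcd(a, c) == 1
            assert lhs == rhs
```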
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find and solve a recurrence relation for the number of n-digit ternary sequences in which no 1 appears to the right of any 2. Find and solve a recurrence relation for the number of n-digit ternary sequences in which no 1 appears to the right of any 2.
$a_1=3$ and $a_2=8$
I am having trouble creating the recurrence relation as well as evaluating.
My thought for the relation is:
$a_{n+1}=2a_n+2^n$ or $a_n=2a_{n-1}+2^{n-1}$
|
We interpret "to the right" as meaning immediately to the right.
Let $b_n$ be the number of "good" strings that do not end in $2$, and let $c_n$ be the number of good strings that end in $2$.
We can see that $b_{n+1}=2b_n+c_n$. For we append a $0$ or a $1$ to a good string that does not end in $2$, or a $0$ to a good string that ends in $2$.
Note that $b_n+c_n=a_n$. So $b_{n+1}=b_n+a_n$ and $c_{n+1}=a_n$.
From $c_{n+1}=a_n$ we get $a_{n+1}-b_{n+1}=a_n$. Thus
$$b_{n+1}=a_{n+1}-a_n \quad\text{and therefore } \quad b_{n}=a_{n}-a_{n-1}.$$
Now substitute in $b_{n+1}=b_n+a_n$. We get
$$a_{n+1}-a_n=a_n-a_{n-1}+a_n,$$
which simplifies to
$$a_{n+1}=3a_n-a_{n-1}.$$
We can solve this with any of the standard methods, for example by finding the zeros of the characteristic polynomial.
Another way: Apart from the unfortunate restriction about $1$ not following $2$, we would have $a_{n+1}=3a_n$. How much does $3a_n$ overcount? It overcounts by including in the count the bad strings where we append a $21$ to a good string of length $n-1$. So, instantly
we get
$$a_{n+1}=3a_n-a_{n-1}.$$
Remark: In hindsight the problem is trivial. I wrote it up the way I did because in fact I first did it the clumsy way, a matrix version of the first approach.
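A brute-force check (my addition) that, under the "immediately to the right" reading, the counts satisfy $a_{n+1}=3a_n-a_{n-1}$ with $a_1=3$, $a_2=8$:

```python
# Count n-digit ternary strings with no 1 immediately after a 2,
# and compare against the recurrence a_{n+1} = 3 a_n - a_{n-1}.
from itertools import product

def count_good(n):
    total = 0
    for s in product("012", repeat=n):
        if not any(s[i] == "2" and s[i + 1] == "1" for i in range(n - 1)):
            total += 1
    return total

counts = [count_good(n) for n in range(1, 9)]
assert counts[0] == 3 and counts[1] == 8
for i in range(2, len(counts)):
    assert counts[i] == 3 * counts[i - 1] - counts[i - 2]
```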
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $F(1) + F(3) + F(5) + ... + F(2n-1) = F(2n)$ (These are Fibonacci numbers; with the convention $F(1) = F(2) = 1$, so $F(3) = 2$, $F(5) = 5$, etc.) I'm having trouble proving this with induction. I know how to prove the base case and present the induction hypothesis, but I'm unfamiliar with proving series such as this. Any help would be great. :)
|
Hint: If you can use the fact that the sequence of Fibonacci numbers is defined by the recurrence relation $$F(n)=F(n-1)+F(n-2)$$ then you can prove it by induction, since $$\begin{align*}F(2(n+1))&=F(2n+2)\\&=F(2n+1)+F(2n)\\&=F(2n+1)+\underbrace{F(2n-1)+\ldots+F(5)+F(3)+F(1)}_{=F(2n) \text{ by induction hypothesis}}\end{align*}$$
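A quick check of the identity for the first several $n$, with the standard convention $F(1)=F(2)=1$ (my addition):

```python
# Verify F(1) + F(3) + ... + F(2n-1) == F(2n), with F(1) = F(2) = 1.

def fib(n):
    a, b = 1, 1          # F(1), F(2)
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(1, 15):
    assert sum(fib(2 * k - 1) for k in range(1, n + 1)) == fib(2 * n)
```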
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1035861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
}
|
Integral on sphere and ellipsoid Let $a,b,c \in \mathbb{R},$ $\mathbf{A}=\left[\begin{array}{*{20}{c}}
\mathbf{a}&{0}&{0}\\
{0}&\mathbf{b}&{0}\\
{0}&{0}&\mathbf{c}
\end{array}\right] , ~~\det A >1$
Let $~D = \{(x_1,x_2,x_3): x_1^2 + x_2^2 +x_3^2 \leq 1 \}~$ and
$~E = \left\{(x_1,x_2,x_3): \frac{x_1^2}{a^2} + \frac{x_2^2}{b^2} + \frac{x_3^2}{c^2} \leq 1 \right\}~.$
Then for a compactly supported continuous function $f$ on $\mathbb{R}^3$, could anyone tell me which of the following are correct?
1. $\int_D f(Ax)dx = \int_E f(x)dx $
2. $\int_D f(Ax)dx = \frac{1}{abc} \int_D f(x)dx $
3. $\int_D f(Ax)dx = \frac{1}{abc} \int_E f(x)dx $
4. $\int_{\mathbb{R}^3} f(Ax)dx = \frac{1}{abc} \int_{\mathbb{R}^3} f(x)dx $
|
A function $~f~$ is said to be compactly supported if it is zero outside a compact set.
Let $~x=(x_1,x_2,x_3)\in\mathbb R^3~,$ be any arbitrary vector.
$$\therefore~~Ax=\left[\begin{array}{*{20}{c}}
\mathbf{a}&{0}&{0}\\
{0}&\mathbf{b}&{0}\\
{0}&{0}&\mathbf{c}
\end{array}\right]\left[\begin{array}{*{20}{c}}
{x_1}\\{x_2}\\{x_3}
\end{array}\right]=(ax_1,bx_2,cx_3)$$
Now $$\int_D f(Ax)dx =\iiint_{x_1^2+x_2^2+x_3^2\le1} f(ax_1,bx_2,cx_3)~dx_1~dx_2~dx_3\tag1$$
Putting $~~ax_1=y_1,~~bx_2=y_2,~~cx_3=y_3 \implies x_1=\dfrac{y_1}{a},~~x_2=\dfrac{y_2}{b},~~x_3=\dfrac{y_3}{c}$ $$\implies dx_1=\dfrac{dy_1}{a},~~dx_2=\dfrac{dy_2}{b},~~dx_3=\dfrac{dy_3}{c}$$
So $$x_1^2+x_2^2+x_3^2\le1\implies \dfrac{y_1^2}{a^2}+\dfrac{y_2^2}{b^2}+\dfrac{y_3^2}{c^2}\le1$$
So from $(1)$, $$\int_D f(Ax)dx =\iiint_{\frac{y_1^2}{a^2}+\frac{y_2^2}{b^2}+\frac{y_3^2}{c^2}\le1} f(y_1,y_2,y_3)~\dfrac{dy_1}{a}~\dfrac{dy_2}{b}~\dfrac{dy_3}{c}$$
$$=\dfrac{1}{abc}\int_E f(y)\, dy=\dfrac{1}{abc}\int_E f(x)\, dx$$ $($ the change-of-variables factor is $\frac{1}{|abc|}$; as $\det A=abc>1>0$, this equals $\frac{1}{abc}$ $)$
Thus option $(3)$ is correct but option $(1)$ and $(2)$ are not correct.
Similarly, we can show that $$\int_{\mathbb R^3} f(Ax)dx =\dfrac{1}{abc}\int_{\mathbb R^3} f(x) dx$$ Hence option $(3)$ and option $(4)$ are the only correct options.
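A numeric illustration of the last identity (my addition; the diagonal entries and the test function are arbitrary choices, with $a,b,c\ge 1$ so that both supports fit in the sampling box):

```python
# Numeric check of  int f(Ax) dx = (1/(abc)) int f(x) dx  for a compactly
# supported f, with A = diag(a, b, c).  Midpoint Riemann sums on [-1, 1]^3.
a, b, c = 1.5, 2.0, 1.25

def f(x, y, z):
    return max(0.0, 1.0 - x * x - y * y - z * z)   # supported in the unit ball

N = 60
h = 2.0 / N
pts = [-1.0 + (i + 0.5) * h for i in range(N)]

lhs = sum(f(a * x, b * y, c * z) for x in pts for y in pts for z in pts) * h ** 3
rhs = sum(f(x, y, z) for x in pts for y in pts for z in pts) * h ** 3 / (a * b * c)

assert abs(lhs - rhs) / rhs < 0.02
```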
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Definitions of Platonic and Archimedean Solids using Symmetry Groups? A Platonic Solid is defined to be a convex polyhedron where all the faces are congruent and regular, and the same number of faces meet at each vertex. An Archimedean Solid drops the requirement that all the faces have to be the same, but they must still all be regular, and each vertex must have the same arrangement of faces.
However, for Archimedean Solids, the pseudorhombicuboctahedron fits this definition, despite not being vertex-transitive (meaning that the rotation group of the solid does not act transitively on the vertices).
I was wondering: for Platonic solids, is it equivalent to define them as convex polyhedra that are face-, vertex-, and edge-transitive (where for Archimedean solids, we drop the face-transitivity condition)? Face-transitivity forces all the faces to be congruent, edge-transitivity forces all the faces to be equilateral, and vertex-transitivity forces the same number (or arrangement, in the Archimedean case). It's not immediately obvious to me that these conditions force the faces to be equiangular as well as equilateral... does it indeed follow, or is there a counterexample?
|
The main observation here is the following:
Being vertex-transitive (and considering bounded solids only) requires the vertices to lie on a sphere. Faces, on the other hand, are planar by definition, hence each face is contained in a supporting plane. The intersection of that plane and the sphere (ball) is simply a circle (disc). Now, using edges of equal length only (as in an Archimedean solid), it becomes evident that the edge circuit of each polygonal face has all its vertices on that very circle, evenly spaced apart (equal chords of a circle cut off equal arcs), so in effect each face happens to be a regular polygon.
In fact, this would even apply to their non-convex counterparts; the faces then might become non-convex regular polygrams instead. But still, all their corner angles would be forced to the same size.
Note that the same argument also applies to the other non-flat geometry, i.e. to hyperbolic tilings. Again the cross-section of a planar face's plane with the supporting hyperboloid defines a circle. In contrast, this argument breaks down in any flat geometry (Euclidean space of arbitrary dimension). So, for instance, there is a vertex-, edge-, and face-transitive tiling which still isn't a regular tiling of the plane: the rhombic tiling (of non-square rhombs).
--- rk
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Compute the map $H^*(CP^n; \mathbb{Z}) \rightarrow H^*(CP^n; \mathbb{Z})$ I'm trying to solve problem 3.2.6 in Hatcher. The problem is stated:
Use cup products to compute the map $H^*(CP^n; \mathbb{Z}) \rightarrow H^*(CP^n; \mathbb{Z})$ induced by the map $CP^n \rightarrow CP^n$ that is a quotient of the map $\mathbb{C}^{n+1} \rightarrow \mathbb{C}^{n+1}$ raising each coordinate to the $d$th power, $(z_0, ..., z_n) \mapsto (z_0^d, ... , z_n^d)$, for a fixed integer $d > 0$. First do the case $n = 1$.
I'm guessing this is going to be some sort of induction proof after I calculate the $n = 1$ case? But I'm not sure how to do that. So if that's true could someone possibly help me with the $n = 1$ case and I could probably figure it out from there?
Many thanks.
|
Let $f:P^n\to P^n$ be the map you describe and let $f^*:H^*(P^n)\to H^*(P^n)$ be the induced map on cohomology. Since $f^*$ is a map of rings and $H^*(P^n)$ is generated as a ring by its degree-two component, it is enough to describe $f^*:H^2(P^n)\to H^2(P^n)$.
Moreover, $H^2(P^n)$ is a free $\mathbb Z$-module of rank $1$, generated by one class $\alpha\in H^2(P^n)$ which can be described quite explicitly (for example, using the usual CW structure on $P^n$ with one cell in each even dimension, which happens to play very nicely with our map), so it is enough to compute $f^*(\alpha)$.
Can you do that?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Spectrum of Multiplication Operator $T$ in $C[0,1]$ "Let $X=C[0,1]$ and $v \in X$ be a fixed function. Let $T$ be the multiplication operator by $v$, i.e. $Tx(t)=v(t)x(t)$. Find the spectrum of $T$."
This is an exercise from a PDF of notes I found online. I'm trying to better understand Spectral Theory.
So $\lambda$ is a regular value if $(T-\lambda I)^{-1}$ exists, is bounded, and is defined on a dense subset (I think there is a lemma which lets us not worry about the density part). The set of all regular values is $\rho(T)$ and the spectrum is $\sigma(T)=\mathbb{C}\setminus \rho(T)$.
It seems to me that $(T-\lambda I)^{-1}$ maps some $y(t)$ to $\frac{y(t)}{v(t)-\lambda}$. However, this would be problematic if $v(t)=\lambda$ for some values of $t \in [0,1]$
I don't have much of an understanding of all these definitions so if someone could give a solution to this example, I think that would help clear up some of the ideas for me.
Thanks in advance.
|
The map $(T-\lambda I)^{-1}$ is defined as the solution mapping of the equation
$$
(T-\lambda I) y = z,
$$
i.e. $ y= (T-\lambda I)^{-1}z$. In case of the multiplication with $v$, this is equivalent to
$$
(v(t)-\lambda) y(t) = z(t).
$$
First consider the case that $\lambda\ne v(t)$ for all $t\in [0,1]$. Then the above equation has a unique solution, moreover, $(T-\lambda I)^{-1}$ can be shown to be bounded.
Second, suppose there is $t_0$ such that $\lambda = v(t_0)$. Then $((T-\lambda I)y)(t_0)=0$ for all $y\in X$, hence the range of $(T-\lambda I)$ and its closure are not equal to $X$. Thus, the spectrum of $T$ is the set of all function values of $v$.
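A finite-dimensional analogue may help the intuition (my addition, purely illustrative; the multiplier $v$ below is an arbitrary example): discretizing the multiplication operator on a grid gives a diagonal matrix, whose spectrum is exactly the set of sampled values of $v$.

```python
# Discretize Tx(t) = v(t) x(t) on a grid: T becomes the diagonal matrix
# diag(v(t_0), ..., v(t_{N-1})), whose eigenvalues are its diagonal entries.

def v(t):
    return t * (1 - t)          # an example multiplier on [0, 1]

N = 101
grid = [i / (N - 1) for i in range(N)]
diagonal = [v(t) for t in grid]

# The (discrete) spectrum is just the set of sampled values of v.
spectrum = set(diagonal)
assert max(spectrum) == 0.25    # v attains its maximum 1/4 at t = 1/2
assert 0.0 in spectrum          # v(0) = v(1) = 0
```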
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Show sequence is convergent and the limit Given the sequence
$$\left\{a_n \right\}_{n=1}^\infty $$
which is defined by
$$a_1=1 \\
a_{n+1}=\sqrt{1+2a_n} \ \ \ \text{for} \ n\geq 1 $$
I have to show that the sequence is convergent and find the limit.
I am quite stuck on this, hope for some help.
|
Monotonicity
$$a_2>a_1$$ $\boxed{\therefore\ a_{n+1}> a_n \implies 1+2a_{n+1}>1+2a_{n} \implies \sqrt{1+2a_{n+1}}>\sqrt{1+2a_{n}} \implies a_{n+2}>a_{n+1}}$
Boundedness
$$a_1<4$$ $$\boxed{\therefore\ a_n<4 \implies 1+2a_n<9 \implies \sqrt{1+2a_n}<3 \implies a_{n+1}<4}$$
Limit
$$\displaystyle\lim_{n\to \infty}a_{n}=l$$ $\therefore \displaystyle\lim_{n\to \infty}a_{n+1}=\displaystyle\lim_{n\to \infty}\sqrt{1+2a_{n}} \implies l=\sqrt{1+2l} \implies l^2-2l=1 \implies l=1\pm \sqrt{2}$
Since $a_n\geq 1$ for every $n$, the limit satisfies $l\geq 1$, which rules out the negative root: $${\boxed{l\geq 1>1-\sqrt{2} \implies l\neq 1-\sqrt{2} \implies l=1+\sqrt{2}}}$$
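The three steps above (monotonicity, boundedness, limit) can be spot-checked numerically; this is a sketch, not part of the original argument:

```python
import math

# Iterate a_{n+1} = sqrt(1 + 2 a_n) starting from a_1 = 1
a = 1.0
terms = [a]
for _ in range(50):
    a = math.sqrt(1 + 2 * a)
    terms.append(a)

limit = 1 + math.sqrt(2)  # positive root of l^2 - 2l - 1 = 0
```

The iterates increase, stay below $4$, and settle at $1+\sqrt{2}\approx 2.41421$.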
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Two roots of the polynomial $x^4+x^3-7x^2-x+6$ are $2$ and $-3$. Find two other roots. I have divided this polynomial first with $(x-2)$ and then divided with $(x+3)$ the quotient. The other quotient I have set equal to $0$ and have found the other two roots. Can you explain to me if these actions are correct and why?
|
Yes, provided the computations are correct, what you have done is exactly what must be done.
Given a polynomial $P(x)$, and a root of its, $a$, we have that $P(a)=0$. But if you divide $P(x)$ by $x-a$, you will get a quotient $Q(x)$ and a remainder $r$, which is a number because the degree of the divisor is $1$.
Then
$$P(x)=Q(x)(x-a)+r$$
and therefore
$$0=P(a)=Q(a)(a-a)+r=r$$
that is, the division has remainder $0$.
Now, we know that $r=0$, so if $b$ is another root of $P$,
$$0=P(b)=Q(b)(b-a)$$
and since $b-a\neq 0$, $Q(b)=0$, that is, $b$ is a root of $Q$.
So yes, if you divide the given polynomial $P$ by the factors associated to the known roots (as you have done), the roots of the quotient are the remaining roots of $P$.
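For this concrete polynomial the two divisions can be carried out by synthetic division; a quick sketch (illustrative, not from the original answer):

```python
def synthetic_divide(coeffs, root):
    """Divide a polynomial (coefficients, highest degree first) by (x - root).
    Returns (quotient coefficients, remainder)."""
    acc = [coeffs[0]]
    for c in coeffs[1:]:
        acc.append(c + root * acc[-1])
    return acc[:-1], acc[-1]

p = [1, 1, -7, -1, 6]             # x^4 + x^3 - 7x^2 - x + 6
q1, r1 = synthetic_divide(p, 2)   # divide by (x - 2)
q2, r2 = synthetic_divide(q1, -3) # divide the quotient by (x + 3)
```

Both remainders vanish and the final quotient is $x^2-1$, so the remaining roots are $1$ and $-1$.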
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Simplifying Double Integrals to Single-Variable Integrals Let D be a subset of $\mathbb{R}^2$ defined by $ |x| + |y| \leq 1$, and let $f$ be a continuous single-variable function on the interval $[-1,1]$. Show that
$$
\iint\limits_D \,f(x+y) \, \mathrm{d}x \, \mathrm{d}y = \int_{-1}^{1} \, f(u) \, \mathrm{d}u
$$
This makes sense when you consider the region D since the values of x and y essentially range from -1 to 1 but I can't figure out a first solid step into the proof. Intuitively it looks plausible to me but that's it. Any help?
|
Write a change of variables, $u = x + y$, $v = x - y$. Then the Jacobian $J = \partial(x,y)/\partial(u,v) = 1/2$ and hence
$$\iint_D f(x+y) \ dx \ dy = \iint_D f(u) \ J \ du \ dv = \frac{1}{2} \int_{-1}^1\int_{-1}^1 f(u) \ du \ dv$$
Can you take it from here?
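As a sanity check, the identity can be verified numerically for a specific test function, say $f(u)=u^2$ (an illustrative choice; $\int_{-1}^1 u^2\,du = 2/3$). A midpoint-rule sketch:

```python
def f(u):
    return u * u

N = 500
h = 2.0 / N
double = 0.0
for i in range(N):
    x = -1 + (i + 0.5) * h
    for j in range(N):
        y = -1 + (j + 0.5) * h
        if abs(x) + abs(y) <= 1:       # the diamond |x| + |y| <= 1
            double += f(x + y) * h * h

single = 2.0 / 3.0  # exact value of the single integral for f(u) = u^2
```

The double integral over the diamond agrees with the single integral up to the grid's boundary error.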
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
how many positive integer solutions to the following equation? $a^2 + b^2 + 25 = ab + 5a + 5b$
I have tried looking for a factorisation that could solve this question but couldn't find anything useful - found $(a+b+5)^2$ - don't know if this is useful
The equation does look similar to an equation of a circle - can you use this idea?
|
The intuition is that the right-hand side is, with a few possible exceptions, smaller than the left-hand side.
Note that $(a-b)^2\ge 0$, so $ab\le \frac{a^2+b^2}{2}$. Thus the right-hand side is $\le \frac{a^2+b^2}{2}+5a+5b$.
It follows that
$$(a^2+b^2+25)-(ab+5a+5b)\ge \frac{a^2+b^2}{2}+25-5a-5b.$$
The right-hand side above is
$$\frac{1}{2}\left(a^2+b^2-10a-10b+50\right).$$
This is
$$\frac{1}{2}\left((a-5)^2+(b-5)^2\right),$$
which is $\gt 0$ unless $a=b=5$.
Remark: We can make the proof more mysterious by writing the magic identity
$$2\left[(a^2+b^2+25)-(ab+5a+5b)\right]=(a-5)^2+(b-5)^2+(a-b)^2,$$
and concluding that the left-hand side is $0$ precisely if $a=b=5$.
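Since the identity forces $a=b=5$ over all reals, a brute-force scan over a modest range (a sketch, not part of the proof) confirms the single positive-integer solution:

```python
# Search for positive integer solutions of a^2 + b^2 + 25 = ab + 5a + 5b.
solutions = [
    (a, b)
    for a in range(1, 101)
    for b in range(1, 101)
    if a * a + b * b + 25 == a * b + 5 * a + 5 * b
]
```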
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1036956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$I$ is an ideal in $R$ implies that $I[x]$ is an ideal in $R[x]$. Is the following statement right?
If $I$ is an ideal in the ring $R$, then $I[x]$ is an ideal in the polynomial ring $R[x]$.
If so, how can I prove it?
|
Suppose $I$ is an ideal in $R$. Clearly $I[x]$ is a subring of $R[x]$ since $I$ is a subring of $R$. By definition, $\forall r\in R\;\forall a\in I$ we have $ra,ar\in I$. Since each coefficient of the product of two polynomials is a sum of products of their coefficients, it follows that $\forall p\in R[x]\;\forall q\in I[x]$ we have $pq,qp\in I[x]$.
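The coefficient observation can be illustrated with the concrete choice $R=\mathbb{Z}$, $I=2\mathbb{Z}$ (an illustrative example, not from the original): multiplying any integer polynomial by one with all-even coefficients yields all-even coefficients.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

p = [3, -1, 7, 2]   # arbitrary polynomial in Z[x]
q = [4, 0, -6, 2]   # polynomial with all coefficients in the ideal 2Z
product = poly_mul(p, q)
all_even = all(c % 2 == 0 for c in product)
```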
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Irrational number inequality : $1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}>\sqrt{3}$ it is easy and simple I know but still do not know how to show it (obviously without simply calculating the sum but manipulation on numbers is key here.
$$1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}>\sqrt{3}$$
|
$2 \lt 3 \lt 4$ so $\sqrt{2} \lt \sqrt{3} \lt 2$ and
$$1+\frac1{\sqrt{2}}+\frac1{\sqrt{3}} \gt 1+\frac12+\frac12 = 2 \gt \sqrt{3}.$$
Of course that does not generalise in any nice way.
However $2 \lt \frac{100}{49}$ with $3 \lt \frac{49}{16}$ and $5 \lt \frac{81}{16}$
does yield $$1+\frac1{\sqrt{2}}+\frac1{\sqrt{3}} \gt 1+\frac7{10}+\frac47=\frac{159}{70} =\frac{636}{280}\gt \frac{630}{280}=\frac{9}{4} \gt \sqrt{5}.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 4
}
|
Expressing $\sin\theta$ and $\cos\theta$ in terms of $\tan(\theta/2)$ This is the question:
Let $t = \tan(\theta/2)$. Express the following functions in terms of $t$:
*
*$\tan \theta$
*$\sin\theta$
*$\cos\theta$
I know that for part (1),
$$\tan\theta = \frac{2t}{1-t^2}$$
How do I get parts (2) and (3)?
If $\tan\theta = \frac{2t}{1-t^2}$ then I would multiply by $\cos \theta$ to get $$\sin \theta = 2t\frac{\cos \theta}{1-t^2}$$
but that doesn't look right.
|
$$z_{\frac{\theta}2} = \cos\frac{\theta}2 + i \sin \frac{\theta}2 = (1+t^2)^{-\frac12}(1+it)
$$
by deMoivre's theorem
$$
z_{\theta} = z_{\frac{\theta}2}^2=(1+t^2)^{-1}\left((1-t^2)+2it\right) = \cos\theta+i\sin\theta
$$
hence, equating real parts:
$$
\cos \theta = \frac{1-t^2}{1+t^2}
$$
and imaginary parts:
$$
\sin \theta = \frac{2t}{1+t^2}
$$
then taking the ratio:
$$
\tan \theta = \frac{2t}{1-t^2}
$$
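The three half-angle identities derived above are easy to spot-check numerically at an arbitrary angle (a sketch):

```python
import math

theta = 0.7  # arbitrary test angle in radians
t = math.tan(theta / 2)

sin_from_t = 2 * t / (1 + t * t)
cos_from_t = (1 - t * t) / (1 + t * t)
tan_from_t = 2 * t / (1 - t * t)
```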
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 2
}
|
Proof that the harmonic series is < $\infty$ for a special set.. In one of my books i found a very interesting task, i am really curios about the solution:
Let $M = \{2,3,4,5,6,7,8,9,20,22,...\} \subseteq \mathbb{N}$ be a set that contains all natural numbers, that don't contain a "1" in their depiction. Show that:
$$\sum_{n\in M} {\frac{1}{n}} < \infty$$
The set $M$ that we created is infinite, right? I will always be able to find a larger natural number not containing a "1". So isn't it kind of the same deal as the harmonic series, which we know diverges?
Sure, $\frac{1}{n}$ will converge "quicker" to $0$ over this set, so the series might converge then.. but I find that kind of hard to prove.
How "quickly" does the sequence $1/n$ have to go to $0$ so that our series converges? What do we need to be able to say about the partial sums? Is there a convergence criterion applicable here?
(I think $<\infty$ means it has to converge at some stage, right?)
I also read the proof that the "normal" harmonic series is divergent. It estimates the partial sums (I can provide this proof if necessary)... But in this case, what can you say about $M$ to lead a proof like this to a contradiction?
I got such an foggy conception of this, if somebody could shed some light on this that would be awesome!
|
Begin by defining
$$S(1;9) := \sum_{i=2}^9 \frac{1}{i}$$
Let $a = S(1;9)$. Then $a < 2$. Let $S(10;99)$ be the sum of the reciprocals of all numbers between $10$ and $99$ that don't contain a $1$ in their depiction.
Prove that $S(10;99) < \frac{9}{10}a$. (Each such number has the form $10d+e$ with $d\in\{2,\dots,9\}$ and $9$ allowed choices of $e$, and $\frac1{10d+e}\le\frac1{10d}$.) Similarly, define $S(100;999)$ to be the sum of the reciprocals of all numbers between $100$ and $999$ that don't contain a $1$ in their depiction. Prove that $S(100;999) < (\frac{9}{10})^2a$ etc etc.
The sum we are looking for is
$$S(1;9) + S(10;99) + S(100;999)...$$
But now we can easily give an upper bound for this sum.
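The resulting upper bound is the geometric series $a(1 + \tfrac9{10} + (\tfrac9{10})^2 + \cdots) = 10a < 20$. A numerical sketch of both the bound and a partial sum of the restricted series:

```python
a = sum(1.0 / i for i in range(2, 10))  # S(1;9), a bit under 2
upper_bound = 10 * a                    # a / (1 - 9/10)

# Partial sum of the restricted series for n up to 10^5
partial = sum(1.0 / n for n in range(2, 10**5) if "1" not in str(n))
```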
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Real analysis book suggestion I am searching for a real analysis book (instead of Rudin's) which satisfies the following requirements:
*
*clear, motivated (but not chatty), clean exposition in definition-theorem-proof style;
*complete (and possibly elegant and explicative) proofs of every theorem;
*examples and solved exercises;
*possibly, the proofs of the theorems on limits of functions should not use series;
*generalizations of theorem often given for $\mathbb{R}$ to metric spaces and also to topological spaces.
Thank you very much in advance for your assistance.
|
Analysis 1 and Analysis 2 by Terence Tao.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 11,
"answer_id": 7
}
|
Proof that $\sin10^\circ$ is irrational Today I was thinking about proving this statement, but I really could not come up with an idea at all. I want to prove that $\sin10^\circ$ is irrational. Any ideas?
|
Suppose that $x=\sin(10^\circ)$ and we want to prove the irrationality of $x$. Then we can use the Triple Angle Formula for $\sin$ to get $-4x^3+3x = \sin(30^\circ)=\frac12$; in other words, $x$ is a solution of the equation $-8x^3+6x-1=0$.
But now that we have this equation we can use another tool, the Rational Root Theorem : any rational root $\frac pq$ of the equation must have its numerator $p$ dividing $1$, and its denominator $q$ dividing $-8$. This implies that any rational root of the polynomial must be one of $\{\pm 1, \pm\frac12, \pm\frac14, \pm\frac18\}$; now you can test each of these values directly by plugging them in to the cubic to show that none of them is a root.
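The eight candidate checks can be done exactly with rational arithmetic (a sketch of the plugging-in step):

```python
from fractions import Fraction

def P(x):
    # the cubic -8x^3 + 6x - 1 from the triple angle formula
    return -8 * x**3 + 6 * x - 1

candidates = [Fraction(s, d) for s in (1, -1) for d in (1, 2, 4, 8)]
values = {c: P(c) for c in candidates}
has_rational_root = any(v == 0 for v in values.values())
```

None of the candidates is a root, so the cubic has no rational root, hence $\sin(10^\circ)$ is irrational.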
Alternately, if you don't want to go to that much trouble, we can use yet another tool: Eisenstein's Criterion. First, we 'flip' our equation by substituting $y=\frac1x$; we know that $x$ is finite and non-zero, so $y$ is also finite (and rational, if $x$ is) and non-zero, and we can multiply by it: rewriting our equation in terms of $y$ we get $-8\frac1{y^3}+6\frac1y-1=0$, and multiplying this by $y^3$ we get $-y^3+6y^2-8=0$. Now, Eisenstein's criteria doesn't directly apply here (because the only prime that divides our constant coefficient $8$ is $2$, but we do have $2^2=4$ dividing $8$), but we can start playing around with simple substitutions like $z=y\pm 1$, $z=y\pm2$. Trying $z=y-1$ (so $y=z+1$) first, we discover that the equation converts to $-z^3+3z^2+9z-3=0$. And now Eisenstein's Criterion does apply, with $p=3$, and we can conclude that this polynomial in $z$ (and so our polynomial in $y$, and so our polynomial in $x$) is irreducible over the rationals.
Incidentally, the fact that this particular polynomial ($-8x^3+6x-1$) is irreducible has consequences for a famous classical problem:
Define the degree of an algebraic number as the order of the minimal (i.e. lowest-order) polynomial that it's a zero of; this is one of many equivalent definitions (though the equivalence is a deep theorem in its own right). Now, since $-8x^3+6x-1$ is irreducible, its roots (and in particular, $\sin(10^\circ)$) must have degree $3$; if their degree were lower, then their minimal polynomial would be a factor of $-8x^3+6x-1$. But it's known that any number that's constructible with ruler and compass must have degree $2^n$ for some $n$; informally, compasses can take square roots but not cube roots. Since $\sin(10^\circ)$ has degree $3$, this implies that it's not constructible with ruler and compass — and in particular, that a $10^\circ$ angle isn't constructible. But we know that a $30^\circ$ angle is constructible, so it must be impossible to get from a $30^\circ$ angle to a $10^\circ$ angle. In other words, trisecting arbitrary angles is impossible with ruler and compass!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
Counting Stones If you have a bucket of stones and remove two at a time, one will be left. If you remove three at a time, two will be left. If they're removed four, five, or six at at time, then three, four, and five stones will remain. If they're removed seven at a time, no stones will be left over.
What is the smallest possible number of stones that could be in the bucket? How do you know?
|
Based on the comment below, I am adding a bit more honesty to this response:
The conditions of the problem are to find $n$ such that $n \equiv 1 \text{ (mod 2)}$, $n \equiv 2 \text{ (mod 3)}$, $n \equiv 3 \text{ (mod 4)}$, $n \equiv 4 \text{ (mod 5)}$, $n \equiv 5 \text{ (mod 6)}$, $n \equiv 0 \text{ (mod 7)}$.
$n \equiv 1 \text{ (mod 2)} \wedge n \equiv 2 \text{ (mod 3)} \Rightarrow n \equiv 5 \text{ (mod 6)}$, so the first two conditions don't really help us get anywhere. Next, $n \equiv 5 \text{ (mod 6)} \wedge n \equiv 3 \text{ (mod 4)} \Rightarrow n \equiv 11 \text{ (mod 12)}$. Lastly, $n \equiv 11 \text{ (mod 12)} \wedge n \equiv 4 \text{ (mod 5)} \Rightarrow n \equiv 59 \text{ (mod 60)}$, since the number has to end in a 4 or a 9 and 11, 23, 35, 47, 59 are the candidates. So we need to find the smallest $n$ divisible by $7$ such that $n \equiv 59 \text{ (mod 60)}$. The first candidate, $59$, is not divisible by $7$, but the second, $119 = 7 \cdot 17$, is — so the answer is $119$.
First, I had solved it computationally:
two_store = vector()
three_store = vector()
four_store = vector()
five_store = vector()
six_store = vector()
for (i in 1:100) {
  cur = 7 * i
  if (cur %% 2 == 1) {
    two_store = c(two_store, cur)
  }
  if (cur %% 3 == 2) {
    three_store = c(three_store, cur)
  }
  if (cur %% 4 == 3) {
    four_store = c(four_store, cur)
  }
  if (cur %% 5 == 4) {
    five_store = c(five_store, cur)
  }
  if (cur %% 6 == 5) {
    six_store = c(six_store, cur)
  }
}
two_three = intersect(two_store, three_store)
four_five = intersect(four_store, five_store)
two_five = intersect(two_three, four_five)
total = intersect(two_five, six_store)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Last 2 digits of $\displaystyle 2014^{2001}$ How to find the last 2 digits of $2014^{2001}$? What about the last 2 digits of $9^{(9^{16})}$?
|
Hint:
For finding the last two digits you need to reduce this modulo $100$. That is, you need to find
$$2014^{2001} \equiv ? \pmod{100}.$$
This is the same as asking
$$14^{2001} \equiv ? \pmod{100}.$$
Now in order to facilitate computation, you need to use Euler's Theorem. But keep in mind that $\gcd(14,100) = 2.$ So you need to adjust things a bit.
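The results can be confirmed with fast modular exponentiation — Python's three-argument `pow` handles even the tower $9^{(9^{16})}$ instantly (a sketch, confirming rather than replacing the hint above):

```python
# Last two digits = value mod 100.
last2_a = pow(2014, 2001, 100)   # same as 14^2001 mod 100
last2_b = pow(9, 9**16, 100)     # the exponent 9^16 is an exact integer
```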
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Proof that an involutory matrix has eigenvalues 1,-1 I'm trying to prove that an involutory matrix (a matrix where $A=A^{-1}$) has only eigenvalues $\pm 1$.
I've been able to prove that $det(A) = \pm 1$, but that only shows that the product of the eigenvalues is equal to $\pm 1$, not the eigenvalues themselves.
Does anybody have an idea for how the proof might go?
Thanks.
|
You can easily prove the following statement:
Let $f: V\to V$ be an endomorphism. If $\lambda$ is an eigenvalue of $f$, then $\lambda^k$ is an eigenvalue of $\underbrace {f\ \circ\ ...\ \circ f}_{k \text{ times}}$
In this case, let $A$ be the matrix of an endomorphism $f$ such that $f\circ f = I$. This means that $A$ is an involutory matrix (because $AA=I$). So if $\lambda$ is an eigenvalue of $f$, then $\lambda ^2$ is an eigenvalue of $f \circ f = I$. The only eigenvalue of the identity function is $1$, so $\lambda^2 = 1$, meaning that $\lambda = \pm1$.
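A concrete check with a non-trivial involutory $2\times 2$ matrix (an illustrative example, using the characteristic polynomial $\lambda^2 - \operatorname{tr}(A)\lambda + \det(A)$):

```python
import math

A = [[3, -4], [2, -3]]  # satisfies A·A = I, so A is involutory

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eigenvalues = sorted([(tr - disc) / 2, (tr + disc) / 2])
```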
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 6,
"answer_id": 4
}
|
Probability that last child is a boy Johnny has 4 children. It is known that he has more daughters
than sons. Find the probability that the last child is a boy.
I let A be the event that the last child is a boy, P(A) = $\frac{1}{2}$.
and B be the event that he as more daughters than sons. But im not sure how to calculate P(B) and what are the subsequent steps to take after.
Appreciate any help.
Thanks
|
The number of girls in the family would have a binomial distribution, so the prior probability that there are 3 or 4 girls in the family would be:
$$\begin{align}
\mathsf P(B) & = {4\choose 3}(\tfrac 1 2)^3(\tfrac 1 2)+{4\choose 4}(\tfrac 1 2)^4
\\ & = \frac 5{16}
\end{align}$$
Now, the probability that the last child in the family is a boy and that there are more girls than boys in the family equals the prior probability that the first three children are girls and the last is a boy:
$$\begin{align}
\mathsf P(A\cap B) & = \frac{1}{16}
\end{align}$$
Thus the posterior probability that the last child is a boy, given that there are more girls in the family than boys, is:
$$\begin{align}
\mathsf P(A\mid B) & = \frac{\mathsf P(A\cap B)}{\mathsf P(B)}
\\ & = {\frac 1 {16}}\bigg/\frac 5 {16}
\\ & = \dfrac 1 5
\end{align}$$
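The $\tfrac15$ can also be confirmed by enumerating all $2^4$ equally likely birth sequences (a sketch):

```python
from fractions import Fraction
from itertools import product

sequences = list(product("GB", repeat=4))          # all 16 equally likely families
more_girls = [s for s in sequences if s.count("G") > s.count("B")]
last_is_boy = [s for s in more_girls if s[-1] == "B"]
p = Fraction(len(last_is_boy), len(more_girls))
```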
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 2
}
|
$(-3)^{3/2} \neq (-3)^{6/4}$ $(-3)^{\frac{3}{2}}=-3\sqrt{3}i$
$(-3)^{\frac{6}{4}}=\sqrt{27}$
(not the same thing).
What's the deal? It's interesting because people work with fractional exponents all the time and I've never seen someone bother to check whether the top and bottom maintain their parity when canceling, but clearly it makes a difference if you can have a negative base.
More precisely, how are exponents (especially of negative numbers) defined (in a rigorous sense), so that I can understand the problem here?
I don't think the solution is just to form a convention in which you simplify as much as possible before doing operations. I know it would give consistent results, but by the same reasoning, we could have chosen $0! = 0$. We chose not to make it that way for good reason: there are many applications in which $0! = 1$ is the only elegant possibility. Having asked that... does anyone know of applications of this sort of thing?
|
You have to be careful with these kind of things if your base is not a non-negative real number. For example,
$$1=1^{1/2}=[(-1)^2]^{1/2}\overset{?}{=}(-1)^{2\cdot\frac{1}{2}}=(-1)^1=-1.$$ The fallacious step is the one marked, where $(a^b)^c=a^{bc}$ is applied to a negative base.
The reason for this can be found when you look at the "true" definition of $x\mapsto a^x$ when $a\in\mathbb{C}\setminus[0,\infty)$. We define this as:
$$a^x:=e^{x\log a}$$
which of course requires some kind of definition for the logarithm. The usual one is
$$\log x:=\log|x|+i\arg x$$
which is of course multi-valued since $\arg$ is. We thus don't have, in general, $(ab)^c=a^cb^c$ or $(a^b)^c=a^{bc}$.
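With the principal branch ($\arg \in (-\pi,\pi]$), both $(-3)^{3/2}$ and $(-3)^{6/4}$ give the same value; the discrepancy in the question comes from instead evaluating $((-3)^6)^{1/4} = 729^{1/4}$, i.e. taking the power first. A sketch:

```python
import cmath
import math

# (-3)^(3/2) via the principal branch: exp(1.5 * log(-3)), log(-3) = ln 3 + i*pi
principal = cmath.exp(1.5 * cmath.log(-3))   # equals -3*sqrt(3)*i

# Taking the 6th power first, then the real 4th root: 729^(1/4) = sqrt(27)
root_first = ((-3) ** 6) ** 0.25
```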
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1037934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is $\mathbb{C}\bigotimes_\mathbb{R}\mathbb{C}\simeq \mathbb{C}\bigotimes_{\mathbb{C}}\mathbb{C}$? I'm trying to see if for several cases changing the ring in a tensor product affects the result or doesn't. Now I'm trying to prove $\mathbb{C}\bigotimes_\mathbb{R}\mathbb{C}\simeq \mathbb{C}\bigotimes_{\mathbb{C}}\mathbb{C}$ if it's true, or to show why it isn't.
I've been unable to find an isomporphism between those two, but I don't know how would I proceed in order to show that there is no possible function that could define one.
|
There is an isomorphism of rings $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C}$ (Hint: Use $\mathbb{C}=\mathbb{R}[x]/(x^2+1)$ and then CRT), but $\mathbb{C} \otimes_{\mathbb{C}} \mathbb{C} = \mathbb{C}$. So these are not isomorphic rings, since $\mathbb{C} \times \mathbb{C}$ has zero divisors for example. But they are isomorphic $\mathbb{Q}$-vector spaces, since the dimension is $c$ (continuum) in each case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Any object in a locally noetherian Grothendieck category has a noetherian subobject If $\mathcal{A}$ is a locally noetherian Grothendieck category, is that straightforward the fact that any object $M$ in $\mathcal{A}$ has a noetherian subobject?
|
Yes: Suppose $\{M_i\}_{i\in I}$ is a generating set of Noetherian objects in the given locally Noetherian Grothendieck category ${\mathscr A}$. Then for any nonzero $X\in{\mathscr A}$ there exists some $i\in I$ and a non-zero morphism $\varphi: M_i\to X$. The image of this morphism is a nonzero Noetherian subobject of $X$.
Even more: $X$ is the direct limit of the direct system of Noetherian subobjects.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Secret Santa Perfect Loop problem
*
*(n) people put their name in a hat.
*Each person picks a name out of the hat to buy a gift for.
*If a person picks out themselves they put the name back into the hat.
*If the last person can only pick themselves, then the loop is invalid: either start again, or step back until a valid loop can be reached.
What is the probability that if n is 33 that the chain creates a perfect loop?
An example of a perfect loop where n is 4:
*
*A gives to B
*B gives to C
*C gives to D.
*D gives to A.
An example of a valid but not perfect loop where n is 4:
*
*A gives to B
*B gives to A
*C gives to D.
*D gives to C.
|
You are asking for the chance of a single cycle given that you have a derangement. For $n$ people, the number of derangements is the closest integer to $\frac {n!}e$. To have a single cycle, person $1$ has $n-1$ choices, then that person has $n-2$ choices, then that person has $n-3$, etc. So there are $(n-1)!$ cycles. The probability is then (just about) $\frac{(n-1)!}{n!/e}=\frac e{n}$.
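The exact value for $n=33$ can be computed with the derangement recurrence $D_n = (n-1)(D_{n-1}+D_{n-2})$ (a sketch, not from the original answer):

```python
from fractions import Fraction
import math

def derangements(n):
    d = [1, 0] + [0] * (n - 1)  # D_0 = 1, D_1 = 0
    for k in range(2, n + 1):
        d[k] = (k - 1) * (d[k - 1] + d[k - 2])
    return d[n]

n = 33
p = Fraction(math.factorial(n - 1), derangements(n))  # P(single cycle | derangement)
approx = math.e / n
```

The exact ratio agrees with $e/n$ to many decimal places.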
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
}
|
Symbols for "odd" and "even" Let $A$ be a sequence of letters $\langle a,b,c,d,e,f \rangle$. I want to create two subsequences, one with the values with odd index and other with the values with even index: $A_\mathrm{odd} = \langle a,c,e \rangle$ and $A_\mathrm{even} = \langle b,d,f \rangle$.
My question is: is there any usual symbol that could substitute the words "odd" and "even" in the name of the subsequence?
Thanks!
|
How about $A_\mathcal O$ and $A_\mathcal E$?
To produce these: A_\mathcal O and A_\mathcal E
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 0
}
|
$A$-stability of Runge-Kutta methods I am studying Runge-Kutta methods, but I can't understand why explicit Runge-Kutta methods are not $A$-stable. Can someone please explain it to me?
|
First, recall the definition of A-stability in the context of Dahlquist's test equation
\begin{align}
u(t_0 = 0) &= 1\\
u'(t) &= \lambda u(t) =: f\big(u(t)\big) \tag{1} \label{1}
\end{align}
which reads:
A method is called A-stable if $\forall z = \lambda \Delta t : \text{Re}(z) \leq 0$ it holds that
$$\vert u_{n+1} \vert \leq \vert u_n \vert \quad \forall \: n \tag{2} \label{2}$$
where $u_n$ denotes the approximation to $u(t)$ at the $n$'th timestep $t_n = n \Delta t$.
A Runge-Kutta method computes the next iterand $u_{n+1}$ as
$$u_{n+1} = u_n + \Delta t \sum_{i=1}^S b_i k_i \tag{3} \label{3} $$
and the stages $k_i$ are for autonomous $f \neq f(t) = f\big(u(t) \big)$ given by
$$k_i = f\left(u_n + \Delta t \sum_{j=1}^{i-1}a_{ij} k_j \right). \tag{4} \label{4}$$
For the test equation \eqref{1}, \eqref{4} simplifies to
$$k_i = \lambda \left(u_n + \Delta t \sum_{j=1}^{i-1}a_{ij} k_j \right) = \lambda u_n + \Delta t \lambda \sum_{j=1}^{i-1}a_{ij} k_j . \tag{5} \label{5}$$
It is instructive to take a look at the first stages:
\begin{align}
k_1 &= \lambda u_n \tag{6}\\
k_2 &= \lambda u_n + \Delta t \lambda a_{21} k_1=\lambda u_n + \Delta t \lambda a_{21} \lambda u_n = \big(\lambda + \Delta t \lambda^2 a_{21}\big) u_n\tag{7}\\
k_3 &= \lambda u_n + \Delta t \big(\lambda a_{31} k_1 + \lambda a_{32} k_2 \big) \tag{8}\\&= \lambda u_n + \Delta t \lambda a_{31} \lambda u_n + \Delta t \lambda a_{32} \big(\lambda u_n + \Delta t \lambda a_{21} \lambda u_n\big) \tag{9}\\
&= \Big(\lambda + \Delta t \lambda^2 a_{31} + \Delta t \lambda a_{32} \big(\lambda + \Delta t \lambda a_{21} \lambda \big) \Big)u_n. \tag{10}
\end{align}
In particular, $k_i$ is a polynomial of order $i-1$ in $\Delta t$ (a rigorous proof can be done through induction) where all coefficients contain $u_n$.
The state update \eqref{3} can thus be written as
$$u_{n+1} = u_n + \sum_{i=1}^S b_i p_i(\lambda \Delta t) u_n = u_n \tilde{p}_S(\lambda \Delta t)\tag{11}$$
where $p_i(\lambda \Delta t), \tilde{p}_S(\lambda \Delta t)$ denote a (general) polynomial in $\lambda \Delta t$ of degree $i$ or $S$, respectively.
To satisfy \eqref{2}, it is clear that we need that
$$ \vert \tilde{p}_S (\lambda \Delta t) \vert \leq 1 \quad \forall \: \lambda \Delta t: \text{Re}(\lambda \Delta t) \leq 0\tag{12}$$
However, every non-constant polynomial $\tilde{p}_S(z)$ tends to $\infty$ in absolute value as $\text{Re}(z) \to -\infty$, so \eqref{2} can only hold on a bounded set of values $\lambda \Delta t$; hence no explicit Runge-Kutta method is A-stable.
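This can be seen concretely from the stability polynomials of explicit Euler, $\tilde{p}_1(z)=1+z$, and classical RK4, $\tilde{p}_4(z)=1+z+\tfrac{z^2}{2}+\tfrac{z^3}{6}+\tfrac{z^4}{24}$ (standard examples, not derived above): both exceed $1$ in magnitude for sufficiently negative real $z$. A sketch:

```python
def R_euler(z):
    return 1 + z

def R_rk4(z):
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Stable for small |z| on the negative real axis...
small = abs(R_euler(-1.0)), abs(R_rk4(-1.0))
# ...but the polynomial blows up for large negative z: not A-stable.
large = abs(R_euler(-10.0)), abs(R_rk4(-10.0))
```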
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
possible pizza orders You are ordering two pizzas. A pizza can be small, medium, large, or extra large, with any combination of 8 possible toppings (getting no toppings is allowed, as is gettting all 8). How many possibilities are there for your two pizzas?
Would it be ${\large[}4{\large[}{8\choose8}+{8\choose7}+{8\choose6}+{8\choose5}+{8\choose4}+{8\choose3}+{8\choose2}+{8\choose1}+{8\choose0}{\large]}{\large]}^2$
|
Note that this is problem 14 in chapter 1 of Introduction to Probability by Blitzstein and Hwang.
The problem amounts to sampling with replacement where the order doesn't matter, which is similar to problem 13, and the hint there (use Bose-Einstein) applies here too. This is also known as the "stars and bars" method.
Imagine an order form with $4 \times 2^8$ = 1024 columns for each distinct kind of pizza. With 2 pizzas there are $\binom{1024+2-1}{2}$ = $\binom{1025}{2}$ unique orders.
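The count can be double-checked by splitting into "two different pizzas" plus "two identical pizzas" (a sketch):

```python
from math import comb

kinds = 4 * 2**8                         # 1024 distinct pizzas
stars_and_bars = comb(kinds + 2 - 1, 2)  # multisets of size 2 from 1024 kinds
direct = comb(kinds, 2) + kinds          # two distinct pizzas, or two the same
```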
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Eigendecomposition Parameterization of Real Matrix Given a set of distinct non-real eigenvalues $\lambda_1, \dots, \lambda_N$, so that $\lambda_{2n} = \overline{\lambda_{2n+1}}$. Accordingly given a set of non-real orthonormal eigenvectors $v_1, \dots, v_N$, so that $v_{2n} = \overline{v_{2n+1}}$. (N is even.)
We define $V = [v_1, \dots, v_N]$ and $\Lambda = \textrm{diag}(\lambda_1, \dots, \lambda_N)$.
Is the matrix $V \Lambda V^{-1}$ real?
|
First, one minor observation. I believe you meant $v_{2n}=\overline{v_{2n-1}}$ (instead of $\overline{v_{2n+1}}$), otherwise $v_1$ is not the conjugate of any vector.
Now, write $v_{2n-1}=a_{2n-1}-ia_{2n}$ and $v_{2n}=a_{2n-1}+ia_{2n}$, where $a_{2n-1},a_{2n}\in\mathbb{R}^n$.
Notice that $a_{2n-1}=\dfrac{v_{2n-1}+v_{2n}}{2}$ and $a_{2n}=\dfrac{i(v_{2n-1}-v_{2n})}{2}$.
Now $V\Lambda V^{-1}a_{2n-1}=\dfrac{\lambda_{2n-1}v_{2n-1}+\lambda_{2n}v_{2n}}{2}=\dfrac{\lambda_{2n-1}v_{2n-1}+\overline{\lambda_{2n-1}v_{2n-1}}}{2}\in\mathbb{R}^n$
$V\Lambda V^{-1}a_{2n}=\dfrac{i\lambda_{2n-1}v_{2n-1}-i\lambda_{2n}v_{2n}}{2}=\dfrac{i\lambda_{2n-1}v_{2n-1}+\overline{i\lambda_{2n-1}v_{2n-1}}}{2}\in\mathbb{R}^n$.
Notice that $v_1,\ldots,v_n$ are linear combinations of $a_1,\ldots,a_n$. Thus, $\{a_1,\ldots,a_n\}$ spans $\mathbb{C}^n$, so these $n$ vectors are linearly independent over the complex numbers. In particular they are linearly independent over the real numbers, and they form a basis of $\mathbb{R}^n$.
Let $e_1,\ldots,e_n$ be the canonical basis of $\mathbb{R}^n$. Thus, $e_i=\sum_{j=1}^n\beta_{ij}a_j$, where $\beta_{ij}\in\mathbb{R}$.
Thus, $V\Lambda V^{-1}e_i=\sum_{j=1}^n\beta_{ij}V\Lambda V^{-1}a_j\in\mathbb{R}^n$.
Finally notice that $V\Lambda V^{-1}e_i$ is the column $i$ of $V\Lambda V^{-1}$. Thus, $V\Lambda V^{-1}$ is a real matrix.
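A minimal $2\times 2$ instance with the illustrative choices $\lambda_1=i$, $\lambda_2=-i$, $v_1=(1,i)$, $v_2=\overline{v_1}$ (any scaling of orthonormal eigenvectors gives the same $V\Lambda V^{-1}$):

```python
V = [[1, 1], [1j, -1j]]        # columns v1 = (1, i) and v2 = conj(v1)
Lam = [[1j, 0], [0, -1j]]      # Lambda = diag(i, -i)

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Inverse of the 2x2 matrix V
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det, V[0][0] / det]]

A = matmul(matmul(V, Lam), Vinv)
max_imag = max(abs(entry.imag) for row in A for entry in row)
```

Here $A$ comes out as the real rotation-like matrix $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$.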
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Simple indefinite integral of a vector function I am having trouble with this simple integration. I am not sure of the process or steps to follow to solve this type of problem:
If $\mathbf{V}(t)$ is a vector function of $t$, find the indefinite integral:
$$\int \Big( \mathbf{V} \times \frac{d^2\mathbf{V}}{dt^2}\Big)\hspace{1mm}dt$$
My intuition is to use integration by parts, but I'm not sure how to do this with a cross product. I am currently learning only basic triple products, and this is listed as a "challenge problem". How does one integrate a cross product?
|
I am doubtful that this problem can be solved by integration by parts. Even if possible, it would be tedious, since you would have to separate everything into directional components.
Step 1:
Remember what differentiation of a cross product looks like:
$\frac{d (V \times U)}{dt} = \frac{dV}{dt} \times U + V \times \frac{dU}{dt}$. Therefore, you can reasonably suspect that $V \times \frac{d^2V}{dt^2}$ is a differential of two vectors.
Step 2:
Take a guess. Since $\frac{d (V \times U)}{dt}$ has a term $V \times \frac{dU}{dt}$, it's reasonable to think that $\frac{dU}{dt} = \frac{d^2V}{dt^2}$, hence, $U =\frac{dV}{dt}$.
Step 3: Try it out!
$\frac{d (V \times \frac{dV}{dt})}{dt} = \frac{dV}{dt} \times \frac{dV}{dt} + V \times \frac{d}{dt}\left(\frac{dV}{dt}\right) = 0 + V\times\frac{d^2V}{dt^2}$, since the cross product of any vector with itself is the zero vector.
Step 4: Conclude.
Therefore, $\int \Big( \mathbf{V} \times \frac{d^2\mathbf{V}}{dt^2}\Big)\hspace{1mm}dt = V\times\frac{dV}{dt} + C$.
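A finite-difference sanity check of Step 3 with an arbitrary test curve $V(t) = (\cos t,\, \sin t,\, t^3)$ (an illustrative choice): the derivative of $V\times V'$ should match $V\times V''$.

```python
import math

def V(t):
    return (math.cos(t), math.sin(t), t**3)

def dV(t):
    return (-math.sin(t), math.cos(t), 3 * t**2)

def d2V(t):
    return (-math.cos(t), -math.sin(t), 6 * t)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def W(s):
    return cross(V(s), dV(s))  # candidate antiderivative V x V'

t, h = 0.5, 1e-5
numeric = [(w1 - w0) / (2 * h) for w0, w1 in zip(W(t - h), W(t + h))]
exact = cross(V(t), d2V(t))    # the integrand
err = max(abs(n - e) for n, e in zip(numeric, exact))
```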
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to solve $\cos(5\alpha + \pi/2) = \cos(2\alpha + \pi/8)$ for $a$? I missed the lecture. I don't want you to solve my homework, I just want to learn how to solve equations like this one. Since I have no idea, I'll post the task I got for homework, rather than obfuscating it beyond recognition. Please give general directions on how to solve equations like this one.
$$\cos(5\alpha + \pi/2) = \cos(2\alpha + \pi/8)$$
|
Use the $\cos C - \cos D$ sum-to-product formula: $\cos C-\cos D=-2\sin\frac{C+D}{2}\sin\frac{C-D}{2}$, so the equation reduces to a product of sines equal to zero.
You can also solve it graphically: you know the graph of $\cos x$, so apply transformations to draw the LHS and RHS; the intersection points are the solutions.
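Working out the standard identity $\cos C=\cos D\iff C=\pm D+2\pi k$ for this equation gives two solution families, $\alpha=-\pi/8+2\pi k/3$ and $\alpha=-5\pi/56+2\pi k/7$ (my own quick derivation, worth re-checking). A numerical verification:

```python
import math

# Families: 5a + pi/2 =  (2a + pi/8) + 2*pi*k  ->  a = -pi/8   + 2*pi*k/3
#           5a + pi/2 = -(2a + pi/8) + 2*pi*k  ->  a = -5*pi/56 + 2*pi*k/7
ok = True
for k in range(-3, 4):
    for a in (-math.pi/8 + 2*math.pi*k/3, -5*math.pi/56 + 2*math.pi*k/7):
        ok = ok and abs(math.cos(5*a + math.pi/2) - math.cos(2*a + math.pi/8)) < 1e-12
print(ok)  # True
```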
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Basis and dimension of the subspace of solutions to $A\mathbf{x}=\mathbf{0}$
Consider
$$
A =\left( \begin{matrix}
1 & -1 & 0 & -2 \\
0 & 0 & 1 & -1 \\
\end{matrix}
\right)
$$
and find a basis and the dimension of $S(A,0)$, where $S(A,0)$ is the subspace of all the solutions $\mathbf{x}\in\mathbb{R}^n$ to the linear equations $A\mathbf{x}=\mathbf{0}$.
We need to solve:
$$w-x-2z=0$$
$$y-z=0$$
So for any $\lambda,\mu\in\mathbb{R}$ we get $w=\lambda$, $x=\lambda - 2\mu$, $y=\mu$, $z=\mu$.
Does this mean that $S(A,0)$ is spanned by two vectors? For example;
*
*$\lambda = 1, \mu = 0$ gives the vector $(1,1,0,0)$
*$\lambda = 0, \mu = 1$ gives the vector $(0,-2,1,1)$
So then the dimension of $S(A,0)$ would be 2?
|
It's worth mentioning that one can easily check what the dimension of $S$ (usually called the null space) is going to be.
For an $m \times n$ matrix $A$, the rank-nullity theorem states that $\operatorname{Rank}(A) + \operatorname{Null}(A) = n$.
In this case, $n = 4$, and $\operatorname{Rank}(A) = 2$ since it is already in echelon form.
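The proposed basis and the rank-nullity count can be checked directly in plain Python (no libraries):

```python
A = [[1, -1, 0, -2],
     [0,  0, 1, -1]]
basis = [(1, 1, 0, 0), (0, -2, 1, 1)]

# Each proposed basis vector solves A x = 0.
for v in basis:
    assert all(sum(row[i] * v[i] for i in range(4)) == 0 for row in A)

# A is in echelon form with 2 nonzero rows, so rank(A) = 2 and
# nullity = n - rank = 4 - 2 = 2, matching the two (independent) vectors.
print(len(basis) == 4 - 2)  # True
```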
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1038984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to simpify $\cos x - \sin x$ How does one simplify
$$\cos x - \sin x$$
I tried multiplying by $\cos x + \sin x$, but that just gets me $$\cos x - \sin x = \frac{\cos 2x}{\cos x + \sin x}$$ which is worse.
Yet wolframalpha gives me $\cos x - \sin x = \sqrt{2}\sin\left(\dfrac{\pi}{4}-x\right)$. How does one obtain this algebraically?
|
$$
s = \cos x - \sin x \\
s^2 = \cos^2 x - 2 \cos x \sin x + \sin^2 x = 1 - \sin 2x \\
= 1 - \cos (\frac{\pi}2 -2 x)\\
= 1 - \left(1 - 2 \sin^2(\frac{\pi}4 - x)\right)\\
=2 \sin^2(\frac{\pi}4 - x)
$$
so
$$
s = \pm \sqrt{2} \sin(\frac{\pi}4 - x)
$$
and evaluating at $x=0$ shows that the positive sign must be taken there. Since both sides are continuous, vanish only at $x=\frac{\pi}4+k\pi$, and change sign at each such zero, agreement at $x=0$ forces the positive sign for all $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
Prove $8n^3+\sqrt{n}\in\Theta(n^3)$ just wondering if I proved this question correctly. Any hints, help, or comments would be appreciated.
There are two cases to consider to prove $8n^3+\sqrt{n}\in\Theta(n^3)$:
*
*$8n^3+\sqrt{n}\in O(n^3)$
*$8n^3+\sqrt{n}\in \Omega(n^3)$
1.)
There should exist constants $c > 0$ and $k$ such that $8n^3+\sqrt{n} < cn^3$ for every $n > k$.
In this case consider $c = 9$; then the inequality $8n^3+\sqrt{n} < 9n^3$, i.e. $\sqrt{n} < n^3$, must hold eventually.
Therefore, taking $k = 1$, we get $8n^3+\sqrt{n}\in O(n^3)$, because the inequality $8n^3+\sqrt{n} < 9n^3$ holds for every $n > k$.
2.)
There should exist constants $d > 0$ and $j$ such that $8n^3+\sqrt{n} > dn^3$ for every $n > j$.
In this case consider $d = 8$; then the inequality $8n^3+\sqrt{n} > 8n^3$, i.e. $\sqrt{n} > 0$, must hold.
Therefore, taking $j = 0$, we get $8n^3+\sqrt{n}\in \Omega(n^3)$, because the inequality $8n^3+\sqrt{n} > 8n^3$ holds for every $n > j$.
|
An alternative is to use the limit test. Consider $f(n) = 8n^{3} + \sqrt{n}$ and:
$$L = \lim_{n \to \infty} \frac{f(n)}{n^{3}}$$
Note $f(n) \in o(n^{3})$ iff $L = 0$ (and little-o is the strict inequality, which implies Big-O).
Similarly, $0 < L < \infty \implies f(n) \in \Theta(n^{3})$.
And finally, $L = \infty \implies f(n) \in \omega(n^{3}) \implies f(n) \in \Omega(n^{3})$. Little-omega is also the strict inequality of Big-Omega.
I think this is easier, but your proof is valid.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find sequence of differentiable functions $f_n$ on $\mathbb{R}$ that converge uniformly, but $f'_n$ converges only pointwise Question:
Find a sequence of differentiable functions $f_n$ on $\mathbb{R}$ that converge uniformly to a differentiable function $f$, such that $f'_n$ converges pointwise but not uniformly to $f'$.
Attempt:
I have tried a number of possibilities, such as $f_n=x^n$ or $f_n=\frac{x^n}{n}$ but I don't know what the right approach is to construct the function. I am initially thinking that it's easiest to construct such a sequence of functions on the interval $[0,1]$ so that in the limit of $n$, part of the function goes to $0$ and the other part goes to $1$. However, this would make the resulting $f$ non-differentiable.
|
Let $f_n(x)=0$ if $|x|\ge 1/n.$ For $|x|<1/n$ let $f_n(x)=n^3(x^2-1/n^2)^2.$
$|f_n(x)|\le 1/n$ for all $x$ so $f_n$ converges uniformly to $f=0.$ So $f'=0.$
It is easy to confirm that $f'_n(x)$ exists when $x=\pm 1/n.$
$-1=\frac {f_n(1/n)-f_n(0)}{1/n-0}=f'_n(y_n)$ for some $y_n\in (0,1/n)$ so $f'_n$ does not converge uniformly to $0=f'.$
$f'_n(0)=0=f'(0)$ for every $n.$
If $x\ne 0$ then $\{n\in\Bbb N: f'_n(x)\ne 0\}\subseteq\{n\in\Bbb N:n<1/|x|\}$ is a finite set, so $f'_n(x)\to 0=f'(x)$ pointwise.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
}
|
Combinatorial identity with sum of binomial coefficients How to attack this kinds of problem? I am hoping that there will some kind of shortcuts to calculate this.
$$\sum_{k=0}^{38\,204\,629\,939\,869} \frac{\binom{38\,204\,629\,939\,869}{k}}{\binom{76\,409\,259\,879\,737}{k}}\,.$$
EDIT:
As I see, the numerator is $n \choose k$ and the denominator is ${2n-1} \choose k$, where $n =38\,204\,629\,939\,869$. i.e $$\sum_{k=0}^n {\frac {n \choose k} {{2n-1} \choose k}} = 2.$$
|
According to the Gosper's algorithm (Maxima command
AntiDifference(binomial(n,k)/binomial(2*n-1,k),k),
also implemented in Mathematica and Maple):
$$
{\frac {n \choose k} {{2n-1} \choose k}}
=
{{\left((k+1)-2n\right){{n}\choose{k+1}}}\over{n{{2n-1}\choose{k+1
}}}}
-{{\left(k-2n\right){{n}\choose{k}}}\over{n{{2n-1}\choose{k}}}}
$$
and the sum telescopes :
$$
\sum_{k=0}^n{\frac{n \choose k}{{2n-1} \choose k}}
=
\sum_{k=0}^n\left({{\left((k+1)-2n\right){{n}\choose{k+1}}}\over{n{{2n-1}\choose{k+1}}}}
-{{\left(k-2n\right){{n}\choose{k}}}\over{n{{2n-1}\choose{k}}}}\right)=
{{\left(1-n\right){{n}\choose{n+1}}}\over{n{{2n-1}\choose{n+1}}}}-
{{\left(-2n\right){{n}\choose{0}}}\over{n{{2n-1}\choose{0}}}}=0-(-2)=2
$$
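The telescoped value $2$ can be confirmed in exact rational arithmetic for several values of $n$:

```python
from fractions import Fraction
from math import comb

for n in (1, 2, 3, 10, 50):
    s = sum(Fraction(comb(n, k), comb(2*n - 1, k)) for k in range(n + 1))
    assert s == 2   # exact rational arithmetic, no rounding
print("sum is exactly 2 for all tested n")
```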
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 3
}
|
Resolvent matrix Suppose $A$ is a triangular matrix. What is the most efficient known algorithm to compute the polynomial (in $x$) matrix $(xI-A)^{-1}$?
Of course, $(xI-A)^{-1}= N(x)/p_A(x)$, where $p_A$ is the characteristic polynomial of $A$, which is easy to compute once we know an eigendecomposition of $A$. But what about $N(x)$?
I am aware of the Leverrier-Fadeev algorithm, which requires $O(n^4)$ operations if $A$ is $n\times n$. Moreover, it makes use of power iteration, which can lead to numerical instability.
|
Since $A$ is triangular, you may try to first diagonalize it, $A=PDP^{-1}$. You already know what the eigenvalues of $A$ are. Then, $$(xI-A)^{-1} = (P(xI-D)P^{-1})^{-1} = P(xI-D)^{-1}P^{-1}$$ and $(xI-D)^{-1}$ is trivial to calculate. Does this help?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
New proof that a normal matrix is diagonalizable. I am trying to prove that a normal matrix is diagonalizable.
I found that $A^*A$ is a hermitian matrix, and I know that hermitian matrices are diagonalizable.
I cannot get any further. I want to prove the statement using only this fact.
I need your help.
(The professor said that we can prove it using only this fact.)
|
This Wikipedia article contains a sketch of a proof. It has three steps.
*
*If a normal matrix is upper triangular, then it's diagonal. (Proof: show the upper left corner is the only nonzero entry in that row/column using a matrix-norm argument; then use induction.)
Details of proof: write $A$ as $Q T Q^{-1}$ for some unitary matrix $Q$, where $T$ is upper triangular. From $A A^{*} = A^{*} A$, conclude that $T T^{*} = T^{*} T$. Observe that the left hand side is the matrix whose $ij$ entry is $\langle t_i, t_j\rangle$, where $t_i$ is the $i$th column of $T$. The right hand side has an $ij$ entry that's $\langle s_i, s_j\rangle$, where $s_i$ is the $i$th row.
Considering $i = 1$, you can conclude that the norm of the first row is the same as the norm of the first column, so all the non-diagonal entries of that row must be zero. Now look at the $(2,2)$ entries of the two matrices: the second column has the form $(0, t_{2,2}, 0, \ldots, 0)$; the second row may have nonzero entries after the second. But the norms of these two vectors must be equal, so all those potentially nonzero entries must be zero. Continue in this vein until you've shown that all off-diagonal entries are zero.
*Show that every matrix is (unitarily) similar to an upper triangular one [that's called the Schur Decomposition], and that the similarity doesn't change "normality": if $A$ is similar to upper-triangular $T$, and $A$ is normal, then so is $T$.
*Conclude that a normal matrix is similar to an upper-triangular normal matrix, which is necessarily diagonal, by step 1, so you're done.
I know that's not the proof you asked for, but as @lhf points out, your proposed proof goes via a route that doesn't require normality, so it can't possibly work.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
}
|
Continuous functions satisfying $f(x)+f(2x)=0$? I have to find all the continuous functions from $\mathbb{R}$ to $\mathbb{R}$ such that for all real $x$,
$$f(x)+f(2x)=0$$
I have shown that $f(2x)=-f(x)=f(x/2)=-f(x/4)=\cdots$ etc. and I have also deduced from the definition of continuity that for any $e>0$, there exists a $d>0$ so if we have that:
$|2x-x|=|x|< d$, this implies that
$|f(2x)-f(x)|=|-2f(x)| < e$.
Is this the correct way to begin? And if so, how should I continue?
Thank you!
|
first of all for $x=0$ we have
$$f(0)+f(2\cdot0)=0\Leftrightarrow f(0)=0$$
On the other hand, replacing $x$ by $x/2$ in the relation gives $f(x)=-f(x/2)$. Iterating,
$$f(x)=-f\left(\frac{x}{2}\right)=f\left(\frac{x}{4}\right)=\cdots=(-1)^nf\left(\frac{x}{2^n}\right).$$
In particular, taking even $n=2m$,
$$f(x)=f\left(\frac{x}{4^m}\right)\quad\text{for every }m\ge1.$$
Since $f$ is continuous at $0$ and $x/4^m\to0$ as $m\to\infty$, letting $m\to\infty$ yields
$$f(x)=\lim_{m\to\infty}f\left(\frac{x}{4^m}\right)=f(0)=0.$$
Hence $f(x)\equiv0$ for all $x\in\mathbb{R}$, and this function clearly satisfies the equation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Countable collection of countable sets and Axiom of choice Do we need Axiom of choice(or weaker version axiom of countable choice) to say countable Cartesian product of countable sets is nonempty? I think yes.
I read somewhere answer no giving argument: each countable set can be well ordered and after well ordering each countable set we choose least element in each to prove their Cartesian product is non empty. But I see gap in this argument because there are many ways a countable set can be well ordered. So which way we will well order sets?
|
Even when the sets have a natural well-order to them, a countable union of countable sets is not necessarily countable.
For example, in some models of $\sf ZF$ the first uncountable ordinal, $\omega_1$ is the countable union of countable ordinals.
And no, the countable product of finite sets doesn't have to be non-empty without choice, let alone that of countable sets. Not only that, it is true that the statement "every countable product of countable sets is non-empty" is strictly weaker than the axiom of countable choice.
In fact, it can be the case that every countable product of countable sets is non-empty, while there is still a countable family of countable sets whose union is not countable, because in the proof of the latter we choose from sets of size continuum, not just countable sets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Connected sum of projective plane $\cong$ Klein bottle How can I see that the connected sum $\mathbb{P}^2 \# \mathbb{P}^2$ of the projective plane is homeomorphic to the Klein bottle?
I'm not necessarily looking for an explicit homeomorphism, just an intuitive argument of why this is the case. Can we see it using fundamental polygons?
|
Here's an answer more in the spirit of the question. All figures should be read from upper left to upper right to lower left to lower right.
Fig 1: A Klein bottle...gets a yellow circle drawn on it; this splits it into two regions, which we reassemble, at which point they're obviously both Mobius bands:
To see that $P^2$ minus a disk is a Mobius band, look at the following. In the upper left is $P^2$, drawn as a fundamental polygon with sides identified. In the upper right, I've removed a disk. The boundary of the now-missing disk is drawn at the lower left as a dashed line, and the two remaining parts of the edge of the fundamental polygon are color-coded to make the matching easier to see. In the lower right, I've morphed things a bit, and if you consider the green-followed-by-red as a single edge, you can see that when you glue the left side to the right, you get a Mobius band.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 3,
"answer_id": 0
}
|
Proof by Induction - Wrong common factor I'm trying to use mathematical induction to prove that $n^3+5n$ is divisible by $6$ for all $n \in \mathbb{Z}^+$.
I can only seem to show that it is divisible by $3$, and not by $6$. This is what I have done:
Let $f(n) = n^3+5n$.
Basis Step: When $n=1$, $f(n)= 6$ and clearly $6$ divides $6$.
Assumption Step: Assume that $6$ divides $f(n)$ when $n=k$.
Inductive Step: Consider $f(k+1)-f(k)$:
$$f(k+1)-f(k) = [(k+1)^3+5(k+1)]-[k^3+5k]$$
$$=3k^2+3k+6$$
$$=3(k^2+k+2)$$
It follows that $f(k+1) = f(k)+3(k^2+k+2)$. I really wanted a common factor of $6$.
|
From your proof...
Note that if $k$ is odd, $k^2+k+2$ is even and hence divisible by $2$, and the same is true if $k$ is even (indeed $k^2+k=k(k+1)$ is a product of consecutive integers, hence even). Therefore $k^2+k+2$ is divisible by $2$ for all $k$. This gives you the extra factor of $2$ you need to get a factor of $6$.
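Both the divisibility claim and the parity fact can be checked exhaustively over a range:

```python
# 6 | n^3 + 5n, and the parity fact used above: k^2 + k + 2 = k(k+1) + 2 is even.
assert all((n**3 + 5*n) % 6 == 0 for n in range(1, 1001))
assert all((k*k + k + 2) % 2 == 0 for k in range(0, 1001))
print("both facts hold up to 1000")
```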
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1039927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving of $\frac{\pi }{24}(\sqrt{6}-\sqrt{2})=\sum_{n=1}^{\infty }\frac{14}{576n^2-576n+95}-\frac{1}{144n^2-144n+35}$ This is homework for my son; he needs the proof. I tried to solve it by residue theory but I couldn't.
$$\frac{\pi }{24}(\sqrt{6}-\sqrt{2})=\sum_{n=1}^{\infty }\frac{14}{576n^2-576n+95}-\frac{1}{144n^2-144n+35}$$
|
You can apply the residue theorem after a bit of playing with the sums:
\begin{align*}&\sum_{n=1}^\infty\frac{14}{576n^2-576n+95}-\sum_{n=1}^\infty\frac4{576n^2-576n+140}=\\&\sum_{n=1}^\infty\left(\frac1{24n-19}-\frac1{24n-5}\right)-\sum_{n=1}^\infty\left(\frac1{24n-14}-\frac1{24n-10}\right)=\\&\sum_{n=1}^\infty\left(\frac1{24n-19}+\frac1{24(1{-}n)-19}\right)-\sum_{n=1}^\infty\left(\frac1{24n-14}+\frac1{24(1{-}n)-14}\right)=\\&\sum_{n=-\infty}^\infty\frac1{24n-19}-\sum_{n=-\infty}^\infty\frac1{24n-14}=\sum_{n=-\infty}^\infty\frac5{(24n-19)(24n-14)}\end{align*}
Now consider
$$\lim_{n\to\infty}\int_{\varphi_{n+1/2}}\frac{5\pi\cot\pi z}{(24z-19)(24z-14)}dz=0,$$
where $\varphi_{n+1/2}$ is the circle of radius $n{+}\small 1/2$. The sum of all residues of the integrated function is also $0$ and the residues at points of $\mathbb Z$ gives us the terms of the sum.
But there are $2$ more, the residues at $\frac{19}{24}$ and $\frac{7}{12}$ are $\frac{5\pi\cot\frac{19}{24}\pi}{24\cdot(19-14)}$ and $\frac{5\pi\cot\frac7{12}\pi}{(14-19)\cdot 24}$ respectively, so your sum is equal to
$$-\left(\frac{\pi\cot\frac{19}{24}\pi}{24}-\frac{\pi\cot\frac7{12}\pi}{24}\right)=\frac{\pi}{24}\left(\cot\tfrac{7}{12}\pi-\cot\tfrac{19}{24}\pi\right)=\ldots=\frac{\pi}{24}(\sqrt6-\sqrt2).$$
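A numerical cross-check of the closed form (evidence, not a proof): partial sums of the original series against $\pi(\sqrt6-\sqrt2)/24$.

```python
import math

target = math.pi * (math.sqrt(6) - math.sqrt(2)) / 24
partial = sum(14 / (576*n*n - 576*n + 95) - 1 / (144*n*n - 144*n + 35)
              for n in range(1, 200000))
print(abs(partial - target) < 1e-4)  # True
```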
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Formula for perfect squares spectrum. I have been working on exercises from "A first Course in Logic" by S. Hedman.
Exercise 2.3 (d) asks to find a first-order sentence $\varphi$ having the set of perfect squares as a finite spectrum. But I am not sure whether or not my understanding of concepts of model and spectrum is correct.
My solution is:
$\varphi = (\exists x)((x^2 = b) \wedge (\forall y)(y\leq b))$, i.e., there is $x$ such that $x^2 = b$ and for all $y$, $y$ is less than or equal to a constant $b$. Hence, any set of positive integer numbers, with the maximal element that is a perfect square, models this sentence. For example,
$\{1\}, \{1,2,3,4\}, \{1,\ldots, 9\}$ and so on.
Could someone confirm my solution (am I on the right track)? Or am I missing something?
|
You can do this with the signature that has one unary relation $A(x)$ and one binary function $f(x,y)$. The sentence will say, essentially, that $f$ is a bijection between $A \times A$ and $M$.
It is much more difficult to do this sort of thing with the signature of arithmetic. You would need to include in $\phi$ several axioms about the addition and multiplication operations, the order relation, and how they are related. By comparison, the sentence obtained from the previous paragraph is relatively simple.
Remember that, for spectrum problems, you can choose any signature that you like. Choosing the right signature can make the problem much simpler.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|
Evaluate $\int_0^1 \sec^3x \sin x \,dx $ Was working on some trig-based integration. I've been confident with easier ones, but can't seem to approach this one correctly.
Evaluate $\displaystyle\int_0^1 \sec^3x \sin x \,dx $
Which method of integration should I use to solve this integral?
|
$$
\int_0^1 \sec^3(x)\sin(x)\,dx = \int_0^1\tan(x)\sec^2(x)\,dx
$$
Setting $u = \tan(x)$, so that $du = \sec^2(x)\,dx$, the integral becomes
$$
\int_{\tan(0)}^{\tan(1)} u\, du
$$
$$
= \frac{\tan^2(1)}{2}-\frac{\tan^2(0)}{2}
$$
$$= \frac{\tan^2(1)}{2}$$
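A quick numerical check of this value with the midpoint rule, in plain Python:

```python
import math

def f(x):
    return math.sin(x) / math.cos(x)**3   # sec^3(x) * sin(x)

N = 100000
h = 1.0 / N
approx = sum(f((i + 0.5) * h) for i in range(N)) * h   # midpoint rule on [0, 1]
exact = math.tan(1.0)**2 / 2
print(abs(approx - exact) < 1e-6)  # True
```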
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Covariance of uniform distribution and it's square I have $X$ ~ $U(-1,1)$ and $Y = X^2$ random variables, I need to calculate their covariance.
My calculations are:
$$
Cov(X,Y) = Cov(X,X^2) = E((X-E(X))(X^2-E(X^2))) = E(X\cdot X^2)-E(X)E(X^2) = E(X^3) = 0
$$
because
$$
E(X) = 0 \quad\text{(note that }E(X^2)=\tfrac13\neq0\text{, but it is multiplied by }E(X)=0\text{)}
$$
I'm not sure about the $X^3$ part, are my calculations correct?
|
We know it is $E(X^3)$ so:
$$E(X^3)=\int_{-1}^{1}x^3f(x)\,dx=\int_{-1}^{1}\frac{x^3}{2}\,dx=0$$
So it is correct
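The odd-symmetry argument behind $E(X^3)=0$ can be illustrated with exact rational midpoint sums on a symmetric grid (the grid size is an arbitrary choice):

```python
from fractions import Fraction

N = 2000
# Midpoints of a uniform partition of [-1, 1]; exactly symmetric about 0.
pts = [Fraction(-1) + Fraction(2*i + 1, N) for i in range(N)]

# Riemann sums with density f(x) = 1/2 and cell width 2/N: each reduces to sum/N.
ex  = sum(p for p in pts) / N        # approximates E[X]
ex3 = sum(p**3 for p in pts) / N     # approximates E[X^3]
print(ex == 0 and ex3 == 0)  # True: terms cancel in symmetric pairs, exactly
```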
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Counterexample for the infinitely many primes between two primes in a Noetherian ring Consider the following Proposition:
Proposition: Let $R$ be a noetherian ring. If $p_0 \subsetneq p_1 \subsetneq p_2$ is a chain of distinct prime ideals in $R$, then there exist infinitely many distinct primes $q$ such that $p_0 \subsetneq q \subsetneq p_2$.
For a proof, see for instance this question. I would like to see a counterexample if we drop the noetherian hypothesis. Should such a ring exist, I would find it rather interesting, because it would be an example where a "finiteness" hypothesis implies that there are infinitely many of something!
|
Consider a non-noetherian valuation ring of rank two. For such an example you can take a look at Examples of Non-Noetherian Valuation Rings.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Graph theory proofs I am trying to prove that half of the minimum vertex cover of a graph is at most its matching number. The problem is I don't know how to start or what the solution should look like; please help!
|
Let $G=(V,E)$ be a graph. Let $F\subseteq E$ be a maximum matching of $G$ and $U$ a minimum vertex cover of $G$. First, since the edges of $F$ are pairwise disjoint and each must be covered by some vertex of $U$, we get $|F|\leq |U|$. Conversely, the set $W$ of endpoints of the edges of $F$ is a vertex cover: if some edge $e=(u,v)\in E$ had neither endpoint in $W$, then $F\cup\{e\}$ would be a larger matching, contradicting maximality. Since $|W|=2|F|$, the minimum cover satisfies $|U|\leq 2|F|$, i.e. $\frac{|U|}{2}\leq |F|$, which is the claim.
Note that if our $G$ is a bipartite graph, then by König's theorem, we have an equality between the sizes.
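The two inequalities $\nu\le\tau\le2\nu$ (matching number $\nu$, minimum vertex cover $\tau$) can be brute-force checked on small graphs; a quick sketch with names of my own choosing:

```python
import random
from itertools import combinations

V = range(5)
all_edges = list(combinations(V, 2))

def is_cover(S, E):
    return all(u in S or v in S for u, v in E)

def is_matching(M):
    ends = [v for e in M for v in e]
    return len(ends) == len(set(ends))   # no repeated endpoint

rng = random.Random(1)
for _ in range(50):
    E = [e for e in all_edges if rng.random() < 0.5]
    tau = min(len(S) for r in range(6) for S in combinations(V, r)
              if is_cover(S, E))
    nu = max(len(M) for r in range(3) for M in combinations(E, r)
             if is_matching(M))
    assert nu <= tau <= 2 * nu
print("nu <= tau <= 2*nu held on every sampled graph")
```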
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Visualizing Balls in Ultrametric Spaces I've been reading about ultrametric spaces, and I find that a lot of the results about balls in ultrametric spaces are very counter-intuitive. For example: if two balls share a common point, then one of the balls is contained in the other.
The reason I find these results so counter-intuitive is that I can easily picture "counter-examples," the problem being that these "counter-examples" are balls in Euclidean space.
My issue is not that I cannot prove these results. My issue is that I don't know how to think about/picture balls in ultrametric spaces, which makes it more difficult for me to actually come up with the proofs. Hence, does anyone have any hints as to how to think about/picture balls in ultrametric spaces?
|
Think of the Cantor set and its basic closed-and-open intervals, pictured below.
Note that for any two such intervals, they either do not intersect, or one is contained in the other.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
What is the limit of this sequence? Problem 3 in the Exercises after Chapter 3 in Principles of Mathematical Analysis by Walter Rudin, 3rd edition:
Let $s_1 \colon= \sqrt{2}$, and let
$$s_{n+1} \colon= \sqrt{2+\sqrt{s_n}} \mbox{ for } n = 1, 2, 3, \ldots. $$
Then how to rigorously calculate $$\lim_{n\to\infty} s_n,$$ justifying each step from the definition of convergence of a sequence and the theorems on convergence of sequences as have been proved by Rudin upto this point in his book?
I know that this sequence is increasing and bounded (above) and hence convergent.
|
As you have mentioned already, $L = \lim_{n \rightarrow \infty} s_n$ exists in $\mathbb{R}$. Also $L \geq 0$. Then
$$ L = \sqrt{2 + \sqrt{L}} $$
$$ L^2 = 2 + \sqrt{L} $$
Let $k = \sqrt{L}$. We have
$$k^4 = 2 + k$$
$$(k + 1)(k^3 - k^2 + k - 2) = 0$$
Since $k \geq 0$, we have
$$ k^3 - k^2 + k - 2 = 0$$
which has one positive real root around 1.35. Then $L = k^2$ is around 1.83.
For an exact value of $L$, please see Claude Leibovici's answer.
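Iterating the recursion numerically confirms both the fixed-point equation and the cubic for $k=\sqrt L$:

```python
import math

s = math.sqrt(2)                       # s_1
for _ in range(60):                    # iterate s_{n+1} = sqrt(2 + sqrt(s_n))
    s = math.sqrt(2 + math.sqrt(s))

k = math.sqrt(s)
print(abs(s - math.sqrt(2 + math.sqrt(s))) < 1e-12)  # True: fixed point reached
print(abs(k**3 - k*k + k - 2) < 1e-9)                # True: sqrt(L) solves the cubic
print(round(s, 3))                                   # 1.831
```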
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Can every positive integer be expressed as a difference between integer powers? In mathematical notation, I am asking if the following statement holds:
$$\forall\,n>0,\,\,\exists\,a,b,x,y>1\,\,\,\,\text{ such that }\,\,\,\,n=a^x-b^y$$
A few examples:
*
*$1=9-8=3^2-2^3$
*$2=27-25=3^3-5^2$
*$3=128-125=2^7-5^3$
I wrote a small Python script to compute a handful of examples:
powers = []
for i in range(2, 8):
    for j in range(2, 8):
        power = i**j
        if power not in powers:
            powers.append(power)

diffs = []
for i in range(0, len(powers)):
    for j in range(i + 1, len(powers)):
        diff = abs(powers[i] - powers[j])
        if diff not in diffs:
            diffs.append(diff)

print(sorted(diffs))
The first few missing values are $6$, $10$ and $14$.
But it doesn't prove of course that no such $a,b,x,y$ exist for them.
How should I tackle this problem? Any links to related research will also be appreciated.
|
See OEIS sequence A074981 and references there. $10$ does have a solution as $13^3-3^7$, but apparently no solutions are known for $6$ and $14$.
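The identity $13^3-3^7=10$ is easy to verify, and the question's search can be rerun over a larger range (the bounds below are arbitrary choices; this is evidence, not a proof):

```python
assert 13**3 - 3**7 == 10   # 2197 - 2187

LIMIT = 10**6
powers = {i**j for i in range(2, 1000) for j in range(2, 20) if i**j < LIMIT}
diffs = {abs(a - b) for a in powers for b in powers}
print(10 in diffs, 6 in diffs, 14 in diffs)  # True False False
```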
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
An example of a function in $L^1[0,1]$ which is not in $L^p[0,1]$ for any $p>1$ Title says most of it. Could you help me find an example?
It is obviously easy to show a function that would not be in $L^p[0,1]$ for a specific $p$ (say $(1/x)^{1/p}$), but I can't see how it would be done for all $p$. The reason I'm asking is because we proved in class that $L^p[0,1]$ is nowhere dense as a subset of $L^1[0,1]$, so there must be some function in $L^1[0,1]$ like this..
Thanks :)
Added: Thanks for all the comments. There were some missing parts about how to use the convergence theorems that I couldn't complete on my own, so I'd love assistance :)
|
Take
$$f(x) = a_n \quad\text{ if }\quad x\in\left(\frac{1}{2^n},\frac{1}{2^{n-1}}\right]$$
for $n = 1, 2, \dots$ and for some $a_n$.
You'll get
$$\int_0^1 f(x)\, dx = \sum_{n=1}^{\infty}\frac{a_n}{2^n}$$
$$\int_0^1 f^p(x)\, dx = \sum_{n=1}^{\infty}\frac{a_n^p}{2^n}$$
Then choose the sequence $a_n$ so that the first sum is convergent, while the second one is divergent for any $p>1$. For example
$$a_n = \frac{2^n}{n^2}.$$
$$\sum_{n=1}^{\infty}\frac{a_n}{2^n} = \sum_{n=1}^{\infty}\frac{1}{n^2}<\infty$$
$$\sum_{n=1}^{\infty}\frac{a_n^p}{2^n} = \sum_{n=1}^{\infty}\frac{2^{n(p-1)}}{n^{2p}} = \infty$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Is a flat coherent sheaf over a connected noetherian scheme already a vector bundle? Let $A$ be a connected noetherian ring (not necessarily irreducible), $M$ be a finitely presented flat $A$-module. Then $M_{\mathfrak{p}}$ is a free $A_{\mathfrak{p}}$-module for each $\mathfrak{p} \in \operatorname{Spec}(A)$.
Is it also true that $M$ is locally free in the sense that we find a generating system $f_1,...,f_n$ of $A$ such that the $M_{f_i}$ are free? When trying to come up with a proof, I needed that $A$ is irreducible, so I wonder if there is a counterexample if $A$ is reducible?
|
It is a standard fact in commutative algebra that, for an arbitrary commutative ring $A$, an $A$-module $M$ is finitely presented flat if and only if $M$ is finitely generated projective if and only if $M$ is locally free of finite rank (:= there are elements $f_1,\dotsc,f_n \in A$ generating the unit ideal such that $M_{f_i}$ is finite free over $A_{f_i}$ for each $i$). A proof can be found in books on commutative algebra, and also in the stacks project. It follows that for an arbitrary scheme $X$ the flat $\mathcal{O}_X$-modules of finite presentation coincide with the locally free $\mathcal{O}_X$-modules of finite rank.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1040999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is the empty set the only possible set for $A$ such that $A=\{x|x\not\in A\}$? Is the empty set the only possible set for $A$ such that $A=\{x|x\not\in A\}$?
|
Not even the empty set has that property.
For any set $A$ we can show $A\ne\{x\mid x\notin A\}$. Namely, either $42\in A$ (in which case $42\notin \{x\mid x\notin A\}$), or $42\notin A$ (in which case $42\in \{x\mid x\notin A\}$). In both cases we have found something that is a member of exactly one of the collections, so they're not the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Find a closed form for the equations $1^3 = 1$, $2^3 = 3 + 5$, $3^3 = 7 + 9 + 11$ This is the assignment I have:
Find a closed form for the equations
$1^3 = 1$
$2^3 = 3+5$
$3^3 = 7+9+11$
$4^3 = 13+15+17+19$
$5^3 = 21+23+25+27+29$
$...$
Hints. The equations are of the form $n^3 = a1 +a2 +···+an$, where
$a_{i+1} = a_i +2$ and $a_0 =n(n−1)+1$.
My reasoning:
We have to find a formula that give us $n^3$ summing operands. (why is this useful?)
We know that the first operand (or term) of the sum is $a_0 =n(n−1)+1$.
In fact, if you put $n = 3$, then $a_0 = 3(3 − 1) + 1 = 3*2 + 1 = 7$, which is exactly the first number of sum.
Then I notice that each $n$ sum has $n$ operands, and each operand differs from one another of 2.
Thus I came out with this formula:
$$
\sum\limits_{i=0}^{n-1} (a_0 + 2 \cdot i)
$$
where $a_0 =n(n−1)+1$
For example, if $n = 3$, then we have
$(n(n−1)+1 + 2 \cdot 0) + (n(n−1)+1 + 2 \cdot 1) + (n(n−1)+1 + 2 \cdot 2) \equiv$
$\equiv (7 + 0) + (7 + 2) + (7 + 4) \equiv$
$\equiv 7 + 9 + 11$
Which is what is written as third example.
I don't know if this is correct form or even if this is a closed form, that's why I am asking...
|
$$n^3=\sum_{k=0}^{n-1}(n^2-(n-1)+2k)$$
Since
$\sum_{k=0}^{n-1}(n^2-(n-1)+2k)=n^3-n(n-1)+2(\frac{n(n-1)}{2})=n^3$
So it is the summation of $n$ consecutive odd numbers, starting at $n(n-1)+1$.
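The identity can be checked for many $n$ at once:

```python
for n in range(1, 200):
    start = n*(n - 1) + 1                        # first odd term, a_0
    assert sum(start + 2*i for i in range(n)) == n**3
print("n^3 = sum of n consecutive odd numbers from n(n-1)+1, for n < 200")
```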
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
prove linear independence of polynomials Let $f_1,...,f_k$ be polynomials over a field $K$, none of them the zero polynomial, with $\deg f_i \neq \deg f_j$ for every $i \neq j$. Show that the polynomials $f_1,...,f_k$ are linearly independent.
I don't know how to use the information that $\deg f_i \neq \deg f_j$.
I will try to consider a linear combination $a_1f_1+...+a_kf_k=0$ and show that then $a_1=...=a_k=0$, but I don't know how to derive it.
|
Suppose that $f_1 ,f_2 ,...,f_k$ are linearly dependent, and let $f_j$ be the polynomial whose degree is the largest among those appearing with a nonzero coefficient in a dependence relation. Then $f_j = c_1f_1+ \cdots + c_{j-1}f_{j-1} + c_{j+1}f_{j+1}+ \cdots +c_{k}f_{k}$, but since the degrees are pairwise distinct, the degree of the right-hand side is strictly less than that of the left-hand side, and this is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Multiplying game strategy Given the following game, what is the strategy to win?
Given $X,N\in \mathbb{N}$ such that $N>X$ and $N>1000$, two players play against each other. On his turn, each player multiplies the current value by $2$ or by $3$, at his own choice. The player who reaches $N$ or above wins.
I realized that if it's my turn and my opponent has reached $\lceil \frac{N}{3} \rceil$, I win, so I tried to see how I can "make" him get there recursively, but nothing solid came to mind, so I'm pretty stuck.
Any help would be appreciated.
|
The best method of attack for this is probably to work backwards. So, you see that if the number given to you is above $\frac{N} 3$, you win. What numbers, less than this, can you give to your opponent such that they have to give you a number at least $\frac{N}3$? Well, since they have to multiply by at least two, if you give them some number between $\frac{N}6$ and $\frac{N}3$, you will win on your next turn. For what numbers is it possible for you to give your opponent such a number? Well, anything between $\frac{N}{18}$ and $\frac{N}{6}$ will suffice, since you can choose which move to do.
You can continue backwards to figure out which numbers you have a winning strategy for (i.e. how can you force your opponents move to be in the desired interval)? An important hint on seeing the general strategy is this:
No matter what your opponent does, you can always ensure that the number increases by a factor of $6$ between their turns.
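The backward-induction reasoning above can be automated; a sketch (plain Python, names are mine) that classifies a value as winning or losing for the player about to move:

```python
from functools import lru_cache

def wins(x, N):
    """True if the player about to move from x can force a win (reach >= N)."""
    @lru_cache(maxsize=None)
    def w(v):
        if 2 * v >= N or 3 * v >= N:      # an immediate winning move exists
            return True
        # otherwise, win iff some move hands the opponent a losing position
        return (not w(2 * v)) or (not w(3 * v))
    return w(x)
```

For $N=2000$, for instance, this confirms the intervals described above: $500$ (inside $(N/6, N/3)$) is losing for the player to move, while $200$ (inside $(N/18, N/6)$) is winning.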
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Procedure to find a level curves I'm having trouble finding level curves. What's the procedure? In this case, for example:
$z=x^2+y^2=k$ $\hookrightarrow y=\sqrt{k-x^2}$
Then do I sketch this based on knowing the formulas of circles, parabolas, ellipses, etc.? Or do I sketch it based on first and second derivatives and the general procedures of graph sketching?
Or will every formula lead to common curves like hyperbolas, parabolas, circles, etc.?
What should I focus my study on? Right now I'm not good in solving level curves problems...
Like finding level curves for this: $$ z=e^{x^2-y^2}$$
Thanks in advance.
|
Either you use a computer, or you sketch the curves based on recognizing the equation as something you already know.
In the first case you should recognize $x^2+y^2=k$ as the equation for a circle with radius $\sqrt k$ rather than try to rewrite it to get $y$ as a function of $x$.
Similarly for $e^{x^2-y^2}=k$ you would first take the logarithm on both sides to get $x^2-y^2=\log k$ and then recognize the latter as a hyperbola with asymptotes $x=\pm y$ -- or, when $\log k=0$, the crossing lines $x=\pm y$ themselves.
Of course you're not always going to get something you recognize. In that case it is either the computer, or a lot of labor with pencil and paper to find enough points to connect them freehand.
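For the last example, the claim is easy to sanity-check numerically: every point on the hyperbola $x^2-y^2=\log k$ should map to the same level $k$. A quick sketch in plain Python:

```python
import math

def f(x, y):
    return math.exp(x**2 - y**2)

k = 3.0
c = math.log(k)                  # the level curve is x^2 - y^2 = log k
for x in (2.0, 2.5, 3.0, 4.0):
    y = math.sqrt(x**2 - c)      # a point on the upper branch (needs x^2 >= log k)
    assert abs(f(x, y) - k) < 1e-9
```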
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Understanding a step in a double series proof I'm really confused, how do they get from the first line to the second line ?
$$\begin{align*}
S&=\frac12\left[\sum_{m=1}^\infty\sum_{n=1}^\infty\frac{m^2n}{3^m(n\cdot3^m+m\cdot3^n)}+\sum_{m=1}^\infty\sum_{n=1}^\infty\frac{n^2m}{3^n(n\cdot3^m+m\cdot3^n)}\right]\\\\
&=\frac12\sum_{m=1}^\infty\sum_{n=1}^\infty\frac{mn}{3^{m+n}}
\end{align*}$$
Can anyone explain this step?
|
Making a common denominator and factoring, observe that:
\begin{align*}
\frac{m^2n}{3^m(n \cdot 3^m + m \cdot 3^n)} + \frac{n^2m}{3^n(n \cdot 3^m + m \cdot 3^n)}
&= \frac{m^2n \cdot 3^n + n^2m \cdot 3^m}{3^m3^n(n \cdot 3^m + m \cdot 3^n)} \\
&= \frac{mn(m \cdot 3^n + n \cdot 3^m)}{3^{m+n}(n \cdot 3^m + m \cdot 3^n)} \\
&= \frac{mn}{3^{m+n}} \\
\end{align*}
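The same cancellation can be confirmed with exact rational arithmetic; a short sketch (function names are mine):

```python
from fractions import Fraction

def lhs(m, n):
    denom = n * 3**m + m * 3**n
    return (Fraction(m**2 * n, 3**m * denom)
            + Fraction(n**2 * m, 3**n * denom))

def rhs(m, n):
    return Fraction(m * n, 3**(m + n))

for m in range(1, 8):
    for n in range(1, 8):
        assert lhs(m, n) == rhs(m, n)   # the two summands collapse to mn / 3^(m+n)
```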
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show $\inf_f\int_0^1|f'(x)-f(x)|dx=1/e$ for continuously differentiable functions with $f(0)=0$, $f(1)=1$. Let $C$ be the class of all real-valued continuously differentiable functions $f$ on the interval $[0,1]$ with $f(0)=0$ and $f(1)=1$. How to show that
$$\inf_{f\in C}\int_0^1|f'(x)-f(x)|dx=\frac{1}{e}?$$
I have been able to show that $1/e$ is a lower bound. Indeed,
$$\begin{align*}
\int_0^1|f'(x)-f(x)|dx &= \int_0^1|f'(x)e^{-x}-f(x)e^{-x}|e^xdx \\
&\geq \int_0^1\left(f'(x)e^{-x}-f(x)e^{-x}\right) dx \\
&= \int_0^1 \frac{d\left(f(x)e^{-x}\right)}{dx}dx \\
&= f(1)e^{-1}-f(0)e^{0}\\
&=\frac{1}{e}.
\end{align*}$$
But how to show this is the infimum? Is there a function $f\in C$ such that we get $\int_0^1|f'(x)-f(x)|dx=1/e$?
|
Here is another method.
Let $f \in C^1([0,1])$ with $f(0)=0$ and $f(1)=1$, and $g=f'-f$.
then solving the ODE : $f' =f +g$ with $f(0)=0$ gives
$$f(x) = e^x \int_0^x e^{-t} g(t) dt.$$
Since $f(1)=1$, we get
$$ \int_0^1 e^{-t} g(t) dt = \frac{1}{e} \quad \quad (1). $$
By Hölder's inequality (using $e^{-t}\le 1$ on $[0,1]$) we have $| \int_0^1 e^{-t} g(t) dt| \leq \|g\|_1$.
Hence
$$\|g\|_1 = \int_0^1 |f'-f| \geq 1/e,$$
and the bound is approached (though never attained within $C$) as $g$ tends to the Dirac mass $\frac{1}{e}\,\delta_0$, consistently with condition $(1)$.
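To see numerically that the bound $1/e$ really is approached, one can take $g$ to be a spike of width $\varepsilon$ at $0$: $g_\varepsilon=\frac{c_\varepsilon}{\varepsilon}\mathbf 1_{[0,\varepsilon]}$ with $c_\varepsilon$ fixed by condition $(1)$. (A sketch: this $g$ is not continuous, but it can be smoothed without changing the limit; names are mine.)

```python
import math

def spike_norm(eps):
    # g_eps = (c/eps) * 1_[0,eps]; condition (1) forces (c/eps)*(1 - exp(-eps)) = 1/e
    c = (eps / math.e) / (1.0 - math.exp(-eps))
    return c          # c = ||g_eps||_1 = integral of |f' - f| for this choice

# the L1-norms decrease towards 1/e as the spike narrows
norms = [spike_norm(e) for e in (1.0, 0.1, 0.01, 0.001)]
assert all(nm > 1 / math.e for nm in norms)
assert abs(spike_norm(1e-6) - 1 / math.e) < 1e-5
```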
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
}
|
How to integrate $\int_{0}^{1}\ln\left(\, x\,\right)\,{\rm d}x$? I encountered this integral in the quantum field theory calculation. Can I do this:
$$
\left. \int_{0}^{1}\ln\left(\, x\,\right)\,{\rm d}x
=x\ln\left(\, x\,\right)\right\vert_{0}^{1}
-\int_{0}^{1}\,{\rm d}x
=\left. x\ln\left(\, x\,\right)\right\vert_{\, x\ =\ 0}\ -\ 1
$$
So the first term looks divergent. But Mathematica gives a finite result: the integral is $-1$. Why isn't the first term divergent?
|
Use L'Hopital's Rule to resolve the indeterminate form: $$\lim_{x\to 0^+}x\ln x=\lim_{x\to 0^+}{\ln x\over x^{-1}}=\lim_{x\to 0^+}{x^{-1}\over -x^{-2}}=\lim_{x\to 0^+}(-x)=0$$
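The integral itself is a convergent improper integral, which a direct midpoint Riemann sum confirms; a quick sketch:

```python
import math

def midpoint_log_integral(n):
    # midpoint Riemann sum for the improper (but convergent) integral of ln x on [0, 1]
    h = 1.0 / n
    return sum(math.log((i + 0.5) * h) for i in range(n)) * h

assert abs(midpoint_log_integral(10**6) + 1.0) < 1e-5
```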
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
$ (3+\sqrt{5})^n+(3-\sqrt{5})^n\equiv\; 0 \; [2^n] $ Prove that for all $n\in \mathbb{N}$ (where $[2^n]$ denotes congruence modulo $2^n$):
$$
(3+\sqrt{5})^n+(3-\sqrt{5})^n\equiv\; 0 \; [2^n]
$$
|
HINT:
If $a_n=(3+\sqrt5)^n+(3-\sqrt5)^n$
$$a_{n+2}-6a_{n+1}+4a_n=0$$
Now use strong induction with the hypothesis $2^m\mid a_m$ for $1\le m\le n$.
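The hint is easy to check numerically: the recurrence (whose characteristic polynomial $x^2-6x+4$ has roots $3\pm\sqrt5$) generates integers, and the claimed divisibility holds as far as one cares to test. A sketch:

```python
def a_sequence(n_max):
    # a_n = (3+sqrt(5))^n + (3-sqrt(5))^n satisfies a_{n+2} = 6 a_{n+1} - 4 a_n
    seq = [2, 6]                          # a_0 = 2, a_1 = 6
    while len(seq) <= n_max:
        seq.append(6 * seq[-1] - 4 * seq[-2])
    return seq

seq = a_sequence(60)
assert all(seq[n] % 2**n == 0 for n in range(1, 61))   # 2^n divides a_n
```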
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1041975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Finding polynomials with their values at points Is there any way I can find a polynomial given any $2$ points (with $x$ coordinate OF MY CHOICE): Let's say there's some polynomial I don't know $(p(x)=2x^3+x^2+3)$, but my machine will give me an output. I give one $x$ value of my choice, and it returns $p(x)$, where $p(x)$ is the polynomial function. I give another value of my choice., $x+h$, and get the output $p(x+h)$. Given these outputs, I have to find $p(x)$ as a polynomial.
What I've done is plugged in $0$, which gives me the final term of the polynomial that is not multiplied by any power of $x$. Then I plug in $1$, getting another output. When I find the "slope" of the two points, I get the sum of all the coefficients of all the terms that are powers of $x$. If I do this for the given $p(x)$, I get $3$, which is the sum of $2$ and $1$. However, I can't figure out what powers of $x$ there are and what specific coefficients there are. Does anyone know how to solve this?
@GerryMyerson and @Shash said I can find the polynomial given the bound of the coefficients. I am confused as to what that means. There is only one number that is the sum of the coefficients. How is there a bound? Also, how do I find this sum of coefficients with just one value? I need to use one more value, M+1, as Shash said, so I can't use 2 values to find the max/sum, as I won't be able to ask for a value that is M+1. Can anyone help? Thanks.
EDIT: Non-negative integer coefficients are assumed.
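For context, the two-evaluation idea mentioned above (ask for $p(1)$, then for $p(M+1)$ where $M=p(1)$) works precisely because each non-negative integer coefficient is at most $M$, so the coefficients are the base-$(M+1)$ digits of $p(M+1)$. A sketch (the function name is mine):

```python
def recover_coeffs(p):
    # p must have non-negative integer coefficients
    M = p(1)                         # p(1) is the sum of the coefficients
    if M == 0:
        return []                    # the zero polynomial
    coeffs, v = [], p(M + 1)
    while v:
        coeffs.append(v % (M + 1))   # read off the base-(M+1) digits
        v //= M + 1
    return coeffs                    # coeffs[i] is the coefficient of x^i

p = lambda x: 2 * x**3 + x**2 + 3
print(recover_coeffs(p))             # [3, 0, 1, 2]
```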
|
No. You would need a minimum of $n$ points, where $n$ is the dimension of the space of polynomials in question (the degree plus one). For instance, let's try to find a cubic polynomial $p$ where $p(0)=1$ and $p(1)=1$. Notice that $p_1(x)=x^3-x^2+1$ and $p_2(x)=x^3-x+1$ both satisfy the given criteria.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 1
}
|
Real subfield of cyclotomic field is generated by $\zeta+\zeta^{-1}$ Let $p\neq 2$ a prime number, $\zeta=e^{\frac{2i\pi}{p}}$ and $\alpha=2\cos\left(\frac{2\pi}{p}\right)$. We consider the field extensions $F=\mathbb Q(\zeta)$ and $E=F\cap \mathbb R$ of $\mathbb Q$. I have shown that $[F:\mathbb Q]=p-1$, that $\zeta+\zeta^{-1}\in E$ and that $t^2-(\zeta+\zeta^{-1})t+1$ is the minimal polynomial of $\zeta$ on $E$. Now I'm trying to show that $E=\mathbb Q(\alpha)$. I have shown that $E\supset \mathbb Q(\alpha)$ but I do not arrive to show the inclusion $E\subset \mathbb Q(\alpha)$.
|
Basically you have $[\mathbb Q(\zeta):\mathbb Q(\zeta+\zeta^{-1})]=2$ because of the minimal polynomial you've found.
Then notice that $[\mathbb Q(\zeta):E]>1$ (since $E\subset\mathbb R$ but $\zeta\notin\mathbb R$). Since $\mathbb Q(\zeta+\zeta^{-1}) \subset E \subsetneq \mathbb Q(\zeta)$ and the first field has index $2$ in the last, multiplicativity of degrees forces $E=\mathbb Q(\zeta+\zeta^{-1})$, and we are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
boundedness of the sequence $a_n=\frac{\sin (n)}{8+\sqrt{n}}$ How can i prove boundedness of the sequence
$$a_n=\frac{\sin (n)}{8+\sqrt{n}}$$ without using its convergence to $0$? I know since it is convergent then it is bounded.
|
$$\Big|\frac{\sin (n)}{8+\sqrt{n}}\Big|\le\Big|\frac{1}{8+\sqrt{n}}\Big|\le\dfrac{1}{9}, \quad \forall n\ge 1. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Bell number vs Factorial We have $B_n$, the $n$-th Bell number, and $n!$, the factorial.
So, what is greater: $n!$ or $B_n$ ?
How can it be proven?
|
Factorials are bigger than Bell numbers, except for the initial cases when there is equality.
A comment from Emeric Deutsch on OEIS A048742 says that the difference counts
Number of permutations of $[n]$ which have at least one cycle that has
at least one inversion when written with its smallest element in the
first position. Example: $a(4)=9$ because we have $(1)(243)$, $(1432)$,
$(142)(3)$, $(132)(4)$, $(1342)$, $(1423)$, $(1243)$, $(143)(2)$ and $(1324)$.
Since a count cannot be negative, and there is at least one example when $n \gt 2$, we need not look further.
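The comparison is easy to verify for small $n$ with the Bell triangle; a sketch (plain Python):

```python
from math import factorial

def bell_numbers(n_max):
    """Return [B_0, ..., B_{n_max}] via the Bell triangle."""
    bells, row = [1], [1]
    for _ in range(n_max):
        new_row = [row[-1]]          # each row starts with the previous row's last entry
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[0])
    return bells

B = bell_numbers(15)
for n, b in enumerate(B):
    assert b <= factorial(n)         # B_n <= n! ...
    if n > 2:
        assert b < factorial(n)      # ... strictly, once n >= 3
```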
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Directional derivative of a function Feel like I may have gone wrong somewhere with this question:
Find the directional derivative of the function $f(x,y) = \displaystyle\dfrac{2x}{x-y}$ at the point $P(1, 0)$ in the direction of the vector $v=(4, 3)$.
I got: $f_x(x, y) = \dfrac{-2y}{(x-y)^2}$ $f_y(x, y) = \dfrac{2x}{(x-y)^2}$
$D_v(x, y) = 4f_x(x,y)+3f_y(x,y) = \dfrac{6x-8}{(x-y)^2}$
$D_v(1, 0) = \dfrac{6-8}{1} = -2$
Should I have normalised the vector to $v = (\dfrac{4}{5},\dfrac{3}{5})$ so the answer would be $\dfrac{-2}{5}$?
|
You made a mistake in the second-to-last step. You should get $-8y+6x$ instead of $-8+6x$ in the numerator.
Other than that your approach and calculations look fine to me.
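A numeric cross-check of the corrected gradient (a sketch; note the final number depends on convention — the limit definition along the unit vector gives one fifth of the value computed with the raw vector $v=(4,3)$, since $|v|=5$):

```python
def f(x, y):
    return 2 * x / (x - y)

def along(h, vx, vy):
    # difference quotient for f at (1, 0) in direction (vx, vy)
    return (f(1 + h * vx, h * vy) - f(1, 0)) / h

# with the corrected gradient (0, 2) at (1, 0):
assert abs(along(1e-6, 4, 3) - 6) < 1e-4        # raw vector: grad . (4, 3) = 6
assert abs(along(1e-6, 0.8, 0.6) - 1.2) < 1e-4  # unit vector: 6 / |v| = 6/5
```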
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to solve a recursively defined system It has been a while since I have tackled a problem like this and could use a refresher. I have a recursive system of equations that looks a little like:
$$x_{n}\left(\frac{1}{2}\right)+y_{n}\left(\frac{1}{4}\right)=x_{n+1} \\
x_{n}\left(\frac{1}{2}\right)+y_{n}\left(\frac{3}{4}\right)=y_{n+1}$$
Where the initial $x$ & $y$ values are given else where.
My question is how do I find the value of $x$ at $n=5$ iterations? Of course there is the naive method where you simply iterate through the $x$ and $y$ equations $5$ times. But something tells me there is a more efficient way to do this.
|
Probably the best answer uses linear algebra. You can rewrite this system as:
$$\begin{bmatrix} x_{n+1} \\ y_{n+1} \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{4} \\ \frac{1}{2} & \frac{3}{4} \end{bmatrix} \begin{bmatrix} x_n \\ y_n \end{bmatrix}.$$
This implies that
$$\begin{bmatrix} x_n \\ y_n \end{bmatrix} = A^n \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}$$
where $A$ is the matrix above.
It is hard to exponentiate a general matrix but it is easy to exponentiate a diagonal matrix. This is why we diagonalize $A$ in this problem, which means we need the eigenvalues and eigenvectors. Here the characteristic polynomial is
$$p(\lambda)=\lambda^2-\frac{5}{4} \lambda + \frac{1}{4}.$$
You can find the roots $\lambda_1,\lambda_2$ of this polynomial with the quadratic formula. The eigenvectors $v_1,v_2$ are the solutions to $(A-\lambda I)v=0$ for each eigenvalue $\lambda$. Then the solution to your problem is given by
$$\begin{bmatrix} x_n \\ y_n \end{bmatrix} = c_1 \lambda_1^n v_1 + c_2 \lambda_2^n v_2$$
where $c_1,c_2$ are chosen such that $c_1 v_1 + c_2 v_2 = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}$. In an equivalent matrix form we have:
$$\begin{bmatrix} x_n \\ y_n \end{bmatrix} = \begin{bmatrix} v_1 & v_2 \end{bmatrix}\begin{bmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{bmatrix} \begin{bmatrix} v_1 & v_2 \end{bmatrix}^{-1} \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}.$$
Here the calculations are actually fairly nice because you get $\lambda_1,\lambda_2 = \frac{\frac{5}{4} \pm \sqrt{\frac{25}{16} - 1}}{2} = \frac{\frac{5}{4} \pm \frac{3}{4}}{2} = \frac{5}{8} \pm \frac{3}{8}$.
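For this particular system the eigen-data can be written down explicitly: $\lambda_1=1$ with $v_1=(1,2)$ and $\lambda_2=\frac14$ with $v_2=(1,-1)$. A sketch comparing the resulting closed form against naive iteration (names are mine):

```python
def iterate(x0, y0, n):
    x, y = x0, y0
    for _ in range(n):
        x, y = 0.5 * x + 0.25 * y, 0.5 * x + 0.75 * y
    return x, y

def closed_form(x0, y0, n):
    # eigenpairs of A: lambda1 = 1, v1 = (1, 2); lambda2 = 1/4, v2 = (1, -1)
    c1 = (x0 + y0) / 3.0            # coordinates of (x0, y0) in the eigenbasis
    c2 = (2.0 * x0 - y0) / 3.0
    lam2n = 0.25 ** n
    return c1 + c2 * lam2n, 2.0 * c1 - c2 * lam2n

for n in range(8):
    xa, ya = iterate(3.0, 1.0, n)
    xb, yb = closed_form(3.0, 1.0, n)
    assert abs(xa - xb) < 1e-12 and abs(ya - yb) < 1e-12
```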
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Convergence test of the series $\sum\sin100n$ I need to prove that $$\sum_{n=1}^{\infty} {\sin{100n}} \; \text{diverges}$$ I think the best way to do it is to show that $\lim_{n\to \infty}{\sin{100n}}\not=0$. But how do I prove it?
|
We know that every subgroup of $\mathbb{R}$ is either discrete/cyclic or dense.
Consider the subgroup $G=\left\{100n+2m\pi:n,m\in\mathbb{Z}\right\}$. A simple argument shows that this subgroup is not cyclic (if $\gamma$ were a generator, then $\gamma$ would have to be rational because $100\in G$, but would also have to be irrational because $2\pi\in G$). Thus, $G$ is dense in $\mathbb{R}$.
Therefore, there exists a sequence of distinct elements of $G$ with $\pi/4<|g_k|<\pi/2$, say $g_k=100n_k+2\pi m_k$, and changing $g_k$ by $-g_k$ if necessary, we may assume $n_k>0$. Clearly, the set $\left\{n_k:k=1,2,\ldots\right\}$ is infinite (if not, then the $|g_k|$ would go to $\infty$). But
$$\sin(100n_k)=\sin g_k>\sin\pi/4$$
so there exist infinitely many indices $n_k\in\mathbb{N}$ such that $\sin(100n_k)>\sin\pi/4$, and therefore $\sin(100n)$ does not converge to $0$.
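The conclusion is easy to witness numerically (a sketch; floating-point $\sin$ is accurate enough here since the arguments stay below $10^7$):

```python
import math

# among n = 1, ..., 100000 there are many n with sin(100 n) > sin(pi/4),
# so the terms of the series cannot tend to 0
big = [n for n in range(1, 100001) if math.sin(100 * n) > math.sin(math.pi / 4)]
assert len(big) > 1000
```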
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
expected value of a random palindrome If you choose a 6-digit palindrome at random, what is the expected value for the number?
All possible palindromes are equally likely to be chosen. Beginning number must be NONZERO, so numbers like 012321 are NOT allowed.
I'm not sure where to start. What values and the probabilities of the values should I look at?
|
Well, first of all, you need to count all possible outcomes.
In other words, you need to count all possible palindromes. All your palindromes are uniquely defined by their first three digits. You have $9$ possibilities for the first digit (can't be zero) and $10$ possibilities for the second and third.
Secondly, since the numbers are chosen with equal probabilities, you need to compute their sum. The hint for this part is that $${abccba} = 100001\cdot a + 10010\cdot b + 1100\cdot c.$$
Can you take it from here?
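For anyone who wants to check the final value, a brute-force sketch (it agrees with the linearity computation $100001\cdot\bar a+10010\cdot\bar b+1100\cdot\bar c$, where $\bar a=5$ and $\bar b=\bar c=4.5$ are the average digits):

```python
vals = [int(f"{a}{b}{c}{c}{b}{a}")
        for a in range(1, 10) for b in range(10) for c in range(10)]
assert len(vals) == 900                 # 9 * 10 * 10 palindromes
expected = sum(vals) / len(vals)
assert expected == 550000.0             # 100001*5 + 10010*4.5 + 1100*4.5
```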
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1042903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Summation of trigonometric functions such as $\sin x$ I am currently studying Integration (a very basic introduction) and I have a question regarding the summation of trigonometric functions. Given $f(x) = \sin x$, determine the area under the curve between a and b. By definition of a definite integral (using sigma notation rather than antiderivatives),
$$
\int_a^b f(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^n f(x_i)\,\Delta x
$$
Before I proceed in determining the integral, is there a way that I can determine the value of
$$
\sum_{i=1}^n\sin i
$$
To put this in context, I am wondering if there is a formula that one can use for the summation of trig functions. There are formulas for summation of polynomials. i.e
$$
\sum_{i=1}^n i = \frac{n(n+1)}{2}
$$
I would like to expand this method to all trig functions!
Any help would be much appreciated.
|
What you really want for the Riemann sum of $\int_a^b \sin x \, dx$ is
to take $\sin x$ at $n$ uniform steps within the interval $[a,b].$
So you want $x_i$ to be something like $a + i\Delta x,$
or even better,
$$x_i = a + i\Delta x - \tfrac12\Delta x \quad \text{where}
\quad \Delta x = \frac{b-a}{n},$$
so that $x_1 = a + \tfrac12\Delta x$ and $x_n = b - \tfrac12\Delta x.$
If you define $x_i$ and $\Delta x$ in that way,
the summation you're looking for is
$$ \sum_{i=1}^n \sin\left(x_i\right) \Delta x. $$
Here's a handy trigonometric identity you can use for this problem:
$$\sin A \sin B = \tfrac12 \cos(A−B) − \tfrac12 \cos(A+B).$$
We can apply it as follows: let $A = x_i$ and let $B = \frac12 \Delta x.$
Then
$$\sin (x_i) \sin \left(\tfrac12 \Delta x\right) =
\tfrac12 \cos\left(x_i - \tfrac12 \Delta x\right)
− \tfrac12 \cos\left(x_i + \tfrac12 \Delta x\right). \tag1$$
Since you want to compute a summation over $\sin (x_i) \Delta x$ rather than
$\sin (x_i) \sin \left(\tfrac12 \Delta x\right),$
let's multiply both sides of equation$\ (1)$ by
$\dfrac{\Delta x}{\sin \left(\tfrac12 \Delta x\right)}$ to obtain
$$\sin (x_i) \Delta x =
\frac{\Delta x}{2\sin\left(\tfrac12\Delta x\right)}
\cos\left(x_i - \tfrac12\Delta x\right)
− \frac{\Delta x}{2\sin \left(\tfrac12 \Delta x\right)}
\cos\left(x_i + \tfrac12 \Delta x\right). $$
The big fraction on the right side of this equation will occur at least once in every
equation we write after this; to reduce clutter, let
$k = \dfrac{\Delta x}{2\sin\left(\tfrac12\Delta x\right)}$ so that we can write
$$\sin (x_i) \Delta x = k \cos\left(x_i - \tfrac12\Delta x\right)
− k \cos\left(x_i + \tfrac12 \Delta x\right).$$
Now let's take a look at the next term in the summation, $\sin (x_{i+1}) \Delta x.$
Since $x_{i+1} = x_i + \Delta x,$
$$\begin{eqnarray}
\sin (x_{i+1}) \Delta x
&=& k \cos\left(x_{i+1} - \tfrac12 \Delta x\right)
− k \cos\left(x_{i+1} + \tfrac12 \Delta x\right)\\
&=& k \cos\left(x_i + \tfrac12 \Delta x\right)
− k \cos\left(x_{i+1} + \tfrac12 \Delta x\right).
\end{eqnarray}$$
Now notice what happens if we add $\sin (x_{i+1})\,\Delta x$ to $\sin (x_i)\,\Delta x.$
The two terms $k \cos\left(x_i + \tfrac12 \Delta x\right)$ cancel,
and we're left with
$$\sin (x_i) \Delta x + \sin (x_{i+1}) \Delta x =
k \cos\left(x_i - \tfrac12 \Delta x\right)
− k \cos\left(x_{i+1} + \tfrac12 \Delta x\right). $$
This is what we call a "telescoping sum," and it simplifies the summation wonderfully:
if we write each term $\sin (x_i) \Delta x$ as a difference of two cosines
(times a constant), as in the equations above, a pair of cosines cancel each other
each time we add another term to the sum, and we're left with just a difference
of two cosines (times a constant) at the end.
More formally,
$$\begin{eqnarray}
\sum_{i=1}^n \sin\left(x_i\right) \Delta x
&=& \sum_{i=1}^n \left( k \cos\left(x_i - \tfrac12 \Delta x\right)
− k \cos\left(x_i + \tfrac12 \Delta x\right) \right) \\
&=& k \left( \sum_{i=1}^n \cos\left(x_i - \tfrac12 \Delta x\right)
− \sum_{i=1}^n \cos\left(x_i + \tfrac12 \Delta x\right) \right).
\end{eqnarray}$$
The last $n - 1$ terms of
$\sum_{i=1}^n \cos\left(x_i - \tfrac12 \Delta x\right)$
are the same as the first $n - 1$ terms of
$\sum_{i=1}^n \cos\left(x_i + \tfrac12 \Delta x\right),$
so these cancel, leaving
$$\begin{eqnarray}
\sum_{i=1}^n \sin\left(x_i\right) \Delta x
&=& k \left( \cos\left(x_1 - \tfrac12 \Delta x\right)
− \cos\left(x_n + \tfrac12 \Delta x\right) \right) \\
&=& \frac{\Delta x}{2 \sin \left(\tfrac12 \Delta x\right)} (\cos a − \cos b). \tag2
\end{eqnarray}$$
That's the sum. To derive a definite integral from this, take the limit
of the right-hand side of equation$\ (2)$ as $\Delta x \to 0.$
You'll find it's the same answer as you get from the antiderivative.
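Equation $(2)$ is easy to check against a direct evaluation of the sum (a sketch; names are mine):

```python
import math

def riemann_midpoint(a, b, n):
    dx = (b - a) / n
    return sum(math.sin(a + (i + 0.5) * dx) for i in range(n)) * dx

def closed_form(a, b, n):
    dx = (b - a) / n
    return dx / (2 * math.sin(dx / 2)) * (math.cos(a) - math.cos(b))

# the two agree for any n, and both tend to cos(a) - cos(b) as n grows
a, b = 0.3, 2.0
assert abs(riemann_midpoint(a, b, 1000) - closed_form(a, b, 1000)) < 1e-10
assert abs(closed_form(a, b, 10**6) - (math.cos(a) - math.cos(b))) < 1e-9
```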
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Multivariable Calculus Integral volume question
Volume is:
$$
\int_{x=0}^{1}\int_{z=4x^2}^{z=5-x^2}(1-x)\;dz\;dx
$$
The picture above is the solution for the question that I need to find the volume of the region bounded by $z=5-x^2, z=4x^2,$ and the planes $y=0, x+y=1$. Is the integral set up correctly?
I'm not sure what it means by the shaded area is part of the vertical plane $x+y=1$. Also how come x cannot be less than 0? ie what is it bounded by?
|
Those limits don't work unless your teacher meant something else or I have misunderstood the question.
$z=4x^2$ and $z=5-x^2$ define a pair of parabolic cylinders with the $y$-axis as the axis. These intersect at $x=\pm 1$, $z=4$.
The lower face of this volume is capped by the xz plane ($y=0$).
If the upper face had been another constant $y=c$, notice that you would use $x=-1$ in your limit of integration - this is the minimum x-value always. However, since the plane $x+y=1$ bounds the volume, the upper x-limit is always $1-y$. The lower x-limit is always -1. When $y=2$, the upper and lower limits coincide (no more of the volume to integrate) so your volume is
$$
\int_{y=0}^{y=2} \int_{x=-1}^{x=1-y} \int_{z=4x^2}^{5-x^2} dz dx dy
$$
Edited to add: The limits you posted would work if you took $x=0$ as another bounding plane.
Then you would have
$$
\int_{x=0}^{x=1} \int_{y=0}^{y=1-x} \int_{z=4x^2}^{5-x^2} dz dy dx
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Countrymen seated around a round table probability question Seated around the table are:
- 2 Americans
- 2 Canadians
- 2 Mexicans
- 2 Jamaicans.
Each countryman is distinguishable.
In how many ways can all 8 people be seated such that AT LEAST TWO men from the same country sit next to each other?
Rotations are considered the same.
I know that I have to use PIE somehow on the two men, but it is slightly more confusing because there are four groups and two men in each group. Any hints?
EDIT 12/1/2014: I checked the original problem statement and it said that "In how many ways can all eight people be seated such that at least two pairs of countrymen are seated together?". I mistakenly put "at least two men" rather than "at least two pairs". I'm terribly sorry for the confusion!
|
Let us try to find those arrangements in which no two countrymen sit together. Let A1A2, B1B2, C1C2 and D1D2 denote the pairs of countrymen. Since we are counting circular permutations, let A1 be the reference point and let B1, C1 and D1 be seated at positions 3, 5 and 7 respectively.

             1 (A1)
         8           2
        7 (D1)         3 (B1)
         6           4
             5 (C1)
At position 2 we can have only C2 or D2; at position 4 only A2 or D2; at position 6 only A2 or B2; and at position 8 only B2 or C2. If C2 is chosen for position 2, the other places get fixed, since position 8 can then take only B2, position 6 only A2 and position 4 only D2. Thus we get only 2 arrangements for this choice of people at positions 3, 5 and 7.
Now, since positions 3, 5 and 7 may be filled with any 3 people out of the remaining 7, we have a total of $P(7,3)\cdot 2$ arrangements in which no two countrymen sit together.
Thus $7!-P(7,3)\cdot 2$ arrangements are those in which two or more countrymen sit together.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
}
|
Condition for plane $ax+by+cz+d = 0$ to touch surface $px^2+qy^2+2z=0$ Q. Show that the plane
$$ax+by+cz+d = 0$$
touches the surface
$$px^2+qy^2+2z=0$$
if $a^2/p + b^2/q +2cd = 0$.
How to start to solve this problem?
|
Using the pole-and-polar relation:

* The polar plane of the pole $(X,Y,Z)$ with respect to the quadric is
$$pXx+qYy+(z+Z)=0$$
* Identifying this with the given plane:
$$\frac{pX}{a}=\frac{qY}{b}=\frac{1}{c}=\frac{Z}{d}$$
* For tangency, $(X,Y,Z)$ should lie on the quadric, that is
\begin{align}
0 &= pX^2+qY^2+2Z \\
0 &=
p\left( \frac{a}{pc} \right)^2+
q\left( \frac{b}{qc} \right)^2+
\frac{2d}{c} \\
0 &= \frac{a^2}{p}+\frac{b^2}{q}+2cd
\end{align}
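A concrete check (a sketch; the numeric values are mine): with $p=q=c=1$ and $a=b=2$, the condition forces $d=-4$, and substituting the plane into the surface leaves a perfect square, i.e. a single point of contact.

```python
# surface: p x^2 + q y^2 + 2 z = 0;  plane: a x + b y + c z + d = 0
# sample values (mine): p = q = c = 1, a = b = 2; the condition then forces d = -4
p, q, a, b, c = 1.0, 1.0, 2.0, 2.0, 1.0
d = -(a**2 / p + b**2 / q) / (2 * c)
assert d == -4.0

def residual(x, y):
    z = -(a * x + b * y + d) / c           # z of the point of the plane above (x, y)
    return p * x**2 + q * y**2 + 2 * z     # 0 exactly when that point is on the surface

# completing the square gives residual = (x-2)^2 + (y-2)^2: one contact point at (2, 2)
assert residual(2.0, 2.0) == 0.0
assert all(residual(2 + s, 2 + t) > 0 for s in (-1.0, 1.0) for t in (-1.0, 1.0))
```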
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
How to evaluate the following integral $\int_0^{\pi/2}\sin{x}\cos{x}\ln{(\sin{x})}\ln{(\cos{x})}\,dx$? How to evaluate the following integral
$$\int_0^{\pi/2}\sin{x}\cos{x}\ln{(\sin{x})}\ln{(\cos{x})}\,dx$$
It seems that it evaluates to$$\frac{1}{4}-\frac{\pi^2}{48}$$
Is this true? How would I prove it?
|
Find this
$$I=\int_{0}^{\frac{\pi}{2}}\sin{x}\cos{x}\ln{(\cos{x})}\ln{(\sin{x})}dx$$
Solution
Since
$$\sin(2x) = 2\sin(x)\cos(x)$$
then
$$I=\dfrac{1}{8}\int_{0}^{\frac{\pi}{2}}\ln{(\sin^2{x})}
\ln{(\cos^2{x})}\sin{(2x)}dx$$
Let $\cos{(2x)}=y$, and since
$$\cos(2x) = 2\cos^2x - 1 = 1 - 2\sin^2x$$
we get
$$I=\dfrac{1}{16}\int_{-1}^{1}\ln{\left(\dfrac{1-y}{2}\right)}
\ln{\left(\dfrac{1+y}{2}\right)}dy$$
Let $\dfrac{1-y}{2}=z$, then we have
\begin{align*}I&=\dfrac{1}{8}\int_{0}^{1}\ln{z}\ln{(1-z)}dz=\dfrac{-1}{8}\sum_{n=1}^{\infty}\dfrac{1}{n}
\int_{0}^{1}z^n\ln{z}dz\\
&=\dfrac{1}{8}\sum_{n=1}^{\infty}
\dfrac{1}{n(n+1)^2}=\dfrac{1}{8}\sum_{n=1}^{\infty}
\left(\dfrac{1}{n}-\dfrac{1}{n+1}\right)-\dfrac{1}{8}\sum_{n=1}^{\infty}
\dfrac{1}{(n+1)^2}\\
&=\dfrac{1}{4}-\dfrac{\pi^2}{48}
\end{align*}
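The closed form matches a direct numerical evaluation of the integral; a midpoint-rule sketch:

```python
import math

def integrand(x):
    s, c = math.sin(x), math.cos(x)
    return s * c * math.log(s) * math.log(c)

n = 200000
h = (math.pi / 2) / n
approx = sum(integrand((i + 0.5) * h) for i in range(n)) * h
assert abs(approx - (0.25 - math.pi**2 / 48)) < 1e-6
```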
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
}
|
Use the definition of infinite limit to prove $\lim_{x \to 1+} \frac{x}{x^2-1}=\infty$
Prove
$$\lim_{x \to 1+} \frac{x}{x^2-1}=\infty$$
I was given the following solution, but I could not understand how it gets rid of the complicated terms.
Let $\delta=\min(0.5,\frac{1}{5M})$.
$$\frac{x}{x^2-1}=\frac{x}{(x+1)(x-1)}
\geq\frac{0.5}{\left(\frac{1}{5M}\right)(1.5+1)}=5M\times0.2=M$$
I understand the $M$–$\delta$ definition, but what I don't understand is what the solution is doing, i.e. the process of estimation used to get rid of the complicated terms. Can anyone enlighten me? Thanks!
|
When $M$ is large, it is enough to take $\delta=\frac{1}{4M}$. This is because if $1<x<1+\delta$ then the numerator is larger than $1$, while the denominator satisfies $x^2-1=(x+1)(x-1) < \left ( 2 + \frac{1}{4M} \right )\frac{1}{4M}$. Hence the quotient is larger than $\frac{4M}{2+\frac{1}{4M}}$. To make this last expression at least $M$, we need $\frac{1}{4M} \leq 2$, or in other words $M \geq \frac{1}{8}$.
The problem is that we're in control of $\delta$, not $M$, so we can't require $M \geq \frac{1}{8}$. A workaround is to take $\delta = \min \left \{ \frac{1}{4M},1 \right \}$. Then if $M<\frac{1}{8}$ we have $\delta\le 1$, so the numerator is still larger than $1$ and the denominator is less than $(2+1)\cdot 1=3$, and the quotient is still larger than $\frac{1}{3}>M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Help me to find $\frac{\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}}{<(1,1,1),(1,3,2)>}$. I have previously asked this question. But now I'm stuck in finding $\frac{\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}}{<(1,1,1),(1,3,2)>}$.
Please give me hints to find it.
|
Since the matrix
$$
\begin{pmatrix}
1 & 1 & 1\\
1 & 3 & 0\\
1 & 2 & 1
\end{pmatrix}
$$
has determinant $1$, then $\{v_1, v_2, v_3\} = \{(1,1,1), (1,3,2), (1,0,1)\}$ is a basis for $\mathbb{Z}^3$. Then
\begin{align*}
\frac{\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}}{\langle(1,1,1),(1,3,2)\rangle} \cong \frac{\mathbb{Z} v_1 \oplus \mathbb{Z} v_2 \oplus \mathbb{Z} v_3}{\mathbb{Z}v_1 \oplus \mathbb{Z}v_2} \cong \mathbb{Z} v_3 \cong \mathbb{Z} \, .
\end{align*}
Alternatively, consider the homomorphism
\begin{align*}
\varphi : \mathbb{Z}^3 &\to \mathbb{Z}\\
(x,y,z) & \mapsto -x - y + 2z \, .
\end{align*}
One can show that $\varphi$ is onto and $\ker \varphi = \langle (1,1,1), (1,3,2) \rangle$, so we can apply the first isomorphism theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Determine the minimal polynomial of $\sqrt 3+\sqrt 5$ I am struggling to find the minimal polynomial of $\sqrt{3}+\sqrt{5}\in \mathbb C$ over $\mathbb Q$.
Any ideas? I tried to consider its square, but it did not help.
|
I propose the following way to prove that the polynomial $\;x^4-16x^2+4\;$ is irreducible over $\;\Bbb Q\;$. First, factor it over the reals:
$$x^4-16x^2+4=(x^2-2\sqrt5\,x+2)(x^2+2\sqrt5\,x+2)$$
(this is much easier than one might think at first, at least in this and other similar cases).
From here, it's clear the polynomial cannot be factored in any essentially different way, as $\;\Bbb R[x]\;$ is a UFD and, of course, $\;\Bbb Q\subset\Bbb R\;$. Now finish the argument.
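Where that quartic comes from: if $\alpha=\sqrt3+\sqrt5$ then $\alpha^2=8+2\sqrt{15}$, so $(\alpha^2-8)^2=60$, i.e. $\alpha^4-16\alpha^2+4=0$. A quick numeric check (a sketch):

```python
import math

alpha = math.sqrt(3) + math.sqrt(5)
# alpha should be a root of x^4 - 16 x^2 + 4
assert abs(alpha**4 - 16 * alpha**2 + 4) < 1e-9

# the four real roots are +-sqrt(3) +- sqrt(5)
for r in (math.sqrt(3) + math.sqrt(5), math.sqrt(3) - math.sqrt(5),
          -math.sqrt(3) + math.sqrt(5), -math.sqrt(3) - math.sqrt(5)):
    assert abs(r**4 - 16 * r**2 + 4) < 1e-9
```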
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 2
}
|
Basis for a vector space consisting of the set of linear combinations of these functions S is the vector space consisting of the set of all linear combinations of the functions $f_1(x)=e^x$, $f_2(x)=e^{-x}$, $f_3=sinh(x)$. Find a basis for S and find the dim[S].
First, I let
$c_1f_1$+$c_2f_2$+$c_3f_3$=$f$
Then, since $sinh(x) =\frac{e^x-e^{-x}}{2}$,
$f_3=\frac{f_1-f_2}{2}$
Substituting back in, I can get
$c_1f_1$+$c_2f_2$+$c_3\frac{f_1-f_2}{2}$ =$f$
$(c_1+\frac{1}{2}c_3)f_1$+$(c_2-\frac{1}{2}c_3)f_2$ =$f$
Am I on to something?
|
$f_1$ and $f_2$ are linearly independent (check using the definition), while $f_3$ is a linear combination of the first two. So the basis (i.e. linearly independent set which spans the space) is just $\{f_1,f_2\}$. Since this set contains two elements, the space is two dimensional.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
How to show that $\lim_{n\to\infty} c^n\sqrt n = 0$, where $c\in(0,1)$?
How to show that $\lim_{n\to\infty} c^n\sqrt n = 0$, where $c\in(0,1)$?
We often claim that exponent is "stronger" than root (and we even proved it last semester) but I can't remember how.
|
$c^n\sqrt n=e^{n\ln c}\sqrt n=e^{n\ln c+\ln\sqrt n}$ but
$n\ln c+\ln\sqrt n=n\left(\ln c+\dfrac{\ln \sqrt n}{n}\right)\to -\infty$ (because $\dfrac{\ln \sqrt n}{n}\to 0$ and $\ln c<0$). Hence $c^n\sqrt n\to 0$.
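A quick numeric illustration of this argument (a Python sketch with the arbitrary choice $c=0.9$):

```python
import math

# The exponent n*ln(c) + ln(sqrt(n)) is dominated by the n*ln(c) term
# (ln(c) < 0), so c^n * sqrt(n) -> 0. Illustrate with c = 0.9:
c = 0.9
values = [c**n * math.sqrt(n) for n in (10, 100, 1000)]
print(values)  # each term is much smaller than the previous one
assert values[0] > values[1] > values[2]
assert values[2] < 1e-20
```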
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1043982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Local tilt angle based on height-field I'm trying to implement a mathematical formula in a program I'm making, but while the programming is no problem I'm having trouble with some of the math. I need to calculate $\sin(\alpha(x,y))$ with $\alpha(x,y)$ the local tilt angle in $(x,y)$.
I have a $2$-dimensional square grid with the height at each point, representing a $3$-dimensional terrain. To find the tilt at a point I can use the heights of its direct neighbors: $h(x+1,y)$ can be used, but $h(x+2,y)$ cannot. I also know the distance between two neighboring points ($dx$). By tilt I mean the angle between the normal at a point on the terrain and a vector pointing straight up.
This seems like a not too hard problem, but I can't seem to figure out how to do it. Anyone got a good way to do this?
Thanks!
|
A helpful construct here would be the normal vector to our terrain.
Our terrain is modeled by the equation
$$
z = h(x,y)
$$
Or equivalently,
$$
z - h(x,y) = 0
$$
We can define $g(x,y,z) = z - h(x,y)$. It turns out that the vector normal to this level set is given by
$$
\operatorname{grad}(g) =
\newcommand{\pwrt}[2]{\frac{\partial #1}{\partial #2}}
\left\langle \pwrt{g}{x},\pwrt gy, \pwrt gz \right \rangle =
\left\langle -\pwrt{h}{x},-\pwrt hy, 1 \right \rangle := v(x,y)
$$
We can calculate the angle between this normal and the vertical $\hat u = \langle 0,0,1 \rangle$ using the formula
$$
\cos \theta = \frac{u \cdot v}{\|u\| \|v\|}
$$
in particular, we find that
$$
\cos \theta = \frac{\hat u \cdot v}{\|\hat u\| \|v\|} =
\frac{1}{\sqrt{1 + \left( \pwrt hx \right)^2 + \left( \pwrt hy \right)^2}}
$$
We may approximate
$$
\pwrt hx(x,y) \approx \frac{h(x+dx,y) - h(x-dx,y)}{2(dx)}\\
\pwrt hy(x,y) \approx \frac{h(x,y+dy) - h(x,y-dy)}{2(dy)}
$$
Note: since you have to calculate $\sin \theta$, you find
$$
\sin \theta = \sqrt{1 - \cos^2 \theta} =
\frac{\sqrt{\left( \pwrt hx \right)^2 + \left( \pwrt hy \right)^2}}{\sqrt{1 + \left( \pwrt hx \right)^2 + \left( \pwrt hy \right)^2}}
$$
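Putting the pieces together, here is a minimal Python sketch of the computation (assuming the heights are stored as a 2D array `h[x][y]` with spacing `dx`; the function name `sin_tilt` is my own):

```python
import math

def sin_tilt(h, x, y, dx):
    """Approximate the sine of the tilt angle at grid point (x, y).

    h is a 2D list of heights, dx the grid spacing; uses central
    differences, so (x, y) must have neighbors on all four sides.
    """
    hx = (h[x + 1][y] - h[x - 1][y]) / (2 * dx)
    hy = (h[x][y + 1] - h[x][y - 1]) / (2 * dx)
    slope_sq = hx * hx + hy * hy
    return math.sqrt(slope_sq / (1 + slope_sq))

# A planar ramp z = x*dx has hx = 1, hy = 0, so sin(theta) = 1/sqrt(2).
dx = 1.0
h = [[i * dx for j in range(3)] for i in range(3)]
print(sin_tilt(h, 1, 1, dx))  # ~0.7071
```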
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1044044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
The inverse of a fractional ideal is a fractional ideal
Let $A$ be an integral domain and $K$ its field of fractions. If $M$ is a non-zero fractional ideal of $A$, then $$N=\{x \in K : xM \subseteq A\}$$
is also a fractional ideal of $A$.
The proof I am trying to follow is given in Dummit and Foote.
I agree that it is easy to check that $N$ is an $A$-submodule of $K$.
The next part of the proof goes as follows:
"By definition, there exists some $d \in A\setminus\{0\}$ such that $dM\subseteq A$ and so $M$ contains non-zero elements of $A$."
I really can't see how this follows - any help would be greatly appreciated. I'm sure I'm just being dense but it's driving me crazy!
|
$M\ne 0$ and thus there is $x\in M$, $x\ne 0$. What about $dx$? (In fact, you don't need $dM\subset A$ in order to find a non-zero element in $M$ which is also in $A$: write $x=a/b$ with $a,b\ne 0$, and notice that $bx=a\in M\cap A$.)
Edit. In order to conclude that $N$ is a fractional ideal note that $(dx)N\subseteq A$, and we are done.
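To make the argument concrete, here is a small sanity check (a sketch, with $A=\Bbb Z$ and $M=\frac32\Bbb Z$ chosen purely for illustration) using Python's `fractions` module:

```python
from fractions import Fraction

# Concrete check with A = Z and the fractional ideal M = (3/2)Z:
# d = 2 clears denominators (2M = 3Z), and N = {x : xM in Z} = (2/3)Z.
d = 2
m_gen = Fraction(3, 2)           # generator of M
n_gen = Fraction(2, 3)           # claimed generator of N
x = m_gen                        # a nonzero element of M; d*x = 3 lies in A

# Every multiple of n_gen maps M into Z: (2/3)k * (3/2)j = k*j is an integer.
for k in range(-5, 6):
    for j in range(-5, 6):
        assert (n_gen * k * m_gen * j).denominator == 1

# And (d*x)*N is contained in A, exhibiting N as a fractional ideal.
for k in range(-5, 6):
    assert (d * x * n_gen * k).denominator == 1
print(d * x)  # 3, a nonzero element of M that is also in A
```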
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1044144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
We're looking for a treasure Here's a problem my sister-in-law just sent me; she can't find the answer. It's to help her daughter.
We have a map of an island. On this island there's a palm tree P, a house M and a big rock R. The rock is 8 meters from the palm tree and 5 meters from the house. The house is 4 meters from the palm tree. And we know that:
*
*the treasure is < 6 meters from the palm tree
*the treasure is > 4 meters from the house
*the treasure is < 5 meters from the rock
On the island below, place points M and P using the previous clues.
Color the zone where we have to dig to be sure to find the treasure.
Could you explain the basic principles of how to solve that problem/draw what's asked (the picture is not the actual one)?
|
My answer is not much different than what @Nick, who was quicker, already posted. But here it is, there is one detail at the end that is a bit different.
See the picture below. Start with a circle centered at the rock, R, with radius $8$; I have labeled this circle 8R, meaning "8 from the rock". Pick a point P on that circle and call it the palm. The house is on the circle 4P and also on 5R; these two circles intersect at two points, so just pick one of them to be the house H. The treasure is inside <6P, outside >4H, and inside <5R, so it looks like a triangular area on the picture.
But, given we do not know which direction is North, etc, and which direction the palm is from the rock, etc, we have only determined how far away from the rock the treasure could be. If the palm was at a different direction from the rock, then that triangular area would end up in a different direction too, although certainly same distance from the rock. So it could be anywhere in the area sprinkled with blue on the picture, outside the green circle and inside the <5R circle.
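For the geometric construction, a small Python sketch (coordinates chosen with R at the origin and P at $(8,0)$, purely as one admissible placement) confirms that the three clues can be satisfied simultaneously:

```python
import math

# Place R at the origin and P at (8, 0); H lies on both the circle of
# radius 4 around P and the circle of radius 5 around R (pick one of
# the two intersection points).
R = (0.0, 0.0)
P = (8.0, 0.0)
# Circle intersection: from x^2 + y^2 = 25 and (x-8)^2 + y^2 = 16,
# subtracting gives 16x - 64 = 9, so x = 73/16.
hx = (25 - 16 + 64) / (2 * 8)
hy = math.sqrt(25 - hx**2)
H = (hx, hy)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Sample a grid and keep the points satisfying all three clues.
dig = [(x / 10, y / 10)
       for x in range(-60, 140) for y in range(-60, 60)
       if dist((x / 10, y / 10), P) < 6
       and dist((x / 10, y / 10), H) > 4
       and dist((x / 10, y / 10), R) < 5]
print(len(dig) > 0)  # the three conditions are simultaneously satisfiable
```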
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1044229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
Convergence of a function with $e$ in the denominator $$\int^{\infty}_1\frac{dx}{x^3(e^{1/x}-1)}$$
I'm given the hint that the function $y = e^x$ has the tangent line $y=x+1$ at the point $(0,1)$, i.e. at $x=0$.
How do I prove its convergence and find an upper bound for the improper integral's value?
What I've tried myself
Note that $e^{1/x}-1 \ge 1 \iff e^{1/x} \ge 2$.
Hence:
$$\int^{\infty}_1\frac{dx}{x^3(e^{1/x}-1)} = \int^{1/\ln 2}_1\frac{dx}{x^3(e^{1/x}-1)} + \int_{1/\ln 2}^\infty\frac{dx}{x^3(e^{1/x}-1)}$$
Now note that our first term in RHS converges as per
$$\frac{1}{x^3(e^{1/x}-1)} \le \frac{1}{x^3},\quad x\in[1,1/\ln 2]$$
And since $\int^{1/\ln 2}_1\frac{dx}{x^3}$ converges, so does our first term in RHS.
Am I doing this wrong? I don't know where I'm supposed to use the hint.
|
What you tried handles the behavior near the lower limit. The
provided hint concerns the behavior at $+\infty .$ Note that the exponential
function lies above its tangent at $x=0,$ that is,
$$
e^{x}\geq x+1,\qquad \text{for all } x\in \mathbb{R}
$$
then, applying this with $x=\frac{1}{t}$ for $t>0$, one has
\begin{eqnarray*}
e^{1/t} &\geq &\frac{1}{t}+1>0 \\
e^{1/t}-1 &\geq &\frac{1}{t}>0 \\
t &\geq &\frac{1}{e^{1/t}-1}>0 \\
\frac{1}{t^{2}} &\geq &\frac{1}{t^{3}(e^{1/t}-1)}>0
\end{eqnarray*}
so since
$$
\int_{\ln 2}^{+\infty }\frac{1}{t^{2}}dt\text{ converges}
$$
then
$$
\int_{\ln 2}^{+\infty }\frac{dt}{t^{3}(e^{1/t}-1)}\text{ converges
too.}
$$
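A numeric sanity check of the comparison (a Python sketch; the truncation point $200$ is arbitrary):

```python
import math

# Verify the bound 0 < 1/(t^3 (e^(1/t)-1)) <= 1/t^2 at sample points,
# then estimate the integral on [1, 200] with the midpoint rule; the
# tail beyond 200 is below the integral of 1/t^2 there, i.e. below 1/200.
def f(t):
    return 1 / (t**3 * (math.exp(1 / t) - 1))

for t in (1.0, 2.0, 10.0, 100.0):
    assert 0 < f(t) <= 1 / t**2

n, a, b = 20000, 1.0, 200.0
w = (b - a) / n
approx = sum(f(a + (k + 0.5) * w) for k in range(n)) * w
print(approx)  # a finite value, consistent with convergence
```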
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1044325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
What quantity does a line integral represent? I'm currently trying to wrap my head around line integrals, Green's theorem, and vector fields and I'm having a bit of difficulty understanding what a line integral represents geometrically.
Is it basically the arc length of a curve, for a scalar field?
And then when you bring the concept into a vector field, then what does it represent?
|
There are at least two worthwhile interpretations of a line integral in two-dimensions.
First, $\int_C (\vec{F} \cdot T)ds = \int_C Pdx+Qdy$ measures the work or circulation of the vector field along the oriented curve $C$. This integral is largest when the vector field aligns itself along the tangent direction of $C$. As this relates to Green's Theorem we obtain the usual form of Green's Theorem which is identified with the $z$-component of the curl a bit later in the course. For $C = \partial R$
$$ \int_{\partial R} (\vec{F} \cdot T)ds = \iint_R (\nabla \times \vec{F})_z dA $$
Second, $\int_C (\vec{F} \cdot n)ds = \int_C Pdy-Qdx$ measures the flux of the vector field emitted through the oriented curve $C$. This integral is largest when the vector field aligns itself along the normal direction of $C$. As this relates to Green's Theorem we obtain the so-called divergence-form of Green's Theorem which is related to the Divergence Theorem in due course.
$$ \int_{\partial R} (\vec{F} \cdot n)ds = \iint_R (\nabla \cdot \vec{F}) dA $$
If you want to read more, one source would be pages 357-367 of my multivariable calculus notes which were heavily influenced by Taylor's excellent calculus text.
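To see the circulation form of Green's theorem in action, here is a small numeric sketch in Python (with the example field $\vec F = \langle -y, x\rangle$, chosen because its curl is the constant $2$): the work around the unit circle should approach $2\pi$, twice the area of the unit disk.

```python
import math

# Circulation of F = (-y, x) around the unit circle, computed as a
# Riemann sum of P dx + Q dy over a fine polygonal approximation;
# Green's theorem predicts iint (dQ/dx - dP/dy) dA = iint 2 dA = 2*pi.
n = 100000
total = 0.0
for k in range(n):
    t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2      # midpoint of the segment
    total += -ym * (x1 - x0) + xm * (y1 - y0)  # P dx + Q dy
print(total)  # ~ 6.2832 = 2*pi
```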
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1044431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
For what values of $x$ does the series converge: $\sum \limits_{n=1}^{\infty} \frac{x^n}{n^n}$? For what values of $x$ do the following series converge or diverge
$$\sum \limits_{n=1}^{\infty} \frac{x^n}{n^n}$$
I tried to solve this using the ratio test where the series converge when
$$\lim \limits_{n \to \infty} \left|\frac{x^{n+1}n^n}{(n+1)^{n+1}x^n}\right| <1$$
$$\lim \limits_{n \to \infty} \frac{|x|\,n^n}{(n+1)^{n+1}} <1$$
but then I am not sure what to do next.
Please give me some ideas or hints on how to solve this question, thanks to anybody who helps.
|
Note that
$$
n^n \ge n! \implies \frac{x^n}{n^n} \le \frac{x^n}{n!}
$$
and use the comparison test against the series formulation of $e^x$
Note the above only works for positive $x$; however, the same inequalities show the series is absolutely convergent for every $x$, and hence convergent for every $x$.
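A quick numeric illustration that the partial sums settle down even for large $|x|$ (a Python sketch; the truncation points $50$ and $60$ are arbitrary):

```python
# Partial sums of sum x^n / n^n stabilize quickly even for large |x|,
# since n^n eventually dwarfs x^n for every fixed x.
def partial_sum(x, terms):
    return sum(x**n / n**n for n in range(1, terms + 1))

for x in (-10.0, 3.0, 10.0):
    s50, s60 = partial_sum(x, 50), partial_sum(x, 60)
    assert abs(s60 - s50) < 1e-12   # tail is negligible: the series converges
print(partial_sum(10.0, 60))
```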
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1044506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|